The goal of this informal lunch seminar series is to present a math or CS research area and to encourage informal discussion between math and CS participants.

Unless otherwise specified, the working group will meet on Thursdays, 12:30-13:30. Participants are invited to bring their lunch.

- Thursday, Feb 18, 2016, 12:30-13:30: Monique Teillaud (INRIA Nancy - Grand Est, LORIA)
- Friday, Jan 29, 2016, 12:00-13:00: Arash Atashpendar (UL)
- Thursday, Dec 10, 2015, 12:30-13:30: Jean-Sébastien Coron (UL)
- Tuesday, May 26, 2015, 12:30-13:30: Tanja Schilling (UL)
- Wednesday, May 13, 2015, 12:30-13:30: Shree Krishna Sharma (UL)
- Thursday, May 7, 2015, **13:30-14:30**: Dimitrios Kampas (UL)
- Apr 16, 2015, 12:30-13:30: Jean-Sébastien Coron (UL)
- Apr 2, 2015, 12:30-13:30: Ronan Fleming (UL)
- Thursday, Mar 12, 2015, 12:30-13:30: Lars Beex (UL)
- Dec 11, 2014, 12:30-13:30: Giovanni Peccati (UL)
- Nov 27, 2014, 12:30-13:30: Qiang Tang (UL)
- Friday, Nov 7, 2014, 12:30-13:30: Gabor Wiese (UL)
- Oct 23, 2014, 12:30-13:30: Peter Ryan (UL)
- Oct 16, 2014, 12:30-13:30: Ivan Nourdin (UL)

Room: F213.

Abstract:

CGAL, the Computational Geometry Algorithms Library (www.cgal.org), provides industrial and academic users with useful and reliable geometric algorithms. CGAL is distributed under a double license scheme: GPL and commercial. It is used in a wide range of fields (see http://www.cgal.org/projects.html for a sample). CGAL is developed in the framework of an open source project, through which contributors gain impact and visibility. The talk presents an overview of the contents of CGAL and of its characteristics: robustness, genericity, flexibility, and efficiency. It also sketches the organization of the CGAL project. It then focuses on the triangulation and mesh packages, and finally shows some work in progress, including work in non-Euclidean geometries.

Room: F213.

Abstract:

We present and prove a theorem about maximal initial deletion masks in the realm of supersequences, from which we derive a new proof of a Hamming weight clustering scheme as a corollary. In the context of deletion channels, a codeword y of length n can be subject to s = n-k random deletions, leading to a received codeword x of length k, which is a subsequence of the codeword originally intended by the sender. The supersequences that could have led to the received subsequence are clustered into n-k+1 groups according to r = h(y)-h(x), where h denotes the Hamming weight. It turns out that the size of each such cluster is independent of the exact form of the subsequence: it is purely a function of n, k, r and h(x). We derive a new proof of this theorem that gives more insight into the structure of this space, and we give a simple closed-form expression for the size of each cluster. Moreover, we present a recursion-based approach for computing the size of a cluster, which provides yet another proof of the theorem. Time permitting, we will also discuss potential lines of attack, based on the said clustering, for proving minimal entropy in the context of leaked subsequences, a study rooted in an analysis of quantum key establishment protocols.
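The clustering claim can be checked by brute force for small parameters. The sketch below follows the notation of the abstract (not the paper): it enumerates all supersequences y of length n of a given subsequence x and groups them by r = h(y) - h(x).

```python
# Brute-force check of the clustering claim: group all supersequences y of
# length n of a received subsequence x by r = h(y) - h(x), where h is the
# Hamming weight.  The theorem asserts that the cluster sizes depend only on
# n, k, r and h(x), not on the exact arrangement of bits in x.
from itertools import product

def is_subsequence(x, y):
    """True if x can be obtained from y by deletions."""
    it = iter(y)
    return all(bit in it for bit in x)

def cluster_sizes(x, n):
    """Map r = h(y) - h(x) -> number of supersequences y of length n."""
    h = sum
    sizes = {}
    for y in product((0, 1), repeat=n):
        if is_subsequence(x, y):
            r = h(y) - h(x)
            sizes[r] = sizes.get(r, 0) + 1
    return sizes

# Two subsequences with the same length k = 4 and Hamming weight h(x) = 2 ...
a = cluster_sizes((0, 1, 1, 0), 7)
b = cluster_sizes((1, 0, 0, 1), 7)
# ... yield identical cluster sizes, in n - k + 1 = 4 clusters.
assert a == b and len(a) == 4
```

The iterator trick in `is_subsequence` consumes `y` left to right, which is exactly the greedy subsequence test.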

Room: E212.

Abstract:

In 2009, cryptography underwent a revolution with Craig Gentry's description of the first fully homomorphic encryption scheme. Fully homomorphic encryption allows computations to be performed on ciphertexts. This enables cloud computing: the user can encrypt his data before sending it to the cloud; the cloud can still search, sort and edit the data on his behalf; the data is kept in encrypted form in the cloud, so that the cloud learns nothing about the user's data; eventually the cloud returns encrypted answers, which only the user can decrypt. In this talk we will explain the underlying mathematics of fully homomorphic encryption.
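As a toy illustration of "computing on ciphertexts" (not Gentry's scheme, and not fully homomorphic): textbook RSA is multiplicatively homomorphic, i.e. the product of two ciphertexts decrypts to the product of the plaintexts. FHE extends this idea so that both additions and multiplications can be evaluated on ciphertexts. The tiny parameters below are for illustration only; textbook RSA is insecure.

```python
# Toy demonstration of a homomorphic property: textbook RSA satisfies
# Enc(m1) * Enc(m2) mod N = Enc(m1 * m2 mod N).  Illustration only.
p, q = 61, 53
N = p * q                           # 3233
e = 17                              # public exponent, coprime to (p-1)(q-1)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ pow)

enc = lambda m: pow(m, e, N)
dec = lambda c: pow(c, d, N)

m1, m2 = 42, 7
c = (enc(m1) * enc(m2)) % N         # multiply the ciphertexts only
assert dec(c) == (m1 * m2) % N      # ... yet this decrypts to the product
```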

Room: F213.

Abstract:

Connectivity percolation is the transition in which isolated clusters of solid particles in a fluid (or of voids in a solid) become connected in some sense to form a system-spanning network. This network has a significant effect on the mechanical and transport properties of the material on a macroscopic scale. If, for example, an electrically insulating polymer is mixed with conductive fibres such as carbon nanotubes, the conductivity of the composite increases by many orders of magnitude near the percolation transition of the filler material.

We investigate percolation in suspensions of fibres by means of connectedness percolation theory and by specialized Monte Carlo simulations. Our study covers the entire range of aspect ratios from spheres to extremely slender rods. The theory and the simulations agree very well for aspect ratios down to values as low as 10. The percolation threshold for both hard and ideal rod-like particles of aspect ratios below 1000 deviates significantly from the inverse aspect ratio scaling prediction, thought to be valid in the limit of infinitely slender rods and often used as a rule of thumb for fibres in composite materials. Hence, most fibres that are currently used as fillers in composite materials cannot be regarded as practically infinitely slender from the point of view of percolation theory.

Comparing percolation thresholds of hard rods and new benchmark results for ideal rods, we find that (i) for large aspect ratios, they differ by a factor that is inversely proportional to the connectivity distance between the hard cores, and (ii) they approach the slender rod limit differently.

We also study the effects of polydispersity on the percolation transition. We discuss length and diameter polydispersity as well as dispersity in the connectedness criterion. The main result is that the percolation threshold shows universal behaviour, i.e. it depends only on certain cumulants of the size distribution.

Room: F213.

Abstract:

Random Matrix Theory (RMT) is an important tool of multivariate statistics for carrying out asymptotic analysis of high-dimensional problems, since it allows easier approximations and provides closed-form expressions thanks to laws of large numbers. It has been widely used in a variety of research areas such as nuclear physics, finance, and large-dimensional signal processing in wireless communications. In the field of wireless communications, it has been used in applications such as asymptotic capacity analysis and the modelling of transmit/receive correlation in Multiple Input Multiple Output (MIMO) channels. Starting with the main principles and basic laws of RMT, this talk presents recent advances in RMT for wireless communications. Subsequently, it focuses on the application of RMT in Cognitive Radio (CR) communications, specifically on eigenvalue-based spectrum awareness techniques, which deal with the blind acquisition of radio parameters such as spectrum occupancy, signal-to-noise ratio, etc.
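One of the basic RMT laws alluded to above is easy to observe numerically: by the Marchenko-Pastur theorem, the eigenvalues of the sample covariance matrix (1/n) X X^T of an N x n matrix with i.i.d. entries concentrate, as the dimensions grow, on the interval [(1-sqrt(c))^2, (1+sqrt(c))^2] with c = N/n. A minimal numpy sketch:

```python
# Numerical illustration of the Marchenko-Pastur law: the spectrum of a large
# sample covariance matrix concentrates on a deterministic interval.
import numpy as np

rng = np.random.default_rng(0)
N, n = 200, 800                                   # aspect ratio c = N/n = 0.25
X = rng.standard_normal((N, n))
eigs = np.linalg.eigvalsh(X @ X.T / n)

c = N / n
lo, hi = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2  # MP support: [0.25, 2.25]
# The extreme eigenvalues sit close to the theoretical edges.
assert lo - 0.2 < eigs.min() and eigs.max() < hi + 0.2
```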


Room: F213.

Abstract:

Automated topic identification has gained significant attention, since the amount of documents in digital form is vast and continuously increasing. Latent Dirichlet allocation (LDA) is a standard topic detection method, which extracts sets of words from a collection of documents. The claim is that the significantly co-occurring words in a document provide information about its content. To infer document topics, LDA assumes that permuting the words of a document leaves the model unaffected (the bag-of-words assumption). In this work, we extend the bag-of-words practice and deduce topics by considering the inherent sequential structure of language.
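The bag-of-words assumption can be stated in one line of code: a model that only sees word counts cannot distinguish a document from any permutation of it.

```python
# The bag-of-words assumption in a nutshell: word counts are invariant under
# permutation, so a bag-of-words model treats these two "documents" identically.
from collections import Counter

doc = "the cat sat on the mat".split()
permuted = ["mat", "the", "on", "sat", "cat", "the"]

assert Counter(doc) == Counter(permuted)
# The sequential structure lost here is exactly the information the extension
# described in the abstract tries to exploit.
```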

Room: F213.

Abstract:

Boneh et al. showed at Crypto 99 that moduli of the form N=p^r q can be factored in polynomial time when r is on the order of log p. Their algorithm is based on Coppersmith's technique for finding small roots of polynomial equations, which in turn relies on the LLL lattice reduction algorithm. In this talk we will show that N=p^r q^s can also be factored in polynomial time when r or s is at least (log p)^3; we thereby identify a new class of integers that can be efficiently factored.

In this talk we will first recall the LLL algorithm for finding short vectors in a lattice, and Coppersmith's algorithm for finding small roots of modular polynomial equations. We then recall its application to factoring N=pq when the high-order bits of p are known. We will then describe Boneh et al.'s algorithm for factoring N=p^r q in polynomial time, and finally our extension to N=p^r q^s.
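To see what "factoring N=pq given the high-order bits of p" means, here is a toy version. Coppersmith's method finds the unknown low-order bits as a small root of f(x) = p_hi + x (mod p) using LLL; for tiny parameters the small root can simply be brute-forced, which illustrates the problem being solved (but not the lattice technique). The primes below are arbitrary illustrative choices.

```python
# Toy setting: the attacker knows N = p*q and the high-order bits of p, and
# searches for the small root x0 = p - p_hi of f(x) = p_hi + x (mod p).
# Coppersmith's LLL-based method replaces this brute-force search.
p, q = 1000003, 104729           # secret primes (toy sizes)
N = p * q
t = 12                           # number of unknown low-order bits of p
p_hi = p - (p % 2**t)            # the attacker's knowledge of p

def recover_p(N, p_hi, t):
    """Brute-force the 2^t candidates for the missing low-order bits."""
    for x in range(2**t):
        cand = p_hi + x
        if cand > 1 and N % cand == 0:
            return cand
    return None

assert recover_p(N, p_hi, t) == p
```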


This is a joint work with Rina Zeitoun, Jean-Charles Faugere and Guenael Renault.

Room: F213.

Abstract:

Solving systems of non-linear equations is an essential part of deterministic modelling of biochemical reaction network dynamics. We introduce the mathematical structure of this problem, at the intersection of variational analysis and hypergraph theory. In particular, we introduce a new class of functions, called duplomonotone, which is strictly broader than the class of monotone functions. We present some of the main properties of duplomonotone functions and give an example of a nonlinear duplomonotone function used to model a system of biochemical reactions. We introduce a derivative-free line-search algorithm for finding zeros of systems of duplomonotone equations that converges linearly to a zero of the function [1]. Numerical experiments suggest that this classification of the key function appearing in deterministic models of biochemical reaction systems holds more generally, though it is an open problem to establish the conditions on a biochemical network under which it is certainly duplomonotone. Further open mathematical problems in deterministic modelling of biochemical reaction network dynamics are introduced, with concluding remarks on some of the mutually beneficial opportunities that exist for mathematical and biological research in general.
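The flavour of a derivative-free line search for f(x) = 0 can be sketched as follows. This is a simplified illustration, not the exact algorithm of [1]: it takes steps x+ = x - lam*f(x) and backtracks on lam until the residual norm decreases, using only evaluations of f. Monotone maps, e.g. f(x) = Ax - b with A symmetric positive definite, are in particular duplomonotone, so they make a convenient test case.

```python
# Hedged sketch of a derivative-free line search for f(x) = 0: steps of the
# form x+ = x - lam * f(x), backtracking on lam until ||f|| decreases.
# Simplified illustration only; details differ from the algorithm in [1].
import numpy as np

def line_search_zero(f, x, lam0=1.0, shrink=0.5, tol=1e-10, max_iter=500):
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        lam = lam0
        # Backtrack: only residual norms are compared, no derivatives used.
        # Termination of this loop relies on the decrease property of f.
        while np.linalg.norm(f(x - lam * fx)) >= np.linalg.norm(fx):
            lam *= shrink
        x = x - lam * fx
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
x_star = line_search_zero(lambda x: A @ x - b, np.zeros(2))
assert np.linalg.norm(A @ x_star - b) < 1e-8
```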

References:

[1] Aragon Artacho, F. J., and Fleming, R. M. T. Globally convergent algorithms for finding zeros of duplomonotone mappings. Optimization Letters, Sept. 2015.

Room: F213.

Abstract:

Materials with discrete meso- and micro-structures are frequently modelled using lattice models (fabrics, paper materials, collagen networks, open-cell metal foams, etc.). The individual springs and beams of a lattice model represent individual fibre segments, or individual struts in metal foams. Consequently, important mechanics such as individual fibre failure and inter-fibre bond failure can be modelled in detail. The disadvantage of small-scale lattice models is their prohibitive cost for large-scale, industrially relevant computations. Multiscale methods can be used to overcome this problem; in this work, the quasicontinuum (QC) method is adopted because it offers a specific combination of advantages for lattice models, which will be addressed in the presentation.

The QC method has so far only been used to model non-dissipative atomistic lattices (i.e. based on unconstrained energy minimisation), but in this presentation a virtual-power-based variant will be introduced that can include the dissipative mechanics often required for lattice models (i.e. based on constrained energy minimisation). We will show its applicability to lattice models with dissipative springs (local constraints) and bond failure (nonlocal constraints). Furthermore, the two reduction steps of the QC method will be clearly addressed, including the influence each has on the total error.

Room: F213.

Abstract:

In a recent series of papers, De, Diakonikolas and Servedio [1,2] have used tools from Gaussian analysis to study the problem of finding a polynomial-time deterministic approximate counting algorithm for degree-d polynomial threshold functions. The main contributions of the authors are based on a pervasive use of our theory of probabilistic approximations on general Gaussian spaces [4], as well as on a remarkable invariance principle from [3], which is reminiscent of the famous 'Lindeberg swapping trick' (that one sometimes uses in the classroom to derive a non-Fourier proof of the central limit theorem). In this talk, I will discuss my understanding of the results from [1,2] and of their proofs. I will also try to describe how these findings are situated in the vast landscape of the so-called 'universality phenomenon', which is one of the leading themes of modern probability theory.
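The universality phenomenon behind the Lindeberg swapping trick is easy to observe numerically: for a smooth test function g, the expectation of g(S_n/sqrt(n)) for a sum S_n of n i.i.d. Rademacher signs is already very close to its Gaussian counterpart at moderate n. The sketch below computes the Rademacher side exactly via binomial weights and the Gaussian side by quadrature.

```python
# Numerical illustration of universality / the central limit theorem:
# E[g(S_n / sqrt(n))] for Rademacher signs vs. a standard Gaussian.
import numpy as np
from math import comb

g = lambda x: 1.0 / (1.0 + x**2)       # a smooth, bounded test function

n = 50
# Exact Rademacher expectation: S_n = 2k - n with probability C(n, k) / 2^n.
rademacher = sum(comb(n, k) * g((2*k - n) / np.sqrt(n))
                 for k in range(n + 1)) / 2**n

# Gaussian expectation by simple numerical quadrature on [-10, 10].
x = np.linspace(-10, 10, 20001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
gaussian = np.sum(g(x) * pdf) * (x[1] - x[0])

assert abs(rademacher - gaussian) < 0.02   # already close at n = 50
```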

References:

[1] A. De, I. Diakonikolas and R. Servedio (2014): Deterministic Approximate Counting for Juntas of Degree-2 Polynomial Threshold Functions, IEEE Conference on Computational Complexity (CCC), to appear

[2] A. De and R. Servedio (2014): Efficient deterministic approximate counting for low-degree polynomial threshold functions, 46th Annual Symposium on Theory of Computing (STOC), to appear

[3] E. Mossel, R. O'Donnell and K. Oleszkiewicz (2010). Noise stability of functions with low influences: invariance and optimality. Annals of Mathematics 171(1), 295-341

[4] I. Nourdin and G. Peccati (2012). Normal Approximations Using Malliavin Calculus: from Stein's Method to Universality. Cambridge Tracts in Mathematics. Cambridge University Press.

Room: F213.

Abstract:

In modern cryptography, mathematically intractable problems play an essential role. Such problems include factoring big numbers, computing discrete logarithms, learning parity with noise (LPN), learning with errors (LWE), etc. Among them, LPN and LWE are particularly interesting because they are considered hard even in the presence of quantum computers (in contrast, factoring and discrete logarithms become easy on a quantum computer). In this talk, I will briefly introduce LPN and LWE and show some example applications of them.
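A minimal sketch of how LPN samples are generated: the secret is s in {0,1}^n, and each sample is a pair (a, b) with b = &lt;a, s&gt; + e (mod 2), where a is uniform and the noise bit e is 1 with probability tau. Without the noise, s could be recovered by Gaussian elimination; the noise is what makes the problem hard. Parameter values below are illustrative only.

```python
# LPN sample generation: noisy random inner products of a secret s over GF(2).
import numpy as np

rng = np.random.default_rng(42)
n, m, tau = 16, 1000, 0.125

s = rng.integers(0, 2, n)                 # secret vector
A = rng.integers(0, 2, (m, n))            # public uniform vectors a_i (rows)
e = (rng.random(m) < tau).astype(int)     # Bernoulli(tau) noise bits
b = (A @ s + e) % 2                       # b_i = <a_i, s> + e_i (mod 2)

# Each equation <a_i, s> = b_i is violated exactly where e_i = 1,
# so roughly a tau-fraction of the equations is wrong.
violations = (A @ s) % 2 != b
assert np.array_equal(violations, e.astype(bool))
```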

Room: F213.

Abstract:

Modular forms are functions -- classically situated in complex analysis, but nowadays perceived in an algebro-geometric context -- that have deeply rooted arithmetic significance. For instance, some modular forms "know" the number of points that an elliptic curve admits over all finite fields; others "know" the factorisation types of an integer polynomial modulo all prime numbers. Both these concepts, and many more, can be unified in the framework of Galois representations, that is, representations of the symmetry (=automorphism) group of the field of algebraic numbers. That group is perceived by many as the major object of study in "Algebraic Number Theory". One of the major lines in current "Arithmetic Geometry" research is to prove that all Galois representations are parametrised by modular forms (and their generalisations). Whereas Galois representations are usually very hard or even impossible to compute explicitly, modular forms are easily approximated on a computer, allowing one to approach Galois representations computationally through the parametrisation. In the talk, we'll give examples of the arithmetic information contained in modular forms and how to compute them. Finally, we'll give a hint on the parametrisation of Galois representations through modular forms, especially in view of the examples.
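One concrete instance of "modular forms know point counts": for the elliptic curve y^2 = x^3 - x over F_p, the numbers a_p = p + 1 - #E(F_p) appear as coefficients of a weight-2 modular form. The point counts themselves can be found by brute force for small p:

```python
# Counting points on the elliptic curve y^2 = x^3 - x over F_p, including the
# point at infinity.  The quantities a_p = p + 1 - #E(F_p) are the data that a
# modular form attached to this curve "knows".
def count_points(p):
    """#E(F_p) for y^2 = x^3 - x, including the point at infinity."""
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y - (x**3 - x)) % p == 0)
    return affine + 1

for p in [5, 7, 11, 13]:
    a_p = p + 1 - count_points(p)
    print(p, count_points(p), a_p)
```

For primes p congruent to 3 mod 4 the curve is supersingular and a_p = 0, which the brute-force count confirms for p = 7 and p = 11.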


Room: F213.

Abstract:

I will describe a little information theory puzzle that arose from some explorations in Quantum Key Establishment protocols. The problem is easy to describe but appears not to have been considered before in the literature. Suppose that we have a random bit string y of length n and we choose at random a subset S of {1,...,n} of size k. From this we reveal the bits at the locations indexed by the elements of S, but without revealing S. The question now is: how much information are we revealing about y?
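For tiny parameters the leaked information can be computed by exhaustive enumeration. The sketch below takes y uniform over n-bit strings, S uniform over k-subsets of positions, and the observation z to be the bits of y at the positions of S in order (with S kept secret), and computes I(Y; Z) = H(Z) - H(Z | Y).

```python
# Brute-force computation of the information revealed about y when k of its
# n bits are disclosed in order, without disclosing which positions they are.
from itertools import combinations, product
from collections import Counter
from math import log2

n, k = 4, 2
ys = list(product((0, 1), repeat=n))
subsets = list(combinations(range(n), k))

def entropy(counter, total):
    return -sum(c / total * log2(c / total) for c in counter.values())

# H(Z): distribution of z over all (y, S) pairs, each equally likely.
joint = Counter(tuple(y[i] for i in S) for y in ys for S in subsets)
H_Z = entropy(joint, len(ys) * len(subsets))

# H(Z | Y): average over y of the entropy of z for that fixed y.
H_Z_given_Y = sum(
    entropy(Counter(tuple(y[i] for i in S) for S in subsets), len(subsets))
    for y in ys) / len(ys)

info = H_Z - H_Z_given_Y
print(f"I(Y;Z) = {info:.4f} bits for n = {n}, k = {k}")
assert 0 < info <= k   # at most the k revealed bits' worth of information
```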


Room: F213.

Abstract:

In recent years, compressed sensing has attracted considerable attention in areas of applied mathematics, computer science, and electrical engineering. It is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This takes advantage of the signal's sparseness or compressibility in some domain, allowing the entire signal to be determined from relatively few measurements. The aim of my talk is to give the rudiments of the theory, by focusing in particular on a noteworthy phase transition phenomenon. The talk should be easily accessible to non-specialists.
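The basic recovery problem can be sketched in a few lines. The example below uses orthogonal matching pursuit (OMP), a simple greedy stand-in for the l1-minimisation usually discussed in compressed sensing, to recover a sparse signal from far fewer random linear measurements than unknowns; all parameter choices are illustrative.

```python
# Sparse recovery from underdetermined measurements via orthogonal matching
# pursuit: greedily pick the column most correlated with the residual, re-fit
# by least squares on the chosen support, and repeat.
import numpy as np

def omp(Phi, y, tol=1e-8):
    m, n = Phi.shape
    support, residual = [], y.copy()
    coef = np.zeros(0)
    x = np.zeros(n)
    while np.linalg.norm(residual) > tol and len(support) < m:
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k = 12, 30, 2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[[4, 17]] = [3.0, -2.0]                    # a 2-sparse signal
y = Phi @ x_true                                 # 12 measurements, 30 unknowns

x_hat = omp(Phi, y)
assert np.linalg.norm(Phi @ x_hat - y) < 1e-6    # measurements are matched
```

For signals this sparse the greedy method typically identifies the true support in k iterations, illustrating why "relatively few measurements" suffice.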
