Our research project is concerned with the automatic derivation of properties and formulae concerning classical elementary and special functions. We are located in Orsay, within the Inria-Microsoft Research Joint Centre, with participants from the Algorithms project at Inria Rocquencourt. Please contact us for additional information, visits, internships, etc.
The aim of this project is to automate the computation of numerous mathematical formulae that are needed in analysis. We intend to show that large parts of the code dealing with elementary and special functions in computer algebra systems can actually be generated automatically in a uniform manner.
The starting point is to promote linear differential or difference equations to the status of data-structures representing their solutions. New efficient algorithms operating directly on this data-structure will be designed as part of this project. As a showcase, we intend to develop an encyclopedia of special functions available on the web. In addition to our existing prototype, this new version will provide dynamic entries generated in response to user queries, integral transforms (Laplace, Fourier, …), and more information on each function.
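To make the idea concrete, here is a minimal sketch in plain Python (a toy illustration, not one of the project's tools; the function name is ours): the sine function is represented only by the equation y'' + y = 0 and the initial conditions y(0) = 0, y'(0) = 1, and its Taylor coefficients are derived from the recurrence that the equation induces on them.

```python
from fractions import Fraction

def taylor_coefficients(n_terms):
    """Taylor coefficients of the solution of  y'' + y = 0,  y(0) = 0, y'(0) = 1
    (the sine function).  The equation translates, term by term, into the
    recurrence  (k + 1)(k + 2) c_{k+2} + c_k = 0  on the series coefficients,
    so everything is read off the equation and the initial conditions."""
    c = [Fraction(0), Fraction(1)]          # c_0 = y(0),  c_1 = y'(0)
    while len(c) < n_terms:
        k = len(c) - 2
        c.append(-c[k] / ((k + 1) * (k + 2)))
    return c

print(taylor_coefficients(8))   # [0, 1, 0, -1/6, 0, 1/120, 0, -1/5040]
```

The equation and initial conditions are the whole data-structure; properties of the solution are produced from them on demand.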
In order to achieve the desired efficiency, the project includes a substantial theoretical investment in the complexity of fundamental algorithms of computer algebra. Computer algebra is devoted to the study of effective mathematics and the complexity of the corresponding algorithms. The emphasis is on exact mathematical objects (integers, rational numbers, polynomials, algebraic numbers, …) rather than numerical approximations; the foundations lie in algebra. So far, most of the impact of this area has been in pure mathematics and in education.
The applications of computer algebra systems such as Maple or Mathematica to applied, discrete or experimental mathematics have often been limited either by the scarcity of exact solutions or by the huge memory requirements of practical problems. Our long-term project aims at improving this state of affairs and at bringing computer algebra to a stage where it can generate efficient and guaranteed numerical code for approximating the solutions of large classes of problems of real-life size. We pursue three main ideas rooted in computer science.
Very few equations admit explicit solutions. This does not mean that questions about their solutions cannot be answered. Indeed, even when explicit solutions exist, some of their properties can be accessed more efficiently from the defining equations. By considering the equations themselves as data-structures, one is led to develop algorithms that construct equations satisfied by objects of interest, as well as algorithms that extract information from these equations. This idea is classical in the case of roots of polynomials, where the polynomials themselves are a good data-structure for representing the roots. The algorithmic tools operating on this data-structure are classical Euclidean division, the Euclidean algorithm and, for multivariate systems, Gröbner bases.
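For instance, the Euclidean algorithm answers a question about roots without ever computing them. The short Python sketch below (a toy with dense coefficient lists, not the project's code; the function names are ours) decides whether two polynomials share a root by checking whether their gcd has positive degree.

```python
from fractions import Fraction

def poly_mod(a, b):
    """Remainder of a divided by b; polynomials are coefficient lists, constant term first."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        factor = a[-1] / b[-1]              # eliminate the leading term of a
        shift = len(a) - len(b)
        for i, coeff in enumerate(b):
            a[i + shift] -= factor * coeff
        while a and a[-1] == 0:             # drop trailing zero coefficients
            a.pop()
    return a

def poly_gcd(a, b):
    """Euclidean algorithm, exactly as in the integer case."""
    while b:
        a, b = b, poly_mod(a, b)
    return a

# Do  x^3 - 1 = (x - 1)(x^2 + x + 1)  and  x^2 - 3x + 2 = (x - 1)(x - 2)
# have a common root?  The gcd answers without any root being computed.
p = [Fraction(c) for c in (-1, 0, 0, 1)]
q = [Fraction(c) for c in (2, -3, 1)]
g = poly_gcd(p, q)                          # a constant multiple of x - 1
print(len(g) >= 2)                          # True: deg(gcd) >= 1, so a common root exists
```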
More recently, it has been realized, in large part by members of this project, that another important class of examples is provided by linear differential equations. The tools there are non-commutative analogues of the algorithms used for polynomials. To give an idea of the reach of this approach, it can be estimated that approximately 60% of Abramowitz and Stegun's Handbook of Mathematical Functions (one of the most cited references in mathematics, physics, chemistry and the engineering sciences), covering the knowledge of the best specialists on about 100 mathematical functions, can be generated automatically by computer algebra programs using systems of linear differential equations as the basic data-structure. This has been implemented in a prototype on-line encyclopedia.
Obvious advantages of computer generation are added interactivity, a virtually unbounded set of functions, and automatically generated symbolic or numerical code. The next steps in this area will be to add more interactivity to this prototype and to extend it to deal with integral transforms and more general parameterized integrals. In the longer term, another application of this data-structure is the generation of good approximations for otherwise difficult-to-evaluate multiple integrals arising in theoretical physics or chemistry.
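The kind of numerical code generation alluded to here can be sketched as follows (again a hedged toy stand-in, not the project's generator; the names are ours). The only inputs are the equation-derived recurrence and the initial coefficients, exactly as in the earlier sketch; the output is a plain floating-point routine evaluating the truncated series.

```python
from fractions import Fraction

def make_evaluator(next_coeff, initial, order=30):
    """Build a numerical evaluator for the truncated Taylor series of a function
    known only through the recurrence on its coefficients (derived from its ODE)
    and its initial coefficients."""
    coeffs = list(initial)
    while len(coeffs) < order:
        coeffs.append(next_coeff(len(coeffs), coeffs))
    floats = [float(c) for c in coeffs]

    def evaluate(x):
        acc = 0.0
        for c in reversed(floats):          # Horner evaluation of the series
            acc = acc * x + c
        return acc

    return evaluate

# Recurrence for y'' + y = 0 (see the earlier sketch):  c_k = -c_{k-2} / ((k - 1) k).
sin_approx = make_evaluator(lambda k, c: -c[k - 2] / ((k - 1) * k),
                            [Fraction(0), Fraction(1)])
print(sin_approx(1.0))   # close to sin(1) = 0.8414709848...
```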
Intermediate expression swell is a common disease of computer algebra algorithms: the integers or polynomials arising in intermediate computations are much larger than the actual input and output, and most of the computation time is spent on them. In several important cases, the actual expressions of these objects are not needed for the full computation, only specific pieces of information about them. It has been realized that using, as the data-structure, programs that evaluate these polynomials, integers or more general mathematical objects results in huge memory savings.
All the algorithms then need to be revisited, since arithmetic operations become essentially free while equality testing becomes expensive. The results of carrying out this program for polynomial systems are spectacular: the worst-case complexity, which was exponential in the size of the result, has become only polynomial. This is the outcome of a long research program by the TERA group, to which some of us have contributed.
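A minimal sketch of the underlying mechanism (not the project's actual algorithms; names and parameters are ours): two straight-line programs are compared by evaluating them at random points modulo a large prime, so that arithmetic stays cheap while equality is only tested probabilistically, with failure probability at most degree/P per trial.

```python
import random

P = 2**61 - 1   # a large prime modulus for the random evaluations

def prog_a(x):
    """Straight-line program for p(x) = (x + 1)(x + 2) ... (x + 50):
    fifty multiplications, no expanded coefficients ever stored."""
    acc = 1
    for i in range(1, 51):
        acc = acc * (x + i) % P
    return acc

def prog_b(x):
    """Another program claimed to compute the same polynomial,
    written as p(x) = (x + 50)(x + 49) ... (x + 1)."""
    acc = 1
    for i in range(50, 0, -1):
        acc = acc * (x + i) % P
    return acc

def probably_equal(f, g, degree, trials=5):
    """Zero test by evaluation: two distinct polynomials of degree <= `degree`
    agree on at most `degree` points mod P, so a random evaluation exposes a
    difference with probability at least 1 - degree/P."""
    for _ in range(trials):
        r = random.randrange(P)
        if f(r) != g(r):
            return False
    return True

print(probably_equal(prog_a, prog_b, degree=50))   # True
```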
Another recent application of this idea, to differential equations, has led us to algorithms for their solutions whose complexity is almost optimal (up to logarithmic factors) in the size of the result. Such results have an immediate impact throughout large parts of computer algebra.
Another source of computational complexity comes from the nature of the problems being attacked. The ill-conditioning of equations or matrices that causes difficulties in numerical analysis has a counterpart in the size of exact representations. Typically, the existence of a nearby singular matrix, polynomial or polynomial system forces the representations of solutions to involve very large integers. Thus the intrinsic complexity of a problem depends on how far (in a mathematically quantifiable sense) it lies from singularities. One is thus led to a theory of geometric computational complexity, in which global upper bounds are refined by a complexity measure depending on the geometry of the space of input data-structures.
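A toy illustration of the phenomenon (Cramer's rule on a 2×2 system in exact rational arithmetic; the function name is ours): the closer the input matrix lies to the singular matrix [[1, 1], [1, 1]], the more digits the exact solution needs.

```python
from fractions import Fraction

def solve2(a, b, c, d, e, f):
    """Exact solution of  [[a, b], [c, d]] [x, y] = [e, f]  by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# As eps shrinks, the matrix approaches a singular one and the exact answer
# (1 - 1/eps, 1/eps) grows: the distance to the singularity shows up directly
# in the size of the output representation.
for k in (3, 6, 12):
    eps = Fraction(1, 10**k)
    print(solve2(1, 1, 1, 1 + eps, 1, 2))
```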
A lot of work remains to be done, both experimentally and theoretically, to develop such geometry-aware algorithms. For instance, we have recently developed and precisely analyzed an algorithm that converges quadratically to the solutions of systems, even in the presence of multiplicities or clusters of roots, by analyzing numerically and on the fly the nature of the nearby singularity. The goal is to enlarge as much as possible the class of problems for which answers can be computed in time almost linear in the size of appropriate data-structures for both input and output. Again, the results obtained here are highly relevant well beyond our project.
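The classical one-variable experiment below is not the algorithm just mentioned; it only illustrates, under the simplest possible assumptions, the phenomenon that algorithm addresses: plain Newton iteration slows to linear convergence at a double root, while a multiplicity-aware step restores quadratic convergence.

```python
def newton(f, df, x0, steps, multiplicity=1):
    """Newton iteration x <- x - m*f(x)/f'(x).  With m = 1 this is the plain
    method; taking m equal to the root's multiplicity restores quadratic
    convergence at a multiple root."""
    x, errors = x0, []
    for _ in range(steps):
        fx = f(x)
        if fx == 0:                      # converged exactly in floating point
            break
        x -= multiplicity * fx / df(x)
        errors.append(abs(x - 1.0))      # the relevant exact root is x = 1
    return errors

f  = lambda x: (x - 1.0) ** 2 * (x + 2.0)              # double root at x = 1
df = lambda x: 2 * (x - 1.0) * (x + 2.0) + (x - 1.0) ** 2

print(newton(f, df, 2.0, 6))                   # errors shrink by about half each step
print(newton(f, df, 2.0, 6, multiplicity=2))   # errors are roughly squared each step
```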