Abstract: We present the canonical Groebner Cover method for analyzing parametric polynomial systems of equations. Its objective is to decompose the parameter space into subsets (segments) on each of which there exists a generalized reduced Groebner basis with a fixed set of leading power products; Wibmer's Theorem guarantees its existence. The Groebner Cover was introduced in an earlier paper of the authors. The Singular library grobcov.lib implementing it is developed by A. Montes and now uses the Kapur-Sun-Wang algorithm as a first step. The algorithm is canonical and groups solutions having the same kind of properties into disjoint segments. We give an elementary example showing how to use the software, which can serve as a simple tutorial. Even though the algorithms involved have high complexity, we show that in practice the method is effective in many applications of medium difficulty. We show that the software is very well suited to the automatic deduction of geometric theorems, and apply it to prove the generalized Steiner-Lehmus Theorem. Another interesting application is to provide a taxonomy for exact geometric locus computations, which is experimentally implemented in a web-based application of the dynamic geometry software GeoGebra; this application is explained in another talk in the parametric polynomial systems session of the congress. Finally, we summarize the steps of the algorithm.
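As a minimal illustration of why the parameter space must be segmented (a SymPy sketch, not the Singular grobcov.lib implementation described above; the parametric system and sample parameter values are assumptions for illustration): specializing the parameter a in the system {a*x + y, x + a*y} at values taken from different segments yields reduced Groebner bases with different sets of leading power products.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Parametric system {a*x + y, x + a*y}: the reduced Groebner basis
# depends on the parameter a.  Specializing a at sample values shows
# that the leading power products change between parameter "segments".
generic = groebner([2*x + y, x + 2*y], x, y)   # a = 2 (generic segment: a**2 != 1)
special = groebner([x + y, x + y], x, y)       # a = 1 (special segment)

print(generic.exprs)  # [x, y]   -> leading terms {x, y}
print(special.exprs)  # [x + y]  -> leading term {x}
```

The Groebner Cover computes such a decomposition canonically for all parameter values at once, rather than by sampling as done here.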
Abstract: In maximum likelihood estimation, the parameters of the model are estimated by maximizing the likelihood function, which maps the parameters to the likelihood of observing the given data. Transforming this optimization problem into the solution of a multivariate polynomial system via a Computer Algebra System (CAS) can provide two advantages: (i) all the local maxima can be discovered and a global maximum selected among the more frequently encountered local maxima, and (ii) once the symbolic result has been computed, it can be used in numerical evaluations in a split second. This reduces the computation time in iterative algorithms where parameter estimation (global maximization) problems must be solved often, such as in robust fitting. The technique is applicable wherever surface reconstruction from point clouds generated by laser scanning technology is encountered, e.g., in robotics, computer vision, digital photogrammetry, computational geometry, digital building modelling, and forest planning and operational activities. Point clouds, however, are limited by the fact that occlusions, multiple reflectance, and off-surface points corrupt the data, necessitating robust fitting techniques. In this contribution, we apply CAS to solve the maximization of the likelihood function for three different robust techniques, namely the Danish, RANSAC (Random Sample Consensus) and Expectation Maximization methods; in the last case the likelihood function is generated for a Gaussian mixture of two distributions. Different algebraic techniques have been employed to solve the resulting polynomial systems, e.g., the Sylvester resultant, the improved Dixon resultant developed by Kapur-Saxena-Yang, the Dixon resultant accelerated by the Early Discovery of Factors method developed by Lewis, as well as Groebner bases.
Numerical examples with data from real laser scanner experiments illustrate the power of the suggested technique and its advantages over principal component analysis (PCA) and singular value decomposition (SVD), which are alternatives to maximum likelihood estimation.
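The approach can be sketched in miniature with SymPy (an assumed toy analogue, not the authors' CAS workflow or data): an orthogonal line fit, a basic building block of point-cloud surface reconstruction, is turned into a polynomial system via Lagrange multipliers, all critical points are found symbolically, and the global optimum is then selected among them.

```python
from sympy import symbols, solve

# assumed toy data: three 2-D points (a stand-in for a laser-scanned point cloud)
pts = [(0, 0), (1, 2), (2, 3)]
a, b, d, lam = symbols('a b d lam', real=True)

# residual of the line a*x + b*y = d at each point; least-squares objective
res = [a*px + b*py - d for px, py in pts]
obj = sum(r**2 for r in res)

# stationarity of the Lagrangian obj - lam*(a**2 + b**2 - 1):
# a polynomial system whose solutions are ALL critical points
eqs = [obj.diff(a) - 2*lam*a,
       obj.diff(b) - 2*lam*b,
       obj.diff(d),
       a**2 + b**2 - 1]
sols = solve(eqs, [a, b, d, lam], dict=True)

# evaluate the objective at every critical point and pick the global optimum
best = min(float(obj.subs(s)) for s in sols)
print(best)
```

Once such a system has been solved symbolically, re-evaluating the result for new data reduces to substituting numbers into closed-form expressions, which is the source of the speed-up claimed above.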
CoCoA is a "well-established" Computer Algebra System dating back to 1989. It was created as a laboratory for studying Computational Commutative Algebra, and specifically Groebner bases, and still today maintains this vocation.
In the last few years CoCoA has undergone a profound change: the code has been totally re-written in C++, and is now based upon its integral open source C++ library, called CoCoALib. Putting the mathematical capabilities into a software library facilitates integration with other software.
At ICMS 2010 we presented the first prototype of CoCoA-5, which then implemented a subset of the new CoCoA language, and was able to compute just some fairly basic operations on polynomials.
Four years later...
We have finished the design and implementation of the completely new CoCoA language. Superficially it resembles the CoCoA-4 language, is largely backward compatible with it, and maintains or improves the naturalness and ease of use for which CoCoA-4 was noted. The clearly defined semantics of the new language make it both more robust and more flexible than CoCoA-4.
The internal software design makes it easy to render new extensions to CoCoALib (whether by the authors, or by contributors) accessible via the interactive CoCoA-5 system, so there's no need to wrestle with C++ to use them.
Mathematically speaking, CoCoA-5 can now do (almost) everything that CoCoA-4 could do, and faster. It can also do many things CoCoA-4 cannot. The openness and clean design of CoCoALib and CoCoA-5 are intended to encourage extensions by users outside the main development team. For instance,
(1) new contributed functions integrated into CoCoALib: operations on Mayer-Vietoris trees (Saenz de Cabezon), Janet bases (Albert, Seiler);
(2) integration of external libraries: Normaliz (Bruns, Ichim, Soeger) -- an almost "symbiotic" integration -- and Frobby (Roune).
Abstract: With the increasing capability of symbolic computation in recent years, considerable progress has been made in the area of advanced computational algebraic geometry. One such advanced computational method is the methodology of Groebner bases, introduced in 1965 by Buchberger. In particular, Buchberger was the first to provide a useful algorithm for the determination of Groebner bases. This algorithm has been implemented in many mathematical symbolic computation systems, and it is primarily because of this that the application of the Groebner basis methodology has become a feasible option for many scientific and engineering applications. The purpose of the talk is to demonstrate the utility of the Groebner basis methodology in the analysis of nonlinear mechanics problems, such as static cable deformations, steady-state vibrations, and plate bending. The Maple package is used in all cases to carry out the Groebner basis calculation, which converts a set of coupled polynomial algebraic equations into an equivalent set of uncoupled polynomial algebraic equations (the Groebner basis). The methodology is found to be worthy of further investigation and potentially effective in the analysis of nonlinear problems occurring in a variety of engineering applications.
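The uncoupling effect mentioned above can be illustrated with a small SymPy example (an assumed stand-in for the Maple computations of the talk): a lexicographic Groebner basis turns a coupled pair of equations into a triangular set whose last element is univariate.

```python
from sympy import groebner, reduced, symbols

x, y = symbols('x y')

# a coupled pair of polynomial equations
F = [x**2 + y**2 - 5, x*y - 2]

# a lexicographic Groebner basis triangularizes the system:
# the last basis element involves y alone and can be solved first
G = groebner(F, x, y, order='lex')
print(G.exprs)

# the univariate eliminant y**4 - 5*y**2 + 4 belongs to the ideal
_, r = reduced(y**4 - 5*y**2 + 4, list(G.exprs), x, y, order='lex')
print(r)  # 0
```

Solving the univariate element first (here y is one of +-1, +-2) and back-substituting into the remaining basis elements recovers all solutions of the original coupled system.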
Abstract: The solution of nonlinear systems of equations is an indispensable task in almost all geosciences, such as geodesy and geoinformatics. The rapid development of computer algebra over the last half-century, with one of its most important milestones in 1965, the publication of Groebner basis theory, together with the enormous improvement of computer algebra systems (CAS) such as Mathematica and Maple (to mention but a few) starting 30 years ago, has made it possible to solve most of these systems uniquely in symbolic form across a wide range of sciences. To meet the technological needs of the geoscience community (researchers, university professors, and students alike), we published our book Solving Algebraic Computational Problems in Geodesy in 2005 at Springer Verlag. This was a pioneering work at the time, covering the basics of computer algebra and providing many practical applications to geoscience problems. The second edition of this book appeared 5 years later under the title Algebraic Geodesy and Geoinformatics, and covered both algebraic ("exact") and numerical ("approximate") methods (such as linear homotopy), since the large size of some realistic geodetic and geoinformatic problems cannot be handled by purely symbolic algebraic methods. This edition was also accompanied by an electronic supplement in Mathematica code providing the algorithms, functions, and solutions of the problems discussed in the book. The book, whose foreword was written by Prof. Buchberger (the father of Groebner bases), was a great success in the geodetic community, where it has sold numerous copies and enjoyed wide citation. The Mathematica codes also inspired many scientists and students and are stored permanently in the Wolfram Library Archive.
In the meantime, new technologies such as laser scanning have arisen in the geosciences, while new techniques such as symbolic regression have been developed in CAS software. All of these call for new methods: Groebner-basis-based robust parameter estimation, error models differing from the well-known least squares and using Pareto optimality instead, parallel symbolic computation, satellite control algorithms, resultant methods, etc. Computer algebra methods can be effectively utilized in most of these techniques. The success of the first two editions and these new developments have motivated the preparation of the third edition of our book, currently under way. As with many scientific books, this work is expected to be based partly on research publications appearing in the last few years. Two examples demonstrating applications of Groebner bases in these new topics will be presented in the talk.
Abstract: We present generic, type-safe Groebner basis software. The implemented algorithms cover Groebner basis computation in polynomial rings over fields, rings with pseudo-division, parameter rings, regular rings, Euclidean rings, and non-commutative fields in commuting, solvable, and free non-commuting main variables. The interface and class organization are described in the object-oriented programming environment of the Java Algebra System (JAS). Different critical pair selection strategies and reduction algorithms can be provided by dependency injection. Different implementations can be selected for the mentioned coefficient rings through factory classes and methods. Groebner basis algorithms can be composed according to application needs and/or hardware availability. For example, versions for shared-memory sequential or parallel computation, term order optimization, or fraction-free coefficient ring computation can be composed. For distributed-memory compute clusters there are OpenMPI, MPJ, and plain TCP/IP implementations of Buchberger's algorithm with optimized distributed storage of reduction polynomials.
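The dependency-injection design can be sketched in Python with SymPy primitives (an assumed toy analogue of the JAS architecture, not its actual API): a naive textbook Buchberger loop in which the critical pair selection strategy is an injected function.

```python
from sympy import LM, LT, expand, groebner, lcm, reduced, symbols

def s_poly(f, g, gens, order):
    """S-polynomial of f and g, cancelling their leading terms."""
    m = lcm(LM(f, *gens, order=order), LM(g, *gens, order=order))
    return expand(m/LT(f, *gens, order=order)*f - m/LT(g, *gens, order=order)*g)

def buchberger(F, gens, order='lex', select=None):
    """Naive Buchberger loop; the pair-selection strategy is injected."""
    select = select or (lambda pairs: pairs[0])   # default strategy: first pair
    G = list(F)
    pairs = [(i, j) for i in range(len(G)) for j in range(i + 1, len(G))]
    while pairs:
        p = select(pairs)
        pairs.remove(p)
        s = s_poly(G[p[0]], G[p[1]], gens, order)
        if s == 0:
            continue
        _, r = reduced(s, G, *gens, order=order)
        if r != 0:                                 # new basis element found
            G.append(r)
            pairs += [(k, len(G) - 1) for k in range(len(G) - 1)]
    return G

x, y = symbols('x y')
basis = buchberger([x**2 + y, x*y - 1], (x, y))
# the (non-reduced) result generates the same ideal as SymPy's builtin
print(groebner(basis, x, y, order='lex').exprs)
```

Swapping `select` for, say, a degree-based strategy changes the computation path without touching the core loop, which is the point of the injection-based design described above.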
Abstract: We propose a modular method for verifying the correctness of a Groebner basis candidate. When it is difficult to compute a Groebner basis of the ideal I generated by f1,...,fm with the Buchberger algorithm or the F4 algorithm over the rationals, it is often useful to obtain a Groebner basis candidate G by applying modular methods such as Hensel lifting or the Chinese remainder theorem. If it is proven that G is a subset of I, then the verification is easy. Arnold showed that if I is homogeneous, the check essentially reduces to checking whether G is a Groebner basis of the ideal generated by G itself. For an inhomogeneous ideal, we propose to check that G is a subset of I by computing an exact generating relation g=h1*f1+...+hm*fm for each g in G. After computing the coefficient polynomials h1,...,hm over a finite field, we reduce the number of terms in the hi so that their coefficients are uniquely determined. We then replace the coefficients by indeterminates, obtaining a system of linear equations, which we try to solve by Hensel lifting. If we obtain a solution, it directly shows that g belongs to I. The whole procedure is implemented in Risa/Asir, an open source general computer algebra system developed by the OpenXM committers that aims at large-scale and efficient polynomial computation. By applying this method we succeeded in verifying the correctness of a Groebner basis candidate computed in a paper by Romanovski et al., where the candidate had been produced by a black-box software system, so verification was necessary to ensure the mathematical correctness of the paper.
Abstract: In this talk, zero-dimensional ideals in the ring of formal power series and the associated vector space of algebraic local cohomology classes are considered in the context of Grothendieck local duality. In our previous works, we published algorithms for computing standard bases of the Jacobi ideals of isolated hypersurface singularities by using algebraic local cohomology. Here, we present a new algorithm for transforming a standard basis with respect to any given local ordering into a standard basis with respect to any other local ordering. This algorithm is based on algebraic local cohomology classes and always outputs REDUCED standard bases. (The computer algebra system SINGULAR has a command "stdff" for computing standard bases; however, it does not output REDUCED standard bases.) Let f be a holomorphic function defining an isolated singularity at the origin of n-dimensional complex space and let J be the Jacobi ideal of f. Fix a local ordering. Then, by our previous algorithm, a basis of the algebraic local cohomology classes attached to f can be computed, together with a standard basis of J in the local ring. As such a basis carries a lot of information about the isolated singularity, one can use this information to transform the standard basis into a standard basis with respect to any other local ordering. In particular, the information tells us how to compute normal forms with respect to J in the local ring: for a function h, the normal form of h modulo J with respect to a given local ordering can easily be computed from a basis of the algebraic local cohomology classes attached to f. This method for computing normal forms works quite powerfully in the new algorithm for change of ordering, so we do not need Mora's tangent cone algorithm. The proposed algorithm has been implemented in the computer algebra system Risa/Asir. A demonstration of the program is given in the talk.
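For a quasihomogeneous example, the normal form modulo the Jacobi ideal can be illustrated with a global ordering in SymPy (an assumed simplification: the talk's algorithms work with local orderings in the local ring, but for this monomial Jacobi ideal the global computation gives the same answer):

```python
from sympy import reduced, symbols

x, y = symbols('x y')
f = x**3 + y**3                      # isolated singularity at the origin
J = [f.diff(x)/3, f.diff(y)/3]       # Jacobi ideal generators: x**2, y**2

# normal form of h modulo J: the remainder of division by the monomial basis
h = x**2*y + 2*x + 3
_, nf = reduced(h, J, x, y, order='lex')
print(nf)  # 2*x + 3, a combination of the Milnor algebra basis {1, x, y, x*y}
```

In the talk's setting the same normal form is read off from a basis of the algebraic local cohomology classes attached to f, which is what makes the change-of-ordering algorithm effective.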
Abstract: In this talk we show how the theory of Groebner bases can be represented in the computer system Theorema, a system initiated by Bruno Buchberger in the mid-nineties. Although the main purpose of Theorema is to serve mathematical theory exploration and, in particular, automated reasoning, the system, following its main strategy, also provides good facilities for carrying out computations in a generic way using the philosophy of domains, categories, and functors (in a new way). The main difference between Theorema and "traditional" computer algebra systems is that in Theorema one can both program (and, hence, compute) and prove (generate and verify proofs of theorems and algorithms). In fact, Theorema is just an (elegant) version of predicate logic. Hence, algorithms / programs in Theorema are just equational (recursive) statements in predicate logic, and their application to data is just a special case of simplification w.r.t. equational logic as part of predicate logic. The Theorema notation for predicate logic is intuitive and close to the usual textbook notation of mathematics; hence algorithms and theorems in Theorema are easy both to formulate and to read. Because of this, reasoning about algorithms can be done in exactly the same framework as computing with them. This reasoning, of course, includes proving correctness and termination properties. We present one particular representation of Groebner bases theory among many possible "views" of the theory. In this representation, we use functors to construct hierarchies of domains (e.g. for monomials, terms, polynomials, etc.) in a nicely structured way. In this talk, we do not focus on describing a particularly efficient implementation of Buchberger's or other algorithms, but on the methodology, which should serve as a model for gradually more efficient implementations based on more refined and powerful theorems or at least on programming tricks, data structures, etc.
Abstract: The most expensive part of the known algorithms for calculating primary fundamental invariants (of rings of polynomial invariants of finite linear groups over an arbitrary field) is the computation of the radical of polynomial ideals generated by regular sequences. Thus, in this paper, we develop effective methods for this calculation. For this purpose, we first introduce a new notion of genericity (called weak Noether position) and exhibit a novel deterministic algorithm to put an ideal in Noether position. Then, we use this algorithm, together with the algorithm due to Krick and Logar for computing radicals of ideals, to present an efficient algorithm for calculating the radical of polynomial ideals generated by regular sequences. Furthermore, we apply this algorithm to improve the classical methods of computing primary invariants which are based on radical computation. Finally, we have implemented the mentioned algorithms (to compute the radical of ideals and also primary invariants) in Maple and compare the proposed algorithms, via a set of benchmarks, with the corresponding functions in Maple and Magma. Our experiments suggest that these first implementations are more efficient than the corresponding functions in Maple and Magma.
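The radical-membership test underlying such radical computations can be demonstrated with the Rabinowitsch trick in SymPy (an assumed illustration of the basic principle, not the Krick-Logar algorithm used in the paper): g lies in the radical of I if and only if 1 lies in I + <1 - t*g> for a new variable t.

```python
from sympy import groebner, symbols

x, y, t = symbols('x y t')

I = [x**2, y]          # x is in the radical of I although x itself is not in I
G = groebner(I + [1 - t*x], t, x, y)
print(G.exprs)         # [1]: certifies that x lies in sqrt(I)

# a negative test: x + 1 does not vanish on V(I), so it is not in the radical
H = groebner(I + [1 - t*(x + 1)], t, x, y)
print(H.exprs != [1])  # True
```

Computing the full radical, as opposed to testing membership of a single element, is the much harder problem the paper addresses.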
Abstract: The purpose of this talk is to report a success story of including Groebner basis theory in teaching mechanics to engineering students at Tennessee Technological University. One of the authors (JL) has developed a one-semester graduate course, "Advanced Computational Methods in Engineering", in which doctoral, master's, and Fast-Track undergraduate engineering students learn the theory of Groebner bases and apply it to solving various engineering problems. The course has been taught since 2006 and approximately thirty students have completed it. The first author will report on a few of the best student research projects in the areas of electrical, mechanical, civil, and engineering mechanics. These projects were done with the Maple Computer Algebra System. One project currently under way (with the second author) is to create a Maple package for teaching "Advanced Mechanics of Materials" and "Reinforced Composite Materials". The package uses Groebner basis techniques. When using the package, students will not only better understand the material properties taught in the courses but, equally importantly, will also learn how to describe these properties mathematically. Thus, a general goal of teaching these methods is to expand the engineering curriculum at TTU and expose students to areas of advanced mathematics not traditionally included in engineering curricula.
Abstract: The numerical evaluation of the normalizing constant of a given statistical distribution is a fundamental problem in statistics. For example, the normalizing constant of the Gaussian distribution is expressed in closed form in terms of a parameter of the distribution, the standard deviation. However, the normalizing constants of many interesting statistical distributions have no such closed expressions. The holonomic gradient method (HGM for short) is a general method to evaluate normalizing constants numerically for several parameters in the framework of Zeilberger's holonomic systems approach. In fact, broad classes of normalizing constants are holonomic functions with respect to the parameters, and such normalizing constants therefore satisfy holonomic systems of linear partial differential equations. The HGM consists of three steps for a given normalizing constant. (1) Find a holonomic system satisfied by the normalizing constant. We may use computational or theoretical methods to find it; Groebner bases and related methods are used. (2) Find an initial value vector for the holonomic system. This is equivalent to evaluating the normalizing constant and its derivatives at a point, and is usually done by a series expansion. (3) Solve the holonomic system numerically. We use several methods from numerical analysis, such as the Runge-Kutta method for solving ordinary differential equations and efficient solvers for systems of linear equations. The HGM was proposed in 2011 by a group of people including us and has yielded several new results. For example, the orthant probability is the normalizing constant of the multivariate normal distribution restricted to the first orthant. The HGM can evaluate it with high accuracy up to the 20-dimensional case when the mean vector is near the origin. In the 20-dimensional case, we numerically solve an ordinary differential equation of rank 2^20 = 1,048,576. We have developed software packages for the HGM.
Packages based on computer algebra systems help us to solve steps (1) and (2). We have implemented step (3) for the Fisher-Bingham distribution, the Bingham distribution, the orthant probability, the Fisher distribution on SO(3), some of the A-distributions, and the distribution function of the largest root of a Wishart matrix, in the C language and/or in the statistics system R. An implementation for the polyhedral probability is a project in progress, in which we find an interesting interplay with systems for polytopes. References and current implementations can be found at http://www.math.kobe-u.ac.jp/OpenXM/Math/hgm/ref-hgm.html
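Steps (2) and (3) can be sketched for a classic textbook case, the von Mises normalizing constant c(theta) = integral from 0 to 2*pi of exp(theta*cos(t)) dt = 2*pi*I_0(theta) (an assumed self-contained illustration, not the authors' packages): c satisfies the holonomic ODE theta*c'' + c' - theta*c = 0, initial values come from the Taylor series of the Bessel function I_0, and a Runge-Kutta integration transports them to other parameter values.

```python
import math

def i0_series(x, terms=60):
    """Modified Bessel I_0 by its Taylor series (step (2): initial values)."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= (x / 2.0)**2 / ((k + 1)**2)
    return s

def i1_series(x, terms=60):
    """I_0'(x) = I_1(x), also by its Taylor series."""
    s, term = 0.0, x / 2.0
    for k in range(terms):
        s += term
        term *= (x / 2.0)**2 / ((k + 1) * (k + 2))
    return s

def hgm_rk4(theta0, theta1, y, steps=4000):
    """Step (3): integrate c'' = c - c'/theta with classical RK4."""
    h = (theta1 - theta0) / steps
    f = lambda t, v: (v[1], v[0] - v[1] / t)   # first-order system (c, c')
    t = theta0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, [y[i] + h/2*k1[i] for i in range(2)])
        k3 = f(t + h/2, [y[i] + h/2*k2[i] for i in range(2)])
        k4 = f(t + h, [y[i] + h*k3[i] for i in range(2)])
        y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return y

# step (2): initial values of c and c' at theta = 1 from the series
y0 = [2*math.pi*i0_series(1.0), 2*math.pi*i1_series(1.0)]
# step (3): transport them to theta = 5 by numerical integration
c5 = hgm_rk4(1.0, 5.0, y0)[0]
print(c5, 2*math.pi*i0_series(5.0))   # the two values agree closely
```

The same pattern scales to the high-rank systems mentioned above: only the initial value vector and the ODE coefficient matrices change, while the series-then-integrate structure stays the same.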