The broad aim of my research is to understand the inherent computational
complexity of algebraic, geometric and numerical problems.
Both the structural approach and the quest for lower complexity bounds
are enterprises at the heart of theoretical computer science.
However, treating these questions successfully calls for methods not only
from discrete mathematics, but from a broader spectrum of mathematics,
including algebraic geometry, topology and representation theory.
My research can be roughly divided as follows:
- algebraic complexity with emphasis on lower bounds
- structural complexity in (semi-)algebraic geometry
- geometry and analysis of numerical algorithms
- geometric complexity theory

**1.** My research monograph with Clausen and Shokrollahi describes
the state of the art of algebraic complexity as of 1997. It has contributed
considerably to the visibility of this field and spawned progress in
different directions, in particular on the problem of matrix multiplication.
My habilitation thesis, published as a Springer monograph in 2000,
further develops an algebraic theory of NP-completeness initiated
by Valiant.
This theory is currently receiving attention in connection with
the holographic algorithms recently introduced by Valiant,
as well as in relation to the Blum-Shub-Smale model of computation.
Moreover, it is the starting point of Mulmuley and Sohoni's
geometric complexity approach.

**2.** In the past few years, in joint work with Cucker, the emphasis of my
research has been on developing a structural complexity theory for problems
in (semi-)algebraic geometry. It turns out that the computation of some natural
geometric or topological invariants (dimension, degree, Euler characteristic,
Betti numbers, Hilbert polynomial) can be captured in terms of a few natural
complexity classes. This work represents significant progress for the
Blum-Shub-Smale model of computation. Moreover, it complements our knowledge of
symbolic algorithms for solving systems of polynomial equations. Of course,
numerous open questions remain.
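As an illustrative aside (not drawn from the Blum-Shub-Smale results themselves, and with hypothetical function names), the Euler characteristic mentioned above can be made concrete on a finite simplicial complex, where it is just the alternating sum of face counts:

```python
from itertools import combinations

def closure(facets):
    """All nonempty faces of the simplicial complex generated by the facets."""
    faces = set()
    for facet in facets:
        for k in range(1, len(facet) + 1):
            faces.update(combinations(sorted(facet), k))
    return faces

def euler_characteristic(facets):
    """Alternating sum: sum over faces f of (-1)^(dim f), dim f = |f| - 1."""
    return sum((-1) ** (len(f) - 1) for f in closure(facets))

# hollow triangle (homotopy equivalent to a circle): 3 - 3 = 0
print(euler_characteristic([(1, 2), (2, 3), (1, 3)]))   # 0
# filled triangle (a disc): 3 - 3 + 1 = 1
print(euler_characteristic([(1, 2, 3)]))                # 1
```

The structural results referred to above concern the complexity of computing such invariants for sets given by polynomial (in)equalities, which is far harder than this combinatorial toy setting.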

**3.** A new project, recently started in collaboration with Cucker and Lotz,
is the smoothed analysis of the condition numbers arising in numerical analysis.
Smoothed analysis is a new type of analysis of algorithms,
introduced by Spielman and Teng, that interpolates between worst case and
average case analysis. (They used it to give a compelling theoretical
justification of the simplex method's success in practice.)
The average case analysis of the number of steps of iterative numerical
algorithms in terms of a condition number is a subject introduced
by Smale and further developed by several researchers
(Renegar, Demmel, Shub, Smale, among others).
We were able to extend several known results about the average case
to smoothed analysis, drawing on tools from differential and integral
geometry. This seems just the beginning of the exploration of a promising
new research area.
Currently, we are trying to analyse several optimization algorithms
(interior-point methods, second-order or semidefinite programming) in this new framework.
(This is supported by the recently granted DFG project BU 1371/2-1.)
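To convey the flavour of smoothed analysis, here is a toy Monte Carlo sketch (all names hypothetical; a definition-level illustration, not the integral-geometric techniques used in our work). It estimates the expected log-condition number of a fixed, nearly singular 2x2 matrix under Gaussian perturbations; the smoothed value stays moderate even though the unperturbed condition number is huge.

```python
import math
import random

def condition_number_2x2(a, b, c, d):
    """Spectral condition number of [[a, b], [c, d]] via eigenvalues of A^T A."""
    p = a * a + c * c        # A^T A = [[p, q], [q, r]]
    q = a * b + c * d
    r = b * b + d * d
    tr, det = p + r, p * r - q * q
    disc = max(tr * tr / 4 - det, 0.0)
    lam_max = tr / 2 + math.sqrt(disc)
    lam_min = tr / 2 - math.sqrt(disc)
    if lam_min <= 0.0:
        return float("inf")   # numerically singular
    return math.sqrt(lam_max / lam_min)

def smoothed_log_condition(base, sigma, trials=2000, seed=0):
    """Monte Carlo estimate of E[log kappa(A + sigma * G)], G i.i.d. Gaussian."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        perturbed = [x + sigma * rng.gauss(0.0, 1.0) for x in base]
        total += math.log(condition_number_2x2(*perturbed))
    return total / trials

base = [1.0, 1.0, 1.0, 1.0 + 1e-6]              # nearly singular
print(condition_number_2x2(*base))              # huge (around 4e6)
print(smoothed_log_condition(base, sigma=0.1))  # small smoothed average
```

The point mirrored by the sketch: the worst-case condition number over a neighbourhood is unbounded, but its average over a small random perturbation of the input is tame.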

**4.** A fascinating and fresh approach to the P-NP question
(and lower bound questions in general) was suggested by Mulmuley and Sohoni in 2001.
They formulated Valiant's algebraic version of the P-NP problem as
an orbit closure problem of geometric invariant theory (following Strassen).
Then they translated this problem to specific questions about the splitting
behaviour of representations. Even though the latter are known to be very
difficult, there is some hope that progress along this line is possible.
(For instance, it is a relatively new insight that the positivity of
Littlewood-Richardson coefficients can be decided in polynomial time.)
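For intuition: the Littlewood-Richardson coefficient c^lambda_{mu,nu} counts the semistandard skew tableaux of shape lambda/mu and content nu whose reverse reading word is a lattice word. The brute-force counter below (illustrative only, with hypothetical names, and exponential-time, in contrast to the polynomial-time positivity test alluded to above) implements this definition directly.

```python
from itertools import product

def lr_coefficient(lam, mu, nu):
    """Brute-force c^lam_{mu,nu}: count LR skew tableaux of shape lam/mu,
    content nu, whose reverse reading word is a lattice word."""
    mu = list(mu) + [0] * (len(lam) - len(mu))
    if any(m > l for m, l in zip(mu, lam)):
        return 0
    if sum(lam) - sum(mu) != sum(nu):
        return 0
    cells = [(i, j) for i, l in enumerate(lam) for j in range(mu[i], l)]
    k = len(nu)
    count = 0
    for filling in product(range(1, k + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        # content: entry v must occur exactly nu[v-1] times
        if any(filling.count(v + 1) != nu[v] for v in range(k)):
            continue
        # rows weakly increase left to right
        if any((i, j - 1) in T and T[(i, j - 1)] > T[(i, j)] for (i, j) in T):
            continue
        # columns strictly increase top to bottom
        if any((i - 1, j) in T and T[(i - 1, j)] >= T[(i, j)] for (i, j) in T):
            continue
        # lattice word: read rows right to left, top to bottom; in every
        # prefix, v must occur at least as often as v + 1
        word = [T[(i, j)] for i in range(len(lam))
                for j in reversed(range(mu[i], lam[i]))]
        counts = [0] * (k + 1)
        ok = True
        for v in word:
            counts[v] += 1
            if v > 1 and counts[v] > counts[v - 1]:
                ok = False
                break
        if ok:
            count += 1
    return count

# s_(1) * s_(1,1) = s_(2,1) + s_(1,1,1): both coefficients equal 1
print(lr_coefficient((2, 1), (1,), (1, 1)))      # 1
print(lr_coefficient((1, 1, 1), (1,), (1, 1)))   # 1
```

Deciding positivity of these coefficients efficiently, without any such enumeration, is exactly the kind of representation-theoretic question the geometric complexity programme turns on.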