Merge local branch to main #77

Merged
89 changes: 89 additions & 0 deletions docs/algorithms/chebyshev.rst
@@ -0,0 +1,89 @@
.. _analysis_chebyshev:

***********************
Chebyshev Approximation
***********************

Matrix product states (MPS) and matrix product operators (MPO) can be expanded in the basis of
Chebyshev polynomials, which makes it possible to approximate and compose functions.

In principle, this method applies to arbitrary multivariate functions.
However, its cost scales exponentially with the dimension of the function, and
the method converges efficiently only for highly differentiable functions.

For these reasons, the SeeMPS library currently provides methods to perform Chebyshev
expansions of univariate functions. The method works for both MPS and MPO initial states
and converges efficiently for analytic or highly differentiable functions.

Computation of the expansion coefficients
=========================================
The expansion of a univariate function in the basis of Chebyshev polynomials is of the form

.. math::
p_d(x)=\sum_{k=0}^d c_k T_k(x),

where :math:`T_k(x)` are the Chebyshev polynomials of order :math:`k`, which satisfy the recurrence relation

.. math::
    T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x), \quad k \geq 1,

with :math:`T_0(x) = 1` and :math:`T_1(x) = x`.

The coefficients :math:`c_k` encode the information of the function. They can be obtained either
by projecting the function onto the basis or by interpolating it on a collection of nodes.
The library provides the method :func:`~seemps.analysis.chebyshev.projection_coefficients` for the former, and
:func:`~seemps.analysis.chebyshev.interpolation_coefficients` for the latter, both on Chebyshev-Gauss or Chebyshev-Lobatto nodes.
These methods only require the univariate function, its domain of definition, and optionally the
chosen expansion order. This order can be estimated to machine precision using the :func:`~seemps.analysis.chebyshev.estimate_order`
routine.
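
The following is a minimal sketch of how the coefficients could be computed. The exact signatures are not
reproduced here: the argument names and their order (function, domain endpoints, expansion order) are
assumptions for illustration only.

.. code-block:: python

    import numpy as np

    from seemps.analysis.chebyshev import (
        estimate_order,
        interpolation_coefficients,
    )

    # Target function and its domain of definition.
    def f(x):
        return np.exp(-x**2)

    start, stop = -2.0, 2.0

    # Estimate an expansion order close to machine precision, then
    # interpolate the function on Chebyshev nodes of that order.
    # NOTE: the argument lists below are illustrative assumptions.
    order = estimate_order(f, start, stop)
    coefficients = interpolation_coefficients(f, order, start, stop)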

Expansion in the Chebyshev basis
================================
Once the expansion coefficients of the function have been computed, the series can be applied to a
generic initial condition, which can be either an MPS or an MPO. These expansions are performed with
the methods :func:`~seemps.analysis.chebyshev.cheb2mps` and :func:`~seemps.analysis.chebyshev.cheb2mpo`, respectively.

The initial condition must have its support contained in the canonical Chebyshev interval :math:`[-1, 1]`.
For an MPS, this support is understood as the range between the minimum and maximum values of the corresponding
vector, while for an MPO it is the range between its smallest and largest eigenvalues.
If the initial condition has a larger support, it must be rescaled using an affine transformation.
This is performed by default by the main algorithm, assuming that the initial condition is defined on
the domain of definition of the expansion coefficients.
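
Concretely, an initial condition supported on a domain :math:`[a, b]` is mapped onto the canonical
interval with the standard affine transformation

.. math::
    u(x) = \frac{2x - (a + b)}{b - a} \in [-1, 1].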

The standard evaluation of the partial sum is based on constructing the Chebyshev polynomials
:math:`T_k` with the recurrence relation and then performing the linear combination with the :math:`c_k` weights.
This can be selected by setting the flag ``clenshaw`` to ``False`` in :func:`~seemps.analysis.chebyshev.cheb2mps` or :func:`~seemps.analysis.chebyshev.cheb2mpo`.

However, there is a more efficient evaluation routine based on Clenshaw's method. This
procedure avoids constructing the intermediate Chebyshev polynomials and is both more robust and more efficient.
It is enabled by default, with the flag ``clenshaw`` set to ``True``. Note, however, that this method is
sensitive to an overestimation of the expansion order, and its performance degrades in that case.
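
As a sketch, assuming the coefficients computed in the previous step and an initial MPS ``x_mps``
representing the variable (constructed as in the next section); only the ``initial_mps`` and ``clenshaw``
keywords are taken from this documentation, and the overall call pattern is an assumption.

.. code-block:: python

    from seemps.analysis.chebyshev import cheb2mps

    # Compose the Chebyshev series with a previously built initial MPS.
    # Clenshaw evaluation (the default) is usually more robust and efficient.
    f_mps = cheb2mps(coefficients, initial_mps=x_mps, clenshaw=True)

    # Explicit recurrence-based evaluation, for comparison.
    f_mps_recurrence = cheb2mps(coefficients, initial_mps=x_mps, clenshaw=False)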

Constructing the initial condition
==================================
This method requires an initial condition, either an MPS or an MPO, on which to perform function composition. This
initial condition must be passed to the argument ``initial_mps`` or ``initial_mpo``. However, for the case of function
"loading", where the initial conditions are discretized domains, they can be built
automatically by passing an :class:`~seemps.analysis.mesh.Interval` object to the ``domain`` argument.

Alternatively, these discretized domains can be built explicitly by creating an :class:`~seemps.analysis.mesh.Interval` object, such as a :class:`~seemps.analysis.mesh.RegularInterval`
or a :class:`~seemps.analysis.mesh.ChebyshevInterval`, and constructing the corresponding MPS with the routine :py:func:`~seemps.analysis.factories.mps_interval`.
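
A possible sketch of this construction follows. The constructor arguments of
:class:`~seemps.analysis.mesh.RegularInterval` (endpoints and number of points) are assumptions about the
Interval API and may differ from the actual signature.

.. code-block:: python

    from seemps.analysis.mesh import RegularInterval
    from seemps.analysis.factories import mps_interval

    # Discretize the domain [-2, 2] on 2**n equispaced points (n qubits).
    # NOTE: argument names and order are illustrative assumptions.
    n_qubits = 10
    domain = RegularInterval(-2.0, 2.0, 2**n_qubits)

    # MPS representation of the discretized variable x.
    x_mps = mps_interval(domain)

    # Equivalently, the Interval can be passed directly to the expansion,
    # letting it build the initial condition internally:
    # f_mps = cheb2mps(coefficients, domain=domain)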

Multivariate functions
======================

This method enables the construction of multivariate functions by composing univariate functions on multivariate
initial conditions. These conditions can be constructed as tensorized products or sums of univariate states,
using the methods :func:`~seemps.analysis.factories.mps_tensor_product` and :func:`~seemps.analysis.factories.mps_tensor_sum`.
Note that such combinations may have a growing support, so they must be rescaled appropriately to fit in :math:`[-1, 1]`, as in the sketch below.
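
A minimal sketch of building a bivariate initial condition from two univariate discretized variables
``x_mps`` and ``y_mps`` (built as above). Whether these factories accept a list of states is an assumption
about their signatures.

.. code-block:: python

    from seemps.analysis.factories import mps_tensor_product, mps_tensor_sum

    # Tensorized combinations of two univariate variables on a product grid.
    x_plus_y = mps_tensor_sum([x_mps, y_mps])       # represents x + y
    x_times_y = mps_tensor_product([x_mps, y_mps])  # represents x * y

    # If x and y live in [-1, 1], the support of x + y grows to [-2, 2],
    # so it must be rescaled back into [-1, 1] before composing a
    # Chebyshev expansion on it.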

An example on how to use these functions is shown in `Chebyshev.ipynb <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/Chebyshev.ipynb>`_.

.. autosummary::
:toctree: generated/

~seemps.analysis.chebyshev.projection_coefficients
~seemps.analysis.chebyshev.interpolation_coefficients
~seemps.analysis.chebyshev.estimate_order
~seemps.analysis.chebyshev.cheb2mps
~seemps.analysis.chebyshev.cheb2mpo
11 changes: 11 additions & 0 deletions docs/algorithms/generated/seemps.analysis.chebyshev.cheb2mpo.rst
@@ -0,0 +1,11 @@


seemps.analysis.chebyshev.cheb2mpo
==================================

.. currentmodule:: seemps.analysis.chebyshev



.. autofunction:: seemps.analysis.chebyshev.cheb2mpo

11 changes: 11 additions & 0 deletions docs/algorithms/generated/seemps.analysis.chebyshev.cheb2mps.rst
@@ -0,0 +1,11 @@


seemps.analysis.chebyshev.cheb2mps
==================================

.. currentmodule:: seemps.analysis.chebyshev



.. autofunction:: seemps.analysis.chebyshev.cheb2mps

@@ -0,0 +1,11 @@


seemps.analysis.chebyshev.estimate\_order
=========================================

.. currentmodule:: seemps.analysis.chebyshev



.. autofunction:: seemps.analysis.chebyshev.estimate_order

@@ -0,0 +1,11 @@


seemps.analysis.chebyshev.interpolation\_coefficients
=====================================================

.. currentmodule:: seemps.analysis.chebyshev



.. autofunction:: seemps.analysis.chebyshev.interpolation_coefficients

@@ -0,0 +1,11 @@


seemps.analysis.chebyshev.projection\_coefficients
==================================================

.. currentmodule:: seemps.analysis.chebyshev



.. autofunction:: seemps.analysis.chebyshev.projection_coefficients

@@ -0,0 +1,11 @@


seemps.analysis.lagrange.lagrange\_basic
========================================

.. currentmodule:: seemps.analysis.lagrange



.. autofunction:: seemps.analysis.lagrange.lagrange_basic

@@ -0,0 +1,11 @@


seemps.analysis.lagrange.lagrange\_local\_rank\_revealing
=========================================================

.. currentmodule:: seemps.analysis.lagrange



.. autofunction:: seemps.analysis.lagrange.lagrange_local_rank_revealing

@@ -0,0 +1,11 @@


seemps.analysis.lagrange.lagrange\_rank\_revealing
==================================================

.. currentmodule:: seemps.analysis.lagrange



.. autofunction:: seemps.analysis.lagrange.lagrange_rank_revealing

3 changes: 3 additions & 0 deletions docs/algorithms/index.rst
@@ -15,3 +15,6 @@ Index of algorithms
runge_kutta
crank_nicolson
tebd_evolution
chebyshev
tt-cross
lagrange
23 changes: 23 additions & 0 deletions docs/algorithms/lagrange.rst
@@ -0,0 +1,23 @@
.. currentmodule:: seemps.analysis.lagrange


.. _alg_lagrange:

**************************************
Multiscale interpolative constructions
**************************************

The MPS representation of a univariate polynomial interpolant can be efficiently constructed using multiscale interpolative constructions, following Lindsey's method (see Ref. https://arxiv.org/pdf/2311.12554). These methods leverage the Lagrange interpolation framework, specifically using Chebyshev-Lobatto nodes. The SeeMPS library implements these interpolative constructions for the univariate scenario; the extensions to the multivariate case have not been implemented yet.

In the following, we provide an overview of the method. Essentially, the basic construction :func:`~lagrange_basic` implements a universal construction by assembling three distinct tensor cores, corresponding to the left-most, bulk, and right-most sites. The left-most core is function-dependent, while the rest depend only on the interpolation order. Once assembled, these cores are combined to form a Lagrange interpolant across two Chebyshev-Lobatto grids, one for each half of the domain. This approach severely overestimates the required bond dimension of the interpolant, and therefore requires a large-scale final simplification.

This method can be enhanced by performing rank-revealing optimizations based on singular value decompositions. The corresponding method, :func:`~lagrange_rank_revealing`, avoids the need for a large-scale final simplification.

Finally, the interpolative constructions can be built on top of a local Lagrange interpolation framework with :func:`~lagrange_local_rank_revealing`. In this case, the tensor cores become sparse, greatly improving the overall efficiency of the algorithm.
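
As a rough sketch of how these constructions might be called, assuming each routine takes the univariate function, an interpolation order, and the number of qubits of the discretization; the argument names, their order, and the extra local order are assumptions, not the documented signatures.

.. code-block:: python

    from seemps.analysis.lagrange import (
        lagrange_basic,
        lagrange_rank_revealing,
        lagrange_local_rank_revealing,
    )

    # Runge function, a standard test case for polynomial interpolation.
    def f(x):
        return 1.0 / (1.0 + 25.0 * x**2)

    order = 30       # interpolation order (assumed argument)
    n_qubits = 12    # qubits of the discretized domain (assumed argument)

    # Basic universal construction: overestimates the bond dimension and
    # relies on a large-scale final simplification.
    mps_basic = lagrange_basic(f, order, n_qubits)

    # Rank-revealing variant: truncates with SVDs during the construction.
    mps_rr = lagrange_rank_revealing(f, order, n_qubits)

    # Local variant with sparse cores, usually the most efficient.
    local_order = 8  # order of the local interpolation (assumed argument)
    mps_local = lagrange_local_rank_revealing(f, order, local_order, n_qubits)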

.. autosummary::
:toctree: generated/

~lagrange_basic
~lagrange_rank_revealing
~lagrange_local_rank_revealing
27 changes: 27 additions & 0 deletions docs/algorithms/tt-cross.rst
@@ -0,0 +1,27 @@
.. _alg_ttcross:

*******************************************
Tensor-train cross-interpolation (TT-Cross)
*******************************************

Tensor-train cross-interpolation, known as TT-Cross or TCI, is a method that computes the tensor-train representation of a black-box function by sampling a subset of its elements along patterns known as crosses. As it does not act on the explicit tensor representation, it provides an exponential advantage over the standard Schmidt decomposition and evades the *curse of dimensionality*.

There are several variants of TT-Cross available, each with different advantages and disadvantages in terms of computational cost and accuracy depending on the initial conditions. This library implements three:

1. :func:`~seemps.analysis.cross.cross_dmrg`: Based on two-site optimizations combining the skeleton decomposition and the Schmidt decomposition. It is efficient for structures of low physical dimension, such as binary MPS, as it can increase the bond dimension by several units for each sweep. Inefficient for structures of large physical dimension due to its computational complexity. Has an associated parameter dataclass given by :class:`~seemps.analysis.cross.CrossStrategyDMRG`.

2. :func:`~seemps.analysis.cross.cross_greedy`: Based on two-site optimizations performing greedy searches for maximum-volume pivots. Efficient for structures of large physical dimension, such as tensor-trains with dense modes, due to its advantageous computational complexity. Inefficient for structures of reduced physical dimension, as it increases the bond dimension by one each sweep. Presents ``full search`` and ``partial search`` variants. Has an associated parameter dataclass given by :class:`~seemps.analysis.cross.CrossStrategyGreedy`.

3. :func:`~seemps.analysis.cross.cross_maxvol`: Based on rank-adaptive one-site optimizations using the rectangular skeleton decomposition. Can be seen as a middle ground between the former two methods. Has an associated parameter dataclass given by :class:`~seemps.analysis.cross.CrossStrategyMaxvol`.

Moreover, this method performs the decomposition of a given input black box. This black box can take several different forms and serve different application domains. This library implements the base class :class:`~seemps.analysis.cross.black_box.BlackBox` and the following subclasses:

1. :class:`~seemps.analysis.cross.black_box.BlackBoxLoadMPS`: Required to load functions with quantized degrees of freedom in MPS. Allows for both the *serial* and *interleaved* qubit orders.

2. :class:`~seemps.analysis.cross.black_box.BlackBoxLoadTT`: Required to load functions in tensor-trains, where each degree of freedom is assigned a full tensor of physical dimension equal to the density of the discretization.

3. :class:`~seemps.analysis.cross.black_box.BlackBoxLoadMPO`: Required to load bivariate functions in MPO form, by computing the equivalent MPS representation. This MPS is of square physical dimension (e.g. 4 for an MPO of dimension 2) and can be unfolded at the end into the required MPO.

4. :class:`~seemps.analysis.cross.black_box.BlackBoxComposeMPS`: Required to compose scalar functions on collections of MPS.
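
A heavily hedged sketch of loading a bivariate Gaussian as a quantized MPS with :func:`~seemps.analysis.cross.cross_dmrg` and :class:`~seemps.analysis.cross.black_box.BlackBoxLoadMPS`. The constructor arguments of the black box (function, mesh, qubit order), the call convention of the wrapped function, and the attributes of the returned object are all assumptions, not the documented API.

.. code-block:: python

    import numpy as np
    from seemps.analysis.mesh import RegularInterval, Mesh
    from seemps.analysis.cross import cross_dmrg
    from seemps.analysis.cross.black_box import BlackBoxLoadMPS

    # Bivariate function evaluated on arrays of coordinates.
    # NOTE: the expected call convention is an assumption.
    def f(x, y):
        return np.exp(-(x**2 + y**2))

    # Two-dimensional mesh built from two discretized intervals.
    # NOTE: constructor arguments are assumptions about the mesh API.
    interval = RegularInterval(-1.0, 1.0, 2**10)
    mesh = Mesh([interval, interval])

    # Wrap the function as a black box with quantized (binary) degrees
    # of freedom, choosing the qubit order ("serial" or "interleaved").
    black_box = BlackBoxLoadMPS(f, mesh, mps_order="serial")

    # Run the DMRG-style two-site cross interpolation.
    results = cross_dmrg(black_box)
    f_mps = results.mps  # attribute name assumed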

An example on how to use TCI for all these scenarios is shown in `TT-Cross.ipynb <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/TT-Cross.ipynb>`_.
13 changes: 7 additions & 6 deletions docs/seemps_analysis.rst
@@ -1,17 +1,18 @@
.. _seemps_analysis:

**************************
Quantum numerical analysis
**************************
***********************************
Quantum-inspired numerical analysis
***********************************


.. toctree::
:maxdepth: 1

seemps_analysis_spaces
seemps_analysis_states
seemps_analysis_operators
seemps_analysis_spaces
seemps_analysis_loading
seemps_analysis_differentiation
seemps_analysis_integration
seemps_analysis_interpolation
seemps_analysis_ttcross
seemps_analysis_chebyshev
seemps_analysis_optimization
36 changes: 33 additions & 3 deletions docs/seemps_analysis_integration.rst
@@ -1,4 +1,4 @@
.. currentmodule:: seemps.analysis.integrals
.. currentmodule:: seemps.analysis.integration

.. _analysis_integration:

@@ -11,14 +11,44 @@ Functions encoded in MPS can be efficiently integrated, by contracting those qua
.. math::
\int f(x)\mathrm{d}x \simeq \sum_i f(x_i) \Delta{x} = \langle g | f\rangle.

In the following table we find both functions that construct the states associated to various quadratures---i.e. `mps_*` functions---and a function that implements the integral using any of those rules :py:func:`integrate_mps`
In this scenario, the quadrature corresponding to the state :math:`\langle g |` is the midpoint rule. More sophisticated quadrature rules achieve faster convergence rates---i.e. they require fewer nodes or tensor cores to compute an accurate estimate of the true integral.

Below we list both the functions that construct the states associated with the various quadratures---the ``mps_*`` functions---and a function, :func:`integrate_mps`, that computes the integral using any of those rules. These quadrature rules fall into two families:

Newton-Côtes quadratures
------------------------
These are useful to integrate equispaced discretizations, with rules of increasing order, and are compatible with discretizations stemming from :class:`~seemps.analysis.mesh.RegularInterval` objects. The larger the order, the better the convergence rate. However, high-order quadratures impose restrictions on the number of qubits they support:

- :func:`mps_simpson` requires a number of qubits divisible by 2.
- :func:`mps_fifth_order` requires a number of qubits divisible by 4.

.. autosummary::
:toctree: generated/

~integrate_mps
~mps_midpoint
~mps_trapezoidal
~mps_simpson
~mps_fifth_order

Clenshaw-Curtis quadratures
---------------------------
These are useful to integrate irregular discretizations on either the Chebyshev zeros (Chebyshev-Gauss nodes) or the Chebyshev extrema (Chebyshev-Lobatto nodes), and are compatible with discretizations stemming from :class:`~seemps.analysis.mesh.ChebyshevInterval` objects. They converge exponentially faster than the Newton-Côtes rules.

.. autosummary::
:toctree: generated/

~mps_fejer
~mps_clenshaw_curtis

Integration
-----------
The standard method for integration consists in first constructing the multivariate quadrature rule using the previous routines, together with the tensorized operations :func:`~seemps.analysis.factories.mps_tensor_product` and :func:`~seemps.analysis.factories.mps_tensor_sum`. This quadrature is then contracted with the target MPS using the scalar product routine :func:`~seemps.state.scprod`. For ease of use, the helper routine :func:`~seemps.analysis.integration.integrate_mps` automatically computes the best quadrature rule associated with a :class:`~seemps.analysis.mesh.Mesh` object and contracts it with the target MPS to compute the integral:

.. autosummary::
:toctree: generated/

~integrate_mps

Note that this helper routine is only valid for standard function representations in MPS with binary quantization, while the manual method described above is applicable in all cases.
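
A minimal sketch of both approaches for a two-dimensional integral of an MPS ``f_mps`` encoding :math:`f(x, y)`. The argument lists of the ``mps_*`` quadrature constructors and of :func:`~seemps.analysis.integration.integrate_mps` (endpoints, number of qubits, mesh) are assumptions; only the routine names are taken from this documentation.

.. code-block:: python

    from seemps.analysis.integration import mps_trapezoidal, integrate_mps
    from seemps.analysis.factories import mps_tensor_product
    from seemps.state import scprod

    # Univariate trapezoidal quadratures, one per dimension.
    # NOTE: argument names and order are illustrative assumptions.
    n_qubits = 10
    quad_x = mps_trapezoidal(-1.0, 1.0, n_qubits)
    quad_y = mps_trapezoidal(-1.0, 1.0, n_qubits)

    # Multivariate quadrature as a tensorized product, contracted with
    # the target MPS through the scalar product.
    quadrature_2d = mps_tensor_product([quad_x, quad_y])
    integral = scprod(quadrature_2d, f_mps)

    # Alternatively, let the helper pick the quadrature from a Mesh object:
    # integral = integrate_mps(f_mps, mesh)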

An example on how to use these functions is shown in `Integration.ipynb <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/Integration.ipynb>`_.