
Commit

Improve documentation from PaulaGarciaMolina/main
juanjosegarciaripoll authored Jul 1, 2024
2 parents f3790d8 + 3e25b19 commit c1c33e4
Showing 10 changed files with 314 additions and 1 deletion.
2 changes: 1 addition & 1 deletion docs/algorithms/arnoldi.rst
Original file line number Diff line number Diff line change
@@ -38,7 +38,7 @@ based on previous estimates, using the formula :math:`|{\xi_{k+1}}\rangle=(1-\ga
memory factor :math:`\gamma=-0.75`.


The rule in :ref:`alg_descent` is a particular case of the Arnoldi iteration with a Krylov basis with $L=2$.
The rule in :ref:`alg_descent` is a particular case of the Arnoldi iteration with a Krylov basis with :math:`L=2`.

Ref. :cite:t:`GarciaMolina2024` presents this algorithm and its implementation for global optimization problems. It is also suitable for evolution problems.

11 changes: 11 additions & 0 deletions docs/algorithms/generated/seemps.evolution.euler.euler.rst
@@ -0,0 +1,11 @@


seemps.evolution.euler.euler
============================

.. currentmodule:: seemps.evolution.euler



.. autofunction:: seemps.evolution.euler.euler

11 changes: 11 additions & 0 deletions docs/algorithms/generated/seemps.evolution.euler.euler2.rst
@@ -0,0 +1,11 @@


seemps.evolution.euler.euler2
=============================

.. currentmodule:: seemps.evolution.euler



.. autofunction:: seemps.evolution.euler.euler2

Original file line number Diff line number Diff line change
@@ -0,0 +1,11 @@


seemps.evolution.runge\_kutta.runge\_kutta
==========================================

.. currentmodule:: seemps.evolution.runge_kutta



.. autofunction:: seemps.evolution.runge_kutta.runge_kutta

Original file line number Diff line number Diff line change
@@ -0,0 +1,11 @@


seemps.evolution.runge\_kutta.runge\_kutta\_fehlberg
====================================================

.. currentmodule:: seemps.evolution.runge_kutta



.. autofunction:: seemps.evolution.runge_kutta.runge_kutta_fehlberg

3 changes: 3 additions & 0 deletions docs/algorithms/index.rst
@@ -9,6 +9,9 @@ Index of algorithms
tensor_split
mps_simplification
gradient_descent
cgs
arnoldi
dmrg
runge_kutta
crank_nicolson
tebd_evolution
70 changes: 70 additions & 0 deletions docs/algorithms/runge_kutta.rst
@@ -0,0 +1,70 @@
.. _mps_runge_kutta:

*******************
Runge-Kutta methods
*******************

Runge-Kutta methods use a Taylor expansion of the state to approximate

.. math::
   \psi_{k+1} = \psi_k + \sum_{p\geq 1}\frac{1}{p!}\left(-\Delta \beta H\right)^p \psi_k,

which follows from the equation of motion :math:`\partial\psi/\partial\beta = -H\psi`. The step :math:`\Delta \beta` can be either real or imaginary time, and hence these
methods are suitable for both evolution and optimization problems. When using
MPS, these methods perform a global optimization of the MPS at each step,
i.e., they update all tensors simultaneously.

The order of the expansion :math:`p` determines the truncation error of the method, which
is :math:`O(\Delta \beta^{p+1})`, as well as its cost, since a higher order implies more
operations. It is therefore important to weigh this trade-off between cost and accuracy
when choosing the most suitable method for each application.

The SeeMPS library provides four such methods.

1. Euler method
----------------

This is an explicit, first-order Taylor approximation of the evolution, with an error :math:`\mathcal{O}(\Delta\beta^2)`
and a simple update with a fixed time-step :math:`\beta_k = k \Delta\beta`.

.. math::
   \psi_0 &= \psi(\beta_0), \\
   \psi_{k+1} &= \psi_k - \Delta\beta H \psi_k, \quad \text{for } k=0,1,\dots,N-1.

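As an illustration of this update rule, here is a dense-vector sketch with NumPy (not the SeeMPS MPS implementation; the Hamiltonian, initial state and step size are toy choices). In imaginary time, repeated Euler steps with renormalization drive the state towards the ground state:

```python
import numpy as np

def euler_step(H: np.ndarray, psi: np.ndarray, dbeta: float) -> np.ndarray:
    """One explicit Euler step: psi_{k+1} = psi_k - dbeta * H @ psi_k."""
    return psi - dbeta * (H @ psi)

H = np.diag([0.0, 1.0, 2.0])          # toy Hamiltonian with known spectrum
psi = np.ones(3) / np.sqrt(3.0)       # initial state
for _ in range(200):
    psi = euler_step(H, psi, 0.05)
    psi /= np.linalg.norm(psi)        # renormalize after each imaginary-time step
# psi is now dominated by the ground state (1, 0, 0) of H
```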
2. Improved Euler or Heun method
---------------------------------

This is a second-order, fixed-step explicit method that uses two matrix-vector multiplications and two linear combinations of
vectors to achieve an error :math:`\mathcal{O}(\Delta\beta^3)`.

.. math::
   \psi_{k+1} = \psi_k - \frac{\Delta\beta}{2} \left[v_1 + H(\psi_k - \Delta\beta v_1)\right], \\
   \text{with } v_1 = H \psi_k.

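The same two-stage update can be sketched for dense vectors (again a NumPy toy, not the SeeMPS API). For a diagonal Hamiltonian the result can be checked against the exact imaginary-time propagator, confirming the :math:`\mathcal{O}(\Delta\beta^3)` local error:

```python
import numpy as np

def heun_step(H: np.ndarray, psi: np.ndarray, dbeta: float) -> np.ndarray:
    """Improved Euler (Heun): psi - (dbeta/2) [v1 + H (psi - dbeta v1)], v1 = H psi."""
    v1 = H @ psi
    return psi - 0.5 * dbeta * (v1 + H @ (psi - dbeta * v1))

H = np.diag([0.0, 1.0])                    # toy diagonal Hamiltonian
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
dbeta = 0.01
approx = heun_step(H, psi, dbeta)
exact = np.exp(-dbeta * np.diag(H)) * psi  # exact step for diagonal H
# the difference approx - exact is O(dbeta**3) per step
```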
3. Fourth-order Runge-Kutta method
-----------------------------------

This algorithm achieves an error :math:`\mathcal{O}(\Delta\beta^5)` using four matrix-vector multiplications and four linear combinations of vectors.

.. math::
   \psi_{k+1} &= \psi_k + \frac{\Delta\beta}{6}(v_1 + 2v_2 + 2v_3 + v_4), \\
   v_1 &= -H \psi_k, \\
   v_2 &= -H\left(\psi_k + \frac{\Delta\beta}{2}v_1\right), \\
   v_3 &= -H\left(\psi_k + \frac{\Delta\beta}{2}v_2\right), \\
   v_4 &= -H\left(\psi_k + \Delta\beta v_3\right).

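These four stages translate directly into code. A dense NumPy sketch (not the SeeMPS implementation), again verified against the exact propagator of a diagonal toy Hamiltonian:

```python
import numpy as np

def rk4_step(H: np.ndarray, psi: np.ndarray, dbeta: float) -> np.ndarray:
    """Classical fourth-order Runge-Kutta step for d(psi)/d(beta) = -H psi."""
    v1 = -H @ psi
    v2 = -H @ (psi + 0.5 * dbeta * v1)
    v3 = -H @ (psi + 0.5 * dbeta * v2)
    v4 = -H @ (psi + dbeta * v3)
    return psi + (dbeta / 6.0) * (v1 + 2.0 * v2 + 2.0 * v3 + v4)

H = np.diag([0.0, 1.0, 2.0])
psi = np.ones(3) / np.sqrt(3.0)
dbeta = 0.1
approx = rk4_step(H, psi, dbeta)
exact = np.exp(-dbeta * np.diag(H)) * psi  # exact for diagonal H
# the difference approx - exact is O(dbeta**5) per step
```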
4. Runge-Kutta-Fehlberg method
-------------------------------

The Runge-Kutta-Fehlberg algorithm is an adaptive step-size solver that combines a fourth-order accurate integrator, with error :math:`O(\Delta\beta^5)`, and a fifth-order error estimator, accurate to :math:`O(\Delta\beta^6)`. This combination dynamically adjusts the step size :math:`\Delta\beta`
to keep the integration error within a specified tolerance. The method requires an initial estimate of the step size, which can be obtained from a simpler
method. Each iteration involves six matrix-vector multiplications and six linear combinations of vectors, and it may repeat an evolution step if the proposed step size
is deemed unsuitable.
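The control logic can be sketched generically (this is not the actual Fehlberg tableau, which has its own coefficients): estimate the error by comparing two approximations of different accuracy, rescale the step as :math:`(\text{tol}/\text{err})^{1/(p+1)}`, and repeat the step when the error exceeds the tolerance. In this hedged NumPy sketch the error is estimated by step doubling around a fourth-order step, and all names are illustrative:

```python
import numpy as np

def rk4_step(H, psi, dbeta):
    # Fourth-order step, repeated here so the sketch is self-contained.
    v1 = -H @ psi
    v2 = -H @ (psi + 0.5 * dbeta * v1)
    v3 = -H @ (psi + 0.5 * dbeta * v2)
    v4 = -H @ (psi + dbeta * v3)
    return psi + (dbeta / 6.0) * (v1 + 2.0 * v2 + 2.0 * v3 + v4)

def adaptive_step(H, psi, dbeta, tol, order=4):
    """Advance one accepted step; return (new_psi, used_dbeta, next_dbeta)."""
    while True:
        full = rk4_step(H, psi, dbeta)
        half = rk4_step(H, rk4_step(H, psi, 0.5 * dbeta), 0.5 * dbeta)
        err = np.linalg.norm(full - half)   # error estimate from step doubling
        # Standard controller: safety factor 0.9, growth/shrink clipped.
        factor = 0.9 * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
        next_dbeta = dbeta * min(max(factor, 0.1), 5.0)
        if err <= tol:
            return half, dbeta, next_dbeta  # accept the more accurate estimate
        dbeta = next_dbeta                  # reject; retry with a smaller step

H = np.diag([0.0, 1.0, 2.0])
psi0 = np.ones(3) / np.sqrt(3.0)
psi1, used, proposed = adaptive_step(H, psi0, dbeta=0.1, tol=1e-6)
```

The same controller shape underlies production adaptive integrators: an embedded pair replaces the step doubling, but the rescaling and accept/reject loop are identical.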

.. autosummary::
:toctree: generated/

~seemps.evolution.euler.euler
~seemps.evolution.euler.euler2
~seemps.evolution.runge_kutta.runge_kutta
~seemps.evolution.runge_kutta.runge_kutta_fehlberg
2 changes: 2 additions & 0 deletions docs/seemps_analysis_differentiation.rst
Original file line number Diff line number Diff line change
@@ -40,6 +40,8 @@ leading to
This approximation is improved with a smoother formula that makes the derivative estimate resilient to noise, following `Holoborodko <http://www.holoborodko.com/pavel/numerical-methods/numerical-derivative/smooth-low-noise-differentiators>`_.

An example of how to use these functions is shown in `Differentiation.ipynb <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/Differentiation.ipynb>`_.
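For reference, the shortest (five-point) of Holoborodko's smooth noise-robust differentiators has the closed form :math:`f'(x_i) \approx [2(f_{i+1}-f_{i-1}) + (f_{i+2}-f_{i-2})]/(8h)`. A standalone NumPy sketch (independent of the SeeMPS functions above) applying it to the interior points of a uniform grid:

```python
import numpy as np

def smooth_derivative(f: np.ndarray, h: float) -> np.ndarray:
    """Five-point smooth noise-robust differentiator (Holoborodko):
    f'(x_i) ~ (2*(f[i+1] - f[i-1]) + (f[i+2] - f[i-2])) / (8*h).
    Returns derivative estimates at the interior points f[2:-2]."""
    return (2.0 * (f[3:-1] - f[1:-3]) + (f[4:] - f[:-4])) / (8.0 * h)

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
df = smooth_derivative(np.sin(x), h)   # approximates cos(x) on x[2:-2]
```

Unlike the plain central difference, this stencil also suppresses high-frequency noise in the samples, at the cost of two extra points per estimate.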

.. autosummary::
:toctree: generated/

2 changes: 2 additions & 0 deletions docs/seemps_examples.rst
Original file line number Diff line number Diff line change
@@ -11,6 +11,8 @@ In the directory `examples` of the library, we have created various Jupyter note
- `Computation of a spin-1/2 model ground state using DMRG <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/DMRG.ipynb>`_.
- `Solution of a Hamiltonian PDE <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/PDE.ipynb>`_.
- `Function interpolation <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/Interpolation.ipynb>`_.
- `Function differentiation <https://github.com/juanjosegarciaripoll/seemps2/blob/main/examples/Differentiation.ipynb>`_.


Further examples of use appear in the datasets and associated codes to the following publications:

192 changes: 192 additions & 0 deletions examples/Differentiation.ipynb

Large diffs are not rendered by default.
