Releases · cornellius-gp/gpytorch
v1.5.1 (bug fixes, improvements)
New features
- Add gpytorch.kernels.PiecewisePolynomialKernel (#1738); see the sketch after this list
- Include ability to turn off diagonal correction for SGPR models (#1717)
- Include ability to cast LazyTensor to half and float types (#1726)
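For illustration, a minimal usage sketch of the new kernel (the data shapes below are illustrative, not from the release):

```python
import torch
import gpytorch

# Wrap the new compactly-supported kernel in a ScaleKernel, as is
# conventional for GPyTorch covariance modules. q in {0, 1, 2, 3}
# controls the smoothness of the piecewise polynomial.
covar_module = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.PiecewisePolynomialKernel(q=2)
)
x = torch.randn(10, 3)             # 10 illustrative points in 3 dimensions
covar = covar_module(x)            # lazily-evaluated 10 x 10 covariance
print(covar.evaluate().shape)      # torch.Size([10, 10])
```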
Performance improvements
- Specialty MVN log_prob method for Gaussians with sum-of-Kronecker covariances (#1674)
- Ability to specify devices when concatenating rows of LazyTensors (#1712)
- Improvements to LazyTensor symeig method (#1725)
Bug fixes
- Fix to computing batch sizes of kernels (#1685)
- Fix SGPR prediction when fast_computations flags are turned off (#1709); see the sketch after this list
- Improve stability of the stable_qr function (#1714)
- Fix bugs with Pyro integration for full Bayesian inference (#1721)
- num_classes in gpytorch.likelihoods.DirichletLikelihood should be an integer (#1728)
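For reference, a minimal sketch of disabling the fast_computations flags; `model` and `test_x` below are assumed to be a trained SGPR model in eval mode and its test inputs:

```python
import gpytorch

# With all three flags off, GPyTorch uses Cholesky-based linear algebra
# rather than iterative (CG/Lanczos) methods; #1709 fixes SGPR prediction
# under this setting. `model` and `test_x` are assumed to exist.
with gpytorch.settings.fast_computations(
    covar_root_decomposition=False, log_prob=False, solves=False
):
    predictive = model(test_x)
```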
v1.5.0 - GPLVM, Polya-Gamma Augmentation, Faster SGPR Models
This release adds two new model classes, as well as a number of bug fixes:
- GPLVM models for unsupervised learning
- Polya-Gamma GPs for GP classification
In addition, this release contains numerous improvements to SGPR models (that have also been included in prior bug-fix releases).
New features
- Add example notebook that demos binary classification with Polya-Gamma augmentation (#1523)
- New model class: Bayesian GPLVM with Stochastic Variational Inference (#1605)
- Periodic kernel handles multi-dimensional inputs (#1593); see the sketch after this list
- Add missing-data Gaussian likelihoods (#1668)
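A minimal sketch of the multi-dimensional periodic kernel (shapes are illustrative):

```python
import torch
import gpytorch

# With #1593, PeriodicKernel accepts d-dimensional inputs; ard_num_dims
# gives each input dimension its own period length.
kernel = gpytorch.kernels.PeriodicKernel(ard_num_dims=2)
x = torch.randn(8, 2)              # 8 illustrative points in 2 dimensions
print(kernel(x).evaluate().shape)  # torch.Size([8, 8])
```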
v1.4.2 (bug fixes)
Various bug fixes, including:
- Use current PyTorch functionality (#1611, #1586)
- Bug fixes to Lanczos factorization (#1607)
- Fixes to SGPR model (#1607)
- Various fixes to LazyTensor math (#1576, #1584)
- SmoothedBoxPrior has a sample method (#1546); see the sketch after this list
- Fixes to additive-structure models (#1582)
- Doc fixes (#1603)
- Fix to index kernel and LCM kernels (#1608, #1592)
- Fixes to KeOps bypass (#1609)
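A minimal sketch of the new sample method (argument values are illustrative):

```python
import torch
import gpytorch

# #1546 adds direct sampling to SmoothedBoxPrior; here we draw 5 samples
# from a prior smoothly supported on [0, 1].
prior = gpytorch.priors.SmoothedBoxPrior(0.0, 1.0)
samples = prior.sample(torch.Size([5]))  # 5 draws from the prior
```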
v1.4.1 (Bug Fixes)
Fixes
- Simplify interface for 3+ layer DSPP models (#1565)
- Fix marginal log likelihood calculation for exact Bayesian inference w/ Pyro (#1571)
- Remove CG warning for small matrices (#1562)
- Fix Pyro cluster-multitask example notebook (#1550)
- Fix gradients for KeOps tensors (#1543)
- Ensure that gradients are passed through lazily-evaluated kernels (#1518)
- Fix bugs for models with batched fantasy observations (#1529, #1499)
- Correct default latent_dim value for LMC variational models (#1512)
New features
- Create gpytorch.utils.grid.ScaleToBounds utility to replace the gpytorch.utils.grid.scale_to_bounds method (#1566); see the sketch after this list
- Fix skip connections in Deep GP example (#1531)
- Add fantasy point support for structured kernel interpolation models (#1545)
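A minimal sketch of the new ScaleToBounds module (data shapes are illustrative):

```python
import torch
import gpytorch

# The ScaleToBounds module replaces the deprecated scale_to_bounds function,
# e.g. for rescaling intermediate features in deep GP / SKI models.
scale_to_bounds = gpytorch.utils.grid.ScaleToBounds(-1.0, 1.0)
x = torch.randn(20, 2)
x_scaled = scale_to_bounds(x)  # entries now lie in [-1, 1]
```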
v1.4.0 Major performance improvements, especially to Kronecker-factorized models.
This release includes many major speed improvements, especially to Kronecker-factorized multi-output models.
Performance improvements
- Major speed improvements for Kronecker product multitask models (#1355, #1430, #1440, #1469, #1477)
- Unwhitened VI speed improvements (#1487)
- SGPR speed improvements (#1493)
- Large scale exact GP speed improvements (#1495)
- Random Fourier feature speed improvements (#1446, #1493)
New Features
- Dirichlet classification likelihood (#1484), based on Milios et al. (NeurIPS 2018); see the sketch after this list
- MultivariateNormal objects have a base_sample_shape attribute for low-rank/degenerate distributions (#1502)
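A minimal sketch of the Dirichlet classification likelihood; `train_y` below is an illustrative tensor of class indices, and the rest of the model setup is omitted:

```python
import torch
import gpytorch

# The likelihood transforms integer class labels into continuous regression
# targets (Milios et al., 2018), so classification reduces to a batch of
# exact GP regressions, one per class.
train_y = torch.randint(0, 3, (50,))  # illustrative labels for 3 classes
likelihood = gpytorch.likelihoods.DirichletClassificationLikelihood(
    train_y, learn_additional_noise=True
)
print(likelihood.transformed_targets.shape)  # expected: torch.Size([3, 50])
```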
New documentation
- Tutorial for designing your own kernels (#1421)
Debugging utilities
- Better naming conventions for AdditiveKernel and ProductKernel (#1488)
- gpytorch.settings.verbose_linalg context manager for seeing which linalg routines are run (#1489); see the sketch after this list
- Unit test improvements (#1430, #1437)
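A minimal sketch of the debugging context manager (the kernel call is illustrative; output goes through Python logging):

```python
import torch
import gpytorch

# Inside the context manager, GPyTorch logs each linear-algebra routine
# (Cholesky, CG, Lanczos, ...) it dispatches, which helps when debugging
# performance.
with gpytorch.settings.verbose_linalg():
    x = torch.randn(100, 1)
    covar = gpytorch.kernels.RBFKernel()(x)
    covar.cholesky()  # the dispatched Cholesky call is logged
```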
Bug Fixes
- inverse_transform is applied to the initial values of constraints (#1482)
- psd_safe_cholesky obeys cholesky_jitter settings (#1476)
- Fix scaling issue with priors on variational models (#1485)
Breaking changes
- MultitaskGaussianLikelihoodKronecker (deprecated) is fully incorporated into MultitaskGaussianLikelihood (#1471)
v1.3.1 (Bug Fixes)
Fixes
- Spectral mixture kernels work with SKI (#1392)
- Natural gradient descent is compatible with batch-mode GPs (#1416)
- Fix prior mean in whitened SVGP (#1427)
- RBFKernelGrad has no more in-place operations (#1389)
- Fixes to ConstantDiagLazyTensor (#1381, #1385)
Documentation
- Include example notebook for multitask Deep GPs (#1410)
- Documentation updates (#1408, #1434, #1385, #1393)
Performance
- KroneckerProductLazyTensors use root decompositions of children (#1394)
- SGPR now uses Woodbury formula and matrix determinant lemma (#1356)
GPyTorch 1.3: PyTorch 1.7 compatibility; performance improvements; MVM-based variational models
This release primarily focuses on performance improvements, and adds contour integral quadrature based variational models.
Major Features
Variational models with contour integral quadrature
- Add an MVM-based approach to whitened variational inference (#1372)
- This is based on the work in "Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization" (Pleiss et al., 2020)
Performance improvements
- Kronecker product models compute a deterministic logdet (faster than the Lanczos-based logdet) (#1332)
- Improve efficiency of the KroneckerProductLazyTensor symeig method (#1338)
- Improve SGPR efficiency (#1356)
Other improvements
- SpectralMixtureKernel accepts arbitrary batch shapes (#1350)
- Variational models pass arbitrary **kwargs to the forward method (#1339)
- gpytorch.settings context managers keep track of their default value (#1347)
- Kernel objects can be pickled (#1336); see the sketch after this list
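A minimal sketch of kernel pickling (the kernel choice is illustrative):

```python
import pickle
import gpytorch

# With #1336, kernel modules round-trip through pickle, so fitted
# covariance modules can be serialized directly.
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=2.5))
restored = pickle.loads(pickle.dumps(kernel))
```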
v1.2.1 (Bug fixes)
This release includes the following fixes:
- Fix caching issues with variational GPs (#1274, #1311)
- Ensure that constraint bounds are properly cast to floating point types (#1307)
- Fix bug with broadcasting multitask multivariate normal shapes (#1312)
- Bypass KeOps for small/rectangular kernels (#1319)
- Fix issues with eigenvectors=False in LazyTensor#symeig (#1283)
- Fix issues with fixed-noise LazyTensor preconditioner (#1299)
- Doc fixes (#1275, #1301)
GPyTorch 1.2: PyTorch 1.6 compatibility; new/improved approximate GP models; natural gradient descent; new specialty kernels
Major Features
New variational and approximate models
This release features a number of new and improved approximate GP models:
- Linear model of coregionalization for variational multitask GPs (#1180)
- Deep Sigma Point Process models (#1193)
- Mean-field decoupled (MFD) models from "Parametric Gaussian Process Regressors" (Jankowiak et al., 2020) (#1179)
- Implement natural gradient descent (#1258); see the sketch after this list
- Additional non-conjugate likelihoods (Beta, StudentT, Laplace) (#1211)
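A minimal sketch of the natural gradient descent setup; the tiny SVGP model below is illustrative, and the key requirement is a NaturalVariationalDistribution:

```python
import torch
import gpytorch

# A tiny SVGP model whose variational distribution uses the natural
# parameterization, so its variational parameters can be optimized with NGD.
class SVGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.NaturalVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

model = SVGPModel(torch.randn(16, 1))
# NGD updates the variational parameters; an ordinary optimizer
# handles the remaining hyperparameters.
variational_optimizer = gpytorch.optim.NGD(
    model.variational_parameters(), num_data=100, lr=0.1
)
hyperparameter_optimizer = torch.optim.Adam(model.hyperparameters(), lr=0.01)
```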
New kernels
We have just added a number of new specialty kernels:
- gpytorch.kernels.GaussianSymmetrizedKLKernel for performing regression with uncertain inputs (#1186)
- gpytorch.kernels.RFFKernel (random Fourier features kernel) (#1172, #1233); see the sketch after this list
- gpytorch.kernels.SpectralDeltaKernel (a parametric kernel for patterns/extrapolation) (#1231)
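A minimal sketch of the random Fourier features kernel (shapes and the feature count are illustrative):

```python
import torch
import gpytorch

# RFFKernel approximates a stationary (RBF) kernel with num_samples random
# Fourier features, yielding a low-rank covariance for faster inference.
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RFFKernel(num_samples=64))
x = torch.randn(100, 2)
print(kernel(x).evaluate().shape)  # torch.Size([100, 100])
```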
More scalable sampling
- Large-scale sampling with contour integral quadrature from Pleiss et al., 2020 (#1194)
Minor features
- Ability to set amount of jitter added when performing Cholesky factorizations (#1136); see the sketch after this list
- Improve scalability of KroneckerProductLazyTensor (#1199, #1208)
- Improve speed of preconditioner (#1224)
- Add symeig and svd methods to LazyTensors (#1105)
- Add TriangularLazyTensor for Cholesky methods (#1102)
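A minimal sketch of the configurable Cholesky jitter; `model` and `train_x` below are assumed to be an existing GP model and its inputs:

```python
import gpytorch

# #1136 makes the diagonal jitter added before Cholesky factorization
# configurable. `model` and `train_x` are assumed to exist.
with gpytorch.settings.cholesky_jitter(1e-4):
    output = model(train_x)
```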
Bug fixes
- Fix initialization code for gpytorch.kernels.SpectralMixtureKernel (#1171); see the sketch after this list
- Fix bugs with LazyTensor addition (#1174)
- Fix issue with loading smoothed box priors (#1195)
- Throw warning when variances are not positive, check for valid correlation matrices (#1237, #1241, #1245)
- Fix sampling issues with Pyro integration (#1238)
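A minimal sketch of the fixed initialization path (the training data here are illustrative):

```python
import torch
import gpytorch

# initialize_from_data chooses initial mixture means, scales, and weights
# from the empirical spectrum of the training data.
train_x = torch.linspace(0, 1, 50)
train_y = torch.sin(train_x * 12.0)
kernel = gpytorch.kernels.SpectralMixtureKernel(num_mixtures=4)
kernel.initialize_from_data(train_x, train_y)
```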
GPyTorch 1.1: PyTorch 1.5 compatibility, improved multitask-GP stability
Major features
- GPyTorch is compatible with PyTorch 1.5 (latest release)
- Several bugs with task-independent multitask models are fixed (#1110)
- Task-dependent multitask models are more batch-mode compatible (#1087, #1089, #1095)
Minor features
- gpytorch.priors.MultivariateNormalPrior has an expand method (#1018)
- Better broadcasting for batched inducing point models (#1047)
- LazyTensor.repeat works with rectangular matrices (#1068)
- gpytorch.kernels.ScaleKernel inherits the active_dims property from its base kernel (#1072)
- Fully Bayesian models can be saved (#1076)
Bug Fixes
- gpytorch.kernels.PeriodicKernel is batch-mode compatible (#1012)
- Fix gpytorch.priors.MultivariateNormalPrior expand method (#1018)
- Fix indexing issues with LazyTensors (#1029)
- Fix constants with gpytorch.mlls.GammaRobustVariationalELBO (#1038, #1053)
- Prevent doubly-computing derivatives of kernel inputs (#1042)
- Fix initialization issues with gpytorch.kernels.SpectralMixtureKernel (#1052)
- Fix stability of gpytorch.variational.DeltaVariationalStrategy