Releases: cornellius-gp/gpytorch
v1.13
What's Changed
- Merge `main` and `develop` branches by @gpleiss in #2542
- Include jaxtyping to allow for Tensor/LinearOperator typehints with sizes. by @gpleiss in #2543
- fix: replace deprecated scipy.integrate.cumtrapz with cumulative_trapezoid by @natsukium in #2545
- use common notation for normal distribution N(\mu, \sigma^2) by @partev in #2547
- fix broken link in Simple_GP_Regression.ipynb by @partev in #2546
- Deprecate last_dim_is_batch (bump PyTorch version to >= 2.0) by @gpleiss in #2549
- Added ability for priors of transformed distributions to have their p… by @hvarfner in #2551
- Avoid unnecessary memory allocation for covariance downdate in SGPR prediction strategy by @JonathanWenger in #2559
- Fix VNNGP with batches by @LuhuanWu in #2375
- fix a typo by @partev in #2570
- fix a typo by @partev in #2571
- DOC: improve the formatting in the documentation by @partev in #2578
New Contributors
- @natsukium made their first contribution in #2545
- @hvarfner made their first contribution in #2551
Full Changelog: v1.12...v1.13
v1.12
What's Changed
- Minor patch to Matern covariances by @j-wilson in #2378
- Fix error messages for ApproximateGP.get_fantasy_model by @gpleiss in #2374
- Fix lazy kernel slicing when there are multiple outputs by @douglas-boubert in #2376
- Fix training status of noise model of `HeteroskedasticNoise` after exceptions by @fjzzq2002 in #2382
- Stop rbf_kernel_grad and rbf_kernel_gradgrad creating the full covariance matrix unnecessarily by @douglas-boubert in #2388
- Likelihood bugfix by @gpleiss in #2395
- Update RTD configuration, and linear_operator requirement. by @gpleiss in #2399
- Better support for missing labels by @Turakar in #2288
- Fix latex of gradients in docs by @jlperla in #2404
- Skip the warning in `gpytorch.lazy.__getattr__` if name starts with `_` by @saitcakmak in #2423
- Fix KeOps regressions from #2296. by @gpleiss in #2413
- Update index.rst by @mkomod in #2449
- `python` should also be a runtime dependency by @jaimergp in #2457
- fix a typo: cannonical -> canonical by @partev in #2461
- Update distributions.rst by @chrisyeh96 in #2487
- Fix flaky SVGP classification test by @gpleiss in #2495
- DOC: Fix typo in docstring. by @johanneskopton in #2493
- fix a typo by @partev in #2464
- DOC: fix formatting issue in RFFKernel documentation by @partev in #2463
- DOC: fix broken formatting in leave_one_out_pseudo_likelihood.py by @partev in #2462
- `ConstantKernel` by @SebastianAment in #2511
- DOC: fix broken URL in periodic_kernel.py by @partev in #2513
- Bug: Exploit Structure in get_fantasy_strategy by @naefjo in #2494
- Matern52 grad by @m-julian in #2512
- Added optional `kwargs` to `ExactMarginalLogLikelihood` call by @rafaol in #2522
- Corrected configuration of `exclude` statements in `pre-commit` configuration by @JonathanWenger in #2541
New Contributors
- @douglas-boubert made their first contribution in #2376
- @fjzzq2002 made their first contribution in #2382
- @jlperla made their first contribution in #2404
- @mkomod made their first contribution in #2449
- @jaimergp made their first contribution in #2457
- @partev made their first contribution in #2461
- @chrisyeh96 made their first contribution in #2487
- @johanneskopton made their first contribution in #2493
- @naefjo made their first contribution in #2494
- @rafaol made their first contribution in #2522
Full Changelog: v1.11...v1.12
v1.11
What's Changed
- Fix solve_triangular(Tensor, LinearOperator) not supported in VNNGP by @Turakar in #2323
- Metrics fixes and cleanup by @JonathanWenger in #2325
- Lock down doc requirements to prevent RTD failures. by @gpleiss in #2339
- Fix typos in multivariate_normal.py by @manuelhaussmann in #2331
- add Hamming IMQ kernel by @samuelstanton in #2327
- Use torch.cdist for `dist` by @esantorella in #2336
- Enable fantasy models for multitask GPs Reborn by @yyexela in #2317
- Clean up deprecation warnings by @saitcakmak in #2348
- More informative string representation of MultitaskMultivariateNormal distributions. by @gpleiss in #2333
- Mean and kernel functions for first and second derivatives by @ankushaggarwal in #2235
- Bugfix: double added log noise prior by @LuisAugenstein in #2355
- Remove Module.getattr by @saitcakmak in #2359
- Remove num_outputs from IndependentModelList by @saitcakmak in #2360
- keops periodic and keops kernels unit tests by @m-julian in #2296
- Deprecate checkpointing by @gpleiss in #2361
New Contributors
- @Turakar made their first contribution in #2323
- @manuelhaussmann made their first contribution in #2331
- @esantorella made their first contribution in #2336
- @yyexela made their first contribution in #2317
- @ankushaggarwal made their first contribution in #2235
- @LuisAugenstein made their first contribution in #2355
Full Changelog: v1.10...v1.11
v1.10
What's Changed
- Re-add pyro + torch_master check by @gpleiss in #2241
- Fix silently ignored arguments in IndependentModelList by @saitcakmak in #2249
- fix bug in nearest_neighbor_variational_strategy by @LuhuanWu in #2243
- Move infinite interval bounds check into Interval constructor by @Balandat in #2259
- Use ufmt for code formatting and import sorting by @Balandat in #2262
- Update nearest_neighbors.py by @yw5aj in #2267
- Use raw strings to avoid "DeprecationWarning: invalid escape sequence" by @saitcakmak in #2282
- Fix handling of re-used priors by @Balandat in #2269
- Fix BernoulliLikelihood documentation by @gpleiss in #2285
- gpytorch.settings.variational_cholesky_jitter can be set dynamically. by @gpleiss in #2255
- Likelihood docs update by @gpleiss in #2292
- Improve development/contributing documentation by @gpleiss in #2293
- Use raw strings to avoid "DeprecationWarning: invalid escape sequence" by @saitcakmak in #2295
- Update SGPR notebook by @gpleiss in #2303
- Update linear operator dependency to 0.4.0 by @gpleiss in #2321
Full Changelog: v1.9.1...v1.10
v1.9.1 (bug fixes)
What's Changed
- Fix LMCVariationalStrategy example in docs by @adamjstewart in #2112
- Accept closure argument in NGD optimizer `step` by @dannyfriar in #2118
- Fix bug with Multitask DeepGP predictive variances. by @gpleiss in #2123
- Autogenerate parameter types in documentation from python typehints by @gpleiss in #2125
- Retiring deprecated versions of `psd_safe_cholesky`, `NotPSDError`, and `assert_allclose` by @SebastianAment in #2130
- fix custom dtype_value_context setting by @sdaulton in #2132
- Include linear operator in installation instructions by @saitcakmak in #2131
- Fixes HalfCauchyPrior by @feynmanliang in #2137
- Fix return type of `Kernel.covar_dist` by @Balandat in #2138
- Change variable name for better understanding by @findoctorlin in #2135
- Expose jitter by @hughsalimbeni in #2136
- Add HalfNormal prior distribution for non-negative variables. by @ZitongZhou in #2147
- Fix multitask/added_loss_term bugs in SGPR regression by @gpleiss in #2121
- fix bugs in test half Cauchy prior. by @ZitongZhou in #2156
- Generalize RandomModule by @feynmanliang in #2164
- MMVN.to_data_independent_dist returns correct variance for non-interleaved MMVN distributions. by @gpleiss in #2172
- Update MSLL in metrics.py by @jongwonKim-1997 in #2177
- Update multitask example notebook by @gpleiss in #2190
- Fix exception message for missing kernel lazy kernel attribute by @dannyfriar in #2195
- Improving `_sq_dist` when `x1_eq_x2` by @SebastianAment in #2204
- Fix docs/requirements.txt by @gpleiss in #2206
- As per issue '#2175 [Docs] GP Regression With Uncertain Inputs'. by @corwinjoy in #2200
- Avoid evaluating kernel when adding jitter by @gpleiss in #2189
- Avoid evaluating kernel in `expand_batch` by @dannyfriar in #2185
- Deprecating `postprocess` by @SebastianAment in #2205
- Make PiecewisePolynomialKernel GPU compatible by @gpleiss in #2217
- Let `LazyEvaluatedKernelTensor` recall the grad state at instantiation by @SebastianAment in #2229
- Doc Update for Posterior Model Distribution and Posterior Predictive Distribution by @varunagrawal in #2230
- Fix 08_Advanced_Usage links by @st-- in #2240
- Add `device` property to `Kernel`s, add unit tests by @Balandat in #2234
- Pass **kwargs to `ApproximateGP.__call__` in DeepGPLayer by @IdanAchituve in #2224
New Contributors
- @dannyfriar made their first contribution in #2118
- @SebastianAment made their first contribution in #2130
- @feynmanliang made their first contribution in #2137
- @findoctorlin made their first contribution in #2135
- @hughsalimbeni made their first contribution in #2136
- @ZitongZhou made their first contribution in #2147
- @jongwonKim-1997 made their first contribution in #2177
- @corwinjoy made their first contribution in #2200
- @varunagrawal made their first contribution in #2230
- @st-- made their first contribution in #2240
- @IdanAchituve made their first contribution in #2224
Full Changelog: v1.9.0...v1.9.1
v1.9.0 (LinearOperator)
Starting with this release, the LazyTensor functionality of GPyTorch has been pulled out into its own separate Python package, called linear_operator. Most users won't notice the difference (at the moment), but power users will notice a few changes.
If you have your own custom LazyTensor code, don't worry: this release is backwards compatible! However, you'll see a lot of annoying deprecation warnings 😄
LazyTensor -> LinearOperator
- All `gpytorch.lazy.*LazyTensor` classes now live in the `linear_operator` repo, and are now called `linear_operator.operators.*LinearOperator`.
  - For example, `gpytorch.lazy.DiagLazyTensor` is now `linear_operator.operators.DiagLinearOperator`
  - The only major naming change: `NonLazyTensor` is now `DenseLinearOperator`
- For example, `gpytorch.lazify` and `gpytorch.delazify` are now `linear_operator.to_linear_operator` and `linear_operator.to_dense`, respectively.
- The `_quad_form_derivative` method has been renamed to `_bilinear_derivative` (a more accurate name!)
- `LinearOperator` method names now reflect their corresponding PyTorch names. This includes:
  - `add_diag` -> `add_diagonal`
  - `diag` -> `diagonal`
  - `inv_matmul` -> `solve`
  - `symeig` -> `eigh` and `eigvalsh`
- `LinearOperator` now has the `mT` property
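As a quick migration reference, the renames above can be collected into a small lookup table. This helper is hypothetical (it is not part of gpytorch or linear_operator); only the name pairs themselves come from the list above.

```python
# Hypothetical migration helper (not part of gpytorch or linear_operator):
# maps deprecated LazyTensor-era method names to their LinearOperator
# equivalents, per the rename list above.
LINEAR_OPERATOR_RENAMES = {
    "add_diag": "add_diagonal",
    "diag": "diagonal",
    "inv_matmul": "solve",
    "symeig": "eigh",  # the eigenvalues-only variant is now eigvalsh
    "_quad_form_derivative": "_bilinear_derivative",
}


def modern_name(old_name: str) -> str:
    """Return the LinearOperator-era name for a possibly deprecated name."""
    return LINEAR_OPERATOR_RENAMES.get(old_name, old_name)
```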
`__torch_function__` functionality
LinearOperators are now compatible with the torch API! For example, the following code works:

```python
diag_linear_op = linear_operator.operators.DiagLinearOperator(torch.randn(10))
torch.matmul(diag_linear_op, torch.randn(10, 2))  # returns a torch.Tensor!
```
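To see why structured operators like this are worthwhile, here is a plain-NumPy sketch (not linear_operator's implementation, and requiring neither package) of the computation a DiagLinearOperator represents: multiplying by a diagonal matrix is just row-wise scaling, so the dense n x n matrix never needs to be materialized.

```python
import numpy as np


def diag_matmul(d: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Compute diag(d) @ B without building the dense diagonal matrix.

    Row-wise scaling via broadcasting: O(n * m) work and O(n) storage for
    the operator, versus O(n^2) to materialize np.diag(d).
    """
    return d[:, None] * B


d = np.array([1.0, 2.0, 3.0])
B = np.ones((3, 2))
# Matches the dense computation exactly:
assert np.allclose(diag_matmul(d, B), np.diag(d) @ B)
```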
Other files that have moved:
- `gpytorch.functions` - all of the core functions used by LazyTensors now live in the LinearOperator repo. This includes: diagonalization, dsmm, inv_quad, inv_quad_logdet, matmul, pivoted_cholesky, root_decomposition, solve (formerly inv_matmul), and sqrt_inv_matmul
- `gpytorch.utils` - a few have moved to the LinearOperator repo. This includes: broadcasting, cholesky, contour_integral_quad, getitem, interpolation, lanczos, linear_cg, minres, permutation, stable_pinverse, qr, sparse, StochasticLQ, and toeplitz.
Full Changelog: v1.8.1...v1.9.0
v1.8.1
Bug fixes
- MultitaskMultivariateNormal: fix tensor reshape issue by @adamjstewart in #2081
- Fix handling of prior terms in ExactMarginalLogLikelihood by @saitcakmak in #2039
- Fix bug in preconditioned KISS-GP / Hadamard Multitask GPs by @gpleiss in #2090
- Add constant_constraint to ConstantMean by @gpleiss in #2082
Full Changelog: v1.8.0...v1.8.1
v1.8.0 (Nearest Neighbor Variational Gaussian Processes)
Major Features
New Contributors
- @adamjstewart made their first contribution in #2061
- @m-julian made their first contribution in #2054
- @ngam made their first contribution in #2059
- @LuhuanWu made their first contribution in #2026
Full Changelog: v1.7.0...v1.8.0
v1.7.0 - gpytorch.metrics, variance reduction, variational fantasy models, improved gpytorch.priors
Important: This release requires Python 3.7 (up from 3.6) and PyTorch 1.10 (up from 1.9)
New Features
- gpytorch.metrics module offers easy-to-use metrics for GP performance (#1870). This includes:
- gpytorch.metrics.mean_absolute_error
- gpytorch.metrics.mean_squared_error
- gpytorch.metrics.mean_standardized_log_loss
- gpytorch.metrics.negative_log_predictive_density
- gpytorch.metrics.quantile_coverage_error
- Large scale inference (using matrix-multiplication techniques) now implements the variance reduction scheme described in Wenger et al., ICML 2022. (#1836)
- This makes it possible to use LBFGS, or other line search based optimization techniques, with large scale (exact) GP hyperparameter optimization.
- Variational GP models support online updates (i.e. “fantasizing” new models). (#1874)
- This utilizes the method described in Maddox et al., NeurIPS 2021
- Improvements to gpytorch.priors
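For intuition, the regression metrics above have simple closed forms when the predictive distribution is Gaussian. The sketch below is not gpytorch's implementation (gpytorch's functions take a MultivariateNormal predictive distribution); it only mirrors two of the metric names from the list, assuming independent per-point predictions given as means and variances.

```python
import math


def mean_squared_error(y, mean):
    """Average squared difference between targets and predictive means."""
    return sum((yi - mi) ** 2 for yi, mi in zip(y, mean)) / len(y)


def negative_log_predictive_density(y, mean, var):
    """NLPD = -(1/n) * sum_i log N(y_i; mu_i, var_i), lower is better."""
    log_probs = [
        -0.5 * (math.log(2 * math.pi * v) + (yi - m) ** 2 / v)
        for yi, m, v in zip(y, mean, var)
    ]
    return -sum(log_probs) / len(y)
```

A perfectly calibrated unit-variance prediction of y = 0 gives NLPD = 0.5 * log(2 * pi) ≈ 0.919, the entropy-like floor for variance 1.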
Minor Features
- Add LeaveOneOutPseudoLikelihood for hyperparameter optimization (#1989)
- The PeriodicKernel now supports ARD lengthscales/periods (#1919)
- LazyTensors (A) can now be matrix multiplied with tensors (B) from the left hand side (i.e. B x A) (#1932)
- Maximum Cholesky retries can be controlled through a setting (#1861)
- Kernels, means, and likelihoods can be pickled (#1876)
- Minimum variance for FixedNoiseGaussianLikelihood can be set with a context manager (#2009)
Bug Fixes
- Fix backpropagation issues with KeOps kernels (#1904)
- Fix broadcasting issues with lazily evaluated kernels (#1971)
- Fix batching issues with PolynomialKernel (#1977)
- Fix issues with PeriodicKernel.diag() (#1919)
- Add more informative error message when train targets and the train prior distribution mismatch (#1905)
- Fix issues with priors on ConstantMean (#2042)
v1.6.0 - Compatibility with PyTorch 1.9/1.10, multitask variational models, performance improvements
This release contains several bug fixes and performance improvements.
New Features
- Variational multitask models can output a single task per input (rather than all tasks per input) (#1769)
Small fixes
- LazyTensor#to method more closely matches the torch Tensor API (#1746)
- Add type hints and exceptions to kernels to improve usability (#1802)
Performance
- Improve the speed of fantasy models (#1752)
- Improve the speed of solves and log determinants with KroneckerProductLazyTensor (#1786)
- Prevent explicit kernel evaluation when expanding a LazyTensor kernel (#1813)