Releases: tensorly/torch
0.5.0
Release 0.4.0
What's Changed
- Remove unnecessary print statement by @colehawkins in #15
- Adds factorized linear implementations by @JeanKossaifi in #16
- Remove unused import that is not a dependency by @arinbjornk in #19
- Embedding layer supports CUDA Tensor by @JeremieMelo in #24
- Contiguous nn.Parameter by @JeremieMelo in #25
- Factorized linear supports implementation switch and gradient checkpoint by @JeremieMelo in #26
- bugfix: missing dilation by @Yosshi999 in #27
- Replacing collections with collections.abc for Python 3.10 support by @bonevbs in #29
- Adds DenseTensors and ComplexFactorizedTensors by @JeanKossaifi in #30
New Contributors
- @arinbjornk made their first contribution in #19
- @JeremieMelo made their first contribution in #24
- @Yosshi999 made their first contribution in #27
- @bonevbs made their first contribution in #29
Full Changelog: 0.3.0...0.4.0
TensorLy-Torch Release 0.3.0
TensorLy-Torch just got even easier to use for tensorized deep learning, with indexable factorized tensors, seamless compatibility with torch functions, tensorized embedding layers, and more!
New features
Faster general_1D_conv, which speeds up CP convolutions
Indexable TensorizedTensors (#7): factorized tensors can now be indexed just like regular tensors. The result is still a factorized tensor whenever possible, and a dense tensor otherwise.
>>> import tltorch
>>> cp_tensor = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.9, factorization='cp')
# Initialise the tensor with random values
>>> cp_tensor.normal_(0, 0.02)
>>> print(cp_tensor)
CPTensor(shape=(3, 4, 2), rank=2)
>>> cp_tensor[:2, :2]
CPTensor(shape=(2, 2, 2), rank=2)
>>> cp_tensor[2, 3, 1]
tensor(0.0250, grad_fn=<SumBackward0>)
# Note how, above, indexing tracks gradients as well!
New BlockTT factorization, which generalizes TT-matrices
>>> ftt = tltorch.TensorizedTensor.new((5, (2, 2, 2), (3, 3, 3)), rank=0.5, factorization='BlockTT')
>>> ftt
BlockTT(shape=[5, 8, 27], tensorized_shape=(5, (2, 2, 2), (3, 3, 3)), rank=[1, 20, 20, 1])
>>> ftt[2]
BlockTT(shape=[8, 27], tensorized_shape=[(2, 2, 2), (3, 3, 3)], rank=[1, 20, 20, 1])
>>> ftt[0, :2, :2]
tensor([[-0.0009,  0.0004],
        [ 0.0007,  0.0003]], grad_fn=<SqueezeBackward0>)
get_tensorized_shape: linear layers can now be automatically tensorized to a convenient shape
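A minimal sketch of the intended usage, assuming the helper is exposed as tltorch.utils.get_tensorized_shape and takes the input/output feature sizes; the exact signature and return format may differ in your installed version:
from tltorch.utils import get_tensorized_shape
# Hypothetical usage: find higher-order shapes into which a 784 -> 512 linear layer can be tensorized
tensorized_shape = get_tensorized_shape(in_features=784, out_features=512)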
Tensorized embeddings: adds a factorized embedding layer and tests (#10), thanks to @colehawkins
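A hedged example of the new factorized embedding layer; the constructor arguments shown here (factorization, rank) are assumptions based on the other tltorch layers, so check the documentation of your version:
import torch
import tltorch
# Factorized drop-in for nn.Embedding: 1000 tokens, 64-dimensional embeddings
embedding = tltorch.FactorizedEmbedding(num_embeddings=1000, embedding_dim=64,
                                        factorization='blocktt', rank=8)
tokens = torch.randint(0, 1000, (8, 16))  # a batch of token indices
vectors = embedding(tokens)               # embeddings of shape (8, 16, 64)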
Initialise factorized tensors directly with PyTorch, for initialisations based on the normal distribution:
from torch.nn import init
import tltorch
cp_tensor = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.9, factorization='cp')
init.kaiming_normal_(cp_tensor)  # in-place Kaiming (He) normal initialisation
Improvements
TuckerTensor: unsqueezed_modes option
TRL: added init_from_linear
FactorizedConvolutions now have a reset_parameters method and are initialised by default when created from random values
Layers and factorized tensors now accept a device and dtype as parameters (see the sketch after this list)
Tensor dropout now accepts min_dim and min_values
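A hedged sketch of these options; the tensor dropout entry point (tltorch.tensor_dropout) and the exact argument names are assumptions, so verify them against your installed version:
import torch
import tltorch
# Create a factorized tensor directly on a given device and with a given dtype
cp_tensor = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.9, factorization='cp',
                                         device='cpu', dtype=torch.float32)
cp_tensor.normal_(0, 0.02)
# Attach tensor dropout; min_dim and min_values bound how much can be dropped
cp_tensor = tltorch.tensor_dropout(cp_tensor, p=0.5, min_dim=2, min_values=1)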
Bug fixes
Fixed TT rank bugs in init_from_tensor, transduction, and tensor creation.
Bug fix when creating a factorized conv from a factorization.
Linear layer class methods now preserve context
Fixed contiguity issue in TuckerTensor, thanks to @colehawkins (#9)
Fixed tensor dropout for p=1
Initialise weights when creating new random layer
Release 0.2.0
A full rewrite of TensorLy-Torch!