Releases · Yura52/delu
v0.0.26
v0.0.25
Performance
- Significantly improved the efficiency of `delu.nn.NLinear` for batch sizes greater than one. The larger the input dimensions, the larger the speedup. Since the computation algorithm was updated, the results can differ slightly from previous versions, although the underlying math is exactly the same (a usage sketch follows).
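For context, a minimal usage sketch; the constructor signature `NLinear(n, in_features, out_features)` is an assumption based on the shape contract described in the v0.0.21 notes below:

```python
import torch
import delu

# 8 independent linear layers (32 -> 16), one per position along the N dimension.
layer = delu.nn.NLinear(8, 32, 16)

x = torch.randn(256, 8, 32)  # (*B, *N, D1): batch 256, N = 8, D1 = 32
y = layer(x)                 # (*B, *N, D2): expected shape (256, 8, 16)
assert y.shape == (256, 8, 16)
```

Since only the computation algorithm changed, outputs of the old and new versions should agree up to floating-point tolerance, e.g. `torch.allclose(y_old, y_new, atol=1e-6)` for hypothetical outputs `y_old` and `y_new`.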
v0.0.23
v0.0.22
v0.0.21
This is a relatively big release after v0.0.18.
Breaking changes
- `delu.iter_batches`: `shuffle` is now a keyword-only argument.
- `delu.nn.Lambda`:
  - now, this module accepts only functions from the `torch` module and methods of `torch.Tensor`
  - now, the passed callable is not accessible as a public attribute
- `delu.random.seed`: the algorithm computing the library- and device-specific seeds changed, so the result can differ from previous versions.
- In the following functions, the first arguments are now positional-only (see the sketch after this list):
  - `delu.to`
  - `delu.cat`
  - `delu.iter_batches`
  - `delu.Timer.format`
  - `delu.data.Enumerate`
  - `delu.nn.Lambda`
  - `delu.random.seed`
  - `delu.random.set_state`
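A sketch of the new calling conventions; the failing calls are commented out, and the keyword names used in them are illustrative rather than the actual parameter names:

```python
import torch
import delu

data = torch.randn(100, 4)

# `shuffle` must now be passed by keyword:
batches = delu.iter_batches(data, 16, shuffle=True)
# delu.iter_batches(data, 16, True)   # TypeError: `shuffle` is keyword-only

# First arguments are now positional-only, e.g. for delu.cat:
parts = [torch.randn(3, 4), torch.randn(5, 4)]
full = delu.cat(parts)
# delu.cat(data=parts)                # TypeError: positional-only parameter
```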
New features
- Added `delu.tools` -- a new home for `EarlyStopping`, `Timer` and other general tools.
- Added `delu.nn.NLinear` -- a module representing N linear layers that are applied to N different inputs: `(*B, *N, D1) -> (*B, *N, D2)`, where `*B` are the batch dimensions (a usage sketch appears under v0.0.25 above).
- Added `delu.nn.named_sequential` -- a shortcut for creating `torch.nn.Sequential` with named modules without `OrderedDict`:

  ```python
  sequential = delu.nn.named_sequential(
      ('linear1', nn.Linear(10, 20)),
      ('activation', nn.ReLU()),
      ('linear2', nn.Linear(20, 1)),
  )
  ```

- `delu.nn.Lambda`: the constructor now accepts keyword arguments for the callable: `m = delu.nn.Lambda(torch.squeeze, dim=1)`
- `delu.random.seed`:
  - the algorithm computing random seeds for all libraries was improved
  - `None` is now allowed as `base_seed`; in this case, an unpredictable seed generated by the OS is used and returned: `truly_random_seed = delu.random.seed(None)`
- `delu.random.set_state`: the `'torch.cuda'` entry can now be omitted to avoid setting the states of the CUDA RNGs (see the sketch after this list).
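A sketch of the `set_state` change; `delu.random.get_state` is assumed here as the counterpart of `set_state` (it is not described in these notes):

```python
import delu

# Capture the global RNG states.
state = delu.random.get_state()

# Omitting the 'torch.cuda' entry is now allowed:
# the CUDA RNG states are simply left untouched.
state.pop('torch.cuda', None)
delu.random.set_state(state)
```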
Deprecations & Renamings
- `delu.data` was renamed to `delu.utils.data`. The old name is now a deprecated alias.
- `delu.Timer` and `delu.EarlyStopping` were moved to the new `delu.tools` submodule. The old names are now deprecated aliases (see the sketch after this list).
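Migration is a matter of updating the import paths; a sketch (the `EarlyStopping` constructor arguments are illustrative):

```python
import delu

# Old names (now deprecated aliases):
timer = delu.Timer()
stopper = delu.EarlyStopping(patience=16, mode='min')

# New locations:
timer = delu.tools.Timer()
stopper = delu.tools.EarlyStopping(patience=16, mode='min')

# Likewise, delu.data.* is now delu.utils.data.*
```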
Dependencies
- Now, `torch >=1.8,<3`
Documentation
- Updated logo
- Simplified structure
- Removed the only (and not particularly representative) end-to-end example
Project
- Migrated from sphinx doctest to xdoctest
v0.0.18
v0.0.17
Breaking changes
- `delu.cat`: the input must now be a list (previously, any iterable was allowed).
- `delu.Timer`: `print(timer)`, `str(timer)`, `f'{timer}'`, etc. now return the full-precision representation (without rounding to seconds).
New features
- `delu.EarlyStopping`: a simpler replacement for `delu.ProgressTracker` that does not track the best score. The usage is very similar; see the documentation and the sketch after this list.
- `delu.cat`: now supports nested collections (e.g. the input can be a list of `tuple[Tensor, dict[str, tuple[Tensor, Tensor]]]`).
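A sketch of both features, assuming the `update`/`should_stop` interface from the documentation; the scores and tensor shapes are fabricated:

```python
import torch
import delu

# Stop after `patience` consecutive updates without improvement.
early_stopping = delu.EarlyStopping(patience=2, mode='max')
for score in [0.5, 0.6, 0.59, 0.58]:
    early_stopping.update(score)
    if early_stopping.should_stop():
        break

# Nested collections are concatenated entry-wise:
parts = [
    (torch.randn(3, 4), {'a': torch.randn(3, 2)}),
    (torch.randn(5, 4), {'a': torch.randn(5, 2)}),
]
x, extras = delu.cat(parts)
assert x.shape == (8, 4) and extras['a'].shape == (8, 2)
```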
Deprecations
- `delu.ProgressTracker`: instead, use `delu.EarlyStopping`.
- `delu.data.FnDataset`: no alternatives are provided.
v0.0.15
Breaking changes
- `delu.iter_batches` is now powered by `torch.arange`/`torch.randperm`, and the interface was changed accordingly.
- `delu.Timer`: the methods `add` and `sub` were removed.
New features
- `delu.to`: like `torch.Tensor.to`, but for (nested) collections of tensors.
- `delu.cat`: like `torch.cat`, but for collections of tensors.
- `delu.iter_batches` is now faster and has a better interface (see the sketch after this list).
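A sketch of the three utilities together; the shapes and the `'cpu'` device are placeholders:

```python
import torch
import delu

# delu.to: torch.Tensor.to for (nested) collections of tensors.
batch = {'x': torch.randn(8, 4), 'y': torch.randn(8)}
batch = delu.to(batch, 'cpu')

# delu.iter_batches + delu.cat: process data in batches, then glue the results back.
data = torch.randn(100, 4)
outputs = [b * 2 for b in delu.iter_batches(data, 16)]
result = delu.cat(outputs)
assert result.shape == data.shape
```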
Deprecations
- `delu.concat` is deprecated in favor of `delu.cat`.
- `delu.hardware.free_memory` is now a deprecated alias of `delu.cuda.free_memory`.
- Deprecated `delu.data.Stream`.
- Deprecated `delu.data.collate`: instead, use `torch.utils.data.dataloader.default_collate`.
- Deprecated `delu.data.make_index_dataloader`: instead, use `delu.data.IndexDataset` + `torch.utils.data.DataLoader` (see the sketch after this list).
- Deprecated `delu.evaluation`: instead, use `torch.nn.Module.eval` + `torch.inference_mode`.
- Deprecated `delu.hardware.get_gpus_info`: instead, use the corresponding functions from `torch.cuda`.
- Deprecated `delu.improve_reproducibility`: instead, use `delu.random.seed` and manually apply the settings mentioned here.
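Sketches of the suggested replacements; the model, sizes and batch size are placeholders:

```python
import torch
from torch.utils.data import DataLoader

import delu

# delu.data.make_index_dataloader -> delu.data.IndexDataset + DataLoader:
dataset = delu.data.IndexDataset(1000)  # yields the indices 0..999
loader = DataLoader(dataset, batch_size=64, shuffle=True)
for batch_indices in loader:
    ...  # use batch_indices to slice the actual data

# delu.evaluation -> torch.nn.Module.eval + torch.inference_mode:
model = torch.nn.Linear(4, 1)
model.eval()
with torch.inference_mode():
    predictions = model(torch.randn(8, 4))
```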
Documentation
- many improved explanations and examples
Dependencies
- require `python>=3.8`
- remove `tqdm` and `pynvml` from dependencies
Project
- switch from flake8 to ruff
- move tool settings from setup.cfg to pyproject.toml for coverage, isort, mypy
- freeze versions in requirements_dev.txt
v0.0.13
THE PROJECT WAS RENAMED FROM "Zero" TO "DeLU"
The changes since v0.0.8:
Deprecations
- `delu.data.IndexLoader` is deprecated in favor of the new `delu.data.make_index_dataloader`.

New features
- `delu.data.make_index_dataloader` is a better replacement for the deprecated `delu.data.IndexLoader`.
Documentation
- change the theme to Furo
Project
- move most tool settings to `pyproject.toml`
v0.0.8
API changes:
- `zero.random.seed`:
  - the argument must be less than `2 ** 32 - 10000`
  - the `seed` argument was renamed to `base_seed`
New features:
- `zero.random.seed`:
  - sets better seeds based on the given argument
  - the new argument `one_cuda_seed` (`False` by default) allows choosing whether one common seed is set for all CUDA devices (see the sketch after this list)
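A sketch under these constraints; whether `one_cuda_seed` is keyword-only is not stated in the notes, so it is passed by keyword here:

```python
import zero

# The argument must be less than 2 ** 32 - 10000.
zero.random.seed(base_seed=0)

# Set one common seed for all CUDA devices instead of device-specific seeds:
zero.random.seed(base_seed=0, one_cuda_seed=True)
```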
Documentation
- style improvements
Dependencies
- `numpy>=1.18`