
TorchSharp Release Notes

Releases, starting with 9/2/2021, are listed with the most recent release at the top.

NuGet Version 0.104.0

This is a big change in implementation, but not as big in API surface area. Many of the builtin modules, but not all, were re-implemented in managed code calling into native code via the functional APIs. This has several advantages:

  1. Align with the Pytorch implementations.
  2. More easily expose module attributes as properties as Pytorch does.
  3. In some cases, avoid native code altogether.
  4. The builtin modules can serve as "best practice" examples for custom module authors.

Breaking Changes:

The names of several arguments have been changed to align better with PyTorch naming. This may break code that passes such arguments by name, but the breakage will be caught at compile time.

The argument defaults for torch.diagonal() and Tensor.diagonal() have been corrected.

Issues fixed:

#1397 Look into whether parameter creation from a tensor leads to incorrect dispose scope statistics. This bug was discovered during testing of the PR.
#1210 Attribute omissions.
#1400 There may be an error in torchvision.transforms.GaussianBlur
#1402 diagonal() has incorrect default

API Changes:

#1382: Add support for torch.nn.functional.normalize

NuGet Version 0.103.1

Breaking Changes:

#1376 torch.Tensor.backward's function signature has been updated to match PyTorch's implementation. Previously, passing create_graph or retain_graph by position behaved like PyTorch's torch.Tensor.backward, but passing them by name did not (create_graph's value was swapped with retain_graph's). This has been corrected; any code that passes create_graph or retain_graph by name must be updated to get the intended behavior.
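
To illustrate, here is a minimal sketch of the corrected by-name behavior (assuming a scalar loss and the post-fix parameter names create_graph and retain_graph):

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

var x = torch.ones(3, requires_grad: true);
var loss = (x * x).sum();

// Before 0.103.1, passing create_graph by name actually set retain_graph.
// From 0.103.1 on, the name means what it says:
loss.backward(create_graph: true);   // build a graph of the backward pass itself
Console.WriteLine(x.grad);
```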

Bug Fixes:

#1383 torch.linalg.vector_norm: Make ord-argument optional, as specified in docs
#1385 PackedSequence now participates in the DisposeScope system at the same level as Tensor objects.
#1387 Attaching tensor to a DisposeScope no longer makes Statistics.DetachedFromScopeCount go negative.
#1390 DisposeScopeManager.Statistics now includes DisposedOutsideScopeCount and AttachedToScopeCount. ThreadTotalLiveCount is now exact instead of approximate. ToString gives a useful debug string, and documentation is added for how to troubleshoot memory leaks. Also DisposeScopeManager.Statistics.TensorStatistics and DisposeScopeManager.Statistics.PackedSequenceStatistics provide separate metrics for these objects.
#1392 ToTensor() extension method memory leaks fixed.

NuGet Version 0.103.0

Move to libtorch 2.4.0.

NuGet Version 0.102.8

Bug Fixes:

#1359 torch.nn.functional.l1_loss computes a criterion with the MSE, not the MAE.

NuGet Version 0.102.6

Breaking Changes:

When creating a tensor from a 1-D array and passing in a shape, there is now an ambiguity between the IList and Memory overloads of torch.tensor(). The ambiguity is resolved by removing the dimensions argument if it is redundant, or by an explicit cast to IList if it is not.
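
A minimal sketch of both workarounds (the overload set is as described above; the exact parameter names are assumptions):

```csharp
using System.Collections.Generic;
using TorchSharp;
using static TorchSharp.torch;

var data = new float[] { 1f, 2f, 3f, 4f, 5f, 6f };

// Ambiguous between the IList<float> and Memory<float> overloads:
// var t = torch.tensor(data, new long[] { 2, 3 });

// Option 1: drop the redundant shape and reshape instead.
var t1 = torch.tensor(data).reshape(2, 3);

// Option 2: cast explicitly to select the IList overload.
var t2 = torch.tensor((IList<float>)data, new long[] { 2, 3 });
```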

API Changes:

#1326 Allow arrays used to create tensors to be larger than the tensor. Create tensors from a Memory instance.

Bug Fixes:

#1334 MultivariateNormal.log_prob() exception in TorchSharp but works in pytorch.

NuGet Version 0.102.5

Breaking Changes:

torchvision.datasets.MNIST will try more mirrors. The exception thrown when it fails to download MNIST, FashionMNIST, or KMNIST may have changed.
ObjectDisposedException will now be thrown when trying to use a disposed dispose scope.
The constructor of dispose scopes is no longer public. Use torch.NewDisposeScope instead.
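
For example, the only supported pattern is now the factory method (a short sketch; torch.NewDisposeScope and the disposal behavior are as described above):

```csharp
using TorchSharp;
using static TorchSharp.torch;

using (var scope = torch.NewDisposeScope())
{
    var a = torch.rand(100, 100);
    var b = a.matmul(a);     // a and b are attached to the scope
    // ... use b ...
}   // a and b are disposed here; touching the scope afterwards
    // now throws ObjectDisposedException.
```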

API Changes:

#1317 How to set default device type in torchsharp.
#1314 Grant read-only access to DataLoader attributes
#1313 Add 'non_blocking' argument to tensor and module 'to()' signatures.
#1291 Tensor.grad() and Tensor.set_grad() have been replaced by a new property Tensor.grad.
A potential memory leak caused by set_grad has been resolved.
The Include method of dispose scopes has been removed. Use Attach instead.
Two more Attach methods, accepting an IEnumerable<IDisposable> or an array as the parameter, have been added to dispose scopes.
A new property torch.CurrentDisposeScope has been added to provide the ability to get the current dispose scope.
Add module hooks that take no input/output arguments, just the module itself.

Bug Fixes:

#1300 Adadelta, Adam and AdamW will no longer throw NullReferenceException when maximize is true and grad is null.
torch.normal will now correctly return a leaf tensor. New options disposeBatch and disposeDataset have been added to DataLoader.
The default collate functions will now always dispose the intermediate tensors, rather than wait for the next iteration.

TensorDataset will now keep the aliases detached from dispose scopes, to avoid the unexpected disposal.
DataLoaderEnumerator has been completely rewritten to resolve the unexpected shuffler disposal, the ignoring of drop_last, the incorrect worker count, and the potential leak caused by multithreading.
#1303 Allow dispose scopes to be disposed out of LIFO order.

NuGet Version 0.102.4

Breaking Changes:

Correct torch.finfo. (torch.set_default_dtype, Categorical.entropy, _CorrCholesky.check, Distribution.ClampProbs, FisherSnedecor.rsample, Gamma.rsample, Geometric.rsample, distributions.Gumbel, Laplace.rsample, SigmoidTransform._call, and SigmoidTransform._inverse are affected.)

API Changes:

#1284 make torch.unique and torch.unique_consecutive public.

NuGet Version 0.102.3

Breaking Changes:

The 'paddingMode' parameter of convolution has been changed to 'padding_mode', and the 'outputPadding' is now 'output_padding'.

API Changes:

#1243 fuse_conv_bn_weights and fuse_linear_bn_weights are added.
#1274 ConvTranspose3d does not accept non-uniform kernelSize/stride values

NuGet Version 0.102.2

Bug Fixes:

#1257 InverseMelScale in NewDisposeScope doesn't dispose tensors

NuGet Version 0.102.1

Breaking Changes:

The kernelSize parameter in the AvgPool1D function and class was renamed to kernel_size to match PyTorch naming. The stride parameter in the torch.nn.functional.avg_pool1d call now defaults to kernel_size instead of 1, to match the PyTorch behavior.
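
A short sketch of the new default (the functional signature is assumed to take the input tensor first):

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

var input = torch.rand(1, 1, 8);

// stride now defaults to kernel_size, matching PyTorch:
var pooled = torch.nn.functional.avg_pool1d(input, kernel_size: 3);
Console.WriteLine(pooled.shape[2]);   // 2 = floor(8 / 3), not the 6 a stride of 1 would give
```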

Bug Fixes:

module.load_state_dict() throws error for in-place operation on a leaf variable that requires grad.
#1250 cstr and npstr for 0d tensors
#1249 torch.nn.functional.avg_pool1d is not working correctly
module.load() with streams that don't read the requested number of bytes throws an error.
#1246 Issue running in notebook on Apple Silicon

NuGet Version 0.102.0

This release upgrades the libtorch backend to v2.2.1.

Breaking Changes:

The Ubuntu builds are now done on a 22.04 version of the OS. This may (or may not) affect TorchSharp use on earlier versions.
The default value for the end_factor argument in the constructor for LinearLR was changed to 1.0 to match PyTorch.
Any code that checks whether a device is 'CUDA' (rather than checking that it isn't 'CPU') may now behave incorrectly, since there is now support for the 'MPS' device on MacOS.

API Changes:

#652: Apple Silicon support.
#1219: Added support for loading and saving tensors that are >2GB.

Bug Fixes:

Fixed LinearLR scheduler calculation with misplaced parentheses
Added get_closed_form_lr to scheduler to match PyTorch behavior when specifying epoch in .step()

NuGet Version 0.101.6

API Changes:

#1223: Missing prod function torch.prod or a.prod() where a is Tensor
#1201: How to access the attributes of a model?
#1094: ScriptModule from Stream / ByteArray
#1149: Implementation for torch.autograd.functional.jacobian to compute Jacobian of a function
Implementation of a custom torch.autograd.Function class

Bug Fixes:

#1198: CUDA not available when calling backwards before using CUDA
#1200: Bugs in torch.nn.AvgPool2d and torch.nn.AvgPool3d methods.

NuGet Version 0.101.5

Bug Fixes:

#1191 : Having trouble moving a module from one GPU to another with gradients.

NuGet Version 0.101.4

A fast-follow release addressing a regression in v0.101.3.

Bug Fixes:

#1185 : Incomplete transfer of module to device (only with 0.101.3)

NuGet Version 0.101.3

Breaking Changes:

The base OptimizerState class was modified and includes two changes:

  1. Custom optimizer state objects derived from OptimizerState must now explicitly pass the related torch.nn.Parameter object to the OptimizerState base constructor to maintain correct linkage.
  2. Custom state objects must implement an Initialize function. This function is responsible for initializing the properties of the state. Note that this function can be called for re-initialization, so proper disposal of the previous tensor objects should be handled.

API Changes:

Introduced InferenceMode, a block-based scoping class for optimizing TorchSharp model inference by disabling gradient computation and enhancing performance (a sketch follows this list).
Added Tensor.to_type() conversion aliases for short, half, bfloat16, cfloat, and cdouble.
Added Module.to() conversion aliases for all the scalar types.
All distribution classes now implement IDisposable.
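
A minimal sketch of the InferenceMode scope. The factory name torch.inference_mode() is an assumption here, by analogy with torch.no_grad(); check the API docs for the exact entry point:

```csharp
using TorchSharp;
using static TorchSharp.torch;

var model = torch.nn.Linear(10, 2);   // stand-in for a real model
var input = torch.rand(1, 10);

using (torch.inference_mode())        // assumed factory name
{
    // No gradient tracking inside the block; cheaper than autograd-enabled evaluation.
    var output = model.call(input);
}
```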

Bug Fixes:

#1154 : mu_product was not initialized in NAdam optimizer
#1170 : Calling torch.nn.rnn.utils.pad_packed_sequence with a CUDA tensor and unsorted_indices threw an error
#1172 : optim.LoadStateDict from an existing StateDictionary was updated to make sure values are copied to the right device.
#1176 : When specific optimizers load a conditional tensor, it is now copied to the right device.
#1174 : Loading CUDA tensor from stream threw an error
#1179 : Calling Module.to() with the ParameterList and ParameterDict module didn't move the parameters stored in the field.
#1148 : Calling Module.to() shouldn't be differentiable
#1126 : Calling ScriptModule.to() doesn't move attributes
#1180 : Module.to(ScalarType) has restrictions in PyTorch which aren't restricted in TorchSharp.

NuGet Version 0.101.2

API Changes:

Added extension method ScalarType.ElementSize() to get the size of each element of a given ScalarType.
Added methods for loading and saving individual tensors with more overloads.
Added 'persistent' flag to register_buffer()

Bug Fixes:

Fixed byte stream advancement issue in non-strict mode, ensuring proper skipping of non-existent parameters while loading models.

NuGet Version 0.101.1

This is a fast-follower bug fix release, addressing persistent issues with stability of using TorchScript from TorchSharp.

Bug Fixes:

#1047 Torchscript execution failures (hangs, access violation, Fatal error. Internal CLR fatal error. (0x80131506) )

NuGet Version 0.101.0

This is an upgrade to libtorch 2.1.0. It also moves the underlying CUDA support to 12.1 from 11.7, which means that all the libtorch-cuda-* packages have been renamed. Please update your CUDA driver to one that supports CUDA 12.1.

API Changes:

Enhanced Module.load function to return matching status of parameters in non-strict mode via an output dictionary.
Introduced attribute-based parameter naming for module state dictionaries, allowing custom names to override default field names.

NuGet Version 0.100.7

Breaking Changes:

DataLoader should no longer be created using new -- instead, the overall TorchSharp pattern is followed: the classes are placed in TorchSharp.Modules and the factories in the static torch.utils.data class. This will break any code that creates a DataLoader, but it can be fixed by (see the sketch after the list):

  1. Removing the new in new torch.utils.data.DataLoader(...)
  2. Adding a using TorchSharp.Modules (C#) or open TorchSharp.Modules (F#) to files where DataLoader is used as a type name.
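
A before/after sketch (the dataset construction and the batchSize parameter name are assumptions for illustration):

```csharp
using TorchSharp;
using TorchSharp.Modules;   // DataLoader (the type) now lives here
using static TorchSharp.torch;

// Any dataset works; TensorDataset is just a stand-in here.
var dataset = torch.utils.data.TensorDataset(torch.rand(100, 3));

// Before: var loader = new torch.utils.data.DataLoader(dataset, 32, shuffle: true);
// After, using the factory (no 'new'):
var loader = torch.utils.data.DataLoader(dataset, batchSize: 32, shuffle: true);
```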

API Changes:

Adding an IterableDataset abstract class, and making TensorDataset derive from it.
Moving the DataLoader class to TorchSharp.Modules and adding DataLoader factories.
#1092: got error when using DataLoader
#1069: Implementation of torch.sparse_coo_tensor for sparse tensor creation
Renamed torch.nn.functional.SiLU -> torch.nn.functional.silu
Added a set of generic Sequential classes.

Bug Fixes:

#1083: Compiler rejects scalar operand due to ambiguous implicit conversion

NuGet Version 0.100.6

Bug Fixes:

ScriptModule: adding forward and the ability to hook.
Update to SkiaSharp 2.88.6 to avoid the libwebp vulnerability.
#1105: Dataset files get written to the wrong directory
#1116: Gradient null for simple calculation

NuGet Version 0.100.5

Breaking Changes:

Inplace operators no longer create an alias, but instead return 'this'. This change will impact any code that explicitly calls Dispose on a tensor after the operation.
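
A sketch of the consequence (assuming the usual in-place add_):

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

var t = torch.ones(3);
var u = t.add_(1);   // in-place add now returns 'this'

Console.WriteLine(ReferenceEquals(t, u));   // True: one object, not an alias
// Explicitly disposing u therefore also invalidates t; code that
// disposed the 'result' of an in-place op must be reviewed.
```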

Bug Fixes:

#1041 Running example code got error in Windows 10
#1064 Inplace operators create an alias
#1084 Module.zero_grad() does not work
#1089 max_pool2d overload creates tensor with incorrect shape

NuGet Version 0.100.4

Breaking Changes:

The constructor for TensorAccessor is now internal, which means that the only way to create one is to use the data<T>() method on Tensor. This was always the intent.
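
A short sketch of the supported pattern:

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

var t = torch.arange(6).reshape(2, 3);

// data<T>() is now the only way to obtain a TensorAccessor<T>:
var acc = t.data<long>();
foreach (var v in acc) Console.Write($"{v} ");   // 0 1 2 3 4 5
```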

API Changes:

Tensor.randperm_out() deprecated.
torch.randperm accepts 'out' argument
Adding PReLU module.
Adding scaled_dot_product_attention.
The constructor for TensorAccessor was made internal
torchvision.utils.save_image implemented
torchvision.utils.make_grid implemented
torchvision.transforms.RandAugment implemented

Bug Fixes:

Fixed torch.cuda.synchronize() method
Suppress runtime warning by setting align_corners to 'false'
Fixed argument validation bug in Grayscale
#1056: Access violation with TensorAccessor.ToArray - incompatible data types
#1057: Memory leak with requires_grad

NuGet Version 0.100.3

This release is primarily, but not exclusively, focused on fixing bugs in distributions and adding a few new ones.

Breaking Changes:

The two main arguments to torch.linalg.solve() and torch.linalg.solve_ex() were renamed 'A' and 'B' to align with PyTorch.

API Changes:

Adding torch.linalg.solve_triangular()
Adding torch.distributions.MultivariateNormal
Adding torch.distributions.NegativeBinomial
Adding in-place versions of Tensor.triu() and Tensor.tril()
Adding torch.linalg.logsigmoid() and torch.nn.LogSigmoid
A number of distributions were missing the mode property.
Adding a C#-like string formatting style for tensors.

Bug Fixes:

TorchVision rotate(), solarize() and invert() were incorrectly implemented.
Fixed bug in Bernoulli's entropy() and log_prob() implementations.
Fixed bug in Cauchy's log_prob() implementation.
Fixed several bugs in HalfCauchy and HalfNormal.
The Numpy-style string formatting of tensors was missing commas between elements

NuGet Version 0.100.2

API Changes:

Add torchvision.datasets.CelebA()
Add support for properly formatting Tensors in Polyglot notebooks without the 'Register' call that was necessary before.

Bug Fixes:

#1014 AdamW.State.to() ignores returns
#999 Error in Torchsharp model inference in version 0.100.0

NuGet Version 0.100.1

Breaking Changes:

TorchSharp no longer supports any .NET Core versions prior to 6.0. .NET FX version support is still the same: 4.7.2 and up.

API Changes:

Added operator functionality to TorchVision, but the roi operators are still missing.
Added support for additional types related to TorchScript modules. Scripts can now return lists of lists and tuples of lists and tuples, to an arbitrary level of nesting. Scripts can now accept lists of Tensors.

Bug Fixes:

#1001 Issue with resnet50, resnet101, and resnet152

NuGet Version 0.100.0

Updated backend binaries to libtorch v2.0.1.

Updated the NuGet metadata to use a license expression rather than a reference to a license file. This will help with automated license checking by users.

Breaking Changes:

With v2.0.1, torch.istft() expects complex numbers in the input tensor.

API Changes:

#989 Adding anomaly detection APIs to torch.autograd

NuGet Version 0.99.6

Breaking Changes:

There was a second version of torch.squeeze() with incorrect default arguments. It has now been removed.

API Changes:

Removed incorrect torch.squeeze() method.
Adding two-tensor versions of min() and max()

Fixed Bugs:

#984 Conversion from System.Index to TensorIndex is missing
#987 Different versions of System.Memory between build and package creation.

NuGet Version 0.99.5

API Changes:

Added Tensorboard support for histograms, images, video, and text.

NuGet Version 0.99.4

Breaking Changes:

There were some changes to the binary format storing optimizer state. This means that any such state generated before updating to this version is invalid and will likely result in a runtime error.

API Changes:

Adding torch.tensordot
Adding torch.nn.Fold and Unfold modules.
Adding Module.call() to all the Module<T...> classes. This wraps Module.forward() and allows hooks to be registered. Module.forward() is still available, but the most general way to invoke a module's logic is through call(). (A sketch follows this list.)
Adding tuple overloads for all the padding-related modules.
Adding support for exporting optimizer state from PyTorch and loading it in TorchSharp
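
A minimal sketch of call() vs. forward() (the forward-hook delegate shape (module, input, output) => output is an assumption here):

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

var lin = torch.nn.Linear(4, 2);
lin.register_forward_hook((module, input, output) =>
{
    Console.WriteLine("forward hook ran");
    return output;
});

var x = torch.rand(1, 4);
var y1 = lin.call(x);      // runs registered hooks around forward()
var y2 = lin.forward(x);   // invokes the logic directly; hooks are skipped
```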

Fixed Bugs:

#842 How to use register_forward_hook?
#940 Missing torch.searchsorted
#942 nn.ReplicationPad1d(long[] padding) missing
#943 LRScheduler.get_last_lr missing
#951 DataLoader constructor missing drop_last parameter
#953 TensorDataset is missing
#962 Seed passed to torch.random.manual_seed(seed) is unused
#949 Passing optimizer state dictionary from PyTorch to TorchSharp
#971 std results are inconsistent

NuGet Version 0.99.3

API Changes:

Fixing misspelling of 'DetachFromDisposeScope,' deprecating the old spelling.
Adding allow_tf32
Adding overloads of Module.save() and Module.load() taking a 'Stream' argument.
Adding torch.softmax() and Tensor.softmax() as aliases for torch.special.softmax()
Adding torch.from_file()
Adding a number of missing pointwise Tensor operations.
Adding select_scatter, diagonal_scatter, and slice_scatter
Adding torch.set_printoptions
Adding torch.cartesian_prod, combinations, and cov.
Adding torch.cdist, diag_embed, rot90, triu_indices, tril_indices

Fixed Bugs:

#913 conv = nn.Conv2d(c1, 1, 1, bias=False).requires_grad_(False)
#910 nn.Module.modules is missing
#912 nn.Module save and state_dict method error

NuGet Version 0.99.2

API Changes:

Adding 'maximize' argument to the Adadelta optimizer
Adding linalg.ldl_factor and linalg.ldl_solve
Adding a couple of missing APIs (see #872)
Adding SoftplusTransform
Support indexing and slicing of Sequential
Adding ToNDArray() to TensorAccessor

Fixed Bugs:

#870 nn.AvgPool2d(kernel_size=3, stride=2, padding=1) torchsharp not support padding
#872 Tensor.masked_fill_(mask, value) missing
#877 duplicate module parameters called named_parameters() while load model by cuda
#888 THSTensor_meshgrid throws exception

NuGet Version 0.99.1

Breaking Changes:

The options to the ASGD, Rprop, and RMSprop optimizers have been changed to add a 'maximize' flag. This means that saved state dictionaries for these optimizers will not carry over.

The return type of Sequential.append() has changed from 'void' to 'Sequential.' This breaks binary compatibility, but not source compatibility.

API Changes:

Added a number of 1.13 APIs under torch.special
Added a maximize flag to the ASGD, Rprop and RMSprop optimizers.
Added PolynomialLR scheduler
The return type of Sequential.append() has changed from 'void' to 'Sequential.'
Added 1-dimensional array overloads for torch.as_tensor()

Fixed Bugs:

#836 Categorical seems to be miscalculated
#838 New Bernoulli get "Object reference not set to an instance of an object."
#845 registered buffers are being ignored in move model to device
#851 tensor.ToString(TorchSharp.TensorStringStyle.Numpy)
#852 The content returned by torch.nn.Sequential.append() is inconsistent with the official

NuGet Version 0.99.0

This is an upgrade to libtorch 1.13. It also moves the underlying CUDA support to 11.7 from 11.3, which means that all the libtorch-cuda-* packages have been renamed.

Breaking Changes:

See API Changes.

API Changes:

Removed Tensor.lstsq, paralleling PyTorch. Use torch.linalg.lstsq instead. This is a breaking change.
Added 'left' Boolean argument to torch.linalg.solve()

NuGet Version 0.98.3

Fixed Bugs:

MultiStepLR scheduler was not computing the next LR correctly.
Fixed incorrect version in package reference.
Added missing package references to TorchVision manifest.

NuGet Version 0.98.2

Breaking Changes:

.NET 5.0 is no longer supported. Instead, .NET 6.0 is the minimum version. .NET FX 4.7.2 and higher are still supported.

API Changes:

Support 'null' as input and output to/from TorchScript.
Added support for label smoothing in CrossEntropyLoss.
Added torchaudio.transforms.MelSpectrogram().
Adding squeeze_()
Adding older-style tensor factories -- IntTensor, FloatTensor, etc.

Fixed Bugs:

#783 Download progress bar missing
#787 torch.where(condition) → tuple of LongTensor function missing
#799 TorchSharp.csproj refers Skia

Source Code Cleanup:

Moved P/Invoke declarations into dedicated class.
Added C# language version to all .csproj files.

NuGet Version 0.98.1

Breaking Changes:

TorchVision and TorchAudio have been moved into their own NuGet packages, which need to be added to any project using their APIs.

ModuleList and ModuleDict are now generic types, taking the module type as the type parameter. torch.nn.ModuleDict() will return a ModuleDict<Module>, while torch.nn.ModuleDict<T>() will return a ModuleDict<T>, where T must be a Module type.

Fixed Bugs:

#568 Overloads for Named Tensors
#765 Support invoking ScriptModule methods
#775 torch.jit.load: support specifying a target device
#792 Add SkiaSharp-based default imager for torchvision.io

API Changes:

Generic ModuleDict and ModuleList
Added torchaudio.transforms.GriffinLim
Added support for named tensors
Added default dim argument value for 'cat'

NuGet Version 0.98.0

Breaking Changes:

Some parameter names were changed to align with PyTorch. This affects names like 'dimension,' 'probability,' and 'keepDims' and will break code that is passing these parameters by name.

Module.to(), cpu(), and cuda() were moved to a static class for extension methods. This means that it is necessary to have a 'using TorchSharp;' (C#) or 'open TorchSharp' (F#) in each file using them.

Doing so (rather than qualifying names with 'TorchSharp.') was already recommended as a best practice, since such a using/open directive allows qualified names to align with the PyTorch module hierarchy.

Loss functions are now aligned with the PyTorch APIs. This is a major change and the reason for incrementing the minor version number. The most direct consequence is that losses are modules rather than delegates, which means you need to call .forward() to actually compute the loss. Also, the factories are in torch.nn rather than torch.nn.functional and have the same Pascal-case names as the corresponding types. The members of the torch.nn.functional static class are now proper immediate loss functions, whereas the previous ones returned a loss delegate.
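
A sketch of the new loss API (the target construction is simplified for illustration):

```csharp
using TorchSharp;
using static TorchSharp.torch;

var prediction = torch.randn(8, 5);                     // logits
var target = torch.zeros(8, dtype: ScalarType.Int64);   // class indices

// Module form: Pascal-case factory in torch.nn, invoked via forward():
var loss_fn = torch.nn.CrossEntropyLoss();
var loss = loss_fn.forward(prediction, target);

// Immediate function form in torch.nn.functional:
var loss2 = torch.nn.functional.cross_entropy(prediction, target);
```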

Generic Module base class. The second major change is that Module is made type-safe with respect to the forward() function. Module is now an abstract base class, and interfaces IModule<T,TResult>, IModule<T1,T2,TResult>,... are introduced to define the signature of the forward() function. For most custom modules, this means that the base class has to be changed to Module<Tensor,Tensor>, but some modules may require more significant changes.

ScriptModule follows this pattern, but this version introduces ScriptModule<T...,TResult> base classes, with corresponding torch.jit.load<T...,TResult>() static factory methods.
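
A minimal custom module under the new generic scheme (a sketch; RegisterComponents() is the usual way to register fields as submodules):

```csharp
using TorchSharp;
using static TorchSharp.torch;
using static TorchSharp.torch.nn;

class MyModule : Module<Tensor, Tensor>
{
    private readonly Module<Tensor, Tensor> lin;
    private readonly Module<Tensor, Tensor> relu;

    public MyModule(string name) : base(name)
    {
        lin = Linear(10, 10);
        relu = ReLU();
        RegisterComponents();   // register fields as submodules / parameters
    }

    public override Tensor forward(Tensor input) => relu.forward(lin.forward(input));
}
```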

Fixed Bugs:

#323 forward() should take a variable-length list of arguments
#558 Fix deviation from the Pytorch loss function/module APIs
#742 Ease of use: Module.to method should be generic T -> T
#743 Ease of use: module factories should have dtype and device
#745 Executing a TorchScript that returns multiple values, throws an exception
#744 Some of functions with inconsistent argument names
#749 functional.linear is wrong
#761 Stateful optimizers should have support for save/load from disk.
#771 Support more types for ScriptModule

API Changes:

Module.to(), cpu(), and cuda() were redone as extension methods. The virtual methods to override, if necessary, are now named '_to'. A need to do so should be extremely rare.
Support for saving and restoring hyperparameters and state of optimizers
Loss functions are now Modules rather than delegates.
Custom modules should now use generic versions as base classes.
ScriptModule supports calling methods other than forward()
Added torch.jit.compile().

NuGet Version 0.97.6

Breaking Changes:

This release changes TorchSharp.torchvision from a namespace to a static class. This will break any using directives that assume it is a namespace.

Fixed Bugs:

#719 ResNet maxpool
#730 Sequential.Add
#729 Changing torchvision namespace into a static class?

API Changes:

Adding 'append()' to torch.nn.Sequential
Adding torch.numel() and torch.version
Adding modifiable global default for tensor string formatting

NuGet Version 0.97.5

Fixed Bugs:

#715 How to implement the following code

API Changes:

Add functional normalizations
Added torch.utils.tensorboard.SummaryWriter. Support for scalars only.

NuGet Version 0.97.3

Fixed Bugs:

#694 torch.log10() computes torch.log()
#691 torch.autograd.backward()
#686 torch.nn.functional.Dropout() doesn't have the training argument.

API Changes:

Add repeat_interleave()
Add torch.broadcast_shapes()
Added meshgrid, mT, mH, and H
Added additional distributions.
Add dct and mu-law to torchaudio
Added torchvision sigmoid_focal_loss()
Update the arguments of dropout() in Tacotron2
Add static function for all(), any(), tile(), repeat_interleave().
Add an implementation of the ReduceLROnPlateau learning rate scheduler.

NuGet Version 0.97.2

Breaking Changes:

This release contains a breaking change related to torch.tensor() and torch.from_array(), which were not adhering to the semantics of the Pytorch equivalents (torch.from_numpy() in the case of torch.from_array()).

With this change, there are a number of different APIs to create a tensor from a .NET array. The most significant difference between them is whether the underlying storage is shared, or whether a copy is made. Depending on the size of the input array, copying can take orders of magnitude more time than sharing storage, which is done in constant time (a few μs).

The resulting tensors may be reshaped, but not resized.

// Never copy:
public static Tensor from_array(Array input)

// Copy only if dtype or device arguments require it:
public static Tensor frombuffer(Array input, ScalarType dtype, long count = -1, long offset = 0, bool requiresGrad = false, Device? device = null)
public static Tensor as_tensor(Array input,  ScalarType? dtype = null, Device? device = null)
public static Tensor as_tensor(Tensor input, ScalarType? dtype = null, Device? device = null)

// Always copy:
public static Tensor as_tensor(IList<<VARIOUS TYPES>> input,  ScalarType? dtype = null, Device? device = null)
public static Tensor tensor(<<VARIOUS TYPES>> input, torch.Device? device = null, bool requiresGrad = false)
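
A short usage sketch contrasting the sharing and copying factories listed above:

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

var array = new float[] { 1f, 2f, 3f, 4f };

var shared = torch.from_array(array);   // storage shared with 'array', constant time
array[0] = 100f;
Console.WriteLine(shared[0].item<float>());   // 100: the tensor sees the change

var copied = torch.tensor(array);       // independent copy of the data
array[1] = 200f;
Console.WriteLine(copied[1].item<float>());   // 2: the copy is unaffected
```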

Fixed Bugs:

#670 Better align methods for creating tensors from .NET arrays with Pytorch APIs. This is the breaking change mentioned earlier.
#679 The default value of onesided for torch.istft() is not aligned with PyTorch

API Changes:

Added torch.nn.init.trunc_normal_
Added index_add, index_copy, index_fill
Added torch.frombuffer()
Added torch.fft.hfft2, hfftn, ihfft2, ihfftn
Adding SequentialLR to the collection of LR schedulers.
Add 'training' flag to functional dropout methods.
Add missing functions to torchaudio.functional
Adding TestOfAttribute to unit tests

NuGet Version 0.97.1

This release is made shortly after 0.97.0, since it addresses a serious performance issue when creating large tensors from .NET arrays.

Fixed Bugs:

#670 Tensor allocation insanely slow for from_array()

API Changes:

RNN, LSTM, GRU support PackedSequence
Add element-wise comparison methods to the torch class.
Fix clamp and (non)quantile method declarations
Implementing isnan()
Added torchaudio.models.Tacotron2()

NuGet Version 0.97.0

Fixed Bugs:

#653: Tensor.to(Tensor) doesn't change dtype of Tensor.

API Changes:

Add ability to load and save TorchScript modules created using Pytorch
Add torch.utils.rnn
Add torchvision.io
Add Tensor.trace() and torch.trace() (unrelated to torch.jit.trace)
Add Tensor.var and Tensor.var_mean
Add torchaudio.datasets.SPEECHCOMMANDS
Add torchaudio.Resample()

NuGet Version 0.96.8

Breaking Changes:

This release contains a fix to inadvertent breaking changes in 0.96.7, related to Tensor.str(). This fix is itself breaking, in that it breaks any code that relies on the order of arguments to str() introduced in 0.96.7. However, since the pre-0.96.7 argument order makes more sense, we're taking this hit now rather than keeping the inconvenient order in 0.96.7.

Fixed Bugs:

#618 TorchSharp.Modules.Normal.sample() Expected all tensors [...]
#621 torch.roll missing
#629 Missing dependency in 0.96.7 calling TorchSharp.torchvision.datasets.MNIST
#632 gaussian_nll_loss doesn't work on GPU

API Changes:

Add torchaudio.datasets.YESNO().
Added torch.from_array() API to create a tensor from an arbitrary-dimension .NET array.
Added torch.tensor() overloads for most common dimensions of .NET arrays: ndim = [1,2,3,4]
Added the most significant API additions from Pytorch 1.11.
Added juliastr() and npstr().
Added two torchaudio APIs.
Added 'decimals' argument to Tensor.round()
Changed tensor.str() to undo the breaking change in 0.96.7
Added torch.std_mean()

NuGet Version 0.96.7

Dependency Changes:

This version integrates with the libtorch 1.11.0 backend. API updates to follow.

API Changes:

Strong name signing of the TorchSharp library to allow loading it in .NET Framework strongly name signed apps.
Added the 'META' device type, which can be used to examine the effect of tensor operations on shapes without actually doing any computations.
Added a few methods from the torch.nn.utils namespace.
Add torch.stft() and torch.istft()

Fixed Bugs:

#567 pad missing the choice to fill at start or end

NuGet Version 0.96.6

API Changes:

#587 Added the Storage classes, and Tensor.storage()
Added torchvision.models.resnet***() factories
Added torchvision.models.alexnet() factory
Added torchvision.models.vgg*() factories
Added 'skip' list for loading and saving weights.
Added torchvision.models.inception_v3() factory
Added torchvision.models.googlenet() factory

Fixed Bugs:

#582 unbind missing
#592 GRU and Input and hidden tensors are not at the same device,[...]
Fixed Module.Dispose() and Sequential.Dispose() (no issue filed)

NuGet Version 0.96.5

Same-day release. The previous release was made without proper testing of the ToString() improvements in a notebook context. It turned out that when the standard Windows line terminator "\r\n" is used in a VS Code notebook, an extra blank line is created.

This release fixes that by allowing the caller of ToString() to pass in the line terminator string that should be used when formatting the string. This is easily done in the notebook.

NuGet Version 0.96.4

In this release, the big change is support for .NET FX 4.7.2 and later.

There are no breaking changes that we are aware of, but see the comment on API Changes below -- backporting code to .NET 4.7 or 4.8, which were not previously supported, may lead to errors in code that uses tensor indexing.

API Changes:

Due to the unavailability of System.Range in .NET FX 4.7, indexing of tensors using the [a..b] syntax is not available. In its place, we have added support for using tuples as index expressions, with the same semantics, except that the "from end" unary operator ^ of the C# range syntax is not available. The tuple syntax is also available for versions of .NET that do support System.Range.
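
A sketch of the two equivalent forms (assuming the tuple is accepted directly by the indexer, as described above):

```csharp
using TorchSharp;
using static TorchSharp.torch;

var t = torch.arange(10);

// On runtimes with System.Range (e.g. .NET 6):
var a = t[2..5];      // elements 2, 3, 4

// On .NET FX 4.7.2, the tuple form has the same semantics:
var b = t[(2, 5)];    // elements 2, 3, 4 -- but no '^' from-end operator
```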

A second piece of new functionality was to integrate @dayo05's work on DataLoader into the Examples. A couple of MNIST and CIFAR data sets are now found in torchvision.datasets.

A Numpy-style version of ToString() was added to the existing Julia-style, and the argument to the verbose ToString() was changed from 'Boolean' to an enumeration.

A number of the "bugs" listed below represent missing APIs.

Fixed Bugs:

#519 Multiprocessing dataloader support
#529 pin_memory missing
#545 Implement FractionalMaxPool{23}d
#554 Implement MaxUnpool{123}d
#555 Implement LPPool{12}d
#556 Implement missing activation modules
#559 Implement miscellaneous missing layers.
#564 torch.Tensor.tolist
#566 Implicit conversion of scalars to tensors
#576 load_state_dict functionality

NuGet Version 0.96.3

API Changes:

NOTE: This release contains breaking changes.

The APIs to create optimizers all take 'parameters()' as well as 'named_parameters()' now.
Support for parameter groups in most optimizers.
Support for parameter groups in LR schedulers.

Fixed Bugs:

#495 Add support for OptimizerParamGroup
#509 Tensor.conj() not implemented
#515 what's reason for making register_module internal?
#516 AdamW bug on v0.96.0
#521 Can't set Tensor slice using indexing
#525 LSTM's forward function not work with null hidden and cell state
#532 Why does storing module layers in arrays break the learning process?

NuGet Version 0.96.2

NOT RELEASED

NuGet Version 0.96.1

Fixed Bugs:

Using libtorch CPU packages from F# Interactive required explicit native loads

#510 Module.Load throws Mismatched state_dict sizes exception on BatchNorm1d

NuGet Version 0.96.0

API Changes:

NOTE: This release contains breaking changes.

'Module.named_parameters()', 'parameters()', 'named_modules()', 'named_children()' all return IEnumerable instances instead of arrays.
Adding weight and bias properties to the RNN modules.
Lower-cased names: Module.Train --> Module.train and Module.Eval --> Module.eval

Fixed Bugs:

#496 Wrong output shape of torch.nn.Conv2d with 2d stride overload
#499 Setting Linear.weight is not reflected in 'parameters()'
#500 BatchNorm1d throws exception during eval with batch size of 1

NuGet Version 0.95.4

API Changes:

Added OneCycleLR and CyclicLR schedulers
Added DisposeScopeManager and torch.NewDisposeScope() to facilitate a new solution for managing disposing of tensors with fewer usings.
Added Tensor.set_()
Added 'copy' argument to Tensor.to()

NOTES:
The 'Weight' and 'Bias' properties on some modules have been renamed 'weight' and 'bias'.
The 'LRScheduler.LearningRate' property has been removed. To log the learning rate, get it from the optimizer that is in use.

Fixed Bugs:

#476 BatchNorm does not expose bias,weight,running_mean,running_var
#475 Loading Module that's on CUDA
#372 Module.save moves Module to CPU
#468 How to set Conv2d kernel_size=(2,300)
#450 Smoother disposing

NuGet Version 0.95.3

API Changes:

The previously unused Tensor.free() method was renamed 'DecoupleFromNativeHandle()' and is meant to be used in native interop scenarios.
Tensor.Handle will now validate that the internal handle is not 'Zero', and throw an exception when it is. This will catch situations where a disposed tensor is accessed.

Fixed Bugs:

There were a number of functions in torchvision, as well as a number of optimizers, that did not properly dispose of temporary and intermediate tensor values, leading to "memory leaks" in the absence of explicit GC.Collect() calls.
A couple of randint() overloads caused infinite recursion, crashing the process.

NuGet Version 0.95.2

API Changes:

Added a Sequential factory method to create Sequential from a list of anonymous submodules.
Added TotalCount and PeakCount static properties to Tensor, useful for diagnostic purposes.

Fixed Bugs:

#432 Sequential does not dispose of intermediary tensors.

NuGet Version 0.95.1

This version integrates with LibTorch 1.10.0.

API Changes:

Added a 'strict' option to Module.load().

See tracking issue #416 for a list of new 1.10.0 APIs.

NuGet Version 0.93.9

Fixed Bugs:

#414 LRScheduler -- not calling the optimizer to step() [The original, closing fix was actually incorrect, but was then fixed again.]

API Changes:

Added the NAdam and RAdam optimizers.
Added several missing and new learning rate schedulers.

NuGet Version 0.93.8

Fixed Bugs:

#413 Random Distributions Should Take a Generator Argument
#414 LRScheduler -- not calling the optimizer to step()

API Changes:

Added Module.Create() to create a model and load weights.

NuGet Version 0.93.6

Fixed Bugs:

#407 rand() and randn() must check that the data type is floating-point.
#410 Support for passing random number generators to rand(), randn(), and randint()

API Changes:

Added some overloads to make F# usage more convenient.
Added convenience overloads to a number of random distribution factories.
Added '_' to the torch.nn.init functions. They overwrite the input tensor, so they should have the in-place indicator.

NuGet Version 0.93.5

Fixed Bugs:

#399 Data() returns span that must be indexed using strides.

This was a major bug, affecting any code that pulled data out of a tensor view.

API Changes:

Tensor.Data() -> Tensor.data()
Tensor.DataItem() -> Tensor.item()
Tensor.Bytes() -> Tensor.bytes
Tensor.SetBytes() -> Tensor.bytes

NuGet Version 0.93.4

This release introduces a couple of new NuGet packages, which bundle the native libraries that you need:

TorchSharp-cpu
TorchSharp-cuda-linux
TorchSharp-cuda-windows

NuGet Version 0.93.1

With this release, the native libtorch package version was updated to 1.9.0.11, and that required rebuilding this package.

NuGet Version 0.93.0

With this release, releases will have explicit control over the patch version number.

Fixed Bugs:

Fixed incorrectly implemented Module APIs related to parameter / module registration.
Changed Module.state_dict() and Module.load() to 'virtual,' so that saving and restoring state may be customized.
#353 Missing torch.minimum (with an alternative raising exception)
#327 Tensor.Data should do a type check
#358 Implement ModuleList / ModuleDict / Parameter / ParameterList / ParameterDict

API Changes:

Removed the type-named tensor factories, such as 'Int32Tensor.rand(),' etc.

Documentation Changes:

Added an article on creating custom modules.

NuGet Version 0.92.52220

This was the first release since moving TorchSharp to the .NET Foundation organization. Most of the new functionality is related to continuing the API changes that were started in the previous release, and fixing some bugs.

Fixed Bugs:

#318 A few inconsistencies with the new naming

Added Features:

torch.nn.MultiHeadAttention
torch.linalg.cond
torch.linalg.cholesky_ex
torch.linalg.inv_ex
torch.amax/amin
torch.matrix_exp
torch.distributions.* (about half the namespace)

API Changes:

CustomModule removed, its APIs moved to Module.