[NDTensors] JLArrays Extension #1508

Merged Sep 24, 2024 · 44 commits

Changes from 14 commits
fc3ca6c
Start adding JLArrays extension
kmp5VT Jun 21, 2024
0c9aae6
Bump version
kmp5VT Jun 21, 2024
506a071
format
kmp5VT Jun 21, 2024
c4dad63
Add JLArrays to test to see what we need to add in NDTensors
kmp5VT Jun 21, 2024
f23d305
Add registry to `TypeParameterAccessors`
kmp5VT Jun 21, 2024
822411c
Fix spelling
kmp5VT Jun 22, 2024
e213bff
remove unnecessary functions
kmp5VT Jun 22, 2024
7068a63
format
kmp5VT Jun 22, 2024
b8b46a3
Remove dir
kmp5VT Jun 22, 2024
fd0866e
rename folder
kmp5VT Jun 22, 2024
2caf2d1
[no ci] alphabetize libraries
kmp5VT Jun 22, 2024
9ad5670
Merge branch 'main' into kmp5/feature/JLArrays_extension
kmp5VT Jun 23, 2024
0f1f277
Add JLArrays as dep and move CUDA to extras
kmp5VT Jun 23, 2024
95881d5
Moving to make JLArrays always run
kmp5VT Jun 23, 2024
0acdaec
Add cuda to see if there are still issues on Jenkins (my machine is fine)
kmp5VT Jun 23, 2024
a4687bd
Fix import
kmp5VT Jun 23, 2024
cdee270
Add Extension functions to JLArrays
kmp5VT Jun 23, 2024
d0b6a2f
Merge branch 'main' into kmp5/feature/JLArrays_extension
kmp5VT Jun 24, 2024
d6e675d
Merge branch 'main' into kmp5/feature/JLArrays_extension
kmp5VT Jun 26, 2024
bdacf0e
Fix the linear algebra and add jl to base tests
kmp5VT Jun 27, 2024
d2be3b3
format
kmp5VT Jun 27, 2024
9428514
Try activate before update registry
kmp5VT Jun 27, 2024
b3a2d87
Merge branch 'main' into kmp5/feature/JLArrays_extension
kmp5VT Jun 27, 2024
9165664
Move cuda back to deps
kmp5VT Jun 27, 2024
1cd131d
There are some issues with JLArrays on lower versions of Julia
kmp5VT Jun 28, 2024
6654671
Merge branch 'main' into kmp5/feature/JLArrays_extension
kmp5VT Jul 2, 2024
8a32daf
add back cuda
kmp5VT Jul 2, 2024
ad02066
Having JLArrays when testing GPUs creates an issue in test
kmp5VT Jul 2, 2024
8534859
Not using JLArrays in early versions of Julia
kmp5VT Jul 2, 2024
e2aa194
Move CUDA to extra
kmp5VT Jul 2, 2024
1078405
Add JLArrays back to deps
kmp5VT Jul 2, 2024
f47cf11
Bump CUDA test from 1.6 to 1.8
kmp5VT Jul 3, 2024
094c3e3
Small fix
kmp5VT Jul 9, 2024
b751d64
Merge branch 'main' into kmp5/feature/JLArrays_extension
kmp5VT Sep 4, 2024
1a14a33
Sparse arrays compat for lower versions
kmp5VT Sep 5, 2024
0aee394
Allow LinearAlgebra v0
mtfishman Sep 5, 2024
4e5c622
Allow Random v0
mtfishman Sep 5, 2024
1b976d0
Merge remote-tracking branch 'upstream/main' into kmp5/feature/JLArra…
kmp5VT Sep 18, 2024
6027cb6
Update to tests
kmp5VT Sep 18, 2024
12f43cb
typo
kmp5VT Sep 18, 2024
638377c
Remove Jenkins CUDA 1.8
kmp5VT Sep 19, 2024
4208487
Move default_typeparameters to AbstractArray
kmp5VT Sep 22, 2024
14922b1
Remove file
kmp5VT Sep 23, 2024
38092d8
AbstractGPUArrays -> AbstractArray
kmp5VT Sep 24, 2024
6 changes: 5 additions & 1 deletion NDTensors/Project.toml
@@ -1,7 +1,7 @@
name = "NDTensors"
uuid = "23ae76d9-e61a-49c4-8f12-3f1a16adf9cf"
authors = ["Matthew Fishman <[email protected]>"]
version = "0.3.34"
version = "0.3.35"

[deps]
Accessors = "7d9f7c33-5ae7-4f3b-8dc6-eff91059b697"
@@ -36,6 +36,7 @@ AMDGPU = "21141c5a-9bdb-4563-92ae-f87d6854732e"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
HDF5 = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
JLArrays = "27aeb0d3-9eb9-45fb-866b-73c2ecf80fcb"
MappedArrays = "dbb5928d-eab1-5f90-85c2-b9b0edb7c900"
Metal = "dde4c033-4e86-420c-a63e-0dd931031962"
Octavian = "6fd5a793-0b7e-452c-907f-f8bfe9c57db4"
@@ -47,6 +48,7 @@ NDTensorsAMDGPUExt = ["AMDGPU", "GPUArraysCore"]
NDTensorsCUDAExt = ["CUDA", "GPUArraysCore"]
NDTensorsGPUArraysCoreExt = "GPUArraysCore"
NDTensorsHDF5Ext = "HDF5"
+NDTensorsJLArraysExt = ["GPUArraysCore", "JLArrays"]
NDTensorsMappedArraysExt = ["MappedArrays"]
NDTensorsMetalExt = ["GPUArraysCore", "Metal"]
NDTensorsOctavianExt = "Octavian"
@@ -70,6 +72,7 @@ GPUArraysCore = "0.1"
HDF5 = "0.14, 0.15, 0.16, 0.17"
HalfIntegers = "1"
InlineStrings = "1"
JLArrays = "0.1"
LinearAlgebra = "1.6"
MacroTools = "0.5"
MappedArrays = "0.4"
@@ -95,6 +98,7 @@ AMDGPU = "21141c5a-9bdb-4563-92ae-f87d6854732e"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
HDF5 = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
JLArrays = "27aeb0d3-9eb9-45fb-866b-73c2ecf80fcb"
Metal = "dde4c033-4e86-420c-a63e-0dd931031962"
Octavian = "6fd5a793-0b7e-452c-907f-f8bfe9c57db4"
TBLIS = "48530278-0828-4a49-9772-0f3830dfa1e9"
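For context, a minimal sketch of how the new [extensions] entry above behaves at load time, assuming Julia >= 1.9's package-extension mechanism (`Base.get_extension` is the standard way to check that an extension actually loaded):

```julia
# Minimal sketch, assuming Julia >= 1.9 package extensions:
# NDTensorsJLArraysExt = ["GPUArraysCore", "JLArrays"] means the extension
# module loads automatically once NDTensors and both trigger packages
# are present in the session.
using NDTensors
using JLArrays  # loading JLArrays also brings in GPUArraysCore

ext = Base.get_extension(NDTensors, :NDTensorsJLArraysExt)
@assert ext isa Module  # the extension is now active
```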
3 changes: 1 addition & 2 deletions NDTensors/ext/NDTensorsAMDGPUExt/set_types.jl
@@ -6,6 +6,5 @@ using AMDGPU: AMDGPU, ROCArray
function TypeParameterAccessors.default_type_parameters(::Type{<:ROCArray})
return (Float64, 1, AMDGPU.Mem.HIPBuffer)
end
-TypeParameterAccessors.position(::Type{<:ROCArray}, ::typeof(eltype)) = Position(1)
-TypeParameterAccessors.position(::Type{<:ROCArray}, ::typeof(ndims)) = Position(2)

TypeParameterAccessors.position(::Type{<:ROCArray}, ::typeof(storagemode)) = Position(3)
6 changes: 0 additions & 6 deletions NDTensors/ext/NDTensorsCUDAExt/set_types.jl
@@ -3,12 +3,6 @@ using CUDA: CUDA, CuArray
using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position
using NDTensors.GPUArraysCoreExtensions: storagemode

-function TypeParameterAccessors.position(::Type{<:CuArray}, ::typeof(eltype))
-  return Position(1)
-end
-function TypeParameterAccessors.position(::Type{<:CuArray}, ::typeof(ndims))
-  return Position(2)
-end
function TypeParameterAccessors.position(::Type{<:CuArray}, ::typeof(storagemode))
return Position(3)
end
3 changes: 3 additions & 0 deletions NDTensors/ext/NDTensorsJLArraysExt/NDTensorsJLArraysExt.jl
@@ -0,0 +1,3 @@
+module NDTensorsJLArraysExt
+include("set_types.jl")
+end
7 changes: 7 additions & 0 deletions NDTensors/ext/NDTensorsJLArraysExt/set_types.jl
@@ -0,0 +1,7 @@
+# TypeParameterAccessors definitions
+using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position
+using JLArrays: JLArray
+
+function TypeParameterAccessors.default_type_parameters(::Type{<:JLArray})
+  return (Float64, 1)
+end
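A short illustrative sketch of what this new definition provides: with the extension loaded, TypeParameterAccessors can report default type parameters for `JLArray`. Note that no `position` overloads for `eltype`/`ndims` are needed here, since `JLArray{T,N}` keeps them in the standard first two parameter slots:

```julia
using NDTensors.TypeParameterAccessors: default_type_parameters
using JLArrays: JLArray

# With NDTensorsJLArraysExt loaded, the defaults defined above apply:
@assert default_type_parameters(JLArray) == (Float64, 1)
```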
7 changes: 0 additions & 7 deletions NDTensors/ext/NDTensorsMetalExt/set_types.jl
@@ -4,13 +4,6 @@ using Metal: Metal, MtlArray
using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position
using NDTensors.GPUArraysCoreExtensions: storagemode

-## TODO remove TypeParameterAccessors when SetParameters is removed
-function TypeParameterAccessors.position(::Type{<:MtlArray}, ::typeof(eltype))
-  return Position(1)
-end
-function TypeParameterAccessors.position(::Type{<:MtlArray}, ::typeof(ndims))
-  return Position(2)
-end
function TypeParameterAccessors.position(::Type{<:MtlArray}, ::typeof(storagemode))
return Position(3)
end
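The deletions here and in the AMDGPU and CUDA extensions above follow from commits 4208487 ("Move default_typeparameters to AbstractArray") and 38092d8 ("AbstractGPUArrays -> AbstractArray"): once `eltype`/`ndims` positions are defined generically for `AbstractArray`, the per-backend overloads are redundant. A hedged sketch of the generic methods this relies on (the actual definitions live in TypeParameterAccessors; shown here only for illustration):

```julia
using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position

# Assumed generic fallbacks: any AbstractArray subtype whose first two type
# parameters are eltype and ndims (Array{T,N}, CuArray{T,N,B}, MtlArray{T,N,S},
# JLArray{T,N}, ...) is covered by a single pair of methods:
TypeParameterAccessors.position(::Type{<:AbstractArray}, ::typeof(eltype)) = Position(1)
TypeParameterAccessors.position(::Type{<:AbstractArray}, ::typeof(ndims)) = Position(2)
```

Only `storagemode` still needs a per-backend `Position(3)` definition, which is why those lines are retained in each extension.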
6 changes: 5 additions & 1 deletion NDTensors/test/NDTensorsTestUtils/device_list.jl
@@ -1,5 +1,7 @@
using NDTensors: NDTensors
using Pkg: Pkg
+using JLArrays
+using NDTensors: NDTensors

if "cuda" in ARGS || "all" in ARGS
## Right now adding CUDA during Pkg.test results in
## compat issues. I am adding it back to test/Project.toml
@@ -28,6 +30,7 @@ function devices_list(test_args)
devs = Vector{Function}(undef, 0)
if isempty(test_args) || "base" in test_args
push!(devs, NDTensors.cpu)
+# push!(devs, jl)
end

if "cuda" in test_args || "cutensor" in test_args || "all" in test_args
@@ -47,5 +50,6 @@
if "metal" in test_args || "all" in test_args
push!(devs, NDTensors.MetalExtensions.mtl)
end

return devs
end
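For reference, the `jl` in the commented-out `push!(devs, jl)` above is JLArrays' exported device constructor: it wraps data in a `JLArray`, a CPU-backed reference implementation of the GPUArrays interface, so GPU code paths can be exercised without GPU hardware. A quick usage sketch:

```julia
using JLArrays: JLArray, jl

x = randn(2, 2)
xjl = jl(x)                # a reference "GPU" array, actually backed by CPU memory
@assert xjl isa JLArray
@assert collect(xjl) == x  # collect copies the data back into a plain Array
```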
3 changes: 2 additions & 1 deletion NDTensors/test/Project.toml
@@ -1,14 +1,14 @@
[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
BlockArrays = "8e7c35d0-a365-5155-bbbb-fb81a777f24e"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
Combinatorics = "861a8166-3701-5b0c-9a16-15d98fcdc6aa"
Compat = "34da2185-b29b-5c13-b0c7-acf172513d20"
Dictionaries = "85a47980-9c8c-11e8-2b9f-f7ca1fa99fb4"
EllipsisNotation = "da5c29d0-fa7d-589e-88eb-ea29b0a81949"
FillArrays = "1a297f60-69ca-5386-bcde-b61e274b549b"
GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
ITensors = "9136182c-28ba-11e9-034c-db9fb085ebd5"
JLArrays = "27aeb0d3-9eb9-45fb-866b-73c2ecf80fcb"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MappedArrays = "dbb5928d-eab1-5f90-85c2-b9b0edb7c900"
NDTensors = "23ae76d9-e61a-49c4-8f12-3f1a16adf9cf"
@@ -29,5 +29,6 @@ cuTENSOR = "2.0"

[extras]
AMDGPU = "21141c5a-9bdb-4563-92ae-f87d6854732e"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
Metal = "dde4c033-4e86-420c-a63e-0dd931031962"
cuTENSOR = "011b41b2-24ef-40a8-b3eb-fa098493e9e1"