
Introduce AbstractGPUSparseArray #431

Open
albertomercurio opened this issue Sep 27, 2022 · 4 comments

Comments

@albertomercurio
Contributor

Hello,

Dense CuArrays are recognized as the AbstractGPUArray type. Indeed, if I run

using CUDA
using GPUArrays

A = CUDA.rand(10, 10)
A isa AbstractGPUArray

it returns true. However, it returns false for a sparse AbstractCuSparseMatrix array, such as the CuSparseMatrixCSR type. If I run

using CUDA.CUSPARSE

A2 = CuSparseMatrixCSR(A)
A2 isa AbstractCuSparseMatrix
A2 isa AbstractGPUArray

the first returns true, while the second returns false.

I think that support for sparse GPU types is essential for developing external packages that special-case GPU arrays without importing the whole CUDA.jl package. An example is the ExponentialUtilities.jl package, which has a dedicated code path only for AbstractGPUArray types, and which therefore misses the GPU path if I pass in a sparse array.
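Below is a minimal sketch of that downstream pattern; the function name do_work is only illustrative and is not the actual ExponentialUtilities.jl API:

using CUDA, CUDA.CUSPARSE, GPUArrays

do_work(A::AbstractGPUArray) = :gpu_path      # selected for a dense CuArray
do_work(A::AbstractArray)    = :generic_path  # fallback for everything else

A  = CUDA.rand(10, 10)
A2 = CuSparseMatrixCSR(A)

do_work(A)   # :gpu_path
do_work(A2)  # :generic_path, since CuSparseMatrixCSR is not <: AbstractGPUArray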

@maleadt
Member

maleadt commented Sep 27, 2022

However, it returns false for a sparse AbstractCuSparseMatrix array, such as CuSparseMatrixCSR type.

That's intentional, because we don't have multiple inheritance. If we want CuSparseMatrixCSR to be <: AbstractSparseArray, we cannot also have it be a subtype of AbstractGPUArray (which is a direct subtype of AbstractArray).

So I'm not sure what you're suggesting here with 'Support for AbstractCuSparseMatrix'. A CUDA-specific type is never going to be used in GPUArrays, which sits at a higher level of abstraction. We could define AbstractSparseGPUArray <: AbstractSparseArray here and have the CUDA sparse array types be a subtype of that, but right now (without any sparse array functionality in GPUArrays) that won't buy you anything.

@albertomercurio
Contributor Author

For example, the ExponentialUtilities.jl package declares a method for AbstractGPUArray, which is called only if I pass in a dense CuArray.

I need a way to declare a method for a sparse AbstractCuSparseMatrix, but the only way I have found so far is to import the CUDA.jl package, and I don't know whether that is a good idea.
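For illustration, this is roughly the workaround I mean (reusing the illustrative do_work function from the example above); it forces a hard dependency on CUDA.jl:

using CUDA
using CUDA.CUSPARSE: AbstractCuSparseMatrix

do_work(A::AbstractCuSparseMatrix) = :gpu_sparse_path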

What do you think?

@albertomercurio
Contributor Author

Let me try to explain better what I mean.

I think it would be better to move these lines, which currently live in the CUDA.jl package,

https://github.com/JuliaGPU/CUDA.jl/blob/cea77a5aa60c0689bf8055f966722e5b3a3f8cf1/lib/cusparse/array.jl#L12-L14

into this package, analogously to the dense case, which is defined here:

abstract type AbstractGPUArray{T, N} <: DenseArray{T, N} end
const AbstractGPUVector{T} = AbstractGPUArray{T, 1}
const AbstractGPUMatrix{T} = AbstractGPUArray{T, 2}
const AbstractGPUVecOrMat{T} = Union{AbstractGPUArray{T, 1}, AbstractGPUArray{T, 2}}
# convenience aliases for working with wrapped arrays
const WrappedGPUArray{T,N} = WrappedArray{T,N,AbstractGPUArray,AbstractGPUArray{T,N}}
const AnyGPUArray{T,N} = Union{AbstractGPUArray{T,N}, WrappedGPUArray{T,N}}

In this way, we could also add AbstractCuSparseMatrix to the AnyGPUArray union.
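As a rough sketch (the names and type parameters below are only illustrative, not the actual CUDA.jl definitions), the moved hierarchy could look like this:

using SparseArrays

# backend-agnostic sparse hierarchy, mirroring the dense definitions above
abstract type AbstractGPUSparseArray{Tv, Ti, N} <: AbstractSparseArray{Tv, Ti, N} end
const AbstractGPUSparseVector{Tv, Ti} = AbstractGPUSparseArray{Tv, Ti, 1}
const AbstractGPUSparseMatrix{Tv, Ti} = AbstractGPUSparseArray{Tv, Ti, 2}

CUDA.jl's AbstractCuSparseArray could then subtype AbstractGPUSparseArray instead of AbstractSparseArray directly.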

@maleadt changed the title from "Support for the AbstractCuSparseMatrix type" to "Introduce AbstractGPUSparseArray" on Oct 26, 2022
@maleadt
Member

maleadt commented Oct 26, 2022

In this way, we can add also the AbstractCuSparseMatrix into the AnyGPUArray type.

That's different from the AbstractGPUArray test you requested first (AbstractGPUArray is a subtype of DenseArray, so that wouldn't work for sparse arrays). Maybe we shouldn't have done that, i.e., we should have had AbstractGPUArray <: AbstractArray instead, but I don't think we can safely change that now. But introducing a separate AbstractGPUSparseArray (which wouldn't be a subtype of AbstractGPUArray, but could be included in AnyGPUArray) seems like it could work.
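A minimal sketch, assuming the AbstractGPUSparseArray definitions proposed above, of how the AnyGPUArray union in GPUArrays could be widened:

# replaces the current AnyGPUArray definition (illustrative only)
const AnyGPUArray{T,N} = Union{AbstractGPUArray{T,N}, WrappedGPUArray{T,N},
                               AbstractGPUSparseArray{T,<:Integer,N}}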

I'm not sure how useful it would be though, because almost none of the AnyGPUArray functionality would be SparseArray-compatible. And we don't have any other GPU back-end that supports sparse arrays (rocSPARSE exists, but AFAIK AMDGPU.jl doesn't use it).
