promote_type on GPU arrays of Float32 and ComplexF32 promotes to UnionAll #543

nHackel opened this issue Jun 24, 2024 · 0 comments

Hello,

I stumbled over this difference between CPU and GPU arrays:

julia> using JLArrays

julia> using CUDA

julia> a = rand(Float32, 1); b = rand(ComplexF32, 1);

julia> promote_type(typeof(a), typeof(b))
Vector{ComplexF32} (alias for Array{Complex{Float32}, 1})

julia> promote_type(typeof(JLArray(a)), typeof(JLArray(b)))
JLArray{T, 1} where T

julia> promote_type(typeof(CuArray(a)), typeof(CuArray(b)))
CuArray{T, 1, CUDA.DeviceMemory} where T

I originally noticed this behaviour with CuArrays, but it is also present for JLArrays, so I hope this is the correct repository. I have not been able to test other GPU array types yet.
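For what it's worth, the UnionAll result looks like the generic `promote_type` fallback (the type join) kicking in because no `promote_rule` is defined between the two array types. A minimal sketch without any GPU, using a hypothetical toy type `MyArray` (the name and the rule below are illustrative, not part of GPUArrays):

```julia
# Toy array type standing in for JLArray/CuArray; purely illustrative.
struct MyArray{T,N} <: AbstractArray{T,N}
    data::Array{T,N}
end

# With no promote_rule defined between the two concrete types, promote_type
# falls back to typejoin, which yields the UnionAll `MyArray{T, 1} where T`
# -- the same shape of result as reported above for JLArray and CuArray.
fallback = promote_type(MyArray{Float32,1}, MyArray{ComplexF32,1})

# Base.Array avoids this because Base defines a promote_rule that promotes
# the element types. The analogous rule for the toy type:
Base.promote_rule(::Type{MyArray{T,N}}, ::Type{MyArray{S,N}}) where {T,S,N} =
    MyArray{promote_type(T, S), N}

fixed = promote_type(MyArray{Float32,1}, MyArray{ComplexF32,1})
```

With the rule in place, `fixed` comes out as the concrete `MyArray{ComplexF32, 1}`, matching the CPU behaviour; presumably a similar rule on the GPU array types (accounting for their extra type parameters, e.g. `CUDA.DeviceMemory`) would fix this.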
