
[BUG] Internal error in package accelerate and LLVM.PTX backend: CUDA Exception - misaligned address #529

Open
sergiodguezc opened this issue Apr 9, 2023 · 1 comment

Description
I encountered an internal error in the accelerate package while running my code. The error message was:

Internal error in package accelerate
Please submit a bug report at https://github.com/AccelerateHS/accelerate/issues

CUDA Exception: misaligned address

CallStack (from HasCallStack):
  internalError: Data.Array.Accelerate.LLVM.PTX.State:53:9

Steps to reproduce
I do not yet have a minimal example that reproduces the error, but I am working on reducing my code to one and will attach it to this issue as soon as I have it.

Expected behaviour
I expected the code to run without any errors.

Environment

  • Accelerate : 1.3.0.0
  • Accelerate backend(s): accelerate-llvm-ptx
    • Device Query
 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce GTX 760 (192-bit)"
  CUDA Driver Version / Runtime Version          11.4 / 10.2
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 1482 MBytes (1554251776 bytes)
  ( 6) Multiprocessors, (192) CUDA Cores/MP:     1152 CUDA Cores
  GPU Max Clock rate:                            889 MHz (0.89 GHz)
  Memory Clock rate:                             2800 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 393216 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS
  • GHC: 8.10.7
  • OS: Arch Linux 6.1.22-1

Additional context
Note that when using the LLVM.Native backend, the code runs without any errors. Please let me know if you need any additional information or if there's anything else I can do to help diagnose and fix this issue. Thank you.

ivogabe (Contributor) commented Apr 12, 2023

We've recently seen this in a larger project as well, but we haven't been able to extract a small reproduction from it (yet). It would be really useful if you could create one; that would make this a lot easier to debug.
