You need an NVIDIA GPU with at least 4 GB of VRAM (8 GB of VRAM is recommended to be able to run the big Polygon map).
On the PC, you can clone this repository. You may use Visual Studio Code to develop your project by running code .
in a terminal opened in the cloned folder. VS Code can also be used remotely. The process is described here.
You can use the nvidia-smi command to retrieve information about the installed GPU. This also allows you to monitor its usage.
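For example, to keep an eye on GPU utilization and memory while your program runs, you can refresh the output every second (a simple option, assuming the watch utility is available):
watch -n 1 nvidia-smi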
The CUDA Toolkit 12.3 needs to be installed. To access the nvcc compiler, you have to add its directory to your PATH. You can do that with the following commands:
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
To avoid having to run these commands every time you open a terminal, you can add them to your .bashrc file.
To avoid installing the CUDA Runtime API and the NVCC compiler globally, you can create a conda environment and install the cudatoolkit package, which includes the CUDA Runtime API and the NVCC compiler. To use CUDA at all, you need to install an appropriate NVIDIA driver for your graphics card. Note that, depending on your graphics card, only certain driver versions are available, and newer CUDA versions require newer driver versions. You can check which driver version you need for which CUDA version on this link and the driver versions for your graphics card here.
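As a sketch of the conda approach (assuming the nvidia conda channel provides a cuda-toolkit package; the environment name and the pinned version here are only illustrative):
conda create -n cudalab -c nvidia cuda-toolkit=12.3
conda activate cudalab
nvcc --version   # verify the compiler is picked up from the environment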
Now, you can create a .cu file to start programming. Check out the CUDA C++ Programming Guide or other tutorials to learn how to write your program. The programming guide also contains instructions on compilation. StopWatch.h uses C++11 features, so at minimum, you'll need a compile command like this:
nvcc -std=c++11 YourCode.cu -o RunMe
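To check that your toolchain works, a minimal example along these lines can be built with the command above (the file name, kernel name, and sizes are only placeholders):
#include <cstdio>
#include <cuda_runtime.h>

// A trivial kernel: each thread writes its global index into the output array.
__global__ void fillIndices(int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i;
}

int main() {
    const int n = 256;
    int h_out[n];
    int* d_out = nullptr;

    // Allocate device memory and launch enough threads to cover n elements.
    cudaMalloc(&d_out, n * sizeof(int));
    fillIndices<<<(n + 127) / 128, 128>>>(d_out, n);

    // Copy the result back and release the device memory.
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    printf("last element: %d\n", h_out[n - 1]);  // expected: 255
    return 0;
}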
You can also set up a CMakeLists.txt to simplify compiling your code. A CMakeLists.txt could look like this:
cmake_minimum_required(VERSION 3.24)
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CUDA_ARCHITECTURES native)
set(CMAKE_CUDA_SEPARABLE_COMPILATION ON)
project(CudaLab VERSION 0.1 LANGUAGES CXX CUDA)
find_package(CUDAToolkit)
include_directories(${CUDAToolkit_INCLUDE_DIRS})
add_executable(YourProgram YourCudaFile.cu OtherSourceFile.cpp)
# You can add multiple executables for your tasks
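Assuming such a CMakeLists.txt in the repository root, the project can then be configured and built, for example, with:
cmake -B build
cmake --build build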
Avoid committing your binary files to git. You should set up a .gitignore file to exclude binaries and possibly other autogenerated or project files from git.
Resources:
- CUDA C++ Programming Guide; most relevant chapters:
- CUDA Runtime API to look up CUDA runtime functions; most relevant:
- 6.1 - Device Management: device initialization, synchronization
- 6.3 - Error Handling: CUDA functions return cudaError_t, which should be checked (e.g. a memory allocation can fail if the device is out of memory); see the sketch after this list
- 6.9 - Memory Management: allocate, copy, free memory
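Since checking every cudaError_t by hand gets repetitive, a common pattern is a small helper macro; a minimal sketch (the macro name CUDA_CHECK is just a convention, not part of the CUDA API):
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Wrap CUDA runtime calls so that any error aborts with file/line information.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err), __FILE__, __LINE__);     \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

// Usage example:
//   CUDA_CHECK(cudaMalloc(&d_ptr, bytes));
//   myKernel<<<blocks, threads>>>(d_ptr);
//   CUDA_CHECK(cudaGetLastError());        // catches launch errors
//   CUDA_CHECK(cudaDeviceSynchronize());   // catches errors during execution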