C++ Performance Portability For OpenACC and OpenMP Folks
C++ performance portability is basically CUDA with a different syntax, and it is applicable to any code. It is merely a library, not a separate language, and it is already well supported by all major compilers. It is similar to CUDA because you specify a kernel that operates on one thread along with the number of threads to run. A "kernel" is simply the code you would find inside a loop or a set of loops, and the number of threads is the product of the loop dimensions outside the kernel code.
Fortran + OpenACC:
real stateTend      (nx  ,ny,nz,numState);
real stateFluxLimits(nx+1,ny,nz,numState);
!$acc parallel loop collapse(4)
do l = 1 , numState
do k = 1 , nz
do j = 1 , ny
do i = 1 , nx
stateTend(i,j,k,l) = - ( stateFluxLimits(i+1,j,k,l) - &
                         stateFluxLimits(i  ,j,k,l) ) / dx
enddo
enddo
enddo
enddo
The same kernel with YAKL (Fortran-style Arrays):
typedef yakl::Array<float,4,yakl::memDevice,yakl::styleFortran> real4d;
using yakl::fortran::parallel_for;
using yakl::fortran::Bounds;
real4d stateTend ("stateTend" ,nx ,ny,nz,numState);
real4d stateFluxLimits("stateFluxLimits",nx+1,ny,nz,numState);
// do l = 1 , numState
// do k = 1 , nz
// do j = 1 , ny
// do i = 1 , nx
parallel_for( Bounds<4>(numState,nz,ny,nx) ,
YAKL_LAMBDA(int l, int k, int j, int i) {
stateTend(i,j,k,l) = - ( stateFluxLimits(i+1,j,k,l) -
stateFluxLimits(i ,j,k,l) ) / dx;
});
The same kernel with YAKL (C-style Arrays):
typedef yakl::Array<float,4,yakl::memDevice,yakl::styleC> real4d;
using yakl::c::parallel_for;
using yakl::c::Bounds;
real4d stateTend ("stateTend" ,numState,nz,ny,nx );
real4d stateFluxLimits("stateFluxLimits",numState,nz,ny,nx+1);
// for (int l=0; l < numState; l++) {
// for (int k=0; k < nz; k++) {
// for (int j=0; j < ny; j++) {
// for (int i=0; i < nx; i++) {
parallel_for( Bounds<4>(numState,nz,ny,nx) ,
YAKL_LAMBDA(int l, int k, int j, int i) {
stateTend(l,k,j,i) = - ( stateFluxLimits(l,k,j,i+1) -
stateFluxLimits(l,k,j,i ) ) / dx;
});
- The Kernel Approach
- How To Express a Kernel in C++
- C++ "Lambda"s
- C++ Templates
- From Loops To Kernels: An Example
- GPU Limitations and the Pains They Cause
- Further Resources
The Kernel Approach
Performance portability is a bit of a unicorn in the real world. We all want something that works optimally on all architectures with a single source, but in practice, we have to make compromises. The goal should be to find the route with the fewest limitations and the most flexibility. When it comes to portability, the "kernel" model has a lot of advantages. This is the model used in Nvidia's CUDA language, for instance.
The gist of the kernel approach is to give the following information:
- A kernel that performs the work of a single thread's index
- The number of threads to run
You basically just say, "Here's what I want to do for a given thread, and here's how many threads I have." With this information, a compiler and / or library can then launch those threads in whatever manner it wants. If you're on the CPU, you can wrap the kernel in a simple for
loop. If you're on the GPU you can launch it as a GPU kernel. If you're on the Intel MIC, you can tile the parallelism for appropriate caching.
As far as I can tell, the kernel approach is the most portable avenue we currently have for general computations. You can, of course, use tuned libraries like cuBLAS and cuFFT for specific computations and create Domain Specific Languages for slightly less specific cases. But the kernel approach allows you to define the "kernel" however you want for truly generic computations.
A well-coded directives-based approach like OpenACC or OpenMP offload already does this implicitly. The kernel code is simply the code inside a set of loops, and the number of threads for the kernel is the product of the loop dimensions. Consider the codes below, for instance:
! These loops will create nx*ny*nz threads
!$acc parallel loop collapse(3)
do k= 1 , nz
do j = 1 , ny
do i = 1 , nx
c(i,j,k) = a(i,j,k) + b(i,j,k) ! This is the kernel code
enddo
enddo
enddo
// These loops will create nx*ny*nz threads
#pragma acc parallel loop collapse(3)
for (int k=0; k<nz; k++) {
for (int j=0; j<ny; j++) {
for (int i=0; i<nx; i++) {
c[k][j][i] = a[k][j][i] + b[k][j][i]; // This is the kernel code
}
}
}
In the C++ kernel-based approach, though, you can explicitly launch the kernel code on different backends yourself rather than relying on the compiler to do the right thing or waiting for it to implement the specific backend you want in the way you want it. Compiler development usually moves substantially more slowly because a compiler has to be more general than any single project needs to be.
How To Express a Kernel in C++
To launch some kernel code on multiple possible backends, you must be able to pass the kernel code as a function parameter. In C++, the most flexible and convenient way to do this is to pass a class object. It's true that you can pass functions as pointers in C and C++, but function pointers are harder to inline and must pass variables by parameter in order to operate on them, which can be cumbersome to code. That's not to mention the fact that you cannot use a CPU function pointer on the GPU because they use different memory address spaces.
So, you're passing a class object that "does something", meaning you'll need to call a member function of that class. The launcher you're passing your kernel to needs to know which function to call. While you could just agree on a common function name (e.g., snuffaluffagus()) for all of your kernel class objects to use, a more convenient thing to do is to overload the parentheses operator, operator(). The benefit of this is that when you use the parentheses operator on the class object, it looks like a function call. A C++ class that overloads the parentheses operator is called a "functor". An example of a functor that adds two arrays, a and b, and stores the result in c is below:
class AddKernel {
float *a, *b, *c;
public:
AddKernel(float *aIn, float *bIn, float *cIn) {
a = aIn;
b = bIn;
c = cIn;
}
void operator() (int i) const {
c[i] = a[i] + b[i];
}
};
For those less familiar with C++, the function with no return type that has the same name as the class is called the constructor, and in this case, it initializes the pointers so the operator() function can operate on them. Notice that the operator() function takes a single parameter, the thread ID, and does something specific to that thread ID. You would then pass an instance of this class to a launcher function that would run it with something like the following on the CPU in serial:
void launcher(int nThreads, AddKernel &kernel) {
  for (int i=0; i<nThreads; i++) {
    kernel(i);
  }
}
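For concreteness, a minimal usage sketch might look like the following (the array sizes and values here are made up purely for illustration):
#include <vector>

int main() {
  int nThreads = 1024;                                // assumed problem size
  std::vector<float> a(nThreads, 1.0f);
  std::vector<float> b(nThreads, 2.0f);
  std::vector<float> c(nThreads, 0.0f);
  AddKernel kernel( a.data() , b.data() , c.data() ); // build the functor
  launcher( nThreads , kernel );                      // c[i] = a[i] + b[i] for every i
  return 0;
}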
Running this kernel on the GPU would look more like:
__global__ void cudaKernel(int nThreads, AddKernel kernel) {
int i = blockIdx.x*blockDim.x + threadIdx.x;
if (i < nThreads) {
kernel( i );
}
}
int const vectorSize = 128;
void parallel_for(int nThreads, AddKernel &kernel) {
  cudaKernel <<< (nThreads-1)/vectorSize+1 , vectorSize >>> ( nThreads , kernel );
}
The point is that we can take this kernel in the form of a functor and run it on multiple backends pretty easily. This is the gist of C++ performance portability.
C++ "Lambda"s
When your kernels use a lot of variables, it can be a real pain to explicitly create a functor for every single kernel, declaring and initializing those variables in each functor and explicitly creating the functor objects. That's a lot of wasted code. Enter the C++ "Lambda." A Lambda is a convenient way to create a functor without all of that tedium. The main convenience of a Lambda is the "capture" clause (e.g., [var1,var2,&var3]), which automatically defines and initializes variables from the surrounding context in the functor for you. A note of warning: Lambda function names in the symbol table tend to be pretty unwieldy.
As an example, the AddKernel functor you saw before could be expressed as the following Lambda:
auto myKernel = [a,b,c] (int i) { c[i] = a[i] + b[i]; };
The auto keyword, by the way, is just a convenience: the compiler deduces the object's type at compile time. It is the only way I know of to store a lambda object in a variable, though often people just declare the lambda in place as a function parameter to the kernel launcher.
Notice the Lambda is only one line of code instead of twelve for this simple case. It becomes more drastic the more variables you use in the capture clause. Lambdas are also nice because they allow you to keep the kernel's code within the context it was originally in, grabbing variables from that context and creating a functor on the fly. If you have five successive kernels (i.e., five separate loops) in a given function, they can all stay in that function together like they originally were. This often makes the code easier to read and follow.
C++ Templates
A C++ template directs the compiler to generate a function on the fly based on how it is called. Note that templates can make your code hard to follow if you aren't careful (the same can be said of class inheritance), so be judicious in how you use them. The example launchers you saw above are not very useful because they only work with one functor object type. We can change them to use templates so they work with any kernel functor, as follows:
// Serial CPU backend
template <class F> void parallel_for( int const nThreads, F &f ) {
  for (int i=0; i<nThreads; i++) {
    f(i);
  }
}
// CUDA backend
template <class F> __global__ void cudaKernel(int nThreads, F f) {
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < nThreads) { f( i ); }
}
int const vectorSize = 128;
template <class F> void parallel_for(int nThreads, F &f) {
  cudaKernel <<< (nThreads-1)/vectorSize+1 , vectorSize >>> ( nThreads , f );
}
// HIP backend
template <class F> __global__ void hipKernel(int nThreads, F f) {
  int i = hipBlockIdx_x*hipBlockDim_x + hipThreadIdx_x;
  if (i < nThreads) { f( i ); }
}
int const vectorSize = 128;
template <class F> void parallel_for( int const nThreads, F const &f ) {
  hipLaunchKernelGGL( hipKernel , dim3((nThreads-1)/vectorSize+1) , dim3(vectorSize) ,
                      (std::uint32_t) 0 , (hipStream_t) 0 , nThreads , f );
}
When you call parallel_for with the AddKernel functor, the compiler will generate a version of this function with that object type. When you next call it with another functor (say from a Lambda), the compiler will generate yet another version of this launcher function. It's a convenience that allows you to write the launcher code only once for any kernel.
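To tie the last two sections together, here is a rough usage sketch (mine, not from the original article): two successive kernels expressed as Lambdas and handed to the templated serial parallel_for above. The pointers a, b, c, d and the size n are assumed to exist; for the CUDA or HIP backends, the lambdas would also need the __host__ __device__ attributes (e.g., via a macro like YAKL_LAMBDA), and the pointers would need to be device-visible.
void compute(float *a, float *b, float *c, float *d, int n) {
  // First kernel: elementwise add
  auto add = [=] (int i) { c[i] = a[i] + b[i]; };
  parallel_for( n , add );
  // Second kernel: scale the result; both kernels stay in the same function context
  auto scale = [=] (int i) { d[i] = 2.0f * c[i]; };
  parallel_for( n , scale );
}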
From Loops To Kernels: An Example
Consider the following CPU code, which computes x-direction tendencies from Finite-Volume fluxes in a fluids code:
real stateTend (nx ,ny,nz,numState);
real stateFluxLimits(nx+1,ny,nz,numState);
!$acc parallel loop collapse(4)
do l = 1 , numState
do k = 1 , nz
do j = 1 , ny
do i = 1 , nx
stateTend(i,j,k,l) = - ( stateFluxLimits(i+1,j,k,l) - &
                         stateFluxLimits(i  ,j,k,l) ) / dx
enddo
enddo
enddo
enddo
Once this is transformed into a Lambda-based kernel launcher using the YAKL library, this code becomes:
typedef yakl::Array<float,4,yakl::memDevice,yakl::styleC> real4d;
using yakl::c::parallel_for;
using yakl::c::Bounds;
real4d stateTend ("stateTend" ,numState,nz,ny,nx );
real4d stateFluxLimits("stateFluxLimits",numState,nz,ny,nx+1);
// for (int l=0; l < numState; l++) {
// for (int k=0; k < nz; k++) {
// for (int j=0; j < ny; j++) {
// for (int i=0; i < nx; i++) {
parallel_for( Bounds<4>(numState,nz,ny,nx) ,
YAKL_LAMBDA(int l, int k, int j, int i) {
stateTend(l,k,j,i) = - ( stateFluxLimits(l,k,j,i+1) -
stateFluxLimits(l,k,j,i ) ) / dx;
});
If we wanted to keep the code looking like Fortran (using YAKL's Fortran-style Arrays), we get:
typedef yakl::Array<float,4,yakl::memDevice,yakl::styleFortran> real4d;
using yakl::fortran::parallel_for;
using yakl::fortran::Bounds;
real4d stateTend ("stateTend" ,nx ,ny,nz,numState);
real4d stateFluxLimits("stateFluxLimits",nx+1,ny,nz,numState);
// do l = 1 , numState
// do k = 1 , nz
// do j = 1 , ny
// do i = 1 , nx
parallel_for( Bounds<4>(numState,nz,ny,nx) ,
YAKL_LAMBDA(int l, int k, int j, int i) {
stateTend(i,j,k,l) = - ( stateFluxLimits(i+1,j,k,l) -
stateFluxLimits(i ,j,k,l) ) / dx;
});
I like to leave the for loops commented out to better describe what's going on in the kernel launch itself, since the syntax still seems kind of ugly to me. This code works both on the CPU and the GPU with a single source.
Hopefully the code above gives a good sense of what something written with Fortran directives would look like in a C++ kernel-based approach. It always raises the question, though: Why bother moving to C++? I've programmed quite a bit in both contexts, and here are my thoughts:
- The OpenACC and OpenMP runtimes often have a difficult time dealing with Fortran data structures that are even remotely modern (pointers, ISO C bindings, derived types with allocatables, etc.). In my experience, this is the second-buggiest part of directives-based approaches, second only to function call directives.
- In directives-based approaches, you're relying on the compiler to do everything for you when you want to move to a new hardware back-end (like Intel MIC or AMD GPUs), and this can take a lot of time for compiler developers to implement. When new architectures and programming models emerge, mature Fortran implementations usually lag behind C++ implementations by a year or two (and sometimes significantly more than that).
- C++ compilers do a much better job of inlining function calls than Fortran compilers do. In Fortran, you often have to use the !$acc routine(...) directive for function calls, regardless of potential inlining. Not only does this tend to perform badly, but it's also, in my experience, the buggiest feature of OpenACC.
- In C++ and CUDA, you can use CUDA Managed Memory to shuffle data back and forth for you just by changing the allocation of your data buffer in your Array classes. In Fortran, you must rely on the compiler to do this for you under the hood, and thus far, only Nvidia's PGI compiler does this, via -ta=nvidia,managed.
- C++, being used significantly more in mainstream industry than Fortran, naturally has much more mature compilers with fewer bugs than you find in Fortran compiler implementations. Fortran compilers simply have fewer people working on them.
- C++ templating usually means you have to write significantly less code, because the compiler generates a lot of it for you. In Fortran, for a generic function with multiple data types, you have to explicitly create a Fortran interface block that maps a generic name onto multiple explicitly implemented functions, one per data type, which is really cumbersome (see the sketch after this list).
- One (valid) criticism of C++ is the lack of a native multi-dimensional array class, but it's not that hard to create one yourself or borrow an existing one made for GPU kernel launching (e.g., a Kokkos View or YAKL's Array). There is also a particularly strong benefit to a C++ Array class: its metadata is tied to the object itself, whereas in Fortran you can easily override an array's metadata in a routine's header, which is unsafe and makes array bounds checking in Fortran subjective and error prone. In C++, you can add your own array bounds checking to the Array class (turned on or off by a preprocessor macro), and because the Array's metadata never changes, the bounds checking is safer and more reliable.
- C++ has static analysis tools to catch bugs and undefined behavior, whereas Fortran does not (to my knowledge).
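To illustrate the templating point above, here is a small sketch (mine, not from the original): a single function template that the compiler instantiates for each type it is called with, whereas Fortran would need an interface block plus one explicit implementation per type.
// One template replaces several explicitly written functions: the compiler
// generates float, double, and int versions on demand.
template <class T> T total( T const *data , int n ) {
  T sum = 0;
  for (int i=0; i < n; i++) { sum += data[i]; }
  return sum;
}
// total(myFloats,n), total(myDoubles,n), and total(myInts,n) all just work.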
GPU Limitations and the Pains They Cause
From this point on, things are going to get a little more into the weeds. From a user's perspective, the most important things to take from this section are that when running on the GPU, (1) you must capture by value in your lambdas (i.e., [=] (int i) {...}); and (2) because you must capture by value, you must use an array / data class provided by the C++ library you're using (Kokkos, RAJA, YAKL, etc.) unless you know what you're doing (more on this in the next section).
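As a quick, hypothetical illustration of point (1), where a, b, and c stand for device-aware Array objects from such a library:
// Capture-by-reference stores addresses of CPU stack variables inside the
// lambda, which are meaningless on the GPU; capture-by-value copies the
// objects into the lambda, which is then copied to the device at launch.
auto bad  = [&] (int i) { c(i) = a(i) + b(i); };  // fine on the CPU, invalid on the GPU
auto good = [=] (int i) { c(i) = a(i) + b(i); };  // safe on both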
Everything is fine when you're running on the CPU, but things get uglier when you try to run on the GPU. The primary reason is that you have two separate memory address spaces: (1) main system memory and (2) GPU memory. Pointers in CPU memory are not valid on the GPU, and vice versa. So, if you create an object on the CPU and then try to use it on the GPU, you'll get a segmentation fault or bus error. Because of this, we have to introduce something new that works on single and multiple memory address spaces.
This, to me, is the crux of getting things working in a generic kernel-launching approach in C++. If you want things to work on the GPU, and you cannot address an object created on the CPU, then you need to copy the object to the GPU. To accommodate this, it's easiest to create your own Array class, preferably a multi-dimensional one, since most science uses arrays with multiple dimensions. The Array's metadata needs to live in simple, fixed-size members known at compile time (not behind pointers); only the Array's data should be a separately allocated pointer. If you want to use this Array on the GPU, allocate its data with cudaMalloc(). If you want to use it on the CPU, use malloc(). If you want to use it on both without having to copy explicitly, use cudaMallocManaged().
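A heavily simplified, one-dimensional sketch of that idea (my illustration, not YAKL's or Kokkos's actual API) might look like:
class Array1d {
public:
  int    n    = 0;        // metadata: a plain member, no dereferencing needed
  float *data = nullptr;  // the only allocated piece
  Array1d(int n_in) : n(n_in) {
    cudaMallocManaged( &data , n*sizeof(float) );  // or malloc() / cudaMalloc()
  }
  __host__ __device__ float & operator() (int i) const { return data[i]; }
  // Deallocation is deliberately omitted here; see the copy discussion below.
};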
When you pass any data class used in your kernel functors to the GPU, you must do so by value, not by reference, again because you cannot use CPU pointers on the GPU. To do this properly, you usually have to explicitly override the class's default copy and move constructors so that they copy the class's metadata but simply copy the data pointer's value (essentially sharing the allocation between the original and the GPU copy). However, you must be careful to let only the last remaining copy of the object deallocate that data pointer.
When you share the data pointer, this is considered a "shallow" copy because the full object isn't being replicated, only its metadata. If you want to do a so-called "deep" copy, which copies the data from one allocated buffer to another, you'll need to create an explicit routine for this (à la the Kokkos View API) rather than using the = operator or the copy and move constructors.
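Continuing the hypothetical Array1d sketch from above, the copy constructor and an explicit deep-copy routine might look roughly like this (again, a sketch of the idea rather than any library's real implementation):
// Shallow copy: duplicate the metadata, share the data pointer.
Array1d(Array1d const &rhs) : n(rhs.n) , data(rhs.data) { }

// Deep copy: an explicit, deliberately named routine that moves actual data.
void deep_copy_to(Array1d &dst) const {
  cudaMemcpy( dst.data , data , n*sizeof(float) , cudaMemcpyDefault );
}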
If you call a function from a GPU kernel, it's generally not worth the performance hit unless the called function does a lot of work across most of the GPU's thread indices. A true GPU function call flushes registers and cache, requiring fetches from DRAM for the data used inside the called function. For relatively small functions, you'll want to define the function where it is declared (e.g., in a header) rather than in a separate compilation unit, so that the compiler can inline it for the sake of performance.
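In practice that usually means writing small device functions as header-defined inline functions, something like the sketch below (attribute macros vary by library; this is just the idea):
// Defined at its declaration (in a header) so the compiler can inline it into
// the calling kernel instead of emitting a real device function call.
__host__ __device__ inline float average( float a , float b ) {
  return 0.5f * (a + b);
}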
To give every GPU thread its own copy of a private variable array, you must declare the array with a size that is known at compile time, because the GPU cannot handle variable-length arrays on the stack; it must know the size of any GPU kernel's stack frame at compile time. On the CPU, most compilers have extensions for putting Variable Length Arrays (VLAs) on the function's stack frame, but this gives a fatal compile-time error when using nvcc. If you have VLAs in your code, you have three options (options 1 and 3 are sketched after the list):
- Fix the array size as a compile time constant and keep the private variables.
- Fix a maximum private variable array length, and basically use partially filled arrays.
- Allocate a globally indexed array instead of private variables, and pass it to your kernel, having each thread index its piece of the global array.
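Sketches of options 1 and 3 follow; the array names (state, flux, scratch) and the YAKL-style real2d and parallel_for calls here are hypothetical placeholders:
// Option 1: make the private array's size a compile-time constant.
int constexpr ORD = 4;
parallel_for( nx , YAKL_LAMBDA (int i) {
  float stencil[ORD];                         // lives in registers / local memory
  for (int s=0; s < ORD; s++) { stencil[s] = state(i+s); }
  flux(i) = 0.25f * ( stencil[0] + stencil[1] + stencil[2] + stencil[3] );
});

// Option 3: one globally indexed scratch array shared by all threads,
// with each thread indexing its own slice.
real2d scratch("scratch",nx,ord);             // ord can now be a runtime value
parallel_for( nx , YAKL_LAMBDA (int i) {
  for (int s=0; s < ord; s++) { scratch(i,s) = state(i+s); }
});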
In C++, the whole point of a class is that the object has its own data to operate on, so it should be no surprise that you will often access your own class's data in a GPU kernel. However, again, you cannot access your class's this pointer because it points to CPU memory. Therefore, when you're creating C++ Lambda kernels to pass to a launcher, you need to avoid using the this pointer. While C++17 allows you to create Lambdas that capture the *this object by value, this isn't the most efficient approach. The best thing to do in this case is to create local references in the class function containing the Lambda. An example of doing this with YAKL is below:
class Exchange {
  ...
  int nPack;
  realArr haloSendBufW;
  realArr haloSendBufE;
  ...
  inline void haloPackN_x(Domain const &dom, realArr const &a, int const n) {
    // Local references keep the lambda from capturing the this pointer
    auto &haloSendBufW = this->haloSendBufW;
    auto &haloSendBufE = this->haloSendBufE;
    auto &nPack        = this->nPack       ;
    parallel_for( Bounds<4>(n,nz,ny,hs) ,
                  YAKL_LAMBDA (int v, int k, int j, int ii) {
      int nGlob = nz*ny*hs;                          // points in one packed field
      int iGlob = v*nGlob + k*ny*hs + j*hs + ii;     // flattened (v,k,j,ii) index
      haloSendBufW(nPack*nGlob+iGlob) = a(v,hs+k,hs+j,hs+ii);
      haloSendBufE(nPack*nGlob+iGlob) = a(v,hs+k,hs+j,nx+ii);
    });
    nPack = nPack + n;
  }
  ...
};
The lines starting with auto are the ones that create the local references that allow this code to work on the GPU. YAKL_LAMBDA is a macro that expands to [=] __host__ __device__, just like KOKKOS_LAMBDA does. The nvcc compiler uses this code to create an anonymous C++ functor whose operator() member function has the __host__ __device__ attribute, meaning it can run on either the host or the device.
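Based on that description, such a macro boils down to roughly the following (a sketch, not YAKL's exact source):
#ifdef __CUDACC__
  #define YAKL_LAMBDA [=] __host__ __device__   // CUDA build: host+device functor
#else
  #define YAKL_LAMBDA [=]                       // CPU-only build: plain capture-by-value lambda
#endif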
Further Resources
- https://github.com/kokkos/kokkos/wiki
- https://raja.readthedocs.io/en/master
- https://github.com/mrnorman/YAKL
AMD ROCm's HC library and Khronos's SYCL standard also implement similar approaches.