GLShader
To understand layers in general, see Graphics Architecture.
This article assumes knowledge of OpenGL ES 2.0.
So I reached the limit of what I could do using PlayN's Canvas layer, and I needed accelerated 2D graphics drawing. The following is how to get this working with PlayN; it applies whether you need accelerated 2D graphics or even 3D (with some considerations).
I've always thought that documentation should be iterative and fun. But since I can't think up a theme, I'll just stick to iterative for this. :-)
So you might start by creating a class called YourShader, similar to GLDemo but separated out from the main game loop class. You'd get hold of a reference to PlayN's conveniently abstracted GL20 instance by calling graphics().gl20(), have Game.paint() call YourShader.paint(), which may configure your shaders and buffers and then draw some vertices.
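Roughly, that naive version might look like this (a sketch only; YourShader and its paint() wiring are our own invention, not PlayN API):

```java
import static playn.core.PlayN.graphics;

import playn.core.gl.GL20;

// Sketch of the naive approach: a hand-rolled shader class that
// talks to GL directly each frame.
public class YourShader {
  private final GL20 gl = graphics().gl20(); // PlayN's abstracted GL ES 2.0 interface

  public void paint() {
    // (Re)create and bind your program, buffers and attributes here, then draw, e.g.:
    // gl.glUseProgram(program);
    // gl.glDrawArrays(GL20.GL_TRIANGLES, 0, vertexCount);
  }
}
```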
This will work. However, on Android the GL context gets destroyed when the activity is paused, and a new one is created when you resume. PlayN doesn't expose a means of detecting this, as it's an Android-specific event and the public framework classes are meant to be platform agnostic. This makes it difficult to re-register all your shaders, buffers, etc.
To the rescue comes PlayN's GLShader class. It was built to abstract away the resource re-initialization problem on Android, as well as providing a standardized way of accelerating parts of the GL pipeline over ES 2.0 & WebGL. Although created for internal use in drawing to Surfaces and ImmediateLayers, one can see its usefulness in examples like ParticleShader in tripleplay, showcased in tripleplay-demo.
We'll iterate towards a final solution. To get an idea of what's going on, have YourShader extend GLShader. It will then need to prepare either a ColorCore or a TextureCore, depending on how you intend to use it (is your ImmediateLayer going to behave more like an ImageLayer or a CanvasLayer?) -FACT CHECK needed-.
Let's assume you're creating a ColorCore. Inside YourShader you'll have:

@Override
protected Core createColorCore() {
  return new YourShaderCore(this.VERTEX_SHADER_CODE, this.FRAGMENT_SHADER_CODE);
}

public class YourShaderCore extends GLShader.Core {
The Core class gets re-instantiated every time your GL context goes stale, so all the registration and bindings will go in there (in the constructor, for now). Most state that used to live in YourShader should now move to YourShaderCore.
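For instance, here's a hedged sketch of what that constructor might hold. The attribute/uniform names and the exact shapes of the Attrib/Uniform handles are assumptions; check your PlayN version's GLShader.Core and GLProgram for the real API:

```java
public class YourShaderCore extends GLShader.Core {
  // Handles fetched from the freshly (re)compiled program, so they are
  // valid for the current GL context. "a_position" and "u_color" are
  // names from our own GLSL source, not from PlayN.
  private final GLShader.Attrib aPosition;
  private final GLShader.Uniform4f uColor;

  public YourShaderCore(String vertShader, String fragShader) {
    super(vertShader, fragShader); // compiles & links; GLShader.Core keeps the program id internal
    aPosition = prog.getAttrib("a_position", 2, GL20.GL_FLOAT);
    uColor = prog.getUniform4f("u_color");
  }

  // ... flush(), destroy(), etc. overrides go here ...
}
```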
To add your shader to the PlayN layer hierarchy, you'd do something like this in Game.init():

final YourShader shader = new YourShader(graphics().ctx());
graphics().rootLayer().add(
    graphics().createImmediateLayer(new ImmediateLayer.Renderer() {
      @Override
      public void render(Surface surface) {
        shader.prepareColor(ColorUtil.GREEN);
        // shader.addQuad(...); // not needed for now; we're drawing our own stuff in flush()
      }
    }));
The prepareColor() call of GLShader is responsible for registering the shader with the current GLContext. GLShaders expect to be used by the standard PlayN layer rendering in a sequence of prepareColor/prepareTexture and addQuad/addTriangle combo calls. You cannot mix and match shaders at this point, as only the last prepared one will actually get drawn (have its flush() called) for this particular ImmediateLayer. (Can't think why you'd want to mix and match within a layer... but certainty is still comforting.)
Your glDrawXXXX code that was in YourShader.paint() should now live in YourShaderCore.flush(), and voilà!
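A hedged sketch of what that move might look like (buffer and count names are illustrative; `gl` is assumed to be the GL20 from graphics().gl20(), and the second argument to send() is assumed to be a GL usage hint):

```java
@Override
public void flush() {
  // Upload whatever we've packed client-side and draw it.
  vertices.bind(GL20.GL_ARRAY_BUFFER);
  vertices.send(GL20.GL_ARRAY_BUFFER, GL20.GL_STREAM_DRAW);
  // ... bind attributes/uniforms against the current program, then:
  gl.glDrawArrays(GL20.GL_TRIANGLE_STRIP, 0, vertexCount);
}
```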
Unfortunately, we've actually registered our shader program twice! Not the most efficient.
One thing to note about GLShader.Core is that it handles the initialization of the shader program in its constructor (which it forces you to call) and then swallows the program id, so you are forced to use its memory management of the attribute handles etc., none of which are exposed (it's a little defensive). On the positive side, this means GLShader can have some certainty about the resources it uses; since it knows which layers have it registered, it can happily dispose of its resources when it knows they aren't needed.
We'll now continue iterating YourShader, making it fit into the PlayN mold. But first...
It's unnerving not to have an intuition about when your code is called, so here's some detail. Here is a stack trace, from the Java backend, at the point your Core.flush() implementation is called:
JavaGLContext(GLContext).flush() line: 247
JavaGLContext(GLContext).useShader(GLShader, boolean) line: 258
YourShader(GLShader).prepareColor(int) line: 147
SketchGuide$2.render(Surface) line: 153
ImmediateLayerGL.render(InternalTransform) line: 134
ImmediateLayerGL.paint(InternalTransform, int, GLShader) line: 128
GroupLayerGL.render(InternalTransform, int, GLShader) line: 173
GroupLayerGL.paint(InternalTransform, int, GLShader) line: 166
JavaGLContext(GL20Context).paintLayers(GroupLayerGL) line: 76
JavaGraphics.paint(Game, float) line: 156
GLContext.useShader() is chiefly responsible for calling Core.flush(), which will drive your code. This process is usually kicked off by the subsequent shader's prepareTexture/prepareColor, flushing your changes before it prepares its own. So if you have more than one layer in your scene, you'll likely see the following (assuming the next shader to run is the QuadShader of an ImageLayer):
YourShader(GLShader).flush() line: 163
JavaGLContext(GLContext).flush() line: 249
JavaGLContext(GLContext).useShader(GLShader, boolean) line: 261
QuadShader(GLShader).prepareTexture(int, int) line: 120
JavaCanvasImage(ImageGL).draw(GLShader, InternalTransform, float, float, float, float, boolean, boolean, int) line: 101
ImageLayerGL.paint(InternalTransform, int, GLShader) line: 115
GroupLayerGL.render(InternalTransform, int, GLShader) line: 173
GroupLayerGL.paint(InternalTransform, int, GLShader) line: 166
JavaGLContext(GL20Context).paintLayers(GroupLayerGL) line: 76
JavaGraphics.paint(Game, float) line: 156
JavaPlatform.run(Game) line: 327
PlayN.run(Game) line: 47
Because GLShader & Core swallow the handle of your GL shader program, you have to use the exposed methods of the framework in order to speak to your shader. This means you won't be creating your own FloatBuffers and calling into GL commands that take a handle and a FloatBuffer; i.e. the less common pattern of glVertexAttribPointer -> glEnableVertexAttribArray -> glDrawArrays won't be available to you. Instead, PlayN provides the GLBuffer.Float & GLBuffer.Short classes, which can be created by the GLContext. These expect to be used by being bound to GL_ARRAY_BUFFER & GL_ELEMENT_ARRAY_BUFFER, the generally more accepted & efficient method.
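A minimal sketch of that pattern, assuming the GLBuffer API as described here (in particular, the second argument to send() is assumed to be a GL usage hint; SIZE is illustrative):

```java
// Create a client-side-managed buffer via the GLContext...
GLBuffer.Float vertices = ctx.createFloatBuffer(SIZE);
// ...fill its nio data...
vertices.add(0f); vertices.add(0f);
vertices.add(1f); vertices.add(0f);
// ...then bind the corresponding GL buffer and send the data to it.
vertices.bind(GL20.GL_ARRAY_BUFFER);
vertices.send(GL20.GL_ARRAY_BUFFER, GL20.GL_STATIC_DRAW);
```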
Here's some Android-related goodness: http://www.learnopengles.com/android-lesson-seven-an-introduction-to-vertex-buffer-objects-vbos/
HOWEVER! One thing to notice about these PlayN buffers: although it appears that you call, say, elements.bind(TARGET) and then elements.send(TARGET, ...), the send delivers the data to the CURRENTLY bound buffer. If you called elements2.bind(TARGET) followed by elements.send(TARGET, ...), you'd be sending the buffer data, probably unintentionally, to the GL buffer whose id is stored within elements2. This happens because the PlayN GL20Buffers are responsible for managing the nio buffer as well as the GL buffer. So caution is advised when performing complex buffer operations.
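To make that concrete (a sketch, with elements/elements2 as in the text above):

```java
elements2.bind(GL20.GL_ELEMENT_ARRAY_BUFFER); // binds the GL buffer owned by elements2
// ... fill elements' client-side data via its add(...) calls ...
elements.send(GL20.GL_ELEMENT_ARRAY_BUFFER, GL20.GL_STATIC_DRAW);
// ^ uploads elements' nio data into whatever is CURRENTLY bound on the
//   target, i.e. the GL buffer whose id lives in elements2. Always pair
//   bind() and send() on the same GLBuffer instance.
```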
Perhaps you're like me: although you're familiar with OpenGL, you generally don't use it every day. It's not much fun trying to figure out two APIs simultaneously. If that's the case, here's a version of ParticleCore with pretty fleshed-out comments to serve as a reminder of what is going on.
[Link]
So after questions in the forum, I thought I'd start a section about PlayN's attributes (odds and ends).
The question was "why should we set the stride as an argument to the bind method, when I thought it was settled when the Attrib variable was created?"
The answer (I think) is that OpenGL is designed to be a Swiss Army knife. PlayN simply wraps the OpenGL API faithfully, exposing this Swiss Army knife functionality as best it can given its own concerns. This is the best way to future-proof as PlayN decides to expose more functionality.
I think in GL you can reuse VAOs and VBOs with different shaders, or, for efficiency, you may also decide to pack different, non-congruent data inside the same VBO.
So consider:

Shader1: vec3 gg, vec2 hh
Shader2: vec2 kk

Float buffer X = [0,0,0,1,1, 2,2,2,3,3, 4,4, 5,5, 6,6, 7,7]   // meaningless data
                  {   A   }  {   B   }  {C}  {D}  {E}  {F}
So in the single float buffer X we might send A & B to Shader1, and C, D, E & F to Shader2. OpenGL has no way of knowing ahead of time that this is the case, so each time we bind for Shader1 we set a stride of 20 (5 floats x 4 bytes per float) and then the offset positions for gg and hh. Likewise, we have to reset the stride when binding the same buffer for Shader2.
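A hedged sketch of that rebinding, assuming Attrib handles named gg, hh and kk obtained from each shader's program, a GLBuffer.Float named x, and an Attrib.bind(stride, offset) signature (the exact API may differ by PlayN version):

```java
// Shader1 reads 5 floats per vertex (vec3 gg + vec2 hh): stride = 5 x 4 = 20 bytes.
x.bind(GL20.GL_ARRAY_BUFFER);
gg.bind(20, 0);   // vec3 at byte offset 0
hh.bind(20, 12);  // vec2 at byte offset 12 (after 3 floats)
// ... draw A & B with Shader1 ...

// Shader2 reads 2 floats per vertex (vec2 kk): stride = 2 x 4 = 8 bytes,
// starting after Shader1's 10 floats (40 bytes) in the same buffer.
x.bind(GL20.GL_ARRAY_BUFFER);
kk.bind(8, 40);   // vec2 at byte offset 40
// ... draw C, D, E & F with Shader2 ...
```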
The information presented above should make it easy to tailor GLShader-derived classes to your purposes. It's important to bear in mind that although GLShader does represent a mapping onto the underlying GL shaders, it's not yet meant to be a complete 1:1 mapping, and it also takes on other tasks specific to the PlayN layer-drawing lifecycle, which you need not implement if you don't need them.
Note that by going the GLShader route, you are giving up on non-GL/WebGL implementations like Internet Explorer.
(probably could write a bit more about the attribute bindings).
- Be careful, after you've created a FloatBuffer with vertices = ctx.createFloatBuffer(SIZE), that subsequent calls intended to pass floats go to vertices.add(float) and not mistakenly to vertices.add(int).
- If a shader fails to load/execute and breaks OpenGL, it's likely that every shader of subsequent layers that has yet to be instantiated may also fail. If this happens silently, all later calls on those shaders will fail, because the code they needed to run at instantiation will never run again. Look for the layer/shader in the chain that breaks it.
- Be mindful of the depth of your layer. If it's not displaying, it may be at the wrong depth or have its visibility turned off.
- Make sure to use GL logging: glGetShaderInfoLog and glGetProgramInfoLog (see the first sketch after this list).
- Use asserts liberally. For complex VBOs, it's useful to assert beforehand how many vertices and elements you plan to pack in, and the spare capacity, and then assert afterwards how many were actually packed (see the second sketch after this list).
- The Cg toolkit from Nvidia can check your GLSL offline: `cgc -oglsl -strict -glslWerror -nocode -profile gpu_vp vert.glsl`
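On the GL logging point, here's a minimal sketch of pulling the compile log, assuming `gl` is the GL20 from graphics().gl20() and FRAGMENT_SHADER_CODE is your own source string:

```java
int shader = gl.glCreateShader(GL20.GL_FRAGMENT_SHADER);
gl.glShaderSource(shader, FRAGMENT_SHADER_CODE);
gl.glCompileShader(shader);
// The info log is where driver compile errors/warnings end up.
String log = gl.glGetShaderInfoLog(shader);
if (log != null && log.length() > 0)
  PlayN.log().warn("Shader compile log: " + log);
```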
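And on asserts for complex VBOs, a sketch of the pattern described above (counts and names are illustrative; position()/capacity() are assumed to be available on PlayN's GLBuffer):

```java
int startPos = vertices.position();
int expectedFloats = quadCount * 4 /* verts per quad */ * 5 /* floats per vert */;
// Before packing: do we have the room we think we have?
assert vertices.capacity() - startPos >= expectedFloats : "insufficient spare capacity";
// ... pack the quads ...
// After packing: did we write exactly what we planned?
assert vertices.position() - startPos == expectedFloats : "packed unexpected float count";
```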