GUI: First Prototype

After some more work on the GUI system, I decided to capture a little video. Unfortunately the mouse cursor wasn’t captured, so it may look a bit weird.

Up to now I have implemented Window, Text and Button elements. Windows can be resized and moved at will and may hold any number of the other GUI elements. The Text element supports strings formatted with basic HTML syntax. The text can be aligned left, right, top or bottom, or centered. Buttons have various attachment points for callbacks, which can be written either in C/C++ or in Python script.
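To illustrate the callback attachment points, here is a minimal sketch of how a button might store and fire its callbacks. The class and method names are made up for this example; the engine’s real API will differ, and a Python callback would arrive already wrapped by the script binding layer.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch -- not the engine's actual button class.
class nx_GUIButton_Sketch
{
  public:
    using Callback = std::function<void()>;

    // Attach a callback; a Python-scripted callback would be wrapped
    // into a std::function by the script binding layer.
    void attachOnClick(Callback cb) { m_onClick.push_back(std::move(cb)); }

    // Called by the GUI system when a click lands on the button.
    void click()
    {
        for (const Callback& cb : m_onClick)
            cb();
    }

  private:
    std::vector<Callback> m_onClick;
};
```

Using std::function keeps the button agnostic about where the callback came from, which is exactly what mixing C/C++ and scripted handlers requires.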

 

Basic GUI work

After having worked a lot on the lighting/shadow mapping part of the renderer and having dug through tons of math in the process, I decided to pause my work in that area and do something different instead. So I did some bug hunting, improved OpenGL state handling and cleaned up a lot of code.

Another important thing I started to tackle is the GUI framework, which has been overdue for quite some time now. In the process of implementing a nice and simple GUI, I’ll rework the scene graph as well and integrate it into the GUI as a special 3D scene widget. So far I have started to code a window and a text class. Both are derived from a base class called nx_ScreenElement. Here is a stripped-down code snippet of that class:

class nx_ScreenElement
{
  protected:
    nxRect_t m_position;
    I32 m_zOrder;
    bool m_visible;

  public:
    nx_ScreenElement(const nxRect_t& position) : m_position(position), m_zOrder(0), m_visible(true) {}
    virtual ~nx_ScreenElement() {}
    virtual bool onMessage(const nx_Message& message) = 0;
    virtual void update(const F32 deltaTimeInSeconds) = 0;
    virtual void render(const F32 deltaTimeInSeconds) = 0;
    virtual void restore() = 0;

    I32 getZOrder() const { return m_zOrder; }
    void setZOrder(const I32 zOrder) { m_zOrder = zOrder; }

    bool isVisible() const { return m_visible; }
    void setVisible(const bool visible) { m_visible = visible; }

    bool operator<(const nx_ScreenElement& rhs) const { return getZOrder() < rhs.getZOrder(); }
};

 

Every element on the screen will be based on this class, for example: nx_Scene, nx_GUIWindow, nx_GUIText, nx_GUIButton, … There is a master list of all screen elements in the engine which is sorted by z-order. System messages like KeyDown, MouseMove, MouseClick, … are then sent to the elements from front to back via the onMessage method. Each screen element may do some work based on a message and either consume it or pass it on. This way it is incredibly easy to pass input to all active screen elements. Basically the same happens with update, render and restore. The render method, for example, is called on all screen elements in the list in reverse order to draw everything from back to front.
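The ordering described above can be sketched in a few lines. This is a simplified stand-in, not the engine code: it assumes a lower z-order means “closer to the viewer”, sends a message front to back until some element consumes it, and renders back to front.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Stand-in for nx_ScreenElement; the real class is virtual, this one
// just carries enough state to demonstrate the ordering.
struct Element
{
    int zOrder;          // assumption: smaller z = closer to the viewer
    bool consumesInput;
    std::string name;
};

// Send a message front to back; stop once an element consumes it.
std::vector<std::string> dispatchMessage(std::vector<Element> elements)
{
    std::sort(elements.begin(), elements.end(),
              [](const Element& a, const Element& b) { return a.zOrder < b.zOrder; });
    std::vector<std::string> visited;
    for (const Element& e : elements)
    {
        visited.push_back(e.name);
        if (e.consumesInput)
            break;  // consumed: elements behind never see the message
    }
    return visited;
}

// Render back to front so nearer elements are drawn over farther ones.
std::vector<std::string> renderOrder(std::vector<Element> elements)
{
    std::sort(elements.begin(), elements.end(),
              [](const Element& a, const Element& b) { return a.zOrder < b.zOrder; });
    std::vector<std::string> order;
    for (auto it = elements.rbegin(); it != elements.rend(); ++it)
        order.push_back(it->name);
    return order;
}
```

Keeping one sorted master list means input and rendering are just the same traversal in opposite directions.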

I’m still in the early prototyping and testing phase, but the first results are very promising. Here is a screenshot of 4 test windows which can be freely moved and resized on the screen. They are already correctly sorted by z-order and will switch to the front when selected. All colors are still static at the moment, but I have coded the classes in a way that will make it extremely easy to make everything completely customizable in the future. There is basic code for texturing/theming support in place as well, even if I haven’t used it yet.

Shadow Mapping: Point Lights

Success! After having implemented shadow mapping for directional and spot lights, the engine now supports point lights as well. As usual the code is a bit quick’n’dirty and needs some serious cleanup. Nevertheless I’m already very pleased with the performance in this unoptimized state. The following video was captured with 4 fully dynamic, shadow-casting point lights. FPS was around 70 on my pretty low-tech PC: AMD Athlon X2 4450e (2.3 GHz dual-core), ATI HD4670. Even CPU usage was very low, so it’s definitely GPU bound. This leaves a lot of room for CPU stuff like game state handling, physics, culling, resource loading, sound and so on…
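For reference, a point light’s shadow map is typically a cube map rendered in six passes, one per face, each with a 90° field of view and an aspect ratio of 1. The six camera orientations below follow the standard OpenGL cube map conventions; the surrounding code is just an illustration, not the engine’s implementation.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Forward/up pair for the light camera rendering one cube map face.
struct FaceView { Vec3 forward; Vec3 up; };

// Orientations for GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, i = 0..5,
// following the standard OpenGL cube map face conventions.
std::array<FaceView, 6> cubeShadowFaceViews()
{
    return {{
        { { 1.f, 0.f, 0.f}, {0.f, -1.f,  0.f} },  // +X
        { {-1.f, 0.f, 0.f}, {0.f, -1.f,  0.f} },  // -X
        { { 0.f, 1.f, 0.f}, {0.f,  0.f,  1.f} },  // +Y
        { { 0.f,-1.f, 0.f}, {0.f,  0.f, -1.f} },  // -Y
        { { 0.f, 0.f, 1.f}, {0.f, -1.f,  0.f} },  // +Z
        { { 0.f, 0.f,-1.f}, {0.f, -1.f,  0.f} },  // -Z
    }};
}
```

The six depth passes per light are what makes point-light shadows so much more expensive than a single spot-light shadow map.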


 

Shadow Mapping: Spot Light

I finished implementing shadow mapping for spot lights today. It’s only the first try, but it already looks pretty nice. I’m sure with some tweaks in the future it will be really good. Here is a screenshot of a single spot light. The blue line is the direction of the light and the red lines are the light’s view frustum. At the bottom right is the shadow map used for the light.

Basic Shadow Mapping

Another step forward: the basic shadow mapping implementation is done. Up to now it only works for directional lights and is still very hacky. It definitely needs a few optimizations and cleanups, but it’s still a nice start. I have tweaked my demo application a bit and added one directional light which spins around the center of the scene.
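For readers new to the technique, shadow mapping boils down to one comparison per fragment: is the fragment farther from the light than the nearest occluder recorded in the shadow map? Here is a deliberately minimal sketch of that comparison for a directional light; the names and the bias value are illustrative, not taken from the engine.

```cpp
struct Vec3d { float x, y, z; };

// Distance of a world-space point along the (normalized) light
// direction. This is what the depth pass stores in the shadow map
// for the texel the point falls into.
float lightSpaceDepth(const Vec3d& point, const Vec3d& lightDir)
{
    return point.x * lightDir.x + point.y * lightDir.y + point.z * lightDir.z;
}

// A fragment is lit if it is not farther from the light than the
// stored occluder depth, plus a small bias against self-shadowing
// artifacts ("shadow acne").
bool isLit(float fragmentDepth, float shadowMapDepth, float bias = 0.005f)
{
    return fragmentDepth <= shadowMapDepth + bias;
}
```

In the real shader this comparison runs per fragment after transforming the position into the light’s clip space; the bias is one of the main things that needs tweaking.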

 

Deferred Shading: Demo

I built a little demo scene and captured a video. It has 4 fully dynamic point lights:

 

Furthermore the engine now supports directional, point and spot lights. All of them are integrated into the scene graph and are very easy to use. Next up are optimizations and the shadow implementation.

Deferred Shading: First prototype

I finally did it! After several days of testing, researching the net, going through tons of 3D math and countless frustrating hours, I have the first working prototype of the deferred renderer running. My main problems were a bug in my normals and, above all, reconstructing the view-space position from the depth buffer. Especially the latter was surprisingly tricky and forced me to dig through matrix math stuff again.

Nevertheless I have tackled all the main problems and can hopefully do some nice things now. For the moment, here is a screenshot of the prototype. It has 3 lights active and is fully deferred rendered.
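Since reconstructing the view-space position was the tricky part, here is one common way to do it, assuming the buffer stores linear view-space depth and a standard right-handed OpenGL view space (camera looking down −Z). This is a generic sketch, not the engine’s actual code.

```cpp
#include <cmath>

struct VVec3 { float x, y, z; };

// Reconstruct a view-space position from normalized device coordinates
// (ndcX, ndcY in [-1,1]) and linear view-space depth (-z, positive).
// fovY is the full vertical field of view in radians.
VVec3 reconstructViewPos(float ndcX, float ndcY, float linearDepth,
                         float fovY, float aspect)
{
    const float tanHalfFovY = std::tan(fovY * 0.5f);
    // Scale the NDC ray by depth: at distance d the visible half-height
    // is d * tan(fovY/2), and the half-width is that times the aspect.
    return { ndcX * linearDepth * tanHalfFovY * aspect,
             ndcY * linearDepth * tanHalfFovY,
             -linearDepth };
}
```

A good sanity check is to project a known view-space point forward with the same FOV/aspect and verify that the reconstruction returns it exactly; that kind of round-trip test catches most sign and convention bugs.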

Deferred Shading: G-Buffer

Another quick update on the deferred shading progress. I finished prototyping the G-Buffer. Here is the layout:

GL_R32F  |                       Depth                       |
GL_RG16F |         NormalX         |         NormalY         |
GL_RGBA8 |   ColorR   |   ColorG   |   ColorB   | MaterialID |
GL_RGBA8 | SpecularR  | SpecularG  | SpecularB  | Shininess  |
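Storing only two normal channels works because a unit normal satisfies x² + y² + z² = 1, so z can be recomputed in the lighting pass. Here is a minimal sketch of the decode, assuming view-space normals with z ≥ 0 (a sign-preserving scheme such as sphere-map encoding would be needed otherwise); it is an illustration, not the engine’s actual code.

```cpp
#include <algorithm>
#include <cmath>

struct NVec3 { float x, y, z; };

// Rebuild the z component of a unit normal from the two stored
// channels. max() guards against tiny negative values caused by
// the limited precision of the GL_RG16F buffer.
NVec3 decodeNormal(float nx, float ny)
{
    const float zSq = std::max(0.0f, 1.0f - nx * nx - ny * ny);
    return { nx, ny, std::sqrt(zSq) };
}
```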

And now a screenshot. On the left you can see the 4 buffers, starting from the bottom: Depth, Color, Normal and Specular.

The main render is still done with default rendering. I’ll implement the deferred shading pass next.

 

Shader fun

Lately I updated the engine renderer to use core OpenGL 3 only. While there are still some things to work out, the engine now runs completely on shaders. This allowed me to implement some amazing things.

1. Completely flexible vertex formats.

Adding a new vertex definition is pretty easy now. First you need to typedef a struct for storing the vertices:

typedef struct myVertexType
{
  F32V3 position;
  F32V3 normal;
  F32V2 texCoord0;
  F32 blendFactor;
} myVertexType_t;

The next step is to create a definition for the vertex type:

nxVertexAttribute_t myVertexDef[] = {
  { nxVertex::Position, 3 },
  { nxVertex::Normal, 3 },
  { nxVertex::TexCoord0, 2 },
  { SID("BlendFactor"), 1 },
  { nxVertex::End, sizeof(myVertexType_t) }
};

And that’s it. This can be done completely separately from the engine, so no engine recompiles are needed!

The first 3 attribute names (nxVertex::Position, …) are predefined in the engine and automatically linked to all shaders. “BlendFactor” is a custom attribute name which must match the corresponding name and type in the shader (“in float BlendFactor”). The numbers after the names are the sizes of the attributes in 4-byte steps. The last element, nxVertex::End, tells the engine that the definition is complete, and sizeof(…) is the stride of the vertex type.
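To show how such a definition might be consumed, here is a sketch that walks a definition array and derives each attribute’s byte offset, as would be needed for glVertexAttribPointer. The struct and the End sentinel mirror the snippet above, but the details are assumptions rather than the engine’s real code.

```cpp
#include <cstddef>
#include <vector>

// Mirrors nxVertexAttribute_t from the text: a semantic id (or string
// hash) and a size counted in 4-byte steps, terminated by an End entry.
struct VertexAttribute
{
    int name;   // stands in for nxVertex::Position etc. or SID("...")
    int size;   // component count in 4-byte units
};

const int kEnd = -1;  // stands in for nxVertex::End

// Walk the definition and compute the byte offset of each attribute.
std::vector<std::size_t> attributeOffsets(const VertexAttribute* def)
{
    std::vector<std::size_t> offsets;
    std::size_t offset = 0;
    for (; def->name != kEnd; ++def)
    {
        offsets.push_back(offset);
        offset += static_cast<std::size_t>(def->size) * 4;  // 4 bytes per step
    }
    return offsets;
}
```

For the myVertexType_t example above this would yield offsets 0, 12, 24 and 32, matching the struct layout.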

2. Framebuffer Objects

To get the real power out of shaders, framebuffer objects are a must-have. So I implemented them in the engine and started to play around with them a bit. Up to now I can create FBOs and attach textures to them, which are used as render buffers. Then the FBO can be activated and used for rendering. After that the attached textures can be used as input for further shading passes or for normal texture mapping. This allows some very interesting effects like mirror surfaces, environment mapping or procedural texture generation.

I created a basic demo which first renders the scene into an FBO. In the next pass the textures from the FBO are blended into the scene. The two textures used as buffers in the FBO are color and depth:

3. Next: Deferred Shading

The whole point of this OpenGL 3 / shader / FBO work is that it allows the implementation of deferred shading. I really liked the idea behind this technique when I first read about it, and now it’s time to implement it myself. Some links if you’re interested in the topic:

Deferred Shading Tutorial (PDF)

Deferred Shading – Wikipedia

That’s it for now. I’ll post some more pics if I have something nice to look at.

NX-Engine progress

While still doing some Python integration I started working on several other parts of the engine.

1. Ported from SDL 1.2 to SFML 2.0

Up to now the engine used SDL 1.2 as its wrapper around low-level stuff like creating windows and getting events. This worked OK so far but had some major drawbacks:

  • SDL is licensed under the LGPL. While it is still an open-source license and far better than the GPL, you are only allowed to link against the library dynamically.
  • SDL 1.2 isn’t able to create OpenGL 3+ contexts.
  • The source code isn’t as clean and readable as I would wish. (C code)

I had been aware of SFML for quite a while, and while the 2.0 version is still in development and has some bugs, it offers clear advantages:

  • The license is zlib/png, which is awesome because it allows dynamic/static linking plus pretty much total freedom.
  • It is possible to create OpenGL 3+ contexts.
  • Source code is very nice and clean. (C++ object oriented)

So I got rid of SDL and plugged SFML into the engine. To my surprise this was easier than I thought, and the whole process went smoothly in a few hours. Maybe a sign of a good engine structure? I hope so!

While SFML has its benefits, there were some drawbacks as well: 1. image loading support isn’t as good as SDL_image, and 2. SFML doesn’t support condition variables for thread synchronization. I worked around these problems by integrating DevIL for image loading and porting all threading stuff to boost::thread.
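For illustration, here is the kind of condition-variable pattern the engine needed, written with the std:: primitives, which mirror the boost::thread API almost one-to-one: a consumer blocks until a producer publishes a value. This is a generic sketch, not the engine’s code.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Single-slot mailbox: take() blocks until put() has delivered a value.
class Mailbox
{
  public:
    void put(int value)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_value = value;
            m_hasValue = true;
        }
        m_cond.notify_one();  // wake the waiting consumer
    }

    int take()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        // Wait with a predicate to be robust against spurious wakeups.
        m_cond.wait(lock, [this] { return m_hasValue; });
        m_hasValue = false;
        return m_value;
    }

  private:
    std::mutex m_mutex;
    std::condition_variable m_cond;
    bool m_hasValue = false;
    int m_value = 0;
};
```

Exactly this kind of blocking handoff is what a threaded resource loader needs, and it cannot be built cleanly from mutexes alone.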

DevIL is an excellent library with an absolutely lovely API, which is pretty much the same style as OpenGL’s. It offers support for an amazing range of picture formats and perfectly fits the needs of the engine.

2. gDEBugger fun and optimizations

If you haven’t heard of gDEBugger yet, it is an OpenGL profiler, analyzer and debugger with some nice features. It was recently made available for free, and every OpenGL programmer should grab a copy immediately. With the help of this program I was able to optimize the renderer quite a bit by removing redundant GL calls. The result of 1 hour of work was a 2.5x increase in FPS when running under valgrind’s memcheck!
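A typical way such redundant calls get removed is a thin state cache in front of the GL that only forwards glEnable/glDisable when the cached state actually changes. The sketch below counts “driver calls” instead of calling the real GL so it can run without a context; it is an illustration, not the engine’s implementation.

```cpp
#include <map>

// Caches enable/disable state per capability and filters out calls
// that would not change anything.
class StateCache
{
  public:
    void setEnabled(unsigned int capability, bool enabled)
    {
        auto it = m_state.find(capability);
        if (it != m_state.end() && it->second == enabled)
            return;  // redundant call filtered out
        m_state[capability] = enabled;
        ++m_driverCalls;  // here the real code would call glEnable/glDisable
    }

    int driverCalls() const { return m_driverCalls; }

  private:
    std::map<unsigned int, bool> m_state;
    int m_driverCalls = 0;
};
```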

3. Actors for the engine

I started to implement some actor stuff in the engine. Up to now this allows the creation and deletion of objects through script. Next steps are moving things around via script commands and coupling the system with the scene graph.
