March 22, 2012

BGE Candy branch

From now on I am a part of the Blender development team. Together with Matthew Dietel (aka mokazon), we have started a new Blender development branch called "BGE Candy", with the main goal of fixing broken features and adding missing and new (mostly visual) features to the Blender Game Engine.

I am quite excited about this; I even made our own logo and splash screen (with "sweet", "eye", "candy" and "Blender" in mind).



So here is the list of stuff currently done and yet to be done.

done:
• SSS (subsurface scattering) material
• rectangular area lights
• logic connector highlighting in UI

in progress:
• parallax mapping
• volumetric point & spot lights
• real-time composite nodes

planned:
• volumetric shadowing for volumetric lights
• environment cube-mapping
• scrolling texture coordinates in Blender material editor
• timer node in material node editor
• world normals in material node editor
• extra uniform inputs for 2D GLSL filters
• extra 2D GLSL filters
• Sky and Atmosphere
• fog volumes

The BGE Candy branch thread on BlenderArtists is HERE

March 20, 2012

stereo 3D for games and Inficolor triOviz

Dalai Felinto proposed a little challenge to Mike Pan and me: to add support for "Inficolor triOviz 3D" glasses to the Blender Game Engine. I am quite crazy about challenges I am not sure I can handle, so I started investigating the techniques behind stereo 3D.

Oh, and in return Dalai sent me a pair of glasses. Another pair I had the honor of sending to the Blender Foundation; it seems it has already found a use there, haha.



http://mango.blender.org/production/kickoff-workshop-day-2/

While Blender already has various stereo options, they are all based on rendering two separate images, one for each eye, from two cameras: one moved a little to the right, the other to the left. Special glasses (anaglyph, passive or active) then separate the images for the correct eye. Sounds simple and reasonable, but it does not work quite well. The explanation is found HERE

implementation
Simply translating the camera position leaves both eyes' view directions parallel to each other, and for closer objects a large portion of them will be visible to only a single eye, which causes considerable eye strain.
So we need to add a thing called eye convergence: both eyes aim at the object you are looking at. This means that for a distant object the eyes are almost parallel, but for a close object they must toe inward. So for our stereo 3D implementation we need to add a focal point to focus on. Now we have both cameras translated by half the eye-separation distance and oriented to look at the focal object.
But we are not done yet: with both cameras rotated, their orientations differ and the convergence plane is no longer parallel to the screen plane. This causes vertical parallax, which adds confusion and discomfort to the whole 3D experience. So we need to keep only the horizontal parallax, and that requires changing the camera projection matrix to get a non-symmetric (off-axis) camera frustum.
HERE is a great article by Paul Bourke with a more detailed explanation and implementation details.
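
To make this concrete, below is a minimal sketch of the off-axis frustum math following Paul Bourke's formulation, written in GLSL syntax purely for illustration. The function and parameter names are hypothetical ones of my own, not part of any Blender/BGE API.

// A minimal sketch of the off-axis (asymmetric) frustum, following
// Paul Bourke's formulation; names here are my own, for illustration.
// eye is -1.0 for the left eye, +1.0 for the right eye.
mat4 offAxisFrustum(float eye, float fov, float aspect,
                    float znear, float zfar,
                    float eyeSep, float focalDist)
{
    float wd2  = znear * tan(0.5 * fov);  // half height of the near plane
    float ndfl = znear / focalDist;       // near-to-focal-distance ratio

    // skew the left/right clip planes opposite to the eye offset, so
    // both frustums meet at the focal (zero parallax) plane
    float shift  = 0.5 * eyeSep * ndfl * eye;
    float left   = -aspect * wd2 - shift;
    float right  =  aspect * wd2 - shift;

    // standard glFrustum-style matrix; GLSL mat4 is column-major
    return mat4(
        vec4(2.0 * znear / (right - left), 0.0, 0.0, 0.0),
        vec4(0.0, znear / wd2, 0.0, 0.0),
        vec4((right + left) / (right - left), 0.0,
             -(zfar + znear) / (zfar - znear), -1.0),
        vec4(0.0, 0.0, -2.0 * zfar * znear / (zfar - znear), 0.0));
}

With this matrix each camera is only translated by eye * eyeSep / 2 along its right vector, with no rotation; the convergence comes entirely from the skewed frustum, so no vertical parallax is introduced.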

too slow?
Now, how about performance? While this might not be an issue for higher-end PCs, using this method for demanding console games is unimaginable: rendering the same scene twice and cutting an already shaky framerate in half is a really expensive sacrifice for an extra feature.

solution
Do it as a post-process shader. We can re-project the already rendered scene for both eyes using a simple parallax shift driven by the magical z-buffer.
There are some shortcomings to the technique though: information is missing behind foreground objects and at the view edges, shaders that depend on view position (like reflections and specularity) will look wrong, and there is no alpha blending due to the lack of z-information.
These issues can be somewhat fixed. The changes between the original and the projected image are quite subtle, so missing information (obscured by other objects or cut off by the shifted view frustum) can be filled in or stretched over. And view-dependent shaders and alpha-blended objects can be rendered separately into an FBO with the two-camera approach and composited into the final image.
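
To illustrate the idea, here is a minimal sketch of that re-projection for one eye. bgl_RenderedTexture and bgl_DepthTexture are the samplers BGE exposes to 2D filters, but the constants and the exact parallax formula are my own assumptions, not the shader linked further down.

// A minimal sketch of depth-based re-projection for one eye;
// constants and the parallax formula are assumed, for illustration.
uniform sampler2D bgl_RenderedTexture;
uniform sampler2D bgl_DepthTexture;

const float znear = 0.1;          // camera clip range (assumed values)
const float zfar  = 100.0;
const float focalDist   = 10.0;   // zero-parallax (screen plane) distance
const float maxParallax = 0.01;   // max horizontal shift, in UV units

// recover linear eye-space depth from the hardware z-buffer
float linearDepth(vec2 uv)
{
    float z = texture2D(bgl_DepthTexture, uv).x;
    return 2.0 * znear * zfar
         / (zfar + znear - (2.0 * z - 1.0) * (zfar - znear));
}

// shift the rendered image horizontally, proportional to how far each
// pixel lies from the focal plane; eye = -1.0 (left) or +1.0 (right)
vec3 eyeSample(vec2 uv, float eye)
{
    float depth    = linearDepth(uv);
    float parallax = maxParallax * (1.0 - focalDist / depth);
    return texture2D(bgl_RenderedTexture, uv + vec2(eye * parallax, 0.0)).rgb;
}

Pixels on the focal plane get zero shift and sit on the screen; nearer pixels shift one way and pop out, farther pixels shift the other way and recede.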

GLSL implementation
Well, basically it is my depth of field shader with two texture samples, one for each eye, multiplied with magenta and green; that's all.
I could not get my hands on Crytek's SSRS technique, but mine works quite well.
The shader has a wide variety of tweakable user controls, including autofocus and automatic parallax calculation proportional to the distance of the focal plane, for the most comfortable viewing. By the way, it goes hand in hand with the depth of field shader.
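
In sketch form, that two-sample combine looks like this (reusing the hypothetical eyeSample() from the sketch above; the real shader linked below adds autofocus and the other controls):

void main()
{
    vec2 uv    = gl_TexCoord[0].st;
    vec3 left  = eyeSample(uv, -1.0);  // left eye view
    vec3 right = eyeSample(uv,  1.0);  // right eye view
    // left eye keeps red and blue (magenta), right eye keeps green
    gl_FragColor = vec4(left.r, right.g, left.b, 1.0);
}

Swapping which eye keeps which channels (and the shift direction) is all it takes to adapt the same combine to other anaglyph glasses, e.g. red/cyan.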

GLSL shader is HERE

and blend file HERE
controls:
mouse+WASD - movement
1 - enables stereo
2 - enables depth of field shader

So anyway, what is so special about this Inficolor 3D?
You don't need a 3D display to view 3D content, so it is relatively cheap, and it works! Sure, it can't quite compete with the image quality that active shutter glasses offer, but the depth is still great and the loss of color is minimal. Bright parts are a little washed out, but otherwise I am impressed by the image anaglyph 3D can offer. Also, the eye strain is minimal even after longer use.

If you have magenta/green glasses, here are my results (click on them for larger size and better 3D depth):









or cross-eyed version: