Monday 3 June 2013

Shader Programming and Authoring

For this post I'll go into something a bit different and talk about things in a more general way.

So we're in 2013: graphics card power has exploded, and the "way to go" for real-time graphics is shaders.

We see them everywhere: demos, fancy new WebGL showcases, visual programming...

The cool thing with shaders (depending on the environment you use) is that they can easily be compiled on the fly, and you get reflection data to tweak parameters. That can make them an easy, "designer-friendly" tool.

That, of course, creates some issues. Creating bad shaders is easy. Creating a good shader subsystem, or good advanced shaders, is (very) hard. It's simple to copy/paste a random piece of HLSL/GLSL/Cg found on the net and tweak it to do (approximately) whatever you (kinda) want to do with it. (Almost) no graphics API understanding needed.

That brings another problem: most of your code base becomes a pile of copy/pasted bits of GPU code (that you don't really understand).

That's fine for basic post processors and fancy visuals like in the excellent Shadertoy, and for random prototyping, but when you have advanced systems (fully fledged particle systems, geometry processors, compute shaders doing heavy simulations), it's a different kettle of fish.

So shader programming is no different from standard programming (in some aspects): you need to really understand how it works, have some coding guidelines, and keep the code pretty clean (and also know how parallel processing works, especially when you go into GPGPU, e.g. CUDA/Compute/OpenCL).

People will always argue that strict guidelines make things less tweakable: having proper predefined naming, includes and conditional compilation defines to modify a shader, or a pipeline that does one job but does it very well, instead of a full shader from which you can change any single part if you want. I don't agree with that.

A clean system is easier to maintain. Correct naming means all classes use the same base code without needing to change any variable name or semantic; adding properties to your data means you only change the relevant parts or add a conditional compilation define instead of rewriting half of your code base; and it's much easier to run a test compile (fewer files) and glue things together. It's not easy to do, and it takes time.
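
For instance, a conditional compilation define can add a property to particle data without touching the rest of the shaders (a minimal sketch; the struct and define names are made up for the example):

struct Particle
{
    float3 Position;
    float3 Velocity;
#if defined(PARTICLE_HAS_LIFETIME)
    float Lifetime; // only compiled in when the effect actually needs it
#endif
};

All the code that doesn't care about lifetime compiles unchanged whether the define is set or not.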

Let's take an example: a procedural force field for particles.

The concept is dead simple: for each particle, get its position, compute a force based on that position, and accumulate it into a force buffer (to integrate later), or push it into velocity right away (not ideal).
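
In HLSL, that could look roughly like this (a minimal sketch with made-up resource names, using a single attractor as the force function):

StructuredBuffer<float3> ParticlePositions : register(t0);
RWStructuredBuffer<float3> ForceBuffer : register(u0);

cbuffer AttractorParams : register(b0)
{
    float3 AttractorPosition;
    float AttractorStrength;
};

// The actual force function: everything else is "cooking" code
float3 Force(float3 p)
{
    float3 dir = AttractorPosition - p;
    float d = max(length(dir), 0.001f);
    return (dir / d) * (AttractorStrength / (d * d)); // inverse-square falloff
}

[numthreads(64, 1, 1)]
void CS_AccumulateForce(uint3 id : SV_DispatchThreadID)
{
    float3 p = ParticlePositions[id.x];
    ForceBuffer[id.x] += Force(p); // accumulate now, integrate in a later pass
}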

Now you can easily have 20+ functions like that. Copying all the cooking code 20 times is prohibitive (not to mention that if you improve a little part outside of the function, you have to copy/paste the code all over again). That's how most things are done in vvvv, and it sucks (badly).

So includes imply some stricter guidelines (naming), but they will save you a lot of time (and that's also what I call tweakable).
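
Concretely: if every force include exposes the same entry point (say float3 Force(float3 p), a convention I'm making up here), the accumulation kernel above is written once and each new force field is just a small file:

// ForceVortex.fxh -- one force, exposing the agreed entry point
cbuffer VortexParams : register(b0)
{
    float3 VortexCenter;
    float3 VortexAxis;
    float VortexStrength;
};

float3 Force(float3 p)
{
    float3 r = p - VortexCenter;
    return cross(VortexAxis, r) * VortexStrength;
}

// ForceTemplate.fx -- the shared cooking code, written once
#include "ForceVortex.fxh" // swap this line (or resolve it at runtime) to change the force
// ...the CS_AccumulateForce kernel from the previous sketch goes here, unchanged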

For other types of functions (distance functions, for instance), you could use the same include in a sphere tracer and a particle collider at the same time.
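
A small sketch of that (SdSphere and the surrounding names are made up): the same distance-function header feeds both a sphere tracer and a particle collision test.

// Distance.fxh -- the shared distance function
float SdSphere(float3 p, float3 center, float radius)
{
    return length(p - center) - radius;
}

// In the sphere tracer: march along the ray by the distance to the nearest surface
float Trace(float3 rayOrigin, float3 rayDir, float3 center, float radius)
{
    float t = 0.0f;
    for (int i = 0; i < 64; i++)
    {
        float d = SdSphere(rayOrigin + rayDir * t, center, radius);
        if (d < 0.001f) break;
        t += d;
    }
    return t;
}

// In the particle collider: a negative distance means the particle is inside the sphere
bool Collides(float3 particlePos, float3 center, float radius)
{
    return SdSphere(particlePos, center, radius) < 0.0f;
}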

And best of all, you can still use dynamic compilation (using the include handler feature in DirectX).

More advanced (DX11 only): use class linkage and allow several functions in the same shader that you can switch dynamically...

I'll show an example of this at some point, but that's a full topic.
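
As a teaser, the HLSL side of class linkage looks roughly like this (a minimal sketch with made-up names; the application picks the class instance at bind time through the class linkage API, which is the bigger part of the topic):

StructuredBuffer<float3> ParticlePositions : register(t0);
RWStructuredBuffer<float3> ForceBuffer : register(u0);

interface IForce
{
    float3 Compute(float3 p);
};

class NullForce : IForce
{
    float3 Compute(float3 p) { return float3(0, 0, 0); }
};

class GravityForce : IForce
{
    float Strength;
    float3 Compute(float3 p) { return float3(0, -Strength, 0); }
};

// Which implementation runs is chosen by the application, without recompiling the shader
IForce ForceInstance;

[numthreads(64, 1, 1)]
void CS_AccumulateForce(uint3 id : SV_DispatchThreadID)
{
    ForceBuffer[id.x] += ForceInstance.Compute(ParticlePositions[id.x]);
}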

One other thing about shaders: a shader is just a piece of code, but what if you could design shaders on the fly, like in vvvv?

For example, there is a graph editor in Visual Studio 2012 (it's called DGSL). You can edit your shader via nodes.

Nice, isn't it? Not really...

Visual shader programming is not an easy thing to achieve (as Josh Petrie and other people have pointed out in their blogs): finding a feature set which is not too minimal (10 nodes and clicks instead of 2 lines of code), but without nodes so complex that you wouldn't need a graph anyway, is a very hard balance to get.

You mostly end up with a very minimal feature set (like some pixel-based material), or with such a low-level implementation that it kinda takes twice the time (compared to an efficient code editor with good reflection).


Example 1:
float4 result = a+b;

Why do you need an editor for that? ))

Example 2:
float3 v = cross(v1,v2);

If you don't know what a cross product does, doing it via code or patch will not make any difference.

Example 3:
float3 color = CookTorrance(materialdata); (Patch Cook Torrance with operators)

Patching that with nodes would be a crappy mess; you'd rather have a prebuilt function for it (but then some people will complain they can't tweak...).

Example 4:
float3 color = CookTorrance(materialdata); (Cook Torrance as built in node)

Well, just add an include and you're back to Example 1.

So in either case (low or high level), I still find it hard to see any decent benefit (not to mention that, generally, looking at the resulting code means any coder will just rewrite it from scratch instead of tweaking it).

Also, HLSL is still a "minimal" language (compared to Java/C#/C++): there aren't that many functions, no big framework or namespaces, so it's much easier to go to the reference page and learn what they do.

So personally I think proper shader editors with good reflection are a much nicer way of authoring.





