Thursday, 31 July 2014


One obvious and very useful feature for most graphics and audio toolsets is some form of timeline.

In 4v you have the native one, which with a little effort could become OK (add undo, moving keyframes with the keyboard, export/import, and a better color editor), but in its current state it is at best irritating. It's not as bad as the HLSL code completion in 4v, but that's another story ;)

Then you have the more standalone ones; Vezer looks pretty cool but is Mac only, so not for me.

You also have Duration, which is likely the best, but it has two major flaws: no scrolling and no groups. Its OpenGL rendering would also not integrate easily into my tool.

Then you have this brand new Posh SVG, the new 4v timeliner, presented as usual as the next big thing that will make your grandmother dance rock and roll again, and so on.

So the first pre-alpha release tests were, let's call it... disappointing. Only one track type, no interpolation, and enough UI bugs that it could eat through a piece of wood like piranhas would eat my bum.
The next, first "public" release is, well... appalling. I personally hardly see the point of releasing something in that state, except to show off your incompetence at building user interfaces.

So the first thing you do when you build a timeliner is prepare a decent track setup; there is NO point in showing 5 keyframes and one track. So I start to prepare 8 tracks with 40 keyframes each (which is really a small setup), and then everything goes laggy: I get 40% CPU usage (where the hell does that go?), selecting and moving several frames just blows everything up, and you can't even recover, you have to restart the software. Usability at its best.

So I start to report those issues, and happily suggest that some DirectX11/OpenGL rendering for that type of thing would likely fit and scale much better, but the only reply I get is "yes I know it's slow but I don't care, DX is not as interesting as SVG". So I try to explain that if your UI doesn't scale you might want to look for something else, but I get the usual "I don't give a fuck" type of attitude, which pisses me off so much. We suggested that some of the changes could easily make it into the old one (some undo + keyboard support), and I get the same "I don't give a fuck".

So I don't really understand the point (and as a user I am slightly pissed off that you pay 500 euros a license to get that type of answer). And I am really horrified that this took several months to produce. I'm getting really worried about the user interface in the next vvvv50: since there's no will to improve the old version in the current one, it gets decided to produce a brand new UI framework which is just as bad, so I have more or less no faith that we will ever have a decently smooth user interface in either the previous or the next-gen vvvv.

But let's stop whining and go to the more fun part ;)

So I'm still without a timeliner, since I now have two 10%-baked unusable pieces of junk, and some others which are nice but that I can't integrate.

So since I also mentioned DX11 would be a good candidate, I decided to spend an afternoon doing track renderers. That would also really prove the point that modern rendering wins against this browser jazz. I decided to only focus on the rendering for now, since it's not too hard to hit-test and drag a point after all ;)

The main focus is of course fast rendering; I want a smooth user interface ;)

I just push my tracks into buffers and then render them. I decided to start with color, which is really simple.

Code Snippet
struct ColorKeyFrame
{
    float4 color;
    float time;
    int trackid;
};

struct KeyFrameLink
{
    int left;
    int right;
};

StructuredBuffer<ColorKeyFrame> ColorKeyFrameBuffer;
StructuredBuffer<KeyFrameLink> ColorLinkBuffer;
StructuredBuffer<float2> TrackOffsetBuffer; //x = top, y = height
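On the CPU side, filling these structured buffers is just a matter of packing the same layout into bytes. A minimal Python sketch of what that can look like (the `pack_color_keyframes` helper name is made up; the only assumption is the ColorKeyFrame layout above, which is 24 bytes per entry):

```python
import struct

def pack_color_keyframes(keyframes):
    """Pack (r, g, b, a, time, trackid) tuples into the ColorKeyFrame
    layout: float4 color + float time + int trackid = 24 bytes each."""
    buf = bytearray()
    for r, g, b, a, time, trackid in keyframes:
        buf += struct.pack("<4ffi", r, g, b, a, time, trackid)
    return bytes(buf)

# two keyframes on track 0: red at t=0, green at t=0.5
frames = [(1.0, 0.0, 0.0, 1.0, 0.0, 0), (0.0, 1.0, 0.0, 1.0, 0.5, 0)]
data = pack_color_keyframes(frames)
```

The resulting bytes go straight into the structured buffer's initial data.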

Render a bulk of instanced quads, grab the color of the left keyframe and the color of the right keyframe, and position with a map function.

Code Snippet
psInput VS(vsInput input)
{
    psInput output;

    //one quad per connection: fetch the two keyframes it links
    KeyFrameLink cl = ColorLinkBuffer[input.ii];
    ColorKeyFrame left = ColorKeyFrameBuffer[cl.left];
    ColorKeyFrame right = ColorKeyFrameBuffer[cl.right];

    //instance id -> track id
    int tid = input.ii;
    tid = tid / cpPerTrack;

    float2 pos = input.uv;
    float2 offset = TrackOffsetBuffer[tid];

    //map x between the two keyframe times, then to [-1,1]
    pos.x = map(pos.x, left.time, right.time);
    pos.x *= 2.0f;
    pos.x -= 1.0f;

    //place vertically within the track
    pos.y += offset.x;
    pos.y *= offset.y;

    output.pos = mul(float4(pos, 0, 1), tRange);
    output.colstart = left.color;
    output.colend = right.color;
    output.uv = input.uv.x;

    return output;
}
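The `map` function used above isn't shown; I assume it is the usual inverse-lerp that normalizes a position into [0, 1] between the two keyframe times, followed by the *2 - 1 step to get NDC. In Python terms:

```python
def inverse_lerp(x, a, b):
    """Normalize x into [0, 1] over the range [a, b]."""
    return (x - a) / (b - a)

def to_ndc(t):
    """[0, 1] -> [-1, 1], the same *2 - 1 step as in the shader."""
    return t * 2.0 - 1.0
```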

Send that to the (hardcore) pixel shader:

Code Snippet
float4 PS(psInput input) : SV_Target
{
    return lerp(input.colstart, input.colend, input.uv);
}

That was so hard ;)

Now let's go for values. Positioning the keyframe points is one small segment instance, so simple that there's nothing to say about it. Now let's build the connections, and add the fact that I want more curves (aka tweens):

Code Snippet
struct ValueKeyFrame
{
    float value;
    float time;
    int trackid;
    int interpolation;
};

struct KeyFrameLink
{
    int left;
    int right;
};

Of course the usual buffers are there too; we send a patch list with one control point per connection (we don't need two, that's beautiful).

Code Snippet
struct linkData
{
    int left : CPOINT0;
    int right : CPOINT1;
};
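Building the link buffer on the CPU is trivial: for each track, every pair of adjacent keyframes becomes one (left, right) index pair into the flattened keyframe buffer. A hypothetical Python sketch (`build_links` is my name, assuming keyframes are stored track by track, sorted by time):

```python
def build_links(times_per_track):
    """Given per-track lists of keyframe times (already sorted),
    return (left, right) index pairs into the flattened keyframe
    buffer - one pair per connection."""
    links = []
    base = 0  # index of this track's first keyframe in the flat buffer
    for times in times_per_track:
        for i in range(len(times) - 1):
            links.append((base + i, base + i + 1))
        base += len(times)
    return links
```

Connections never cross track boundaries, which is why the pairs restart at each track's base index.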

Now we only need to pass the keyframe IDs through until we reach the domain shader:

Code Snippet
[domain("isoline")]
psInput DS(hsConstantOutput input, OutputPatch<linkData, 1> op, float2 uv : SV_DomainLocation)
{
    psInput output;
    float t = uv.x;

    ValueKeyFrame kl = ValueKeyFrameBuffer[op[0].left];
    ValueKeyFrame kr = ValueKeyFrameBuffer[op[0].right];

    float2 start = ComputePosition(op[0].left);
    float2 end = ComputePosition(op[0].right);

    //x moves linearly, y follows the selected tween
    float x = lerp(start.x, end.x, t);
    float y = lerp(start.y, end.y, lerpFunc(t, kl.interpolation));

    float2 pos = float2(x, y);
    output.pos = mul(float4(pos, 0.0f, 1.0f), tRange);

    return output;
}

lerpFunc selects the interpolation function; these are all the basic tween modes.
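As a rough Python sketch of what such a lerpFunc can look like (the mode ids here are made up for illustration, not my actual ones):

```python
def lerp_func(t, mode):
    """Remap a linear t in [0, 1] through a basic tween curve."""
    if mode == 0:  # linear
        return t
    if mode == 1:  # ease-in (quadratic)
        return t * t
    if mode == 2:  # ease-out (quadratic)
        return 1.0 - (1.0 - t) * (1.0 - t)
    if mode == 3:  # smoothstep-style ease-in-out
        return t * t * (3.0 - 2.0 * t)
    return t  # unknown mode falls back to linear
```

All modes map 0 to 0 and 1 to 1, so the curve always passes through both keyframes.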

Now, the bang track is a simple instanced quad or line, nothing to speak about.

Then let's add a wave track. I thought it would be a bit complicated, but it was so easy it's almost embarrassing. First, get your favorite audio API (I used Bass). Load a music file, read all the samples as floats, and feed them into a big fat structured buffer (float or float2). I chose float so that I can push any number of channels later.
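For reference, converting raw 16-bit PCM to floats is a one-liner in most languages; a Python sketch (assuming little-endian interleaved stereo, so the buffer stays [L0, R0, L1, R1, ...]):

```python
import struct

def pcm16_to_floats(raw):
    """Decode interleaved 16-bit PCM bytes into floats in [-1, 1),
    keeping the channel interleaving intact."""
    count = len(raw) // 2
    samples = struct.unpack("<%dh" % count, raw[:count * 2])
    return [s / 32768.0 for s in samples]
```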

Render a quad, and then here is the insane pixel shader:

Code Snippet
float4 PS(psInput input) : SV_Target
{
    uint cnt, stride;
    WaveDataBuffer.GetDimensions(cnt, stride);

    //map uv.x to a sample index (stereo data is interleaved)
    uint xpos = (uint)(input.uv.x * cnt);
    float sampleleft = WaveDataBuffer[xpos * 2];
    float sampleright = WaveDataBuffer[xpos * 2 + 1];

    //uv.y to [-1,1] so it matches the sample range
    float y = input.uv.y;
    y *= 2.0f;
    y -= 1.0f;

    //bright where the wave covers this pixel, dark grey elsewhere
    return (abs(sampleleft) > abs(y)) + 0.1f;
}

That was hardcore. I could do a bit of oversampling for sure, but it already looks pretty OK.
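For the record, the usual anti-aliasing trick for waveforms is a min/max reduction per pixel column, which could also be precomputed once CPU-side at load time. A hypothetical Python sketch (`peaks_per_pixel` is my name, operating on a mono sample list):

```python
def peaks_per_pixel(samples, width):
    """Reduce a mono sample list to (min, max) pairs, one per pixel
    column - the usual way to draw a waveform without aliasing."""
    n = len(samples)
    peaks = []
    for x in range(width):
        lo = x * n // width
        hi = max(lo + 1, (x + 1) * n // width)  # at least one sample
        chunk = samples[lo:hi]
        peaks.append((min(chunk), max(chunk)))
    return peaks
```

The shader would then just test whether the pixel's y falls inside the column's (min, max) span.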

So here we go, then you add a small ruler.

And now pushing some 4000 keyframes:

Unzoomed view:

And let's push the UI a little bit, since I said I wanted it to scale: 14K keyframes:

Three hours well spent. I now have a pretty decent, scalable timeline renderer, which is a pretty huge step forward (80 fps while rendering every frame with no caching is rather good so far).

To conclude, hlsl > all :)
