Sunday 25 October 2015

Intersections Part 2: ID Maps

In the previous post, I spoke about the ability to perform hit detection using analytical functions.

This works extremely well when our use case allows it, but there are other cases where it is less ideal:

  • Perform detection on an arbitrary shape/3D model.
  • User input is no longer a pointer, but can also be arbitrary (thresholded camera texture, Kinect body index).
  • Both of the previous cases combined.

While we can often perform detection on a 3D model using a triangle raycast (I'll keep that one for the next post), it can be pretty expensive (especially for 10-touch hit detection, where we need to raycast 10 times).

So instead, one easy technique is to use an ID map.

The concept is extremely simple: instead of performing the hit test with a function, we render our scene into a UInt texture, where each pixel stores an object ID.

Of course this means rendering the scene another time, but in that case we can also easily do the following:
  • Render to a downsized texture (512*512 is often sufficient).
  • Render either bounding volumes or simplified versions of our 3D models.
The great thing with this technique is that the depth buffer already makes sure we store the closest object's ID (so we get that for "free").
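Since the post's shaders are HLSL, here is a small CPU-side Python sketch (with hypothetical hard-coded rectangles standing in for bounding volumes) illustrating how the depth test keeps the closest object's ID per pixel:

```python
# CPU sketch of ID-map rendering (hypothetical data): each object is
# an axis-aligned rect with a depth; the nearest object's ID wins per
# pixel, exactly as the depth buffer guarantees on the GPU.

W, H = 8, 8
ID_NONE = 0xFFFFFFFF  # sentinel for "no object"

id_map = [[ID_NONE] * W for _ in range(H)]
depth = [[1.0] * W for _ in range(H)]  # far plane

# (object_id, x0, y0, x1, y1, z) -- simplified "bounding volumes"
objects = [
    (0, 1, 1, 5, 5, 0.5),
    (1, 3, 3, 7, 7, 0.3),  # nearer, overlaps object 0
]

for oid, x0, y0, x1, y1, z in objects:
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z < depth[y][x]:  # standard depth test
                depth[y][x] = z
                id_map[y][x] = oid

# in the overlap region the nearer object (1) wins; elsewhere each
# object keeps its own ID, and uncovered pixels stay at ID_NONE
```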

So now that we have our ID map, picking the object ID from a pointer is trivial:

Code Snippet
Texture2D<uint> ObjectIDTexture;

RWStructuredBuffer<uint> RWObjectBuffer : BACKBUFFER;

float2 MousePosition; // pointer position in NDC ([-1,1] range)

[numthreads(1,1,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint w, h;
    ObjectIDTexture.GetDimensions(w, h);

    // NDC -> texel coordinates (flip y)
    float2 p = MousePosition;
    p = p * 0.5f + 0.5f;
    p.y = 1.0f - p.y;
    p.x *= (float)w;
    p.y *= (float)h;

    uint obj = ObjectIDTexture.Load(int3(p, 0));
    RWObjectBuffer[0] = obj;
}

Not much more is involved: we grab the pixel ID and store it in a buffer that we can retrieve via staging.

In case we need multiple pointers, we only need to grab N pixels instead, so the process stays pretty simple (and we don't need to render the scene once per pointer).
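As a CPU-side illustration of that N-pointer lookup (hypothetical map contents and pointer coordinates, using the same NDC-to-texel math as the compute shader):

```python
# One ID-map render serves any number of pointers; each pointer is
# just one extra texture fetch. A sparse dict stands in for the
# ObjectIDTexture here (hypothetical contents).

W, H = 512, 424
id_map = {(100, 200): 3, (256, 212): 7}

def pick(ndc_x, ndc_y):
    # NDC [-1,1] -> texel coords, y flipped (same math as the shader)
    u = ndc_x * 0.5 + 0.5
    v = 1.0 - (ndc_y * 0.5 + 0.5)
    x, y = int(u * W), int(v * H)
    return id_map.get((x, y), 0)  # 0 = background / no object

pointers = [(0.0, 0.0), (-0.9, 0.8)]
ids = [pick(px, py) for px, py in pointers]
```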


Now, as mentioned before, we might need to perform detection against an arbitrary texture.

As a starter, for simplicity, I will restrict the use case to a single-user texture.

The process is:
  • First, render the user into an R8_Uint texture, where 0 means no active user and anything else means active.
  • Render our object map at the same resolution.
  • Create a buffer (uint, one element per object) that will store how many user pixels hit each object.
  • Dispatch to perform this count.
  • Use another Append buffer that selects elements over a minimum amount of pixels (this is generally important to avoid noise with camera/Kinect textures).
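The count-then-filter steps above can be simulated on the CPU to validate the logic; a minimal Python sketch with hypothetical 4x4 textures:

```python
# Hypothetical 4x4 object map (one ID per pixel) and user map
# (0 = no active user, non-zero = active).

object_map = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
user_map = [
    [0, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

object_count, min_hit_count = 4, 2
hits = [0] * object_count          # plays the role of RWObjectBuffer
for y in range(4):
    for x in range(4):
        if user_map[y][x] > 0:     # the InterlockedAdd step
            hits[object_map[y][x]] += 1

# the append/filter pass: keep only objects past the noise threshold
selected = [oid for oid, c in enumerate(hits) if c >= min_hit_count]
```

With this data only object 1 collects enough user pixels to survive the threshold, which is exactly the noise rejection the minimum count is there for.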

Accumulating the pixel hit count is done this way:

Code Snippet
Texture2D<uint> ObjectIDTexture;
Texture2D<float> InputTexture;

RWStructuredBuffer<uint> RWObjectBuffer : BACKBUFFER;

float Minvalue;
int maxObjectID;

[numthreads(8,8,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint obj = ObjectIDTexture[tid.xy];
    float value = InputTexture[tid.xy];

    if (value > Minvalue && obj < (uint)maxObjectID)
    {
        // atomic add: many threads may hit the same object at once
        uint oldValue;
        InterlockedAdd(RWObjectBuffer[obj], 1, oldValue);
    }
}

Make sure you use InterlockedAdd, as you need an atomic operation here: many threads can increment the same object's counter simultaneously.


Next we can filter elements:

Code Snippet
StructuredBuffer<uint> HitCountBuffer;
AppendStructuredBuffer<uint> AppendObjectIDBuffer : BACKBUFFER;

int minHitCount;

[numthreads(64,1,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint c, stride;
    HitCountBuffer.GetDimensions(c, stride);
    if (tid.x >= c)
        return;

    int hitcount = HitCountBuffer[tid.x];
    if (hitcount >= minHitCount)
    {
        // thread index == object ID
        AppendObjectIDBuffer.Append(tid.x);
    }
}


It is that easy. Of course, instead of only rendering the object ID into the map, we can easily add some extra metadata (triangle ID, closest vertex ID) for easier lookup.


Now, in order to perform multi-user detection (for example, using the Kinect2 body index texture), the process is not much different.

Instead of having a buffer of ObjectCount elements, we create it with ObjectCount*UserCount.
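A quick Python sketch of that flattened indexing (hypothetical counts; the modulo/division inverse is what the filter pass uses later to recover the pair):

```python
# One flat 1D buffer of size object_count * user_count, addressed as
# user_id * object_count + object_id, plus the inverse mapping.

object_count, user_count = 5, 6
buffer = [0] * (object_count * user_count)

def slot(user_id, object_id):
    return user_id * object_count + object_id

buffer[slot(2, 3)] += 1  # user 2 hit object 3

# inverse: recover the (user, object) pair from a flat index
idx = slot(2, 3)
object_id = idx % object_count
user_id = idx // object_count
```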

Accumulator becomes:

Code Snippet
Texture2D<uint> ObjectIDTexture;
Texture2D<uint> UserIDTexture;

RWStructuredBuffer<uint> RWObjectBuffer : BACKBUFFER;

int maxObjectID;
int objectCount;

[numthreads(8,8,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint obj = ObjectIDTexture[tid.xy];
    uint pid = UserIDTexture[tid.xy];

    // 255 = "no user" magic value provided by Kinect2
    if (pid != 255 && obj < (uint)maxObjectID)
    {
        uint oldValue;
        InterlockedAdd(RWObjectBuffer[pid * objectCount + obj], 1, oldValue);
    }
}

And filtering becomes:

Code Snippet
StructuredBuffer<uint> HitCountBuffer;
AppendStructuredBuffer<uint2> AppendObjectIDBuffer : BACKBUFFER;

int minHitCount;
int objectCount;

[numthreads(64,1,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint c, stride;
    HitCountBuffer.GetDimensions(c, stride);
    if (tid.x >= c)
        return;

    int hitcount = HitCountBuffer[tid.x];
    if (hitcount >= minHitCount)
    {
        uint2 result;
        result.x = tid.x % objectCount; // object ID
        result.y = tid.x / objectCount; // user ID
        AppendObjectIDBuffer.Append(result);
    }
}


We now have a user ID/object ID tuple instead, as shown in the following screenshot:




Please also note that this technique can easily be optimized with the stencil buffer, setting one bit per user. You are then limited to 8 users though (7 users in case you also want to reserve one bit for the object itself).

You will also need one pass per user (so six passes with the proper depth-stencil state/reference value, since Kinect2 tracks up to six bodies).
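A tiny Python sketch of the bit-per-user idea (assuming an 8-bit stencil): each user pass ORs its own bit in, and a single stencil read then tells you every user covering that pixel:

```python
# Bit-per-user stencil packing: user 0 -> 0x01, user 1 -> 0x02, ...
# up to 8 users in an 8-bit stencil value.

def user_bit(user_id):
    return 1 << user_id

stencil = 0
for uid in (0, 2, 5):        # three users cover this pixel
    stencil |= user_bit(uid)

# unpack: which users touched this pixel?
users_here = [u for u in range(8) if stencil & user_bit(u)]
```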

If you are lucky enough to run on Windows 10 / DirectX 11.3, with a card that supports SV_StencilRef, you can also simply do:


Code Snippet
Texture2D<uint> BodyIndexTexture : register(t0);

uint PS(float4 p : SV_Position) : SV_StencilRef
{
    uint id = BodyIndexTexture.Load(int3(p.xy, 0));
    if (id == 255) // no-user magic value provided by Kinect2
        discard;
    return id;
}


Here is a simple stencil test rig, showing all of the intermediates:


That's it for part 2 (that was simple, no?).

For the next (and last) part, I'll explain a few more advanced cases (triangle raycast, scene pre-cull...).


Comments:

  • please continue, this is exactly what i needed ;)

    are all of the samples mostly pseudo code, or are there perhaps full actual working samples/tutorials?

    Reply: All the HLSL samples should compile, but of course you need to set up the pipeline to use them.