You have a 3D texture that can be rendered to; the format is e.g. R32G32B32A32_FLOAT and the size could be e.g. 128x128x128. Create a bounding box that (approximately) contains your particles — that will be the space your 3D texture covers — and generate a transform that maps from world space to 3D texture space. Bind the texture as a render target.
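A minimal CPU sketch of that first step — building a padded bounding box around the particles and an affine map into [0,1]³ texture space. The function name and the `padding` parameter are my own illustrative choices, not from the original:

```python
import numpy as np

def world_to_texture_transform(particles, padding=0.1):
    """Build an affine map from world space to [0,1]^3 texture space.

    particles: (N, 3) array of world-space positions.
    padding: fractional margin so splats near the edge are not clipped.
    Returns (scale, offset) such that uvw = pos * scale + offset.
    """
    lo = particles.min(axis=0)
    hi = particles.max(axis=0)
    extent = (hi - lo) * (1.0 + 2.0 * padding)
    lo = lo - (hi - lo) * padding
    scale = 1.0 / np.maximum(extent, 1e-8)  # avoid divide-by-zero on flat boxes
    offset = -lo * scale
    return scale, offset

# Example: with no padding, the box corners map exactly to 0 and 1.
pts = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 8.0]])
scale, offset = world_to_texture_transform(pts, padding=0.0)
uvw = pts * scale + offset
```

On the GPU this would just be a scale/offset (or a 4x4 matrix) applied in the vertex shader.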

Render the particles as points: transform to 3D texture space in the vertex shader, then in the geometry shader expand each point into quads, calculate the render target slice for the point, and also expand across several slices of the texture. Each point should cover several pixels in x, y and z, so you will probably end up outputting e.g. 11 quads, each 11×11 pixels in size, to give a 5-pixel-radius spread.
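To make the footprint concrete, here is a CPU sketch of what the geometry shader effectively emits: for a 5-pixel radius, one 11×11 quad on each of 11 consecutive slices, clipped to the grid. The function name is hypothetical:

```python
def splat_footprint(center_texel, radius_px, grid_size):
    """Enumerate the voxel footprint a point splat covers, clipped to the grid.

    Mirrors the geometry shader expansion: for a 5-pixel radius the point
    spans 11 slices, with one 11x11 quad per slice.
    center_texel: (x, y, z) integer texel the particle lands on.
    Returns a list of (slice_index, x_range, y_range) half-open ranges.
    """
    cx, cy, cz = center_texel
    quads = []
    for z in range(max(0, cz - radius_px), min(grid_size, cz + radius_px + 1)):
        x0, x1 = max(0, cx - radius_px), min(grid_size, cx + radius_px + 1)
        y0, y1 = max(0, cy - radius_px), min(grid_size, cy + radius_px + 1)
        quads.append((z, (x0, x1), (y0, y1)))
    return quads

# A particle in the middle of a 128^3 grid with a 5-pixel radius:
quads = splat_footprint((64, 64, 64), radius_px=5, grid_size=128)
```

In the real shader the slice index is written to SV_RenderTargetArrayIndex per emitted quad.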

In the pixel shader you have the destination pixel position — i.e. the destination texel in the 3D texture grid, since it carries a slice index too — and you have the particle’s position. Transform both into the same space, then calculate the weight (Weight) by evaluating the blobby equation at the pixel. Then write out float4(Position.xyz * Weight, Weight), using additive blending.
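The post doesn't pin down which blobby equation is used, so this sketch assumes a common Wyvill-style falloff, (1 − d²/r²)³ clamped to zero; any smooth compact kernel works. It emulates the pixel shader plus additive blending by splatting one particle into an RGBA float grid:

```python
import numpy as np

def blobby_weight(d2, r):
    """One common 'blobby' falloff (Wyvill): (1 - d^2/r^2)^3, clamped to 0.
    The exact kernel is a choice; the pipeline only needs a smooth falloff."""
    t = np.clip(1.0 - d2 / (r * r), 0.0, None)
    return t * t * t

def splat_particle(field, particle_pos, r):
    """Additively blend float4(Position.xyz * Weight, Weight) into an
    (n, n, n, 4) float grid, emulating the pixel shader pass."""
    n = field.shape[0]
    xs = np.arange(n)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    d2 = ((X - particle_pos[0]) ** 2 +
          (Y - particle_pos[1]) ** 2 +
          (Z - particle_pos[2]) ** 2).astype(float)
    w = blobby_weight(d2, r)
    field[..., 0] += particle_pos[0] * w  # weighted position, xyz
    field[..., 1] += particle_pos[1] * w
    field[..., 2] += particle_pos[2] * w
    field[..., 3] += w                    # sum of weights in alpha
    return field

field = np.zeros((16, 16, 16, 4))
splat_particle(field, np.array([8.0, 8.0, 8.0]), r=5.0)
```

At the particle's own texel the weight is 1 and alpha accumulates exactly one contribution; texels outside the radius stay zero, which is what makes the resulting field sparse.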

Now you have two options for the final 3D texture:

1. The alpha channel contains the sum of the metaball potentials at each point on the grid; evaluate it using marching cubes.

2. Evaluate a signed distance from the texture sample T and the texel position P:

float3(T.xyz / T.w) = the weighted centre position of the particles affecting point P (weighted by the blobby equation).

approx signed distance = length(P – T.xyz / T.w) – Radius,

where Radius is a constant approximate metaball radius.
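The signed-distance evaluation above, as a small sketch. The function name is my own; the inputs are the texel position and the accumulated float4 (sum of pos·w in xyz, sum of w in alpha):

```python
import numpy as np

def approx_signed_distance(texel_pos, sample, radius):
    """sample = accumulated float4 (sum(pos * w), sum(w)) at this texel.
    T.xyz / T.w is the weighted centre of the nearby particles; distance
    to that centre minus a nominal metaball radius approximates a signed
    distance. Returns +inf where nothing accumulated (the field is sparse)."""
    w = sample[3]
    if w <= 1e-8:
        return np.inf
    centre = sample[:3] / w
    return np.linalg.norm(np.asarray(texel_pos, float) - centre) - radius

# One particle at (8, 8, 8) with total weight 1: the sample is (8, 8, 8, 1),
# so a texel at the particle itself sits Radius units inside the surface.
sd = approx_signed_distance((8, 8, 8), np.array([8.0, 8.0, 8.0, 1.0]), radius=2.0)
```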

Better results can be achieved by hacking the radius using the field, but let’s leave that as an exercise for the reader. 🙂

Finally, note that this approach still gives you a sparse field. For potentials that’s not a problem, since you can clear to 0; but for distance fields, if you want to raymarch, you should run a fast sweep as a post-process to fill it out.
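A crude chamfer-style sweep that fills the empty (infinite) cells: sweep the grid in all eight diagonal directions, relaxing each cell against its already-visited axis neighbours. This is a simplification of proper fast sweeping (no eikonal solve, unit step), but it makes the field raymarchable:

```python
import itertools
import numpy as np

def fast_sweep_fill(dist, step=1.0):
    """Fill an incomplete distance grid in place by sweeping in all 8
    diagonal directions, relaxing d[v] = min(d[v], d[neighbour] + step)."""
    n = dist.shape[0]
    for sx, sy, sz in itertools.product((1, -1), repeat=3):
        xs = range(n) if sx > 0 else range(n - 1, -1, -1)
        ys = range(n) if sy > 0 else range(n - 1, -1, -1)
        zs = range(n) if sz > 0 else range(n - 1, -1, -1)
        for x in xs:
            for y in ys:
                for z in zs:
                    best = dist[x, y, z]
                    # Look only at neighbours already visited in this sweep.
                    for dx, dy, dz in ((-sx, 0, 0), (0, -sy, 0), (0, 0, -sz)):
                        px, py, pz = x + dx, y + dy, z + dz
                        if 0 <= px < n and 0 <= py < n and 0 <= pz < n:
                            best = min(best, dist[px, py, pz] + step)
                    dist[x, y, z] = best
    return dist

# Seed one known cell; the sweep propagates distances everywhere else.
g = np.full((8, 8, 8), np.inf)
g[0, 0, 0] = 0.0
fast_sweep_fill(g)
```

With only axis-aligned neighbours and a unit step this converges to Manhattan distance from the seeds, which is good enough as a conservative bound for sphere tracing.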

hope that helps..

A few words about my attempts. I’ve started with simple ray-marching of a few analytically defined blobs. http://goo.gl/6r5mw Then I studied optimization methods: depth peeling, Bézier clipping, empty space skipping, BVH… but after some time I understood that this stuff is quite useless and rather complicated when you want to render many, many blobs. Then I stumbled on your old post about Frameranger, and it was the Holy Grail — particularly the comments, where you described the same things. So I dug up my old voxel renderer for volume textures and began to experiment. http://goo.gl/MFk0o

Now I am trying to figure out how to properly build the distance field and visualize it 🙂