direct to video

September 30, 2009

frameranger.

Filed under: demoscene — directtovideo @ 1:45 pm

frameranger by fairlight,cncd,orange
capped.tv

2008 rolled around, and it was time for us to get over the disappointment of Media Error’s poor public reception and try, once again, to win the Assembly demo competition – the thing that had eluded us after all these years. The major criticism of Media Error had been its flow – some people thought it was a series of unconnected scenes. To me the theme was obvious (obvious!!) – a TV flicking channels, and a thing about war – but apparently it was too vague for the average viewer, who needs to have his hand held from a to b. Apparently a demo only has “flow” nowadays if it either a) consists of one single scene, or b) is so covered in noise that it’s hard to tell what the viewer is seeing, and therefore whether there’s any flow or not. Making a demo with actual discrete scenes and a variety of content is apparently a complete no-no nowadays. But that’s another post entirely.

Anyway, we decided we did want to do something with a clearer storyline. The rough concept we settled upon was a “story” about a robot (car) coming to earth in a spaceship, landing in an explosion of debris, driving to a city, battling a large spider robot and then transforming into humanoid form to finish the job. Yes, it’s an 11-year-old comic book / sci-fi nerd’s wet dream. But seeing as we’re “cool”, the clever part is that it was all to be done with style, clever art direction and graphic design. Think the art-house version of Transformers.

The second thing we wanted was a “demo part” – something seemingly unconnected to the main story where we throw in a load of big heavy effects to say “hah, we’re better than you”, in classic demo tradition – a homage to the demos of the early 90s, where a “design part” / graphics-heavy story section would give way to a series of vector effects which showed off how massive the coder’s balls are. That’s another problem with the demo-watching audience these days – sadly the average technical knowledge has dropped off quite heavily, so the audience often doesn’t realise there’s an “effect” or some clever code at work unless there’s a massive red arrow pointing at it saying “clever effect here”, or the effects are presented in such a way that it’s bleeding obvious. Well, we’re “cool”, so we don’t usually do that – we like it when the effects are seamlessly mixed into graphics and design, so you can’t tell exactly what’s going on – but it does lead to people missing the clever stuff. We decided to rectify that with the “demo part”.

It became apparent this demo was a pretty tall order. The scenes were massive and numerous; the animation requirements far exceeded previous productions; and the story / direction meant that it was a lot harder to chop and change – in most demos if you lose a scene you can fill it with something else, but if there’s a story to follow and you lose a scene the story stops making sense.

Assembly 2008 came around. Unfortunately I was busy fixing two other entries – a 4k and a 64k (Panic Room, which won the 64k competition) – so I didn’t devote as much time to the demo as was needed. In the end we all ended up, as has become tradition, in Destop’s apartment on the Thursday night of Assembly. One small issue was that I had previously visited the party place on the way to the airport to drop off some friends, and I happened to stop by the Assembly sub-event “Boozembly”. Assembly is an event attended by kids of all ages, unfortunately including those under 18 – and the insurance and licensing difficulties mean that it has to be completely dry: no alcohol is sold, or even permitted in the hall, something strictly enforced by the numerous security guards. In fact they even sometimes check your car for alcohol if you park in their car park. Fun and games ensue every year as people try to find ways to get alcohol inside – something I’ve managed just once, but that’s another story.

To make up for the lack of booze inside, there is an unofficial sub-event that takes place on some rocks a few hundred metres from the hall, usually a complete mess of drunks wandering around in the darkness, falling over (and off) sharp rocks. I remember one year – I think it was 2003 – two members of Noice had ambulances called within hours of each other, for different reasons. Anyway, I had visited the rocks on the way to the party hall and had a few beers before heading to Destop’s – so by the time I got there it was very late, and I wasn’t going to be too productive. The next morning we made the sensible decision and gave up. For once we felt the content was too good to waste on a rushed demo.

We set numerous deadlines for release. The first to go by was NVScene, a few weeks after Assembly; later we aimed for Breakpoint at Easter 2009. All came and went, and Assembly came around again. This time there was a sense that we really had to make it. We cleared our schedules and dedicated some proper time to it. For the first time in years I wasn’t able to attend Assembly, so for once that weekend of crunch in Helsinki wasn’t going to happen – it had to get made properly, on time. For once we actually planned it out. We had time – a couple of months. We had a lot of content and assets already done, a stable toolset, and we knew exactly what we needed to do.

For this demo we tried a different approach to actually making it. Previous demos I’ve worked on usually followed a certain pattern: we made some graphics or effects; we set them up in the demo tool and spent an age tweaking the lighting and shaders and layering on the post fx; and it remained like that until just before the deadline, when we quickly jammed some cameras and animation into the scene and called it “done”. What that gave us was a load of pretty but boring scenes. Or worse, a lot of work was done on content that never got seen by the demo camera. This time we wanted it to be different. We decided to focus on the weakest thing in our demos – the “action” – and get that done first. Post fx, lighting and shading are important, but ultimately a distraction – you can spend hours tweaking colours; it’s the easiest form of procrastination. We went for an approach some gamedev teams use and tried to build the demo “in grey” – almost unshaded, no post fx, simple lighting and placeholders where necessary – to get the cameras and action right first. I believe this actually worked, and it’s a good way to go – it made us concentrate on getting some nice motion and direction down early, and we knew where the camera was going to be before we set up the lights, post fx and shading.

work-in-progress shot of the city.

The challenge was still immense. We hit a lot of problems – technical, artistic and logistic. The number and size of scenes meant that our previous approach of baking lightmaps for ambient occlusion / global illumination wasn’t feasible – it would send us way over the 64MB limit – so I came up with a new lighting engine that ran fully deferred, and allowed us to use a number of techniques to approximate ambient occlusion in realtime. The old engine wasn’t set up for such a large amount of animation content and had to go through numerous optimisation passes just to make the thing load in a sensible amount of time. Some of the effects and the rendering engine were very heavy on render targets, so we had serious problems with lack of VRAM. We had major main-memory problems too – we had never made such a large demo, and we found that we were hitting the 2GB Win32 application limit in the demo tool, causing it to crash. I had to spend yet more time optimising the memory usage. I later discovered the major culprit was TinyXML – it ate 600MB to load a 30MB XML file (our demo “script”, which is generated and superbloated), and completely fragmented the memory with small allocations. My emergency fix was to optimise the XML file itself – yes, cutting down the names of nodes and attributes by hand – and got it down by more than 50%, which got us out of trouble until I rewrote the XML loader after Assembly.

One of the biggest headaches was the music. Fairlight (/CNCD/Orange), unlike many other demogroups, does not have one active musician who does all our soundtracks. We have a few musicians we work with, but most have moved away from the scene onto other things – some went pro, some just got out entirely. In some ways it’s good, because we are able to look around and find the right sound for each project rather than be tied to what one guy can do, and we’ve had some really great soundtracks over the years from the likes of Northbound Sound, Little Bitchard, Ercola, Sumo Lounge and others. The problem is we’ve got no one person in the group who takes responsibility for it. I don’t think it’s an overstatement to say this demo has been through the most difficult musical journey of any demo I’ve worked on. Over the 12 to 18 months it’s been an active project, we’ve had at least five musicians involved, and many tracks. It seems that the more brilliant the musician, the harder they are to lock down – they always have other projects on the go and don’t have the time to dedicate to this. With a few weeks to go until Assembly we finally got Ercola (responsible for the Media Error soundtrack, and a great producer and artist) involved. He was a guy we knew could turn it around very quickly and do a good job, which is exactly what we needed. Even so it was seriously nerve-wracking up until the last week before Assembly, when we finally got the track. By the way, if anyone out there is a great musician, give us a call – we are always looking for good musical input. :)

Frameranger contains a lot of graphics and a lot of code. There is a whole collection of effects and rendering techniques, some of which will get a blog post of their own. I decided to go for completely deferred rendering and it worked out great. As well as letting us use as many lights as we wanted, it greatly reduced the number of shaders being generated by the ubershader (a major issue on Panic Room). I added the ability to combine multiple (dynamic) environment maps like lights in the deferred render, and support for secondary rays cast off the deferred buffers for ambient occlusion. In fact almost everything in the demo has ambient occlusion, generated one way or another in realtime through various techniques. One of the best things was being able to combine traditional polygon geometry and raytraced effects seamlessly – e.g. we raytraced the liquid effects straight into the deferred buffers and sent them through the same lighting pipeline as everything else, casting and receiving shadows etc.
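To make the deferred idea concrete, here’s a minimal sketch of what a point-light accumulation pass over the G-buffer can look like in SM3-era HLSL. The buffer layout, register bindings and light limit are illustrative assumptions, not the actual frameranger engine code:

    // hypothetical deferred lighting pass: read the G-buffer, reconstruct the
    // world position from depth, and accumulate a handful of point lights
    sampler2D gAlbedo : register(s0);   // rgb = diffuse colour
    sampler2D gNormal : register(s1);   // rgb = world-space normal packed to 0..1
    sampler2D gDepth  : register(s2);   // r   = post-projection depth

    float4x4 invViewProj;               // clip space -> world space
    float3   lightPos[8];
    float3   lightColour[8];
    float    lightRadius[8];
    int      numLights;

    float3 reconstructWorldPos(float2 uv, float depth)
    {
        // unproject the pixel back into world space using the stored depth
        float4 clip  = float4(uv.x * 2 - 1, 1 - uv.y * 2, depth, 1);
        float4 world = mul(clip, invViewProj);
        return world.xyz / world.w;
    }

    float4 main(float2 uv : TEXCOORD0) : COLOR0
    {
        float3 albedo = tex2D(gAlbedo, uv).rgb;
        float3 n      = normalize(tex2D(gNormal, uv).rgb * 2 - 1);
        float3 pos    = reconstructWorldPos(uv, tex2D(gDepth, uv).r);

        float3 result = 0;
        for (int i = 0; i < numLights; i++)
        {
            float3 toLight = lightPos[i] - pos;
            float  dist    = length(toLight);
            float  atten   = saturate(1 - dist / lightRadius[i]);  // linear falloff
            result += albedo * lightColour[i] * atten
                    * saturate(dot(n, toLight / dist));
        }
        return float4(result, 1);
    }

The appeal is visible even in this toy version: adding a light is just another iteration over the G-buffer, rather than yet another shader permutation out of the ubershader.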

early work-in-progress shot of the lighting and ambient occlusion passes in the city.

Raytracing / ray marching popped up in numerous places. We used volumetric lights for the car headlights, which were ray marched through the shadow map in screen space as a post effect. The surfaces for the fluids were raytraced on GPU too, using a technique I invented to handle almost unlimited numbers of metaballs (up to around 100,000). Of course there were other effects at work too – many post effects, particles, breaking floors and so on. However, most of them were mixed into the design and graphics so they were almost hidden away – it’s the kind of thing where you only notice them when they’re gone.
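As an illustration of the headlight trick, a screen-space volumetric light pass can be sketched like this: march along the view ray and count how many samples can see the light according to its shadow map. The step count, names and matrix conventions here are invented for illustration, not lifted from the demo:

    // hypothetical "light shaft" post effect (SM3-era HLSL)
    sampler2D gDepth    : register(s0);
    sampler2D shadowMap : register(s1);

    float4x4 invViewProj;    // clip space -> world space
    float4x4 lightViewProj;  // world space -> light clip space
    float3   cameraPos;
    float3   lightColour;

    #define STEPS 32

    float4 main(float2 uv : TEXCOORD0) : COLOR0
    {
        // world position of the opaque surface behind this pixel
        float4 clip  = float4(uv.x * 2 - 1, 1 - uv.y * 2, tex2D(gDepth, uv).r, 1);
        float4 world = mul(clip, invViewProj);
        float3 endPos = world.xyz / world.w;

        float3 stepVec   = (endPos - cameraPos) / STEPS;
        float3 p         = cameraPos;
        float  inscatter = 0;

        for (int i = 0; i < STEPS; i++)
        {
            p += stepVec;
            // project the sample into the light's shadow map
            float4 lclip = mul(float4(p, 1), lightViewProj);
            float2 luv   = lclip.xy / lclip.w * float2(0.5, -0.5) + 0.5;
            float  lz    = lclip.z / lclip.w;
            if (tex2Dlod(shadowMap, float4(luv, 0, 0)).r > lz)
                inscatter += 1.0 / STEPS;   // this sample can see the light
        }
        return float4(lightColour * inscatter, 1);
    }

Because the march happens per pixel as a post effect, the cost scales with screen resolution rather than with the complexity of whatever the headlights are shining on.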

work-in-progress shot of the car headlights.

Fortunately we had the antidote to that – the “demo part”. My new fluid solver / renderer was ready at last. I had written a new 3D fluid solver running on the GPU which used a new technique to get higher resolution and better performance: I evaluated the velocities at lower resolution – and even then the grids were still much larger than in Media Error, thanks to modern GPU performance. I then used the velocity grid to drive a procedural fluid-flow solver that approximates the look of fluid eddies, and mixed the result back with the velocity grid; that combined flow was applied to a high-res density grid to push it around. The results were superb. The procedural flows weren’t tied to the grid resolution, so they could produce really sharp results which didn’t lose detail – the velocity grid just had to handle the overall rough motion.
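In pseudo-shader terms the combination step is simple: a semi-Lagrangian advection of the high-res density, where the velocity is the low-res simulated grid plus a procedural detail term. This is a sketch of the idea as described above, not the actual solver – the analytic “curl noise” stand-in and the blend weight are invented:

    // one pass per slice of the high-res density volume
    sampler3D coarseVelocity : register(s0);  // low-res simulated velocity
    sampler3D density        : register(s1);  // high-res density being advected

    float dt;
    float detailAmount;

    float3 proceduralFlow(float3 p)
    {
        // cheap analytic stand-in for curl noise: each component ignores its
        // own coordinate, so the field is divergence-free by construction
        return 0.1 * float3(
            sin(p.y * 9.1) + cos(p.z * 7.3),
            sin(p.z * 8.7) + cos(p.x * 6.1),
            sin(p.x * 7.9) + cos(p.y * 9.7));
    }

    float4 main(float3 uvw : TEXCOORD0) : COLOR0
    {
        // total velocity = smooth low-res motion + sharp procedural eddies
        float3 v = tex3D(coarseVelocity, uvw).xyz
                 + detailAmount * proceduralFlow(uvw);

        // semi-Lagrangian step: look back along the velocity and fetch
        float3 src = uvw - v * dt;
        return tex3D(density, src);
    }

The key property is that the detail term is evaluated analytically per voxel of the high-res grid, so its sharpness isn’t limited by the resolution of the simulated velocity field.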

Then we had to do something interesting with it. In the end we used it for two effects – a liquid renderer driving particles and a raytraced isosurface, and a smoke renderer. Both had a full lighting and shadowing pipeline, giving us superb results, and for both we were able to use voxelised meshes as the source input. We tried a few things for the smoke, but in the end we used the effect to destroy some credits text. Unfortunately it was a prime example of artistic vs technical compromise, of which there is a lot in the demo. The scene didn’t show off the power of the effect to the fullest – it didn’t show all the clever features – but it looked really nice visually, with puffs of coloured smoke. Of course such things are completely lost on the audience. One genius commenter on pouet said of the scene, “nice plasma”. Nice plasma! It makes you glad you bothered with weeks or even months of work trying to innovate in the realm of realtime fluid dynamics, when your results are compared to an ancient demo effect.

One scene that worked out surprisingly well was the “pixel blocks” sequence. It was a simple effect – a grid of cubes, animated by rendering something to texture and using it as a heightfield – made “cool” by raytraced, heightfield-based realtime ambient occlusion, which gave it its nice shading. Surprisingly it ended up as one of the most popular scenes in the demo, yet it was by far the easiest, taking about an hour to put together on the Saturday morning of the deadline.
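For the curious, that shading boils down to something like the following: for each block, march outward across the height texture in a few directions and measure how much sky is blocked by taller neighbours. The direction and step counts and the scale factors are guesses for illustration:

    // hypothetical heightfield AO for the "pixel blocks" scene
    sampler2D heightMap : register(s0);

    #define DIRS  8
    #define STEPS 8

    float heightfieldAO(float2 uv, float h)
    {
        float occlusion = 0;
        for (int d = 0; d < DIRS; d++)
        {
            float  a   = d * (6.2831853 / DIRS);
            float2 dir = float2(cos(a), sin(a)) * 0.01;  // step in texture space

            // track the steepest blocker along this direction
            float maxSlope = 0;
            for (int s = 1; s <= STEPS; s++)
            {
                float nh = tex2Dlod(heightMap, float4(uv + dir * s, 0, 0)).r;
                maxSlope = max(maxSlope, (nh - h) / s);  // taller and closer = darker
            }
            occlusion += saturate(maxSlope * 4);
        }
        return 1 - occlusion / DIRS;  // 1 = open sky, 0 = fully occluded
    }

    float4 main(float2 uv : TEXCOORD0) : COLOR0
    {
        float h  = tex2Dlod(heightMap, float4(uv, 0, 0)).r;
        float ao = heightfieldAO(uv, h);
        return float4(ao.xxx, 1);
    }

Since the occlusion depends only on the height texture, the shading updates for free as the animation renders new frames into the heightfield.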

the heightfield effect

A special word has to go to the work Destop did on the graphics and direction for the demo. He built most of the demo in our demo tool. The battle scene had around 40 cameras and a massive amount of carefully placed animations, and the whole scene contains thousands of nodes where most scenes contain 10 to 100 – it’s by far the biggest thing I’ve ever seen made with the demo tool. It frightened me a bit, actually. We also had Der Piipo doing a lot of modelling and animation work, and Mazor showed up with the 2D HUD gfx at the end – just in time to fill some gaps.

Sadly, the demo still had problems. We knew the battle scene was the crux of the demo – make or break – and it was the biggest and hardest scene to do. A long action sequence – part built in LightWave, part in the demo tool – with a lot of explosions and smoke. It was the smoke that caused me a huge headache. I went over and over this, trying different solutions, and none of them worked well. The requirements were: it had to fill a lot of space, with multiple explosions around the environment at once; it had to persist, so it couldn’t visibly fade out; it had to fit with the meshes and lighting; it should look a bit stylised – not super realistic, but still cool and smoke-like; and the frame budget for it wasn’t massive, as the 3D was already eating a lot of power. Those requirements meant I had to rule out something really cool like proper fluid dynamics – the scene was too big for grid-based effects. We could only handle a certain number of particles at a decent frame rate, and the lighting and shading would have to be faked. I tried various techniques and wrote the effect a few times, and it never quite worked out – so I kept putting it off. In the end I rushed something out in a couple of days and the solution wasn’t satisfactory – a hand-coded particle effect that could be spawned around the environment as needed. I didn’t like the end result much at all. That was one thing that went to the top of the list for the final.

We had other problems too. In the end, even the best-laid plans break down as the deadline nears. I wasn’t travelling to Helsinki and I had to go to a wedding on the Saturday morning, so that ruled out real last-minute crunches – but somehow we ended up doing one anyway. For the last week before Assembly I got up at 6am and went to work every day, working on the demo on the train and at lunchtime whenever time allowed, then came home and worked on it through the evening and half the night. Then I got up the next morning and did it all again. The problem with demo crunches is that, unlike work crunches, there’s much less external pressure. For work you know you don’t really have a choice; for a demo you always have at the back of your mind, “I could ditch this right now and it wouldn’t matter”. When you’re exhausted and stressed out in the middle of the night you keep going because you don’t want to give up, you want to get it done, and you don’t want to let your teammates (who are also up in the middle of the night working with you) down.

Come the Thursday night we still had a lot to do. I took the day off work on Friday and worked on it solidly, with 3-4 hours’ sleep on Thursday night and less on Friday. We missed the deadline on Friday night, but after a night and morning of work, come Saturday lunchtime it was done. All that was left was to hand it in, get refreshed, shower, and drive to the wedding – where I think I looked like a zombie. Then, come the evening, the competition started far away in a hockey stadium in Helsinki. The wedding was in the middle of nowhere, so mobile phone reception was poor to non-existent, but I managed to go outside into the car park and get a bar – and I finally got the news I had been waiting for by SMS, first from my groupmate and then from a load of other friends who had been watching the competition, either at the party or at home on the live stream. “What happened?” I asked. The reply came back – “finally we’re going to get the trophy :)”. “Is it close or did we destroy the competition?”, I asked. “Destroy :)” came the answer. I went back inside and enjoyed the rest of the wedding with a grin on my face. It seemed like we had finally done it.


17 Comments

  1. …not wanting to complain too much, but some parts look really bad on my ATI. Some shadows, and the liquid simulation is too blocky. :P

    Anyway, great post. Hope you can still write about your previous productions as well ;)

    Comment by xernobyl — September 30, 2009 @ 11:08 pm

  2. I haven’t had an ATI card in any of my machines for some time, and everyone around here has NVIDIA too, so it’s not easy for me to check it out on an ATI config.

    If anyone wants to help get it working, let me know, cos it won’t get fixed otherwise. :)

    Comment by directtovideo — October 1, 2009 @ 7:51 am

  3. The part about musicians could be interpreted in pretty offensive ways..

    Comment by Knos — October 1, 2009 @ 9:08 am

  4. It’s not intended to be.

    It’s always difficult to find a musician who can do the right track for the demo, has the time to do it, and isn’t weighed down by other commitments. Some groups have one default choice of musician; we don’t.
    It was just harder than normal for this demo to sort it out. :)

    Comment by directtovideo — October 1, 2009 @ 9:16 am


  6. Flow is not theme continuity between scenes. Or at least that’s not what I call flow. For example, Saint/Halcyon was pretty random (theme-wise), but the flow is really good.

    From my point of view, flow is how the visual actions evolve. In any piece of video you should be able to see movement going on – whether it’s much or little, usually something moves – and these movements should entertain the spectator. The problem I can see in Fairlight demos is that the camera stops, or slows down without reason, or doesn’t follow the speed of movement from the previous shot. I’m sure it has to do with lack of time for editing.

    It’s quite hard to explain, but a good way to see if your video/demo has good flow is to turn off the audio and see if it still entertains you. Look out for parts where nothing happens or that are too stretched. I’ve tested that with many demos, and the ones where Kurli, Louie500, fiver2, dominat8r, loaderror, gargaj … were involved usually have a good sense of this. And, in an experimental way, pixtur’s too.

    I’ve never studied film-making, but I’m sure there must be a big topic about this.

    So, yes, it doesn’t matter which routine you use, nor which graphics you use, but how the whole thing gets edited ;)

    Comment by Mr.doob — October 1, 2009 @ 11:52 am

  7. Oh! And navis ones too! Sorry navis :(

    Comment by Mr.doob — October 1, 2009 @ 12:03 pm

  8. now this flow thing is an interesting discussion. deserves a debate of its own.
    actually i think frameranger has a good flow. :) for once, good editing and camera work. watch the thing through the first few action scenes – i think it flows well and really builds up. destop knows what he is doing.

    however, the biggest criticism we got about frameranger (and media error) was the lack of cohesion between scenes, which i don’t buy into personally.

    Comment by directtovideo — October 1, 2009 @ 12:21 pm

  9. Yeah, I agree. The flow in frameranger is way better than in previous Fairlight demos, although it still has some parts that feel weird, especially the particle/blobs one.

    I don’t buy the Media Error criticism either. There are so many examples of winning demos that didn’t have any kind of theme or story.

    Anyway, thank god you decided to share all this. As you will realise, by explaining the tricks and progress of what you’re doing, people will appreciate your work much more.

    Comment by Mr.doob — October 1, 2009 @ 12:53 pm

  10. haha, i think there’s almost a clear line where i took over and started making cameras and directing – the only parts i did were the blocks and blob parts, both on the morning of the deadline. :)

    Comment by directtovideo — October 1, 2009 @ 1:04 pm

  11. I think the whole criticism of Fairlight demos isn’t about the flow of the story itself but about the irrelevant parts after the story. The story holds up well on its own. The problem is that after the long story sequence, a plain pure coder’s effect screen that doesn’t fit with the rest pops up (for example the smoke box). It’s like having two different kinds of demo mixed together: a pure story flyby, and some other random stuff the coder wanted to include to show off. Maybe it would be better if the smoke effect could be integrated into the story rather than being a separate scene.

    Personally this inconsistency doesn’t spoil my own enjoyment of Fairlight demos (I mostly enjoy the pure coder stuff whether it fits or not), but that’s how I understand some people’s criticism when they explain it to me.

    Comment by Optimus — October 2, 2009 @ 1:44 am

  12. optimus: in media error it was a problem because the concept of switching channels on a tv wasn’t really followed through – we were going to do these adverts and stuff and channel-switching fx, but it never got done in time. that’s why it looks so incoherent, i think.
    in frameranger, switching from the story part to the demo part was part of the plan. :)

    Comment by directtovideo — October 2, 2009 @ 7:53 am

    • Great post indeed. I really start appreciating your demos more when you write about all the details and problems you had to face.

      Could you tell more about your method of rendering metaballs? Is it similar to the Eurographics paper (2008, I think)? I could see some small glitches at the junctions of small blobs and thought those were polygonisation artifacts.

      Cheers.

      Comment by Denis — October 6, 2009 @ 3:45 pm

  13. Ok.. well, they are not polygonised, they’re raytraced. I *believe* this method is new, at least for realtime – who knows what those offline guys have come up with. :) My method is to generate a signed distance field from the metaballs and raytrace that with sphere tracing (distance field tracing). This is basically one pass to render all the particles into the volume (slice) texture, and then a post process to turn it into a signed distance field. So it’s very fast. It’s a bit slower than rendering a plain particle – about as fast as rendering a particle, say, 10 times. :) So we can handle e.g. 100,000 metaballs ok. In frameranger I can’t remember how many it was – at least 50k, I think. It’s a little bit of a cheat making an SDF from the metaballs, but hey.. it works. :)

    The reason you see artefacts is that the grid in frameranger wasn’t quite big enough :) (we ran low on VRAM..) – it was 128x128x128; at 256x256x256 it looked great but was too heavy and a bit slow. It’s annoying being limited by a grid too – I plan to try something adaptive in future when I need to use this again. :)

    see:
    http://directtovideo.wordpress.com/files/2009/10/metaball_distance_field.jpg

    Comment by directtovideo — October 6, 2009 @ 4:01 pm

    • What kernel are you using for the metaballs then, and how do you convert the field to Euclidean distance (otherwise you can’t use sphere tracing)?

      BTW I was referring to this paper: http://nis-lab.is.s.u-tokyo.ac.jp/nis/cdrom/eg/eg08_metaballs.pdf

      They render spheres and calc intersections with the resulting isosurface in screen-space afterwards.

      There was also another guy at SIGGRAPH this year who gave a talk on the subject (hey, it’s CUDA, but it doesn’t have to be): http://delivery.acm.org/10.1145/1600000/1598041/a51-gourmel.pdf?key1=1598041&key2=6210294521&coll=GUIDE&dl=GUIDE&CFID=56549000&CFTOKEN=42961167

      Comment by Denis — October 7, 2009 @ 1:08 pm

      • yea i saw those two but they were too complicated. i wanted something that scales linearly and is simple enough for me to understand. :)

        originally i faked it: i rendered the distance field for each particle as a sphere and then blurred it with a “smart blur” in a post process. it worked! it was just a bit.. dodgy. in fact i used that method in frameranger.
        but then afterwards i found a better way which actually works “properly”: render the centre values for each sphere into the volume, scaled by the metaball weight, with the metaball weight (the usual (1-d^2)^3) in the alpha channel – so it’s like float4(position.xyz * weight, weight) – and additively blend all of those. then do a post pass to divide the weighted position sum by the weight sum, and store the signed distance to that weighted average position. that works as your distance field.

        Comment by directtovideo — October 7, 2009 @ 1:33 pm
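Putting the two halves of that explanation together, the whole metaball pipeline might look roughly like this in HLSL. This is a compact sketch of what’s described in the comments above – the kernels, buffer formats and step counts are assumptions rather than the actual frameranger code:

    // pass 1: splat each particle into the volume (one quad per slice),
    // with additive blending accumulating the sums described above
    float4 splatPS(float3 particleCentre : TEXCOORD0,
                   float3 voxelPos       : TEXCOORD1,
                   float  radius         : TEXCOORD2) : COLOR0
    {
        float3 diff = voxelPos - particleCentre;
        float  x    = saturate(1 - dot(diff, diff) / (radius * radius));
        float  w    = x * x * x;                  // the usual (1-d^2)^3 kernel
        return float4(particleCentre * w, w);     // weighted position, weight
    }

    // pass 2: resolve the sums into a signed distance field
    sampler3D splatBuffer : register(s0);
    float isoRadius;   // distance from the weighted centre treated as surface

    float4 resolvePS(float3 uvw      : TEXCOORD0,
                     float3 voxelPos : TEXCOORD1) : COLOR0
    {
        float4 s = tex3D(splatBuffer, uvw);
        if (s.w < 1e-4)                            // empty space: conservative
            return float4(0, 0, 0, isoRadius);     // positive = outside
        float3 centre = s.xyz / s.w;               // weighted average position
        float  sd = length(voxelPos - centre) - isoRadius;
        return float4(centre, sd);                 // signed distance in alpha
    }

    // sphere tracing the result, as per comment 13: step by the stored
    // distance, since nothing can be closer than that
    float traceField(sampler3D sdf, float3 ro, float3 rd)
    {
        float t = 0;
        for (int i = 0; i < 64; i++)
        {
            float d = tex3Dlod(sdf, float4(ro + rd * t, 0)).a;
            if (d < 0.001) break;   // hit the isosurface
            t += d;                 // safe step
        }
        return t;
    }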

