direct to video

September 29, 2009

the wonderful world of 64k intros.

Filed under: demoscene — directtovideo @ 1:58 pm

Optimising for size has been part of the demoscene for as long as anyone can remember. It probably started off driven by the desire to fit something interesting into a boot loader or something like that. 64k intros involve fitting a whole production – music, graphics and code included – in one self-contained executable of 65536 bytes. That’s not much bigger than an empty Word document; one JPEG screenshot of the production would likely be larger than the 64k taken by the production itself. The nice thing about 64ks is that it’s enough space to do something worthwhile in terms of design, sound and graphics – but it’s small enough to still amaze people with the file size aspect. 64k is enough space to work in a “usual” fashion: to work on graphics and music in a proper environment, and to code without having to think about every byte – something that makes smaller productions (e.g. 4k or less) painful to produce.

The category evolved through the 90s, pushed forward by groups like TBL, who managed to fit in decent visuals and soundtracks that sounded like “proper music”. It came to a head with the release of fr-08: .the .product in 2000, which packed in an amazing amount of graphics that actually looked something like what you might see in a typical PC videogame of the time – but in a tiny fraction of the space. It set the world alight and immortalised its creators, and everyone wondered – how was it done?

the product by farbrausch, 64k in 2000

The answer was: they generated it. The textures, models, soundtrack – all generated, based on metadata supplied by the artist through their custom-built graphical tool set, describing how to create each element. In simple terms, they don’t store the pixels of a texture in compressed form; they store the steps that tell the engine how to regenerate the texture from a combination of simple generators, like noise, and blending operations. Of course, this kind of thing had been around for years, but this was the best implementation so far. The clever thing about these guys was that not only did they make such a great production, they also told the world how they did it. Numerous presentations and articles were followed up by releases of most, if not all, of the tools they used in the making of the production for the world to see.
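The technique is easy to sketch. Below is a minimal toy version in Python – emphatically not farbrausch’s actual code; the generator set, the recipe format and every name here are invented for illustration. The point is that what gets shipped is the tiny recipe at the bottom, not the grid of pixels it expands into:

```python
import random

def noise(size, seed, scale):
    # crude value noise: a small seeded random grid, bilinearly upsampled
    random.seed(seed)
    coarse = [[random.random() for _ in range(scale + 1)] for _ in range(scale + 1)]
    tex = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            fx, fy = x * scale / size, y * scale / size
            ix, iy = int(fx), int(fy)
            tx, ty = fx - ix, fy - iy
            top = coarse[iy][ix] * (1 - tx) + coarse[iy][ix + 1] * tx
            bot = coarse[iy + 1][ix] * (1 - tx) + coarse[iy + 1][ix + 1] * tx
            tex[y][x] = top * (1 - ty) + bot * ty
    return tex

def blend(a, b, mode):
    # combine two textures pixel by pixel
    ops = {"add": lambda p, q: min(1.0, p + q), "mul": lambda p, q: p * q}
    return [[ops[mode](p, q) for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

def generate(size, recipe):
    # replay the stored steps on a little stack machine to rebuild the texture
    stack = []
    for step in recipe:
        if step[0] == "noise":
            stack.append(noise(size, seed=step[1], scale=step[2]))
        elif step[0] == "blend":
            b, a = stack.pop(), stack.pop()
            stack.append(blend(a, b, step[1]))
    return stack[-1]

# the shipped data is just this recipe - a handful of bytes, not size*size pixels
recipe = [("noise", 1234, 4), ("noise", 99, 8), ("blend", "mul")]
tex = generate(64, recipe)
```

A real tool has dozens of generators and filters (gradients, distortions, blurs and so on), but the principle is the same: a few opcodes and parameters in, a full-size texture out.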

werkkzeug by .theprodukkt - tool for making 64k intros

Something about it all caught the imagination of many of the young coders around in the demoscene at the time, myself included. Maybe it was the idea that the coder was back in charge and taking centre stage after years of being edged out by the artists and musicians; maybe it was the challenge of doing something bigger and better than these guys had done; or maybe it was about wanting a piece of the substantial fame they had achieved in the community and beyond. It’s worth noting that most good demos get a few thousand downloads; some of the stuff these guys made got hundreds of thousands. It went way beyond “the demoscene”.

So, like so many other coders, I started working on 64ks myself. The road was long. With 64ks nothing comes easy. You are expected to make content that would look decent in a 64mb demo, but there are no shortcuts. In a demo you can get graphics from anywhere – a 3d tool, your camera, the internet; and you can make the soundtrack however you want as long as it ends up as an MP3. In a 64k you wouldn’t get very far like that – you have to have a hand in the whole creation process yourself. If you want tools to make graphics and music, you have to go and make them. It massively increases the time taken to make anything.

My 64ks through the years

From 2001 to 2006 I made quite a lot of 64ks. They started out pretty awful – badly made, hardcoded affairs – but grew into some quite hefty productions, winning Assembly and Breakpoint 64k competitions and gaining an award for best 64k intro and numerous nominations. I developed my own modelling, texturing, animation and scene editing tools, and my own soft synth (a VST). It was an exciting time to be involved. There was an arms race between several groups – at first Farbrausch and Conspiracy, later Conspiracy and us (Fairlight). Every year we’d all be sat in the party hall at Breakpoint or Assembly trying to hammer out our production, each wondering what new tricks the other group had managed to pull off and whether they were better than the new tricks we had managed to pull off.

My final 64k was “dead ringer”, released at Assembly 2006, where it took first place. Dead ringer was a breakthrough in that it was the first 64k intro to move away from large and impressive but static scenes with simple animation – and featured a fully animated character performing a series of breakdancing moves that started life as several mb of motion capture data but managed to get squeezed into around 6k. Assembly 2006 had the most amazing 64k competition ever seen – the three big players in 64k (us, Conspiracy and Farbrausch) and Kewlers all competing, the bar getting pushed higher each time. It seemed that after that competition, the 64k scene was never the same again. Perhaps the bar had been pushed too high, and the amount of work needed to move it on was just too much for anyone to contemplate.

After Dead Ringer we were pretty much spent, so we quit. That was the plan, anyway. As it happens we came back in 2008 and made another 64k, “panic room”, which won the 64k intro competition at Assembly 2008, but that’s another story.

It took me a long time to realise the real secret of the farbrausch 64k intros all those years ago: all the talk is about the technology, but really it’s all about the artist. That’s the difference between a few numbers plugged into a generator routine, and those wonderful scenes that looked “almost as good as a video game but in 64k”. Sadly it’s the thing so many coders forget. It doesn’t matter what routines you code or what tools you build – in the end what matters is how they get used.

fluid dynamics #1: introduction, and the smokebox

Filed under: demoscene, fluid dynamics — directtovideo @ 10:57 am

One effect artists kept asking me for in demos was “fluids”. Fluid dynamics make for great visuals – they look complicated, with minute details, yet they make sense to the human eye on a larger scale thanks to their physical basis. Unfortunately they are rather difficult to do. But if it were easy it wouldn’t be fun, right? Fluid simulation and rendering have become an area of research I’ve kept coming back to over the years, and my journey started at the end of 2006. I immediately discounted simple fakes and decided to try and do something pretty “real” for our demos in 2007.

The first problem is that fluids take some serious computation power to calculate. To do it “properly”, you have to simulate the flow of velocities and densities through space using a set of equations – the Navier–Stokes equations. I’m no mathematician, but fortunately these equations don’t look half as nasty in code as they do on paper. The rough point of them is that the velocity/density at each point in space interacts with the nearby points in the right way.

The basic approach follows two paths, depending on how you want to model space: you can use particles (SPH – smoothed particle hydrodynamics), or a grid. Grids are bounded in the area they cover, and their memory requirements depend on their size, but the equations are simpler and tend to be faster to solve – with grids, you automatically know which points are nearby. On the downside, densities/velocities tend to dissipate and you lose detail. Grid rendering suits gas-like fluids well, as it’s typically visualised using a ray march through the volume – which suits transparent media. Particles can suit liquids better, and are unbounded – they can go wherever they want – and you don’t lose detail in the same way as with grids. But you need a lot of particles, and the equations are a bit uglier and slower to solve because you have to work out which particles are near each other particle – it actually has a lot of similarities with flocking behaviours. Rendering particle fluids usually involves some sort of implicit surface formed from the particles, visualised using a raytracer or polygonised with marching cubes.

Modern offline fluid solvers tend to support one or both methods of fluid calculation. Fluids are big business – movies, adverts, all sorts of media use them. Offline renderers support hundreds of thousands of particles or huge grids, and can take hours to compute a few seconds of animation. So it’s going to be hard to compete with realtime.

I started off with grid solvers. They’re easier to understand, there are good examples and tutorials/papers out there that explain how to do it – and they aren’t packed full of magic numbers. The Navier–Stokes solve breaks down into the following steps:
Update velocity grid:
– take the previous frame’s velocity grid
– perform a diffusion step – a bit like a blur
– perform a projection step – this makes the field mass-conserving, and it’s what gives the effect its swirly quality. In practice that means a linear solver with 10-20 iterations, i.e. looping over the whole grid 10-20 times. It’s the slow bit.
– perform an advect step, which is like doing “position = position + velocity” in reverse – to pull the velocities from the previous frame’s grid into the new grid.
Update density/colour grid:
– perform a diffusion step
– advect using the velocity grid.
You’ll soon realise you can cut out the diffusion steps entirely without hurting the final result.
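The steps above can be sketched as a toy 2D solver in Python, loosely following the structure of Jos Stam’s well-known stable fluids demo code – unoptimised, with diffusion and boundary conditions omitted and all names my own:

```python
N = 16  # interior grid resolution; arrays hold (N+2)^2 cells with a one-cell border

def idx(x, y):
    return x + (N + 2) * y

def advect(dst, src, u, v, dt):
    # "position = position + velocity" in reverse: for each cell, trace back
    # along the velocity field and pull the old value from where it came from
    for y in range(1, N + 1):
        for x in range(1, N + 1):
            px = min(max(x - dt * u[idx(x, y)], 0.5), N + 0.5)
            py = min(max(y - dt * v[idx(x, y)], 0.5), N + 0.5)
            ix, iy = int(px), int(py)
            tx, ty = px - ix, py - iy
            dst[idx(x, y)] = (
                (1 - tx) * ((1 - ty) * src[idx(ix, iy)] + ty * src[idx(ix, iy + 1)])
                + tx * ((1 - ty) * src[idx(ix + 1, iy)] + ty * src[idx(ix + 1, iy + 1)])
            )

def project(u, v, iters=20):
    # make the velocity field mass-conserving: solve for a pressure field with
    # Gauss-Seidel iterations, then subtract its gradient. The slow, swirly bit.
    div = [0.0] * (N + 2) ** 2
    p = [0.0] * (N + 2) ** 2
    for y in range(1, N + 1):
        for x in range(1, N + 1):
            div[idx(x, y)] = -0.5 * (u[idx(x + 1, y)] - u[idx(x - 1, y)]
                                     + v[idx(x, y + 1)] - v[idx(x, y - 1)]) / N
    for _ in range(iters):  # the 10-20 loops over the whole grid
        for y in range(1, N + 1):
            for x in range(1, N + 1):
                p[idx(x, y)] = (div[idx(x, y)] + p[idx(x - 1, y)] + p[idx(x + 1, y)]
                                + p[idx(x, y - 1)] + p[idx(x, y + 1)]) / 4
    for y in range(1, N + 1):
        for x in range(1, N + 1):
            u[idx(x, y)] -= 0.5 * N * (p[idx(x + 1, y)] - p[idx(x - 1, y)])
            v[idx(x, y)] -= 0.5 * N * (p[idx(x, y + 1)] - p[idx(x, y - 1)])

def step(dens, u, v, dt=0.1):
    # velocity update: self-advection, then projection (diffusion cut, as above)
    u2, v2 = u[:], v[:]
    advect(u2, u, u, v, dt)
    advect(v2, v, u, v, dt)
    project(u2, v2)
    # density update: advect the densities through the new velocity field
    d2 = dens[:]
    advect(d2, dens, u2, v2, dt)
    return d2, u2, v2
```

Everything here vectorises naturally – each cell only reads its neighbours – which is exactly why the algorithm maps so well to pixel shaders, as described below.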

The first attempt I made was to do a 2D grid solver. That’s pretty easy – there’s a good example released by Jos Stam (probably the “father” of realtime fluid dynamics) some years ago which is easy to follow, although it needs optimising. The nice thing about 2D grid fluids is that they map very simply to the GPU – even back in the days of shaders 2.0. The grid goes into a 2D texture, and the algorithm becomes a multi-pass process on that texture. That worked out great – a few days of work and it was usable in a demo. It made it into halfsome in 2007. The resolution was good – we could have one fluid solver running at 512×512 well, or several at 256×256 – which proved to be ample. It was quite easy to control, too – we could simply initialise the density grid with an image, the velocity grid with random blobs, and the image was pushed around by the fluid.

2D fluids are nice, but 3D is better. But 3D brings a whole new set of problems. It was quite easy to extend a 2D solver to 3D on both CPU and GPU. Sadly, at the time and on DirectX9, GPUs could not render to volume textures – so the problem could not be extended simply by changing the texture from 2D to 3D. Instead I had to lay out the “volume” as a series of slices on a 2D texture. That was hard to get right, but apart from that it extended easily. The problem was it was rather slow. The GPU at the time (GF6800 was the card du jour) just didn’t have the performance to handle it, once the extra overhead of fixing up and sampling from the 2D slice texture was taken into account. So the next option was to go CPU – I spent quite a long time hand-optimising the code with SSE2 intrinsics, and then wrote out a volume texture at the end for rendering. Unfortunately the algorithm is in parts heavily memory/cache limited – the advect stage in the equation jumps around in memory almost randomly. In fact, in some parts of the solver the GPU was far faster thanks to more efficient memory access, and in other parts the CPU won out thanks to raw calculation performance and being able to re-use previous cell values. (Note, this is back in 2006-2007, a Core Duo vs a GF6800.)
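The slice layout itself is just address arithmetic: each Z slice of the volume becomes a tile on one big 2D texture. A hypothetical sketch (names and tiling convention mine):

```python
def volume_to_slice_uv(x, y, z, nx, ny, nz, slices_per_row):
    # lay the nz slices of an nx*ny*nz "volume" out as tiles on one 2D texture,
    # left-to-right, top-to-bottom; return the texture-space UV of cell (x,y,z)
    rows = (nz + slices_per_row - 1) // slices_per_row
    tex_w, tex_h = nx * slices_per_row, ny * rows
    sx, sy = z % slices_per_row, z // slices_per_row  # which tile holds slice z
    u = (sx * nx + x + 0.5) / tex_w  # +0.5 hits the texel centre
    v = (sy * ny + y + 0.5) / tex_h
    return u, v
```

The fix-up cost mentioned above comes from doing this arithmetic per sample – and from interpolation along Z, which needs two separate tile lookups and a manual blend, since the hardware only filters within one slice.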

Finally I had a solver. Now, how to render the results? Raymarching the “volume texture” was quite slow – rendering the slices as a series of quads worked out better. Now the fun stuff started. I realised the key to making it look good was lighting – and for a semi-transparent thing like smoke/gas, that means shadows with absorption along the shadow ray. Ray marching in the shader or on CPU was out of the question, but fortunately there was an easy fake: assume the light comes only from directly above, and do a “ray march” that adds the value of the current grid cell to a rolling sum, writes the current sum back to the grid cell as a shadow value, then moves to the next cell immediately below it. That could be made even easier in my CPU solver by flipping the grid around so that the Y axis of the fluid was the “x axis” of the grid – and the sum could be rolled into the output stage of the copy to the volume texture – so the whole shadow “ray march” was almost completely free.
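The shadow pass is simple enough to sketch. This is a hypothetical Python version of the idea, with y as the vertical axis and the top of the volume at the highest y index; at render time the shadow value would be turned into a brightness with something like exp(-k·shadow):

```python
def shadow_pass(density):
    # one rolling sum per vertical column: each cell's shadow value is the
    # total density of everything above it (light assumed straight down)
    nx, ny, nz = len(density), len(density[0]), len(density[0][0])
    shadow = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for z in range(nz):
            acc = 0.0
            for y in range(ny - 1, -1, -1):  # walk from the top of the volume down
                shadow[x][y][z] = acc        # write the sum so far...
                acc += density[x][y][z]      # ...then absorb this cell's density
    return shadow
```

One pass over the grid instead of a ray march per sample – which is why folding it into the volume-texture copy made it essentially free.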

After some time experimenting I discovered that the largest grid resolution I could get away with performance-wise was 64x32x32. Unfortunately it looks pretty rough at that resolution. I tried a few things with octrees to avoid empty space, but it just didn’t work – with my small grid, the whole space got filled quickly. In the end I simply doubled the resolution of the density grid and interpolated the velocities – so the slow part of the equation, updating the velocities, ran on a grid with 1/8th the number of cells of the density grid – which is the one where you really notice the blockiness.
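Reading the coarse velocity grid at the fine density grid’s resolution is plain trilinear interpolation between the eight surrounding coarse cells – a hypothetical sketch (names mine; the caller is assumed to clamp coordinates so the +1 indices stay in range):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def sample_velocity(grid, fx, fy, fz):
    # grid is one coarse velocity component as nested lists grid[x][y][z];
    # fx/fy/fz is a position in coarse-grid units (from a fine-grid cell)
    ix, iy, iz = int(fx), int(fy), int(fz)
    tx, ty, tz = fx - ix, fy - iy, fz - iz
    # blend the eight surrounding coarse cells: along x, then y, then z
    def face(z):
        return lerp(lerp(grid[ix][iy][z], grid[ix + 1][iy][z], tx),
                    lerp(grid[ix][iy + 1][z], grid[ix + 1][iy + 1][z], tx), ty)
    return lerp(face(iz), face(iz + 1), tz)
```

Smoothly varying velocities hide the coarse simulation well, because the eye mostly sees the blockiness of the densities, not of the flow driving them.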

It worked. It ran at framerates of at least 30fps realtime, with a grid res of 128x64x64 for the densities and 64x32x32 for the velocities. At the time of release it was probably the fastest and most powerful CPU grid solver ever made. It was even compared to the nvidia 8800 demo – which was running at a significantly higher resolution, but on vastly superior hardware and without lighting. We used it in media error, although we were so rushed with the demo that it only got used in a rather boring way – as “smoke in a box”. And it made me want to do something bigger and better.

smokebox in media error

old demos #2: media error (and track one).

Filed under: demoscene — directtovideo @ 8:43 am

Our entry to the demo competition at Assembly 07, placed 3rd.

media error by fairlight cncd orange

In 2006 we made track one for Assembly 2006, and placed 2nd in the demo competition. We did quite well out of it, later picking up the award for the best demo of 2006. So we decided to do it again, and try and go one better and win Assembly.

Winning the Assembly demo competition is pretty much the demoscene equivalent of winning the world cup. The event has been around for over 15 years, and a lot of people interested in the demoscene today grew up marvelling at the demos released there back in the day. It’s a massive event today, with over 4000 visitors – most of whom are gamers, but it still has a strong scene pedigree. The combination of the size and exposure of the event and the historical significance makes it the one we always wanted to win.

Track one was probably one of the most intense creation processes I’ve ever been involved in. We only really decided to make something a few days before Assembly 06 started. This was partly my fault as I was off making glitterati and dead ringer, eating up a lot of my time before the event. We – a team of programmers and 2d+3d artists – spent three days of the four day event holed up in Destop’s apartment in Helsinki, only sleeping a maximum of 4-5 hours the whole time. It was insane and completely exhausting but still an amazing experience – everybody throwing out quality work, making new effects and graphics, running over to someone else’s pc to see the latest big addition. Thankfully the artists had quite a few pieces lying around on their hard disks that we could use.

Of course it was also utterly shambolic. Not only were we not sleeping, we were drinking more or less continuously. Of course we missed the deadline. Then we missed the “real” deadline – the usual cutoff point – and then we were rushing to hit the “absolutely final” deadline – when they record the entries to video. Usually in a demo competition one can push the deadlines right up until the minute they start the competition – and indeed, my current record is submitting a demo at breakpoint while the 8th entry in the competition was actually playing on the big screen. At Assembly they prerecord the entries to video so you don’t have that luxury. Well, we made it – but there were some compromises, like the last minute or two of the demo being just whatever we had to throw in. Someone (we never found out who) moved a load of the timeline bars during the last night and made it all out of sync, and we never quite fixed it all. By the end we had made it but we were totally exhausted and vowed not to do it like that again.

Track One by Fairlight
track one on pouet

The making of media error was supposed to be a carefully planned affair, starting early and finishing on time. Of course it wasn’t. Firstly, at the artists’ request, we decided early in 2007 to completely remake the engine, adding a node graph to our demo tool, which at the time largely resembled the After Effects timeline. Then we fully integrated Lightwave support – the artists’ tool of choice – with a full loader for objects and scenes. This was all good, but the tools were quite fragile as a result. Then we foolishly got dragged into the Intel Demo Competition 2007, and by the time we were done we had less than four weeks to make a demo for Assembly 07.

Still, we had managed to assemble a superb team – and we felt we could pull it off. Sadly we ended up exactly where we were in 2006, with a bunch of us piled into Destop’s apartment in Helsinki working all hours on a demo. Again, we missed the deadline by an epic amount. Again it was a crazy but amazing process of not sleeping, drinking too much and spitting out graphics and effects left and right. We made it a bit earlier than track one, and felt pretty good about the demo.

Unfortunately in the competition we came up against one of the biggest demos ever made – Lifeforce by ASD – and fell to 3rd place. Winning Assembly would have to wait.

September 25, 2009

old demos #1: tactical battle loop.

Filed under: demoscene — directtovideo @ 2:54 pm

I’m a bit late to the blogging party. Which means I’ve got a whole plethora of old things to write about that I would have written about at the time, but I didn’t have a blog. So here goes.

We did this demo back in 2006 for the first Intel Demo Trailer competition. How that came about was, they called me up and said “we’ll give you a laptop just for entering and 5 more if you win – and it only has to be 30 seconds long”. Hard to refuse – pretty good return for a 30 second piece. So we did it.
tactical battle loop by fairlight

We had recently come 2nd at Assembly 2006 with the demo “Track One”, and I decided to use much the same technology, tools and team as for that one. Why change a winning formula? Unfortunately the artists had other ideas and demanded a full import pipeline from Lightwave 3D – whereas Track One was built using an import pipeline from 3DS Max, .x, .obj and pretty much any other way we had of getting 3D into the engine. Strange that they didn’t like it.
Most of the demo was built by Destop/CNCD in Lightwave and in our demo tool. Extra effects were then coded up and shoved in via the demo tool.

The small catch was that one of the rules was that we had to use music provided by DJ Hell as a basis, although we could remix it. What they didn’t tell us was that the tracks we could choose from were stupidly bad, so our remix ended up almost unrecognisable. Hell wasn’t too happy about that apparently.

The good thing was, we won. So we got the laptops, but we also got something better – Intel flew us all out to Munich for a party. Free bars, paid hotel, the works. So much fun that we also entered the 2007 and 2008 competitions, but they didn’t work out quite as well.


Filed under: demoscene — directtovideo @ 2:32 pm

Some people collect stamps; some mow down old ladies with mopeds. My hobby for the past 15 years or more has been the demoscene.

I can claim that my interest in it has got me at least one job, a modicum of fame in the world of computer geeks, friends and trips all over the world (mainly northern europe), earned me a couple of cable TV appearances watched by at least five people, won me a few awards and a small amount of money, cost me a fair amount of money, cost me even more in terms of lost sleep, stress, liver damage and at least one near-hospitalisation.

These days I’m a programmer in Fairlight – a group that’s become a true piece of computing history, and is only a few years younger than I am (although I didn’t join until I was 19).

Demos used to be a pretty casual affair, hacked together by kids trying to show off to other kids how clever they could be with their computers. But over the years it got serious. The aforementioned kids grew up into highly skilled professionals, many of whom became prominent members of the games industry, amongst other things.

Making demos got a lot more complicated too. Nowadays our demos are made by professional artists and programmers in custom-made realtime editing tools. The motivation never really changed – kids showing off to other kids – but the results are a mile away from the early days.

