Posted 2008-06-27 00:41:23 by
Panda3D didn't work out as a demo platform because it also can't seem to use extensions on my machine. So Python is pretty much out the window. My current idea for a demo is Foolish 2 in 64k, using Farbrausch's libv2 softsynth library and the Windows Speech API. And OpenGL to render the ASCII animation, so we can bust into 3D ASCII animation when it's most dramatic.
The only problem with this idea, as I see it, is that NVScene isn't actually holding a 64k intro competition, just the 4k intro and 64 meg demo varieties. But, you know, fuck it. I don't need their imprimatur.
So, proof of concept: I put together a Speech API call, loaded a font glyph via the GDI, rendered it with OpenGL, and called libv2 to play the sample tune that comes with it. That's pretty much everything I'm going to need to link in. Then I compressed the result with UPX, resulting in a 30k executable. The rest should be pure compiled code, which is pretty information-dense. It's going to be rough, but I think it's doable.
My current draft for the script is a 27k text file, incidentally.
Before I added libv2, the executable was ~13k. The music zips to ~6k, so we're talking ~12k for libv2 (and the DirectSound bindings, which it links in). Which is larger than I expected, but it's no doubt partly because I'm Doing It Wrong. I didn't want to use Farbrausch's Kkrunchy executable compressor (compressing linker, actually) because it trips antivirus heuristics.
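Spelled out, the size-budget deduction looks like this (all figures are the approximate ones above, from UPX-compressed builds, so the result is only good to a kilobyte or so):

```python
# Back-of-the-envelope size budget, using the approximate figures above.
exe_without_libv2 = 13    # KB: executable before linking libv2
exe_with_everything = 30  # KB: final UPX-compressed executable
music_compressed = 6      # KB: the sample tune, zipped

# Whatever isn't the base executable or the tune is libv2 itself
# plus the DirectSound bindings it pulls in.
libv2_overhead = exe_with_everything - exe_without_libv2 - music_compressed
print(libv2_overhead, "KB for libv2 + DirectSound bindings")
```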
I was also pretty surprised that the included music took up 6k. It certainly didn't sound like the song represented significantly more information than, say, the Atrium soundtrack. The music file format stores channel streams separately, and the music was pretty repetitive techno, so, for instance, the bass drum channel should've compressed to about 10 bytes.
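To make that intuition concrete, here's a quick sketch. The drum pattern is made up (one byte per sixteenth-note row, kick on every beat), and plain zlib stands in for whatever entropy coder the tool chain actually uses, but the point survives: a repetitive channel stream should cost almost nothing.

```python
import zlib

# Hypothetical bass drum channel: four-on-the-floor, one byte per
# sixteenth-note row, 16 rows per bar, repeated for 1024 bars.
bar = bytes([1, 0, 0, 0]) * 4   # kick on each of the four beats
channel = bar * 1024            # 16384 bytes of raw channel data

packed = zlib.compress(channel, 9)
print(len(channel), "->", len(packed), "bytes")
```

Sixteen kilobytes of stream data collapses to a few dozen bytes, because the whole thing is one short pattern plus "repeat". If a channel like that is eating real space in the final file, the bytes are going somewhere else.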
Maybe the bulk of the data is synthesizer parameter initialization, or maybe the composer used a bunch of subtle-but-voluminous modulation streams or something, whereas Atrium undoubtedly used LFOs for everything. I don't know.
The more I think about it, though, the more I'm warming up to the idea of giving up on the 64k restriction if it means a better presentation. I'd rather work on adding another feature than compressing an existing one to make it fit.
Johnicholas: “The demoscene is terrifying.”
It really is a genius game, and even back in the day I was mostly a pretender. Back then I had a passing-to-respectable level of familiarity with assembly language programming, with the PC's graphics and audio hardware, with graphics algorithms. Now I've got a passing-to-respectable level of familiarity with compilers, executable file formats, OS architecture, DirectX and OpenGL, shaders and such. But these guys live and breathe this stuff, always have, and they've got so much focus. It takes a level of dedication that I've never been able to reliably muster.
The 4k scene is glorious nowadays. They played a bunch of demos on the projector at the mini demo party GD held a couple weeks ago; Atrium was the one that convinced me that the modern demo scene isn't just a venue for Animation majors making music videos. There's also Tracie, a 1k intro based around a simplified ray-tracer written in shader model 3. At least, that's what it has to be; I haven't actually found much documentation on it aside from the text file that comes with it. It's mainly that I don't think you could fit an isosurface generation algorithm into 1k. And it doesn't really look like isosurface generation anyway. And, you know, the title.
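For a sense of why a ray-tracer fits in 1k where an isosurface mesher wouldn't: the core of a sphere tracer is a few lines of algebra. A sketch in Python (the real thing would be a Shader Model 3 pixel shader, and the unit-sphere scene here is made up for illustration):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t to the nearest ray/sphere hit, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic
    in t. Assumes `direction` is normalized.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                   # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0.0 else None

# Camera at the origin looking down -z at a unit sphere 5 units away:
# the nearest hit is at distance 4.
print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))
```

Intersect, shade, loop over pixels; there's no mesh, no tessellation, no vertex data to store. That's the kind of algorithm that squeezes into a size-limited intro.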
The proliferation of 3D accelerators in the late 1990s and early 2000s really destroyed much of what I loved about the scene. But I think we're finally at the stage, thanks to 3D accelerators becoming general-purpose enough, that we're once again seeing the possibility of innovative rendering techniques being visually competitive.