
Chunked LOD



Since the summer of 2001, I've been doing some personal research into hardware-friendly continuous LOD. I think I've got a method that's about as good as CLOD can be for T&L hardware. It renders CLOD, with geomorphs, using batched primitives. It's very friendly to texture LOD and to paging data off disk. The downsides include a long preprocessing phase and some size overhead in the disk file. Another thing that bugs me is that the data isn't in a convenient form for doing height/collision queries. But otherwise it really rocks.
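
To make the selection rule concrete, here's a minimal C++ sketch of the per-frame chunk selection, in the spirit of the approach: each quadtree chunk carries a maximum geometric error, which gets projected to a screen-space error and compared against a pixel tolerance. This is illustrative only -- the names (Chunk, draw_chunk, etc.) and the particular morph formula are placeholders, not the actual viewer code.

    // Illustrative sketch of chunk selection with a screen-space error metric.
    // Placeholder names and formulas; not the actual viewer source.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Chunk {
        float center[3];       // bounding-box center in world space
        float max_geom_error;  // max vertical error of this chunk's decimated mesh (meters)
        Chunk* children[4];    // quadtree children; null at the finest level
    };

    // Project the chunk's geometric error into approximate screen-space pixels.
    float screen_space_error(const Chunk& c, const float eye[3],
                             float viewport_width, float horizontal_fov /* radians */)
    {
        float dx = c.center[0] - eye[0];
        float dy = c.center[1] - eye[1];
        float dz = c.center[2] - eye[2];
        float distance = std::sqrt(dx * dx + dy * dy + dz * dz);
        float k = viewport_width / (2.0f * std::tan(horizontal_fov / 2.0f));
        return c.max_geom_error * k / std::max(distance, 1e-3f);
    }

    // Placeholder for issuing one batched draw call for a chunk, with its
    // geomorph parameter (0 = morphed to match the parent, 1 = full detail).
    void draw_chunk(const Chunk& c, float morph)
    {
        std::printf("draw chunk (max error %.2f m), morph %.2f\n", c.max_geom_error, morph);
    }

    // Draw a chunk if its projected error is within tolerance; otherwise
    // recurse into its four children.
    void render_lod(const Chunk& c, const float eye[3],
                    float viewport_width, float horizontal_fov, float tolerance)
    {
        float rho = screen_space_error(c, eye, viewport_width, horizontal_fov);
        if (rho <= tolerance || !c.children[0]) {
            // One possible morph schedule: ramp from 0 to 1 as the chunk
            // approaches its split distance, so vertices slide smoothly
            // instead of popping.
            float morph = std::min(std::max(2.0f * rho / tolerance - 1.0f, 0.0f), 1.0f);
            draw_chunk(c, morph);
        } else {
            for (int i = 0; i < 4; i++) {
                render_lod(*c.children[i], eye, viewport_width, horizontal_fov, tolerance);
            }
        }
    }

Each chunk accepted by this traversal is a single pre-built batch of vertices and indices, which is what keeps it friendly to T&L hardware.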


News

28 Jan 2004 -- Somebody just clued me in: this work has been extended with some fancy features and integrated with OpenSceneGraph. See http://www.intrepid.com/~vladimir/osg/. Cool stuff includes hardware mip-map generation, vertex-morphing using a Cg vertex program, etc. Awesome.


6 Aug 2002 -- Released programs and data from SIGGRAPH presentation. See downloads.


25 July 2002 -- Back from SIGGRAPH. The presentation went pretty well, and the demos worked well (on my laptop at least -- the presentation machines showed a weird bug that I'm hoping to get some help with). At the last minute I discovered I could cut memory use by about 3x by using Doug Lea's malloc (http://gee.cs.oswego.edu/dl/html/malloc.html). (Upon further investigation, it turns out there was an old MSVCRT.DLL in my path [Tcl8.3's fault, doh]. The correct MSVCRT.DLL from Win2K performs much better; about the same as Doug Lea's malloc.)

See link to slides in the docs section.

I'll put up some demos w/ data soon.


23 June 2002 -- New screenshots above, showing a 32K x 32K quadtree-tiled texture on the Puget dataset. The texture was generated from the 16K x 16K source heightfield, using a simple altitude-to-color mapping, and lighting based on slope. The 32K x 32K texture takes about 61MB in jpeg format. For rendering, that texture gets chopped up into a 9-level quadtree where each node is an independent 128 x 128 jpeg-compressed image. The geometry was processed w/ a 4 meter error tolerance, and rendered here w/ around a 2-pixel screen-space error tolerance, at a resolution of 1024x768 (the images above are scaled down to 800x600). The framerate is ~30 fps on a 1GHz P3 w/ GeForce2Go.
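
(As a quick sanity check on that level count: 32768 / 128 = 256 leaf tiles on a side, and a quadtree with 256 x 256 leaves has log2(256) + 1 = 9 levels, or roughly 87K tiles in total.)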


16 April 2002 -- I've checked in a draft of the SIGGRAPH course notes in pdf format. I'm going to retire the MSWord version soon, following my adventures w/ LaTeX. LaTeX is nightmarish, but the equations and pdf output are better, and I can edit in emacs.


31 March 2002 -- I'm flying around the 16K x 16K Puget Sound dataset (kindly made available by Peter Lindstrom, the University of Washington, and the USGS; have a look at it here: http://www.cc.gatech.edu/projects/large_models/ps.html and also the associated paper by Lindstrom and Pascucci, http://www.gvu.gatech.edu/people/peter.lindstrom/papers/visualization2001a/ ). Frame rate is good. The geometry actually isn't that challenging for the LOD algorithm -- real DEMs often aren't. The real world at a 10 or 30 meter sample frequency just isn't that bumpy. John Ratcliff's fake moon dataset (available from the .BT repository at http://vterrain.org/BT/index.html ) is tougher from a geometric LOD standpoint, because it's much rougher. However it's only 1K x 1K, so it's not difficult to deal with from a data-size point of view.

The Puget Sound data, on the other hand, is pretty big. At 16K x 16K x 16 bits, the raw heightmap data file is 512MB. The preprocessed chunk file is about 214MB if I decimate to within a 4-meter error tolerance. (Hugues Hoppe's PM demo of the same dataset at a 4-meter tolerance takes 256MB when it's unzipped; see http://research.microsoft.com/~hoppe/sr_puget.asp for the cool demo.) If I decimate to a 1-meter max error tolerance, which is much more interesting, the data file is just over 1GB! (That preprocessing run takes maybe half an hour.)

For comparison, Lindstrom & Pascucci describe a system that visualizes the same dataset at the full resolution (no predecimation), storing 20 bytes per vertex, which works out to 5+ GB, but they're also clear on the fact that they didn't try too hard to get the per-vertex data-size down. A data-optimal version of their approach would use 3 bytes per vertex, I think: 2 bytes for height, and 1 byte for error. That would work out to about 800MB.
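
A quick back-of-the-envelope check of those storage figures (illustrative only; 16384 x 16384 samples, binary megabytes):

    // Illustrative check of the per-vertex storage arithmetic quoted above.
    #include <cstdio>

    int main() {
        const long long verts = 16384LL * 16384LL;   // 16K x 16K heightfield samples
        std::printf("raw 16-bit heights : %lld MB\n", verts * 2 / (1 << 20));    // 512 MB
        std::printf("20 bytes per vertex: %.1f GB\n", verts * 20.0 / (1 << 30)); // ~5 GB
        std::printf(" 3 bytes per vertex: %lld MB\n", verts * 3 / (1 << 20));    // 768 MB, i.e. ~800 MB
        return 0;
    }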


Downloads

Code and binaries from the version shown at SIGGRAPH2002. Updated 2002-08-06. Contains source for Win32 & Linux, executables for Win32, and John Ratcliff's crater dataset.

Additional data files, hosted at SourceForge:

If you can afford the download time & disk space, the Puget Sound dataset is the most impressive, technically, because it's so big. I find the data itself fascinating. Riverblue has the best texture.

SVN access to the latest source code for the viewer and preprocessing tools: http://sourceforge.net/svn/?group_id=31763. You can build from scratch on Linux or Win32.


License

All the source code for this project has been placed in the public domain. Do whatever you want with it; no copyright, no restrictions whatsoever. The code comes with ABSOLUTELY NO WARRANTY.


Documentation

I presented this stuff at the "Super-size it! Scaling up to Massive Virtual Worlds" course at SIGGRAPH 02.

tu@tulrich.com