Quadtree tiling / unique full-surface texturing

Here are excerpts from a bunch of emails about texturing terrains.  Most of them are from the gd-algorithms mailing list at SourceForge.net, and some are from personal mail.  It's not well organized and it's probably pretty redundant, but here it is, all in one place...

From: "Thatcher Ulrich" 
To: 
Subject: Re: [algorithms] Texturing Terrains
Date: Thu, 17 Feb 2000 11:08:05 -0500

In Soul Ride, I've implemented a "unique full-surface" texturing
system.  What it means is that the LOD mesh rendering code can request
a texture for any quadtree square from the full terrain.  The textures
are always at a fixed size (64x64), but depending on how big the
quadtree square happens to be, the resolution can range from several
texels per meter up to one texel per kilometer.  The mesh renderer
uses a distance formula to decide when to lock a texture while it's
traversing the LOD quadtree.  So the closer to the camera, the more
squares will be used, and the higher texture resolution you get.  The
result is that the ratio of texels per pixel on the screen stays
within a limited range, even while the same terrain mesh contains both
close-up detail and long vistas.

The implementation involves a texture cache, and a way of generating
on the fly an arbitrary square of texture at arbitrary detail.  The
texture cache is no rocket science, and in practice no more than a few
hundred 64x64 textures are used in any given view.  My texture cache
currently has a size limit of 3M texels, which I think is pretty good.

As far as generating arbitrary texture squares, my current system
isn't that great, but it works.  The source data is just a set of
arbitrary overlaid rectangular grids of surface types, which can have
arbitrary magnification.  Lightmaps are precomputed and use the same
organization.  To generate a texture, first I paint the surface types
from the source rectangles into a 64x64 array, and add some Perlin
noise.  Then for each texel I use the surface type and texel indices
to choose a color out of a pre-drawn tile texture.  This all happens
in software, and then the texture gets sent to the card via
glTexSubImage2D().  There are surely better ways of doing this, and
footprints and stuff like John mentioned could be added as well.
Basically anything procedural that can be arbitrarily scaled and uses
a sane amount of data and computation would work.
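
As a rough sketch, the build step might look something like the
following (hypothetical code, not the actual engine source;
surfaceTypeAt, tileTextureFor, and perlin2 are made-up placeholder
hooks standing in for the real source-data queries):

    #include <GL/gl.h>

    const int TILE = 64;   // tile edge, in texels

    // Placeholder hooks -- not the real Soul Ride functions:
    struct Image { int width, height; const unsigned char* rgb; };
    unsigned char surfaceTypeAt(float wx, float wz);   // overlaid type rects
    const Image&  tileTextureFor(unsigned char type);  // pre-drawn tile image
    float         perlin2(float x, float y);           // 2D noise in [-1,1]

    // Build one 64x64 tile covering the square at (originX, originZ),
    // 'scale' meters per texel step, and upload over an existing texture.
    void buildTile(GLuint texId, float originX, float originZ, float scale)
    {
        unsigned char out[TILE][TILE][3];

        for (int j = 0; j < TILE; j++) {
            for (int i = 0; i < TILE; i++) {
                float wx = originX + i * scale;
                float wz = originZ + j * scale;

                // Paint the surface type, jittering the lookup with noise
                // so type boundaries wander instead of following the grid.
                float n = perlin2(wx * 0.1f, wz * 0.1f) * 4 * scale;
                unsigned char type = surfaceTypeAt(wx + n, wz + n);

                // Use the type and the texel indices to pick a color out
                // of that type's pre-drawn tile texture.
                const Image& src = tileTextureFor(type);
                int u = i % src.width, v = j % src.height;
                const unsigned char* p = src.rgb + (v * src.width + u) * 3;
                out[j][i][0] = p[0]; out[j][i][1] = p[1]; out[j][i][2] = p[2];
            }
        }

        glBindTexture(GL_TEXTURE_2D, texId);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TILE, TILE,
                        GL_RGB, GL_UNSIGNED_BYTE, out);
    }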

I also do a second pass to overlay a one-size-fits-all detail texture
for the nearby parts of the mesh, which helps a lot when you're close
to the ground.

Here's an (old) example screenshot showing the results:

old Soul Ride screenshot

And a few new ones where I've replaced the textures with a flat color,
to show the tiling:

tiling0.jpg
tiling1.jpg
tiling2.jpg
tiling3.jpg

The isotropic filtering and my quickie MIP-map code make the distant
slanted textures look blurrier than they should be :( But it really is
scalable; you can pick some random point out on a distant peak, zoom
into it, and see real unique texture detail down to 0.25 meter or so.
If you happen to pick a spot that a world-builder has put high-res
detail on, it might actually be interesting :)

One problem with this scheme is all the texture building and
downloading that has to happen when the frustum changes and the newly
visible tiles aren't in the cache.  It's definitely a performance
drain.  I have some more ideas about how to address it, which I
haven't tried yet.

--
Thatcher Ulrich
http://tulrich.com


From: "Thatcher Ulrich" 
To: "Chris Babcock" 
Subject: Re: Texturing terrains
Date: Thu, 17 Feb 2000 23:59:38 -0500

Hi Chris,

> Firstly, how are you dealing with lightmaps?  You mentioned
> you precalculated them.  For all resolutions, for a fixed
> resolution, or procedurally generated at the same resolution as
> the current texture patch?  I've been looking at
> doing it at the patch level during texture generation but I'm
> worried about the lighting moving around as the depth changes.

In my current setup, the surface-type maps and the heightmaps cover
the same areas at the same resolution (actually with a small twist,
mentioned below).  That is, each map can have different dimensions and
a different resolution, but each map contains both surface-type data
and height data.  The maps get overlaid on each other (in the case of
surface-types) or summed together (in the case of heights) to make up
the whole terrain.  So the lightmaps are easy; I just make them the
same size and dimensions as the surface-type maps, and then sample the
corresponding height information to generate the shading.  So the
areas with high-resolution height info also get high-resolution
lightmap coverage.

The lightmaps are actually applied in software, so the lighting is
already baked into the textures that get passed to the API.
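
A minimal sketch of that kind of lightmap shading, assuming a single
directional sun and a placeholder sampleHeight() that sums the
overlaid height maps (the sun direction and ambient term are made-up
values, not the shipping ones):

    #include <math.h>

    float sampleHeight(float x, float z);   // placeholder: summed height maps

    // Shade factor in [0,1] for the texel at world (x, z); multiplied into
    // the texel color when the lightmap is applied in software.
    float lightmapShade(float x, float z, float spacing)
    {
        // Surface normal from central differences of the height field.
        float dx = sampleHeight(x + spacing, z) - sampleHeight(x - spacing, z);
        float dz = sampleHeight(x, z + spacing) - sampleHeight(x, z - spacing);
        float nx = -dx, ny = 2 * spacing, nz = -dz;
        float inv = 1.0f / sqrtf(nx * nx + ny * ny + nz * nz);

        // Made-up sun direction (roughly normalized), plus a little ambient.
        const float sx = 0.5f, sy = 0.7f, sz = 0.5f;
        float diffuse = (nx * sx + ny * sy + nz * sz) * inv;
        if (diffuse < 0) diffuse = 0;
        return 0.2f + 0.8f * diffuse;
    }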

Does that answer your question?

> Secondly, if you're generating textures, how are you dealing
> with keeping the edge colors from bleeding?  The neighboring
> textures may be at different resolutions so it isn't just
> copying the edge texels.  Also, you may not have the necessary
> neighbor textures yet so do you wait until they're all done?

Because of the view-dependent tiling, the rasterizer doesn't have to
dip into the lower MIP levels much.  So I don't really have too much
of that problem where far-away tiles don't match their neighbors
correctly.  However, some of that problem comes into play where the
terrain surface gets parallel-ish to the view direction, because of
isotropic MIP-mapping.  I made a post a while back to the algorithms
list about that, and there are a couple screenshots in
soulride.com/images that highlight the problem.

To make the tiles match up seamlessly, I have to do a little bit of
voodoo with the global terrain texture coordinates, because they don't
match up exactly with the terrain height spacing.  For example, say
there's a 64x64 meter patch of height data, with one corner at 0,0 in
world coordinates, with height samples at 1m spacing.  Let's say we
want to cover that patch with a 64x64 tile.  You'd think the texture
samples would represent the terrain color at (0,0), (1,0), (2,0), etc.
Unfortunately that doesn't work because then the 64th texel is at
(63,0), instead of (64,0) where we need it to be to get a seamless
border with the neighboring tile.  So the texel spacing, instead of
being 64/64, or 1, is actually 64/63, or 1.015... So the tile samples
the terrain color at (0,0), (1.015,0), (2.031,0), etc, and the 64th
texel samples the terrain color at (64,0).

You could offset the texel samples in by 0.5, but then you get ugly
borders because the neighboring tiles' border texels are sampling a
different spot; it really is essential that the border texels of
neighboring tiles are sampling the same underlying spot on the map and
thus have matching colors.  I do it by independently generating the
same color from the same input data; copying the texels from
neighboring tiles is just a short-cut to do the same thing.

When the texture is actually applied (in OpenGL at least) you need to
leave a half-texel unused around the outside, so the centers of the
edge texels are exactly on the edges of the patch.
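
In code form, the arithmetic for one tile edge works out like this
(just a sketch of the math above, nothing more):

    const int N = 64;   // texels along one tile edge

    // World position sampled by texel i of a tile whose edge starts at
    // 'origin' and spans 'size' meters: the spacing is size/63, not size/64.
    float texelWorldPos(float origin, float size, int i)
    {
        return origin + i * (size / (N - 1));
    }

    // Texture coordinate for world position x inside the patch, placing
    // the *centers* of the edge texels exactly on the patch edges (the
    // half-texel border):
    float texCoordFor(float origin, float size, float x)
    {
        return (0.5f + (x - origin) * (N - 1) / size) / N;
    }
    // x == origin        ->  0.5/64  (center of texel 0)
    // x == origin + size -> 63.5/64  (center of texel 63)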

I hope that makes some sense and is helpful.  It took me a while to
figure this stuff out originally -- it's not like it's immediately
obvious :).

-Thatcher

From: "Thatcher Ulrich" 
To: 
Subject: Re: [Algorithms] Terrain Texturing by Splatting
Date: Thu, 14 Sep 2000 00:37:21 -0400

From: Charles Bloom 
>
> Another approach that has been frequently discussed (and we've
> implemented here at Surreal) is generating textures on the fly.
> In these schemes you use something like a quadtree to generate
> a bunch of textures, with small quads near the viewer, and
> large ones far away.  These textures are generated on the CPU
> as you fly around and new stuff comes into view or the lod of
> a section changes.  This gives you lots of texture detail; you
> could use Perlin noise or a texture compositing model (that's what
> we do).  Unfortunately, the CPU overhead is way too large for
> forward-thinking T&L titles.

I disagree -- I think the hardware trends are in favor of unique
full-surface texturing (I hope I'm not misusing that phrase, but it
seems to fit the concept well).  The reasoning has to do with the fact
that a typical scene at a fixed display resolution only needs a
bounded number of texels to uniquely texture every pixel.  Carmack's
recent .plan about video memory explains this pretty well.  And the
required number of texels for say a 1024x768 scene is pretty
reasonable; theoretically you only need one texel per pixel although
in practice you need more because of overdraw, imperfect tiling
efficiency, etc.  The relevant hardware trends are: CPUs, buses, and
GPUs continue to get faster, while display resolution is creeping up
relatively slowly.  You could even argue that display resolution is
experiencing a negative trend due to the shift towards consoles.

Anyway, once you have a scalable texturing framework in place, the
overhead problems aren't too bad, and they're getting less bad all the
time.  I'm using such a scheme right now in Soul Ride which is due to
ship soon.

The trouble spots for CPU-rendered unique full-surface textures are:

* Dynamic textures
* Very fast viewpoint movement
* Texture-building schemes that need gobs of fill-rate
* View-dependent shading

Outside of those caveats, the advantages are huge.  Bounded texture
memory requirements for unlimited environments.  Any texture
generation/decompression/whatever scheme you can come up with can be
plugged in.

> [... splatting scheme ...]

I'm not sure I fully understand what you're proposing... but I'm not
going to let that keep me from continuing to blabber :)

> texture load than splatting).  Basically what I'm doing is trying to
> trade triangle load for texture load; I'm trying to use the very
> capable triangle rendering hardware to generate my texture detail
> for me.

Good point.  I'd twist it around: a reliable hardware
render-to-texture facility coupled with a decent texture
tiling/caching scheme would get the CPU out of the pixel-pushing
business, while still delivering the benefits of unique full-surface
texturing.  It seems silly to composite a bunch of full-resolution
source textures in 3D using multiple passes over complex geometry
every frame, when you could composite once into the needed destination
resolution, cache the results, and then render the geometry in one
simple pass for many frames.  You could have much more complex
compositing operations, and still save the fill rate for the effects
which really benefit from it.

For heightfield terrains I feel like I've more or less solved the
scalable tiling/caching/CLOD framework.  For more complex stuff, I
have no idea, but I'm happy to outline what I've done if it's helpful.

--
Thatcher Ulrich
http://tulrich.com


From: Rui Ferreira 

> I agree with your words about the bounded texel/pixel ratio for a
> frame buffer of a given size; however, to achieve this result you
> need a lot of texture memory to cache all on-screen textures.  How
> did you overcome this problem?

The memory requirements are not bad at all, because as distance
increases you can reduce the resolution of the tiles with no visible
penalty.  At the moment in Soul Ride my texture cache for all terrain
rendering is set at 3M texels; only 6MB with 16-bit textures.  I use
64x64 texture tiles which are stretched to various sizes according to
view distance.  For most views, the terrain only uses maybe 200 or 300
tiles, or ~1M texels.

> Memory for cached textures versus dynamic texture generation for a
> local terrain area seems like a trade-off to me...

Well, I don't do any dynamic terrain textures in Soul Ride, so I haven't run
into that yet :)

> Can you please enlighten us a bit on how you handled texture
> generation and caching?  Are you setting texture priorities on load
> and letting the driver handle it, or are you swapping stuff yourself?

I'm doing the swapping explicitly.  I have a hash table with ~700
slots for 64x64 tiles.  When the terrain renderer needs a texture for
a certain square of the terrain, it passes the coordinates and
resolution to the cache manager, which passes back the texture.  The
cache manager first looks in the hash table to see if the texture is
already available.  If not, it has to build the texture on the fly
from the source data.  My texture building scheme is not the greatest
thing, but the executive summary is that it uses some noise and some
artist-supplied input data to create square tiles at any resolution
with some procedural detail down to the maximum resolution.

The cache manager keeps some stats about texture usage in the hash
table, so when the table starts to get full it can discard tiles that
haven't been used recently.  The discarded tiles first go into a list
of re-usable textures, and if that list overflows I call
glDeleteTextures() to free the tiles completely.  When a tile is
created I first check the re-usable list and do glTexSubImage2D(), or
if there are no textures waiting to be reused I just create one from
scratch using glTexImage2D().
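
The shape of that path, as a hedged sketch (a std::map stands in for
the hash table, fillTileImage is a placeholder for the builder, and
eviction -- moving stale entries' texture ids onto the re-usable
list, with glDeleteTextures() when it overflows -- is left out for
brevity):

    #include <GL/gl.h>
    #include <map>
    #include <vector>

    struct TileKey {
        int x, z, level;                    // which terrain square, what res
        bool operator<(const TileKey& o) const {
            if (x != o.x) return x < o.x;
            if (z != o.z) return z < o.z;
            return level < o.level;
        }
    };

    struct TileEntry { GLuint texId; int lastUsedFrame; };

    std::map<TileKey, TileEntry> cache;     // stands in for the hash table
    std::vector<GLuint> reusable;           // discarded but still-live textures

    // Placeholder: paints the 64x64 image from source data + noise.
    void fillTileImage(const TileKey& k, unsigned char rgb[64][64][3]);

    GLuint getTile(const TileKey& key, int frame)
    {
        std::map<TileKey, TileEntry>::iterator it = cache.find(key);
        if (it != cache.end()) {            // hit: just refresh the stats
            it->second.lastUsedFrame = frame;
            return it->second.texId;
        }

        unsigned char rgb[64][64][3];
        fillTileImage(key, rgb);            // miss: build on the fly

        GLuint id;
        if (!reusable.empty()) {            // recycle a discarded texture
            id = reusable.back();
            reusable.pop_back();
            glBindTexture(GL_TEXTURE_2D, id);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 64, 64,
                            GL_RGB, GL_UNSIGNED_BYTE, rgb);
        } else {                            // or create one from scratch
            glGenTextures(1, &id);
            glBindTexture(GL_TEXTURE_2D, id);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, rgb);
        }
        TileEntry e; e.texId = id; e.lastUsedFrame = frame;
        cache[key] = e;
        return id;
    }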

> About the tex generation: are you building them on the fly?  I had
> less than stellar results with this using a procedural approach,
> where I derive texels from the terrain normal and the height noise
> function.  Maybe you're using source textures and just doing a fast
> blending operation?  That sounds a lot like the "splat" approach
> proposed by Bloom, only with unique textures.

Yeah, I use some source textures along with some noise and precomputed
lightmaps.  A little "eye of newt, toe of frog" as well.

> The tex LOD tree also has problems to overcome: if you move your
> viewpoint, LOD tends to create a *lot* of tex generation requests,
> not to mention the overhead of texture uploading.  I minimized this
> by only handling a fixed number of uploads per frame, and a fixed
> number of texels generated in the texgen loop.  Of course, this
> means you will have to render lower detail textures on the screen
> before the new textures are ready for consumption, which looks bad
> of course...
> Basically I avoided pops with ROAM, only to get texture pops
> now... :] sigh...

I have the same problem and use the same solution.  It's the weakest
link in the chain at the moment, but it does work and it's not too
terrible.  The faster the hardware, the less noticeable it is.  But I
probably have a high tolerance for my own engine's artifacts :)
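
The throttle itself can be as simple as this sketch (reusing the
TileKey and getTile names from the cache sketch above; cacheContains
and the budget constant are made up for illustration):

    bool cacheContains(const TileKey& k);   // placeholder lookup-only check

    const int MAX_BUILDS_PER_FRAME = 4;     // tuning knob, value made up
    int buildsThisFrame = 0;                // reset to 0 each frame

    GLuint requestTile(const TileKey& key, int frame)
    {
        if (cacheContains(key))
            return getTile(key, frame);     // hit: no build needed

        if (key.level == 0 || buildsThisFrame++ < MAX_BUILDS_PER_FRAME)
            return getTile(key, frame);     // miss: build, within budget

        // Over budget: draw with the coarser parent tile this frame and
        // accept the texture pop when the real tile gets built later.
        TileKey parent = { key.x / 2, key.z / 2, key.level - 1 };
        return requestTile(parent, frame);
    }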

> Are you able to generate all the needed texels that enter the
> frustum on a single frame cycle at interactive frame rates??!

Nope, not at the full desired resolution.  Substituting lower-res
tiles helps a lot though, as well as the fact that sharp camera cuts
can generally tolerate a longer frame time.  Once the cache is warmed
up, I find that ordinary camera moves don't demand that sort of
performance.

> Also, you said, "Any texture generation/decompression/whatever scheme you
> can come up with can be plugged in".
>
> This seems natural; with full unique textures you can do anything,
> from scorch-mark decals to local shading.  But if you want good
> results, you start getting into things like local-ecosystem-based
> texturing instead of the typical noise lore.  But of course, this
> eats a LOT of cycles; unless you're counting on 2000MHz CPUs
> you must use some very slick algos. :]

My philosophy on this has evolved.  2 years ago I wanted everything to
be procedural and generated on the fly, so I could have infinite
worlds, etc.  Nowadays I think that's not so interesting.  The really
interesting environments come from either the real world, the mind and
hand of a designer, or (as you point out) procedural techniques using
gobs and gobs of processing that are impractical to do in real time.
And with procedural techniques you almost always want to have some
artistic oversight to tweak things here and there.  So I believe it's
best to view texture generation as a compression problem, where you
want to empower the designer to fully specify the surface as much as
possible, and use procedural texture-generation as power-tools for the
designer.  There's still a lot of room for creativity in there because
procedural/fractal stuff gives such incredible compression ratios
compared to JPEG or whatever, and I don't know what the "ideal" scheme
is, but I really believe in the value of arbitrary control data
whether it comes from a person or the real world.  I've heard the same
sort of sentiments from other game programmers from time to time.

--
Thatcher Ulrich
http://tulrich.com


From: Bryan Turner (?)
>   That brings up another area I was experimenting in.  You mention
> mip-maps; currently I'm only uploading the L0 image, to save time &
> bandwidth.  This creates the moiré-pattern effect that I hate when
> viewed at low angles.

> Have you measured the cost of uploading the L1-LN mip-maps?  Do you
> pre-generate the mip-maps, or create them on the fly?

I haven't measured the cost systematically... the mip-map stuff shows
up in VTune but not as a bad hotspot.  For terrain tiles I always
generate the mipmaps on the fly and upload them.

>  I had ignored the issue up till now, but I think it's a requirement
> at this point.  I'm imagining the highest-detail textures would be
> uploaded at L0 for speed.  Then in subsequent frames when they are
> split, I would upload their pre-generated image pyramid (L1-LN) and
> switch the MIN_FILTER to one of the MIP varieties.

In Soul Ride I just treat the tiles independently; I don't try to
share results across LOD levels.  There are some weird sampling issues
in my system that I think would make sharing difficult or impossible.
Maybe you've figured out how to sidestep the sampling
problems... basically the problem for me is that a 64x64 tile at some
detail level N samples a (for example) 64 meter x 64 meter square of
terrain.  The centers of the edge texels have to correspond with the
outer edges of the square, in order to seamlessly match up with the
neighbor tiles.  But that means that the spacing of texture samples is
actually 64 meters / 63 texels (draw a picture with 4x4 tiles if this
doesn't make sense at first).  The tile at the next level up has 2x
the sample spacing, but because the edge texels still have to sample
the exact edges of the 128m x 128m terrain square, the samples are
positioned differently than the way the next MIP-map is positioned.
So the coarser MIP level of a tile at LOD N is different than the
corresponding quadrant of a parent tile at LOD N-1.

I'm not sure if that makes any sense or not... but anyway, it
discouraged me from trying to re-use the MIP-map computation.
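
A quick numeric illustration of the mismatch, using the 64/63 spacing
(illustrative only):

    #include <stdio.h>

    int main()
    {
        // Child tile: 64 texels spanning 64m, so spacing = 64/63 m.
        // Its next MIP level averages texel pairs, putting those samples
        // at the midpoints: 0.508m, 2.540m, 4.571m, ...
        for (int i = 0; i < 3; i++)
            printf("child MIP sample %d: %.3f m\n",
                   i, (2 * i + 0.5) * (64.0 / 63.0));

        // Parent tile: 64 texels spanning 128m, spacing = 128/63 m, with
        // samples at 0.000m, 2.032m, 4.063m, ... -- not the same spots.
        for (int i = 0; i < 3; i++)
            printf("parent sample %d:    %.3f m\n", i, i * (128.0 / 63.0));
        return 0;
    }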

>   This would be very fast overall, and yet produce good looking
> textures over time (I hope..).

Yeah, I haven't yet squeezed out all the inefficiencies from my
system, but it still seems to work pretty well, especially on high end
hardware.

-- 
Thatcher Ulrich
http://tulrich.com


On Oct 30, 2000 at 12:28 -0000, Rui Ferreira wrote:

> Anyway, I would like to ask you about texturing metrics.  I want to
> use a more accurate metric to choose the level of detail for any
> given triangle (instead of a simple distance ratio), and so I'm
> calculating the triangle area on screen to get the number of texels
> that any given triangle needs...
>
> [...]
>
> Do you use some metric like this in your engine?
>
> Do you think I should stop smoking pipe and just do the leaf calc?
> What tradeoffs have you made in this area?

Hi Rui,

The leaf calc to me sounds like trouble -- you want to avoid as much
per-triangle work as possible.  Jon Blow used to do a leaf calc that
even took into account the polygon slope, but I didn't agree with his
reasoning on that, and I think he's changed to something else since
then.

Anyway, what I do is pretty simple, but I think it solves this problem
well.  Conceptually, I have a function which computes the desired
texture LOD, given a distance from the viewpoint.  It's basically just
a log2 with some scales and offsets.  So given any single point on the
terrain, this function will return the best texture LOD for that
point.

When I'm recursing through my tree during rendering, at each node I
use that function to compute the maximum and minimum texture LOD for
that node, using the closest and furthest points on the node's
bounding box.  If the max and min LOD's are the same, then I know that
the computed LOD is suitable for the entire node, even with
perspective, and so I set that texture and don't do any more texture
computations in the child nodes.  If the computed LOD's are different,
then I leave the texture unset, and do the computation again in the
child nodes.

On occasion I'll get differing min/max LODs for a node that has
some leaf triangles in it.  In this case I just pick the LOD at
the closest point and set the texture.  But this case is unusual for
me, so it doesn't much matter.
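
Here's a sketch of that recursion (the NEAR_RANGE constant and the
Vec3/Box/Node types are illustrative placeholders, not the actual
engine code):

    #include <math.h>

    struct Vec3 { float x, y, z; };
    struct Box {
        Vec3 lo, hi;
        float minDistance(const Vec3& p) const;  // placeholder: nearest point
        float maxDistance(const Vec3& p) const;  // placeholder: farthest point
    };
    struct Node {
        Box   bound;
        Node* child[4];
        bool  isLeaf() const;                    // placeholder
        void  setTexture(int lod);               // fetch tile from the cache
    };

    const float NEAR_RANGE = 32.0f;  // full-res range, made up for the sketch

    // Desired texture LOD for a point at distance 'dist' from the eye:
    // basically just a log2 with a scale and an offset.
    int textureLOD(float dist, int maxLOD)
    {
        if (dist <= NEAR_RANGE) return maxLOD;
        int drop = (int)floorf(logf(dist / NEAR_RANGE) / logf(2.0f));
        return drop >= maxLOD ? 0 : maxLOD - drop;
    }

    void textureNode(Node* node, const Vec3& eye, int maxLOD)
    {
        int lodNear = textureLOD(node->bound.minDistance(eye), maxLOD);
        int lodFar  = textureLOD(node->bound.maxDistance(eye), maxLOD);

        if (lodNear == lodFar || node->isLeaf()) {
            // One LOD suits the whole node (or we must decide here, so
            // take the closest point's LOD); children inherit it.
            node->setTexture(lodNear);
        } else {
            // Straddles an LOD boundary: leave unset, decide per child.
            for (int i = 0; i < 4; i++)
                textureNode(node->child[i], eye, maxLOD);
        }
    }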

Hope this helps.

And stop smoking pipe in any case, or it'll start smoking you :)

-- 
Thatcher Ulrich 
==  Soul Ride -- pure snowboarding for the PC -- http://soulride.com
==  real-world terrain -- physics-based gameplay -- no view limits


On Jan 16, 2001 at 12:12 +0100, Timo Heister wrote:
> Hi Thatcher,
> 
>  Tuesday, January 16, 2001, you wrote:
> 
> > Soul Ride uses a system that sounds similar to what you're talking
> > about: unique texturing, rendered by software and then downloaded into
> > a texture cache.  The rendering/downloading hurts, especially on
> > TNT-level hardware, but it is feasible.
> 
> Does Soul Ride have a fully unique texturing approach?  Or are some
> textures used a few times?  (Should be possible when displaying snow.)
> Anyway, I think I will give it a try.

It's fully unique from the texture cache down.  In other words, there
is a unique texel in texture RAM for every spot on the visible
terrain.  The texture tiles are built on demand, and the rendering
process uses predefined tile types blended using a noise function and
lightmaps, so there is quite a bit of apparent "sameness".

[...]
> >   John R's description and
> > screenshots are very generous and should give you good ideas to keep
> > you busy for a while.
> 
> Yep, it was really kind of him to tell us his secret algos.  But I
> think my engine won't be the future reference.  It kind of helped me,
> but I'm not sure if it works that well on current hardware.

Don't be so sure... I think the extra detail passes might be tough on
older hardware, but most of the other ideas seem usable.

> > * you can't reasonably fit the full-resolution textures you need in
> > texture RAM all at once (e.g. in John's case 8192m x 8192m -->
> > 64Mtexels), however, for any given viewpoint, most of the terrain is
> > far away and so can use a lower res version.  So you need some sort of
> > texture cache, and you need to be able to load/generate mip-mapped
> > versions of your texture tiles.
> 
> The question is whether leaving a really low-resolution texture of
> the whole terrain in texture RAM is a good idea.  When you get near a
> quad you can upload a larger version.  What do you think?  It probably
> depends on how large your terrain is and where the far clipping plane
> is.

Sure.  In fact, you pretty much *have* to do something like this.
Just to do some back-of-the-envelope calculations, let's say your
terrain is like John's, 8192m x 8192m with 1m texels in the base map.
So that's 64M texels at full res.  But you only need full res locally.
At a FOV of 90 degrees, and a 640-pixel-wide window, the terrain
texels will be smaller than one pixel when the viewpoint is more than
320 meters away.  At 640 meters, you can switch to 2m texels with no
visible impact.  At 1280 meters you can use 4m texels, and at 2560
meters you can use 8m texels.  8192 / 8 == 1024, so a single 1024x1024
texture representing your whole terrain looks perfect for all terrain
that's further than 2560 meters away, and only consumes 1M texels.
You also need tiles for the closer terrain, and they take up ~1M
texels per level (actually a little less in practice due to overlap).
So that's three more levels to get to your highest resolution (4m, 2m,
and 1m).  Total texture RAM usage: ~4M texels.  Furthermore, that
includes textures that are outside the frustum, so you actually only
see ~1M texels at any given time.
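
The same back-of-the-envelope in code form (purely illustrative):

    #include <stdio.h>

    int main()
    {
        const float screenWidth = 640;   // pixels
        // At a 90-degree FOV the frustum is 2*d meters wide at distance d,
        // so a texel of size s covers screenWidth * s / (2*d) pixels.
        for (float texelSize = 1; texelSize <= 8; texelSize *= 2) {
            // Distance at which this texel size drops below one pixel:
            float d = screenWidth * texelSize / 2;
            printf("%gm texels are sub-pixel beyond %gm\n", texelSize, d);
        }
        return 0;   // prints 320m, 640m, 1280m, 2560m
    }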

In Soul Ride I use a 3Mtexel cache for terrain textures, because my
terrain is bigger and the finest resolution is 0.25 meters/texel, but
once you have the system in place it's pretty scalable.

> > * detail rules, but RAM is limited... if you're serious, you need to
> > get into paging and/or compression.  Detail overlays, tiling,
> > splatting, procedural texturing etc can all be thought of as
> > compression techniques in some sense.
> 
> Right, does anyone have any experience with texture compression
> (S3TC, DDS, ...)?  How is the quality of the textures?  How fast is
> the generation (when generating compressed textures in real time)?

My system is totally ad hoc, based on synthesis (and also RLE of some
of the control data), but I suspect that something like JPG would work
great for generic image data.  It's probably faster than grabbing
uncompressed data off the disk, anyway.
 
> > Those are the principles, the rest is just a bunch of minor
> > details... :)
> 
> The problem sometimes is in the detail !
              ^^^^^^^^^
               always!

-- 
Thatcher Ulrich 
==  Soul Ride -- pure snowboarding for the PC -- http://soulride.com
==  real-world terrain -- physics-based gameplay -- no view limits


On Feb 12, 2001 at 12:35 -0800, Alex Pfaffe wrote:

> I took a look at the SoulRide demo the other day, and it's pretty
> cool how you zoom in to the snowboarder sitting at the top of the
> hill.  I am wondering how the texturing is done.  Many games use a
> technique of having a fixed set of basis textures and then
> transitioning between them.  I have always wondered how this is
> done.  Doesn't this require a tremendous number of transition
> textures?  Let's say you have just two textures; wouldn't you require
> 8 transition textures, one for each side and one for each corner?
> Then given 3 or 4 textures, the number of transition textures would
> explode exponentially, so clearly that cannot be the way it is done.
> Alternatively you need a basis texture and 8 transitions to alpha,
> but then many triangles would require two rendering passes, one solid
> and one with the alpha transition, which also seems like a problem.

Hi Alex,

The basic approach is that texture tiles are built on-the-fly as they
become visible, and put into a texture cache.  Texture tiles are
square 64x64 texel images, which cover a variable-sized square of
actual terrain geometry (depending on distance from the viewpoint).

To build a particular tile, the engine uses a set of terrain-type maps
which just encode the terrain type (e.g. four or five types of snow
and ice, plus a couple types of forest, water, rock, etc).  There is a
routine which takes the terrain type map for a given tile, combines it
with a noise function and a set of image tiles to represent each
terrain type, and generates the blended tile.  So there are no
predefined transition tiles; the transitions are defined by the noise
function and are completely unique over the whole terrain (although
they have the same general "look" everywhere).

After building a tile, the lighting is applied in software by
modulating with a pre-calculated lightmap.

I'm not convinced this is the best way to do things, given modern
hardware, but it does give the capability of really enormous, detailed
terrain texturing.  I use RLE compression on the terrain types, which
coupled with the natural compression of reusing a set of image tiles
gives a pretty good amount of detail for a relatively small amount of
storage.

> My terrain engine currently has unique textures for the entire
> terrain, but this severely limits the amount of detail you can have
> on the terrain.  I would like to add a detail pass which uses some
> kind of blending technique like what you can find in games like
> EverQuest/Asheron's Call, dozens of flight sims, etc.

I do apply a detail texture as a second pass over the nearby terrain
tiles (it could be multitextured, but I never got around to
implementing that).  The detail texture is just a very dark 256x256
additive gray-scale generic pattern that gives a little added visual
interest to nearby stuff.  It would be nice to have different detail
textures for different terrain types, but I couldn't come up with a
good approach that fit in with the way I was doing terrain types.
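
A sketch of what such an additive second pass looks like in
fixed-function OpenGL (the 16m repeat, the detailTexture handle, and
the world-space texture coordinates are assumptions for illustration):

    #include <GL/gl.h>

    GLuint detailTexture;   // the 256x256 dark additive gray-scale pattern

    // Second pass over the nearby terrain, assuming it is drawn with
    // world-space (meters) texture coordinates for this pass.
    void drawDetailPass()
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);      // additive: framebuffer + detail
        glDepthFunc(GL_LEQUAL);           // repaint the already-drawn pixels
        glBindTexture(GL_TEXTURE_2D, detailTexture);

        // Scale meters into texture repeats: 256 texels at ~0.06 m/texel
        // is about one repeat per 16m (both numbers approximate).
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glScalef(1.0f / 16.0f, 1.0f / 16.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);

        // ... submit the nearby terrain triangles here ...

        glDisable(GL_BLEND);
    }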

-- 
Thatcher Ulrich 
==  Soul Ride -- pure snowboarding for the PC -- http://soulride.com
==  real-world terrain -- physics-based gameplay -- no view limits


Hi Jörgen,

I took a peek at the Clusterball site; it looks like fun, and the
screenshots look great.  In my previous job I wrote a game called
Aztec 2000 which was a Ballblaster-inspired game for exercise
machines.

In Soul Ride, I ended up building a "unique full-surface" texturing
system.  It's similar in concept to your idea of storing a really
large compressed texture, and decompressing parts of it on demand.
But instead of normal image compression schemes, I use more
specialized methods.  For example, I store type-maps, which encode the
"type" of terrain at each point.  Much like a tiled system.  But to
fill up an area with a particular type, I mix together two different
textures, selecting from one or the other at every texel based on a
noise function.  That way, we avoid an obvious tiling pattern.  The
boundaries between different surface types are similarly broken up
using a noise function.  The attached screen shot illustrates the
results... there is a low-res type map, with a mixture of ice, rock,
and snow, and the noise functions blend everything together.  There's
also a lightmap which is applied over the whole thing.

The textures are all built and composited in software, and then passed
to a surface cache which stores the textures needed for the current
frame.  The system seems to work reasonably well, but I think we're
going to need to use much higher resolution for the type maps, to get
the detail we want.  It should compress pretty well using RLE or
something like that, since there are only 16 different types.  We're
also experimenting with defining polygonal type regions which then get
painted into the textures.

I'm attaching a screenshot and some mail I sent to the algorithms
mailing list on the same topic.

-Thatcher


From: John Ratcliff 

> I would like to hear some ideas about detail mapping terrains.
>
> [conventional detail mapping]
>
> However, my real concern is that neither of these methods provides
> an ideal solution.  You still just get one base detail map, tiled,
> without any direct correlation to the underlying geography.
>
> The absolute ideal solution would be a high resolution synthesized
> detail texture map that was specific to the underlying geography.
> If you are on grass, then you get detailed grass, if you are on rock
> you get detailed rock, and if you are in between the two a smooth
> blend.  If you have a road, you get a road, if you leave footprints
> in snow they exist in the detail map as well.

Yeah, definitely.

I have a sort of two-level scheme in Soul Ride.  The first level is
unique full-surface texturing, which currently goes down to a max
resolution of 0.25 meters / texel.  That's still pretty blurry when
you're at head-height, so I blend a conventional detail texture on
top, which has I think about 0.06 meters/texel.  It's a
one-size-fits-all detail texture with the associated problems, which
my artist bemoans regularly.  It's better than nothing, though.  I'm
using the faded MIP-level scheme, which works great on recent NVIDIA
hardware, but maybe I'll have to change to iterated alpha if it
doesn't work well on other cards.
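
For an additive detail texture, the faded-MIP idea amounts to dimming
each successive MIP level toward black, so the detail fades out with
distance instead of turning into uniform noise; a sketch (the fade
factor passed in would be a tuning value, not the shipping number):

    // Builds the next MIP level from 'prev' (a square, single-channel
    // gray-scale image, prevSize x prevSize) by box-filtering and dimming,
    // so each coarser level fades toward black.
    void buildFadedMip(const unsigned char* prev, int prevSize,
                       unsigned char* out, float fade /* e.g. 0.7 */)
    {
        int size = prevSize / 2;
        for (int j = 0; j < size; j++) {
            for (int i = 0; i < size; i++) {
                int sum = prev[(2 * j)     * prevSize + (2 * i)]
                        + prev[(2 * j)     * prevSize + (2 * i + 1)]
                        + prev[(2 * j + 1) * prevSize + (2 * i)]
                        + prev[(2 * j + 1) * prevSize + (2 * i + 1)];
                out[j * size + i] = (unsigned char)((sum / 4) * fade);
            }
        }
        // Upload each level with glTexImage2D(GL_TEXTURE_2D, level, ...);
        // a few levels down, the detail has faded to black, so the
        // additive pass contributes nothing at distance.
    }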

The full-surface texturing is kind of interesting and it speaks to the
detail problem.  Because of perspective, you don't actually need that
much texture RAM in any particular view, to have "perfect" detail
everywhere on the terrain.  My texture cache of 4M texels is more than
enough for any possible view, at 640x480.  Carmack expressed this
pretty well in his recent .plan talking about texture RAM.

The trick is in filling that cache with the right stuff.  I use a
quadtree tiling mechanism which I think I wrote about on the list a
while back.  It fits in well with the CLOD scheme.  So for texturing,
the CLOD mesh renderer sends requests to a surface-generation module,
for 64x64 texture patches, each with a specific origin and scale.  If
the patch is in the cache, it's returned; otherwise it's built, modulo
some other minor complexities.  The scale determines how much
world-space the patch covers.

So the surface generation module must be able to build the texture for
any piece of the terrain at any resolution.  There are lots of
possible ways to do this.  You could pre-create the whole thing and
store it in some standard compressed image format, like John
mentioned, but I rejected that as requiring too much RAM.  You could
store polygonal or curved shape boundaries and rasterize them into the
patches as needed.  I actually tried polygonal boundaries, and didn't
like the results too much for what I was doing, but they'd be perfect
for some things like roads or airstrips etc.  The scheme I'm currently
using is to have a set of multi-resolution "surface-type maps" which
cover the whole terrain at designer-controlled resolution.  A surface
type consists of a couple of repeating tiled textures, which are
blended according to a Perlin-noise function to break up the tiling
pattern (and replace it with a noise pattern).  When junctions between
more than one surface type are magnified, I use a Perlin-noise
function to pattern the intersection.

Lightmapping info is stored independently, but is also
multi-resolution, and it's just modulated onto the patches after the
surface stuff is painted.

This scheme looks OK and it's fairly scalable, but it *is* relatively
slow.  The framerate roughly halves during frames when
the engine has to build a lot of textures.  But it's really
un-optimized code at the moment since I've been trying various options
to get the visuals looking good; I think I can probably get a factor
of at least 4 speedup out of the current scheme.

So, in my experience the caching stuff just works.  It's trivial to
scale the minimum texel size as small as you want, and I've
experimented a bit with it, but performance suffers if you go too low,
hence my use of ordinary detail mapping for the last bit of detail :)

It's the procedural-texturing-at-any-resolution part that's pretty
much wide-open.  I'm only moderately satisfied with my current scheme,
and it's not very general-purpose either.  The nice thing about
caching procedural patches is that it lends itself to combining
multiple approaches.  Footprints in the snow and that kind of thing
are very easy to add to such a system, because you're just compositing
images or rendering primitives or whatever.  Using completely
different detail maps for different surface types is easy (in fact
that's sort of what I'm doing).  And so on...  basically all the
tricks we already know, but even easier because you're rasterizing
into a square orthographic buffer.  The multi-resolution requirement
makes some things tricky, but the resolution changes in power-of-two
increments.  (Side note: fast render-to-texture would be
really handy :).

This would probably be a good area to figure out a modular plug-in
scheme.  I guess it's similar in concept to the shader-language stuff
that Quake3 uses, so other people are way ahead of me here.

> The problem with this is two-fold.  First, synthesizing such a
> texture in real time is computationally intensive.  Second, the size
> of the texture map required would be absolutely massive.

I think synthesis and storage are really just two ends of the same
spectrum.  My "procedural" scheme uses a lot of input data, so it
could just be seen as a specialized form of compression, and some of
the compression schemes could be seen as sophisticated types of
synthesis.  Interesting stuff.

--
Thatcher Ulrich
http://tulrich.com