Procedural wall/floor generation


Hello, I'd like to ask you something. I'm trying to understand how the walls in Zelda Link's Awakening are made.

I'm quite sure those walls haven't been modeled as a single block.
It gives me the impression that they've instead been generated automatically into that particular shape. Maybe I'm wrong, which is why I'd like to know if someone can clear up this doubt for me. I need to understand whether it's possible to do something like this dynamically and automatically from the layout I choose when designing the level.


The actual models could be pieces chosen from a library in the game, or generated before the game shipped so each room has it all included as a model with the level; both are common options.

Individual hand-crafted pieces were very common when memory and storage space were extremely limited, but these days there is enough processing power and storage in most situations to allow whatever the game needs. Now it is almost entirely about design considerations, costs, and deadlines rather than size/space limitations; it's all about what works best for the design.

I've worked with tools where a level designer can trace out where they want walls to go, specify the height and the material, and then the system stitches together various assets into a final mesh. I've also worked on plenty of games where the entire world is built from specially made building blocks, each one hand-crafted by artists. Both are reasonable options.
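To make that first kind of tool concrete, here is a minimal sketch (all type and function names are hypothetical, not from any particular engine): the designer supplies a polyline plus a height and material, and the code drops pre-built wall-segment instances along it. A real tool would also handle corners, end caps, and merging the instances into one final mesh.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

struct WallInstance {
    Vec2  position;   // world-space placement of one wall segment
    float rotation;   // yaw in radians, so the segment follows the traced path
    float height;     // designer-specified wall height
    int   materialId; // designer-specified material
};

// Walk the designer-drawn path and emit one segment instance per step.
std::vector<WallInstance> BuildWall(const std::vector<Vec2>& path,
                                    float segmentLength,
                                    float height, int materialId)
{
    std::vector<WallInstance> out;
    for (size_t i = 0; i + 1 < path.size(); ++i) {
        Vec2 a = path[i], b = path[i + 1];
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len <= 0.0f) continue;                          // skip degenerate edges
        float yaw = std::atan2(dy, dx);
        int count = std::max(1, static_cast<int>(len / segmentLength));
        for (int s = 0; s < count; ++s) {
            float t = (s + 0.5f) / count;                   // evenly spaced along the edge
            out.push_back({ { a.x + dx * t, a.y + dy * t }, yaw, height, materialId });
        }
    }
    return out;
}
```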

It's pretty common in big games these days to have mesh generation and manipulation in the final game. They're a mix of artist-provided content and programmer-created formulas. There are tools and libraries in the Marketplace that can build up walls and similar shapes for you. You can also do it yourself if you have the skill and time to write them. You need both parts: the art assets that get manipulated and the code that does the manipulation.

In most games I've worked on, those meshes are joined and bundled up as part of the asset cooking process; it's more reliable to bundle them up and ship the polished resource than it is to generate it at runtime. But I've also worked on games that included the mesh manipulation and rebuilt it on the fly. It mostly depends on the overall game. In Link's Awakening each room could be its own small level with fully prebuilt assets, since the dungeon builder isn't building new rooms, merely moving around the connections between existing rooms. However, in other games where players can customize rooms, customize their models, or otherwise manipulate the world, it can make sense to do it all live. It all depends on the game.

I understand, more or less, what you're saying. It's just that when I look at that image, I don't quite understand how different assets can be combined to generate such a perfect fusion of rocks, especially when looking at them from the top. It's as if that whole block had been completely modeled instead of being the result of combining several pieces.

anotheronedev said:
I don't quite understand how different assets can be combined to generate such a perfect fusion of rocks, especially when looking at them from the top. It's as if that whole block had been completely modeled instead of being the result of combining several pieces.

There are many options. The most traditional one is to create tileable content. This goes back to typical 2D tiled backgrounds. But if a 3D world is made from blocks aligned to a global grid, we can still do the same using square-shaped patches of geometry and texture tiles. If you look at the top wall, it's one tiling pattern covering four rocks, then it repeats. Ideally the tiles allow you to break such patterns, so the trick isn't obviously visible.
I'm pretty sure that's what they did. The main problem is creating patches / blocks where the boundary matches seamlessly with as many others as possible. I have never seen any DCC tools to make this easier, so I guess they may have made their own specialized tools.
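As a rough illustration of that grid idea (the patch-library layout below is assumed, not taken from the actual game): each cell of the wall top picks a patch from a small repeating set, and a cheap hash occasionally swaps in a boundary-compatible variant so the repetition is harder to spot.

```cpp
#include <cstdint>

constexpr int PATTERN_SIZE = 4; // e.g. one tiling pattern covering four rocks

// Cheap integer hash of a grid cell, used as a deterministic "random" value.
uint32_t Hash2D(int x, int y)
{
    uint32_t h = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return h ^ (h >> 16);
}

// Assumed patch library: PATTERN_SIZE * PATTERN_SIZE base patches form the
// repeating pattern; after them come variantsPerSlot alternates per slot,
// each authored so its boundary matches the base patch it replaces.
int PickTopPatch(int cellX, int cellY, int variantsPerSlot)
{
    int sx = ((cellX % PATTERN_SIZE) + PATTERN_SIZE) % PATTERN_SIZE;
    int sy = ((cellY % PATTERN_SIZE) + PATTERN_SIZE) % PATTERN_SIZE;
    int slot = sy * PATTERN_SIZE + sx;

    uint32_t h = Hash2D(cellX, cellY);
    if (variantsPerSlot > 0 && (h & 7u) == 0u)      // roughly 1 cell in 8
        return PATTERN_SIZE * PATTERN_SIZE
             + slot * variantsPerSlot
             + static_cast<int>((h >> 8) % static_cast<uint32_t>(variantsPerSlot));
    return slot;
}
```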

@JoeJ

The main problem is creating patches / blocks where the boundary matches seamlessly with as many others as possible.

It is actually not hard to make that work. If you use the x,y coordinates from the tops of your blocks as the uv coordinates, and you have a seamless texture, everything is easy. If the objects fit together in xy space, then they'll fit together in uv space (often with some multiplying factor, depending on how fast you want the texture to tile).

Included is an example of some hexagons I threw together; these hexes are each 1.0 wide and converted to uv coordinates with a ⅓ multiplier, so they tile every 3 hexes. Of course, you can't use the x and y locations for the sides of the hexes/blocks, but that can be handled nearly as easily.
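For reference, a minimal sketch of that xy-to-uv projection (the mesh layout is assumed): every top-face vertex gets u = x * scale and v = y * scale, so any blocks that fit together in world space automatically fit together in texture space.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// scale = 1.0f / 3.0f reproduces the "tile every 3 hexes" example above.
std::vector<UV> TopFaceUVs(const std::vector<Vec3>& topVertices, float scale)
{
    std::vector<UV> uvs;
    uvs.reserve(topVertices.size());
    for (const Vec3& p : topVertices)
        uvs.push_back({ p.x * scale, p.y * scale }); // world xy -> uv; sides need a different mapping
    return uvs;
}
```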

Warp9 said:
If you use the x,y coordinates from the tops of your blocks as the uv coordinates, and you have a seamless texture, everything is easy.

But it only works on your top side. If you want to apply the same texture around the whole surface, you get ugly texture seams at least at some edges.

We can improve this by using triplanar mapping, but then some areas of the surface get a blurry blend of up to 3 textures, regardless of their content. Rock blends with grass, which makes no sense and just looks bad (see the sketch below).
To improve the hack, we could do something like histogram-preserving blending, but it still remains a hack with limited application.
We could even generate seamless UV maps, which is very hard but possible. But we cannot apply tiling textures to those, as the singularities in the parametrization cause tiles to rotate in 90-degree steps, breaking our assumption of consistent orientations given by a regular grid and its tiling rules.
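For context, here is what the standard triplanar blend mentioned above looks like, written as plain C++ for readability (in a real renderer this runs in a pixel shader, and SampleTexture is only a stand-in for a tiling texture fetch):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Stand-in for a tiling texture fetch: a simple procedural checker pattern.
Vec3 SampleTexture(float u, float v)
{
    int check = (static_cast<int>(std::floor(u)) + static_cast<int>(std::floor(v))) & 1;
    float c = check ? 1.0f : 0.3f;
    return { c, c, c };
}

// Sample the texture with the world yz, xz, and xy planes as UVs and blend by
// the absolute normal. The blend region is exactly where the blurry mix of
// textures described above shows up.
Vec3 Triplanar(const Vec3& worldPos, const Vec3& normal, float tileScale)
{
    float wx = std::abs(normal.x), wy = std::abs(normal.y), wz = std::abs(normal.z);
    float sum = wx + wy + wz;
    wx /= sum; wy /= sum; wz /= sum;

    Vec3 sx = SampleTexture(worldPos.y * tileScale, worldPos.z * tileScale); // projection along x
    Vec3 sy = SampleTexture(worldPos.x * tileScale, worldPos.z * tileScale); // projection along y
    Vec3 sz = SampleTexture(worldPos.x * tileScale, worldPos.y * tileScale); // projection along z

    return { sx.x * wx + sy.x * wy + sz.x * wz,
             sx.y * wx + sy.y * wy + sz.y * wz,
             sx.z * wx + sy.z * wy + sz.z * wz };
}
```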

At some point, tiling is just no longer useful or doable at all. In general we want more realism, so we want to get rid of grids and their artificial constraints as much as possible.

On the other hand, all those tiling / grid constraints became so characteristic of video games that we may come back to them, even if we technically no longer have to.
In that sense it would also be interesting to approach more complex tilings than just grids, e.g. M.C. Escher ornaments, Wang tiles, Penrose tilings, or even the Sagrada Familia.

@JoeJ

But it only works on your top side. If you want to apply the same texture around the whole surface, you get ugly texture seams at least at some edges.

That's true. . . However, I'm not usually that worried about seams between tops and sides.

I really like it when the tops all blend together, but I don't mind a difference as one moves from top to sides. In fact, it is actually sometimes worse when the sides blend too well with the tops (makes it harder to see what is going on in the scene).

It sort of seemed like the OP mirrored that sentiment in one of his statements above (with the emphasis on “looking at them from the top”):

I don't quite understand how different assets can be combined to generate such a perfect fusion of rocks, especially when looking at them from the top.

Basically, I'll grant your point about the seams. But something like the following, slightly modified, version of what I posted above would be absolutely acceptable, from my perspective (even if there are a few seams along the sides). It definitely has that “perfect fusion” of material when looking at it from the top.

Warp9 said:
That's true. . . However, I'm not usually that worried about seams between tops and sides.

Your worries increase as your resolution of geometry increases.
If the geometry is detailed, triangle edges no longer represent discontinuities of material. We might model some rocks on the surface, many smaller rocks, and a layer of soil, for example. You can go around the big rocks and look at all their sides in detail.
If we model such a scene manually as a single continuous surface, this surface can't be textured easily. The common solution is to divide the surface into multiple UV charts, as we know from character models, for example.
At this point we are left with two kinds of seams:

1. Content across the chart boundaries. The parts of the texture which are adjacent on the 3D model, but not in texture space, have to be painted so that colors and patterns match up. This causes quite a lot of manual work for artists, but is easy if your 2D texture is, for example, the intersection of the 3D model's surface with a block of 3D texture (see the sketch after this list). 3D textures can be generated procedurally with ease, so that's interesting for us. But 3D textures also bring back the global grid restriction, and thus we also fall back to blending multiple 3D textures to model complex, natural scenes.

2. UV space across boundaries. This means the quantization of one UV chart's boundary to the texel grid does not match the quantization of the adjacent part of the surface. Even if colors and patterns match, we still see a seam on the model across UV chart boundaries.
That's no problem if the texture resolution is high, but a big problem for low-res textures, e.g. lightmaps.
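To illustrate the 3D-texture idea from point 1 (all helper names here are hypothetical): during baking, each texel's world-space surface position is fed into a procedural 3D noise, so texels that are neighbors on the model receive matching content even when their UV charts sit far apart in the texture.

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Deterministic pseudo-random value in [0,1] for an integer lattice point.
float Hash3D(int x, int y, int z)
{
    uint32_t h = static_cast<uint32_t>(x) * 73856093u
               ^ static_cast<uint32_t>(y) * 19349663u
               ^ static_cast<uint32_t>(z) * 83492791u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h & 0xFFFFFFu) / 16777215.0f;
}

// Trilinearly interpolated value noise, a simple stand-in for any procedural 3D texture.
float ValueNoise3D(const Vec3& p)
{
    int x0 = static_cast<int>(std::floor(p.x));
    int y0 = static_cast<int>(std::floor(p.y));
    int z0 = static_cast<int>(std::floor(p.z));
    float fx = p.x - x0, fy = p.y - y0, fz = p.z - z0;
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };

    float c00 = lerp(Hash3D(x0, y0,     z0    ), Hash3D(x0 + 1, y0,     z0    ), fx);
    float c10 = lerp(Hash3D(x0, y0 + 1, z0    ), Hash3D(x0 + 1, y0 + 1, z0    ), fx);
    float c01 = lerp(Hash3D(x0, y0,     z0 + 1), Hash3D(x0 + 1, y0,     z0 + 1), fx);
    float c11 = lerp(Hash3D(x0, y0 + 1, z0 + 1), Hash3D(x0 + 1, y0 + 1, z0 + 1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}

// Called once per texel during baking: worldPos is the surface point the texel
// maps to (recovered from the UV charts while rasterizing the model into the texture).
float BakeTexel(const Vec3& worldPos, float frequency)
{
    return ValueNoise3D({ worldPos.x * frequency,
                          worldPos.y * frequency,
                          worldPos.z * frequency });
}
```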
Those UV-space seams were the actual reason for me to work on this. The goal was to calculate seamless UV maps respecting the texel grids, and it turned out to be the hardest problem I've ever worked on. This is the result:

Notice there is no seam over the whole model. We could even do simulations in texture space to help procedural content generation.
But the lower-hanging fruit is displacement mapping. For displacement mapping, seams become a breaking problem, restricting its applications mostly to terrain height maps in practice.
With seamless textures, we can do general displacement mapping on any surface, which is indeed something new.

Having worked on games for decades, I knew nothing about all this parametrization stuff before I was facing this lightmap problem. It's hard but maybe interesting, so maybe worth sharing even if off topic : )


@JoeJ

Those UV-space seams were the actual reason for me to work on this. The goal was to calculate seamless UV maps respecting the texel grids, and it turned out to be the hardest problem I've ever worked on. This is the result:

That looks really interesting. Although I can't say that I've ever had to deal with anything like that myself.

Your worries increase as your resolution of geometry increases.
If the geometry is detailed, triangle edges no longer represent discontinuities of material. We might model some rocks on the surface, many smaller rocks, and a layer of soil, for example. You can go around the big rocks and look at all their sides in detail.

That is all true.

But I've been assuming a more limited context. . . The OP showed up with some images of landscape blocks which had distinct tops, and further mentioned the perfect fusion of rocks when looking at them from the top. This is something I've seen before in various cases of landscape objects. It is not uncommon to want to combine such landscape objects in various ways and have the result look like a single landscape. And, especially with that sort of top-down view (as seen in the OP's image), having all those tops merge together is nice.

However, that xy to uv conversion is not an approach I'd take in most other cases. . . It just works well with those specific sorts of landscape objects.

