Archive for category Procedural Content

Creating a Planet : Geometry

I have wanted to expand on my previous post regarding ‘Cube to Sphere Projection’ for a while now, so in this article I am going to cover how I define the spherical geometry of the planets in more detail.




Procedural Generation – Textures on the GPU

This small tutorial is really just an extension of the first article I wrote on 3D improved noise on the CPU. More specifically, we’ll be getting it to work on the GPU this time.


Procedural Generation – Textures on the CPU

In this tutorial I’ll be covering how you can use Perlin noise and various fractal functions for generating textures either at compile time or run time. This technique can be useful for generating natural-looking textures (grass, wood, marble, etc.) or heightmaps for your terrains.



Noise Part III (GPU)

Well it’s taken a lot longer than I originally planned, but here is the third part in the Noise series.  In the last two noise articles, I explained how I was using Perlin noise to produce heightmap textures, as well as the code used to implement Ken Perlin’s improved noise function.  So far this has all been on the CPU.  In this article, I will explain how I ported this onto the GPU to produce heightmaps and diffuse maps for all the terrain patches.

The move from the CPU to GPU for height map generation actually caused quite a few headaches and problems I had overlooked while initially considering the transformation.  I tackled the conversion in steps, each of which I have outlined below.

How it used to be on the CPU:

Initially I was generating a height value using the vertex positions (in cube space) as input to a Perlin noise function.  This was quite easy to use in that it involved one call to the inoise function, which returned the height of the planet geometry for each vertex individually.  This would then be directly assigned to the vertex before being saved in the VB.  Cube space contains the positions of each of the quadtree vertices before they are projected out onto the sphere, and is in the range [-cubesize, cubesize] (e.g. for Earth-sized planets this is [-46, 46]).

This had the added benefit that each time I wanted to place an object on the planet, I just needed to call this function with the desired position of the object (again in cube space) and it would return the height of the terrain at that point.  This means I had been lazy up until this point; instead of trudging through the quadtree and getting the position from a height map array, I just did the more expensive noise function call to get the height of each object I wanted to place on the terrain, including the camera for computing min-LOD levels.

Provided I was willing to keep the number of octaves down below 5-6 per vertex and use vertex normals for lighting, I could get between 60 and 90 fps using CPU noise, which isn’t too bad.  Getting any kind of diversity in the terrain at surface level is difficult though with such a tight octave budget, and it just wouldn’t be feasible to generate normal maps for the terrain patches on the CPU, so I decided to move the geometry map generation to the GPU.

Getting the generation onto the GPU:

Geometry Maps:

The first step towards GPU generation was to get the geometry map generation up and working. Because this geometry map is used only for generating the terrain height per vertex, I used a RenderTarget2D with a SurfaceFormat of R32, which provides a single 32-bit floating-point component (instead of the ‘typical’ A8R8G8B8 surface).  The render target has the same dimensions as the vertex grid, so each <u,v> coordinate of the fullscreen quad texture lines up perfectly with the <x,y> vertex in the patch VB.  The typical size I use for this is 33×33, although this is scalable.

The first obstacle I decided to tackle was that I could no longer call upon the noise function individually per vertex; I had to get the GPU to generate the height values of each vertex at once (in the render target).  To do this I pass in the four cube space corner positions of the terrain patch, and lerp between these values in the pixel shader.  This means that I can re-create the position of each vertex in the quadtree grid in the PS, and then use this position vector as input to the 3d noise function, per pixel.

The PS lerps between the four corner positions of the terrain patch using the texture coordinates of the fullscreen quad, which are assigned in the [0,1] range.  I had a few problems with the linear interpolation because texture coordinates assigned to the fullscreen quad do not match up perfectly with the screen pixels.  This problem is described here.  This means that when I used the texture coordinates for the lerping of world coordinates I had small seams on each patch where the interpolation didn’t start on exactly 0 or 1, but rather 0+halfPixel and 1-halfPixel.  I got around this problem by slightly modifying the fullscreen quad texture coordinates and offsetting them by the half pixel in the application.  Here are both the fullscreen quad and the shader used for the geometry map generation:


private static void Initialise()
{
    float ps = 1.0f / textureResolution; // e.g. 1/33 for a 33x33 geometry map

    // Define the vertex positions, with the texture coordinates
    // offset by one texel to compensate for the half-pixel problem.
    m_Vertices[0] = new VertexPositionTexture(new Vector3(-1, 1, 0f), new Vector2(0, 1));
    m_Vertices[1] = new VertexPositionTexture(new Vector3(1, 1, 0f), new Vector2(1 + ps, 1));
    m_Vertices[2] = new VertexPositionTexture(new Vector3(-1, -1, 0f), new Vector2(0, 0 - ps));
    m_Vertices[3] = new VertexPositionTexture(new Vector3(1, -1, 0f), new Vector2(1 + ps, 0 - ps));
}



float4 PS_INoise(float2 inTexCoords : TEXCOORD0) : COLOR0
{
    float land0 = 0;

    // Get the directions from the origin (top-left, xNW) to the
    // end corners (xNE and xSW).
    float3 xDirection = xNE - xNW;
    float3 zDirection = xSW - xNW;

    // Scale the distance by the texture coordinate (which is between 0 and 1).
    xDirection *= inTexCoords.x;
    zDirection *= 1 - inTexCoords.y;

    // Lerp outward from the origin (top-left) in the direction of the end vector (bottom-right).
    float3 lerpedWorldPos = xNW + xDirection + zDirection;

    land0 = doTerrainNoise(lerpedWorldPos);
    return land0;
}
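As a quick sanity check on the corner-lerp, here is a small Python sketch (the corner values are made up for illustration; the real xNW/xNE/xSW constants come from the quadtree) showing that the reconstruction returns the corners exactly at the extreme texture coordinates:

```python
# Hypothetical cube-space corner positions for one terrain patch.
xNW = (-46.0, 46.0, -10.0)
xNE = ( 46.0, 46.0, -10.0)
xSW = (-46.0, 46.0,  36.0)

def lerped_world_pos(u, v):
    """Re-create a vertex position from the patch corners, mirroring
    what the pixel shader does with the fullscreen-quad texcoords."""
    x_dir = tuple(ne - nw for ne, nw in zip(xNE, xNW))
    z_dir = tuple(sw - nw for sw, nw in zip(xSW, xNW))
    return tuple(nw + x * u + z * (1 - v)
                 for nw, x, z in zip(xNW, x_dir, z_dir))

# u=0, v=1 gives back the north-west corner exactly.
print(lerped_world_pos(0.0, 1.0))  # -> (-46.0, 46.0, -10.0)
```

Because the reconstruction is exact at the corners, any seams you see come from the texture coordinates themselves, which is why the half-pixel offset above matters.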

After I got the geometry map generation working, this meant that instead of the 5-6 octave limit imposed on CPU noise, I could use 16+ octaves of noise to generate the geometry map.  This is more than enough to create diverse heightmaps, but it requires a lot of mixing and tweaking of the noise functions to get anything ‘realistic’ looking.
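For readers without the shader code to hand, the octave loop itself looks like this in a CPU-side Python sketch. Note the basis function here is a simple hash-based value noise, a stand-in for the article's inoise (improved Perlin noise), which is too long to reproduce here:

```python
import math

def value_noise_3d(x, y, z):
    """Hash-based 3D value noise in roughly [-1, 1]; a stand-in
    for Perlin's improved noise basis function."""
    def hash3(ix, iy, iz):
        h = (ix * 374761393 + iy * 668265263 + iz * 2147483647) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 32768.0 - 1.0
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - ix, y - iy, z - iz
    fade = lambda t: t * t * (3 - 2 * t)   # smooth interpolation curve
    fx, fy, fz = fade(fx), fade(fy), fade(fz)
    lerp = lambda a, b, t: a + (b - a) * t
    # Corner values of the lattice cell, interpolated z, then y, then x.
    c = [[[hash3(ix + i, iy + j, iz + k) for k in (0, 1)]
          for j in (0, 1)] for i in (0, 1)]
    return lerp(lerp(lerp(c[0][0][0], c[0][0][1], fz),
                     lerp(c[0][1][0], c[0][1][1], fz), fy),
                lerp(lerp(c[1][0][0], c[1][0][1], fz),
                     lerp(c[1][1][0], c[1][1][1], fz), fy), fx)

def fbm(p, octaves=16, lacunarity=2.0, gain=0.5):
    """Fractal sum: 16+ octaves is cheap on the GPU, painful on the CPU."""
    freq, amp, total = 1.0, 0.5, 0.0
    for _ in range(octaves):
        total += value_noise_3d(p[0] * freq, p[1] * freq, p[2] * freq) * amp
        freq *= lacunarity
        amp *= gain
    return total

h = fbm((12.3, 4.5, -6.7))
```

With gain 0.5 the amplitudes sum to just under 1, so the result stays in (-1, 1) regardless of octave count.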

Normal Maps:

Now come some of the benefits of generating the geometry map on the GPU.  As mentioned above, the geometry map is created at the same resolution as the terrain vertex grid, which means each vertex is represented by exactly 1 texel in the geometry map texture.  So in my case, the geometry map is 33×33.  However, using the four corners of each terrain patch, I can now create another RenderTarget2D with a much higher resolution, say 256×256 or 512×512, which creates ‘geometry’ for the same area, but with much more detail.

n.b. I now call the small 1:1 vertex to texel map the geometry map, while the high resolution map is the height map.

This height map can now be used to create a high-resolution normal map for lighting calculations in tangent space, which really improves the visuals of the terrain.  Calculating a 512×512 geometry map wouldn’t have been feasible using CPU Perlin noise, but we are now able to create high-resolution normal maps which add a lot of realism to the planet surface, especially from high altitudes.
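The article doesn't show the normal map pass itself, but the standard approach is central differences over the height map. A CPU-side Python sketch of what that shader would compute (texel_size and height_scale are made-up parameters):

```python
def normal_map(height, texel_size=1.0, height_scale=1.0):
    """Build a per-texel normal from a heightmap (2D list of floats)
    using central differences, clamped at the borders."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            hl = height[y][max(x - 1, 0)]       # left neighbour
            hr = height[y][min(x + 1, w - 1)]   # right neighbour
            hd = height[max(y - 1, 0)][x]       # neighbour below
            hu = height[min(y + 1, h - 1)][x]   # neighbour above
            nx = (hl - hr) * height_scale
            ny = (hd - hu) * height_scale
            nz = 2.0 * texel_size
            inv = 1.0 / (nx * nx + ny * ny + nz * nz) ** 0.5
            row.append((nx * inv, ny * inv, nz * inv))
        normals.append(row)
    return normals

flat = [[0.0] * 4 for _ in range(4)]
print(normal_map(flat)[1][1])  # -> (0.0, 0.0, 1.0)
```

On the GPU this becomes four tex2D samples of the height map per texel of the normal map, which is why the 512×512 resolution is affordable there.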

Diffuse Maps:

The next benefit of generating the hi-res height maps is that I can now generate a diffuse map at a higher resolution than what I was previously using on the CPU.  Previously I was sending an extra Vector2 struct along the vertex stream to the GPU, which contained the height and slope (both in the [0,1] range) of the vertex.  This was then used in a LUT which returned the index of the texture to use (using tex2D in the PS).  This method is called texture atlasing, and allows me to pass 16 512×512 textures to the effect in one file (2048×2048).
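To make the atlas indexing concrete, here is a Python sketch of a height/slope lookup into a 4×4 atlas. The bucketing rule is a made-up placeholder, not the engine's actual LUT; only the atlas layout (16 tiles of 512×512 in a 2048×2048 texture) comes from the article:

```python
def atlas_lookup(height, slope, atlas_dim=4, tile_px=512):
    """Map a [0,1] height/slope pair to a tile index in a 4x4 texture
    atlas and return that tile's pixel offset in the packed texture."""
    row = min(int(height * atlas_dim), atlas_dim - 1)  # bucket by height
    col = min(int(slope * atlas_dim), atlas_dim - 1)   # bucket by slope
    index = row * atlas_dim + col
    return index, (col * tile_px, row * tile_px)

idx, (u, v) = atlas_lookup(0.9, 0.1)  # high, flat terrain
print(idx, u, v)  # -> 12 0 1536
```

In the shader the same mapping is just a scale-and-offset of the texture coordinates before the tex2D call.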

Because the LUT was based previously on per-vertex height and slope information, the texturing was not all that great.  But now the diffuse map is based on the higher detailed height map, so I am able to get much more detailed results.  Furthermore, because the diffuse map is generated once per patch when the quadtree is subdivided, I don’t need to do any extra work in the terrain PS other than sampling the diffuse map.
Here are two screenshots of the moon Gani on a clear night. I realise the trees in the second shot are not lit correctly 😉


Procedural Road Generation

In the past weeks, I have approached the issue of generating procedural roads at runtime based on different maps generated by the Engine. The road generation is based on self-sensitive L-Systems.  This article will outline what L-Systems are and how I have started to use them to procedurally generate road networks.

This can be a pretty tricky subject depending on how authentic you want to make it; on the one hand you could generate random points on a terrain to represent towns and join them with lines, branch a few more lines off these and call it a road map. This would probably look okay, but inevitably you’re going to end up with roads which lead off to nowhere, or two roads which illogically zigzag across one another, or which go up impossibly steep inclines or into water, etc. All of these scenarios would seriously affect the game play in a negative way.

On the other hand, you could go all out and take into consideration the town positions, population density, altitude and slope information, culture and the time period, and base the road generation on all of this. Obviously this would be ideal, but generating this at runtime would probably take a long time, depending on what resolution you generate the road maps at. Because Britonia will be a trading and exploration game, the option for the player to follow a road to see where it leads should mean that the player is not simply left stranded in a forest, thinking “well, who would have built a road here!?”.

I found an article by Pascal Müller on his site here. You can find the article in German at the bottom of the page, and there is also a shorter, less detailed version in English half way down the page. It is from the German version that I am basing the rules for generating the roads on.

L-Systems and making them self-sensitive:

I’m just going to paste the wikipedia definition of L-System first, because that pretty much sums up what L-Systems are:

“An L-system (Lindenmayer system) is a parallel rewriting system, namely a variant of a formal grammar, most famously used to model the growth processes of plant development, but also able to model the morphology of a variety of organisms. L-systems can also be used to generate self-similar fractals such as iterated function systems.”

What this means is, we can start out with a string of characters, such as “AB”, and pass these into various functions, which are called productions. Each production checks the given string for patterns called predecessors and, if any are found, replaces those characters with a new set called successors (hence “parallel rewriting system”). Because this rewriting is based on a limited number of rules, and because we can iterate over our string as often as we want, this produces the self-similar fractals mentioned above.

Let’s take Lindenmayer’s original L-System for modelling the growth of Algae as an example:

For this we start out with the string “AB” and we have two productions.  The first production replaces all instances of the character ‘A’ with ‘AB’.  The second production replaces all instances of the character ‘B’ with ‘A’.   We can run the string through these two productions <X> number of times, and each time the string will gradually grow with a specific pattern.  Consider the following code:

private void GenerateString()
{
    string Axiom = "AB"; // The initial string

    for (int i = 0; i < maxDepth; i++)
    {
        Axiom = Production1(Axiom); // Run the first production set.
        Axiom = Production2(Axiom); // Run the second production set.
    }
}

/// This production will search the current string for all instances of the character 'A' and
/// replace them with the characters 'AB'.  This will cause the string to expand each
/// time the production is applied to the current string.
private string Production1(string currString)
{
    string returnString = "";
    bool lbFound = false;
    foreach (char instr in currString)
    {
        if (instr == 'A')
        {
            returnString += 'A';
            returnString += 'B';
            lbFound = true;
        }
        else
        {
            returnString += instr; // Characters without a matching rule are copied unchanged.
        }
    }

    // If this production was not applied to any characters in
    // the current string, then pass out the unmodified string.
    if (!lbFound) returnString = currString;
    return returnString;
}

/// This production will search the current string for all instances of the character 'B' and
/// replace them with the character 'A'.
private string Production2(string currString)
{
    string returnString = "";
    bool lbFound = false;
    foreach (char instr in currString)
    {
        if (instr == 'B')
        {
            returnString += 'A';
            lbFound = true;
        }
        else
        {
            returnString += instr;
        }
    }

    // If this production was not applied to any characters in
    // the current string, then pass out the unmodified string.
    if (!lbFound) returnString = currString;
    return returnString;
}

Running this system 5 times will yield the following results:


While this may not look very useful, imagine now that we also assign a drawing command to each of the characters in the resulting string after 5 iterations.  This would enable us to draw some pretty complex patterns with a relatively small and easy piece of code.
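One caveat: Lindenmayer's system rewrites every symbol in a single parallel pass, whereas running two productions one after the other can end up rewriting symbols the previous production just produced. A single-pass sketch in Python (rather than the article's C#) gives the classic algae sequence, whose lengths follow the Fibonacci numbers:

```python
def rewrite(axiom, rules, depth):
    """Apply all production rules in one parallel pass per iteration,
    as in Lindenmayer's original formulation. Symbols with no rule
    are copied through unchanged."""
    out = [axiom]
    for _ in range(depth):
        axiom = "".join(rules.get(c, c) for c in axiom)
        out.append(axiom)
    return out

steps = rewrite("AB", {"A": "AB", "B": "A"}, 5)
for s in steps:
    print(s)
# AB, ABA, ABAAB, ABAABABA, ABAABABAABAAB, ABAABABAABAABABAABABA
```

The string lengths here are 2, 3, 5, 8, 13, 21, which is a nice quick check that the rewriting is behaving itself.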

As another quick and slightly more interesting example, take the following instruction set:

start : X
Production : (X → F-[[X]+X]+F[+FX]-X), (F → FF)
angle : 25°

In this example, ‘F’ means “draw forwards”, a ‘-‘ will be interpreted as “rotate -25°”, and a ‘+’ will rotate +25°. The ‘[‘ and ‘]’ push and pop the current position and angle on a stack, so we can branch different instructions. Using the above simple L-System, we can produce images like this:


Again, while this may not seem all that graphically realistic, it is possible for us to assign our own graphics to each instruction, therefore we could use real textures for the stem and leaves etc. This was done to great effect in Asger Klasker’s LTrees project.
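The turtle interpretation of the plant system above can be sketched as follows (Python rather than the article's C#; the starting heading and step length are arbitrary choices):

```python
import math

def expand(axiom, rules, depth):
    """Parallel-rewrite the axiom depth times."""
    for _ in range(depth):
        axiom = "".join(rules.get(c, c) for c in axiom)
    return axiom

def turtle_segments(instr, step=1.0, angle_deg=25.0):
    """Interpret the instruction string: 'F' draws forward, '+'/'-'
    turn by the angle, '[' and ']' push/pop position and heading."""
    x, y, heading = 0.0, 0.0, 90.0   # start at the origin, pointing 'up'
    stack, segments = [], []
    for c in instr:
        if c == 'F':
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == '+':
            heading += angle_deg
        elif c == '-':
            heading -= angle_deg
        elif c == '[':
            stack.append((x, y, heading))
        elif c == ']':
            x, y, heading = stack.pop()
    return segments

plant = expand("X", {"X": "F-[[X]+X]+F[+FX]-X", "F": "FF"}, 3)
segs = turtle_segments(plant)
```

Every 'F' in the expanded string becomes one line segment, and the stack makes the branches rejoin their parent stem correctly.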

Input Maps:

There is a problem emerging here though as we cover the basics of L-Systems.  How can we use input (i.e. the environment) to direct the growth of the patterns?  To do this we can assign functions to different instructions, which are run in the correct sequence and can be used to make sure certain conditions are met.  Such instructions are used in road generation to check the height under the current position and end position, to make sure the gradient is not too high etc.  These are some of the input maps which I am using to generate the road maps at the minute:

Water and Parks Map:

parks and water map

The idea of this map is to define which areas are out-of-bounds for road generation. We can use this map to also model bridges and fords by using the full range of colours in each pixel. Fords are achieved by allowing the roads to cross only at places where the water level is shallow enough, say:

waterLevel-20 <= roadLevel <= waterLevel.

Because roads cannot be built in (deep) water, we can check the end coordinates of a road’s position, and if water occurs between these two points, then we could place a bridge to allow the player to cross the river. We can also limit this action to only allow main roads to have bridges, that way there is only a limited number of bridges built in any one city.
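A point-classification helper for this water test might look like the following Python sketch. The ford rule is the one quoted above (waterLevel-20 <= roadLevel <= waterLevel); the main-road-only bridge rule is from the paragraph above; everything else is illustrative:

```python
def classify_crossing(road_height, water_level, ford_depth=20, is_main_road=False):
    """Classify one sampled point along a planned road against the
    water map: dry land, fordable shallows, a bridge (main roads
    only), or blocked."""
    if road_height > water_level:
        return "land"
    # Ford rule: waterLevel - 20 <= roadLevel <= waterLevel.
    if water_level - ford_depth <= road_height <= water_level:
        return "ford"
    # Deep water: only main roads may build a bridge here.
    return "bridge" if is_main_road else "blocked"

print(classify_crossing(95, 100))                     # -> ford
print(classify_crossing(60, 100))                     # -> blocked
print(classify_crossing(60, 100, is_main_road=True))  # -> bridge
```

Sampling this at several points between a segment's start and end coordinates is enough to decide whether the segment is allowed, needs a bridge, or must be rejected.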

Altitude (height) Map:


Again we can use this map to define where roads can and cannot be built. Using the height map we can also check the gradient of the planned road and ensure that roads are not built steeper than what the player is able to climb. It is also possible for us to define that the main roads should try to follow the contour of the hills, so we get roads which spiral upwards instead of just going straight from the bottom to the top.
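The gradient test reduces to rise over run. A Python sketch (max_gradient is a made-up threshold, and height_at stands in for sampling the altitude map):

```python
import math

def gradient_ok(start, end, height_at, max_gradient=0.3):
    """Accept a planned road segment only if its slope (rise over run)
    is within what the player can climb."""
    run = math.hypot(end[0] - start[0], end[1] - start[1])
    if run == 0:
        return True
    rise = abs(height_at(end) - height_at(start))
    return rise / run <= max_gradient

# Toy altitude map: height rises 1 unit per unit of x.
height = lambda p: float(p[0])
print(gradient_ok((0, 0), (10, 0), height))  # straight up the slope -> False
print(gradient_ok((0, 0), (2, 10), height))  # along the contour -> True
```

Note how the second segment covers the same ground but mostly along the contour, which is exactly why contour-following main roads pass a test that straight ascents fail.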

Population Density Map:

pop density map

The population density map is used to steer the roads and motorways in meaningful directions. Areas with a higher population density will usually have a higher density of roads.


There are two major profiles that I’ve been looking at for generating the road maps. The first is a ‘New York’/urban style road generation, which creates dense grid like road networks. While this will not be used in Britonia, I would like to add this capability to the Galaxis Engine so that in the future, any space games made with the Engine can generate modern cities on the surface. The second profile is a medieval/rural profile which looks a lot more organic, with long curving streets branching out from each other. This will be more suited to medieval street maps, whereby the land shapes the streets more than functionality.

New York / Urban:


This is one outcome for procedurally generating a street plan based on New York. The streets are basically created with a very small chance of the heading changing for forward-facing road sections. Then, every <x> pixels/meters I can define a branch in the road in three separate directions at 90° from each other (fwd, left and right). This helps to produce the block effect.


rural street plan

This is my attempt at generating a street plan for a rural, high mountain area. To achieve this, I set the size of the main road sections much smaller than in the urban map, and also increased the chance that the heading (angle) changes with each section, resulting in streets which curve instead of running straight, so there shouldn’t be as many well-defined ‘blocks’ as you might find in New York City etc. I also cut down the number of branches in the roads from 3 to 2. The effect doesn’t look too bad, and it is possible to tweak the parameters further still to create larger or smaller settlements.
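The two profiles really only differ in their parameters. A Python sketch of the growth loop (all numeric values are illustrative, not the engine's; the urban profile uses long straight sections, a tiny turn chance and three 90° branches, the rural one short sections, frequent small turns and two branches):

```python
import math, random

def grow_roads(profile, steps=200, seed=42):
    """Grow a road network from one seed segment, breadth-first."""
    params = {
        "urban": dict(section=8.0, turn_chance=0.02, turn_angle=90.0,
                      branch_every=4, branches=(90.0, -90.0, 0.0)),
        "rural": dict(section=2.0, turn_chance=0.40, turn_angle=15.0,
                      branch_every=8, branches=(60.0, -60.0)),
    }[profile]
    rng = random.Random(seed)
    frontier = [((0.0, 0.0), 0.0, 0)]   # position, heading, age
    segments = []
    while frontier and len(segments) < steps:
        (x, y), heading, age = frontier.pop(0)
        # Occasionally change heading; rarely for urban, often for rural.
        if rng.random() < params["turn_chance"]:
            heading += rng.choice((-1, 1)) * params["turn_angle"]
        nx = x + params["section"] * math.cos(math.radians(heading))
        ny = y + params["section"] * math.sin(math.radians(heading))
        segments.append(((x, y), (nx, ny)))
        # Every few sections, branch off in the profile's directions.
        if age % params["branch_every"] == params["branch_every"] - 1:
            for da in params["branches"]:
                frontier.append(((nx, ny), heading + da, age + 1))
        else:
            frontier.append(((nx, ny), heading, age + 1))
    return segments

urban = grow_roads("urban")
rural = grow_roads("rural")
```

The environment tests from the input maps would slot in just before a segment is appended, rejecting or redirecting it, which is what makes the L-System self-sensitive.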


I am still working on the road generation instructions, and there are several things which I hope to finish soon.  At the minute the population density map doesn’t have such a big influence on the generation of the roads.  I am thinking of leaving the road generation as it is and just using the population map to determine the actual buildings which will be placed on the land.

Source Code:

The source code will be made available as soon as I have made it friendlier to use.  I will create a download link on the left, but if anyone wants to see the source beforehand then just send me a mail and I’ll forward it to you.  The solution has been created with XNA 3.1, so make sure you have that before running it. The project generates a lot of garbage as everything uses large amounts of strings, so it most definitely shouldn’t be used as-is in your game. I haven’t done this, but perhaps using StringBuilder would be a better solution and would cut down on garbage.

If you have any comments or criticism, then feel free to post on the forums here, but please remember that this was a proof of concept application, and as such I made it with readability in mind, not speed, so there are a great many optimisations which could and should be implemented before you use this in your own projects.

You can get the source code here


Clouds Part I

This is another article which I plan on writing in a few parts. I have added a pretty simple cloud layer to Britonia using a distorted fBm fractal over the improved noise basis function. This is used in a shader to generate a per-pixel cloud map, which is then just sampled over a spherical ‘terrain’ with a height of zero.

To be honest, the clouds are actually something I had anticipated implementing much sooner.  The theory is quite simple really.

The implementation of the clouds is much the same as the terrain, and if anything it is simpler.  You just need to create another sphere with LOD. As far as geometry goes, I am using a smaller vertex grid for each patch (e.g. patches made up of 9×9 grids instead of the usual 33×33), and I am also limiting the depth that the quadtree subdivides to at 4.   The texture applied to each patch is generated using Perlin noise in the pixel shader and can be set up to use any size; I’m going with 256×256.

As far as rendering goes, we just need to set the radius of the sphere as the height of this particular cloud layer and then render each patch with the generated cloud map.  There are a few things to watch out for, such as the render order of things with alpha blending, and also setting the culling correctly, but nothing too difficult.

The distorted noise function which I am currently using for the cloud map generation looks like this:

float3 offsetPoint(float3 p)
{
    float3 result;
    result.x = inoise(p);
    result.y = inoise(p * 3.33);
    result.z = inoise(p * 7.77);
    return result;
}

float DistNoise(float3 p, float distortion)
{
    return inoise(p + distortion * offsetPoint(p + 0.5));
}

// fractal sum distorted noise
float DfBm(float3 p, int octaves, float lacunarity = 2.0, float gain = 0.5)
{
    float freq = 1.0f;
    float amp  = 0.5f;
    float sum  = 0.0f;
    for (int i = 0; i < octaves; i++)
    {
        sum += DistNoise(p * freq, 0.5) * amp;
        freq *= lacunarity;
        amp  *= gain;
    }
    return sum;
}

/* inoise() is the improved Perlin noise function */

You can then use the return value to mix between two other colours, say, pure alpha and white, so you can see the stars at night through the clouds (again, look to the screenshots below).
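The alpha-to-white mix can be sketched like this in Python (the cover threshold, edge sharpness and the smoothstep shaping are illustrative choices, not the engine's values):

```python
def cloud_color(n, cover=0.5, sharpness=0.1):
    """Map a distorted-fBm sample n (roughly [-1, 1]) to an RGBA
    colour: fully transparent below the cover threshold, fading up
    to opaque white above it."""
    t = (n - cover) / sharpness
    t = max(0.0, min(1.0, t))
    t = t * t * (3 - 2 * t)   # smoothstep, for a soft cloud edge
    return (t, t, t, t)       # premultiplied white over alpha

print(cloud_color(0.4))   # -> (0.0, 0.0, 0.0, 0.0)  clear sky
print(cloud_color(0.7))   # -> (1.0, 1.0, 1.0, 1.0)  solid cloud
```

Raising the cover threshold thins the cloud layer out, which is one easy knob for the cloud-cover variation mentioned below.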

I am not sure at the minute whether generating the colour per frame using noise is the best way to go.  I could probably generate a large texture using the same distorted noise function, which can be applied over large parts of the planet, meaning instead of numerous noise() calls, I just need a tex2D() call (after texture generation).

For the next update I hope to have a bit more variety in the clouds, as well as some changes in cloud cover etc. etc.

Here are a couple of screenshots:



