Archive for category Britonia Development

Creating the Ships List – I

In this article I'll cover what I've been up to with regard to researching and defining the dimensions and specifications of the ships and engines. I am aiming for engines and ship sizes that are believable in both scale and capability, without being so detailed that they confuse the player. I'm writing this simultaneously with the ships list.




Noise Part III (GPU)

Well, it's taken a lot longer than I originally planned, but here is the third part in the noise series.  In the last two noise articles I explained how I was using Perlin noise to produce heightmap textures, as well as the code used to implement Ken Perlin's improved noise function.  So far this has all been on the CPU.  In this article I will explain how I ported it to the GPU to produce heightmaps and diffuse maps for all the terrain patches.

The move from the CPU to the GPU for heightmap generation caused quite a few headaches and problems I had overlooked when first considering the change.  I tackled the conversion in steps, each of which is outlined below.

How it used to be on the CPU:

Initially I was generating a height value using the vertex positions (in cube space) as input to a Perlin noise function.  This was easy to use in that it involved one call to the inoise function, which returned the height of the planet geometry for each vertex individually.  The height was then applied to the vertex before it was saved in the vertex buffer.  Cube space contains the positions of the quadtree vertices before they are projected out onto the sphere, and is in the range [-cubesize, cubesize] (e.g. for Earth-sized planets this is [-46, 46]).

This had the added benefit that each time I wanted to place an object on the planet, I just needed to call this function with the desired position of the object (again in cube space) and it would return the height of the terrain at that point.  It also meant I had been lazy up until this point; instead of trudging through the quadtree and reading the position from a heightmap array, I simply made the more expensive noise call to get the height of each object I wanted to place on the terrain, including the camera for computing minimum LOD levels.
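As a rough sketch of that CPU path (the noise delegate, PlanetRadius and the projection step below are stand-ins of my own for the real engine code, so treat this as illustrative only):

using System;
using Microsoft.Xna.Framework;

public static class CpuTerrainHeight
{
    const float PlanetRadius = 6000.0f; // illustrative; the real value lives elsewhere in the engine

    // One noise call per query: the same routine serves terrain vertices, props and the camera.
    // 'noise' is whatever improved-noise implementation the engine provides.
    public static float GetTerrainHeight(Func<Vector3, float> noise, Vector3 cubeSpacePosition)
    {
        return noise(cubeSpacePosition);
    }

    // Build a patch by projecting each cube-space vertex onto the sphere
    // and pushing it out by its noise height.
    public static Vector3[] BuildPatchPositions(Func<Vector3, float> noise, Vector3[] cubeSpacePositions)
    {
        var positions = new Vector3[cubeSpacePositions.Length];
        for (int i = 0; i < cubeSpacePositions.Length; i++)
        {
            float height = GetTerrainHeight(noise, cubeSpacePositions[i]);
            Vector3 sphereDir = Vector3.Normalize(cubeSpacePositions[i]); // cube -> sphere projection
            positions[i] = sphereDir * (PlanetRadius + height);
        }
        return positions;
    }
}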

Provided I was willing to keep the octave count below 5-6 per vertex and use vertex normals for lighting, I could get between 60-90 fps using CPU noise, which isn't too bad.  Getting any kind of diversity in the terrain at surface level is difficult with such a tight octave budget though, and it just wouldn't be feasible to generate normal maps for the terrain patches on the CPU, so I decided to move the geometry map generation to the GPU.


Getting the generation onto the GPU:

Geometry Maps:

The first step towards GPU generation was to get the geometry map generation up and working. Because this geometry map is used only for generating the terrain height per vertex, I used a RenderTarget2D with an R32 surface format, which provides a single 32-bit component (instead of the 'typical' A8R8G8B8 surface).  The render target has the same dimensions as the vertex grid, so each <u,v> coordinate of the fullscreen quad texture lines up perfectly with the <x,y> vertex in the patch vertex buffer.  The typical size I use for this is 33×33, although it is scalable.
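Creating that target in XNA looks roughly like the snippet below (SurfaceFormat.Single is XNA's name for a single 32-bit float channel; the constructor arguments are from the 3.x API as I remember it, so adjust for your version):

using Microsoft.Xna.Framework.Graphics;

public static class GeometryMapTargets
{
    // One small geometry map per terrain patch, single 32-bit float channel.
    public static RenderTarget2D Create(GraphicsDevice device, int resolution)
    {
        return new RenderTarget2D(
            device,
            resolution, resolution,   // same dimensions as the patch vertex grid (e.g. 33x33)
            1,                        // one mip level; the map is sampled 1:1 per vertex
            SurfaceFormat.Single);    // single 32-bit float channel ("R32")
    }
}

The fullscreen quad is then drawn into this target with the noise shader bound, once per patch.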

The first obstacle I decided to tackle was that I could no longer call the noise function individually per vertex; I had to get the GPU to generate the height values for every vertex at once (in the render target).  To do this I pass in the four cube-space corner positions of the terrain patch and lerp between them in the pixel shader.  This means that I can re-create the position of each vertex of the quadtree grid in the PS, and then use this position vector as input to the 3D noise function, per pixel.

The PS lerps between the four corner positions of the terrain patch using the texture coordinates of the fullscreen quad, which are assigned in the [0,1] range.  I had a few problems with the linear interpolation because texture coordinates assigned to a fullscreen quad do not match up perfectly with the screen pixels.  This problem is described here.  It meant that when I used the texture coordinates to lerp the world coordinates, I had small seams on each patch where the interpolation didn't start exactly at 0 or 1, but rather at 0+halfPixel and 1-halfPixel.  I got around this by slightly modifying the fullscreen quad texture coordinates, offsetting them by the half pixel in the application.  Here are both the fullscreen quad and the shader used for the geometry map generation:

Application:

private static void Initialise()
{
    float ps = 1.0f / textureResolution; // E.g. 33

    // Define the vertex positions.
    m_Vertices[0] = new VertexPositionTexture(new Vector3(-1, 1, 0f), new Vector2(0, 1));
    m_Vertices[1] = new VertexPositionTexture(new Vector3(1, 1, 0f), new Vector2(1 + ps, 1));
    m_Vertices[2] = new VertexPositionTexture(new Vector3(-1, -1, 0f), new Vector2(0, 0 - ps));
    m_Vertices[3] = new VertexPositionTexture(new Vector3(1, -1, 0f), new Vector2(1 + ps, 0 - ps));

}

HLSL:

// Patch corner positions in cube space, set from the application.
float3 xNW, xNE, xSW;

float4 PS_INoise(float2 inTexCoords : TEXCOORD0) : COLOR0
{
    float land0 = 0;

    // Get the directions from the origin (top-left) to the end corners (bottom-right).
    float3 xDirection = xNE - xNW;
    float3 zDirection = xSW - xNW;

    // Scale the distances by the texture coordinates (which are between 0 and 1).
    xDirection *= inTexCoords.x;
    zDirection *= 1 - inTexCoords.y;

    // Lerp outward from the origin (top-left) towards the end corners (bottom-right).
    float3 lerpedWorldPos = xNW + xDirection + zDirection;

    // doTerrainNoise() wraps the fBm/improved-noise calls used for the terrain.
    land0 = doTerrainNoise(lerpedWorldPos);
    return land0;
}

After I got the geometry map generation working, instead of the 5-6 octave limit imposed by CPU noise I could use 16+ octaves of noise to generate the geometry map.  This is more than enough to create diverse heightmaps, but it requires a lot of mixing and tweaking of the noise functions to get anything 'realistic' looking.

Normal Maps:

Now come some of the benefits of generating the geometry map on the GPU.  As mentioned above, the geometry map is created at the same resolution as the terrain vertex grid, which means each vertex is represented by exactly one texel in the geometry map texture.  So in my case, the geometry map is 33×33.  Using the four corners of each terrain patch, however, I can now create another RenderTarget2D with a much higher resolution, say 256×256 or 512×512, which covers the same area but with much more detail.

n.b. I now call the small 1:1 vertex to texel map the geometry map, while the high resolution map is the height map.

This height map can now be used to create a high resolution normal map for lighting calculations in tangent space, which really improves the visuals of the terrain.  Generating 512×512 geometry like this wouldn't have been feasible with CPU Perlin noise, but we are now able to create high resolution normal maps which add a lot of realism to the planet surface, especially from high altitudes.
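The normal construction itself is the standard finite-difference approach; in the engine it runs in a pixel shader over the high-res height map, but the same maths is easier to read on the CPU, so here is a hedged C# sketch of it (heights are assumed to be a square array, gridSpacing is the world-space distance between texels):

using System;
using Microsoft.Xna.Framework;

public static class NormalMapBuilder
{
    // Build per-texel normals from a square height map using central differences.
    public static Vector3[,] Build(float[,] heights, float gridSpacing)
    {
        int size = heights.GetLength(0);
        var normals = new Vector3[size, size];

        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                // Clamp at the borders so we always have two samples to difference.
                int xl = Math.Max(x - 1, 0), xr = Math.Min(x + 1, size - 1);
                int yd = Math.Max(y - 1, 0), yu = Math.Min(y + 1, size - 1);

                float dx = (heights[y, xr] - heights[y, xl]) / ((xr - xl) * gridSpacing);
                float dy = (heights[yu, x] - heights[yd, x]) / ((yu - yd) * gridSpacing);

                // Tangent-space normal: flat terrain gives (0, 0, 1).
                normals[y, x] = Vector3.Normalize(new Vector3(-dx, -dy, 1.0f));
            }
        }
        return normals;
    }
}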

Diffuse Maps:

The next benefit of generating the hi-res height maps is that I can now generate a diffuse map at a higher resolution than what I was previously using on the CPU.  Previously I was sending an extra Vector2 along the vertex stream to the GPU, containing the height and slope (both in the [0,1] range) of the vertex.  This was then used in a LUT which returned the index of the texture to use (sampled with tex2D in the PS).  This technique is texture atlasing, and it allows me to pass 16 512×512 textures to the effect in one file (2048×2048).

Because the LUT was previously based on per-vertex height and slope information, the texturing was not all that great.  But now the diffuse map is based on the more detailed height map, so I am able to get much better results.  Furthermore, because the diffuse map is generated once per patch when the quadtree subdivides, I don't need to do any extra work in the terrain PS other than sampling the diffuse map.
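For what it's worth, the height/slope-to-atlas-tile mapping can be sketched like this (a 4×4 atlas of 512×512 tiles in a 2048×2048 texture, as described above; the simple bucketing thresholds are my own and purely illustrative):

using Microsoft.Xna.Framework;

public static class TextureAtlasLut
{
    const int TilesPerSide = 4;                  // 4x4 tiles of 512x512 in a 2048x2048 atlas
    const float TileSize = 1.0f / TilesPerSide;

    // Pick a tile from the atlas based on height and slope (both in [0,1])
    // and return the UV offset of that tile's top-left corner.
    public static Vector2 GetTileOffset(float height, float slope)
    {
        // Quantise height and slope into 4 buckets each; the thresholds are illustrative.
        int column = (int)MathHelper.Clamp(height * TilesPerSide, 0, TilesPerSide - 1);
        int row    = (int)MathHelper.Clamp(slope  * TilesPerSide, 0, TilesPerSide - 1);

        return new Vector2(column * TileSize, row * TileSize);
    }
}

A texel's final UV is then this offset plus its local [0,1] patch coordinate scaled by TileSize.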
Here are two screenshots of the moon Gani on a clear night. I realise the trees in the second shot are not lit correctly 😉


Procedural Road Generation

In the past weeks I have approached the issue of generating procedural roads at runtime, based on different maps generated by the engine. The road generation is based on self-sensitive L-systems.  This article will outline what L-systems are and how I have started using them to procedurally generate road networks.

This can be a pretty tricky subject depending on how authentic you want to make it. On the one hand you could generate random points on a terrain to represent towns, join them with lines, branch a few more lines off these and call it a road map. This would probably look okay, but inevitably you're going to end up with roads which lead off to nowhere, two roads which illogically zigzag across one another, or roads which go up impossibly steep inclines or through water. All of these scenarios would seriously affect the gameplay in a negative way.

On the other hand, you could go all out and take into consideration the town positions, population density, altitude and slope information, culture and time period, and base the road generation on all of this. Obviously this would be ideal, but generating it at runtime would probably take a long time, depending on the resolution at which you generate the road maps. Because Britonia will be a trading and exploration game, the option for the player to follow a road to see where it leads should mean that the player is not simply left stranded in a forest, thinking "well, who would have built a road here!?".

I found an article by Pascal Müller on his site here. You can find the article in German at the bottom of the page, and there is also a shorter, less detailed version in English halfway down the page. I am basing the road generation rules on the German version.

L-Systems and making them self-sensitive:

I’m just going to paste the wikipedia definition of L-System first, because that pretty much sums up what L-Systems are:

“An L-system (Lindenmayer system) is a parallel rewriting system, namely a variant of a formal grammar, most famously used to model the growth processes of plant development, but also able to model the morphology of a variety of organisms. L-systems can also be used to generate self-similar fractals such as iterated function systems.”

What this means is that we can start out with a string of characters, such as "AB", and pass it to various functions, which are called productions. Each production checks the given string for patterns called predecessors and, if any are found, replaces those characters with a new set called successors (a parallel rewriting system). Because this rewriting is based on a limited number of rules, and because we can iterate over our string as often as we want, it produces the self-similar fractals mentioned above.

Let’s take Lindenmayer’s original L-System for modelling the growth of Algae as an example:

For this we start out with the string "AB" and we have two productions.  The first production replaces every instance of the character 'A' with 'AB'.  The second production replaces every instance of the character 'B' with 'A'.  We can rewrite the string with these two productions <X> number of times, and each time the string will gradually grow with a specific pattern.  Note that both productions are applied in a single parallel pass over the string, so characters produced by one rule are not immediately rewritten by the other.  Consider the following code:

private string GenerateString()
{
    string axiom = "AB"; // The initial string

    for (int i = 0; i < maxDepth; i++)
    {
        axiom = ApplyProductions(axiom); // Rewrite the whole string once per iteration.
    }

    return axiom;
}

/// Applies both productions to the current string in a single parallel pass:
/// Production 1 replaces every 'A' with "AB", and Production 2 replaces every
/// 'B' with "A".  Doing both in one pass ensures that characters produced by one
/// rule are not immediately rewritten by the other, and any character with no
/// matching production is copied through unchanged.
private string ApplyProductions(string currString)
{
    string returnString = "";

    foreach (char instr in currString)
    {
        if (instr == 'A')
        {
            returnString += "AB"; // Production 1: A -> AB
        }
        else if (instr == 'B')
        {
            returnString += "A";  // Production 2: B -> A
        }
        else
        {
            returnString += instr; // No production applies; keep the character.
        }
    }

    return returnString;
}

Running this system 5 times will yield the following results:

Iteration 0: AB
Iteration 1: ABA
Iteration 2: ABAAB
Iteration 3: ABAABABA
Iteration 4: ABAABABAABAAB
Iteration 5: ABAABABAABAABABAABABA

While this may not look very useful, imagine now that we also assign a drawing command to each of the characters in the resulting string after 5 iterations.  That would enable us to draw some pretty complex patterns from a relatively small and simple piece of code.

As another quick and slightly more interesting example, take the following instruction set:

start : X
Production : (X → F-[[X]+X]+F[+FX]-X), (F → FF)
angle : 25°

In this example, 'F' means "draw forwards", a '-' is interpreted as "rotate by -25°", and a '+' increments the angle by 25°. The '[' and ']' push and pop the current position and angle on a stack, so we can branch different instructions. Using this simple L-system, we can produce images like this:

[Image: the branching, plant-like structure drawn from this L-system]

Again, while this may not seem all that graphically realistic, it is possible for us to assign our own graphics to each instruction, so we could use real textures for the stem and leaves etc. This was done to great effect in Asger Klasker's LTrees project (http://ltrees.codeplex.com/).
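To make the drawing step concrete, here is a hedged sketch of a minimal turtle interpreter for those symbols (2D line segments only; the step length and the way the lines are later consumed by a renderer are my own assumptions):

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class TurtleInterpreter
{
    private struct TurtleState
    {
        public Vector2 Position;
        public float Heading;
    }

    // Walk the expanded L-system string and emit one line segment per 'F'.
    // '+' and '-' turn by the given angle, '[' and ']' push/pop the turtle state.
    public static List<Vector2[]> Interpret(string instructions, float angleDegrees, float stepLength)
    {
        var lines = new List<Vector2[]>();
        var stack = new Stack<TurtleState>();

        var state = new TurtleState { Position = Vector2.Zero, Heading = MathHelper.PiOver2 }; // start pointing "up"
        float turn = MathHelper.ToRadians(angleDegrees);

        foreach (char c in instructions)
        {
            switch (c)
            {
                case 'F': // draw forwards
                    Vector2 direction = new Vector2((float)Math.Cos(state.Heading), (float)Math.Sin(state.Heading));
                    Vector2 next = state.Position + stepLength * direction;
                    lines.Add(new[] { state.Position, next });
                    state.Position = next;
                    break;
                case '+': state.Heading += turn; break;
                case '-': state.Heading -= turn; break;
                case '[': stack.Push(state); break;
                case ']': state = stack.Pop(); break;
                // 'X' and any other symbols carry no drawing command.
            }
        }

        return lines;
    }
}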

Input Maps:

There is a problem emerging here, though, as we cover the basics of L-systems: how can we use input (i.e. the environment) to direct the growth of the patterns?  To do this we can assign functions to different instructions, which are run in the correct sequence and can be used to make sure certain conditions are met.  In road generation such instructions are used to check the height under the current position and the end position, to make sure the gradient is not too steep, and so on.  These are some of the input maps which I am using to generate the road maps at the minute:

Water and Parks Map:

[Image: parks and water map]

The idea of this map is to define which areas are out of bounds for road generation. We can also use this map to model bridges and fords by using the full range of values in each pixel. Fords are allowed only where the water is shallow enough, say:

waterLevel-20 <= roadLevel <= waterLevel.

Because roads cannot be built in (deep) water, we can check the end coordinates of a road segment, and if water occurs between the two points we can place a bridge to allow the player to cross the river. We can also limit this action so that only main roads get bridges; that way only a limited number of bridges are built in any one city.
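A hedged sketch of that check might look like the following, where the ford rule (waterLevel - 20 <= roadLevel <= waterLevel) comes from above, and everything else (the hypothetical sampleWaterDepth lookup, the sampling density, the decision enum) is illustrative:

using System;
using Microsoft.Xna.Framework;

public enum CrossingType { None, Ford, Bridge, Blocked }

public static class WaterRules
{
    const float FordDepthLimit = 20.0f;  // waterLevel - 20 <= roadLevel <= waterLevel
    const int SamplesPerSegment = 16;    // illustrative sampling density along the segment

    // Decide how (or whether) a proposed road segment may cross water.
    // 'sampleWaterDepth' returns how far the terrain lies below the water level
    // at a map position (0 or negative means dry land).
    public static CrossingType Classify(Vector2 start, Vector2 end, bool isMainRoad,
                                        Func<Vector2, float> sampleWaterDepth)
    {
        float maxDepth = 0.0f;
        for (int i = 0; i <= SamplesPerSegment; i++)
        {
            Vector2 p = Vector2.Lerp(start, end, i / (float)SamplesPerSegment);
            maxDepth = Math.Max(maxDepth, sampleWaterDepth(p));
        }

        if (maxDepth <= 0.0f) return CrossingType.None;            // no water on the way
        if (maxDepth <= FordDepthLimit) return CrossingType.Ford;  // shallow enough to ford
        return isMainRoad ? CrossingType.Bridge                    // only main roads get bridges
                          : CrossingType.Blocked;
    }
}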

Altitude (height) Map:

[Image: altitude (height) map]

Again we can use this map to define where roads can and cannot be built. Using the height map we can also check the gradient of the planned road and ensure that roads are not built steeper than the player is able to climb. It is also possible to specify that main roads should try to follow the contours of the hills, so we get roads which spiral upwards instead of going straight from the bottom to the top.
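The gradient test itself reduces to comparing the height difference against the segment length; a minimal sketch, assuming a hypothetical sampleHeight lookup into the altitude map and a made-up maximum gradient:

using System;
using Microsoft.Xna.Framework;

public static class GradientRules
{
    const float MaxGradient = 0.35f; // rise over run; illustrative limit for what the player can climb

    // Reject road segments that would be too steep for the player.
    // 'sampleHeight' reads the altitude map at a 2D map position.
    public static bool IsClimbable(Vector2 start, Vector2 end, Func<Vector2, float> sampleHeight)
    {
        float rise = Math.Abs(sampleHeight(end) - sampleHeight(start));
        float run = Vector2.Distance(start, end);

        return run > 0.0f && (rise / run) <= MaxGradient;
    }
}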

Population Density Map:

[Image: population density map]

The population density map is used to steer the roads and motorways in meaningful directions. Areas with a higher population density will usually have a higher density of roads.

Profiles:

There are two major profiles that I've been looking at for generating the road maps. The first is a 'New York'/urban style, which creates dense, grid-like road networks. While this will not be used in Britonia, I would like to add the capability to the Galaxis Engine so that in the future any space games made with the engine can generate modern cities on the surface. The second profile is a medieval/rural profile which looks a lot more organic, with long curving streets branching out from each other. This will be better suited to medieval street maps, where the land shapes the streets more than functionality does.

New York / Urban:

[Image: procedurally generated New York-style street plan]

This is one outcome of procedurally generating a street plan based on New York. The streets are created with a very small chance of the heading changing for forward-facing road sections. Then, every <x> pixels/metres I can define a branch in the road in three separate directions at 90° from each other (forward, left and right). This helps to produce the block effect.

Rural:

[Image: procedurally generated rural street plan]

This is my attempt at generating a street plan for a rural, high mountain area. To achieve this I made the main road sections much shorter than in the urban map and increased the chance that the heading (angle) changes with each section, resulting in streets which curve instead of running straight, so there shouldn't be as many well-defined 'blocks' as you might find in New York City. I also cut down the number of branches in the roads from 3 to 2. The effect doesn't look too bad, and the parameters can be tweaked further still to create larger or smaller settlements.
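The two behaviours boil down to a handful of parameters, so a hedged sketch of how the profiles could be expressed as data (the exact numbers are invented; only the relationships, short/curvy versus long/straight sections and 2 versus 3 branches, come from the descriptions above):

// A road-generation profile: the knobs the L-system productions read from.
public class RoadProfile
{
    public float SegmentLength;        // length of a forward road section
    public float HeadingChangeChance;  // probability that a forward section bends
    public float BranchAngleDegrees;   // angle between branches at a junction
    public int BranchCount;            // how many branches a junction spawns
}

public static class RoadProfiles
{
    // Dense, grid-like blocks: long straight sections, right-angled branches in 3 directions.
    public static readonly RoadProfile Urban = new RoadProfile
    {
        SegmentLength = 100.0f,
        HeadingChangeChance = 0.05f,
        BranchAngleDegrees = 90.0f,
        BranchCount = 3,
    };

    // Organic, curving rural roads: short sections that bend often, with fewer branches.
    public static readonly RoadProfile Rural = new RoadProfile
    {
        SegmentLength = 30.0f,
        HeadingChangeChance = 0.4f,
        BranchAngleDegrees = 90.0f,
        BranchCount = 2,
    };
}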

Improvements:

I am still working on the road generation instructions, and there are several things which I hope to finish soon.  At the minute the population density map doesn't have a big influence on the generation of the roads.  I am thinking of leaving the road generation as it is and just using the population map to determine the actual buildings which will be placed on the land.

Source Code:

The source code will be made available as soon as I have made it friendlier to use.  I will create a download link on the left, but if anyone wants to see the source beforehand then just send me a mail and I'll forward it to you.  The solution was created with XNA 3.1, so make sure you have that before running it. The project generates a lot of garbage as everything uses large amounts of strings, so it most definitely shouldn't be used as-is in your game. I haven't done this, but perhaps using StringBuilder would be a better solution and would cut down on the garbage.

If you have any comments or criticism then feel free to post on the forums here, but please remember that this was a proof-of-concept application; I made it with readability in mind, not speed, so there are a great many optimisations which could and should be implemented before you use it in your own projects.

You can get the source code here


Progress Update (June 2009)

I have been working on many things in the last 6 weeks, so I think it is time for another progress update.  In this update I will outline which components I have started and their progress, including LTrees, GPU map generation and the GUI.
It hasn't all been quiet on the site though; it saw some activity roughly two weeks back, when I wrote the first article not related to progress on Britonia.  It was titled 'Overcoming Scale and Precision Errors'.  I hope to continue writing and posting other articles in the future.  For the actual game there have been many updates, so I will take the same format as the last progress update, describe each component separately, and then follow up with a few screenshots:

Procedural LTrees:

I mention this first not because I have spent the most time on this area (in fact the opposite), but because when looking at the new screenshots the first thing you'll notice is that I have added trees to the world.  I went ahead as planned and integrated Asger Klasker's LTrees project from codeplex.com into the engine.

I use this component to procedurally generate a set of trees when the world is loaded, along with an accompanying set of billboards.  The plan is to eventually render the same sets of trees with varying rotations, sizes and colour offsets, so that I don't have to constantly generate full trees every time the camera moves.  I have also started setting up an LOD system which, based on the distance of the tree from the camera, switches between rendering a high-poly mesh, a low-poly mesh, a single billboard or a group billboard.  As I said, the LOD is still a work in progress, but it is so far working quite well.

At the minute, whilst flying around the planet (see the YouTube links at the bottom), you will notice that the trees are not placed correctly.  This is 'intentional': until I implement the growth maps I don't want to lose too much time setting up mock forests and whatnot, and it is still too early for the growth maps.  I do, however, still have to test the LOD for the trees, so at the minute they are just planted using a generic growth map which is sampled once per vertex grid position; this is evident from the grid pattern of the trees in the video.  I plan to generate one growth map per patch, which will be used both to identify where trees should be placed and, in the future, where buildings and other objects can be placed.  This growth map will be based on the height and slope of the terrain to dictate where valid tree positions are.
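As a rough illustration of the LOD switching described above (the distance thresholds are invented; only the four representation levels come from the text):

public enum TreeLod { HighPolyMesh, LowPolyMesh, SingleBillboard, GroupBillboard }

public static class TreeLodSelector
{
    // Illustrative distance thresholds, in world units.
    const float HighPolyRange = 50.0f;
    const float LowPolyRange = 150.0f;
    const float BillboardRange = 600.0f;

    // Pick a representation for a tree based on its distance from the camera.
    public static TreeLod Select(float distanceToCamera)
    {
        if (distanceToCamera < HighPolyRange) return TreeLod.HighPolyMesh;
        if (distanceToCamera < LowPolyRange) return TreeLod.LowPolyMesh;
        if (distanceToCamera < BillboardRange) return TreeLod.SingleBillboard;
        return TreeLod.GroupBillboard;
    }
}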

GPU Diffusemap, Heightmap, NormalMap and Growthmap Generation:

Well, the good news is that I've managed to get the terrain heightmap generation working on the GPU, using the four corners of the cube in world space and lerping between them to generate a grid of noise values using Perlin noise.  The bad news is that it hasn't really improved performance yet.  I still have a lot of testing to do with the GPU generation, again trying to find the best 'low octave, good quality' terrain.  I am hoping to get the height, normal and diffuse maps all generated on the GPU (with MRTs if available, which can output up to three textures simultaneously, or with single render targets if MRTs are not available).  When it is finished, I should be able to generate higher quality diffuse maps on the GPU once per patch, and then sample them like a normal tex2D() in the shader.  This will be useful because a) I can then lerp between texture LODs to avoid the current texture popping, and b) I can generate high resolution normal maps based on the high-res heightmaps; the normals are currently per vertex.
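On the application side the MRT path would look roughly like this (XNA 3.x API names from memory, so treat the exact calls as an assumption; the generation shader then writes COLOR0/COLOR1/COLOR2 in a single pass):

using Microsoft.Xna.Framework.Graphics;

public static class PatchMapGenerator
{
    // Bind height, normal and diffuse targets simultaneously when MRTs are supported,
    // so one fullscreen-quad pass fills all three maps for a patch.
    public static void BindTargets(GraphicsDevice device,
                                   RenderTarget2D heightMap,
                                   RenderTarget2D normalMap,
                                   RenderTarget2D diffuseMap)
    {
        device.SetRenderTarget(0, heightMap);
        device.SetRenderTarget(1, normalMap);
        device.SetRenderTarget(2, diffuseMap);
        // ... draw the fullscreen quad with the generation effect here ...
    }

    // Unbind so the targets can be resolved and sampled as textures afterwards.
    public static void UnbindTargets(GraphicsDevice device)
    {
        device.SetRenderTarget(0, null);
        device.SetRenderTarget(1, null);
        device.SetRenderTarget(2, null);
    }
}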

User Interface:

I have also done quite a bit of work on the UI in the last few weeks, and so far I have set up a pretty neat little system whereby the game components can individually retrieve instances of windows (which are stacked on the left side of the screen in the screenshots) and set many debug variables in them.  It lets the game components define custom controls (radio buttons, text boxes, slider bars etc.), which is particularly useful for changing debug variables at runtime.  So far the biggest thing missing is being able to load the UI windows from .xml files, but I will get to this soon.

The Player Classes:

I have been working on Britonia for a fair few months now, and while the terrain is shaping up quite nicely, I am now going to concentrate more on the player side of things. So here are some components which I will begin in the coming weeks:

HUD – Although this is not really a player class, it is quite obviously a very important part of any game, both because it is the player's main method of interacting with the game and because it will feature heavily in all game screenshots.  So far the GUI is set up mainly to act as an interface for debugging variables, and it is very generic.  I have added a basic window system whereby I can scale, move, dock and minimise the windows.  So I must now start to create the 'special case' windows such as inventory, trade, shops, character skills, character paperdoll etc.

Character Skills – What would an RPG be without avatar skills?  I have played a great many computer games over the years and have had the chance to play around with many different character skill systems.  I have decided to implement the character skills in much the same way as the Ultima Online skills worked.  For those who don't know, the skills are grouped together by type, so you would have, for example, all the combat skills together, each ranging from 1 to 100.  Each skill is improved by using it; that is to say, to improve your skill in archery you would need to actually use a bow.  Each group has a maximum attainable number of points, say 400.  This effectively lets players decide how to improve their skills based on how they play.  You can lock skills (to stop gaining or losing points) or completely ignore skills which you do not use.  Once the 400 points for a group are used up, you cannot upgrade anything else until you drop points from a different skill.

Now, it has been a long while since I last played UO (about 10 years ago :), so the numbers are just off the top of my head.  I haven't started drawing up plans for which skills to include, because first I need to know about all the aspects of the game.  The skill points are not just combat related, but also cover mining, lumberjacking, walking etc.  I even remember a skill in UO for creating kindling from the trees, so you could build a fire and safely log off.  Although I was young, I remember being very fond of how this all worked, so this is what I will aim for.
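A hedged sketch of that group-capped, use-based skill gain (the 400-point cap and the 1-100 range come from the description above, which as noted is from memory; everything else here is illustrative):

using System.Collections.Generic;
using System.Linq;

public class Skill
{
    public string Name;
    public float Value;    // 1..100
    public bool IsLocked;  // locked skills neither gain nor lose points
}

public class SkillGroup
{
    public const float GroupCap = 400.0f; // maximum total points across the group
    public const float SkillCap = 100.0f; // maximum for any single skill

    public List<Skill> Skills = new List<Skill>();

    // Called whenever a skill is used; gains are only applied while the
    // skill is unlocked and the group still has headroom under its cap.
    public void UseSkill(Skill skill, float gain)
    {
        if (skill.IsLocked || skill.Value >= SkillCap)
            return;

        float groupTotal = Skills.Sum(s => s.Value);
        float headroom = GroupCap - groupTotal;
        if (headroom <= 0.0f)
            return; // drop points from another skill first

        skill.Value += System.Math.Min(gain, headroom);
    }
}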

Bulletin Quests – I plan to start working on the bulletin quest system of Britonia.  The plan for the minute is to base this on the kind of quests given in the old Elite games; only this time, instead of spaceports, you can read the bulletin either in the town taverns or from the town crier.  These quests will be completely procedural and will fall into one of the following categories: Assassinate (a person), Eliminate (a group or band), Gather (collect resources), Escort, Smuggle, and Whereabouts of? (missing person).  I would like to point out that this will not be the main quest system in Britonia, but it is something that should prove easy to implement and effective.  There will of course be a main plot and more involved quests, but they're for another time.

Player Hands and Weapons – I hate modelling.  Right, I had to get that out of the way.  Maybe two weeks ago I started implementing the player hands and the weapons.  I modelled a pair of hands and a bow and arrow, skinned them, and put them through the content pipeline from the XNA creators website.  Everything seems to work.  I have added several idle animations, and the weapons follow the hand animations quite nicely.  At the minute, though, the right hand looks a little strange.  I have decided to tackle this problem in little bits: once every two weeks I will dedicate 2-3 hours just to modelling, because anything longer in Blender makes me want to cry.  If anyone is interested I could put some pictures up on the site, but I am happy to wait a while before showing them.  The only goal I have set for this in the short term (within 2 months) is to have the bow fire and impact objects, and the sword(s) swing and impact objects.  This should be enough not to impede the rest of the game's development.  The bow and arrow is about 50% complete.

Terrain Textures (a quick explanation):

I realise the terrain texturing looks a little off in the videos and screenshots.  The seams between textures are very visible, as is the tiling, and so are the LOD transitions (explained above).  Around March I worked for about a week tweaking the noise and the textures, trying to get them to look good for a YouTube video I wanted to make.  I finally got them to an acceptable standard and made the video.  The same night I decided to fiddle around with the quadtree, and I increased the depth to allow for the massive improvement in scale, but then the texturing was way off again.  I mention this because I have decided to wait before re-tweaking the terrain textures.  Until I have fully added trees, and eventually grass and flowers, I will not know how the terrain will look, so to save time I will leave it until the ground objects are a little better.  Also, the terrain textures haven't changed once since I created the texture pack last year.  I am sure that huge improvements can be made in the visual quality by re-drawing the textures in the texture pack.


Update On Progress (15 April)

Again it's been a while since the last update, and again not due to any inactivity, but rather because I've simply had too much to do.  In this article I'll just go over what I've implemented and what I'm still struggling with from the last 3-4 weeks.

Change in Game Concept:

It's nothing that big really, but I have decided to limit the size of Britonia's playing area to one solar system.  Originally, because I am procedurally generating all the content, I had planned to generate as many planets and solar systems as the player could venture to, using a 'portal' system which would each time take you to a new procedurally generated world (remember, procedural means it will always be generated exactly the same as long as the seed for a planet doesn't change).  Recently though, I have spent a lot of time flying around the current planet of Britonia, and it is massive.  So much so that I am starting to think it would just be unnecessary to have so many planets and systems.  The player should also have a central 'hub' around which to play, something to call 'home', and I think this would be lost if the universe were too big.  That said, I will still add the code for such a universe to the engine, because so far I am really enjoying the procedural side of the programming.  Maybe in the future this could be put to better use, such as in a space sim.

So the game will take place within one solar system.

The Solar System:

Although it sounds like a big change, it really isn't. The game solar system will have one sun at its centre and nine orbiting planets around it.  I would like to try to model the game solar system on our own solar system.  There are two ways I thought about implementing it:

#1 – Create a normal scene graph whereby all objects rotate and translate around their parent bodies.  This is conceptually the simpler of the two methods.  Each planet would of course have its own set of orbit variables, such as distance from the sun, number of days in a year, minutes in a day etc.  The big disadvantage of this method is that the orbits are circular rather than elliptical – but this doesn't really matter for Britonia, as you will never zoom out far enough to appreciate the planetary orbital paths.

#2 – The second option, which I did actually consider, was to implement the orbits using orbital elements.  I only considered this because I originally wanted to generate more than one solar system.  This method accurately calculates a planetary orbit based on the time and date etc.  It would again be overkill for Britonia, especially if you only saw a planet in the Britonian sky once every 9 months or so.

So, as you can probably guess, I will be implementing the first method.  I have already started (and have some screenshots for you below).  The only other thing I have had to take into consideration so far is the draw order.  This is nothing overly complicated or time consuming: all the planets are sorted from furthest to closest (based on camera position) and then rendered in that order (i.e. back to front).  The planets will probably be placed so that they are visible from the surface of Britonia (maybe not all planets, but definitely enough to get that 'fantasy' feeling; check out the screenshots below).
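The draw-order step is just a sort on squared distance before drawing; a minimal sketch, assuming each planet exposes a position and a Draw call:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;

public static class PlanetRenderer
{
    // Render back to front so nearer planets are drawn over farther ones.
    public static void DrawSorted(IEnumerable<Planet> planets, Vector3 cameraPosition)
    {
        foreach (Planet planet in planets
            .OrderByDescending(p => Vector3.DistanceSquared(p.Position, cameraPosition)))
        {
            planet.Draw();
        }
    }
}

// Stand-in for the engine's planet type.
public class Planet
{
    public Vector3 Position;
    public void Draw() { /* issue the planet's draw calls here */ }
}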

Flora:

I have also started looking at flora generation.  There is a really good project here on CodePlex by Asger Feldthaus which uses Lindenmayer systems to procedurally generate trees.  I am hoping to implement something similar (although a little less complicated, e.g. without tree profiles) with impostors and point sprites, so that the trees can be seen from high altitudes.

Planet Texturing:

I have also added the code to assign textures to the terrain based on slope as well as height.  This was planned for a long time, but I have always had problems generating the gradient.  In order to get the gradient at a vertex, you use:

float lfSlope = Vector3.Dot(vertexNormal, upVector);

This returns the dot product in the [-1,1] range.  A value of 1 means the vertex is flat (its normal points straight up), 0 means a vertical cliff face (the normal is perpendicular to the up vector), and -1 means the surface faces straight down.  We actually only need the value in the [0,1] range, as it is passed to the pixel shader along with the height (again in the [0,1] range) and used for the texture lookup.

I still apparently have a slight problem with this however, as it appears the value is not exactly in the [0,1] range, but rather between [0, 0.5].  I'm not sure why yet; when debugging the code it all seems fine.  I suspect it is how I generate the up vector, which is different for each vertex on the planet, but at least it is working.

Other Areas:

Other areas are generally the same.  For the most part now, I find I end up playing with the heightmap generation and then flying around the planet for a while checking the texturing, which means that a lot of time is wasted just exploring, and also that the heightmap generation code is constantly changing.

I still haven't managed to get the terrain patch generation (heightmap, normal map and diffuse map) onto the GPU yet, but I will have to get that ready soon, as I plan on creating growth maps etc. and will soon have to start looking at off-loading some of the CPU work onto the GPU.

Now for the screenshots:

[Screenshots]


Clouds Part I

This is another article which I plan on writing in a few parts. I have added a pretty simple cloud layer to Britonia using a distorted fBm fractal with the improved noise basis function. This is used in a shader to generate a per-pixel cloud map, which is then sampled over a spherical 'terrain' with a height of zero.

To be honest, the clouds are actually something I had anticipated implementing much sooner.  The theory is quite simple really.

The implementation of the clouds is much the same as the terrain, and if anything it is simpler.  You just need to create another sphere with LOD.  As far as geometry goes, I am using a smaller vertex grid for each patch (e.g. patches made up of 9×9 grids instead of the usual 33×33), and I am also limiting the quadtree subdivision depth to 4.  The texture applied to each patch is generated using Perlin noise in the pixel shader and can be set up to use any size; I'm going with 256×256.

As far as rendering goes, we just need to set the radius of the sphere to the height of this particular cloud layer and then render each patch with the generated cloud map.  There are a few things to watch out for, such as the render order with alpha blending, and setting the culling, but nothing too difficult.
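For reference, the kind of state setup involved looks roughly like this in XNA 3.x (API names from memory, so treat them as an assumption; the cloud sphere is drawn after the opaque terrain with standard alpha blending):

using Microsoft.Xna.Framework.Graphics;

public static class CloudRenderStates
{
    // Standard alpha blending for the translucent cloud layer, drawn after the opaque terrain.
    public static void Apply(GraphicsDevice device)
    {
        device.RenderState.AlphaBlendEnable = true;
        device.RenderState.SourceBlend = Blend.SourceAlpha;
        device.RenderState.DestinationBlend = Blend.InverseSourceAlpha;

        // Clouds don't write depth, so the terrain underneath stays visible through gaps.
        device.RenderState.DepthBufferWriteEnable = false;

        // Cull the inside of the cloud sphere as usual when viewed from outside or below.
        device.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
    }
}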

The distorted noise function which I am currently using for the cloud map generation looks like this:

float3 offsetPoint(float3 p)
{
    float3 result;
    result.x = inoise(p);
    result.y = inoise(p * 3.33);
    result.z = inoise(p * 7.77);

    return result;
}

float DistNoise(float3 p, float distortion)
{
    return inoise(p + distortion * offsetPoint(p + 0.5));
}

// fractal sum of distorted noise
float DfBm(float3 p, int octaves, float lacunarity = 2.0, float gain = 0.5)
{
    float freq = 1.0f;
    float amp  = 0.5f;
    float sum  = 0.0f;

    for(int i = 0; i < octaves; i++)
    {
        sum += DistNoise(p * freq, 0.5) * amp;
        freq *= lacunarity;
        amp *= gain;
    }

    return sum;
}

/* inoise() function is the improved perlin noise function */

You can then use the return value to blend between two colours, say fully transparent and white, so you can see the stars at night through the clouds (again, look at the screenshots below).

I am not sure at the minute whether generating the colour per frame using noise is the best way to go.  I could probably generate a large texture using the same distorted noise function, which could be applied over large parts of the planet, meaning that instead of numerous noise() calls I would just need a tex2D() call (after the texture has been generated).

For the next update I hope to have a bit more variety in the clouds, as well as some changes in cloud cover etc.

Here are a couple of screenshots:

[Screenshots]


Progress Update (Feb 2009)

I haven't been able to update the site for a few weeks, partly because I haven't got any one feature finished to show you, and partly because I simply have so much to do. So here is an update on what I am currently working on:

OK, I have been working on several areas of Britonia – and not just the coding.  Because of the size of my todo list I am trying to achieve several things at the same time, but most of them are still related to the planetary terrain system.  The features that I am currently working on are as follows:
GPU Heightmaps, Normalmaps and Diffusemaps:

This has been my main focus for the last 2-3 weeks.  I have been trying to generate the heights and normals on the GPU using improved Perlin noise, and it is kind of working.  I can produce the heightmap information for each terrain patch by passing in the four corners of the patch in world space, but I still have some issues with aligning the patches so there are no seams.  The normal map generation on the GPU should pose no problems (I can already generate a normal map from a given heightmap, though I expect there will be similar issues with seams).  The diffuse map will be generated from the height and slope information, once per patch.
fp64 Geometry:
Based on a post by Flavien Brebion over on the Infinity forums, I have changed the way in which the geometry is built and rendered.  This involved changing most of the planet geometry code and caused the scattering shaders to stop working (see below).  Now the geometry is built around (0,0,0) and I store the patch centre position in fp64.  At render time I use fp64CamPos – fp64PatchCentre to create a 32-bit translation matrix, which is passed to the shader as normal; I set camPos to (0,0,0) and render.
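A hedged sketch of that camera-relative translation (the double-precision vector type is my own, since XNA's Vector3 is single precision; the sign convention below places the patch relative to a camera at the origin, which is one reasonable reading of the description above):

using Microsoft.Xna.Framework;

// Minimal double-precision vector; XNA's Vector3 is single precision only.
public struct Vector3d
{
    public double X, Y, Z;
    public Vector3d(double x, double y, double z) { X = x; Y = y; Z = z; }
}

public static class Fp64Rendering
{
    // The patch geometry is built around (0,0,0); only its centre is stored in fp64.
    // Subtracting the two fp64 positions keeps the result small, so converting it
    // to 32-bit floats afterwards loses almost no precision.
    public static Matrix GetPatchWorldMatrix(Vector3d patchCentre, Vector3d cameraPosition)
    {
        // Offset of the patch relative to a camera placed at the origin.
        Vector3 offset = new Vector3(
            (float)(patchCentre.X - cameraPosition.X),
            (float)(patchCentre.Y - cameraPosition.Y),
            (float)(patchCentre.Z - cameraPosition.Z));

        return Matrix.CreateTranslation(offset);
    }
}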
Atmospheric Scattering:
After changing the way the geometry is rendered, the scattering no longer worked.  After an hour I managed to change the atmosphere scattering and get it working again, but the terrain scattering is taking a while longer, and it still isn't working as of this post.  Before it stopped working I added some code which was posted on the forum by Thexa4, which improved the lighting and stopped the terrain turning black when the geometry was 'higher' than the camera, so thanks for that!

Low / High end GPUs:
For most of the components I have been writing a Low and a High effect file, because a lot of the effects I am using require shader model 3.0.  I would like to set the minimum shader model to 2.0, with the option of changing the settings in game, so I have been writing quite a few effects.

Project Spring Clean:
I changed the structure of the solution a few weeks ago and separated most of the components into their own projects.  The way the solution used to be organised made it hard to get a good overview.  Now I have split up most of the main components, which produce a variety of .dll files that can be referenced and then used by the engine.  For example the scene graph, the planet renderer, the UI etc. are all now within their own projects (though they work independently of each other).  I am not sure if this is the best way to organise and structure the projects, but it will do for the time being until I learn of something better.


Britonia Logo and Blender3D:
I have also started looking at creating a Britonia logo using Blender3D, although I haven't dedicated that much time to it yet.  In the process I have also started half-heartedly learning Blender.  I say half-heartedly because I don't particularly enjoy modelling, but it is something I will need to do in the future.

Britonia Design Document:
Lastly for the update, I've been trying to finish the Britonia design document.  It is quite funny when I read through it, because it has to be one of the most ambitious design documents ever.  Nevertheless I will try to stick to it in the coming months and indeed years.  I hope to post the design document on the site soon.
*edited*
Shortly after writing this post I got the atmospheric scattering for the terrain and the atmosphere working again, so I have posted a screenshot below.  The terrain is just using vertex normals (no per-pixel lighting) and the heightmaps are still generated on the CPU, so the performance is quite bad.  I still cannot get the scattering to produce red sunsets, so I have some work left to do in this area.  When the terrain patch normals are finished, this should further increase the visual quality of the screenshots.
Until next time,
[Screenshot]
