A New Direction …

I have been thinking quite a lot recently about some of the upcoming aspects of Britonia, and about some of the decisions I made at the beginning of the project.  For a long time I've wanted to create a game with a sense of depth and immersion, somewhere you could go and play open-endedly, without a sense of the end of the game drawing nearer the more you unravel the story or progress through a dungeon.  Obviously, this is not as easy a task as it sounds, but thanks to procedural content, it is not totally out of the question.

I set myself no time limit for finishing the development of the game, and kept piling on the features.  I knew it would take a long time, but that didn't bother me and indeed it still doesn't; after all, what is the point of making a game if you don't enjoy it?

I spend quite a lot of my free time working on Britonia (or at least XNA-related projects), but there are still a great many things required to get a full game working which I haven't even started, and I think it would be better (or faster) in the long run if I change the theme of the game before I get into the actual in-game content creation.

Therefore I am considering changing the theme/genre to a space trading simulation, much like the old classic Elite by David Braben and Ian Bell.

Luckily though, changing to a space sim wouldn't require me to re-write reams of code, so everything I have done thus far will remain largely unchanged.  Of course the 'new' space game will have you landing in ground ports and/or flying around the surface of the planet looking for minerals and artefacts.

So why the change?

Simply put, after comparing the content required (models / textures / environments) for both a space sim and a fantasy RPG, I think I have a much better chance of creating believable content for a space sim than for the likes of human NPCs and monsters.  Some of the other considerations are:

Planets

Medieval RPG: Travelling at low altitude along the surface causes the planet quadtree to update often, and each quad node subdivision is always x4.  Furthermore, because of the depth of the quadtree, a lot of memory is needed for the heightmaps, normal maps etc.  Adding to this, trees, NPCs, towns and ground clutter…

Space RPG: Planet-based RPGs don't require realistic distances or accurate planetary movements (orbits/rotations).

NPCs / Monsters

Medieval RPG: Modelling NPCs and monsters is very complex, requiring many different animations (skinning).  Creating AI to mimic human behaviour is even more difficult.  I don't do faces – creating a good-looking one would take me a hell of a long time :(

Space RPG: Modelling spaceships is considerably easier than organic lifeforms.  AI is still a challenge, but that's AI.  Animations can be done using rigid bodies, and are typically limited to turret points and/or landing gear.

Towns / Communities

Medieval RPG: A high level of unique geometry (models) would be required to create believable variation between different towns.  Each individual building of a town in a medieval RPG would have to exist in full with all expected functions (a tavern would need a bar, tables, chairs etc.).

Space RPG: Cities can be represented with one (or a few) larger city meshes, which means getting away with considerably less geometry.  One central spaceport UI screen contains all the different amenities / services once the player has 'entered' the city.

Travel / Distances

Medieval RPG: Travel between planets is achieved via 'stargates' (I do still very much like this idea).  Travel on the surface is slow and takes a long time.

Space RPG: Dude – Spaceships!

Textures

Medieval RPG: Buildings are constructed from different materials, each requiring a unique texture.  Different altitudes also affect the colour of the texture (the white of mountains etc.).

Space RPG: It's common knowledge that all spaceships are grey with yellow windows.  Easy.

I just wrote these off the top of my head, and I must admit it is hard to remain impartial after convincing myself a change is needed, but even looking at the above table I think the space sim would generally be faster and easier to implement.  Although I made a few less-than-serious remarks above, some of the areas such as travel / distances are obviously made easier with the concept of space flight.

Well, please let me know what you think, because I haven't fully made up my mind yet.  You have until the website theme changes to a space scene to leave me your comments :)

-John


Depth (Z-) Buffers

As with all games, Z-buffering plays an important part when rendering pixels to the screen. During the development of Britonia I have often run into problems related to the Z-buffer, so I thought it would be nice to share what I have learned about them and how to avoid these issues.



Procedural Generation – Textures on the GPU

This small tutorial is really just an extension of the first article I wrote on 3d improved noise on the CPU. More specifically we’ll be getting it to work on the GPU this time.


Procedural Generation – Textures on the CPU

In this tutorial I’ll be covering how you can use perlin noise and various fractal functions for generating textures either at compile time or run time. This technique can be useful for generating natural looking textures (grass, wood, marble etc.) or heightmaps for your terrains.



Noise Part III (GPU)

Well, it's taken a lot longer than I originally planned, but here is the third part in the Noise series.  In the last two noise articles, I explained how I was using Perlin noise to produce heightmap textures, as well as the code used to implement Ken Perlin's improved noise function.  So far this has all been on the CPU.  In this article, I will explain how I ported this onto the GPU to produce heightmaps and diffuse maps for all the terrain patches.

The move from the CPU to the GPU for height map generation actually caused quite a few headaches and problems I had overlooked while initially considering the transition.  I tackled the conversion in steps, each of which I have outlined below.

How it used to be on the CPU:

Initially I was generating a height value using the vertex positions (in cube space) as input to a Perlin noise function.  This was quite easy to use in that it involved one call to the inoise function, which returned the height of the planet geometry for each vertex individually.  This would then be assigned directly to the vertex before being saved in the VB.  Cube space contains the positions of each of the quadtree vertices before they are projected out onto the sphere, and is in the range [-cubesize, cubesize] (e.g. for Earth-sized planets this is [-46, 46]).

This had the added benefit that each time I wanted to place an object on the planet, I just needed to call this function with the desired position of the object (again in cube space) and it would return the height of the terrain at that point.  It also meant I could be lazy: instead of trudging through the quadtree and reading the height from a heightmap array, I just made the more expensive noise function call to get the height of each object I wanted to place on the terrain, including the camera for computing min-LOD levels.
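To make the old CPU path concrete, here is a minimal sketch of what that per-vertex loop boiled down to.  It is illustrative rather than the actual engine code: INoise.Sample stands in for the improved-noise function from the earlier noise articles, and the scale factors are placeholders.

using Microsoft.Xna.Framework;

// Sketch only: one noise call per vertex, with the cube-space position as input.
static void BuildPatchHeightsOnCpu(Vector3[] cubePositions, Vector3[] outPositions,
                                   float planetRadius, float heightScale, float noiseFrequency)
{
    for (int i = 0; i < cubePositions.Length; i++)
    {
        // Cube-space position of the vertex, in the range [-cubeSize, cubeSize].
        Vector3 cubePos = cubePositions[i];

        // One call to the noise function returns the terrain height at this vertex.
        float height = INoise.Sample(cubePos * noiseFrequency) * heightScale;

        // Project onto the sphere and displace by the height before writing to the VB.
        outPositions[i] = Vector3.Normalize(cubePos) * (planetRadius + height);
    }
}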

Provided I was willing to keep the number of octaves below 5-6 per vertex and use vertex normals for lighting, I could get between 60 and 90 fps using CPU noise, which isn't too bad.  Getting any kind of diversity in the terrain at surface level is difficult with such a tight octave budget though, and it just wouldn't be feasible to generate normal maps for the terrain patches on the CPU, so I decided to move the geometry map generation to the GPU.


Getting the generation onto the GPU:

Geometry Maps:

The first step towards GPU generation was to get the geometry map generation up and working. Because this geometry map is used only for generating the terrain height per vertex, I used a RenderTarget2D with a SurfaceFormat of R32, which provides a single 32-bit component (instead of the 'typical' A8R8G8B8 surface).  The render target has the same dimensions as the vertex grid, so each <u,v> coordinate of the fullscreen quad texture lines up perfectly with the <x,y> vertex in the patch VB.  The typical size I use for this is 33×33, although this is scalable.
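For reference, setting up such a render target in XNA 3.1 looks roughly like the snippet below.  SurfaceFormat.Single is, as far as I know, the single-channel 32-bit float format referred to here as R32; the variable names are just illustrative.

// Sketch: a 33x33 single-channel float render target for the geometry map (XNA 3.1).
int resolution = 33; // same dimensions as the patch vertex grid
RenderTarget2D geometryMapRT = new RenderTarget2D(
    graphicsDevice,
    resolution,
    resolution,
    1,                      // one mip level
    SurfaceFormat.Single);  // one 32-bit float component per texel

// Render the fullscreen quad with the noise effect into it, then grab the result.
graphicsDevice.SetRenderTarget(0, geometryMapRT);
// ... draw the fullscreen quad with the geometry-map shader ...
graphicsDevice.SetRenderTarget(0, null);
Texture2D geometryMap = geometryMapRT.GetTexture();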

The first obstacle I decided to tackle was that I could no longer call upon the noise function individually per vertex; I had to get the GPU to generate the height values of each vertex at once (in the render target).  To do this I pass in the four cube space corner positions of the terrain patch, and lerp between these values in the pixel shader.  This means that I can re-create the position of each vertex in the quadtree grid in the PS, and then use this position vector as input to the 3d noise function, per pixel.

The PS lerps between the four patch corners using the texture coordinates of the fullscreen quad, which are assigned in the [0,1] range.  I had a few problems with the linear interpolation because texture coordinates assigned to the fullscreen quad do not match up perfectly with the screen pixels.  This problem is described here.  It means that when I used the texture coordinates for the lerping of world coordinates I had small seams on each patch, where the interpolation didn't start on exactly 0 or 1, but rather 0+halfPixel and 1-halfPixel.  I got around this problem by slightly modifying the fullscreen quad texture coordinates and offsetting them by the half pixel in the application.  Here are both the fullscreen quad and the shader used for the geometry map generation:

Application:

private static VertexPositionTexture[] m_Vertices = new VertexPositionTexture[4];
private static float textureResolution = 33; // resolution of the geometry map / patch vertex grid

private static void Initialise()
{
    float ps = 1.0f / textureResolution; // size of one texel in texture space

    // Define the fullscreen quad.  The texture coordinates are offset by a texel
    // so that the interpolated values line up with the render target pixels
    // (see the half-pixel discussion above).
    m_Vertices[0] = new VertexPositionTexture(new Vector3(-1, 1, 0f), new Vector2(0, 1));
    m_Vertices[1] = new VertexPositionTexture(new Vector3(1, 1, 0f), new Vector2(1 + ps, 1));
    m_Vertices[2] = new VertexPositionTexture(new Vector3(-1, -1, 0f), new Vector2(0, 0 - ps));
    m_Vertices[3] = new VertexPositionTexture(new Vector3(1, -1, 0f), new Vector2(1 + ps, 0 - ps));
}

HLSL:

// Corner positions of the terrain patch in cube space, set from the application
// (only three of the four corners are needed for the lerp).
float3 xNW;
float3 xNE;
float3 xSW;

float4 PS_INoise(float2 inTexCoords : TEXCOORD0) : COLOR0
{
    float land0 = 0.0f;

    // Get the directions from the origin (top-left) to the other corners.
    float3 xDirection = xNE - xNW;
    float3 zDirection = xSW - xNW;

    // Scale the distances by the texture coordinates (which are between 0 and 1).
    xDirection *= inTexCoords.x;
    zDirection *= 1 - inTexCoords.y;

    // Lerp outward from the origin (top-left) in the direction of the end vector (bottom-right).
    float3 lerpedWorldPos = xNW + xDirection + zDirection;

    land0 = doTerrainNoise(lerpedWorldPos);
    return land0;
}

After I got the geometry map generation working, I could use 16+ octaves of noise to generate the geometry map, instead of the 5-6 octave limit imposed by CPU noise.  This is more than enough to create diverse heightmaps, but it requires a lot of mixing and tweaking of the noise functions to get anything 'realistic' looking.
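The octave mixing itself, whether on the CPU or in the pixel shader, is essentially just a fractal sum of noise calls.  Here is a hedged C# sketch (INoise.Sample again stands in for the improved-noise function; the HLSL version is structurally the same loop):

using Microsoft.Xna.Framework;

// Sketch of fBm-style octave mixing (illustrative, not the engine's exact terrain function).
static float FBm(Vector3 position, int octaves, float lacunarity, float gain)
{
    float sum = 0.0f;
    float amplitude = 1.0f;
    float frequency = 1.0f;

    for (int i = 0; i < octaves; i++)
    {
        // Each octave adds finer detail at a lower amplitude.
        sum += INoise.Sample(position * frequency) * amplitude;
        frequency *= lacunarity; // e.g. 2.0 doubles the frequency per octave
        amplitude *= gain;       // e.g. 0.5 halves the contribution per octave
    }

    return sum;
}

Combining and warping several sums like this (ridged variants, turbulence and so on) is where the 'mixing and tweaking' mentioned above comes in.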

Normal Maps:

Now come some of the benefits of generating the geometry map on the GPU.  As mentioned above, the geometry map is created at the same resolution as the terrain vertex grid, which means each vertex is represented by exactly 1 texel in the geometry map texture.  So in my case, the geometry map is 33×33.  Using the four corners of each terrain patch, I can now however create another RenderTarget2D with a much higher resolution, say 256×256 or 512×512, which would create 'geometry' for the same area, but with much more detail.

n.b. I now call the small 1:1 vertex to texel map the geometry map, while the high resolution map is the height map.

This height map can now be used to create a high-resolution normal map for lighting calculations in tangent space, which really improves the visuals of the terrain.  Generating 512×512 geometry like this wouldn't have been feasible using CPU Perlin noise, but we are now able to create high-resolution normal maps which add a lot of realism to the planet surface, especially from high altitudes.
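As a rough illustration of the idea (the engine does this in a shader, so treat this CPU version purely as a sketch), a tangent-space normal map can be derived from the height map with simple central differences:

using System;
using Microsoft.Xna.Framework;

// Sketch: derive per-texel tangent-space normals from a height map array.
static Vector3[] BuildNormalMap(float[] heights, int width, int height, float bumpScale)
{
    Vector3[] normals = new Vector3[width * height];

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // Neighbouring heights, clamped at the edges of the map.
            float left  = heights[y * width + Math.Max(x - 1, 0)];
            float right = heights[y * width + Math.Min(x + 1, width - 1)];
            float up    = heights[Math.Max(y - 1, 0) * width + x];
            float down  = heights[Math.Min(y + 1, height - 1) * width + x];

            // Normal from the height gradient; pack into [0,1] before storing in a colour texture.
            normals[y * width + x] = Vector3.Normalize(
                new Vector3((left - right) * bumpScale, (up - down) * bumpScale, 1.0f));
        }
    }

    return normals;
}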

Diffuse Maps:

The next benefit of generating the hi-res height maps is that I can now generate a diffuse map at a higher resolution than what I was previously using on the CPU.  Previously I was sending along an extra Vector2 struct on the vertex stream to the GPU, which contained the height and slope (both in the [0,1] range) of the vertex.  This was then used in a LUT which returned the index of the texture to use (sampled with tex2D in the PS).  This method is called texture atlasing, and allows me to use 16 512×512 textures passed to the effect in one file (2048×2048).
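To illustrate the lookup, here is a hedged sketch of how a (height, slope) pair might be mapped to one of the 16 tiles in the 2048×2048 atlas; the banding thresholds and tile layout are made up for the example and are not the values the engine uses.

using System;
using Microsoft.Xna.Framework;

// Sketch: map height and slope (both in [0,1]) to the UV offset of a tile in a 4x4 atlas.
// The 2048x2048 atlas holds 16 tiles of 512x512, so each tile covers 0.25 in u and v.
static Vector2 AtlasUvOffset(float height, float slope)
{
    // Illustrative LUT: 4 height bands x 4 slope bands -> tile index 0..15.
    int heightBand = Math.Min((int)(height * 4), 3);
    int slopeBand  = Math.Min((int)(slope * 4), 3);
    int tileIndex  = heightBand * 4 + slopeBand;

    float u = (tileIndex % 4) * 0.25f;
    float v = (tileIndex / 4) * 0.25f;
    return new Vector2(u, v);
}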

Because the LUT was previously based on per-vertex height and slope information, the texturing was not all that great.  But now the diffuse map is based on the more detailed height map, so I am able to get much more detailed results.  Furthermore, because the diffuse map is generated once per patch when the quadtree is subdivided, I don't need to do any extra work in the terrain PS other than sampling the diffuse map.
Here are two screenshots of the moon Gani on a clear night; I realise the trees in the second shot are not lit correctly ;)


Procedural Road Generation

In the past weeks, I have approached the issue of generating procedural roads at runtime based on different maps generated by the Engine. The road generation is based on self-sensitive L-Systems.  This article will outline what L-Systems are and how I have started to use them to procedurally generate road networks.

This can be a pretty tricky subject depending on how authentic you want to make it; on the one hand you could generate random points on a terrain to represent towns, join them with lines, branch a few more lines off these and call it a road map.  This would probably look okay, but inevitably you're going to end up with roads which lead off to nowhere, two roads which illogically zigzag across one another, or roads which go up impossibly steep inclines or into water etc.  All of these scenarios would seriously affect the gameplay in a negative way.

However, on the other hand, you could go all out and take into consideration the town positions, population density, altitude and slope information, culture and the time period, and base the road generation on all of this.  Obviously this would be ideal, but generating it at runtime would probably take a long time, depending on what resolution you generate the road maps at.  Because Britonia will be a trading and exploration game, the option for the player to follow a road to see where it leads should mean that the player is not simply left stranded in a forest, thinking "well, who would have built a road here!?".

I found an article by Pascal Müller on his site here.  You can find the article in German at the bottom of the page, and there is also a shorter, less detailed version in English halfway down the page.  It is the German version that I am basing the rules for generating the roads on.

L-Systems and making them self-sensitive:

I'm just going to paste the Wikipedia definition of an L-system first, because it pretty much sums up what L-systems are:

“An L-system (Lindenmayer system) is a parallel rewriting system, namely a variant of a formal grammar, most famously used to model the growth processes of plant development, but also able to model the morphology of a variety of organisms. L-systems can also be used to generate self-similar fractals such as iterated function systems.”

What this means is that we can start out with a string of characters, such as "AB", and pass it into various functions, which are called productions.  The productions have a set of rules which check the given string for patterns called predecessors and, if any are found, replace those characters with a new set called successors (the parallel rewriting).  Because this rewriting is based on a limited number of rules, and because we can iterate over our string as often as we want, this produces the self-similar fractals mentioned above.

Let’s take Lindenmayer’s original L-System for modelling the growth of Algae as an example:

For this we start out with the string "AB" and we have two productions.  The first production replaces all instances of the character 'A' with 'AB'.  The second production replaces all instances of the character 'B' with 'A'.  We can run the string through these two productions <X> number of times, and each time the string will gradually grow with a specific pattern.  Consider the following code:

private string GenerateString(int maxDepth)
{
    string axiom = "AB"; // The initial string.

    for (int i = 0; i < maxDepth; i++)
    {
        // Apply both productions in a single pass, so that every character of the
        // current string is rewritten simultaneously (parallel rewriting).
        axiom = ApplyProductions(axiom);
    }

    return axiom;
}

/// Rewrites the current string by applying both productions to every character:
/// Production 1 replaces each 'A' with "AB", Production 2 replaces each 'B' with "A".
/// Characters that no production applies to are copied through unchanged.
private string ApplyProductions(string currString)
{
    string returnString = "";

    foreach (char instr in currString)
    {
        if (instr == 'A')
        {
            returnString += "AB"; // Production 1: A -> AB
        }
        else if (instr == 'B')
        {
            returnString += "A";  // Production 2: B -> A
        }
        else
        {
            returnString += instr; // No production applies; keep the character.
        }
    }

    return returnString;
}

Running this system 5 times will yield the following results:

Iteration 0: AB
Iteration 1: ABA
Iteration 2: ABAAB
Iteration 3: ABAABABA
Iteration 4: ABAABABAABAAB
Iteration 5: ABAABABAABAABABAABABA

While this may not look very useful, imagine now that we also assign a drawing command to each of the characters in the resulting string after 5 iterations.  This would enable us to draw some pretty complex patterns with a relatively small and easy piece of code.

As another quick and slightly more interesting example, take the following instruction set:

start: X
Production: (X → F-[[X]+X]+F[+FX]-X), (F → FF)
angle: 25°

In this example, 'F' means "draw forwards", a '-' will be interpreted as "rotate the angle by -25°", and a '+' will increment the angle by 25°.  The '[' and ']' push and pop the current position and angle on a stack, so we can branch different instructions.  Using the above simple L-system, we can produce images like this:

branching plant produced by the above L-system

Again, while this may not seem all that graphically realistic, it is possible for us to assign our own graphics to each instruction, so we could use real textures for the stem and leaves etc.  This was done to great effect in Asger Klasker's LTrees project (http://ltrees.codeplex.com/).
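To give an idea of how such a drawing pass can work, here is a small, hypothetical turtle interpreter for the instruction set above: 'F' draws one step forwards, '+' and '-' turn by the angle, and '[' / ']' push and pop the turtle state.  It simply collects 2D line segments, which could then be rendered however you like.

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Sketch of a 2D turtle interpreter for an L-system string (illustrative only).
static List<Vector2[]> Interpret(string instructions, float stepLength, float turnAngle)
{
    List<Vector2[]> segments = new List<Vector2[]>();
    Stack<KeyValuePair<Vector2, float>> stack = new Stack<KeyValuePair<Vector2, float>>();

    Vector2 position = Vector2.Zero;
    float heading = 90.0f; // start pointing 'up'

    foreach (char c in instructions)
    {
        switch (c)
        {
            case 'F': // draw forwards one step
            {
                float rad = MathHelper.ToRadians(heading);
                Vector2 next = position + new Vector2((float)Math.Cos(rad), (float)Math.Sin(rad)) * stepLength;
                segments.Add(new Vector2[] { position, next });
                position = next;
                break;
            }
            case '+': heading += turnAngle; break;  // turn one way
            case '-': heading -= turnAngle; break;  // turn the other way
            case '[': stack.Push(new KeyValuePair<Vector2, float>(position, heading)); break;
            case ']':
            {
                KeyValuePair<Vector2, float> state = stack.Pop();
                position = state.Key;
                heading = state.Value;
                break;
            }
            // Characters like 'X' carry no drawing command; they only drive the rewriting.
        }
    }

    return segments;
}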

Input Maps:

There is a problem emerging here though as we cover the basics of L-systems: how can we use input (i.e. the environment) to direct the growth of the patterns?  To do this we can assign functions to different instructions, which are run in the correct sequence and can be used to make sure certain conditions are met.  Such instructions are used in road generation to check the height under the current position and the end position, to make sure the gradient is not too high etc.  These are some of the input maps which I am using to generate the road maps at the minute:

Water and Parks Map:

parks and water map

The idea of this map is to define which areas are out-of-bounds for road generation. We can use this map to also model bridges and fords by using the full range of colours in each pixel. Fords are achieved by allowing the roads to cross only at places where the water level is shallow enough, say:

waterLevel-20 <= roadLevel <= waterLevel.

Because roads cannot be built in (deep) water, we can check the start and end coordinates of a road segment, and if water occurs between these two points, then we could place a bridge to allow the player to cross the river.  We can also limit this action to only allow main roads to have bridges; that way there is only a limited number of bridges built in any one city.
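As a hedged sketch of that rule (the depth threshold of 20 comes from the condition above; the types and names are mine, not the engine's), the water test for a proposed road segment could be expressed like this:

// Sketch: classify a proposed road segment against the water map.
enum CrossingType { Road, Ford, Bridge, Rejected }

static CrossingType ClassifyCrossing(float waterLevel, float roadLevel, bool isMainRoad)
{
    if (roadLevel >= waterLevel)
        return CrossingType.Road;          // dry land, build the road normally

    if (roadLevel >= waterLevel - 20)
        return CrossingType.Ford;          // shallow enough to ford

    return isMainRoad
        ? CrossingType.Bridge              // only main roads are allowed bridges
        : CrossingType.Rejected;           // deep water, this segment is discarded
}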

Altitude (height) Map:

altitude (height) map

Again we can use this map to define where roads can and cannot be built. Using the height map we can also check the gradient of the planned road and ensure that roads are not built steeper than what the player is able to climb. It is also possible for us to define that the main roads should try and follow the contour of the hills, so we get roads which spiral upwards instead of just going straight from the bottom to the top.
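A minimal sketch of the gradient constraint, assuming the height map can be sampled at an arbitrary 2D map position (the sampling delegate and the maximum-gradient value are placeholders):

using System;
using Microsoft.Xna.Framework;

// Sketch: reject proposed road segments that are steeper than the player can climb.
static bool IsGradientAcceptable(Vector2 start, Vector2 end, float maxGradient,
                                 Func<Vector2, float> sampleHeight)
{
    float run = Vector2.Distance(start, end);
    if (run <= 0.0f)
        return false;

    float rise = Math.Abs(sampleHeight(end) - sampleHeight(start));
    return (rise / run) <= maxGradient; // e.g. 0.3 for a maximum 30% incline
}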

Population Density Map:

pop density map

The population density map is used to steer the roads and motorways in meaningful directions. Areas with a higher population density will usually have a higher density of roads.

Profiles:

There are two major profiles that I’ve been looking at for generating the road maps. The first is a ‘New York’/urban style road generation, which creates dense grid like road networks. While this will not be used in Britonia, I would like to add this capability to the Galaxis Engine so that in the future, any space games made with the Engine can generate modern cities on the surface. The second profile is a medieval/rural profile which looks a lot more organic, with long curving streets branching out from each other. This will be more suited to medieval street maps, whereby the land shapes the streets more than functionality.

New York / Urban:

New York-style street plan

This is one outcome of procedurally generating a street plan based on New York.  The streets are basically created with a very small chance of the heading changing for forward-facing road sections.  Then, every <x> pixels/meters I can define a branch in the road in three separate directions at 90° from each other (forward, left and right).  This helps to produce the block effect.

Rural:

rural street plan

This is my attempt at generating a street plan for a rural, high mountain area.  To achieve this, I set the size of the main road sections much smaller than in the urban map, and also increased the chance that the heading (angle) changes with each section, resulting in streets which curve instead of running straight, so there shouldn't be as many well-defined 'blocks' as you might find in New York City etc.  I also cut down on the number of branches in the roads, from 3 to 2.  The effect doesn't look too bad, and it is possible to tweak the parameters further still to create larger or smaller settlements.

Improvements:

I am still working on the road generation instructions, and there are several things which I hope to finish soon.  At the minute the population density map doesn't have such a big influence on the generation of the roads.  I am thinking of leaving the road generation as it is and just using the population map to determine the actual buildings which will be placed on the land.

Source Code:

The source code will be made available as soon as I have made it friendlier to use.  I will create a download link on the left, but if anyone wants to see the source beforehand then just send me a mail and I'll forward it to you.  The solution has been created with XNA 3.1, so make sure you have that before running it.  The project generates a lot of garbage as everything uses large amounts of strings, so it most definitely shouldn't be used as-is in your game.  I haven't done this myself, but perhaps using StringBuilder would be a better solution and would cut down on the garbage.

If you have any comments or criticism, then feel free to post on the forums here, but please remember that this was a proof of concept application, and as such I made it with readability in mind, not speed, so there are a great many optimisations which could and should be implemented before you use this in your own projects.

You can get the source code here


Progress Update (June 2009)

I have been working on many things in the last 6 weeks, so I think it is time for another progress update.  In this update I will outline which components I have started and how far along they are, including LTrees, GPU map generation and the GUI.

It hasn't all been quiet on the site, though: it did see some activity roughly 2 weeks back, when I wrote the first article not related to the progress on Britonia.  It was titled 'Overcoming Scale and Precision Errors'.  I hope to continue writing and posting articles like that in the future.

For the actual game there have been many updates, so I will take the same format as the last progress update and describe each component separately, and then follow up with a few screenshots:

Procedural LTrees:

I mention this first not because I have spent the most time on this area (in fact the opposite), but rather because, when looking at the new screenshots, the first thing you'll notice is that I have added trees to the world.  I went ahead as planned and integrated Asger Klasker's LTrees project from codeplex.com into the engine.

I use this component to procedurally generate a set of trees when the world is loaded, along with an accompanying set of billboards.  The plan is to eventually render the same sets of trees with varying rotations, sizes and colour offsets, so that I don't have to constantly generate full trees every time the camera moves.  I have also started setting up an LOD system which, based on the distance of the tree from the camera, switches between rendering a high-poly mesh, low-poly mesh, single billboard or group billboard.  As I said, the LOD is still a work in progress, but it is so far working quite well.

At the minute, whilst flying around the planet (see the YouTube links at the bottom), you will notice that the trees are not placed correctly.  This is 'intentional': until I implement the growth maps, I don't want to lose too much time setting up mock forests and whatnot, and it is still too early for the growth maps.  I do, however, still have to test the LOD for the trees, so at the minute they are just planted using a generic growth map which is sampled once per vertex grid position; this is evident from the grid pattern of the trees in the video.  I plan to generate one growth map per patch, which will be used both to identify where trees should be placed and, in the future, where buildings and other objects can be placed.  This growth map will be based on the height and slope of the terrain to dictate where valid tree positions are.
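The per-position test I have in mind for the growth map would boil down to something like the following; the thresholds are purely illustrative placeholders, not tuned values.

using System;

// Sketch: decide whether a tree may be planted at a terrain position,
// based on normalised height and slope (both assumed to be in [0,1]).
static bool CanPlaceTree(float height, float slope, Random random)
{
    if (height < 0.05f) return false; // under water / on the beach
    if (height > 0.70f) return false; // above the tree line
    if (slope  > 0.40f) return false; // too steep

    // Thin the forest out so the trees don't form a solid, regular grid.
    return random.NextDouble() < 0.6;
}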

GPU Diffusemap, Heightmap, NormalMap and Growthmap Generation:

Well, the good news is that I've managed to get the terrain heightmap generation working on the GPU, using the four corners of the cube in world space and lerping between them to generate a grid of noise values using Perlin noise.  The bad news is that it hasn't really improved performance, yet.  I still have a lot of testing to do with the GPU generation, again trying to find the best 'low octave, good quality' terrain etc.  I am actually hoping to get the height, normal and diffuse maps all generated on the GPU (with MRTs if available, which can output up to 3 textures simultaneously, or single render targets if MRTs are not available).  When it is finished, I should be able to generate higher quality diffuse maps on the GPU once per patch, and then sample them with a normal tex2D() in the shader.  This will be useful because a) I can then lerp between texture LODs to avoid the current texture popping, and b) I can then generate high resolution normal maps based off of the high-res heightmaps; the normals are currently per vertex.

User Interface:

I have also done quite a bit of work in the last few weeks on the UI, and so far I have set up a pretty neat little system whereby the game components can individually retrieve instances of windows (which are stacked on the left side of the screen in the screenshots) and set many debug variables in the windows.  It enables the game components to define custom controls (radio buttons, text boxes, slider bars etc.).  This is particularly useful for changing debug variables at runtime.  So far the biggest thing missing is being able to load the UI windows from .xml files, but I will get to this soon.

The Player Classes:

I have been working on Britonia for a fair few months now, and while the terrain is shaping up quite nicely, I am now going to concentrate more on the player side of things. So here are some components which I will begin in the coming weeks:

HUD – Although this is not really a player class, this is quite obviously a very important part of any game, both because it is the player's main method of interacting with the game, and also because it will feature heavily in all game screenshots etc.  So far the GUI is set up mainly to act as an interface for debugging variables, and it is very generic.  I have added a basic window system, whereby I can scale, move, dock and minimise the windows.  I must now start to create the 'special case' windows such as inventory, trade, shops, character skills, character paperdoll etc.

Character Skills – What would an RPG be without avatar skills?  I have played a great many computer games over the years, and I have had the chance to play around with many different character skill systems.  I have decided to implement the character skills in much the same way as the Ultima Online skills worked.  For those who don't know, the skills are grouped together by type, so you would have, for example, all the combat skills together, each ranging from 1 to 100.  Each skill can be improved by using it; that is to say, to improve your skill in archery you would need to actually use a bow.  Now, per group you have a maximum attainable number of points, say 400.  This effectively lets the player decide how to improve their skills based on how they play.  You can lock skills (to stop gaining/losing points) or completely ignore other skills which you do not use.  Once the 400 points are full for the group, you cannot upgrade anything else until you drop points from a different skill.

Now it has been a long while since I last played UO (about 10 years ago :), so the numbers are just off the top of my head.  I haven't started drawing up plans for which skills to include, because first I need to know about all the aspects of the game.  The skill points are not just combat related, but also for mining, lumberjacking, walking etc.  I even remember a skill in UO for creating kindling wood from the trees, so you could build a fire and safely log off.  Although I was young, I remember being very fond of how this all worked, so this is what I will aim for.
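As a rough sketch of the rules described above (the numbers are placeholders from my description, not UO's real values), a use-based gain with a per-group cap might look like this:

using System;
using System.Collections.Generic;

// Sketch of use-based skill gain with a per-group cap (illustrative values only).
class Skill
{
    public float Value;   // 1..100
    public bool Locked;   // locked skills neither gain nor lose points
}

class SkillGroup
{
    public const float GroupCap = 400.0f; // maximum total points across the group
    public List<Skill> Skills = new List<Skill>();

    // Called whenever a skill in this group is used successfully.
    public void TryGain(Skill skill, float amount)
    {
        if (skill.Locked || skill.Value >= 100.0f)
            return;

        float total = 0.0f;
        foreach (Skill s in Skills)
            total += s.Value;

        // Gain is only possible while the group is under its cap; otherwise the
        // player must first drop or lock points in another skill of the group.
        if (total + amount <= GroupCap)
            skill.Value = Math.Min(skill.Value + amount, 100.0f);
    }
}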

Bulletin Quests – I plan to start working on the bulletin quest system of Britonia.  The plan for the minute is to base this on the kind of quests given in the old Elite games.  Only this time, instead of spaceports, you can read the bulletin either in the town taverns or from the town crier.  These quests will be completely procedural and will fall into one of the following categories: assassinate (a person), eliminate (a group or band), gather (collect resources), escort, smuggle, or whereabouts of? (missing person).  I would like to point out that this will not be the main quest system in Britonia, but it is something that should prove easy to implement and effective.  There will of course be a main plot and more involved quests, but they're for another time.

Player Hands and Weapons – I hate modelling.  Right, I had to get that out of the way.  I started maybe two weeks ago to implement the player hands and the weapons.  I modelled a pair of hands and a bow and arrow, skinned them, and put them through the content pipeline from the XNA Creators Club website.  Everything seems to work.  I have added several idle animations, and the weapons follow the hand animations quite nicely.  At the minute though, the right hand looks a little strange.  I have decided to tackle this problem in little bits: once every two weeks I will dedicate 2-3 hours just to modelling, because anything longer in Blender makes me want to cry.  If anyone is interested I could put some pictures up on the site, but I am happy to wait for a while before showing them.  The only goal I have set for this in the short term (within 2 months) is to have the bow fire and impact objects, and the sword(s) swing and impact objects.  This should be enough for this feature not to impede the rest of the game's development.  The bow and arrow is about 50% complete.

Terrain Textures (a quick explanation):

I realise in the videos and screenshots that the terrain texturing is looking a little off.  The seams between textures are very visible, as are the tiling and the LOD transitions (explained above).  Around March I worked for about a week tweaking the noise and the textures, trying to get them to look good for a YouTube video I wanted to make.  I finally got them to an acceptable standard, and I made the video.  The same night I decided to fiddle around with the quadtree, and I increased the depth to allow for the massive improvement in scale, but then the texturing was way out again.  I mention this because I have decided to wait before re-tweaking the terrain textures.  Until I have fully added trees, and eventually grass and flowers, I will not know how the terrain will look.  So to save time, I will leave it until the ground objects are a little better.  Also, the terrain textures haven't changed once since I created the texture pack last year.  I am sure that huge improvements can be made in the visual quality by re-drawing the textures in the texture pack.
