# Coding

## Functy PPA

If you’re an Ubuntu user, you can now get the latest Functy development build through my Launchpad PPA. Once you’ve added the PPA, it can be installed easily using the standard **apt-get** approach:

```
sudo apt-get install functy
```

Beautiful.

For those inclined towards voxels, the latest development version can now export in SVX and OpenVDB formats.

All Ubuntu versions since 14.10 are supported (that’s Utopic, Vivid and Wily). Utopic’s the only one I’ve tested so far. Anything earlier is unfortunately a no-go due to an older version of libzip.

That’s it! When I get a bit more time I’ll post about the new OpenVDB export feature.

## Curve Functions as Voxels

- Intro
- Cartesian Functions
- Spherical Functions
- Curve functions (this post)

Curve functions are defined as a circular co-ordinate function extruded along the length of a parametric curve. Hence, just like Cartesian and spherical functions, we have just two inputs (in this case the angle *θ* and parametric variable *s*) and three outputs (the *x*, *y*, *z* co-ordinates of the surface point):

(*x*, *y*, *z*) = *f*(*s*, *θ*).

This is constructed from the parametric curve

(*x*, *y*, *z*) = *p*(*s*)

and the radius functions

*r* = *q*(*s*, *θ*).

Practically speaking there’s some ambiguity here, because the angle *θ* needs to be given an orientation. One rotational axis is defined by the tangent to the curve, but how should the other two axes be oriented, so that we know where to measure our rotation from? To solve this we use the Frenet frame of the curve to define an orthogonal co-ordinate space independently at each point along its length. It’s generally a good solution because it tends to maintain a consistent orientation along the length of the curve (although this can fail rather miserably if the curve becomes a straight line, which is a discussion for another time) while also being independently defined at each point on the curve. This last property is important for calculating the curve surface points in parallel on the GPU.
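In symbols the construction looks like this (a sketch of the standard Frenet construction rather than Functy’s exact code; *T*, *N* and *B* are the tangent, normal and binormal):

```latex
\begin{aligned}
T(s) &= \frac{p'(s)}{\lVert p'(s)\rVert}, \quad
N(s) = \frac{T'(s)}{\lVert T'(s)\rVert}, \quad
B(s) = T(s) \times N(s), \\
f(s,\theta) &= p(s) + q(s,\theta)\,\bigl(\cos\theta \, N(s) + \sin\theta \, B(s)\bigr).
\end{aligned}
```

The failure case mentioned above is visible here: if the curve is a straight line, *T*′(*s*) vanishes and *N*(*s*) is undefined.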

This arrangement works well transforming from parameters to surface co-ordinates. But for a voxel map we need the reverse: take a voxel co-ordinate (*x*, *y*, *z*) and establish how far it is from the curve. We have the equation of our curve and can write an equation to capture the distance, but actually solving the equation is a different matter.

What we need is the value of *s* that minimises the distance from the voxel at *v* = (*x*, *y*, *z*) to the curve. Writing this distance as *D*(*s*) = |*p*(*s*) − *v*|, that means a solution to the following equation.

d*D*(*s*) / d*s* = 0

Calculating the algebraic form of d*p*(*s*) / d*s*, and hence of d*D*(*s*) / d*s*, isn’t hard (we do the former already for the lighting calculations), but solving the equation can be. We’d have to rearrange it to express it in terms of *s*, and it could have multiple solutions, as the diagram above shows. In short, given that *p*(*s*) is user defined and could be practically any function, we simply can’t rely on being able to solve the equation.

So is there nothing we can do? Algebraically, no: we can’t find the parametric values that relate to a specific voxel. This is a real shame; it means we have to use a different approach entirely. The solution I’ve come up with is to rasterise an approximation of the volume as a series of tetrahedrons. Sadly this is less accurate and less efficient than the ideal algebraic approach, but at least it works, and reasonably well.

A tetrahedron is the 3D primitive counterpart to the 2D triangle: just as we can approximate any surface as a series of triangles, we can approximate any solid as a series of tetrahedrons. And just as we usually render a surface as triangles onto a 2D texture, we can render a solid into a 3D texture as a series of tetrahedrons.

To understand this better, think of the tube defined by our curve as if it were partitioned into a series of cuboids, as in the diagram below. In the diagram the cuboids form a very crude approximation of the cross-section of the tube. This is crude partly because of the low number of cuboids used (in practice we’d usually want more) and partly because of my poor Blender skills (which you’ll just have to try to look past).

Here’s how one of those cuboids might look:

Our voxel space is made up of a series of slices (each slice has the same *z* co-ordinate but varying *x* and *y* co-ordinates) so we need to be able to slice this cuboid. Performing the slicing directly on the cuboid turns out to be hard (for me, at least), so the solution is to convert each cuboid into five tetrahedrons, like this:

This is actually really easy to do, because every vertex of each tetrahedron is just one of the vertices of the cuboid: we just need to pick them out in the right order.

Slicing a tetrahedron is a lot easier. Assuming (for the sake of simplicity) we don’t slice along one of the edges, a plane will always cut the edges of the tetrahedron in either three or four points. If it’s three points, we can just join them to create the triangle that represents the slice through the tetrahedron. If it’s four points, we have a quadrilateral, which we can turn into two triangles. We have to take a bit of care over this to avoid accidentally choosing the concave variant where two of the lines cross:

To avoid this, we check whether two of the lines cross, and if they do, reorder the vertices to fix it. We end up with two triangles which we can then rasterise onto the surface slicing the volume. Computers have been rasterising triangles since the dawn of time (well, at least 1967, which is as good as).

So we have our sequence of simplification: curve volume, cuboids, tetrahedrons, triangles, voxels. It’s a bit long-winded, but we get there in the end. The inefficiency is compounded by the fact that we end up performing this ‘volume rasterisation’ repeatedly for each slice of the voxel space. We could do it all in one go, but that would require the entire voxel space to be held in memory while the render is performed. Given the amount of data involved (e.g. a 1024 × 1024 × 1024 voxel space at one byte per voxel would require a gigabyte of RAM), rendering each slice individually is considerably more memory-efficient, if not time-efficient.

The final results are generally good, although it’s still an approximation as compared to the results for Cartesian and spherical functions. As far as I can tell, there’s no straightforward way to avoid this based on Functy’s current design.

Here are the images from the original post showing how all three function types - Cartesian, spherical and curve - are voxelised into a set of slices making up their combined volume.

## Spherical Functions as Voxels

- Intro
- Cartesian Functions
- Spherical Functions (this post)
- Curve functions

Converting spherical functions into a voxel volume is a bit, but not much, more complicated than for Cartesian functions.

In the case of Cartesian functions the key criterion was the height above the *x*-*y* plane. In the case of a spherical function, it’s the distance from the centre point of the function that we’ll need to perform the comparison on.

We define our spherical functions using angles *θ* and *φ* around this point. The angle *θ* represents the rotation around the *z*-axis through the centre point of the object. The angle *φ* represents the elevation above the *x*-*y* plane, also through the centre.

Converting to a triangulation involves stepping through *θ* and *φ* with a given step distance and calculating the radius *r* = *f*(*θ*, *φ*) for the function. This radius acts like a spine emerging from the centre at the given angle and with length *r* as defined by the function. These spines are used to define the vertices of the triangles that form the surface of the object.

When voxelating the volume, the question becomes one of whether a given voxel, defined by its (*x*, *y*, *z*) position, is inside or outside the object. We do this by considering the spine that runs between the voxel co-ordinates and the centre of the object, and calculating the *θ* and *φ* values that would apply in this case.

The contrasting nature of the two approaches shows itself here. To create triangles we calculate the (*x*, *y*, *z*) position of a triangle vertex from the *θ* and *φ* values; to voxelise we go the other way, calculating *θ* and *φ* from the voxel’s (*x*, *y*, *z*) position. In both cases the *θ* and *φ* values are then needed to calculate the radius.

Calculating *θ* (rotation) and *φ* (elevation) values isn’t hard with a bit of trigonometry.

*θ* = tan^{-1} (*y* / *x*)

and

*φ* = tan^{-1} (*z* / (*x*^{2} + *y*^{2})^{1/2})

Here’s the code we use to do this.

```c
fZFunc = psFuncData->fZMin + (nSlice * fZStep);
for (nX = 0; nX < nResolution; nX++) {
    fXFunc = psFuncData->fXMin + (nX * fXStep);
    for (nY = 0; nY < nResolution; nY++) {
        fYFunc = psFuncData->fYMin + (nY * fYStep);
        // Calculate the angles
        fOpposite = (fXFunc - psSphericalData->fXCentre);
        fAdjacent = (fYFunc - psSphericalData->fYCentre);
        fElevation = (fZFunc - psSphericalData->fZCentre);
        fAFunc = atan2 (fOpposite, fAdjacent);
        fDiag = sqrt ((fAdjacent * fAdjacent) + (fOpposite * fOpposite));
        fPFunc = atan2 (fElevation, fDiag);
        if (psSphericalData->psVariableA) {
            SetVariable (psSphericalData->psVariableA, fAFunc);
        }
        if (psSphericalData->psVariableP) {
            SetVariable (psSphericalData->psVariableP, fPFunc);
        }
        fRFunc = ApproximateOperation (psFuncData->psFunction);
        fDistance = sqrt ((fAdjacent * fAdjacent) + (fOpposite * fOpposite)
            + (fElevation * fElevation));
        if ((fDistance < fRFunc) && ((psFuncData->boMaterialFill == TRUE)
            || (fDistance > (fRFunc - psFuncData->fMaterialThickness)))) {
            ucFill = 255u;
        }
        else {
            ucFill = 0u;
        }
    }
}
```

Once we have them, we can plug them into the function that defines our spherical object to calculate the radius at that point, *r*′ = *f*(*θ*, *φ*). We compare this with *r*, the length of the spine from the centre of the object to the position of our voxel.

*r* = (*x*^{2} + *y*^{2} + *z*^{2})^{1/2}.

Notice we’re pretending here the co-ordinates of our object are at the origin. If they aren’t, we need to subtract the centre co-ordinates component-wise from our voxel co-ordinates first.

The comparison is then simple: if *r* ≤ *r*′ we’re inside the object, and if *r* > *r*′ we’re outside.

We perform these calculations and comparison for each voxel in our voxel space. As before, if we’re inside we set the voxel to be filled, otherwise we leave it empty.

That deals with Cartesian and spherical functions, which are the easy ones, because the mapping between voxels and the co-ordinate system is one-to-one. Sadly this isn’t the case for curve functions, which I’ll tackle in the next post.

## Cartesian Functions as Voxels

- Intro
- Cartesian Functions (this post)
- Spherical Functions
- Curve functions

Of the three, the easiest class of functions to turn into voxels was the Cartesian functions. In order to understand the process the key thing I eventually realised was that voxelisation is almost the reverse of triangulation.

For triangulation, we step through the *x* and *y* co-ordinates at a given resolution and calculate the resulting *z*-values to define the vertex co-ordinates of the triangles.

In contrast, for voxelisation we step through not just the *x* and *y* co-ordinates, but also the *z* co-ordinate. The question we then have to answer is whether each (*x*, *y*, *z*) point is inside or outside the volume.

In practice, we can’t answer this question without defining the boundaries of the volume. Functy has an option to specify the thickness for a function, but the simplest case is to say anything on or below the function that defines the surface is inside the volume, and anything above is outside.

The *z*-value generated by the function for given *x* and *y* co-ordinates represents the height of the function above the *x*-*y* plane. For a given voxel position (*x*, *y*, *z*) we therefore calculate the height of the function *z*′ = *f*(*x*, *y*) at the point (*x*, *y*) and compare this to our actual position. If *z* ≤ *z*′ then we’re inside the volume, so should fill in the voxel. If *z* > *z*′, on the other hand, we’re outside the volume, so should leave the voxel empty.

And that’s it. To voxelise the function we check each of the points that make up the voxel space, perform the comparison, and either fill in or leave empty the voxels as we go. There are a lot of voxels to go through because we’re working with cubic dimensions (so even a low resolution of 100 steps per dimension gives us a million separate voxels to consider), but the comparison to perform is pretty straightforward in itself, as is clear from the code that performs the check.

```c
fZSlice = psFuncData->fZMin + (nSlice * fZStep);
for (nX = 0; nX < nResolution; nX++) {
    fXFunc = psFuncData->fXMin + (nX * fXStep);
    for (nY = 0; nY < nResolution; nY++) {
        fYFunc = psFuncData->fYMin + (nY * fYStep);
        if (psCartesianData->psVariableX) {
            SetVariable (psCartesianData->psVariableX, fXFunc);
        }
        if (psCartesianData->psVariableY) {
            SetVariable (psCartesianData->psVariableY, fYFunc);
        }
        fZFunc = ApproximateOperation (psFuncData->psFunction);
        // Inside if the slice is at or below the surface height
        if ((fZSlice < fZFunc) && ((psFuncData->boMaterialFill == TRUE)
            || (fZSlice > (fZFunc - psFuncData->fMaterialThickness)))) {
            ucFill = 255u;
        }
        else {
            ucFill = 0u;
        }
    }
}
```

Spherical functions are a little more complicated, but not much. I’ll write about them in the next post.

## Voxel Volumes

One of the main feature additions in the latest version of Functy has been the ability to export as SVX files. Functy could already export in PLY and STL format, but both of these are triangle based. They represent the 3D functions as surfaces defined by carefully aligned triangle meshes. Rendering objects using a graphics card uses the same triangulation process, so exporting as PLY or STL is a very natural extension of the existing rendering.

The SVX format is different though. It stores the models as a voxel image (a voxel being a three-dimensional pixel, for those who didn’t grow up through the 90s demo scene). As a result, SVX doesn’t just store the surface, but also the volume of a function.

Turning a triangulated surface into a voxelated volume isn’t necessarily straightforward, but Functy has the advantage that all its objects originate as purely mathematical forms. In theory, this means rendering them as voxel volumes should be quite easy.

What I found in practice is that for Cartesian functions and spherical functions this is true: they can be turned into voxel volumes in a very natural way. Curve functions are a different story though. In the next few posts I’ll go through each of the processes separately, to give an idea about how the solutions for each of the three function types were coded.

- Intro (this post)
- Cartesian Functions
- Spherical Functions
- Curve functions

## Functy version 0.32 now available

It’s not been long (just under a week) since the last Functy release, but the latest changes are self-contained enough to justify this minor update straight away.

The latest version now allows export in STL format, to complement the existing PLY export capabilities. Scenes can either be exported as static models, or with animation as a sequence of STL files for each frame.

In addition, there’s also been a bit of bugfixing too. One particularly nasty bug caused PLY export to fail for some scenes when the program was running on Windows (it was always fine on Linux). I’m hoping the bugfixes will resolve this problem. Please let me know how you find it, especially if it still causes problems.

As always, the binary and source are available directly from Sourceforge or via the downloads page.

## Functy Updates – Shadows and UI Improvements

The last two weeks of summer break have given me the opportunity to give Functy some long-overdue attention. As I mentioned in my previous post, one of the improvements I’ve been working on has been dynamic shadows, and these are now fully implemented. The result is a big improvement in the visual fidelity and sense of depth in the visualisation. You can see the difference in the two screenshots below and I’ll include a proper comparison in a future post.

The next step will be to test out ambient occlusion, as kindly suggested by Andrea Bernabei (@faenil). I’m not sure how well this will work with the type of functional objects rendered by Functy, but it’s worth investigating. Ambient occlusion produces the sort of shadow effects more likely to occur indoors, where the light tends to be scattered, in contrast to the directed lights (more like direct sunlight outdoors) that the current version produces. However, research has shown that ambient occlusion gives a heightened sense of depth as compared to direct shadows, so it’ll be interesting to see the results.

The second major change has been to the user interface. This has been completely rebuilt from the ground up to fit with a single window workbench-style interface, as opposed to the previous multiple-window toolbox-style interface.

Personally I’m a big fan of multi-window interfaces; it should be the job of the window manager to allow you to arrange and configure your windows however you want. Sadly most window managers seem to be lacking in the flexibility department, and managing multiple windows becomes an exercise in hide-and-seek, trying to find windows that got lost behind others like a stack of papers on a desk. So, even though it’s not the perfect solution, I’ve converted Functy so that everything is visible and available in a single window.

I have to admit that the result does look more professional and I think it’s a positive change. I’m not quite sure what will happen if things get more complicated, but it works well for the time being.

There’s still a fair bit of work to be done before this can be given a full binary release, such as updating everything to work on Windows, and switching from libglade to GtkBuilder (this may be for a future time). Feel free to test out the version from source (it’s very easy to build, honest!) in the meantime.

## Shadow mapping

One of the great benefits of teaching on computer game development modules at the university is that I have a justifiable reason for looking into interesting graphical effects for my work. It’s a real perk as far as I’m concerned. Last semester I covered shadow rendering as a new topic, which gave me the opportunity to find out about the subject in some depth.

Now it’s the summer and I have more time to experiment, so it only seemed right to apply some of these techniques to Functy; I’m hoping the function rendering will look a lot more realistic and solid once accurate shadows are being cast throughout the environment. Given the functions are all generated on the GPU, texture-based shadows seemed the most appropriate choice compared to stencil shadows. Texture-based shadows also strike me as more flexible and efficient in the long run. Below you can see the depth texture (greyscale) rendered alongside the existing function render (colour). Although they look similar, if you look closely you can see that they’re actually rendered from slightly different angles. That’s because the depth texture is rendered from the perspective of the light, rather than the camera.

The idea with texture-based shadow mapping is that by rendering the depth map from the perspective of the light, you get to see the parts of the world that the light is shining on. This is what the greyscale image in the screenshot above shows. The depth map is then used when rendering the functions to decide whether or not a given pixel is lit (visible from the light) or in shade (occluded from the light by another object). Put simply, if the distance of the pixel from the light is greater than the distance of the pixel transformed onto a point on the depth map, then the pixel is in shadow; otherwise it’s in light.

Once the shadow depth map has been rendered, this therefore needs to be fed into the fragment shaders used for rendering the functions. A quick texture lookup and a distance comparison are then baked into the shader to complete the process. Although it’s a really simple technique in theory, getting it to work in practice is turning out to be… intricate!

Sadly it’s been quite some time since I last had the chance to properly update the Functy codebase. Hopefully in time some of the latent improvements that haven’t yet made it into the release build can be rolled out, along with additions like this shadow rendering. It’s not quite there yet, but soon!

## Network Video

I finally managed to get the video to record properly, so here’s a video version of the shader-based depth-of-field focus-blur.

P.S. This post was actually made using Ubuntu phone. The browser’s still a bit “experimental” so it’s been a bit tricky, but it does seem to work!

## Network structures

As part of an experimental game project I’ve been trying to use the Functy rendering routines to visualise network structures. At the moment it’s at a very early stage, but has - I think - already generated some interesting results.

The screenshot below shows a network of 60 nodes, each one rendered as a spherical co-ordinate function, joined together using links rendered as curves. I just plucked some simple functions out of the air to see what the results would be like but am hoping to extend it with more interesting shapes as things progress.

The various parts of the network are a little hard to discern with a static image, but when I tried to capture a video the result was a mess of fuzzy artefacts (I think there must be something going wrong with my screen capture software), so I gave up on that.

The next step, after neatening up the code, is to arrange better animation of the nodes and links, with dynamic movement based on things like the forces between the nodes. I’m hoping this will produce some really nice effects, and if anything comes of it I’ll put a bit more effort into getting a successful video capture.