Code Alchemist

KageKirin's Dev Blog

1PPP - 1 Page Project Pitch


We were recently invited to each write a small project pitch of our own.
I can’t say if it was a genuine invitation to pitch new ideas or just because management needed a follow-up project, but I kinda liked the idea.

Usually, a GDD (Game Design Document) fills up several pages, or better, an internal project wiki; a full-blown pitch takes at least 30 slides of media presentation with a concept video, several concept artworks, and a presentation of the studio of the same size. The pitches on Kickstarter give you an idea of what I mean.

Which is why a “1 Page Project Pitch” — or “1PPP” to introduce the term — seems like a good idea to publish a game idea, however ripe this idea might be.

Let’s lay out a few rules to allow the 1PPP to be as precise and brief as possible.

1PPP — the rules

Brevity: at most one A4 page

A 1PPP should be short and not exceed an A4 page when printed out in a readable font.

Precision: focus on defining the core idea

A 1PPP should focus on describing the core game idea, the game’s core mechanics, the game characters (if the characters play a rather major role in the game), and optionally the game world and its inhabitants.

The latter points about characters, world etc. are prone to change anyway between the idea pitch and the GDD creation, so I wouldn’t lose time detailing them more than needed to convey the idea.

Ludistic Pitch: where’s the fun?

Maybe the single most important point of the 1PPP: where is the fun in the game?

Focus on describing how the game mechanics laid out above will end up being fun. Also, lay out its genre (if it can be classified) to give a more precise idea.

Financial Pitch: where’s the money?

The second important point of the 1PPP: how is your game going to generate money?

Ok, I’m sure a lot of readers will now argue that games should be fun and not worry about worldly things like money, but at the end of the day, the developers have a life as well and need to pay rent, feed their families, etc. Plus, if you’re pitching the idea to a publisher, this is the point they’ll be most interested in.

Another point to cover is the minimum technical budget (engine licences, platform licences) you would deem necessary for the project.

Artistic Pitch: what’s the look of the game?

Optional pitch #1: is there a specific look you want for the game?
Sometimes, a game idea revolves more around the general look-and-feel of the game than around the main mechanics.

Music and sound choices also go in here.

Technical Pitch: anything technical goes here

Optional pitch #2: is there any specific technique you want to use for the game? This can cover everything from rendering, to special inputs, to code specifics. And like all other ideas, this is prone to change.

Story Pitch

Optional pitch #3: everything story-related, from the game world and its inhabitants to the main characters, should be covered here. The ideas might change, but sometimes they lay out the particular atmosphere you want to pitch as part of the game idea.

Miscellaneous Pitch

Optional pitch #4: everything else. That’s open to you.

Realistic: stay down-to-earth

Stay down-to-earth with the previous pitches. It’s nice to dream of earning money through 10 million sales, but unless you’re Rockstar pitching the next GTA, you probably won’t sell that many copies anyway.

Also, for the financial pitch, try to come up with plausible values.

References: keep them to a minimum

Referencing other games is easy. But a “like Final Fantasy” or “like Pokemon” is not enough to describe game mechanics, let alone the core game mechanic.

Also, your readers will probably have a different approach, or perspective, or just opinion, about the games you’re referencing, so a comparison goes against the “precision” part explained above.

Plus, however opinionated your readers may be, they might dislike the games you’re referencing, hence the comparison is a sure way to kill your pitch.

Comparisons can be helpful if you just scribble down your ideas on a note to expand and iterate on them later. But they are better left out of the 1PPP.

Furthermore, too many references will make you look like a copy-cat from the start, whereas describing the game mechanics, even if they are 100% similar to the title you would be referencing, will not.

Platform: what will the game run on?

This part can be optional, but since the game mechanics might depend on it (e.g. touch controls for smartphone games vs. a joypad for console games vs. mouse & keyboard for PC games), it’s probably better to note them.

Control schemes aside, different platforms introduce different technical constraints, which, even if not addressed in the 1PPP, should be covered later in the GDD.

Also, the platform part might decide the budget you can spend on the realization of the game. (See financial pitch).

tl;dr

With these rules laid out, I’ll be applying them to my own game ideas and publishing them as 1PPPs on this blog sometime soon.

Git-Fusion - Distributed Working With Perforce, the More Awesome Way


This one is even better than the P4Sandbox I previously wrote about:
Git-Fusion is a server bridge that interfaces Perforce to Git and allows you to use the DVCS freely, while still being able to work with a central server on enterprise level.

Git-Fusion is a collection of Python scripts that use P4Python to interface with Perforce and Git on the server, and take care of the two-way syncing of changes between P4 and Git.

It comes in 2 versions: a “virtual” appliance that can be run anywhere as a VM, and a “native” version that needs to be installed on a Linux machine.

Currently, I’m still stuck on the installation of the “native” version, but I will post a tutorial once I’m done with it.

Pros:
+ Git!
+ All the freedom that Git offers (syncing to multiple servers, “guerilla-style” coding without a central repo) while still being able to submit to a central server for storage.
+ Other people that prefer Perforce (yes, they do exist) can still continue to work with it.
+ the “virtual” version is super easy to get running: one virtual appliance disk image to run in a VM, access over HTTP, done.

Cons:
– The “native” version is pretty hard to install (unless you’re a hardcore Linux guy who knows how to administer a system, in which case you deserve all my respect for being able to cope with the Perforce documentation)
– Obviously, the limits of Git still apply. Using Git for large amounts of large binaries is still a very bad idea. (But git was never intended to be used this way)

tl;dr Git-Fusion is pretty promising. Check out the installation tutorial in the near future.

/C

P4Sandbox - Distributed Working With Perforce


Working with a P4 environment has its quirks, such as not always being able to submit, hence unrelated changes accumulate in the same file.
In such cases, you can either submit the whole bunch, hoping you didn’t break anything on the way and leaving any poor integrator out in the rain, or spend even more time trying to untangle everything in order to make small submits again (good luck with this if you did NOT start out by making many shelved changes along the way).

To remedy this pain, Perforce created the so-called P4Sandbox, which functions as a local (single-workspace) replication server to which you submit your changes, and which you later sync with the central server. While working, you can still get the latest updates from the central server by syncing (and resolving possible conflicts along the way).

Setting up a P4Sandbox is pretty easy, since there’s an assistant. You will need to create a client mapping for the files you want to work on, and get those replicated into the local copy.
From there, you create another workspace, this time using the P4Sandbox as server, and start working.

Pros:
+ Distributed: Able to work even when the central server is not available.
+ Lockless: You can still submit even if the files are locked on the central server.
+ Locality: It’s faster than using the central server.

Cons:
– Disk space usage: 2 workspaces + the replicated data: the whole thing will take quite some disk space. Of course, this works great if it’s only source files (or small image data), but once you have a lot of game assets in it, be prepared to see your free disk space fade away quite quickly.
– Submit comments: comments attached to changes in the P4Sandbox got lost while syncing them with the central P4 server.
– Shelves: shelving did not work when I used it. Might have been a bug, though.

If you’re like me and annoyed by the limits imposed on you by a central P4 server, I can only urge you to try it out for yourself.

/C

Not Dead Yet…


This has been one crazy year. Just a moment ago, we were still in May, then it was summer, and now we’re less than 3 weeks to Christmas.

I just wanted to affirm that neither I nor my blog is dead yet. I’ve just been super busy IRL and lacked the time to write anything new.

I’ll try to update more often, with smaller, but more concise posts.

/C

PS: Same for my other blog as well, btw.

My Work Environment (at Work, 2012)


One of those posts, where I rant about my work environment.

At work, I’m pretty much bound to a Windows environment, since all the tools only work there. So, I’m running Windows 7 (to which I got upgraded just recently, for the sake of my own sanity, which was at stake from using Vista in 2012) on a DELL Precision T3500 with AMD Radeon graphics.
For the IDE, it’s naturally Visual Studio, namely Visual Studio 2008 Team System. By default, I had no version of Visual Assist X, forcing me to buy my own license (so much for “tools of the trade” when you can’t get access to the Swiss army knife of VS).
I also installed MetalScroll, which makes finding references inside of one file pretty easy.
And IncrediBuild, which apparently costs way more in Japan, due to some dirty tricks from the Israeli Embassy (the only provider of this product in Japan), which tries to suck the lifeblood out of Japanese software companies through its pricing.

For source version control, we’re using Perforce, so I’m stuck with their tools, and despite there being a couple of welcome improvements in the latest versions, e.g. streams, I don’t have access to them as the server is running an older version. In fact, I often wish (at least 5 times a day) I could use git locally, but the git-p4 bridges don’t work that well.
(The hg-p4 bridge does work well, but I don’t remember the way I set it up at my former employer).

For file editing related things (other than C++), I’m using Notepad++, which, although not perfect, doesn’t give me headaches about usability issues.

I configured P4 to use P4Merge for merging files, since I like the 3-way view with “theirs”, “base” and “ours” on the top, and “merge result” below.
For diffs, I’m using a very handy tool called “Diffuse”, which allows for super fast diffing with only keyboard interaction. (The only issue I’m having with it is that it tends to break files with Japanese encoding. But then again, it’s the comments that are in Japanese, and I just delete them since “you shouldn’t need to comment your code” ;p ).

For other file related actions, I’m either using the vanilla explorer, or Total Commander (which is my preference when it comes to moving lots of files).

JWPce is my preference for writing Japanese, as it includes a pretty neat dictionary, and I’m so used to this tool that I can hardly read Japanese without it.
(Yeah, I’m lazy and I know it).

For the other dev tools, it’s pretty classic, XDK for working on Xbox360, and SN Tools for working on PS3.

Oh yeah, one of the tools that comes in handy is Oracle’s (formerly Sun’s) VirtualBox. I already have a few Linux installations to automate some backups, and I run one from time to time to test software before installing it on my main system. It’s also pretty cool for tracking down hardware or driver issues that occur on the host system but are not reproducible in the VM.
(Fun fact, the XDK tools tend to work better from within the VM than on the host system).

And that’s pretty much it.
I’m using some batch files to simplify a couple of workflows, but nothing really extraordinary.

/C

Generalized Motion Blur (Cont’d)


In this post, I want to cover some more aspects and enhancements of the Generalized Motion Blur that I did not treat in my previous post.

These are:

  • particle motion blur
  • generating other blur patterns through a different kind of particles
  • batching screen-space deformation into our scene buffer lookups.

Particle Motion blur

Particles, especially fast moving ones, should get a motion blur as well to improve realism. This can be done in the same way we did it for object motion blur (unskinned geometry), namely by computing and writing out the frame-based position difference, but there are a few things to take care of.

Alpha testing and translucence

I did not cover this in the previous post, but it is pretty clear that alpha-tested or alpha-blended geometry ideally requires the motion blur to take the blending into account.
Practically, not taking it into account might lead to a few artifacts, but I doubt they would be that visible in the final result, so we can ignore this part and simply blend this motion vector on top of the motion already in the buffer. (We can alter the motion length by some factor to account for translucency and make it less apparent.)
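As a rough sketch, assuming the particle’s interpolated alpha is available in the motion pass and motionAlphaScale is a hypothetical tweak factor, the attenuated write could look like this:

float motionAlphaScale;   // assumed tweak factor: how strongly translucency damps the blur

float4 PS_particleMotion(float4 wp_curr : TEXCOORD0,
                         float4 wp_prev : TEXCOORD1,
                         float  alpha   : TEXCOORD2) : COLOR0
{
    // world-space movement of this particle fragment
    float4 wmov = wp_curr - wp_prev;

    // attenuate the written motion by the particle's translucency,
    // then let additive blending accumulate it on top of the motion buffer
    return wmov * saturate(alpha * motionAlphaScale);
}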

World transform matrices

Depending on your architecture, particles might get computed a bit differently from normal objects. While objects can be drawn one by one using their world transform matrix, this is likely not optimal when it comes to drawing thousands of particles. In this case, it is going to require a little extra engineering effort to buffer the last frame’s particle positions and to inject them into the drawing of this frame’s ones during the motion blur pass (see the sketch below).

As this will double the drawcall cost, it seems wiser to detect the particles that really need motion blur, and to draw just those.
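A possible sketch of that motion pass, assuming the particle system buffers last frame’s positions and feeds them in as an extra world-space vertex attribute:

float4x4 ViewProj;

struct VS_OUT
{
    float4 HPos    : POSITION;
    float4 wp_curr : TEXCOORD0;
    float4 wp_prev : TEXCOORD1;
};

VS_OUT VS_particleMotion(
    float4 Position     : POSITION,   // this frame's particle vertex, already in world space
    float4 PrevPosition : TEXCOORD0   // last frame's position, buffered by the particle system
)
{
    VS_OUT output = (VS_OUT)0;

    output.HPos    = mul(ViewProj, Position);
    output.wp_curr = Position;
    output.wp_prev = PrevPosition;

    return output;
}

The pixel shader then writes the position difference, just as in the object motion blur pass.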

Geometry deformation

I covered this method in my previous post, stating that its result might be unpredictable. This still holds true for complex geometry, but in the case of particles, the geometry ought not be too complex, since most particles are just quads rendered as billboards.

Since rendering motion blur on top of already blurred particles will look pretty bad, such motion-blur-deformed particles should be rendered after the motion blur pass. But there is another type of particle that could spare us the work required to either blur thousands of particles at once, or to separate the particle passes into motion-blur-deformed and normal ones.

Motion blur particles

(I did not find a better name despite intensive brainstorming, so bear with it.)

The idea is to (slightly) abuse the generalized motion blur method by writing motion vectors directly into the motion buffer. This allows us to have finer control over what kind of motion vectors get written, as they are based on a “motion vector” texture.
For example, a motion texture can hold a unique direction (the texture then being a single color with no gradient), which we turn to the “right” direction through rotation before writing it as a motion vector.

But since this is texture-based, we can go one step further and use this method to generate other blurs that are usually drawn in other passes. The best example would be a radial blur, which is nothing more than a linear blur following lines radiating from a center point.
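For illustration, a fullscreen “motion blur particle” producing such a radial blur could simply write vectors pointing away from a center; blurCenter and blurStrength are hypothetical parameters:

float2 blurCenter;     // assumed: center of the radial blur in screen space
float  blurStrength;   // assumed: length scale of the written motion vectors

float4 PS_radialBlurParticle(float2 ssTC : TEXCOORD0) : COLOR0
{
    // motion vector along the line from the blur center through this pixel,
    // blended additively into the motion buffer
    float2 dir = ssTC - blurCenter;
    return float4(dir * blurStrength, 0, 0);
}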

In fact, this is where the generalized motion blur can play to its strength, by allowing us to batch more effects into one single pass.
The next section covers even more batching.

Screen-space warping/refraction/deformation

The main idea is to batch particle-based effects that produce a refraction-like visual, like heat haze or underwater “wobbling”, into the motion blur pass.
Since the linear blur pass mostly consists of texture fetches from the scene buffer, we can piggyback on this by “jittering” or “warping”, thus deforming, the screen-space texture coordinates.

To batch this into our existing framework, we need to change the layout of the motion buffer a bit. Since the “warping” is nothing more than offsetting the screen-space coordinates, it requires 2 channels to be effective. Hence, the motion vector has to be reduced to 2 channels as well, which can be achieved by projecting the motion vector into screen space first, and then writing its 2 values into the 2 remaining channels.

Using a QWVU texture format (Q8W8V8U8 or Q16W16V16U16, but not A2W10V10U10 since we need the Q-channel as well), the new buffer layout looks as follows:

[    U8V8     ][     W8Q8      ]
[ssVelocity.xy][ssWarpOffset.xy]

(I reversed the channel order for simplicity).
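With that layout, a motion particle’s pixel shader would pack its screen-space velocity into the first two channels and leave the warp channels untouched. A sketch, assuming hpos_curr and hpos_prev are the clip-space positions interpolated from the vertex shader:

float4 PS_motionParticle(float4 hpos_curr : TEXCOORD0,
                         float4 hpos_prev : TEXCOORD1) : COLOR0
{
    // screen-space velocity goes into the xy (U8V8) channels;
    // the zw (W8Q8) warp channels stay at zero (see the write masks below)
    float2 ssVelocity = hpos_curr.xy / hpos_curr.w - hpos_prev.xy / hpos_prev.w;
    return float4(ssVelocity, 0, 0);
}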

Drawing offsets

Just as with the motion vectors, we can write the offset vectors into the 2 remaining channels of the motion buffer. Writing needs to be additive as well, to accumulate offset movements from different layers of particles, so nothing really differs from the motion (particle) passes except the target channels. To avoid writing into the wrong channels, one can set up write masks before the draw pass (and reset them at the end).

Particles need to be Z-tested against the Z-buffer to mask out foreground geometry, and can profit from being smoothly depth-blended, by modulating their strength with a factor depending on the difference between the Z-buffer pixel depth and the particle’s. These conditions have to be fulfilled for the motion blur particle pass as well.
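A sketch of such a warp-particle pixel shader, writing the offsets into the remaining channels with a soft depth fade; depthTex, fadeScale and the interpolated particleDepth are assumptions, and the color write mask limits the output to the zw channels:

sampler2D depthTex;     // assumed: the scene depth, same buffer used for Z-testing
float     fadeScale;    // assumed: controls how quickly particles fade near geometry

float4 PS_warpParticle(float2 ssTC          : TEXCOORD0,
                       float2 warpOffset    : TEXCOORD1,   // per-particle offset direction/strength
                       float  particleDepth : TEXCOORD2) : COLOR0
{
    // soft depth fade: attenuate the offset as the particle approaches the scene geometry
    float sceneDepth = tex2D(depthTex, ssTC).r;
    float fade = saturate((sceneDepth - particleDepth) * fadeScale);

    // the xy (velocity) channels are protected by the write mask; only zw get written
    return float4(0, 0, warpOffset * fade);
}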

To further optimize the particle passes, it’s possible to batch drawing the motion blur particles and the warp particles at the same time, going as far as modifying the shader to write both values at once as needed.

So much for the second post on the Generalized Motion Blur method. We covered which particle passes can be batched, and why drawing all particles again to generate a motion vector field on them might not be such a good idea.

In the next posts of the series, we will cover the different aspects of the actual linear blur pass, and show even more effects that can be batched into it.

tl;dr
Bad idea: Drawing thousands of particles twice to generate motion blur on them.
Good idea: Drawing special particles with motion vectors is more efficient.
Super idea: Batch screen-space warping particles into the motion particles.

/C

Generalized Motion Blur (Idea)


There are several existing techniques to apply motion blur, each with a slightly different purpose and outcome.

I will try to roughly classify those techniques by showing the method behind them, and then propose a possible generalization that allows applying those different techniques to the same scene in a computationally economical way.

Ideally, rendering at high framerates (>60 fps) would not require any motion blur, as the human brain would create the impression of blur to compensate for the eye’s perceived framerate of roughly 24 fps. Sadly, such an approach is limited both technically (rendering at 60+ fps on current-gen consoles is hardly possible, or would require other compromises in rendering quality) and artistically (motion blur might be wanted to express certain aesthetics), hence it is not practical.

The main goal is to apply several kinds of motion blur to a given scene in as few drawcalls and fullscreen passes as possible, while also using as few framebuffers (memory consumption) as possible.

I will also write about possible ways to further optimize the process.

Types of Motion Blur

Nature of Motion Blur

Before we begin, let’s define the nature of a motion blur.

When we perceive a motion to be blurred, this is the effect of our eyes not being able to keep up with the movement, hence it appears doubled, or blurred.

Similarly with a camera, motion blur is the result of a movement velocity being higher than the shutter speed of the camera, respectively higher than the CMOS capture speed.

Last Frame Blended Motion Blur

This motion blur process consists of blending the (n) last frame(s) with the current frame, which results in an impression of an afterimage.

In old OpenGL versions, it could be implemented using an Accumulation Buffer, but on modern machines, I would implement it by using the last frame’s final image (i.e. already blended with frame (t-2)’s final image) as an input to blend with the current frame’s final image. This results in a series of recursive blurs, which might give a nice result.

On the downside, this would have me store a fullres buffer of frame (t-1), and fast camera movements might yield very strange results, such as distinct afterimages instead of blurred lines.
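A minimal sketch of that recursive blend; currFrameTex, prevResultTex and blendFactor are assumed names:

sampler2D currFrameTex;    // assumed: this frame's final image
sampler2D prevResultTex;   // assumed: last frame's already-blended result
float     blendFactor;     // assumed: how much of the previous result bleeds through

float4 PS_frameBlend(float2 uv : TEXCOORD0) : COLOR0
{
    // recursive blend: the previous result already contains older frames,
    // so this produces an exponentially decaying afterimage trail
    return lerp(tex2D(currFrameTex, uv), tex2D(prevResultTex, uv), blendFactor);
}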

Object Deformation Motion Blur

This motion blur procedure consists of deforming moving objects in the vertex shader along their frame-interpolated movement vector.

It was used for example in the Radeon 9700 Animusic “Pipe Dream” demo from ATI to apply a motion blur on the balls.

The difficulty of the method lies in the vertex shader where we would need to transform the vertices differently, according to their position relative to the center of the object and the motion vector. This works for simple objects where the start and end of said object are easily defined (e.g. spheres, as in the AMD demo), but becomes more difficult as the object’s shape gets more complex.

(Although it could be done by approximating each vertex to a unit sphere around the center of the object and distorting the vertices of the lower hemisphere with respect to the motion vector. Something like this (untested):

float3 posRadius = normalize(os_position);   //unit sphere radius vector for a given vertex in object space
float hemisphere = sign(dot(posRadius, normalize(vMotion)));  //determine which hemisphere of the sphere we're on, w/r to the motion vector
os_position += vMotion * saturate(-hemisphere);   //distort the vertex along the motion vector when on the lower hemisphere
// continue vertex transform as usual

The result of this is unpredictable on complex geometry, though, and might lead to strangely deformed shapes).

On modern PC GPUs, this could be solved by issuing more vertices to be drawn via a geometry shader. Current-gen console GPUs, on the other hand, don’t support geometry shaders per se, making this approach impractical for cross-platform titles.

Camera Motion Blur

This blur is applicable as a post-processing effect, as it does not modify any geometry. It consists of using the current and the previous frame’s inverse view-projection matrices to reproject the Z-buffer into world space, and building a difference in position (the movement) from the reprojected coordinates.

float4x4 invViewProj_curr;
float4x4 invViewProj_prev;
sampler2D depthBuffer;

// reconstruct clip-space coordinates from the screen-space texcoords and the Z-buffer
// (ssTC is assumed to already be remapped to clip space, i.e. [-1..1])
float zDepth = tex2D(depthBuffer, ssTC).r;
float4 sscoords = float4(ssTC, zDepth, 1);

// unproject with both frames' inverse view-projection matrices
float4 wpos_curr = mul(invViewProj_curr, sscoords);
float4 wpos_prev = mul(invViewProj_prev, sscoords);

// perspective divide to get actual world-space positions
wpos_curr /= wpos_curr.w;
wpos_prev /= wpos_prev.w;

float4 wmov = wpos_curr - wpos_prev;

We can then project the movement vector back to screen space and use it to apply a linear blur on the scene texture.

Note: this method was presented by nVidia in GPU Gems 3.
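For illustration, a minimal sketch of that projection, assuming ViewProj_curr is the current frame’s view-projection matrix and reusing wpos_curr/wpos_prev from above:

float4x4 ViewProj_curr;   // assumed: current frame's view-projection matrix

// project both world-space positions back to clip space
float4 hpos_curr = mul(ViewProj_curr, wpos_curr);
float4 hpos_prev = mul(ViewProj_curr, wpos_prev);

// perspective divide, then build the 2D screen-space velocity fed to the linear blur
float2 ssVelocity = hpos_curr.xy / hpos_curr.w - hpos_prev.xy / hpos_prev.w;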

Object Motion Blur

This blur method requires the blurred objects to be drawn in another render pass, whose resulting buffer is later used to apply a per-pixel linear blur.

For each object, we pass its current and previous world matrices as input, project the object into screen space, and write out the per-pixel movement computed from transforming each of the object’s vertices by the current and previous world-space matrices.

Most of the work can be done in the vertex shader, and the position difference can be computed in the pixel shader for accuracy.

float4x4 World_curr;
float4x4 World_prev;
float4x4 ViewProj;

struct VS_OUT
{
    float4 HPos : POSITION;
    float4 wp_curr : TEXCOORD0;
    float4 wp_prev : TEXCOORD1;
};

VS_OUT VS_main(float4 Position : POSITION)
{
    VS_OUT output = (VS_OUT)0;

    // transform the vertex with both this frame's and last frame's world matrix
    float4 w_curr = mul(World_curr, Position);
    float4 w_prev = mul(World_prev, Position);

    output.HPos = mul(ViewProj, w_curr);
    output.wp_curr = w_curr;
    output.wp_prev = w_prev;

    return output;
}

float4 PS_main(VS_OUT input) : COLOR0
{
    // per-pixel world-space movement, written into the motion buffer
    float4 Color = input.wp_curr - input.wp_prev;

    return Color;
}

Then again, as for the Camera motion blur, we use this movement vector (projected into screen space) as input to the linear blur.

Animation Motion Blur

This method is an extension of the object motion blur for skinned objects; it takes the animation into account to build the motion vector.

As such, the motion vector will be the difference between the world position of one vertex using the current skin and world space matrices and the world position of the same vertex using the previous skin and world space matrices.

The algorithm differs depending on how the skinning is computed, but in the case where it’s done on the GPU, it will look as follows:

float4x4 World_curr;
float4x4 World_prev;
float4x4 ViewProj;

float4x4 skinning_matrices_curr[n];
float4x4 skinning_matrices_prev[n];

struct VS_OUT
{
    float4 HPos : POSITION;
    float4 wp_curr : TEXCOORD0;
    float4 wp_prev : TEXCOORD1;
};

// assumed helper: standard 4-bone linear blend skinning with the given matrix palette
float4 computeSkinnedVertex(float4 Position, int4 Indices, float4 Weight, float4x4 matrices[n])
{
    float4 skinned = (float4)0;
    for (int i = 0; i < 4; ++i)
        skinned += Weight[i] * mul(matrices[Indices[i]], Position);
    return skinned;
}

VS_OUT VS_main(
    float4 Position : POSITION,
    float4 Weight : TEXCOORD0,
    int4 Indices : TEXCOORD1
)
{
    VS_OUT output = (VS_OUT)0;

    // skin the vertex with both this frame's and last frame's skinning palette
    float4 skinnedPos_curr = computeSkinnedVertex(Position, Indices, Weight, skinning_matrices_curr);
    float4 skinnedPos_prev = computeSkinnedVertex(Position, Indices, Weight, skinning_matrices_prev);

    float4 w_curr = mul(World_curr, skinnedPos_curr);
    float4 w_prev = mul(World_prev, skinnedPos_prev);

    output.HPos = mul(ViewProj, w_curr);
    output.wp_curr = w_curr;
    output.wp_prev = w_prev;

    return output;
}

The final movement vector and linear blur computation is the same as in the Object motion blur.

Generalization

Since 3 (4 with some algorithmic changes) of these methods consist of writing the motion vector into a motion (or velocity) buffer, and using this buffer as input to a per-pixel linear blur, generalizing the blurs seems straightforward.

We choose a texture format that allows blending and signed values. On Xbox360, such a format would be the 32-bit D3DFMT_Q8W8V8U8 or its 64-bit counterpart D3DFMT_Q16W16V16U16.

Since the Xenon GPU does not allow floating-point textures to be blended, it would be impractical to use such formats and do the blending after a readback of the previous draw pass, as this would imply a lot of resolving and texture fetches.

Unsigned formats, on the other hand, make the writing of negative values impossible, hence they fall out of choice for screen or world space motion vectors.

As such, the generalized algorithm looks as follows:
1. write the Z-Buffer reprojected Camera motion into the velocity buffer as an opaque blend to overwrite any existing value from the previous frame.
2. write the Object motion with Z-testing against the previously used Z-buffer to avoid writing more than necessary. Blending should be additive.
3. write the Skinned Object motion in the same way, blending additively.
4. write some motion vectors (more on that in a later post)
5. resolve into a texture, possibly of the same format as the render target surface
6. using this velocity texture, apply a per-pixel linear blur on the scene texture.
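As a rough sketch of what the final linear blur pass could look like (sceneTex, velocityTex and a fixed sample count are assumptions; the actual pass will be covered in a later post):

sampler2D sceneTex;      // resolved scene color
sampler2D velocityTex;   // resolved, signed screen-space velocity

static const int NUM_SAMPLES = 8;

float4 PS_linearBlur(float2 ssTC : TEXCOORD0) : COLOR0
{
    // fetch the per-pixel screen-space velocity
    float2 velocity = tex2D(velocityTex, ssTC).xy;

    // accumulate samples along the velocity vector, centered on the current pixel
    float4 color = (float4)0;
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        float2 offset = velocity * (float(i) / (NUM_SAMPLES - 1) - 0.5);
        color += tex2D(sceneTex, ssTC + offset);
    }
    return color / NUM_SAMPLES;
}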

More blurring…

We can optimize and even batch more blurring into the motion blur pass, but I will post more on it in a later post.

tl;dr
There are several types of Motion blur passes, and they can be batched for a more optimal render process. More on blurring later.

/C

Test-Driven Shader Development (an Idea)


This post is more about an idea I had, than about its actual implementation.

Test-driven Shader Development

Base idea

When doing shader development, be it for R&D purposes or in a production environment, you will run into a couple of possible issues:

  • shader not compiling
  • shader not linking (in GL)
  • incompatible vertex and pixel shader due to interpolator differences (Cg, DX)
  • shader not running due to wrong input
  • shader running but not giving the expected result (algorithmic error)
  • shader not returning the expected result due to wrong input
  • wrong output from one shader breaking another shader (happens often when playing with lighting models)
  • shader computationally too heavy (instruction-bound or texture-fetch-bound)
  • a lot of other stuff that can go wrong, and by Murphy’s law, will go wrong.

In many of these cases, nailing the problem down to a few causes, at best a single one, will allow for fast solutions and let the programmer focus on the more interesting parts.

Compiler/Driver issues

Those are mostly issues related to building and loading the shader.

The straightforward solution is to implement hot reload, i.e. reloading while the engine or test environment is running, every time the shader file is saved.

The added benefit is that this will allow for shader cooking, i.e. editing and tweaking of the shader depending on its “visual” result.
(One of the features I loved in CryENGINE3, and that I’m totally missing in the current engine at work.)

An optimization to this:
reload every time a hash value computed over the full shader source, i.e. the main function file and all of its includes, changes.

Input issues

The straightforward solution is to have “static” inputs. Those can be:

  • static textures to simulate a GBuffer
  • static camera values
  • static uniform settings
  • static vertex settings

Along the same idea, being able to tweak parameters and see the outcome is likely to help find input values that lead to computational errors. (Div-by-zero, anyone?)

Output issues

This kind of issue can be caught by creating a difference image against either

  • a “reference” image (e.g. created through raytracing instead of rendering)
  • the last “good” result image

The frame’s “correctness” is then the amount of errors/differences relative to the reference image.
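As a minimal sketch of such a difference pass (renderedTex and referenceTex are assumed names; the per-pixel differences would then be summed up on the CPU, or via mipmap reduction, to get a single score):

sampler2D renderedTex;   // the shader output under test
sampler2D referenceTex;  // the reference ("known good") image

float4 PS_diff(float2 uv : TEXCOORD0) : COLOR0
{
    // per-pixel absolute difference between the rendered frame and the reference
    float3 a = tex2D(renderedTex, uv).rgb;
    float3 b = tex2D(referenceTex, uv).rgb;
    return float4(abs(a - b), 1);
}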

Algorithmic issues

  • using “random” inputs
    • shuffle inputs several times, test if output is correct

Saving the tests

A great addition to this kind of framework is to save the test “artifacts” (in Jenkins’ terms) along with their inputs to allow for later reproducibility.

  • inputs: the shader files, static input values, user input values
  • artifacts as such would be: random input values, output frames

Generalization

It would be great to generalize such a framework to work with both Direct3D (several versions, but at least DX9.5 for Xenon and DX11.1) and OpenGL (several versions as well, but ES2 would make my day), and to support exotics like GX2 (“Café”) and GCM (PS3, Vita). (The shaders on the latter systems are based on GLSL and Cg, respectively, making the porting easier.)

Furthermore, OpenCL and compute shaders would equally profit from such a system. (As would SPU jobs, but that’s limited to a certain type of hardware.)

A general solution would allow “any” kind of data to be processed by “any” kind of “processor”, be it DX, GL, CL, CUDA or C++ AMP.

tl;dr Is there some student/grad student that would feel like implementing such a system as a master/diploma/doctorate thesis?

/C