Diabolus in Musica Postmortem

Diabolus in Musica is a horror visual novel I made in a month for the Second Annual Spooktober Visual Novel Jam hosted by DevTalk+.

In this article I’m going to discuss what went right, what went wrong and my plans for future development. I’m also going to include some critique of the work – I’d originally planned a separate post for this but I’m finding it difficult to separate the two.

Scope

Everyone’s favourite topic – scope! I’ve generally gotten better at scope over the past couple of projects I’ve done – because I’ve actually finished them! This jam lasted a month and all materials barring concept work had to be done within the time frame.

With Diabolus in Musica, the actual amount of content I aimed to deliver within the timeframe was well scoped. My initial plan of five character routes was immediately cut to two, and when I realised I was running out of time I cut the number of endings from four to three, ensuring the remaining content would actually be finished.

All five characters – only two have routes within the game.

However, I wouldn’t call the project well scoped. The story I wanted to tell didn’t fit well into the one month creation period. It needed time to breathe that I couldn’t give it. The audience needed to see how the mc’s relationship with their five friends developed – how they met, how they became friends and how those relationships changed as it became clear that the mc was no more than fodder for Niccolo’s plans.

My major influences were books where the plot and themes revolve around characters who at first seem glamorous, untouchable and admirable, but through the eyes of a naïve outsider, the reader learns the dark truths behind the glamour. This sort of tale just can’t be told over a half hour visual novel. I needed a far larger project to convey my meaning and create an engaging experience.

Based on the positive responses from friends I’m considering continuing the project in the new year, but I want to finish up other projects and see how the streamers from the jam react first.

Time Management

I had absolutely zero time management for this project. I worked on it whenever I felt like it, got distracted by things like trying to write my own particle system (later abandoned), and spent more time on the sequences I liked than the ones I was less keen on. It was a bit of a disaster that led to a hurried last week.

There’s a lot of value in playing around and experimenting, but not when you have a deadline!

The last time I did a game in a month I spent an hour on it every morning before work, then a little more at weekends. This was a good pace and one I’m going to employ for further projects with this time frame. I do think a month is a good amount of time – it’s long enough to make a complete product without crunching, but short enough that if it doesn’t work out or you don’t finish, it wasn’t a waste of your time.

Another note on time management is that that last game jam was completed in March. For obvious reasons my mental fortitude was significantly higher at the time, and that had no small impact on getting the game done.

Pipeline

My general pipeline for working on visual novels has been…not to have one? I write all my dialog directly into my IDE and add expressions, effects and sound at the same time.

From talking to other VN devs it appears this isn’t common practice, and most write their dialog and narration in a separate screenwriting program or in something like Twine before copying it into their game code.

This isn’t to say I don’t plan anything – I have outlines and diagrams on paper but it’s all fairly loose.

I’d like to try planning in an external editor and see how this affects my process and productivity.

Another issue with my pipeline for this was file organisation. Because this game was more route based and less linear than my others, rather than numbering each scene and separating them out, I made files for each of the characters. This was beyond confusing and I wouldn’t take this approach again.

Writing and Characterisation

Something that I felt undermined the overall quality of the game, as well as making it more difficult to write, was that both my mc and the characters they interacted with lacked consistency, in both personality and tone/word use.

The expository narration and the characters’ actual dialog didn’t match up.

When I was writing January Edwards: PI, I had some strong characters who had been bouncing around in my head for a while, so I had a good idea of how they would talk and react to situations. With Dream Dilemma, the characters were less defined, but they were heavily based on archetypes and the game had a theme of destiny vs willingness to make your own decisions, so I got away with a lack of depth or solidity in their characterisation. I’m now realising that I did a lot with DD to make it easy for myself!

I think if I had made this a longer project as mentioned above, I would have been able to develop the characters through additional scene writing and them just living in my head over time. I suppose with this sort of timeline my best bet would have been to make some upfront decisions about speech patterns, basic personality, etc. that would then be used throughout.

The largest issue with character was with Niccolo – I would have liked to set him up as shy and awkward, then drip feed his true nature to the player, so they feel like they work out he’s bad just as the game reveals it.

Niccolo goes from sad boi to pure evil in 2 seconds.

Another thing I picked up on during editing was that my labelling for choices was inconsistent. When the mc wasn’t going to say much I usually used the actual line, but when it was longer I condensed it into something like ‘disagree’. In future I’d like to pick one approach – either descriptions or shortened versions of the character’s speech – and stick to it.

One of the choices in the game where what you say is condensed.

It seems that most visual novels speak in second person, addressing the player as ‘you’, and are written in present tense. I used first person past tense, just because it’s the way I’m most familiar with writing. This made some things tricky – especially when the player character was surprised or wondering about the future. I think I’d like to try a second person, present tense approach next time!

Art Direction and Use of Assets

I felt that the game lacked a cohesive art direction. It wasn’t bad, or jarring, just unspectacular and lacking in identity. The use of generic, paid-for assets meant that none of the art was created with the themes of the game in mind, nor was it created cohesively. Because I used an asset pack for the backgrounds and a single tool for the characters, there was at least a shared vision between each of these elements.

The character sprites all used the same pose. This is a limitation of the software I used to create them. An improvement would have been to commission or team up with an artist, use stock assets with more varied poses, or make the art myself.

It’s very obvious when characters stand next to one another.

The single commonality seems to be that everything is very blue. This is largely due to the background art and the night-time setting. It makes things too dark, makes locations hard to distinguish in small screenshots and gifs, and just gives a depressing air, which isn’t what I wanted to put across. In future games I’d like to have a strong colour scheme that is representative of the themes, tone and genre of the game.

So much navy!

One last thing that I think would have improved things overall is replacing the newspaper clipping end scenes with CGs involving the characters. While I think the newspaper is a neat device, it wasn’t thematically appropriate to the game – where had newspapers appeared before the ending? Nowhere. Where had the idea of reporting and news been in the story? Nowhere. In my detective game that might have worked, but not here.

A neat device, but not appropriate for the game.

CGs also appeal to the type of audience this game was for. A CG of Haydn waiting in the rain or the player leaning in to kiss Niccolo before the big reveal would have driven excitement for the characters and upped the romantic tension of the game.

The one thing I really liked about the art was the dust particle and lightning effects. It would be nice to be able to put that amount of polish into all scenes! Generally that’s going to need more time or more people.

Cohesion Between Art and Writing

There are a couple of instances in the game where the art doesn’t match the writing. For example, the hallway made the school feel Japanese and the chairs in the final performance hall didn’t look like you could be restrained to one.

Because the art was fixed, the writing should have taken its cues from the art so that the game felt like a better whole. Alternatively, I could have made my own art or teamed up with an artist, though both of those options have considerable impact on scope and production methodology. One of the biggest motivators for me is seeing finished art in the game immediately, as it makes it feel like a ‘real project’. And while I have an art background, I haven’t flexed those muscles in quite some time, so there tend to be some confidence barriers that slow me down.

Code and Mechanics

I don’t have an awful lot to say on code – the mechanics of this game were very simple and didn’t do anything beyond the base capabilities of ren’py.

I did at one point attempt to write a particle manager but I scrapped it very quickly – if I want to write things like this they need to be factored into the plan at the start so I have additional time.

One very interesting thing that I did learn from a couple of art bugs, however, was that I’ve been using hide/show incorrectly the whole time I’ve been using ren’py – that was a surprise for sure. I had been hiding and showing every image as it appeared, and using file names directly in the show/hide statements without defining them as images first. Looking forward to less buggy games now that I know how to use one of the most basic and commonly used features properly!

From the ren’py documentation.

The Future

There are a lot of takeaways from this project, but I think the main one is that in order to improve the quality of my games, I need to improve the art. Either I need to dust off that ol’ art degree and draw something myself, or I need to form a team with an artist.

Overall the strongest element of the game appeared to be the writing, which is surprising for me as I have no formal training in that area like I do with art and code.

I’m also interested in creating games with slightly deeper mechanics, so might work on something a little more design/code heavy in the future.

First things first though – some well deserved relaxation, then finishing the January Edwards demo!

VHS Video Material Tutorial

I made this VHS Video shader in unreal as part of the first shader challenge for the Technically Speaking discord. Our theme was ‘Retro’ and I may have been toying with ideas relating to an FMV game, so decided to throw the two ideas together.

In the spirit of sharing knowledge that the server is built on, I’ve written a tutorial below on how I set it up.

For anyone who prefers to see the source, the project files can be found here. Unzip these folders and copy them into the content folder of your project.

Feel free to comment on here or message me on twitter if you have any questions about the setup or how to do things. 🙂

I’ve also set up a ko-fi now, so if you get some use out of this and have pennies to spare, a tip would be much appreciated!


Video

Video Texture

Start off by importing a video texture and creating a basic unlit material that uses it. Then make a blueprint that opens your imported media source on level start.

Instructions on how to import, setup and use a video texture in level can be found on the unreal documentation page, so I’m not going to cover it here, but feel free to reach out with any questions!

Video Texture on a Sphere

The video was just a silly horror-esque clip I took with my phone, looking at the creepiest things I could find in my house. The quality of the video doesn’t matter – not only is the video going to be down-ressed in further steps, but low quality makes it more old-school!

Blurring

In order to really give the video an old school feel, we’re going to add some loss of focus that takes the sharpness out of the image. (Thanks to Simon for this suggestion!)

This one is a little bit of a cheat – rather than writing your own function, grab the SpiralBlur-Texture node. If this doesn’t accept an external texture sample as an input, grab the custom node from inside and use that directly in your graph.

The Spiral Blur Node
Blurring Logic Using Custom Node From Spiral Blur

Convert the inputs to this node into scalar parameters. I found the values below worked well. A very small distance but a very large number of distance steps gives us a substantial amount of blur while keeping the image coherent.

Blur Scalar Parameters in A Material Instance

This blur looks great, but we don’t want our camera out of focus the whole time! To fix this, we’re going to lerp the original image and the blurred image, using a modulo operator to switch between them.

Modulo (the Fmod node in unreal) returns the remainder of a division. So the modulo of 4 and 2 is 0, while the modulo of 5 and 2 is 1.

Fmod Node

By getting the modulo of Time and a scalar parameter, we create an uneven oscillation of values. This produces a more natural look than a wave function. By rounding this, we create a jarring switch between the two which suits the VHS look well.

The difference between modulo and a sine wave.

You can now use the scalar parameter to control how often you see each version of the image. A value like 2 will evaluate to 0 often, whereas a value like 5 will be more likely to have a remainder so will come out as 1 or above. You can see how the output scales in the gif above.

I put the normal image in my A slot and the blurred image in my B slot and then, because I was seeing the blurred image more often than the non-blurred one, stuck in a one minus to flip the values around (one minus returns 1 – the input value). This was mostly put in there from a graph neatness perspective so feel free to skip this or to change the order of your inputs in your own graph!

The two samples plus the lerp between them.
The material on a sphere with the modulo input set to 2 to make it easier to see!

The dark patches appearing on the gif above are where the result of our modulo is a negative number. Personally I think it adds to the effect, but if you’d rather get rid of it, a saturate node before the lerp will do the trick. (Saturate clamps between 0-1 in a single ALU instruction, where the Clamp node may represent more than one instruction on some hardware.)
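
For anyone who prefers code to nodes, the whole switch boils down to a few lines of hlsl. This is just a sketch of the logic above – the function and parameter names are mine, not anything from the graph.

float3 BlurSwitch(float3 sharp, float3 blurred, float time, float frequency)
{
    // Uneven oscillation: the remainder of time / frequency, snapped to
    // whole numbers, then saturated so the lerp never extrapolates.
    float t = saturate(round(fmod(time, frequency)));
    // One minus flips which image we see more often.
    return lerp(sharp, blurred, 1.0 - t);
}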

UV Manipulation

Now that we have a nice blurry image, we’re going to make some edits to the UVs to add stretching, chromatic aberration and an era appropriate resolution!

Resolution

The resolution of a VHS is 333×430, so our video should reflect this. I’ve explained this down-res technique in a previous post, however that was in hlsl, so I’ll go over it again for unreal.

Create a scalar parameter or constant for your X and Y resolutions and set them to 333 and 430 respectively. Append them to create a float2 value, then floor this.

Multiply your texture coord with this to create a grid of your image. Once the grid is created, floor it, then divide by the resolution. This takes your tiling back down, but because of the floor, every pixel is snapped to the grid we initially created.

Stages of the maths for lowering the resolution.
Nodes for lowering the resolution.

This can then be used as the UV input for your video texture.

The shader with various resolutions.
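
Written as hlsl, the down-res is only a couple of lines – a sketch of the node maths above:

float2 QuantiseUV(float2 uv, float2 resolution)
{
    // Scale up into a grid, snap each pixel to its cell, then scale back down.
    resolution = floor(resolution); // e.g. float2(333, 430)
    return floor(uv * resolution) / resolution;
}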

Stretching

The next thing we’re going to add is an occasional stretch of the image.
This is created by multiplying the Y resolution with a lerp between a sine wave and 1.

We use a sine wave so that every time we see the stretch it will be in a slightly different position. Create a parameter for stretch speed, then multiply this with time. Get the sine of this, and we have a basic wave.

Next, multiply this by 0.5, then add 0.5. This takes our wave from -1 → 1 space into 0 → 1 space (-1 × 0.5 = -0.5, + 0.5 = 0; 1 × 0.5 = 0.5, + 0.5 = 1).

Finally, multiply this with another scalar parameter to control the strength of our wave.

For the interpolator, we’re going to use the same modulo logic we used for the blurring. Create a new scalar parameter for how often we want to see the stretched version, modulo that with time, then round it.
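
As code, the stretch looks something like the sketch below. Which lerp slot holds the stretched version depends on how you wire your graph, so treat the ordering as a guess; the result gets fed in as the Y component of the resolution in the down-res step.

float StretchedYRes(float yRes, float time, float speed, float strength, float frequency)
{
    // Basic wave remapped from -1..1 into 0..1, scaled by a strength parameter.
    float wave = (sin(time * speed) * 0.5 + 0.5) * strength;
    // The same modulo trick as the blur decides when the stretch is visible.
    float gate = saturate(round(fmod(time, frequency)));
    return yRes * lerp(wave, 1.0, gate);
}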

Chromatic Aberration

Our final UV tweak is chromatic aberration. This is where we get red and blue fringes around the edges of our image as the channels are offset from one another.

To do this, we are going to replace our single sample for the non-blurred version of the texture with three new samples, two of which are offset.

Take the UVs we made as a result of resolution changes, and add a scalar parameter to it for the offset. Append this with a 1 in the y, then input this as the uv for the first texture.

Take the UVs as they are for the second texture.

For the third, multiply the offset parameter by -1, then add it to the UVs and append this with 1 in the y.

Once these are set up, take the R from the first texture, the G from the second and the B from the third and append these to make a final color. This can then be input to the lerp with the blur.

Node setup for chromatic aberration.
Chromatic aberration on a sphere.
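
In hlsl the three samples look roughly like this. I’ve written the offset as a shift in X only, which is the common way to do it – the node version above appends a 1 in the y, so adapt this to match your own graph.

float3 ChromaticSample(Texture2D tex, SamplerState texSampler, float2 uv, float offset)
{
    // Sample the video three times, pushing R and B in opposite directions.
    float r = tex.Sample(texSampler, uv + float2(offset, 0.0)).r;
    float g = tex.Sample(texSampler, uv).g;
    float b = tex.Sample(texSampler, uv - float2(offset, 0.0)).b;
    return float3(r, g, b);
}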

Image Effects

Now that we’ve finished playing around with our UVs and base image, we can start to add the static and other image effects that will make the shader instantly recognisable as being a VHS style video.

For these effects you’ll need a channel packed texture with scanlines, a block of white and the date/time. I recommend putting the date/time on G, scanlines on R and white block on B, to take advantage of differing compression quality between channels.

My texture – do as I say, not as I do! Putting the block on G was a poor choice here.

If you’re using Photoshop, I’d recommend using a scatter brush, then motion blur, then add noise to create the scanlines effect.

Adjustments

The first thing we’re going to do is take the output of the lerp between the blurred and normal video and add some adjustments. Add a power and multiply node, then create parameters for these. The power lets us adjust the gamma of our image and the multiply gives us a color tint.

On a sphere with gamma significantly increased.
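
The adjustments are a one-liner in code – power for gamma, multiply for tint:

float3 Adjust(float3 color, float gamma, float3 tint)
{
    return pow(color, gamma) * tint;
}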

Grain

The next thing we’re going to add is a grain effect. This will be multiplied with the output of our adjustments.


Rather than a texture, grab the simplex noise node. We’re going to lerp between noise and one minus noise so that the position of the visible noise changes. (As we’re multiplying, only black areas will be visible.)

The interpolator is a rounded sine wave, so add Time multiplied by a new scalar parameter for the speed, then get the sine of this. Multiply and add by 0.5 as we covered above, then round it so we only get 0 or 1 values.

After this, add another parameter for strength, reverse it using one minus and add it to the lerp. We reverse it because the grain gets stronger as there is less white in it, which doesn’t make a lot of sense to a user – the one minus just makes the parameter more intuitive. This can then be multiplied by our adjustment output.

Sphere with the grain added – lower gamma makes this easier to see.
Output of the grain.
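
Here’s the grain stage as an hlsl sketch, with the noise node’s output passed in as a plain float (names are mine, not the node graph’s):

float3 ApplyGrain(float3 color, float noise, float time, float speed, float strength)
{
    // A rounded sine wave flips between the noise and its inverse.
    float flip = round(saturate(sin(time * speed) * 0.5 + 0.5));
    float grain = lerp(noise, 1.0 - noise, flip);
    // One minus on strength so that a bigger number means more grain.
    return color * saturate(grain + (1.0 - strength));
}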

Scanlines

The next thing you want to add are scanlines. This is where the texture we made earlier comes in. Take the white block channel of your texture and multiply its texture coordinates by a constant float2 of (1, 75) to turn the block into a number of thin lines. Then use the panner node to move this in the y direction. I left these as constants, but feel free to add parameters for speed and tiling!

Multiply this with the grain and adjustments.

Nodes for scanlines.
With the lines on – we’re almost done!
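
In code, the scanlines are just a tiled, panned sample of the white block channel – a sketch, with the tiling hard-coded to match the constants above:

float Scanlines(Texture2D tex, SamplerState texSampler, float2 uv, float time, float speed)
{
    // Tile the block into 75 thin lines and pan down in Y over time.
    float2 lineUV = uv * float2(1.0, 75.0) + float2(0.0, time * speed);
    return tex.Sample(texSampler, lineUV).b; // white block packed on B
}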

Static

The next effect we’ll add is static. These are the long static bands that really sell a VHS feel.

Like the scanlines, we’re using our texture with a panner, but we have a bit of logic on the coordinates and speed to vary things a little.

Start by taking the scanline channel of your texture and plugging a panner node into it.

For the coordinate, we’re going to create a sine wave that is used to scale the texture up and down. Multiply Time by a scalar parameter for your speed, then get the sine of this and multiply it with another parameter for the amplitude. I’ve called the amp ‘variation’ because it changes how large, and therefore how different from the original, the texture becomes. Multiply this by your texture coordinate and you have a scanline that scales up and down!

For the speed, we want to pan in the y but not the x, so put down an append node with a constant of 0 in x. Multiply the wave by a new speed parameter (not the same as the scaling speed – coherence creates a less glitchy look), then put this in the y slot. This gives us an oscillating speed.

Add the texture sample to your grain and scanline multiplication.

Static lines added to the effect.

This looks cool, but it’s no good if it stays on screen forever. Multiply this with yet another variation on the modulo that we’ve used for other effects.

Nodes for the static effect.
Less frequent scanlines using modulo.
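
Putting the scaling, the oscillating pan and the modulo gate together, the static stage sketches out like this (all parameter names are mine):

float StaticBands(Texture2D tex, SamplerState texSampler, float2 uv, float time,
                  float scaleSpeed, float variation, float panSpeed, float frequency)
{
    // Scale the texture up and down over time for variation.
    float2 staticUV = uv * (sin(time * scaleSpeed) * variation);
    // Pan in Y only - the panner adds speed * time, and the speed itself oscillates.
    staticUV.y += sin(time * scaleSpeed) * panSpeed * time;
    float band = tex.Sample(texSampler, staticUV).r; // scanlines packed on R
    // The modulo gate means the bands only show up occasionally.
    return band * saturate(round(fmod(time, frequency)));
}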

Overlay

The last thing we’re going to do is add a date/time overlay. Just take the date/time channel from your texture and add it to the previous effects.

After this, you can add a multiplier to the whole effect for a stronger emissive glow.

Overlay nodes.
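
For a bird’s eye view, here’s how the whole chain fits together as code. Every name here is one of the stages from this post rather than a real node or variable in the project:

float3 CompositeVHS(float3 sharp, float3 blurred, float blurGate, float gamma,
                    float3 tint, float grain, float scanlines, float staticBands,
                    float overlay, float emissiveBoost)
{
    float3 vhs = lerp(sharp, blurred, blurGate); // blur switch on the UV-warped video
    vhs = pow(vhs, gamma) * tint;                // adjustments
    vhs *= grain;                                // grain
    vhs *= scanlines;                            // scanlines
    vhs += staticBands;                          // static bands
    vhs += overlay;                              // date/time channel
    return vhs * emissiveBoost;                  // stronger emissive glow
}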

Parameters

And we’re done! Here’s a screenshot of the parameters I had and the settings I used in the final video above.

Performance

GPU

While this wasn’t really intended to be particularly performant or made with a games application in mind, I’d encourage everyone to be aware of how their shaders affect GPU timing and memory!

This is fairly low on instructions and samples, even with the number of times we sampled the video texture.

Looking at the stats, it took ~0.3ms at runtime, which in my opinion is acceptable even if the video wasn’t the main focus of the scene. If we were using this in a game aiming for 60fps I’d perhaps have some reservations, with 0.3ms being nearly 2% of a 16.6ms frame budget.

Memory

Video textures cannot be streamed like regular textures. You can generate mips for them, but they don’t have the same behaviour, so we need to consider that our video will be loaded at all times.

My runtime non streaming memory with this in the scene was 130 MB. That’s a huge amount of memory for a single texture. If this is the singular focus of the scene, it might be okay, but if we’re having this in a larger level it starts to become a concern.

There are a couple of options for making this cheaper if you want to use this technique in a game:

  • Make the video shorter – my video was fairly long
  • Make the video smaller – I just used whatever the default res on my phone was and then changed it in the shader; your source video could be low res from the start.
  • If you can’t afford video at all – make an atlas texture out of stills and run through them in the shader.

That’s it! Thanks for reading, and have fun making spooky VHS effects – ’tis the Halloween season after all!

Ren’py Tidbits

I’ve been a little dead on this blog lately as I’ve been focusing on a ren’py project that didn’t really have a lot of tech art related topics to write about.

The demo for that project can be found here for anyone who likes solving detective mysteries!

I’m working on it more, as well as another prototype, so figured I’d post about a couple of neat tidbits I’ve done.

Dialog Blips

In order for this to work, the text speed needs to be slow. The speed of the text can be changed on line 122 of options.rpy. It is 0 by default, which makes the text appear instantly; otherwise the value is the number of characters shown per second, so smaller numbers make the text appear more slowly.

default preferences.text_cps = 15

The blips are implemented by providing a callback function to the character when it is created.

define e = Character("Eileen", callback=dialog_beep)

When the callback is called, it is given an event argument which tells us when the callback occurred. For the dialog, we use show_done and slow_done, which fire when the text is shown and when an entire line of slow text has finished showing, respectively. What we’re doing here is playing a sound when the text starts displaying, looping that sound, then stopping it when the dialog line finishes.

Because there is a slight pause between each beep, it gives the impression that the words and beeps are connected, but really we’re just displaying some text slowly and looping a beep sound until the text is finished displaying.

init python:
    def dialog_beep(event, interact=True, **kwargs):
        if not interact:
            return

        if event == "show_done":
            renpy.sound.play("audio/beep.wav", loop=True)
        elif event == "slow_done":
            renpy.sound.stop()

More info can be found on the ren’py documentation.

Handling Screen Variables

I’ve been working on a new prototype that relies heavily on custom screens, which come with a couple of gotchas in terms of when variables are updated and refreshed.

A hacky fix for some issues can be to refresh the screen by restarting the interaction.

renpy.restart_interaction()

Another thing to note is that variables set in screens should use the SetScreenVariable action rather than just being set manually, as this will cause the whole screen to be updated appropriately.

SetScreenVariable("number_pressed", number_pressed + str(i+1))

Hiding the Say Screen

I haven’t found a non-hacky way to do this yet, but in order to hide the say screen and only interact through my custom screen, I used the pause function after displaying my screen. As it refreshes its interactions separately from the say screen, we don’t get blocked from interacting by the pause.

label start:
    show screen remote()
    $ renpy.pause()
    return


Handling Variables In Ren’Py

I’m currently working on a Ren’Py game with a couple of other people and we’re not quite ready to post about it publicly yet, but I wanted to share some notes on handling variables between states and sessions as it’s caught me up a little bit!

Supporting Rollback and Save States

I’ve got a couple of classes that handle things like information about the main character and your current relationship with romanceable characters. I was setting these up in the init, but it turns out that init variables do not participate in save states or rollback, as they are often used for things like the UI.

The solution is to do variable initialisation in a callable label and then run this at start and load time. The hasattr function is used to check if the variable exists, with renpy.store containing every declared variable in the project.

label _init_variables:
    python:
        if not hasattr(renpy.store, 'mc'):
            mc = MainCharacter()
    return

The call in new context function creates a new context where rollback is disabled and save/load still happens in the top level context.

label start:
    $ renpy.call_in_new_context("_init_variables")

label after_load:
    $ renpy.call_in_new_context("_init_variables")
    return

Persistent Data

We wanted to have a list of achievements and endings so the player knows how many endings they’ve found. While ren’py does have a built in achievements system, this seems mostly for passing backend data to services like Steam.

Instead, I used persistent variables. Adding ‘persistent.’ to a variable name saves it in persistent data, which remains the same between sessions.

$ if persistent.achievements_list is None: persistent.achievements_list = []

I can then just add to this list when a player gets an achievement.

$ persistent.achievements_list.append("Started The Game")


Forcing Updates During Runtime

Variables are only checked when changed within the correct scope, which means that a class taking in a variable will only get the value at the point the class instance is created.
This feels hacky, but the accepted solution by the community seems to be to put all the variables that need to be constantly passed around into a screen, as it is forced to update every frame.

show screen update_vars()

screen update_vars():
    # Reassigning these to themselves forces the screen language
    # to re-evaluate them every frame.
    $ naal.good_value = naal.good_value
    $ naal.other_value = naal.other_value
    $ naal.bad_value = naal.bad_value
    $ sal.good_value = sal.good_value
    $ sal.other_value = sal.other_value
    $ sal.bad_value = sal.bad_value

Floating World Shader Part 5 – Ocean Material

This is the last part of the breakdown of my shaders based on work by Owakita. This will probably be the biggest one of all as we tackle the Gerstner wave ocean material.

I’ve written a number of posts breaking this down, and these can be found here:

Part 1 – Initial Plans
Part 2 – Sky Material
Part 3 – Post Processing Material
Part 4 – Gradient Fresnel Material

Gerstner Wave Function

The Gerstner wave function was based on this video from the Dreams team. It’s a fantastic watch, so I’m just going to pop it here in lieu of going through the graph.

The main change I had to make to get this into unreal was to replace the loop with the wave function repeated over and over, which causes node spaghetti and makes the system far less flexible.
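
If it helps to see it as code, here’s a textbook single Gerstner wave in hlsl – the general formulation rather than the exact graph from the video, with Z as up to match unreal:

float3 GerstnerWave(float2 pos, float2 dir, float steepness, float wavelength,
                    float speed, float time)
{
    float k = 2.0 * 3.14159265 / wavelength;      // wave number
    float f = k * (dot(dir, pos) - speed * time); // phase
    float a = steepness / k;                      // amplitude
    // Each point moves in a circle: sideways by cos, up by sin.
    return float3(dir.x * a * cos(f), dir.y * a * cos(f), a * sin(f));
}

The ocean sums several of these with different directions and wavelengths – which is exactly the loop that had to be unrolled into node spaghetti.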


The bitshifting for the pseudo-random number had to be done in a custom hlsl node, as there is no other way to convert to int and to bitshift.
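
For reference, this is the shape of a classic integer-noise hash – not necessarily the exact one from the video, but the same idea, and the int cast and shifts are why a custom node is needed:

float IntegerNoise(int n)
{
    n = (n >> 13) ^ n;
    n = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff;
    return 1.0 - float(n) / 1073741824.0; // roughly -1..1
}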


I also did the final calculation in a custom node, as it was simpler to read than trying to drag the pins from each variable in. Honestly, it’s still a mess. I was really missing just writing hlsl at this stage.


Normals and Light Stylisation

I had some issues with normals appearing incorrect, which eventually came down to some dodgy order of operations and some calculations being done in meters when they should have been in centimeters. I was caught out by a couple of things when trying to move between code and nodes.


I rounded the normal in order to give a hard edged, toon look to the shader.


Texturing


To create the smooth line pattern seen in the concept, I created a tiling texture in photoshop with outline on R and color variation on G.

I used the same smoothstep trick as with the gradient to have a color assigned to black, white and midgrey.


To add the outline, I took the black outline on the R channel of the texture and added the outline color to it. I then multiplied this with the lerped color.


Foam

In order to make the foam appear on top of the waves, I got the dot product of the surface normal and a vector parameter. This vector was used to control the angle of the foam. This was then multiplied with a noise texture to provide the breakup effect and rounded to keep the toon feel.
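
As an hlsl sketch, the foam mask is a dot product, a noise multiply and a round – FoamDirection being the vector parameter, and the noise passed in as a plain float:

float FoamMask(float3 worldNormal, float3 foamDirection, float noise)
{
    float facing = saturate(dot(normalize(worldNormal), normalize(foamDirection)));
    return round(facing * noise); // rounded for the hard-edged toon look
}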


The outline on the foam was created by taking the above and creating an inverted version of it, with a slightly smaller wave. Multiplying this with the original created an outline, which then hooked into the texture outline to receive the same color.


All together, we get a nice breakup wave effect.


Final Shader

Combine all of this and you get the ocean shader from the original gif!


Floating World Shader Part 4 – Gradient Fresnel Material

Here’s part four of the breakdown of my shaders based on work by Owakita. This is going to cover the gradient material found on the meshes in the scene.

I’ve written a number of posts breaking this down, and these can be found here:

Part 1 – Initial Plans
Part 2 – Sky Material
Part 3 – Post Processing Material
Part 5 – Ocean Material

Object Space Gradient

I wanted the gradient to run vertically across the object, unaffected by the position of the object in world. I didn’t want to adjust my UV map in order to accommodate this, so decided to do my gradient in object space Z. (Unreal is Z up)


I used the ObjectLocalBounds node to get the top and bottom of the object bounds in Z (though I did include a mask parameter so the gradient could be used in other directions), then used a smoothstep to produce a 0 – 1 value between the two.

Three Part Gradient

The reference image actually has three colors in its gradient, and I wanted to get as close as possible. To do this, I had to lerp twice, breaking the initial 0 – 1 coming from my object bounds into two parts.

To do this, I smoothstepped between 0 and 0.5, to map the bottom half of the value range to 0 – 1, then smoothstepped between 0.5 and 1 to do the same to the top half. I then used the first value to lerp between two colors, and the second to lerp between the previous lerp and the third color.
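
As code, the three-part gradient is two smoothsteps and two lerps. The 't' here is the 0 – 1 object space value from the bounds smoothstep above:

float3 ThreeColourGradient(float t, float3 bottom, float3 middle, float3 top)
{
    float lower = smoothstep(0.0, 0.5, t); // bottom half remapped to 0-1
    float upper = smoothstep(0.5, 1.0, t); // top half remapped to 0-1
    return lerp(lerp(bottom, middle, lower), top, upper);
}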


Fresnel

The fresnel effect is very simple, literally just using the fresnel node to lerp between an edge color and the main gradient. This was used to provide the edge highlighting seen in the reference.


For context, the fresnel node does a dot product between the input normal (in this case either a normal map converted to world space or the world space normal of the pixel) and the camera vector to determine whether the surface is facing the camera. Facing returns 0, facing away returns 1, so we can use this to assign a color to faces at grazing angles.
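
Roughly, in code (the real node also has a base reflect fraction input):

float FresnelMask(float3 worldNormal, float3 cameraVector, float exponent)
{
    float facing = saturate(dot(normalize(worldNormal), normalize(cameraVector)));
    return pow(1.0 - facing, exponent); // 0 facing the camera, 1 at grazing angles
}

The final color is then lerp(gradientColor, edgeColor, FresnelMask(...)).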

blog13

Rock Variation

The rocks also use this shader, but forego the gradient and have a normal map input to the fresnel, as well as a very high fresnel amount, giving a shadowed, two tone look.

blog14

Floating World Shader Part 3 – Post Processing

Today I’m continuing the breakdown of my shaders based on work by Owakita. Here’s part three!

I’ve written a number of posts breaking this down, and these can be found here:

Part 1 – Initial Plans
Part 2 – Sky Material
Part 4 – Gradient Fresnel Material
Part 5 – Ocean Material

Post Process Material

The aim of the post process was to create an outline around the objects, provide a grainy look and tint the screen color.


Different Approaches

There were two approaches I considered for this. One was the kernel based edge detection I used on a project a couple years back, and another was the simpler sampling depth offset technique found in the UE4 stylised rendering example.

Ultimately I went for Epic’s approach – while it may be slightly more expensive for the number of times it samples depth, it was far simpler to set up and read later down the line. For this project I was really focused on the aesthetic over performance or tech, but I do still like to consider these things.

Determining Line Width

I started off by creating a single line on one side, by getting the screen uv and offsetting it. I then multiplied this by screen texel size so the offset appears to be the same width, regardless of screen resolution.

Node graph for offset UV. 

Subtracting one channel of this value from the scene depth will give a single line edge, but we want an edge on all sides!

Scene with an outline on a single side.

Creating An Outline On All Sides

I then split this offset into its U and V parts and multiplied each by minus one, leaving us with four sets of offset UVs, as seen in the image below. These are then used to sample scene depth, which creates four different samples of the depth, each with a slight offset in one direction.

Diagram showing how depth sampled with offset UV’s combine to create an outline.

Subtracting these from the initial scene depth then adding them together leaves us with a nice combined outline!

Node graph for combining offset UV samples.
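
Written out, the four samples and subtractions look like this. The depth texture here stands in for the SceneTexture:SceneDepth node, and w is the texel-scaled line width:

float Outline(Texture2D depthTex, SamplerState depthSampler, float2 uv, float2 w)
{
    float centre = depthTex.Sample(depthSampler, uv).r;
    float edge = 0.0;
    // One offset sample per direction, each subtracted from the centre depth.
    edge += centre - depthTex.Sample(depthSampler, uv + float2( w.x, 0.0)).r;
    edge += centre - depthTex.Sample(depthSampler, uv + float2(-w.x, 0.0)).r;
    edge += centre - depthTex.Sample(depthSampler, uv + float2(0.0,  w.y)).r;
    edge += centre - depthTex.Sample(depthSampler, uv + float2(0.0, -w.y)).r;
    return edge;
}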

The scene with an exaggerated outline.

Depth Based Line and Wireframe Fix

With this approach, we start to get the internal wireframe being outlined at distance. As a fix, we take the scene depth, divide it, then take one minus this (as UE4’s depth is flipped), then clamp it. This is then multiplied with the outline. This lowers the outline amount for these faces.


I later added something very similar to the initial outline calculation, where this was multiplied with the result of the texel size multiplier. What this did was take the depth and, as the object gets further away, reduce the size of the line. This let me keep detail at distance and stopped blobby looking outlines.
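
Both depth based fixes together sketch out something like this – the divisor and width are placeholder parameters, roughly following the steps in the text rather than the exact graph:

float DepthCorrectedOutline(Texture2D depthTex, SamplerState depthSampler,
                            float2 uv, float2 texelSize, float lineWidth,
                            float fadeDivisor)
{
    float depth = depthTex.Sample(depthSampler, uv).r;
    // One minus and clamp, fading the outline out for distant pixels.
    float fade = saturate(1.0 - depth / fadeDivisor);
    // Scale the sampling offset by the same fade so far away lines get thinner.
    float2 w = texelSize * lineWidth * fade;
    return Outline(depthTex, depthSampler, uv, w) * fade;
}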


Putting It All Together

Once this outline is created, it is used as the alpha to lerp between the line color and the scene color. Here I’ve multiplied the scene color with a texture and color to add a bit of grain and color tinting.

Node graph for the combined result.

Final Result

Sphere showing the final result of the material.

Exposed parameters for the material.

Final result in the scene. 

Floating World Shader Part 2 – Sky Material

As talked about in my last post, I’ve been working on some shaders based on work by Owakita. Here’s part two of the breakdown!

I’ve written a number of posts breaking this down, and these can be found here:

Part 1 – Initial Plans
Part 3 – Post Processing Material
Part 4 – Gradient Fresnel Material
Part 5 – Ocean Material

Sky Material


The sky material for this project was pretty simple, as I made a copy of the base sky material found in the construction script of BP_Sky_Sphere, and replaced that reference with the copy.

BP_Sky_Sphere Construction Script

Inside of that copy, I replaced the scrolling clouds texture with my moon, and the stars with my stars. I also added an additional cloud speed parameter so that I could control the speed of the stars and moon separately, for a nice parallax effect.

Texture samples that need to be changed in M_Sky.

The most challenging part here was creating the moon texture. As it has such a large area to cover and is projected over a sphere, I had to get the moon shape in the correct area of the texture and bend it appropriately so that it looked correct on the sky sphere mesh. I got there after a lot of trial, error and free transform!

The star and moon textures I created. 

For the colors, I left the material as is but overrode the light based colouring in the details menu. I then tweaked zenith and horizon color until I got close to the reference image.

Tickbox for turning off sun position based color and color controls.

That’s all there was to it for the skybox! Next post will look at the post processing effect.

Floating World Shader Part 1 – Initial Plans

Over the bank holiday I worked on a number of shaders in UE4 based on work by Owakita. I love her dreamy, pastel aesthetic and wanted to try it for myself.

I’ve written a number of posts breaking this down, and these can be found here:

Part 2 – Sky Material
Part 3 – Post Processing Material
Part 4 – Gradient Fresnel Material
Part 5 – Ocean Material

The finished product!

Comparison between Owakita’s original work and my recreation of it.

Reference Breakdown

Initially looking at the work, I broke my project down into a post process, gradient and Gerstner ocean material. In future blog posts I’ll be covering how I did each of these stages!


At this point I listed some features I thought each shader should have, but this did change a fair bit over the course of development.

Nanoreno Week 4 – Finished!

I’ve officially completed NaNoRenO2020! It’s a really good feeling since I’m pretty terrible at finishing game projects normally!

You can play the game at https://amesyflo.itch.io/dream-dilemma.

Proofreading

Most of this last week was dedicated to proofreading. This should have been the easiest part of the project really, but having my full dialog script inside of my game code made it quite tough. Spellcheck picked up on a lot of things it shouldn’t have, and it wasn’t super easy to write in flow with game code dotted about.

It made me think about writing a tool to parse dialog from a text document/spreadsheet into .rpy files. At the very least I think next project I want to think about a file structure that allows me to keep game code and dialog separate.

Overall Experience

Overall this game jam was a great experience – I finally finished a game! It’s led me to have a bunch more VN ideas, so expect to see more of that in the future. First though…I have a shader I want to write…