Water Shader

Wiping your PC’s data to fix it is a good excuse to start a new project, right? 😉

I began working on a stylized water shader in Unity this morning. I’m aiming for something like the images below, with depth-based texture effects, reflection, refraction, Fresnel and a bit of movement.

Water Moodboard

Render Type and Blend Mode

I started off by making a simple transparent shader. I used Unity’s unlit preset, as this lets me add extra outputs to the vertex-to-fragment struct, which I’ll need later for things like screen position.

For the blend mode I’ve set it to use traditional transparency. The generated colour is multiplied by its own alpha, and the colour already on screen is multiplied by one minus the generated colour’s alpha. This gives us traditional layered transparency where the two blend weights add up to one. The higher the source alpha, the more opaque the generated colour appears.

I might change this to additive later depending on the type of look I want to achieve.

Tags { "RenderType"="Transparent" "Queue"="Transparent" }

Blend SrcAlpha OneMinusSrcAlpha
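For context, here’s roughly where those two lines sit, sketched against the layout of Unity’s unlit shader. The shader name is a placeholder, and ZWrite Off is my assumption for a typical transparent surface rather than something from the original:

Shader "Unlit/Water"
{
    SubShader
    {
        // Render after opaque geometry and treat the surface as transparent
        Tags { "RenderType"="Transparent" "Queue"="Transparent" }

        Pass
        {
            // New colour weighted by its alpha, existing colour by one minus that alpha
            Blend SrcAlpha OneMinusSrcAlpha
            // Assumed: transparent surfaces usually skip depth writes
            ZWrite Off

            // CGPROGRAM block with the vertex and fragment functions goes here
        }
    }
}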


Depth Fade

The depth fade should colour pixels that are near geometry that intersects with the water plane.

The first thing I needed to do was get the depth texture that is already being rendered as part of the inbuilt forward rendering pipeline in Unity.

uniform sampler2D _CameraDepthTexture;

After this, I added a new output to my vertex-to-fragment struct that would carry the object’s screen position.

float4 screenPos : TEXCOORD1;

I fill this in in the vertex function using ComputeScreenPos, Unity’s built-in function from UnityCG.cginc. This takes the vertex’s clip-space position and works out where it lands on screen as a screen-space texture coordinate.

I get the clip-space position using UnityObjectToClipPos, which is in the shader template by default anyway. This transforms a position from object space into the camera’s clip space by multiplying it with the combined model, view and projection matrices.

o.vertex = UnityObjectToClipPos(v.vertex);

o.screenPos = ComputeScreenPos(o.vertex);
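Putting those pieces together, the structs and vertex function look roughly like this. It’s a sketch based on the unlit template with the fog and texture-tiling macros stripped out, not the exact code:

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f
{
    float2 uv : TEXCOORD0;
    float4 screenPos : TEXCOORD1; // new output carrying the screen position
    float4 vertex : SV_POSITION;
};

v2f vert (appdata v)
{
    v2f o;
    // Object space -> camera clip space
    o.vertex = UnityObjectToClipPos(v.vertex);
    // Clip space -> homogeneous screen-space texture coordinate
    o.screenPos = ComputeScreenPos(o.vertex);
    o.uv = v.uv;
    return o;
}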

I then use this in the fragment shader to get the final colour.

Depth is computed using the LinearEyeDepth function, which takes a value sampled from the depth texture and converts it into the distance between the camera and that pixel. To sample the depth texture, I used the SAMPLE_DEPTH_TEXTURE_PROJ macro and gave it the screen position as a UV.

To convert this into something we can compare against, I took the water surface’s own eye depth (stored as a float in the 4th homogeneous coordinate of the screen position) and subtracted it from the sampled depth, allowing me to compare the depth of the scene with the plane I’ve generated.

half depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
half screenDepth = depth - i.screenPos.w;
fixed4 col = saturate(lerp(_Color, _Color2, screenDepth));
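For completeness, here’s a sketch of the fragment function those three lines end up in, assuming UnityCG.cginc is included as in the template and that _Color and _Color2 are declared as colour properties:

uniform sampler2D _CameraDepthTexture;
fixed4 _Color;  // shallow colour
fixed4 _Color2; // deep colour

fixed4 frag (v2f i) : SV_Target
{
    // Scene depth behind this pixel, converted to a linear eye-space distance
    half depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
    // Difference between the scene depth and the water surface's own eye depth
    half screenDepth = depth - i.screenPos.w;
    // Blend between the two colours, clamping so the lerp doesn't extrapolate
    fixed4 col = saturate(lerp(_Color, _Color2, screenDepth));
    return col;
}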

This is what screen depth looks like, represented as red = 0 and green = 1. Green marks pixels whose depth difference has reached 1, where nothing sits close behind the water surface.


There’s a wee gotcha here: remember to saturate (clamp to 0-1) the final colour value, otherwise you’ll end up with out-of-range numbers inverting your colour. I think this is to do with the depth difference still being a raw eye-space distance rather than a 0-1 value, so I’ll have a look at fixing that properly.
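One way I could make that clamp explicit is to saturate the blend factor itself and divide the depth difference by a fade distance, so the fade happens over a controllable range. _FadeDistance here is a hypothetical extra property, not something from the shader above:

float _FadeDistance; // hypothetical property: fade range in world units

// inside frag():
half fade = saturate((depth - i.screenPos.w) / _FadeDistance);
fixed4 col = lerp(_Color, _Color2, fade);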


Here’s the final result of the depth fade! Next I’m going to look at combining this with a texture to generate some nice edges on objects.
