Eldritch Blast VFX

This is a small magic effect I’ve been working on, based on my Fey warlock D&D character. D&D spells are a nice way to find inspiration for VFX.

162125fbd0a938fb9c6f4929ec1d9480.gif

Magic attacks like this are new to me so I learned quite a bit doing it!

Turns out Unity’s default particle trail rendering isn’t great, as it tries to construct the trail on the fly – you can see the “kinks” at the start and end of the trail.

I used the Minimum and Maximum filters in Photoshop to make the textures. This is great – a really nice way to make wispy, ethereal shapes without much painting.

Capture.PNG

It was also interesting to experiment with how shapes change when applied to different particle types or stretched along a trail – I used Rafles’ breakdown to learn how to shape a trail texture, and I’m happy with how it turned out.

A last-minute addition to the effect was the diamonds on the ground. I was ready to call it done even though I wasn’t 100% happy, and grabbed a gif. Seeing the effect as a thumbnail helped me see that it didn’t look connected to its environment, so I added the floor particles to ground the effect a little.

I picked up on a few things I need to work on during this, particularly to do with my process.

Firstly, I need to use more reference! With environmental or realistic effects I always use reference, but with this and with the less realistic effects I’ve made, I was sort of winging it. Effects still need to be grounded in reality even when they’re completely fantastical. The effect improved when I looked at bullet impact references and realized the impact aura should shoot towards the caster, not away from them.

In addition to reference, with multi-part effects like this one, storyboarding could be very helpful – I’m not sure I ever had a clear idea of what the effect actually was!

In terms of the effect itself, the anticipation and impact could be stronger and the pink colour is a little too saturated. Better use of reference and defining a colour scheme as part of planning should help these along.

Looking forward to taking these lessons into the next effect!

LCD Shader

Today I used my lunch break to get started on an LCD shader for my current project. While the monitor casing looks more likely to be a CRT, I really like the subpixel element and moiré looks that LCD provides.

Capture1

To create the effect above, I pixelated the main image, then multiplied it by an RGB subpixel texture.

Capture4

To pixelate the texture, I multiplied the UVs by the number of pixels I wanted on screen, rounded that to create a grid, then divided by the number of pixels to bring the tiling back down, but clamped to the grid. (Shown below with a 10-pixel grid for ease of understanding – the image at the top uses a 100-pixel grid.)

1  2  3

// Snap the UVs to a grid: scale up by the pixel count, round, then scale back down
fixed4 pixel = tex2D(_MainTex, (round(i.uv * _Pixel) / _Pixel));

I then made sure that the UV used to sample the RGB component texture was multiplied by the pixel value, so the two grids match up.
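Putting the two together looks roughly like this – a minimal sketch of the fragment function, assuming a standard unlit setup, where _SubpixelTex is a placeholder name for the tiling RGB subpixel texture (the real property name in my shader may differ):

fixed4 frag (v2f i) : SV_Target
{
	// Pixelate the main image by snapping its UVs to an N x N grid
	fixed4 pixel = tex2D(_MainTex, round(i.uv * _Pixel) / _Pixel);

	// Tile the subpixel texture once per "pixel" so the two grids line up
	fixed4 subpixel = tex2D(_SubpixelTex, i.uv * _Pixel); // placeholder texture name

	return pixel * subpixel;
}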

Capture5   Capture6

 

Starting Something New!

This might be the fastest I’ve started something new after finishing a project! I really liked the style I was working in with my pool shader, so wanted to continue.

I’m taking the rose tinted skybox, blown out lighting and simple shapes to a project I’ve had floating about in my head for a while – a sort of adventure game, sort of visual novel, based on internet mysteries and creepypasta. I’m hoping the juxtaposition of cutesy art direction and darker subject matter will make for something interesting!

I’m letting this one lead with art, as I know if I do mechanics first I’ll never get done! The 3D art will be a computer screen and then the majority of the rest of the art will be UI based.

1

Below I’ve got a quick first pass of the model with some Unity default UI stuff in there. The blinking light is an incredibly simple shader – just a couple of lines for that effect!

// Remap sin from [-1, 1] to [0, 1], then round to a hard 0 or 1 for the blink
float blink = round(sin(_Time.y * _Speed) * 0.5 + 0.5);
fixed4 col = _Color * blink;

1b81eb30059bad646c9ab70caed6b9a0

Finished Water Shader

Finally done with this! It’s been a while since I posted and I can’t quite remember what I’ve done since then, so I’ll just post the full shader code at the bottom of this post so you can take a look!

As far as I can remember, I added some smaller waves, changed the UV scroll to be sine-based rather than a linear scroll, and made the wee sparkle particles.

Pool5

Deciding how to present this one was interesting – I wanted to emphasise the shader but just having a shaderball didn’t really show what it could do. I opted for this super simple style so that the main piece could shine.

Pool3Pool4Pool6Pool2

Here’s a video and the full code.

Shader "Unlit/Sh_Water_Unlit_River"
{
	Properties
	{	//Texture pack u freak
		_Color("Body Color 1", Color) = (1,1,1,1)
		_Color2("Edge Color", Color) = (1,1,1,1)
		_Color3("Body Color 2", Color) = (1,1,1,1)
		_MainTexture("Body Texture", 2D) = "white" {}
		_EdgeTexture("Edge Texture", 2D) = "white" {}
		_Distance("Edge Thickness", Float) = 1.0
		_Normal("Normal Map", 2D) = "bump"{}
		_Speed("Wave Speed", Range (0,10)) = 1.0
		_Noise("Wave Texture", 2D) = "white"{}
		_Amount("Wave Amount", Float) = 1.0
		_Speed2("Scroll Speed", Range(0,10)) = 1.0
		_TexAmt("Little Waves Amount", Float) = 1.0

	}
		SubShader
	{
		Tags { "RenderType" = "Transparent" "Queue" = "Transparent" "LightMode" = "ForwardBase"}
		LOD 100
		Blend SrcAlpha OneMinusSrcAlpha

		Pass
		{
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#pragma multi_compile_fog

			#include "UnityCG.cginc"

			struct appdata
			{
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
				float4 normal : NORMAL;
			};

			struct v2f
			{
				float2 uv : TEXCOORD0;
				UNITY_FOG_COORDS(1)
				float4 vertex : SV_POSITION;
				float4 screenPos : TEXCOORD1; //Custom Data in V2f
				float2 uv2 : TEXCOORD2;
				float4 normal : NORMAL;
				float3 viewDir : TEXCOORD3;
			};

			sampler2D _EdgeTexture;
			float4 _EdgeTexture_ST;
			float4 _Color;
			float4 _Color2;
			float4 _Color3;
			uniform sampler2D _CameraDepthTexture;
			float _Distance;
			sampler2D _Normal;
			float4 _Normal_ST;
			float _Speed;
			float _Speed2;
			sampler2D _Noise;
			float _Amount;
			sampler2D _MainTexture;
			float4 _MainTexture_ST;
			float _TexAmt;

			v2f vert (appdata v)
			{
				v2f o;
				//Wobble
				float4 noiseTex = tex2Dlod(_Noise, float4(v.uv, 0, 0) * 20);

				//y pos = sin for general up down movement * texture for waves and sides need to come up at different times
				v.vertex.y += (sin(_Time.y * _Speed + v.vertex.x + v.vertex.z) * _Amount * (noiseTex * 2)); //also fix normal
				v.vertex.y += (sin(_Time.y * _Speed * 0.5) * _TexAmt) * round(noiseTex);

				o.vertex = UnityObjectToClipPos(v.vertex);
				o.uv = TRANSFORM_TEX(v.uv, _EdgeTexture);
				o.uv2 = TRANSFORM_TEX(v.uv, _MainTexture);
				o.screenPos = ComputeScreenPos(o.vertex); //Get vertex in clip space, compute screen position
				o.normal = v.normal;
				o.viewDir = normalize(ObjSpaceViewDir(v.vertex)); //view direction from the object-space vertex position
				UNITY_TRANSFER_FOG(o,o.vertex);
				return o;
			}

			fixed4 frag (v2f i) : SV_Target
			{
				//Depth fade: difference between the scene depth and the water surface depth, scaled by edge thickness
				half depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
				half screenDepth = saturate((depth - i.screenPos.w) / _Distance);

				float4 noiseTex = tex2D(_Noise, i.uv2 * 0.5);

				//Pan the UVs: distort x with the noise texture, move y back and forth with a sine wave
				float2 pan_uv = float2((i.uv.x * noiseTex.r), (i.uv.y + (sin(_Time.y * _Speed * 1.5) * _Amount * _Speed2)));
				float2 pan_uv2 = float2((i.uv2.x * noiseTex.r), (i.uv2.y + (sin(_Time.y * _Speed * 1.5) * _Amount * _Speed2)));

				float4 noiseTex2 = tex2D(_Noise, pan_uv2);

				//Edge colour near intersections, body colour blended between the two body colours by noise
				fixed4 edge = saturate(tex2D(_EdgeTexture, pan_uv).b) * _Color2;
				fixed4 body = saturate(tex2D(_MainTexture, pan_uv2).b) + lerp(_Color, _Color3, smoothstep(0.25, 0.75, noiseTex2.r));

				float fresnel = max(0, 1 - (dot(i.viewDir, i.normal)));
				float wobbly_fresnel = fresnel;

				fixed4 color = lerp(edge + body, body, screenDepth);
				fixed4 col = saturate(lerp(color * 0.1, color, wobbly_fresnel));
				col.a = saturate(lerp(edge.a, _Color.a, screenDepth) + wobbly_fresnel * col.a);

				UNITY_APPLY_FOG(i.fogCoord, col);
				return col;
			}
			ENDCG
		}
	}
}

 

Water Shader

Wiping your PC’s data to fix it is a good excuse to start a new project, right? 😉

I began working on a stylized water shader in Unity this morning. I’m aiming for something like the images below, with depth based texture effects, reflection, refraction, fresnel and a bit of movement.

Water Moodboard

Render Type and Blend Mode

I started off by making a simple transparent shader. I used Unity’s unlit preset, as this will allow me to add additional outputs to the vertex-to-fragment struct, which I’ll need for getting things like screen position later.

For the blend mode I’ve set it to use traditional transparency. The generated colour is multiplied by its own alpha, and the colour already on screen is multiplied by one minus the generated colour’s alpha, so the two blend weights add up to one. The higher the source alpha, the more opaque the generated colour appears.

I might change this to additive later depending on the type of look I want to achieve.

Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }

Blend SrcAlpha OneMinusSrcAlpha
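Written out, that blend state is the standard alpha-over blend – my paraphrase of what those blend factors do, not code from the shader:

finalColour = sourceColour * sourceAlpha + screenColour * (1 - sourceAlpha)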

water2

Depth Fade

The depth fade should colour pixels that are near geometry that intersects with the water plane.

The first thing I needed to do was get the depth texture that is already being rendered as part of Unity’s built-in forward rendering pipeline.

uniform sampler2D _CameraDepthTexture;

After this, I added a new output to my vertex-to-fragment struct that would allow me to get the object’s screen position.

float4 screenPos : TEXCOORD1;

I set this in the vertex function using ComputeScreenPos, Unity’s built-in function from UnityCG.cginc. This takes the vertex’s clip-space position and uses it to work out the position on screen as a screen-space texture coordinate.

I get the clip-space position by using UnityObjectToClipPos, which is in the shader by default anyway. This transforms a position from 3D object space into the camera’s clip space by multiplying it with the model, view and projection matrices.

o.vertex = UnityObjectToClipPos(v.vertex);

o.screenPos = ComputeScreenPos(o.vertex);

I then use this in the fragment shader to get the final colour.

Depth is computed using the LinearEyeDepth function, which, given the rendered depth texture, calculates the depth between the camera and each pixel. To sample the depth texture, I used an HLSL macro and gave it the screen position as a UV.

To convert this to something we can use with our objects, I took the water plane’s own depth – the w component of its screen position, which holds the view-space depth – and subtracted it from the sampled depth, allowing me to compare the depth of the scene behind the water with the plane I’ve generated.

half depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
half screenDepth = depth - i.screenPos.w;
fixed4 col = saturate(lerp(_Color, _Color2, screenDepth));

This is what screen depth looks like, represented as red = 0 and green = 1. Green pixels are those whose depth difference is 1 or more, because there’s no geometry close behind the water surface at that point.

Water3
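For reference, a debug return value along these lines (a sketch of how you might output it, not necessarily the exact code behind the screenshot) gives that red/green view:

// Visualise the depth difference: red where it's 0, green where it clamps to 1
return fixed4(1 - saturate(screenDepth), saturate(screenDepth), 0, 1);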

There’s a wee gotcha here – remember to saturate (clamp 0-1) the final colour value, otherwise you’ll end up with values outside that range inverting your colour. I think this is because the raw depth difference is still in view-space units rather than a 0-1 range, so I’ll have a look at fixing that properly.
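In the finished shader further up the page I ended up handling this by dividing the difference by an edge-thickness property before saturating, which keeps the value in the 0-1 range and also gives control over how wide the fade is:

half screenDepth = saturate((depth - i.screenPos.w) / _Distance);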

water4

Here’s the final result of the depth! Next I’m going to look at combining this with a texture to generate some nice edges on objects.

water5

Unity Profiler Tool

I had a look at the CPU profiler in Unity today as prep for my new job. Below is a super quick overview of the tool.

To open the profiler, go to Window > Profiler. This opens a new window showing a timeline of frames and a list of functions for each frame.

1

To view the current frame, click the Current button in the top right. To view a specific frame, click on it in the graph. This will pause the game and give you information for that frame. The total ms cost for the frame is shown in the middle of the window. The lines on the graph mark target ms costs, such as 33ms (30fps) and 16ms (60fps).

2

The list of functions lets us see the time in ms for each function, as well as the percentage of the total frame time that the function uses. The Self versions of these columns show the cost of the function without its children, and the Calls column shows how many calls to that function happen in the frame.

GC Alloc shows how much memory that function allocates on the managed heap during the frame. Garbage is allocated memory that is no longer in use – GC (garbage collection) refers to the process that frees this memory up again. If collections happen too often they can hit performance, so it’s worth keeping an eye on.

To get more information on child functions, we can run deep profiling using the button next to Record at the top of the window. Below, I’ve run it to get more information on culling. Without deep profiling, I could see that this cost was related to culling lights, but with it on, I can see that it’s specifically linked to single directional shadows (common sense should have told me that light culling cost would be associated with shadow casting, but data is always good!).

4

The profiler has a bit of an overhead, so make sure that you use the Stats view with the profiler off to get the actual fps and ms-per-frame cost before profiling. Any logging functions that you may be using as part of debugging will also be fairly heavy.

3