Eldritch Blast VFX

This is a small magic effect I’ve been working on, based on my Fey warlock D&D character. D&D spells are a nice way to find inspiration for vfx.

162125fbd0a938fb9c6f4929ec1d9480.gif

Magic attacks like this are new to me, so I learned quite a bit doing it!

Turns out Unity’s default particle trail rendering isn’t great, as it tries to construct the trail on the fly – you can see the “kinks” at the start and end of the trail.

I used the Minimum and Maximum filters in Photoshop to make the textures. This worked great! It’s a really nice way to make wispy, ethereal shapes without much painting.

Capture.PNG

It was also interesting to experiment with how shapes change when applied to different particle types or stretched along a trail – I used Rafles’ breakdown to learn how to shape a trail texture, and I’m happy with how it turned out.

A last-minute addition to the effect was the diamonds on the ground. I was ready to call it done even though I wasn’t 100% happy, and grabbed a gif. Seeing the effect as a thumbnail helped me see that it didn’t look connected to its environment, so I added the floor particles to ground the effect a little.

I picked up on a few things I need to work on during this, particularly to do with my process.

Firstly, I need to use more reference! With environmental or realistic effects I always use reference, but with this and with the less realistic effects I’ve made, I was sort of winging it. Effects still need to be grounded in reality even when they’re completely fantastical. The effect improved when I looked at bullet impact references and realized the impact aura should shoot towards the caster, not away from them.

In addition to reference, with multi-part effects like this one, storyboarding could be very helpful – I’m not sure I ever had a clear idea of what the effect actually was!

In terms of the effect itself, the anticipation and impact could be stronger and the pink colour is a little too saturated. Better use of reference and defining a colour scheme as part of planning should help these along.

Looking forward to taking these lessons into the next effect!

Armor of Agathys VFX

This is the final effect created with the subsurface scattering effect I posted about earlier.
I started by creating an animation clip that changed the ice amount parameter of the shader over time.

3

ice2
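
Out of interest, the same ramp could also be driven from a script instead of an animation clip. Here’s a minimal C# sketch – “_IceAmount” is a stand-in for whatever the shader’s float property is actually called:

using UnityEngine;

// Ramps a shader float from 0 to 1 over `duration` seconds.
// "_IceAmount" is a placeholder name - match it to the shader's real property.
public class IceGrowth : MonoBehaviour
{
    public float duration = 2f;
    Material mat;
    float elapsed;

    void Start()
    {
        // .material gives this renderer its own instance, so other
        // objects sharing the material aren't affected.
        mat = GetComponent<Renderer>().material;
    }

    void Update()
    {
        elapsed += Time.deltaTime;
        mat.SetFloat("_IceAmount", Mathf.Clamp01(elapsed / duration));
    }
}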

I then added a quick particle effect that consisted of a glow, shiny dots and swirls. (If anyone thinks of a more professional way to talk about VFX, do let me know! “Shiny Dots” is a phrase I use far too often at work!)

ice1

The glow used fairly high emission and low lifetime, plus size over life, to give the impression of light flickering. It just uses the default particle material, which works surprisingly well for effects like this.

The dots use radial velocity to pull them in, with orbit added on a single axis to give additional unnatural motion.

bb9097e2-255e-4b4f-a93e-395963fb7b1c_rw_1920
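
For anyone who prefers code to the inspector, here’s roughly how that motion could be set up through Unity’s velocity over lifetime module – a sketch with made-up values, not my exact settings:

using UnityEngine;

// Negative radial velocity pulls the dots inwards; orbital velocity
// on a single axis adds the swirling, unnatural motion.
[RequireComponent(typeof(ParticleSystem))]
public class InwardOrbitingDots : MonoBehaviour
{
    void Start()
    {
        var vel = GetComponent<ParticleSystem>().velocityOverLifetime;
        vel.enabled = true;
        vel.radial = new ParticleSystem.MinMaxCurve(-1.5f); // negative = towards the centre
        vel.orbitalY = new ParticleSystem.MinMaxCurve(2f);  // single-axis orbit
    }
}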

The trails used the texture below. It was initially designed for chunks that flew outwards, more similar to my concept image above, but it worked nicely as a trail instead, so I kept it. The gaps give me the nice tearing sort of effect in the swirl, and the details look far less blocky and messy when stretched out. This particle system also uses orbital and radial velocity.

TX_Swirl

Finished Water Shader

Finally done with this! It’s been a while since I posted and I can’t quite remember what I’ve done since then, so I’ll just post the full shader code at the bottom of this post so you can take a look!

As far as I can remember, I added some smaller waves, changed the UV scroll to be sine-based rather than a linear scroll, and made the wee sparkle particles.

Pool5

Deciding how to present this one was interesting – I wanted to emphasise the shader but just having a shaderball didn’t really show what it could do. I opted for this super simple style so that the main piece could shine.

Pool3 Pool4 Pool6 Pool2

Here’s a video and the full code.

Shader "Unlit/Sh_Water_Unlit_River"
{
	Properties
	{	//Texture pack u freak
		_Color("Body Color 1", Color) = (1,1,1,1)
		_Color2("Edge Color", Color) = (1,1,1,1)
		_Color3("Body Color 2", Color) = (1,1,1,1)
		_MainTexture("Body Texture", 2D) = "white" {}
		_EdgeTexture("Edge Texture", 2D) = "white" {}
		_Distance("Edge Thickness", Float) = 1.0
		_Normal("Normal Map", 2D) = "bump"{}
		_Speed("Wave Speed", Range (0,10)) = 1.0
		_Noise("Wave Texture", 2D) = "white"{}
		_Amount("Wave Amount", Float) = 1.0
		_Speed2("Scroll Speed", Range(0,10)) = 1.0
		_TexAmt("Little Waves Amount", Float) = 1.0

	}
		SubShader
	{
		Tags { "RenderType" = "Transparent" "Queue" = "Transparent" "LightMode" = "ForwardBase"}
		LOD 100
		Blend SrcAlpha OneMinusSrcAlpha

		Pass
		{
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#pragma multi_compile_fog

			#include "UnityCG.cginc"

			struct appdata
			{
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
				float4 normal : NORMAL;
			};

			struct v2f
			{
				float2 uv : TEXCOORD0;
				UNITY_FOG_COORDS(1)
				float4 vertex : SV_POSITION;
				float4 screenPos : TEXCOORD2; //Custom data in v2f - TEXCOORD1 is taken by the fog coords above
				float2 uv2 : TEXCOORD3;
				float4 normal : NORMAL;
				float3 viewDir : TEXCOORD4;
			};

			sampler2D _EdgeTexture;
			float4 _EdgeTexture_ST;
			float4 _Color;
			float4 _Color2;
			float4 _Color3;
			uniform sampler2D _CameraDepthTexture;
			float _Distance;
			sampler2D _Normal;
			float4 _Normal_ST;
			float _Speed;
			float _Speed2;
			sampler2D _Noise;
			float _Amount;
			sampler2D _MainTexture;
			float4 _MainTexture_ST;
			float _TexAmt;

			v2f vert (appdata v)
			{
				v2f o;
				//Wobble
				float4 noiseTex = tex2Dlod(_Noise, float4(v.uv * 20, 0, 0)); //scale the uvs, not the lod terms (they're zero anyway)

				//y pos = sin for general up down movement * texture for waves and sides need to come up at different times
				v.vertex.y += (sin(_Time.y * _Speed + v.vertex.x + v.vertex.z) * _Amount * (noiseTex.r * 2)); //.r - adding a float4 to a float relied on implicit truncation. also fix normal
				v.vertex.y += (sin(_Time.y * _Speed * 0.5) * _TexAmt) * round(noiseTex.r);

				o.vertex = UnityObjectToClipPos(v.vertex);
				o.uv = TRANSFORM_TEX(v.uv, _EdgeTexture);
				o.uv2 = TRANSFORM_TEX(v.uv, _MainTexture);
				o.screenPos = ComputeScreenPos(o.vertex); //Get vertex in clip space, compute screen position
				o.normal = v.normal;
				o.viewDir = normalize(ObjSpaceViewDir(v.vertex)); //ObjSpaceViewDir wants the object-space position, so pass v.vertex rather than the clip-space o.vertex
				UNITY_TRANSFER_FOG(o,o.vertex);
				return o;
			}

			fixed4 frag (v2f i) : SV_Target
			{
				half depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
				half screenDepth = saturate((depth - i.screenPos.w) / _Distance); 

				float4 noiseTex = tex2D(_Noise, i.uv2 * 0.5);

				float2 pan_uv = float2((i.uv.x * noiseTex.r), (i.uv.y + (sin(_Time.y * _Speed * 1.5) * _Amount * _Speed2)));
				float2 pan_uv2 = float2((i.uv2.x * noiseTex.r), (i.uv2.y + (sin(_Time.y * _Speed * 1.5) * _Amount * _Speed2)));

				float4 noiseTex2 = tex2D(_Noise, pan_uv2);

				fixed4 edge = saturate(tex2D(_EdgeTexture, pan_uv).b) * _Color2;
				fixed4 body = saturate(tex2D(_MainTexture, pan_uv2).b) + lerp(_Color, _Color3, smoothstep(0.25, 0.75, noiseTex2.r));

				float fresnel = max(0, 1 - dot(i.viewDir, normalize(i.normal.xyz))); //re-normalize the interpolated normal and match dimensions
				float wobbly_fresnel = fresnel;

				fixed4 color = lerp(edge + body, body, screenDepth);
				fixed4 col = saturate(lerp(color * 0.1, color, wobbly_fresnel));
				col.a = saturate(lerp(edge.a, _Color.a, screenDepth) + wobbly_fresnel * col.a);

				UNITY_APPLY_FOG(i.fogCoord, col);
				return col;
			}
			ENDCG
		}
	}
}


Niagara First Look

I had a first look at Niagara last night, following on from my big ramble about how hyped I am about it. I haven’t made anything proper yet – just made a module and looked at how variables are defined.

To enable Niagara, go into the plugin manager and enable it under FX. This gives you some new asset types.

5

Systems are collections of emitters that can be placed in the world; emitters are the component parts that fit together to make systems; and modules are scripts that can be used inside emitters. Functions are bits of script that can be used within modules, and parameter collections let you define your own global parameters.

I really like this way of working. I can see larger studios having FX artists responsible for emitter and system creation, and TAs creating modules and functions for them to use.

1

I started by creating a test module with the aim of figuring out how to connect a couple of different variables.

All the parameters are stored within the parameter map, and you can set and get variables from here. Namespace is important! To add a user input value, give it the Module namespace.

I allowed the user to set a colour, then multiplied it by another user input to let them define the emissive intensity.

I then added a size multiplier, so that the size gets larger as the particle ages. Initially this didn’t work: I was getting the size, multiplying it, then setting it back every frame. Because the particle age is zero on the first frame, the stored size was immediately multiplied down to zero and stayed there. Remembering execution type is really important here! To fix it I needed an initial value, set on spawn (for performance), that was multiplied on update by the particle age.

2
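
To illustrate the gotcha outside of Niagara, here’s the same logic as plain C# (not actual Niagara code) – the read-modify-write version compounds the multiplier and zeroes out on the first frame, while the spawn/update split keeps a stable base value:

// Plain C# illustration of the gotcha - not Niagara code.
class SizeOverAgeExample
{
    class Particle { public float InitialSize, Size, NormalizedAge; }

    // Wrong: read-modify-write compounds every frame, and because
    // NormalizedAge is 0 on the first frame, Size sticks at zero.
    void UpdateWrong(Particle p) => p.Size *= p.NormalizedAge;

    // Right: store the base size once at spawn...
    void Spawn(Particle p) => p.InitialSize = 1f;

    // ...and derive the current size from it on every update.
    void UpdateRight(Particle p) => p.Size = p.InitialSize * p.NormalizedAge;
}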

I tried using custom parameter maps for this, but had a lot of trouble and couldn’t make it non-constant. As it turns out, I was looking for something more complicated than I needed – you can set a custom parameter inside the emitter itself, then use its name in any module in the emitter.

3

4

Another gotcha I found – there’s a “use in Niagara” tickbox on materials – important!


Here are my particles in action! Not very exciting yet, but a nice intro to the tech.

cae55a96e18aa0ac203caea3b15219d8786f6db8eefde06690d37d086731b63b

HYPE! UE4 Niagara

Rambly post about the cool features of Niagara and the interesting performance and production implications of a modular, user-exposed particle workflow… you’ve been warned. 😉

I’ve just watched Epic’s showcase of their new particle editor and I AM SO EXCITED.
This looks awesome – it adds flexibility so that you can make FX work as complicated as you want, depending on your level of technical knowledge, and puts power in the user’s hands. I can’t count how many times I’ve said “if only I could just multiply the velocity by the wind direction”…

I love the idea of module creation plus the usual stack editor. A technical artist creating modules using vector math and the like, and a VFX artist then using those modules to create beautiful effects, is a great workflow – it uses the team’s strengths effectively.

It has a lot of control – namespaces to define variables, the ability to choose execution type and order, and access to in-game events, logic and variables. I get the impression that if you have the technical skill and the dream, you can do it!

Being HLSL-based is great – it will give us access to a lot of render attributes we wouldn’t be able to touch otherwise, and having one language, rather than C++ for simulation and HLSL for drawing, makes it nice and easy to use.

Despite this, drawing and simulation are separated, which is great for performance and production. If we want two emitters to have the same behavior but different looks, we can simulate once and give different instructions to draw them.

I’m intrigued by the snippets of HLSL you can write – learning a non-node-based shader language has been on my to-do list for a while, and this might be a nice introduction for me. I’ll 100% be using the HLSL expressions in the stack editor – it’s just like Houdini!

I think the CPU/GPU parity idea is very interesting. I’d tend to do things differently on different processors – I think they used the example of a raycast vs using the depth buffer in the video – and it’s an interesting idea to try to have a single way of working. I’m not sure if this is more or less flexible, but it raises concerns around performance and education. Why would you spend milliseconds raycasting on the CPU when you could look up the depth buffer? And if you didn’t know that raycasting wasn’t the best solution, would you ever find out?

The ideas of exposing values, inheritance and overrides are fantastic, and exactly how we should be making games. This gives us maximum flexibility and re-usability on assets. It allows tweaks to be made by those with less knowledge of the systems, while people with more technical knowledge create the building blocks. Only needing to build things once means more time can be spent on developing other tools and working towards future needs. I guess the only caveat is being careful about what’s exposed and to whom. When the VFX artist exposes spawn rate to the environment artist and they set it to 2000 because it looks best, the TA ends up spending their time on content monitoring and profiling to find stuff like this, rather than investing in the tools and future tech that setting things up only once was supposed to make room for.

Overall, I think this is a great direction for UE4 to be going in with its particle editor – can’t wait to give it a shot!

Applying Compositing to Real Renders

I applied the logic I discussed in my last post to some real renders today! I’ll speak a bit more about my simulation and lighting setup in another post, but I’d like to mention a few additions to the expression and a couple of gotchas that I found.

The simulation I’d created was far too small at the beginning to be worth including in my sheet – I was trying to create something fairly generic that didn’t have too much movement inside the flipbook, as that would be handled by the particle system in engine.

Because of this, I wanted to start my render partway in, and of course, I didn’t want 240 images in my sheet, so I only used every third image.

8

To load the rendered images for the composite, I took my previous file-loading logic and added (start frame − 1) to it. This let me offset the numbers that the node stepped through.

N = frame increment, S = start frame

padzero(4, (($F-1)*N+1)+(S-1))
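
For example, with N = 3 and S = 49, cop frame 1 evaluates to (1−1)×3+1+48 = 49 and frame 2 to 52, so the node steps through the rendered sequence 49, 52, 55… as intended.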

2

This didn’t work for me originally, until I found a wee gotcha. As I’d rendered the images from frame 49, the file node was defaulting to a frame range of 49 to 240. To correct this, I overrode the frame range to 1–64, one frame for each of my flipbook frames.

1

This worked brilliantly, but the final images came out very dark. I’m not sure if this was due to my light setup or something common to Houdini (I’ve heard others mention using a gamma adjustment before rendering out), but I’d like to find out!

3

To correct this I upped the gamma and the levels slightly.

5

This is my final result! A 2K, 64-frame flipbook, created from a 240-frame simulation.

Smoke_Pos.jpg

Flipbook Compositing in Houdini

I’ve written about this topic before, but I wanted to revisit it with a better solution.

Previously, to create flipbooks in Houdini, I had been rendering every frame of my sim, using a standalone Python script to delete and renumber frames, then using a basic mosaic cop to stitch the remaining frames together.

I decided I wanted to bring the solution entirely into Houdini, so I had a look at adding some script-y bits to my cop setup to eliminate the standalone tool.

I rendered only the frames I wanted by changing the increment on my mantra node. I’m going to refer to this number as N, as in every Nth frame. By rendering out every 3rd frame of a 48-frame simulation, I got 16 images – perfect for a flipbook!

An important note – my first frame is frame 1, not frame 0. In order to correctly import the files in the cop, we need to start with an image named 0001. I’ll explain why in the next section.

7

Once rendered, to get these frames into a flipbook I made a cop net with a file node attached to a mosaic node.

8

If I tried to load the files using the default image sequencing, I’d have gotten an error, because it would look for frames numbered 0001, 0002, 0003 and so on – the $F4 in the filepath means the current frame number padded to four digits.

$HIP/render/flipbook_test.mantra_ipr.$F4.exr

My render was producing 0001, 0004, 0007 and so on. To make the node look for these numbers, I had to replace the $F4 with an expression that generates the sequence. To get the leading zeros, I used the padzero() function, which takes a padded length and an integer.

To get the integer that would match up to my render sequence, I took the current frame, took one away from it, multiplied this by N then added one.

The cop moves through frames ($F) starting at 1 and incrementing by 1. This is very important, because presuming it was base 0 broke things for a while! The textport is your friend for finding these things…

padzero(4, ($F-1)*N+1)
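
For example, with N = 3, cop frame 1 evaluates to (1−1)×3+1 = 1, frame 2 to 4, and frame 3 to 7 – matching the 0001, 0004, 0007 numbering of the render.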

This gave me a complete file path that looked like the one below. The backticks are important, as they tell Houdini to evaluate that section as an expression. Otherwise, it treats it as part of the string and produces an error, as we don’t have a file with that literal name!

$HIP/render/flipbook_test.mantra_ipr.`padzero(4, ($F-1)*3+1)`.exr

9

I then set my mosaic up to take 16 images in a frame and 4 per line, and made myself a nice square texture sheet!

10


Sand Deformation Finished!

I finished my sand deformation and kick-up VFX! The lighting isn’t great and I’m still unhappy with the horribly inefficient blueprints, but visually I’m happy with the shader and VFX, and I’m ready to move on from this one. Check it out below!

It’s been nice to be presenting things properly again – it’s been more than a year since I last posted to Vimeo…


Finished the VFX, Broke the Lighting

1d556e58c032c7fc9ee073f6a0d2d844.gif

Made the final tweaks to the FX – adding alpha and size scaling over life to the smoke, and spending god knows how long tweaking colours… (That one made me rethink this whole art creation thing and made me want to go write some scripts!)

I attempted to make some changes to the lighting and sky sphere and pretty much broke everything. I’ll polish that up and then probably call this done.

I’ve got some ideas for other types of kick-up VFX for different terrain types and new shaders, but the system for the particle spawning is so horrible (read: raycasts every frame) that I’m tempted to ditch this and possibly come back for a total refactor later.