Niagara First Look

I had a first look at Niagara last night, following on from my big ramble about how hyped I am about it. Haven’t made anything proper yet, just made a module and looked at how variables are defined.

To enable Niagara, go into the plugin manager and enable it under FX. This gives you some new asset types.


Systems are collections of emitters that can be placed in the world; emitters are components of particles that can be fitted together to make systems; and modules are scripts that can be used inside of emitters. Functions are bits of script that can be used within modules, and parameter collections allow you to define your own global params.

I really like this way of working. In larger studios I can see FX artists being responsible for emitter and system creation, with TAs creating modules and functions for them to use.


I started by creating a test module with the aim of figuring out how to connect a couple different variables.

All the parameters are stored within the parameter map, and you can get and set variables from it. Namespace is important! To add a user input value, give it the Module namespace.
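
To get the idea across, here's a toy sketch of a namespaced parameter map in plain Python – this is NOT Niagara's API, just an illustration of how a namespace prefix scopes where a variable lives and who sets it:

```python
# Toy namespaced parameter map -- an illustration only, not Niagara code.
params = {}

def set_param(name, value):
    params[name] = value

def get_param(name, default=None):
    return params.get(name, default)

# "Module" namespace: user-facing inputs exposed by this module.
set_param("Module.Color", (1.0, 0.2, 0.2))
# "Particles" namespace: per-particle attributes the sim reads/writes.
set_param("Particles.Age", 0.0)

print(get_param("Module.Color"))
```

The point being: the same map holds everything, and the prefix tells you (and the editor) whether a value is a module input, a per-particle attribute, and so on.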

I allowed the user to set colour, and then multiplied it with another user input to let them define the emissive intensity.

I then added a size multiplier, so that the size gets larger as the particle ages. Initially this didn’t work, as I was getting size, multiplying it, then setting it. Because I was multiplying by zero on the first frame, I always ended up with a value of zero. Remembering execution type is really important here! To fix this I needed an initial value, set on spawn (for performance), that was then multiplied on update by the particle age.
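
The bug is easier to see in plain code. Here's a minimal Python sketch of the same logic (not Niagara code – the attribute names and the normalised 0..1 age are assumptions for the illustration):

```python
# Sketch of the spawn/update ordering bug -- plain Python, not Niagara.

def broken_update(p, age):
    # Read-modify-write of Size every frame: on the first frame
    # age == 0, so Size is multiplied by zero and never recovers.
    p["Size"] *= age

def spawn(p, base_size):
    # Fix, part 1: stash the starting size once, on spawn.
    p["InitialSize"] = base_size
    p["Size"] = base_size

def update(p, age):
    # Fix, part 2: derive Size from the stashed value each update,
    # so the first frame's zero can't poison later frames.
    p["Size"] = p["InitialSize"] * (1.0 + age)

p = {}
spawn(p, 2.0)
update(p, 0.0)   # first frame: Size stays 2.0, not 0
update(p, 0.5)   # halfway through life: Size == 3.0
```

Same shape as the Niagara fix: the expensive/stateful part runs once on spawn, and update is a pure function of the spawned value plus age.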


I tried using custom parameter maps for this, but had a lot of trouble and couldn’t make it non-constant. As it turns out, I was looking for something too complicated – you can set a custom parameter inside the emitter itself, then use its name in any module in that emitter.



Another gotcha I found – there’s a “use in Niagara” tickbox on materials – important!


Here’s my particles in action! Not very exciting yet, but a nice intro to the tech.

HYPE! UE4 Niagara

Rambly post about the cool features of Niagara and the interesting performance and production implications of a modular, user-exposed particle workflow… you’ve been warned. 😉

I’ve just watched Epic’s showcase of their new particle editor and I AM SO EXCITED.
This looks awesome – it adds flexibility so that you can make FX work as complicated as you want, depending on your level of technical knowledge, and puts power in the user’s hands. I can’t count how many times I’ve said “if only I could just multiply the velocity by the wind direction” etc…

I love the idea of module creation plus the usual stack editor. A technical artist creating modules using vector math etc and then a VFX artist using those modules to create beautiful effects is a great workflow, using the team’s strengths effectively.

It has a lot of control – using namespaces to define variables, being able to choose execution type and order, access to in game events, logic and variables. I get the impression that if you have the technical skill and the dream, you can do it!

Being HLSL-based is great – that will give us access to a lot of render attributes we wouldn’t be able to touch otherwise, and having it this way, rather than C++ for simulation and HLSL for drawing, makes it nice and easy to use.

Despite this, drawing and simulation are separated, which is great for performance and production. If we want two emitters to have the same behavior but different looks, we can simulate once and give different instructions to draw them.
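
As a toy illustration of that split (plain Python, not Niagara – the 1D positions and string "draw calls" are stand-ins), one simulated state can feed two different renderers:

```python
# Sketch: one simulation, two draws -- an illustration, not Niagara code.

def simulate(positions, velocities, dt):
    # Single shared simulation step (1D for brevity).
    return [p + v * dt for p, v in zip(positions, velocities)]

def draw_sprites(positions):
    return [f"sprite@{p:.1f}" for p in positions]

def draw_meshes(positions):
    return [f"mesh@{p:.1f}" for p in positions]

state = simulate([0.0, 1.0], [2.0, 2.0], 0.5)
print(draw_sprites(state))  # same behaviour...
print(draw_meshes(state))   # ...different look, no second sim
```

The simulation cost is paid once; each renderer is just a different interpretation of the same particle state.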

I’m intrigued by the snippets of HLSL you can write – learning a non-node-based shader language is something I’ve had on my to-do list for a while, and this might be a nice introduction for me. I’ll 100% be using the HLSL expressions in the stack editor – it’s just like Houdini!

I think the CPU/GPU parity thing is very interesting. I’d tend to do things a different way when working with different processors – I think they used the example of a raycast vs using the depth buffer in the video – and it’s an interesting idea to try to have a single way of working. I’m not sure if this is more or less flexible, but it raises concerns about performance and education. Why would you spend milliseconds raycasting on the GPU when you could look up the depth buffer? If you didn’t know that raycasting wasn’t the best solution, would you ever find out?

The ideas of exposing values, inheritance and overrides are fantastic, and exactly how we should be making games. This gives us maximum flexibility and re-usability on assets. It allows tweaks to be made by those with less knowledge of the systems, while people with more techy knowledge create the building blocks. Only needing to do this once means more time can be spent on developing other tools and working towards future needs. The only caveat is being careful about what’s exposed and to whom. When the VFX artist exposes rate to the environment artist and they set it to 2000 because it looks the best, the TA will be spending their time doing content monitoring and profiling to find stuff like this, rather than investing in the tools and future tech that setting things up only once was supposed to free them up for.
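
The inheritance-plus-overrides idea is simple enough to sketch. Here's a toy version in plain Python (not Niagara's asset system – class and parameter names are made up): a child asset stores only the values it overrides and falls back to its parent for everything else.

```python
# Toy emitter inheritance with overrides -- an illustration only.

class Emitter:
    def __init__(self, params=None, parent=None):
        self.parent = parent
        self.overrides = dict(params or {})

    def get(self, name):
        # Child's own overrides win; otherwise defer to the parent chain.
        if name in self.overrides:
            return self.overrides[name]
        if self.parent is not None:
            return self.parent.get(name)
        raise KeyError(name)

base = Emitter({"Rate": 50, "Color": "red"})
child = Emitter({"Color": "blue"}, parent=base)

print(child.get("Rate"))   # inherited from base: 50
print(child.get("Color"))  # overridden: blue
```

Fixing a value in the base propagates to every child that hasn’t overridden it – which is exactly the re-usability win, and also exactly why what you expose matters.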

Overall, I think this is a great direction for UE4 to be going in with its particle editor – can’t wait to give it a shot!