HYPE! UE4 Niagara

Rambly post about the cool features of Niagara and the interesting performance and production implications of a modular, exposed-to-the-user particle workflow… you’ve been warned. 😉

I’ve just watched Epic’s showcase of their new particle editor and I AM SO EXCITED.
This looks awesome – it adds flexibility so that you can make FX work as complicated as you want, depending on your level of technical knowledge, and puts power in the user’s hands. I can’t count how many times I’ve said “if only I could just multiply the velocity by the wind direction” etc…

I love the idea of module creation plus the usual stack editor. A technical artist creating modules using vector math etc., and a VFX artist then using those modules to create beautiful effects, is a great workflow that uses the team’s strengths effectively.

It has a lot of control – namespaces to define variables, the ability to choose execution type and order, access to in-game events, logic and variables. I get the impression that if you have the technical skill and the dream, you can do it!

Being HLSL-based is great – it will give us access to a lot of render attributes we wouldn’t be able to touch otherwise, and having one language rather than C++ for simulation and HLSL for drawing makes it nice and easy to use.

Despite this, drawing and simulation are separated, which is great for performance and production. If we want two emitters to have the same behavior but different looks, we can simulate once and give different instructions to draw them.

I’m intrigued by the snippets of HLSL you can write – learning a non-node-based shader language is something I’ve had on my to-do list for a while, and this might be a nice introduction for me. I’ll 100% be using the HLSL expressions in the stack editor – it’s just like Houdini!
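For instance, here’s the kind of one-liner I’d hope to type straight into a stack expression – hypothetical names, just riffing on the Particles/User/Engine namespace convention from the video, so treat it as a sketch rather than actual module code:

```hlsl
// Hypothetical Niagara-style expression: push each particle's velocity
// along a user-exposed wind direction every frame.
// Particles.Velocity, User.WindDirection, User.WindStrength and
// Engine.DeltaTime are assumed parameter names, not confirmed API.
Particles.Velocity += normalize(User.WindDirection) * User.WindStrength * Engine.DeltaTime;
```

That’s exactly the “multiply the velocity by the wind direction” tweak from earlier – one line in the stack instead of a feature request to a programmer.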

I think the CPU/GPU parity thing is very interesting. I’d tend to do things a different way when working with different processors – I think they used the example of a raycast vs using the depth buffer in the video – and it’s an interesting idea to try and have a single way of working. I’m not sure if this is more or less flexible, but it provokes concerns regarding performance and education. Why would you spend milliseconds raycasting on the GPU when you could look up the depth buffer? If you didn’t know that raycasting wasn’t the best solution, would you ever find out?
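To make the cost gap concrete: on the GPU, a depth-buffer “collision” test is basically one projection and one texture fetch per particle. Something like the sketch below – and to be clear, this isn’t the actual Niagara collision code; the function name, inputs and depth convention are all assumptions for illustration:

```hlsl
// Sketch of a depth-buffer collision test (illustrative only).
// SceneDepth, DepthSampler and ViewProjection are assumed to be fed
// into the script somehow; the real engine plumbing will differ.
bool HitsDepthBuffer(float3 WorldPos, float4x4 ViewProjection,
                     Texture2D SceneDepth, SamplerState DepthSampler)
{
    // Project the particle into clip space, then into [0,1] screen UVs.
    float4 Clip = mul(float4(WorldPos, 1.0f), ViewProjection);
    float2 UV = (Clip.xy / Clip.w) * float2(0.5f, -0.5f) + 0.5f;

    // One texture fetch: the depth the camera already rendered at this pixel.
    float SceneZ = SceneDepth.SampleLevel(DepthSampler, UV, 0).r;

    // If the particle is further away than the visible surface, call it a hit.
    // (The comparison flips if the engine uses reversed-Z.)
    return (Clip.z / Clip.w) >= SceneZ;
}
```

One texture fetch per particle versus walking scene geometry per ray – that’s the milliseconds the question above is getting at.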

The ideas of exposing values, inheritance and overrides are fantastic, and exactly how we should be making games. This gives us maximum flexibility and re-usability on assets. It lets tweaks be made by those with less knowledge of the systems, while people with more technical knowledge create the building blocks. Only needing to do this once means more time can be spent on developing other tools and working towards future needs. I guess the only caveat is being careful about what’s exposed and to whom. When the VFX artist exposes rate to the environment artist and they set it to 2000 because it looks the best, the TA ends up spending their time on content monitoring and profiling to find stuff like this, rather than investing in the tools and future tech that setting these things up only once was supposed to free them up for.

Overall, I think this is a great direction for UE4 to be going in with its particle editor – can’t wait to give it a shot!
