Eevee Crash Course

A new age is upon us. Blender Internal is dead, and we have a shiny new engine to replace it. Moving to a modern, realtime engine isn’t that easy, though: there’s a lot to learn. This quick intro will help ease you into the age of Eevee. Since this is only a quick overview, I won’t be getting too in-depth about all the different settings and the algorithms behind them. I’m also skipping materials, since the workflow in Eevee is not that different from Cycles: we build materials with node networks and use (almost) the same set of nodes.

I’ve also included links with more information for some of the new techniques and algorithms. Check them out if you want to learn more. If you are into code, you can also browse the source code.

Coming from Cycles

What are the differences?

Cycles is an unbiased ray tracer. Ray tracing is a rendering technique based on tracing the path of rays of light through a scene. Rays and materials follow the rules of physics, so the way different materials react to light is physically accurate. This method produces highly realistic images but takes a long time to render (as we already know 🙂 ).

On the other hand, Eevee is a rasterizer that uses realtime rendering techniques, much like game engines. Rasterization is a technique where the scene’s geometry is projected onto pixels. The color of each pixel is determined by the material’s shader code, based on the surface normals and the positions of the lights. On top of that, Eevee uses additional techniques to make the result look better and simulate different effects.

Cycles and Eevee serve different needs. With Cycles you get realism, while Eevee gives you more flexibility and a faster workflow.

What nodes are not available?

Currently Eevee supports all material nodes with the exception of the Velvet and Toon shaders. As a (very rough) alternative to Velvet you can use the Principled shader; just bump the sheen and roughness values.

To replace Toon you can now make your own ghetto toon shaders with the new Shader to RGB node. Plug the output of any shader into a Shader to RGB node, then into a color ramp and finally into an emission shader with a strength of 1 to get started. You can use this new node to create masks for light/shadow areas, feed color ramps for gradients and more. And of course you can mix this with the new Grease Pencil goodness in 2.80.
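
If you like setting things up from Python, here’s a rough sketch of that node chain using bpy. The material name is just a placeholder and the node identifiers are from the 2.80+ API:

```python
import bpy

mat = bpy.data.materials.new("GhettoToon")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

diffuse = nodes.new('ShaderNodeBsdfDiffuse')    # any shader works as the starting point
to_rgb = nodes.new('ShaderNodeShaderToRGB')     # turns the shading result into a color
ramp = nodes.new('ShaderNodeValToRGB')          # color ramp to quantize the gradient
emission = nodes.new('ShaderNodeEmission')      # re-emit the ramped color
output = nodes.new('ShaderNodeOutputMaterial')

ramp.color_ramp.interpolation = 'CONSTANT'      # hard steps for the toon look
emission.inputs['Strength'].default_value = 1.0

links.new(diffuse.outputs['BSDF'], to_rgb.inputs['Shader'])
links.new(to_rgb.outputs['Color'], ramp.inputs['Fac'])
links.new(ramp.outputs['Color'], emission.inputs['Color'])
links.new(emission.outputs['Emission'], output.inputs['Surface'])
```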

Jumping between engines

Eevee materials work with Cycles and vice versa. Lights are a different story: Eevee doesn’t use nodes for lights like Cycles does, so lighting can look a bit different in each engine. You might also have to tweak engine-specific parts like the different settings for volumetrics, DOF, etc.

Despite that, it’s actually a good idea to start working in Eevee and switch to Cycles for the final renders. You can take advantage of the realtime workflow and then move to ray tracing for the final stretch. Note that you can also plug nodes into engine-specific outputs now, which means you can create multi-engine materials. Check out the “Target” dropdown in the output node.
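
That dropdown is also exposed to Python as the output node’s target property, so a quick sketch of a multi-engine material could look like this (the material name is hypothetical):

```python
import bpy

mat = bpy.data.materials["MyMaterial"]  # hypothetical existing material
nodes = mat.node_tree.nodes

# One Material Output node per engine; the Target dropdown maps to the
# node's target property ('ALL', 'EEVEE' or 'CYCLES')
out_eevee = nodes.new('ShaderNodeOutputMaterial')
out_eevee.target = 'EEVEE'
out_cycles = nodes.new('ShaderNodeOutputMaterial')
out_cycles.target = 'CYCLES'
# ...then plug a different shader tree into each output
```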

Samples

In the world of Eevee, “samples” means Temporal Anti-Aliasing (TAA) samples. TAA is a new-ish AA technique used in modern game engines. It combines previously rendered frames to smooth out jagged edges. Eevee gets these “frames” by jittering the camera matrix (moving the camera randomly).

Setting this to 1 turns antialiasing off, since it only renders one frame/sample (but gives you a solid performance boost!). Note that this setting also clamps the volumetric samples.
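
For reference, the two sample counts are exposed on the scene’s Eevee settings in Python:

```python
import bpy

eevee = bpy.context.scene.eevee
eevee.taa_samples = 16          # viewport samples; 1 disables AA but is fastest
eevee.taa_render_samples = 64   # samples used for final renders
```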

The Wikipedia article on TAA will help you understand it better, as well as this nice summary of AA techniques. Also check out Sketchfab’s announcement on it.

Irradiance volumes

Irradiance volumes are a kind of light probe used for indirect lighting.

Light probes are a new object type unique to Eevee. They capture the scene’s environment for reflections or indirect lighting. Irradiance Volumes approximate global illumination by precalculating the intensity of the light. In other words, they bake indirect lighting.

To start using them, first add an irradiance volume to your scene and scale it to encompass the area where you want indirect lighting. Now bake it by going to the Indirect Lighting panel in the properties editor. You can also enable auto baking, but watch out for slowdowns if your scene is too heavy. Remember that more space covered by irradiance volumes means more baking time.

Looking further down in that panel you will also find Diffuse Bounces. Coming from Cycles you might be tempted to increase this, but the default (3) is already good enough for most cases. Diffuse Occlusion is the size of the shadow map of each irradiance sample. Irradiance samples form the volume and store the incoming light for a specific point in space. We don’t need as much information for indirect lighting, so the default of 32px is also good enough.
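
If you like scripting your setup, here’s a rough bpy sketch of the whole process. The location, scale and values are made up, and the property and type names (GRID, gi_diffuse_bounces, etc.) are from the 2.8x/2.9x Python API:

```python
import bpy

# Add an irradiance volume and scale it to cover the area that needs indirect light
bpy.ops.object.lightprobe_add(type='GRID', location=(0.0, 0.0, 1.0))
bpy.context.object.scale = (4.0, 4.0, 2.0)

# Global indirect lighting settings (the defaults are usually fine)
eevee = bpy.context.scene.eevee
eevee.gi_diffuse_bounces = 3            # Diffuse Bounces
eevee.gi_visibility_resolution = '32'   # Diffuse Occlusion (shadow map per sample)
# eevee.gi_auto_bake = True             # auto baking; can slow down heavy scenes

# Bake the light cache (same as the bake button in the Indirect Lighting panel)
bpy.ops.scene.light_cache_bake()
```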

The floating balls represent the irradiance samples.

Shadows

ESM vs VSM

General shadow settings live in the Shadows panel in the render settings, and the first option you will notice there is Method. Clicking this dropdown presents two mysterious names:

  • ESM (Exponential Shadow Mapping)
  • VSM (Variance Shadow Mapping)

Both ESM and VSM are techniques to filter shadow maps to prevent aliasing. ESM uses an exponential mathematical function, while VSM stores extra information about depth, allowing maps to be filtered on the GPU the way textures are (e.g. mipmaps, gaussian blur, etc.).

Notice the light bleed in ESM and the streak of light between shadows in VSM

Which one to use depends on your scene and is a subtle decision. VSM doesn’t suffer from the same light bleeding artifacts ESM has, and in simple scenes it can give you fuller shadows. However, it can show some light bleed when the scene is too complex or objects are too close. Another downside of VSM is that it uses more RAM (since it stores more info).

You can read more on the juicy math behind ESM in the original paper. And of course, there is one for VSM too.

Controlling quality

Cube Size and Cascade Size are the sizes of the shadow maps for point/spot/area lamps and sun lamps respectively. Higher resolution maps will yield better results at the expense of performance. Another option is High Bitdepth, which will create 32-bit shadow maps that look better but take more RAM and processing power.
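
For reference, here’s how those options map to the scene’s Eevee settings in Python. Note that the shadow method dropdown only exists in the 2.8x versions of the engine, so treat that last line as version-specific:

```python
import bpy

eevee = bpy.context.scene.eevee
eevee.shadow_cube_size = '1024'        # point/spot/area lamp shadow map resolution
eevee.shadow_cascade_size = '2048'     # sun lamp shadow map resolution
eevee.use_shadow_high_bitdepth = True  # 32-bit shadow maps: better quality, more RAM

# ESM vs VSM, as discussed above (2.8x only; later versions dropped this option)
eevee.shadow_method = 'VSM'
```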

More shadow options

Check out your light’s properties. You will find another Shadows panel with even more options. The options shown depend on the light type, but we actually have a great deal of control over Eevee’s shadows. These settings are not global like the previous ones; they are specific to each light.

At the bottom you will notice two settings: Exponent and Bleed Bias. These control the bias for reducing light bleed for ESM and VSM respectively. Use them to control how much light bleeds into the shadow, under- or over-darkening the edges.

The light streak is gone, but watch out: it can also make shadow edges look jagged.

Reflections

Screen Space Reflections

Screen Space Reflection is a technique for reusing screen space data (AKA the stuff that’s visible) to calculate reflections. It works in a similar way to ray tracing, but intersects rays with the depth buffer instead of actual geometry.

Screen Space Reflections can give us good looking reflections in realtime, but they have one major downside: they can only reflect what’s visible. They can’t reflect an object, or part of it, that is obscured, outside the field of view, facing away from the camera, etc.

Notice the missing undersides in the reflections

The good news is that SSR can work together with the other probes. SSR gets priority, but if a ray misses, Eevee will fall back to cubemaps and planar reflections for help.
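
Here’s what enabling and tuning SSR looks like from Python. The numbers are just starting points, not recommendations:

```python
import bpy

eevee = bpy.context.scene.eevee
eevee.use_ssr = True             # enable Screen Space Reflections
eevee.use_ssr_halfres = False    # trace at full resolution (slower, cleaner)
eevee.use_ssr_refraction = True  # optional: screen space refraction too
eevee.ssr_quality = 0.5          # precision of the ray marching
eevee.ssr_thickness = 0.2        # assumed surface thickness when a ray hits the depth buffer
```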

This answer on Stack Exchange goes into more detail on how SSR works.

Cubemaps

A cubemap is a type of light probe in Eevee. They are used to capture the surroundings of objects for reflections. A single cubemap is a collection of six square textures that represent the reflections in an environment, with the six squares forming the faces of an imaginary cube. A cubemap probe represents an array of several cubemaps.

Make sure to cover the entire area you want reflected and increase the clipping end too!
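
A small bpy sketch for adding and tweaking a cubemap probe. The values are placeholders and the CUBEMAP type name is from the 2.8x/2.9x API:

```python
import bpy

# Add a Reflection Cubemap probe roughly in the middle of the area to capture
bpy.ops.object.lightprobe_add(type='CUBEMAP', location=(0.0, 0.0, 1.5))
probe = bpy.context.object.data

probe.influence_distance = 5.0   # how far the captured reflections reach
probe.clip_end = 50.0            # raise this so distant objects make it into the capture

# Cubemaps are baked into the same light cache as irradiance volumes
bpy.ops.scene.light_cache_bake()
```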

Wikipedia has an article about Cubemapping and you can also get some more info from Unity’s documentation.

Planar reflections

Planar reflections, or Reflection Planes, are another kind of reflection probe. They are used for smooth planes like mirrors or windows. They do their magic by re-rendering the scene with a flipped camera and then placing the result in the probe’s area. Note that I said re-render: they will add to render times and decrease performance.

Also note that unless Screen Space Reflections is enabled, these planes will only work on specular surfaces with a roughness close to zero. On the other hand, if SSR is enabled these probes act like team players and help accelerate the ray marching process, adding objects and data missing from view space.

To use Planar Reflection probes, place them slightly above the reflective surfaces and scale them to fill. Don’t get too close though, as they may capture the other side of the surface.
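
In Python that placement could look something like this (the location and scale are made up for illustration):

```python
import bpy

# Add a Reflection Plane just above the mirror-like surface and scale it to cover it
bpy.ops.object.lightprobe_add(type='PLANAR', location=(0.0, 0.0, 0.01))
bpy.context.object.scale = (2.0, 2.0, 1.0)

# With SSR enabled the plane also feeds the ray marching with off-screen data
bpy.context.scene.eevee.use_ssr = True
```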

Check out Unreal Engine’s docs for more bits of information about this.

Planar reflections don’t use ray tracing, so no infinite mirror effect for Eevee.

Volumetrics

Volumetric settings

To see any volumetric materials you first need to enable Volumetric in the render settings. And of course you need a material (or world) with something plugged into the Volume output.

You might notice it looks a bit choppy at first. Lowering the tile size will improve the quality. You can also bump the volumetric samples, but remember they are clamped by the global sample count.
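
These knobs live under the scene’s Eevee settings; the numbers below are just examples:

```python
import bpy

eevee = bpy.context.scene.eevee
eevee.volumetric_tile_size = '4'     # smaller tiles = finer volumes, slower renders
eevee.volumetric_samples = 64        # clamped by the global sample count
eevee.volumetric_start = 0.1         # range (from the camera) where volumes are evaluated
eevee.volumetric_end = 100.0
eevee.use_volumetric_lights = True   # let lights scatter inside volumes
eevee.use_volumetric_shadows = True  # let volumes cast shadows (costs performance)
```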

Principled volumetric

The Principled shader now has a volumetric cousin. This node simplifies volumetric materials the same way the Disney princess principled shader simplified PBR. It combines Volume Scatter, Volume Absorption and a few others. You can create fog, smoke, wisps and any other volumetric material you can think of with just this shader (and textures of course).
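
A minimal fog material built with this node could look like this in bpy (the density value is arbitrary):

```python
import bpy

mat = bpy.data.materials.new("SimpleFog")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Plug a Principled Volume into the Volume output instead of the Surface output
volume = nodes.new('ShaderNodeVolumePrincipled')
volume.inputs['Density'].default_value = 0.05   # thin, fog-like volume

output = nodes.get('Material Output')
links.new(volume.outputs['Volume'], output.inputs['Volume'])
```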

Volumetric nirvana

Post-processing effects

Depth of Field

Eevee’s DoF is not that different from Cycles: you will find Distance and F-Stop in the camera properties. As usual you need to enable DoF in the render settings first. Note that the settings for Eevee and Cycles are two separate sets; they work the same way but the engines don’t share them. If you need to squeeze out more performance or get more blur, try playing with max distance in the render settings. Also, in case anyone is caught off-guard by this: the effect is only visible in camera view.
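
For scripting, note that in 2.81+ the DoF settings live under the camera data. This sketch assumes that API, and the render-settings property at the end is an assumption on my part (check the name in your version):

```python
import bpy

cam = bpy.context.scene.camera.data
cam.dof.use_dof = True
cam.dof.focus_distance = 3.0   # the Distance value, in Blender units
cam.dof.aperture_fstop = 1.8   # lower f-stop = stronger blur

# Cap on how large the blur can grow, from the Eevee render settings
# (assumed property name; the panel label may differ between versions)
bpy.context.scene.eevee.bokeh_max_size = 100.0
```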

Bloom

Bloom is an effect that reproduces an optical artifact of real world cameras. It adds pools of light pouring from the borders of bright areas to create an illusion of overwhelming brightness.

Remember using the Glare node with Fog Glow in the compositor? Well, forget him. Bloom is our friend now. The effect is the same as what we used to do in the compositor but more flexible and realtime. Watch out though, it’s easy to abuse this effect and turn your render into a soup of near-white pixels.
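
If you prefer setting it from Python, these are the bloom properties on the scene’s Eevee settings (the values are just a sane-ish starting point):

```python
import bpy

eevee = bpy.context.scene.eevee
eevee.use_bloom = True
eevee.bloom_threshold = 0.8   # only pixels brighter than this start to glow
eevee.bloom_intensity = 0.05  # keep this low to avoid the near-white soup
eevee.bloom_radius = 6.5      # how far the glow spreads
```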

We will abuse it anyways. Ahhh, shiny…

That’s it for today! I will be posting more tutorials as I continue to use Eevee and find new stuff. These next months will be really exciting, as more people start using the engine and come up with new techniques and best practices. Stay tuned.
