Everything you wanted to know about outline shaders.

Outline Shader Report

So you want to outline your object in Unity3D for Game Development or VR Development? Here’s everything you need to know.


  • Outline shaders are best done as a 2-pass process rather than by tracing an outline.
  • An outline-only approach makes sense only if you actually want nothing but the outline shown. It is not more efficient than an outline-with-object effect, because the object must still be rendered as a mask.
  • World-space outline width is easy to do, though at far distances the outline disappears.
  • The standard method of outlining does not handle hard edges very well. If none of your meshes have hard edges, go for the standard approach! This one reportedly performs really well on mobile: http://wiki.unity3d.com/index.php?title=Outlined_Diffuse_3


Overview of Outlining Techniques

All outline shaders work in 2 passes:

  • 1. An outline/silhouette pass that draws the mesh, but slightly “larger”.
  • 2. A main pass that draws the actual object on top of that.

This means that the object is rendered twice. Without using an image effect or some other complicated line-tracing method, there is no way around this. Even an outline-only effect still requires the object to be rendered: on the second pass the object is rendered as a transparent mask that “peers through” the silhouette. So there is no significant performance gain from rendering the outline only, since the object must be rendered again as a “pass-through” mask anyway. You might save a little by skipping lighting calculations on the object, but if you plan to have the object in the scene anyway, it is best to just go with the standard approach.


Possible Line tracing solutions (spoiler: they are all pretty much a no go :/ )

Tracing a line around a silhouette is not something shaders do well.

  • 1. To trace a line around an object, one would need to perform some pretty heavy calculations on the CPU each frame to determine the silhouette. On PC this could be done on the GPU, but even that is not ideal performance-wise. After that you’d have to create a 2D mesh representing the outline each frame and render it separately (reprocessing a mesh every frame is a terrible idea on mobile!).
  • 2. Alternatively, one could use an image effect, perhaps with a depth image, but this can have messy side effects in the rest of the scene (it’s not very modular).
  • 3. One could also use an image effect to draw the silhouettes of all outlineable objects into a 2D image, then use a method similar to the UnityGUI outline effect by drawing the silhouette offset in all 4 diagonal directions. This means the outlined objects must be rendered an extra time, then the outlines themselves must be rendered, plus another pass if you want to “cut out” the middle parts and keep only the outlines. On top of this, the width will be screen-space based. None of this is good for VR, or needs more research at best.
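To make the diagonal-offset idea concrete, here is a minimal CPU-side sketch in Python, operating on a 2D binary silhouette mask. The function name and the offset parameter are illustrative; a real implementation would do this per-pixel in a shader.

```python
def outline_from_mask(mask, offset=1):
    """Dilate a binary silhouette mask by OR-ing copies shifted in the
    four diagonal directions (UnityGUI-outline style), then subtract
    the original silhouette to "cut out" the middle and keep only the
    outline ring.  mask is a list of rows of 0/1 values."""
    h, w = len(mask), len(mask[0])
    dilated = [[0] * w for _ in range(h)]
    for dy, dx in [(-offset, -offset), (-offset, offset),
                   (offset, -offset), (offset, offset)]:
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w and mask[sy][sx]:
                    dilated[y][x] = 1
    # Keep only pixels covered by the dilated silhouette but not the original.
    return [[dilated[y][x] & (1 - mask[y][x]) for x in range(w)]
            for y in range(h)]
```

Note that the resulting outline width is measured in pixels, which is exactly why this approach gives a screen-space width.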

World-space vs Screen-space Based Outline Width

World-space outline width is easily achievable; most standard outline shaders actually use world-space width by default and then add calculations to make the width screen-space based. With a world-space width, the line on objects far away will disappear at some point. It’s fairly straightforward to have a world-space width with a minimum screen-space width, though this comes with a few extra calculations per vertex (but only a few).
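The clamp described above can be sketched as follows. This assumes a simple pinhole model where on-screen size is roughly world size divided by distance; the function and parameter names are mine, not Unity’s API.

```python
def outline_width(world_width, distance, min_screen_width, screen_scale=1.0):
    """Per-vertex extrusion distance in world units.

    Under a pinhole model the on-screen width of a world-space outline
    is roughly world_width * screen_scale / distance, so it shrinks
    toward zero far away.  Enforcing a minimum screen-space width means
    converting that minimum back into world units at the vertex's
    distance and taking the larger of the two."""
    min_world = min_screen_width * distance / screen_scale
    return max(world_width, min_world)
```

Close to the camera the world-space term wins, so the outline looks physically attached to the object; far away the screen-space floor takes over and the outline stays visible.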


Outline / Silhouette Pass

When rendering the outline, the object is rendered as a solid colour but the vertices are shifted “outwards”. There are 3 methods of calculating this that I’ve come across.

Method 1: Mesh Normals. Simply use the mesh normals. This is fine for smooth meshes where each vertex has a single normal.
Advantage: Works with all smooth meshes.
Disadvantage: On hard edges, where a vertex is duplicated with a different normal for each face it is part of, this causes undesirable outline artifacts.
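The vertex step of this method is just a shift along the normal. A minimal Python sketch of what the vertex stage computes (in a real shader this happens per-vertex on the GPU):

```python
def extrude_along_normals(vertices, normals, width):
    """Method 1: shift each vertex outward along its own mesh normal.
    vertices and normals are lists of (x, y, z) tuples; normals are
    assumed to be unit length, as mesh normals normally are."""
    return [(vx + nx * width, vy + ny * width, vz + nz * width)
            for (vx, vy, vz), (nx, ny, nz) in zip(vertices, normals)]
```

The hard-edge problem is visible here too: a cube corner stored as three duplicated vertices with three different normals extrudes to three different positions, so the silhouette tears apart at the edge.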


Method 2: Estimate “smooth” normals from a center. A way to get around these hard-edge artifacts is to estimate a normal: for each vertex, go “outward” from a center point and use that direction, essentially scaling the object up slightly.
Advantage: Gets rid of most hard-edge problems.
Disadvantage: For more complicated meshes with concave sections, simply scaling up does not work.
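A sketch of the estimate, assuming the chosen center is something like the mesh bounds center (the choice of center is up to you):

```python
import math

def estimate_normal_from_center(vertex, center):
    """Method 2: use the normalised direction from a chosen center to
    the vertex as a stand-in "smooth" normal.  Extruding along these
    directions amounts to scaling the silhouette up around the center,
    which is why concave regions misbehave: their true normals point
    nowhere near "away from the center"."""
    dx, dy, dz = (vertex[i] - center[i] for i in range(3))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)
```

For a cube centred at the origin, a corner vertex gets a single diagonal normal instead of three conflicting face normals, which is exactly what fixes the hard-edge tearing.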


Method 3: Vertex Colour Normals. The method that should offer the best of both worlds is to pre-calculate a smooth normal for each vertex by averaging the normals at that position, then pass that information to the shader as vertex colour data.
Advantage: Should work with all meshes, hard edges or not, since the pre-calculation ensures “smooth” normals by averaging the disparate normals on hard edges.
Disadvantage: Requires a preprocessing step on the mesh (but only once!), and it uses up the vertex colour channel, so per-vertex colouring is no longer possible (not without a workaround). The mesh can still be tinted as a whole and textured as normal.
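The preprocessing step can be sketched like this. It groups vertices that share a position (hard edges duplicate positions with different normals), averages their normals, and returns the result; in Unity you would then pack these into the mesh’s vertex colour channel, which is not shown here.

```python
import math

def smooth_normals(vertices, normals, eps=1e-6):
    """Method 3 preprocessing sketch: for every set of vertices that
    share (approximately) the same position, replace their normals
    with the normalised average, so hard edges get one agreed-upon
    "smooth" normal.  Runs once, offline, not per frame."""
    groups = {}
    for i, v in enumerate(vertices):
        # Quantise positions so duplicated hard-edge vertices group together.
        key = tuple(round(c / eps) for c in v)
        groups.setdefault(key, []).append(i)
    out = list(normals)
    for indices in groups.values():
        sx = sum(normals[i][0] for i in indices)
        sy = sum(normals[i][1] for i in indices)
        sz = sum(normals[i][2] for i in indices)
        length = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
        avg = (sx / length, sy / length, sz / length)
        for i in indices:
            out[i] = avg
    return out
```

Because this runs once at import/build time, the per-frame cost is zero, which is what makes this approach attractive on mobile.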

Thanks for reading and remember to subscribe to our YouTube Channel for more Game Dev & VR Dev tutorials and topics!