Ember Effect in Trapcode Particular

Since I’m still a rather poor student, I shamefully have to admit that I haven’t purchased the Red Giant Trapcode Suite yet. Instead, I use their generous trial version once in a while when I need to create something fancy for our projects. This time I was going to produce an intro sequence for a game we had been working on last Christmas, so I took the time to learn how to make a fairly simple ember effect. I was fascinated by how effective a simple noise texture could be for creating the smoke effect in the background, rather than spending expensive calculations on a separate particle system for it.

Just to be clear, this is not exactly a tutorial. I just wrote down the overall process, so I might have missed some important details. You’ll notice that I jump around a lot between the different components in After Effects, mostly because I was tweaking the parameters back and forth as I went along.

Please visit Red Giant if you’re interested, their products are super awesome.




Go ahead and create a new composition in any resolution you like. I would start off with a high-resolution composition, since it works better to scale an effect down than to try to scale it up. By default, it’s already set to HDTV 1080 29.97, so we’re ready to start.

Create a new black solid matching the resolution of the composition.

Go to Effects & Presets and scroll down to Red Giant→RG Trapcode→Particular. Drag and drop it onto your recently created solid. If you now play the timeline, you’ll see a bunch of white particles emitting from the center of the screen. It doesn’t do much more than this for now, but this is where the fun starts.

In Effect Controls (press F3 as a shortcut), you’ll see plenty of sections to choose from. Each section holds attributes that we can now manipulate, and we are going to start right away with the Emitter.

Emitter I


1. Slow the velocity down to around 20 and change the velocity random percentage to about 60%. By pressing TAB, we can automatically step down through the list of attributes.

2. Change the Emitter type to Box and set the Emitter size on the X-Axis to about 1600 and on the Y-Axis to 1360. You might have to check XYZ to be individual in order to access the different axes and perform non-uniform scaling.

3. Set particles/sec to 50 and set direction spread to 20

4. Set velocity to 20

5. Go ahead and rotate 90 degrees on the X-Axis

6. Position the particle system at the bottom of the screen around 960, 944, 0

Particle I

1. Set particle life time to around 6 – 8

2. Set life random to about 30 and set sphere feather to 0

3. Set size to 3 and size random to 50

4. Go to Size over Life and add the steep preset to make the particles fade in much more smoothly. This does the job, so we keep it as it is.


Physics I

1. Set Wind on Y-Axis to -400

2. Set Turbulence Field→Affect Position to 450

Now that’s more like it. By adding some turbulence, affecting the particle positions with a factor and forcing them upwards with the wind, we achieve the swaying effect of a burning fire.

3. Set Turbulence Field→Scale to 6 and complexity to 1

4. Set Turbulence Field→Evolution speed to 15



1. Now we need an actual camera in the composition, so go ahead and create one. Set it to be a One-Node Camera.

2. Set the camera position to about 960, -300, -2666.7

3. Now the particles slowly creep up from the bottom of the screen a couple of seconds into the timeline



1. Set blend mode to Normal

2. Toggle transparency

3. Set Motion Blur→Motion Blur→On

4. Set Motion Blur→Type→Subframe Sample

5. Set Motion Blur→Levels→16

6. Set Motion Blur→Opacity Boost→16

Emitter II

1. Set particles/sec to 25


1. Create a copy of the solid, since we’re going to change the motion parameters

Emitter III – Copy

1. Set Random Seed to 107960. Just because. Magic numbers.

Physics II – Copy

1. Set affect position to 300

2. Set X-Axis offset to 2180

3. Set Y-Axis offset to 2000

4. Set Z-Axis offset to 2840

5. Set scale to 10

6. Set Wind Y-Axis to -550

Blend Mode

1. Set first solid blend mode to Screen


1. Create a new solid and call it “Background” or “BG”

2. From Effects & Presets, add a Gradient Ramp to the new solid

3. Swap the colors so that we’re starting with black from the bottom

4. Set the Ramp Shape to be a Radial Shape

5. Set the Start of Ramp to 982, 1600

6. Set the End of Ramp to 366.9, 144

7. Set Start Color to dark orange

8. Set Ramp Scatter to 11.4

9. Move the background layer down below and set the previous solid’s blending mode to “Screen”


Adjustment Layer I

1. Create a new Adjustment Layer and place it above the two original solids

Solid 1

1. Set particle color to bright orange/yellow

Solid 2

1. Set particle color to strong orange

Adjustment Layer II

1. Set glow threshold to 73

2. Set glow radius to 10



1. Create a new solid and call it noise

2. In Effects & Presets, find Turbulent Noise and apply it to the noise solid

3. Set Noise→Scale→316

4. Set Noise→Contrast→360

5. Set blending mode to Multiply

6. Move it down below the two original solids

7. Set keyframe for Noise→Evolution and Noise→Offset Turbulence

8. Drag to the end of timeline

9. Set a keyframe for Noise→Offset Turbulence at 960, -712


Adjustment Layer III

1. Set Glow Colors A & B Colors

Color A: Bright Orange

Color B: Dark Orange

2. Set glow threshold to 42.2%

3. Set glow operation to Screen

4. Set glow radius to 20

5. Set glow intensity to 3.0

6. Copy another glow

7. Set glow radius to 50


Solid 2

1. Set particles/sec to 40

2. Set particle size to 3.5

3. Set Opacity Boost to 20

Solid 1

1. Set Opacity Boost to 20

Adjustment Layer IV

1. In Effects & Presets, add Hue/Saturation

2. Set Master Hue to -8


Solid 1

1. Set Shutter Angle to 600

2. Set Air→Wind Y-Axis to -700

3. Set particle size to 2

Adjustment Layer V

1. Set glow 1 intensity to 2.8

2. Set glow 1 threshold to 27.8

3. Set glow 2 threshold to 51.4



That’s about it! While there is a general process to follow, I once again realized how much comes down to experimentation and playing around with the available features in After Effects and the Trapcode Suite. Here is the final rendered sequence of the effect in action.


A Study on Discrete LOD in Unity Game Engine

A study of the built-in LOD Group component in Unity carried out by students Ludvig Arlebrink and Fredrik Linde at Blekinge Institute of Technology. This is supplementary material to the full report.

The Unity version used at the time was 2017.3.0f3 (64-bit). We use the Stanford bunny with LODs ranging from 0 to 4 for the experiment. We define five tests: LOD, Crossfade, Dither, No LOD and Empty. LOD is a standard test using Unity’s LOD Group component without any transitions. Crossfade and Dither also use the LOD Group component, but with transitions enabled and set to crossfade. No LOD does not use the LOD Group component and renders the bunny at its maximum LOD. Finally, Empty is just for reference: a completely empty scene with a black clear color.

We began the implementation by creating Unity prefabs for each of the tests, except the empty test. For the LOD, crossfade and dither tests, we add an LOD Group component, and for each LOD we add a child game object with a Mesh Renderer component for the LOD mesh. The No LOD prefab only includes a Mesh Renderer component, with no children.

At the beginning of a test, we instantiate 20 bunnies of the corresponding prefab along each axis, for a total of 8000 bunnies. When the test is complete, we destroy all instances of the bunny and repeat this process until all tests have finished executing. At this point the application enters its final stage by running the empty scene test, and automatically shuts down afterwards. We created a script for the camera to traverse a path defined by a number of points that the camera interpolates between at a constant speed.
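The constant-speed traversal can be sketched like this — not the actual Unity script (which would be C#), just the path-sampling logic in plain C++, with all names invented for illustration:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b) {
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns the position on the piecewise-linear path after travelling
// `travelled` units from the first point, clamping at the last point.
// Each frame the caller would advance `travelled` by speed * deltaTime.
Vec3 samplePath(const std::vector<Vec3>& points, float travelled) {
    for (std::size_t i = 0; i + 1 < points.size(); ++i) {
        float seg = dist(points[i], points[i + 1]);
        if (travelled <= seg && seg > 0.0f) {
            float t = travelled / seg; // linear interpolation inside the segment
            return { points[i].x + (points[i + 1].x - points[i].x) * t,
                     points[i].y + (points[i + 1].y - points[i].y) * t,
                     points[i].z + (points[i + 1].z - points[i].z) * t };
        }
        travelled -= seg; // move on to the next segment
    }
    return points.back();
}
```

Because the parameter is distance travelled rather than a per-segment fraction, the camera moves at the same speed regardless of how far apart the path points are.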

In memory of Mia Guy

I apologize in advance if this is sudden and that it’s none of my business, but I have to write this.

Today I woke up to a tragic message on my facebook page from the husband of my former biology teacher Mia Guy, which stated that she had passed away from cancer. I was aware that she had been hospitalized since the beginning of last week, when the cancer unfortunately had spread to her brain. Somewhere I knew that she didn’t have much time left, and I have thought about her every single day since I first found out what had happened.

Mia Guy was the first teacher I met on my first visit at LBS Nyköping where she worked as a biology teacher and she took great care of me and my mom to make us feel welcome. Later on as I started studying there, she became an important role model for me. In a sense, she was the heart of the place and seemed to have this warm energy that could light up even the darkest days. It was always fun to go to her lessons because it was okay to joke around and she would often throw some jokes back at you. Biology had never been funnier. If you ever needed a shoulder to cry on or yell out your anger, she was always there for you and helped you back up on your feet.

She was at her core fearless and a self-taught master of mindfulness, as she used to say. Everyone who has ever set foot inside that school lost one of their greatest friends today. I’m a big believer in that people still live on inside us, in our memories, and because of that they can never truly die. They act through us and give us strength to carry on, just like they had to even when they didn’t always have the courage or strength to do so…to keep moving forward no matter what. My thoughts go to Mia’s family and I want them to know that Mia will always serve as a shining beacon of light in my house and for all of us that she helped to guide throughout her years. I’m so thankful that I had the chance to get to know her.

Thank you for everything Mia, I will always think of you.


Connecting Maya with custom engine: Part II


I’ve longed to say this for almost a month now…it’s finally done. My custom Maya renderer is finished. Well, at least for now…I would call it an early alpha. If I find time to record a demo of this, I will put it up on the page in an upcoming blog post. There are a lot of extensions that could be made, but in its current state it can do the following:

  • Add and remove
    • Meshes
    • Pointlights
    • Materials
  • Update and handle renaming of previously added nodes
  • Duplication
    • Meshes
    • Pointlights
  • Updating material changes (color and cosine power factor)
  • Load currently connected textures on material. If there is no texture, a dummy texture is given to that material.
  • Updating topology changes on meshes
  • Updating parent and children transforms
  • Track the current camera used in the Maya Viewport
  • Camera Settings:
    • Orthographic and Perspective view
    • Field of view
    • Near and far plane
  • Phong and Blinn-Phong Shading

There were a lot of challenges I had to face in this assignment, and here is a list of the most difficult problems and my approach to solving them.

Several materials with Deferred Shading

At first, I was unsure if my decision to go with Deferred Shading was a smart move at all. The reason was to be able to easily handle a large number of point lights, but then I realized that I would lose per-mesh material data after the first pass when creating the G-buffers. I looked around on forums and asked around at school for solutions; some of the suggestions were:

  • Write material data to its own G-buffer
    • While this would work, it will increase the already demanding bandwidth that comes with a conventional Deferred Shading implementation.
  • Create a lookup texture holding the material data and pass to the light pass
    • Would probably work as well, but seemed like overkill and more work than necessary.

The better solution, as explained to me by my teacher, was to bind a material buffer for each mesh in the first pass and pass its data to the pixel shader. In the pixel shader, I could then apply the material properties to the albedo sampled from the material’s connected texture. By blending the texture with the material color, I get a form of tinting.
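As a rough sketch of that tint — assuming a simple component-wise multiply, which the post doesn’t spell out — the blend boils down to:

```cpp
struct Color { float r, g, b; };

// Multiplies the sampled texture albedo with the material colour.
// The real engine does this in the pixel shader; this CPU version
// just illustrates the blend (multiply is an assumption here).
Color tint(const Color& albedo, const Color& material) {
    return { albedo.r * material.r,
             albedo.g * material.g,
             albedo.b * material.b };
}
```

A white material color leaves the texture untouched, while anything darker or hued tints the albedo toward the material.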

Handling duplication and renaming of nodes

What Maya actually does when duplicating a node is that it “behind the scenes” creates a pre-notation node holding the original mesh name with an added “preNotato” prefix, before renaming it to a unique name in the scene. This means that when a node is duplicated, it still qualifies as something being added to the scene, even though it hasn’t been given its final name yet.

Everything goes fine when adding this node in the rendering engine, but by the time the node is updated in Maya and the messages holding the changes arrive at the rendering engine, the node has been given its final name and won’t be found in the unordered map for that specific node type.

At first I thought I could just look up the node in the unordered map by its old name — the “pre-notation” name — and change the key value. However, since the key is a const data type, this proved impossible and I had to come up with a more drastic solution. What I did instead was to save a reference to the node under the old name, erase the entry registered with the old name and insert the new key with the saved reference to the node. While this doesn’t feel entirely clean or optimized in any way, it works like a charm given proper error handling.
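The save–erase–insert dance might look like this minimal sketch; the mapped type and the node names are hypothetical, not the engine’s actual types:

```cpp
#include <string>
#include <unordered_map>
#include <utility>

// Re-keys nodes[oldName] under newName: save the mapped value, erase
// the old entry, and insert it again under the new key. Returns false
// if the old key was not found. (Since C++17, unordered_map::extract
// can move the node and mutate its key without copying the value —
// a cleaner alternative to this erase+insert approach.)
template <typename T>
bool renameKey(std::unordered_map<std::string, T>& nodes,
               const std::string& oldName, const std::string& newName) {
    auto it = nodes.find(oldName);
    if (it == nodes.end()) return false;
    T saved = std::move(it->second);
    nodes.erase(it);
    nodes[newName] = std::move(saved);
    return true;
}
```

With proper error handling on the return value, a rename message from Maya can safely re-key the stored node.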

Actually gathering a created shape node with geometry attached to it

Iterating through the scene at startup and gathering the mesh data from already existing meshes is easy: we just use an MItDag iterator and check the data type. No problem. But checking for an added mesh in the added-node callback and sending its mesh data? Not as elegant.

If we try to create an MFnMesh directly from the added node, given that it is of type kMesh, we will most likely receive an error that the shape node has no geometry attached to it. When a mesh is added, a lot of components come with it that could also pass as part of the kMesh type, and the geometry might not be finished at this point. So how do we find the valid mesh node whenever a mesh is added?

To work around this problem, an attribute callback is attached to every node that passes the kMesh type check, and we try to create an MFnMesh from the node within the plug instead. This will fail as well, a certain number of times, until a valid mesh node with geometry attached to the shape node passes by. If the MFnMesh creation succeeds, send the mesh data as usual and remove the attribute callback once you’re done sending the data. The callback ID for a created mesh can be stored as a global variable and reused for other added meshes.

Geometry seems to be the biggest issue here, while the parent transform node and other data can be found as soon as a mesh is created. Why this is such a big problem specifically when adding a mesh node, I have no idea. Why can’t the geometry be finished when the shape node arrives? Why would we want to access the shape node before it has any geometry attached to it? I’m not putting Maya down, I just find it a bit strange and, most of all, frustrating.

Gathering the mesh data

So what does it actually take to gather the mesh data? This is where I’m glad that I’ve worked with the FBX SDK, because it’s pretty much the same thing. For a given mesh, I have to iterate through its polygons and gather each vertex in the current face. What I was glad to discover this time, which should be possible in the FBX SDK as well, was that I could gather triangles from non-triangulated faces. This way, it doesn’t matter if we forget to triangulate the mesh at export. While indexing is possible, I started out with the more straightforward approach in which you append all the vertices of every triangle, resulting in duplicate vertices.

What was really mind-bending was that MItMeshPolygon::getTriangle() returns object-relative vertex indices, BUT MItMeshPolygon::normalIndex() and ::getNormal() need face-relative vertex indices to work. So I utilized a helper function from this great tutorial to get it to work and also gather the normals. Since I didn’t change the winding order in the plug-in that reads data from Maya, I used the already existing rasterizer state for the skybox to flip the winding order instead.
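The helper essentially searches the face’s own vertex list for the object-relative index. A minimal stand-in — plain C++ with no Maya types; the real version would take the indices that MItMeshPolygon::getVertices() returns for the current face — could look like:

```cpp
#include <vector>

// Maps an object-relative vertex index (as returned by getTriangle())
// to a face-relative index by searching the face's own vertex list.
// Returns -1 if the vertex is not part of this face.
int objectToFaceRelative(const std::vector<int>& faceVertexIndices,
                         int objectIndex) {
    for (std::size_t i = 0; i < faceVertexIndices.size(); ++i)
        if (faceVertexIndices[i] == objectIndex)
            return static_cast<int>(i);
    return -1;
}
```

The face-relative result is what normalIndex() expects, so each triangle corner’s object index gets translated through this lookup before fetching its normal.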

Finding the currently attached material data

In its current state, I can query texture paths at the start of the program and change the material color for a given material. Several meshes can share the same material, but so far I only support one material per mesh and can’t switch between the different materials in the scene. To find the shader node, we must first use the function getConnectedSetsAndMembers on the given mesh node. The documentation states the following about this way of reaching the shader node:

“Returns all the sets connected to the specified instance of this mesh. For each set in the “sets” array there is a corresponding entry in the “comps” array which are all the components in that set. If the entire object is in a set, then the corresponding entry in the comps array will have no elements in it. Note: This method will only work with a MFnMesh function set which has been initialized with an MFn::kMesh.”

The set we’re after in this case is the shading group node for the mesh. Within that, we can find the assigned shader. There will always be a default lambert shader in the Maya scene that is automatically assigned to any created mesh. Next, from the shading group node set where the shader is expected to be found, we look for the plug “surfaceShader”. We must then go through the surfaceShader plug to find the shaders connected to this mesh. The returned node should be the material, or “lambert1” if no other materials are active in the scene.

Representing the DAG-hierarchy transformation

Getting the transformation for an individual mesh is simple: we can extract the parent transform for a given mesh by asking for the first parent (in 99.9% of all cases there will be a transform node directly above the shape node). We put an attribute-changed callback on that transform node and gather the translation, quaternion and scale values. We can specify different spaces, where kTransform is object space.

Now, if we were to actually create a parent/child relationship, we would have to iterate through every potential child of a transform, verify that the children are transforms and concatenate all the local transformations all the way down from the root. Or we could implement an actual parent/child relationship in the render engine. While that is useful, what would you say if I told you Maya already does all of this for us?

What we want to do is get the dag path from the transform node and from the dag path query the inclusiveMatrix. Compared to building the final transformation matrix from the gathered vectors ourselves, this matrix represents the accurate transformation of the whole hierarchy all the way out into world space. So there is actually no need to create a system that handles parent-child relationships if we use this specific matrix from the dag path. I was somewhat shocked by how well this worked at first, but it does exactly what it says in the documentation: the inclusiveMatrix of a dag path returns the already calculated transformation of the hierarchy, which saves us a lot of work.
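Conceptually, inclusiveMatrix is just the product of every local matrix from the root down to the node. A toy sketch of that accumulation — the row-major layout, translation-only matrices and multiplication order are all assumptions of this illustration, not Maya’s internals:

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<float, 16>; // row-major 4x4

static Mat4 identity() {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

static Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
    return r;
}

static Mat4 translation(float x, float y, float z) {
    Mat4 m = identity();
    m[3] = x; m[7] = y; m[11] = z; // translation in the last column
    return m;
}

// The product of every local matrix from the root down to the node —
// conceptually what MDagPath::inclusiveMatrix() hands back already computed.
Mat4 accumulate(const std::vector<Mat4>& rootToNode) {
    Mat4 m = identity();
    for (const Mat4& local : rootToNode)
        m = mul(m, local);
    return m;
}
```

Querying the precomputed matrix from Maya replaces exactly this kind of manual root-to-leaf accumulation.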

One final detail is that we still must visit any potential children, recursively, to update their transformations in the plugin whenever their parent moves. This is done with the old trusty depth-first search; see my blog post Exporting Joint Hierarchies Part I.
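That depth-first walk over the children can be sketched like this, with a plain map standing in for the DAG and hypothetical node names:

```cpp
#include <map>
#include <string>
#include <vector>

// Depth-first walk over a parent→children map, collecting every node
// below `root` in visit order — the same traversal used to refresh
// each child's transform when its parent moves.
void visitChildren(const std::map<std::string, std::vector<std::string>>& children,
                   const std::string& root, std::vector<std::string>& out) {
    auto it = children.find(root);
    if (it == children.end()) return; // leaf node, nothing below it
    for (const std::string& child : it->second) {
        out.push_back(child);                // e.g. resend this child's matrix
        visitChildren(children, child, out); // then recurse into its subtree
    }
}
```

In the plugin, the body of the loop would query each child’s inclusive matrix and send it to the engine instead of just recording the name.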

Using the M3DView to access the active camera in the viewport

While it’s possible to iterate through the scene for all active cameras and register callbacks to look for added cameras, it’s only the currently active camera we’re interested in. Sure, we still must be able to switch between them, and that’s possible through the M3DView class. Given an M3DView, we can get the active viewport from the panel name — the viewport panel name is “modelPanel4”, which can be found with this simple MEL command:

getPanel -underPointer

When executed in Maya, this command returns the name of the panel you’re currently holding the cursor over. As far as I know these names won’t change, so they can be hard-coded. With the viewport at your disposal, you can query the dag path of the currently active camera and initialize an MFnCamera with it. From the MFnCamera you can grab all sorts of properties: projection, near/far plane, field of view, position etc. We can also query the modelView and projection matrices directly from the M3DView, so there are many ways to accomplish the same goal.

What’s next?

  • I’m going to continue building on the overall structure of the entire project and try to improve core areas such as message sending. The goal for next time is to optimize the way I’m sending messages, so I can manage much more complicated meshes and modify their topology without any major frame drops.
  • Even though I’m using the inclusive matrix for the transformations, I’m probably going to implement a proper parent-child relationship in the rendering engine so I can support both manual and automatic calculations.
  • Only one material is supported for each mesh, and while several meshes can share the same material, I’m going to extend that and make it possible to switch between different materials.
  • Better management of textures, making sure the same texture won’t be loaded again if another mesh shares it. If so, it should be fetched directly from the copy that is already in memory.
  • Add a graphical user interface to manage performance and other options in the engine. Maybe I will go with ImGui, but I would like to try making my own interface.
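The texture-sharing idea from the list above can be sketched as a small path-keyed cache; the Texture type and the loading step here are hypothetical stand-ins for whatever the engine actually uses:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical texture handle; the real engine type is not shown in the post.
struct Texture { std::string path; };

// Returns a shared handle, loading the file only on the first request —
// later meshes asking for the same path reuse the cached texture.
class TextureCache {
public:
    std::shared_ptr<Texture> get(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end())
            return it->second; // already in memory, no second load
        auto tex = std::make_shared<Texture>(Texture{ path }); // real load goes here
        cache_[path] = tex;
        return tex;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<Texture>> cache_;
};
```

Two meshes requesting the same path end up holding the same shared pointer, so the texture lives in memory exactly once.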

That’s all for now and until next time, have a good one 🙂

Connecting Maya with custom engine : Part I

For the final assignment in a level editing course I’m taking this year, we’ve been assigned to connect Maya with our own custom engine to view the contents of a Maya scene. So far I’ve only been able to get the camera up and running, as well as sending mesh data between the two applications using shared memory. It’s been very difficult to get into, but I’m slowly starting to get a better understanding of it all. So far it’s pretty static, and I haven’t connected any transforms yet. That’s next on the to-do list, as well as displaying multiple objects. The engine I’m using is the same one I used to implement deferred rendering and morph animation, which you can watch below:


With that comes a bit of a challenge, since deferred shading requires me to sort out the materials in a different way. I pretty much lose material attributes after the first pass, so I must either write the material properties to their own separate G-buffer or create some form of lookup texture. At this point I’m only focusing on getting the basics to work, but those are some of the solutions I’m considering. I can now also handle non-triangulated data, as well as data mixed with triangulated faces. The sphere, for example, is a mixture of triangles at the top, while the polygons at the center remain quads. MItMeshPolygon can help you identify whether a polygon is already triangulated or not, and then sort out the data. From a quad we expect a total of six vertices, while we only expect three from a triangle.
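Those vertex counts follow from fan triangulation: an n-gon splits into n − 2 triangles. A minimal sketch of the split — Maya’s polygon iterator does this for you; this only illustrates the arithmetic, assuming a convex face:

```cpp
#include <vector>

// Fan-triangulates a convex face given as a list of vertex indices:
// a triangle stays 3 indices, a quad becomes two triangles (6 indices),
// and in general an n-gon becomes n-2 triangles.
std::vector<int> triangulateFace(const std::vector<int>& face) {
    std::vector<int> out;
    for (std::size_t i = 1; i + 1 < face.size(); ++i) {
        out.push_back(face[0]);     // fan pivot
        out.push_back(face[i]);
        out.push_back(face[i + 1]);
    }
    return out;
}
```

So a quad {0, 1, 2, 3} expands to the two triangles {0, 1, 2} and {0, 2, 3} — six vertices total, matching the count mentioned above.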

Here are a few screenshots from the result of a very long Saturday, filled with mistakes and many crashes. It was worth it, because I started out from an empty scene and got all this by the end of the day.

DirectX11: Morph Animation Demo


This is a presentation of a morph animation technique based on the GPU approach described in ShaderX3: Advanced Rendering with DirectX and OpenGL, section 1.4. A friend and I implemented this as a summer practice, but that solution was entirely based on the CPU with a dynamic vertex buffer.

While that works, it’s much faster and requires less computational power to go with the GPU approach, which binds the target and source vertex buffers to the same input layout. That way, the interpolation can be done entirely in the vertex shader.
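The interpolation itself is a plain per-vertex lerp. Here it is sketched on the CPU — roughly what the earlier dynamic-vertex-buffer version computed each frame; in the GPU version the same expression runs per vertex in the vertex shader with the blend weight passed in a constant buffer:

```cpp
#include <vector>

struct Vertex { float x, y, z; };

// Linearly interpolates every vertex between the source and target shapes.
// t = 0 gives the source mesh, t = 1 the target; src and dst are assumed
// to have the same vertex count and ordering.
std::vector<Vertex> morph(const std::vector<Vertex>& src,
                          const std::vector<Vertex>& dst, float t) {
    std::vector<Vertex> out(src.size());
    for (std::size_t i = 0; i < src.size(); ++i) {
        out[i].x = src[i].x + (dst[i].x - src[i].x) * t;
        out[i].y = src[i].y + (dst[i].y - src[i].y) * t;
        out[i].z = src[i].z + (dst[i].z - src[i].z) * t;
    }
    return out;
}
```

Moving this loop to the vertex shader removes the per-frame buffer upload, which is where the GPU approach gets its speed.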

Exporting Joint Hierarchies: Part II

Hi again, I’m back with another post full of content and research I’ve done over the past week. Last time was a little introduction with a small depth-first search script to tidy up a joint hierarchy, but today I’m actually going to show you several things you can do in Maya to export your joint hierarchies “properly”. I’m currently using Maya 2017, along with the updated FBX SDK version 2017.0.1. I’m probably going to talk a lot more about the FBX SDK in part III; it has been a major headache for me over the past year. For now, let’s focus on the exporting.

Naming Convention

There are really no strict naming conventions I would enforce, but it makes debugging and especially mirroring of any form easier if you use the prefix L or R (Left or Right) for arms and legs.


Before Skinning

Here is a checklist of all the things I would recommend to have done before skinning:

  • Make sure arms and knees are slightly bent so that the IKs won’t break
  • Finished UV-mapping the mesh and made sure all UV-coordinates are within the valid UV-range
  • Freeze transformation on mesh to be skinned
  • Triangulate the mesh
  • Freeze all rotations on the joint hierarchy
  • Get rid of transform nodes in joint hierarchy
  • Last of all, use Delete All by Type->History on the entire scene. After this point, you should only use Delete All by Type->Non-Deformer History, in order not to remove any deformer-related data (skin weights, rigging components etc.)

Skinning Settings

This is a skin binding setup that has worked well in previous game projects. The most important settings are highlighted in red, which means they must be used for any given mesh no matter the complexity. Dual quaternion is a skinning method developed to remove the “candy wrapper” effect and other weird artifacts that can occur with classic linear blend skinning. Geodesic Voxel is a bind method primarily designed to work with troublesome meshes, even non-manifold ones, but that’s not the primary argument for using it. Geodesic Voxel binds weights very accurately if the right settings are applied, which means less painting and less work for an artist. At best it buys us time, but if necessary we still must paint weights in certain places. The number of max influences must be 4, which is standard and makes everything less complicated. We only want ONE bindpose, so that the FBX SDK doesn’t find several and give us the wrong one.

Bind to: Joint hierarchy

Bind method: Geodesic Voxel

Skinning method: Dual quaternion

Weight distribution: Interactive

Allow multiple bindposes: No

Max influences: 4

Maintain max influences: Yes

Remove unused influences: Yes

Colorize skeleton: Optional, only a visual feature

Include hidden selections on creation: No

Falloff: Must be experimented with, but default value at 0.20 often does the trick

Resolution: Higher resolutions work better for complicated meshes, lower resolutions for less complicated ones. It’s up to the artist to determine the complexity. If in doubt, use 512.

Validate voxel state: Yes

Bind Settings

Skin definition errors

When exporting content from Maya, there are a few important details to keep in mind. Maya can often warn us and even tell us specifically what kind of data couldn’t be properly exported, but with skinned meshes it’s even trickier.

The most important error to watch out for is that the skin definition couldn’t be found, which could be caused by a number of reasons.

  • Always verify that the mesh isn’t non-manifold. Although the Geodesic Voxel bind method can handle troublesome meshes, it shouldn’t be picked as a workaround for the problem. The trouble could remain hidden without you knowing it.
  • Validate that the exported joint hierarchy contains ONLY joint nodes. IK handles can be ignored on export, but we don’t want a chain of joints in which some of them are also located inside a group. This could prevent the depth-first search algorithm from successfully retrieving the joint hierarchy.

Working with UV-sets

Usually we work only with the default UV-set present at the start, which Maya always names “map1”. Each individual mesh is given a default UV-set, and this is fine if we only want that particular object to have one texture. But what if we want to cover several body parts using only one UV-map? Easy: we just combine the objects together and voilà, they are all in the same default “map1” UV-set.

So…how does this work for separate meshes? Using the UV Set Editor, we can select several meshes and instruct them to be part of the same UV-set by adding a new one with a custom name.


So for example, “UVSet_Arm” can hold the UV-map for the upper arm submesh, the lower arm submesh, the hand submesh etc. By using UV-sets, we can split our character’s submeshes into different regions without having to combine them to pair their UV’s together.

  1. In the Modelling category, go to UV->UV-Set Editor
  2. Select the meshes you want to belong to the same UV-Set
  3. In the UV-Set Editor, click “New” and give a name to your new UV-set. If you switch between the previously selected meshes, you can see that they all now share the same UV-set.


To actually tell Maya to use this new UV-Set instead of the “map1” set, do the following:

  1. Go to Windows->Relationship Editor->UV linking->UV centric
  2. On the left side of the Relationship Editor, you will see your mesh with the custom UV-set connected to it. On the right side, you see the current material on the mesh with its textures on the different channels. Simply select your UV-set and then click on the texture. Now the texture uses your own custom UV-set rather than the default “map1”.


Binding several meshes to one hierarchy

When working with submeshes, we bind the meshes exactly as if we were working with one whole body mesh. For each submesh we want to bind to the joint hierarchy, select the root node and then the submesh. Bind according to the given settings; you can save them in the Bind Skin options so you won’t have to change them every time for each mesh. The only thing you want to change is to allow multiple bindposes: each mesh must be given its own bindpose when we work with submeshes. Repeat this for all meshes. It might seem strange to continuously use the root node for each submesh, but trust me, it works much better than binding the closest joints to their corresponding submesh and then having to add influences for any joints that were left out. As long as we always bind the submeshes to the root node, everything will be fine, and Maya takes care of the additional joint influences in a chain.

End the skinning by selecting all the submeshes and the entire joint hierarchy, then go to Edit->Delete All by Type->Non-Deformer History. You are not allowed, under any circumstances, to use Delete All by Type->History on a skinned and rigged character, since this will delete the DEFORMATION data. If this happens, you have to undo, re-skin everything or load the skin weights from a file. Even critical components of the rig might suddenly disappear, most commonly cluster modifiers attached to spline curves.

Exporting skin weights to file

There are currently two ways of backing up your skin weights in Maya. One is image based and the other relies on the XML format. Both serve the same purpose, so it’s up to you to experiment and decide. The confusing part is that they share almost the same name, so I’m going to explain the differences between the two methods.

The image based method, which I would say is the more troublesome one, is located under Rigging->Skin->Export Weight Maps. Just because it’s the more unreliable of the two doesn’t mean it won’t work for your case; it’s simply more sensitive and requires a very thoroughly polished UV-map. It stores a separate weight map for each individual joint that can later be re-imported.

The XML based method, which I strongly recommend over the image based one, is located under Rigging->Deformer->Export Weights. It works almost the same way, but stores the entire joint hierarchy’s weights in a single XML file instead of several images. Even though it is far superior to the image based method, I would still recommend having a polished UV-map for this one as well.
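The XML export also has a scripted counterpart in `cmds.deformerWeights`, which is convenient for batch backups. A sketch with hypothetical names (`skinCluster1` and the weights folder are placeholders):

```python
import maya.cmds as cmds

# Hypothetical names: 'skinCluster1' is the character's skin cluster
# and the XML file is written to the project's weights folder.
cmds.deformerWeights('body_weights.xml',
                     path='C:/project/weights',
                     export=True,
                     deformer='skinCluster1')

# Re-importing the weights later, e.g. after a re-skin:
cmds.deformerWeights('body_weights.xml',
                     path='C:/project/weights',
                     im=True,
                     deformer='skinCluster1')
```

As with the other snippets, this needs a Maya session to run.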

Animation layers in Maya

When we first create an animation layer, we get a base animation layer and a layer on top of it. Delete that extra layer so only the Base Animation layer remains. The Base Animation layer contains all of the scene’s content, even if it isn’t keyframed; it simply acts as a base for the other layers. On each layer we can select a layer mode, with Additive as the default. We can also add objects to a layer afterwards if we forgot to select everything we wanted in it.

From the base layer, we want to select the entire joint hierarchy and all submeshes bound to it. By “entire joint hierarchy” I mean shift-selecting every joint in the hierarchy, not just the root node. Then we add a new layer, on which we can start animating. No animation should be applied on the Base Animation layer; keep the character in bindpose there. When the animation on a layer is finished, it must be baked like any other animation and added to the stack. Here is how you do it:

  1. Make sure you’re on the correct animation layer and not on the base animation layer
  2. Select the joint hierarchy and all the submeshes
  3. Go to Edit->Keys->Bake Simulation and click the option box.
  4. When we’ve reached the Bake Simulation settings, we want the following settings:

Hierarchy: Selected

Channels: All keyable

Driven Channels: Necessary if a driven key is used

Control points: No

Shapes: Yes

Time Range: Time Slider, if it is set to the correct range, or use a custom start/end. What’s important is that it matches the range of the animation and doesn’t include empty keyframes.

Bake to: New layer

Baked layers: Keep

Sample by: 1.0000 by default

Oversampling rate: 1 by default

Smart bake: No, we require a keyframe per frame. Smart bake doesn’t ensure this.

Keep unbaked layers: Keep them as backup in case the baking went wrong; they can be deleted manually later and should be removed before final export

Sparse curve bake: No

Disable implicit control: Yes

Unroll rotation: Yes

  5. Validate that the baking succeeded. Both the joints and the submeshes should now have keyframes and move according to them. If so, you can remove the unbaked layer, provided no more changes will be made to the animation. IK-handles and other controls can also be safely removed at this point.
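The settings above map fairly directly onto flags of `cmds.bakeResults`, if you prefer to bake from script. This is a hedged sketch rather than an exact reproduction of the option box (the frame range is a placeholder and the layer-related mapping is approximate):

```python
import maya.cmds as cmds

# Hypothetical selection: the whole joint hierarchy plus all submeshes.
targets = cmds.ls(selection=True)

cmds.bakeResults(targets,
                 simulation=True,
                 time=(1, 60),               # match the animation's range
                 sampleBy=1,
                 oversamplingRate=1,
                 disableImplicitControl=True,
                 minimizeRotation=True,       # ~ "Unroll rotation"
                 sparseAnimCurveBake=False,   # one keyframe per frame
                 bakeOnOverrideLayer=True,    # ~ "Bake to: New layer"
                 controlPoints=False,
                 shape=True)
```

Run it inside Maya and compare the result against the option-box bake before trusting it in a pipeline.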


(Screenshot: the Bake Simulation settings described above.)

Now we can easily switch between the bindpose and the different animations. The good thing about using layers is that we don’t need several files to store each individual animation, and the layers can be blended to form more variations.

That was all from me this time; I hope someone finds this useful one day. I certainly can’t keep all these details in my head, which is the whole reason for writing these blog posts. In the next part, I will cover FBX export settings and how I load the data from the file with the FBX SDK.

If anyone more experienced with Maya stumbles upon this post and finds something out of place, write me a comment and give me feedback. I don’t claim to have the right understanding of all this, and if I’ve gotten something entirely wrong, let me know. I’m just here to learn, and this is a big area of interest for me. Everything mentioned in this post serves one purpose: making exporting from Maya a little less painful.

But just a little.

Exporting Joint Hierarchies: Part I

For the large game project in the third year of the Technical Artist program, I decided to revisit my role in the pipeline as the “animation supervisor”. Thus, one of my responsibilities is to write helper scripts for the animators and riggers to make their lives a little less painful.

One of the more troublesome problems we ran into during the small game project in the second year was undesired transform nodes grouping together one or more joint hierarchies. These can show up if we change the scale of a joint’s parent node and then re-parent the joint under another joint with a different scale; Maya inserts the transform node to preserve the joint’s scale. However, we should never end up in this situation in the first place if whoever builds the joint hierarchy keeps an eye on the parent attributes.

A depth-first search is the common algorithm for traversing joint hierarchies, both in Maya and in game engines. It is basically a recursive search that starts at the root joint and walks down each limb, visiting the children of the current node. When the current joint has no more children, meaning we have reached the leaf node of that specific branch, it moves on to the next branch of joints. Like a tree.

We should validate that the exported joint hierarchy contains ONLY joint nodes. IK-handles can be ignored on export, but we don’t want a chain of joints where some of them sit inside a group, since this can prevent the depth-first search from retrieving the joint hierarchy correctly. It also clutters up your hierarchy and can, at worst, lead to unforeseen consequences down the road. Zeroing out the transforms can work in some cases, but it shouldn’t have to be necessary.

The depth-first search in Python could look like this:
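The original snippet was posted as an image, so here is a hedged reconstruction of the idea. To keep it runnable outside Maya, a plain dictionary stands in for the scene and a result list is collected for inspection; inside Maya, the children of each joint would come from `cmds.listRelatives(node, children=True)` instead, and the node names are made up.

```python
# Stand-in for a Maya joint hierarchy; each key maps a joint to its
# child joints (this replaces cmds.listRelatives in the sketch).
HIERARCHY = {
    'root':   ['spine'],
    'spine':  ['arm_L', 'arm_R'],
    'arm_L':  ['hand_L'],
    'arm_R':  ['hand_R'],
    'hand_L': [],
    'hand_R': [],
}

index = 0  # running traversal index, shared across the recursive calls

def RecursiveDepthFirstSearch(joint, parentIndex, out):
    global index
    out.append((joint, index, parentIndex))  # record (name, index, parent)
    myIndex = index
    index += 1  # step the index only AFTER recording the node
    for child in HIERARCHY[joint]:
        RecursiveDepthFirstSearch(child, myIndex, out)

result = []
RecursiveDepthFirstSearch('root', -1, result)  # the root has no parent
for name, idx, parent in result:
    print(name, idx, parent)
```

Running it prints one line per joint in depth-first order, with the root at index 0 and parent index -1.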


The RecursiveDepthFirstSearch function takes a root joint and a parent index as parameters. Since we pass the root joint to the function, it has no parent, so we indicate this by setting the parent index to -1.

For each recursive call, the function takes the node, loops through its children, and sends each one back into RecursiveDepthFirstSearch. I noticed that we only want to add 1 to the index after we have printed it, not before; otherwise the parent index looks incorrect while debugging even though the hierarchy has actually been traversed correctly. So, how do we handle a transform node? Outside the function we only allowed the selection to be a joint node, but we never did the same check for listRelatives inside the function. As many might know, Python is duck typed (if it quacks, treat it like a duck, otherwise treat it differently), so the assigned variable becomes whatever type the current node happens to be. This would break the depth-first search, so we must add a couple more exceptions.



So, two more exceptions have been added. Now, if the selected node isn’t a valid root node, the function won’t even be called in the first place. If a child node is a transform node, we first query the children of the transform node and put them in the nextJoint array. Then we ungroup the transform node and send in the first element of the nextJoint array, assuming it is a joint node. When running the function, it should now print the same indices as before.
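The extended version was also posted as an image, so again a hedged, Maya-free reconstruction of the logic just described: node types are stored alongside the children, a transform child is “ungrouped” by stepping into its first child, and the function is only called if the root really is a joint. All names are made up.

```python
# Stand-in hierarchy with node types; 'group1' mimics an unwanted
# transform node that Maya may insert between two joints.
NODES = {
    'root':   ('joint',     ['spine']),
    'spine':  ('joint',     ['group1']),
    'group1': ('transform', ['arm']),
    'arm':    ('joint',     ['hand']),
    'hand':   ('joint',     []),
}

index = 0  # running traversal index, shared across the recursive calls

def RecursiveDepthFirstSearch(joint, parentIndex, out):
    global index
    out.append((joint, index, parentIndex))  # record (name, index, parent)
    myIndex = index
    index += 1
    for child in NODES[joint][1]:
        nodeType, grandChildren = NODES[child]
        if nodeType == 'transform':
            # "Ungroup": skip the transform and continue with its first
            # child, assuming it is a joint (as the post does).
            child = grandChildren[0]
        RecursiveDepthFirstSearch(child, myIndex, out)

result = []
if NODES['root'][0] == 'joint':  # only call it on a valid root joint
    RecursiveDepthFirstSearch('root', -1, result)
```

The transform node never appears in the output; `arm` is indexed as a direct child of `spine`, which is exactly the clean hierarchy we want to export.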

In its current state, the small script is capable of automatically removing these unwanted transform nodes. If new transform nodes keep showing up despite all actions, the connectJoint command can help us reconnect two joints.


This worked out for us in the small game project, but I’m still curious to find out what actually happens to the attributes when this command is used. Does it force the child to take on the same attributes as the parent? I’m probably going to look into that before I go to bed. Stay tuned for further updates to the script 🙂


Small Game Project 2017: Bullet Physics Test

For the small game project in the second year of the Technical Artist program at BTH, I tried working with Bullet Physics to implement simple collision tests and gravity in our game world. Each platform has its own rigid body that collides with the player’s rigid body, from which we can grab a transform matrix every frame to use as the world matrix for the player object.

Even though I had to add some force in the player movement, it will later help us with knockback from enemy units. My biggest mistake in the beginning was not to update the player rigid body position as well, so it stayed in the center platform and collided while the player object never seemed to fall down when walking over the edges. Many thanks to my group members and Henrik Vik for helping me out with linking the libraries, it can be a real hassle sometimes 🙂