In memory of Mia Guy

I apologize in advance if this is sudden and none of my business, but I have to write this.

Today I woke up to a tragic message on my Facebook page from the husband of my former biology teacher Mia Guy: she had passed away from cancer. I was aware that she had been hospitalized since the beginning of last week, when the cancer unfortunately spread to her brain. Deep down I knew that she didn’t have much time left, and I have thought about her every single day since I first learned what had happened.

Mia Guy was the first teacher I met on my first visit to LBS Nyköping, where she worked as a biology teacher, and she took great care of me and my mom to make us feel welcome. Later on, as I started studying there, she became an important role model for me. In a sense, she was the heart of the place and seemed to have this warm energy that could light up even the darkest days. It was always fun to go to her lessons because it was okay to joke around, and she would often throw some jokes back at you. Biology had never been more fun. If you ever needed a shoulder to cry on or to yell out your anger, she was always there for you and helped you back onto your feet.

She was at her core fearless and, as she used to say, a self-taught master of mindfulness. Everyone who has ever set foot inside that school lost one of their greatest friends today. I’m a big believer that people still live on inside us, in our memories, and because of that they can never truly die. They act through us and give us strength to carry on, just as they had to even when they didn’t always have the courage or strength to do so…to keep moving forward no matter what. My thoughts go to Mia’s family, and I want them to know that Mia will always serve as a shining beacon of light in my house and for all of us that she helped to guide throughout her years. I’m so thankful that I had the chance to get to know her.

Thank you for everything Mia, I will always think of you.

 


Connecting Maya with custom engine: Part II


I’ve longed to say this for almost a month now…it’s finally done. My custom Maya renderer is finished. Well, at least for now…I would call it an early alpha. If I find time to record a demo of it, I will put it up on the page in an upcoming blog post. There are a lot of extensions that can be made, but in its current state it can do the following:

  • Add and remove
    • Meshes
    • Pointlights
    • Materials
  • Update and handle renaming of previously added nodes
  • Duplication
    • Meshes
    • Pointlights
  • Updating material changes (color and cosine power factor)
  • Load currently connected textures on material. If there is no texture, a dummy texture is given to that material.
  • Updating topology changes on meshes
  • Updating parent and children transforms
  • Track the current camera used in the Maya Viewport
  • Camera Settings:
    • Orthographic and Perspective view
    • Field of view
    • Near and far plane
  • Phong and Blinn-Phong Shading

There were a lot of challenges I had to face in this assignment, so here is a list of the most difficult problems and my approach to solving them.

Several materials with Deferred Shading

At first, I was unsure if my decision to go with Deferred Shading was a smart move at all. The reason was to be able to easily handle a large amount of pointlights, but then I realized that I would lose the specific material data for a given mesh after the first pass when creating the G-buffers. I looked around on forums and asked around in school for solutions, and some of the suggestions were:

  • Write material data to its own G-buffer
    • While this would work, it would increase the already demanding bandwidth that comes with a conventional Deferred Shading implementation.
  • Create a lookup texture holding the material data and pass it to the light pass
    • Would probably work as well, but seemed like overkill and more work than necessary.

The better solution, as explained to me by my teacher, was to bind a material buffer for each mesh in the first pass and send its data to the pixel shader. There, I could apply the material properties to the albedo sampled from the material’s connected texture. By blending the texture with the material color, I get a form of tinting.
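To make this concrete, here is a minimal sketch of what that per-mesh material buffer could look like on the C++/D3D11 side. The struct layout, the slot number and all names are my own assumptions, not the engine’s actual code:

#include <d3d11.h>

// Mirrors a cbuffer in the geometry-pass pixel shader (assumed layout).
struct MaterialBuffer
{
    float color[4];    // material color used to tint the sampled albedo
    float cosinePower; // specular exponent
    float padding[3];  // cbuffers are padded to 16-byte multiples
};

// Called per mesh before drawing it in the first pass. materialCB is
// assumed to be an ID3D11Buffer* created with D3D11_BIND_CONSTANT_BUFFER
// and default usage.
void BindMaterial(ID3D11DeviceContext* context, ID3D11Buffer* materialCB,
                  const MaterialBuffer& data)
{
    context->UpdateSubresource(materialCB, 0, nullptr, &data, 0, 0);
    context->PSSetConstantBuffers(1, 1, &materialCB); // slot 1 is an assumption
}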

Handling duplication and renaming of nodes

What Maya actually does when duplicating a node is that it creates, behind the scenes, a pre-notation node holding the original mesh name with an added “preNotato” prefix, before renaming it to a unique name in the scene. This means that when the node has been duplicated, it still qualifies as something added to the scene, even though it hasn’t been given its final name yet.

Everything goes fine when adding this node in the rendering engine, but by the time the node is updated in Maya and the messages holding the changes have arrived at the rendering engine, the node has been given its final name and it won’t be found in the unordered map for that specific type of node.

At first I thought I could just look up the node in the unordered map given its old name (the “pre-notation” name) and change the key value. However, since the key is a const datatype, this proved to be impossible and I had to come up with a more drastic solution. What I did instead was to save the node registered under the old name, erase that entry, and insert a new key with the saved node. While this doesn’t feel entirely okay or optimized in any way, it works like a charm given the proper error handling.
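A minimal sketch of that save-erase-insert dance, assuming the plugin stores its nodes in something like std::unordered_map<std::string, MeshNode> (the names here are hypothetical):

#include <string>
#include <unordered_map>

struct MeshNode { /* vertex buffers, transform, ... */ };

void RenameKey(std::unordered_map<std::string, MeshNode>& nodes,
               const std::string& oldName, const std::string& newName)
{
    auto it = nodes.find(oldName);
    if (it == nodes.end())
        return; // proper error handling: bail out if the old name is unknown

    MeshNode node = std::move(it->second);   // save the node...
    nodes.erase(it);                         // ...erase the old entry...
    nodes.emplace(newName, std::move(node)); // ...re-insert under the new name
}

As a side note, C++17 added unordered_map::extract(), which hands back the entry so its key can be changed and re-inserted without moving the value.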

Actually gathering a created shape node with geometry attached to it

Iterating through the scene at start and gathering the mesh data from the already existing meshes is easy: we just use an MItDag iterator and check the datatype. No problem. How about checking for an added mesh in the added-node callback and sending the mesh data? Not as elegant.

If we try to create an MFnMesh directly from the added node, given that it is of the type kMesh, we will most likely receive an error that the shape node has no geometry attached to it. When a mesh is added, a lot of components come with it that could also pass as part of the kMesh type, and the geometry might not be finished at this point. So, how do we find the valid mesh node whenever a mesh is added?

To work around this problem, an attribute changed callback is attached to every node that passes the kMesh type check, and we try to create an MFnMesh from the node within the plug instead. This will fail as well, a certain number of times, until…a valid mesh node with geometry attached to the shape node passes by. If the MFnMesh creation succeeds, send the mesh data as usual and remove the attribute callback once you’re done sending the data. The MCallbackId for a created mesh can be stored as a global variable and re-used for other added meshes.
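A sketch of that retry loop, with SendMeshData() standing in for however the plugin actually ships the data over shared memory:

#include <maya/MFnMesh.h>
#include <maya/MMessage.h>
#include <maya/MNodeMessage.h>
#include <maya/MObject.h>
#include <maya/MPlug.h>
#include <maya/MStatus.h>

// Stored globally so the callback can remove itself once the geometry has
// arrived, and be re-used for the next added mesh.
static MCallbackId gMeshCreatedCallbackId;

// Fires on every attribute change on the new shape node.
static void OnMeshAttributeChanged(MNodeMessage::AttributeMessage /*msg*/,
                                   MPlug& plug, MPlug& /*otherPlug*/,
                                   void* /*clientData*/)
{
    MStatus status;
    MObject node = plug.node();
    MFnMesh mesh(node, &status); // fails until the geometry is attached
    if (!status)
        return; // not ready yet, wait for the next attribute change

    // SendMeshData(mesh); // gather and send the mesh data as usual

    MMessage::removeCallback(gMeshCreatedCallbackId); // done listening
}

// Called from the node-added callback once the node passes the kMesh check.
void WatchNewMesh(MObject& newMeshNode)
{
    gMeshCreatedCallbackId =
        MNodeMessage::addAttributeChangedCallback(newMeshNode,
                                                  OnMeshAttributeChanged);
}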

Geometry seems to be the biggest issue in this case, while the parent transform node and other data can be found as soon as a mesh is created. Why this is such a big problem specifically when adding a mesh node, I have no idea. Why can’t the geometry be finished when the shape node arrives? Why would we want to be able to access the shape node before it has any geometry attached to it? I’m not putting Maya down, I just find it a bit strange but most of all, frustrating.

Gathering the mesh data

So what does it actually take to gather the mesh data? Here is where I’m glad that I’ve worked with the FBX SDK, because it’s pretty much the same thing. For a given mesh, I have to iterate through its polygons and gather each vertex in the current face. What I was glad to find out this time, which should be possible in the FBX SDK as well, was that I could gather triangles from non-triangulated faces. This way, it doesn’t matter if we forget to triangulate the mesh at export. While indexing is possible, I started out by getting the more straightforward approach to work, in which you append all the vertices of every triangle, resulting in duplicate vertices.

What was really mind-bending was that MItMeshPolygon::getTriangle() returns object-relative vertex indices…BUT MItMeshPolygon::normalIndex() and ::getNormal() need face-relative vertex indices to work. So, I utilized a helper function from this great tutorial to get it to work and also gather the normals. Since I didn’t change the winding order in the plug-in that reads data from Maya, I used the already existing rasterizer state for the skybox to flip the winding order instead.
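Here is my reconstruction of that helper, assuming the plugin walks the mesh with MItMeshPolygon (the idea comes from the tutorial; the code below is not its exact listing):

#include <maya/MIntArray.h>
#include <maya/MItMeshPolygon.h>
#include <maya/MPointArray.h>
#include <maya/MSpace.h>
#include <maya/MVector.h>

// Converts the object-relative indices that getTriangle() returns into the
// face-relative indices that normalIndex()/getNormal() expect, by matching
// them against the face's own vertex list.
static void GetFaceRelativeIndices(MItMeshPolygon& polyIt,
                                   const MIntArray& objectIndices,
                                   MIntArray& faceIndices)
{
    MIntArray faceVertices;
    polyIt.getVertices(faceVertices); // object-relative, in face order

    faceIndices.clear();
    for (unsigned int i = 0; i < objectIndices.length(); ++i)
        for (unsigned int j = 0; j < faceVertices.length(); ++j)
            if (faceVertices[j] == objectIndices[i])
            {
                faceIndices.append((int)j); // position within the face
                break;
            }
}

// Usage inside the per-polygon loop, for one triangle of the current face:
//
//     MPointArray points;
//     MIntArray objectIndices, faceIndices;
//     polyIt.getTriangle(tri, points, objectIndices, MSpace::kObject);
//     GetFaceRelativeIndices(polyIt, objectIndices, faceIndices);
//     MVector normal;
//     polyIt.getNormal((unsigned int)faceIndices[0], normal); // valid index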

Finding the currently attached material data

In its current state, the renderer can query texture paths at the start of the program and change the material color for a given material. Several meshes can share the same material, but so far I only support one material per mesh and can’t switch between the different materials in the scene. To first find the shader node, we must use the function getConnectedSetsAndMembers on the given mesh node. The documentation states the following about this way of reaching the shader node:

“Returns all the sets connected to the specified instance of this mesh. For each set in the ‘sets’ array there is a corresponding entry in the ‘comps’ array which are all the components in that set. If the entire object is in a set, then the corresponding entry in the comps array will have no elements in it. Note: This method will only work with a MFnMesh function set which has been initialized with an MFn::kMesh.”

The set we’re after in this case is the shading group for the mesh. Within it, we can find the assigned shader. There will always be a default lambert shader in the Maya scene that is automatically assigned to any created mesh. From the shading group node, where the shader is expected to be found, we look for the plug “surfaceShader”. We must then go through the surfaceShader plug to find the shaders connected to this mesh. The returned node should be the material, or “lambert1” if no other materials are active in the scene.
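Put together, the lookup could look roughly like this (a sketch with my own names, not the plugin’s exact code):

#include <maya/MFnDependencyNode.h>
#include <maya/MFnMesh.h>
#include <maya/MObject.h>
#include <maya/MObjectArray.h>
#include <maya/MPlug.h>
#include <maya/MPlugArray.h>

// Walks from a mesh to the material assigned to it; this ends up at
// "lambert1" if nothing else is assigned. instanceNumber is usually 0.
MObject FindConnectedMaterial(MFnMesh& mesh, unsigned int instanceNumber)
{
    MObjectArray sets, comps;
    mesh.getConnectedSetsAndMembers(instanceNumber, sets, comps, true);

    for (unsigned int i = 0; i < sets.length(); ++i)
    {
        // Each set is a shading group; the material feeds its
        // "surfaceShader" plug.
        MFnDependencyNode setFn(sets[i]);
        MPlug shaderPlug = setFn.findPlug("surfaceShader", true);
        if (shaderPlug.isNull())
            continue;

        MPlugArray connections;
        shaderPlug.connectedTo(connections, true, false); // incoming connections
        if (connections.length() > 0)
            return connections[0].node(); // the material node
    }
    return MObject::kNullObj;
}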

Representing the DAG-hierarchy transformation

Getting the transformation for an individual mesh is simple: we can extract the parent transform for a given mesh by asking for its first parent (in 99.9% of all cases there will be a transform node directly above the shape node). We put an attribute changed callback on that transform node and gather the translation, quaternion and scale values. We can specify the different spaces, in which kTransform is object space.

Now, if we were to actually create a parent/child relationship, we would have to iterate through every potential child of a transform, verify that the children are transforms, and multiply together all the local transformations all the way out to the root. Or we could implement an actual parent/child relationship in the render engine. While that would be useful, what would you say if I told you Maya already does all of this for us?

What we want to do is get the dagpath from the transform node and, from the dagpath, query the inclusiveMatrix. This matrix, compared to the gathered vectors used to create the final transformation matrix, represents the accurate transformation of the whole hierarchy all the way out into the world. So there is actually no need to create a system that handles parent-child relationships if we use this specific matrix from the dagpath. I was kind of shocked at how well this worked at first, but it does exactly what it says in the documentation: the inclusiveMatrix of a dagpath returns the already calculated transformation in the hierarchy, which saves us a lot of work.
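In code, the whole thing boils down to a few lines (a minimal sketch):

#include <maya/MDagPath.h>
#include <maya/MFnDagNode.h>
#include <maya/MMatrix.h>
#include <maya/MObject.h>

// Returns the node's world transform with the whole parent chain already
// multiplied in by Maya.
MMatrix GetWorldMatrix(MObject& transformNode)
{
    MDagPath dagPath;
    MFnDagNode dagFn(transformNode);
    dagFn.getPath(dagPath);
    return dagPath.inclusiveMatrix(); // hierarchy-accumulated transform
}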

One final detail is that we still must call upon any potential children…recursively…to update their transformations in the plugin whenever their parent moves. This is done with the old trusty depth-first search; see my blog post Exporting Joint Hierarchies: Part I.

Using M3dView to access the active camera in the viewport

While it’s possible to iterate through the scene for all active cameras and make callbacks to look for added cameras, it’s only the currently active camera we’re interested in. Sure, we still must be able to switch between them, and that’s possible through the M3dView class. Given an M3dView, we can get the active viewport from the panel name; the viewport panel name is “modelPanel4”, which can be gathered from this simple MEL script:

getPanel -underPointer

When executed in Maya, this script will return the name of the panel you’re currently holding the cursor over. These names won’t change as far as I know, so they can be hard-coded. With the viewport at your disposal, you can now query the dagpath of the currently active camera and initialize an MFnCamera with it. From the MFnCamera, you can grab all sorts of properties…projection, near/far plane, field of view, position etc. We can also query the modelview and projection matrices directly from the M3dView, so there are many ways to accomplish the same goal.
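A sketch of that camera lookup (function and variable names are mine):

#include <maya/M3dView.h>
#include <maya/MDagPath.h>
#include <maya/MFnCamera.h>
#include <maya/MString.h>

// Grabs the camera currently driving the given model panel.
MStatus GetViewportCamera(const MString& panelName, MFnCamera& outCamera)
{
    M3dView view;
    MStatus status = M3dView::getM3dViewFromModelPanel(panelName, view);
    if (status != MS::kSuccess)
        return status;

    MDagPath cameraPath;
    view.getCamera(cameraPath);
    return outCamera.setObject(cameraPath);
}

// Usage:
//
//     MFnCamera camera;
//     if (GetViewportCamera("modelPanel4", camera) == MS::kSuccess)
//     {
//         double fov   = camera.horizontalFieldOfView();
//         double nearZ = camera.nearClippingPlane();
//         double farZ  = camera.farClippingPlane();
//         bool   ortho = camera.isOrtho();
//     }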

What’s next?

  • I’m going to continue building on the overall structure of the entire project and try to improve core areas such as message sending. The goal for next time is to optimize the way I’m sending messages, so I can manage much more complicated meshes and modify their topology without any major frame drops.
  • Even though I’m using the inclusive matrix for the transformations, I’m probably going to implement a proper parent-child relationship in the rendering engine so I can support both manual and automatic calculations.
  • Only one material is supported for each mesh, and while several meshes can share the same material, I’m going to extend that and make it possible to switch between different materials.
  • Better management of textures, making sure the same texture won’t be loaded again if another mesh shares it. If so, it should be used directly from the copy that is already in memory.
  • Add a graphical user interface to manage performance and other options in the engine. Maybe I will go with ImGui, but I would like to try making my own interface.

That’s all for now and until next time, have a good one 🙂

Exporting Joint Hierarchies: Part III

Hello again. For part III I’m only going to provide an updated document containing more information than the previous post. I struggled for over two weeks to get Skeletal Animation to work with animation layers in Maya, and last Friday we finally managed to fix it. It was a pure team effort and I’m thankful to everyone on that team who helped me out. The current approach is to have all the joints oriented to the world, which is the easiest way to start before converting the different orientations between Maya and the game engine.

All the settings provided here have worked fine for us so far, and if anyone else out there is looking to export animated content from Maya, do have a look and see if there is something of value in there for you. In the upcoming “Exporting Joint Hierarchies: Part IV”, I will go through the majority of the FBX Converter we’ve been working on lately and show how to gather animation data from animation layers. Ahead of me is yet another busy week of great adventures.

Click here to download the pdf

SkeletonAnimImGui

 

Connecting Maya with custom engine: Part I

For the final assignment in a level editing course I’m taking this year, we’ve been assigned to connect Maya with our own custom engine to view the contents of a Maya scene. So far I’ve only been able to get the camera up and running, as well as sending mesh data between the two applications using shared memory. It’s been very difficult to get into, but I’m slowly starting to get a better understanding of it all. So far, it’s pretty static and I haven’t connected any transforms yet. That’s next on the to-do list, as well as displaying multiple objects. The engine I’m using is the same one I used to implement deferred rendering and morph animation, which you can watch below:

 

With that comes a bit of a challenge, since deferred shading requires me to sort out the materials in a different way. I pretty much lose the material attributes after the first pass, so I must either write the material properties to their own separate G-buffer or create some form of lookup texture. At this point I’m only focusing on getting the basics to work, but those are some of the solutions I’m thinking about. I can now also handle non-triangulated data, as well as data mixed with triangulated faces. The sphere, for example, is a mixture of triangles at the top, while the polygons at the center remain quads. MItMeshPolygon can help you identify whether a polygon is already triangulated or not, and then sort out the data: from a quad we expect a total of six vertices, while we only expect three vertices from a triangle.
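Roughly sketched, that sorting can look like this with the duplicate-vertex approach (my names, not the plugin’s):

#include <maya/MIntArray.h>
#include <maya/MItMeshPolygon.h>
#include <maya/MObject.h>
#include <maya/MPointArray.h>
#include <maya/MSpace.h>

// Pulls triangles out of every face, whether the face is already a
// triangle or still a quad/n-gon.
void GatherTriangles(MObject& meshNode, MPointArray& outVertices)
{
    for (MItMeshPolygon polyIt(meshNode); !polyIt.isDone(); polyIt.next())
    {
        if (!polyIt.hasValidTriangulation())
            continue; // skip faces Maya cannot triangulate

        int triangleCount = 0;
        polyIt.numTriangles(triangleCount); // 1 for a tri, 2 for a quad, ...

        for (int tri = 0; tri < triangleCount; ++tri)
        {
            MPointArray points;
            MIntArray objectIndices;
            polyIt.getTriangle(tri, points, objectIndices, MSpace::kObject);
            for (unsigned int v = 0; v < points.length(); ++v)
                outVertices.append(points[v]); // duplicate vertices, no indexing
        }
    }
}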

Here are a few screenshots from the result of a very long Saturday, filled with mistakes and many crashes. It was worth it, because I started out from an empty scene and got all this by the end of the day.

DirectX11: Morph Animation Demo

 

This is a presentation of a Morph Animation technique based on the GPU approach described in ShaderX3: Advanced Rendering with DirectX and OpenGL, section 1.4. A friend of mine and I implemented this during a summer internship, but that solution was entirely CPU-based, using a dynamic vertex buffer.

While that works, it’s much faster and requires less computational power to go with the GPU approach, which binds the target and source vertex buffers to the same input layout. That way, the interpolation can be done entirely in the vertex shader.
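As an illustration, a minimal sketch of the binding on the C++/D3D11 side (hypothetical names, not the demo’s actual code):

#include <d3d11.h>

// Binds the source and target morph shapes as two vertex streams feeding
// one input layout, so the vertex shader can blend between them.
void BindMorphBuffers(ID3D11DeviceContext* context,
                      ID3D11Buffer* sourceShape, ID3D11Buffer* targetShape,
                      UINT vertexStride)
{
    ID3D11Buffer* buffers[2] = { sourceShape, targetShape };
    UINT strides[2] = { vertexStride, vertexStride };
    UINT offsets[2] = { 0, 0 };
    context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
}

The input layout then declares the same vertex semantics twice, once for input slot 0 and once for slot 1, and the vertex shader does something like lerp(sourcePosition, targetPosition, blendFactor).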

Exporting Joint Hierarchies: Part II

Hi again, I’m back with another post full of content and research I’ve done over the past week. Last time was a little introduction with a small depth-first search script to tidy up a joint hierarchy, but today I’m actually going to show you several things you can do in Maya to export your joint hierarchies “properly”. I’m currently using Maya 2017, along with the updated FBX SDK version 2017.0.1. I’m probably going to talk a lot more about the FBX SDK, which has been a major headache for me over the past year, in part III. For now, let’s focus on the exporting.

Naming Convention

There are really no strict naming conventions I would enforce, but it makes debugging, and especially mirroring of any form, easier if you use the prefix L or R (Left or Right) for arms and legs.

Naming Convention

Before Skinning

Here is a checklist of all the things I would recommend to have done before skinning:

  • Make sure arms and knees are a bit bent so that the IKs won’t break
  • Finish UV-mapping the mesh and make sure all UV-coordinates are within the valid UV-range
  • Freeze transformation on mesh to be skinned
  • Triangulate the mesh
  • Freeze all rotations on the joint hierarchy
  • Get rid of transform nodes in joint hierarchy
  • Last of all, use Delete All by Type->History on the entire scene. After this point, you should only use Delete All by Type->Non-Deformer History in order not to remove any deformer-related data (all the skin weights, rigging components etc)

Skinning Settings

This is a skin binding setup that has worked well in previous game projects. The most important settings have been highlighted in red, which means they must be used for any given mesh no matter the complexity. Dual quaternion is a skinning method developed to remove the “candy wrapper” effect and other weird artifacts that can occur with classic linear skinning. Geodesic Voxel is a bind method primarily designed to work with troublesome meshes, even non-manifold ones, but that’s not the main argument for using it. Geodesic Voxel binds weights very accurately if the right settings are applied, which means less painting and less work for an artist. At best it buys us time, but if necessary we still must paint weights in certain places. The number of max influences must be 4, which is standard and makes everything less complicated. We only want ONE bindpose, so that the FBX SDK doesn’t find several and give us the wrong one.

Bind to: Joint hierarchy

Bind method: Geodesic Voxel

Skinning method: Dual quaternion

Weight distribution: Interactive

Allow multiple bindposes: No

Max influences: 4

Maintain max influences: Yes

Remove unused influences: Yes

Colorize skeleton: Optional, only a visual feature

Include hidden selections on creation: No

Falloff: Must be experimented with, but default value at 0.20 often does the trick

Resolution: Higher resolutions work better for complicated meshes, lower resolutions for less complicated ones. It’s up to the artist to determine the complexity. If in doubt, use 512.

Validate voxel state: Yes

Bind Settings

Skin definition errors

When exporting content from Maya, there are a few important details to keep in mind. Maya can often warn us and even tell us specifically what kind of data couldn’t be properly exported, but with skinned meshes it’s even trickier.

The most important error to watch out for is that the skin definition couldn’t be found, which could be caused by a number of reasons.

  • Always check that the mesh isn’t non-manifold. Although the Geodesic Voxel bind method can work with troublesome meshes, it shouldn’t be picked as a workaround for the problem. The trouble could remain hidden without you knowing it.
  • Validate that the exported joint hierarchy contains ONLY joint nodes. IK_Handles can be ignored on export, but we don’t want a chain of joints in which some of them are also located in a group. This could prevent the depth-first search algorithm from successfully retrieving the joint hierarchy.

Working with UV-sets

Usually we work only with the default UV-set present at start, which is always given the name “map1” by Maya. Each individual mesh is given a default UV-set, and this is fine if we only want that particular object to have one texture. But what if we want to cover several body parts using only one UV-map? Easy, we just combine the objects together and voila, they are all in the same default “map1” UV-set.

So…how does this work for separate meshes? By using the UV Set Editor, we can select several meshes and instruct them to be part of the same UV-set by adding a new one with a custom name.

UVSet

So for example, “UVSet_Arm” can hold the UV-map for the upper arm submesh, the lower arm submesh, the hand submesh etc. By using UV-sets, we can split our character’s submeshes into different regions without having to combine them to pair their UVs together.

  1. In the Modeling menu set, go to UV->UV-Set Editor
  2. Select the meshes you want to belong to the same UV-set
  3. In the UV-Set Editor, click “New” and give a name to your new UV-set. If you click through the previously selected meshes, you can see that they all now share the same UV-set.

UVSet1UVSet2

To actually tell Maya to use this new UV-Set instead of the “map1” set, do the following:

  1. Go to Windows->Relationship Editor->UV linking->UV centric
  2. On the left side of the relationship editor, you will see your mesh with the custom UV-set connected to it. On the right side, you see the current material on the mesh with its textures on the different channels. Simply select your UV-set and then click on the texture. The texture is now mapped through your custom UV-set rather than the default “map1”.

RelationshipEditor

Binding several meshes to one hierarchy

When working with submeshes, we bind the meshes exactly as if we worked with one mesh for the entire body. For each submesh we want to bind to the joint hierarchy, select the root node and then the submesh. Bind according to the given settings; you can save them in the Bind Skin settings so you won’t have to change them every time for each mesh. The only thing you want to change is to allow multiple bindposes, since each mesh must be given its own bindpose now that we work with submeshes. Repeat this for all meshes. It might seem strange to continuously use the root node for each submesh but trust me, it works so much better than binding the closest joints to their corresponding submesh and then having to add influences if not all joints were included. As long as we always bind the submeshes with the root node, all is going to be fine and Maya takes care of the additional joint influences in a chain.

End the skinning by selecting all the submeshes and the entire joint hierarchy, then go to Edit->Delete All by Type->Non-Deformer History. You are not allowed under any circumstances to use Delete All by Type->History on a skinned and rigged character, since this will delete the DEFORMATION data. If this happens, you have to undo the history, re-skin everything, or load the skin weights from a file. Even critical components in the rig might suddenly disappear, most often cluster modifiers attached to spline curves.

Exporting skin weights to file

There are currently two ways of rescuing your skin weights in Maya. One is image based and the other relies on the XML format. Both can be used for the same purpose; it’s up to you to experiment and make the decision. The confusing part is that they both share almost the same name, so I’m going to explain the differences between the two methods.

The image-based method, which I would say is the more troublesome one, can be located under Rigging->Skin->Export Weight Maps. Just because it’s the more unreliable of the two methods doesn’t mean it won’t work for your case. It’s just much more sensitive and requires you to have a very thoroughly polished UV-map. It stores several weight maps, one for each individual joint, that can later be re-imported.

The XML method, which I strongly recommend over the image-based one, can be located under Rigging->Deformer->Export Weights. It works almost the same as the image-based method, but it stores the entire joint hierarchy’s weights in one XML file instead of several images. The XML method is far superior, but I would still recommend having a polished UV-map for this one as well.

Animation layers in Maya

When we first create an animation layer, we get a base animation layer and a layer on top of it. Delete the other layer so we only have the Base Animation layer. In the Base Animation layer, we find all the scene’s content, even if it’s not keyframed. It’s just there to act as a base for the other layers. On a layer, we can select a layer mode; Additive is the default. We can also add objects to the layer if we forgot to select everything we wanted in it.

From the base layer, we want to select the entire joint hierarchy and all submeshes bound to it. By the “entire joint hierarchy”, I mean shift-selecting all the joints in the hierarchy and not just the root node. Then we add a new layer, on which we can start animating. No animations should be applied on the Base Animation layer; keep the character in bindpose there. When the animation is finished on a layer, it must be baked like any other animation and added to the stack. Here is how you do it:

  1. Make sure you’re on the correct animation layer and not on the base animation layer
  2. Select the joint hierarchy and all the submeshes
  3. Go to Edit->Keys->Bake Simulation and click the option box.
  4. When we’ve reached the Bake Simulation settings, we want the following settings:

Hierarchy: Selected

Channels: All keyable

Driven Channels: Necessary if a driven key is used

Control points: No

Shapes: Yes

Time Range: Time Slider, if specified for the correct range or use custom start/end. What’s important is that it matches the range of the animation and doesn’t include empty keyframes.

Bake to: New layer

Baked layers: Keep

Sample by: 1.0000 by default

Oversampling rate: 1 by default

Smart bake: No, we require a keyframe per frame. Smart bake doesn’t ensure this.

Keep unbaked layers: Keep them as backup in case the baking went wrong, they can be manually deleted later and should be removed for final export

Sparse curve bake: No

Disable implicit control: Yes

Unroll rotation: Yes

  5. Validate that the baking succeeded. Both the joints and the submeshes should now have keyframes and move according to them. If so, you can go ahead and remove the unbaked layer if no more changes are going to be added to the animation. IK-handles and other controls can also be safely removed.

BakedBakedLayer

Bake Settings.JPG

Now we can easily switch between the bindpose and the different animations. The good thing about using layers is that we don’t have to use several files to store each individual animation, and each layer can be blended to form more variations.

That was all for me this time. I hope someone might find this useful one day; I sure can’t keep all these details in my head, so that’s the whole reason for writing these blog posts. In the next part, I will write about FBX format export settings and how I load the data from the file with the FBX SDK.

If anyone much more experienced with Maya stumbles upon this post or finds something out of place, write me a comment and give me feedback. I don’t claim to have the right understanding of all this, and if I have gotten something entirely wrong, let me know. I’m just here to learn and this is a big area of interest for me. Everything mentioned in this post serves the purpose of making exporting from Maya a little less painful.

But just a little.

Exporting Joint Hierarchies: Part I

For the large game project in the third year of the Technical Artist program, I decided to revisit my role in the pipeline as the “animation supervisor”. Thus, one of my responsibilities is to write helper scripts for the animators and riggers to make their lives a little less painful.

One of the more troublesome problems we ran into in the small game project in the second year was undesired transform nodes grouping together one or more joint hierarchies. These can show up if we change the scale of a joint’s parent node and then re-parent it under another joint that has a different scale; the transform node is there to preserve the scale of that joint. However, we should never end up in this situation in the first place if the one responsible for building the joint hierarchy always keeps an eye on the parent attributes.

The depth-first search algorithm is commonly used when working with joint hierarchies, both in Maya and in game engines. It’s basically a recursive search starting out from the root joint, traversing down through each limb and finding the children of the current child node. When there are no more children in the current joint, which means we have reached the leaf node for that specific branch, it moves on to the next branch of joints. Like a tree.

We should validate that the exported joint hierarchy contains ONLY joint nodes. IK_Handles can be ignored on export, but we don’t want a chain of joints in which some of them are also located in a group. This could prevent the depth-first search algorithm from successfully retrieving the joint hierarchy. It will also clutter up your hierarchy and, at worst, lead to unforeseen consequences down the road. Trying to zero out the transforms could work in some cases, but it shouldn’t have to be necessary.

The depth-first search in Python could look like this:

Depth1

The RecursiveDepthFirstSearch function takes a root joint and a parent index as parameters. Since we’re passing the root joint to the function, it won’t have a parent, so we indicate this by setting the parent index to -1.

For each recursive call, it takes the node and loops through its children, sending each one into the RecursiveDepthFirstSearch function. I noticed that we only want to add 1 to the index after we have printed it, not before; this makes the parent index look incorrect while debugging, even though the hierarchy has actually been traversed correctly. So, how would we handle a transform node? Outside the function, we only allowed the selection to be a joint node, but we never did the same check for listRelatives inside the function. As many might know, Python is duck typed (if it quacks, treat it like a duck, otherwise treat it differently), and therefore the assigned variable will become whatever type the current node was. This will break the depth-first search algorithm, so we must add a couple more exceptions.

Depth2

 

Depth3

So, two more exceptions have been added. Now, if the selected node isn’t a valid root node, the function won’t even be called in the first place. If the child node is a transform node, we first query the children of the transform node and put them in the nextJoint array. Then, we ungroup the transform node and send in the first element of the nextJoint array, assuming it is a joint node. When running the function, it should now print the same results as in the image to the left.

In its current state, the small script is capable of automatically removing these unwanted transform nodes. If new transform nodes keep showing up despite all actions, the connectJoint command can help us reconnect two joints.

 

This worked out for us in the small game project, but I’m still curious to find out what actually happens with the attributes when this command is used. Does it force the child to have the same attributes as the parent? I’m probably going to take a look at that before I go to bed. Stay tuned for further updates to the script 🙂

 

Small Game Project 2017: Bullet Physics Test

For the small game project in the second year of the Technical Artist program at BTH, I tried working with Bullet Physics to add simple collision tests and gravity to our game world. Each platform has its own rigid body that collides with the player’s rigid body, and every frame we can grab a transform matrix from the player’s rigid body to use as the world matrix for the player object.
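A minimal sketch of that per-frame grab (hypothetical names; assumes a single-precision Bullet build and a body created with a motion state):

#include <btBulletDynamicsCommon.h>

// Pulls the current world transform out of the player's rigid body so the
// renderer can use it as the player's world matrix.
void GetPlayerWorldMatrix(btRigidBody* playerBody, float outMatrix[16])
{
    btTransform transform;
    playerBody->getMotionState()->getWorldTransform(transform);
    transform.getOpenGLMatrix(outMatrix); // column-major 4x4; a DirectX
                                          // engine may need to transpose it
}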

Even though I had to add some force to the player movement, it will later help us with knockback from enemy units. My biggest mistake in the beginning was not updating the player rigid body position as well, so it stayed on the center platform and kept colliding there while the player object never seemed to fall down when walking over the edges. Many thanks to my group members and Henrik Vik for helping me out with linking the libraries, it can be a real hassle sometimes 🙂

Daily Report: Animation Transfer Script

For this assignment, we were instructed to create a keyframe transfer script in Python, in which the two joint hierarchies had different orientations. While gathering the keys for the two skeletons, there is a bit of challenging math behind the transfer of rotation values, as we step out of the SOURCE joint’s coordinate system into the world in order to step into the TARGET joint’s coordinate system. To make the transfer a bit easier and more manageable, a user interface with two lists to manage each hierarchy was created in PySide. The user can reorganize the joints by moving them up or down in the list, delete them, or reset the TARGET skeleton back to its original pose.

toparent

It is important to note that not all of the components share the same space, which would otherwise result in wrong calculations. In hierarchical character animation, it’s common to use a parent-child relationship to replicate the behavior of a moving body in the real world. On top of the parent-child relationship, each joint has its own local transform that is relative to its parent.

“A” represents a “to-parent” matrix. To take the last joint in the hierarchy into the world space coordinate system, for example, the global transform matrix must be created by multiplying together A0 * A1 * A2 * A3 from the right.

It’s therefore required to move between the different coordinate systems to make all transformations in the skeleton relative to the world. A joint can be transferred to its parent’s coordinate system by using a “to-parent” matrix, and further on into the world. To accomplish this transfer between the different coordinate systems, a recursive function was written that gathers all the transformations from the current node all the way up through its parents until it reaches the root node. From there, the global transformation can be calculated.

To isolate the keyframe rotation, we also apply the inverse bind pose, which in this case represents the rotation values at the very first frame of the SOURCE joint’s animation (resembling a T-pose). A change of basis takes place, and when this process is over, we can extract the euler rotations from the final transformation matrix and set them, at the time of each keyframe on the SOURCE joint, on the corresponding TARGET joint.
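Sketched in equation form, in my own notation rather than the script’s: with $A_i$ the “to-parent” matrices, the global transform of joint $j$ and the isolated keyframe rotation become

$$G_j = A_0 A_1 \cdots A_j, \qquad K_j = B_j^{-1} G_j$$

where $B_j$ is $G_j$ evaluated at the first frame (the bind pose); the euler angles keyed onto the TARGET joint are extracted from $K_j$.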

I’m glad that I had already implemented Skeletal Animation in DirectX as preparation for this assignment, since it let me further practice my understanding of the relationship between different coordinate systems. Still, I think of animation as one of the more challenging tasks in creative mediums, both technically and aesthetically. I’m not a math genius, but I’m not incapable of taking on bigger tasks as I continue to grow. I’ve gained so much valuable knowledge over the last few weeks, so there is no stopping now. I’m just going to keep moving forward, despite the mistakes I make on a daily basis. Until next time, have a good one 🙂

Daily Report: Skeletal Animation Progress

Progress on my Skeletal Animation implementation in DirectX11: I’m finally starting to get somewhere. Sorry for the clipping in some places, I’m using motion capture data with HumanIK in Maya and it doesn’t totally sync up with my model. I’m currently working on the delta timing and interpolation between keyframes; not quite there yet, but it’s not breaking on me anymore. In the near future I will write a long post on my webpage about my first skeletal animation system, discuss the whole process, talk about what I did wrong and what I did well, etc.

My next task is to replace the section where I read keyframe data so that it accesses animation curves in the FBX format instead of creating keyframes from the FBX root node. I still find it gruesomely hard though, so it will probably take me a couple more weeks to finish this system. I can only thank the older Technical Artist students, my project group and my teachers for guiding me in the right direction and continuing to have patience with me. Until next time, have a good one 🙂