Software Development in C/C++ : Setting up Eclipse and Git for Windows

So, I am off to new adventures. I have decided to start exploring the Eclipse IDE while brushing up on some previous skills. For this series, I will explore the Eclipse IDE and use the command-line version of Git (I have previously only worked with the desktop client and the Visual Studio plug-in) to support the development of whatever it is I am going to work on.

This first post in the series will walk through how to set up Eclipse with Git for Windows; future posts will build on this setup. Documentation and guide on setting up Eclipse:

Install Eclipse and basic preferences

Oxygen is the latest version, so I guess we will go with that. Upon start, Eclipse asks where the workspace settings should be stored. For now, I use the standard path for my workspace.


Perspectives window in Eclipse

We want to go to the perspectives button and make sure C/C++ is selected. Perspectives are essentially different layout configurations suited to the specific language you are working with, whether that is C/C++ or a markup language such as XML.

On top of this, I am going to look at some additional settings.

1. In General > Workspace, have a look at the encoding at the bottom. By default, Eclipse uses the platform encoding derived from the operating system settings; in my case, this is Cp1252. UTF-8 is backward compatible with ASCII, but not with most other encodings, so a significant number of plain text files will be misread if the workspace encoding and the file encoding disagree.

2. In C/C++ > Editor > Folding, allow folding of preprocessor branches and control flow statements.

3. I do not know about you, but I am not a big fan of the white background…have never been. I would like to switch to a darker theme, much easier on the eyes.

Go to Help > Eclipse Marketplace, search for Eclipse Color Themes and install it. Once it is installed, go to Window > Preferences > General > Appearance > Color Theme. Here you have a bunch of color themes to choose from; you can add more by downloading them, or make custom changes yourself to create your own theme. I am not going to go through that now, though. I will go with the Obsidian theme and carry on.

Color theme

The default color themes

Install compilers (Cygwin)

Documentation and guide on installing Cygwin:

Before we can actually do something, we have to download a compiler. GCC is a popular choice, so we go right ahead with that. On Windows, we can install either MinGW GCC or Cygwin GCC. The recommendation is to pick MinGW if you are not sure, because MinGW is lighter and easier to install, at the price of having fewer features. I am more interested in Cygwin, so we go with that.

First, we download the setup from the official site. Select “Install Cygwin” and download setup-x86_64.exe for 64-bit Windows, or setup-x86.exe for 32-bit.

  1. Select Install From Internet
  2. Select the root directory (default C:\Cygwin; avoid Program Files if possible, due to the space in the path)
  3. Choose Local Package Directory (I left it as default, normally the downloads folder in the user folder)
  4. Select Direct Connection and choose the mirror to download from (I took the first available on the list)
  5. Do not fall off your chair when a new window pops up once the download is finished. It is time to select packages, and there are a few, as you can see. A few. Open the “Devel” category, or use the search box, and select the following packages:
    1. gcc and g++ (take most of them to be on the safe side)
    2. gdb (the GNU debugger)
    3. make (the GNU version of the ‘make’ utility)
  6. Install packages. If you missed any, you can always re-run the installation.



The full selection list of packages in the Devel category


The pending packages; these are the ones I am using. Notice that gcc-core and gcc-g++ have the same version, which is necessary.

Once the installation is complete, we must add Cygwin to the PATH variable. If you have previously worked with Python, you know we must do the same for the interpreter in order to use it from the command prompt. Otherwise, if this is your first time, this is generally how you do it:

Control Panel > System and Security > System > Advanced System Settings > Advanced Tab > Environment Variables > System Variables > Add “C:\cygwin64\bin;” in front of the existing PATH entry.

Now, was that it? Let us verify. Open the Windows command prompt and type “man gcc”.

Verify Cygwin

Invoking GCC in the Windows command prompt

If you see this, it means we have successfully installed Cygwin.

Despite having verified that Cygwin is indeed installed, there might still be trouble down the line. If the gcc-core and gcc-g++ packages are installed with different versions, we will notice it once we try to run our first program, since the preprocessing will most likely fail. I used version 6.4.0; pick any version that is available, but remember to be consistent across the GCC components. This happened to me the first time I made this setup and it was not obvious at first, so really pay attention when selecting which version of the packages to install.

Install Eclipse C/C++ Development Tool (CDT)

Install New Software

Installing the development tools

We go to Help > Install New Software. In the “Work with” drop-down, select the Eclipse version currently in use; in my case, at the time of writing, this is Oxygen. Now a bunch of options appear: expand “Programming Languages” and check “C/C++ Development Tools SDK”. Once that is installed, we can move on to create our first program. It is going to be the classic “Hello World” example.

Writing first C/C++ program in Eclipse


The setup wizard for the project

Go to File > New C/C++ Project > C++ Managed Build


Creating the C++ project

In “Project Types”, select Executable with empty project

In “Toolchains”, select Cygwin compiler (or MinGW). Click next.

Check configuration for both Debug and Release. Click finish.

If we get an unresolved inclusion error, it can be for two reasons that I am aware of:

  1. A broken Cygwin or MinGW installation, with core packages installed at different versions. The only way to solve this is to go back to the installation and check the package versions again.
  2. Incomplete include directories. This is no more complicated than in Visual Studio; see below for the include paths.

Go to Project > Properties > C/C++ General > Paths and Symbols. Here you can add include paths. We need to set this up for both C and C++. Notice that I am using 6.4.0 for GCC; I will label this as $GCC_VERSION below.

Paths C

Include paths for C projects

Add the following directories to “GNU C”, where $CYGWIN_HOME is your Cygwin installed directory:

  • $CYGWIN_HOME\lib\gcc\i686-pc-cygwin\$GCC_VERSION\include
  • $CYGWIN_HOME\usr\include
  • $CYGWIN_HOME\usr\include\w32api
Paths C++

Include paths for C++ projects

Then add the following directories to “GNU C++”:

  • $CYGWIN_HOME\lib\gcc\i686-pc-cygwin\$GCC_VERSION\include\c++
  • $CYGWIN_HOME\lib\gcc\i686-pc-cygwin\$GCC_VERSION\include\c++\i686-pc-cygwin
  • $CYGWIN_HOME\lib\gcc\i686-pc-cygwin\$GCC_VERSION\include\c++\backward
  • $CYGWIN_HOME\lib\gcc\i686-pc-cygwin\$GCC_VERSION\include
  • $CYGWIN_HOME\usr\include
  • $CYGWIN_HOME\usr\include\w32api

Make sure to do this for both the Debug and Release configurations. Now, add a new file and create a simple “Hello World” program. You can call it anything you want; just make sure it ends with the extension “.cpp”.


Simple Hello World to test the project setup

If we now test building this application, it should run.

Setup Git repository

The final thing I want to do before wrapping up this post is to set up a Git repository. There are several options for this. I am going to try the EGit Eclipse plugin, which makes the process really easy:

If you have not done so already, you must install Git before proceeding.

Configure repository

Configuring the repository

1. Right-click on the project and go to Team > Share Project. In the Configure Git Repository dialog, you can enter the path where you want to store the local repository. I have already set up a project called Atlantis, but for the purpose of this guide I just copied it.


NO-HEAD shows that we do not have a branch yet

2. How do we now know it is shared? If NO-HEAD appears in the same field as the project name in the outliner to the left, it means we have not committed anything yet. You can also view the Git Staging and Git Repositories windows by going to Window > Show View > Other > Git.


Committing in the Git Staging window

3. The first thing we must do is commit our project to our local repository. You can simply drag and drop your changes from Unstaged Changes to Staged Changes. Write a message describing what is included in this commit, then click Commit and Push.


Setup the destination for the push

4. Copy the URL of your repository and insert it in the appropriate field, then enter your login in the Authentication section. Click Next.

Push to remote

Select the branch to push changes

5. Select the branch you want to push to and click Next. Go to your GitHub repository online and check that everything has been pushed.

That is all for now; this is the basic setup for Eclipse on Windows. Hopefully this small guide can clear up some confusion when starting to work in the Eclipse IDE for the first time. If you have previously worked with Visual Studio, it is not much more complicated. Next time, I will start coding on some smaller projects and maybe get some graphics up on the screen.


Lamborghini Aventador LP 700-4 Pre-Rendering

Pre-render 9

At the beginning of 2018, I tried to figure out something to create for my portfolio. The next step for me would be to create a car, since I had not previously tried it and I felt ready to attempt something more complicated. I have always loved Lamborghini and my favorite is the Aventador. It was not a difficult choice.

For an introduction of the Lamborghini Aventador, I would strongly recommend watching Top Gear’s review of it. This footage, along with some blueprints, has been a massive source of inspiration.

This project has challenged me in many ways and taught me new essential workflows, especially for creating cars. I would be lying if I said I did not make any mistakes along the way, but it was all worth it. Now I want to finish up this project and continue making cars, now that I know how to approach it. I realize I still have a long way to go, but this is better than what I have produced in the past, and I will only continue to work harder on these kinds of projects to become better. Most importantly, I had fun, and it feels good to have found my way back to it.

I would also like to thank my friends Jonte Carrera and Jesper Eriksson for the helpful feedback that helped me improve my work.

Arnold Renderer

Most importantly, I had to learn more about the Arnold Renderer. I feel old, but I started out with Mental Ray. Arnold is now the default renderer in Maya; Solid Angle, the creator of Arnold, was acquired by Autodesk in 2016. Arnold is an optimized brute-force renderer (named after the bodybuilder Arnold Schwarzenegger), and global illumination is calculated for every pixel of the frame. Without sampling optimizations, this would be more expensive than caching techniques such as final gathering or irradiance mapping.

The strengths of Arnold are:

  • Very fast photorealism.
  • Cross-application compatible.
  • Much faster than similar renderers, such as the Autodesk Raytracer (ART).
  • Arnold is an unbiased renderer, so it is more physically accurate; Mental Ray is a biased renderer.
  • Global and local control over sampling and ray depth (the number of bounces).

In short, greater realism with less effort. But as with all things, there are some limitations to consider:

  • Limited render-to-texture. Mental Ray is better here for game content creation; there is no separate diffuse and specular pass.
  • No background rendering (batching) or render farms.
  • Transmissive and hard caustics (light focused by a curved surface) are not possible: no photon mapping, no transmissive or refractive caustics.

My Settings for Arnold Renderer

Open the Render Settings window. Under the Arnold Renderer tab, the two most important sections are Sampling and Ray Depth.


Sampling and Ray Depth settings

Ray Depth

  • Set Diffuse bounces to 5, to allow for more light bouncing in the scene.
  • If there are many metallic and glass surfaces in the scene, raise Specular bounces to 4.
  • Transmission is the trace depth for refractions. It is 8 by default, which is good enough; we need at least 7.
  • Remember to set Total allowed rays high enough to cover the rays configured in the other settings; set it to allow a total of 30 rays. This gives us more information on the transfer of light in the scene.
  • Diffuse can be set to 10, but this is only for higher quality rendering. Because indirect diffuse rays are so frequent, this can quickly get expensive.


  • Sampling controls the overall visual fidelity. For draft renders, Camera (AA) can be decreased, even to negative numbers. To get rid of more grain, we have to tweak each component setting. I will increase this later for the final rendering.
  • The sampling values are deceptive: the actual sample count is the square of each value, so increasing a value by just one integer raises the sampling dramatically. Camera Samples may show 9 in the attributes at the top of the Sampling section even if Camera (AA) is set to 3, because 3 × 3 = 9.
  • Set specular and transmission to 3.
  • Camera samples have no maximum, since they have nothing to do with global illumination; camera samples only deal with anti-aliasing, or grain removal.
  • Minimum diffuse samples = Camera (AA)² × Diffuse samples² (samples per pixel).
  • Maximum diffuse samples = minimum diffuse samples + Camera (AA)² × (Diffuse ray depth − 1).
  • Sampling has a more dramatic effect on the sample count than the Ray Depth attributes, so be careful with these settings.

Controlling processing in Render Settings


Additional render settings

Open the Render Settings dialog and go to the System tab.

  • Turn on Progressive Refinement under Maya Integration. This gives a low-fidelity preview of the frame; the finished high-fidelity tiles of the frame are called buckets. It results in faster production renders.
  • Autodetect Threads uses 100% of the computer’s processing power. Otherwise, leave 1 of the total number of cores free for other computations; so if I have 4 cores, I set this to 3. A negative value tells Arnold to use the available cores minus that number.

Next, go to the Arnold Renderer tab, in the Lights section.


  • Check the Low Light Threshold. If you have extremely low light conditions in your shot, the lighting may get clipped and you will see black areas in your render. This is an optimization that makes sure that really dark areas in the scene do not get calculated, which can be a problem for interior scenes. If you see black areas in your render, set this threshold to 0.

Remember, all these settings are saved in the scene file, not in the Maya user preferences. Also, these are just my settings; I do not claim they are foolproof for every scenario. Adjust as needed, I would say.


However, the car is not quite done yet. The rest of the week will be focused on fine-tuning the render settings to prepare for the final rendering. There has also been a lot of tweaking of the mesh, as I suffered from some skewed faces that caused weird reflections…some of them are still present, but it is much better compared to only a few days ago. It is still a bit low-poly in some areas as well, most notably around the wheels. The lighting model is not final; I just set up some quick point lights to give the car some highlights on top of the image-based lighting.

Pre-render 10 · Pre-render 11 · Pre-render 12 · Pre-render 13 · Pre-render 14


Here are some resources I have used during the project, these are not mine.

Lamborghini Aventador LP700-4 (2011)

Blueprint of Lamborghini Aventador LP 700-4


HDR image from the sIBL Archive

Image Quality-Driven Level of Detail Selection on a Triangle Budget

For our bachelor thesis in development of digital games, Ludvig Arlebrink and I decided to continue with our LOD studies, following the pre-study of Unity’s LOD Group component that we conducted at the beginning of the year. This is only supplementary material for our bachelor thesis, so do not feel like you are missing something if some parts are hard to grasp. To follow everything, you would need to have read our paper from the very beginning, but at least you can enjoy the “stunning” graphics! For more supplementary material, see the bottom of the page. The thesis will hopefully be available to the public in the coming months. That is, if we actually make it and receive our final grading. For now, nothing is guaranteed.

We had previously thought up many different ideas: staying late at school, frantically drawing on the whiteboard, running the equations, repeatedly bashing our heads into the concrete wall, but no progress. Ludvig was the one coming up with all the ideas; I was just tagging along with my curiosity. Sometimes we came up with ideas that were utterly terrible, but sometimes they left us completely mind-blown…only to discover that it had already been done. Damn!

Nothing really stuck with us or seemed to fit the scope of this course. We had started to move away from the LOD pre-study and were curious to take on new techniques, but I think it was the right decision to go back to it. Eventually, Ludvig came up with the idea of image quality-driven LOD selection, which took him a while to explain. The approach felt confusing at first, but then it all made perfect sense.

In this study, we propose an image quality-driven process to choose a LOD combination given a camera position and orientation, and a triangle budget. The metric used to assess the quality of the rendered image is the structural similarity (SSIM) index, a popular choice for image comparison in computer science for its ability to approximate perceived similarity between images.
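For reference, the SSIM index between two image windows x and y is commonly defined as:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + c_1)\,(2\sigma_{xy} + c_2)}
       {(\mu_x^2 + \mu_y^2 + c_1)\,(\sigma_x^2 + \sigma_y^2 + c_2)}
```

where the mu terms are the window means, the sigma-squared terms the variances, sigma_xy the covariance, and c_1, c_2 are small constants that stabilize the division; a value of 1 indicates identical windows.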

The aim of our thesis was to determine the difference in image quality between the custom level of detail preprocessing approach proposed in the paper and the level of detail system built into the game engine Unity. This is investigated by implementing a framework in Unity for the proposed level of detail preprocessing approach and designing representative test scenes to collect all data samples. Once the data is collected, the image quality produced by the proposed preprocessing approach is compared to Unity’s existing level of detail approach using perceptual metrics.

Although the proposed approach is primarily developed for the personal computer (PC) platform, it is likely to contribute the most to rendering complex scenes on mobile devices. Large productions rarely have to be concerned with triangle budgets on modern hardware, but mobile devices run on stricter triangle budgets due to their limited hardware in comparison to the PC. Even so, image quality must still be preserved and kept high even when a lower triangle count is required. That is where the proposed approach differs from previous DLOD approaches: it also takes image quality into account, not only performance.

Except for a few corner cases, such as accidental culling, we managed to maintain image quality similar to Unity’s built-in LOD approach. However, the conclusion drawn from the experiment is that when comparing the SSIM of rendered images between Unity’s built-in LOD approach and the proposed approach, Unity’s built-in approach generally performed better in terms of SSIM.

I would like to thank my friend Ludvig Arlebrink for the opportunity to work with him; this has been a true learning experience and it has really helped my confidence. Last but not least, we would like to thank our supervisor Francisco Lopez Luro for his feedback and guidance throughout the project.

Additional supplementary material

LOD group component pre-study

Bachelor thesis presentation

Ember Effect in Trapcode Particular

Since I’m still a rather poor student, I shamefully have to admit that I haven’t purchased the Red Giant Trapcode Suite yet. Instead, I use their generous trial version once in a while when I need to create something fancy for our projects. This time I was going to produce an intro sequence for a game we had been working on last Christmas, so I took the time to learn how to make a fairly simple ember effect. I was fascinated by how effectively a noise texture could be used to create the smoke effect in the background, rather than spending expensive computation on a separate particle system for it.

Just to be clear, this is not exactly a tutorial. I just wrote down the overall process, so I might have missed some important details. You’ll notice that I jump around a lot between the different components in After Effects, mostly because I was tweaking the parameters back and forth as I went along.

Please visit Red Giant if you’re interested, their products are super awesome.



Go ahead and create a new composition; feel free to choose any resolution. I would start off with a high-resolution composition, since it works better to scale an effect down rather than trying to scale it up. By default, it’s already set to HDTV 1080 29.97, so we’re ready to start.

Create a new black solid, matching the resolution of the composition

Go to Effects & Presets and scroll down to Red Giant→RG Trapcode→Particular. Drag and drop it onto your newly created solid. If you now play the timeline, you’ll see a bunch of white particles emanating from the center of the screen. It doesn’t do a lot more than this for now, but this is where the fun starts.

In Effect Controls (press F3 as a shortcut), you’ll see plenty of sections to choose from. Each section holds attributes that we can now manipulate, and we are going to start right away with the Emitter.

Emitter I


1. Slow down the velocity to around 20 and change the velocity random percentage to about 60%. By pressing TAB, we can automatically step down in the list of attributes.

2. Change the Emitter type to Box and set the Emitter size on the X-Axis to about 1600 and on the Y-Axis to 1360. You might have to check XYZ to be individual in order to access the different axes and perform non-uniform scaling.

3. Set particles/sec to 50 and set direction spread to 20

4. Set velocity to 20

5. Go ahead and rotate 90 degrees on the X-Axis

6. Position the particle system at the bottom of the screen around 960, 944, 0

Particle I

1. Set particle life time to around 6 – 8

2. Set life random to about 30 and set sphere feather to 0

3. Set size to 3 and size random to 50

4. Go to Size over Life and add the steep preset to make the particles fade in much more smoothly. This does the job, so we keep it as it is.


Physics I

1. Set Wind on Y-Axis to -400

2. Go to Turbulence Field→Set Affect position to 450

Now that’s more like it. By adding some turbulence that affects the positions of the particles by a factor, and forcing them upwards with the wind, we achieve the swaying motion of a burning fire.

3. Set Turbulence Field→Scale to 6 and complexity to 1

4. Set Turbulence Field→Evolution speed to 15



1. Now we need an actual camera in the composition, so go ahead and create one. Set it to be a One-Node Camera.

2. Set the camera position to about 960, -300, -2666.7

3. Now the particles slowly creep up from the bottom of the screen a couple of seconds into the timeline



1. Set blend mode to Normal

2. Toggle transparency

3. Set Motion Blur→Motion Blur→On

4. Set Motion Blur→Type→Subframe Sample

5. Set Motion Blur→Levels→16

6. Set Motion Blur→Opacity Boost→16

Emitter II

1. Set particles/sec to 25


1. Create a copy of the solid, since we’re going to change the motion parameters

Emitter III – Copy

1. Set Random Seed to 107960. Just because. Magic numbers.

Physics II – Copy

1. Set affect position to 300

2. Set X-Axis offset to 2180

3. Set Y-Axis offset to 2000

4. Set Z-Axis offset to 2840

5. Set scale to 10

6. Set Wind Y-Axis to -550

Blend Mode

1. Set first solid blend mode to Screen


1. Create a new solid and call it “Background” or “BG”

2. From Effects & Preset, add a Gradient Map to the new solid

3. Swap the colors so that we’re starting with black from the bottom

4. Set the Ramp Shape to be a Radial Shape

5. Set the Start of Ramp to 982, 1600

6. Set the End of Ramp to 366.9, 144

7. Set Start Color to dark orange

8. Set Ramp Scatter to 11.4

9. Move the background solid below the others and set the previous solids’ blending mode to “Screen”


Adjustment Layer I

1. Create a new Adjustment Layer and place it above the two original solids

Solid 1

1. Set particle color to bright orange/yellow

Solid 2

1. Set particle color to strong orange

Adjustment Layer II

1. Set glow threshold to 73

2. Set glow radius to 10



1. Create a new solid and call it noise

2. In Effects & Presets, find Turbulent Noise and apply it to the noise solid

3. Set Noise→Scale→316

4. Set Noise→Contrast→360

5. Set blending mode to Multiply

6. Move it down below the two original solids

7. Set keyframe for Noise→Evolution and Noise→Offset Turbulence

8. Drag to the end of timeline

9. Set keyframe for Noise→Evolution to 960, -712


Adjustment Layer III

1. Set Glow Colors A & B Colors

Color A: Bright Orange

Color B: Dark Orange

2. Set glow threshold to 42.2%

3. Set glow operation to Screen

4. Set glow radius to 20

5. Set glow intensity to 3.0

6. Copy another glow

7. Set glow radius to 50


Solid 2

1. Set particles/sec to 40

2. Set particle size to 3.5

3. Set Opacity Boost to 20

Solid 1

1. Set Opacity Boost to 20

Adjustment Layer IV

1. In Effects & Presets, add Hue/Saturation

Set master hue to -8


Solid 1

1. Set Shutter Angle to 600

2. Set Air→Wind Y-Axis to -700

3. Set particle size to 2

Adjustment Layer V

1. Set glow 1 intensity to 2.8

2. Set glow 1 threshold to 27.8

3. Set glow 2 threshold to 51.4



That’s about it! While there is a general process to follow, I once again realized how much comes down to experimentation and playing around with the available features in After Effects and the Trapcode Suite. Here is the final rendered sequence of the effect in action.

A Study on Discrete LOD in Unity Game Engine

A study of the built-in LOD Group component in Unity carried out by students Ludvig Arlebrink and Fredrik Linde at Blekinge Institute of Technology. This is supplementary material to the full report.

The Unity version used at the time was 2017.3.0f3 (64-bit). We use the Stanford bunny with LODs ranging from 0 to 4 for the experiment. We define five tests: LOD, Crossfade, Dither, No LOD and Empty. LOD is a standard test using the Unity LOD Group component, without any transitions. Crossfade and Dither also use the LOD Group component, but with transitions enabled and set to crossfade. No LOD does not use the LOD Group component and renders the bunny at its maximum LOD. Finally, Empty is just for reference: a completely empty scene with a black clear color.

We began the implementation by creating Unity prefabs for each of the tests except the Empty test. For the LOD, Crossfade and Dither tests, we add a LOD Group component, and for each LOD we add a child game object with a mesh renderer component for the LOD mesh. The No LOD prefab only includes a mesh renderer component, with no children.

At the beginning of a test, we instantiate 20 bunnies of the corresponding prefab along each axis, for a total of 20 × 20 × 20 = 8000 bunnies. When the test is complete, we destroy all instances of the bunny and repeat this process until all tests have finished executing. At this point the application enters its final stage by running the Empty scene test, and automatically shuts down afterwards. We also created a script for the camera to traverse a path defined by a number of points that the camera interpolates between at a constant speed.

Connecting Maya with custom engine: Part II

This slideshow requires JavaScript.

I’ve longed to say this for almost a month now…it’s finally done. My custom Maya renderer is finished. Well, at least for now…I would call it an early alpha. If I find time to record a demo of it, I will put it up on the page in an upcoming blog post. There are a lot of extensions that can be made, but in its current state it can do the following:

  • Add and remove
    • Meshes
    • Pointlights
    • Materials
  • Update and handle renaming of previously added nodes
  • Duplication
    • Meshes
    • Pointlights
  • Updating material changes (color and cosine power factor)
  • Load currently connected textures on material. If there is no texture, a dummy texture is given to that material.
  • Updating topology changes on meshes
  • Updating parent and children transforms
  • Track the current camera used in the Maya Viewport
  • Camera Settings:
    • Orthographic and Perspective view
    • Field of view
    • Near and far plane
  • Phong and Blinn-Phong Shading

There were a lot of challenges I had to face in this assignment, and here is a list of the most difficult problems and my approach to solving them.

Several materials with Deferred Shading

At first, I was unsure if my decision to go with deferred shading was a smart move at all. The reason was to be able to easily handle a large number of point lights, but then I realized that I would lose specific material data for a given mesh after the first pass, when creating the G-buffers. I looked around on forums and asked around at school for solutions, and some of the suggestions were:

  • Write material data to its own G-buffer
    • While this would work, it would increase the already demanding bandwidth that comes with a conventional deferred shading implementation.
  • Create a lookup texture holding the material data and pass to the light pass
    • This would probably work as well, but seemed like overkill and more work than necessary.

The better solution, as explained to me by my teacher, was to bind a material buffer for each mesh in the first pass and pass its data to the pixel shader. In the pixel shader, I could then apply the material properties to the albedo sampled from the material’s connected texture. By blending the texture with the material color, I got a form of tinting.

Handling duplication and renaming of nodes

What Maya actually does when duplicating a node is that it “behind the scenes” creates a pre-notation node holding the original mesh name with an added “preNotato” prefix, before renaming it to a unique name in the scene. What this means is that when a node has been duplicated, it still qualifies as something added to the scene, even if it hasn’t been given its final name yet.

Everything goes fine when adding this node in the rendering engine, but by the time the node is updated in Maya and the messages holding the changes have arrived at the rendering engine, the node has been given its final name and it won’t be found in the unordered map for that specific type of node.

At first I thought I could just look up the node in the unordered map by its old “pre-notation” name and change the key value. However, since the key is const, this proved to be impossible and I had to come up with a more drastic solution. What I did instead was save a reference to the node found under the old name, erase the entry registered with the old name, and insert the new key with the saved reference to the node. While this doesn’t feel entirely okay or optimized in any way, it works like a charm given the proper error handling.

Actually gathering a created shape node with geometry attached to it

Iterating through the scene at start-up and gathering the mesh data from the already existing meshes is easy if we just use an MItDag iterator and check the node type. No problem. How about catching an added mesh in the added-node callback and sending its mesh data? Not as elegant.

If we try to create an MFnMesh directly from the added node, given that it is of the type kMesh, we will most likely receive an error that the shape node has no geometry attached to it. When a mesh is added, a lot of components come with it that can also pass the kMesh type check, and the geometry might not be finished at this point either. So, how do we find the valid mesh node whenever a mesh is added?

To work around this problem, an attribute-changed callback is attached to every node that passes the kMesh type check, and inside that callback we try to create an MFnMesh from the node owning the plug instead. This will also fail a certain number of times, until a valid mesh node with geometry attached to the shape node comes along. Once the MFnMesh creation succeeds, send the mesh data as usual and remove the attribute callback when you're done. The callback ID for a created mesh can be stored as a global variable and re-used for other added meshes.

Geometry seems to be the biggest issue here, while the parent transform node and other data can be found as soon as a mesh is created. Why this is such a big problem specifically when adding a mesh node, I have no idea. Why can't the geometry be finished when the shape node arrives? Why would we even want to access the shape node before it has any geometry attached to it? I'm not putting Maya down, I just find it a bit strange and, most of all, frustrating.

Gathering the mesh data

So what does it actually take to gather the mesh data? Here is where I'm glad I've worked with the FBX SDK, because it's pretty much the same thing. For a given mesh, I have to iterate through its polygons and gather each vertex in the current face. What I was glad to find out this time, which should be possible in the FBX SDK as well, was that I could gather triangles from non-triangulated faces. This way, it doesn't matter if we forget to triangulate the mesh at export. While indexing is possible, I started out with the more straightforward approach, in which you append all the vertices of every triangle, resulting in duplicate vertices.
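
In plain C++ (the types are made up for the sketch), the duplicate-vertex approach amounts to flattening every reported triangle into one buffer:

```cpp
#include <array>
#include <vector>

struct Vertex { float x, y, z; };   // stand-in for the real vertex layout

// Append every vertex of every triangle to a flat buffer, accepting
// duplicates instead of building an index buffer. 'triangles' holds
// three object-space point indices per triangle, as gathered per face.
std::vector<Vertex> buildTriangleList(const std::vector<Vertex>& points,
                                      const std::vector<std::array<int, 3>>& triangles)
{
    std::vector<Vertex> buffer;
    buffer.reserve(triangles.size() * 3);
    for (const auto& tri : triangles)
        for (int index : tri)
            buffer.push_back(points[index]); // duplicates are fine here
    return buffer;
}
```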

What was really mind-bending was that MItMeshPolygon::getTriangle() returns object-relative vertex indices, BUT MItMeshPolygon::normalIndex() and ::getNormal() need face-relative vertex indices to work. So, I utilized a helper function from this great tutorial to get it to work and also gather the normals. Since I didn't change the winding order in the plug-in that reads data from Maya, I used the already existing rasterizer state for the skybox to flip the winding order instead.
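
The conversion that helper performs boils down to searching the polygon's own object-relative vertex list, sketched here without any Maya types:

```cpp
#include <vector>

// Map an object-relative vertex index (what getTriangle() hands back)
// to a face-relative index (what normalIndex()/getNormal() expect) by
// locating it in the polygon's own vertex list, i.e. what
// MItMeshPolygon::getVertices() would return for the current face.
int faceRelativeIndex(const std::vector<int>& polygonVertices, int objectIndex)
{
    for (int local = 0; local < static_cast<int>(polygonVertices.size()); ++local)
        if (polygonVertices[local] == objectIndex)
            return local;
    return -1;   // the object index doesn't belong to this polygon
}
```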

Finding the currently attached material data

In its current state, the plug-in can query texture paths at the start of the program and change the material color for a given material. Several meshes can share the same material, but so far I only support one material per mesh and can't switch between the different materials in the scene. To find the shader node, we must call getConnectedSetsAndMembers on the given mesh node. The documentation states the following about this way of reaching the shader node:
“Returns all the sets connected to the specified instance of this mesh. For each set in the “sets” array there is a corresponding entry in the “comps” array which are all the components in that set. If the entire object is in a set, then the corresponding entry in the comps array will have no elements in it. Note: This method will only work with a MFnMesh function set which has been initialized with an MFn::kMesh.”

The set we're after in this case is the mesh's shading group. Within it, we can find the assigned shader. There will always be a default lambert shader in a Maya scene that is automatically assigned to any created mesh. Next, on the shading group node where the shader is expected to be found, we look for the plug "surfaceShader". We then go through the surfaceShader plug's connections to find the shaders connected to this mesh. The returned node should be the material, or "lambert1" if no other materials are active in the scene.

Representing the DAG-hierarchy transformation

Getting the transformation for an individual mesh is simple: we can extract the parent transform for a given mesh by asking for its first parent (in 99.9% of all cases there will be a transform node directly above the shape node). We put an attribute-changed callback on that transform node and gather the translation, quaternion and scale values. We can specify different spaces, where kTransform is object space.

Now, if we were to actually build a parent/child relationship ourselves, we would have to iterate through every potential child of a transform, verify that the children are transforms, and multiply together all the local transformations all the way up to the root. Or we could implement an actual parent/child relationship in the render engine. While that would work, what would you say if Maya already does all of this for us?

What we want to do is get the dag path from the transform node, and from the dag path query the inclusiveMatrix. Compared to assembling the final transformation matrix from the gathered vectors, this matrix represents the accurate transformation of the whole hierarchy all the way out into the world. So there is actually no need to build a system that handles parent-child relationships if we use this specific matrix from the dag path. I was kind of shocked by how well this worked at first, but it does exactly what it says in the documentation: the inclusiveMatrix of a dag path returns the already calculated transformation of the hierarchy, which saves us a lot of work.
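
To make the saving concrete, here is the walk that inclusiveMatrix spares us, shrunk to plain C++ with 1D offsets standing in for full matrices (the real thing multiplies local matrices along the DAG path, in the right order):

```cpp
// A toy node: 'localOffset' stands in for a node's local transform
// matrix, and accumulating offsets up the chain stands in for matrix
// multiplication along the DAG path.
struct Node {
    float localOffset = 0.0f;
    const Node* parent = nullptr;
};

// What we'd otherwise compute by hand for every node; Maya's
// inclusiveMatrix hands us this accumulated result directly.
float inclusiveOffset(const Node& node)
{
    float total = node.localOffset;
    for (const Node* p = node.parent; p != nullptr; p = p->parent)
        total += p->localOffset;
    return total;
}
```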

One final detail is that we must still visit any children, recursively, to update their transformations in the plug-in whenever their parent moves. This is done with the trusty old depth-first search; see my blog post Exporting Joint Hierarchies Part I.
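
The recursive visit itself is nothing fancy; a sketch with made-up types:

```cpp
#include <vector>

// When a transform changes we revisit every descendant so the engine
// refreshes their world transforms too; this is the depth-first search
// mentioned above, with a stand-in 'updated' flag marking the work
// (in the plug-in this is where each node's matrix would be re-sent).
struct TransformNode {
    bool updated = false;
    std::vector<TransformNode*> children;
};

void updateSubtree(TransformNode& node)
{
    node.updated = true;              // re-query/send this node's matrix here
    for (TransformNode* child : node.children)
        updateSubtree(*child);        // then recurse into every child
}
```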

Using the M3dView to access the active camera in the viewport

While it's possible to iterate through the scene for all active cameras and register callbacks to look for added cameras, it's only the currently active camera we're interested in. Sure, we must still be able to switch between cameras, and that's possible through the M3dView class. Given an M3dView, we can get the active viewport from the panel name; in my case the viewport panel name is "modelPanel4", which can be gathered with this simple MEL command:

getPanel -underPointer

When executed in Maya, this command returns the name of the panel the cursor is currently hovering over. These names won't change as far as I know, so they can be hard-coded. With the viewport at your disposal, you can now query the dag path of the currently active camera and initialize an MFnCamera with it. From the MFnCamera you can grab all sorts of properties: projection, near/far plane, field of view, position and so on. We can also query the model-view and projection matrices directly from the M3dView, so there are many ways to accomplish the same goal.

What’s next?

  • I'm going to continue building on the overall structure of the entire project and try to improve core areas such as message sending. The goal for next time is to have optimized the way I'm sending messages, so I can manage much more complicated meshes and modify their topology without seeing any larger frame drops.
  • Even though I’m using the inclusive matrix for the transformations, I’m probably going to implement a proper child-parent relationship in the rendering engine so I can support manual and automatic calculations.
  • Only one material is supported for each mesh and while several meshes can share the same material, I'm going to extend that and make it possible to switch between different materials.
  • Better management of textures, making sure the same texture won't be loaded again if another mesh shares it; if so, it should be used directly from the copy already in memory.
  • Add a graphical user interface to manage performance and other options in the engine. Maybe I will go with ImGui, but I would like to try making my own interface.

That’s all for now and until next time, have a good one 🙂

Connecting Maya with custom engine : Part I

For the final assignment in a level editing course I'm taking this year, we've been assigned to connect Maya with our own custom engine to view the contents of a Maya scene. So far I've only been able to get the camera up and running, as well as sending mesh data between the two applications using shared memory. It's been very difficult to get into, but I'm slowly starting to get a better understanding of it all. So far it's pretty static and I haven't connected any transforms yet. That's next on the to-do list, as well as displaying multiple objects. The engine I'm using is the same one I used to implement deferred rendering and morph animation with, which you can watch below:


With that comes a bit of a challenge, since deferred shading requires me to sort out the materials in a different way. I pretty much lose the material attributes after the first pass, so I must either write the material properties to their own separate G-buffer or create some form of lookup texture. At this point I'm only focusing on getting the basics to work, but those are some of the solutions I'm considering. I can now also handle non-triangulated data, as well as data mixed with triangulated faces. The sphere, for example, is a mixture: triangles at the top, while the polygons at the center remain quads. MItMeshPolygon can help you identify whether a polygon is already triangulated or not, and then sort out the data. From a quad we expect a total of six vertices, while we only expect three from a triangle.
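
Counting-wise, a fan triangulation of an n-gon gives n − 2 triangles, which is where the "six vertices from a quad, three from a triangle" expectation comes from. A sketch (Maya decides the actual split; the fan is just the simplest illustration):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Fan-triangulate one face given its vertex indices: an n-gon becomes
// n - 2 triangles, so a quad yields 2 triangles (6 vertices) and an
// already-triangulated face passes through as 1 triangle (3 vertices).
std::vector<std::array<int, 3>> triangulateFan(const std::vector<int>& face)
{
    std::vector<std::array<int, 3>> triangles;
    for (std::size_t i = 1; i + 1 < face.size(); ++i)
        triangles.push_back({face[0], face[i], face[i + 1]});
    return triangles;
}
```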

Here are a few screenshots from the result of a very long Saturday, filled with mistakes and many crashes. It was worth it, because I started out from an empty scene and got all this by the end of the day.

DirectX11: Morph Animation Demo


This is a presentation of a morph animation technique based on the GPU approach described in ShaderX3: Advanced Rendering with DirectX and OpenGL, section 1.4. A friend of mine and I implemented this as a summer practice, but that solution was entirely based on the CPU with a dynamic vertex buffer.

While this works, it’s much faster and requires less computational power to go with the GPU approach that binds the target and source vertex buffers to the same input layout. That way, the interpolation can be done entirely in the vertex shader.
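
The per-vertex work is just a linear interpolation between the two bound positions by a per-frame weight; here is the same math on the CPU for clarity (plain C++ standing in for the vertex shader):

```cpp
#include <array>
#include <cstddef>

using Vec3 = std::array<float, 3>;

// The morph itself: with the source and target vertex buffers bound to
// one input layout, the vertex shader lerps each position by a blend
// weight updated per frame (0 = source pose, 1 = target pose).
Vec3 morphLerp(const Vec3& source, const Vec3& target, float weight)
{
    Vec3 out{};
    for (std::size_t i = 0; i < 3; ++i)
        out[i] = source[i] + (target[i] - source[i]) * weight;
    return out;
}
```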

Exporting Joint Hierarchies: Part II

Hi again, I'm back with another post full of content and research I've done over the past week. Last time was a little introduction with a small depth-first search script to tidy up a joint hierarchy, but today I'm actually going to show you several things you can do in Maya to export your joint hierarchies "properly". I'm currently using Maya 2017, along with the updated FBX SDK version 2017.0.1. I'm probably going to talk a lot more about the FBX SDK in part III, since it has been a major headache for me over the past year. For now, let's focus on the exporting.

Naming Convention

There are really no strict naming conventions I would enforce, but it makes debugging and especially mirroring of any form easier if you use the prefix L or R (Left or Right) for arms and legs.

Before Skinning

Here is a checklist of all the things I would recommend to have done before skinning:

  • Make sure arms and knees are slightly bent so that the IKs won't break
  • Finished UV-mapping the mesh and made sure all UV-coordinates are within the valid UV-range
  • Freeze transformation on mesh to be skinned
  • Triangulate the mesh
  • Freeze all rotations on the joint hierarchy
  • Get rid of transform nodes in joint hierarchy
  • Last of all, use Delete All by Type->History on the entire scene. After this point, you should only use Delete All by Type->Non-Deformer History in order not to remove any deformer-related data (skin weights, rigging components etc.)

Skinning Settings

This is a skin binding setup that has worked well in a previous game project. The most important settings have been highlighted in red, which means they should be used for any given mesh no matter the complexity. Dual quaternion is a skinning method developed to remove the "candy wrapper" effect and other weird artifacts that can occur with classic linear blend skinning. Geodesic Voxel is a bind method primarily designed to work with troublesome meshes, even non-manifold ones, but that's not the primary argument for using it. Geodesic Voxel binds weights very accurately if the right settings are applied, which means less painting and less work for an artist. At best it buys us time, but if necessary we must still paint weights in certain places. The number of max influences must be 4, which is standard and makes everything less complicated. We only want ONE bindpose, so that the FBX SDK doesn't find several and give us the wrong one.

Bind to: Joint hierarchy

Bind method: Geodesic Voxel

Skinning method: Dual quaternion

Weight distribution: Interactive

Allow multiple bindposes: No

Max influences: 4

Maintain max influences: Yes

Remove unused influences: Yes

Colorize skeleton: Optional, only a visual feature

Include hidden selections on creation: No

Falloff: Must be experimented with, but default value at 0.20 often does the trick

Resolution: Higher resolutions work better for complicated meshes, lower resolutions for less complicated ones. It's up to the artist to determine the complexity. If in doubt, use 512.

Validate voxel state: Yes

Bind Settings

Skin definition errors

When exporting content from Maya, there are a few important details to keep in mind. Maya can often warn us and even tell us specifically what kind of data couldn’t be properly exported, but with skinned meshes it’s even trickier.

The most important error to watch out for is that the skin definition couldn’t be found, which could be caused by a number of reasons.

  • Always check that the mesh isn't non-manifold. Although the Geodesic Voxel binding method can work with troublesome meshes, it shouldn't be picked as a work-around for the problem; the trouble could remain hidden without you knowing it.
  • Validate that the exported joint hierarchy contains ONLY joint nodes. IK_Handles can be ignored on export, but we don’t want a chain of joints in which some of them are also located in a group. This could prevent the depth-first search algorithm from successfully retrieving the joint hierarchy.

Working with UV-sets

Usually we work with only the default UV-set present at the start, which Maya always names "map1". Each individual mesh is given a default UV-set, and this is fine if we only want that particular object to have one texture. However, what if we want to cover several body parts using only one UV-map? Easy: we just combine the objects and voila, they are all in the same default "map1" UV-set.

So…how does this work for separate meshes? By using the UV Set Editor, we can select several meshes and instruct them to be part of the same UV-set by adding a new one with a custom name.


So for example, “UVSet_Arm” can hold the UV-map for the upper arm submesh, the lower arm submesh, the hand submesh etc. By using UV-sets, we can split our character’s submeshes into different regions without having to combine them to pair their UV’s together.

  1. In the Modelling category, go to UV->UV-Set Editor
  2. Select the meshes you want to belong to the same UV-Set
  3. In the UV-Set Editor, click "New" and give a name to your new UV-set. If you click through the previously selected meshes, you can see that they all now share the same UV-set.


To actually tell Maya to use this new UV-Set instead of the “map1” set, do the following:

  1. Go to Windows->Relationship Editor->UV linking->UV centric
  2. On the left side of the relationship editor, you will see your mesh with the custom UV-set connected to it. On the right side, you see the current material on the mesh with its textures on the different channels. Simply select your UV-set and then click on the texture. The texture is now linked to your custom UV-set rather than the default "map1".


Binding several meshes to one hierarchy

When working with submeshes, we bind the meshes exactly as if we were working with a single mesh for the whole body. For each submesh we want to bind to the joint hierarchy, select the root node and then the submesh. Bind according to the given settings; you can save them in the Bind Skin options so you won't have to change them every time for each mesh. The only thing you want to change is to allow multiple bindposes, since each submesh must be given its own bindpose. Repeat this for all submeshes. It might seem strange to repeatedly use the root node for each submesh but, trust me, it works much better than binding each submesh to its closest joints and then having to add influences if not all joints were included. As long as we always bind the submeshes to the root node, everything is going to be fine and Maya takes care of the additional joint influences in a chain.

End the skinning by selecting all the submeshes and the entire joint hierarchy, then go to Edit->Delete All by Type->Non-Deformer History. You are not allowed, under any circumstances, to use Delete All by Type->History on a skinned and rigged character, since this will delete the DEFORMATION data. If this happens, you have to undo the deletion, re-skin everything, or load the skin weights from a file. Even critical components in the rig might suddenly disappear, most commonly cluster modifiers attached to spline curves.

Exporting skin weights to file

There are currently two ways of rescuing your skin weights in Maya. One is image based and the other relies on the XML format. Both can be used for the same purpose; it's up to you to experiment and make the decision. The confusing part is that they share almost the same name, so I'm going to explain the differences between the two methods.

The image-based method, which I would say is the more troublesome one, can be found under Rigging->Skin->Export Weight Maps. Just because it's the more unreliable of the two doesn't mean it won't work for your case; it's just much more sensitive and requires a very thoroughly polished UV-map. It stores a separate weight map for each individual joint that can later be re-imported.

The XML-based method, which I strongly recommend over the image-based one, can be found under Rigging->Deformer->Export Weights. It works almost the same way, but it stores the entire joint hierarchy's weights in one XML file instead of several images. The XML method is far superior, but I would still recommend having a polished UV-map for this one as well.

Animation layers in Maya

When we first create an animation layer, we get a base animation layer and a layer on top of it. Delete the extra layer so we only have the Base Animation layer. The Base Animation layer contains all the scene's content, even if it isn't keyframed; it's just there to act as a base for the other layers. On a layer, we can select a layer mode, where Additive is the default. We can also add objects to a layer if we forgot to select everything we wanted in it.

From the base layer, we want to select the entire joint hierarchy and all submeshes bound to it. By the "entire joint hierarchy", I mean shift-selecting all the joints in the hierarchy and not just the root node. Then we add a new layer, on which we can start animating. No animations should be applied on the Base Animation layer; keep the character in bindpose there. When the animation on a layer is finished, it must be baked like any other animation and added to the stack. Here is how you do it:

  1. Make sure you’re on the correct animation layer and not on the base animation layer
  2. Select the joint hierarchy and all the submeshes
  3. Go to Edit->Keys->Bake Simulation and click the option box.
  4. When we’ve reached the Bake Simulation settings, we want the following settings:

Hierarchy: Selected

Channels: All keyable

Driven Channels: Necessary if a driven key is used

Control points: No

Shapes: Yes

Time Range: Time Slider, if specified for the correct range or use custom start/end. What’s important is that it matches the range of the animation and doesn’t include empty keyframes.

Bake to: New layer

Baked layers: Keep

Sample by: 1.0000 by default

Oversampling rate: 1 by default

Smart bake: No, we require a keyframe per frame. Smart bake doesn’t ensure this.

Keep unbaked layers: Keep them as backup in case the baking went wrong, they can be manually deleted later and should be removed for final export

Sparse curve bake: No

Disable implicit control: Yes

Unroll rotation: Yes

  5. Validate that the baking succeeded. Both the joints and the submeshes should now have keyframes and move according to them. If so, you can go ahead and remove the unbaked layer if no more changes are going to be made to the animation. IK-handles and other controls can also be safely removed.



Now we can easily switch between the bindpose and the different animations. The good thing about using layers is that we don't have to store each individual animation in a separate file, and the layers can be blended to form more variations.

That was all for me this time, I hope someone might find this useful one day. I sure can’t keep all these details in my head so that’s the whole reason for writing these blog posts. In the next part, I will write about FBX format export settings and how I load the data from the file into FBX SDK.

If anyone much more experienced with Maya stumbles upon this post and finds something out of place, write me a comment and give me feedback. I don't claim to have a complete understanding of all this, and if I have gotten something entirely wrong, let me know. I'm just here to learn and this is a big area of interest for me. Everything mentioned in this post serves the purpose of making exporting from Maya a little less painful.

But just a little.