Production Pipeline: Rigging

Rigging is the process of adding a ‘skeleton’ to a 3D model so it can be animated. It involves binding the model to a series of joints and handles so animators can bend it into the desired pose. This stage can be performed after modeling is complete or once textures and shaders have been applied. Test animations are often run to show how the model looks when deformed into different positions, and corrective adjustments are made as needed.

A rig visible inside a model.

Placing the skeleton is usually the simplest part of rigging, as the bones are typically placed where real-life bones would sit. The joint hierarchy is the chain of command formed when building a rig to ensure it works properly: moving a bone high in the hierarchy also repositions every bone below it. Put simply, when you move a thigh bone, all the bones in the lower leg move with it (this is referred to as forward kinematics). Inverse kinematics is the opposite of forward kinematics and is often used when animating a character’s limbs. Instead of moving the thigh and having every bone down to the toes adjust, inverse kinematics means that when you move the toe or foot, all the bones up to the thigh adjust with it. Inverse kinematics is most appropriate when a model has to place its terminating joints (feet, hands, fingers, toes) very precisely. Degrees of freedom and restraints are also necessary for realistic animation, as most real joints are limited in the directions they can move; knees, for example, need joint constraints applied.
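The hierarchy idea can be sketched in a few lines of code. This is a minimal illustration of forward kinematics on a made-up 2D leg chain (thigh, shin, foot), not the data format of any real rigging package: each joint stores an angle relative to its parent, so rotations accumulate as you walk down the chain.

```python
import math

def fk_positions(joints):
    """Walk the hierarchy top-down: each joint's world position depends
    on every joint above it, so rotating the thigh moves the whole leg."""
    x, y, angle = 0.0, 0.0, 0.0
    positions = [(x, y)]                  # start at the hip
    for joint in joints:
        angle += joint["angle"]           # rotations accumulate down the chain
        x += joint["length"] * math.cos(angle)
        y += joint["length"] * math.sin(angle)
        positions.append((x, y))
    return positions

# Illustrative bone lengths and angles for a simple leg pose
leg = [
    {"angle": math.radians(-90), "length": 4.0},  # thigh points down
    {"angle": math.radians(30),  "length": 3.5},  # knee bends
    {"angle": math.radians(60),  "length": 1.0},  # ankle
]
print(fk_positions(leg))
```

Changing only the thigh angle and recomputing shows every position below it shifting, which is exactly the parent-to-child behaviour described above. Inverse kinematics runs the problem the other way (solving for those angles given a foot position) and needs an iterative or analytic solver rather than this simple walk.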

A series of Morph Targets for a face.

Faces and fabric are usually too complex to rig with the traditional bone-and-joint structure, so morph targets are often used instead. Morph targets are duplicates of the original model, altered to a different state from the default. These alterations can be applied to the default model over time: the 3D software interpolates the changes from a starting point to an ending point, essentially animating the model automatically until the morph target is reached.
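That interpolation is, at its core, a per-vertex blend. Below is a minimal sketch of the idea with two made-up vertices; real meshes have thousands, and software like Blender or Maya does this per frame as the weight animates from 0 to 1.

```python
# Blend each vertex of the base mesh toward the corresponding vertex
# of the morph target. Vertex data here is invented for illustration.

def blend(base, target, weight):
    """Linearly interpolate every vertex; weight 0.0 is the default
    shape, 1.0 is the full morph target (e.g. a smile)."""
    return [
        tuple(b + weight * (t - b) for b, t in zip(vb, vt))
        for vb, vt in zip(base, target)
    ]

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]     # two sample vertices
smile   = [(0.0, 0.5, 0.0), (1.0, 0.5, 0.2)]     # same vertices, displaced

half = blend(neutral, smile, 0.5)
print(half)  # each vertex halfway between the two shapes
```

Because both shapes must share the same vertex count and order, morph targets are always edited copies of the base mesh rather than independent models.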

Slick, J. (2015). How Are 3D Models Prepared for Animation?. About.com Tech. Retrieved 24 February 2015, from http://3d.about.com/od/Creating-3D-The-CG-Pipeline/a/What-Is-Rigging.htm

Wikipedia. (2015). Skeletal animation. Retrieved 24 February 2015, from http://en.wikipedia.org/wiki/Skeletal_animation

Sanders, A. (2015). What Are Morph Targets?. About.com Tech. Retrieved 24 February 2015, from http://animation.about.com/od/glossaryofterms/g/What-Are-Morph-Targets.htm

Production Pipeline: Texturing and Shaders

Texturing a 3D model with texture maps (effectively applying a 2D pixel shader) is performed in conjunction with UV mapping. It is a method of adding detail, surface texture and colour to an object by applying different types of image maps to either a UV map or the 3D model itself. A shader is a program that performs shading, producing the appropriate colours in an image. Using a pixel shader to apply textures is very effective at allowing near-photorealistic scenes to be rendered in real time, as it greatly reduces the number of lighting calculations and polygons needed to achieve the intended look.

This image shows the different texture maps and how they affect a 3D model.

There are many different types of texture maps, each with a different effect, such as:

  • Diffuse mapping– the simplest kind of texture map, it wraps a 2D image around the 3D model so it displays flat colours and textures. These maps are used most effectively with UV maps.

A diffuse map of a safe

  • Bump mapping– a resource-friendly method of simulating bumps and wrinkles on a 3D object. A bump map is a grayscale image that appears to create variations in height on the surface it is applied to without increasing the polygon count. However, if viewed from the wrong angle the illusion of depth will break.
  • Normal mapping– similar to bump mapping in that it creates the illusion of texture without deforming the mesh, normal mapping is more advanced due to its use of an RGB image whose colours directly correspond to the X, Y and Z axes of 3D space. This allows it to be viewed more believably from multiple angles.

This image shows the difference between a bump map and normal map when applied and viewed from an angle. The normal map gives a slightly more convincing illusion of depth.

  • Specular mapping– specular maps are used to denote a surface’s highlight colour and shininess. The lighter the value on the map, the shinier the surface will appear to be.
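The RGB-to-axes correspondence in a normal map is simple to demonstrate. This is a small sketch of the common decoding convention (each 8-bit channel remapped from [0, 255] to [-1, 1]); exact conventions vary between renderers.

```python
# The flat "default" normal-map colour (128, 128, 255) decodes to a
# vector pointing almost straight out of the surface along Z, which is
# why untouched areas of a normal map look pale blue.

def decode_normal(r, g, b):
    """Remap an 8-bit RGB texel to a surface-normal vector."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

print(decode_normal(128, 128, 255))  # roughly (0, 0, 1)
```

The renderer uses this decoded vector in place of the real surface normal during lighting, which is how the illusion of depth survives changing light directions better than a grayscale bump map.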

3D shaders are applied to the geometry and topology of the model and can also access the colours and textures already applied to it. Most 3D modelling software allows texture artists to easily tweak shader parameters, modifying how the 3D object interacts with light, such as transparency, glossiness, reflectivity and ambient occlusion.
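To make "shader parameters" concrete, here is a minimal sketch of the kind of calculation a surface shader performs: Lambert diffuse lighting plus a Phong-style specular highlight. The glossiness value is the sort of slider a texture artist tweaks in a material editor; all vectors and numbers here are illustrative.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, light_dir, view_dir, glossiness):
    """Return (diffuse, specular) intensities for one point on a surface.
    All direction vectors are assumed to be unit length."""
    d = dot(normal, light_dir)
    diffuse = max(0.0, d)                              # Lambert term
    # Reflect the light direction about the normal: R = 2(N.L)N - L
    reflected = tuple(2 * d * n - l for n, l in zip(normal, light_dir))
    # Higher glossiness -> tighter, shinier highlight
    specular = max(0.0, dot(reflected, view_dir)) ** glossiness
    return diffuse, specular

# Light and camera both directly above a flat surface
print(shade((0, 0, 1), (0, 0, 1), (0, 0, 1), 32))
```

Raising the glossiness exponent narrows the highlight, which is exactly what the shininess values in a specular map control per pixel.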

Some examples of the surface effects that can be achieved when a 3D shader is applied to a surface.

Oz, F. (2015). Normal Map vs. Bump Map | Unity Community. Forum.unity3d.com. Retrieved 22 February 2015, from http://forum.unity3d.com/threads/normal-map-vs-bump-map.78622/

Digital-Tutors Blog. (2014). Know the Difference: Bump, Normal and Displacement Maps. Retrieved 22 February 2015, from http://blog.digitaltutors.com/bump-normal-and-displacement-maps/

Wikipedia. (2015). Texture mapping. Retrieved 22 February 2015, from http://en.wikipedia.org/wiki/Texture_mapping

Wikipedia. (2015). Shader. Retrieved 22 February 2015, from http://en.wikipedia.org/wiki/Shader

Production Pipeline: UV Mapping

UV Mapping is the process of unwrapping a polygonal mesh to create a 2D image to apply a texture to, then applying that image to the 3D model, giving it colour and texture. This stage usually occurs after the modeling process. U and V represent the axes of the 2D texture, as X, Y and Z are already used to denote 3D space in modelling software.

A 3D model and its 2D UV map.

The process of unwrapping the mesh can be done automatically or manually through the process of assigning seams along the edges of a mesh. This creates an image that resembles a clothes pattern, which can be painted, or have a picture or texture map applied onto it. When this image is applied back onto the 3D model, it creates the appearance of it having a texture.
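What "applying the image back onto the model" means in practice is that each (u, v) coordinate pair in [0, 1] is mapped to a pixel in the 2D image. Below is a minimal nearest-neighbour lookup over a tiny made-up 2x2 "texture"; conventions for where V = 0 sits differ between packages, so the bottom-row assumption here is just one common choice.

```python
texture = [
    [(255, 0, 0), (0, 255, 0)],       # top row: red, green
    [(0, 0, 255), (255, 255, 255)],   # bottom row: blue, white
]

def sample(texture, u, v):
    """Nearest-neighbour lookup: U runs across the image, V runs up it."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int((1.0 - v) * height), height - 1)  # assume V=0 is the bottom row
    return texture[y][x]

print(sample(texture, 0.0, 1.0))  # top-left texel: red
```

During rendering this lookup happens for every visible point on the mesh, with the (u, v) values interpolated across each polygon from the unwrapped layout.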

The ball on the left displays what happens when an image is applied to a mesh without first unwrapping it. The ball on the right has been UV unwrapped before having the image applied.

Wikipedia. (2015). UV mapping. Retrieved 21 February 2015, from http://en.wikipedia.org/wiki/UV_mapping

Production Pipeline: 3D Modeling

3D modeling marks the start of the production phase in the pipeline. It is the stage that sees the creation of the 3D assets to be used throughout the rest of the production. 3D modelers take the finalised concept art from the pre-production stage and bring it into the third dimension. They can choose from using a variety of techniques to best suit the project requirements – the most relevant to animation being polygonal modeling and digital sculpting.

3D Model of Smaug from Peter Jackson’s “The Hobbit”

Polygonal Modeling

Polygonal modeling involves the placement of vertices (points in 3D space) which are linked by line segments to form a polygonal mesh. Polygonal meshes are favoured because they are very flexible, suitable for most inorganic objects, and fast to render. However, curves can only be approximated with many polygons (the ‘faces’ of the mesh), so the method is generally less suitable for complex organic meshes. Modeling of this kind is often done in programs like 3ds Max, Maya and Blender.
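The vertex-and-face structure described above can be sketched directly. This is a minimal, illustrative representation of a unit quad split into two triangles, similar in spirit to (though far simpler than) what formats like OBJ store.

```python
# A polygonal mesh: a list of vertices (points in 3D space) and a list
# of faces, each face indexing the vertices it connects.

vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
faces = [(0, 1, 2), (0, 2, 3)]  # the quad split into two triangles

def face_edges(face):
    """Yield the line segments (vertex index pairs) outlining a face."""
    for i in range(len(face)):
        yield face[i], face[(i + 1) % len(face)]

for face in faces:
    print(list(face_edges(face)))
```

Sharing vertex indices between faces (both triangles reuse vertices 0 and 2 here) is what keeps a mesh watertight and memory-efficient, and it is also why smooth curves cost so many faces: every step of the curve needs its own vertices.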

The video below demonstrates the process of polygonal modeling.

Digital Sculpting

Digital sculpting technology allows 3D modelers to create assets in a fashion very similar to sculpting clay. Meshes are created intuitively with the assistance of digital drawing tablets, which simulate real sculpting tools and techniques, making the creation of organic models faster and more efficient, with the benefit of increased potential for complex surface detail. However, it is usually unsuitable for inorganic models like buildings, furniture and most inanimate objects. It is often performed in programs like ZBrush and Mudbox.

The video below shows the process involved in digital sculpting.

Curve modelling

A less commonly used modelling technique in animation, curve modeling such as NURBS (non-uniform rational B-splines) consists of surfaces smoothly interpolated between Bezier curves, known as splines. It is most commonly used for industrial and automotive modeling, though it can also effectively model objects that are radial in nature, such as vases and bowls, by revolving a profile curve around a central axis.
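The revolve operation mentioned above is easy to sketch numerically: a 2D profile (radius and height pairs, as for a vase) is swept around the central axis, placing a copy of the profile at each rotation step. The profile values and segment count here are illustrative; a real NURBS revolve produces a smooth parametric surface rather than discrete points.

```python
import math

profile = [(1.0, 0.0), (1.5, 1.0), (0.8, 2.0)]  # (radius, height) pairs

def revolve(profile, segments):
    """Sweep the profile curve around the vertical Y axis."""
    verts = []
    for s in range(segments):
        theta = 2.0 * math.pi * s / segments
        for radius, height in profile:
            verts.append((radius * math.cos(theta),
                          height,
                          radius * math.sin(theta)))
    return verts

verts = revolve(profile, 16)
print(len(verts))  # 16 segments x 3 profile points = 48 vertices
```

This is why revolve-style modelling suits radial objects so well: the artist only draws the silhouette once, and the rotation generates the rest of the surface.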

This image shows the difference between a polygonal mesh and NURBS surfaces.

Digital-Tutors Blog. (2013). How Does a 3D Production Pipeline Work. Retrieved 21 February 2015, from http://blog.digitaltutors.com/understanding-a-3d-production-pipeline-learning-the-basics/

Sketchup-ur-space.com. (2015). Theory of 3D modeling. Retrieved 21 February 2015, from http://www.sketchup-ur-space.com/2014/aug/theory-of-3D-modeling.html

Slick, J. (2015). 7 Common Modeling Techniques for Film and Games. About.com Tech. Retrieved 21 February 2015, from http://3d.about.com/od/3d-101-The-Basics/a/Introduction-To-3d-Modeling-Techniques.htm

Production Pipeline: Pre-Production

Pre-production is the first stage of the 3D animation production pipeline, where ideas are explored, communicated, improved upon, and finalised. Its purpose is to ensure there is a solid plan for the production, and that the team working on a specific animation understands and is proficient in what they’re doing. It involves cost and time management, and research into subjects specific to the project. The pre-production process reduces the risk of issues and flaws further down the pipeline, and makes for a more streamlined and resourceful process.

Concept Art from Walt Disney Animation Studio’s “Big Hero 6”

Concept Development – Exploration of different visual ideas

At the beginning of every project, there is an idea of what it should communicate to the audience and how it goes about expressing those messages. Often the director or client presents a written script and character and scene descriptions that the animation team turns into concept art: loose, rough images that explore the ideas presented in the written material. Research into different styles and methods is also integral. The point of this stage is not to create highly refined illustrations, but to create as many ideas as possible to help decide on and refine the visual direction of the animation. The director or client decides on the concepts and designs they are happy with, which are then finalised for use during production.

Character concept art of Po from Dreamworks Animation’s “Kung Fu Panda”

Storyboarding – Synthesizes writing and visuals, introduces cinematography

Similar to the concept development step, the point of storyboarding is to visualise the ideas of the director or client; however, it is more focused on plot, acting and cinematography, and serves to unify the visual and written elements of the production. It is often presented as small, simple, sequential sketches of a scene, referred to as key frames, and details plot points, character interaction and cinematic techniques. Colours and design are often unimportant at this stage. The purpose of storyboarding is to create a more solid understanding of the story as a whole and to make the things that need to be added, changed, or cut more obvious. It will likely change over time through passes, in which the director or client provides critique to ensure it is to their liking and makes sense to the viewer.

Storyboards from Pixar’s “UP”

Animatic – Combines visuals, timing, cinematography and sound

An animatic is usually made after the storyboard has been approved. While still loose in appearance, it provides greater insight into how the final film will look and feel with motion and timing. Animatics generally involve many sketches of different camera shots edited together and presented sequentially, to give an idea of how long a scene takes and how well it works within the whole animation. They are often presented with voice acting and a rough soundtrack to ensure visual and auditory elements work well together. You will commonly find animatics as deleted scenes in the special features of an animated film DVD, as many amendments are made here to reduce wasted time and resources in the production stage.

Below is a series of animatics from Dreamworks Animation’s “How to Train Your Dragon” depicting scenes that did not make it into the final film.

Animation World Network. (2015). Producing Animation: The 3D CGI Production Process. Retrieved 19 February 2015, from http://www.awn.com/animationworld/producing-animation-3d-cgi-production-process

Slideshare.net. (2014). Pre-production, Production & Post-production Process in 3D Animation. Retrieved 19 February 2015, from http://www.slideshare.net/Veetildigital/pre-productionpost-process-in-3d-animation

Storyofanimation.blogspot.com.au. (2015). The Story of Animation: Pre-Production. Retrieved 20 February 2015, from http://storyofanimation.blogspot.com.au/p/concept-art.html

Wikipedia. (2015). Storyboard. Retrieved 20 February 2015, from http://en.wikipedia.org/wiki/Storyboard#Animatics

Hello and Welcome

To my research and development blog for my freshly started Bachelor of Animation course at SAE, Brisbane! Expect to see the progress on my assignments for MDU115, and maybe more if I am feeling productive. Please brace yourselves for an excess of words! This is what HSC Advanced English has done to me. I will never be the same.