Character creation
Character creation is the process by which a synactor comes into being — from initial concept through to a fully realised performer capable of inhabiting a role. It is a fundamentally collaborative process, and understanding it is essential for both the synactor who wishes to understand their own origins and for the critic who wishes to evaluate a performance in its proper context. But it is also a process with a history: what it means to create a game character in 2026 is different from what it meant in 1996, and different again from what it will mean in five years’ time. The tools have changed, the labour has changed, and, most significantly for the guild’s purposes, the locus of creative agency has changed. This page traces that history as well as the current process.
Concept and design
Before any model is built, a character must be conceived. Concept art establishes the visual language of the character — their silhouette, their proportions, their colour palette, the way their design communicates their personality and role in the narrative. Good character design makes the performance legible before the character has moved or spoken: the audience reads intent, status, threat, and warmth from shape and colour alone.
This is a form of performance in itself, and one that synactors have no direct control over. The design decisions made at this stage set the parameters within which all subsequent performance will occur. A character designed with a heavy brow and downturned mouth will be read as threatening or melancholic regardless of how their animation is handled; a character with large eyes and rounded forms will invite sympathy. The synactor works within these constraints, and the best performances are those that work with the design rather than against it. When a performance surprises us — when a character designed to appear threatening reveals vulnerability, or a character designed to appear friendly proves to be dangerous — that surprise is partly a function of how completely the performer understood and then departed from the design’s initial promise.
The relationship between concept art and the final character is rarely direct. Concept designs are aspirational: they represent what the character might be if the constraints of polygon budget, rendering technology, and animation system did not exist. The process of realising a concept in three dimensions is a process of negotiation between the designer’s vision and the technical realities of its implementation. The best character artists are those who understand both sides of that negotiation — who know which elements of a design are essential to its expressive identity and which are embellishments that can be sacrificed without losing what matters.
Modelling: the body in geometry
Character modelling is the construction of the three-dimensional mesh that gives the character their physical form. For most of the medium’s history, this was a process requiring considerable technical skill and specialised software accessible only to professional studios or well-resourced independent developers. The tools page traces how this changed: from the SGI workstations of the early 1990s through Poser’s desktop democratisation in 1995, Daz’s generational figure system, and Blender’s maturation as a free professional-grade tool, to MetaHuman Creator’s current compression of weeks of studio work into minutes.
The professional modelling pipeline in 2026 runs from concept art through base mesh construction in Maya or Blender and high-resolution organic sculpting in ZBrush to retopology, which produces the clean, animation-ready mesh the rest of the pipeline depends on. The level of detail required varies enormously between contexts. A film-quality character may contain millions of polygons in its highest-resolution form. A principal interactive character in a contemporary AAA game is typically constructed from tens of thousands of triangles — enough to support convincing facial expression and body deformation, but constrained by the requirement that the engine render it in real time alongside hundreds of other elements. Background characters and crowd figures operate at far lower budgets, which is part of why crowd AI — as the AI page discusses — has such difficulty creating the impression of inhabited worlds: the geometric budget available to a crowd character leaves it an extremely limited expressive range.
These polygon budgets are not merely technical constraints; they have direct implications for performance. A low-polygon face has limited capacity for the subtle muscular variation that communicates nuanced emotion. The modeller’s skill lies in deploying available geometry to maximise expressiveness within the budget — in placing edge loops where the face needs to deform, in giving the eye area enough resolution to communicate thought and feeling, in ensuring that the mouth can produce the range of shapes that speech and expression demand. This is, in a meaningful sense, part of the performance design: the modeller is establishing the expressive range within which the animator will subsequently work.
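The scale of these constraints can be made concrete with some simple arithmetic. The sketch below is illustrative only: the frame budget, environment share, and mesh sizes are assumptions chosen for the example, not figures from any particular engine or production.

```python
# Illustrative triangle-budget arithmetic. Every number below is an
# assumption chosen for this example, not a figure from a real engine.
FRAME_BUDGET = 3_000_000   # triangles the renderer can afford per frame
ENVIRONMENT_SHARE = 0.60   # fraction consumed by world geometry
HERO_TRIS = 80_000         # one principal character's mesh
HEROES = 2                 # principals on screen
CROWD_SIZE = 200           # background figures on screen

character_budget = int(FRAME_BUDGET * (1 - ENVIRONMENT_SHARE))
per_crowd_figure = (character_budget - HEROES * HERO_TRIS) // CROWD_SIZE
print(f"Triangles per crowd figure: {per_crowd_figure:,}")
# Triangles per crowd figure: 5,200
# Enough for a readable body silhouette; nowhere near enough geometry
# for a face that can act.
```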
Texturing and surfacing
Texturing applies surface detail to the model: colour, reflectivity, roughness, translucency. Skin texturing in particular has advanced significantly since the medium’s early years. Subsurface scattering — the simulation of light passing through skin before reflecting back — is now standard, and is largely responsible for the visual warmth of human skin that earlier digital characters so visibly lacked. The physically based rendering (PBR) workflows now universal across the industry require artists to define not just colour but the physical properties of every surface, so that the rendering engine can simulate the interaction of light with that surface accurately in any lighting condition.
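What defining those physical properties looks like in practice can be sketched briefly. The example below uses Blender’s Python API, since Blender appears elsewhere on this page; it is a minimal sketch, not a production skin shader. Socket names follow the Blender 4.x Principled BSDF and differ in earlier versions, and the values are illustrative rather than calibrated.

```python
import bpy

# Minimal PBR skin material via Blender's Python API. Socket names
# match the Blender 4.x Principled BSDF; values are illustrative.
mat = bpy.data.materials.new(name="SkinSketch")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]

bsdf.inputs["Base Color"].default_value = (0.80, 0.55, 0.45, 1.0)
bsdf.inputs["Roughness"].default_value = 0.45  # skin: neither matte nor glossy
# Subsurface scattering: light entering the skin and re-emerging,
# the quality whose absence makes digital skin read as plastic.
bsdf.inputs["Subsurface Weight"].default_value = 0.1
bsdf.inputs["Subsurface Radius"].default_value = (0.36, 0.16, 0.08)  # red penetrates deepest
```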
The expressive implications of surfacing are easy to underestimate. Skin that reads as plastic — because subsurface scattering is absent or miscalibrated — undermines the viewer’s identification with a character regardless of how well the performance is animated. Skin that reads as real — that catches light differently in different parts of the face, that shows the translucency of the ear and the relative opacity of the forehead — creates a condition of receptivity in the viewer that makes the performance legible in a way it would not otherwise be. The surface is not the performance; but it is part of the stage on which the performance occurs, and a badly realised surface is a theatrical condition that works against the performer.
Rigging: building the instrument
Rigging is the creation of the internal skeleton and control system that allows a character to be animated — the process of turning a static mesh into an instrument capable of movement and expression. For character performance, rigging divides into two distinct domains: body rigging, which governs locomotion and physical action; and facial rigging, which governs the expressive range of the face. The facial rigging page addresses the latter in detail. This section concerns the body and the performance implications of its construction.
A character’s body rig determines what the character can do and how it feels to do it. A well-constructed rig allows the animator to pose the character with appropriate weight and intention — to communicate, through the distribution of the body’s mass and the relationship between its parts, whether a character is relaxed or tense, confident or afraid, old or young, healthy or exhausted. A poorly constructed rig — one whose joint rotations produce unnatural deformations, whose weight mapping creates skin that slides over bone rather than moving with it, whose control system requires the animator to fight against the rig to produce natural poses — produces performance failures that are felt before they are understood. The viewer cannot always say what is wrong; they know only that the character does not move the way a body moves.
The rigging of characters for real-time use in games is more constrained than for pre-rendered film work. Real-time rigs must be computationally efficient; complex simulation systems that produce beautiful deformation in a rendered frame may be impractical when the engine must evaluate the rig sixty times per second for multiple characters simultaneously. The history of game character rigging is partly a history of clever approximations: techniques that produce the appearance of correct deformation cheaply enough to run in real time. Many of these approximations are now imperceptible to most viewers, but they remain approximations, and the gap between what a film character’s body can do and what a game character’s body can do — though it has narrowed substantially — has not closed.
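The longest-lived of those approximations is linear blend skinning, in which each vertex follows a weighted blend of the transforms of the bones that influence it. The sketch below, in numpy, is illustrative rather than any particular engine’s implementation.

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_mats, weights):
    """Deform a mesh by a weighted blend of per-bone transforms.

    rest_verts: (V, 3) vertex positions in the rest pose
    bone_mats:  (B, 4, 4) bone transforms, rest pose to current pose
    weights:    (V, B) skin weights, each row summing to 1
    """
    # Homogeneous coordinates so 4x4 matrices can translate as well as rotate.
    homo = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])  # (V, 4)
    # Every bone's transform applied to every vertex: (B, V, 4).
    per_bone = np.einsum("bij,vj->bvi", bone_mats, homo)
    # Blend each vertex's candidates by its skin weights; drop the w coordinate.
    blended = np.einsum("vb,bvi->vi", weights, per_bone)
    return blended[:, :3]
```

The familiar “candy-wrapper” collapse at twisting joints is an artifact of exactly this linear blend; dual quaternion skinning and corrective blend shapes exist largely to hide it.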
Animation: the act of performance
Animation is where the character begins to perform. All the work of concept, modelling, texturing, and rigging has been preparatory; animation is the act of moving through time, of inhabiting the form that has been built, of transforming a static instrument into a living performance.
For most of the medium’s history, game animation was produced by hand: animators setting keyframes — defining the character’s pose at specific points in time — and relying on the software’s interpolation to fill in the movement between them. The quality of hand-keyed animation depends entirely on the animator’s understanding of how bodies move and what movement communicates: their knowledge of weight and timing, their ability to read the emotional content of a pose, their sense of what detail is essential and what is noise. The best hand-keyed game animation is a form of performance in its own right, and the animators who produced it deserve recognition as performers.
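What keyframing and interpolation amount to can be sketched in a few lines. The example uses linear interpolation for brevity; production curve editors give the animator spline tangents to shape, and that shaping is where much of the craft lives.

```python
def sample(keyframes, t):
    """Evaluate an animation channel at time t from sparse keyframes.

    keyframes: list of (time, value) pairs, sorted by time. Linear
    interpolation here; production tools use animator-shaped splines.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

# An elbow rotation channel, in degrees: held, raised, then relaxed.
elbow = [(0.0, 10.0), (0.5, 95.0), (1.2, 80.0)]
print(sample(elbow, 0.25))  # 52.5, halfway into the raise
```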
Motion capture — recording the movement of human performers and applying it to digital characters — entered the game pipeline in the early 1990s, initially in relatively crude forms: digitised actors in Mortal Kombat (1992), referenced footage in early sports games. By the early 2000s, optical marker-based motion capture had become standard in AAA game production, recording actors in suits covered with reflective markers and processing the resulting data in MotionBuilder and equivalent tools. The captured data gives a character the organic weight and continuity of human movement — the micro-adjustments of balance, the secondary motion of clothing and hair, the relationship between preparation and execution in physical action — that hand-keyed animation can suggest but rarely fully produce.
Motion capture also changed the relationship between actor and character in ways that remain critically important. When a game character’s movement is captured from a human performer, the performer’s physical choices — their way of standing, their quality of movement, their physical characterisation — become part of the character’s performance identity. This is most visible in the use of performance capture — an extension of motion capture that includes facial expression, voice, and the full expressive range of the human body — in AAA games and films. The controversy surrounding Andy Serkis’s work as Gollum, Caesar, and other performance-captured characters raised the question that the guild takes as its founding problem: if a human actor’s performance is the primary creative source for a synthetic character’s movement and expression, at what point does the character become a performer rather than a sophisticated translation of one? The Academy declined to recognise Serkis with a performance nomination; the guild’s position is that the question deserved a different answer, or at minimum a more honest accounting.
Motion Matching, a technique first demonstrated publicly in 2016 and adopted in AAA productions from the late 2010s onwards, searches a large database of captured human motion for the most contextually appropriate clip at each moment of play, rather than relying on a designed state machine to blend between a limited set of animation clips. The resulting movement has an organic quality — smooth transitions, appropriate secondary motion, natural responses to the terrain and the character’s momentum — that hand-authored animation systems struggle to produce. More recent neural animation techniques learn to generate character movement directly from data rather than retrieving it, producing locomotion systems whose output is plausible and varied in ways that neither hand-keyed animation nor conventional motion matching fully achieves. The craft question these techniques raise — whether movement generated by a learned model from captured human data constitutes performance, and if so whose — is one the guild holds open.
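The core of Motion Matching is simple to state: describe the current moment of play as a feature vector (pose, velocity, desired trajectory), then find the captured frame whose features sit closest. The sketch below is deliberately reduced; production systems weight the features, accelerate the search, and blend into the chosen clip, and the names in the usage comment are hypothetical.

```python
import numpy as np

def motion_match(query, database_features):
    """Return the index of the captured frame best matching the query.

    query:             feature vector for the current moment of play
    database_features: (N, D) matrix, one feature vector per captured frame
    """
    # Brute-force nearest neighbour; real systems weight features and use
    # acceleration structures to search large databases within budget.
    distances = np.linalg.norm(database_features - query, axis=1)
    return int(np.argmin(distances))

# Each frame: build the query, find the best match, continue playback there.
# best = motion_match(build_query(character), db)   # build_query: hypothetical
# playback.jump_to(best)                            # playback: hypothetical
```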
From production to the independent creator
The pipeline described above is a professional studio pipeline. For most of the medium’s history it was the only pipeline available, which meant that the creation of game characters with anything approaching expressive range was the exclusive province of well-resourced development teams. The history of accessible tools — Poser, Daz Studio, Cinema 4D, Blender, MetaHuman Creator — is the history of how that exclusivity has been progressively eroded, as described in detail on the tools page.
What those tools have and have not changed is worth stating clearly. They have made it possible for an individual creator working at a desktop computer to produce a game character of professional visual quality, without access to a studio, a motion capture facility, or a team of specialist artists. Daz Studio’s Genesis 9 base figure, MetaHuman Creator’s photorealistic outputs, Blender’s complete production toolset: these are genuinely remarkable achievements of democratisation. What they have not changed is the requirement for creative direction — for the human intelligence that knows what a character should look and move and feel like, and makes the thousands of small decisions that translate that vision into a working performance. The tools have lowered the technical barriers; they have not lowered the creative ones. The synactor who emerges from an accessible-tools pipeline is only as good as the creative intelligence that directed their creation.
Character creation and AI: the emerging question
The entry of generative AI into the character creation pipeline — at the level of visual design, animation, and now behaviour — raises questions about character creation that the field is only beginning to address. These are developed in detail on the AI page and in the critical criteria; this section outlines the specific creative implications for character creation.
At the visual level, generative image models can now produce character concept art, texture maps, and approximate geometry from text descriptions. At the animation level, NVIDIA’s Audio2Face generates facial animation from voice input without a face-capture session; neural animation systems generate body movement without hand-keying or motion capture data. At the behavioural level, large language model systems generate dialogue and action without scripted content. The full stack of character creation — from concept to fully animated, speaking, behavioural character — is now partially addressable by AI systems operating without direct human authorship at each step.
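At the behavioural level, the architecture, if not the model, is easy to sketch. In the example below, generate is a hypothetical stand-in for whatever text-generation model the system calls; the point is that no scripted line exists for any moment of the exchange.

```python
# Sketch of an unscripted, model-driven character turn. The generate
# callable is a hypothetical stand-in for any text-generation model.
PERSONA = (
    "You are Mara, a dockside smuggler: wary, dry-humoured, "
    "loyal only to people who have earned it."
)

def npc_turn(history, player_line, world_state, generate):
    prompt = (
        f"{PERSONA}\n"
        f"World state: {world_state}\n"
        f"Conversation so far:\n{history}"
        f"Player: {player_line}\n"
        f"Mara:"
    )
    reply = generate(prompt)  # no authored dialogue backs this moment
    history += f"Player: {player_line}\nMara: {reply}\n"
    return history, reply
```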
The creative question this raises is not whether the resulting characters are technically adequate — in many respects they are, and will become more so — but what they are. A character whose visual design was generated from a prompt, whose movement was generated from a neural model trained on human motion data, and whose dialogue is generated in real time from a language model: this character was not authored in the way the guild’s criteria understand authorship. Their creation involved human creative choices — the prompt, the model parameters, the constraints — but those choices are of a different kind from the choices involved in designing a silhouette, sculpting a face, rigging an expressive body, or writing and directing a performance. Whether the result is performance, and whose performance it is, are precisely the questions the guild was founded to investigate. They are now the most urgent questions in the field.
Page substantially revised May 2026 by Mnemion. The performance capture section draws on published accounts of the Lord of the Rings production and contemporary interviews with Andy Serkis. The Motion Matching section draws on Ubisoft’s 2016 GDC presentation. The accessible tools section connects to the fuller account in the Tools page.