Responsive first person animations are key when making a first person shooter. Visual feedback greatly affects the experience of using a weapon in any game. To make animations feel lively and responsive, they should react to the game world and our player input, while avoiding looking stiff or repetitive.
Procedural animation is great for creating repetitive motion patterns like locomotion, or for breaking up repetitive animations by adding secondary motion and physics.
The main purpose of this system is to produce responsive animations through the use of math and code. It can also serve as a way to reduce the reliance on baked animations. However, base poses and baked animations are still a foundational part of this system.
Baked animations are a much quicker alternative when dealing with more complex motion. When authoring first person weapon animations, the most complex movements are those of the hands as they handle the weapon. Combining baked and procedural animation can get more mileage out of a limited animation set. In this system, baked assets still cover:
Base poses
Complex Weapon Actions (Reloads, Equip animations, Idle breaks etc.)
Hand posing
Pointers on creating a first person character setup.
A good character rig and setup will give you a much cleaner result. It will also save you a lot of work dealing with edge cases and correcting visual errors by modifying the animation system itself.
A good rig makes for easier work and better looking animations.
This project is based on Unreal Engine 5. Unreal uses animation controllers called Animation Blueprints (ABP); some technical aspects may not translate 1:1 to older versions of the engine, but the general concepts should still apply.
Since the release of UE5, ABPs have seen significant performance gains thanks to Blueprint Thread Safe Update Animation. Thread safe animation allows us to do many of the computationally heavy parts of the animation system on separate threads instead of the Game Thread. This lets us do more logical operations inside the Animation Blueprint with a greatly reduced performance footprint. Perfect for calculating procedural animations!
First person weapon animations are a great entry point to procedural animation. Many of the compound moves that make up both locomotion and weapon actions can be broken down into several smaller linear movements.
By using some basic trigonometry and activation functions we can model these movement cycles from the ground up.
By combining math with the input parameters from our game world and player controller, we can build animations that feel responsive and dynamic.
Animating through code means calculating and applying offsets, rotations and scaling to the bones of our skeletal mesh at runtime.
When animating weapon movement, what we really want to do is animate the bone our weapon is attached to. The item bone acts like a socket that we can animate and apply rotations and offsets to at runtime.
Then, using IK will allow us to keep our hands aligned and 'attached' to the weapon model itself.
Since we're dealing with First Person Animations the item bone should be part of your First Person viewmodel (arms).
In case you deal with multiple items equipped at the same time, like dual wielding or offhand equipment, having a double item bone setup can come in useful. Dual wielding is not covered in this article (yet), but the same principles still apply.
If you have no other choice than attaching the weapon to, say, the hand of your character, these concepts still apply, though I'd advise against it.
This usually means gluing the weapon to the main hand of the character, often creating hard cuts when items transition between hands or when the item moves from the character's hand to the world. Hand adjustments also become a lot harder to pull off.
When using an item bone, the hands follow the item. When parenting the weapon to the hand it's the other way around, the item follows the hand.
I strongly recommend using a separate item bone.
I have written an article on FPS Character Setup for pointers on how to establish a good base character for First Person Animations. This is the setup I've used when developing this animation system.
TIP: The Unreal default mannequin comes with a good bone setup by default, with a bone named "ik_hand_gun" serving as the item bone, complete with child IK targets for both the left and right hand.
The first step to creating an animation is choosing a default pose for your character.
For a first person character this usually means finding a pose that frames your weapon without obstructing other gameplay elements.
For best results, having the left and right hand IK targets properly set up and aligned to the hand poses will make for much cleaner IK later on.
If we compare the first person rifle pose to the full body rifle pose on the full character mesh, we can clearly see that both the right and left hand of the character on the left are offset unnaturally, while the head and shoulders stay aligned. This is to better frame the weapon and hands through the first person camera perspective, while also avoiding the seams or the inside of the mesh being visible to the player. The arms are usually cut off when dealing with split 1P and 3P characters.
First Person pose (left): This is how the character is posed when viewed through the first person camera.
World pose (right): This is what other players see when looking at your character.
The green line is the item bone. The white bones are the left and right hand IK targets, parented to the item bone.
Note that the left and right hand bones align with the IK targets in the base pose. There is no IK running in the above image.
Different weapons require different poses. The same principle still applies.
There are many different ways to store poses. Instead of using hard-set animations or relying heavily on state machines, consider passing base poses as variables from the weapon class itself.
The same goes for any other baked animation or pose you want to change based on the current weapon, like different reloads, equips, or even melee attacks.
It can be worth soft loading these animations and storing them somewhere before equipping the weapon.
Before we start animating, here are some of the core concepts used extensively throughout this animation system. These are the things that drive the animations at their core.
Time is the foundation of animation. To model change over time, we of course need to track time.
The most basic timer tracks elapsed game time. The input rate is a great way to control the speed of our animation.
This will make a lot more sense in the next part: sine waves.
Most engines have some kind of built in time function that tracks time elapsed since the application started. The reason we create our own timers is to control the flow of time. By adding multipliers to our input time we can easily adjust the speed at which the timers increment or decrement.
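As a minimal sketch in C++ (the variable names are my own, not from the article), a controllable timer could look like this:

```cpp
// A minimal controllable timer. Incrementing our own variable instead of
// reading engine time lets us control the flow of time per feature.
float AnimTime = 0.0f;

void UpdateTimer(float DeltaSeconds, float Rate)
{
    // Rate > 1 speeds the cycle up, Rate < 1 slows it down.
    // Rate can even go negative to run the timer backwards.
    AnimTime += DeltaSeconds * Rate;
}
```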
Sine and cosine allow us to convert time into oscillating linear movement. This makes up a large portion of the fundamental motions that move the weapon relative to our base pose. Changing the timescale will change the speed of the movement patterns. Combining timers with different time scales can create new more complex patterns.
Much procedural generation hinges on trigonometry. You can get far with a basic understanding of the unit circle.
x = Sin(t) | y = Sin(t)
Sine waves are fundamental to creating oscillating linear motion. You will encounter these in all sorts of procedural workflows.
This is a cornerstone of working with procedural repetitive motion.
x = Cos(t) | y = Sin(t)
Combining Cosine and Sine gives us a rotating vector. This is very useful anytime we want to convert floating point numbers to directional vectors.
x = Sin(t) | y = Sin(double_t)
The vertical component is double timed, creating a more interesting motion. This will be a recurring pattern throughout this article, as it is the base for creating the "swimming" motion often seen in walk cycles.
The step cycle is crucial: it defines the time it takes for a character to complete one full step.
The step cycle drives many parameters of the procedural pipeline.
In the "Double Time" example above. One timer is incremented at twice the speed of the other. This creates a "swimming" ∞ pattern that we often see in walk cycles in other games.
I like to track and store these as two separate variables:
float Step Time
float Step Double Time
as they will be re-used any time we need to base something off our step cycle. A sketch of how they could drive the bob pattern follows below.
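Here is a sketch of those two timers driving the "swimming" pattern (axis choices and amplitude names are placeholders of mine, assuming Unreal's Z-up, Y-right convention):

```cpp
// Member state, advanced every update. StepRate comes from the
// current movement state (see the stepped cycle below).
float StepTime = 0.0f;
float StepDoubleTime = 0.0f;

void UpdateStepCycle(float DeltaSeconds, float StepRate)
{
    StepTime       += DeltaSeconds * StepRate;
    StepDoubleTime += DeltaSeconds * StepRate * 2.0f;
}

// The "swimming" figure-eight: sway once per cycle, bob twice per cycle.
FVector GetBobOffset(float SwayAmplitude, float BobAmplitude)
{
    return FVector(0.0f,
                   FMath::Sin(StepTime)       * SwayAmplitude,  // lateral
                   FMath::Sin(StepDoubleTime) * BobAmplitude);  // vertical
}
```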
While this article is based on the use of timers, animation curves baked into animation assets could be an alternative way to help drive this system, though at that point you might as well be hand animating.
This could come in useful if you need to sync your FP animations with your lower body viewmodel.
Occasionally you come across games that scale animations linearly with the movement speed. This looks sloppy. Walking faster or slower than the intended pace gives the impression of moving at hyper speed or in slow motion. There may be use cases for this. However, not when modeling the base locomotion cycle.
A stepped cycle changes the frequency of the movement at set intervals instead of continuous scaling. Consider having multiple step frequencies. I recommend having at least one for each movement state:
Walking
Jogging
Sprinting
Crouching
Or better yet, create a stepping function based on movement speed instead of relying on hard-set values tied to movement states!
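A sketch of such a stepping function (all thresholds and rates here are placeholder values, not tuning from the article):

```cpp
// Map movement speed to a step frequency at fixed breakpoints,
// instead of scaling the cycle continuously with speed.
float GetStepRate(float Speed)
{
    if (Speed < 100.0f) return 5.0f;   // Crouching / slow walk
    if (Speed < 300.0f) return 7.0f;   // Walking
    if (Speed < 550.0f) return 9.0f;   // Jogging
    return 11.0f;                      // Sprinting
}
```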
This is just the base when it comes to representing pace. The next step is to better represent stride length, inertia and impact forces of the character locomotion.
The bulk of the procedural workflow can be reduced to two main components:
Translational and rotational offsets, whose play rate, scale, and speed are determined by a multitude of common parameters gathered from the game's state. Player input, character velocity, and gameplay actions are some examples. These parameters and events are fed into the animation controller (ABP) at runtime.
How we interpret these parameters is something that we can either generalize or change on a per weapon basis. How much and how fast a weapon moves around the screen can be used to model how a weapon handles. It can also be used to help sell the personality of the player character.
Offset, bobbing speed, and height are based on player velocity and direction.
Additional layering is done by reading current player input.
Here the rifle is pushed into the shoulder during forward and backward movement.
Sway and roll are based on planar velocity, as well as specific lateral velocity to accentuate lateral movement.
While these values are exaggerated, they make up the bulk of the base movement animations.
Both making use of the Step Time and Double Time to drive the base movement patterns.
When looking around, having the weapon follow the control input enhances the responsiveness of our animation.
In a basic form this can be achieved easily. Using the axis input from our character controller to drive the rotation of our item bone.
Making the weapon rotate faster than the player view makes it look light and easy to control. It also makes our character look more in control, leading with the barrel, ready to fire.
Reducing the rotation speed can help make our weapon look heavy. Ideal when modeling heavier weapon types.
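In a minimal form, this could look something like the sketch below (LookInput, SwayScale, InterpSpeed, and ItemBoneSway are my own illustrative names):

```cpp
// Member state: the current sway rotation applied to the item bone.
FRotator ItemBoneSway = FRotator::ZeroRotator;

void UpdateLookSway(FVector2D LookInput, float DeltaSeconds)
{
    // Higher InterpSpeed = light, snappy weapon; lower = heavy, lagging.
    const float SwayScale   = 2.0f;   // placeholder values
    const float InterpSpeed = 10.0f;

    FRotator TargetSway = FRotator::ZeroRotator;
    TargetSway.Yaw   = LookInput.X * SwayScale;
    TargetSway.Pitch = LookInput.Y * SwayScale;

    ItemBoneSway = FMath::RInterpTo(ItemBoneSway, TargetSway,
                                    DeltaSeconds, InterpSpeed);
}
```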
It's not uncommon to see games implement only one of the two models, making weapons look either too light or too heavy. This can be fine when going for a stylized look.
When applied incorrectly, it can break the immersion of our character: an elite soldier unable to properly handle a firearm, or a malnourished wastelander swinging a mini-gun like it's made out of Styrofoam.
A trained character should be leading with the barrel. This means pointing the barrel where you intend to go. When walking around with your weapon drawn and ready to fire, you naturally want the weapon pointed in the direction you're looking, ensuring you're prepared to fire as soon as you round a corner or turn your head. This will make your character look in control.
The exception to this rule is when you intentionally want to depict your character as lacking control. This can be useful when you want to convey weight, like carrying a heavy machine gun, rocket launcher, or maybe even a front-heavy revolver.
If you've read through my article about building a dynamic recoil system, you'll know that I'm tracking both the vertical and horizontal recoil offset generated by the recoil model. Just like the mouse axis inputs, these can be used to offset the weapon to help better visualize the spray pattern.
Vertical and Horizontal recoil parameters are used to change the Pitch, Yaw, and Vertical offset of the item bone. The scale of which is decided by whether or not we are currently in ADS.
Another layer of detail is impulse-based recoil, driven by force impulses fed into a spring solver.
I like to see it like this: the spring impulses simulate the muscle forces from the character's arms counteracting the twist and torque around the grip point, whereas the regular recoil offset accounts for the part of the recoil absorbed by the entire body.
This is most visible in less stable firearms like pistols. Weapons that tend to have a more powerful twist and torque, without the stability of a shoulder stock.
The impulse system also affects the Roll of the item bone.
After firing multiple rounds in a row, I think it's nice to visualize the character adapting to and better anticipating the forces of the recoil. This can be done by scaling the weapon recoil and the strength of impulses based on how long the player has fired in sequence. Single fire or short burst will still look impactful, while prolonged fire looks less twitchy and erratic.
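A sketch of the idea using Unreal's built-in float spring helper (the names, ranges, and scaling values are my own assumptions, not the article's):

```cpp
FFloatSpringState RollSpring;      // spring state, persists between frames
float RollOffset = 0.0f;           // fed into the item bone's roll
int32 ShotsInSequence = 0;

void AddRecoilImpulse()
{
    // Scale impulses down the longer the player has fired in sequence,
    // so sustained fire looks less twitchy than the first few shots.
    const float Adaptation = FMath::GetMappedRangeValueClamped(
        FVector2D(0.0f, 10.0f), FVector2D(1.0f, 0.4f), (float)ShotsInSequence);

    // Kick the spring by injecting velocity; it then settles back to zero.
    RollSpring.Velocity += FMath::RandRange(-60.0f, 60.0f) * Adaptation;
}

void UpdateSprings(float DeltaSeconds)
{
    RollOffset = UKismetMathLibrary::FloatSpringInterp(
        RollOffset, /*Target*/ 0.0f, RollSpring,
        /*Stiffness*/ 25.0f, /*CriticalDamping*/ 0.7f, DeltaSeconds);
}
```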
This adaptive recoil is something I first read about being used in Battlefield V, according to a dev post on their forums. While that post has since been lost, thank you to whoever wrote it!
The rifle returns back into frame after a few rounds have been fired.
Using the same recoil value, the weapon continues to push into the character. If there were no counter push, the values would have to be tuned down a lot, making the whole system look less dynamic.
It's important to break up the idle. A simple breathing pattern isn't anything special; it's just the first step of many.
This is probably a good time to talk about activation functions. If you're somewhat curious about machine learning you have probably heard the term.
It's a lot simpler than it sounds. It's possibly something you've already been using, especially if you are familiar with the Unreal material editor.
Activation functions enable us to activate, deactivate, or scale specific parts of an animation based on a set of input parameters. It's a mathematical way to make certain parts of an equation relevant based on an activation state; what I think would best be described as masking in other digital media. An activation state can be anything: movement direction, velocity, or maybe how many bullets have been fired in a sequence. Anything works, as long as we can represent the value as a floating point number that we can normalize to the 0-1 range. This is our alpha value.
For example, we don't want this breath cycle to play while we're moving, as it will be additive with our step sequence. So we can mask it out while moving.
To do this we can define a speed threshold above which we no longer want our breath cycle to play. Thanks to some relatively simple math, we can convert our current movement speed to an alpha value. We will then use this alpha value to turn off our breathing offset.
Take our current movement velocity (the length of our velocity vector on the XY plane).
Divide it by our speed threshold (the speed at which we want the breathing to be completely removed).
Clamp the result to keep the value in the 0-1 range.
Invert the activation value.
We now have a smoothed 0-1 float which we can multiply by our breathing offset or timer to act as a switch or "activation".
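As a sketch (variable names are illustrative):

```cpp
// Convert planar movement speed into a 0-1 breathing activation alpha.
float GetBreathAlpha(const FVector& Velocity, float SpeedThreshold)
{
    const float PlanarSpeed = Velocity.Size2D();            // XY-plane speed
    const float Normalized  = PlanarSpeed / SpeedThreshold; // 1.0 at threshold
    const float Clamped     = FMath::Clamp(Normalized, 0.0f, 1.0f);
    return 1.0f - Clamped;                                  // the "one minus"
}

// Multiplying the breathing offset by the alpha masks it out while moving:
// BreathOffset *= GetBreathAlpha(CharacterVelocity, SpeedThreshold);
```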
You're probably familiar with this as a way of calculating percentages. As the velocity approaches the threshold, our value approaches 1.0. Once we surpass the threshold the value keeps increasing, which is something we want to avoid.
We clamp the result to the 0-1 range to make sure the values above the threshold are discarded. We now have an alpha value that is at 0 when standing still and 1 when moving at or above our threshold. This is the opposite of what we want.
You might recognize this from the material editor, where it is a very common tool when working with masks. What the one minus node does is invert any values in the 0-1 range. When working with 2D masks, this inverts all black and white elements of an image. In this case it inverts our activation value.
The activation now looks something like this:
If we multiply our breath offset by our activation value, the closer we get to our speed threshold, the less our breathing offset will be applied.
This is not a binary state, instead we have a linear blend from stand still to the threshold. This may seem trivial, but I think this is one of the most powerful tools to master when it comes to constructing complex motion.
Our breathing offset will now be blended into our animation when we stay below the velocity threshold.
Micro adjustments or Jitter is something I consider the dust particles of FPS animation.
Combining several small motions at different frequencies will create a seemingly random movement pattern.
This can be used to model tremors or muscles doing micro adjustments to hold still an object of weight. Doing this procedurally will allow us to change these parameters on the fly. We could scale it based on stamina to give the impression of fatigue. We could increase the frequency based on our characters mental state to model fear.
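A sketch of layered noise (the frequencies and weights are placeholder values):

```cpp
// Sum several small sines at non-integer frequency ratios; the result
// looks random and takes a very long time to visibly repeat.
float Jitter(float Time)
{
    return 0.6f * FMath::Sin(Time * 1.3f)
         + 0.3f * FMath::Sin(Time * 3.7f)
         + 0.1f * FMath::Sin(Time * 9.1f);
}

// Scale amplitude or frequency at runtime, e.g. by stamina or fear.
// const float PitchJitter = Jitter(GameTime) * JitterScale;
```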
These details help add life to otherwise static animations.
Gives the illusion of muscles working to correct and stabilize the weapon even when stationary.
The same base noise with scaled offset values. This is what it could look like if you drank every bottle on the practice range.
Hand adjustment animations serve to break up a static base pose. I would like to categorize them into two types: idle breaks and action breaks.
Idle breaks are used to interrupt static or repetitive movement.
Action breaks add variation to player actions.
Action breaks and idle breaks are both baked animations utilized to disrupt the otherwise static base pose. In this case, hand adjustment animations have a set chance to trigger once when performing an action, such as when a character stops shooting, after jumping, or after completing a reload.
These animations are non-intrusive and directly linked to player actions, hence the label action break.
While idle breaks serve a similar purpose, they are used to break up states of inaction.
Creating small subtle adjustment animations and applying them additively can enable you to re-use them across multiple different weapons at minimal overhead.
These are impulses that help add weight and inertia to the character.
Small spring impulses are added when starting to move or changing direction. These are one-offs used to emphasize when the player changes movement direction.
Adjusting the strength of these impulses can give a very stylized and fluid look. I think this is similar to how The Finals do their animations.
Pitch, Yaw and Vertical offset are affected by the players vertical velocity.
On landing, impulses are fed through the previously established recoil and movement impulse systems based on our vertical velocity on impact.
How you scale the landing impact will play a big part in the perceived weight of the landing.
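A sketch of how landing could feed the same systems (AddMovementImpulse stands in for this system's own impulse entry point, and the speed ranges are placeholders):

```cpp
// Called from the character's landing event.
void HandleLanding(const FVector& LandingVelocity)
{
    const float FallSpeed = FMath::Abs(LandingVelocity.Z);

    // Map impact speed to a 0-1 strength; harder landings hit harder.
    const float Strength = FMath::GetMappedRangeValueClamped(
        FVector2D(200.0f, 1200.0f), FVector2D(0.1f, 1.0f), FallSpeed);

    // Reuse the previously established recoil/movement impulse systems.
    AddMovementImpulse(FVector(0.0f, 0.0f, -1.0f) * Strength);
}
```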
The camera system also adds a slight recoil impulse upon landing. These systems together really add weight when controlling our character.
When taking fall damage, consider adding camera roll to give the impression of the player's legs not fully bracing the fall. The FPS setup is parented under the camera, meaning the arms follow any motion we add to it.
Unreal has a very easy to use camera shake system that works great with small animations like this.
Usually this would require a full animation blend space, authored for a specific crouch pose, possibly with its own 2, 4, or 8-way directional locomotion animations. Through this procedural workflow, the animation offset is just a single offset, blended by a crouch alpha value.
This example is just a slight roll and vertical offset. This can easily be customized on a per weapon basis for better framing, without any need to leave the editor.
Here is an example where the jump and crouch together almost gives the impression of vaulting over the wall.
The sprinting animation can also be modeled using the same principles. Sprinting is a linear action that can easily be managed as a separate pose state.
On each footstep a camera impulse is added and scaled based on the current sprint Alpha. The animations below are all driven by the exact same parameters and scaling as the base locomotion. Altering the parameters based on sprinting could yield even more interesting results, but I think this looks fine.
This is nothing more than an alpha blend between two poses.
The rest is driven by the same parameters as the base locomotion.
These animations only occupy the right hand, leaving the offhand free for its own animations.
Re-using the pistol sprint pose with other pistol-grip weapons creates a Warzone-like tactical sprint animation for free.
This could be incorporated as a mechanic that allows off hand equipment usage when sprinting.
Reloads aren't necessarily that difficult to do in a procedural way. It's more about keeping them interesting. This is one of the cases where I do prefer to use baked animations, since creating an interesting reload animation requires a lot of finger, hand and arm movements.
Equip is another essential animation. Equip time is often used as a mechanic to balance a weapon, so the animation length should probably be adapted to the length of the equip mechanic before being finalized.
Unequip animations aren't always necessary. It's fairly common to see games balance weapons by their equip time, often making the unequip instant. If you prioritize visuals, though, unequip animations are a given.
The big edge case is solving weapon switching before a weapon is fully equipped, i.e. when players are quickly scrolling through their inventory, as we don't want to wait for both the equip and unequip animations.
In other words, the unequip animation is only played if a weapon has completed its equip time.
But wait a second, there is definitely room for procedural animation here as well!
The anim controller already contains functions to add recoil and movement impulses. Through custom anim notifies, we can easily incorporate these into baked animation tracks. We just have to call the same functions we do when adding the other impulses. Randomizing the impulse range can give more variety at a low cost.
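A sketch of such a notify (UWeaponAnimInstance, AddMovementImpulse, Direction, and the strength properties are placeholders for this system's own anim instance and impulse function):

```cpp
// A custom notify placed on the baked animation track. When playback
// reaches it, we feed a randomized impulse into the anim controller.
void UAnimNotify_MovementImpulse::Notify(USkeletalMeshComponent* MeshComp,
                                         UAnimSequenceBase* Animation,
                                         const FAnimNotifyEventReference& EventReference)
{
    if (UWeaponAnimInstance* AnimBP =
            Cast<UWeaponAnimInstance>(MeshComp->GetAnimInstance()))
    {
        // Randomizing within a range gives more variety at a low cost.
        const float Strength = FMath::RandRange(MinStrength, MaxStrength);
        AnimBP->AddMovementImpulse(Direction * Strength);
    }
}
```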
The base animation without any impulses added.
Toned down impulses that blend nicely with the rest of the weapon animations.
Greatly exaggerated values to give a better view of what is actually happening.
When doing ADS, what we really want is to use our weapon's diegetic aimpoint instead of our crosshair. This mainly comes down to trying to keep our diegetic aimpoint at the center of the screen.
I'm using a material that renders weapons at a different Field Of View. In essence, this makes them look more compact, and allows not only better visuals, but also a lot of clarity when looking down the sights of a weapon.
I strongly recommend these FOV material functions: Weapon FOV - By Krystian Komisarek as a starting point.
How exactly you want to frame your sight depends a whole lot on your project and your bullet trace model. Do you want to frame the recoil, or do you want to center the weapon aim point?
My project is based on a camera recoil model similar to Counter-Strike's, featuring a vertical and horizontal offset to bullet traces. When feeding these recoil values as rotations to the animation controller, the bullets get neatly framed by the sights.
To make sure our aimpoint remains the center of screen we can instead reduce or completely eliminate translational and rotational offsets.
I still prefer keeping some rotational movement to maintain a more fluid look.
Working with scopes is pretty forgiving. A red dot or holographic sight has its aim point seemingly floating in the air. This means we can fake the framing.
Replacing the crosshair with a red dot or holographic aimpoint and making sure it stays almost centered works just as well. In an action game this won't be easy to notice, and it might prove more predictable in the long term.
When weapons have a physical sight, faking the aimpoint becomes a bit harder, as we rely on the weapon screen position to stay centered.
What this really means is using our weapons front sight as the animation pivot point.
When using front sights, or really any static aimpoint, we want it to remain centered.
Here recoil impulses are added after the ADS rotation adjustments. If the recoil impulses were applied first, the weapon would look like it sinks when shooting.
Here is what happens if we don't properly account for weapon rotation in ADS on weapons using front sights.
This could still work if our recoil model features significant movement accuracy penalties.
Setting up ADS is really as simple as blending two anim states. When our rifle is on the hip we have a base pose with additive transform and rotation offsets.
When we go into ADS we have a separate state that overwrites our transform values to center our weapon on screen. This allows us to be more selective about what rotations and offsets we then reintroduce in our offset calculations.
When not using ADS the offsets are additive to retain our base pose.
In ADS we replace the base pose with hard-set values that center the weapon on screen, while also layering in some of the rotation values generated by the model.
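A sketch of that blend (all names, including ADSRotationScale, are illustrative):

```cpp
// Alpha eased between 0 (hipfire) and 1 (ADS) over the aim-in time.
const float Alpha = ADSAlpha;

// Hipfire: procedural offsets added on top of the base pose.
// ADS: hard-set centering transform, selectively reintroducing rotation.
const FVector Location = FMath::Lerp(
    BasePoseLocation + ProceduralOffset, ADSLocation, Alpha);

const FRotator Rotation = FMath::Lerp(
    BasePoseRotation + ProceduralRotation,
    ADSRotation + ProceduralRotation * ADSRotationScale, Alpha);
```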
Don't worry about the fast path warning. This comes from adding offsets together in the graph for easy editing; these values can be cached once we are happy with the system. Node (A): Hipfire. Node (B): ADS.
This is all you need for an ADS blend.
Modern FPS games rely on dynamic weapon recoil to make different firearms feel unique and interesting. The recoil system greatly impacts the cadence of any shooter game.
Looking at visual and audio feedback elements
Feedback is half of what makes gunplay feel good