Behind the Scenes: Context Changing in Towerscape


Hunter Bobeck | Gameplay Programmer



In order to take advantage of the first-person perspective VR provides, Towerscape allows players to shrink down to toy-size and commandeer towers themselves, using the power of their imagination. In this post, I will be discussing the design pattern and programming methodology behind the mechanic.


The Interaction

Since Towerscape is played in room-scale VR, we have the ability to adjust the player’s relative scale to the game world. Shrinking the player in the game results in the player feeling smaller, even though their scale hasn’t changed in real life.

Shrinking down begins with a controller interaction with the chosen tower. Controllers are tracked within the room-scale VR space, so to choose the tower to commandeer, the player physically moves the controller until it is in contact with the virtual tower. Then they only have to press the appropriate button to initiate the interaction.

Not all towers can be commandeered by the player. Player Towers must first be distinguished by placing a Player Flag atop a Base Tower. Each Flag then provides that tower with a Locus, the invisible contact region through which the player can interface to shrink down to that tower.


When the interaction occurs, not only does scale change, but the player is moved as well. Before starting the interaction, they are standing on the floor of the room; by the end of the interaction, they have ended up standing on the tower. Because we don’t want this transition to be jarring, we aren’t teleporting the player. Instead, the sensation is that of “zooming in”: smoothly flying the player from their position in the room to a position atop the tower.

To implement this interaction in the Unity engine, the scripts have to handle player input, scale objects over time, and move the player over time.


Defining Contexts

From the programming perspective, we refer to the different scale situations not as ‘Scales’ but as ‘Contexts’, for the reason that being in tower-scale not only means you are smaller, but the rules of the game are different as well. For example, your ability to pick up and move towers on the board is no longer available when you are in tower-scale.

While it was fun to play with at first, we don’t want players to have the ability to pick up the tower they are on. In development, this allowed players to fly around on a tower they were standing on by grabbing it and steering with their controller, effectively carrying themselves around the room. This may sound like great fun in theory, and to be honest it was pretty fun to try to get the hang of, but the twitching of the hand holding the controller was magnified at that scale, and motion sickness was a real issue, especially when you accidentally threw yourself off the tower. It’s also simply not the experience we are aiming for.

To go along with the definition of Contexts, we call the interaction of changing your scale a Context Change. The player can either be in the 'Out Context' or the 'In Context'. 'In' refers to being in the tower-scale; 'Out' refers to being outside of the tower-scale. Context Changing, then, is the act of switching between these Contexts. A ‘Shrinkage Context Change’ is the act of transitioning from the Out Context to the In Context. An ‘Unshrinkage Context Change’ is the act of transitioning from the In Context to the Out Context.
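
The Context Change Manager (introduced below) is what keeps track of which Context the player is currently in. As a rough sketch of how that state could be represented (the enum name here is illustrative, not necessarily our exact identifier):

// The two scale situations the Player can be in.
public enum Context { Out, In }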

The Scripts

There are three main scripts involved: LocusShrinkage, LocusUnshrinkage, and ContextChangeManager:

[Image: Context Changing script diagram]

The Context Change Manager is what handles the scaling and movement of the player. But it doesn’t know when to do so without the help of the other two scripts. LocusShrinkage and LocusUnshrinkage tell the Context Change Manager when to perform a Shrinkage Context Change or an Unshrinkage Context Change, respectively.

Input

These Locus scripts reside on both controllers. This way, they can be easily fed controller input, as well as controller contact with a Locus.

Detecting player input is straightforward. Because we are using the SteamVR Interaction System plugin, we only have to reference the Hand component on either controller to check for button pressing.

But is the controller in the right position to initiate the interaction? For the LocusShrinkage script, that depends on whether the controller is in contact with a Locus; for the LocusUnshrinkage script, the position isn’t important. However, Unshrinkage does require the player to be In Context.
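
Putting those checks together, the heart of LocusShrinkage looks roughly like the following sketch. The touchedLocus field and the manager method name are illustrative rather than our exact identifiers, hand is the Hand component referenced above, and the button query shown is the SteamVR 1.x Interaction System call (newer plugin versions query input differently):

// Runs every frame on the controller; touchedLocus is set and cleared by
// OnTriggerEnter/OnTriggerExit when the controller touches a Locus collider.
if (touchedLocus != null && hand.GetStandardInteractionButtonDown())
{
    // Illustrative method name: request a Shrinkage Context Change at this Locus.
    ContextChangeManager.singleton.RequestShrinkage(touchedLocus);
}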

The Context Change Manager then puts these requested Context Changes into effect. Because this script is a manager script, it is only intended to exist on one of Unity’s GameObjects. We placed it on our Player object, because the Player controls these interactions and is also the very object the interaction is supposed to affect.

To let other scripts communicate with the Context Change Manager (for example, to ask for a Context Change, or to determine the current Context), the Context Change Manager needed to be easily accessible. To do this I implemented a singleton design pattern. Setting this up was as easy as putting the following code at the top of the script:

// The one shared instance; public so other scripts can reach it through the class.
public static ContextChangeManager singleton;

private void Awake()
{
    // Register this instance as the singleton when the Player object awakes.
    singleton = this;
}

Any other script that needs the Context Change Manager can then access it without holding a direct reference. They only have to ask the class for its singleton instance:

ContextChangeManager contextChangeManager = ContextChangeManager.singleton;

Scaling

At first glance, the goal of the scaling operation is to make the Player smaller. When the player shrinks down to toy-size, they are supposed to become 1/50th of their normal size. However, we ended up approaching the problem from the opposite perspective: the Environment (everything other than the player) grows to 50 times its original size when the player wants to “shrink down”. We took this alternative route because of a number of issues with the player-shrinking approach.

For one, the VR Camera component is not meant to work at small scales: objects right in front of your face disappear because they already fall within the near clipping distance. We were able to solve this issue by adding a script that sets the near clipping distance below what the Unity Inspector panel allows. But this was only the minor hurdle.
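
The workaround script is essentially one line; the 0.001 value here is just an illustrative number below the Inspector’s lower limit:

// Set the camera's near clipping plane from code, below the Inspector's minimum.
GetComponent<Camera>().nearClipPlane = 0.001f;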

The greater issue confronting us with this approach was that SteamVR’s Interaction System did not work at such small scales. We are using the Interaction System for the item interactions in the game, such as grabbing and throwing objects with realistic physics, as well as the more advanced item interactions such as picking up and shooting the Longbow. These interactions would have to be rewritten if we wanted to successfully shrink the player.

There were other significant issues as well. For example, physics-based locomotion systems lose their fine-tuning at that scale. Particle Systems associated with the player would have to be recreated, since Unity’s Particle Systems do not scale in size.

This difficulty had us thinking relatively – instead of shrinking the player down directly, we took the route of scaling the environment to be larger. In theory, this has the same effect visually. But it wasn’t going to be that simple of a fix. By avoiding scaling issues for the Player, we were now forced to handle scaling issues for the Environment.

The Environment object serves as a container for all other objects in the scene that aren’t part of the Player. It may seem like a lot more has to change, but fortunately, this has proven to be easier thus far. This is because a lot of the Environment scaling work is done automatically. For the most part, the relativity of objects within the Environment container object stays the same; in other words, when the entire Environment is scaled, the scales of objects within it stay the same relative to each other.
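
Conceptually, the Context Change Manager then only has to scale that one container. A rough sketch, where environment and targetScaleMultiple are illustrative names:

// Scaling the single Environment container scales everything inside it together.
environment.localScale = Vector3.one * targetScaleMultiple;   // 1 when Out, 50 when In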

The only thing to keep in mind is that applying new position and scale values to objects in the environment has to be done relatively instead of absolutely.

For example, when enemies die they may spawn an item drop just above their feet. The parameter specifying the distance along the y-axis above the enemy’s feet position to place the drop has to be multiplied by the Environment’s current scale factor. This is because the drop height difference may work fine at the scale of 1, but at the scale of 50, it’s only 1/50th of the distance it needs to be.

Here’s how we access the Environment’s current scale, via the singleton design pattern:
float scaleMultiple = ContextChangeManager.singleton.scaleMultiple();

The method scaleMultiple() returns the multiple of its original scale that the Environment is currently at. So during the Out Context it returns 1, and during the In Context it returns 50. Importantly, it also returns transitional values during the transition between scales, in case anything needs to be given an appropriate scale while the Player’s Shrinkage or Unshrinkage Context Change is in progress.
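
Applied to the item-drop example from earlier, that looks something like this (dropPrefab, dropHeight, and enemyFeetPosition are illustrative names):

// Multiply the world-space drop offset by the Environment's current scale factor.
float scaleMultiple = ContextChangeManager.singleton.scaleMultiple();
Vector3 spawnPosition = enemyFeetPosition + Vector3.up * (dropHeight * scaleMultiple);
Instantiate(dropPrefab, spawnPosition, Quaternion.identity);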

When possible, we are avoiding absolute values for parameters such as positions, and instead using percentages. For example, the Greedy Goblin has a sack that grows larger as he stuffs more Coins into it. When this sack’s scale grows, it is not being added to, but instead it is being set to a percentage of its previous size.
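
In code, that multiplicative growth is a one-liner; growthFactor here is an illustrative per-Coin value:

// Grow the sack by a percentage of its current size rather than an absolute amount,
// so the growth behaves the same at any Environment scale.
sackTransform.localScale *= growthFactor;   // e.g. 1.05f per Coin stuffed in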

An additional minor difficulty posed by scaling with either method was the scaling of light sources. Changing a light source’s scale does not adjust the actual light distribution correspondingly. The intensity parameter has to be scaled accordingly as well.
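
So alongside any scale change, the light itself gets adjusted in the same relative way. A sketch, with pointLight and baseIntensity as illustrative names:

// Scaling a light's Transform doesn't change its light distribution, so scale
// its intensity along with the Environment as well.
pointLight.intensity = baseIntensity * ContextChangeManager.singleton.scaleMultiple();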

Scaling was the major difficulty posed by Context Changing. We were able to arrive at a low-effort solution by scaling the Environment. It’s worth noting, however, that this precluded us from utilizing Unity’s ‘NavMesh’ system for enemy pathing. That system doesn’t scale efficiently because it must be baked, which doesn’t work well at runtime.

For now, we are implementing our own enemy pathing that is actually scalable (driven by a behaviour tree movement system).

Also, we are still figuring out how to properly scale Unity’s Cloth physics for the flags.

Movement

The zooming-in experience is preferable to being teleported instantly on top of the tower. That’s why implementing a smooth moving transition was key.

To make the transition smooth, I began by simply “lerping” (“linearly interpolating”) the player’s position.

Lerping requires an interpolation ratio. The ratio signifies how far from the start point to the end point the Player should be at the given moment.

The ratio is created by reading in a value for the amount of time the interaction has spent occurring thus far, and dividing that value by the duration over which the interaction is supposed to occur.

This results in an indicator for the progression of the transition. For example, the ratio would be 0 at the start, 1 at the end, and .5 halfway in between. This interpolation ratio is then applied to interpolate that far in between the start and end positions.

The start and end positions are simply set to the player’s position in the room and the player’s intended position atop the tower, respectively.
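
Put together, the basic version runs something like this every frame of the transition (player, elapsedTime, and duration being illustrative names for the quantities described above):

// Interpolation ratio: how far through the transition we are, from 0 to 1.
float t = Mathf.Clamp01(elapsedTime / duration);
// Move the Player that fraction of the way from the room position to the tower.
player.position = Vector3.Lerp(startPosition, endPosition, t);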

Lerping wasn’t satisfactory, however. Because a linear interpolation of position produces motion at a constant rate, the Player experiences a sudden beginning and ending of movement. This feeling can be represented like this:


What we really want is an easing in and out of movement, a continuous curve of velocity toward the tower instead of abruptly changing whether the player is moving. To achieve a curve more like the one below, I used the formula "smootherstep" on the interpolation ratio:

t = t*t*t * (t * (6f*t - 15f) + 10f)
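
Since the formula reassigns t in place, the eased ratio simply drops into the same Lerp call from the earlier sketch:

// Interpolate with the eased ratio instead of the raw one.
player.position = Vector3.Lerp(startPosition, endPosition, t);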


Here again is the result!



Now you can fire arrows at the enemy yourself!

