The majority of container-orchestration systems, such as Kubernetes, are designed with stateless microservice architectures in mind. For example, the out-of-the-box Kubernetes schedulers treat most pods as ephemeral and interchangeable, needing only to guarantee that a minimum number of them stay up. The scheduling requirements of SpatialOS servers are far more complex than those of typical Kubernetes pods: upgrades and scale-downs of stateful game servers require a high level of coordination and smooth handoffs to avoid usage spikes and gaps in the simulation.
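To see why stateless scheduling is so much simpler, here is a minimal sketch in the spirit of a Kubernetes ReplicaSet control loop. Because pods are interchangeable, the controller only compares counts; it can terminate any surplus pod without coordination or handoff. The names below are illustrative, not the real Kubernetes API.

```python
def reconcile(desired: int, running: list[str]) -> tuple[list[str], int]:
    """Return (pods to terminate, number of pods to start).

    Stateless assumption: any pod can be killed or started at will,
    so reconciliation reduces to simple arithmetic on counts.
    """
    if len(running) > desired:
        # Any surplus pod will do -- no handoff of simulation state needed.
        return running[desired:], 0
    return [], desired - len(running)

# Five interchangeable pods running, three desired: kill any two.
to_kill, to_start = reconcile(3, ["pod-a", "pod-b", "pod-c", "pod-d", "pod-e"])
```

A stateful game server breaks exactly this assumption: before a server can be terminated, the slice of the world it simulates has to be handed off to a peer, which is what makes the SpatialOS scheduling problem harder.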
This scheduling power is what game designers want to leverage to build more complex and interactive worlds within their budget constraints. The SpatialOS Runtime provides in-game APIs that allow game developers to tune the number of servers needed dynamically, based on the state of the game. After all, if a tree falls in the forest, and there are no players to see it, does it need physics simulation?
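The "tree falls in the forest" idea can be sketched as a simple scaling rule: regions with no observers get no simulation workers at all. This is a hypothetical illustration, not the real SpatialOS API; the capacity constant and all names are assumptions made up for the example.

```python
from dataclasses import dataclass
from math import ceil

@dataclass
class Region:
    name: str
    player_count: int   # players who can currently observe this region
    entity_count: int   # simulated entities (trees, NPCs, physics objects)

ENTITIES_PER_WORKER = 500  # assumed per-server capacity, purely illustrative

def workers_needed(region: Region) -> int:
    if region.player_count == 0:
        # Nobody is watching: skip physics simulation entirely.
        return 0
    return ceil(region.entity_count / ENTITIES_PER_WORKER)

forest = Region("forest", player_count=0, entity_count=1200)
town = Region("town", player_count=12, entity_count=1200)
```

With this rule, the empty forest costs nothing, while the busy town gets enough workers for its entity load; the game's state directly drives the server count.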
In order to have better control over the lifecycle of processes and virtual machines, the SpatialOS Runtime’s Server Scheduler operates on top of our Multi-Cloud Machine Scheduler. The latter dynamically provisions and pools VMs so that they are ready the moment new game servers need scheduling. It is also responsible for optimising a server container’s start-up time, by making sure that all the game server code and data is already on the machine before it is needed.
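The pooling behaviour described above can be sketched as a warm pool: VMs are provisioned ahead of demand, so acquiring one is instant and the slow cloud-provisioning step stays off the critical path. This is a minimal sketch under assumed names; `provision_vm` stands in for a real cloud API call and none of these identifiers come from the actual Multi-Cloud Machine Scheduler.

```python
from collections import deque
import itertools

_ids = itertools.count()

def provision_vm() -> str:
    # Stand-in for a slow cloud API call that boots a VM and pre-loads
    # the game server code and data onto it.
    return f"vm-{next(_ids)}"

class WarmPool:
    def __init__(self, target_size: int):
        self.target_size = target_size
        self.idle: deque[str] = deque()
        self.refill()

    def refill(self) -> None:
        # Keep enough pre-warmed VMs on standby for instant scheduling.
        while len(self.idle) < self.target_size:
            self.idle.append(provision_vm())

    def acquire(self) -> str:
        # Hand out a warmed VM immediately; replenishment happens
        # afterwards, off the scheduling critical path.
        vm = self.idle.popleft() if self.idle else provision_vm()
        self.refill()
        return vm

pool = WarmPool(target_size=2)
vm = pool.acquire()
```

In a real system the refill would run asynchronously and the pool size would track predicted demand, but the core trade is the same: pay for idle capacity to buy scheduling latency.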
In addition to these unusual scheduling needs, SpatialOS is a multi-tenant platform that runs our customers’ server code on our machines. This means that the Cloud Machine Scheduler is also responsible for building a strong machine- and network-level sandboxing environment for each of the SpatialOS projects we host.
Whether you’re interested in the security challenge of running third-party code, fascinated by complex scheduling algorithms or keen on building strong, vendor-agnostic abstractions for compute environments, the SpatialOS core platform offers all of these challenges and many more.