Create Kubernetes Cluster
1 - Google Kubernetes Engine
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Visit the Kubernetes Engine page in the Google Cloud Platform Console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
- Enable billing for your project.
- If you are not an existing GCP user, you may be able to enroll for a $300 US Free Trial credit.
Choosing a shell
To complete this quickstart, you can use either Google Cloud Shell or a local shell.
Google Cloud Shell is a shell environment for managing resources hosted on Google Cloud Platform (GCP). Cloud Shell comes preinstalled with the gcloud and kubectl command-line tools. gcloud provides the primary command-line interface for GCP, and kubectl provides the command-line interface for running commands against Kubernetes clusters.
If you prefer using your local shell, you must install the gcloud and kubectl command-line tools in your environment.
Cloud shell
To launch Cloud Shell, perform the following steps:
- Go to Google Cloud Platform Console
- From the top-right corner of the console, click the Activate Google Cloud Shell button:
- A Cloud Shell session opens inside a frame at the bottom of the console. Use this shell to run gcloud and kubectl commands.
Local shell
To install gcloud and kubectl, perform the following steps:
- Install the Google Cloud SDK, which includes the gcloud command-line tool.
- Initialize some default configuration by running the following command. When asked Do you want to configure a default Compute Region and Zone? (Y/n)?, enter Y and choose a zone in your geographical region of choice.
gcloud init
- Install the kubectl command-line tool by running the following command:
gcloud components install kubectl
Choosing a Regional or Zonal Cluster
You will need to pick a geographical region or zone where you want to deploy your cluster, and whether to create a regional or zonal cluster. We recommend using a Regional cluster, as the zonal GKE control plane can go down temporarily to adjust for cluster resizing, automatic upgrades and repairs.
After choosing a cluster type, choose a region or zone. The region you chose is referred to as [COMPUTE_REGION] below.
(Note that if you chose a zone, replace --region=[COMPUTE_REGION] with --zone=[COMPUTE_ZONE] in the commands below.)
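If you're not sure which region or zone to pick, you can list the available options with gcloud; for example (an optional check, and us-central1 is only an example region to substitute with your own):
gcloud compute regions list
gcloud compute zones list --filter="region:us-central1"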
Choosing a Release Channel and Optional Version
We recommend using the regular release channel, which offers a balance between stability and freshness.
If you'd like to read more, see our guide on Release Channels.
The release channel you chose is referred to as [RELEASE_CHANNEL] below.
(Optional) During cluster creation, to set a specific available version in the release channel, use the --cluster-version=[VERSION] flag, e.g. --cluster-version=1.27. Be sure to choose a version supported by Code Blind. (If you rely on release channels, the latest Code Blind release should be supported by the default versions of all channels.)
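To see which versions are currently available in your chosen channel, you can query the GKE server config; for example (an optional check, shown here for the regular channel; adjust the filter to RAPID, REGULAR, or STABLE):
gcloud container get-server-config \
  --region=[COMPUTE_REGION] \
  --flatten="channels" \
  --filter="channels.channel=REGULAR" \
  --format="yaml(channels.channel,channels.validVersions)"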
Choosing a GKE cluster mode
A cluster consists of at least one control plane machine and multiple worker machines called nodes. In Google Kubernetes Engine, nodes are Compute Engine virtual machine instances that run the Kubernetes processes necessary to make them part of the cluster.
Code Blind supports both GKE Standard mode and GKE Autopilot mode.
Code Blind GameServer and Fleet manifests that work on Standard are compatible with Autopilot, with some constraints described in the following section. We recommend running GKE Autopilot clusters if you meet the constraints.
You can't convert existing Standard clusters to Autopilot; create new Autopilot clusters instead.
Code Blind on GKE Autopilot
Autopilot is GKE’s fully-managed mode. GKE configures, maintains, scales, and upgrades nodes for you, which can reduce your maintenance and operating overhead. You only pay for the resources requested by your running Pods, and you don’t pay for unused node capacity or Kubernetes system workloads.
This section describes the Code Blind-specific considerations in Autopilot clusters. For a general comparison between Autopilot and Standard, refer to Choose a GKE mode of operation.
Autopilot nodes are, by default, optimized for most workloads. If some of your workloads have broad compute requirements such as Arm architecture or a minimum CPU platform, you can also choose a compute class that meets that requirement. However, if you have specialized hardware needs that require fine-grained control over machine configuration, consider using GKE Standard.
Code Blind on Autopilot has pre-configured opinionated constraints. Evaluate whether these constraints impact your workloads:
- Operating system: No Windows containers.
- Resource requests: Autopilot has pre-determined minimum Pod resource requests. If your game servers require less than those minimums, use GKE Standard.
- Scheduling strategy: Packed is supported, which is the Code Blind default. Distributed is not supported.
- Host port policy: Dynamic is supported, which is the Code Blind default. Static and Passthrough are not supported.
- Seccomp profile: Code Blind sets the seccomp profile to Unconfined to avoid unexpected container creation delays that might occur because Autopilot enables the RuntimeDefault seccomp profile.
- Pod disruption policy: eviction.safe: Never is supported, which is the Code Blind default. eviction.safe: Always is supported. eviction.safe: OnUpgrade is not supported. If your game sessions exceed one hour, refer to Considerations for long sessions.
Choosing a GCP network
By default, gcloud and the Cloud Console use the VPC named default for all new resources. If you plan to create a dual-stack IPv4/IPv6 cluster, special considerations need to be made. Dual-stack clusters require a dual-stack subnet, which is only supported in custom mode VPC networks. For a new dual-stack cluster, you can either:
- create a new custom mode VPC, or
- if you wish to continue using the default network, switch it to custom mode. After switching a network to custom mode, you will need to manually manage subnets within the default VPC.
Once you have a custom mode VPC, you will need to choose whether to use an existing subnet or create a new one - read the VPC-native guide on creating a dual-stack cluster, but don't create the cluster just yet - we'll create the cluster later in this guide. To use the network and/or subnetwork you just created, you'll need to add --network and --subnetwork, and for GKE Standard, possibly --stack-type and --ipv6-access-type, depending on whether you created the subnet simultaneously with the cluster.
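As an illustration only (not a command to run verbatim), a dual-stack Standard cluster creation might add flags along these lines, assuming you already created a custom mode VPC named [NETWORK] with a dual-stack subnet [SUBNETWORK]; omit --stack-type and --ipv6-access-type if the subnet already defines them, and see the full creation commands later in this guide for the remaining flags:
gcloud container clusters create [CLUSTER_NAME] \
  --region=[COMPUTE_REGION] \
  --network=[NETWORK] \
  --subnetwork=[SUBNETWORK] \
  --stack-type=ipv4-ipv6 \
  --ipv6-access-type=external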
Creating the firewall
We need a firewall to allow UDP traffic to nodes tagged as game-server via ports 7000-8000. These firewall rules apply to the cluster nodes you will create in the next section.
gcloud compute firewall-rules create game-server-firewall \
--allow udp:7000-8000 \
--target-tags game-server \
--description "Firewall to allow game server udp traffic"
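If you want to confirm the rule was created as expected, you can describe it (an optional check):
gcloud compute firewall-rules describe game-server-firewall
The output should show the udp:7000-8000 allow rule and the game-server target tag.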
Creating the cluster
Create a GKE cluster in which you’ll install Code Blind. You can use GKE Standard mode or GKE Autopilot mode.
Create a Standard mode cluster for Code Blind
Create the cluster:
gcloud container clusters create [CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--release-channel=[RELEASE_CHANNEL] \
--tags=game-server \
--scopes=gke-default \
--num-nodes=4 \
--enable-image-streaming \
--machine-type=e2-standard-4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you want to create.
- [COMPUTE_REGION]: The GCP region to create the cluster in, chosen above.
- [RELEASE_CHANNEL]: The GKE release channel, chosen above.
Flag explanations:
- --region: The compute region you chose above.
- --release-channel: The release channel you chose above.
- --tags: Defines the tags that will be attached to new nodes in the cluster. This is to grant access through ports via the firewall created above.
- --scopes: Defines the OAuth scopes required by the nodes.
- --num-nodes: The number of nodes to be created in each of the cluster's zones. Default: 4. Depending on the needs of your game, this parameter should be adjusted.
- --enable-image-streaming: Use Image streaming to pull container images, which leads to significant improvements in initialization times. Limitations apply to enable this feature.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
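Once the command completes, you can optionally confirm the cluster is up and that kubectl can reach it (cluster creation also configures kubectl credentials, as noted later in this guide):
gcloud container clusters describe [CLUSTER_NAME] --region=[COMPUTE_REGION] --format="value(status)"
kubectl get nodes
The first command should print RUNNING, and the second should list your nodes in the Ready state.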
(Optional) Creating a dedicated node pool
Create a dedicated node pool for the Code Blind resources to be installed in. If you skip this step, the Code Blind controllers will share the default node pool with your game servers, which is fine for experimentation but not recommended for a production deployment.
gcloud container node-pools create agones-system \
--cluster=[CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--node-taints agones.dev/agones-system=true:NoExecute \
--node-labels agones.dev/agones-system=true \
--num-nodes=1 \
--machine-type=e2-standard-4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you created.
- [COMPUTE_REGION]: The GCP region to create the cluster in, chosen above.
Flag explanations:
- --cluster: The name of the cluster you created.
- --region: The compute region you chose above.
- --node-taints: The Kubernetes taints to automatically apply to nodes in this node pool.
- --node-labels: The Kubernetes labels to automatically apply to nodes in this node pool.
- --num-nodes: The number of nodes per cluster zone. For regional clusters, --num-nodes=1 creates one node in 3 separate zones in the region, giving you faster recovery time in the event of a node failure.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
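To optionally verify that the taint and label were applied, you can list the nodes in the new pool once it is created:
kubectl get nodes -l agones.dev/agones-system=true
kubectl describe nodes -l agones.dev/agones-system=true | grep Taints
You should see one node per zone, each carrying the agones.dev/agones-system=true:NoExecute taint.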
(Optional) Creating a metrics node pool
Create a node pool for Metrics if you want to monitor the Code Blind system using Prometheus with Grafana or Cloud Logging and Monitoring.
gcloud container node-pools create agones-metrics \
--cluster=[CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--node-taints agones.dev/agones-metrics=true:NoExecute \
--node-labels agones.dev/agones-metrics=true \
--num-nodes=1 \
--machine-type=e2-standard-4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you created.
- [COMPUTE_REGION]: The GCP region to create the cluster in, chosen above.
Flag explanations:
- --cluster: The name of the cluster you created.
- --region: The compute region you chose above.
- --node-taints: The Kubernetes taints to automatically apply to nodes in this node pool.
- --node-labels: The Kubernetes labels to automatically apply to nodes in this node pool.
- --num-nodes: The number of nodes per cluster zone. For regional clusters, --num-nodes=1 creates one node in 3 separate zones in the region, giving you faster recovery time in the event of a node failure.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
(Optional) Creating a node pool for Windows
If you run game servers on Windows, you need to create a dedicated node pool for those servers. Windows Server 2019 (WINDOWS_LTSC_CONTAINERD) is the recommended image for Windows game servers.
Warning
Running GameServers on Windows nodes is currently Alpha. Feel free to file feedback through Github issues.
gcloud container node-pools create windows \
--cluster=[CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--image-type WINDOWS_LTSC_CONTAINERD \
--machine-type e2-standard-4 \
--num-nodes=4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you created.
- [COMPUTE_REGION]: The GCP region to create the cluster in, chosen above.
Flag explanations:
- --cluster: The name of the cluster you created.
- --region: The compute region you chose above.
- --image-type: The image type of the instances in the node pool - WINDOWS_LTSC_CONTAINERD in this case.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
- --num-nodes: The number of nodes per cluster zone. For regional clusters, --num-nodes=1 creates one node in 3 separate zones in the region, giving you faster recovery time in the event of a node failure.
Create an Autopilot mode cluster for Code Blind
Note
These installation instructions apply to Code Blind 1.30+.
Choose a Release Channel (Autopilot clusters must be on a Release Channel).
Create the cluster:
gcloud container clusters create-auto [CLUSTER_NAME] \
  --region=[COMPUTE_REGION] \
  --release-channel=[RELEASE_CHANNEL] \
  --autoprovisioning-network-tags=game-server
Replace the following:
- [CLUSTER_NAME]: The name of your cluster.
- [COMPUTE_REGION]: The GCP region to create the cluster in.
- [RELEASE_CHANNEL]: One of rapid, regular, or stable, chosen above. The default is regular.
Flag explanations:
- --region: The compute region you chose above.
- --release-channel: The release channel you chose above.
- --autoprovisioning-network-tags: Defines the tags that will be attached to new nodes in the cluster. This is to grant access through ports via the firewall created above.
Setting up cluster credentials
gcloud container clusters create configures credentials for kubectl automatically. If you ever lose them, run:
gcloud container clusters get-credentials [CLUSTER_NAME] --region=[COMPUTE_REGION]
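As a quick sanity check, you can confirm kubectl is pointed at the new cluster:
kubectl cluster-info
kubectl get nodes
Both commands should succeed and reference your GKE cluster.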
Next Steps
- Continue to Install Code Blind.
2 - Amazon Elastic Kubernetes Service
Create your EKS Cluster using the Getting Started Guide.
Possible steps are the following:
- Create a new IAM role for cluster management.
- Run aws configure to authorize your awscli with the proper AWS Access Key ID and AWS Secret Access Key.
- Create an example cluster:
eksctl create cluster \
--name prod \
--version 1.28 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 3 \
--nodes-max 4
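eksctl writes credentials for the new cluster into your kubeconfig when it finishes. As an optional check, you can confirm the worker nodes joined (or regenerate the kubeconfig with aws eks update-kubeconfig if needed; replace [AWS_REGION] with your region):
aws eks update-kubeconfig --name prod --region [AWS_REGION]
kubectl get nodes
You should see three t3.medium nodes in the Ready state.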
Note
EKS does not use the normal Kubernetes networking since it is incompatible with Amazon VPC networking.
Allowing UDP Traffic
For Code Blind to work correctly, we need to allow UDP traffic to pass through to our EKS cluster worker nodes. To achieve this, we must update the workers’ nodepool SG (Security Group) with the proper rule. A simple way to do that is:
- Log in to the AWS Management Console
- Go to the VPC Dashboard and select Security Groups
- Find the Security Group for the workers nodepool, which will be named something like eksctl-[cluster-name]-nodegroup-[cluster-name]-workers/SG
- Select Inbound Rules
- Edit Rules to add a new Custom UDP Rule with a 7000-8000 port range and an appropriate Source CIDR range (0.0.0.0/0 allows all traffic)
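If you prefer the AWS CLI over the console, an equivalent rule can be added with a command along these lines, where [NODE_SECURITY_GROUP_ID] is the ID of the workers' Security Group found above (an illustrative sketch; adjust the CIDR to your needs):
aws ec2 authorize-security-group-ingress \
  --group-id [NODE_SECURITY_GROUP_ID] \
  --protocol udp \
  --port 7000-8000 \
  --cidr 0.0.0.0/0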
Next Steps
- Continue to Install Code Blind.
3 - Azure Kubernetes Service
Choosing your shell
You can use either Azure Cloud Shell or install the Azure CLI on your local shell in order to install AKS in your own Azure subscription. Cloud Shell comes preinstalled with the az and kubectl utilities, whereas you need to install them locally if you want to use your local shell. If you use Windows 10, you can use the Windows Subsystem for Linux as well.
Creating the AKS cluster
If you are using the Azure CLI from your local shell, you need to log in to your Azure account by executing the az login command and following the login procedure.
Here are the steps you need to follow to create a new AKS cluster (additional instructions and clarifications are listed here):
# Declare necessary variables, modify them according to your needs
AKS_RESOURCE_GROUP=akstestrg # Name of the resource group your AKS cluster will be created in
AKS_NAME=akstest # Name of your AKS cluster
AKS_LOCATION=westeurope # Azure region in which you'll deploy your AKS cluster
# Create the Resource Group where your AKS resource will be installed
az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION
# Create the AKS cluster - this might take some time. Type 'az aks create -h' to see all available options
# The following command will create a four Node AKS cluster. Node size is Standard A4 v2 and Kubernetes version is 1.28.0. Plus, SSH keys will be generated for you; use --ssh-key-value to provide your own values
az aks create --resource-group $AKS_RESOURCE_GROUP --name $AKS_NAME --node-count 4 --generate-ssh-keys --node-vm-size Standard_A4_v2 --kubernetes-version 1.28.0 --enable-node-public-ip
# Install kubectl
sudo az aks install-cli
# Get credentials for your new AKS cluster
az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $AKS_NAME
Alternatively, you can use the Azure Portal to create a new AKS cluster (instructions).
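Whichever path you take, you can optionally confirm that kubectl is connected to the new AKS cluster before continuing:
kubectl get nodes
You should see four nodes in the Ready state.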
Allowing UDP traffic
For Code Blind to work correctly, we need to allow UDP traffic to pass through to our AKS cluster. To achieve this, we must update the NSG (Network Security Group) with the proper rule. A simple way to do that is:
- Log in to the Azure Portal
- Find the resource group where the AKS (Azure Kubernetes Service) resources are kept, which should have a name like MC_resourceGroupName_AKSName_westeurope. Alternatively, you can run az resource show --namespace Microsoft.ContainerService --resource-type managedClusters -g $AKS_RESOURCE_GROUP -n $AKS_NAME -o json | jq .properties.nodeResourceGroup
- Find the Network Security Group object, which should have a name like aks-agentpool-********-nsg (i.e. aks-agentpool-55978144-nsg for dns-name-prefix agones)
- Select Inbound Security Rules
- Select Add to create a new Rule with UDP as the protocol and 7000-8000 as the Destination Port Ranges. Pick a proper name and leave everything else at their default values
Alternatively, you can use the following command, after modifying the RESOURCE_GROUP_WITH_AKS_RESOURCES and NSG_NAME values:
az network nsg rule create \
--resource-group RESOURCE_GROUP_WITH_AKS_RESOURCES \
--nsg-name NSG_NAME \
--name AgonesUDP \
--access Allow \
--protocol Udp \
--direction Inbound \
--priority 520 \
--source-port-range "*" \
--destination-port-range 7000-8000
Getting Public IPs to Nodes
Kubernetes versions prior to 1.18.19, 1.19.11 and 1.20.7
To find a resource's public IP, search for Virtual Machine Scale Sets -> click on the set name (inside the MC_resourceGroupName_AKSName_westeurope group) -> click Instances -> click on the instance name -> view Public IP address.
To get the public IP via the API, look here.
For more information on Public IPs for VM NICs, see this document.
Kubernetes versions starting with 1.18.19, 1.19.11 and 1.20.7
The virtual machines' public IP is available directly in the Kubernetes EXTERNAL-IP field.
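For example, on these Kubernetes versions you can read the public IPs straight from the node list (an optional check):
kubectl get nodes -o wide
The EXTERNAL-IP column shows each node's public IP.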
Next Steps
- Continue to Install Code Blind.
4 - Minikube
Installing Minikube
First, install Minikube, which may also require you to install a virtualisation solution, such as VirtualBox.
Starting Minikube
Minikube will need to be started with a version of Kubernetes that is supported by Code Blind, via the --kubernetes-version command line flag.
Optionally, we also recommend starting with an agones profile, using -p, to keep this cluster separate from any other clusters you may have running with Minikube.
minikube start --kubernetes-version v1.27.6 -p agones
Check the official minikube start reference for more options that may be required for your platform of choice.
Note
You may need to increase the --cpus or --memory values for your minikube instance, depending on what resources are available on the host and/or how many GameServers you wish to run locally.
Depending on your Operating System, you may also need to change the --driver (driver list) to enable GameServer connectivity, with or without some workarounds listed below.
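For example, a start command that sets an explicit driver and resources might look like this (an illustrative sketch; the driver, CPU and memory values shown are assumptions to adjust for your host):
minikube start --kubernetes-version v1.27.6 -p agones --driver=docker --cpus=4 --memory=8g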
Known working drivers
Other operating systems and drivers may work, but at this stage have not been verified to work with UDP connections via Code Blind exposed ports.
Linux (amd64)
- Docker (default)
- kvm2
Mac (amd64)
- Docker (default)
- Hyperkit
Windows (amd64)
- hyper-v (might need this blog post and/or this comment for WSL support)
If you have successfully tested with other platforms and drivers, please click “edit this page” in the top right hand side and submit a pull request to let us know.
Local connection workarounds
Depending on your operating system and the virtualization platform that you are using with Minikube, it may not be possible to connect directly to a GameServer hosted on Code Blind as you would on a cloud hosted Kubernetes cluster.
If you are unable to do so, the following workarounds are available and may work on your platform:
minikube ip
Rather than using the published IP of a GameServer to connect, run minikube ip -p agones to get the local IP for the minikube node, and connect to that address.
Create a service
This would only be for local development, but if none of the other workarounds work, creating a Service for the GameServer you wish to connect to is a valid solution, to tunnel traffic to the appropriate GameServer container.
Use the following yaml:
apiVersion: v1
kind: Service
metadata:
name: agones-gameserver
spec:
type: LoadBalancer
selector:
agones.dev/gameserver: ${GAMESERVER_NAME}
ports:
- protocol: UDP
port: 7000 # local port
targetPort: ${GAMESERVER_CONTAINER_PORT}
Where ${GAMESERVER_NAME} is replaced with the GameServer you wish to connect to, and ${GAMESERVER_CONTAINER_PORT} is replaced with the container port the GameServer exposes for connection.
Running minikube service list -p agones will show you the IP and port to connect to locally in the URL field.
To connect to a different GameServer, run kubectl edit service agones-gameserver and edit the ${GAMESERVER_NAME} value to point to the new GameServer instance and/or the ${GAMESERVER_CONTAINER_PORT} value as appropriate.
Warning
minikube tunnel (docs) does not support UDP (Github Issue) on some combinations of operating systems, platforms and drivers, but is required when using the Service workaround.
Use a different driver
If you cannot connect through the Service or use other workarounds, you may want to try a different minikube driver. If that doesn't work, connection via UDP may not be possible with minikube, and you may want to try either a different local Kubernetes tool or use a cloud hosted Kubernetes cluster.
Next Steps
- Continue to Install Code Blind.