Install and configure Code Blind on Kubernetes
Instructions for creating a Kubernetes cluster and installing Code Blind.
Usage Requirements
- Kubernetes cluster version 1.26, 1.27, 1.28
- Firewall access for the range of ports that Game Servers can be connected to in the cluster.
- Game Servers must have the game server SDK integrated, to manage Game Server state, health checking, etc.
Warning
This release has been tested against Kubernetes versions 1.26, 1.27, 1.28 on GKE. Other versions may work, but are unsupported. It is also likely that not all of these versions are supported by other cloud providers.
Supported Container Architectures
The following container operating systems and architectures can be utilised with Code Blind:
OS | Architecture | Support |
---|---|---|
linux | amd64 | Stable |
linux | arm64 | Alpha |
windows | amd64 | Alpha |
For all platforms in Alpha, we would appreciate testing and bug reports on any issues found.
Code Blind and Kubernetes Supported Versions
Code Blind will support 3 releases of Kubernetes, targeting the newest version as being the latest available version in the GKE Rapid channel. However, we will ensure that at least one of the 3 versions chosen for each Code Blind release is supported by each of the major cloud providers (EKS and AKS). The vendored version of client-go will be aligned with the middle of the three supported Kubernetes versions. When a new version of Code Blind supports new versions of Kubernetes, it is explicitly called out in the release notes.
The following table lists recent Code Blind versions and their corresponding required Kubernetes versions:
Code Blind version | Kubernetes version(s) |
---|---|
1.38 | 1.26, 1.27, 1.28 |
1.37 | 1.26, 1.27, 1.28 |
1.36 | 1.26, 1.27, 1.28 |
1.35 | 1.25, 1.26, 1.27 |
1.34 | 1.25, 1.26, 1.27 |
1.33 | 1.25, 1.26, 1.27 |
1.32 | 1.24, 1.25, 1.26 |
1.31 | 1.24, 1.25, 1.26 |
1.30 | 1.23, 1.24, 1.25 |
1.29 | 1.24 |
1.28 | 1.23 |
1.27 | 1.23 |
1.26 | 1.23 |
1.25 | 1.22 |
1.24 | 1.22 |
1.23 | 1.22 |
1.22 | 1.21 |
1.21 | 1.21 |
Best Practices
For detailed guides on best practices running Code Blind in production, see Best Practices.
1 - Create Kubernetes Cluster
Instructions for creating a Kubernetes cluster to install Code Blind on.
1.1 - Google Kubernetes Engine
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Visit the Kubernetes Engine page in the Google Cloud Platform Console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
- Enable billing for your project.
- If you are not an existing GCP user, you may be able to enroll for a $300 US Free Trial credit.
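If you prefer the command line, you can likely also enable the API for your selected project with gcloud (a sketch; this assumes the Cloud SDK is already installed and authenticated):
gcloud services enable container.googleapis.com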
Choosing a shell
To complete this quickstart, we can use either Google Cloud Shell or a local shell.
Google Cloud Shell is a shell environment for managing resources hosted on Google Cloud Platform (GCP). Cloud Shell comes preinstalled with the gcloud and kubectl command-line tools. gcloud provides the primary command-line interface for GCP, and kubectl provides the command-line interface for running commands against Kubernetes clusters.
If you prefer using your local shell, you must install the gcloud and kubectl command-line tools in your environment.
Cloud shell
To launch Cloud Shell, perform the following steps:
- Go to the Google Cloud Platform Console.
- From the top-right corner of the console, click the Activate Google Cloud Shell button.
- A Cloud Shell session opens inside a frame at the bottom of the console. Use this shell to run gcloud and kubectl commands.
Local shell
To install gcloud and kubectl, perform the following steps:
- Install the Google Cloud SDK, which includes the gcloud command-line tool.
- Initialize some default configuration by running the following command:
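gcloud init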
- When asked Do you want to configure a default Compute Region and Zone? (Y/n)?, enter Y and choose a zone in your geographical region of choice.
- Install the kubectl command-line tool by running the following command:
gcloud components install kubectl
Choosing a Regional or Zonal Cluster
You will need to pick a geographical region or zone where you want to deploy your cluster, and whether to create a regional or zonal cluster. We recommend using a regional cluster, as the zonal GKE control plane can go down temporarily to adjust for cluster resizing, automatic upgrades and repairs.
After choosing a cluster type, choose a region or zone. The region you chose is COMPUTE_REGION below. (Note that if you chose a zone, replace --region=[COMPUTE_REGION] with --zone=[COMPUTE_ZONE] in the commands below.)
Choosing a Release Channel and Optional Version
We recommend using the regular release channel, which offers a balance between stability and freshness. If you’d like to read more, see our guide on Release Channels. The release channel you chose is RELEASE_CHANNEL below.
(Optional) During cluster creation, to set a specific available version in the release channel, use the --cluster-version=[VERSION] flag, e.g. --cluster-version=1.27. Be sure to choose a version supported by Code Blind. (If you rely on release channels, the latest Code Blind release should be supported by the default versions of all channels.)
Choosing a GKE cluster mode
A cluster consists of at least one control plane machine and multiple worker machines called nodes. In Google Kubernetes Engine, nodes are Compute Engine virtual machine instances that run the Kubernetes processes necessary to make them part of the cluster.
Code Blind supports both GKE Standard mode and GKE Autopilot mode.
Code Blind GameServer and Fleet manifests that work on Standard are compatible on Autopilot with some constraints, described in the following section. We recommend running GKE Autopilot clusters if you meet the constraints.
You can’t convert existing Standard clusters to Autopilot; create new Autopilot clusters instead.
Code Blind on GKE Autopilot
Autopilot is GKE’s fully-managed mode. GKE configures, maintains, scales, and
upgrades nodes for you, which can reduce your maintenance and operating
overhead. You only pay for the resources requested by your running Pods, and
you don’t pay for unused node capacity or Kubernetes system workloads.
This section describes the Code Blind-specific considerations in Autopilot
clusters. For a general comparison between Autopilot and Standard, refer to
Choose a GKE mode of operation.
Autopilot nodes are, by default, optimized for most workloads. If some of your workloads have broad compute requirements such as Arm architecture or a minimum CPU platform, you can also choose a compute class that meets that requirement. However, if you have specialized hardware needs that require fine-grained control over machine configuration, consider using GKE Standard.
Code Blind on Autopilot has pre-configured opinionated constraints. Evaluate whether these constraints impact your workloads:
- Operating system: No Windows containers.
- Resource requests: Autopilot has pre-determined minimum Pod resource requests. If your game servers require less than those minimums, use GKE Standard.
- Scheduling strategy: Packed is supported, which is the Code Blind default. Distributed is not supported.
- Host port policy: Dynamic is supported, which is the Code Blind default. Static and Passthrough are not supported.
- Seccomp profile: Code Blind sets the seccomp profile to Unconfined to avoid unexpected container creation delays that might occur because Autopilot enables the RuntimeDefault seccomp profile.
- Pod disruption policy: eviction.safe: Never is supported, which is the Code Blind default. eviction.safe: Always is supported. eviction.safe: OnUpgrade is not supported. If your game sessions exceed one hour, refer to Considerations for long sessions.
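As a rough illustration, a minimal GameServer manifest that stays within these Autopilot constraints might look like the following sketch; the name, container image and port are placeholders, and you should verify the fields against the GameServer reference for your Code Blind version:
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  name: simple-game-server        # placeholder name
spec:
  ports:
  - name: default
    portPolicy: Dynamic           # the only host port policy supported on Autopilot
    containerPort: 7654           # placeholder container port
  scheduling: Packed              # the Code Blind default; Distributed is not supported
  eviction:
    safe: Never                   # the Code Blind default; OnUpgrade is not supported
  template:
    spec:
      containers:
      - name: simple-game-server
        image: example.registry/simple-game-server:latest   # placeholder image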
Choosing a GCP network
By default, gcloud and the Cloud Console use the VPC named default for all new resources. If you plan to create a dual-stack IPv4/IPv6 cluster, special considerations need to be made. Dual-stack clusters require a dual-stack subnet, which is only supported in custom mode VPC networks. For a new dual-stack cluster, you can either create a new custom mode VPC, or, if you wish to continue using the default network, switch it to custom mode. After switching a network to custom mode, you will need to manually manage subnets within the default VPC.
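If you choose to convert the default network, the switch can likely be done with a single command (a sketch; double-check the implications for existing subnets first):
gcloud compute networks update default --switch-to-custom-subnet-mode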
Once you have a custom mode VPC, you will need to choose whether to use an existing subnet or create a new one - read the VPC-native guide on creating a dual-stack cluster, but don’t create the cluster just yet - we’ll create the cluster later in this guide. To use the network and/or subnetwork you just created, you’ll need to add --network and --subnetwork, and for GKE Standard, possibly --stack-type and --ipv6-access-type, depending on whether you created the subnet simultaneously with the cluster.
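As a sketch, the extra flags on a GKE Standard dual-stack cluster creation might look like this (the bracketed names are placeholders, and the last two flags only apply in some of the cases described above):
gcloud container clusters create [CLUSTER_NAME] \
  --region=[COMPUTE_REGION] \
  --network=[NETWORK_NAME] \
  --subnetwork=[SUBNET_NAME] \
  --stack-type=ipv4-ipv6 \
  --ipv6-access-type=external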
Creating the firewall
We need a firewall to allow UDP traffic to nodes tagged as game-server via ports 7000-8000. These firewall rules apply to cluster nodes you will create in the next section.
gcloud compute firewall-rules create game-server-firewall \
--allow udp:7000-8000 \
--target-tags game-server \
--description "Firewall to allow game server udp traffic"
Creating the cluster
Create a GKE cluster in which you’ll install Code Blind. You can use GKE Standard mode or GKE Autopilot mode.
Create a Standard mode cluster for Code Blind
Create the cluster:
gcloud container clusters create [CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--release-channel=[RELEASE_CHANNEL] \
--tags=game-server \
--scopes=gke-default \
--num-nodes=4 \
--enable-image-streaming \
--machine-type=e2-standard-4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you want to create.
- [COMPUTE_REGION]: The GCP region to create the cluster in, chosen above.
- [RELEASE_CHANNEL]: The GKE release channel, chosen above.
Flag explanations:
- --region: The compute region you chose above.
- --release-channel: The release channel you chose above.
- --tags: Defines the tags that will be attached to new nodes in the cluster. This is to grant access through ports via the firewall created above.
- --scopes: Defines the OAuth scopes required by the nodes.
- --num-nodes: The number of nodes to be created in each of the cluster’s zones. Default: 4. Depending on the needs of your game, this parameter should be adjusted.
- --enable-image-streaming: Use Image streaming to pull container images, which leads to significant improvements in initialization times. Limitations apply to enable this feature.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
(Optional) Creating a dedicated node pool
Create a dedicated node pool for the Code Blind resources to be installed in. If you skip this step, the Code Blind controllers will share the default node pool with your game servers, which is fine for experimentation but not recommended for a production deployment.
gcloud container node-pools create agones-system \
--cluster=[CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--node-taints agones.dev/agones-system=true:NoExecute \
--node-labels agones.dev/agones-system=true \
--num-nodes=1 \
--machine-type=e2-standard-4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you created.
- [COMPUTE_REGION]: The GCP region of the cluster, chosen above.
Flag explanations:
- --cluster: The name of the cluster you created.
- --region: The compute region you chose above.
- --node-taints: The Kubernetes taints to automatically apply to nodes in this node pool.
- --node-labels: The Kubernetes labels to automatically apply to nodes in this node pool.
- --num-nodes: The number of nodes per cluster zone. For regional clusters, --num-nodes=1 creates one node in 3 separate zones in the region, giving you faster recovery time in the event of a node failure.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
(Optional) Creating a metrics node pool
Create a node pool for Metrics if you want to monitor the Code Blind system using Prometheus with Grafana or Cloud Logging and Monitoring.
gcloud container node-pools create agones-metrics \
--cluster=[CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--node-taints agones.dev/agones-metrics=true:NoExecute \
--node-labels agones.dev/agones-metrics=true \
--num-nodes=1 \
--machine-type=e2-standard-4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you created.
- [COMPUTE_REGION]: The GCP region of the cluster, chosen above.
Flag explanations:
- --cluster: The name of the cluster you created.
- --region: The compute region you chose above.
- --node-taints: The Kubernetes taints to automatically apply to nodes in this node pool.
- --node-labels: The Kubernetes labels to automatically apply to nodes in this node pool.
- --num-nodes: The number of nodes per cluster zone. For regional clusters, --num-nodes=1 creates one node in 3 separate zones in the region, giving you faster recovery time in the event of a node failure.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
(Optional) Creating a node pool for Windows
If you run game servers on Windows, you need to create a dedicated node pool for those servers. Windows Server 2019 (WINDOWS_LTSC_CONTAINERD) is the recommended image for Windows game servers.
Warning
Running GameServers on Windows nodes is currently Alpha. Feel free to file feedback through Github issues.
gcloud container node-pools create windows \
--cluster=[CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--image-type WINDOWS_LTSC_CONTAINERD \
--machine-type e2-standard-4 \
--num-nodes=4
Replace the following:
- [CLUSTER_NAME]: The name of the cluster you created.
- [COMPUTE_REGION]: The GCP region of the cluster, chosen above.
Flag explanations:
- --cluster: The name of the cluster you created.
- --region: The compute region you chose above.
- --image-type: The image type of the instances in the node pool - WINDOWS_LTSC_CONTAINERD in this case.
- --machine-type: The type of machine to use for nodes. Default: e2-standard-4. Depending on the needs of your game, you may wish to have smaller or larger machines.
- --num-nodes: The number of nodes per cluster zone. For regional clusters, --num-nodes=1 creates one node in 3 separate zones in the region, giving you faster recovery time in the event of a node failure.
Create an Autopilot mode cluster for Code Blind
Note
These installation instructions apply to Code Blind 1.30+.
Choose a Release Channel (Autopilot clusters must be on a Release Channel).
Create the cluster:
gcloud container clusters create-auto [CLUSTER_NAME] \
--region=[COMPUTE_REGION] \
--release-channel=[RELEASE_CHANNEL] \
--autoprovisioning-network-tags=game-server
Replace the following:
- [CLUSTER_NAME]: The name of your cluster.
- [COMPUTE_REGION]: The GCP region to create the cluster in.
- [RELEASE_CHANNEL]: One of rapid, regular, or stable, chosen above. The default is regular.
Flag explanations:
- --region: The compute region you chose above.
- --release-channel: The release channel you chose above.
- --autoprovisioning-network-tags: Defines the tags that will be attached to new nodes in the cluster. This is to grant access through ports via the firewall created above.
Setting up cluster credentials
gcloud container clusters create configures credentials for kubectl automatically. If you ever lose those, run:
gcloud container clusters get-credentials [CLUSTER_NAME] --region=[COMPUTE_REGION]
Next Steps
1.2 - Amazon Elastic Kubernetes Service
Create your EKS Cluster using the Getting Started Guide.
Possible steps are the following:
- Create a new IAM role for cluster management.
- Run aws configure to authorize your awscli with the proper AWS Access Key ID and AWS Secret Access Key.
- Create an example cluster:
eksctl create cluster \
--name prod \
--version 1.28 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 3 \
--nodes-max 4
Allowing UDP Traffic
For Code Blind to work correctly, we need to allow UDP traffic to pass through to our EKS cluster worker nodes. To achieve this, we must update the workers’ nodepool Security Group (SG) with the proper rule. A simple way to do that is:
- Log in to the AWS Management Console.
- Go to the VPC Dashboard and select Security Groups.
- Find the Security Group for the workers nodepool, which will be named something like eksctl-[cluster-name]-nodegroup-[cluster-name]-workers/SG.
- Select Inbound Rules.
- Edit Rules to add a new Custom UDP Rule with a 7000-8000 port range and an appropriate Source CIDR range (0.0.0.0/0 allows all traffic).
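If you prefer the command line, an equivalent rule can likely be added with the AWS CLI (a sketch; the security group ID is a placeholder you would look up first):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol udp \
  --port 7000-8000 \
  --cidr 0.0.0.0/0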
Next Steps
1.3 - Azure Kubernetes Service
Choosing your shell
You can use either Azure Cloud Shell or install the Azure CLI on your local shell in order to install AKS in your own Azure subscription. Cloud Shell comes preinstalled with the az and kubectl utilities, whereas you need to install them locally if you want to use your local shell. If you use Windows 10, you can use the Windows Subsystem for Linux as well.
Creating the AKS cluster
If you are using the Azure CLI from your local shell, you need to log in to your Azure account by executing the az login command and following the login procedure.
Here are the steps you need to follow to create a new AKS cluster (additional instructions and clarifications are listed here):
# Declare necessary variables, modify them according to your needs
AKS_RESOURCE_GROUP=akstestrg # Name of the resource group your AKS cluster will be created in
AKS_NAME=akstest # Name of your AKS cluster
AKS_LOCATION=westeurope # Azure region in which you'll deploy your AKS cluster
# Create the Resource Group where your AKS resource will be installed
az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION
# Create the AKS cluster - this might take some time. Type 'az aks create -h' to see all available options
# The following command will create a four Node AKS cluster. Node size is Standard A4 v2 and Kubernetes version is 1.28.0. Plus, SSH keys will be generated for you; use --ssh-key-value to provide your own values
az aks create --resource-group $AKS_RESOURCE_GROUP --name $AKS_NAME --node-count 4 --generate-ssh-keys --node-vm-size Standard_A4_v2 --kubernetes-version 1.28.0 --enable-node-public-ip
# Install kubectl
sudo az aks install-cli
# Get credentials for your new AKS cluster
az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $AKS_NAME
Alternatively, you can use the Azure Portal to create a new AKS cluster (instructions).
Allowing UDP traffic
For Code Blind to work correctly, we need to allow UDP traffic to pass through to our AKS cluster. To achieve this, we must update the NSG (Network Security Group) with the proper rule. A simple way to do that is:
- Log in to the Azure Portal.
- Find the resource group where the AKS (Azure Kubernetes Service) resources are kept, which should have a name like MC_resourceGroupName_AKSName_westeurope. Alternatively, you can type az resource show --namespace Microsoft.ContainerService --resource-type managedClusters -g $AKS_RESOURCE_GROUP -n $AKS_NAME -o json | jq .properties.nodeResourceGroup
- Find the Network Security Group object, which should have a name like aks-agentpool-********-nsg (e.g. aks-agentpool-55978144-nsg for dns-name-prefix agones).
- Select Inbound Security Rules.
- Select Add to create a new Rule with UDP as the protocol and 7000-8000 as the Destination Port Ranges. Pick a proper name and leave everything else at their default values.
Alternatively, you can use the following command, after modifying the RESOURCE_GROUP_WITH_AKS_RESOURCES and NSG_NAME values:
az network nsg rule create \
--resource-group RESOURCE_GROUP_WITH_AKS_RESOURCES \
--nsg-name NSG_NAME \
--name AgonesUDP \
--access Allow \
--protocol Udp \
--direction Inbound \
--priority 520 \
--source-port-range "*" \
--destination-port-range 7000-8000
Getting Public IPs to Nodes
Kubernetes versions prior to 1.18.19, 1.19.11 and 1.20.7
To find a resource’s public IP, search for Virtual Machine Scale Sets -> click on the set name (inside the MC_resourceGroupName_AKSName_westeurope group) -> click Instances -> click on the instance name -> view Public IP address.
To get the public IP via the API, look here.
For more information on Public IPs for VM NICs, see this document.
Kubernetes versions starting with 1.18.19, 1.19.11 and 1.20.7
The Virtual Machines’ public IP is available directly in the Kubernetes EXTERNAL-IP field.
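For example, once your kubeconfig points at the cluster, the node external IPs appear in the standard node listing:
kubectl get nodes -o wide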
Next Steps
1.4 - Minikube
Follow these steps to create a Minikube cluster for your Code Blind install.
Installing Minikube
First, install Minikube, which may also require you to install a virtualisation solution, such as VirtualBox, as well.
Starting Minikube
Minikube will need to be started with a version of Kubernetes that is supported by Code Blind, via the --kubernetes-version command line flag. Optionally, we also recommend starting with an agones profile, using -p to keep this cluster separate from any other clusters you may have running with Minikube.
minikube start --kubernetes-version v1.27.6 -p agones
Check the official minikube start reference for more options that may be required for your platform of choice.
Note
You may need to increase the --cpu or --memory values for your minikube instance, depending on what resources are available on the host and/or how many GameServers you wish to run locally.
Depending on your Operating System, you may also need to change the --driver (driver list) to enable GameServer connectivity with or without some workarounds listed below.
Known working drivers
Other operating systems and drivers may work, but at this stage have not been verified to work with UDP connections via Code Blind exposed ports.
Linux (amd64)
Mac (amd64)
Windows (amd64)
If you have successfully tested with other platforms and drivers, please click “edit this page” in the top right hand side and submit a pull request to let us know.
Local connection workarounds
Depending on your operating system and the virtualization platform that you are using with Minikube, it may not be possible to connect directly to a GameServer hosted on Code Blind as you would on a cloud hosted Kubernetes cluster. If you are unable to do so, the following workarounds are available, and may work on your platform:
minikube ip
Rather than using the published IP of a GameServer to connect, run minikube ip -p agones to get the local IP for the minikube node, and connect to that address.
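For example, to send UDP traffic to a game server with netcat (a sketch; 7777 is a placeholder for the hostPort your GameServer was actually allocated):
nc -u $(minikube ip -p agones) 7777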
Create a service
This would only be for local development, but if none of the other workarounds work, creating a Service for the GameServer you wish to connect to is a valid solution to tunnel traffic to the appropriate GameServer container.
Use the following yaml:
apiVersion: v1
kind: Service
metadata:
name: agones-gameserver
spec:
type: LoadBalancer
selector:
agones.dev/gameserver: ${GAMESERVER_NAME}
ports:
- protocol: UDP
port: 7000 # local port
targetPort: ${GAMESERVER_CONTAINER_PORT}
Where ${GAMESERVER_NAME} is replaced with the GameServer you wish to connect to, and ${GAMESERVER_CONTAINER_PORT} is replaced with the container port the GameServer exposes for connection.
Running minikube service list -p agones will show you the IP and port to connect to locally in the URL field.
To connect to a different GameServer, run kubectl edit service agones-gameserver and edit the ${GAMESERVER_NAME} value to point to the new GameServer instance and/or the ${GAMESERVER_CONTAINER_PORT} value as appropriate.
Warning
minikube tunnel (docs) does not support UDP (Github Issue) on some combinations of operating system, platform and driver, but is required when using the Service workaround.
Use a different driver
If you cannot connect through the Service or use other workarounds, you may want to try a different minikube driver, and if that doesn’t work, connection via UDP may not be possible with minikube, and you may want to try either a different local Kubernetes tool or use a cloud hosted Kubernetes cluster.
Next Steps
2 - Install Code Blind
Install Code Blind in your existing Kubernetes cluster.
If you have not yet created a cluster, follow the instructions for the environment where you will be running Code Blind.
2.1 - Install Code Blind using YAML
We can install Code Blind to the cluster using an install.yaml file.
Installing Code Blind
Warning
Installing Code Blind with the install.yaml file will use pre-generated, well known TLS certificates stored in this repository for securing Kubernetes webhooks communication. For production workloads, we strongly recommend using the helm installation, which allows you to generate new, unique certificates or provide your own certificates. Alternatively, you can use helm template as described below to generate a custom yaml installation file with unique certificates.
Installing Code Blind using the pre-generated install.yaml file is the quickest, simplest way to get Code Blind up and running in your Kubernetes cluster:
kubectl create namespace agones-system
kubectl apply --server-side -f https://raw.githubusercontent.com/googleforgames/agones/release-1.38.0/install/yaml/install.yaml
You can also find the install.yaml in the latest agones-install zip from the releases archive.
Customizing your install
To change the configurable parameters in the install.yaml file, you can use helm template to generate a custom file locally without needing to use helm to install Code Blind into your cluster.
The following example sets the featureGates and generateTLS helm parameters and creates a customized install-custom.yaml file (note that the pull command was introduced in Helm version 3):
helm pull --untar https://agones.dev/chart/stable/agones-1.38.0.tgz && \
cd agones && \
helm template agones-manual --namespace agones-system . \
--set agones.controller.generateTLS=false \
--set agones.allocator.generateTLS=false \
--set agones.allocator.generateClientTLS=false \
--set agones.crds.cleanupOnDelete=false \
--set agones.featureGates="PlayerTracking=true" \
> install-custom.yaml
Uninstalling Code Blind
To uninstall/delete the Code Blind deployment and delete the agones-system namespace:
kubectl delete fleets --all --all-namespaces
kubectl delete gameservers --all --all-namespaces
kubectl delete -f https://raw.githubusercontent.com/googleforgames/agones/release-1.38.0/install/yaml/install.yaml
kubectl delete namespace agones-system
Note: It may take a couple of minutes until all resources described in the install.yaml file are deleted.
Next Steps
2.2 - Install Code Blind using Helm
Install Code Blind on a Kubernetes cluster using the Helm package manager.
Prerequisites
Helm 3
Installing the Chart
To install the chart with the release name my-release using our stable helm repository:
helm repo add agones https://agones.dev/chart/stable
helm repo update
helm install my-release --namespace agones-system --create-namespace agones/agones
We recommend installing Code Blind in its own namespace, such as agones-system as shown above. If you want to use a different namespace, you can specify it with the helm --namespace parameter.
When running in production, Code Blind should be scheduled on a dedicated pool of nodes, distinct from where Game Servers are scheduled, for better isolation and resiliency. By default Code Blind prefers to be scheduled on nodes labeled with agones.dev/agones-system=true and tolerates the node taint agones.dev/agones-system=true:NoExecute. If no dedicated nodes are available, Code Blind will run on regular nodes, but that’s not recommended for production use. For instructions on setting up a dedicated node pool for Code Blind, see the Code Blind installation instructions for your preferred environment.
The command deploys Code Blind on the Kubernetes cluster with the default configuration. The configuration section lists the parameters that can be configured during installation.
Tip
List all releases using helm list --all-namespaces
Namespaces
By default Code Blind is configured to work with game servers deployed in the default namespace. If you are planning to use another namespace, you can configure Code Blind via the gameservers.namespaces parameter.
For example, to use the default and xbox namespaces:
kubectl create namespace xbox
helm install my-release agones/agones --set "gameservers.namespaces={default,xbox}" --namespace agones-system
Note
You need to create your namespaces before installing Code Blind. If you want to add a new namespace afterward, upgrade your release:
kubectl create namespace ps4
helm upgrade my-release agones/agones --reuse-values --set "gameservers.namespaces={default,xbox,ps4}" --namespace agones-system
Uninstalling the Chart
To uninstall/delete the my-release deployment:
helm uninstall my-release --namespace=agones-system
RBAC
By default, agones.rbacEnabled is set to true. This enables RBAC support in Code Blind and must be true if RBAC is enabled in your cluster. The chart will take care of creating the required service accounts and roles for Code Blind. If you have RBAC disabled, or to put it another way, ABAC enabled, you should set this value to false.
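For example (a sketch), an ABAC cluster could disable it at install time:
helm install my-release --namespace agones-system --create-namespace \
  --set agones.rbacEnabled=false agones/agones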
Configuration
The following tables list the configurable parameters of the Code Blind chart and their default values.
General
Parameter | Description | Default |
---|---|---|
agones.featureGates | A URL query encoded string of Flags to enable/disable e.g. Example=true&OtherThing=false . Any value accepted by strconv.ParseBool(string) can be used as a boolean value | `` |
agones.rbacEnabled | Creates RBAC resources. Must be set for any cluster configured with RBAC | true |
agones.registerWebhooks | Registers the webhooks used for the admission controller | true |
agones.registerApiService | Registers the apiservice(s) used for the Kubernetes API extension | true |
agones.registerServiceAccounts | Attempts to create service accounts for the controllers | true |
agones.createPriorityClass | Attempts to create priority classes for the controllers | true |
agones.priorityClassName | Name of the priority classes to create | agones-system |
agones.requireDedicatedNodes | Forces Code Blind system components to be scheduled on dedicated nodes; only applies to GKE Standard without node auto-provisioning | false |
Custom Resource Definitions
Parameter | Description | Default |
---|---|---|
agones.crds.install | Install the CRDs with this chart. Useful to disable if you want to subchart (since crd-install hook is broken), so you can copy the CRDs into your own chart. | true |
agones.crds.cleanupOnDelete | Run the pre-delete hook to delete all GameServers and their backing Pods when deleting the helm chart, so that all CRDs can be removed on chart deletion | true |
agones.crds.cleanupJobTTL | The number of seconds for Kubernetes to delete the associated Job and Pods of the pre-delete hook after it completes, regardless if the Job is successful or not. Set to 0 to disable cleaning up the Job or the associated Pods. | 60 |
Metrics
Parameter | Description | Default |
---|---|---|
agones.metrics.prometheusServiceDiscovery | Adds annotations for Prometheus ServiceDiscovery (and also Stackdriver) | true |
agones.metrics.prometheusEnabled | Enables controller metrics on port 8080 and path /metrics | true |
agones.metrics.stackdriverEnabled | Enables Stackdriver exporter of controller metrics | false |
agones.metrics.stackdriverProjectID | This overrides the default gcp project id for use with stackdriver | `` |
agones.metrics.stackdriverLabels | A set of default labels to add to all stackdriver metrics generated, in the form of key value pairs (key=value,key2=value2). By default metadata are automatically added using the Kubernetes API and the GCP metadata endpoint. | `` |
agones.metrics.serviceMonitor.interval | Default scraping interval for ServiceMonitor | 30s |
Service Accounts
Parameter | Description | Default |
---|---|---|
agones.serviceaccount.controller.name | Service account name for the controller | agones-controller |
agones.serviceaccount.controller.annotations | Annotations added to the Code Blind controller service account | {} |
agones.serviceaccount.sdk.name | Service account name for the sdk | agones-sdk |
agones.serviceaccount.sdk.annotations | A map of namespaces to maps of Annotations added to the Code Blind SDK service account for the specified namespaces | {} |
agones.serviceaccount.allocator.name | Service account name for the allocator | agones-allocator |
agones.serviceaccount.allocator.annotations | Annotations added to the Code Blind allocator service account | {} |
Container Images
Parameter | Description | Default |
---|---|---|
agones.image.registry | Global image registry for all the Code Blind system images | us-docker.pkg.dev/agones-images/release |
agones.image.tag | Global image tag for all images | 1.38.0 |
agones.image.controller.name | Image name for the controller | agones-controller |
agones.image.controller.pullPolicy | Image pull policy for the controller | IfNotPresent |
agones.image.controller.pullSecret | Image pull secret for the controller, allocator, sdk and ping image. Should be created both in agones-system and default namespaces | `` |
agones.image.sdk.name | Image name for the sdk | agones-sdk |
agones.image.sdk.tag | Image tag for the sdk | value of agones.image.tag |
agones.image.sdk.cpuRequest | The cpu request for sdk server container | 30m |
agones.image.sdk.cpuLimit | The cpu limit for the sdk server container | 0 (none) |
agones.image.sdk.memoryRequest | The memory request for sdk server container | 0 (none) |
agones.image.sdk.memoryLimit | The memory limit for the sdk server container | 0 (none) |
agones.image.sdk.alwaysPull | Tells if the sdk image should always be pulled | false |
agones.image.ping.name | Image name for the ping service | agones-ping |
agones.image.ping.tag | Image tag for the ping service | value of agones.image.tag |
agones.image.ping.pullPolicy | Image pull policy for the ping service | IfNotPresent |
agones.image.extensions.name | Image name for extensions | agones-extensions |
agones.image.extensions.pullPolicy | Image pull policy for extensions | IfNotPresent |
Code Blind Controller
Parameter | Description | Default |
---|---|---|
agones.controller.replicas | The number of replicas to run in the agones-controller deployment. | 2 |
agones.controller.pdb.minAvailable | Description of the number of pods from that set that must still be available after the eviction, even in the absence of the evicted pod. Can be either an absolute number or a percentage. Mutually Exclusive with maxUnavailable | 1 |
agones.controller.pdb.maxUnavailable | Description of the number of pods from that set that can be unavailable after the eviction. It can be either an absolute number or a percentage Mutually Exclusive with minAvailable | `` |
agones.controller.http.port | Port to use for liveness probe service and metrics | 8080 |
agones.controller.healthCheck.initialDelaySeconds | Initial delay before performing the first probe (in seconds) | 3 |
agones.controller.healthCheck.periodSeconds | Seconds between every liveness probe (in seconds) | 3 |
agones.controller.healthCheck.failureThreshold | Number of times before giving up (in seconds) | 3 |
agones.controller.healthCheck.timeoutSeconds | Number of seconds after which the probe times out (in seconds) | 1 |
agones.controller.resources | Controller resource requests/limit | {} |
agones.controller.generateTLS | Set to true to generate TLS certificates or false to provide your own certificates | true |
agones.controller.tlsCert | Custom TLS certificate provided as a string | `` |
agones.controller.tlsKey | Custom TLS private key provided as a string | `` |
agones.controller.nodeSelector | Controller node labels for pod assignment | {} |
agones.controller.tolerations | Controller toleration labels for pod assignment | [] |
agones.controller.affinity | Controller affinity settings for pod assignment | {} |
agones.controller.annotations | Annotations added to the Code Blind controller pods | {} |
agones.controller.numWorkers | Number of workers to spin per resource type | 100 |
agones.controller.apiServerQPS | Maximum sustained queries per second that controller should be making against API Server | 400 |
agones.controller.apiServerQPSBurst | Maximum burst queries per second that controller should be making against API Server | 500 |
agones.controller.logLevel | Code Blind Controller Log level. Log only entries with that severity and above | info |
agones.controller.persistentLogs | Store Code Blind controller logs in a temporary volume attached to a container for debugging | true |
agones.controller.persistentLogsSizeLimitMB | Maximum total size of all Code Blind container logs in MB | 10000 |
agones.controller.disableSecret | Disables the creation of any allocator secrets. If true, you MUST provide the {agones.releaseName}-cert secrets before installation. | false |
agones.controller.customCertSecretPath | Remap cert-manager path to server.crt and server.key | {} |
agones.controller.allocationApiService.annotations | Annotations added to the Code Blind apiregistration | {} |
agones.controller.allocationApiService.disableCaBundle | Disable ca-bundle so it can be injected by cert-manager | false |
agones.controller.validatingWebhook.annotations | Annotations added to the Code Blind validating webhook | {} |
agones.controller.validatingWebhook.disableCaBundle | Disable ca-bundle so it can be injected by cert-manager | false |
agones.controller.mutatingWebhook.annotations | Annotations added to the Code Blind mutating webhook | {} |
agones.controller.mutatingWebhook.disableCaBundle | Disable ca-bundle so it can be injected by cert-manager | false |
agones.controller.allocationBatchWaitTime | Wait time between each allocation batch when performing allocations in controller mode | 500ms |
agones.controller.topologySpreadConstraints | Ensures better resource utilization and high availability by evenly distributing Pods in the agones-system namespace | {} |
Ping Service
Parameter | Description | Default |
---|---|---|
agones.ping.install | Whether to install the ping service | true |
agones.ping.replicas | The number of replicas to run in the deployment | 2 |
agones.ping.http.expose | Expose the http ping service via a Service | true |
agones.ping.http.response | The string response returned from the http service | ok |
agones.ping.http.port | The port to expose on the service | 80 |
agones.ping.http.serviceType | The Service Type of the HTTP Service | LoadBalancer |
agones.ping.http.nodePort | Static node port to use for HTTP ping service. (Only applies when agones.ping.http.serviceType is NodePort .) | 0 |
agones.ping.http.loadBalancerIP | The Load Balancer IP of the HTTP Service load balancer. Only works if the Kubernetes provider supports this option. | `` |
agones.ping.http.loadBalancerSourceRanges | The Load Balancer SourceRanges of the HTTP Service load balancer. Only works if the Kubernetes provider supports this option. | [] |
agones.ping.http.annotations | Annotations added to the Code Blind ping http service | {} |
agones.ping.udp.expose | Expose the udp ping service via a Service | true |
agones.ping.udp.rateLimit | Number of UDP packets the ping service handles per instance, per second, per sender | 20 |
agones.ping.udp.port | The port to expose on the service | 80 |
agones.ping.udp.serviceType | The Service Type of the UDP Service | LoadBalancer |
agones.ping.udp.nodePort | Static node port to use for UDP ping service. (Only applies when agones.ping.udp.serviceType is NodePort .) | 0 |
agones.ping.udp.loadBalancerIP | The Load Balancer IP of the UDP Service load balancer. Only works if the Kubernetes provider supports this option. | `` |
agones.ping.udp.loadBalancerSourceRanges | The Load Balancer SourceRanges of the UDP Service load balancer. Only works if the Kubernetes provider supports this option. | [] |
agones.ping.udp.annotations | Annotations added to the Code Blind ping udp service | {} |
agones.ping.healthCheck.initialDelaySeconds | Initial delay before performing the first probe (in seconds) | 3 |
agones.ping.healthCheck.periodSeconds | Seconds between every liveness probe (in seconds) | 3 |
agones.ping.healthCheck.failureThreshold | Number of times before giving up (in seconds) | 3 |
agones.ping.healthCheck.timeoutSeconds | Number of seconds after which the probe times out (in seconds) | 1 |
agones.ping.resources | Ping pods resource requests/limit | {} |
agones.ping.nodeSelector | Ping node labels for pod assignment | {} |
agones.ping.tolerations | Ping toleration labels for pod assignment | [] |
agones.ping.affinity | Ping affinity settings for pod assignment | {} |
agones.ping.annotations | Annotations added to the Code Blind ping pods | {} |
agones.ping.updateStrategy | The strategy to apply to the ping deployment | {} |
agones.ping.pdb.enabled | Set to true to enable the creation of a PodDisruptionBudget for the ping deployment | false |
agones.ping.pdb.minAvailable | Description of the number of pods from that set that must still be available after the eviction, even in the absence of the evicted pod. Can be either an absolute number or a percentage. Mutually Exclusive with maxUnavailable | 1 |
agones.ping.pdb.maxUnavailable | Description of the number of pods from that set that can be unavailable after the eviction. It can be either an absolute number or a percentage Mutually Exclusive with minAvailable | `` |
agones.ping.topologySpreadConstraints | Ensures better resource utilization and high availability by evenly distributing Pods in the agones-system namespace | {} |
Allocator Service
Parameter | Description | Default |
---|---|---|
agones.allocator.apiServerQPS | Maximum sustained queries per second that an allocator should be making against API Server | 400 |
agones.allocator.apiServerQPSBurst | Maximum burst queries per second that an allocator should be making against API Server | 500 |
agones.allocator.remoteAllocationTimeout | Remote allocation call timeout. | 10s |
agones.allocator.totalRemoteAllocationTimeout | Total remote allocation timeout including retries. | 30s |
agones.allocator.logLevel | Code Blind Allocator Log level. Log only entries with that severity and above | info |
agones.allocator.install | Whether to install the allocator service | true |
agones.allocator.replicas | The number of replicas to run in the deployment | 3 |
agones.allocator.service.name | Service name for the allocator | agones-allocator |
agones.allocator.service.serviceType | The Service Type of the HTTP Service | LoadBalancer |
agones.allocator.service.clusterIP | The Cluster IP of the Code Blind allocator. If you want Headless Service for Code Blind Allocator, you can set None to clusterIP. | `` |
agones.allocator.service.loadBalancerIP | The Load Balancer IP of the Code Blind allocator load balancer. Only works if the Kubernetes provider supports this option. | `` |
agones.allocator.service.loadBalancerSourceRanges | The Load Balancer SourceRanges of the Code Blind allocator load balancer. Only works if the Kubernetes provider supports this option. | [] |
agones.allocator.service.annotations | Annotations added to the Code Blind allocator service | {} |
agones.allocator.service.http.enabled | If true the allocator service will respond to REST requests | true |
agones.allocator.service.http.appProtocol | The appProtocol to set on the Service for the http allocation port. If left blank, no value is set. | `` |
agones.allocator.service.http.port | The port that is exposed externally by the allocator service for REST requests | 443 |
agones.allocator.service.http.portName | The name of exposed port | http |
agones.allocator.service.http.targetPort | The port that is used by the allocator pod to listen for REST requests. Note that the allocator server cannot bind to low numbered ports. | 8443 |
agones.allocator.service.http.nodePort | If the ServiceType is set to “NodePort”, this is the NodePort that the allocator http service is exposed on. | 30000-32767 |
agones.allocator.service.grpc.enabled | If true the allocator service will respond to gRPC requests | true |
agones.allocator.service.grpc.port | The port that is exposed externally by the allocator service for gRPC requests | 443 |
agones.allocator.service.grpc.portName | The name of exposed port | `` |
agones.allocator.service.grpc.appProtocol | The appProtocol to set on the Service for the gRPC allocation port. If left blank, no value is set. | `` |
agones.allocator.service.grpc.nodePort | If the ServiceType is set to “NodePort”, this is the NodePort that the allocator gRPC service is exposed on. | 30000-32767 |
agones.allocator.service.grpc.targetPort | The port that is used by the allocator pod to listen for gRPC requests. Note that the allocator server cannot bind to low numbered ports. | 8443 |
agones.allocator.generateClientTLS | Set to true to generate client TLS certificates or false to provide certificates in certs/allocator/allocator-client.default/* | true |
agones.allocator.generateTLS | Set to true to generate TLS certificates or false to provide your own certificates | true |
agones.allocator.disableMTLS | Turns off client cert authentication for incoming connections to the allocator. | false |
agones.allocator.disableTLS | Turns off TLS security for incoming connections to the allocator. | false |
agones.allocator.disableSecretCreation | Disables the creation of any allocator secrets. If true, you MUST provide the allocator-tls , allocator-tls-ca , and allocator-client-ca secrets before installation. | false |
agones.allocator.tlsCert | Custom TLS certificate provided as a string | `` |
agones.allocator.tlsKey | Custom TLS private key provided as a string | `` |
agones.allocator.clientCAs | A map of secret key names to allowed client CA certificates provided as strings | {} |
agones.allocator.tolerations | Allocator toleration labels for pod assignment | [] |
agones.allocator.affinity | Allocator affinity settings for pod assignment | {} |
agones.allocator.annotations | Annotations added to the Code Blind allocator pods | {} |
agones.allocator.resources | Allocator pods resource requests/limit | {} |
agones.allocator.labels | Labels Added to the Code Blind Allocator pods | {} |
agones.allocator.readiness.initialDelaySeconds | Initial delay before performing the first probe (in seconds) | 3 |
agones.allocator.readiness.periodSeconds | Seconds between every liveness probe (in seconds) | 3 |
agones.allocator.readiness.failureThreshold | Number of times before giving up (in seconds) | 3 |
agones.allocator.nodeSelector | Allocator node labels for pod assignment | {} |
agones.allocator.serviceMetrics.name | Second Service name for the allocator | agones-allocator-metrics-service |
agones.allocator.serviceMetrics.annotations | Annotations added to the Code Blind allocator second Service | {} |
agones.allocator.serviceMetrics.http.port | The port that is exposed within cluster by the allocator service for http requests | 8080 |
agones.allocator.serviceMetrics.http.portName | The name of exposed port | http |
agones.allocator.allocationBatchWaitTime | Wait time between each allocation batch when performing allocations in allocator mode | 500ms |
agones.allocator.updateStrategy | The strategy to apply to the allocator deployment | {} |
agones.allocator.pdb.enabled | Set to true to enable the creation of a PodDisruptionBudget for the allocator deployment | false |
agones.allocator.pdb.minAvailable | Description of the number of pods from that set that must still be available after the eviction, even in the absence of the evicted pod. Can be either an absolute number or a percentage. Mutually Exclusive with maxUnavailable | 1 |
agones.allocator.pdb.maxUnavailable | Description of the number of pods from that set that can be unavailable after the eviction. It can be either an absolute number or a percentage. Mutually Exclusive with minAvailable | `` |
agones.allocator.topologySpreadConstraints | Ensures better resource utilization and high availability by evenly distributing Pods in the agones-system namespace | {} |
Extensions
Parameter | Description | Default |
---|---|---|
agones.extensions.http.port | Port to use for liveness probe service and metrics | 8080 |
agones.extensions.healthCheck.initialDelaySeconds | Initial delay before performing the first probe (in seconds) | 3 |
agones.extensions.healthCheck.periodSeconds | Seconds between every liveness probe (in seconds) | 3 |
agones.extensions.healthCheck.failureThreshold | Number of times before giving up (in seconds) | 3 |
agones.extensions.healthCheck.timeoutSeconds | Number of seconds after which the probe times out (in seconds) | 1 |
agones.extensions.resources | Extensions resource requests/limit | {} |
agones.extensions.generateTLS | Set to true to generate TLS certificates or false to provide your own certificates | true |
agones.extensions.tlsCert | Custom TLS certificate provided as a string | `` |
agones.extensions.tlsKey | Custom TLS private key provided as a string | `` |
agones.extensions.nodeSelector | Extensions node labels for pod assignment | {} |
agones.extensions.tolerations | Extensions toleration labels for pod assignment | [] |
agones.extensions.affinity | Extensions affinity settings for pod assignment | {} |
agones.extensions.annotations | Annotations added to the Code Blind extensions pods | {} |
agones.extensions.numWorkers | Number of workers to spin per resource type | 100 |
agones.extensions.apiServerQPS | Maximum sustained queries per second that extensions should be making against API Server | 400 |
agones.extensions.apiServerQPSBurst | Maximum burst queries per second that extensions should be making against API Server | 500 |
agones.extensions.logLevel | Code Blind Extensions Log level. Log only entries with that severity and above | info |
agones.extensions.persistentLogs | Store Code Blind extensions logs in a temporary volume attached to a container for debugging | true |
agones.extensions.persistentLogsSizeLimitMB | Maximum total size of all Code Blind container logs in MB | 10000 |
agones.extensions.disableSecret | Disables the creation of any allocator secrets. If true, you MUST provide the {agones.releaseName}-cert secrets before installation. | false |
agones.extensions.customCertSecretPath | Remap cert-manager path to server.crt and server.key | {} |
agones.extensions.allocationApiService.annotations | Annotations added to the Code Blind apiregistration | {} |
agones.extensions.allocationApiService.disableCaBundle | Disable ca-bundle so it can be injected by cert-manager | false |
agones.extensions.validatingWebhook.annotations | Annotations added to the Code Blind validating webhook | {} |
agones.extensions.validatingWebhook.disableCaBundle | Disable ca-bundle so it can be injected by cert-manager | false |
agones.extensions.mutatingWebhook.annotations | Annotations added to the Code Blind mutating webhook | {} |
agones.extensions.mutatingWebhook.disableCaBundle | Disable ca-bundle so it can be injected by cert-manager | false |
agones.extensions.allocationBatchWaitTime | Wait time between each allocation batch when performing allocations in controller mode | 500ms |
agones.extensions.pdb.minAvailable | Description of the number of pods from that set that must still be available after the eviction, even in the absence of the evicted pod. Can be either an absolute number or a percentage. Mutually Exclusive with maxUnavailable | 1 |
agones.extensions.pdb.maxUnavailable | Description of the number of pods from that set that can be unavailable after the eviction. It can be either an absolute number or a percentage Mutually Exclusive with minAvailable | `` |
agones.extensions.replicas | The number of replicas to run in the deployment | 2 |
agones.extensions.topologySpreadConstraints | Ensures better resource utilization and high availability by evenly distributing Pods in the agones-system namespace | {} |
GameServers
Parameter | Description | Default |
---|---|---|
gameservers.namespaces | a list of namespaces you are planning to use to deploy game servers | ["default"] |
gameservers.minPort | Minimum port to use for dynamic port allocation | 7000 |
gameservers.maxPort | Maximum port to use for dynamic port allocation | 8000 |
gameservers.podPreserveUnknownFields | Disable field pruning and schema validation on the Pod template for a GameServer definition | false |
Helm Installation
Parameter | Description | Default |
---|---|---|
helm.installTests | Add an ability to run helm test agones to verify the installation | false |
Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,
helm install my-release --namespace agones-system \
--set gameservers.minPort=1000,gameservers.maxPort=5000 agones
The above command will deploy Code Blind controllers to the agones-system namespace. Additionally Code Blind will use a dynamic GameServers’ port allocation range of 1000-5000.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
helm install my-release --namespace agones-system -f values.yaml agones/agones
Helm test
This test would create a GameServer resource and delete it afterwards.
Tip
In order to use the helm test command described in this section, you need to set the helm.installTests helm parameter to true.
Check the Code Blind installation by running the following command:
helm test my-release -n agones-system
You should see a successful output similar to this:
NAME: my-release
LAST DEPLOYED: Wed Mar 29 06:13:23 2023
NAMESPACE: agones-system
STATUS: deployed
REVISION: 4
TEST SUITE: my-release-test
Last Started: Wed Mar 29 06:17:52 2023
Last Completed: Wed Mar 29 06:18:10 2023
Phase: Succeeded
Controller TLS Certificates
By default the agones chart generates TLS certificates used by the admission controller. While this is handy, it requires the agones controller to restart on each helm upgrade command.
Manual
For most use cases the controller would have required a restart anyway (e.g. controller image updated). However, if you really need to avoid restarts, we suggest that you turn off automatic TLS generation (set agones.controller.generateTLS to false) and provide your own certificates (certs/server.crt, certs/server.key).
Tip
You can use our script located at cert.sh to generate them.
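If you would rather generate a pair by hand, a minimal self-signed certificate can be created with openssl (a sketch; the service DNS name is an assumption and should match the controller service of your release, and Kubernetes webhooks generally require it as a SAN):
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout certs/server.key -out certs/server.crt \
  -subj "/CN=agones-controller-service.agones-system.svc" \
  -addext "subjectAltName=DNS:agones-controller-service.agones-system.svc"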
Cert-Manager
Another approach is to use the cert-manager.io solution for cluster level certificate management.
In order to use the cert-manager solution, first install cert-manager on the cluster. Then, configure an Issuer/ClusterIssuer resource, and last configure a Certificate resource to manage the controller Secret. Make sure to configure the Certificate based on your system’s requirements, including the validity duration.
Here is an example of using a self-signed ClusterIssuer for configuring the controller Secret, where the secret name is my-release-agones-cert or {{ template "agones.fullname" . }}-cert:
#!/bin/bash
# Create a self-signed ClusterIssuer
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: selfsigned
spec:
selfSigned: {}
EOF
# Create a Certificate to manage the my-release-agones-cert secret
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: my-release-agones-cert
namespace: agones-system
spec:
dnsNames:
- agones-controller-service.agones-system.svc
secretName: my-release-agones-cert
issuerRef:
name: selfsigned
kind: ClusterIssuer
EOF
After the certificates are generated, we will want to inject the caBundle into the controller and extensions webhooks, and disable the controller and extensions secret creation, through the following values.yaml file:
agones:
controller:
disableSecret: true
customCertSecretPath:
- key: ca.crt
path: ca.crt
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
allocationApiService:
annotations:
cert-manager.io/inject-ca-from: agones-system/my-release-agones-cert
disableCaBundle: true
validatingWebhook:
annotations:
cert-manager.io/inject-ca-from: agones-system/my-release-agones-cert
disableCaBundle: true
mutatingWebhook:
annotations:
cert-manager.io/inject-ca-from: agones-system/my-release-agones-cert
disableCaBundle: true
extensions:
disableSecret: true
customCertSecretPath:
- key: ca.crt
path: ca.crt
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
allocationApiService:
annotations:
cert-manager.io/inject-ca-from: agones-system/my-release-agones-cert
disableCaBundle: true
validatingWebhook:
annotations:
cert-manager.io/inject-ca-from: agones-system/my-release-agones-cert
disableCaBundle: true
mutatingWebhook:
annotations:
cert-manager.io/inject-ca-from: agones-system/my-release-agones-cert
disableCaBundle: true
After copying the above YAML into a values.yaml file, use the following command to install Code Blind:
helm install my-release --namespace agones-system --create-namespace --values values.yaml agones/agones
Reserved Allocator Load Balancer IP
In order to reuse an existing load balancer IP on upgrade, or to install the agones-allocator service as a LoadBalancer using a reserved static IP, a user can specify the load balancer’s IP with the agones.allocator.http.loadBalancerIP helm configuration parameter. By setting the loadBalancerIP value:
- The LoadBalancer is created with the specified IP, if supported by the cloud provider.
- A self-signed server TLS certificate is generated for the IP, used by the agones-allocator service.
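For example, assuming a reserved static IP of 1.2.3.4 (a placeholder value), the parameter can be set at install time alongside any other values:
helm install my-release --namespace agones-system --create-namespace --set agones.allocator.http.loadBalancerIP="1.2.3.4" agones/agones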
Next Steps
3 - Deploy Kubernetes cluster and install Code Blind using Terraform
Install a Kubernetes cluster and Code Blind declaratively using Terraform.
Prerequisites
- Terraform v1.0.8
- Access to the Kubernetes hosting provider you are using (e.g. the gcloud, awscli, or az utility installed)
- Git
Note
All our Terraform modules and examples use a Helm 3 Module. The last Code Blind release to include a Helm 2 module was 1.9.0.
3.1 - Installing Code Blind on Google Kubernetes Engine using Terraform
You can use Terraform to provision a GKE cluster and install Code Blind on it.
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Visit the Kubernetes Engine page in the Google Cloud Platform Console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
- Enable billing for your project.
- If you are not an existing GCP user, you may be able to enroll for a $300 US Free Trial credit.
Choosing a shell
To complete this quickstart, we can use either Google Cloud Shell or a local shell.
Google Cloud Shell is a shell environment for managing resources hosted on Google Cloud Platform (GCP). Cloud Shell comes preinstalled with the gcloud and kubectl command-line tools. gcloud
provides the primary command-line interface for GCP, and kubectl
provides the command-line interface for running commands against Kubernetes clusters.
If you prefer using your local shell, you must install the gcloud and kubectl command-line tools in your environment.
Cloud shell
To launch Cloud Shell, perform the following steps:
- Go to Google Cloud Platform Console
- From the top-right corner of the console, click the
Activate Google Cloud Shell button:
- A Cloud Shell session opens inside a frame at the bottom of the console. Use this shell to run gcloud and kubectl commands.
- Set a compute zone in your geographical region with the following command. The compute zone will be something like us-west1-a. A full list can be found here.
gcloud config set compute/zone [COMPUTE_ZONE]
Local shell
To install gcloud and kubectl, perform the following steps:
- Install the Google Cloud SDK, which includes the gcloud command-line tool.
- Initialize some default configuration by running the following command.
gcloud init
- When asked Do you want to configure a default Compute Region and Zone? (Y/n)?, enter Y and choose a zone in your geographical region of choice.
- Install the kubectl command-line tool by running the following command:
gcloud components install kubectl
Installation
An example configuration can be found here:
Terraform configuration with Code Blind submodule.
Copy this file into a local directory where you will execute the terraform commands.
The GKE cluster created from the example configuration will contain 3 Node Pools:
"default"
node pool with "game-server"
tag, containing 4 nodes."agones-system"
node pool for Code Blind Controller."agones-metrics"
for monitoring and metrics collecting purpose.
Configurable parameters:
- project - your Google Cloud Project ID (required)
- name - the name of the GKE cluster (default is “agones-terraform-example”)
- agones_version - the version of agones to install (an empty string, which is the default, is the latest version from the Helm repository)
- machine_type - machine type for hosting game servers (default is “e2-standard-4”)
- node_count - count of game server nodes for the default node pool (default is “4”)
- enable_image_streaming - whether or not to enable image streaming for the "default" node pool (default is true)
- zone - (Deprecated, use location) the name of the zone you want your cluster to be created in (default is "us-west1-c")
- network - the name of the VPC network you want your cluster and firewall rules to be connected to (default is “default”)
- subnetwork - the name of the subnetwork in which the cluster’s instances are launched. (required when using non default network)
- log_level - possible values: Fatal, Error, Warn, Info, Debug (default is “info”)
- feature_gates - a list of alpha and beta version features to enable. For example, “PlayerTracking=true&ContainerPortAllocation=true”
- gameserver_minPort - the lower bound of the port range which gameservers will listen on (default is “7000”)
- gameserver_maxPort - the upper bound of the port range which gameservers will listen on (default is “8000”)
- gameserver_namespaces - a list of namespaces which will be used to run gameservers (default is ["default"]). For example: ["default", "xbox-gameservers", "mobile-gameservers"]
- force_update - whether or not to force the replacement/update of resource (default is true, false may be required to prevent immutability errors when updating the configuration)
- location - the name of the location you want your cluster to be created in (default is “us-west1-c”)
- autoscale - whether you want to enable autoscale for the gameserver nodepool (default is false)
- min_node_count - the minimum number of nodes for a nodepool when autoscale is enabled (default is “1”)
- max_node_count - the maximum number of nodes for a nodepool when autoscale is enabled (default is “5”)
Warning
On the lines that read source = "git::https://github.com/googleforgames/agones.git//install/terraform/modules/gke/?ref=main"
make sure to change ?ref=main
to match your targeted Code Blind release, as Terraform modules can change between
releases.
For example, if you are targeting release-1.38.0, then you will want to have
source = "git::https://github.com/googleforgames/agones.git//install/terraform/modules/gke/?ref=release-1.38.0"
as your source.
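For orientation, a module.tf stub might look like the following sketch. This is illustrative only: the parameter names mirror the "Configurable parameters" list above, but the linked example configuration is authoritative for the module's exact variable layout.
// Illustrative sketch only -- see the linked example configuration for the
// authoritative variable layout.
module "agones" {
  source = "git::https://github.com/googleforgames/agones.git//install/terraform/modules/gke/?ref=release-1.38.0"

  project      = "your-gcp-project-id" // required; placeholder value
  name         = "agones-terraform-example"
  machine_type = "e2-standard-4"
  node_count   = 4
}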
Creating the cluster
In the directory where you created module.tf, run:
terraform init
This will cause terraform to clone the Code Blind repository and use the ./install/terraform
folder as the starting point of
the Code Blind submodule, which contains all necessary Terraform configuration files.
Next, make sure that you can authenticate using gcloud:
gcloud auth application-default login
Option 1: Creating the cluster in the default VPC
To create your GKE cluster in the default VPC just specify the project variable.
terraform apply -var project="<YOUR_GCP_ProjectID>"
Option 2: Creating the cluster in a custom VPC
To create the cluster in a custom VPC you must specify the project, network and subnetwork variables.
terraform apply -var project="<YOUR_GCP_ProjectID>" -var network="<YOUR_NETWORK_NAME>" -var subnetwork="<YOUR_SUBNETWORK_NAME>"
To verify that the cluster was created successfully, set up your kubectl credentials:
gcloud container clusters get-credentials --zone us-west1-c agones-terraform-example
Then check that you have access to the Kubernetes cluster:
kubectl get nodes
You should have 6 nodes in Ready state.
Uninstall Code Blind and delete the GKE cluster
To delete all resources provisioned by Terraform:
terraform destroy -var project="<YOUR_GCP_ProjectID>"
Next Steps
3.2 - Installing Code Blind on AWS Elastic Kubernetes Service using Terraform
You can use Terraform to provision an EKS cluster and install Code Blind on it.
Installation
You can use Terraform to provision your Amazon EKS (Elastic Kubernetes Service) cluster and install Code Blind on it using the Helm Terraform provider.
An example of the EKS submodule config file can be found here:
Terraform configuration with Code Blind submodule
Copy this file into a separate folder.
Configure your AWS CLI tool:
aws configure
Initialise your terraform:
terraform init
Creating Cluster
By editing modules.tf you can change the parameters that you need to. For instance, the machine_type variable.
Configurable parameters:
- cluster_name - the name of the EKS cluster (default is “agones-terraform-example”)
- agones_version - the version of agones to install (an empty string, which is the default, is the latest version from the Helm repository)
- machine_type - EC2 instance type for hosting game servers (default is “t2.large”)
- region - the location of the cluster (default is “us-west-2”)
- node_count - count of game server nodes for the default node pool (default is “4”)
- log_level - possible values: Fatal, Error, Warn, Info, Debug (default is “info”)
- feature_gates - a list of alpha and beta version features to enable. For example, “PlayerTracking=true&ContainerPortAllocation=true”
- gameserver_minPort - the lower bound of the port range which gameservers will listen on (default is “7000”)
- gameserver_maxPort - the upper bound of the port range which gameservers will listen on (default is “8000”)
- gameserver_namespaces - a list of namespaces which will be used to run gameservers (default is ["default"]). For example: ["default", "xbox-gameservers", "mobile-gameservers"]
- force_update - whether or not to force the replacement/update of resource (default is true, false may be required to prevent immutability errors when updating the configuration)
Now you can create an EKS cluster and deploy Code Blind on EKS:
terraform apply [-var agones_version="1.38.0"]
After deploying the cluster with Code Blind, you can get or update your kubeconfig by using:
aws eks --region us-west-2 update-kubeconfig --name agones-cluster
With the following output:
Added new context arn:aws:eks:us-west-2:601646756426:cluster/agones-cluster to /Users/user/.kube/config
Switch kubectl
context to the recently created one:
kubectl config use-context arn:aws:eks:us-west-2:601646756426:cluster/agones-cluster
Check that you are authenticated against the recently created Kubernetes cluster:
kubectl get nodes
Uninstall Code Blind and delete the EKS cluster
Run the following commands to delete all Terraform provisioned resources:
terraform destroy -target module.helm_agones.helm_release.agones -auto-approve && sleep 60
terraform destroy
Note
There is an issue with the AWS Terraform provider: https://github.com/terraform-providers/terraform-provider-aws/issues/9101
Due to this issue, you should remove the helm release first (as stated above); otherwise terraform destroy will time out and never succeed. In that case, remove all created resources manually, namely: 3 Auto Scaling groups, the EKS cluster, and a VPC with all dependent resources.
3.3 - Installing Code Blind on Azure Kubernetes Service using Terraform
You can use Terraform to provision an AKS cluster and install Code Blind on it.
Installation
Install the az utility by following these instructions.
An example of the AKS submodule configuration can be found here:
Terraform configuration with Code Blind submodule
Copy the module.tf file into a separate folder.
Log in to Azure CLI:
az login
Configure your terraform:
terraform init
Create a service principal and configure its access to Azure resources:
az ad sp create-for-rbac
Now you can deploy your cluster (use values from the above command output):
terraform apply -var client_id="<appId>" -var client_secret="<password>"
Once you have created all resources on AKS, you can get the credentials so that you can use kubectl to configure your cluster:
az aks get-credentials --resource-group agonesRG --name test-cluster
Check that you have access to the Kubernetes cluster:
kubectl get nodes
Configurable parameters:
- log_level - possible values: Fatal, Error, Warn, Info, Debug (default is “info”)
- cluster_name - the name of the AKS cluster (default is “agones-terraform-example”)
- agones_version - the version of agones to install (an empty string, which is the default, is the latest version from the Helm repository)
- machine_type - node machine type for hosting game servers (default is “Standard_D2_v2”)
- disk_size - disk size of the node
- region - the location of the cluster
- node_count - count of game server nodes for the default node pool (default is “4”)
- feature_gates - a list of alpha and beta version features to enable. For example, “PlayerTracking=true&ContainerPortAllocation=true”
- gameserver_minPort - the lower bound of the port range which gameservers will listen on (default is “7000”)
- gameserver_maxPort - the upper bound of the port range which gameservers will listen on (default is “8000”)
- gameserver_namespaces - a list of namespaces which will be used to run gameservers (default is ["default"]). For example: ["default", "xbox-gameservers", "mobile-gameservers"]
- force_update - whether or not to force the replacement/update of resource (default is true, false may be required to prevent immutability errors when updating the configuration)
Uninstall Code Blind and delete the AKS cluster
Run the following command to delete all Terraform provisioned resources:
terraform destroy
Reference
See the official instructions for details on how to authenticate your AKS Terraform provider.
Next Steps
4 - Confirming Code Blind Installation
Verify Code Blind is installed and has started successfully.
To confirm Code Blind is up and running, run the following command:
kubectl describe --namespace agones-system pods
It should describe six pods created in the agones-system namespace, with no error messages or failure statuses. All Conditions sections should look like this:
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
All these pods should be in a Running state:
kubectl get pods --namespace agones-system
NAME READY STATUS RESTARTS AGE
agones-allocator-5c988b7b8d-cgtbs 1/1 Running 0 8m47s
agones-allocator-5c988b7b8d-hhhr5 1/1 Running 0 8m47s
agones-allocator-5c988b7b8d-pv577 1/1 Running 0 8m47s
agones-controller-7db45966db-56l66 1/1 Running 0 8m44s
agones-ping-84c64f6c9d-bdlzh 1/1 Running 0 8m37s
agones-ping-84c64f6c9d-sjgzz 1/1 Running 0 8m47s
That’s it!
Now, with Code Blind installed, you can utilise its Custom Resource Definitions to create resources of type GameServer, Fleet and more!
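As a taste of what comes next, a minimal GameServer manifest has roughly this shape (the names and container image below are placeholders, not part of this guide):
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  name: my-game-server    # placeholder name
spec:
  ports:
    - name: default
      containerPort: 7654    # the port your game server binary listens on
  template:
    spec:
      containers:
        - name: my-game-server
          image: example.com/my-game-server:latest    # placeholder image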
What’s next
5 - Upgrading Code Blind and Kubernetes
Strategies and techniques for managing Code Blind and Kubernetes upgrades in a safe manner.
Note
Whichever approach you take to upgrading Code Blind, make sure to test it in your development environment before applying it to production.
Upgrading Code Blind
The following are strategies for safely upgrading Code Blind from one version to another. They may require adjustment to
your particular game architecture but should provide a solid foundation for updating Code Blind safely.
The recommended approach is to use multiple clusters, such that the upgrade can be tested
gradually with production load and easily rolled back if the need arises.
Warning
Changing
Feature Gates within your Code Blind install
can constitute an “upgrade” as it may create or remove functionality
in the Code Blind installation that may not be forward or backward compatible with installed resources in an existing
installation.
Upgrading Code Blind: Multiple Clusters
We essentially want to transition our GameServer allocations from a cluster with the old version of Code Blind,
to a cluster with the upgraded version of Code Blind while ensuring nothing surprising
happens during this process.
This also allows easy rollback to the previous infrastructure that we already know to be working in production, with
minimal interruptions to player experience.
The following are steps to implement this:
- Create a new cluster of the same size as, or smaller than, the current cluster.
- Install the new version of Code Blind on the new cluster.
- Deploy the same set of Fleets, GameServers and FleetAutoscalers from the old cluster into the new cluster (see the sketch after this list).
- With your matchmaker, start sending a small percentage of your matched players’ game sessions to the new cluster.
- Assuming everything is working successfully on the new cluster, slowly increase the percentage of matched sessions to the new cluster, until you reach 100%.
- Once you are comfortable with the stability of the new cluster with the new Code Blind version, shut down the old cluster.
- Congratulations - you have now upgraded to a new version of Code Blind! 👍
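For step 3, assuming your Fleet, GameServer and FleetAutoscaler manifests live in version control (the ./fleets/ directory and context name below are placeholders), redeploying them can be as simple as:
# Apply the same manifests to the new cluster via its kubectl context
kubectl --context <new-cluster-context> apply -f ./fleets/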
Upgrading Code Blind: Single Cluster
If you are upgrading a single cluster, we recommend creating a maintenance window, in which your game goes offline
for the period of your upgrade, as there will be a short period in which Code Blind will be non-responsive during the upgrade.
Installation with install.yaml
If you installed Code Blind with install.yaml, then you will need to delete the previous installation of Code Blind before upgrading, as all of Code Blind must be removed before the new version is installed.
- Start your maintenance window.
- Delete the current set of Fleets, GameServers and FleetAutoscalers in your cluster.
- Make sure to delete the same version of Code Blind that was previously installed, for example:
kubectl delete -f https://raw.githubusercontent.com/googleforgames/agones/<old-release-version>/install/yaml/install.yaml
- Install Code Blind with install.yaml (see the example after this list).
- Deploy the same set of Fleets, GameServers and FleetAutoscalers back into the cluster.
- Run any other tests to ensure the Code Blind installation is working as expected.
- Close your maintenance window.
- Congratulations - you have now upgraded to a new version of Code Blind! 👍
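For step 4 of the list above, the install mirrors the delete command, pointing at the new release version instead (check the install.yaml instructions for any additional recommended flags):
kubectl apply -f https://raw.githubusercontent.com/googleforgames/agones/<new-release-version>/install/yaml/install.yaml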
Installation with Helm
Helm features capabilities for upgrading to newer versions of Code Blind without having to uninstall Code Blind completely.
For details on how to use Helm for upgrades, see the helm upgrade documentation.
Given the above, the steps for upgrade are simpler:
- Start your maintenance window.
- Delete the current set of Fleets, GameServers and FleetAutoscalers in your cluster.
- Run helm upgrade with the appropriate arguments, such as --version, for your specific upgrade (see the example after this list).
- Deploy the same set of Fleets, GameServers and FleetAutoscalers back into the cluster.
- Run any other tests to ensure the Code Blind installation is working as expected.
- Close your maintenance window.
- Congratulations - you have now upgraded to a new version of Code Blind! 👍
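For example, assuming the release name used earlier in this guide, an upgrade to a specific chart version might look like:
helm upgrade my-release agones/agones --namespace agones-system --version <new-version>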
Upgrading Kubernetes
The following are strategies for safely upgrading the underlying Kubernetes cluster from one version to another.
They may require adjustment to your particular game architecture but should provide a solid foundation for updating your cluster safely.
The recommended approach is to use multiple clusters, such that the upgrade can be tested
gradually with production load and easily rolled back if the need arises.
Each Code Blind release supports multiple Kubernetes versions. You can stick with a given minor Kubernetes version until it is no longer supported by Code Blind, but it is recommended to do supported minor Kubernetes version upgrades (e.g. 1.12.1 ➡ 1.13.2) at the same time as a matching Code Blind upgrade.
Patch upgrades (e.g. 1.12.1 ➡ 1.12.3) within the same minor version of Kubernetes can be done at any time.
Multiple Clusters
This process is very similar to the Upgrading Code Blind: Multiple Clusters approach above.
We essentially want to transition our GameServer allocations from a cluster with the old version of Kubernetes,
to a cluster with the upgraded version of Kubernetes while ensuring nothing surprising
happens during this process.
This also allows easy rollback to the previous infrastructure that we already know to be working in production, with
minimal interruptions to player experience.
The following are steps to implement this:
- Create a new cluster of the same size as, or smaller than, the current cluster, with the new version of Kubernetes.
- Install the same version of Code Blind on the new cluster, as you have on the previous cluster.
- Deploy the same set of Fleets and/or GameServers from the old cluster into the new cluster.
- With your matchmaker, start sending a small percentage of your matched players’ game sessions to the new cluster.
- Assuming everything is working successfully on the new cluster, slowly increase the percentage of matched sessions to the new cluster, until you reach 100%.
- Once you are comfortable with the stability of the new cluster with the new Kubernetes version, shut down the old cluster.
- Congratulations - you have now upgraded to a new version of Kubernetes! 👍
Single Cluster
If you are upgrading a single cluster, we recommend creating a maintenance window, in which your game goes offline
for the period of your upgrade, as there will be a short period in which Code Blind will be non-responsive during the node
upgrades.
- Start your maintenance window.
- Scale your Fleets down to 0 and/or delete your GameServers (see the sketch after this list). This is a good safety measure to avoid race conditions between the Code Blind controller being recreated and GameServers being deleted, which could otherwise leave GameServers stuck in erroneous states.
- Start and complete your control plane upgrade(s).
- Start and complete your node upgrades.
- Scale your Fleets back up and/or recreate your GameServers.
- Run any other tests to ensure the Code Blind installation is still working as expected.
- Close your maintenance window.
- Congratulations - you have now upgraded to a new version of Kubernetes! 👍
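For step 2 of the list above, Fleets expose the Kubernetes scale subresource, so scaling down and back up can be done with kubectl (the Fleet name and replica count below are placeholders):
# Scale a Fleet down to zero before the upgrade...
kubectl scale fleet my-fleet --replicas=0
# ...and back up to the desired size afterwards.
kubectl scale fleet my-fleet --replicas=4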