Ingress for Anthos architecture
Ingress for Anthos uses a centralized Kubernetes API server to deploy Ingress across multiple clusters. This centralized API server is called the config cluster. Any GKE cluster can act as the config cluster. The config cluster uses two custom resource types: MultiClusterIngress and MultiClusterService. By deploying these resources on the config cluster, the Anthos Ingress Controller deploys load balancers across multiple clusters.
The following concepts and components make up Ingress for Anthos:
- Anthos Ingress controller - This is a globally distributed control plane that runs as a service outside of your clusters, so the lifecycle and operations of the controller are independent of your GKE clusters.
- Config cluster - This is a chosen GKE cluster running on Google Cloud where the MultiClusterIngress and MultiClusterService resources are deployed. It is a centralized point of control for these multi-cluster resources, which exist in and are accessible from a single logical API to retain consistency across all clusters. The Ingress controller watches the config cluster and reconciles the load balancing infrastructure (see the quick check after this list).
- Environ - An environ is a domain that groups clusters and infrastructure, manages resources, and keeps a consistent policy across them. Ingress for Anthos uses environs to determine which clusters Ingress is applied across. Clusters that you register to an environ become visible to Ingress, so they can be used as backends for Ingress.
- Member cluster - Clusters registered to an environ are called member clusters. Member clusters in the environ comprise the full scope of backends that Ingress is aware of. The Google Kubernetes Engine cluster management view provides a secure console to view the state of all your registered clusters.
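For orientation, once Ingress for Anthos has been enabled (a later step in this guide), the config cluster's API server exposes the multi-cluster resource types. A quick, optional check from the config cluster's kubectl context (a sketch; the MultiClusterIngress and MultiClusterService kinds live in the networking.gke.io API group used by the manifests later in this guide):
kubectl api-resources --api-group=networking.gke.io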
Ingress for Anthos is a Google-hosted service that deploys shared load balancing resources across clusters and across regions. The rest of this guide shows how to configure Ingress for Anthos to route traffic across multiple clusters in different regions.
Requirements for Ingress for Anthos
Ingress for Anthos is supported on:
- GKE clusters on Google Cloud. GKE on-prem clusters are not currently supported.
- GKE clusters in all GKE Release Channels.
- Clusters in VPC-native (Alias IP) mode.
- Clusters that have HTTP load balancing enabled, which is enabled by default. Note that Ingress for Anthos only supports the external HTTP(S) load balancer. (A cluster-creation sketch that satisfies these requirements follows this list.)
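If you still need clusters that meet these requirements, the following is a minimal sketch of creating a VPC-native cluster; the cluster name and zone are placeholders, and HTTP load balancing is left at its default (enabled):
gcloud container clusters create first-cluster \
--zone=us-east1-b \
--enable-ip-alias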
Enable the required Google APIs:
gcloud services enable \
--project=project-id \
container.googleapis.com \
gkeconnect.googleapis.com \
anthos.googleapis.com \
multiclusteringress.googleapis.com \
gkehub.googleapis.com \
cloudresourcemanager.googleapis.com
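Optionally, confirm the APIs are enabled before continuing; a minimal check (project-id as above):
gcloud services list --enabled --project=project-id | grep -E 'multiclusteringress|gkehub'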
Registering your clusters
Prerequisites for registering a cluster
- If you are creating a distinct service account for each Kubernetes cluster that you register, bind the gkehub.connect IAM role to the service account for its corresponding cluster with an IAM Condition on the cluster’s membership name:
MEMBERSHIP_NAME=[MEMBERSHIP_NAME]
HUB_PROJECT_ID=[HUB_PROJECT_ID]
SERVICE_ACCOUNT_NAME=[SERVICE_ACCOUNT_NAME]
gcloud projects add-iam-policy-binding ${HUB_PROJECT_ID} \
--member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${HUB_PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/gkehub.connect" \
--condition "expression=resource.name == \
'projects/${HUB_PROJECT_ID}/locations/global/memberships/${MEMBERSHIP_NAME}',\
title=bind-${SERVICE_ACCOUNT_NAME}-to-${MEMBERSHIP_NAME}"
- [MEMBERSHIP_NAME] is the membership name that you choose to uniquely represent the cluster while registering it.
- [HUB_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters.
- [SERVICE_ACCOUNT_NAME] is the display name that you choose for the service account.
- [LOCAL_KEY_PATH] is a local file path where you'd like to save the service account's private key, a JSON file. We recommend that you name the file using the service account name and your project ID, such as /tmp/creds/[SERVICE_ACCOUNT_NAME]-[HUB_PROJECT_ID].json.
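If you have not yet created the service account or downloaded its key, the following is a minimal sketch using the same placeholders; run it before the role-binding command above:
LOCAL_KEY_PATH=[LOCAL_KEY_PATH]
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME} \
--project=${HUB_PROJECT_ID}
gcloud iam service-accounts keys create ${LOCAL_KEY_PATH} \
--iam-account=${SERVICE_ACCOUNT_NAME}@${HUB_PROJECT_ID}.iam.gserviceaccount.com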
Find the URIs for your clusters:
gcloud container clusters list --uri
Register your first cluster:
gcloud container hub memberships register first-cluster-name \
--project=project-id \
--gke-uri=uri \
--service-account-key-file=service-account-key-path
Register your second cluster:
gcloud container hub memberships register second-cluster-name \
--project=project-id \
--gke-uri=uri \
--service-account-key-file=service-account-key-path
where:
- project-id is your project ID.
- uri is the URI of the GKE cluster.
- service-account-key-path is the local file path to the service account's private key JSON file downloaded as part of the registration prerequisites. This service account key is stored as a secret named creds-gcp in the gke-connect namespace.
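As an optional spot check, you can confirm that the connect credentials were stored on a registered cluster; the sketch below assumes your kubectl context for the first cluster is named first-cluster, as used later in this guide:
kubectl --context=first-cluster get secret creds-gcp -n gke-connect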
Verify your clusters are registered:
gcloud container hub memberships list
Specifying a config cluster
The config cluster is a GKE cluster you choose to be the central point of control for Ingress across the member clusters. Unlike GKE Ingress, the Anthos Ingress controller does not live in a single cluster but is a Google-managed service that watches resources in the config cluster. This GKE cluster is used as a multi-cluster API server to store resources such as MultiClusterIngress and MultiClusterService. Any member cluster can become a config cluster, but there can only be one config cluster at a time.
If the config cluster is down or inaccessible, MultiClusterIngress and MultiClusterService objects cannot be updated across the member clusters. Load balancers and traffic can continue to function independently of the config cluster in the case of an outage.
Enabling Ingress for Anthos and selecting the config cluster occurs in the same step. The GKE cluster you choose as the config cluster must already be registered to an environ.
Identify the URI of the cluster you want to specify as the config cluster:
gcloud container hub memberships list
Enable Ingress for Anthos and select first-cluster as the config cluster:
gcloud alpha container hub ingress enable --config-membership=projects/project-id/locations/global/memberships/first-cluster
The output is similar to this:
Waiting for Feature to be created...done.
Note that this process can take a few minutes while the controller is bootstrapping. If successful, the output is similar to this:
Waiting for Feature to be created...done.
Waiting for controller to start...done.
If unsuccessful, the command will time out as shown below:
Waiting for controller to start...failed.
ERROR: (gcloud.alpha.container.hub.ingress.enable) Controller did not start in 2 minutes. Please use the `describe` command to check Feature state for debugging information.
If no failure occurred in the previous step, you may proceed with the next steps. If a failure occurred, check the feature state; it should indicate exactly what went wrong:
gcloud alpha container hub ingress describe
Now deploy a sample app to both of your clusters as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: sample-app
  labels:
    app: sample-app
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: frontend
        image: gcr.io/google-samples/sample-app:0.1
        ports:
        - containerPort: 8080
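The manifest above assumes the sample-app namespace exists. A minimal sketch of creating the namespace and applying the Deployment to both clusters, assuming the manifest is saved as deployment.yaml and your kubectl contexts are named first-cluster and second-cluster as elsewhere in this guide:
kubectl --context=first-cluster create namespace sample-app
kubectl --context=second-cluster create namespace sample-app
kubectl --context=first-cluster apply -f deployment.yaml
kubectl --context=second-cluster apply -f deployment.yaml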
Deploying through the config cluster
Now that the application is deployed across both clusters, you will deploy a load balancer by deploying MultiClusterIngress (MCI) and MultiClusterService (MCS) resources in the config cluster. MCI and MCS are custom resources (CRDs) that are the multi-cluster equivalents of Ingress and Service resources.
Switch kubectl to the context for first-cluster, the config cluster. The config cluster is used to deploy and configure Ingress across all clusters.
kubectl config use-context first-cluster
Note: Only one cluster can be the active config cluster at any time. If you deploy MultiClusterIngress and MultiClusterService resources to other clusters, they will not be seen or processed by the Anthos Ingress Controller.
BackendConfig
Create a BackendConfig resource from the following manifest; it configures the health check (and other backend settings) that the MultiClusterService below references:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: sample-app-health-check-cfg
  namespace: sample-app
spec:
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 30
    healthyThreshold: 1
    unhealthyThreshold: 10
    type: HTTP
    port: 8080
    requestPath: /ping
  timeoutSec: 120
  cdn:
    enabled: false
  securityPolicy:
    name: sample-app-security-rules
  logging:
    enable: true
    sampleRate: 1.0
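Assuming you saved the manifest above as backendconfig.yaml (the filename is illustrative), apply it in the config cluster, which is the current kubectl context:
kubectl apply -f backendconfig.yaml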
MultiClusterService
Create a file named mcs.yaml from the following manifest:
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: sample-app-mcs
  namespace: sample-app
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"8080":"sample-app-health-check-cfg"}}'
spec:
  template:
    spec:
      selector:
        app: sample-app
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080
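Then deploy the MultiClusterService resource to the config cluster (the current context):
kubectl apply -f mcs.yaml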
To view the MultiClusterService:
kubectl get mcs -n sample-app
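Behind the scenes, the controller creates derived Services in the member clusters to back the load balancer (the names are generated). If you want to peek at them once the load balancer has been configured, a sketch using the second-cluster context:
kubectl --context=second-cluster get services -n sample-app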
MultiClusterIngress
Create a file named mci.yaml from the following manifest:
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: sample-app-ingress
  namespace: sample-app
  annotations:
    networking.gke.io/static-ip: <your static ip>
    networking.gke.io/pre-shared-certs: "cert1, cert2"
spec:
  template:
    spec:
      backend:
        serviceName: sample-app-mcs
        servicePort: 8080
      rules:
      - host: sample-app.com
        http:
          paths:
          - path: /
            backend:
              serviceName: sample-app-mcs
              servicePort: 8080
  clusters:
  - link: "us-east1/first-cluster"
  - link: "us-central1/second-cluster"
Deploy the MultiClusterIngress resource that references sample-app-mcs as a backend:
kubectl apply -f mci.yaml
The output is similar to this:
multiclusteringress.networking.gke.io/sample-app-ingress created
Note that MultiClusterIngress has the same schema as the Kubernetes Ingress. The Ingress resource semantics are also the same, with the exception of the backend.serviceName field. The backend.serviceName field in a MultiClusterIngress references a MultiClusterService in the environ API rather than a Service in a Kubernetes cluster. This means that other Ingress settings, such as TLS termination, can be configured in the same way.
Validating a successful deployment
Deploying a new Google Cloud load balancer may take several minutes. To check the deployment status, describe the MultiClusterIngress resource:
kubectl describe mci sample-app-ingress -n sample-app
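Once the describe output reports a VIP in the status, you can send a test request. The sketch below assumes VIP stands in for the reported address and reuses the host from mci.yaml:
curl -H "Host: sample-app.com" http://VIP/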
Managing clusters
The set of clusters targeted by the load balancer can be changed by adding or removing a Membership.
For example, to remove the second-cluster as a backend for an ingress, run:
gcloud container hub memberships unregister second-cluster --gke-uri=uri
To add a cluster in Europe, run:
gcloud container hub memberships register europe-cluster --context=europe-cluster --service-account-key-file=/path/to/service-account-key-file