In this workshop, you will set up Istio Ambient in a multi-cluster environment, deploy a sample Bookinfo app, and explore how Solo.io Ambient enables secure service-to-service communication in all directions.
- Create two clusters and set the env vars below to their contexts:
export CLUSTER1=gke_ambient_one # UPDATE THIS
export CLUSTER2=gke_ambient_two # UPDATE THIS
export REPO_KEY=e6283d67ad60
export ISTIO_VERSION=1.29.0
export GLOO_MESH_LICENSE_KEY=<update> # UPDATE THIS
- Download Solo's istioctl binary:
OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
mkdir -p ~/.istioctl/bin
curl -sSL https://storage.googleapis.com/istio-binaries-${REPO_KEY}/${ISTIO_VERSION}-solo/istioctl-${ISTIO_VERSION}-solo-${OS}-${ARCH}.tar.gz | tar xzf - -C ~/.istioctl/bin
chmod +x ~/.istioctl/bin/istioctl
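As a quick sanity check, the OS/ARCH mapping used in the download URL above can be exercised on its own with fixed example inputs (the values below are illustrative):

```shell
# Reproduce the asset-name mapping from the curl step with fixed inputs.
# "Darwin" -> "osx" and "x86_64" -> "amd64", so the tarball suffix is osx-amd64.
os=$(echo "Darwin" | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
arch=$(echo "x86_64" | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
echo "${os}-${arch}"   # osx-amd64
```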
export PATH=${HOME}/.istioctl/bin:${PATH}
- Verify using istioctl version
- Clone this repo and change directory:
git clone https://github.com/rvennam/ambient-multicluster-workshop.git
cd ambient-multicluster-workshop
for context in ${CLUSTER1} ${CLUSTER2}; do
kubectl --context ${context} create ns bookinfo
kubectl --context ${context} apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl --context ${context} apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo-versions.yaml
done
for context in ${CLUSTER1} ${CLUSTER2}; do
kubectl --context=${context} create ns istio-system || true
kubectl --context=${context} create ns istio-gateways || true
done
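The `|| true` on the create commands makes the loop idempotent: rerunning the setup when the namespaces already exist won't abort a script running under `set -e`. A minimal illustration:

```shell
set -e                  # abort on any failing command
false || true           # a failing command, neutralized by || true
echo "loop continues"   # still reached
```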
kubectl --context=${CLUSTER1} create secret generic cacerts -n istio-system \
--from-file=./certs/cluster1/ca-cert.pem \
--from-file=./certs/cluster1/ca-key.pem \
--from-file=./certs/cluster1/root-cert.pem \
--from-file=./certs/cluster1/cert-chain.pem
kubectl --context=${CLUSTER2} create secret generic cacerts -n istio-system \
--from-file=./certs/cluster2/ca-cert.pem \
--from-file=./certs/cluster2/ca-key.pem \
--from-file=./certs/cluster2/root-cert.pem \
--from-file=./certs/cluster2/cert-chain.pem
To use Helm instead, see the Helm Instructions.
Install the operator
for context in ${CLUSTER1} ${CLUSTER2}; do
helm upgrade -i --kube-context=${context} gloo-operator \
oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
--version 0.5.0-rc.0 -n gloo-system --create-namespace \
--set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY} \
--set manager.image.repository=us-docker.pkg.dev/solo-public/gloo-operator/gloo-operator &
done
Use the ServiceMeshController resource to install Istio on both clusters:
kubectl --context=${CLUSTER1} apply -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
name: istio
spec:
version: 1.29.0
cluster: cluster1
network: cluster1
EOF
kubectl --context=${CLUSTER2} apply -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
name: istio
spec:
version: 1.29.0
cluster: cluster2
network: cluster2
EOF
Expose cross-cluster traffic using an east-west gateway:
istioctl --context=${CLUSTER1} multicluster expose -n istio-gateways
istioctl --context=${CLUSTER2} multicluster expose -n istio-gateways
Instead of using istioctl, you can also apply the YAML directly:
kubectl apply --context $CLUSTER1 -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
labels:
istio.io/expose-istiod: "15012"
topology.istio.io/network: cluster1
name: istio-eastwest
namespace: istio-gateways
spec:
gatewayClassName: istio-eastwest
listeners:
- name: cross-network
port: 15008
protocol: HBONE
tls:
mode: Passthrough
- name: xds-tls
port: 15012
protocol: TLS
tls:
mode: Passthrough
EOF
kubectl apply --context $CLUSTER2 -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
labels:
istio.io/expose-istiod: "15012"
topology.istio.io/network: cluster2
name: istio-eastwest
namespace: istio-gateways
spec:
gatewayClassName: istio-eastwest
listeners:
- name: cross-network
port: 15008
protocol: HBONE
tls:
mode: Passthrough
- name: xds-tls
port: 15012
protocol: TLS
tls:
mode: Passthrough
EOF
Link the clusters together:
istioctl multicluster link --contexts=$CLUSTER1,$CLUSTER2 -n istio-gateways
Instead of using istioctl, you can also apply the YAML directly:
export CLUSTER1_EW_ADDRESS=$(kubectl get svc -n istio-gateways istio-eastwest --context $CLUSTER1 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export CLUSTER2_EW_ADDRESS=$(kubectl get svc -n istio-gateways istio-eastwest --context $CLUSTER2 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo "Cluster 1 east-west gateway: $CLUSTER1_EW_ADDRESS"
echo "Cluster 2 east-west gateway: $CLUSTER2_EW_ADDRESS"
kubectl apply --context $CLUSTER1 -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
annotations:
gateway.istio.io/service-account: istio-eastwest
gateway.istio.io/trust-domain: cluster2
labels:
topology.istio.io/network: cluster2
name: istio-remote-peer-cluster2
namespace: istio-gateways
spec:
addresses:
- type: IPAddress
value: $CLUSTER2_EW_ADDRESS
gatewayClassName: istio-remote
listeners:
- name: cross-network
port: 15008
protocol: HBONE
tls:
mode: Passthrough
- name: xds-tls
port: 15012
protocol: TLS
tls:
mode: Passthrough
EOF
kubectl apply --context $CLUSTER2 -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
annotations:
gateway.istio.io/service-account: istio-eastwest
gateway.istio.io/trust-domain: cluster1
labels:
topology.istio.io/network: cluster1
name: istio-remote-peer-cluster1
namespace: istio-gateways
spec:
addresses:
- type: IPAddress
value: $CLUSTER1_EW_ADDRESS
gatewayClassName: istio-remote
listeners:
- name: cross-network
port: 15008
protocol: HBONE
tls:
mode: Passthrough
- name: xds-tls
port: 15012
protocol: TLS
tls:
mode: Passthrough
EOF
Let's make sure we did everything right!
istioctl --context $CLUSTER1 multicluster check
istioctl --context $CLUSTER2 multicluster check
We should expect to see:
✅ License Check: license is valid for multicluster
✅ Pod Check (istiod): all pods healthy
✅ Pod Check (ztunnel): all pods healthy
✅ Pod Check (eastwest gateway): all pods healthy
✅ Gateway Check: all eastwest gateways programmed
✅ Peers Check: all clusters connected
for context in ${CLUSTER1} ${CLUSTER2}; do
kubectl --context ${context} label namespace bookinfo istio.io/dataplane-mode=ambient
done
To make productpage available across clusters, choose one of the following labels for its Kubernetes Service:
- Option 1 creates a new global service, <name>.<namespace>.mesh.internal. Calls to the service using the standard Kubernetes hostnames (<name>, <name>.<namespace>, or <name>.<namespace>.svc.cluster.local) remain unchanged and include only local endpoints.
- Option 2 creates the same global service, <name>.<namespace>.mesh.internal, but calls to the service using the standard Kubernetes hostnames also route to both local and remote (other clusters') endpoints.
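The hostnames involved can be seen side by side, using this workshop's own service and namespace names (productpage in bookinfo):

```shell
# Compose the cluster-local and global hostnames for a service.
svc=productpage
ns=bookinfo
echo "${svc}.${ns}.svc.cluster.local"   # standard hostname: local endpoints only
echo "${svc}.${ns}.mesh.internal"       # global hostname: endpoints from all linked clusters
```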
Let's use the first option:
for context in ${CLUSTER1} ${CLUSTER2}; do
kubectl --context ${context} -n bookinfo label service productpage solo.io/service-scope=global
kubectl --context ${context} -n bookinfo annotate service productpage networking.istio.io/traffic-distribution=Any
done
Tip: The solo.io/service-scope=global label can also be set on the namespace, which makes all services in the namespace global.
Apply the following Kubernetes Gateway API resources to cluster1 to expose the productpage service using an Istio gateway:
kubectl --context=${CLUSTER1} apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: bookinfo-gateway
namespace: bookinfo
spec:
gatewayClassName: istio
listeners:
- name: http
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: bookinfo
namespace: bookinfo
spec:
parentRefs:
- name: bookinfo-gateway
rules:
- matches:
- path:
type: Exact
value: /productpage
- path:
type: PathPrefix
value: /static
- path:
type: Exact
value: /login
- path:
type: Exact
value: /logout
- path:
type: PathPrefix
value: /api/v1/products
# backendRefs:
# - name: productpage
# port: 9080
backendRefs:
- kind: Hostname
group: networking.istio.io
name: productpage.bookinfo.mesh.internal
port: 9080
EOF
Wait until a load balancer IP gets assigned to the bookinfo-gateway-istio service, then visit the app!
curl $(kubectl get svc -n bookinfo bookinfo-gateway-istio --context $CLUSTER1 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")/productpage
Voila! Requests should round-robin between productpage on both clusters.
Scale down productpage on cluster1 to simulate a failure:
kubectl scale deploy -n bookinfo productpage-v1 --replicas=0 --context $CLUSTER1
Visit the application in your browser and you'll see traffic is not impacted because we're failing over to cluster2 automatically.
Scale productpage back up:
kubectl scale deploy -n bookinfo productpage-v1 --replicas=1 --context $CLUSTER1
We can also scale down other services. Let's make details global across clusters and scale it down:
kubectl --context $CLUSTER1 -n bookinfo label service details solo.io/service-scope=global-only
kubectl --context $CLUSTER2 -n bookinfo label service details solo.io/service-scope=global-only
kubectl scale deploy -n bookinfo details-v1 --replicas=0 --context $CLUSTER1
Visit the application in your browser and you'll see traffic is not impacted because productpage on cluster1 automatically fails over to details on cluster2.
Scale details back up:
kubectl scale deploy -n bookinfo details-v1 --replicas=2 --context $CLUSTER1
Istio Waypoints enable Layer 7 traffic management in an Ambient Mesh, providing advanced capabilities like routing, authorization, observability, and security policies. Acting as dedicated traffic proxies, Waypoints handle HTTP, gRPC, and other application-layer protocols, seamlessly integrating with Istio's security model to enforce fine-grained traffic control.
Let's apply a Waypoint for the bookinfo namespace and create a header-based routing policy:
- Traffic going to the reviews Service should route to reviews-v1 by default.
- Requests with the header end-user: jason should be directed to reviews-v2 instead.
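For reference, here is a hypothetical sketch of what reviews-v1.yaml might contain: an HTTPRoute bound to the reviews Service so the waypoint enforces it (check the repo for the actual file):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
  namespace: bookinfo
spec:
  parentRefs:
  - group: ""
    kind: Service        # bind the route to the in-mesh reviews Service
    name: reviews
    port: 9080
  rules:
  - matches:
    - headers:
      - name: end-user
        value: jason
    backendRefs:
    - name: reviews-v2   # jason's requests go to v2
      port: 9080
  - backendRefs:
    - name: reviews-v1   # everyone else goes to v1
      port: 9080
```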
for context in ${CLUSTER1} ${CLUSTER2}; do
kubectl --context=${context} label ns bookinfo istio.io/use-waypoint=auto
kubectl --context=${context} apply -f ./reviews-v1.yaml
done
In addition to managing traffic coming into the mesh and within the mesh, ambient mesh can also manage traffic leaving the mesh. This includes observing the traffic and enforcing policies against it.
Just as a waypoint can be used for traffic addressed to a service inside your cluster, a gateway can be used for traffic that leaves your cluster.
In Istio, you can direct traffic to this gateway on a host-by-host basis using the ServiceEntry resource, which is bound to a waypoint used for egress control.
This section will only use $CLUSTER1.
First, we'll deploy an egress gateway in the istio-egress namespace, and call it egress-gateway
kubectl create namespace istio-egress
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: egress-gateway
namespace: istio-egress
spec:
gatewayClassName: istio-waypoint
listeners:
- name: mesh
port: 15008
protocol: HBONE
allowedRoutes:
namespaces:
from: All
EOF
If you plan on creating ServiceEntries in the istio-egress namespace, you can label just the namespace instead of labeling every ServiceEntry:
kubectl label ns istio-egress istio.io/use-waypoint=egress-gateway
Define httpbin.org as an external host using a ServiceEntry in the bookinfo namespace, with port 80 mapped to target port 443; the accompanying DestinationRule (SIMPLE TLS) originates TLS, so clients can call httpbin.org over plain HTTP while the connection is upgraded to HTTPS. Notice that we're labeling the ServiceEntry to use the egress gateway:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
name: httpbin.org
namespace: bookinfo
labels:
istio.io/use-waypoint: egress-gateway
istio.io/use-waypoint-namespace: istio-egress
spec:
hosts:
- httpbin.org
ports:
- number: 80
name: http
protocol: HTTP
targetPort: 443
resolution: DNS
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
name: httpbin.org-tls
namespace: bookinfo
spec:
host: httpbin.org
trafficPolicy:
tls:
mode: SIMPLE
EOF
Only allow ratings to call httpbin.org:
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: ratings-to-httpbin
namespace: bookinfo
spec:
targetRefs:
- kind: ServiceEntry
group: networking.istio.io
name: httpbin.org
action: ALLOW
rules:
- from:
- source:
principals: ["cluster.local/ns/bookinfo/sa/bookinfo-ratings"]
EOF
You should now be able to call httpbin.org from ratings:
kubectl exec -it $(kubectl get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}') -n bookinfo -- curl -s httpbin.org/get
But NOT from reviews:
kubectl exec -it $(kubectl get pod -l app=reviews -n bookinfo -o jsonpath='{.items[0].metadata.name}') -n bookinfo -- curl -s httpbin.org/get
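The principals field in the policy above uses Istio's SPIFFE identity format, <trust-domain>/ns/<namespace>/sa/<service-account>. A quick illustration with the workshop's values:

```shell
# Build the principal string the AuthorizationPolicy matches on.
trust_domain="cluster.local"
ns="bookinfo"
sa="bookinfo-ratings"
echo "${trust_domain}/ns/${ns}/sa/${sa}"   # cluster.local/ns/bookinfo/sa/bookinfo-ratings
```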
Gloo Mesh allows you to extend the mesh to include workloads running on virtual machines (VMs). This enables seamless communication between services running on VMs and those running on Kubernetes, providing a unified service mesh.
To bring a VM into the mesh, you need to generate a bootstrap configuration. This configuration includes the necessary certificates and metadata for the VM to join the mesh.
Run the following command to generate the bootstrap configuration for the VM:
istioctl bootstrap --namespace vm1 --service-account vm1
This command creates a bootstrap token that will be used by the VM to authenticate with the mesh.
SSH into the VM before proceeding to the next step.
Copy the token generated in Step 1 and set it as an environment variable on the VM:
export BOOTSTRAP_TOKEN=<set from previous command>
Replace <set from previous command> with the actual token value.
The ztunnel is a lightweight data plane component that enables the VM to participate in the Ambient Mesh. Run the following command on the VM to start the ztunnel:
docker run -d -e BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN} -e ALWAYS_TRAVERSE_NETWORK_GATEWAY=true --network=host us-docker.pkg.dev/gloo-mesh/istio-e6283d67ad60/ztunnel:1.29.0-solo-distroless
This command pulls the ztunnel container image and starts it with the necessary configuration to connect to the mesh.
Once the ztunnel is running, you can test connectivity from the VM to services in the mesh. Use the following commands to verify that the VM can communicate with the productpage service in the bookinfo namespace:
export ALL_PROXY=socks5h://127.0.0.1:15080
curl productpage.bookinfo:9080
curl productpage.bookinfo.mesh.internal:9080
- The first curl command tests connectivity using the service's Kubernetes DNS name.
- The second curl command tests connectivity using the service's mesh-internal DNS name.
If both commands return the expected responses, the VM has successfully joined the mesh and can communicate with other services.
Optionally, you can deploy the Gloo Management Plane, which provides additional observability and management features. For this lab, we'll just focus on the UI and the service graph.
Start by downloading the meshctl CLI:
curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.12.0 sh -
export PATH=$HOME/.gloo-mesh/bin:$PATH
Cluster1 will act as both the management cluster and a workload cluster (see mgmt-values.yaml for reference):
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds -n gloo-mesh --create-namespace --version=2.12.0 \
--set installEnterpriseCrds=false --kube-context=$CLUSTER1
helm upgrade -i gloo-platform gloo-platform/gloo-platform -n gloo-mesh --version 2.12.0 --kube-context=$CLUSTER1 --values mgmt-values.yaml \
--set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY
Then, register cluster2 as a workload cluster to cluster1:
export TELEMETRY_GATEWAY_ADDRESS=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $CLUSTER1 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):4317
meshctl cluster register cluster2 --kubecontext $CLUSTER1 --profiles gloo-mesh-agent --remote-context $CLUSTER2 --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
Launch the UI:
meshctl dashboard



