Simple Route Rules

For this use case, we need a modified version of the Recommendation microservice.

Deploy recommendation:v2

You will deploy Docker images that were previously built. If you want to build Recommendation v2 yourself, see: Create Recommendation V2
We use a second Deployment to manage the v2 version of recommendation.

Deploy Recommendation microservice V2 using an existing image

kubectl apply -f <(istioctl kube-inject -f recommendation/kubernetes/Deployment-v2.yml) -n tutorial
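The key detail in Deployment-v2.yml, for the routing rules used later, is that the pod template carries both an "app" label (shared with v1, so the Kubernetes Service selects both versions) and a "version: v2" label (which Istio will use to define subsets). A minimal sketch of such a Deployment, with the image name and port as placeholder assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: recommendation
      version: v2
  template:
    metadata:
      labels:
        app: recommendation   # shared with v1; the Service selects on this label
        version: v2           # distinguishes v2 pods for Istio routing rules
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:v2   # placeholder image name
        ports:
        - containerPort: 8080              # assumed application port
```

istioctl kube-inject rewrites this Deployment on the fly to add the istio-proxy sidecar container, which is why the pods report 2/2 once ready.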

Wait for v2 to be deployed

Wait for those pods to show "2/2" READY; the istio-proxy (Envoy) sidecar is the second container in each pod.

kubectl get pods -w -n tutorial
NAME                                 READY     STATUS    RESTARTS   AGE
customer-3600192384-fpljb            2/2       Running   0          17m
preference-243057078-8c5hz           2/2       Running   0          15m
recommendation-v1-60483540-9snd9     2/2       Running   0          12m
recommendation-v2-2815683430-vpx4p   2/2       Running   0          15s

and test the customer endpoint

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

curl $(minikube ip):$INGRESS_PORT/customer

You will likely see "customer => preference => recommendation v1 from '99634814-d2z2t': 3", where '99634814-d2z2t' is the suffix of the pod running v1 and 3 is the number of times that pod has served the endpoint.

curl $(minikube ip):$INGRESS_PORT/customer

You will likely see "customer => preference => recommendation v2 from '2819441432-5v22s': 1", since by default Kubernetes round-robin load-balances across the Pods behind a Service when there is more than one.

Send several requests to see their responses:

./scripts/run.sh

The default Kubernetes/OpenShift behavior is to round-robin load-balance across all available pods behind a single Service. Add another replica of recommendation-v2 Deployment.

kubectl scale --replicas=2 deployment/recommendation-v2 -n tutorial

Now you will see two requests served by v2 for every one served by v1:

customer => preference => recommendation v1 from '2819441432-qsp25': 29
customer => preference => recommendation v2 from '99634814-sf4cl': 37
customer => preference => recommendation v2 from '99634814-sf4cl': 38

Scale back to a single replica of the recommendation-v2 Deployment

kubectl scale --replicas=1 deployment/recommendation-v2 -n tutorial

Changing Istio Routing

All users to recommendation:v2

From the main istio-tutorial directory,

kubectl create -f istiofiles/destination-rule-recommendation-v1-v2.yml -n tutorial
kubectl create -f istiofiles/virtual-service-recommendation-v2.yml -n tutorial
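Conceptually, the first file is a DestinationRule that names the v1 and v2 subsets by pod label, and the second is a VirtualService that routes all recommendation traffic to the v2 subset. A hedged sketch of what such istiofiles contain (the resource and subset names here are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation          # the Kubernetes Service name
  subsets:
  - name: version-v1
    labels:
      version: v1               # matches the v1 Deployment's pod label
  - name: version-v2
    labels:
      version: v2               # matches the v2 Deployment's pod label
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v2      # send everything to v2
```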

./scripts/run.sh

You should see only v2 being returned.

All users to recommendation:v1

Note: use "replace" instead of "create", since we are overwriting the rule created previously.

kubectl replace -f istiofiles/virtual-service-recommendation-v1.yml -n tutorial

kubectl get virtualservice -n tutorial

kubectl get virtualservice -o yaml -n tutorial

All users to recommendation v1 and v2

Simply remove the rule:

kubectl delete -f istiofiles/virtual-service-recommendation-v1.yml -n tutorial

and you should again see the default round-robin load-balancing between v1 and v2:

./scripts/run.sh

Canary deployment: Split traffic between v1 and v2

Canary Deployment scenario: push v2 into the cluster, but only slowly shift end-user traffic to it. If you continue to see success, keep shifting more traffic over time.

$ kubectl get pods -l app=recommendation -n tutorial

NAME                                 READY     STATUS    RESTARTS   AGE
recommendation-v1-3719512284-7mlzw   2/2       Running   6          2h
recommendation-v2-2815683430-vn77w   2/2       Running   0          1h

Create the VirtualService that will send 90% of requests to v1 and 10% to v2
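The corresponding rule lives in the repository's istiofiles directory; conceptually it is a VirtualService with two weighted destinations, roughly like this sketch (subset names are assumptions, and the weights must sum to 100):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 90                # 90% of traffic stays on v1
    - destination:
        host: recommendation
        subset: version-v2
      weight: 10                # 10% canary traffic to v2
```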

and send in several requests:

./scripts/run.sh

In another terminal, change the mixture to 75/25 (using "replace" again, since we are overwriting the existing rule):

kubectl replace -f istiofiles/virtual-service-recommendation-v1_and_v2_75_25.yml -n tutorial
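Judging by the file name deleted in the cleanup step, the 75/25 rule is presumably the same VirtualService with its weights adjusted; a sketch of the changed http route section:

```yaml
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 75                # shift more traffic away from v1
    - destination:
        host: recommendation
        subset: version-v2
      weight: 25                # v2 now takes a quarter of requests
```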

Clean up

kubectl delete -f istiofiles/virtual-service-recommendation-v1_and_v2_75_25.yml -n tutorial
kubectl delete -f istiofiles/destination-rule-recommendation-v1-v2.yml -n tutorial

or you can run:

./scripts/clean.sh tutorial