So what is this Istio thing, anyway?
JJ Asghar
You probably know this symbol. I'm assuming you do. If you've used it, you've probably noticed that there are missing portions of the ecosystem.
Networking is still hard.
Now imagine trying to do the following with Kubernetes.
This is where Istio comes in.
Istio
It’s the Greek word for ‘sail’.
Bring up the k8s cluster you've already built after talking about Istio. This is where Istio comes into play. It fulfills the list of:
- Request Routing
- Load Balancing
- Authentication
- Failure Management
- Fault Injection
- Circuit Breaking
And I hope to show a few of these off to you today.
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.0.5
export PATH=$PWD/bin:$PATH
kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller
helm install install/kubernetes/helm/istio --name istio --namespace istio-system
kubectl create -f install/kubernetes/istio-demo.yaml
https://github.com/istio/istio/releases
Stick with the stable release until you are comfortable with the software. First things first: we need to actually deploy Istio. I'm going to run this in another window, but in essence it's the following.
I should mention this assumes you have Istio already installed on your local machine.
VERIFY installation.
kubectl get pods && kubectl get svc && kubectl get ingress

Istio puts everything in its own namespace:

kubectl get svc -n istio-system
kubectl get pods -n istio-system
I'm going to use a demo app from the official Istio website.
As I was learning Istio, it had the most concise explanation of what was going on for me, and I hope it does the same for you.
Here is the architecture of the app we are going to deploy.
As you can see, there is a Python frontend, Java and Ruby middleware, with a Node.js backend.
If this isn't a modern microservices application, I don't know what is.
I'd like to have y'all focus on the "Reviews" section here. There are three different versions: one without stars (v1), one with black stars (v2), and one with red stars (v3).
These are the services we are going to manipulate.
kubectl apply -f <(istioctl \
kube-inject -f \
samples/bookinfo/platform/kube/bookinfo.yaml)
The following command is what I'm going to run to install Bookinfo.
VERIFY
kubectl get services && kubectl get pods
As you can see here, we now have services and pods running Bookinfo, including our v1, v2, and v3 reviews.
kubectl apply -f \
samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl apply -f \
samples/bookinfo/networking/destination-rule-all.yaml
Now that we have it installed, we need to create an ingress gateway (if one doesn't already exist) and also put in a default routing rule.
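For reference, the gateway file we just applied looks roughly like this. This is a sketch based on the Istio 1.0 Bookinfo sample; exact field values may differ in your version:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

The Gateway opens port 80 on the ingress; the VirtualService binds to that gateway and routes /productpage to the productpage service.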
VERIFY
kubectl get gateway
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
firefox $GATEWAY_URL/productpage
kubectl apply -f \
samples/bookinfo/networking/virtual-service-all-v1.yaml
kubectl get virtualservices -o yaml
kubectl get destinationrules -o yaml
Let's start with the obvious advantage of Istio: routing to the correct application.
If I refresh the site, you'll see that it's round-robining between the three versions.
This is obviously bad, and the first thing we'll do is fix that.
OPEN samples/bookinfo/networking/virtual-service-all-v1.yaml
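If you don't have the file open in front of you, the reviews entry in virtual-service-all-v1.yaml looks roughly like this (a sketch; the real sample file has one VirtualService like this per Bookinfo service):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1   # pin ALL traffic to v1, no more round-robin
```

The subsets (v1, v2, v3) come from the DestinationRule we applied earlier; the VirtualService just picks one.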
kubectl get virtualservices -o yaml
kubectl get destinationrules -o yaml
kubectl apply -f \
samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
Now I'm going to cut over the traffic for a single user named "Jason."
Imagine the ability to put out a pre-release version of your microservice and have only one user use it. It's all done via HTTP headers, and in this case we have the following:
http:
- match:
- headers:
end-user:
exact: jason
As you can see here, the YAML section is very straightforward.
I'll bring it up in my editor here too so you can see it in context.
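In context, the whole rule reads roughly like this (a sketch of virtual-service-reviews-test-v2.yaml from the Bookinfo sample):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason   # only requests carrying this header match
    route:
    - destination:
        host: reviews
        subset: v2       # jason gets the pre-release version
  - route:
    - destination:
        host: reviews
        subset: v1       # everyone else stays on v1
```

Rules are evaluated in order, so the header match wins for Jason and the catch-all route handles everyone else.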
kubectl -n istio-system get svc servicegraph
kubectl -n istio-system port-forward \
$(kubectl -n istio-system get pod -l \
app=servicegraph -o jsonpath='{.items[0].metadata.name}') \
8088:8088
OK, so we can now make sure that specific things go to specific places. But what about our app? We think we know what it looks like, but we should verify, right? Luckily, Istio has a built-in service graph system:
This can be extremely useful for more advanced setups, visualizing how everything comes together.
VERIFY
There are two links you can try
http://localhost:8088/force/forcegraph.html will give you a moveable graph
and
http://localhost:8088/dotviz will give you the following graph
kubectl port-forward -n istio-system \
$(kubectl get pod -n istio-system -l \
app=jaeger -o \
jsonpath='{.items[0].metadata.name}') \
16686:16686
So we know how to move traffic around, and how things look, but what about issues through the stack? This is where tracing comes into play.
VERIFY
firefox http://localhost:16686
It'll explain it better than I ever could. ;)
As you can see, there is a ton of information in Jaeger.
If any of this tickles your fancy, head to the following link
kubectl apply -f \
samples/bookinfo/networking/virtual-service-all-v1.yaml
kubectl apply -f \
samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
kubectl apply -f \
samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
kubectl port-forward -n istio-system \
$(kubectl get pod -n istio-system -l \
app=jaeger -o jsonpath='{.items[0].metadata.name}') \
16686:16686
firefox http://localhost:16686
So now we can see the health of our app and manipulate the traffic. Let's cause some problems.
Let's force some failures and look at Jaeger to see the errors.
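The fault injection rule we applied looks roughly like this (a sketch of virtual-service-ratings-test-delay.yaml; it only delays traffic for our test user, so real users are untouched):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:
        percent: 100      # delay every matching request
        fixedDelay: 7s    # longer than productpage's 6s budget
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1        # everyone else: no fault injected
```

The 7s delay is the whole trick: it is deliberately longer than the productpage-to-reviews timeout, which is what surfaces the bug we're about to see.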
VERIFY
firefox $GATEWAY_URL/productpage # login as jason
As you can see, we are erroring! You've found a bug. There are hard-coded timeouts in the microservices that have caused the reviews service to fail.

The timeout between the productpage and the reviews service is 6 seconds, coded as 3s plus 1 retry for 6s total. The timeout between the reviews and ratings service is hard-coded at 10 seconds. Because of the delay we introduced, the /productpage times out prematurely and throws the error.

Bugs like this can occur in typical enterprise applications where different teams develop different microservices independently. Istio's fault injection rules help you identify such anomalies without impacting end users.
DON'T FORGET TO LOG OUT HERE
kubectl apply -f \
samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
kubectl apply -f \
samples/bookinfo/networking/virtual-service-reviews-v3.yaml
kubectl get virtualservice reviews -o yaml
Let's fix this by moving all the traffic to version 3, which we know works.
First we'll move it 50/50 to verify that we aren't crazy.
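The 50/50 split is just a weighted route. A sketch of virtual-service-reviews-50-v3.yaml:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50   # half the traffic stays on the old version
    - destination:
        host: reviews
        subset: v3
      weight: 50   # half moves to the fixed version
```

Bumping the v3 weight to 100 (or applying virtual-service-reviews-v3.yaml) completes the cutover.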
OK, good, that's looking better.