Kubernetes provides a collection of strategies for defining access to services running in a cluster.
Though they all work slightly differently, typically they are implemented as a service that provides a mapping to a given port or ports that are exposed on pods or nodes within the cluster and associated with an IP address.
The only real exception to this approach is the Ingress controller.
The ClusterIP is the default Kubernetes service type. It provides access to an application’s services via an internal cluster IP address that is reachable by all other nodes within the cluster. It is not accessible from outside of the cluster.
The NodePort service is the simplest way to get external access to an application’s service endpoint within a cluster. This approach opens an identical port, typically in the range 30000–32767, across all nodes in the cluster, reachable via each node’s IP address. Any traffic that is then directed to this port is forwarded on to the application’s service.
A LoadBalancer is the typical way to expose an application to the internet. It relies on the cloud to create an external load balancer with an IP address in the relevant network space. Any traffic that is then directed to this IP address is forwarded on to the application’s service.
An Ingress controller differs from the previous options in that it is not implemented as a Kubernetes service; instead it behaves in a manner similar to a router, making rule-based routing decisions about which service to deliver traffic to.
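To make this concrete, a minimal Ingress resource might look something like the following sketch. It assumes an ingress controller (such as ingress-nginx) is already running in the cluster; the hostname and backend service here are hypothetical, and the apiVersion may differ depending on your Kubernetes version.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com           # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: clusterip-service # an existing ClusterIP service
            port:
              number: 80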
The following sections will explain the various types of service currently supported and where these might typically be used.
Before looking further at the types of inbound control available to us, let’s create a simple single-replica Nginx application based on the default Nginx image. This will spin up the default Nginx container, which will listen on port 80 by default.
cat <<EOF > nginx-app.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-app
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
kubectl apply -f nginx-app.yaml
Next we will connect to the Nginx pod and install curl so that we can utilise it to test connectivity in our upcoming examples.
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-test-app-65b8cd96c4-kqtlm   1/1     Running   0          11m

$ kubectl exec -it nginx-test-app-65b8cd96c4-kqtlm -- /bin/bash
root@nginx-test-app-65b8cd96c4-kqtlm:/# apt update && apt install -y curl
<output truncated for brevity>
root@nginx-test-app-65b8cd96c4-kqtlm:/# exit
First we will define a service using the ClusterIP type. This service will expose port 80 on the pod by mapping it to port 80 on the cluster IP that will get assigned to the service when it is created.
cat <<EOF > clusterip.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
spec:
  selector:
    app: nginx-app
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
EOF
kubectl apply -f clusterip.yaml
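As an aside, the same service could also have been created imperatively rather than from a manifest; a minimal sketch using kubectl expose against the deployment created earlier:

kubectl expose deployment nginx-test-app --name=clusterip-service --port=80 --type=ClusterIP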
If we now use kubectl to query the available services we can see that there is a service called clusterip-service that exposes port 80 on the cluster IP address 10.12.147.210.
$ kubectl get service clusterip-service
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
clusterip-service   ClusterIP   10.12.147.210   <none>        80/TCP
Now we can connect back into our pod and run curl to query this.
$ kubectl exec -it nginx-test-app-65b8cd96c4-kqtlm -- curl 10.12.147.210:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
<!-- output truncated for brevity -->
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<!-- output truncated for brevity -->
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
If all went as expected we should see the html for the default Nginx welcome page.
As mentioned earlier, this type of service is only accessible from inside the cluster. In order to expose it outside of the cluster we will need to look at our other access options.
To remove the clusterip-service run the following.
$ kubectl delete service clusterip-service
A NodePort, as the name implies, works by opening an identical port on every node of the cluster. The service that Kubernetes creates as part of this is then responsible for routing any traffic arriving on that port through to the application’s service, regardless of which node receives it.
The following example shows how to create a NodePort service for the nginx-app pod that we created earlier.
cat <<EOF > nodeport.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  selector:
    app: nginx-app
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30001
    protocol: TCP
EOF
kubectl apply -f nodeport.yaml
If we query our available services now we should see a new entry for our nodeport-service.
$ kubectl get service nodeport-service
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nodeport-service   NodePort   10.254.141.196   <none>        80:30001/TCP   5s
There are several ways in which we can now access our Nginx application. The first is via the cluster IP, as in the previous example, though the same restriction still applies: this will only work from within the cluster.
$ kubectl exec -it nginx-test-app-65b8cd96c4-kqtlm -- curl 10.254.141.196:80
<!-- output truncated for brevity -->
<h1>Welcome to nginx!</h1>
<!-- output truncated for brevity -->
To access the cluster from outside we will first need to find the addresses that have been associated with the Node itself.
$ kubectl get pod nginx-test-app-65b8cd96c4-kqtlm -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP              NODE                              NOMINATED NODE
nginx-test-app-65b8cd96c4-kqtlm   1/1     Running   0          64m   192.168.158.2   k8s-m3-n3-4elkr4e46fng-minion-0   <none>

$ kubectl describe node k8s-m3-n3-4elkr4e46fng-minion-0 | grep IP
  InternalIP:  10.0.0.15
  ExternalIP:  220.127.116.11
In this particular example we can see that our node has both an internal and an external IP address. This means that we could browse directly to http://220.127.116.11:30001 from the internet (assuming the cluster has appropriate inbound access enabled).
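For example, assuming such access is in place, the welcome page could be retrieved from any internet connected machine:

$ curl http://220.127.116.11:30001
<!-- output truncated for brevity -->
<h1>Welcome to nginx!</h1>
<!-- output truncated for brevity -->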
Alternatively, the service could also be accessed by any other server instance deployed on the 10.0.0.0/24 host network with appropriate security group access.
For example, if we have another instance attached to that network on 10.0.0.16, and both it and the node running the Nginx application belong to a security group that allows access to TCP/30001, then the following will be possible.
$ ssh ubuntu@k8s-bastion
ubuntu@k8s-bastion:~$ curl 10.0.0.15:30001
<!-- output truncated for brevity -->
<h1>Welcome to nginx!</h1>
<!-- output truncated for brevity -->
This provides a handy way to give access to non-production clusters where it is not necessarily desirable to use publicly addressable IP addresses, either because of cost or availability. The only downside of this approach is that standard services such as HTTP and HTTPS end up being exposed on non-standard ports.
Typically the NodePort type tends to be used as an abstraction for higher-level ingress types such as the LoadBalancer.
Using a LoadBalancer service type automatically deploys an external load balancer. The exact implementation of this will be dependent on the cloud provider that you are using.
For most production scenarios this is the most straightforward approach to take.
To provide our Nginx application with an internet-facing loadbalancer we can simply run the following.
cat <<EOF > loadbalancer.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: loadbalanced-service
spec:
  selector:
    app: nginx-app
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
EOF
kubectl apply -f loadbalancer.yaml
Check the state of the loadbalanced-service until the EXTERNAL-IP status is no longer <pending>.
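One convenient way to do this is with the --watch flag, which prints a new line each time the service changes state:

$ kubectl get service loadbalanced-service --watch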
$ kubectl get service loadbalanced-service
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
loadbalanced-service   LoadBalancer   10.254.28.183   22.214.171.124   80:31177/TCP   2m18s
Once we can see that our service is active and has been assigned an external IP address, we should be able to retrieve the “Welcome Page” via the browser, or simply via curl, from any internet-accessible machine.
$ curl 22.214.171.124
<!-- output truncated for brevity -->
<h1>Welcome to nginx!</h1>
<!-- output truncated for brevity -->
For a more complete example that shows connections being load balanced between many identical application pods, take a look at Running a hello world application.
While the default behaviour of the loadbalancer service may be fine for the majority of typical use cases, there are times when this behaviour will need to be modified to suit a particular scenario.
Some examples of where this might be applicable include:
- retaining the floating IP used for the VIP
- creating a loadbalancer that does not have an IP address assigned from the public address pool
- assigning which network, subnet or port the loadbalancer will use for its VIP address
Fortunately Kubernetes supplies a means to achieve these desired changes in behaviour through the use of annotations in the service manifest.
Although, by default, the loadbalancer is created with an externally addressable public IP address, it is possible to use a local IP address instead with the following annotation.
annotations:
  service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
A simple example would look like this.
cat <<EOF > loadbalancer_internal_ip.yaml
kind: Service
apiVersion: v1
metadata:
  name: lb-internal-ip
  namespace: default
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
kubectl apply -f loadbalancer_internal_ip.yaml
The resulting loadbalancer would be provisioned with a VIP from the existing Kubernetes host network.
If we examine the node we can see that its internal network address is in the 10.0.0.0/24 subnet, and a simple query of the new service shows that it too has now been assigned an address from this same range as its VIP.
$ kubectl describe nodes k8s-m3-n3-4elkr4e46fng-minion-0 | grep InternalIP
  InternalIP:  10.0.0.15

$ kubectl get svc lb-internal-ip
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
lb-internal-ip   LoadBalancer   10.254.229.121   10.0.0.38     80:32500/TCP   138m
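As the VIP sits on the cluster’s host network, it should be reachable from any other instance attached to that network, such as the bastion host from the earlier NodePort example (assuming the security groups permit TCP/80):

$ ssh ubuntu@k8s-bastion
ubuntu@k8s-bastion:~$ curl 10.0.0.38
<!-- output truncated for brevity -->
<h1>Welcome to nginx!</h1>
<!-- output truncated for brevity -->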
There may be occasions, such as existing DNS entries for example, where it is desirable to retain the floating IP that has been assigned to the loadbalancer backing a Kubernetes service.
To do this we can add the following annotation to our service manifest.
annotations:
  loadbalancer.openstack.org/keep-floatingip: "true"
This can also be used in conjunction with an IP address that is already allocated to your cloud project.
spec:
  type: LoadBalancer
  loadBalancerIP: 188.8.131.52
Here is an example service that creates a loadbalancer for an Nginx application. It uses the existing IP address 188.8.131.52 for the loadbalancer and sets the keep-floatingip flag to true.
$ cat <<EOF > lb-retain-fip.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: svc-nginx-retain-fip
  namespace: default
  annotations:
    loadbalancer.openstack.org/keep-floatingip: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 188.8.131.52
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
$ kubectl apply -f lb-retain-fip.yaml
Now, when we remove the service, the 188.8.131.52 address will remain allocated to our cloud project rather than being released back to the public address pool.
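This can be verified with the OpenStack CLI; a sketch of the expected check after deleting the service:

$ kubectl delete service svc-nginx-retain-fip
# the floating IP should still be listed against the project,
# now detached from any fixed IP
$ openstack floating ip list -c "Floating IP Address" -c "Fixed IP Address"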
There are occasions where an application needs to be able to determine the original IP address for requests it receives. In order to do this we need to make use of the X-Forwarded-For header. X-Forwarded-For is an HTTP header field that can be used for identifying the originating IP address of a client connecting to a web server through a loadbalancer or an HTTP proxy.
If we deploy a standard LoadBalancer service in front of our echoserver application using the default settings, we can confirm that the IP address of the originating client is not visible.
Here is the echoserver deployment manifest. We can deploy this using the following command:
kubectl apply -f deployment-echoserver.yml
# deployment-echoserver.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver-deployment
  labels:
    app: echoserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google-containers/echoserver:1.10
        ports:
        - containerPort: 8080
The echoserver loadbalancer manifest can be deployed in the same manner:
kubectl apply -f lb-echoserver-1.yml
# lb-echoserver-1.yml
---
kind: Service
apiVersion: v1
metadata:
  name: echoserver-lb-service
spec:
  selector:
    app: echoserver
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
We can see by querying with curl that there is no source information available in the Request Headers section.
$ curl -i 22.214.171.124
HTTP/1.1 200 OK
Date: Fri, 17 Apr 2020 01:34:59 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Server: echoserver

Hostname: echoserver-deployment-7b98d7c584-8fpbq

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=10.0.0.13
    method=GET
    real path=/
    query=
    request_version=1.1
    request_scheme=http
    request_uri=http://22.214.171.124:8080/

Request Headers:
    accept=*/*
    host=22.214.171.124
    user-agent=curl/7.58.0

Request Body:
    -no body in request-
If we now add the annotation loadbalancer.openstack.org/x-forwarded-for to our loadbalancer manifest, like so:
# lb-echoserver-2.yml
---
kind: Service
apiVersion: v1
metadata:
  name: echoserver-lb-service
  annotations:
    loadbalancer.openstack.org/x-forwarded-for: "true"
spec:
  selector:
    app: echoserver
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
kubectl apply -f lb-echoserver-2.yml
We can see by rerunning our curl query that there is now source information available in the Request Headers section, under the x-forwarded-for field.
$ curl -i 22.214.171.124
HTTP/1.1 200 OK
Date: Fri, 17 Apr 2020 01:23:59 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Server: echoserver

Hostname: echoserver-deployment-7b98d7c584-pv685

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=10.0.0.14
    method=GET
    real path=/
    query=
    request_version=1.1
    request_scheme=http
    request_uri=http://22.214.171.124:8080/

Request Headers:
    accept=*/*
    host=22.214.171.124
    user-agent=curl/7.58.0
    x-forwarded-for=184.108.40.206

Request Body:
    -no body in request-
In certain cases, such as when using automation, it may be desirable to be able to re-use an existing floating IP address. Through the use of the loadbalancer.openstack.org/port-id annotation this can be achieved.
Assume we have created a port on our cluster network with the following characteristics:
- fixed IP: 10.0.0.212
- floating IP: 126.96.36.199
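For reference, a port and floating IP along these lines could be created ahead of time with something like the following; the private-net, private-subnet and public-net names here are hypothetical and will vary between clouds:

$ openstack port create --network private-net \
    --fixed-ip subnet=private-subnet,ip-address=10.0.0.212 lb-vip-10-0-0-212
$ openstack floating ip create --port lb-vip-10-0-0-212 public-net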
$ openstack port show lb-vip-10-0-0-212 -f yaml -c fixed_ips -c id -c name
fixed_ips:
- ip_address: 10.0.0.212
  subnet_id: 6ddad590-3a57-4fd1-990b-be067e3f657d
id: ca981484-22f8-4e9d-94f5-e43594afb15e
name: lb-vip-10-0-0-212

$ openstack floating ip show 126.96.36.199 -f yaml -c fixed_ip_address -c floating_ip_address
fixed_ip_address: 10.0.0.212
floating_ip_address: 126.96.36.199
If we add the annotation loadbalancer.openstack.org/port-id to our loadbalancer manifest and provide the port id as the value, we can then make use of this existing port as the VIP on our loadbalancer.
# lb-echoserver-port-id.yml
---
kind: Service
apiVersion: v1
metadata:
  name: echoserver-lb-service
  annotations:
    loadbalancer.openstack.org/port-id: "ca981484-22f8-4e9d-94f5-e43594afb15e"
spec:
  selector:
    app: echoserver
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
Deploy the loadbalancer,
kubectl apply -f lb-echoserver-port-id.yml
and test connectivity with curl to confirm that the echoserver app is accessible via our existing port’s floating IP.
$ curl -i 126.96.36.199
HTTP/1.1 200 OK
Date: Thu, 23 Apr 2020 02:29:08 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Server: echoserver

Hostname: echoserver-deployment-7b98d7c584-kzpkv

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=10.0.0.13
    method=GET
    real path=/
    query=
    request_version=1.1
    request_scheme=http
    request_uri=http://126.96.36.199:8080/

Request Headers:
    accept=*/*
    host=126.96.36.199
    user-agent=curl/7.58.0

Request Body:
    -no body in request-