Background
We recently worked with Situm, a company that offers high-precision indoor navigation. Situm’s platform can ascertain the position of a user inside a building by relying on a smartphone’s sensors: WiFi, Bluetooth, magnetometer, gyroscope, and accelerometer. The platform is now used in a variety of scenarios, including providing patients with turn-by-turn navigation inside hospitals and optimizing the response times of commercial security services by tracking the locations of their security personnel. As Situm continues to grow, they wanted to ensure high availability and scalability, so they turned to Kubernetes on Azure. We also worked with the Azure engineering team to add support for UDP workloads to Kubernetes on Azure.
The Problem
The Situm app running on a smartphone continually sends sensor readings to Situm’s services. To optimize for performance, this communication is done over UDP. The received sensor data is processed by artificial intelligence models to compute the position of the smartphone within a building. Each model tracks a user’s prior movements to predict their next one. As a result, each location request from a smartphone needs to hit a service instance with that user’s model already loaded; otherwise, there is a performance penalty for reloading the entire user state.
The Solution
Kubernetes can easily scale out Situm’s application to handle the growing workload. To direct a given smartphone’s traffic to the same backend, however, we need to enable load balancing with session affinity.
Kubernetes
The easiest method to provision a Kubernetes cluster on Azure is through Azure Container Service. The underlying engine of Azure Container Service, acs-engine, is open-source and available on GitHub, and provides a means to further customize a Kubernetes deployment.
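At the time of this engagement, a cluster could be provisioned with a handful of Azure CLI commands. The sketch below uses placeholder names (situm-rg, situm-k8s) and assumes the az acs commands that were current then:

# Create a resource group and a Kubernetes cluster through Azure Container Service
$ az group create --name situm-rg --location westeurope
$ az acs create --orchestrator-type kubernetes --resource-group situm-rg --name situm-k8s --generate-ssh-keys

# Fetch the kubeconfig so kubectl can talk to the new cluster
$ az acs kubernetes get-credentials --resource-group situm-rg --name situm-k8s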
While working with Situm, we used acs-engine to deploy our cluster because we needed a private build of hyperkube, a minimal container image that bundles all of the core Kubernetes components (kubelet, apiserver, etc.) into a single binary. The private build was necessary because the upstream change to the Kubernetes project that adds support for UDP traffic to the Azure provider had not yet been included in a release. Support for UDP traffic through the Azure load balancer is now included in Kubernetes v1.6.5 and later. For reference, here is the pull request on the Kubernetes project that adds the support and the subsequent pull request that backports it to v1.6.5.
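For completeness, the deployment flow looked roughly like the following. This is a sketch: the cluster definition filename (kubernetes.json), resource group, and DNS prefix are placeholders, and the exact acs-engine invocation has varied between releases. The private hyperkube image is referenced from the cluster definition’s kubernetesConfig.customHyperkubeImage field.

# Generate ARM templates from a cluster definition (kubernetes.json is a
# placeholder) whose kubernetesConfig.customHyperkubeImage points at the
# private hyperkube build
$ acs-engine generate kubernetes.json

# Deploy the generated templates; <dnsPrefix> is the prefix set in the
# cluster definition
$ az group deployment create --resource-group situm-rg \
    --template-file _output/<dnsPrefix>/azuredeploy.json \
    --parameters @_output/<dnsPrefix>/azuredeploy.parameters.json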
With a Kubernetes Deployment, Situm can easily scale their workload up or down. Below is a Deployment manifest for a sample UDP workload listening on port 10001 (note that the protocol is explicitly set to UDP):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: udp-server-deployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        name: udp-server
    spec:
      containers:
      - name: udp-server
        image: jpoon/udp-server
        imagePullPolicy: Always
        ports:
        - containerPort: 10001
          protocol: UDP
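Assuming the manifest above is saved as udp-server-deployment.yaml (a placeholder filename), it can be applied and later scaled with kubectl:

# Create (or update) the Deployment
$ kubectl apply -f udp-server-deployment.yaml

# Scale the backend pool up or down on demand
$ kubectl scale deployment udp-server-deployment --replicas=10
$ kubectl get pods -l name=udp-server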
Session Affinity
kube-proxy is a daemon that runs on each node and acts as a network proxy and load balancer for Services on that node. It watches the Kubernetes master and, for each Service, installs and maintains iptables rules that capture traffic destined for the Service and redirect it to one of the pods in the backend pool.
To expose the Deployment to the outside world, we create a Service of type LoadBalancer. Again, note that UDP is explicitly set as the protocol.
apiVersion: v1
kind: Service
metadata:
  name: udp-server-service
  labels:
    app: udp-server
spec:
  type: LoadBalancer
  ports:
  - port: 10001
    protocol: UDP
  selector:
    name: udp-server
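Once the Service is created, the Azure load balancer assigns it a public IP address. The commands below are a quick smoke test; they assume netcat is available locally and that <EXTERNAL-IP> is replaced with the address kubectl reports:

# Wait for the Azure load balancer to expose an external IP
$ kubectl get service udp-server-service --watch

# Fire a test datagram at the service (-u selects UDP, -w 1 sets a one-second timeout)
$ echo "hello" | nc -u -w 1 <EXTERNAL-IP> 10001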
By default, a Kubernetes Service spreads traffic evenly across its backends; with the iptables proxy mode, this is implemented as a chain of iptables forwarding rules. We can take a peek at the iptables rules by ssh-ing into any of the agents:
$ sudo iptables-save -t nat
...
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-PXZRUUO6ETLHNSK5
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-7BYVUHZG6JCWRBXE
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-TQ6Y3YOHBBRO6ZEY
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-L63KMNPKEWP6ZS3R
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -j KUBE-SEP-IGJNOFR3CZKDK3MQ
...
The above excerpt shows a chain of forwarding rules; each rule uses the statistic module in random mode to match incoming packets. The probabilities (0.2, 0.25, 0.333…, 0.5, and finally an unconditional jump) are chosen so that, across the chain, each of the five backend endpoints receives an equal share of the traffic. To attain session affinity, deploy the Kubernetes Service with service.spec.sessionAffinity set to ClientIP:
apiVersion: v1
kind: Service
metadata:
  name: udp-server-service
  labels:
    app: udp-server
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  ports:
  - port: 10001
    protocol: UDP
  selector:
    name: udp-server
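As before, applying the updated manifest is a one-liner (the filename is a placeholder):

$ kubectl apply -f udp-server-service.yaml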
Once the new manifest has been applied, Kubernetes will add the following new iptables rules on the agents:
$ sudo iptables-save -t nat
...
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-PXZRUUO6ETLHNSK5 --mask 255.255.255.255 --rsource -j KUBE-SEP-PXZRUUO6ETLHNSK5
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-7BYVUHZG6JCWRBXE --mask 255.255.255.255 --rsource -j KUBE-SEP-7BYVUHZG6JCWRBXE
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-TQ6Y3YOHBBRO6ZEY --mask 255.255.255.255 --rsource -j KUBE-SEP-TQ6Y3YOHBBRO6ZEY
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-L63KMNPKEWP6ZS3R --mask 255.255.255.255 --rsource -j KUBE-SEP-L63KMNPKEWP6ZS3R
-A KUBE-SVC-47RAPJ3AUKGKU6DC -m comment --comment "default/udp-server-service:" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-IGJNOFR3CZKDK3MQ --mask 255.255.255.255 --rsource -j KUBE-SEP-IGJNOFR3CZKDK3MQ
...
The rules use the recent module to track the source addresses of packets. The first packet from a given source address matches none of these rules and falls through to the statistic-based rules we saw earlier; the KUBE-SEP chain that is selected then records the source address. For the next 10800 seconds, subsequent packets from the same source address match the corresponding rule above and follow the same resolution (i.e., they are forwarded to the same backend pod).
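The 10800-second window is the default client-IP affinity timeout. In Kubernetes releases later than the ones discussed here, it can be tuned through the Service spec’s sessionAffinityConfig; the fragment below is a sketch of what that looks like:

spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  # Available in later Kubernetes releases; shortens the affinity window
  # from the default 10800 seconds (3 hours) to 1 hour
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600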
Opportunities for Reuse
The solution outlined in this code story is adaptable to any workload which requires session persistence. As of Kubernetes v1.6.5 (release notes), Kubernetes on Azure supports both UDP and TCP workloads, and respects the Service spec’s sessionAffinity.
The UDP test workload used throughout this code story is available on GitHub.