Get started with octavia-ingress-controller for Kubernetes


This guide explains how to deploy and configure the octavia-ingress-controller in a Kubernetes cluster running on top of an OpenStack cloud.


1 What is an Ingress Controller?

In Kubernetes, Ingress allows external users and client applications to access HTTP services. Ingress consists of two components:
  • Ingress Resource is a collection of rules for the inbound traffic to reach Services. These are Layer 7 (L7) rules that allow hostnames (and optionally paths) to be directed to specific Services in Kubernetes.

  • Ingress Controller, which acts upon the rules set by the Ingress Resource, typically via an HTTP or L7 load balancer.

It is vital that both pieces are properly configured to route traffic from an outside client to a Kubernetes Service.

2 Why octavia-ingress-controller

As an OpenStack-based public cloud provider, Catalyst Cloud aims to continuously enable customer innovation by delivering robust and comprehensive cloud services. After deploying the Octavia and Magnum services in our public cloud, we started thinking about how to help customers develop applications running on Kubernetes clusters and make their services accessible to the public in a high-performance way.

After creating a Kubernetes cluster in Magnum, the most common way to expose an application to the outside world is a LoadBalancer type Service. In an OpenStack cloud, Octavia (LBaaS v2) is the default implementation of the LoadBalancer type Service; as a result, a separate load balancer is created in the cloud tenant account for each LoadBalancer type Service (a minimal example follows the list below). This approach has some drawbacks:

  • The cost of a Kubernetes Service is relatively high with a one-to-one mapping from Service to Octavia load balancer: customers have to pay for a load balancer per exposed Service, which can get expensive.

  • There is no filtering and no routing for the Service. This means you can send almost any kind of traffic to it, such as HTTP, TCP, UDP, WebSockets, or gRPC.

  • Traditional ingress controllers such as the NGINX ingress controller, the HAProxy ingress controller, Træfik, etc. don't make much sense in a cloud environment, because the user still needs to expose a Service for the ingress controller itself, which may increase network latency and decrease application performance.
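To make the one-to-one mapping concrete, here is a minimal LoadBalancer type Service; the names and ports are illustrative, not from this guide. Every Service like this results in its own dedicated Octavia load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb              # illustrative name
spec:
  type: LoadBalancer           # each such Service gets its own Octavia load balancer
  selector:
    app: my-app
  ports:
    - port: 80                 # port exposed by the cloud load balancer
      targetPort: 8080         # port the application pods listen on
```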

The octavia-ingress-controller solves the above problems in an OpenStack environment by creating a single load balancer for multiple NodePort type Services in an Ingress. To use the octavia-ingress-controller in a Kubernetes cluster, set the value openstack for the annotation kubernetes.io/ingress.class in the metadata section of the Ingress Resource, as shown below:

```yaml
annotations:
  kubernetes.io/ingress.class: openstack
```


3 How to deploy octavia-ingress-controller


3.1 Prepare kubeconfig file


A kubeconfig file is used to configure access to Kubernetes clusters; this is the generic way of referring to configuration files in Kubernetes. The following commands are performed on a Kubernetes cluster created using kubeadm.


  • Install the cfssl tools, which are used for generating TLS certificates:

```bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && chmod +x cfssl_linux-amd64 && mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && chmod +x cfssljson_linux-amd64 && mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 && chmod +x cfssl-certinfo_linux-amd64 && mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
```
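A quick check that the tools landed on your PATH; cfssl prints its version if the install worked:

```bash
cfssl version
```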

  • Re-use the CA cert and key in the existing cluster:

```bash
pushd /etc/kubernetes/pki
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
cat > ingress-openstack-csr.json <<EOF
{
  "CN": "octavia-ingress-controller",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "NZ",
      "ST": "Wellington",
      "L": "Wellington",
      "O": "Catalyst",
      "OU": "Lingxian"
    }
  ]
}
EOF
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=kubernetes ingress-openstack-csr.json | cfssljson -bare ingress-openstack
# You can take a look at the files generated by cfssl
ls -l | grep ingress-openstack
```
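To sanity-check the client certificate that cfssl just produced, you can dump its fields with the cfssl-certinfo tool installed earlier; the CN should read octavia-ingress-controller:

```bash
cfssl-certinfo -cert ingress-openstack.pem
```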

  • Create a kubeconfig file for the octavia-ingress-controller. Note that ${k8s_master_ip} is a shell variable you must set to the master node's IP address beforehand:

```bash
ca_data=$(cat ca.crt | base64 | tr -d '\n')
client_cert_data=$(cat ingress-openstack.pem | base64 | tr -d '\n')
client_key_data=$(cat ingress-openstack-key.pem | base64 | tr -d '\n')

cat <<EOF > /etc/kubernetes/ingress-openstack.conf
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${ca_data}
    server: https://${k8s_master_ip}:6443
  name: kubernetes
preferences: {}
users:
- name: octavia-ingress-controller
  user:
    client-certificate-data: ${client_cert_data}
    client-key-data: ${client_key_data}
contexts:
- context:
    cluster: kubernetes
    user: octavia-ingress-controller
  name: octavia-ingress-controller@kubernetes
current-context: octavia-ingress-controller@kubernetes
EOF
popd
```

3.2 Configure RBAC for the octavia-ingress-controller user


For testing purposes, grant the cluster-admin role to the octavia-ingress-controller user so that the user has full access to the Kubernetes cluster:

```bash
cat <<EOF | kubectl create -f -
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: octavia-ingress-controller
subjects:
- kind: User
  name: octavia-ingress-controller
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
```
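A quick sanity check: with the role binding in place, requests made with the new kubeconfig should now succeed, for example:

```bash
kubectl --kubeconfig /etc/kubernetes/ingress-openstack.conf get nodes
```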

3.3 Prepare octavia-ingress-controller service configuration


We need the credentials of the admin user and of a normal user (e.g. demo) in OpenStack:

```bash
source openrc_admin
project_id=$(openstack project show demo -c id -f value)
auth_url=$(export | grep OS_AUTH_URL | awk -F'"' '{print $2}')
subnet_id=$(openstack subnet show private-subnet -c id -f value)
public_net_id=$(openstack network show public -c id -f value)

cat <<EOF > /etc/kubernetes/ingress-openstack.yaml
kubernetes:
  kubeconfig: /etc/kubernetes/ingress-openstack.conf
openstack:
  username: demo
  password: password
  project_id: ${project_id}
  auth_url: ${auth_url}/v3
  region: RegionOne
octavia:
  subnet_id: ${subnet_id}
  fip_network: ${public_net_id}
EOF
```

3.4 Setting up octavia-ingress-controller service
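The controller runs as a static pod on the Kubernetes master node and reads the configuration file created above. A minimal sketch of the manifest to drop into /etc/kubernetes/manifests/ follows; the image name and binary path are assumptions based on the upstream cloud-provider-openstack project, so verify them against the release you deploy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: octavia-ingress-controller
      # Assumed image and entrypoint; check the cloud-provider-openstack repo for your release.
      image: docker.io/k8scloudprovider/octavia-ingress-controller:latest
      args:
        - /bin/octavia-ingress-controller
        - --config=/etc/kubernetes/ingress-openstack.yaml
      volumeMounts:
        - name: kubernetes-config
          mountPath: /etc/kubernetes
          readOnly: true
  volumes:
    - name: kubernetes-config
      hostPath:
        path: /etc/kubernetes
        type: Directory
```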

Wait until the octavia-ingress-controller static pod is up and running:

```bash
$ kubectl get pod --all-namespaces | grep octavia-ingress-controller
kube-system   octavia-ingress-controller-lingxian-k8s-master   1/1   Running   0   1m
```

4 Setting up HTTP Load Balancing with Ingress


4.1 Create a backend service


Create a simple service (it echoes the pod's hostname) that listens on an HTTP server on port 8080, as sketched below.
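A minimal sketch of such a backend, using the names referenced later in this guide (hostname-echo-deployment and hostname-echo-svc); the container image is a placeholder assumption, so substitute any image that serves its hostname over HTTP on port 8080:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname-echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname-echo
  template:
    metadata:
      labels:
        app: hostname-echo
    spec:
      containers:
        - name: hostname-echo
          # Placeholder image: any container that echoes its hostname on port 8080 works.
          image: example/hostname-echo:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hostname-echo-svc
spec:
  type: NodePort             # the octavia-ingress-controller routes to NodePort Services
  selector:
    app: hostname-echo
  ports:
    - port: 8080
      targetPort: 8080
```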


When you create a Service of type NodePort, Kubernetes makes your Service available on a randomly selected high port number (e.g. 32066) on all the nodes in your cluster. Because Kubernetes nodes are generally not externally accessible by default, creating this Service does not make your application accessible from the Internet. However, we can verify the Service using its CLUSTER-IP on the Kubernetes master node:

```bash
$ curl http://10.106.36.88:8080
hostname-echo-deployment-698fd44fc8-jptl2
```

To make your HTTP web server application publicly accessible, you need to create an Ingress resource.

4.2 Create an Ingress resource


The following command defines an Ingress resource that directs traffic requesting http://api.sample.com/hostname to the hostname-echo Service:

```bash
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-octavia-ingress
  annotations:
    kubernetes.io/ingress.class: "openstack"
spec:
  rules:
  - host: api.sample.com
    http:
      paths:
      - path: /hostname
        backend:
          serviceName: hostname-echo-svc
          servicePort: 8080
EOF
```

Kubernetes creates an Ingress resource on your cluster. The octavia-ingress-controller service running in your cluster is responsible for creating and maintaining the corresponding resources in Octavia to route all external HTTP traffic (on port 80) to the hostname-echo NodePort Service you exposed.
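If the OpenStack CLI with the Octavia plugin is available, you can watch the resources the controller creates on the cloud side; the exact names of the created objects are controller-generated:

```bash
openstack loadbalancer list
openstack loadbalancer listener list
openstack loadbalancer pool list
```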

Verify that the Ingress Resource has been created. Please note that the IP address for the Ingress Resource will not be defined right away (wait a few moments for the ADDRESS field to get populated):

```bash
$ kubectl get ing
NAME                   HOSTS            ADDRESS      PORTS     AGE
test-octavia-ingress   api.sample.com                80        12s

# Wait until the ingress gets an IP address
$ kubectl get ing
NAME                   HOSTS            ADDRESS      PORTS     AGE
test-octavia-ingress   api.sample.com   172.24.4.9   80        9m
```

For testing purposes, log in to a host that has a network connection to the OpenStack cloud and update its /etc/hosts file to resolve api.sample.com to the Ingress IP address. You should then be able to access the backend service by sending HTTP requests to the domain name specified in the Ingress Resource:

```bash
$ echo "172.24.4.9 api.sample.com" >> /etc/hosts
$ curl http://api.sample.com/hostname
hostname-echo-deployment-698fd44fc8-jptl2
```
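When you finish testing, deleting the Ingress should cause the controller to remove the Octavia resources it created; the /etc/hosts entry has to be removed by hand:

```bash
kubectl delete ingress test-octavia-ingress
```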
