Using OpenStack as an infrastructure provider in Rancher
Rancher is an open-source project for managing Kubernetes clusters. Its great advantage is that it lets you manage, from a single place, Kubernetes clusters hosted either in one of the most popular Kubernetes providers, such as Google Container Engine, Amazon EKS, or Azure Kubernetes Service, or in infrastructure providers, such as Amazon EC2, Microsoft Azure, Digital Ocean, RackSpace, OpenStack, SoftLayer, and vSphere, among others.
In this tutorial we will see, on the one hand, how to create a Kubernetes cluster using OpenStack as an infrastructure provider for the master and worker nodes. On the other hand, we will see how to use OpenStack Cinder to provide persistent storage to Kubernetes clusters built on top of OpenStack.
Installation and initial configuration of Rancher
Rancher is a platform for managing Kubernetes distributions in different providers (Google, Amazon, Microsoft, RackSpace, OpenStack, vSphere, …). Rancher enables the centralized creation and management of Kubernetes clusters regardless of the chosen provider. In addition, it allows you to use application catalogs from Helm chart repositories for easy deployment.
Next we will see how to perform a basic installation of Rancher and how to configure it so that Rancher can create Kubernetes clusters using OpenStack as IaaS.
Rancher Installation
Following the Rancher Quick Installation Guide, from an Ubuntu machine configured with Docker we will execute:
sudo docker run -d \
--volumes-from rancher-data \ #1
--restart=unless-stopped \
-p 80:80 -p 443:443 \
rancher/rancher:latest
- We store the Rancher data as a volume outside the container.
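Although not part of the quick installation guide, we can check that the Rancher container started correctly and follow its logs; a minimal sketch, where the container ID is a placeholder:
sudo docker ps --filter ancestor=rancher/rancher:latest   # the container should appear as Up
sudo docker logs -f <container-id>                        # follow the startup logs; the UI is then served on ports 80/443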
Installing Rancher Using SSL Certificates
If we have SSL certificates we can start Rancher telling it to use the certificates. In the case of Rancher-STIC, we have placed the certificates in a directory that we pass as a volume to Rancher.
docker run -d \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /home/ubuntu/rancherdata:/var/lib/rancher \ #1
-v /home/ubuntu/certificados/star_stic_ual_es.crt:/etc/rancher/ssl/cert.pem \ #2
-v /home/ubuntu/certificados/star_stic_ual_es.key:/etc/rancher/ssl/key.pem \ #3
-v /home/ubuntu/certificados/DigiCertCA.crt:/etc/rancher/ssl/cacerts.pem \ #4
rancher/rancher:latest
- Volume for data storage
- Concatenation of the Rancher certificate with that of the certificate authority (Rancher uses certificate chains; see the sketch after this list)
- Private key
- Certificate authority (CA) certificate
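As a reference for callout #2, a minimal sketch of how such a chained certificate file can be produced; the server certificate file name is illustrative, only DigiCertCA.crt matches the files used above:
cat server_certificate.crt DigiCertCA.crt > star_stic_ual_es.crt   # server certificate first, then the intermediate CA certificate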
If the CA certificate is not going to be used, use this command instead:
docker run -d \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /home/ubuntu/rancherdata:/var/lib/rancher \ #1
-v /home/ubuntu/certificados/star_stic_ual_es.crt:/etc/rancher/ssl/cert.pem \ #2
-v /home/ubuntu/certificados/star_stic_ual_es.key:/etc/rancher/ssl/key.pem \ #3
rancher/rancher:latest \
--no-cacerts #4
- Volume for data storage
- Concatenation of the Rancher certificate with that of the certificate authority (Rancher uses certificate chains)
- Private key
- Disables the default certificates generated by Rancher
Activation of OpenStack as a provider of compute infrastructure for Rancher
By default, Rancher offers the following providers for creating Kubernetes clusters:
- Kubernetes providers: Google Container Engine, Amazon EKS, and Azure Kubernetes Service
- Infrastructure providers: Amazon EC2, Microsoft Azure, Digital Ocean, and vSphere.
However, other infrastructure providers, such as RackSpace, OpenStack, SoftLayer, and others, are not available by default.
To activate OpenStack as a new infrastructure provider (this is applicable to any other provider, e.g. RackSpace), we will select Global as the cluster and then Node Drivers in the menu bar. This action shows all the providers that we can add for creating the nodes of the Kubernetes clusters we build.
The following figure illustrates the list of providers and how OpenStack is activated as an infrastructure provider.
Clicking Activate on OpenStack will activate OpenStack as the infrastructure provider, as illustrated in the following figure:
However, this is not enough. Next, we must define the template with which the Kubernetes cluster nodes (master and workers) will be created in OpenStack.
Creating the template for nodes
Google Container Engine, Amazon EKS, and so on are commercial products whose URLs, availability zone names, image names for node creation, and so on are already set. However, since there is no single OpenStack provider, parameters such as URLs, availability zone names, and image names can (and certainly do) vary from one OpenStack provider to another.
Each Rancher user will need to configure their own templates to specify the different OpenStack providers they have access to, as well as the different OpenStack instance configurations they want to use depending on the type of Kubernetes node they are creating.
To create a template, in the user drop-down menu we select Node Templates and then press the button Add Template.
Below are the parameters to enter in this dialog box:
- activeTimeout: Leave the default value of 200, which will be the timeout we allow OpenStack to create nodes.
- authURL: Public Keystone endpoint that provides authentication. In our case http://openstack.stic.ual.es:5000/v3 (only accessible from the UAL VPN). This information can be found by the OpenStack administrator by opening the Horizon console and looking at services in the menu Admin | System Information. You can also get the URL using the OpenStack CLI with:
openstack endpoint list
- availabilityZone: Availability zone where the Kubernetes nodes created with this template will be instantiated. In our case we will enter nova.
- domainName: Name of the domain to which the user providing OpenStack resources to this template belongs. In our case we will enter default. Instead of domainName, one could set the parameter domainId, but this is a more difficult value to obtain.
- endPointType: Type of endpoint that we will use to interact with the OpenStack Keystone component for authentication. We will leave publicURL, since Rancher does not have access to the OpenStack tunnel network or maintenance network.
- flavorName: Full name of the flavor with which nodes created from this template will be built. In this example we will use large, although we can choose any other among those available in the OpenStack we are accessing (e.g. tiny, small, large, xlarge, and others; see the CLI sketch after this list).
- floatingipPool: Name of the external network that provides floating IPs to the created instances. In our case we will enter ual-net.
- imageName: Full name of the image to use to create the instances. In our case we will use Ubuntu 18.04 LTS, although we could have used any of those available in OpenStack-STIC (CentOS 7, Debian 10, openSUSE Leap 15.1, …).
- ipVersion: We leave 4, since the addresses we use are IPv4.
- keypairName: Name of the key pair whose public key will be injected into the instance on creation; it must therefore exist with that name in the OpenStack project in which the instances created from this template will be launched (e.g. os-sistemas).
- netName: Name of the network to which the instances created according to this template will connect. Review the networks of your OpenStack project and identify the name of the network where the instances to be created will be located (eg k8s-net).
- password: Password of the OpenStack user, which gives Rancher the access it needs to create the instances.
- privateKeyFile: Content of the private key that Rancher will use to provision the instances; it must be the pair of the public key entered in keypairName.
- region: Name of the region. In our case it is RegionOne.
- secGroups: Comma-separated list of security groups of the OpenStack project applicable to instances created with this template (e.g. default).
- sshPort: SSH access port of the instances. We leave the default value of 22.
- sshUser: Username of the instance to be created, which will depend on the type and image used to create the instance. For example, for Ubuntu images the user is ubuntu, for Debian it is debian, for Fedora it is fedora.
- tenantName: Name of the OpenStack project in which instances using this template will be created. Check the project name in Horizon to get this value.
- userName: OpenStack username that this template will use.
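The values for several of these parameters (endpoints, flavors, images, networks, key pairs, security groups) can be checked beforehand with the OpenStack CLI. A minimal sketch, assuming the CLI is configured with credentials for the target project:
openstack endpoint list --service identity   # authURL (public Keystone endpoint)
openstack availability zone list             # availabilityZone
openstack flavor list                        # flavorName
openstack image list                         # imageName
openstack network list --external            # floatingipPool (external network)
openstack network list --internal            # netName (project network)
openstack keypair list                       # keypairName
openstack security group list                # secGroups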
From this moment we already have a template with which we can create the nodes of our Kubernetes cluster. All nodes created using this template will have the characteristics defined in the template (image, flavor, network, security groups, and so on).
Creation of various templates
To better match the needs of each type of node to be created, different templates can be defined with flavors offering more or fewer resources.
To create a second template from the first, we can clone the previous template (with the Clone option offered by Rancher) and make some adjustments to it, for example, to increase the flavor of the nodes that will have the Worker role in the Kubernetes cluster.
The following figure illustrates two templates available for creating a Kubernetes cluster: one with the medium flavor for etcd and Control Plane nodes, and another with the xlarge flavor for Worker nodes.
Kubernetes cluster creation using templates
From the created templates we will deploy an example Kubernetes cluster for CI tasks. The characteristics of the nodes are the following:
- A node pool of 3 medium nodes for etcd and Control Plane with the prefix k8-prod-ci.
- A node pool of 4 xlarge nodes for Worker with the prefix k8-prod-ci-worker.
After a few minutes, the cluster will be created and we will be able to see in the associated OpenStack project the created instances distinguished with the prefixes k8s-prod-ci and k8-prod-ci-worker.
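We can also verify the created nodes from the OpenStack CLI; a quick check, filtering by the node pool prefix chosen above:
openstack server list --name k8s-prod-ci   # lists the instances whose names match the node pool prefix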
Configuring OpenStack as a volume provider
Cinder is the OpenStack component that provides block storage. We can use Cinder for creating persistent storage in Kubernetes projects. Here we will see how to make this configuration in Rancher.
To activate OpenStack as a cloud provider in Rancher, a series of parameters must be configured through a YAML configuration file for the cluster options.
The configuration in Rancher of a cloud provider for storage or any other service (eg load balancers) is done at the cluster level. Therefore, this configuration will have to be done on each Kubernetes cluster.
When creating the cluster, once the configuration of the Master and Worker nodes of the Kubernetes cluster has been defined, we will configure the part related to storage in volumes with Cinder. To make this configuration we will select the link Edit as YAML. This will open a box where we will configure the OpenStack and Cinder options so that Cinder volumes can be provided to this cluster.
It is also possible to apply the OpenStack configuration as a cloud provider on top of existing Kubernetes clusters. Once the changes are made, Rancher will reconfigure the Kubernetes cluster so that Cinder can be used as a storage provider.
YAML file configuration settings
Below is the snippet with the configuration of Cinder as a block storage provider in the cluster. A download link for these settings is available.
Configuration options are grouped into several sections. The most important ones for defining Cinder volumes in Rancher are the global and block_storage sections inside the openstackCloudProvider section. However, openstackCloudProvider also contains other sections that are useful in other specific situations (e.g. load_balancer).
cloud_provider:
  name: openstack
  openstackCloudProvider:
    block_storage: #1
      ignore-volume-az: true
      trust-device-path: false
    global:
      auth-url: 'http://openstack.stic.ual.es:5000/v3/' #2
      domain-name: default #3
      tenant-id: "your-tenant-id-here" #4
      username: "your-username-here" #5
      password: "your-password-here" #6
    load_balancer:
      create-monitor: false
      floating-network-id: "your-external-net-id-here" #7
      manage-security-groups: false
      monitor-max-retries: 0
      subnet-id: "your-subnet-id-here" #8
      use-octavia: false
    metadata:
      request-timeout: 0
    route: {}
- Cinder configuration options
- OpenStack-STIC Authentication Endpoint
- Domain name used in OpenStack-STIC
- Project ID. The project name is not supported. See below for information on obtaining the project ID
- Username
- Password
- ID of the external network that provides the floating IPs
- Project Subnet ID
1. We obtain the project ID through the View credentials option available in the OpenStack menu Project | Compute | API Access, pressing the button View credentials.
The dialog User Credentials Details will appear displaying the Project ID, which is the information we needed to complete the cluster configuration YAML for use by Cinder.
2. We obtain floating-network-id in the menu Network | Networks, selecting the external network (ual-net). The necessary data appears as ID in the Overview tab.
3. We obtain the subnet identifier in the menu Network | Networks, selecting the project network and then its subnet; the necessary data is in ID. (These identifiers can also be obtained with the OpenStack CLI; see the sketch after this list.)
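A sketch of the equivalent CLI queries, assuming the network names used in this tutorial (ual-net, k8s-net); the project name is a placeholder:
openstack project show <project-name> -f value -c id   # tenant-id
openstack network show ual-net -f value -c id          # floating-network-id
openstack subnet list --network k8s-net                # subnet-id of the project subnet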
Creating Cinder volumes on Kubernetes clusters
Containers can store data, but the data is lost when the containers are deleted. Kubernetes offers persistent volumes, which are storage external to the pod, either on the host, in a storage cluster, or in cloud storage. If a container fails, the container that replaces it can access the data again without data loss.
Kubernetes offers two forms of persistent storage: Persistent Volumes (PVs) and Storage Classes.
Persistent Volumes (PV):
- They are pre-provisioned volumes that can be attached to pods later.
- When the application starts, it creates a Persistent Volume Claim (PVC), which is bound to the persistent volume.
- In Rancher they are created at the cluster level, not at the project level.
Storage Classes:
- They represent something like storage drivers (we could have classes for Cinder, iSCSI, GlusterFS, NFS, …). Learn more about volume types in Kubernetes.
- They provision persistent volumes on demand.
- They allow you to create the PVC directly without having to create the persistent volume first.
- They create volumes (Cinder) that are later connected to the PVCs.
If we want to create Cinder volumes, once the cluster is configured to use Cinder as a storage provider, we have to create a storage class.
To create a storage class for Cinder:
1. Select the cluster at the global level, not at the project level (e.g. Default, System, …).
2. Select Storage Classes from the Storage menu.
3. Select Add Class to add a storage class.
4. Enter a name (e.g. Cinder) and choose OpenStack Cinder Volume as the storage provider. (In the Customize section you can configure whether the associated volumes are deleted or kept after a workload is deleted.)
Selecting OpenStack Cinder Volume as the storage provider will take the values configured in the cluster YAML in the section Configuring OpenStack as a volume provider.
Create a persistent volume
- Select the cluster at the global level, not at the project level ( Default, System, …).
- In the menu Storage select Persistent Volumes and press the button Add Volume.
- Enter a name (e.g. myPV), choose OpenStack Cinder Volume in Volume Plugin, and a capacity (e.g. 10 GB). In the Plugin Configuration section enter values for Volume ID, Secret Name, and Secret Namespace.
- In the Customize section, unfold Assign to Storage Class and select the cinder class created previously. (A YAML sketch of an equivalent manifest follows this list.)
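For reference, a rough YAML equivalent of a Cinder-backed persistent volume like the one above; a minimal sketch using the in-tree Cinder volume plugin, where the volume ID is a placeholder for a volume that already exists in OpenStack:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder
  cinder:
    volumeID: "<existing-cinder-volume-id>"   # placeholder: ID of a volume already created in OpenStack
    fsType: ext4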
Creating a persistent volume request
In Kubernetes, a Persistent Volume Claim (PVC) can be created either from a persistent volume or from a storage class. A PVC does provide real storage for our cluster: if we use Cinder, we will see the volumes created in OpenStack.
Let's start by creating a PVC. To do this, we select the Default project of our cluster and select the Workloads menu. Then we select the Volumes menu and press Add Volume.
Next we will have to indicate whether to create the PVC from a previously created persistent volume (e.g. mypv) or from one of the available storage classes (e.g. the cinder class). Let's see how to do both options:
- Creating the PVC from a persistent volume: Enter a name (e.g. pvc-from-persistent-volume), select the radio button Use an existing persistent volume in the Source section, and choose the persistent volume from the Persistent Volume dropdown (e.g. mypv).
- Creating the PVC from a storage class: Enter a name (e.g. pvc-from-storage-class), select the radio button Use a Storage Class to provision a new persistent volume in the Source section, and choose cinder from the Storage Class dropdown. This action will create in OpenStack a volume of the specified size, with a random name that corresponds to that of the PVC.
The volume created in OpenStack matches the PVC created from Rancher. In our case, the PVC is kubernetes-dynamic-pvc-7c9ebbab-aa63-11e9-b090-fa163ee64fe0.
Building volume-based applications
Rancher's application catalog is quite extensive. Many of the applications in the catalog allow the creation of volumes. For example, we can create a MySQL instance with an associated 8 GB volume, as shown in the figure.
Thus, by default we will have MySQL running in a single pod. Once created, if we log into MySQL, create a sample database, and delete the pod, the data will not be lost. The data is stored on a PVC, which is backed by a Cinder volume in OpenStack. When a new pod replaces the deleted one, after logging into it you can verify that it has access to the same data the deleted pod had.
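A quick way to reproduce this check from the command line; the pod and namespace names are illustrative and depend on how the catalog application was deployed:
kubectl get pvc -n mysql                                # the claim backing the MySQL data directory
kubectl exec -it mysql-0 -n mysql -- mysql -u root -p   # log in and create a sample database
kubectl delete pod mysql-0 -n mysql                     # the replacement pod reattaches the same PVC
openstack volume list                                   # the backing Cinder volume is still present in OpenStack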
Creating a volume request using YAML manifest
Both storage class and volume requests can be made from the Rancher console or by directly deploying YAML manifests.
Below is the YAML manifest for creating a storage class named cinder for OpenStack Cinder:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
We will create it with kubectl apply -f https://gist.githubusercontent.com/gitusername/c4a179accb12cf04a6af5fdc8f438f11/raw/bc274085607900f5335df8d5a2b5915e85b128f5/cinder-sc.yaml
See the Kubernetes storage providers in the official documentation. Once the storage class is created, we can create volume requests on top of it. Below is a YAML manifest for creating a 9 GB PVC over the storage class cinder defined above:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 9Gi # pass here the size of the volume
  storageClassName: cinder
We will create it with:
kubectl apply -f https://gist.githubusercontent.com/gitusername/788d6d7803cb32834c4afb96eeef6e5d/raw/dcf488634eac1bdbf0d4933e3eb2def55c8d8925/cinder-pvc.yaml
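From the command line we can also check that the claim was bound and the volume actually provisioned:
kubectl get pvc cinder-pvc   # STATUS should be Bound once Cinder provisions the volume
kubectl get pv               # shows the dynamically created persistent volume
openstack volume list        # the matching 9 GB volume appears in the OpenStack project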
After creating it, the volume will appear in the Rancher menu Storage | Persistent Volumes and in the OpenStack Volumes area (Volumes | Volumes).
Creating NFS volumes with OpenStack Manila on Kubernetes clusters
Kubernetes allows mounting existing NFS shares. NFS volumes are external volumes to the Kubernetes cluster and are persistent, so their content is preserved after pods that have it mounted are removed.
For this example we will use OpenStack Manila as the NFS server. For the examples we have already created a share, available at the path /var/lib/manila/mnt/share-c3e6b450-9e7b-4144-a113-84cbcd50ddd6 on server 192.168.128.17. This share already has the access rules configured so that it can be mounted.
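For completeness, a sketch of how a share like this can be created and opened for access with the Manila CLI; the share name and CIDR are illustrative, since in our case the share already existed:
manila create NFS 10 --name k8s-share                # create a 10 GB NFS share
manila access-allow k8s-share ip 192.168.128.0/24    # allow mounting from the cluster network
manila show k8s-share                                # the export location gives the server and path to use below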
Create a persistent volume
- Select the cluster at the global level, not at the project level ( Default, System, …).
- In the Storage menu select Persistent Volumes and press the button Add Volume.
- Enter a name (e.g. my-share-pv), choose NFS Share in Volume Plugin, and a capacity (e.g. 10 GB). In the Plugin Configuration section enter the values of path and server provided by OpenStack Manila for the created share.
Starting from this persistent volume, we could create volume claims on it from the Default project of the cluster, in Resources | Volumes | Add Volume, selecting Use an existing persistent volume in Source.
Creating a volume request using YAML manifest
Both persistent volumes and volume requests can be done from the Rancher console or by directly deploying YAML manifests.
Below is the YAML manifest for creating an NFS persistent volume for an OpenStack Manila share:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /var/lib/manila/mnt/share-c3e6b450-9e7b-4144-a113-84cbcd50ddd6
    server: 192.168.128.17
We will create it with kubectl apply -f https://gist.githubusercontent.com/gitusername/3fe119adcacf269442da0d0db9adcc3d/raw/1e54dd73a00bd55f70dd229fab1123c2a5d89787/nfs-pv.yaml
Once the persistent volume is created, we can create volume requests on it. Below is a YAML manifest for creating a 10 GB PVC on the volume nfs-pv defined above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
  volumeName: nfs-pv
We will create it with kubectl apply -f https://gist.githubusercontent.com/gitusername/4adbc23ae9237197be32d622bad8876c/raw/3403e9da298c971d739575e0aab8f221e99158e3/nfs-pvc.yaml
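To use the claim, a workload simply mounts it; a minimal sketch of a pod (name and image are illustrative) that mounts nfs-pvc:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # the contents of the NFS share appear here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-pvc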
Conclusions
Rancher offers an integrated platform for creating and maintaining Kubernetes clusters. If we use public providers (Google, Azure, Amazon, …) for deploying the infrastructure, it is enough to fill in the parameters provided by the provider. However, since each OpenStack cloud we have access to can have different configuration parameters (e.g. image names, flavors, external network, and so on), we must configure our own templates according to the OpenStack installation to which we have access.
In this tutorial we have seen how to create these templates and how to customize the configuration using YAML for OpenStack storage and load balancers. We have also seen how to create persistent storage for Kubernetes clusters using OpenStack Cinder and OpenStack Manila as storage providers.