Prerequisites
We assume that the Kubernetes cluster is up and running with at least one master and one worker node.
Credentials have been properly configured, and kubectl can be used on at least one of the hosts.
Download all the YAML files from the Git repo and switch to the directory that contains the configuration files.
To speed up Docker image downloads, we set up a local Docker registry on the master host, 192.168.1.102. Here is the command to start a local registry; create a local directory first so the Docker images persist.
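A minimal sketch of starting the registry; the persistent directory path /opt/registry is an assumption, so adjust it to your environment:

```shell
# Create a local directory so pushed images survive container restarts
mkdir -p /opt/registry

# Start the registry container on port 5000, backed by the local directory
docker run -d -p 5000:5000 --restart=always --name registry \
  -v /opt/registry:/var/lib/registry registry:2
```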
You can check the images in the registry by visiting http://192.168.1.102:5000/v2/_catalog
Create a New Namespace
There are several pre-defined namespaces for different purposes (default, kube-system, kube-public). The file namespace.yaml defines a new namespace named iot2cloud. Here is the namespace.yaml file:
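A Namespace manifest of this shape produces the iot2cloud namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: iot2cloud
```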
We create a new namespace for our applications.
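The namespace is created from the manifest:

```shell
kubectl create -f namespace.yaml
```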
You can check the namespaces
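For example:

```shell
kubectl get namespaces
```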
Voilà, we have a new namespace. To simplify subsequent commands, we create an alias for the namespace in the context. Alternatively, you can make it persistent by adding it to your $HOME/.bashrc file.
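Two ways to scope commands to iot2cloud; the alias name kcn is an assumption:

```shell
# Option 1: set the namespace on the current context (persists in your kubeconfig)
kubectl config set-context "$(kubectl config current-context)" --namespace=iot2cloud

# Option 2: a shell alias; add this line to $HOME/.bashrc to make it persistent
alias kcn='kubectl --namespace=iot2cloud'
```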
Attach Labels to Nodes
In our scenario, some of the hosts are resource-constrained. We would like to assign all the cloud applications to the master and the others to the worker nodes; the nodeSelector field helps us make this happen. Add labels to categorize the nodes and check the results:
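A sketch of the labeling step; the node names and the label key node-type are assumptions, so substitute your actual node names:

```shell
# Categorize nodes: the master runs the cloud applications, the NUCs run the rest
kubectl label nodes master node-type=cloud
kubectl label nodes gateway1 node-type=nuc
kubectl label nodes gateway2 node-type=nuc
kubectl label nodes gateway3 node-type=nuc

# Verify the labels
kubectl get nodes --show-labels
```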
Set up MariaDB
Create Secrets and ConfigMaps
Kubernetes introduces the Secret concept to hold sensitive information such as passwords, tokens, key pairs, etc. We keep our MariaDB password in a Secret, and the MariaDB pod will set the password from the Secret for the root user.
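A sketch of creating the Secret from the command line; the Secret name, key, and password value are assumptions:

```shell
kubectl --namespace=iot2cloud create secret generic mariadb-root-password \
  --from-literal=password='changeme'
```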
You can check that the secret was created like this:
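For example (the Secret name is an assumption):

```shell
kubectl --namespace=iot2cloud get secrets
kubectl --namespace=iot2cloud describe secret mariadb-root-password
```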
Create MariaDB Pod and Service
Remember to update image: 192.168.1.102:5000/mariadb:latest in the mariadb-master-service.yaml file to the actual registry you use. Make the same change in all the other YAML files.
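Then create the MariaDB pod and service from the manifest:

```shell
kubectl --namespace=iot2cloud create -f mariadb-master-service.yaml
```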
Check the pods and service status:
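For example:

```shell
kubectl --namespace=iot2cloud get pods -o wide
kubectl --namespace=iot2cloud get services
```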
Optionally, if you are going to run MariaDB in master-slave mode, ensure that there is more than one node labeled nuc in the cluster and run:
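A sketch; the slave manifest filename is an assumption patterned after mariadb-master-service.yaml:

```shell
kubectl --namespace=iot2cloud create -f mariadb-slave-service.yaml
```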
Run RabbitMQ Service
The RabbitMQ service exposes port 5672 to serve requests.
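To start it, create the service from its manifest; the filename is an assumption:

```shell
kubectl --namespace=iot2cloud create -f rabbitmq-service.yaml
```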
Create IoT Rest API Service and Gateway Server
There are two containers running in one pod: one for iot-rest-api-service and the other for the gateway server.
The replica count is set to 3, meaning 3 pods will be created, spread evenly across the gateway1 to gateway3 hosts.
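The pods can be created from the manifest; the filename is an assumption:

```shell
kubectl --namespace=iot2cloud create -f iot-rest-api-service.yaml
```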
You can get the NodePort of the REST API service:
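For example (the service name is an assumption):

```shell
kubectl --namespace=iot2cloud get service iot-rest-api-service \
  -o jsonpath='{.spec.ports[0].nodePort}'
```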
And browse the REST service via http://192.168.1.102:32626/api/oic/res
Run Home Dashboard
Two pods will be created for the home dashboard, and the database will be initialized after running the commands below:
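A sketch; the manifest filename is an assumption:

```shell
kubectl --namespace=iot2cloud create -f home-dashboard.yaml
```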
Get the NodePort:
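For example (the service name is an assumption):

```shell
kubectl --namespace=iot2cloud get service home-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}'
```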
Then you can log in to the home portal via http://192.168.1.102:31328/ (substitute the NodePort you got from the kubectl command).
Start Admin Portal
Run the following commands to start the admin portal. The admin portal can only run in a single pod because the trained models are stored in the local file system and are not yet shared between pods.
Update the http_proxy and https_proxy environment variables to empty strings in admin-portal.yaml if a proxy is not required. Then run:
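The admin portal is created from its manifest:

```shell
kubectl --namespace=iot2cloud create -f admin-portal.yaml
```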
Get the NodePort and visit the admin portal. Next, point the demo gateway's IP address to http://gateway.iot2cloud:8000/.
Run Celery Worker and Trigger Tasks
The Celery worker is simply a worker process, so no Service definition is required. There are two containers in the pod: one for long-running tasks and the other for periodic tasks.
Run the command below to initialize the worker:
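A sketch; the manifest filename is an assumption:

```shell
kubectl --namespace=iot2cloud create -f celery-worker.yaml
```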
And run this command to trigger the tasks (I will try to eliminate this manual step later):
BKMs
Restart pods: in some cases, pods hit errors or fail to restart on their own, and the pod must be restarted to recover the application. There is no direct command to restart a pod. You can restart pods by:
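A common pattern is to delete the pod from its manifest and re-create it; mariadb-master-service.yaml is used here as an example:

```shell
kubectl --namespace=iot2cloud delete -f mariadb-master-service.yaml
kubectl --namespace=iot2cloud create -f mariadb-master-service.yaml
```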
If you cannot find the YAML file immediately, you can run the command below instead:
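One way to do this without the original manifest; the pod name is a placeholder:

```shell
# Export the live pod spec and force-replace it, which deletes and re-creates the pod
kubectl --namespace=iot2cloud get pod <pod-name> -o yaml | kubectl replace --force -f -
```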
Pod log outputs
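Logs can be inspected per container; the -c flag selects a container in a multi-container pod (pod and container names are placeholders):

```shell
kubectl --namespace=iot2cloud logs <pod-name> -c <container-name>

# Follow the log stream
kubectl --namespace=iot2cloud logs -f <pod-name> -c <container-name>
```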