Cisco ACI integration with Docker UCP
Details of integrating Docker UCP with Cisco ACI
Cisco ACI CNI integration with Docker UCP
Recently I got a chance to try out Cisco ACI (Application Centric Infrastructure) with Docker UCP (Universal Control Plane). Docker UCP is a centralized cluster management tool for managing a Docker Swarm cluster. From version 3.x onward, Docker UCP also installs Kubernetes components by default, which is a nice feature addition. A good introduction can be found here: docker-ucp. It gives the user the ability to manage the Swarm cluster as well as the Kubernetes infrastructure seamlessly. Docker UCP comes with a browser-based user interface that can be accessed on port 443 of the UCP manager host. The UCP user interface provides a common view of both the Docker Swarm cluster and the Kubernetes cluster. Let's start with step-by-step instructions to install Docker UCP with the ACI CNI.
Note - We will cover ACI fabric related objects and components in another post.
System requirements
We will create a cluster with one manager and one worker node. However, you can add as many manager and worker nodes as needed.
For manager node:
CPU - 16 core
RAM - 16GB
OS - Ubuntu 18.04/CentOS 7
This document uses Ubuntu as the OS, but CentOS works identically.
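To quickly confirm a node meets these requirements, you can check the basics from a shell (a minimal sketch for Ubuntu 18.04):
nproc           # CPU core count
free -h         # total RAM
lsb_release -ds # OS release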
Docker EE installation
To start with Docker EE, register your Docker Hub account for a trial of Docker EE. This gives you a free 30-day trial. Copy the license URL and set it up on all the nodes; I added it to the ~/.bashrc file.
export DOCKER_EE_URL="https://storebits.docker.com/ee/ubuntu/sub-xxx-yy-aa-zzz-bbbbb"
export DOCKER_EE_VERSION=18.09
The URL and version will appear on the right-hand side of your Docker Hub page. Replace the URL and version above before adding them to ~/.bashrc.
Install prerequisites on all nodes
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
unzip \
jq \
software-properties-common
Add the GPG key and source repo of Docker EE on all nodes
curl -fsSL "${DOCKER_EE_URL}/ubuntu/gpg" | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=$(dpkg --print-architecture)] $DOCKER_EE_URL/ubuntu \
$(lsb_release -cs) \
stable-$DOCKER_EE_VERSION"
sudo apt-get update
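Optionally, confirm the repository was added correctly and see which docker-ee packages are available (output will vary with your subscription):
apt-cache madison docker-ee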
Install docker-ee on all nodes
sudo apt-get install docker-ee docker-ee-cli containerd.io
Verify your installation
sudo docker ps
sudo docker run hello-world
sudo docker ps -a
(Optional step) Add user to docker group
sudo usermod -aG docker ${USER}
logout
If you log back in now, you no longer need to run docker commands with sudo. Instead,
docker ps
will work just fine. Now you are done with the Docker installation on all nodes. Let's proceed with the UCP installation.
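A quick way to confirm the group change took effect after logging back in (a minimal check):
id -nG | grep -w docker     # should list the docker group
docker run --rm hello-world # should now work without sudo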
UCP installation
Each UCP version has slight variations in its installation parameters. At the time I wrote this post, I tried UCP 3.1.5 and 3.1.4, and found that UCP 3.1.5 had some issues installing the CNI. That may have been resolved by now.
I have chosen UCP 3.1.4 for this installation.
Pre-requisite for Docker Swarm Cluster and UCP installation
Make sure the ports below are enabled on the system firewall. These ports are used by Swarm and UCP components to form the cluster and to allow user access to it. Execute the following as the root user.
ufw allow 22/tcp
ufw allow 179/tcp
ufw allow 443/tcp
ufw allow 2376/tcp
ufw allow 2377/tcp
ufw allow 4789/udp
ufw allow 6443/tcp
ufw allow 6444/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 9099/tcp
ufw allow 10250/tcp
ufw allow 12376/tcp
ufw allow 12378:12386/tcp
ufw allow 12388/tcp
ufw enable
ufw reload
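You can verify that the rules were applied with:
ufw status numbered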
Install UCP-3.1.4
Log into your manager node. Execute the following commands:
CNI_URL=https://raw.githubusercontent.com/Vikash082/opflex-cni-test/test_1/data/aci_deployment.yaml
docker container run --rm -it --name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.1.4 install \
--host-address <host_address> \
--unmanaged-cni true \
--cni-installer-url ${CNI_URL} \
--interactive
Let’s break down the options used in the above docker command.
- CNI_URL - This is the URL of the deployment file that UCP uses to install the ACI CNI. The installation will bring up the ACI CNI containers on the manager and worker nodes. You can also create your own deployment file using the acc-provision tool (see the sketch after this list).
- docker/ucp:3.1.4 - This tells Docker to pull the UCP 3.1.4 image.
- host-address - Routable IP of your host machine.
- unmanaged-cni - true|false - If set to false, UCP will use the default Calico plugin for networking.
- interactive - This will do the installation in interactive mode.
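If you prefer to generate the deployment file yourself, Cisco's acc-provision tool can produce it from your fabric configuration. A rough sketch (the config file name, flavor, and credentials below are placeholders for your environment):
acc-provision -c aci-containers-config.yaml \
-f <flavor> \
-u <apic_admin_user> -p <apic_password> \
-o aci_deployment.yaml
# -c : input file describing your ACI fabric (APIC hosts, VLANs, AEP, etc.)
# -f : flavor matching your UCP/Kubernetes version
# -o : generated deployment YAML to point --cni-installer-url at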
Once the installation completes successfully, it will print the following information.
INFO[0053] Login to UCP at https://<host_address>:443
INFO[0053] Username: admin
INFO[0053] Password: XXaabbZZYY123EErrwww
You can now log in to the UCP UI and browse the Swarm cluster and the deployed Kubernetes components.
You can set the admin user and its password at the time of UCP installation. Use the options:
--admin-username
--admin-password
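For example, a non-interactive install that sets the credentials up front could look like this (a sketch; the password is a placeholder):
docker container run --rm -it --name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.1.4 install \
--host-address <host_address> \
--unmanaged-cni true \
--cni-installer-url ${CNI_URL} \
--admin-username admin \
--admin-password <strong_password>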
By default, the UCP installation will initialize Swarm if it is not already set up. You can verify the Swarm cluster using the following command.
docker node ls # Prints swarm node details.
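You can also confirm the node's swarm state directly (a quick check):
docker info --format '{{.Swarm.LocalNodeState}}'   # should print "active"
docker info --format '{{.Swarm.ControlAvailable}}' # "true" on a manager node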
Verify docker components
Log in to the manager node. Use the command below and check that the UCP containers are up and running.
docker ps
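For a more focused view, you can filter the output to the UCP containers only (a minimal sketch):
docker ps --filter "name=ucp" --format "table {{.Names}}\t{{.Status}}"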
Verify Kubernetes pods
On your manager node, use the command below to verify the ACI pods.
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aci-containers-controller-79f9476556-2bwv2 2/2 Running 2 2d
kube-system aci-containers-host-2chlq 4/4 Running 0 2d
kube-system aci-containers-host-6fxrs 4/4 Running 4 2d
kube-system aci-containers-openvswitch-58dt6 1/1 Running 1 2d
kube-system aci-containers-openvswitch-x756c 1/1 Running 0 2d
kube-system compose-64cf857c47-rbgmt 1/1 Running 1 2d
kube-system compose-api-6bddf48f67-krg55 1/1 Running 2 2d
kube-system kube-dns-6d79cfd8c-rc7df 3/3 Running 3 2d
kube-system ucp-metrics-fw6d9 3/3 Running 28 2d
- aci-containers-controller-xxx-yy - This is the ACI controller pod and it runs only on the manager node. It runs the aci-containers-controller and aci-gbpserver containers.
- aci-containers-host-xxxyy - This is a DaemonSet and runs on all the nodes. The pod runs four containers, namely:
- aci-containers-host
- opflex-agent
- opflex-server
- mcast-daemon
- aci-containers-openvswitch-xxxxx - This container runs Cisco's version of Open vSwitch. It runs as a DaemonSet on all the nodes.
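To see exactly which containers run inside one of these pods, you can query the pod directly (replace the pod name with one from your own kubectl get pods output):
kubectl get pod aci-containers-host-2chlq -n kube-system -o jsonpath='{.spec.containers[*].name}'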
Summary
The above steps work for any CNI supported by Docker. Check the Docker documentation to find all the supported CNIs. I also tried Calico and Flannel in a similar way with UCP.