How to set up the Multus CNI phase 1 demo environment

Tomofumi Hayashi edited this page May 31, 2018 · 1 revision

This document describes how to set up a lab environment for the Multus CNI phase 1 demo at the Kubernetes Network SIG meeting.

Requirements

  • CentOS with a libvirt/KVM environment (i.e. virsh is available)

    In this document, the VMs (kube-master and the worker nodes) are created on this CentOS machine.

  • Ansible

  • Kube-ansible
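
Before starting, it may help to confirm that the prerequisite tools are on the PATH of the virtualization host. This quick check is an addition to this guide, not part of kube-ansible itself (ansible-galaxy ships with Ansible, and kube-ansible is cloned in step 1):

```shell
# Sanity check (not part of the original procedure): verify the required
# tools are installed on the CentOS virtualization host.
for cmd in virsh ansible ansible-playbook ansible-galaxy git; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```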

Procedure

1) Get kube-ansible and prepare the inventory

$ sudo -i
$ cd <working directory>
$ git clone https://github.com/redhat-nfvpe/kube-ansible.git
$ cd kube-ansible
$ ansible-galaxy install -r requirements.yml
$ cat << EOM >> inventory/virthost.inventory 
vmhost ansible_host=127.0.0.1 ansible_ssh_user=root

[virthost]
vmhost
EOM
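
The inventory above assumes root SSH access to localhost. If the virtualization host is remote, or a specific SSH key is required, standard Ansible inventory variables can be added to the host line. The values below are illustrative, not taken from this demo:

```ini
vmhost ansible_host=192.0.2.10 ansible_ssh_user=root ansible_ssh_private_key_file=~/.ssh/id_rsa

[virthost]
vmhost
```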

2) Run ansible-playbook to set up the virtual machines

$ ansible-playbook -i inventory/virthost.inventory \
  -e "network_type=npwg-poc1" playbooks/virthost-setup.yml

Once the playbook finishes successfully, ./inventory/vms.local.generated is created.

3) Run ansible-playbook to install Kubernetes on the VMs

$ ansible-playbook -i inventory/vms.local.generated \
  -e @"./inventory/examples/npwg-demo-1/extra-vars.yml" \
  -e "multus_version=dev/network-plumbing-working-group-crd-change" \
  playbooks/kube-install.yml

4) SSH into the kube-master VM

$ ssh -i ~/.ssh/virt_host/id_vm_rsa centos@kube-master
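
The hostname kube-master is assumed to resolve from the virtualization host. If it does not, an entry in ~/.ssh/config (a local convenience, not part of the playbooks) makes the command above work as written; take the IP from inventory/vms.local.generated:

```
Host kube-master
    HostName <IP of kube-master from inventory/vms.local.generated>
    User centos
    IdentityFile ~/.ssh/virt_host/id_vm_rsa
```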

5) Create CRDs, network attachment objects and pods with network attachments.

# check node readiness
$ kubectl get node -o wide
# check all pods are ready
$ kubectl get pod --all-namespaces -o wide

# Clone multus-cni (branch dev/network-plumbing-working-group-crd-change)
$ git clone https://github.com/intel/multus-cni.git
$ cd multus-cni
$ git checkout dev/network-plumbing-working-group-crd-change

$ cd examples/npwg-demo-1
# Create CRD definition 
$ kubectl create -f 01_crd.yml
# Create the clusterrole and bind it to all nodes
$ kubectl create -f 02_clusterrole.yml
$ for n in kube-master kube-node-1 kube-node-2 kube-node-3; do
kubectl create clusterrolebinding rolebind-multus-crd-overpowered-$n \
     --clusterrole=multus-crd-overpowered --user=system:node:$n
kubectl create clusterrolebinding rolebind-multus-crd-overpowered-$n-testns1 \
     --namespace=testns1 --clusterrole=multus-crd-overpowered --user=system:node:$n
done
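
The loop above grants the (intentionally broad) multus-crd-overpowered clusterrole to each node's kubelet identity, creating two bindings per node. The names it generates can be previewed without touching the cluster:

```shell
# Preview the clusterrolebinding names created by the loop above
# (kubectl replaced by echo; no cluster access needed).
for n in kube-master kube-node-1 kube-node-2 kube-node-3; do
  echo "rolebind-multus-crd-overpowered-$n"
  echo "rolebind-multus-crd-overpowered-$n-testns1"
done
```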

# Create new namespace
$ kubectl create -f 03_namespace1.yml
# Create network attachment objects
$ kubectl create -f 04_macvlan1.yml
$ kubectl create -f 05_vlan1.yml
$ kubectl create -f 06_flannel2.yml

# Create pods that use the above network attachment objects.
$ kubectl create -f 11_pod_case1.yml
$ kubectl create -f 12_pod_case2.yml
$ kubectl create -f 13_pod_case3.yml
$ kubectl create -f 14_pod_case4.yml
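
Once the pods are up, the extra attachments can be verified. The commands below are a sketch, not part of the original procedure; pod names depend on the example YAML files:

```shell
# List the pods created above across all namespaces.
kubectl get pod --all-namespaces -o wide
# Pick the first pod name in the current namespace; with Multus attachments
# the pod should show extra interfaces besides eth0.
pod=$(kubectl get pod -o name | head -n 1 | cut -d/ -f2)
kubectl exec "$pod" -- ip addr
```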

Clean up all VMs

<Exit from the kube-master VM back to the virtualization host, then run:>
$ ansible-playbook -i inventory/virthost.inventory playbooks/vm-teardown.yml