## How to run the fully virtualized example
If you are interested in getting an OCP deployment running as quickly as
possible, you will find in the `/samples/ocp_on_libvirt` folder some
configuration examples to run the DCI OpenShift Agent in an “all-in-one”
fully virtualized environment. In this case, the agent will use libvirt
virtual machines deployed on top of the DCI Jumpbox. Please note that these
systems are nested virtual machines, at least in the case of the provision
host: the provisioner will spawn a bootstrap VM inside itself, which in our
case means a VM inside a VM. Please remember to enable nested virtualization
in your Jumpbox.
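As a quick sanity check, you can inspect the KVM kernel module parameters to see whether nested virtualization is already enabled. The sketch below assumes an Intel host (`kvm_intel`); on AMD hosts the equivalent path is `/sys/module/kvm_amd/parameters/nested`.

```shell
# Check whether nested virtualization is enabled on the Jumpbox.
# Assumption: Intel CPU (kvm_intel); on AMD hosts read
# /sys/module/kvm_amd/parameters/nested instead.
nested=$(cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo "N")
case "$nested" in
  Y|1) msg="nested virtualization: enabled" ;;
  *)   msg="nested virtualization: disabled or kvm_intel not loaded" ;;
esac
echo "$msg"
```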
The fully virtualized environment scenario requires the DCI Jumpbox to have at least 64 GiB of memory and 200 GiB of storage to host a virtual provision machine and all virtual masters.
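You can roughly compare the Jumpbox against these minimums with standard tools; the snippet below is only a sketch, and the `/var/lib/libvirt/images` path is an assumption about where the VM disks will live.

```shell
# Rough check of the Jumpbox against the suggested minimums
# (64 GiB of memory, 200 GiB of storage for the VM images).
# NOTE: /var/lib/libvirt/images is an assumed path; adjust to your storage pool.
mem_gib=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo 2>/dev/null)
disk_gib=$(df -BG --output=avail /var/lib/libvirt/images 2>/dev/null | tail -n 1 | tr -dc '0-9')
msg="memory: ${mem_gib:-?} GiB, free disk: ${disk_gib:-?} GiB"
echo "$msg"
```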
The provided example will create 4 systems (1 provisioner and 3 OCP masters)
on top of the DCI Jumpbox. The number of nodes can be adapted by modifying the
example's configuration.
This example will help you run the `dci-openshift-agent` within one single
system by using libvirt virtual machines. It is a good way to learn how the
`dci-openshift-agent` works (all the different steps, hooks, and settings),
and it can also be used as a development environment.
At this point, the DCI Jumpbox should be installed with all the above
prerequisites (see the main README.md for how to install the DCI Jumpbox).
The following documentation covers how to configure and deploy the virtual
systems and virtual networks. It will also guide you through generating and
using an appropriate settings file for this scenario. Please note that, in the
fully virtualized environment, the agent will create the OpenShift
provisioning node automatically.
First, you need to work directly as the `dci-openshift-agent` user:

```console
# su - dci-openshift-agent
$ id
uid=990(dci-openshift-agent) gid=987(dci-openshift-agent) groups=987(dci-openshift-agent),107(qemu),985(libvirt) ...
```
Then, run the `libvirt_up` playbook to configure the libvirt nodes.
This playbook will:

- Create 3 local virtual machines to be used as the System Under Test (the OCP masters)
- Create 1 local virtual machine to be used as the provision host
- Generate the corresponding `hosts` file (ready to be used as an inventory for the agent)
- Provide a `pre-run.yml` hook file to be used by the agent
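For reference, the generated inventory typically groups the provision host and the masters. The fragment below is purely illustrative; the group names and hostnames are assumptions, and the authoritative file is the `hosts` file generated by `libvirt_up`.

```ini
# Illustrative sketch only -- the real file is generated by libvirt_up.
# Group names and hostnames here are assumptions.
[provisioner]
provisionhost

[masters]
master-0
master-1
master-2
```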
```console
$ cd ~/samples/ocp_on_libvirt/
$ ansible-playbook -v libvirt_up.yml
```
Copy the newly created `hosts` file to the `/etc/dci-openshift-agent/` directory:

```console
$ pwd
~/samples/ocp_on_libvirt
$ sudo cp hosts /etc/dci-openshift-agent/
```
The `dci-openshift-agent` is now ready to run the “all-in-one” virtualized environment.
You can check the virtual machines' status by using `virsh`:

```console
$ sudo virsh list --all
 Id    Name             State
----------------------------------------------------
 60    provisionhost    running
 64    master-0         shut off
 65    master-2         shut off
 66    master-1         shut off
```
From here on out, you can run your agent normally as you would with baremetal
hardware; please refer to the "Starting the DCI OCP Agent" section of the main
README.md file for how to start the agent. The agent will see the virtualized
resources as regular resources thanks to SSH and VBMC emulation.
After you run a DCI job (see the main
README.md) you will be able to interact
with the RHOCP cluster:
```console
$ export KUBECONFIG=/home/admin/clusterconfigs/auth/kubeconfig
$ oc get pods --all-namespaces
```
In case you need to delete the fully virtualized environment, you can run the
`libvirt_destroy` playbook:

```console
$ cd samples/ocp_on_libvirt/
$ ansible-playbook -v libvirt_destroy.yml
```
We have provided dnsmasq configuration templates in the samples directory to serve DHCP/DNS from the DCI Jumpbox if you do not already have a DNS/DHCP server on your bare-metal network.
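For illustration, a minimal dnsmasq configuration for this kind of setup could look like the following. The interface name, domain, and addresses are placeholder assumptions; the templates shipped in the samples directory are authoritative.

```
# Illustrative dnsmasq sketch only -- see the templates in the samples
# directory for the real configuration. The interface, domain, and
# addresses below are placeholder assumptions.
interface=baremetal
domain=example.lab
dhcp-range=192.168.111.20,192.168.111.60
dhcp-host=52:54:00:aa:bb:cc,provisionhost,192.168.111.10
address=/api.ocp.example.lab/192.168.111.5
```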
The KNI page offers a good starting point for understanding and debugging your libvirt environments.
Furthermore, some issues (see below) are specific to a libvirt (or otherwise small) environment.
Due to the lack of hardware resources (CPU, memory) and the fact that all resources are virtualized, the installation may take longer to complete, and a recurring timeout may be reached during the bootstrap phase.
Two parameters are available to increase these timeouts: `increase_bootstrap_timeout` and `increase_install_timeout`.
```yaml
- name: "installer : Run IPI installer"
  import_role:
    name: installer
  vars:
    increase_bootstrap_timeout: 2
    increase_install_timeout: 2
```
If you need to troubleshoot this environment for bootstrap/install issues, please follow the "Troubleshooting" section(s) in the main README.