The OVS Plugin¶
The OVS Plugin was not maintained in some CloudStack versions; please use a recent CloudStack release. If you wish to use OVS as the default networking backend on Linux, follow only the Agent Configuration part of this guide. CloudStack will automatically detect it based on the configuration in the agent.properties file, even though the OVS Plugin is not shown in the Network Service Providers list.
Introduction to the OVS Plugin¶
The OVS plugin is the native SDN implementation in CloudStack, using the GRE isolation method. The plugin can be used by CloudStack to implement isolated guest networks and to provide additional services like NAT, port forwarding and load balancing.
Features of the OVS Plugin¶
The following table lists the CloudStack network services provided by the OVS Plugin.
| Network Service | CloudStack version |
| --- | --- |
| Virtual Networking | >= 4.0 |
| Static NAT | >= 4.3 |
| Port Forwarding | >= 4.3 |
| Load Balancing | >= 4.3 |

Table: Supported Services
The Virtual Networking service was originally called ‘Connectivity’ in CloudStack 4.0
The following hypervisors are supported by the OVS Plugin.

| Hypervisor | CloudStack version |
| --- | --- |
| XenServer | >= 4.0 |
| KVM | >= 4.3 |

Table: Supported Hypervisors
Configuring the OVS Plugin¶
Before enabling the OVS plugin, Open vSwitch must be installed on the hypervisor. XenServer ships with Open vSwitch by default, but on KVM you must install it manually. Open vSwitch 1.10 is recommended.
- CentOS 6.4 is recommended.
- To make sure that the native bridge module will not interfere with Open vSwitch, the bridge module should be added to the blacklist. See the modprobe documentation for your distribution on where to find the blacklist. Make sure the module is not loaded, either by rebooting or by executing rmmod bridge, before executing the next steps.
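On a CentOS-style system, the blacklisting step above can be sketched as follows (the blacklist file name is illustrative; any `.conf` file under /etc/modprobe.d/ works):

```shell
# Prevent the native Linux bridge module from loading at boot
# (file name is illustrative; any .conf file under /etc/modprobe.d/ works)
echo "blacklist bridge" >> /etc/modprobe.d/blacklist-bridge.conf

# Unload the module now if it is already loaded (or simply reboot)
rmmod bridge

# Verify that the module is no longer loaded
lsmod | grep -w bridge
```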
CloudStack needs to have at least one physical network with the isolation method set to “GRE”. This network should be enabled for the Guest traffic type.
With KVM, the traffic type should be configured with the traffic label that matches the name of the Integration Bridge on the hypervisor. For example, you should set the traffic labels as follows:
- Management & Storage traffic: cloudbr0
- Guest & Public traffic: cloudbr1

See the KVM networking configuration guide for more detail.
Only for KVM hypervisor
Configure network interfaces:
/etc/sysconfig/network-scripts/ifcfg-eth0

```
DEVICE=eth0
BOOTPROTO=none
IPV6INIT=no
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=cloudbr0
```

/etc/sysconfig/network-scripts/ifcfg-eth1

```
DEVICE=eth1
BOOTPROTO=none
IPV6INIT=no
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=cloudbr1
```

/etc/sysconfig/network-scripts/ifcfg-cloudbr0

```
DEVICE=cloudbr0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=172.16.10.10
GATEWAY=172.16.10.1
NETMASK=255.255.255.0
HOTPLUG=no
```

/etc/sysconfig/network-scripts/ifcfg-cloudbr1

```
DEVICE=cloudbr1
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=none
HOTPLUG=no
```

/etc/sysconfig/network

```
NETWORKING=yes
HOSTNAME=testkvm1
GATEWAY=172.16.10.1
```
Enabling the service provider¶
The OVS provider is disabled by default. Navigate to the “Network Service Providers” configuration of the physical network with the GRE isolation type. Navigate to the OVS provider and press the “Enable Provider” button.
Using the OVS plugin requires a network offering with Virtual Networking enabled and configured to use the OVS element. Typical use cases combine services from the Virtual Router appliance and the OVS plugin.
Table: Isolated network offering with regular services from the Virtual Router.
The tag in the network offering should be set to the name of the physical network with the OVS provider.
Isolated network with network services. The virtual router is still required to provide network services like DNS and DHCP.
Table: Isolated network offering with network services
Using the OVS plugin with VPC¶
The OVS plugin does not work with VPC at this time.
Agent configuration for DPDK support¶
Since version 4.12 it is possible to enable DPDK support in CloudStack along with the OVS plugin.
Edit /etc/cloudstack/agent/agent.properties to enable DPDK support on the agent and on the ovs-vsctl commands used for port creation, and to set the path to the OVS ports (usually /var/run/openvswitch).
The agent should be restarted for the changes to take effect.
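A sketch of the relevant agent.properties entries; the property names below are assumptions based on the agent's DPDK support, so verify them against your CloudStack version:

```properties
# Enable DPDK support on the agent and on port-creation commands (assumed property name)
openvswitch.dpdk.enabled=true
# Path where OVS creates the vhost-user ports (assumed property name)
openvswitch.dpdk.ovs.path=/var/run/openvswitch
```

Restart the agent afterwards, e.g. with `systemctl restart cloudstack-agent`.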
When the host agent connects to the management server, it sends the list of host capabilities. When DPDK support is enabled on the host, a capability named ‘dpdk’ is sent to the management server. The list of host capabilities is persisted in the ‘capabilities’ column of the ‘hosts’ table, and can be retrieved via the ‘listHosts’ API method:
```
list hosts id=HOST_ID filter=capabilities
```
Additional VM configurations¶
In order to enable DPDK on VM deployments, users should pass additional configuration to VMs. The required configurations are listed in the next section. Administrators can allow users to pass additional configurations to their VMs via the account-scoped setting:
Users are able to pass extra configurations as part of the ‘deployVirtualMachine’ or ‘updateVirtualMachine’ API methods. These extra configurations are included in the resulting XML domain of the virtual machine and are also persisted in the CloudStack database as details in the ‘user_vm_details’ table.
The ‘deployVirtualMachine’ and ‘updateVirtualMachine’ API methods accept a UTF-8 URL-encoded string parameter ‘extraconfig’.
The parameter is decoded following these rules:
- There could be multiple XML sections, separated by a new line
- Each section can be named, setting a title ending on ‘:’ at the first line
- Double quotes instead of single quotes should be used
- Configurations are persisted as VM details, with the key: ‘extraconfig-TITLE’ or ‘extraconfig-N’ where N is a number.
In order to pass the extra configuration below to the VM, named ‘config-1’:

```
config-1: <tag> <inner-tag>VALUE</inner-tag> </tag>
```
The ‘extraconfig’ parameter should receive the UTF-8 URL encoded string `config-1%3A%20%3Ctag%3E%20%3Cinner-tag%3EVALUE%3C%2Finner-tag%3E%20%3C%2Ftag%3E`.
On ‘user_vm_details’ table the additional configuration is persisted with key: ‘extraconfig-config-1’
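As a sketch in Python (using the standard library's urllib), the encoding step above can be produced like this:

```python
from urllib.parse import quote

# Named extra configuration section from the example above
extraconfig = "config-1: <tag> <inner-tag>VALUE</inner-tag> </tag>"

# URL-encode every character outside the unreserved set (safe="" also
# encodes '/'), producing the string passed as the 'extraconfig' parameter
encoded = quote(extraconfig, safe="")
print(encoded)
# → config-1%3A%20%3Ctag%3E%20%3Cinner-tag%3EVALUE%3C%2Finner-tag%3E%20%3C%2Ftag%3E
```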
Additional configurations to enable DPDK on VMs¶
To enable DPDK on VM deployments:
Set the global configuration to ‘true’ (as global setting or account setting)
Generate the UTF-8 URL encoded additional configuration to enable huge pages and NUMA; examples below:

```
dpdk-hugepages:
<memoryBacking>
  <hugepages/>
</memoryBacking>

dpdk-numa:
<cpu mode="host-passthrough">
  <numa>
    <cell id="0" cpus="0" memory="9437184" unit="KiB" memAccess="shared"/>
  </numa>
</cpu>
```
Pass the ‘extraconfig’ parameter to the ‘deployVirtualMachine’ or ‘updateVirtualMachine’ API methods as a single UTF-8 URL encoded string containing the extra configurations shown above. Note: if multiple extra configurations are needed, add new sections separated by an empty line, encode the whole string, and pass it to the API as the single ‘extraconfig’ parameter.
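A minimal Python sketch of that multi-section encoding step, using the section contents from the examples above:

```python
from urllib.parse import quote

# Two named sections separated by an empty line, as described above
sections = (
    "dpdk-hugepages: <memoryBacking> <hugepages/> </memoryBacking>"
    "\n\n"
    'dpdk-numa: <cpu mode="host-passthrough"> <numa> '
    '<cell id="0" cpus="0" memory="9437184" unit="KiB" memAccess="shared"/> '
    "</numa> </cpu>"
)

# Encode the whole string once and pass it as the single 'extraconfig' parameter
extraconfig = quote(sections, safe="")
print(extraconfig)
```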
Additionally, users can pass an extra configuration named ‘dpdk-interface-TAG’ to be included in the VM's interface definitions. Example below:

```
dpdk-interface-model: <model type='virtio'/>
```
DPDK vHost User mode selection¶
The vHost user mode describes a client/server model between Open vSwitch with DPDK and QEMU, in which one side acts as the client and the other as the server. The server creates and manages the vHost user sockets, and the client connects to the sockets created by the server:
- DPDK vHost user server mode:
- Is the default configuration.
- OVS with DPDK acts as the server, while QEMU acts as the client.
- The port types used are: dpdkvhostuser
- DPDK vHost user client mode:
- OVS with DPDK acts as the client and QEMU acts as the server.
- If Open vSwitch is restarted, its sockets can reconnect to the existing sockets on the server, and normal connectivity is resumed.
- The port types used are: dpdkvhostuserclient
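For illustration, a DPDK vhost-user port of each type could be created with ovs-vsctl roughly as follows (bridge and port names are hypothetical, and CloudStack normally creates these ports itself):

```shell
# Server mode: OVS creates and manages the socket (default)
ovs-vsctl add-port br0 vhost-user-1 \
  -- set Interface vhost-user-1 type=dpdkvhostuser

# Client mode: OVS connects to a socket created by QEMU
ovs-vsctl add-port br0 vhost-client-1 \
  -- set Interface vhost-client-1 type=dpdkvhostuserclient \
     options:vhost-server-path=/tmp/vhost-client-1.sock
```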
Applying additional configurations via service offerings¶
It is possible to avoid passing the additional configuration on each VM deployment by setting these configurations on a service offering; they are then passed on to the VM.
- To create a service offering with additional configurations, pass each key/value pair as a service offering detail on service offering creation, with keys starting with the “extraconfig” keyword and each value a UTF-8 URL encoded string.
- Additional configurations are stored as service offering details.
For example, applying DPDK additional configurations via service offering:
```
create serviceoffering name=<NAME> displaytext=<NAME> serviceofferingdetails.key=extraconfig-dpdk-hugepages serviceofferingdetails.value=%3CmemoryBacking%3E%20%3Chugepages%2F%3E%20%3C%2FmemoryBacking%3E serviceofferingdetails.key=extraconfig-dpdk-numa serviceofferingdetails.value=%3Ccpu%20mode%3D%22host-passthrough%22%3E%20%3Cnuma%3E%20%3Ccell%20id%3D%220%22%20cpus%3D%220%22%20memory%3D%229437184%22%20unit%3D%22KiB%22%20memAccess%3D%22shared%22%2F%3E%20%3C%2Fnuma%3E%20%3C%2Fcpu%3E
```
The preferred DPDK vHost User Mode must be passed as a service offering detail, with special key name: “DPDK-VHOSTUSER”. Possible values are: “client” or “server”. The following table illustrates the expected behaviour on DPDK ports and VM guest interfaces.
By default, the server mode is assumed if it is not passed as a service offering detail.
| DPDK vHost User Mode | OVS port creation type | VM guest interface mode |
| --- | --- | --- |
| server | dpdkvhostuser | client |
| client | dpdkvhostuserclient | server |
```
create serviceoffering name=<NAME> displaytext=<NAME> serviceofferingdetails.key=DPDK-VHOSTUSER serviceofferingdetails.value=client serviceofferingdetails.key=extraconfig-dpdk-hugepages serviceofferingdetails.value=%3CmemoryBacking%3E%20%3Chugepages%2F%3E%20%3C%2FmemoryBacking%3E serviceofferingdetails.key=extraconfig-dpdk-numa serviceofferingdetails.value=%3Ccpu%20mode%3D%22host-passthrough%22%3E%20%3Cnuma%3E%20%3Ccell%20id%3D%220%22%20cpus%3D%220%22%20memory%3D%229437184%22%20unit%3D%22KiB%22%20memAccess%3D%22shared%22%2F%3E%20%3C%2Fnuma%3E%20%3C%2Fcpu%3E
```
DPDK VMs live migrations¶
It is possible to perform live migrations of DPDK enabled VMs since CloudStack version 4.13. DPDK enabled VMs can be migrated between hosts in the same cluster which are both DPDK enabled.
CloudStack determines that a VM is DPDK enabled when the following conditions are met:
- The VM is a user VM
- The VM state is Running
- The host on which the VM is running is a DPDK enabled host (i.e. the host reports the ‘dpdk’ capability as part of its capabilities; check Agent configuration for DPDK support.)
- The VM acquires the DPDK required configurations via VM details or service offering details. The DPDK required additional configurations are those named ‘dpdk-hugepages’ and ‘dpdk-numa’ (see above).
DPDK enabled VMs can only be migrated between DPDK enabled hosts. Therefore the ‘findHostsForMigration’ API method excludes non-DPDK enabled hosts from the list of suitable hosts to migrate DPDK enabled VMs.
When a VM is created or started, CloudStack creates ports with DPDK support with the format “csdpdk-N”, where N is a number incremented on each new port creation. This port is set into the ‘source’ property of the ‘interface’ tag in the XML domain of the VM, prepended by the value of the OVS path (OVS_PATH) property.
This sets the interfaces to type ‘vhostuser’ and references the created ports in the XML domain of the VMs as:

```
<interface type='vhostuser'>
  <source type="unix" path="<OVS_PATH>/<port_name>" .../>
  ...
</interface>
```
Note that the OVS_PATH property is required, as explained in Agent configuration for DPDK support. For example, when OVS_PATH is set to the default path for Open vSwitch (/var/run/openvswitch), interfaces will reference the created ports at /var/run/openvswitch/<port_name>.