{"id":994,"date":"2013-07-28T22:56:52","date_gmt":"2013-07-28T19:56:52","guid":{"rendered":"http:\/\/www.cloudbase.it\/?p=994"},"modified":"2015-10-30T05:39:51","modified_gmt":"2015-10-30T03:39:51","slug":"rdo-multi-node","status":"publish","type":"post","link":"https:\/\/cloudbase.it\/rdo-multi-node\/","title":{"rendered":"Multi-node OpenStack RDO on RHEL and Hyper-V &#8211; Part 1"},"content":{"rendered":"<p>We are getting a lot of requests about how to deploy <a title=\"OpenStack\" href=\"http:\/\/www.openstack.org\/\" target=\"_blank\">OpenStack<\/a> in proof of concept (PoC) or production environments as, let&#8217;s face it, setting up an OpenStack infrastructure from scratch without the aid of a deployment tool is not particularly suitable for faint at heart newcomers \ud83d\ude42<\/p>\n<p><a title=\"DevStack\" href=\"http:\/\/devstack.org\/\" target=\"_blank\">DevStack<\/a>, a tool that targets development environments, is still very popular for building proof of concepts as well, although the results can be quite different from deploying a stable release version. 
Here&#8217;s an alternative that provides a very easy way to get OpenStack up and running, using the latest OpenStack stable release.<\/p>\n<p>&nbsp;<\/p>\n<h1>RDO and Packstack<\/h1>\n<p><a title=\"RDO\" href=\"http:\/\/openstack.redhat.com\/Quickstart\" target=\"_blank\">RDO<\/a> is an excellent solution to go from zero to a fully working OpenStack deployment in a matter of minutes.<\/p>\n<ul>\n<li><strong>RDO<\/strong> is simply a distribution of OpenStack for Red Hat Enterprise Linux (RHEL), Fedora and derivatives (e.g.: <a title=\"CentOS\" href=\"http:\/\/www.centos.org\/\" target=\"_blank\">CentOS<\/a>).<\/li>\n<li><strong>Packstack<\/strong> is a Puppet-based tool that simplifies the deployment of RDO.<\/li>\n<\/ul>\n<p>There&#8217;s quite a lot of documentation about RDO and Packstack, but mostly related to so-called all-in-one setups (one single server), which are IMO too trivial to be considered for anything beyond the most basic PoC, let alone a production environment. Most real OpenStack deployments are multi-node, which is quite natural given the highly distributed nature of OpenStack.<\/p>\n<p>Some people might argue that limiting the effort to all-in-one setups is mandated by the available hardware resources. Before making a decision in that direction, consider that you can run the scenarios described in this post entirely on VMs. For example, I&#8217;m currently employing VMware Fusion virtual machines on a laptop, nested hypervisors (KVM and Hyper-V) included. 
This is quite a flexible scenario as you can simulate as many hosts and networks as you need without the constraints that a physical environment has.<\/p>\n<p>Let&#8217;s start by describing what the OpenStack Grizzly multi-node setup that we are going to deploy looks like.<\/p>\n<p>&nbsp;<\/p>\n<h2>Controller<\/h2>\n<p>This is the OpenStack &#8220;brain&#8221;, running all Nova services except <em>nova-compute<\/em> and <em>nova-network<\/em>, plus <em>quantum-server<\/em>, Keystone, Glance, Cinder and Horizon (you can also add Swift and Ceilometer).<br \/>\nI typically assign 1GB of RAM to this host and 30GB of disk space (add more if you want to use large Cinder LVM volumes or big Glance images). On the networking side, only a single nic (eth0) connected to the management network is needed (more on networking soon).<\/p>\n<p>&nbsp;<\/p>\n<h2>Network Router<\/h2>\n<p>The job of this server is to run <a title=\"OpenVSwitch\" href=\"http:\/\/openvswitch.org\/\" target=\"_blank\">OpenVSwitch<\/a> to allow networking among your virtual machines and towards the Internet (or any other external network that you might define).<br \/>\nBesides OpenVSwitch, this node will run <em>quantum-openvswitch-agent, quantum-dhcp-agent, quantum-l3-agent<\/em> and <em>quantum-metadata-proxy.<\/em><br \/>\n1 GB of RAM and 10GB of disk space are enough here. You&#8217;ll need three nics, connected to the management (eth0), guest data (eth1) and public (eth2) networks.<br \/>\n<strong>Note:<\/strong> If you run this node as a virtual machine, make sure that the hypervisor&#8217;s virtual switches support promiscuous mode.<\/p>\n<p>&nbsp;<\/p>\n<h2>KVM compute node (optional)<\/h2>\n<p>This is one of the two hypervisors that we&#8217;ll use in our demo. 
Most people like to use <strong>KVM<\/strong> in OpenStack, so we are going to use it to run our Linux VMs.<br \/>\nThe only OpenStack services required here are <em>nova-compute<\/em> and <em>quantum-openvswitch-agent<\/em>.<br \/>\nAllocate the RAM and disk resources for this node based on your requirements, considering especially the amount of RAM and disk space that you want to assign to your VMs. 4GB of RAM and 50GB of disk space can be considered as a starting point. If you plan to run this host in a VM, make sure that the virtual CPU supports nested virtualization. Two nics required, connected to the management (eth0) and guest data (eth1) networks.<\/p>\n<p>&nbsp;<\/p>\n<h2>Hyper-V compute node (optional)<\/h2>\n<p>Microsoft Hyper-V Server 2012 R2 is a great and completely <span style=\"text-decoration: underline;\">free<\/span> hypervisor; just grab a copy of the ISO from <a title=\"Hyper-V\" href=\"http:\/\/www.microsoft.com\/en-us\/evalcenter\/evaluate-hyper-v-server-2012-r2\" target=\"_blank\">here<\/a>. In the demo we are going to use it for running Windows instances, but besides that you can of course use it to run Linux or FreeBSD VMs as well. You can also grab a ready-made <strong>OpenStack Windows Server 2012 R2 Evaluation<\/strong> image from <a title=\"OpenStack Windows Server 2012\" href=\"https:\/\/cloudbase.it\/windows-cloud-images\/\" target=\"_blank\">here<\/a>; no need to learn how to package a Windows OpenStack image today. Required OpenStack services here are <em>nova-compute<\/em> and <em>quantum-hyperv-agent<\/em>. 
No worries, <a title=\"OpenStack Hyper-V Compute Installer\" href=\"https:\/\/cloudbase.it\/openstack-hyperv-driver\/\" target=\"_blank\">here&#8217;s<\/a> an installer that will take care of setting them up for you; make sure to download the stable Grizzly release.<br \/>\nTalking about resources to allocate for this host, the same considerations discussed for the KVM node apply here as well; just consider that Hyper-V will require 16GB-20GB of disk space for the OS itself, including updates. I usually assign 4GB of RAM and 60-80GB of disk. Two nics required here as well, connected to the management and guest data networks.<\/p>\n<p>&nbsp;<\/p>\n<h2><a href=\"https:\/\/i0.wp.com\/www.cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i0.wp.com\/www.cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png?resize=701%2C496\" alt=\"OpenStack_multi_node_network\" width=\"701\" height=\"496\" \/><\/a><\/h2>\n<h2>Networking<\/h2>\n<p>Let&#8217;s spend a few words about how the hosts are connected.<\/p>\n<p>&nbsp;<\/p>\n<h3>Management<\/h3>\n<p>This network is used for management only (e.g. running nova commands or ssh into the hosts). It should definitely not be accessible from the OpenStack instances to avoid any security issue.<\/p>\n<p>&nbsp;<\/p>\n<h3>Guest data<\/h3>\n<p>This is the network used by guests to communicate with each other and with the rest of the world. It&#8217;s important to note that although we are defining a single physical network, we&#8217;ll be able to define multiple isolated networks using VLANs or tunnelling on top of it. One of the requirements of our scenario is to be able to run groups of isolated instances for different tenants.<\/p>\n<p>&nbsp;<\/p>\n<h3>Public<\/h3>\n<p>Last, this is the network used by the instances to access external networks (e.g. 
the Internet) routed through the network host. External hosts (e.g. a client on the Internet) will be able to connect to some of your instances based on the floating IP and security group configuration.<\/p>\n<p>&nbsp;<\/p>\n<h1>Hosts configuration<\/h1>\n<p>Just do a minimal installation and configure your network adapters. We are using CentOS 6.4 x64, but RHEL 6.4, Fedora or Scientific Linux images are perfectly fine as well. <strong>Packstack<\/strong> will take care of getting all the requirements as we will soon see.<\/p>\n<p>Once you are done with the installation, updating the hosts with <em>yum update -y<\/em> is a good practice.<\/p>\n<p>Configure your management adapters (eth0) with a static IP, e.g. by directly editing the <em>ifcfg-eth0<\/em> configuration file in <em>\/etc\/sysconfig\/network-scripts<\/em>. As a basic example:<\/p>\n<pre>DEVICE=\"eth0\"\r\nONBOOT=\"yes\"\r\nBOOTPROTO=\"static\"\r\nMTU=\"1500\"\r\nIPADDR=\"10.10.10.1\"\r\nNETMASK=\"255.255.255.0\"<\/pre>\n<p>&nbsp;<\/p>\n<p>General networking configuration goes in <em>\/etc\/sysconfig\/network<\/em>, e.g.:<\/p>\n<pre>GATEWAY=10.10.10.254\r\nNETWORKING=yes\r\nHOSTNAME=openstack-controller<\/pre>\n<p>&nbsp;<\/p>\n<p>And add your DNS configuration in <em>\/etc\/resolv.conf<\/em>, e.g.:<\/p>\n<pre>nameserver 208.67.222.222\r\nnameserver 208.67.220.220<\/pre>\n<p>&nbsp;<\/p>\n<p>Nics connected to <em>guest data<\/em> (eth1) and <em>public<\/em> (eth2) networks don&#8217;t require an IP. 
<span style=\"text-decoration: underline;\">You also don&#8217;t need to add any OpenVSwitch configuration here<\/span>; just make sure that the adapters get enabled on boot, e.g.:<\/p>\n<pre>DEVICE=\"eth1\"\r\nBOOTPROTO=\"none\"\r\nMTU=\"1500\"\r\nONBOOT=\"yes\"<\/pre>\n<p>&nbsp;<\/p>\n<p>You can reload your network configuration with:<\/p>\n<pre class=\"lang:sh decode:true\">service network restart<\/pre>\n<p>&nbsp;<\/p>\n<h1>Packstack<\/h1>\n<p>Once you have set up all your hosts, it&#8217;s time to install Packstack. Log in on the controller host console and run:<\/p>\n<pre>sudo yum install -y http:\/\/rdo.fedorapeople.org\/openstack\/openstack-grizzly\/rdo-release-grizzly.rpm\r\nsudo yum install -y openstack-packstack\r\nsudo yum install -y openstack-utils<\/pre>\n<p>&nbsp;<\/p>\n<p>Now\u00a0we need to create a so-called &#8220;answer file&#8221; to tell Packstack how we want our OpenStack deployment to be configured:<\/p>\n<pre>packstack --gen-answer-file=packstack_answers.conf<\/pre>\n<p>&nbsp;<\/p>\n<p>One useful point about the answer file is that it is already populated with random passwords for all your services; change them as required.<\/p>\n<p>Here&#8217;s a script to add our configuration to the answers file. 
Change the IP address of the network and KVM compute hosts along with any of the Cinder or Quantum parameters to fit your scenario.<\/p>\n<p>&nbsp;<\/p>\n<pre>ANSWERS_FILE=packstack_answers.conf\r\nNETWORK_HOST=10.10.10.2\r\nKVM_COMPUTE_HOST=10.10.10.3\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_SSH_KEY \/root\/.ssh\/id_rsa.pub\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_NTP_SERVERS 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org,3.pool.ntp.org\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_CINDER_VOLUMES_SIZE 20G\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_NOVA_COMPUTE_HOSTS $KVM_COMPUTE_HOST\r\nopenstack-config --del $ANSWERS_FILE general CONFIG_NOVA_NETWORK_HOST\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_QUANTUM_L3_HOSTS $NETWORK_HOST\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_QUANTUM_DHCP_HOSTS $NETWORK_HOST\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_QUANTUM_METADATA_HOSTS $NETWORK_HOST\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_QUANTUM_OVS_TENANT_NETWORK_TYPE vlan\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_QUANTUM_OVS_VLAN_RANGES physnet1:1000:2000\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_QUANTUM_OVS_BRIDGE_MAPPINGS physnet1:br-eth1\r\nopenstack-config --set $ANSWERS_FILE general CONFIG_QUANTUM_OVS_BRIDGE_IFACES br-eth1:eth1<\/pre>\n<p>&nbsp;<\/p>\n<p>Now, all we have to do is run Packstack and wait for the configuration to be applied, including dependencies like MySQL Server and <a title=\"Apache Qpid\" href=\"http:\/\/qpid.apache.org\/\" target=\"_blank\">Apache Qpid<\/a>\u00a0(used by RDO as an alternative to <a title=\"RabbitMQ\" href=\"http:\/\/www.rabbitmq.com\/\" target=\"_blank\">RabbitMQ<\/a>). You&#8217;ll have to provide the password to access the other nodes only once; afterwards, Packstack will deploy an SSH key to the remote ~\/.ssh\/authorized_keys files. 
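<\/p>\n<p>If you prefer to distribute the SSH key to the other nodes yourself before launching Packstack, here&#8217;s a quick sketch, assuming root access on the nodes and the <em>$NETWORK_HOST<\/em> and <em>$KVM_COMPUTE_HOST<\/em> variables defined in the script above:<\/p>\n<pre># generate the key referenced by CONFIG_SSH_KEY (skip if it already exists)\r\nssh-keygen -t rsa -N '' -f ~\/.ssh\/id_rsa\r\n# copy the public key to each remote node's authorized_keys\r\nfor HOST in $NETWORK_HOST $KVM_COMPUTE_HOST\r\ndo\r\n    ssh-copy-id root@$HOST\r\ndone<\/pre>\n<p>&nbsp;<\/p>\n<p>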
As anticipated, Puppet is used to perform the actual deployment.<\/p>\n<p>&nbsp;<\/p>\n<pre>packstack --answer-file=packstack_answers.conf<\/pre>\n<p>&nbsp;<\/p>\n<p>At the end of the execution, Packstack will ask you to install on the hosts a new Linux kernel provided as part of the RDO repository. This is needed because the kernel provided by RHEL (and thus CentOS) doesn&#8217;t support network namespaces, a feature needed by Quantum in this scenario. What Packstack doesn&#8217;t tell you is that the 2.6.32 kernel\u00a0they provide will create a lot more issues with Quantum. At this point, why not install a modern 3.x kernel? \ud83d\ude42<\/p>\n<p>My suggestion is to skip the RDO kernel altogether and install the 3.4 kernel provided as part of the CentOS Xen project (which does not mean that we are installing Xen; we only need the kernel package).<\/p>\n<p>Let&#8217;s update the kernel and reboot the network and KVM compute hosts from the controller (no need to install it on the controller itself):<\/p>\n<pre>for HOST in $NETWORK_HOST $KVM_COMPUTE_HOST\r\ndo\r\n\u00a0 \u00a0 ssh -o StrictHostKeyChecking=no $HOST \"yum install -y centos-release-xen &amp;&amp; yum update -y --disablerepo=* --enablerepo=Xen4CentOS kernel &amp;&amp; reboot\"\r\ndone<\/pre>\n<p>&nbsp;<\/p>\n<p>At the time of writing, there&#8217;s a bug in Packstack that applies to multi-node scenarios where the Quantum firewall driver is not set in quantum.conf, causing failures in Nova. Here&#8217;s a simple fix to be executed on the controller (the alternative would be to disable the security groups feature altogether):<\/p>\n<pre>sed -i 's\/^#\\ firewall_driver\/firewall_driver\/g' \/etc\/quantum\/plugins\/openvswitch\/ovs_quantum_plugin.ini\r\nservice quantum-server restart<\/pre>\n<p>&nbsp;<\/p>\n<p>We can now check if everything is working. 
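<\/p>\n<p>As a quick sanity check after the reboot, you can confirm that the new kernel actually supports network namespaces by creating and deleting a temporary one (the <em>ns-test<\/em> name is just an example, and the <em>$NETWORK_HOST<\/em> variable comes from the script above):<\/p>\n<pre># if the kernel lacks namespace support, the first command fails\r\nssh $NETWORK_HOST \"ip netns add ns-test &amp;&amp; ip netns list &amp;&amp; ip netns delete ns-test\"<\/pre>\n<p>&nbsp;<\/p>\n<p>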
First we need to set our environment variables:<\/p>\n<pre>source .\/keystonerc_admin\r\nexport EDITOR=vim<\/pre>\n<p>&nbsp;<\/p>\n<p>Let&#8217;s check the nova services:<\/p>\n<pre>nova-manage service list<\/pre>\n<p>&nbsp;<\/p>\n<p>Here&#8217;s a sample output. If you see xxx in place of one of the smiley faces, it means that there&#8217;s something to fix \ud83d\ude42<\/p>\n<pre>Binary           Host                                 Zone             Status     State Updated_At\r\nnova-conductor   os-controller.cbs                    internal         enabled    :- )  2013-07-14 19:08:17\r\nnova-cert        os-controller.cbs                    internal         enabled    :- )  2013-07-14 19:08:19\r\nnova-scheduler   os-controller.cbs                    internal         enabled    :- )  2013-07-14 19:08:17\r\nnova-consoleauth os-controller.cbs                    internal         enabled    :- )  2013-07-14 19:08:19\r\nnova-compute     os-compute.cbs                       nova             enabled    :- )  2013-07-14 18:42:00<\/pre>\n<p>&nbsp;<\/p>\n<p>Now we can check the status of our Quantum agents on the network and KVM compute hosts:<\/p>\n<pre>quantum agent-list<\/pre>\n<p>&nbsp;<\/p>\n<p>You should get an output similar to the following one.<\/p>\n<pre>+--------------------------------------+--------------------+----------------+-------+----------------+\r\n| id                                   | agent_type         | host           | alive | admin_state_up |\r\n+--------------------------------------+--------------------+----------------+-------+----------------+\r\n| 5dff6900-4f6b-4f42-b7f1-f2842439bc4a | DHCP agent         | os-network.cbs | :- )  | True           |\r\n| 666a876f-6005-466b-9822-c31d48a5c9a8 | L3 agent           | os-network.cbs | :- )  | True           |\r\n| 8c190fc5-990f-494f-85c0-f3964639274b | Open vSwitch agent | os-compute.cbs | :- )  | True           |\r\n| cf62892e-062a-460d-ab67-4440a790715d | Open vSwitch agent | os-network.cbs | :- 
)  | True           |\r\n+--------------------------------------+--------------------+----------------+-------+----------------+<\/pre>\n<p>&nbsp;<\/p>\n<h1>OpenVSwitch<\/h1>\n<p>On the network node we need to add the <em>eth2<\/em> interface to the <em>br-ex<\/em> bridge:<\/p>\n<pre>ovs-vsctl add-port br-ex eth2<\/pre>\n<p>&nbsp;<\/p>\n<p>We can now check if the OpenVSwitch configuration has been applied correctly on the network and KVM compute nodes. Log in on the network node and run:<\/p>\n<pre>ovs-vsctl show<\/pre>\n<p>&nbsp;<\/p>\n<p>The output should look similar to:<\/p>\n<pre>f99276eb-4553-40d9-8bb0-bf3ac6e885e8\r\n    Bridge br-int\r\n        Port br-int\r\n            Interface br-int\r\n                type: internal\r\n        Port \"int-br-eth1\"\r\n            Interface \"int-br-eth1\"\r\n    Bridge \"br-eth1\"\r\n        Port \"eth1\"\r\n            Interface \"eth1\"\r\n        Port \"br-eth1\"\r\n            Interface \"br-eth1\"\r\n                type: internal\r\n        Port \"phy-br-eth1\"\r\n            Interface \"phy-br-eth1\"\r\n    Bridge br-ex\r\n        Port br-ex\r\n            Interface br-ex\r\n                type: internal\r\n        Port \"eth2\"\r\n            Interface \"eth2\"\r\n    ovs_version: \"1.10.0\"<\/pre>\n<p>&nbsp;<\/p>\n<p>Notice the membership of <em>eth1<\/em> to <em>br-eth1<\/em> and <em>eth2<\/em> to <em>br-ex<\/em>. If you don&#8217;t see them, you can just add them now.<\/p>\n<p>To add a bridge, should <em>br-eth1<\/em> be missing:<\/p>\n<pre>ovs-vsctl add-br br-eth1<\/pre>\n<p>&nbsp;<\/p>\n<p>To add the <em>eth1<\/em> port to the bridge:<\/p>\n<pre>ovs-vsctl add-port br-eth1 eth1<\/pre>\n<p>&nbsp;<\/p>\n<p>You can now repeat the same procedure on the KVM compute node, considering only <em>br-eth1<\/em> and <em>eth1<\/em> (there&#8217;s no <em>eth2<\/em>).<\/p>\n<p>&nbsp;<\/p>\n<h1>What&#8217;s next?<\/h1>\n<p>Ok, enough for today. 
In the forthcoming Part 2 we&#8217;ll see how to add a Hyper-V compute node to the mix!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We are getting a lot of requests about how to deploy OpenStack in proof of concept (PoC) or production environments as, let&#8217;s face it, setting up an OpenStack infrastructure from scratch without the aid of a deployment tool is not particularly suitable for faint at heart newcomers \ud83d\ude42 DevStack, a tool that targets development environments,&hellip;<\/p>\n","protected":false},"author":3,"featured_media":1041,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"_jetpack_memberships_contains_paid_content":false},"categories":[26,9,83,27,1],"tags":[],"class_list":["post-994","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-centos","category-hyper-v","category-openstack","category-rhel","category-uncategorized","category-26","category-9","category-83","category-27","category-1","description-off"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v23.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Multi-node OpenStack RDO on RHEL and Hyper-V - Part 1 - Cloudbase Solutions<\/title>\n<meta name=\"description\" content=\"How to deploy a multi-node OpenStack setup with RDO and Packstack in a matter of minutes including KVM and Hyper-V compute nodes\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cloudbase.it\/rdo-multi-node\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Multi-node OpenStack RDO on RHEL and Hyper-V - Part 1 - Cloudbase Solutions\" \/>\n<meta property=\"og:description\" content=\"How to deploy a multi-node OpenStack setup with RDO and 
Packstack in a matter of minutes including KVM and Hyper-V compute nodes\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cloudbase.it\/rdo-multi-node\/\" \/>\n<meta property=\"og:site_name\" content=\"Cloudbase Solutions\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/facebook.com\/cloudbasesolutions\" \/>\n<meta property=\"article:published_time\" content=\"2013-07-28T19:56:52+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2015-10-30T03:39:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1169\" \/>\n\t<meta property=\"og:image:height\" content=\"826\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Alessandro Pilotti\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@cloudbaseit\" \/>\n<meta name=\"twitter:site\" content=\"@cloudbaseit\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Alessandro Pilotti\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/\"},\"author\":{\"name\":\"Alessandro Pilotti\",\"@id\":\"https:\/\/cloudbase.it\/#\/schema\/person\/9d625055e67e54d90dc90447ac906e69\"},\"headline\":\"Multi-node OpenStack RDO on RHEL and Hyper-V &#8211; Part 1\",\"datePublished\":\"2013-07-28T19:56:52+00:00\",\"dateModified\":\"2015-10-30T03:39:51+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/\"},\"wordCount\":1705,\"commentCount\":12,\"publisher\":{\"@id\":\"https:\/\/cloudbase.it\/#organization\"},\"image\":{\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png\",\"articleSection\":[\"CentOS\",\"Hyper-V\",\"OpenStack\",\"RHEL\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/cloudbase.it\/rdo-multi-node\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/\",\"url\":\"https:\/\/cloudbase.it\/rdo-multi-node\/\",\"name\":\"Multi-node OpenStack RDO on RHEL and Hyper-V - Part 1 - Cloudbase Solutions\",\"isPartOf\":{\"@id\":\"https:\/\/cloudbase.it\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png\",\"datePublished\":\"2013-07-28T19:56:52+00:00\",\"dateModified\":\"2015-10-30T03:39:51+00:00\",\"description\":\"How to deploy a multi-node OpenStack setup with RDO and Packstack in a 
matter of minutes including KVM and Hyper-V compute nodes\",\"breadcrumb\":{\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/cloudbase.it\/rdo-multi-node\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage\",\"url\":\"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png\",\"contentUrl\":\"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png\",\"width\":1169,\"height\":826},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/cloudbase.it\/rdo-multi-node\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/cloudbase.it\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Multi-node OpenStack RDO on RHEL and Hyper-V &#8211; Part 1\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/cloudbase.it\/#website\",\"url\":\"https:\/\/cloudbase.it\/\",\"name\":\"Cloudbase Solutions\",\"description\":\"Cloud Interoperability\",\"publisher\":{\"@id\":\"https:\/\/cloudbase.it\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/cloudbase.it\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/cloudbase.it\/#organization\",\"name\":\"Cloudbase 
Solutions\",\"url\":\"https:\/\/cloudbase.it\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/cloudbase.it\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/cloudbase.it\/wp-content\/uploads\/2017\/03\/CBSL-Logo-2017-Black.png\",\"contentUrl\":\"https:\/\/cloudbase.it\/wp-content\/uploads\/2017\/03\/CBSL-Logo-2017-Black.png\",\"width\":583,\"height\":143,\"caption\":\"Cloudbase Solutions\"},\"image\":{\"@id\":\"https:\/\/cloudbase.it\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"http:\/\/facebook.com\/cloudbasesolutions\",\"https:\/\/x.com\/cloudbaseit\",\"https:\/\/www.linkedin.com\/company-beta\/3139764\",\"https:\/\/www.youtube.com\/channel\/UCBgH5RqPL4lgxA8gn3rIAFw\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/cloudbase.it\/#\/schema\/person\/9d625055e67e54d90dc90447ac906e69\",\"name\":\"Alessandro Pilotti\",\"description\":\"Co-Founder &amp; CEO\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Multi-node OpenStack RDO on RHEL and Hyper-V - Part 1 - Cloudbase Solutions","description":"How to deploy a multi-node OpenStack setup with RDO and Packstack in a matter of minutes including KVM and Hyper-V compute nodes","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/cloudbase.it\/rdo-multi-node\/","og_locale":"en_US","og_type":"article","og_title":"Multi-node OpenStack RDO on RHEL and Hyper-V - Part 1 - Cloudbase Solutions","og_description":"How to deploy a multi-node OpenStack setup with RDO and Packstack in a matter of minutes including KVM and Hyper-V compute nodes","og_url":"https:\/\/cloudbase.it\/rdo-multi-node\/","og_site_name":"Cloudbase 
Solutions","article_publisher":"http:\/\/facebook.com\/cloudbasesolutions","article_published_time":"2013-07-28T19:56:52+00:00","article_modified_time":"2015-10-30T03:39:51+00:00","og_image":[{"width":1169,"height":826,"url":"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png","type":"image\/png"}],"author":"Alessandro Pilotti","twitter_card":"summary_large_image","twitter_creator":"@cloudbaseit","twitter_site":"@cloudbaseit","twitter_misc":{"Written by":"Alessandro Pilotti","Est. reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/cloudbase.it\/rdo-multi-node\/#article","isPartOf":{"@id":"https:\/\/cloudbase.it\/rdo-multi-node\/"},"author":{"name":"Alessandro Pilotti","@id":"https:\/\/cloudbase.it\/#\/schema\/person\/9d625055e67e54d90dc90447ac906e69"},"headline":"Multi-node OpenStack RDO on RHEL and Hyper-V &#8211; Part 1","datePublished":"2013-07-28T19:56:52+00:00","dateModified":"2015-10-30T03:39:51+00:00","mainEntityOfPage":{"@id":"https:\/\/cloudbase.it\/rdo-multi-node\/"},"wordCount":1705,"commentCount":12,"publisher":{"@id":"https:\/\/cloudbase.it\/#organization"},"image":{"@id":"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage"},"thumbnailUrl":"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png","articleSection":["CentOS","Hyper-V","OpenStack","RHEL"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/cloudbase.it\/rdo-multi-node\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/cloudbase.it\/rdo-multi-node\/","url":"https:\/\/cloudbase.it\/rdo-multi-node\/","name":"Multi-node OpenStack RDO on RHEL and Hyper-V - Part 1 - Cloudbase 
Solutions","isPartOf":{"@id":"https:\/\/cloudbase.it\/#website"},"primaryImageOfPage":{"@id":"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage"},"image":{"@id":"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage"},"thumbnailUrl":"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png","datePublished":"2013-07-28T19:56:52+00:00","dateModified":"2015-10-30T03:39:51+00:00","description":"How to deploy a multi-node OpenStack setup with RDO and Packstack in a matter of minutes including KVM and Hyper-V compute nodes","breadcrumb":{"@id":"https:\/\/cloudbase.it\/rdo-multi-node\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/cloudbase.it\/rdo-multi-node\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/cloudbase.it\/rdo-multi-node\/#primaryimage","url":"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png","contentUrl":"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png","width":1169,"height":826},{"@type":"BreadcrumbList","@id":"https:\/\/cloudbase.it\/rdo-multi-node\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/cloudbase.it\/"},{"@type":"ListItem","position":2,"name":"Multi-node OpenStack RDO on RHEL and Hyper-V &#8211; Part 1"}]},{"@type":"WebSite","@id":"https:\/\/cloudbase.it\/#website","url":"https:\/\/cloudbase.it\/","name":"Cloudbase Solutions","description":"Cloud Interoperability","publisher":{"@id":"https:\/\/cloudbase.it\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/cloudbase.it\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/cloudbase.it\/#organization","name":"Cloudbase 
Solutions","url":"https:\/\/cloudbase.it\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/cloudbase.it\/#\/schema\/logo\/image\/","url":"https:\/\/cloudbase.it\/wp-content\/uploads\/2017\/03\/CBSL-Logo-2017-Black.png","contentUrl":"https:\/\/cloudbase.it\/wp-content\/uploads\/2017\/03\/CBSL-Logo-2017-Black.png","width":583,"height":143,"caption":"Cloudbase Solutions"},"image":{"@id":"https:\/\/cloudbase.it\/#\/schema\/logo\/image\/"},"sameAs":["http:\/\/facebook.com\/cloudbasesolutions","https:\/\/x.com\/cloudbaseit","https:\/\/www.linkedin.com\/company-beta\/3139764","https:\/\/www.youtube.com\/channel\/UCBgH5RqPL4lgxA8gn3rIAFw"]},{"@type":"Person","@id":"https:\/\/cloudbase.it\/#\/schema\/person\/9d625055e67e54d90dc90447ac906e69","name":"Alessandro Pilotti","description":"Co-Founder &amp; CEO"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/cloudbase.it\/wp-content\/uploads\/2013\/07\/OpenStack_multi_node_network.png","_links":{"self":[{"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/posts\/994"}],"collection":[{"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/comments?post=994"}],"version-history":[{"count":73,"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/posts\/994\/revisions"}],"predecessor-version":[{"id":34659,"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/posts\/994\/revisions\/34659"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/media\/1041"}],"wp:attachment":[{"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/media?parent=994"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudbase.it\/wp-json\/wp\/v2\/categories?post=994"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloud
base.it\/wp-json\/wp\/v2\/tags?post=994"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}