DevStack Archives – Cloudbase Solutions

OpenStack – KVM vs Hyper-V – Scenario 1
https://cloudbase.it/openstack-kvm-vs-hyper-v-part-2/
Thu, 15 Dec 2016 13:30:06 +0000

In the previous part of this OpenStack KVM vs Hyper-V compute benchmarking series we talked about how to set up the testing environment and how to run the scenarios. Time for some results!

 

Scenario 1

To begin with, we started with a basic KVM vs Hyper-V scenario:

  • Ask Nova to boot a VM
  • Wait until Nova reports the VM as active
  • Ask Nova to delete the VM

Results become especially interesting when looking for bottlenecks in the infrastructure, so the following results were obtained by running the test with 50 VMs in parallel on a single compute node, for a total of 200 VMs per run.
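
For reference, here is a sketch of how this run maps onto a Rally task definition (the scenario and runner names follow Rally's NovaServers plugin; the image and flavor values are illustrative, matching the task arguments used in Part 1):

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "image": {"name": "cirros-vhdx"},
                "flavor": {"name": "m1.tiny"}
            },
            "runner": {
                "type": "constant",
                "times": 200,
                "concurrency": 50
            }
        }
    ]
}
```

The "constant" runner keeps 50 iterations in flight at all times until 200 iterations have completed.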

  1. Results for KVM running on Xenial Ubuntu 16.04.1 LTS (default kernel version 4.4.0-45-generic): [graph: kvm-1]
  2. Results for Hyper-V / Windows Server 2012 R2: [graph: hyperv-2012r2-1]
  3. Results for Hyper-V / Windows Server 2016: [graph: hyperv-2016-1]

Conclusions

Each iteration sequence number corresponds to one Rally scenario execution. We can see a few spikes in the graph for Hyper-V Compute with Windows Server 2012 R2. Those are due to a minor bottleneck on Windows Server 2012 R2, which is no longer present in Windows Server 2016.

On average in this scenario, KVM is ~32% slower than Hyper-V on Windows Server 2016, but ~18% faster than Hyper-V on Windows Server 2012 R2.
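
For clarity, this is how such relative percentages are computed — a minimal Python sketch using made-up average durations, chosen only so the ratios match the reported figures (they are not the measured data):

```python
# Illustrative arithmetic only: the averages below are hypothetical numbers,
# NOT the measured results.
kvm = 13.2       # hypothetical average scenario duration (s), KVM
hv2016 = 10.0    # hypothetical average, Hyper-V / Windows Server 2016
hv2012r2 = 16.1  # hypothetical average, Hyper-V / Windows Server 2012 R2

# "A is slower than B" = extra time relative to B's duration
slower_than_2016 = (kvm - hv2016) / hv2016 * 100
# "A is faster than B" = time saved relative to B's duration
faster_than_2012r2 = (hv2012r2 - kvm) / hv2012r2 * 100

print(f"KVM is ~{slower_than_2016:.0f}% slower than Hyper-V 2016")
print(f"KVM is ~{faster_than_2012r2:.0f}% faster than Hyper-V 2012 R2")
```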

Not only is Hyper-V on Windows Server 2016 faster than Hyper-V on Windows Server 2012 R2, but it’s also significantly faster than KVM!

Ready for more scenarios? Part 3 will be available soon!

The post OpenStack – KVM vs Hyper-V – Scenario 1 appeared first on Cloudbase Solutions.

OpenStack – KVM vs Hyper-V – Part 1 (Introduction)
https://cloudbase.it/openstack-kvm-vs-hyper-v-part-1/
Thu, 15 Dec 2016 13:00:25 +0000

OpenStack Newton has recently been released. During this cycle we did a significant amount of work on improving the performance and overall reliability of our Windows and Hyper-V OpenStack compute drivers.

 

This is the first post in a series where we are going to share some of our Hyper-V vs KVM benchmarking results for this OpenStack release. Get ready to be surprised!

 

To begin with, here’s our setup:

  • Controller node, Ubuntu 16.04.1 LTS (default kernel version 4.4.0-45-generic)
  • KVM Compute node, Ubuntu 16.04.1 LTS (default kernel version 4.4.0-45-generic)
  • Hyper-V Compute node, Windows Server 2012 R2
  • Hyper-V Compute node, Windows Server 2016

 

Hardware Specs (all nodes are identical)

  • Processor: Intel(R) Xeon(R) CPU E5-2650 @ 2.00GHz
  • Installed memory (RAM): 128 GB
  • Network cards: Chelsio Communications Inc T420-CR 10 Gigabit Ethernet
  • Storage: Intel SSDSC2BA200G3T 200 GB SATA Solid State Drive

 

All the nodes have plain vanilla OS installs with the latest updates.

The Linux hosts have been deployed using DevStack. Here you can find more details about the deployment and configuration.

The Hyper-V compute nodes include the Hyper-V Compute driver.

 

Configuration

A set of basic steps have been taken to improve performance on both types of hypervisors.

For the KVM Compute node, we followed the official documentation:

In /etc/nova/nova.conf the “cpu_mode” was modified to “host-passthrough”, as follows:

[libvirt]

cpu_mode = host-passthrough

 

The VHostNet kernel module improves network performance:

modprobe vhost_net
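
Note that modprobe only loads the module until the next reboot; on Ubuntu, listing it in /etc/modules makes it load at boot time (a sketch of that config file):

```
# /etc/modules: kernel modules to load at boot time, one per line
vhost_net
```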

 

For the Hyper-V Compute nodes, the active power option scheme has been set to “High Performance”:

powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

Note: in our results, changing the power option scheme brought roughly a 10-15% performance boost.

 

Rally

We used Rally as a benchmarking tool for the OpenStack deployment, installed on the Controller node.

Our Rally scenarios can be found here. To begin with, just download and run this script; it will install Rally in the “~/rally” virtualenv:

./install_rally.sh --url https://github.com/cloudbase/rally --branch blogpost

 

If everything completes successfully, a short green message will appear explaining how to activate the Rally virtual environment:

source ~/rally/bin/activate

 

First, you have to provide the details about the OpenStack deployment that you are going to benchmark. A simple way to register an existing deployment in Rally is through the deployment configuration files.

Put your cloud access data into a JSON configuration file (for existing OpenStack deployments we can use “existing.json”) and run the following command to register the deployment in Rally:

rally deployment create --file=existing.json --name=existing
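
For reference, a minimal existing.json sketch following Rally's ExistingCloud deployment format of that era — every value below is a placeholder, not taken from our setup:

```json
{
    "type": "ExistingCloud",
    "auth_url": "http://192.168.0.1:5000/v2.0",
    "region_name": "RegionOne",
    "admin": {
        "username": "admin",
        "password": "secret",
        "tenant_name": "admin"
    }
}
```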

 

Rally should now be able to list all the images/flavors/networks etc. from the OpenStack deployment:

rally show images

rally show flavors

rally show networks

 

To run a benchmark test, you need a task configuration file. The JSON files that we are going to use can be found here. Here’s how to start the execution of a task:

rally task start --task boot_and_delete.json --task-args '{"image": "cirros-vhdx", "flavor_name": "m1.tiny"}'

 

Thanks for reading this first post in our KVM vs Hyper-V series! To add some suspense, benchmarking results will be published in the next entries, starting with part 2!

The post OpenStack – KVM vs Hyper-V – Part 1 (Introduction) appeared first on Cloudbase Solutions.

Open vSwitch 2.5 on Hyper-V (OpenStack) – Part 1
https://cloudbase.it/open-vswitch-2-5-hyper-v-part-1/
Wed, 18 May 2016 20:44:07 +0000

We are happy to announce the availability of Open vSwitch 2.5 (OVS) for Microsoft Hyper-V Server 2012, 2012 R2 and 2016 (technical preview) thanks to the joint effort of Cloudbase Solutions, VMware and the rest of the Open vSwitch community.

The OVS 2.5 release includes the Open vSwitch CLI tools and services (e.g. ovsdb-server, ovs-vswitchd, ovs-vsctl, ovs-ofctl, etc.), and an updated version of the OVS Hyper-V virtual switch forwarding extension, providing fully interoperable GRE, VXLAN and STT encapsulation between Hyper-V and Linux, including KVM based virtual machines.

As usual, we also released an MSI installer that takes care of the Windows services for ovsdb-server and ovs-vswitchd daemons along with all the required binaries and configurations.

All the Open vSwitch code is available as open source here:

https://github.com/openvswitch/ovs/tree/branch-2.5
https://github.com/cloudbase/ovs/tree/branch-2.5-cloudbase

Supported Windows operating systems:

  • Windows Server and Hyper-V Server 2012 and 2012 R2.
  • Windows Server and Hyper-V Server 2016 (technical preview).
  • Windows 8, 8.1 and 10.

 

Installing Open vSwitch on Hyper-V

The entire installation process is seamless. Download our installer and run it. You will be welcomed by the following screen:

 

[Installer screenshot]

Click “Next”, accept the license, click “Next” again and you’ll have the option to install both the Hyper-V virtual switch extension driver and the command line tools. If you want to install only the command line tools (in order to be able to connect to a Linux or Windows server), just deselect the driver option.

 

[Screenshot: Open vSwitch 2.5 Hyper-V Setup on Windows]

 

Click “Next” followed by “Install” and the installation will start. You will have to confirm that you want to install the signed kernel driver and the process will be completed in a matter of a few seconds, generating an Open vSwitch database and starting the ovsdb-server and ovs-vswitchd services.

 


The installer also adds the command line tools folder to the system path, available after the next logon or CLI shell execution.

 

Unattended installation

Fully unattended installation is also available (if you have already accepted/imported our certificate). This makes it possible to install Open vSwitch with Windows GPOs, Puppet, Chef, SaltStack, DSC or any other automated deployment solution (adding msiexec’s standard /qn switch runs the installer with no UI at all):

msiexec /i openvswitch-hyperv-2.5.0.msi /l*v log.txt

 

Configuring Open vSwitch on Windows

Let us assume the following environment: a host with four Ethernet adapters, with a Hyper-V virtual switch bound on top of one of them.

The list of adapters:

PS C:\package> Get-NetAdapter

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
port3                     Intel(R) 82574L Gigabit Network Co...#3      26 Up           00-0C-29-40-8B-EA         1 Gbps
nat                       Intel(R) 82574L Gigabit Network Co...#4      27 Up           00-0C-29-40-8B-E0         1 Gbps
port2                     Intel(R) 82574L Gigabit Network Co...#2      18 Up           00-0C-29-40-8B-D6         1 Gbps
port1                     Intel(R) 82574L Gigabit Network Conn...      17 Up           00-0C-29-40-8B-CC         1 Gbps

Create a Hyper-V external virtual switch with the AllowManagementOS flag set to false.

For example:

PS C:\package> New-VMSwitch -Name vSwitch -NetAdapterName port1 -AllowManagementOS $false

Name    SwitchType NetAdapterInterfaceDescription
----    ---------- ------------------------------
vSwitch External   Intel(R) 82574L Gigabit Network Connection

To verify that the extension has been installed on our system:

PS C:\package> Get-VMSwitchExtension -VMSwitchName vSwitch -Name "Cloudbase Open vSwitch Extension"

Id                  : 583CC151-73EC-4A6A-8B47-578297AD7623
Name                : Cloudbase Open vSwitch Extension
Vendor              : Cloudbase Solutions SRL
Version             : 13.43.16.16
ExtensionType       : Forwarding
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 5844f4dd-b3d7-496c-81cb-481a64fa7f58
SwitchName          : vSwitch
Enabled             : False
Running             : False
ComputerName        : HYPERV_NORMAL_1
Key                 :
IsDeleted           : False

We can now enable the OVS extension on the vSwitch virtual switch:

PS C:\package> Enable-VMSwitchExtension -VMSwitchName vSwitch -Name "Cloudbase Open vSwitch Extension"

Id                  : 583CC151-73EC-4A6A-8B47-578297AD7623
Name                : Cloudbase Open vSwitch Extension
Vendor              : Cloudbase Solutions SRL
Version             : 13.43.16.16
ExtensionType       : Forwarding
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 5844f4dd-b3d7-496c-81cb-481a64fa7f58
SwitchName          : vSwitch
Enabled             : True
Running             : True
ComputerName        : HYPERV_NORMAL_1
Key                 :
IsDeleted           : False

Please note that when you enable the extension, the virtual switch will stop forwarding traffic until it is configured (by adding the Ethernet adapter under a bridge). For example:

PS C:\package> ovs-vsctl.exe add-br br-port1
PS C:\package> ovs-vsctl.exe add-port br-port1 port1

Let us talk in more detail about the two commands issued above.

The first command:

PS C:\package> ovs-vsctl.exe add-br br-port1

will add a new adapter on the host, which is disabled by default:

PS C:\package> Get-NetAdapter

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
br-port1                  Hyper-V Virtual Ethernet Adapter #2          47 Disabled     00-15-5D-00-62-79        10 Gbps
port3                     Intel(R) 82574L Gigabit Network Co...#3      26 Up           00-0C-29-40-8B-EA         1 Gbps
nat                       Intel(R) 82574L Gigabit Network Co...#4      27 Up           00-0C-29-40-8B-E0         1 Gbps
port2                     Intel(R) 82574L Gigabit Network Co...#2      18 Up           00-0C-29-40-8B-D6         1 Gbps
port1                     Intel(R) 82574L Gigabit Network Conn...      17 Up           00-0C-29-40-8B-CC         1 Gbps

This adapter can be used as an IP-able device:

PS C:\package> Enable-NetAdapter br-port1
PS C:\package> New-NetIPAddress -IPAddress 14.14.14.2 -InterfaceAlias br-port1 -PrefixLength 24

IPAddress         : 14.14.14.2
InterfaceIndex    : 47
InterfaceAlias    : br-port1
AddressFamily     : IPv4
Type              : Unicast
PrefixLength      : 24
PrefixOrigin      : Manual
SuffixOrigin      : Manual
AddressState      : Tentative
ValidLifetime     : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource      : False
PolicyStore       : ActiveStore

IPAddress         : 14.14.14.2
InterfaceIndex    : 47
InterfaceAlias    : br-port1
AddressFamily     : IPv4
Type              : Unicast
PrefixLength      : 24
PrefixOrigin      : Manual
SuffixOrigin      : Manual
AddressState      : Invalid
ValidLifetime     : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource      : False
PolicyStore       : PersistentStore

The second command:

PS C:\package> ovs-vsctl.exe add-port br-port1 port1

will allow the bridge to use the actual physical NIC on which the Hyper-V vSwitch was created (port1).

Linux users will find the setup above familiar, as it is similar to a Linux bridge.

Limitations

  • Our forwarding extension currently supports a single Hyper-V virtual switch.
  • Support for multiple host NICs with LACP is experimental in this release.

 

OpenStack Integration with Open vSwitch on Windows

OpenStack is a very common use case for Open vSwitch on Hyper-V. The following example is based on a DevStack Mitaka All-in-One deployment on Ubuntu 14.04 LTS with a Hyper-V compute node, but the concepts and the following steps apply to any OpenStack deployment.

Let us install our DevStack node. Here is a sample local.conf configuration:

ubuntu@ubuntu:~/devstack$ cat local.conf 
[[local|localrc]]
# Set this to your management IP
HOST_IP=14.14.14.1
FORCE=yes

#Services to be started
disable_service n-net

enable_service rabbit mysql
enable_service key
enable_service n-api n-crt n-obj n-cond n-sch n-cauth n-cpu
enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta q-fwaas q-lbaas 
enable_service horizon
enable_service g-api g-reg

disable_service heat h-api h-api-cfn h-api-cw h-eng
disable_service cinder c-api c-vol c-sch
disable_service tempest

ENABLE_TENANT_TUNNELS=False
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,hyperv
OVS_ENABLE_TUNNELING=True

LIBVIRT_TYPE=kvm

API_RATE_LIMIT=False

DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_TOKEN=Passw0rd
SERVICE_PASSWORD=Passw0rd
ADMIN_PASSWORD=Passw0rd

SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOGDAYS=2

RECLONE=no

KEYSTONE_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
SWIFT_BRANCH=stable/mitaka
GLANCE_BRANCH=stable/mitaka
CINDER_BRANCH=stable/mitaka
HEAT_BRANCH=stable/mitaka
TROVE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka

[[post-config|$NEUTRON_CONF]]
[database]
min_pool_size = 5
max_pool_size = 50
max_overflow = 50

Networking:

ubuntu@ubuntu:~/devstack$ ifconfig eth3
eth3      Link encap:Ethernet  HWaddr 00:0c:29:25:db:8c  
          inet addr:14.14.14.1  Bcast:14.14.14.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe25:db8c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2209 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:336185 (336.1 KB)  TX bytes:153402 (153.4 KB)

After DevStack finishes installing, we can add some Hyper-V VHD or VHDX images to Glance, for example our Windows Server 2012 R2 evaluation image. Additionally, since we are using VXLAN, the default guest MTU should be set to 1450. This can be done via a DHCP option if the guest supports it, as described here.
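
The 1450 value comes straight from the VXLAN encapsulation overhead on a standard 1500-byte underlay — a quick sketch of the arithmetic:

```python
# Guest MTU for VXLAN over a standard 1500-byte underlay network.
physical_mtu = 1500  # outer IP packet limit on the physical network
outer_ipv4 = 20      # outer IPv4 header
outer_udp = 8        # outer UDP header
vxlan = 8            # VXLAN header
inner_ethernet = 14  # encapsulated (guest) Ethernet header, untagged

guest_mtu = physical_mtu - (outer_ipv4 + outer_udp + vxlan + inner_ethernet)
print(guest_mtu)  # 1450
```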

Now let us move to the Hyper-V node. First we have to download the latest OpenStack compute installer:

PS C:\package> Start-BitsTransfer https://cloudbase.it/downloads/HyperVNovaCompute_Mitaka_13_0_0.msi

Full steps on how to install and configure OpenStack on Hyper-V are available here: OpenStack on Windows installation.

In our example, the Hyper-V node will use the following adapter to connect to the OpenStack environment:

Ethernet adapter br-port1:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::9c1a:f185:bb09:62e2%47
   IPv4 Address. . . . . . . . . . . : 14.14.14.2
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

This is the internal adapter bound to the vSwitch virtual switch, as created during the previous steps (ovs-vsctl add-br br-port1).

We can now verify our deployment by taking a look at the Nova services and Neutron agents status in the OpenStack controller and ensuring that they are up and running:

ubuntu@ubuntu:~/devstack$ nova service-list
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host            | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| 5  | nova-conductor   | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:44.000000 | -               |
| 6  | nova-cert        | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:39.000000 | -               |
| 7  | nova-scheduler   | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:45.000000 | -               |
| 8  | nova-consoleauth | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:46.000000 | -               |
| 9  | nova-compute     | ubuntu          | nova     | enabled | up    | 2016-04-26T20:09:48.000000 | -               |
| 10 | nova-compute     | hyperv_normal_1 | nova     | enabled | up    | 2016-04-26T20:09:39.000000 | -               |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
ubuntu@ubuntu:~/devstack$ neutron agent-list
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host            | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| 1bb8eccc-ad8c-43c2-a54e-d84c6cd7acd4 | DHCP agent         | ubuntu          | nova              | :-)   | True           | neutron-dhcp-agent        |
| 3d89e79d-3cb4-4a10-ae01-773b86f83fb2 | Loadbalancer agent | ubuntu          |                   | :-)   | True           | neutron-lbaas-agent       |
| 7777a901-0c58-4180-8d01-4ea3296621a4 | Open vSwitch agent | ubuntu          |                   | :-)   | True           | neutron-openvswitch-agent |
| 93d6390a-19d2-4c79-8f76-90736bc47f5f | HyperV agent       | hyperv_normal_1 |                   | :-)   | True           | neutron-hyperv-agent      |
| c3af1d4b-5bba-47b0-b0db-b3c0d49bb41a | Metadata agent     | ubuntu          |                   | :-)   | True           | neutron-metadata-agent    |
| ec9bc28c-a5ee-4733-8b9c-3a1f99c42f08 | L3 agent           | ubuntu          | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+

Next we can disable the Windows Hyper-V agent, which is not needed since we are using the neutron Open vSwitch agent instead.

From a command prompt (cmd.exe), issue the following commands:

C:\package>sc config "neutron-hyperv-agent" start=disabled
[SC] ChangeServiceConfig SUCCESS

C:\package>sc stop "neutron-hyperv-agent"

SERVICE_NAME: neutron-hyperv-agent
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 1  STOPPED
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0

We need to create a new service called neutron-ovs-agent and put its configuration options in C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf. From a command prompt:

C:\Users\Administrator>sc create neutron-ovs-agent binPath= "\"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\OpenStackServiceNeutron.exe\" neutron-hyperv-agent \"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python27\Scripts\neutron-openvswitch-agent.exe\" --config-file \"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf\"" type= own start= auto error= ignore depend= Winmgmt displayname= "OpenStack Neutron Open vSwitch Agent Service" obj= LocalSystem
[SC] CreateService SUCCESS

C:\Users\Administrator>notepad "c:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf"

C:\Users\Administrator>sc start neutron-ovs-agent

SERVICE_NAME: neutron-ovs-agent
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 2  START_PENDING
                                (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x1
        WAIT_HINT          : 0x0
        PID                : 2740
        FLAGS              :

Note: manually creating a service for the OVS agent will no longer be necessary starting with the next Nova Hyper-V MSI installer version.

Here is the content of the neutron_ovs_agent.conf file:

[DEFAULT]
verbose=true
debug=false
control_exchange=neutron
policy_file=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=14.14.14.1
rabbit_port=5672
rabbit_userid=stackrabbit
rabbit_password=Passw0rd
logdir=C:\OpenStack\Log\
logfile=neutron-ovs-agent.log
[agent]
tunnel_types = vxlan
enable_metrics_collection=false
[SECURITYGROUP]
enable_security_group=false
[ovs]
local_ip = 14.14.14.2
tunnel_bridge = br-tun
integration_bridge = br-int
tenant_network_type = vxlan
enable_tunneling = true
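
Before starting the service, the file can be sanity-checked programmatically — a hedged Python sketch (our own addition, not part of the deployment steps) that parses an abbreviated copy of the config above:

```python
import configparser

# Parse an abbreviated copy of neutron_ovs_agent.conf and verify a few key
# options. In practice you would read the real file from disk instead of
# embedding the text.
conf_text = """\
[DEFAULT]
verbose = true
debug = false
[agent]
tunnel_types = vxlan
[SECURITYGROUP]
enable_security_group = false
[ovs]
local_ip = 14.14.14.2
tunnel_bridge = br-tun
integration_bridge = br-int
tenant_network_type = vxlan
"""

cp = configparser.ConfigParser()
cp.read_string(conf_text)
print(cp["ovs"]["local_ip"])        # 14.14.14.2
print(cp["agent"]["tunnel_types"])  # vxlan
```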

Now if we run ovs-vsctl show, we can see a VXLAN tunnel in place:

PS C:\> ovs-vsctl.exe show
a81a54fc-0a3c-4152-9a0d-f3cbf4abc3ca
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0e0e0e01"
            Interface "vxlan-0e0e0e01"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="14.14.14.2", out_key=flow, remote_ip="14.14.14.1"}
    Bridge "br-port1"
        Port "port1"
            Interface "port1"
        Port "br-port1"
            Interface "br-port1"
                type: internal

After spawning a Nova instance on the Hyper-V node you should see:

PS C:\> get-vm

Name              State   CPUUsage(%) MemoryAssigned(M) Uptime   Status
----              -----   ----------- ----------------- ------   ------
instance-00000003 Running 0           512               00:01:09 Operating normally


PS C:\Users\Administrator> Get-VMConsole instance-00000003
PS C:\> ovs-vsctl.exe show
a81a54fc-0a3c-4152-9a0d-f3cbf4abc3ca
    Bridge br-int
        fail_mode: secure
        Port "f44f4971-4a75-4ba8-9df7-2e316f799155"
            tag: 1
            Interface "f44f4971-4a75-4ba8-9df7-2e316f799155"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0e0e0e01"
            Interface "vxlan-0e0e0e01"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="14.14.14.2", out_key=flow, remote_ip="14.14.14.1"}
    Bridge "br-port1"
        Port "port1"
            Interface "port1"
        Port "br-port1"
            Interface "br-port1"
                type: internal

In this example, “f44f4971-4a75-4ba8-9df7-2e316f799155” is the OVS port name associated with the instance-00000003 VM vNIC. You can find the details by running the following PowerShell cmdlet:

PS C:\Users\Administrator> Get-VMByOVSPort -OVSPortName "f44f4971-4a75-4ba8-9df7-2e316f799155"
...
ElementName                          : instance-00000003
...

The VM instance-00000003 got an IP address from the neutron DHCP agent, with fully functional networking between KVM and Hyper-V hosted virtual machines!

This is everything you need to get started with OpenStack, Hyper-V and OVS.

In the next blog post we will show you how to use OVS on Hyper-V with a VXLAN tunnel, without OpenStack.

The post Open vSwitch 2.5 on Hyper-V (OpenStack) – Part 1 appeared first on Cloudbase Solutions.

Open vSwitch 2.4 on Hyper-V – Part 1
https://cloudbase.it/open-vswitch-24-on-hyperv-part-1/
Mon, 28 Sep 2015 11:00:32 +0000

We are happy to announce the availability of the Open vSwitch (OVS) 2.4.0 beta for Microsoft Hyper-V Server 2012, 2012 R2 and 2016 (technical preview) thanks to the joint effort of Cloudbase Solutions, VMware and the rest of the Open vSwitch community. Furthermore, support for Open vSwitch on OpenStack Hyper-V compute nodes is also available starting with Kilo!

The OVS 2.4.0 release includes the Open vSwitch CLI tools and daemons (e.g. ovsdb-server, ovs-vswitchd, ovs-vsctl, ovs-ofctl, etc.), and an updated version of the OVS Hyper-V virtual switch forwarding extension, providing fully interoperable VXLAN and STT encapsulation between Hyper-V and Linux, including KVM based virtual machines.

As usual, we also released an MSI installer that takes care of the Windows services for the ovsdb-server and ovs-vswitchd daemons along with all the required binaries and configurations.

All the Open vSwitch code is available as open source here:

https://github.com/openvswitch/ovs/tree/branch-2.4
https://github.com/cloudbase/ovs/tree/branch-2.4-ovs

Supported Windows operating systems:

  • Windows Server and Hyper-V Server 2012 and 2012 R2
  • Windows Server and Hyper-V Server 2016 (technical preview)
  • Windows 8, 8.1 and 10

 

Installing Open vSwitch on Hyper-V

The entire installation process is seamless. Download our installer and run it. You’ll be welcomed by the following screen:

[Installer screenshot]

 

Click “Next”, accept the license, click “Next” again and you’ll have the option to install both the Hyper-V virtual switch extension driver and the command line tools. In case you’d like to install the command line tools only to connect remotely to a Windows or Linux OVS server, just deselect the driver option.

 


 

Click “Next” followed by “Install” and the installation will start. You’ll have to confirm that you want to install the signed kernel driver and the process will be completed in a matter of a few seconds, generating an Open vSwitch database and starting the ovsdb-server and ovs-vswitchd services.

 


 

The installer also adds the command line tools folder to the system path, available after the next logon or CLI shell execution.

 

Unattended installation

Fully unattended installation is also available (if you have already accepted/imported our certificate) in order to install Open vSwitch with Windows GPOs, Puppet, Chef, SaltStack, DSC or any other automated deployment solution:

msiexec /i openvswitch-hyperv-installer-beta.msi /l*v log.txt

Configuring Open vSwitch on Windows

Create a Hyper-V external virtual switch. Remember that if you want to take advantage of VXLAN or STT tunnelling you will have to create an external virtual switch with the AllowManagementOS flag set to true.

For example:

PS C:\package> Get-VMSwitch

Name     SwitchType NetAdapterInterfaceDescription
----     ---------- ------------------------------
external External   Intel(R) PRO/1000 MT Network Connection #2

PS C:\package> Get-VMNetworkAdapter -ManagementOS -SwitchName external

Name     IsManagementOs VMName SwitchName MacAddress   Status IPAddresses
----     -------------- ------ ---------- ----------   ------ -----------
external True                  external   000C293F2BCF {Ok}

To verify that the extension has been installed on our system:

PS C:\package> Get-VMSwitchExtension external

Id                  : EA24CD6C-D17A-4348-9190-09F0D5BE83DD
Name                : Microsoft NDIS Capture
Vendor              : Microsoft
Version             : 6.3.9600.16384
ExtensionType       : Monitoring
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 42406c9a-7b64-432a-adcd-83aa60aefeb9
SwitchName          : external
Enabled             : False
Running             : False
ComputerName        : WIN-L8H4PEU1R8B
Key                 :
IsDeleted           : False

Id                  : E7C3B2F0-F3C5-48DF-AF2B-10FED6D72E7A
Name                : Microsoft Windows Filtering Platform
Vendor              : Microsoft
Version             : 6.3.9600.16384
ExtensionType       : Filter
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 42406c9a-7b64-432a-adcd-83aa60aefeb9
SwitchName          : external
Enabled             : True
Running             : True
ComputerName        : WIN-L8H4PEU1R8B
Key                 :
IsDeleted           : False

Id                  : 583CC151-73EC-4A6A-8B47-578297AD7623
Name                : Open vSwitch Extension
Vendor              : Open vSwitch
Version             : 11.56.50.171
ExtensionType       : Forwarding
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 42406c9a-7b64-432a-adcd-83aa60aefeb9
SwitchName          : external
Enabled             : False
Running             : False
ComputerName        : WIN-L8H4PEU1R8B
Key                 :
IsDeleted           : False

We can now enable the OVS extension on the external virtual switch:

PS C:\package> Enable-VMSwitchExtension "Open vSwitch Extension" -VMSwitchName external

Id                  : 583CC151-73EC-4A6A-8B47-578297AD7623
Name                : Open vSwitch Extension
Vendor              : Open vSwitch
Version             : 11.56.50.171
ExtensionType       : Forwarding
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 42406c9a-7b64-432a-adcd-83aa60aefeb9
SwitchName          : external
Enabled             : True
Running             : True
ComputerName        : WIN-L8H4PEU1R8B
Key                 :
IsDeleted           : False

Please note that the moment you enable the extension, the virtual switch will stop forwarding traffic until configured:

PS C:\package> ovs-vsctl.exe add-br br-tun
PS C:\package> ovs-vsctl.exe add-port br-tun external.1
PS C:\package> ovs-vsctl.exe add-port br-tun internal
PS C:\package> ping 10.13.10.30

Pinging 10.13.10.30 with 32 bytes of data:
Reply from 10.13.10.30: bytes=32 time=2ms TTL=64
Reply from 10.13.10.30: bytes=32 time<1ms TTL=64

Why is the above needed?

To seamlessly integrate Open vSwitch with the Hyper-V networking model we need to use Hyper-V virtual switch ports instead of tap devices (Linux). This is the main difference in the architectural model between Open vSwitch on Windows compared to its Linux counterpart.

From the OVS reference:

“In OVS for Hyper-V, we use ‘external’ as a special name to refer to the physical NICs connected to the Hyper-V switch. An index is added to this special name to refer to the particular physical NIC. Eg. ‘external.1’ refers to the first physical NIC on the Hyper-V switch. (…) Internal port is the virtual adapter created on the Hyper-V switch using the ‘AllowManagementOS’ setting. In OVS for Hyper-V, we use a ‘internal’ as a special name to refer to that adapter.”

Note: the above is subject to change. The actual adapter names will be used in an upcoming release (e.g. Ethernet1) in place of “external.x”.

 

Limitations

We currently support a single Hyper-V virtual switch in our forwarding extension. This is subject to change in the near future.

 

OpenStack Integration with Open vSwitch on Windows

OpenStack is a very common use case for Open vSwitch on Hyper-V. The following example is based on a DevStack Kilo All-in-One deployment on Ubuntu 14.04 LTS with a Hyper-V compute node, but the concepts and the following steps apply to any OpenStack deployment.

Let’s install our DevStack node. Here’s a sample localrc configuration:

ubuntu@ubuntu:~/devstack$ cat localrc 
# Misc
HOST_IP=10.13.10.30
DATABASE_PASSWORD=Passw0rd
ADMIN_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
SERVICE_TOKEN=Passw0rd
RABBIT_PASSWORD=Passw0rd

KEYSTONE_BRANCH=stable/kilo
NOVA_BRANCH=stable/kilo
NEUTRON_BRANCH=stable/kilo
GLANCE_BRANCH=stable/kilo
HORIZON_BRANCH=stable/kilo
REQUIREMENTS_BRANCH=stable/kilo

# Reclone each time
RECLONE=yes

# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs

# Pre-requisite
ENABLED_SERVICES=rabbit,mysql,key

# Nova - Compute Service
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch
IMAGE_URLS+=",https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"

# Neutron - Networking Service
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron,g-api,g-reg

# Horizon
ENABLED_SERVICES+=,horizon

# VXLAN configuration
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
ENABLE_TENANT_TUNNELS=True
Q_ML2_TENANT_NETWORK_TYPE=vxlan
TENANT_TUNNEL_RANGE=5000:10000

Networking:

ubuntu@ubuntu:~/devstack$ ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:0c:29:87:f9:4a  
          inet addr:10.13.10.30  Bcast:10.13.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe87:f94a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1481 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1642 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:101988 (101.9 KB)  TX bytes:112315 (112.3 KB)
          Interrupt:16 Base address:0x2000

After DevStack finishes installing we can add some Hyper-V VHD or VHDX images to Glance, for example our Windows Server 2012 R2 evaluation image. Additionally, since we are using VXLAN, the default guest MTU should be set to 1450. This can be done via a DHCP option if the guest supports it, as described here.
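The 1450 figure comes from subtracting the VXLAN encapsulation overhead from the standard 1500-byte physical MTU. A quick sanity check of the arithmetic (assuming IPv4 transport and no outer VLAN tag):

```shell
# VXLAN-over-IPv4 encapsulation overhead, in bytes:
# outer IPv4 header (20) + UDP (8) + VXLAN (8) + inner Ethernet (14) = 50
physical_mtu=1500
overhead=$((20 + 8 + 8 + 14))
echo $((physical_mtu - overhead))
```

With IPv6 transport the outer header grows to 40 bytes, so the guest MTU would have to drop further, to 1430.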

Now let’s move to the Hyper-V node. First we have to download the latest OpenStack compute installer:

PS C:\package> Start-BitsTransfer https://www.cloudbase.it/downloads/HyperVNovaCompute_Kilo_2015_1.msi

Full steps on how to install and configure OpenStack on Hyper-V are available here: OpenStack on Windows installation.

In our example, the Hyper-V node will use the following adapter to connect to the OpenStack environment:

Ethernet adapter vEthernet (external):

   Connection-specific DNS Suffix  . :
   IPv6 Address. . . . . . . . . . . : fd1a:32:d256:0:7911:fd1e:32b8:1d50
   Link-local IPv6 Address . . . . . : fe80::7911:fd1e:32b8:1d50%19
   IPv4 Address. . . . . . . . . . . : 10.13.10.35
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

This is the adapter bound to the external virtual switch, as created during the previous steps.

We can now verify our deployment by taking a look at the Nova services and Neutron agents status on the OpenStack controller and ensuring that they are up and running:

ubuntu@ubuntu:~/devstack$ nova service-list
+----+----------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary         | Host            | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+----------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor | ubuntu          | internal | enabled | up    | 2015-09-17T10:02:15.000000 | -               |
| 2  | nova-cert      | ubuntu          | internal | enabled | up    | 2015-09-17T10:02:18.000000 | -               |
| 3  | nova-scheduler | ubuntu          | internal | enabled | up    | 2015-09-17T10:02:21.000000 | -               |
| 4  | nova-compute   | ubuntu          | nova     | enabled | up    | 2015-09-17T10:02:19.000000 | -               |
| 5  | nova-compute   | WIN-L8H4PEU1R8B | nova     | enabled | up    | 2015-09-17T10:02:17.000000 | -               |
+----+----------------+-----------------+----------+---------+-------+----------------------------+-----------------+
ubuntu@ubuntu:~/devstack$ neutron agent-list
+--------------------------------------+--------------------+-----------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host            | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-----------------+-------+----------------+---------------------------+
| 2cbf5b0c-5d31-40a5-8abc-c889663e2cb4 | L3 agent           | ubuntu          | :-)   | True           | neutron-l3-agent          |
| 4de21c7c-5e50-4835-96f3-d34228cf2480 | DHCP agent         | ubuntu          | :-)   | True           | neutron-dhcp-agent        |
| 530ace5c-bb03-4b56-a087-b2048261255a | Open vSwitch agent | ubuntu          | :-)   | True           | neutron-openvswitch-agent |
| 90c59a72-319c-4019-94aa-b808a4f3dfb0 | Metadata agent     | ubuntu          | :-)   | True           | neutron-metadata-agent    |
| fecf11f3-7a64-4b81-8c2d-11fdd1dddbd9 | HyperV agent       | WIN-L8H4PEU1R8B | :-)   | True           | neutron-hyperv-agent      |
+--------------------------------------+--------------------+-----------------+-------+----------------+---------------------------+

Next we can disable the Windows Hyper-V agent, which is not needed since we use OVS:

C:\package>sc config "neutron-hyperv-agent" start= disabled
[SC] ChangeServiceConfig SUCCESS

C:\package>sc stop "neutron-hyperv-agent"

SERVICE_NAME: neutron-hyperv-agent
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 1  STOPPED
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0

We need to create a new service called neutron-ovs-agent and put its configuration options in C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf. From a command prompt:

C:\Users\Administrator>sc create neutron-ovs-agent binPath= "\"C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\bin\OpenStackServiceNeutron.exe\" neutron-hyperv-agent \"C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\Scripts\neutron-openvswitch-agent.exe\" --config-file \"C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf\"" type= own start= auto  error= ignore depend= Winmgmt displayname= "OpenStack Neutron Open vSwitch Agent Service" obj= LocalSystem
[SC] CreateService SUCCESS

C:\Users\Administrator>notepad "c:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf"

C:\Users\Administrator>sc start neutron-ovs-agent

SERVICE_NAME: neutron-ovs-agent
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 2  START_PENDING
                                (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x1
        WAIT_HINT          : 0x0
        PID                : 2740
        FLAGS              :

Note: creating a service manually for the OVS agent won’t be necessary anymore starting with the next Nova Hyper-V MSI installer version.

Here’s the content of the neutron_ovs_agent.conf file:

[DEFAULT]
verbose=true
debug=true
control_exchange=neutron
policy_file=C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=10.13.10.30
rabbit_port=5672
rabbit_userid=guest
rabbit_password=guest
logdir=C:\OpenStack\Log\
logfile=neutron-ovs-agent.log
[agent]
tunnel_types = vxlan
enable_metrics_collection=false
[SECURITYGROUP]
enable_security_group=false
[ovs]
local_ip = 10.13.10.35
tunnel_bridge = br-tun
integration_bridge = br-int
tenant_network_type = vxlan
enable_tunneling = true

Now if we run ovs-vsctl show, we can see a VXLAN tunnel in place:

PS C:\Users\Administrator> ovs-vsctl.exe show
01ee44a6-9fac-461a-a8c1-da77a09fae69
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port internal
            Interface internal
        Port "vxlan-0a0d0a1e"
            Interface "vxlan-0a0d0a1e"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.10.35", out_key=flow, remote_ip="10.13.10.30"}

        Port "external.1"
            Interface "external.1"
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal

After spawning a Nova instance on the Hyper-V node you should see:

PS C:\Users\Administrator> get-vm

Name              State   CPUUsage(%) MemoryAssigned(M) Uptime   Status
----              -----   ----------- ----------------- ------   ------
instance-00000004 Running 4           2048              00:00:41 Operating normally


PS C:\Users\Administrator> Get-VMConsole instance-00000004
PS C:\Users\Administrator> ovs-vsctl.exe show
01ee44a6-9fac-461a-a8c1-da77a09fae69
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port internal
            Interface internal
        Port "vxlan-0a0d0a1e"
            Interface "vxlan-0a0d0a1e"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.10.35", out_key=flow, remote_ip="10.13.10.30"}

        Port "external.1"
            Interface "external.1"
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "dbc80e38-96a8-4e26-bc74-3aa03aea23f9"
            tag: 1
            Interface "dbc80e38-96a8-4e26-bc74-3aa03aea23f9"

In this example, “dbc80e38-96a8-4e26-bc74-3aa03aea23f9” is the OVS port name associated with the instance-00000004 VM vNIC. You can find out the details by running the following PowerShell cmdlet:

PS C:\Users\Administrator> Get-VMByOVSPort -OVSPortName "dbc80e38-96a8-4e26-bc74-3aa03aea23f9"
...
ElementName                          : instance-00000004
...

The VM instance-00000004 got an IP address from the neutron DHCP agent, with fully functional networking between KVM and Hyper-V hosted virtual machines!

This is everything you need to get started with OpenStack, Hyper-V and OVS. In the next blog post we’ll show how to manage OVS on Hyper-V without OpenStack.

 

Notes

The beta installer is built by our Jenkins servers every time a new commit lands in the project repositories, so expect frequent updates.

 

The post Open vSwitch 2.4 on Hyper-V – Part 1 appeared first on Cloudbase Solutions.

VirtualBox driver for OpenStack https://cloudbase.it/virtualbox-openstack-driver/ Thu, 09 Apr 2015 18:49:07 +0000 http://www.cloudbase.it/?p=2165

More and more people are interested in cloud computing and OpenStack but many of them give it up because they can’t test or interact with this kind of infrastructure. This is mostly a result of either high costs of hardware or the difficulty of the deployment in a particular environment.

 
In order to help the community to interact more with cloud computing and learn about it, Cloudbase Solutions has come up with a simple VirtualBox driver for OpenStack. VirtualBox allows you to set up a cloud environment on your personal laptop, no matter which operating system you’re using (Windows, Linux, OS X). It also gets the job done with a free and familiar virtualization environment.

 

Nova hypervisor support matrix

Feature Status VirtualBox
Attach block volume to instance optional Partially supported
Detach block volume from instance optional Partially supported
Evacuate instances from host optional Not supported
Guest instance status mandatory Supported
Guest host status optional Supported
Live migrate instance across hosts optional Not supported
Launch instance mandatory Supported
Stop instance CPUs optional Supported
Reboot instance optional Supported
Rescue instance optional Not supported
Resize instance optional Supported
Restore instance optional Supported
Service control optional Not supported
Set instance admin password optional Not supported
Save snapshot of instance disk optional Supported
Suspend instance optional Supported
Swap block volumes optional Not supported
Shutdown instance mandatory Supported
Resume instance CPUs optional Supported
Auto configure disk optional Not supported
Instance disk I/O limits optional Not supported
Config drive support choice Not supported
Inject files into disk image optional Not supported
Inject guest networking config optional Not supported
Remote desktop over RDP choice Supported
View serial console logs choice Not supported
Remote desktop over SPICE choice Not supported
Remote desktop over VNC choice Supported
Block storage support optional Supported
Block storage over fibre channel optional Not supported
Block storage over iSCSI condition Supported
CHAP authentication for iSCSI optional Supported
Image storage support mandatory Supported
Network firewall rules optional Not supported
Network routing optional Not supported
Network security groups optional Not supported
Flat networking choice Supported
VLAN networking choice Not supported

More information regarding this feature can be found on the following pages: Nova Support Matrix and Hypervisor Support Matrix.

VirtualBox supported features

Guest instance status

Provides a quick report on information about the guest instance, including the power state, memory allocation, CPU allocation, number of vCPUs and cumulative CPU execution time.

Virtualbox Driver - Guest instance status
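The driver obtains this data by invoking VBoxManage (the binary path is configurable via the vboxmanage_cmd option shown later in this post). A sketch of extracting the power state from `VBoxManage showvminfo --machinereadable` output; the sample output is inlined here so the snippet runs without a VirtualBox host:

```shell
# Sample fields from "VBoxManage showvminfo <vm> --machinereadable" (inlined for illustration)
vminfo='name="instance-00000001"
VMState="running"
memory=2048
cpus=1'
# Extract the power state, stripping the surrounding quotes
state=$(echo "$vminfo" | awk -F= '$1=="VMState" {gsub(/"/, "", $2); print $2}')
echo "$state"
```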

Guest host status

Provides a quick report of available resources on the host machine.

Virtualbox Driver - Hypervisor information

Launch instance

Creates a new instance (virtual machine) on the virtualization platform.

Virtualbox Driver - Launch instance

Shutdown instance

Virtualbox Driver - Shutdown instance

Stop instance CPUs

Stopping an instance CPUs can be thought of as roughly equivalent to suspend-to-RAM. The instance is still present in memory, but execution has stopped.

Virtualbox Driver - Stop instance CPUs

Resume instance CPUs

Virtualbox Driver - Resume instance CPUs

Suspend instance

Suspending an instance can be thought of as roughly equivalent to suspend-to-disk. The instance no longer consumes any RAM or CPUs, having its live running state preserved in a file on disk. It can later be restored, at which point it should continue execution where it left off.

Virtualbox Driver - Suspend instance

Save snapshot of instance disk

The snapshot operation allows the current state of the instance root disk to be saved and uploaded back into the glance image repository. The instance can later be booted again using this saved image.
VirtualBox Driver - Save snapshot of instance disk

Block storage support

Block storage provides instances with direct attached virtual disks that can be used for persistent storage of data. As an alternative to direct attached disks, an instance may choose to use network based persistent storage.

Virtualbox Driver - Block storage support

Remote desktop over VNC

Virtualbox Driver - Remote desktop over VNC

Note: In order to use this feature, the VNC extension pack for VirtualBox must be installed.

You can list all of the available extension packs by running the following command:

VBoxManage list extpacks

Pack no. 0: Oracle VM VirtualBox Extension Pack
 Version: 4.3.20
 Revision: 96996
 Edition:
 Description: USB 2.0 Host Controller, Host Webcam, VirtualBox RDP, PXE ROM with E1000 support.
 VRDE Module: VBoxVRDP
 Usable: true
 Why unusable:

 Pack no. 1: VNC
 Version: 4.3.18
 Revision: 96516
 Edition:
 Description: VNC plugin module
 VRDE Module: VBoxVNC
 Usable: true
 Why unusable:

Setting up DevStack environment

Create Virtual Machine

  • Processors:
    • Number of processors: 2
    • Number of cores per processor: 1
  • Memory: 4GB RAM (Recommended)
  • HDD – SATA – 20 GB Preallocated
  • Network:
    • Network Adapter 1: NAT
    • Network Adapter 2: Host Only
    • Network Adapter 3: NAT
  • Operating system – Ubuntu Server 14.04 (Recommended)

Update System

$ sudo apt-get update
$ sudo apt-get upgrade

Install openssh-server, git, vim, openvswitch-switch

$ sudo apt-get install -y git vim openssh-server openvswitch-switch

Edit network Interfaces

Here’s an example configuration. You’re free to use your own settings.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

# The management interface
auto eth1
iface eth1 inet manual
up ip link set eth1 up
up ip link set eth1 promisc on
down ip link set eth1 promisc off
down ip link set eth1 down

# The public interface
auto eth3
iface eth3 inet manual
up ip link set eth3 up
down ip link set eth3 down

Clone devstack

$ cd
$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack

Change local.conf

$ sudo vim ~/devstack/local.conf

Here we have a template config file. You can also use your own settings.

[[local|localrc]]
HOST_IP=10.0.2.15
DEVSTACK_BRANCH=master
DEVSTACK_PASSWORD=Passw0rd

#Services to be started
enable_service rabbit
enable_service mysql

enable_service key

enable_service n-api
enable_service n-crt
enable_service n-obj
enable_service n-cond
enable_service n-sch
enable_service n-cauth
enable_service n-novnc
# Do not use Nova-Network
disable_service n-net

enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-lbaas
enable_service q-fwaas
enable_service q-metering
enable_service q-vpn

disable_service horizon

enable_service g-api
enable_service g-reg

enable_service cinder
enable_service c-api
enable_service c-vol
enable_service c-sch
enable_service c-bak

disable_service s-proxy
disable_service s-object
disable_service s-container
disable_service s-account

enable_service heat
enable_service h-api
enable_service h-api-cfn
enable_service h-api-cw
enable_service h-eng

disable_service ceilometer-acentral
disable_service ceilometer-collector
disable_service ceilometer-api

enable_service tempest

# To add a local compute node, enable the following services
disable_service n-cpu
disable_service ceilometer-acompute

IMAGE_URLS+=",https://raw.githubusercontent.com/cloudbase/ci-overcloud-init-scripts/master/scripts/devstack_vm/cirros-0.3.3-x86_64.vhd.gz"
HEAT_CFN_IMAGE_URL="https://www.cloudbase.it/downloads/Fedora-x86_64-20-20140618-sda.vhd.gz"

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_TENANT_NETWORK_TYPE=vlan

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1
OVS_BRIDGE_MAPPINGS=physnet1:br-eth1

OVS_ENABLE_TUNNELING=False
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=500:2000

GUEST_INTERFACE_DEFAULT=eth1
PUBLIC_INTERFACE_DEFAULT=eth3

CINDER_SECURE_DELETE=False
VOLUME_BACKING_FILE_SIZE=50000M

LIVE_MIGRATION_AVAILABLE=False
USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION=False

LIBVIRT_TYPE=kvm
API_RATE_LIMIT=False

DATABASE_PASSWORD=$DEVSTACK_PASSWORD
RABBIT_PASSWORD=$DEVSTACK_PASSWORD
SERVICE_TOKEN=$DEVSTACK_PASSWORD
SERVICE_PASSWORD=$DEVSTACK_PASSWORD
ADMIN_PASSWORD=$DEVSTACK_PASSWORD

SCREEN_LOGDIR=/opt/stack/logs/screen
VERBOSE=True
LOG_COLOR=False

SWIFT_REPLICAS=1
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5d2014f6

KEYSTONE_BRANCH=$DEVSTACK_BRANCH
NOVA_BRANCH=$DEVSTACK_BRANCH
NEUTRON_BRANCH=$DEVSTACK_BRANCH
SWIFT_BRANCH=$DEVSTACK_BRANCH
GLANCE_BRANCH=$DEVSTACK_BRANCH
CINDER_BRANCH=$DEVSTACK_BRANCH
HEAT_BRANCH=$DEVSTACK_BRANCH
TROVE_BRANCH=$DEVSTACK_BRANCH
HORIZON_BRANCH=$DEVSTACK_BRANCH
TROVE_BRANCH=$DEVSTACK_BRANCH
REQUIREMENTS_BRANCH=$DEVSTACK_BRANCH

More information regarding local.conf can be found on Devstack configuration.

Edit ~/.bashrc

$ vim ~/.bashrc

Add these lines at the end of the file.

export OS_USERNAME=admin
export OS_PASSWORD=Passw0rd
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0

Disable Firewall

$ sudo ufw disable

Run stack.sh

$ cd ~/devstack
$ ./stack.sh

IMPORTANT: If the script doesn’t end properly or something else goes wrong, please unstack first using the ./unstack.sh script.

Setup networks

# Remove the current network configuration 

# Remove the private subnet from the router
neutron router-interface-delete router1 private-subnet
# Remove the public network from the router
neutron router-gateway-clear router1
# Delete the router
neutron router-delete router1
# Delete the private network
neutron net-delete private
# Delete the public network
neutron net-delete public

# Setup the network

# Create the private network
NETID1=$(neutron net-create private --provider:network_type flat --provider:physical_network physnet1 | awk '{if (NR == 6) {print $4}}');
echo "[i] Private network id: $NETID1";
# Create the private subnetwork
SUBNETID1=$(neutron subnet-create private 10.0.1.0/24 --dns_nameservers list=true 8.8.8.8 | awk '{if (NR == 11) {print $4}}');
# Create the router
ROUTERID1=$(neutron router-create router | awk '{if (NR == 9) {print $4}}');
# Attach the private subnetwork to the router
neutron router-interface-add $ROUTERID1 $SUBNETID1
# Create the public network
EXTNETID1=$(neutron net-create public --router:external | awk '{if (NR == 6) {print $4}}');
# Create the public subnetwork
neutron subnet-create public --allocation-pool start=10.0.2.100,end=10.0.2.120 --gateway 10.0.2.1 10.0.2.0/24 --disable-dhcp    
# Attach the public network to the router
neutron router-gateway-set $ROUTERID1 $EXTNETID1

# Security Groups

# Enable ping
nova secgroup-add-rule default ICMP 8 8 0.0.0.0/0
# Enable SSH
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# Enable RDP
nova secgroup-add-rule default tcp 3389 3389 0.0.0.0/0

Change current version of nova and neutron

For the moment the Nova driver and Neutron agent for VirtualBox are not included in the current version of OpenStack. In order to use them we must change the versions of nova and neutron installed by DevStack.

  • Change the nova version used:

$ cd /opt/stack/nova
$ git remote add vbox https://github.com/cloudbase/nova-virtualbox.git
$ git fetch vbox
$ git checkout -t vbox/virtualbox_driver
$ sudo python setup.py install

  • Change the neutron version used:

$ cd /opt/stack/neutron
$ git remote add vbox https://github.com/cloudbase/neutron-virtualbox.git
$ git fetch vbox
$ git checkout -t vbox/virtualbox_agent
$ sudo python setup.py install

  • Change mechanism drivers:

$ cd /etc/neutron/plugins/ml2
$ vim ml2_conf.ini

  • Add vbox in the following line:

mechanism_drivers = openvswitch,vbox

Port forwarding

In order to access the services provided by the DevStack virtual machine from the host machine, you have to forward the relevant ports to the guest.

For each port used, we need to run one of the following commands:

# If the virtual machine is in powered-off state:
$ VBoxManage modifyvm DevStack --natpf<1-N> [<rulename>],tcp|udp,[<hostip>],
                               <hostport>,[<guestip>],<guestport>

# If the virtual machine is running:
$ VBoxManage controlvm DevStack natpf<1-N> [<rulename>],tcp|udp,[<hostip>],
                                <hostport>,[<guestip>],<guestport>

For example the required rules for a compute node can be the following:

# Message Broker (AMQP traffic) - 5672
$ VBoxManage controlvm DevStack natpf1 "Message Broker (AMQP traffic), tcp, 127.0.0.1, 5672, 10.0.2.15, 5672"

# iSCSI target - 3260
$ VBoxManage controlvm DevStack natpf1 "iSCSI target, tcp, 127.0.0.1, 3260, 10.0.2.15, 3260"

# Block Storage (cinder) - 8776
$ VBoxManage controlvm DevStack natpf1 "Block Storage (cinder), tcp, 127.0.0.1, 8776, 10.0.2.15, 8776"

# Networking (neutron) - 9696
$ VBoxManage controlvm DevStack natpf1 "Networking (neutron), tcp, 127.0.0.1, 9696, 10.0.2.15, 9696"

# Identity service (keystone) - 35357 or 5000
$ VBoxManage controlvm DevStack natpf1 "Identity service (keystone) administrative endpoint, tcp, 127.0.0.1, 35357, 10.0.2.15, 35357"

# Image Service (glance) API - 9292
$ VBoxManage controlvm DevStack natpf1 "Image Service (glance) API, tcp, 127.0.0.1, 9292, 10.0.2.15, 9292"

# Image Service registry - 9191
$ VBoxManage controlvm DevStack natpf1 "Image Service registry, tcp, 127.0.0.1, 9191, 10.0.2.15, 9191"

# HTTP - 80
$ VBoxManage controlvm DevStack natpf1 "HTTP, tcp, 127.0.0.1, 80, 10.0.2.15, 80"

# HTTP alternate
$ VBoxManage controlvm DevStack natpf1 "HTTP alternate, tcp, 127.0.0.1, 8080, 10.0.2.15, 8080"

# HTTPS - 443
$ VBoxManage controlvm DevStack natpf1 "HTTPS, tcp, 127.0.0.1, 443, 10.0.2.15, 443"
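Typing each rule by hand quickly gets tedious. A small hypothetical helper (service names and ports taken from the list above; the guest IP is the DevStack VM’s NAT address) can print all the commands for review before you run them:

```shell
# Hypothetical helper: print one natpf rule per service (it only echoes, never executes)
guest_ip="10.0.2.15"
gen_rules() {
  while read -r port name; do
    echo "VBoxManage controlvm DevStack natpf1 \"$name, tcp, 127.0.0.1, $port, $guest_ip, $port\""
  done <<'EOF'
5672 Message Broker (AMQP traffic)
9696 Networking (neutron)
9292 Image Service (glance) API
EOF
}
gen_rules
```

Extending the here-document with the remaining service/port pairs covers the full list.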

More information regarding OpenStack default ports can be found on Appendix A. Firewalls and default ports.

Setting up nova-compute

Clone nova

$ cd
$ git clone -b virtualbox_driver https://github.com/cloudbase/nova-virtualbox.git

Install nova & requirements

$ cd nova
$ pip install -r requirements.txt
$ python setup.py install

Configuration

The VirtualBox Nova driver has the following custom config options:

Group Config option Default value Short description
[virtualbox] remote_display False Enable or disable the VRDE Server.
[virtualbox] retry_count 3 The number of times to retry to execute command.
[virtualbox] retry_interval 1 Interval between execute attempts, in seconds.
[virtualbox] vboxmanage_cmd VBoxManage Path of VBoxManage.
[virtualbox] vrde_unique_port False Whether to use a unique port for each instance.
[virtualbox] vrde_module Oracle VM VirtualBox Extension Pack The module used by VRDE Server.
[virtualbox] vrde_port 3389 A port or a range of ports the VRDE server can bind to.
[virtualbox] vrde_require_instance_uuid_as_password False Use the instance uuid as password for the VRDE server.
[virtualbox] vrde_password_length None VRDE maximum length for password.
[virtualbox] wait_soft_reboot_seconds 60 Number of seconds to wait for instance to shut down after soft reboot request is made.
[rdp] encrypted_rdp False Enable or disable the rdp encryption.
[rdp] security_method RDP The security method used for encryption. (RDP, TLS, Negotiate).
[rdp] server_certificate None The Server Certificate.
[rdp] server_private_key None The Server Private Key.
[rdp] server_ca None The Certificate Authority (CA) Certificate.

The following config file is an example of nova_compute.conf. You can use your own settings.

[DEFAULT]
verbose=true
debug=true
use_cow_images=True
allow_resize_to_same_host=true
vnc_enabled=True
vncserver_listen = 127.0.0.1
vncserver_proxyclient_address = 127.0.0.1
novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html
# [...]

[cinder]
endpoint_template=http://127.0.0.1:8776/v2/%(project_id)s

[virtualbox]
#On Windows
#vboxmanage_cmd=C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
remote_display = true
vrde_module = VNC
vrde_port = 5900-6000
vrde_unique_port = true
vrde_password_length=20
vrde_require_instance_uuid_as_password=True
[rdp]

#encrypted_rdp=true
#security_method=RDP
#server_certificate=server_cert.pem
#server_private_key=server_key_private.pem
#server_ca=ca_cert.pem
#html5_proxy_base_url=http://127.0.0.1:8000/

More information regarding compute node configuration can be found on the following pages: List of compute config options and Nova compute.

Start up nova-compute

$ nova-compute --config-file nova_compute.conf

Setting up the VirtualBox Neutron Agent

Clone neutron

$ cd
$ git clone -b virtualbox_agent https://github.com/cloudbase/neutron-virtualbox.git

Install neutron & requirements

$ cd neutron
$ pip install -r requirements.txt
$ python setup.py install

Create neutron-agent.conf

The VirtualBox Neutron agent has the following custom config options:

Group Config option Default value Short description
[virtualbox] retry_count 3 The number of times to retry executing a command.
[virtualbox] retry_interval 1 Interval between execute attempts, in seconds.
[virtualbox] vboxmanage_cmd VBoxManage Path of VBoxManage.
[virtualbox] nic_type 82540EM The network hardware which VirtualBox presents to the guest.
[virtualbox] use_local_network False Use host-only network instead of bridge.
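The first two options above describe a simple retry loop around command execution; here is a minimal Python sketch of those semantics (illustrative only, not the agent's actual code):

```python
import time

def run_with_retries(cmd, retry_count=3, retry_interval=1):
    """Call cmd() up to retry_count times, sleeping retry_interval
    seconds between attempts and re-raising the last failure."""
    for attempt in range(retry_count):
        try:
            return cmd()
        except Exception:
            if attempt == retry_count - 1:
                raise
            time.sleep(retry_interval)
```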

Here is an example neutron_agent.conf. Feel free to use your own settings.

[DEFAULT]
debug=True
verbose=True
control_exchange=neutron
policy_file=$PATH/policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=127.0.0.1
rabbit_port=5672
rabbit_userid=stackrabbit
rabbit_password=Passw0rd
logdir=$LOG_DIR
logfile=neutron-vbox-agent.log

[AGENT]
polling_interval=2
physical_network_mappings=*:vboxnet0

[virtualbox]
use_local_network=True

Start up the VirtualBox agent

$ neutron-vbox-agent --config-file neutron_agent.conf

Proof of concept

The post VirtualBox driver for OpenStack appeared first on Cloudbase Solutions.

OpenStack on Hyper-V – Icehouse 2014.1.3 – Best tested compute component? https://cloudbase.it/openstack-on-hyper-v-release-testing/ Wed, 15 Oct 2014 12:01:26 +0000 http://www.cloudbase.it/?p=1852 Releasing stable components of a large cloud computing platform like OpenStack is not something that can be taken lightheartedly, there are simply too many variables and moving parts that need to be taken in consideration. The OpenStack development cycle includes state of the art continuous integration testing including a large number of 3rd party CI testing…

The post OpenStack on Hyper-V – Icehouse 2014.1.3 – Best tested compute component? appeared first on Cloudbase Solutions.

Releasing stable components of a large cloud computing platform like OpenStack is not something that can be taken lightheartedly, there are simply too many variables and moving parts that need to be taken in consideration.

The OpenStack development cycle includes state of the art continuous integration testing including a large number of 3rd party CI testing infrastructures to make sure that any new code contribution won’t break the existing codebase.

The OpenStack on Hyper-V 3rd party CI is currently available for Nova and Neutron (with Cinder support in the works and more projects along the way), spinning up an OpenStack cloud with Hyper-V nodes for every single new code patchset to be tested, meaning hundreds of clouds deployed and dismissed per day. It’s hosted by Microsoft and maintained by a team composed of Microsoft and Cloudbase Solutions engineers.

This is a great achievement, especially when considered in the whole OpenStack picture, where dozens of other testing infrastructures operate in a similar way while hundreds of developers tirelessly submit code to be reviewed. Thanks to this large scale joint effort, QA automation has surely been taken to a whole new level.

Where’s the catch?

There’s always a tradeoff between the desired workload and the available resources. In an ideal world, we would test every possible use case scenario, including all combinations of supported operating systems and component configurations. Doing so would simply require too many resources, or execution times in the order of days. Developers and reviewers need to know if the code passed the tests, so long test execution times are simply detrimental for the project. A look at the job queue shortly before a code freeze day will give a very clear idea of what we are talking about :-).

On the other hand, stable releases require as much testing as possible, especially if you plan to sleep at night while your customers deploy your products in production environments.

To begin with, the time constraints that continuous integration testing imposes disappear, since in OpenStack we have a release every month or so. This leads us to:

Putting the test scenarios together

We just need a matrix of the operating system and project-specific option combinations that we want to test. The good news here is that the actual tests to be performed are the same ones used for continuous integration (Tempest), simply repeated for every scenario.

For the specific Hyper-V compute case, we need to test features that the upstream OpenStack CI infrastructure cannot test. Here’s a quick rundown list:

  • Every supported OS version: Hyper-V 2008 R2, 2012, 2012 R2 and vNext.
  • Live migration, which requires 2 compute servers per run
  • VHD and VHDX images (fixed, dynamic)
  • Copy on Write (CoW) and full clones 
  • Various Neutron network configurations: VLAN, flat and soon Open vSwitch!
  • Dynamic / fixed VM memory
  • API versions (v1, v2)
  • A lot more coming with the Kilo release: Hyper-V Generation 2 VMs, RemoteFX, etc
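A matrix like the one above can be enumerated programmatically. Here is a small Python sketch using a few of the axes from the list (axis values trimmed for brevity; not every combination is necessarily valid in practice, e.g. VHDX on 2008 R2):

```python
from itertools import product

# Axis values taken from the list above, trimmed for brevity.
os_versions = ["2008 R2", "2012", "2012 R2", "vNext"]
image_formats = ["VHD fixed", "VHD dynamic", "VHDX fixed", "VHDX dynamic"]
clone_modes = ["CoW", "full clone"]

# Each resulting combination runs the same Tempest suite.
scenarios = list(product(os_versions, image_formats, clone_modes))
print(len(scenarios))  # 4 * 4 * 2 = 32 combinations
```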


Downstream bug fixes and features

Another reason for performing additional tests is that “downstream” product releases integrate the “upstream” projects (the ones available on the Launchpad project page and related git repositories) with critical bug fixes not yet merged upstream (the time to land a patch is usually measured in weeks) and, optionally, new features backported from subsequent releases.

For example the OpenStack Hyper-V Icehouse 2014.1.3 release includes the following additions:

Nova

  • Hyper-V: cleanup basevolumeutils
  • Hyper-V: Skip logging out in-use targets
  • Fixes spawn issue on Hyper-V
  • Fixes Hyper-V dynamic memory issue with vNUMA
  • Fixes differencing VHDX images issue on Hyper-V
  • Fixes Hyper-V should log a clear error message
  • Fixes HyperV VM Console Log
  • Adds Hyper-V serial console log
  • Adds Hyper-V Compute Driver soft reboot implementation
  • Fixes Hyper-V driver WMI issue on 2008 R2
  • Fixes Hyper-V boot from volume live migration
  • Fixes Hyper-V volume discovery exception message
  • Add differencing vhdx resize support in Hyper-V Driver
  • Fixes Hyper-V volume mapping issue on reboot
  • HyperV Driver – Fix to implement hypervisor-uptime

Neutron

  • Fixes Hyper-V agent port disconnect issue
  • Fixes Hyper-V 2008 R2 agent VLAN Settings issue
  • Fixes Hyper-V agent stateful security group rules

Ceilometer

  • No changes from upstream

Running all the relevant integration tests against the updated repositories provides an extremely important proof for our users that the quality standards are well respected.

Source code repositories:

 

Packaging

Since we released the first Hyper-V installer for Folsom, we have had a set of goals:

  • Easy to deploy
  • Automated configuration
  • Unattended installation
  • Include a dedicated Python environment
  • Easy to automate with Puppet, Chef, SaltStack, etc
  • Familiar for Windows users
  • Familiar for DevOps
  • Handle required OS configurations (e.g. create VMSwitches)
  • No external requirements / downloads
  • Atomic deployment

The result is the Hyper-V OpenStack MSI installer that keeps getting better with every release:

 

Sharing the test results

Starting with Icehouse 2014.1.3 we decided to publish the test results and the tools that we use to automate test execution:

Test results

http://www.cloudbase.it/openstack-hyperv-release-tests-results

Each release contains a subfolder for every test execution (Hyper-V 2012 R2 VHDX, Hyper-V 2012 VHD, etc.), which in turn contains the results in HTML format along with every relevant artifact: configuration files, the list of applied Windows Update hotfixes, DevStack logs and so on.

Test tools

All the scripts that we are using are available here:

https://github.com/cloudbase/openstack-hyperv-release-tests

The main goal is to provide a set of tools that anybody can use efficiently with minimum hardware requirements and reproduce the same tests that we run (see for example the stack of Intel NUCs above).

Hosts:

  • Linux host running Ubuntu 12.04 or 14.04
  • One or more Hyper-V nodes

Install the relevant prerequisites on the Linux node.

Enable WinRM with HTTPS on the Hyper-V nodes.

Edit config.yaml, providing the desired Hyper-V node configurations and run:

./run.sh https://www.cloudbase.it/downloads/HyperVNovaCompute_Icehouse_2014_1_3.msi stable/icehouse
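For orientation, a config.yaml entry might look like the fragment below; the key names are purely illustrative assumptions, not the tool's actual schema, so check the sample file shipped in the repository for the real format:

```yaml
# Hypothetical layout -- the key names below are illustrative assumptions,
# not the tool's actual schema; see the repository's sample config.yaml.
hyperv-2012-r2-vhdx:
  host: 192.168.100.10
  winrm_user: Administrator
  winrm_password: Passw0rd
  image_format: vhdx
```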

The execution can be easily integrated with Jenkins or any other automation tool.

Run with custom parameters, for testing individual platforms:

We are definitely happy with the way in which Hyper-V support in OpenStack is growing. We are adding lots of new features and new developers keep joining the ranks, so QA has become an extremely important part of the whole equation. Our goal is to keep the process open so that anybody can review and contribute to our testing procedures, both for the stable releases and for the master branch testing executed on the Hyper-V CI infrastructure.

OpenStack Havana 2013.2.2 Hyper-V compute installer released! https://cloudbase.it/openstack-havana-2013-2-2-hyper-v-compute-installer-released/ Wed, 19 Feb 2014 21:49:20 +0000 http://www.cloudbase.it/?p=1377 Following the recent announcement last Friday of the availability of OpenStack Havana release 2013.2.2, we’re glad to announce that the Havana 2013.2.2 Hyper-V Nova compute installer is available for download.   Installing it is amazingly easy as usual, just get the free Hyper-V Server 2008 R2 / 2012 / 2012 R2 or enable the Hyper-V role on…

The post OpenStack Havana 2013.2.2 Hyper-V compute installer released! appeared first on Cloudbase Solutions.

Following the recent announcement last Friday of the availability of OpenStack Havana release 2013.2.2, we’re glad to announce that the Havana 2013.2.2 Hyper-V Nova compute installer is available for download.

 

Hyperv_nova_2013_2_2

Installing it is amazingly easy, as usual: just get the free Hyper-V Server 2008 R2 / 2012 / 2012 R2 or enable the Hyper-V role on Windows Server 2008 R2 / 2012 / 2012 R2 and start the installer. No need for additional requirements!

If you prefer to deploy OpenStack on Hyper-V via Chef, Puppet, SaltStack or group policies, here’s how to execute the installer in unattended mode.

 

How to get started?

Your Hyper-V compute nodes can be added to any Havana OpenStack cloud, for example based on Ubuntu or RDO on RHEL/CentOS. As soon as the installer is done you’ll see the new compute node in your cloud, no need for anything else.

 

What type of guest instances can I run on Hyper-V?

Windows, Linux or FreeBSD instances. Another key advantage is that, besides Windows, most modern Linux distributions already come with the Hyper-V integration components installed, so there is no need to deploy additional tools or drivers. Just make sure that your Glance images are in VHD or VHDX format!

A typical use case consists in running multiple hypervisors in your OpenStack cloud, for example KVM for Linux guests and Hyper-V for Windows guests.

If you’d like to test how Windows images run on OpenStack, here are the official Microsoft OpenStack Windows Server 2012 R2 evaluation images ready for download.

 

Licensing and support

Hyper-V 2012 R2 is free and provides all the hypervisor related features that you can find on Windows Server 2012 R2 with no memory or other usage limitations.

If you want to run Windows guests you might want to check out the Microsoft SPLA and Volume Licensing options (this applies to any hypervisor, not only Hyper-V). By using Windows Server Datacenter licenses, which provide unlimited virtualization rights, you might be surprised to see how cheap licensing can be!

Please note that based on your licensing agreement, Microsoft provides full support for your Windows virtual machines running on Hyper-V. This is rarely the case if you decide to run Windows on KVM, unless your stack is listed in the Microsoft SVVP program!

 

Release Notes

Besides the upstream Nova, Neutron and Ceilometer components updated for 2013.2.2, this release also includes additional bug fixes that have already landed in Icehouse but whose Havana backports are still in the process of being merged. Here’s the full list:

 

Nova

Neutron

Ceilometer

 

The road to Icehouse

Would you like to test how our latest bits work, maybe by using Devstack? Our Icehouse beta installer is packaged and released automatically anytime a new patch lands in Nova, Neutron or Ceilometer.

 

DevStack on Hyper-V https://cloudbase.it/devstack-on-hyper-v/ https://cloudbase.it/devstack-on-hyper-v/#comments Mon, 19 Nov 2012 19:24:06 +0000 http://www.cloudbase.it/?p=718 DevStack is without any doubt one of the easiest ways to set up an OpenStack environment for testing or development purposes (no production!!). It’s also a great way to test the Hyper-V Nova Compute and Quantum Grizzly beta versions that we are releasing these days! 🙂 Hyper-V Server 2012 is free and can be downloaded…

The post DevStack on Hyper-V appeared first on Cloudbase Solutions.

DevStack is without any doubt one of the easiest ways to set up an OpenStack environment for testing or development purposes (no production!!).
It’s also a great way to test the Hyper-V Nova Compute and Quantum Grizzly beta versions that we are releasing these days! 🙂

Hyper-V Server 2012 is free and can be downloaded from here. The installation is simple and straightforward, as with any Windows Server solution. You can of course also use the full Windows Server 2012, but unless you really miss the GUI features, there’s no need for it.

Another great option for development consists in enabling the Hyper-V role on Windows 8 (Pro or Enterprise). If you have a Mac, Linux or Windows 7 you can run both Hyper-V and DevStack virtualized on VMware Fusion 5 / Workstation 9.

To make things even easier, in this guide we’ll run DevStack in a VM on top of the Hyper-V compute node. Hyper-V is not particularly picky about hardware, which means that there’s no need for expensive servers to be used for test and development.

The DevStack VM

Let’s start by downloading an Ubuntu Server 12.04 ISO image and creating a VM on Hyper-V. Since Hyper-V Server does not provide a GUI, you can do that from a separate host (Windows 8 / Windows Server 2012) or you can issue some simple PowerShell commands.
Here’s how to download the Ubuntu ISO image via PowerShell:

 

$isourl = "http://releases.ubuntu.com/12.04/ubuntu-12.04.1-server-amd64.iso"
$isopath = "C:\ISO\ubuntu-12.04.1-server-amd64.iso"
Invoke-WebRequest -uri $isourl -OutFile $isopath

 

You will need an external virtual switch, in order for the VMs to communicate with the external world (including Internet for our DevStack VM). You can of course skip this step if you already created one.

 

$net = Get-NetAdapter
$vmswitch = New-VMSwitch External -NetAdapterName $net[0].Name -AllowManagementOS $True

 

Finally here’s how to create and start the DevStack VM, with 1 GB RAM and 15 GB HDD, a virtual network adapter attached to the external switch and the Ubuntu ISO attached to the DVD for the installation.

 

$vm = New-VM "DevStack" -MemoryStartupBytes (1024*1024*1024) -NewVHDPath "C:\VHD\DevStack.vhdx" -NewVHDSizeBytes (15*1024*1024*1024)
Set-VMDvdDrive $vm.Name -Path $isopath
Connect-VMNetworkAdapter $vm.Name -SwitchName $vmswitch.Name
Start-VM $vm

 

Now it’s time to connect to the VM console and install Ubuntu. All we need is a basic installation with SSH. DevStack will take care of the rest.

 

Console access

The free Hyper-V Server does not provide a console UI application, so we have two options:

  1. Access the server from another host using Hyper-V Manager on Windows 8 or Windows Server 2012
  2. Use our free FreeRDP based solution directly from the Hyper-V server

We’ll choose the latter in this guide as we simply love it. 🙂

In PowerShell, from the directory in which you unzipped FreeRDP, run:

 

Set-ExecutionPolicy RemoteSigned
Import-Module .\PSFreeRDP.ps1

 

And now we can finally access the console of our new VM:

 

Get-VMConsole DevStack

 

Once you are done with the Ubuntu setup, we can go on and deploy DevStack. My suggestion is to connect to the Ubuntu VM via SSH, as it’s way easier especially for pasting commands. In case you should need an SSH client for Windows, Putty is a great (and free) option.
Let’s start by adding an NTP daemon, as time synchronization issues are a typical source of headaches in OpenStack:

 

sudo apt-get install ntp
sudo service ntp start

 

We need Git to download DevStack:

sudo apt-get install git

 

Installing and running DevStack is easy:

 

git clone git://github.com/openstack-dev/devstack.git
cd devstack
./stack.sh

 

The script will ask you for a few passwords. You will find them in the “devstack/localrc” file afterwards.
In case you should prefer to run a specific version of the OpenStack components instead of the latest Grizzly bits, just add the branch name to the git clone command, e.g.:

 

git clone git://github.com/openstack-dev/devstack.git -b stable/folsom

 

Now edit your ~/.bashrc file and add the following lines at the end, in order to have your environment ready whenever you log in:

 

export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=yourpassword
export OS_AUTH_URL="http://localhost:5000/v2.0/"

 

And reload it with:

source ~/.bashrc

 

It’s time to add an image to Glance in order to spawn VMs on Hyper-V. To save you some time we prepared a ready-made Ubuntu VM image. When you create your own, remember to use the VHD format and not VHDX.

 

wget http://www.cloudbase.it/downloads/UbuntuServer1204_cloudinit.zip
unzip UbuntuServer1204_cloudinit.zip
glance image-create --name "Ubuntu Server 12.04" --property hypervisor_type=hyperv --container-format bare --disk-format vhd < UbuntuServer1204.vhd

 

Note the hypervisor_type property. By specifying it, we are asking the Nova scheduler to use this image on Hyper-V compute nodes only, which means that you can have a mix of KVM, Xen or Hyper-V nodes in your stack, letting the Nova scheduler take care of it any time you boot a new image, a great feature IMO!
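Conceptually, the scheduler's property matching behaves like this toy Python sketch (illustrative only, not Nova's actual filter code):

```python
# Toy model of image-property based scheduling (not Nova's real filter code).
images = {"Ubuntu Server 12.04": {"hypervisor_type": "hyperv"}}
hosts = [
    {"name": "kvm-node-1", "hypervisor_type": "qemu"},
    {"name": "hyperv-node-1", "hypervisor_type": "hyperv"},
]

def candidate_hosts(image_name):
    # Hosts must match the image's hypervisor_type property, if set;
    # images without the property can be scheduled anywhere.
    required = images[image_name].get("hypervisor_type")
    return [h["name"] for h in hosts
            if required is None or h["hypervisor_type"] == required]
```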
We are almost done. Let’s create a new keypair and save the private key in your user’s home:

 

test -d ~/.ssh || mkdir ~/.ssh
nova keypair-add key1 > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa

 

Ok, we are done with DevStack so far. We need to set up our Hyper-V Nova compute node, which is even easier thanks to the installer that we released :-). Let’s go back to PowerShell:

 

$src = "http://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
$dest = "$env:temp\HyperVNovaCompute_Folsom.msi"
Invoke-WebRequest -uri $src -OutFile $dest
Unblock-File $dest
Start-Process $dest

 

The installation is very easy: just follow the steps available here!
Remember to specify the IP address of your DevStack VM for Glance and RabbitMQ. As the Nova database connection string, you can simply use: mysql://root:YourDevStackPassword@YourDevstackIP/nova

(Note: never use “root” in a production environment!)

Now let’s go back to DevStack and check that all the services are up and running:

 

nova-manage service list

 

You should see a smiley “:-)” next to each service and no “XXX”.

Now it’s time to boot our first OpenStack VM:

 

nova boot --flavor 1 --image "Ubuntu Server 12.04" --key-name key1 vm1

 

You can check the progress and status of your VM with:

 

nova list

 

The first time you boot an instance it will take a few minutes, depending on the size of your Glance image, as the image itself gets cached on the compute node. Subsequent boots will be very fast.

 

Some useful tips

How to delete all the VMs at once from the command line

During testing you’ll need to clean up all the instances quite often. Here’s a simple script to do that on Linux without issuing a single “nova delete” command for every instance:

 

nova list | awk '{if (NR > 3 && $2 != "") {system("nova delete " $2);}}'
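For readers less fluent in awk, the same logic restated in Python: skip the first three lines of the `nova list` table and take the second whitespace-delimited field (the instance ID); here we just collect the IDs instead of calling nova delete:

```python
# Skip the first three lines (header and borders) of the `nova list` table
# and grab the second whitespace-delimited field, just like the awk script.
sample_output = """+----+------+--------+
| ID | Name | Status |
+----+------+--------+
| ab12 | vm1 | ACTIVE |
| cd34 | vm2 | ACTIVE |
+----+------+--------+"""

ids = []
for nr, line in enumerate(sample_output.splitlines(), start=1):
    fields = line.split()
    if nr > 3 and len(fields) > 1:
        ids.append(fields[1])  # awk would run: nova delete <id>
```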

 

How to update DevStack

I typically run the following script from the DevStack folder after a reboot to update the OpenStack components (Nova, Glance, etc.) and the DevStack scripts, just before running stack.sh:

git pull
pushd .
# Update the OpenStack services
for project in nova glance cinder keystone horizon; do
    cd /opt/stack/$project
    git pull
done
# Update and reinstall the client libraries
for client in glance nova cinder keystone; do
    cd /opt/stack/python-${client}client
    git pull
    python setup.py build
    sudo python setup.py install --force
done
popd

 

How to check your OpenStack versions

All the components in your stack need to share the same OpenStack version. Don’t mix Grizzly and Folsom components!! Here’s how to check what Nova version you are running:

 

python -c "from nova import version; print version.NOVA_VERSION"

 

For Grizzly you will get: ['2013', '1', '0'] (the last number might also be None).

 
