As we did for the network node, before starting it is good to quickly check that the remote SSH execution of the commands from the all-nodes installation section worked without problems. You can again verify this by checking the NTP installation.
In the next few paragraphs we briefly explain what happens behind the scenes when a new request to start an OpenStack instance is made. Note that this is a very high-level description.
- Authentication is performed either by the web interface horizon
  or by the nova command line tool:

  - keystone is contacted and authentication is performed
  - a token is saved in the database and returned to the client
    (horizon or the nova CLI) to be used in later interactions with
    OpenStack services for this request.
- nova-api is contacted and a new request is created:

  - it checks the validity of the token via keystone
  - checks the authorization of the user
  - validates the parameters and creates a new request in the database
  - calls the scheduler via the queue
- nova-scheduler finds an appropriate host:

  - reads the request
  - finds an appropriate host via filtering and weighting
  - calls the chosen nova-compute host via the queue
- nova-compute reads the request and starts the instance:

  - generates a proper configuration for the hypervisor
  - gets the image URI via the image id
  - downloads the image
  - requests network allocation via the queue
  - nova-compute requests the creation of a neutron port
  - neutron allocates the port:

    - allocates a valid private IP
    - instructs the plugin agent to implement the port and wire it to
      the network interface of the VM [#]_
- nova-api contacts cinder to provision the volume:

  - gets the connection parameters from cinder
  - uses iSCSI to make the volume available on the local machine
  - asks the hypervisor to provision the local volume as a virtual
    volume of the specified virtual machine

- horizon or the nova CLI polls the status of the request by
  contacting nova-api until the instance is ready.
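If you want to observe the very first step of this chain in isolation, you can request a token by hand with the keystone client. This is only a sanity check and assumes the usual OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME and OS_AUTH_URL variables are already exported in your shell (for instance from an openrc file, which depends on your setup):

root@api-node:~# keystone token-get

Among other fields, the output contains the token id that horizon or the nova CLI would reuse for all subsequent calls.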
Since we cannot use KVM because our compute nodes are virtualized and the host node does not support nested virtualization, we install qemu instead of kvm:
root@compute-1:~# apt-get install -y nova-compute-qemu \
    neutron-plugin-openvswitch-agent neutron-plugin-ml2 \
    python-libguestfs libguestfs-tools
This will also install the nova-compute package and all its dependencies.
The nova-compute daemon must be able to connect to the RabbitMQ
and MySQL servers. The minimum information you have to provide in the
/etc/nova/nova.conf file is:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
# api_paste_config=/etc/nova/api-paste.ini
# compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host = db-node
rabbit_userid = openstack
rabbit_password = gridka
# Cinder: use internal URL instead of public one.
cinder_catalog_info = volume:cinder:internalURL
# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://api-node:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.0.0.20
vncserver_listen=0.0.0.0
# Compute
# compute_driver=libvirt.LibvirtDriver
# Auth
use_deprecated_auth=false
auth_strategy=keystone

[glance]
# Imaging service
api_servers=image-node:9292
image_service=nova.image.glance.GlanceImageService

[keystone_authtoken]
auth_uri = http://auth-node:5000
admin_tenant_name = service
admin_user = nova
admin_password = gridka
You can just replace the /etc/nova/nova.conf file with the content
displayed above.
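Before going on, it can be worth checking that the compute node can actually reach the two backend services. A quick sanity check with netcat, assuming both run on db-node on their default ports (5672 for RabbitMQ, 3306 for MySQL):

root@compute-1:~# nc -zv db-node 5672
root@compute-1:~# nc -zv db-node 3306

Both commands should report that the connection succeeded; if not, fix the connectivity before continuing.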
To enable neutron for the nova-compute service you also have to ensure
the following lines are present in /etc/nova/nova.conf:
[DEFAULT]
# ...
network_api_class = nova.network.neutronv2.api.API
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
# It is fine to have Noop here, because this is the *nova*
# firewall. Neutron is responsible for configuring the firewall and its
# configuration is stored in /etc/neutron/neutron.conf
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

[neutron]
url = http://network-node:9696
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = gridka
admin_auth_url = http://auth-node:35357/v2.0
Ensure the br-int bridge has been created by the installer:
root@compute-1:~# ovs-vsctl show
62f8b342-8afa-4ce4-aa98-e2ab671d2837
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
ovs_version: "2.0.1"
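If the bridge is missing, for example because a package post-installation script failed, you can create it by hand. The --may-exist flag makes the command a no-op when the bridge is already there:

root@compute-1:~# ovs-vsctl --may-exist add-br br-int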
Ensure rp_filter is disabled. As we did before, you need to ensure
the following lines are present in the /etc/sysctl.conf file.
This file is read during startup, but not afterwards. To force Linux to re-read it you can run:
root@compute-1:~# sysctl -p /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
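You can verify at any time that the kernel picked up the new values by querying the two keys directly; both should report 0:

root@compute-1:~# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0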
Configure RabbitMQ and Keystone options for neutron, by editing
/etc/neutron/neutron.conf:
[DEFAULT]
# ...
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = db-node
rabbit_password = gridka
auth_strategy = keystone
# ...

[keystone_authtoken]
auth_host = auth-node
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = gridka
The ML2 plugin is configured in
/etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
# ...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
# ...
tunnel_id_ranges = 1:1000

[ovs]
# ...
local_ip = 10.0.0.20

[agent]
tunnel_type = gre
tunnel_types = gre
enable_tunneling = True

[securitygroup]
# ...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
Restart nova-compute and the neutron agent:
root@compute-1:~# service nova-compute restart
nova-compute stop/waiting
nova-compute start/running, process 17740
root@compute-1:~# service neutron-plugin-openvswitch-agent restart
neutron-plugin-openvswitch-agent stop/waiting
neutron-plugin-openvswitch-agent start/running, process 17788
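If either daemon does not come back up, or you want to make sure the new configuration was picked up, check the service logs. These are the usual locations on Ubuntu; the exact file names may differ slightly between releases:

root@compute-1:~# tail -n 20 /var/log/nova/nova-compute.log
root@compute-1:~# tail -n 20 /var/log/neutron/openvswitch-agent.log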
Ensure that /etc/nova/nova-compute.conf has the correct
libvirt type. For our setup this file should only contain:
[DEFAULT]
compute_driver=libvirt.LibvirtDriver

[libvirt]
virt_type=qemu
Please note that these are the lines needed in our setup, because our compute nodes are virtualized and do not support nested virtualization. In a production environment, using physical machines with full support for hardware virtualization, you would instead set:
[libvirt]
virt_type=kvm
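A quick way to decide which value is appropriate is to check whether the CPU exposes the Intel (vmx) or AMD (svm) virtualization extensions. A count of 0, as on our virtualized compute nodes, means there is no hardware support and QEMU is the only option:

root@compute-1:~# egrep -c '(vmx|svm)' /proc/cpuinfo
0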
When Nova is using the libvirt virtualization driver, the SMBIOS serial number supplied by libvirt is exposed to the guest instances running on a compute node. This serial number may expose sensitive information about the underlying compute node hardware; it is preferable to use the /etc/machine-id UUID instead of the host hardware UUID. This also means that even containers will see a separate /etc/machine-id value.
By default, the data source used to populate the host "serial" UUID exposed to guests in the virtual BIOS is the file /etc/machine-id, falling back to the libvirt-reported host UUID. If your compute node does not contain a valid /etc/machine-id file, generate one with the following command:
root@compute-1:~# uuidgen > /etc/machine-id
For further details: https://wiki.openstack.org/wiki/OSSN/OSSN-0028
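Before generating a new id, you may want to check whether the file already exists and is non-empty, so that you do not overwrite a valid one:

root@compute-1:~# test -s /etc/machine-id && cat /etc/machine-id || echo "missing or empty"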
After restarting the nova-compute service:
root@compute-1 # service nova-compute restart
you should be able to see the compute node from the api-node:
root@api-node:~# nova-manage service list
Binary           Host      Zone     Status  State Updated_At
nova-cert        api-node  internal enabled :-)   2013-08-13 13:43:35
nova-conductor   api-node  internal enabled :-)   2013-08-13 13:43:31
nova-consoleauth api-node  internal enabled :-)   2013-08-13 13:43:35
nova-scheduler   api-node  internal enabled :-)   2013-08-13 13:43:35
nova-compute     compute-1 nova     enabled :-)   None
You should also see the openvswitch agent in the output of neutron agent-list:
root@api-node:~# neutron agent-list
+--------------------------------------+--------------------+--------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host         | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+--------------+-------+----------------+---------------------------+
| 33a35494-180c-43e4-8c05-8b67011b4943 | Metadata agent     | network-node | :-)   | True           | neutron-metadata-agent    |
| 7e238cc3-641e-48ba-83f4-1d825d4a5519 | Open vSwitch agent | compute-1    | :-)   | True           | neutron-openvswitch-agent |
| 82193fd4-b1e8-4248-912a-d736431ab077 | L3 agent           | network-node | :-)   | True           | neutron-l3-agent          |
| bf45584a-8b4d-42f9-848c-2928821d4e28 | DHCP agent         | network-node | :-)   | True           | neutron-dhcp-agent        |
| c45fecd8-e893-4dd9-9427-7d561697b8c4 | Open vSwitch agent | network-node | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+--------------+-------+----------------+---------------------------+
We will test OpenStack first from the api-node using the command line interface, and then from the physical node connecting to the web interface.
The first thing we need to do is to create an SSH keypair and upload
the public key to OpenStack so that we can connect to the instance.
The command to create an SSH keypair is ssh-keygen:
root@api-node:~# ssh-keygen -t rsa -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
fa:86:74:77:a2:55:29:d8:e7:06:4a:13:f7:ca:cb:12 root@api-node
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|      . .        |
|     = . .       |
|    + + =        |
|    .S+ B        |
|    ..E * +      |
|    ..o * =      |
|     ..+ o       |
|      ...        |
+-----------------+
Then we have to create an OpenStack keypair and upload our public
key. This is done using the nova keypair-add command:
root@api-node:~# nova keypair-add gridka-api-node --pub-key ~/.ssh/id_rsa.pub
You can check that the keypair has been created with:
root@api-node:~# nova keypair-list
+-----------------+-------------------------------------------------+
| Name            | Fingerprint                                     |
+-----------------+-------------------------------------------------+
| gridka-api-node | fa:86:74:77:a2:55:29:d8:e7:06:4a:13:f7:ca:cb:12 |
+-----------------+-------------------------------------------------+
Let's get the IDs of the available images, flavors and security groups:
root@api-node:~# nova image-list
+--------------------------------------+--------------+--------+--------+
| ID                                   | Name         | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 79af6953-6bde-463d-8c02-f10aca227ef4 | cirros-0.3.0 | ACTIVE |        |
+--------------------------------------+--------------+--------+--------+
root@api-node:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
root@api-node:~# nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+
Now we are ready to start our first instance:
root@api-node:~# nova boot --image 79af6953-6bde-463d-8c02-f10aca227ef4 \
--security-group default --flavor m1.tiny --key_name gridka-api-node server-1
+-------------------------------------+--------------------------------------+
| Property | Value |
+-------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state | scheduling |
| image | cirros-0.3.0 |
| OS-EXT-STS:vm_state | building |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| flavor | m1.tiny |
| id | 8e680a03-34ac-4292-a23c-d476b209aa62 |
| security_groups | [{u'name': u'default'}] |
| user_id | 9e8ec4fa52004fd2afa121e2eb0d15b0 |
| OS-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
| status | BUILD |
| updated | 2013-08-19T09:37:34Z |
| hostId | |
| OS-EXT-SRV-ATTR:host | None |
| key_name | gridka-api-node |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| name | server-1 |
| adminPass | k7cT4nnC6sJU |
| tenant_id | 1ce38185a0c941f1b09605c7bfb15a31 |
| created | 2013-08-19T09:37:34Z |
| metadata | {} |
+-------------------------------------+--------------------------------------+
This command returns immediately, even if the OpenStack instance is not yet started:
root@api-node:~# nova list
+--------------------------------------+----------+--------+----------+
| ID                                   | Name     | Status | Networks |
+--------------------------------------+----------+--------+----------+
| 8e680a03-34ac-4292-a23c-d476b209aa62 | server-1 | BUILD  |          |
+--------------------------------------+----------+--------+----------+
root@api-node:~# nova list
+--------------------------------------+----------+--------+----------------------------+
| ID                                   | Name     | Status | Networks                   |
+--------------------------------------+----------+--------+----------------------------+
| d2ef7cbf-c506-4c67-a6b6-7bd9fecbe820 | server-1 | BUILD  | net1=10.99.0.2, 172.16.1.1 |
+--------------------------------------+----------+--------+----------------------------+
root@api-node:~# nova list
+--------------------------------------+----------+--------+----------------------------+
| ID                                   | Name     | Status | Networks                   |
+--------------------------------------+----------+--------+----------------------------+
| d2ef7cbf-c506-4c67-a6b6-7bd9fecbe820 | server-1 | ACTIVE | net1=10.99.0.2, 172.16.1.1 |
+--------------------------------------+----------+--------+----------------------------+
When the instance is in the ACTIVE state it is
running on a compute node. However, the boot process
can take some time, so don't worry if the following command fails
a few times before you can actually connect to the instance:
root@api-node:~# ssh 172.16.1.1
The authenticity of host '172.16.1.1 (172.16.1.1)' can't be established.
RSA key fingerprint is 38:d2:4c:ee:31:11:c1:1a:0f:b6:3b:dc:f2:d2:46:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.1.1' (RSA) to the list of known hosts.
# uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
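If instead the connection keeps timing out even after the instance has been ACTIVE for a while, the default security group may simply not allow ICMP or SSH traffic. As a sketch, the following rules open ping and SSH from everywhere (adjust the source range to your own policy):

root@api-node:~# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
root@api-node:~# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0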
You can attach a volume to a running instance easily:
root@api-node:~# nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 180a081a-065b-497e-998d-aa32c7c295cc | available | test2        | 1    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
root@api-node:~# nova volume-attach server-1 180a081a-065b-497e-998d-aa32c7c295cc /dev/vdb
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | d2ef7cbf-c506-4c67-a6b6-7bd9fecbe820 |
| id       | 180a081a-065b-497e-998d-aa32c7c295cc |
| volumeId | 180a081a-065b-497e-998d-aa32c7c295cc |
+----------+--------------------------------------+
Inside the instance, a new disk named /dev/vdb will appear. This
disk is persistent: if you terminate the instance
and then attach the disk to a new instance, the content of the
volume is preserved.
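Before its first use the volume has to be formatted and mounted from inside the instance. A minimal sketch, assuming the guest image ships a mkfs tool (the CirrOS image provides mkfs.ext3):

# mkfs.ext3 /dev/vdb
# mount /dev/vdb /mnt

On any later instance only the mount step is needed, since the filesystem and its data travel with the volume.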
Instances can also be started through the EC2-compatible API using euca2ools. The euca-run-instances command is similar to nova boot:
root@api-node:~# euca-run-instances \
    --access-key 445f486efe1a4eeea2c924d0252ff269 \
    --secret-key ff98e8529e2543aebf6f001c74d65b17 \
    -U http://api-node.example.org:8773/services/Cloud \
    ami-00000001 -k gridka-api-node
RESERVATION  r-e9cq9p1o  acdbdb11d3334ed987869316d0039856  default
INSTANCE     i-00000007  ami-00000001  pending  gridka-api-node  (acdbdb11d3334ed987869316d0039856, None)  0  m1.small  2013-08-29T07:55:15.000Z  nova  monitoring-disabled  instance-store
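The access and secret keys used above are the EC2 credentials of your user. If you do not have them at hand, they can be created and listed with the keystone client (a sketch, assuming the admin environment variables are loaded):

root@api-node:~# keystone ec2-credentials-create
root@api-node:~# keystone ec2-credentials-list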
Instances created by euca2ools are, of course, visible with nova as well:
root@api-node:~# nova list
+--------------------------------------+---------------------------------------------+--------+----------------------------+
| ID                                   | Name                                        | Status | Networks                   |
+--------------------------------------+---------------------------------------------+--------+----------------------------+
| ec1e58e4-57f4-4429-8423-a44891a098e3 | Server ec1e58e4-57f4-4429-8423-a44891a098e3 | BUILD  | net1=10.99.0.3, 172.16.1.2 |
+--------------------------------------+---------------------------------------------+--------+----------------------------+
We have already seen that there are a number of predefined flavors available. A flavor defines a class of compute resources: the number of vCPUs, the amount of RAM and the disk size:
root@api-node:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 | True | {} |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | {} |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | {} |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | {} |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | {} |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
To create a new flavor, use the nova flavor-create command:
root@api-node:~# nova flavor-create --is-public true x1.tiny 6 256 2 1
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 6 | x1.tiny | 256 | 2 | 0 | | 1 | 1.0 | True | {} |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
The parameters are as follows:

--is-public: controls whether the flavor can be seen by all users
--ephemeral: size of the ephemeral disk in GB (default 0)
--swap: size of swap in MB (default 0)
--rxtx-factor: network throughput factor, used to limit network usage (default 1)
x1.tiny: the name of the flavor
6: the unique id of the flavor (check the flavor list to see the next free id)
256: amount of RAM in MB
2: size of the disk in GB
1: number of vCPUs
If we check the list again, we will see that the new flavor has been created:
root@api-node:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 | True | {} |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | {} |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | {} |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | {} |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | {} |
| 6 | x1.tiny | 256 | 2 | 0 | | 1 | 1.0 | True | {} |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
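The optional parameters can be combined in a single call. A purely illustrative example; the name x1.withswap, the id 7 and all sizes are made up:

root@api-node:~# nova flavor-create --ephemeral 10 --swap 512 \
    --rxtx-factor 1.0 --is-public true x1.withswap 7 1024 10 2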
NOTE: The following resize operation might or might not work on our test setup.
You can change the flavor of an existing VM (effectively resizing it) with the nova resize command.
First, let's find a running instance:
root@api-node:~# nova list --all-tenants
+--------------------------------------+---------+--------+----------------------------+
| ID                                   | Name    | Status | Networks                   |
+--------------------------------------+---------+--------+----------------------------+
| bf619ff4-303a-417c-9631-d7147dd50585 | server1 | ACTIVE | net1=10.99.0.2, 172.16.1.1 |
+--------------------------------------+---------+--------+----------------------------+
and see what flavor it has:
root@api-node:~# nova show bf619ff4-303a-417c-9631-d7147dd50585
+-------------------------------------+------------------------------------------------------------+
| Property | Value |
+-------------------------------------+------------------------------------------------------------+
| status | ACTIVE |
| updated | 2013-08-29T10:24:26Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | compute-1 |
| key_name | antonio |
| image | Cirros-0.3.0-x86_64 (a6d81f9c-8789-49da-a689-503b40bcd23c) |
| hostId | ccc0c0738aea619c49a17654f911a9e2419848aece435cb7f117f666 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000012 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-1 |
| flavor | m1.tiny (1) |
| id | bf619ff4-303a-417c-9631-d7147dd50585 |
| security_groups | [{u'name': u'default'}] |
| user_id | 13ff2976843649669c4911ec156eaa3f |
| name | server1 |
| created | 2013-08-29T10:24:15Z |
| tenant_id | acdbdb11d3334ed987869316d0039856 |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| accessIPv4 | |
| accessIPv6 | |
| net1 network | 10.99.0.2, 172.16.1.1 |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+-------------------------------------+------------------------------------------------------------+
Now resize the VM by specifying the new flavor ID:
root@api-node:~# nova resize bf619ff4-303a-417c-9631-d7147dd50585 6
While the server is resizing, its status will be RESIZING:
root@api-node:~# nova list --all-tenants
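Rather than re-running nova list by hand, you can keep polling it, for instance with watch (the 5-second interval is arbitrary):

root@api-node:~# watch -n 5 nova list --all-tenants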
Once the resize operation is done, the status will change to VERIFY_RESIZE and you will have to confirm that the resize operation worked:
root@api-node:~# nova resize-confirm bf619ff4-303a-417c-9631-d7147dd50585
or, if things went wrong, revert the resize:
root@api-node:~# nova resize-revert bf619ff4-303a-417c-9631-d7147dd50585
The status of the server will now be back to ACTIVE.
- On Kilo RC1, you have to write something in
  /etc/machine-id. Cf. https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1413293
We adapted the tutorial above to what we considered necessary for our purposes and for installing OpenStack on 6 hosts.
[1] How this is done depends on the plugin and neutron configuration.
    In our setup, this means:

    1) create a linux bridge and attach it to the tap interface
    2) create a veth pair, attach one end to the bridge and the other
       end to the br-int bridge
    3) set the vlan tag for the port on the integration bridge
    4) configure flows on the integration bridge
    5) set up the L2 network (the gre tunnel) if it is not already there
    6) configure iptables (between the tap and the bridge interface)
       to enforce the security groups
    7) notify nova that the port is up and running