OpenStack Archives - Cloudbase Solutions
https://cloudbase.it/tag/openstack/

Bare metal Kubernetes on mixed x64 and ARM64
https://cloudbase.it/bare-metal-kubernetes-on-mixed-x64-and-arm64/ (Mon, 31 Jul 2023 10:00:00 +0000)

The post Bare metal Kubernetes on mixed x64 and ARM64 appeared first on Cloudbase Solutions.

This is the first blog post in a series about running mixed x86 and ARM64 Kubernetes clusters, starting with a general architectural overview and then moving on to detailed step-by-step instructions.

Kubernetes doesn’t need any introduction at this point, as it became the de facto standard container orchestration system. If you, dear reader, developed or deployed any workload in the last few years, there’s a very high probability that you had to interact with a Kubernetes cluster.

Most users deploy Kubernetes via services provided by their hyperscaler of choice, typically employing virtual machines as the underlying isolation mechanism. This is a well-proven solution, but it comes at the expense of inefficient resource usage, leading many organizations to look for alternatives in order to reduce costs.

One solution consists of deploying Kubernetes on top of an existing on-premises infrastructure running a full-scale IaaS solution like OpenStack, or a traditional virtualization solution like VMware, using VMs underneath. This is similar to what happens on public clouds, with the advantage of allowing users to mix legacy virtualized workloads with modern container-based applications on the same infrastructure. It is a very popular option, as we see a rise in OpenStack deployments for this specific purpose.

However, as more and more companies look for dedicated infrastructure for their Kubernetes clusters, especially for edge use cases, an underlying IaaS or virtualization layer only adds unnecessary complexity and performance limitations.

This is where deploying Kubernetes on bare metal servers really shines: the clusters can take full advantage of the whole infrastructure, often with significant TCO benefits. Running on bare metal also lets us freely choose between the x64 and ARM64 architectures, or a combination of both, pairing the lower energy footprint of ARM servers with the compatibility offered by the more common x64 architecture.

A Kubernetes infrastructure comes with non-trivial complexity, which requires a fully automated solution for deployment, management, upgrades, observability and monitoring. Here's a brief list of the key components in the solution that we are about to present.

Host operating system

When it comes to Linux there's definitely no lack of options. We needed a Linux distro aimed at lean container infrastructure workloads, with a large deployment base on many different physical servers, avoiding the footprint of a full-fledged traditional Linux server. We decided to use Flatcar, for a series of reasons:

  1. Longstanding (in cloud years) proven success, being the continuation of CoreOS
  2. CNCF incubating project
  3. Active community with deep container expertise, including Cloudbase engineers
  4. Support for both ARM64 and x64
  5. Commercial support, provided by Cloudbase, as a result of the partnership with Microsoft / Kinvolk

The host OS options are not limited to Flatcar: we successfully tested many other alternatives, including Mariner and Ubuntu. This is not trivial, as packaging and optimizing images for this sort of infrastructure requires significant domain expertise.

Bare metal host provisioning

This component boots every host via IPMI (or another API provided by the BMC), installs the operating system via PXE and, in general, configures every aspect of the host OS and Kubernetes in an automated way. Over the years we worked with many open source host provisioning solutions (MAAS, Ironic, Crowbar), but we opted for Tinkerbell in this case due to its integration in the Kubernetes ecosystem and support for Cluster API (CAPI).
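To give an idea of how a host is described to such a provisioner, here is a rough sketch of a Tinkerbell Hardware definition, written out as a local YAML file. The CRD fields shown are from memory and may differ between Tinkerbell releases, and the name, MAC, IP and disk values are invented placeholders; treat this as a shape, not a reference.

```shell
# Sketch only: a Tinkerbell Hardware object describing one bare metal node.
# All values below (name, MAC, IP, disk) are illustrative placeholders.
tee hardware-node01.yaml << 'EOT'
apiVersion: tinkerbell.org/v1alpha1
kind: Hardware
metadata:
  name: node01
spec:
  disks:
    - device: /dev/sda
  interfaces:
    - dhcp:
        mac: "52:54:00:aa:bb:01"
        hostname: node01
        ip:
          address: 10.10.0.11
          netmask: 255.255.255.0
      netboot:
        allowPXE: true
EOT
# In a real deployment this would be applied to the management cluster:
# kubectl apply -f hardware-node01.yaml
```

From definitions like this, Tinkerbell can serve the right OS image over PXE to the right machine when its BMC powers it on.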

Distributed Storage

Although the solution presented here can support traditional SAN storage, the storage model in our scenario will be distributed and hyperconverged, with every server providing compute, storage and networking roles. We chose Ceph (deployed via Rook), being the leading open source distributed storage and given our involvement in the community. When properly configured, it can deliver outstanding performance even on small clusters.
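To show how little Rook needs in order to bootstrap a hyperconverged Ceph cluster, here is a minimal CephCluster manifest written to a local file. The image tag and monitor count are assumptions for illustration; check the Rook documentation for values appropriate to your cluster.

```shell
# Sketch of a minimal hyperconverged Rook CephCluster: every node contributes
# its unused disks to Ceph. The image tag and sizing below are illustrative.
tee ceph-cluster.yaml << 'EOT'
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17   # assumed tag, pick a supported release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                       # monitors spread across nodes for HA
  storage:
    useAllNodes: true              # hyperconverged: storage on every node
    useAllDevices: true            # claim every unused disk
EOT
# kubectl apply -f ceph-cluster.yaml
```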

Networking and load balancing

While traditionally Kubernetes clusters employ Flannel or Calico for networking, a bare metal scenario can take advantage of a more modern technology like Cilium. Additionally, Cilium can provide load balancing via BGP out of the box, without the need for additional components like MetalLB.
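As a sketch of the BGP piece, the snippet below writes out a Cilium BGP peering policy plus a load balancer IP pool. The CRD names and fields (CiliumBGPPeeringPolicy, CiliumLoadBalancerIPPool) exist in recent Cilium releases but change between versions, and the ASNs, peer address and pool CIDR are invented for illustration.

```shell
# Sketch: advertise Service load balancer IPs to the ToR router over BGP.
# ASNs, peer address and pool CIDR below are placeholders.
tee cilium-bgp.yaml << 'EOT'
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: tor-peering
spec:
  nodeSelector:
    matchLabels: {}              # apply to all nodes
  virtualRouters:
    - localASN: 64512
      exportPodCIDR: true
      neighbors:
        - peerAddress: "10.0.0.1/32"
          peerASN: 64500
---
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  cidrs:
    - cidr: "10.0.10.0/24"       # IPs handed out to LoadBalancer Services
EOT
# kubectl apply -f cilium-bgp.yaml
```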

High availability

All components in the deployment are designed with high availability in mind, including storage, networking, compute nodes and API.

Declarative GitOps

Argo CD offers a declarative way to manage the whole CI/CD deployment pipeline. Other open source alternatives like Tekton or FluxCD can also be employed.
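For a flavor of what "declarative" means here, this is a minimal Argo CD Application manifest: the desired cluster state lives in a Git repository and Argo CD keeps the cluster in sync with it. The repository URL and path are placeholders.

```shell
# Sketch of an Argo CD Application: Git is the source of truth and Argo CD
# continuously reconciles the cluster against it. Repo URL/path are examples.
tee argocd-app.yaml << 'EOT'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-addons
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/infra/gitops.git   # placeholder repository
    targetRevision: main
    path: clusters/baremetal
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift
EOT
# kubectl apply -f argocd-app.yaml
```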

Observability and Monitoring

Last but not least, observability is a key area that goes beyond simple logs, metrics and traces to ensure that the whole infrastructure performs as expected; for this we employ Prometheus and Grafana. For monitoring, and to ensure that prompt action can be taken in case of issues, we use Sentry.

Coming next

The next blog posts in this series will explain in detail how to deploy the whole architecture presented here. Thanks for reading!

OpenStack on Azure
https://cloudbase.it/openstack-on-azure/ (Tue, 27 Jul 2021 18:48:22 +0000)

The post OpenStack on Azure appeared first on Cloudbase Solutions.

One might question the point of running cloud infrastructure software like OpenStack on top of another cloud, namely the Azure public cloud as in this blog post. The main use cases are typically testing and API compatibility but, as Azure nested virtualization and pass-through features have come a long way recently in terms of performance, other more advanced use cases are viable, especially in areas where OpenStack has a strong user base (e.g. telcos).

There are many ways to deploy OpenStack; in this post we will use Kolla Ansible for a containerized OpenStack, with Ubuntu 20.04 Server as the host OS.

Preparing the infrastructure

In our scenario, we need at least one beefy virtual machine that supports nested virtualization and can handle all the CPU/RAM/storage requirements of a full-fledged all-in-one OpenStack. For this purpose, we chose the Standard_D8s_v3 size for the OpenStack controller virtual machine (8 vCPUs, 32 GB RAM) with 512 GB of storage. For a multinode deployment, the subject of a future post, more virtual machines can be added, depending on how many OpenStack instances the deployment needs to support.

The Azure CLI can be used from PowerShell; it can be installed following the instructions at https://docs.microsoft.com/en-us/cli/azure/install-azure-cli.

# connect to Azure
az login

# create an ssh key for authentication
ssh-keygen

# create the OpenStack controller VM

az vm create `
     --name openstack-controller `
     --resource-group "openstack-rg" `
     --subscription "openstack-subscription" `
     --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest `
     --location westeurope `
     --admin-username openstackuser `
     --ssh-key-values ~/.ssh/id_rsa.pub `
     --nsg-rule SSH `
     --os-disk-size-gb 512 `
     --size Standard_D8s_v3

# az vm create will output the public IP of the instance
$openStackControllerIP = "<IP of the VM>"

# create the static private IP used by Kolla as VIP
az network nic ip-config create --name MyIpConfig `
    --nic-name openstack-controllerVMNic `
    --private-ip-address 10.0.0.10 `
    --resource-group "openstack-rg" `
    --subscription "openstack-subscription"

# connect via SSH to the VM
ssh openstackuser@$openStackControllerIP

# fix the fqdn
# Kolla/Ansible does not work with *.cloudapp FQDNs, so we need to fix it
sudo tee /etc/hosts << EOT
$(hostname -i) $(hostname)
EOT

# create a dummy interface that will be used by OpenVswitch as the external bridge port
# Azure Public Cloud does not allow spoofed traffic, so we need to rely on NAT for VMs to
# have internal connectivity.
sudo ip tuntap add mode tap br_ex_port
sudo ip link set dev br_ex_port up

OpenStack deployment

For the deployment, we will use the Kolla Ansible containerized approach.

First, install the base packages required by Ansible, Kolla and Cinder.

# from the Azure OpenStack Controller VM

# install ansible/kolla requirements
sudo apt install -y python3-dev libffi-dev gcc libssl-dev python3-venv net-tools

# install Cinder NFS backend requirements
sudo apt install -y nfs-kernel-server

# Cinder NFS setup
# Use the VM's private IP: the $openStackControllerIP variable from the
# local PowerShell session is not available in this SSH session
CINDER_NFS_HOST=$(hostname -i)
# Replace with your local network CIDR if you plan to add more nodes
CINDER_NFS_ACCESS=$CINDER_NFS_HOST
sudo mkdir /kolla_nfs
echo "/kolla_nfs $CINDER_NFS_ACCESS(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
# /etc/kolla/config does not exist yet at this point
sudo mkdir -p /etc/kolla/config
echo "$CINDER_NFS_HOST:/kolla_nfs" | sudo tee -a /etc/kolla/config/nfs_shares
sudo systemctl restart nfs-kernel-server

Afterwards, let’s install Ansible/Kolla in a Python virtualenv.

mkdir kolla
cd kolla
 
python3 -m venv venv
source venv/bin/activate
 
pip install -U pip
pip install wheel
pip install 'ansible<2.10'
pip install 'kolla-ansible>=11,<12'

Then, prepare Kolla configuration files and passwords.

sudo mkdir -p /etc/kolla/config
sudo cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
sudo chown -R $USER:$USER /etc/kolla
cp venv/share/kolla-ansible/ansible/inventory/* .
kolla-genpwd

Now, let's check that Ansible works.

ansible -i all-in-one all -m ping

As a next step, we need to configure the OpenStack settings:

# This is the static IP we created initially
VIP_ADDR=10.0.0.10
# Azure VM interface is eth0
MGMT_IFACE=eth0
# This is the dummy interface used for OpenVswitch
EXT_IFACE=br_ex_port
# OpenStack Victoria version (kolla 11.x images)
OPENSTACK_TAG=11.0.0

# now use the information above to write it to Kolla configuration file
sudo tee -a /etc/kolla/globals.yml << EOT
kolla_base_distro: "ubuntu"
openstack_tag: "$OPENSTACK_TAG"
kolla_internal_vip_address: "$VIP_ADDR"
network_interface: "$MGMT_IFACE"
neutron_external_interface: "$EXT_IFACE"
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
enable_neutron_provider_networks: "yes"
EOT

Now it is time to deploy OpenStack.

kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one deploy

After the deployment, we need to install the OpenStack CLI clients and generate the admin credentials script.

pip3 install python-openstackclient python-barbicanclient python-heatclient python-octaviaclient
kolla-ansible post-deploy
# Load the vars to access the OpenStack environment
. /etc/kolla/admin-openrc.sh

Let’s make the finishing touches and create an OpenStack instance.

# Set your external network CIDR, range and gateway, matching your environment, e.g.:
export EXT_NET_CIDR='10.0.2.0/24'
export EXT_NET_RANGE='start=10.0.2.150,end=10.0.2.199'
export EXT_NET_GATEWAY='10.0.2.1'
./venv/share/kolla-ansible/init-runonce

# Enable NAT so that VMs can have Internet access and be able to
# reach their floating IP from the controller node.
sudo ifconfig br-ex $EXT_NET_GATEWAY netmask 255.255.255.0 up
sudo iptables -t nat -A POSTROUTING -s $EXT_NET_CIDR -o eth0 -j MASQUERADE

# Create a demo VM
openstack server create --image cirros --flavor m1.tiny --key-name mykey --network demo-net demo1

Conclusions

Deploying OpenStack on Azure is fairly straightforward, with the caveat that the OpenStack instances cannot be accessed from the Internet without further changes (this affects only inbound traffic; the OpenStack instances can access the Internet). Here are the main changes we introduced to perform the deployment in this scenario:

  • Add a static IP on the first interface that will be used as the OpenStack API IP
  • Set the OpenStack Controller FQDN to be the same as the hostname
  • Create a dummy interface which will be used as the br-ex external port (there is no need for a secondary NIC, as Azure drops any spoofed packets)
  • Add iptables NAT rules to allow OpenStack VM outbound (Internet) connectivity

OpenStack on ARM64 – LBaaS
https://cloudbase.it/openstack-on-arm64-lbaas/ (Thu, 18 Feb 2021 13:00:06 +0000)

The post OpenStack on ARM64 – LBaaS appeared first on Cloudbase Solutions.

In part 2 of this series about OpenStack on ARM64, we got to the point where our cloud is fully deployed, with all the Compute (VMs), Software Defined Networking (SDN) and Software Defined Storage (SDS) up and running. One additional component that we want to add is a Load Balancer as a Service (LBaaS), which is a key requirement for pretty much any highly available workload and a must-have feature in any cloud.

OpenStack's current official LBaaS component is called Octavia, which replaced the older Neutron LBaaS v1 project starting with the Liberty release. Deploying and configuring it requires a few steps, which explains the need for a dedicated blog post.

Octavia's reference implementation uses VM instances called Amphorae to perform the actual load balancing. The octavia-worker service takes care of communicating with the amphorae, and to do that we need to generate a few X509 CAs and certificates used to secure the communication. The good news is that, starting with the Victoria release, kolla-ansible simplifies this task a lot. Here's how:

# Change the following according to your organization
echo "octavia_certs_country: US" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_state: Oregon" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_organization: OpenStack" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_organizational_unit: Octavia" | sudo tee -a /etc/kolla/globals.yml

# This is the kolla-ansible virtual env created in the previous blog post
cd kolla
source venv/bin/activate

sudo chown $USER:$USER /etc/kolla
kolla-ansible octavia-certificates

The communication between Octavia and the amphorae needs an isolated network, as we don't want to share it with the tenant networks for security reasons. A simple way to accomplish that is to create a provider network with a dedicated VLAN ID, which is why we enabled Neutron provider networks and OVS VLAN segmentation in the previous post. Again, starting with Victoria, this got much easier with kolla-ansible.

# This is a dedicated network, outside your management LAN address space, change as needed
OCTAVIA_MGMT_SUBNET=192.168.43.0/24
OCTAVIA_MGMT_SUBNET_START=192.168.43.10
OCTAVIA_MGMT_SUBNET_END=192.168.43.254
OCTAVIA_MGMT_HOST_IP=192.168.43.1/24
OCTAVIA_MGMT_VLAN_ID=107

sudo tee -a /etc/kolla/globals.yml << EOT
octavia_amp_network:
  name: lb-mgmt-net
  provider_network_type: vlan
  provider_segmentation_id: $OCTAVIA_MGMT_VLAN_ID
  provider_physical_network: physnet1
  external: false
  shared: false
  subnet:
    name: lb-mgmt-subnet
    cidr: "$OCTAVIA_MGMT_SUBNET"
    allocation_pool_start: "$OCTAVIA_MGMT_SUBNET_START"
    allocation_pool_end: "$OCTAVIA_MGMT_SUBNET_END"
    gateway_ip: "$OCTAVIA_MGMT_HOST_IP"
    enable_dhcp: yes
EOT

Unless there is a dedicated network adapter, a virtual ethernet (veth) interface can be used. This needs to be configured at boot and added to the OVS br-ex switch.

# This sets up the VLAN veth interface
# Netplan doesn't have support for veth interfaces yet
sudo tee /usr/local/bin/veth-lbaas.sh << EOT
#!/bin/bash
sudo ip link add v-lbaas-vlan type veth peer name v-lbaas
sudo ip addr add $OCTAVIA_MGMT_HOST_IP dev v-lbaas
sudo ip link set v-lbaas-vlan up
sudo ip link set v-lbaas up
EOT
sudo chmod 744 /usr/local/bin/veth-lbaas.sh

sudo tee /etc/systemd/system/veth-lbaas.service << EOT
[Unit]
After=network.service

[Service]
ExecStart=/usr/local/bin/veth-lbaas.sh

[Install]
WantedBy=default.target
EOT
sudo chmod 644 /etc/systemd/system/veth-lbaas.service

sudo systemctl daemon-reload
sudo systemctl enable veth-lbaas.service
sudo systemctl start veth-lbaas.service

docker exec openvswitch_vswitchd ovs-vsctl add-port \
  br-ex v-lbaas-vlan tag=$OCTAVIA_MGMT_VLAN_ID

A few more Octavia kolla-ansible configurations…

echo "enable_octavia: \"yes\"" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_network_interface: v-lbaas" | sudo tee -a /etc/kolla/globals.yml

# Flavor used when booting an amphora, change as needed
sudo tee -a /etc/kolla/globals.yml << EOT
octavia_amp_flavor:
  name: "amphora"
  is_public: no
  vcpus: 1
  ram: 1024
  disk: 5
EOT

sudo mkdir /etc/kolla/config/octavia
# Use a config drive in the Amphorae for cloud-init
sudo tee /etc/kolla/config/octavia/octavia-worker.conf << EOT
[controller_worker]
user_data_config_drive = true
EOT

…and we can finally tell kolla-ansible to deploy Octavia:

kolla-ansible -i all-in-one deploy --tags common,horizon,octavia

Octavia uses a special VM image for the Amphorae, which needs to be built for ARM64. We prepared Dockerfiles for building either an Ubuntu or a CentOS image; you can choose either one in the following snippets. We use containers to perform the build in order to isolate the requirements and be independent of the host OS.

git clone https://github.com/cloudbase/openstack-kolla-arm64-scripts
cd openstack-kolla-arm64-scripts/victoria

# Choose either Ubuntu or CentOS (not both!)

# Ubuntu
docker build amphora-image-arm64-docker -f amphora-image-arm64-docker/Dockerfile.Ubuntu \
  -t amphora-image-build-arm64-ubuntu

# Centos
docker build amphora-image-arm64-docker -f amphora-image-arm64-docker/Dockerfile.Centos \
  -t amphora-image-build-arm64-centos

ARM64 needs a trivial patch in the diskimage-create.sh build script (which we also submitted upstream):

git clone https://opendev.org/openstack/octavia -b stable/victoria
# Use latest branch Octavia to create Ubuntu image
cd octavia
# diskimage-create.sh includes armhf but not arm64
git apply ../0001-Add-arm64-in-diskimage-create.sh.patch
cd ..

Build the image (this will take a bit):

# Again, choose either Ubuntu or CentOS (not both!)

# Note the mount of /mnt and /proc in the docker container
# BEWARE!!!!! Without mounting /proc, the diskimage-builder fails to find mount points and deletes the host's /dev,
# making the host unusable
docker run --privileged -v /dev:/dev -v /proc:/proc -v /mnt:/mnt \
  -v $(pwd)/octavia/:/octavia -ti amphora-image-build-arm64-ubuntu

# Create CentOS 8 Amphora image
docker run --privileged -v /dev:/dev -v $(pwd)/octavia/:/octavia \
  -ti amphora-image-build-arm64-centos

Add the image to Glance, using the octavia user in the service project. The amphora tag is used by Octavia to find the image.

. /etc/kolla/admin-openrc.sh

# Switch to the octavia user and service project
export OS_USERNAME=octavia
export OS_PASSWORD=$(grep octavia_keystone_password /etc/kolla/passwords.yml | awk '{ print $2}')
export OS_PROJECT_NAME=service
export OS_TENANT_NAME=service

openstack image create amphora-x64-haproxy.qcow2 \
  --container-format bare \
  --disk-format qcow2 \
  --private \
  --tag amphora \
  --file octavia/diskimage-create/amphora-x64-haproxy.qcow2

# We can now delete the image file
rm -f octavia/diskimage-create/amphora-x64-haproxy.qcow2

Currently, we need a small patch in Octavia to properly render the userdata for the Amphorae:

# Patch the user_data_config_drive_template
cd octavia
git apply  ../0001-Fix-userdata-template.patch
# For now just update the octavia-worker container, no need to restart it
docker cp octavia/common/jinja/templates/user_data_config_drive.template \
  octavia_worker:/usr/lib/python3/dist-packages/octavia/common/jinja/templates/user_data_config_drive.template

Finally, let’s create a load balancer to make sure everything works fine:

# To create the loadbalancer
. /etc/kolla/admin-openrc.sh

openstack loadbalancer create --name loadbalancer1 --vip-subnet-id public1-subnet

# Check the status until it's marked as ONLINE
openstack loadbalancer list

Congratulations! You have a working LBaaS in your private cloud!

Troubleshooting

In case something goes wrong, finding the root cause might be tricky. Here are a few suggestions to ease the process.

# Check for errors
sudo tail -f /var/log/kolla/octavia/octavia-worker.log

# SSH into amphora
# Get amphora VM IP either from the octavia-worker.log or from:
openstack server list --all-projects

ssh ubuntu@<amphora_ip> -i octavia_ssh_key #ubuntu
ssh cloud-user@<amphora_ip> -i octavia_ssh_key #centos

# Load balancers stuck in PENDING_CREATE cannot be deleted; as a last
# resort, mark them as ERROR directly in the database
# Password: grep octavia_database_password /etc/kolla/passwords.yml
docker exec -ti mariadb mysql -u octavia -p octavia
update load_balancer set provisioning_status = 'ERROR' where provisioning_status = 'PENDING_CREATE';
exit;

OpenStack on ARM64 – Deployment
https://cloudbase.it/openstack-on-arm64-part-2/ (Mon, 08 Feb 2021 13:00:28 +0000)

The post OpenStack on ARM64 – Deployment appeared first on Cloudbase Solutions.

In the previous blog post we created the ARM OpenStack Kolla container images; we can now proceed with deploying OpenStack. The host is a Lenovo server with a 32-core Ampere Computing eMAG Armv8 64-bit CPU, running Ubuntu Server 20.04. For simplicity, this will be an “All-in-One” deployment, where all OpenStack components run on the same host, but it can be easily adapted to a multi-node setup.

Let's start by installing the host package dependencies, including Docker, in case those are not already there.

sudo apt update
sudo apt install -y qemu-kvm docker-ce
sudo apt install -y python3-dev libffi-dev gcc libssl-dev python3-venv
sudo apt install -y nfs-kernel-server

sudo usermod -aG docker $USER
newgrp docker

We can now create a local directory with a Python virtual environment and all the kolla-ansible components:

mkdir kolla
cd kolla

python3 -m venv venv
source venv/bin/activate

pip install -U pip
pip install wheel
pip install 'ansible<2.10'
pip install 'kolla-ansible>=11,<12'

The kolla-ansible configuration is stored in /etc/kolla:

sudo mkdir -p /etc/kolla/config
sudo cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
sudo chown -R $USER:$USER /etc/kolla
cp venv/share/kolla-ansible/ansible/inventory/* .

Let's check that everything is OK (nothing gets deployed yet):

ansible -i all-in-one all -m ping

kolla-genpwd generates random passwords for every service and stores them in /etc/kolla/passwords.yml, which is quite useful:

kolla-genpwd

Log in to the remote Docker registry, using the registry name and credentials created in the previous post:

ACR_NAME=# Value from ACR creation
SP_APP_ID_PULL_ONLY=# Value from ACR SP creation
SP_PASSWD_PULL_ONLY=# Value from ACR SP creation
REGISTRY=$ACR_NAME.azurecr.io

docker login $REGISTRY --username $SP_APP_ID_PULL_ONLY --password $SP_PASSWD_PULL_ONLY

Now, there are a few variables that we need to set, specific to the host environment. The external interface is the one used for tenant traffic.

VIP_ADDR=# An unallocated IP address in your management network
MGMT_IFACE=# Your management interface
EXT_IFACE=# Your external interface
# This must match the container images tag
OPENSTACK_TAG=11.0.0

Time to write the main configuration file in /etc/kolla/globals.yml:

sudo tee -a /etc/kolla/globals.yml << EOT
kolla_base_distro: "ubuntu"
openstack_tag: "$OPENSTACK_TAG"
kolla_internal_vip_address: "$VIP_ADDR"
network_interface: "$MGMT_IFACE"
neutron_external_interface: "$EXT_IFACE"
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
enable_barbican: "yes"
enable_neutron_provider_networks: "yes"
docker_registry: "$REGISTRY"
docker_registry_username: "$SP_APP_ID_PULL_ONLY"
EOT

The registry password goes in /etc/kolla/passwords.yml:

sed -i "s/^docker_registry_password: .*\$/docker_registry_password: $SP_PASSWD_PULL_ONLY/g" /etc/kolla/passwords.yml

Cinder, the OpenStack block storage component, supports a lot of backends. The easiest way to get started is by using NFS, but LVM would be a great choice as well if you have unused disks.

# Cinder NFS setup
CINDER_NFS_HOST=# Your local IP
# Replace with your local network CIDR if you plan to add more nodes
CINDER_NFS_ACCESS=$CINDER_NFS_HOST
sudo mkdir /kolla_nfs
echo "/kolla_nfs $CINDER_NFS_ACCESS(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
echo "$CINDER_NFS_HOST:/kolla_nfs" | sudo tee -a /etc/kolla/config/nfs_shares
sudo systemctl restart nfs-kernel-server
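For the LVM alternative mentioned above, a rough sketch follows. The pvcreate/vgcreate steps are left commented out because /dev/sdb is a placeholder for a real unused disk, and the globals snippet is written to a local file here rather than to /etc/kolla/globals.yml. The variable names and the cinder-volumes volume group name are, to the best of our knowledge, the kolla-ansible defaults; double-check them against your kolla-ansible release.

```shell
# Sketch: Cinder LVM backend instead of NFS. /dev/sdb is a placeholder;
# run the two commented commands against a real unused disk.
# sudo pvcreate /dev/sdb
# sudo vgcreate cinder-volumes /dev/sdb

# kolla-ansible settings (append to /etc/kolla/globals.yml in a real setup)
tee globals-lvm-snippet.yml << 'EOT'
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
cinder_volume_group: "cinder-volumes"
EOT
```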

The following settings are mostly needed for Octavia, during the next blog post in this series:

# Increase the PCIe ports to avoid this error when creating Octavia pool members:
# libvirt.libvirtError: internal error: No more available PCI slots
sudo mkdir /etc/kolla/config/nova
sudo tee /etc/kolla/config/nova/nova-compute.conf << EOT
[DEFAULT]
resume_guests_state_on_host_boot = true

[libvirt]
num_pcie_ports=28
EOT

# This is needed for Octavia
sudo mkdir /etc/kolla/config/neutron
sudo tee /etc/kolla/config/neutron/ml2_conf.ini << EOT
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
EOT

Time to do some final checks, bootstrap the host and deploy OpenStack! The deployment will take some time, so this is a good moment for a coffee.

kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one deploy

Congratulations, you have an ARM OpenStack cloud! Now we can get the CLI tools to access it:

pip3 install python-openstackclient python-barbicanclient python-heatclient python-octaviaclient
kolla-ansible post-deploy
# Load the vars to access the OpenStack environment
. /etc/kolla/admin-openrc.sh

The next steps are optional, but highly recommended in order to set up basic functionality, including basic networking, standard flavors and a minimal Linux image (CirrOS):

# Set your external network CIDR, range and gateway, matching your environment, e.g.:
export EXT_NET_CIDR='10.0.2.0/24'
export EXT_NET_RANGE='start=10.0.2.150,end=10.0.2.199'
export EXT_NET_GATEWAY='10.0.2.1'
./venv/share/kolla-ansible/init-runonce

All done! We can now create a basic VM from the command line:

# Create a demo VM
openstack server create --image cirros --flavor m1.tiny --key-name mykey --network demo-net demo1

You can also head to “http://<VIP_ADDR>” and access Horizon, OpenStack's web UI. The username is admin and the password is in /etc/kolla/passwords.yml:

grep keystone_admin_password /etc/kolla/passwords.yml

In the next post we will add Octavia, the Load Balancer as a Service (LBaaS) component, to the deployment. Enjoy your ARM OpenStack cloud in the meantime!

P.S.: In case you would like to delete your whole deployment and start over:

#kolla-ansible -i ./all-in-one destroy --yes-i-really-really-mean-it

OpenStack on ARM64 – Kolla container images
https://cloudbase.it/openstack-on-arm64-part-1/ (Mon, 25 Jan 2021 09:00:00 +0000)

The post OpenStack on ARM64 – Kolla container images appeared first on Cloudbase Solutions.

This is the beginning of a short series detailing how to deploy OpenStack on ARM64, using Docker containers with Kolla and Kolla-ansible.

The objective of this first post is to create the ARM64 container images and push them to a remote registry, to be used later on when deploying our OpenStack cloud. We are going to use Azure Container Registry to store the images, but any other OCI-compliant registry will do.

Create a container registry

Let's start by creating the container registry and the related access credentials. This can be done from anywhere, e.g. a laptop; it doesn't need to be an ARM host. All we need is the Azure CLI installed.

az login
# If you have more than one Azure subscription, choose one:
az account list --output table
az account set --subscription "Your subscription"

Next, let's create a resource group and a container registry with a unique name. Also choose an Azure region based on your location.

RG=kolla
ACR_NAME=your_registry_name_here
LOCATION=eastus

az group create --name $RG --location $LOCATION
az acr create --resource-group $RG --name $ACR_NAME --sku Basic

We're now creating two sets of credentials: one with push and pull access, used when creating the images, and one with pull-only access, used later during the OpenStack deployment.

ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
SERVICE_PRINCIPAL_NAME=acr-kolla-sp-push
SP_PASSWD=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role acrpush --query password --output tsv)
SP_APP_ID=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
echo "SP_APP_ID=$SP_APP_ID"
echo "SP_PASSWD=$SP_PASSWD"

SERVICE_PRINCIPAL_NAME=acr-kolla-sp-pull
SP_PASSWD_PULL_ONLY=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role acrpull --query password --output tsv)
SP_APP_ID_PULL_ONLY=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
echo "SP_APP_ID_PULL_ONLY=$SP_APP_ID_PULL_ONLY"
echo "SP_PASSWD_PULL_ONLY=$SP_PASSWD_PULL_ONLY"

Create and push the OpenStack Kolla container images

It's now time to switch to an ARM server where the Kolla container images will be built. We are going to use a Lenovo server with a 32-core eMAG Armv8 64-bit CPU provided by Ampere Computing. The host operating system is Ubuntu 20.04, but the following instructions can easily be adapted to other Linux distros.

Let's start by installing the dependencies and adding your current user to the docker group (or create a separate user).

sudo apt update
sudo apt install -y docker-ce python3-venv git
sudo usermod -aG docker $USER
newgrp docker

Let’s get Docker to log in to the remote registry that we just created. Set ACR_NAME, SP_APP_ID and SP_PASSWD as obtained in the previous steps.

REGISTRY=$ACR_NAME.azurecr.io
docker login $REGISTRY --username $SP_APP_ID --password $SP_PASSWD

Now we can install Kolla in a Python virtual environment and get ready to start building our container images. The OpenStack version is the recently released Victoria but a previous version can be used if needed (e.g. Ussuri).

mkdir kolla-build
cd kolla-build
python3 -m venv venv
source venv/bin/activate
pip install wheel
# Install Kolla, Victoria version
pip install "kolla>=11,<12"

Edit: the following step can be skipped on Victoria since it defaults to Ubuntu 20.04 where pmdk-tools is available. Additionally, thanks to a recent patch, it can be skipped on Ussuri and Train.

The pmdk-tools Ubuntu package is not available on ARM, so we need to remove it from the nova-compute docker image build. This is done by creating a “template override” that we are going to pass to the build process.

tee template-overrides.j2 << EOT
{% extends parent_template %}

# nova-compute
{% set nova_compute_packages_remove = ['pmdk-tools'] %}
EOT

We can now build the container images and push them to the registry. This will take a while since it’s building and pushing container images for all OpenStack projects and services. Alternatively, it is possible to reduce the number of containers to a subset by creating a profile in kolla-build.conf as explained here.

kolla-build -b ubuntu --registry $REGISTRY --push
# If you created a template override run:
# kolla-build -b ubuntu --registry $REGISTRY --template-override template-overrides.j2 --push
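As mentioned above, the build can be limited to a subset of images with a profile in kolla-build.conf. A minimal sketch could look like this (the profile name and the exact service list are hypothetical, pick the services you actually need):

```ini
# kolla-build.conf (hypothetical "core" profile)
[profiles]
core = keystone,glance,nova,neutron,openvswitch,heat,mariadb,rabbitmq,memcached,haproxy,keepalived,kolla-toolbox,fluentd,cron,chrony
```

The build command then becomes `kolla-build -b ubuntu --profile core --registry $REGISTRY --push`.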

We are finally ready for our OpenStack ARM64 deployment with Kolla-ansible in the next post!

The post OpenStack on ARM64 – Kolla container images appeared first on Cloudbase Solutions.

Easily deploy a Kubernetes cluster on OpenStack https://cloudbase.it/easily-deploy-a-kubernetes-cluster-on-openstack/ Tue, 12 Sep 2017 21:40:27 +0000 https://cloudbase.it/?p=37536 Platform and cloud interoperability has come a long way. IaaS and unstructured PaaS options such as OpenStack and Kubernetes can be combined to create cloud-native applications. In this post we're going to show how Kubernetes can be deployed on an OpenStack cloud infrastructure.

The post Easily deploy a Kubernetes cluster on OpenStack appeared first on Cloudbase Solutions.

Platform and cloud interoperability has come a long way. IaaS and unstructured PaaS options such as OpenStack and Kubernetes can be combined to create cloud-native applications. In this post we’re going to show how Kubernetes can be deployed on an OpenStack cloud infrastructure.

 

Setup

My setup is quite simple: an Ocata all-in-one deployment with KVM compute. The OpenStack infrastructure was deployed with Kolla. The deployment method is not important here, but Magnum and Heat need to be deployed alongside the other OpenStack services such as Nova and Neutron. To do this, enable those two services in the /etc/kolla/globals.yml file. If you are using Devstack, here is a local.conf that deploys Heat and Magnum.
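In a Kolla deployment, enabling the two services comes down to two flags (a sketch; enable_heat and enable_magnum are the standard kolla-ansible option names):

```yaml
# /etc/kolla/globals.yml (fragment)
enable_heat: "yes"
enable_magnum: "yes"
```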

 

Kubernetes deployment

The Kubernetes cluster will consist of 1 master node and 2 minion nodes. I’m going to use a Fedora Atomic image for the VMs, with a flavor of 1 vCPU, 2GB of RAM and a 7GB disk. Below are the commands used to create the necessary environment. Please make sure to change the IPs and other configuration values to suit your environment.

# Download the cloud image
wget  https://ftp-stud.hs-esslingen.de/pub/Mirrors/alt.fedoraproject.org/atomic/stable/Fedora-Atomic-25-20170512.2/CloudImages/x86_64/images/Fedora-Atomic-25-20170512.2.x86_64.qcow2

# If using HyperV, convert it to VHD format
qemu-img convert -f qcow2 -O vhdx Fedora-Atomic-25-20170512.2.x86_64.qcow2 fedora-atomic.vhdx

# Provision the cloud image, I'm using KVM so using the qcow2 image
openstack image create --public --property os_distro='fedora-atomic' --disk-format qcow2 \
--container-format bare --file /root/Fedora-Atomic-25-20170512.2.x86_64.qcow2 \
fedora-atomic

# Create a flavor
nova flavor-create cloud.flavor auto 2048 7 1 --is-public True

# Create a key pair
openstack keypair create --public-key ~/.ssh/id_rsa.pub kolla-controller

# Create Neutron networks
# Public network
neutron net-create public_net --shared --router:external --provider:physical_network \
physnet2 --provider:network_type flat

neutron subnet-create public_net 10.7.15.0/24 --name public_subnet \
--allocation-pool start=10.7.15.150,end=10.7.15.180 --disable-dhcp --gateway 10.7.15.1

# Private network
neutron net-create private_net_vlan --provider:segmentation_id 500 \
--provider:physical_network physnet1 --provider:network_type vlan

neutron subnet-create private_net_vlan 10.10.20.0/24 --name private_subnet \
--allocation-pool start=10.10.20.50,end=10.10.20.100 \
--dns-nameserver 8.8.8.8 --gateway 10.10.20.1

# Create a router
neutron router-create router1
neutron router-interface-add router1 private_subnet
neutron router-gateway-set router1 public_net

Before the Kubernetes cluster is deployed, a cluster template must be created. The nice thing about this process is that Magnum does not require long config files or definitions for this. A simple cluster template creation can look like this:

magnum cluster-template-create --name k8s-cluster-template --image fedora-atomic \
--keypair kolla-controller --external-network public_net --dns-nameserver 8.8.8.8 \
--flavor cloud.flavor --docker-volume-size 3 --network-driver flannel --coe kubernetes

Based on this template the cluster can be deployed:

magnum cluster-create --name k8s-cluster --cluster-template k8s-cluster-template \
--master-count 1 --node-count 2

 

The deployment status can be checked and viewed from Horizon. There are two places where this can be done: the first is the Container Infra -> Clusters tab and the second is the Orchestration -> Stacks tab. This is because Magnum relies on Heat templates to deploy the user defined resources. I find the Stacks option better because it allows the user to see all the resources and events involved in the process. If something goes wrong, the issue can easily be identified by a red mark.

 

In the end my cluster should look something like this:

root@kolla-ubuntu-cbsl:~# magnum cluster-show 2ffb0ea6-d3f6-494c-9001-c4c4e01e8125
+---------------------+------------------------------------------------------------+
| Property            | Value                                                      |
+---------------------+------------------------------------------------------------+
| status              | CREATE_COMPLETE                                            |
| cluster_template_id | 595cdb6c-8032-43c8-b546-710410061be0                       |
| node_addresses      | ['10.7.15.112', '10.7.15.113']                             |
| uuid                | 2ffb0ea6-d3f6-494c-9001-c4c4e01e8125                       |
| stack_id            | 91001f55-f1e8-4214-9d71-1fa266845ea2                       |
| status_reason       | Stack CREATE completed successfully                        |
| created_at          | 2017-07-20T16:40:45+00:00                                  |
| updated_at          | 2017-07-20T17:07:24+00:00                                  |
| coe_version         | v1.5.3                                                     |
| keypair             | kolla-controller                                           |
| api_address         | https://10.7.15.108:6443                                   |
| master_addresses    | ['10.7.15.108']                                            |
| create_timeout      | 60                                                         |
| node_count          | 2                                                          |
| discovery_url       | https://discovery.etcd.io/89bf7f8a044749dd3befed959ea4cf6d |
| master_count        | 1                                                          |
| container_version   | 1.12.6                                                     |
| name                | k8s-cluster                                                |
+---------------------+------------------------------------------------------------+

SSH into the master node to check the cluster status

[root@kubemaster ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeUI is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

So there it is, a fully functioning Kubernetes cluster with 1 master and 2 minion nodes.

 

A word on networking

Kubernetes networking is not the easiest thing to explain, but I’ll do my best to cover the essentials. After an app is deployed, the user will need to access it from outside the Kubernetes cluster. This is done with Services. To achieve this, each minion node runs a kube-proxy service that allows the Service to do its job. A Service can work in multiple ways: for example via a LoadBalancer VIP provided by the cloud underneath Kubernetes, or by forwarding a port on the minion node IP.
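To make the Service concept concrete, a minimal NodePort Service manifest might look like this (the name, label and port values are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-svc
spec:
  type: NodePort          # expose the app on a port of every minion node
  selector:
    app: wordpress        # pods matching this label receive the traffic
  ports:
    - port: 80            # Service port inside the cluster
      targetPort: 80      # container port
      nodePort: 30080     # external port on each node (30000-32767 range)
```

Swapping type: NodePort for type: LoadBalancer would instead request a VIP from the cloud underneath Kubernetes.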

 

Deploy an app

Now that all is set up, an app can be deployed. I am going to install WordPress with Helm. Helm is the package manager for Kubernetes. It installs applications with charts, which are basically application definitions written in YAML. The documentation on how to install Helm can be found here.

 

I am going to install WordPress.

[root@kubemaster ~]# helm install stable/wordpress

The pods can now be seen:

[root@kubemaster ~]# kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
my-release-mariadb-2689551905-56580     1/1       Running   0          10m
my-release-wordpress-3324251581-gzff5   1/1       Running   0          10m

There are multiple ways of accessing the contents of a pod. I am going to forward port 8080 on the master node to port 80 of the pod.

kubectl port-forward my-release-wordpress-3324251581-gzff5 8080:80

Now WordPress can be accessed via the Kubernetes node IP and port 8080

http://K8S-IP:8080

Kubernetes on OpenStack is not only possible, it can also be easy!


Hyper-V RemoteFX in OpenStack https://cloudbase.it/openstack-remotefx/ Wed, 05 Jul 2017 14:16:40 +0000 https://cloudbase.it/?p=35963 We’ve added support for RemoteFX for Windows / Hyper-V Server 2012 R2 back in Kilo, but the highly anticipated Windows / Hyper-V Server 2016 comes with some new nifty features for which we’re excited about! In case you are not familiar with this feature, it allows you to virtualize your GPUs and share them across…

The post Hyper-V RemoteFX in OpenStack appeared first on Cloudbase Solutions.

We’ve added support for RemoteFX for Windows / Hyper-V Server 2012 R2 back in Kilo, but the highly anticipated Windows / Hyper-V Server 2016 comes with some nifty new features which we’re excited about!

In case you are not familiar with this feature, it allows you to virtualize your GPUs and share them across virtual machine instances by adding virtual graphics devices. This leads to a richer RDP experience especially for VDI on OpenStack, as well as the benefit of having a GPU on your instances, enhancing GPU-intensive applications (CUDA, OpenCL, etc).

If you are curious, you can take a look at one of our little experiments. We’ve run a few GPU-intensive demos on identical guests with and without RemoteFX. The difference was very obvious between the two. You can see the recording here.

One of the most interesting features RemoteFX brings in terms of improving the user experience is device redirection. This allows you to connect your local devices (USBs, smart cards, VoIP devices, webcams, etc.) to RemoteFX-enabled VMs through your RDP client. A detailed list of the devices you can redirect through your RDP session can be found here.

Some of the new features for RemoteFX in Windows / Hyper-V Server 2016 are:

  • 4K resolution option
  • 1GB dedicated VRAM (available choices: 64MB, 128MB, 256MB, 512MB, 1GB) and up to another 1GB shared VRAM
  • Support for Generation 2 VMs
  • OpenGL and OpenCL API support
  • H.264/AVC codec investment
  • Improved performance

One important thing worth mentioning is the fact that RemoteFX allows you to overcommit your GPUs, the same way you can overcommit disk, memory, or vCPUs!

All of this sounds good, but how can you know if you can enable RemoteFX? All you need for this is a compatible GPU that passes the minimum requirements:

  • it must support DirectX 11.0 or newer
  • it must support WDDM 1.2 or newer
  • the Hyper-V feature must be installed

If you pass these simple requirements, all you have to do to enable the feature is to run this PowerShell command:

Install-WindowsFeature RDS-Virtualization

 

Hyper-V has to be configured to use RemoteFX. This can be done by opening the Hyper-V Manager, going to Hyper-V Settings and, under Physical GPUs, checking the Use this GPU with RemoteFX checkbox.

For more information about RemoteFX requirements and recommended RemoteFX-compatible GPUs, read this blog post.

In order to take advantage of all these features, the RDP client must be RemoteFX-enabled (Remote Desktop Connection 7.1 or newer).

Please do note that the instance’s guest OS must support RemoteFX as well. Incompatible guests will not be able to fully benefit from this feature. For example, Windows 10 Home guests are not compatible with RemoteFX, while Windows 10 Enterprise and Pro guests are. This fact can easily be checked by looking up the Video Graphics Adapter in the guest’s Device Manager.

 

RemoteFX inside a guest VM

 

After the RDS-Virtualization feature has been enabled, the nova-compute service running on the Hyper-V compute node will have to be configured as well. The following config option must be set to True in nova-compute‘s nova.conf file:

[hyperv]
enable_remotefx = True

 

In order to spawn an instance with RemoteFX enabled via OpenStack, all you have to do is provide the instance with a few flavor extra_specs:

  • os:resolution:  guest VM screen resolution size.
  • os:monitors:  guest VM number of monitors.
  • os:vram:  guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016.

There are a few things to take into account:

  1. Only a subset of resolution sizes are available for RemoteFX. Any other given resolution size will be met with an error.
  2. The maximum number of monitors allowed is dependent on the requested resolution. Requesting a larger number of monitors than the maximum allowed per requested resolution size will be met with an error.
  3. Only the following VRAM amounts can be requested: 64, 128, 256, 512, 1024.
  4. On Windows / Hyper-V Server 2012 R2, RemoteFX can only be enabled on Generation 1 VMs.

The available resolution sizes and maximum number of monitors are:
For Windows / Hyper-V Server 2012 R2:

1024x768:   4
1280x1024:  4
1600x1200:  3
1920x1200:  2
2560x1600:  1

For Windows / Hyper-V Server 2016:

1024x768: 8
1280x1024: 8
1600x1200: 4
1920x1200: 4
2560x1600: 2
3840x2160: 1

Here is an example of a valid flavor for RemoteFX:

# nova flavor-create <name> <id> <ram> <disk> <vcpus>
nova flavor-create m1.remotefx 999 4096 40 2
nova flavor-key m1.remotefx set os:resolution=1920x1200
nova flavor-key m1.remotefx set os:monitors=1
nova flavor-key m1.remotefx set os:vram=1024
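The resolution and monitor constraints above can be sketched as a small shell helper that validates a request against the Windows / Hyper-V Server 2016 limits (the helper functions are hypothetical, not part of any OpenStack tooling):

```shell
# Maximum number of monitors per resolution on Windows / Hyper-V Server 2016
max_monitors() {
  case "$1" in
    1024x768|1280x1024)  echo 8 ;;
    1600x1200|1920x1200) echo 4 ;;
    2560x1600)           echo 2 ;;
    3840x2160)           echo 1 ;;
    *)                   echo 0 ;;
  esac
}

# validate_remotefx <os:resolution> <os:monitors> -> "ok" or an error message
validate_remotefx() {
  local max
  max=$(max_monitors "$1")
  if [ "$max" -eq 0 ]; then
    echo "error: unsupported resolution $1"
  elif [ "$2" -gt "$max" ]; then
    echo "error: max $max monitor(s) at $1"
  else
    echo "ok"
  fi
}

validate_remotefx 1920x1200 1   # ok
validate_remotefx 3840x2160 2   # error: max 1 monitor(s) at 3840x2160
```

Requests outside these tables are exactly the ones Nova rejects with an error, per points 1 and 2 above.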

 

We hope you enjoy this feature as much as we do! What would you use RemoteFX for?


Windows Server 2016 OpenStack Images https://cloudbase.it/windows-server-2016-openstack-images/ Wed, 17 May 2017 14:23:58 +0000 https://cloudbase.it/?p=36672 Windows Server 2016 is gaining significant traction in OpenStack and other clouds, thanks to the support for Windows Docker containers and lots of other new features. While there’s no OpenStack Windows Server 2016 image directly available for download, the good news is that our automated build scripts will do all the work for you. All you need is a…

The post Windows Server 2016 OpenStack Images appeared first on Cloudbase Solutions.

Windows Server 2016 is gaining significant traction in OpenStack and other clouds, thanks to the support for Windows Docker containers and lots of other new features.

While there’s no OpenStack Windows Server 2016 image directly available for download, the good news is that our automated build scripts will do all the work for you. All you need is a Windows Server 2016 ISO.

The automated build tools are publicly available on GitHub, allowing the generation of virtual (Hyper-V, KVM, VMware ESXi) or bare metal (MAAS, Ironic) images, including Cloudbase-Init, VirtIO drivers (KVM), latest Windows updates, etc.

You can kickstart the image generation on any Windows host (e.g. Windows 10, Windows Server 2016, etc) with Hyper-V enabled and the Windows ADK installed.

git clone https://github.com/cloudbase/windows-openstack-imaging-tools
cd windows-openstack-imaging-tools

Edit the create-windows-online-cloud-image.ps1 in the Examples directory to match your environment and requirements.

If you need to make any changes to the image generation (e.g. adding storage or networking drivers for bare metal servers), there is an extensive Readme which will guide you through the entire process.

# This needs an elevated PowerShell prompt
./create-windows-online-cloud-image.ps1

For KVM, a frequent use case, the tool supports the latest Fedora VirtIO drivers, considerably improving the stability and performance of the OS.

You are now all set to generate your Windows Server 2016 images, let us know if you have any questions!

 


Deploying OpenStack using Docker containers with Hyper-V and Kolla https://cloudbase.it/openstack-kolla-hyper-v/ https://cloudbase.it/openstack-kolla-hyper-v/#comments Mon, 27 Mar 2017 09:10:02 +0000 https://cloudbase.it/?p=37022 OpenStack is a great technology, but it can be a bit cumbersome to deploy and manage without the proper tools. One easy solution to address this issue is to deploy OpenStack services using pre-built Docker containers. Kolla is a set of deployment tools for OpenStack, consisting in the Kolla project itself, for generating OpenStack Docker images, and…

The post Deploying OpenStack using Docker containers with Hyper-V and Kolla appeared first on Cloudbase Solutions.

OpenStack is a great technology, but it can be a bit cumbersome to deploy and manage without the proper tools. One easy solution to address this issue is to deploy OpenStack services using pre-built Docker containers.

Kolla is a set of deployment tools for OpenStack, consisting of the Kolla project itself, for generating OpenStack Docker images, and “deliverables” projects, to deploy the Docker containers and thus OpenStack. The most mature deliverable is kolla-ansible, which, as the name implies, uses Ansible playbooks to automate the deployment. The project documentation can be found here.

 

Hyper-V setup

On the Windows host, we need a VM to host the Linux OpenStack controller. For this purpose I created an Ubuntu 16.04 VM with 8GB of RAM, 4 virtual cores and 20GB of disk. All the controller services run here, deployed with Kolla in Docker containers. Last but not least, the same Hyper-V host also serves as a compute host for the OpenStack deployment. This is achieved by installing our Cloudbase OpenStack components. Additional Hyper-V compute nodes can be added later as needed.

 

Networking setup

On the Hyper-V host, I am going to need 2 virtual switches connected to the OpenStack controller VM. ext-net is the external network; it is bridged to the physical external interface of the Windows host, and I will also use it for the management of the VM. data-net is the data network, which can be a simple private virtual switch for now (an external one is needed only when adding more compute nodes).

On the OpenStack Controller VM there are 3 interfaces. The first two, eth0 and eth1 are connected to the external network. The former is used for management (SSH, etc) and the latter is used by OpenStack for external traffic, managed by Open vSwitch. Finally, eth2 is the data/overlay network. It is used for tenant traffic between the instances and the Neutron components in the controller.

 

eth1 and eth2 do not have an IP and are set as “manual” in /etc/network/interfaces. The reason for this is that they are managed by Open vSwitch. I also need to enable MAC address spoofing on these interfaces (“Advanced Features” tab on the adapter).

The scripts that I will be using configure the Linux network interfaces automatically, so I don’t need to bother with that now. The only interface I have already configured is eth0, so I can SSH into the machine.

 

OpenStack controller deployment

I am going to clone a repository that contains the scripts for the Kolla Openstack deployment, which can be found here. At the end of the deployment it will also create some common flavors, a Cirros VHDX Cinder image, a Neutron virtual router and 2 networks, one external (flat) and one private for tenants (VLAN based).

git clone https://github.com/cloudbase/kolla-resources.git
cd kolla-resources

To begin with, we are going to configure the management and external network details by setting some variables in deploy_openstack.sh:

vim deploy_openstack.sh

# deploy_openstack.sh
MGMT_IP=192.168.0.60
MGMT_NETMASK=255.255.255.0
MGMT_GATEWAY=192.168.0.1
MGMT_DNS="8.8.8.8"

# neutron external network information
FIP_START=192.168.0.80
FIP_END=192.168.0.90
FIP_GATEWAY=192.168.0.1
FIP_CIDR=192.168.0.0/24
TENANT_NET_DNS="8.8.8.8"

# used for HAProxy
KOLLA_INTERNAL_VIP_ADDRESS=192.168.0.91

As you can see, I am using the same subnet for management and external floating IPs.

Now I can run the deployment script. I am using the Linux “time” command to see how long the deployment will take:

time sudo ./deploy_openstack.sh

The first thing this script will do is pull the Docker images for each OpenStack service. The great thing about Kolla is that you only need to create the images once, saving significant time during deployment. It also significantly reduces potential errors due to updated dependencies, as the container images already contain all the required components. The images that I am going to use during the deployment are available here. Feel free to create your own, just follow the documentation.

After the deployment is finished, I have a fully functional OpenStack controller. It took around 13 minutes to deploy, that’s quite fast if you ask me.

real	12m28.716s
user	3m7.296s
sys     1m4.428s

 

By running “sudo docker ps” I can see all the containers running.

Admin credentials can be sourced now:

source /etc/kolla/admin-openrc.sh

The only thing left to do is to deploy the OpenStack Hyper-V components.

 

Nova Hyper-V compute node deployment

First, I’m going to edit the Ansible inventory to add my Hyper-V host (simply named “hyperv-host” in this post) as well as the credentials needed to access it:

vim hyperv_inventory

[hyperv]
hyperv-host

[hyperv:vars]
ansible_ssh_host=192.168.0.120
ansible_user=Administrator
ansible_password=Passw0rd
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

An HTTPS WinRM listener needs to be configured on the Hyper-V host, which can be easily created with this PowerShell script.

Now, I’m going to run the script that will fully deploy and configure Nova compute on Hyper-V. The first parameter is the data bridge that I configured earlier, data-net. The second and third parameters are the Hyper-V credentials that FreeRDP will need in order to access the Hyper-V host when connecting to a Nova instance console.

sudo ./deploy_hyperv_compute_playbook.sh data-net Administrator Passw0rd

Next, I need to set trunk mode for my OpenStack controller. There are two reasons for this: first, I have a tenant network with type VLAN, and second, the controller is a VM in Hyper-V, so the hypervisor needs to allow VLAN tagged packets on the controller VM data interface. Start an elevated PowerShell and run:

Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList 500-2000 -NativeVlanId 0 openstack-controller

“openstack-controller” is the name of the controller VM in Hyper-V.

 

Spawning a VM

Now I have everything in place to start playing around. I will boot a VM and test its connectivity to the Internet.

NETID=`neutron net-show private-net | awk '{if (NR == 5) {print $4}}'`
nova boot --flavor m1.nano \
--nic net-id=$NETID \
--image cirros-gen1-vhdx hyperv-vm1

Taking a look in Horizon:

 

The FreeRDP console access from Horizon works as well. I can also access the VM directly from Hyper-V if needed.

 

 

Useful information

What if you need to modify the configuration of an OpenStack service running in a container? For example, let’s say you want to enable another ML2 type driver. It’s quite easy actually.

In this case I need to edit the ml2_conf.ini file:

sudo vim /etc/kolla/neutron-server/ml2_conf.ini

After I am done with the editing, the only thing left to do is to restart the Neutron server container:

sudo docker restart neutron_server

Done. As you can see, Kolla keeps all the OpenStack configuration files in /etc/kolla.

 

 

Conclusions

Containers can help a lot in an OpenStack deployment. The footprint is small, dependency-induced regressions are limited and, with the aid of a tool like Ansible, automation can be managed very easily.

What’s next? As I mentioned at the beginning, kolla-ansible is just one of the “deliverables”. kolla-kubernetes is also currently being developed and we can already see the benefits that Kubernetes container orchestration can bring to OpenStack deployments, so we are looking forward to kolla-kubernetes reaching a stable status as well!


Setting the Windows admin password in OpenStack https://cloudbase.it/openstack-windows-admin-password/ Mon, 20 Mar 2017 16:02:57 +0000 https://cloudbase.it/?p=37145 We’re getting quite a few questions about how to set the admin password in OpenStack Windows instances, so let’s clarify the available options. nova get-password The secure and proper way to set passwords in OpenStack Windows instances is by letting Cloudbase-Init generate a random password and post it encrypted on the Nova metadata service. The password can…

The post Setting the Windows admin password in OpenStack appeared first on Cloudbase Solutions.

We’re getting quite a few questions about how to set the admin password in OpenStack Windows instances, so let’s clarify the available options.

nova get-password

The secure and proper way to set passwords in OpenStack Windows instances is by letting Cloudbase-Init generate a random password and post it encrypted on the Nova metadata service. The password can then be retrieved with:

nova get-password <instance> [<ssh_private_key_path>]

You need to boot your instance with an SSH keypair (exactly like you would do on Linux for SSH public key authentication). In this case the public key is used to encrypt the password before posting it to the Nova HTTP metadata service. This way nobody can decrypt it without the keypair’s private key.
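The encryption scheme can be illustrated locally with OpenSSL (a standalone sketch of the mechanism, not the exact commands Nova uses; the key and password names are made up):

```shell
# Generate a throwaway RSA keypair, standing in for the Nova keypair
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out demo_key.pem 2>/dev/null
openssl pkey -in demo_key.pem -pubout -out demo_pub.pem

# "Cloudbase-Init side": encrypt the generated password with the public key
printf '%s' 'S3cr3tPassw0rd' | \
  openssl pkeyutl -encrypt -pubin -inkey demo_pub.pem -out password.enc

# "nova get-password side": decrypt with the private key
openssl pkeyutl -decrypt -inkey demo_key.pem -in password.enc
```

Only the holder of the private key can recover the clear text, which is why the keypair's private key is needed at retrieval time.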

This option is also well supported in Horizon, but not enabled by default. To enable it, just edit openstack_dashboard/local/local_settings.py and add:

OPENSTACK_ENABLE_PASSWORD_RETRIEVE = True

To retrieve the password in Horizon, select “RETRIEVE PASSWORD” from the instance dropdown menu:

Horizon Retrieve password 1
Browse for your private key:

Horizon Retrieve password 3

Click “DECRYPT PASSWORD” (the decryption occurs in the browser, no data is sent to the server) and retrieve your password:

Horizon Retrieve password 4

 

nova boot –meta admin_pass

In case an automatically generated password is not suitable, there’s an option to provide a password via the command line. This is NOT RECOMMENDED due to the security implications of sharing clear text passwords in the metadata content.

In this case the password is provided to the Nova instance via metadata service and assigned by Cloudbase-Init to the admin user:

nova boot --meta admin_pass="<password>" ...

Given the previously mentioned security concerns this feature is disabled by default in Cloudbase-Init. In order to enable it inject_user_password must be set to true in the cloudbase-init.conf and cloudbase-init-unattend.conf config files:

inject_user_password = true

 

Password change in userdata script

The userdata can contain any PowerShell content (note the starting #ps1 line to identify it as such), including commands for creating users or setting passwords, providing a much higher degree of flexibility. The same security concerns for clear text content apply as above.
The main limitation is that it does not work with Heat or other solutions that already employ the userdata content for other means.
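As a hedged illustration, a userdata script setting the built-in Administrator password could look like this (the password is a placeholder, and the same clear-text caveats apply):

```powershell
#ps1
# Set the password of the built-in Administrator account
net user Administrator "Passw0rd!123"
```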

Passwordless authentication

Nova allows X509 keypairs to support passwordless authentication for Windows. This is highly recommended as it does not require any password, similarly to SSH public key authentication on Linux. The limitation of this option is that it works only for remote PowerShell and WinRM, not for RDP.

