OpenStack Archives - Cloudbase Solutions

Hassle-free Migration and Disaster Recovery from VMware vSphere to Proxmox VE with Coriolis
https://cloudbase.it/coriolis-vsphere-to-proxmox/ (Mon, 08 Apr 2024)
Following Broadcom’s recent acquisition of VMware, the level of uncertainty within the private virtualized infrastructure space has reached an all-time high. With many companies now put on the spot by unexpected increases to platform procurement and upgrade costs, the need for an Open Source and feature-equivalent alternative to VMware has never been greater.

In this post, we’ll be giving a cursory introduction to Proxmox VE’s features and characteristics, and later showing how easy it is to migrate your VMware virtual infrastructure to Proxmox VE using Cloudbase’s in-house-developed Cloud Migration and Disaster Recovery as a Service solution: Coriolis.

Proxmox VE: the Open Source KVM node manager

The Proxmox Virtual Environment (or PVE for short) is a fully Open Source Hyper-Converged Infrastructure solution for on-premises infrastructure virtualization developed by Proxmox Server Solutions GmbH.

While not as feature-rich as more complete Infrastructure-as-a-Service (IaaS) type solutions like OpenStack, PVE brings to the table a very grounded approach to easily installing, managing, and upgrading numerous independent Open Source solutions packaged together under a single umbrella platform.

Proxmox VE Manager

Key Open Source components of PVE include:

  • Kernel-based Virtual Machine (KVM) type-1 hypervisor: the industry standard for Linux-based hardware virtualization which, with the aid of QEMU, can run a wide array of virtual guest configurations
  • Linux Containers (LXC): a containerization solution which brings together the numerous native process/resource isolation mechanisms offered by the Linux kernel into a unified container runtime
  • OpenZFS: the premier Linux implementation of the highly reliable, scalable, and feature-rich ZFS filesystem, providing flexible local storage to PVE nodes
  • Ceph software defined storage: a robust storage platform designed to provide block, object, and file storage features, even on commodity hardware and disks

Coriolis: CMaaS and DRaaS made easy

Coriolis is the Cloud Migration and Disaster Recovery as a Service (CMaaS/DRaaS) solution developed in-house by Cloudbase Solutions.

While our main goal with Coriolis during its original conception was to prevent vendor lock-in for our customers by offering Lift-and-Shift-type migration capabilities, it has quickly evolved into a fully-fledged Disaster Recovery solution capable of seamlessly replicating virtual infrastructure between almost any Public/Private IaaS platform you can think of.

Coriolis Architectural Diagram

Components and feature set

Coriolis draws much of its architectural inspiration from the OpenStack ecosystem, with its design intrinsically offering the following notable characteristics:

  • highly scalable: all of Coriolis’ components are independently horizontally scalable, which allows Coriolis to easily accommodate use cases both small and large
  • fault tolerant: Coriolis’ components offer redundancy at every level, as well as fault tolerance when interacting with the source/target platforms every step of the way, preventing transient infrastructure-level issues from tripping up business-critical transfers
  • easy to use: Coriolis offers both a user-friendly Web-based Graphical User Interface and bundled command line client for easy management
  • straightforward integration: Coriolis’ easy-to-consume HTTP-based REST API makes integration with existing monitoring infrastructure and tooling a breeze
  • data security and integrity: all datapaths between the source and target platforms go through tightly secured, error-checked connections, and any sensitive information, such as platform credentials, is securely stored within OpenStack’s Barbican secret manager
Coriolis vSphere VM Selection Screen

Agent-less, non-invasive, and non-disruptive

Coriolis is designed to be usable by any end-user of the source/target platforms.

This implies that Coriolis does not demand any access beyond what a normal end-user might have, such as:

  • no need to install an agent in the guest you’re migrating:
    Coriolis performs Lift-and-Shift type transfers of your virtual infrastructure entirely from the platform level, not the guest level
  • no need for admin accounts or access to underlying infrastructure:
    Coriolis strictly leverages normal user accounts and platform-level API features to perform its transfer operations, so handing Coriolis invasive admin access to your control plane is not required
  • no downtime for your virtual infrastructure:
    Coriolis is designed to perform all its transfer operations with zero downtime to your existing virtual infrastructure on the source platform, you can sync and deploy the new infrastructure on the target and perform the cutover whenever you feel comfortable

Coriolis’ edge over similar solutions

Coriolis offers notable advantages over most similar cross-platform guest replication solutions targeting Proxmox currently on the market, including the recently announced additions to the Proxmox Import Wizard.

Coriolis Platform Selection Screen

Of special note are Coriolis’ abilities to:

  1. support a wide array of sources: apart from VMware vSphere, Coriolis allows migrating guests to Proxmox from a large selection of source platforms, from standalone ESXi hosts with vSAN, all the way to public clouds such as AWS or Azure.
    Have a look at the current list of platforms supported by Coriolis
  2. adapt migrated guest operating systems for their new home: Coriolis automatically takes steps to ensure the migrated virtual infrastructure is perfectly suited for its new environment
    Everything from installing appropriate drivers, adapting the guest’s networking configuration, and injecting any add-ons required for full integration with Proxmox — like the VirtIO drivers for Windows guests, or the QEMU guest agent for management, for example — is transparently handled for you!
  3. replicate guest data with zero adverse effects on business continuity: Coriolis’ agent-less data replication approach and careful design aimed at being as unobtrusive as possible enable it to move data with no required downtime for your virtualized applications

Coriolis in Action

Here’s a showcase of how to set up full Disaster Recovery between VMware vSphere and Proxmox VE in mere minutes using Coriolis:

Try Coriolis out!

If you’d like to get hands-on with Coriolis’ features for yourself, please contact us for a demo and trial appliance!

While we get back to you, feel free to have a look over:

Bare metal Kubernetes on mixed x64 and ARM64
https://cloudbase.it/bare-metal-kubernetes-on-mixed-x64-and-arm64/ (Mon, 31 Jul 2023)
This is the first blog post in a series about running mixed x86 and ARM64 Kubernetes clusters, starting with a general architectural overview and then moving to detailed step by step instructions.

Kubernetes doesn’t need any introduction at this point, as it became the de facto standard container orchestration system. If you, dear reader, developed or deployed any workload in the last few years, there’s a very high probability that you had to interact with a Kubernetes cluster.

Most users deploy Kubernetes via services provided by their hyperscaler of choice, typically employing virtual machines as the underlying isolation mechanism. This is a well proven solution, but it comes at the expense of inefficient resource usage, leading many organizations to look for alternatives in order to reduce costs.

One solution consists of deploying Kubernetes on top of an existing on-premises infrastructure running a full scale IaaS solution like OpenStack, or a traditional virtualization solution like VMware, using VMs underneath. This is similar to what happens on public clouds, with the advantage of allowing users to mix legacy virtualized workloads with modern container based applications on top of the same infrastructure. It is a very popular option, as we see a rise in OpenStack deployments for this specific purpose.

But as more and more companies seek dedicated infrastructure for their Kubernetes clusters, especially for Edge use cases, an underlying IaaS or virtualization layer only adds unnecessary complexity and performance limitations.

This is where deploying Kubernetes on bare metal servers really shines, as the clusters can take full advantage of the whole infrastructure, often with significant TCO benefits. Running on bare metal allows us to freely choose between the x64 and ARM64 architectures, or a combination of both, combining the lower energy footprint of ARM servers with the compatibility offered by the more common x64 architecture.
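When mixing x64 and ARM64 nodes in the same cluster, workloads that only ship single-architecture images must be pinned to matching nodes. A minimal sketch using the standard kubernetes.io/arch node label (the deployment name and image below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-x86-app          # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-x86-app
  template:
    metadata:
      labels:
        app: legacy-x86-app
    spec:
      # Schedule only on x64 nodes; the kubelet sets this label automatically
      nodeSelector:
        kubernetes.io/arch: amd64
      containers:
        - name: app
          image: registry.example.com/legacy-app:1.0   # hypothetical single-arch image
```

Images published as multi-arch manifest lists need no such pinning, since the container runtime pulls the variant matching each node’s architecture.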

A Kubernetes infrastructure comes with non-trivial complexity, which requires a fully automated solution for deployment, management, upgrades, observability and monitoring. Here’s a brief list of the key components in the solution that we are about to present.

Host operating system

When it comes to Linux there’s definitely no lack of options. We needed a Linux distro aimed at lean container infrastructure workloads, with a large deployment base on many different physical servers, avoiding a full-fledged traditional Linux server footprint. We decided to use Flatcar for a number of reasons:

  1. Longstanding (in cloud years) proven success, being the continuation of CoreOS
  2. CNCF incubating project
  3. Active community with deep expertise in container scenarios, including Cloudbase engineers
  4. Support for both ARM64 and x64
  5. Commercial support, provided by Cloudbase, as a result of the partnership with Microsoft / Kinvolk

The host OS options are not limited to Flatcar: we have successfully tested many other alternatives, including Mariner and Ubuntu. This is not trivial, as packaging and optimizing images for this sort of infrastructure requires significant domain expertise.

Bare metal host provisioning

This component boots every host via IPMI (or another API provided by the BMC), installs the operating system via PXE, and in general configures every aspect of the host OS and Kubernetes in an automated way. Over the years we have worked with many open source host provisioning solutions (MAAS, Ironic, Crowbar), but in this case we opted for Tinkerbell due to its integration with the Kubernetes ecosystem and support for Cluster API (CAPI).

Distributed Storage

Although the solution presented here can support traditional SAN storage, the storage model in our scenario will be distributed and hyperconverged, with every server providing compute, storage and networking roles. We chose Ceph (deployed via Rook), as it is the leading open source distributed storage solution and given our involvement in the community. When properly configured, it can deliver outstanding performance even on small clusters.

Networking and load balancing

While traditionally Kubernetes clusters employ Flannel or Calico for networking, a bare metal scenario can take advantage of a more modern technology like Cilium. Additionally, Cilium can provide load balancing via BGP out of the box, without the need for additional components like MetalLB.
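As a sketch of what the BGP integration looks like, assuming Cilium’s BGP control plane is enabled (field names follow Cilium’s v2alpha1 BGP API; the ASNs, peer address, and labels below are placeholders):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: lb-announce
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled                 # placeholder node label
  virtualRouters:
    - localASN: 64512              # placeholder private ASN
      exportPodCIDR: true
      # Announce LoadBalancer Service IPs carrying a matching label
      serviceSelector:
        matchLabels:
          announced: "true"        # placeholder Service label
      neighbors:
        - peerAddress: "10.0.0.1/32"   # placeholder top-of-rack switch
          peerASN: 64513
```

With a policy like this, the nodes peer with the fabric directly and no separate MetalLB speaker deployment is required.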

High availability

All components in the deployment are designed with high availability in mind, including storage, networking, compute nodes and API.

Declarative GitOps

Argo CD offers a declarative way to manage the whole CI/CD deployment pipeline. Other open source alternatives like Tekton or FluxCD can also be employed.
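As an illustration, a minimal Argo CD Application that keeps a cluster add-on directory in sync with Git (the repository URL and path are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra-gitops.git   # placeholder repository
    targetRevision: main
    path: addons
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

The automated sync policy is what makes the setup truly declarative: the cluster converges to whatever the Git repository describes, rather than to whatever was last applied by hand.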

Observability and Monitoring

Last but not least, observability is a key area that goes beyond simple logs, metrics and traces to ensure the whole infrastructure performs as expected; for this we employ Prometheus and Grafana. For monitoring, and to ensure prompt action can be taken in case of issues, we use Sentry.
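For instance, a minimal Prometheus scrape configuration for node-level metrics might look like the fragment below (the targets are placeholders; in a real cluster, Kubernetes service discovery would typically replace static targets):

```yaml
scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets:
          - 10.0.0.11:9100   # placeholder node addresses
          - 10.0.0.12:9100
```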

Coming next

The next blog posts in this series will explain in detail how to deploy the whole architecture presented here, thanks for reading!

OpenStack on Azure
https://cloudbase.it/openstack-on-azure/ (Tue, 27 Jul 2021)
One might ask what the point is of running cloud infrastructure software like OpenStack on top of another cloud, namely the Azure public cloud, as in this blog post. The main use cases are typically testing and API compatibility, but as Azure’s nested virtualization and pass-through features have come a long way recently in terms of performance, other more advanced use cases are viable as well, especially in areas where OpenStack has a strong user base (e.g. telcos).

There are many ways to deploy OpenStack; in this post we will use Kolla Ansible for a containerized OpenStack deployment with Ubuntu 20.04 Server as the host OS.

Preparing the infrastructure

In our scenario, we need at least one beefy virtual machine that supports nested virtualization and can handle all the CPU/RAM/storage requirements of a full-fledged All-In-One OpenStack. For this purpose, we chose the Standard_D8s_v3 size for the OpenStack controller virtual machine (8 vCPUs, 32 GB RAM) with 512 GB of storage. For a multinode deployment, the subject of a future post, more virtual machines can be added, depending on how many OpenStack instances are to be supported by the deployment.

To use the Azure CLI from PowerShell, install it by following the instructions at https://docs.microsoft.com/en-us/cli/azure/install-azure-cli.

# connect to Azure
az login

# create an ssh key for authentication
ssh-keygen

# create the OpenStack controller VM

az vm create `
     --name openstack-controller `
     --resource-group "openstack-rg" `
     --subscription "openstack-subscription" `
     --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest `
     --location westeurope `
     --admin-username openstackuser `
     --ssh-key-values ~/.ssh/id_rsa.pub `
     --nsg-rule SSH `
     --os-disk-size-gb 512 `
     --size Standard_D8s_v3

# az vm create will output the public IP of the instance
$openStackControllerIP = "<IP of the VM>"

# create the static private IP used by Kolla as VIP
az network nic ip-config create --name MyIpConfig `
    --nic-name openstack-controllerVMNic `
    --private-ip-address 10.0.0.10 `
    --resource-group "openstack-rg" `
    --subscription "openstack-subscription"

# connect via SSH to the VM
ssh openstackuser@$openStackControllerIP

# fix the fqdn
# Kolla/Ansible does not work with *.cloudapp FQDNs, so we need to fix it
sudo tee -a /etc/hosts << EOT
$(hostname -i) $(hostname)
EOT

# create a dummy interface that will be used by OpenVswitch as the external bridge port
# Azure Public Cloud does not allow spoofed traffic, so we need to rely on NAT for VMs to
# have outbound connectivity.
sudo ip tuntap add mode tap br_ex_port
sudo ip link set dev br_ex_port up

OpenStack deployment

For the deployment, we will use the Kolla Ansible containerized approach.

Firstly, installation of the base packages for Ansible/Kolla/Cinder is required.

# from the Azure OpenStack Controller VM

# install ansible/kolla requirements
sudo apt install -y python3-dev libffi-dev gcc libssl-dev python3-venv net-tools

# install Cinder NFS backend requirements
sudo apt install -y nfs-kernel-server

# Cinder NFS setup
# Use the VM's private IP here; the $openStackControllerIP variable set earlier
# in the local PowerShell session is not available inside this SSH session
CINDER_NFS_HOST=$(hostname -i)
# Replace with your local network CIDR if you plan to add more nodes
CINDER_NFS_ACCESS=$CINDER_NFS_HOST
sudo mkdir /kolla_nfs
echo "/kolla_nfs $CINDER_NFS_ACCESS(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
# /etc/kolla/config does not exist yet at this point, so create it first
sudo mkdir -p /etc/kolla/config
echo "$CINDER_NFS_HOST:/kolla_nfs" | sudo tee -a /etc/kolla/config/nfs_shares
sudo systemctl restart nfs-kernel-server

Afterwards, let’s install Ansible/Kolla in a Python virtualenv.

mkdir kolla
cd kolla
 
python3 -m venv venv
source venv/bin/activate
 
pip install -U pip
pip install wheel
pip install 'ansible<2.10'
pip install 'kolla-ansible>=11,<12'

Then, prepare Kolla configuration files and passwords.

sudo mkdir -p /etc/kolla/config
sudo cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
sudo chown -R $USER:$USER /etc/kolla
cp venv/share/kolla-ansible/ansible/inventory/* .
kolla-genpwd

Now, let’s check Ansible works.

ansible -i all-in-one all -m ping

As a next step, we need to configure the OpenStack settings:

# This is the static IP we created initially
VIP_ADDR=10.0.0.10
# Azure VM interface is eth0
MGMT_IFACE=eth0
# This is the dummy interface used for OpenVswitch
EXT_IFACE=br_ex_port
# OpenStack Train version
OPENSTACK_TAG=11.0.0

# now use the information above to write it to Kolla configuration file
sudo tee -a /etc/kolla/globals.yml << EOT
kolla_base_distro: "ubuntu"
openstack_tag: "$OPENSTACK_TAG"
kolla_internal_vip_address: "$VIP_ADDR"
network_interface: "$MGMT_IFACE"
neutron_external_interface: "$EXT_IFACE"
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
enable_neutron_provider_networks: "yes"
EOT

Now it is time to deploy OpenStack.

kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one deploy

After the deployment, we need to create the admin environment variable script.

pip3 install python-openstackclient python-barbicanclient python-heatclient python-octaviaclient
kolla-ansible post-deploy
# Load the vars to access the OpenStack environment
. /etc/kolla/admin-openrc.sh

Let’s make the finishing touches and create an OpenStack instance.

# Set your external network CIDR, range and gateway, matching your environment, e.g.:
export EXT_NET_CIDR='10.0.2.0/24'
export EXT_NET_RANGE='start=10.0.2.150,end=10.0.2.199'
export EXT_NET_GATEWAY='10.0.2.1'
./venv/share/kolla-ansible/init-runonce

# Enable NAT so that VMs can have Internet access and be able to
# reach their floating IP from the controller node.
sudo ifconfig br-ex $EXT_NET_GATEWAY netmask 255.255.255.0 up
sudo iptables -t nat -A POSTROUTING -s $EXT_NET_CIDR -o eth0 -j MASQUERADE

# Create a demo VM
openstack server create --image cirros --flavor m1.tiny --key-name mykey --network demo-net demo1

Conclusions

Deploying OpenStack on Azure is fairly straightforward, with the caveat that the OpenStack instances cannot be accessed from the Internet without further changes (this affects only inbound traffic; the OpenStack instances can still access the Internet). Here are the main changes that we introduced to be able to perform the deployment in this scenario:

  • Add a static IP on the first interface that will be used as the OpenStack API IP
  • Set the OpenStack Controller FQDN to be the same as the hostname
  • Create a dummy interface which will be used as the br-ex external port (there is no need for a secondary NIC, as Azure drops any spoofed packets)
  • Add iptables NAT rules to allow OpenStack VM outbound (Internet) connectivity

OpenStack on ARM64 – LBaaS
https://cloudbase.it/openstack-on-arm64-lbaas/ (Thu, 18 Feb 2021)
In part 2 of this series about OpenStack on ARM64, we got to the point where our cloud is fully deployed, with all the Compute (VMs), Software Defined Networking (SDN) and Software Defined Storage (SDS) up and running. One additional component that we want to add is a Load Balancer as a Service (LBaaS), a key requirement for pretty much any highly available workload and a must-have feature in any cloud.

OpenStack’s current official LBaaS component is called Octavia, which replaced the older Neutron LBaaS v1 project starting with the Liberty release. Deploying and configuring it requires a few steps, which explains the need for a dedicated blog post.

Octavia’s reference implementation uses VM instances called Amphorae to perform the actual load balancing. The octavia-worker service takes care of communicating with the amphorae, and to do that we need to generate a few X509 CAs and certificates used to secure the communication. The good news is that, starting with the Victoria release, kolla-ansible simplifies this task a lot. Here’s how:

# Change the following according to your organization
echo "octavia_certs_country: US" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_state: Oregon" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_organization: OpenStack" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_organizational_unit: Octavia" | sudo tee -a /etc/kolla/globals.yml

# This is the kolla-ansible virtual env created in the previous blog post
cd kolla
source venv/bin/activate

sudo chown $USER:$USER /etc/kolla
kolla-ansible octavia-certificates

The communication between Octavia and the Amphorae needs an isolated network, as we don’t want to share it with the tenant network for security reasons. A simple way to accomplish that is to create a provider network with a dedicated VLAN ID, which is why we enabled Neutron provider networks and OVS VLAN segmentation in the previous post. Again, starting with Victoria, this got much easier with kolla-ansible.

# This is a dedicated network, outside your management LAN address space, change as needed
OCTAVIA_MGMT_SUBNET=192.168.43.0/24
OCTAVIA_MGMT_SUBNET_START=192.168.43.10
OCTAVIA_MGMT_SUBNET_END=192.168.43.254
OCTAVIA_MGMT_HOST_IP=192.168.43.1/24
OCTAVIA_MGMT_VLAN_ID=107

sudo tee -a /etc/kolla/globals.yml << EOT
octavia_amp_network:
  name: lb-mgmt-net
  provider_network_type: vlan
  provider_segmentation_id: $OCTAVIA_MGMT_VLAN_ID
  provider_physical_network: physnet1
  external: false
  shared: false
  subnet:
    name: lb-mgmt-subnet
    cidr: "$OCTAVIA_MGMT_SUBNET"
    allocation_pool_start: "$OCTAVIA_MGMT_SUBNET_START"
    allocation_pool_end: "$OCTAVIA_MGMT_SUBNET_END"
    gateway_ip: "$OCTAVIA_MGMT_HOST_IP"
    enable_dhcp: yes
EOT

Unless there is a dedicated network adapter, a virtual ethernet (veth) pair can be used. This needs to be configured at boot and added to the OVS br-ex switch.

# This sets up the VLAN veth interface
# Netplan doesn't have support for veth interfaces yet
sudo tee /usr/local/bin/veth-lbaas.sh << EOT
#!/bin/bash
sudo ip link add v-lbaas-vlan type veth peer name v-lbaas
sudo ip addr add $OCTAVIA_MGMT_HOST_IP dev v-lbaas
sudo ip link set v-lbaas-vlan up
sudo ip link set v-lbaas up
EOT
sudo chmod 744 /usr/local/bin/veth-lbaas.sh

sudo tee /etc/systemd/system/veth-lbaas.service << EOT
[Unit]
After=network.service

[Service]
ExecStart=/usr/local/bin/veth-lbaas.sh

[Install]
WantedBy=default.target
EOT
sudo chmod 644 /etc/systemd/system/veth-lbaas.service

sudo systemctl daemon-reload
sudo systemctl enable veth-lbaas.service
sudo systemctl start veth-lbaas.service

docker exec openvswitch_vswitchd ovs-vsctl add-port \
  br-ex v-lbaas-vlan tag=$OCTAVIA_MGMT_VLAN_ID

A few more Octavia kolla-ansible configurations…

echo "enable_octavia: \"yes\"" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_network_interface: v-lbaas" | sudo tee -a /etc/kolla/globals.yml

# Flavor used when booting an amphora, change as needed
sudo tee -a /etc/kolla/globals.yml << EOT
octavia_amp_flavor:
  name: "amphora"
  is_public: no
  vcpus: 1
  ram: 1024
  disk: 5
EOT

sudo mkdir /etc/kolla/config/octavia
# Use a config drive in the Amphorae for cloud-init
sudo tee /etc/kolla/config/octavia/octavia-worker.conf << EOT
[controller_worker]
user_data_config_drive = true
EOT

…and we can finally tell kolla-ansible to deploy Octavia:

kolla-ansible -i all-in-one deploy --tags common,horizon,octavia

Octavia uses a special VM image for the Amphorae, which needs to be built for ARM64. We prepared Dockerfiles for building either an Ubuntu or a CentOS image; you can choose either one in the following snippets. We use containers to perform the build in order to isolate the requirements and stay independent from the host OS.

git clone https://github.com/cloudbase/openstack-kolla-arm64-scripts
cd openstack-kolla-arm64-scripts/victoria

# Choose either Ubuntu or CentOS (not both!)

# Ubuntu
docker build amphora-image-arm64-docker -f amphora-image-arm64-docker/Dockerfile.Ubuntu \
  -t amphora-image-build-arm64-ubuntu

# Centos
docker build amphora-image-arm64-docker -f amphora-image-arm64-docker/Dockerfile.Centos \
  -t amphora-image-build-arm64-centos

ARM64 needs a trivial patch in the diskimage-create.sh build script (we also submitted it upstream):

git clone https://opendev.org/openstack/octavia -b stable/victoria
# Use latest branch Octavia to create Ubuntu image
cd octavia
# diskimage-create.sh includes armhf but not arm64
git apply ../0001-Add-arm64-in-diskimage-create.sh.patch
cd ..

Build the image (this will take a bit):

# Again, choose either Ubuntu or CentOS (not both!)

# Note the mount of /mnt and /proc in the docker container
# BEWARE!!!!! Without mounting /proc, the diskimage-builder fails to find mount points and deletes the host's /dev,
# making the host unusable
docker run --privileged -v /dev:/dev -v /proc:/proc -v /mnt:/mnt \
  -v $(pwd)/octavia/:/octavia -ti amphora-image-build-arm64-ubuntu

# Create CentOS 8 Amphora image
docker run --privileged -v /dev:/dev -v $(pwd)/octavia/:/octavia \
  -ti amphora-image-build-arm64-centos

Add the image to Glance, using the octavia user in the service project. The amphora tag is used by Octavia to find the image.

. /etc/kolla/admin-openrc.sh

# Switch to the octavia user and service project
export OS_USERNAME=octavia
export OS_PASSWORD=$(grep octavia_keystone_password /etc/kolla/passwords.yml | awk '{ print $2}')
export OS_PROJECT_NAME=service
export OS_TENANT_NAME=service

openstack image create amphora-x64-haproxy.qcow2 \
  --container-format bare \
  --disk-format qcow2 \
  --private \
  --tag amphora \
  --file octavia/diskimage-create/amphora-x64-haproxy.qcow2

# We can now delete the image file
rm -f octavia/diskimage-create/amphora-x64-haproxy.qcow2

Currently, we need a small patch in Octavia to properly render the userdata for the Amphorae:

# Patch the user_data_config_drive_template
cd octavia
git apply  ../0001-Fix-userdata-template.patch
# For now just update the octavia-worker container, no need to restart it
docker cp octavia/common/jinja/templates/user_data_config_drive.template \
  octavia_worker:/usr/lib/python3/dist-packages/octavia/common/jinja/templates/user_data_config_drive.template

Finally, let’s create a load balancer to make sure everything works fine:

# To create the loadbalancer
. /etc/kolla/admin-openrc.sh

openstack loadbalancer create --name loadbalancer1 --vip-subnet-id public1-subnet

# Check the status until it's marked as ONLINE
openstack loadbalancer list

Congratulations! You have a working LBaaS in your private cloud!!

Troubleshooting

In case something goes wrong, finding the root cause might be tricky. Here are a few suggestions to ease the process.

# Check for errors
sudo tail -f /var/log/kolla/octavia/octavia-worker.log

# SSH into amphora
# Get amphora VM IP either from the octavia-worker.log or from:
openstack server list --all-projects

ssh ubuntu@<amphora_ip> -i octavia_ssh_key #ubuntu
ssh cloud-user@<amphora_ip> -i octavia_ssh_key #centos

# Instances stuck in pending create cannot be deleted
# Password: grep octavia_database_password /etc/kolla/passwords.yml
docker exec -ti mariadb mysql -u octavia -p octavia
update load_balancer set provisioning_status = 'ERROR' where provisioning_status = 'PENDING_CREATE';
exit;

OpenStack on ARM64 – Deployment
https://cloudbase.it/openstack-on-arm64-part-2/ (Mon, 08 Feb 2021)
In the previous blog post we created the ARM OpenStack Kolla container images, and we can now proceed with deploying OpenStack. The host is a Lenovo server with a 32-core Ampere Computing eMAG Armv8 64-bit CPU, running Ubuntu Server 20.04. For simplicity, this will be an “All-in-One” deployment, where all OpenStack components run on the same host, but it can easily be adapted to a multi-node setup.

Let’s start with installing the host package dependencies, in case those are not already there, including Docker.

sudo apt update
sudo apt install -y qemu-kvm docker-ce
sudo apt install -y python3-dev libffi-dev gcc libssl-dev python3-venv
sudo apt install -y nfs-kernel-server

sudo usermod -aG docker $USER
newgrp docker

We can now create a local directory with a Python virtual environment and all the kolla-ansible components:

mkdir kolla
cd kolla

python3 -m venv venv
source venv/bin/activate

pip install -U pip
pip install wheel
pip install 'ansible<2.10'
pip install 'kolla-ansible>=11,<12'

The kolla-ansible configuration is stored in /etc/kolla:

sudo mkdir -p /etc/kolla/config
sudo cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
sudo chown -R $USER:$USER /etc/kolla
cp venv/share/kolla-ansible/ansible/inventory/* .

Let’s check if everything is ok (nothing gets deployed yet):

ansible -i all-in-one all -m ping

kolla-genpwd generates random passwords for every service and stores them in /etc/kolla/passwords.yml, which is quite useful:

kolla-genpwd

Log in to the remote Docker registry, using the registry name and credentials created in the previous post:

ACR_NAME=# Value from ACR creation
SP_APP_ID_PULL_ONLY=# Value from ACR SP creation
SP_PASSWD_PULL_ONLY=# Value from ACR SP creation
REGISTRY=$ACR_NAME.azurecr.io

docker login $REGISTRY --username $SP_APP_ID_PULL_ONLY --password $SP_PASSWD_PULL_ONLY

Now, there are a few variables that we need to set, specific to the host environment. The external interface is the one used for tenant traffic.

VIP_ADDR=# An unallocated IP address in your management network
MGMT_IFACE=# Your management interface
EXT_IFACE=# Your external interface
# This must match the container images tag
OPENSTACK_TAG=11.0.0

Time to write the main configuration file in /etc/kolla/globals.yml:

sudo tee -a /etc/kolla/globals.yml << EOT
kolla_base_distro: "ubuntu"
openstack_tag: "$OPENSTACK_TAG"
kolla_internal_vip_address: "$VIP_ADDR"
network_interface: "$MGMT_IFACE"
neutron_external_interface: "$EXT_IFACE"
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
enable_barbican: "yes"
enable_neutron_provider_networks: "yes"
docker_registry: "$REGISTRY"
docker_registry_username: "$SP_APP_ID_PULL_ONLY"
EOT

The registry password goes in /etc/kolla/passwords.yml:

sed -i "s/^docker_registry_password: .*\$/docker_registry_password: $SP_PASSWD_PULL_ONLY/g" /etc/kolla/passwords.yml

Cinder, the OpenStack block storage component, supports a lot of backends. The easiest way to get started is by using NFS, but LVM would be a great choice as well if you have unused disks.

# Cinder NFS setup
CINDER_NFS_HOST=# Your local IP
# Replace with your local network CIDR if you plan to add more nodes
CINDER_NFS_ACCESS=$CINDER_NFS_HOST
sudo mkdir /kolla_nfs
echo "/kolla_nfs $CINDER_NFS_ACCESS(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
echo "$CINDER_NFS_HOST:/kolla_nfs" | sudo tee -a /etc/kolla/config/nfs_shares
sudo systemctl restart nfs-kernel-server
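If you would rather use the LVM backend mentioned above, the switch boils down to two globals.yml flags. This sketch is not from the original post; it assumes a volume group named cinder-volumes (kolla-ansible's default) created beforehand on a spare disk, e.g. with "sudo vgcreate cinder-volumes /dev/sdb":

```yaml
# Hypothetical /etc/kolla/globals.yml changes for the LVM backend,
# replacing the NFS settings used above:
enable_cinder_backend_nfs: "no"
enable_cinder_backend_lvm: "yes"
```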

The following settings are mostly needed for Octavia, which will be covered in the next blog post in this series:

# Increase the PCIe ports to avoid this error when creating Octavia pool members:
# libvirt.libvirtError: internal error: No more available PCI slots
sudo mkdir /etc/kolla/config/nova
sudo tee /etc/kolla/config/nova/nova-compute.conf << EOT
[DEFAULT]
resume_guests_state_on_host_boot = true

[libvirt]
num_pcie_ports=28
EOT

# This is needed for Octavia
sudo mkdir /etc/kolla/config/neutron
sudo tee /etc/kolla/config/neutron/ml2_conf.ini << EOT
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
EOT

Time to do some final checks, bootstrap the host and deploy OpenStack! The deployment will take some time, so this is a good moment for a coffee.

kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one deploy

Congratulations, you have an ARM OpenStack cloud! Now we can get the CLI tools to access it:

pip3 install python-openstackclient python-barbicanclient python-heatclient python-octaviaclient
kolla-ansible post-deploy
# Load the vars to access the OpenStack environment
. /etc/kolla/admin-openrc.sh

The next steps are optional, but highly recommended in order to get basic functionality, including networking, standard flavors and a minimal Linux image (CirrOS):

# Set your external network CIDR, range and gateway to match your environment, e.g.:
export EXT_NET_CIDR='10.0.2.0/24'
export EXT_NET_RANGE='start=10.0.2.150,end=10.0.2.199'
export EXT_NET_GATEWAY='10.0.2.1'
./venv/share/kolla-ansible/init-runonce

All done! We can now create a basic VM from the command line:

# Create a demo VM
openstack server create --image cirros --flavor m1.tiny --key-name mykey --network demo-net demo1

You can also head to “http://<VIP_ADDR>” and access Horizon, OpenStack’s web UI. The username is admin and the password is in /etc/kolla/passwords.yml:

grep keystone_admin_password /etc/kolla/passwords.yml

In the next post we will add Octavia, the Load Balancer as a Service (LBaaS) component, to the deployment. Enjoy your ARM OpenStack cloud in the meantime!

P.S.: In case you would like to delete your whole deployment and start over:

#kolla-ansible -i ./all-in-one destroy --yes-i-really-really-mean-it

The post OpenStack on ARM64 – Deployment appeared first on Cloudbase Solutions.

]]>
39576
OpenStack on ARM64 – Kolla container images https://cloudbase.it/openstack-on-arm64-part-1/ Mon, 25 Jan 2021 09:00:00 +0000 https://cloudbase.it/?p=39476 This is the beginning of a short series detailing how to deploy OpenStack on ARM64, using Docker containers with Kolla and Kolla-ansible. The objective of this first post is to create the ARM64 container images and push them to a remote registry in order to be used later on, when deploying our OpenStack cloud. We…

The post OpenStack on ARM64 – Kolla container images appeared first on Cloudbase Solutions.

]]>
This is the beginning of a short series detailing how to deploy OpenStack on ARM64, using Docker containers with Kolla and Kolla-ansible.

The objective of this first post is to create the ARM64 container images and push them to a remote registry in order to be used later on, when deploying our OpenStack cloud. We are going to use Azure Container Registry to store the images, but any other OCI compliant registry will do.

Create a container registry

Let’s start by creating the container registry and the related access credentials. This can be done anywhere, e.g. from a laptop; it doesn’t need to be an ARM machine. All we need is to have the Azure CLI installed.

az login
# If you have more than one Azure subscription, choose one:
az account list --output table
az account set --subscription "Your subscription"

Next, let’s create a resource group and a container registry with a unique name. Also choose an Azure region based on your location.

RG=kolla
ACR_NAME=your_registry_name_here
LOCATION=eastus

az group create --name $RG --location $LOCATION
az acr create --resource-group $RG --name $ACR_NAME --sku Basic

We’re now creating two sets of credentials: one with push and pull access, to be used when creating the images, and one with pull-only access, to be used later on during the OpenStack deployment.

ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
SERVICE_PRINCIPAL_NAME=acr-kolla-sp-push
SP_PASSWD=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role acrpush --query password --output tsv)
SP_APP_ID=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
echo "SP_APP_ID=$SP_APP_ID"
echo "SP_PASSWD=$SP_PASSWD"

SERVICE_PRINCIPAL_NAME=acr-kolla-sp-pull
SP_PASSWD_PULL_ONLY=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role acrpull --query password --output tsv)
SP_APP_ID_PULL_ONLY=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
echo "SP_APP_ID_PULL_ONLY=$SP_APP_ID_PULL_ONLY"
echo "SP_PASSWD_PULL_ONLY=$SP_PASSWD_PULL_ONLY"

Create and push the OpenStack Kolla container images

It’s now time to switch to an ARM server where the Kolla container images will be built. We are going to use a Lenovo server with a 32-core eMAG Armv8 64-bit CPU provided by Ampere Computing. The host operating system is Ubuntu 20.04, but the following instructions can be easily adapted to other Linux distros.

Let’s start by installing the dependencies and adding your current user to the docker group (or create a separate user).

sudo apt update
sudo apt install -y docker-ce python3-venv git
sudo usermod -aG docker $USER
newgrp docker

Let’s get Docker to log in to the remote registry that we just created. Set ACR_NAME, SP_APP_ID and SP_PASSWD to the values obtained in the previous steps.

REGISTRY=$ACR_NAME.azurecr.io
docker login $REGISTRY --username $SP_APP_ID --password $SP_PASSWD

Now we can install Kolla in a Python virtual environment and get ready to start building our container images. The OpenStack version is the recently released Victoria, but a previous version can be used if needed (e.g. Ussuri).

mkdir kolla-build
cd kolla-build
python3 -m venv venv
source venv/bin/activate
pip install wheel
# Install Kolla, Victoria version
pip install "kolla>=11,<12"

Edit: the following step can be skipped on Victoria since it defaults to Ubuntu 20.04 where pmdk-tools is available. Additionally, thanks to a recent patch, it can be skipped on Ussuri and Train.

The pmdk-tools Ubuntu package is not available on ARM, so we need to remove it from the nova-compute docker image build. This is done by creating a “template override” that we are going to pass to the build process.

tee template-overrides.j2 << EOT
{% extends parent_template %}

# nova-compute
{% set nova_compute_packages_remove = ['pmdk-tools'] %}
EOT

We can now build the container images and push them to the registry. This will take a while since it’s building and pushing container images for all OpenStack projects and services. Alternatively, it is possible to reduce the number of containers to a subset by creating a profile in kolla-build.conf as explained here.

kolla-build -b ubuntu --registry $REGISTRY --push
# If you created a template override run:
# kolla-build -b ubuntu --registry $REGISTRY --template-override template-overrides.j2 --push
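As a sketch of that profile option (the profile name and service list below are illustrative, not a recommendation), a custom profile can be defined in a kolla-build.conf file:

```shell
# Define a hypothetical custom build profile limiting the build
# to a subset of images:
tee kolla-build.conf << EOT
[profiles]
minimal = keystone,glance,nova,neutron,horizon,mariadb,rabbitmq
EOT

# Then build only that subset, passing the config file explicitly:
# kolla-build --config-file kolla-build.conf -b ubuntu --profile minimal --registry $REGISTRY --push
```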

We are finally ready for our OpenStack ARM64 deployment with Kolla-ansible in the next post!

The post OpenStack on ARM64 – Kolla container images appeared first on Cloudbase Solutions.

]]>
39476
Windows on ARM64 with Cloudbase-Init https://cloudbase.it/cloudbase-init-on-windows-arm64/ Tue, 10 Nov 2020 16:10:46 +0000 https://cloudbase.it/?p=39141 ARM servers are more and more present in our day to day life, their usage varying from minimal IoT devices to huge computing clusters. So we decided to put the Windows support for ARM64 cloud images to the test, with two primary focuses: Toolchain ecosystem – Building and running Cloudbase-Init on Windows ARM64 Virtualization –…

The post Windows on ARM64 with Cloudbase-Init appeared first on Cloudbase Solutions.

]]>

ARM servers are more and more present in our day-to-day lives, with usage varying from minimal IoT devices to huge computing clusters. So we decided to put the Windows support for ARM64 cloud images to the test, with two primary focuses:

  • Toolchain ecosystem – Building and running Cloudbase-Init on Windows ARM64
  • Virtualization – Running Windows and Linux virtual machines on Windows ARM64

Our friends from https://amperecomputing.com kindly provided the computing resources that we used to check the current state of Windows virtualization on ARM64.

The test lab consisted of 3 Ampere Computing EMAG servers (Lenovo HR330A – https://amperecomputing.com/emag), each with a 32-core ARM64 processor, 128 GB of RAM and a 512 GB SSD.


Toolchain ecosystem on Windows ARM64: building and running Cloudbase-Init

Cloudbase-Init is a provisioning agent designed to initialize and configure guest operating systems on various platforms: OpenStack, Azure, Oracle Cloud, VMware, Kubernetes CAPI, OpenNebula, Equinix Metal (formerly: Packet), and many others.

Building and running Cloudbase-Init requires going through multiple layers of an OS ecosystem, as it needs a proper build environment, a C compiler for Python and Python extensions, Win32 and WMI wrappers, a Windows service wrapper and an MSI installer.

This complexity made Cloudbase-Init the perfect candidate for checking the state of the toolchain ecosystem on Windows ARM64.


Install Windows 10 PRO ARM64 on the EMAG ARM servers

EMAG servers come with CentOS 7 preinstalled, so the first step was to have a Windows ARM64 OS installed on them.

Windows Server ARM64 images are unfortunately not publicly available, so the best option is to use the Windows 10 PRO ARM64 images available for download through Windows Insider (https://insider.windows.com/).

As there is no ISO available on the Windows Insiders website, we had to convert the VHDX to a RAW file using qemu-img.exe, boot a Linux live ISO that includes the dd tool (Ubuntu is great for this) on the EMAG server and copy the RAW file content directly to the primary disk.

For the dd step, we needed a Windows machine on which to download and convert the Windows 10 PRO ARM64 VHDX, plus two USB sticks: one for the Ubuntu live ISO and one for the Windows 10 PRO ARM64 RAW file.

Rufus was used for creating the Ubuntu live ISO USB stick and for copying the RAW file to the other USB stick. Note that one USB stick must be at least 32 GB in size to accommodate the ~25 GB Windows RAW file.


After the dd process succeeded, a server reboot was required. The first boot took a while due to the Windows device initialization, followed by the usual “Out of the box experience”.

The following steps show how we built Cloudbase-Init for ARM64. As a side note, Windows 10 ARM64 has a built-in emulator for x86, but not for x64. Practically, we could run the x86 version of Cloudbase-Init on the system, but it would have run very slowly and some features, like starting native processes, would have been limited by the emulation.


Gather information on the toolchain required to build Cloudbase-Init

The Cloudbase-Init ecosystem consists of these main building blocks:

  • Python for Windows ARM64
  • Python setuptools
  • Python pip
  • Python PyWin32
  • Cloudbase-Init
  • OpenStack Service Wrapper executable

Toolchain required:

  • Visual Studio with ARM64 support (2017 or 2019)
  • git



Python for Windows ARM64

Python 3.x for ARM64 can be built using Visual Studio 2017 or 2019. In our case, we used the freely available Visual Studio 2019 Community Edition, downloadable from https://visualstudio.microsoft.com/downloads/.

The required toolchain / components for Visual Studio can be installed using this vsconfig.txt. This way, we make sure that the build environment is 100% reproducible.

Python source code can be found here: https://github.com/python/cpython.

To make the build process even easier, we leveraged GitHub Actions to easily build Python for ARM64. An example workflow can be found here: https://github.com/cloudbase/cloudbase-init-arm-scripts/blob/main/.github/workflows/build.yml.

Also, prebuilt archives of Python for Windows ARM64 are available for download here: https://github.com/ader1990/CPython-Windows-ARM64/releases.



Python setuptools

Python setuptools is a Python package that handles the “python setup.py install” workflow.

Source code can be found here: https://github.com/pypa/setuptools.

The following patches are required for setuptools to work:

Installation steps for setuptools (Python and Visual Studio are required):

set VCVARSALL="C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat"
set CL_PATH="C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX86\ARM64\cl.exe"
set MC_PATH="C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\arm64\mc.exe"

call %VCVARSALL% amd64_arm64 10.0.17763.0 & set

git clone https://github.com/ader1990/setuptools 1>nul
IF %ERRORLEVEL% NEQ 0 EXIT 1

pushd setuptools
    git checkout am_64
    echo "Installing setuptools"
    python.exe bootstrap.py 1>nul 2>nul
    IF %ERRORLEVEL% NEQ 0 EXIT 1

    %CL_PATH% /D "GUI=0" /D "WIN32_LEAN_AND_MEAN" /D _ARM64_WINAPI_PARTITION_DESKTOP_SDK_AVAILABLE launcher.c /O2 /link /MACHINE:ARM64 /SUBSYSTEM:CONSOLE /out:setuptools/cli-arm64.exe
    IF %ERRORLEVEL% NEQ 0 EXIT 1

    python.exe setup.py install 1>nul
    IF %ERRORLEVEL% NEQ 0 EXIT 1
popd


Python pip

Python pip is required for easier management of Cloudbase-Init’s requirements installation and wheels building.

Python’s wheel package is required to build wheels, the pre-built versions of Python packages. With a wheel, there is no need for a compiler at install time, as the package installs directly on the exact system version the wheel has been built for.

Pip sources can be found here: https://github.com/pypa/pip.

The following pip patch is required: https://github.com/ader1990/pip/commit/0559cd17d81dcee43433d641052088b690b57cdd.

The patch introduces two binaries required for ARM64, which were built from: https://github.com/ader1990/simple_launcher/tree/win_arm64

This patched version of pip can use the wheel to create proper binaries for ARM64 (like setuptools).

Installation steps for the patched pip (Python is required):

echo "Installing pip"
python.exe -m easy_install https://github.com/ader1990/pip/archive/20.3.dev1.win_arm64.tar.gz 1>nul 2>nul
IF %ERRORLEVEL% NEQ 0 EXIT


Python PyWin32

The Python PyWin32 package is a wrapper for (almost) all Win32 APIs on Windows. It is a behemoth from the source code perspective, although Cloudbase-Init uses only a limited subset of the Win32 APIs via PyWin32.

Source code can be found here: https://github.com/mhammond/pywin32.

The following patches are required:

Installation steps for PyWin32 (Python 3.8 and Visual Studio 2019 are required):

echo "Installing pywin32"
git clone https://github.com/ader1990/pywin32 1>nul
IF %ERRORLEVEL% NEQ 0 EXIT 1
pushd pywin32
    git checkout win_arm64
    IF %ERRORLEVEL% NEQ 0 EXIT 1
    pushd "win32\src"
        %MC_PATH% -A PythonServiceMessages.mc -h .
    popd
    pushd "isapi\src"
        %MC_PATH% -A pyISAPI_messages.mc -h .
    popd
    mkdir "build\temp.win-arm64-3.8\Release\scintilla" 1>nul 2>nul
    echo '' > "build\temp.win-arm64-3.8\Release\scintilla\scintilla.dll"
    python.exe setup.py install --skip-verstamp
    IF %ERRORLEVEL% NEQ 0 EXIT 1
popd

The build process takes quite a lot of time, at least half an hour, so we took a(nother) cup of coffee and enjoyed the extra time.

The patches hardcode some compiler quirks for Visual Studio 2019 and remove some unneeded extensions from the build. There is work in progress to prettify and upstream the changes.


Cloudbase-Init

Now, as all the previous steps have been completed, it is time to finally build Cloudbase-Init. Thank you for your patience.

Source code can be found here: https://github.com/cloudbase/cloudbase-init

Installation steps for Cloudbase-Init (Python and Visual Studio are required):

echo "Installing Cloudbase-Init"
git clone https://github.com/cloudbase/cloudbase-init 1>nul
IF %ERRORLEVEL% NEQ 0 EXIT 1
pushd cloudbase-init
  echo "Installing Cloudbase-Init requirements"
  python.exe -m pip install -r requirements.txt 1>nul
  IF %ERRORLEVEL% NEQ 0 EXIT 1
  python.exe -m pip install .  1>nul
  IF %ERRORLEVEL% NEQ 0 EXIT 1
popd

After the installation steps are completed, the cloudbase-init.exe ARM64 executable wrapper will be available.


OpenStack Service Wrapper executable

Cloudbase-Init usually runs as a service at every boot. As cloudbase-init.exe is a normal executable, it needs a service wrapper for Windows. A service wrapper is a small program that implements the hooks for the Windows service actions, like start, stop and restart.

Source code can be found here: https://github.com/cloudbase/OpenStackService

The following patch was required: https://github.com/ader1990/OpenStackService/commit/a48c4e54b3f7db7d4df163a6d7e13aa0ead4a58b

For an easier build process, a GitHub actions workflow file can be found here: https://github.com/ader1990/OpenStackService/blob/arm64/.github/workflows/build.yml

A prebuilt release binary for OpenStackService ARM64 is available for download here: https://github.com/ader1990/OpenStackService/releases/tag/v1.arm64


Epilogue

Now we are ready to use Cloudbase-Init for guest initialization on Windows 10 PRO ARM64.

Main takeaways:

  • The main building blocks (Python and Visual Studio) are in great shape to be used for ARM64 applications
  • Some of the Python packages required for Cloudbase-Init still need minor tweaks when it comes to the build process on ARM64.

The post Windows on ARM64 with Cloudbase-Init appeared first on Cloudbase Solutions.

]]>
39141
Easily deploy a Kubernetes cluster on OpenStack https://cloudbase.it/easily-deploy-a-kubernetes-cluster-on-openstack/ Tue, 12 Sep 2017 21:40:27 +0000 https://cloudbase.it/?p=37536 Platform and cloud interoperability has come a long way. IaaS and unstructured PaaS options such as OpenStack and Kubernetes can be combined to create cloud-native applications. In this post we're going to show how Kubernetes can be deployed on an OpenStack cloud infrastructure.

The post Easily deploy a Kubernetes cluster on OpenStack appeared first on Cloudbase Solutions.

]]>
Platform and cloud interoperability has come a long way. IaaS and unstructured PaaS options such as OpenStack and Kubernetes can be combined to create cloud-native applications. In this post we’re going to show how Kubernetes can be deployed on an OpenStack cloud infrastructure.

Setup

My setup is quite simple: an Ocata all-in-one deployment with KVM compute. The OpenStack infrastructure was deployed with Kolla. The deployment method is not important here, but Magnum and Heat need to be deployed alongside the other OpenStack services such as Nova or Neutron. To do this, enable those two services from the /etc/kolla/globals.yml file. If you are using DevStack, here is a local.conf that deploys Heat and Magnum.
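For a Kolla-based deployment like this one, enabling those two services comes down to two flags. A sketch of the relevant /etc/kolla/globals.yml lines:

```yaml
enable_heat: "yes"
enable_magnum: "yes"
```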

 

Kubernetes deployment

The Kubernetes cluster will consist of 1 master node and 2 minion nodes. I’m going to use Fedora Atomic images for the VMs, with a flavor of 1 vCPU, 2 GB of RAM and a 7 GB disk. Below are the commands used to create the necessary environment setup. Please make sure to change the IPs and other configuration values to suit your environment.

# Download the cloud image
wget  https://ftp-stud.hs-esslingen.de/pub/Mirrors/alt.fedoraproject.org/atomic/stable/Fedora-Atomic-25-20170512.2/CloudImages/x86_64/images/Fedora-Atomic-25-20170512.2.x86_64.qcow2

# If using HyperV, convert it to VHD format
qemu-img convert -f qcow2 -O vhdx Fedora-Atomic-25-20170512.2.x86_64.qcow2 fedora-atomic.vhdx

# Provision the cloud image, I'm using KVM so using the qcow2 image
openstack image create --public --property os_distro='fedora-atomic' --disk-format qcow2 \
--container-format bare --file /root/Fedora-Atomic-25-20170512.2.x86_64.qcow2 \
fedora-atomic

# Create a flavor
nova flavor-create cloud.flavor auto 2048 7 1 --is-public True

# Create a key pair
openstack keypair create --public-key ~/.ssh/id_rsa.pub kolla-controller

# Create Neutron networks
# Public network
neutron net-create public_net --shared --router:external --provider:physical_network \
physnet2 --provider:network_type flat

neutron subnet-create public_net 10.7.15.0/24 --name public_subnet \
--allocation-pool start=10.7.15.150,end=10.7.15.180 --disable-dhcp --gateway 10.7.15.1

# Private network
neutron net-create private_net_vlan --provider:segmentation_id 500 \
--provider:physical_network physnet1 --provider:network_type vlan

neutron subnet-create private_net_vlan 10.10.20.0/24 --name private_subnet \
--allocation-pool start=10.10.20.50,end=10.10.20.100 \
--dns-nameserver 8.8.8.8 --gateway 10.10.20.1

# Create a router
neutron router-create router1
neutron router-interface-add router1 private_subnet
neutron router-gateway-set router1 public_net

Before the Kubernetes cluster is deployed, a cluster template must be created. The nice thing about this process is that Magnum does not require long config files or definitions for this. A simple cluster template creation can look like this:

magnum cluster-template-create --name k8s-cluster-template --image fedora-atomic \
--keypair kolla-controller --external-network public_net --dns-nameserver 8.8.8.8 \
--flavor cloud.flavor --docker-volume-size 3 --network-driver flannel --coe kubernetes

Based on this template the cluster can be deployed:

magnum cluster-create --name k8s-cluster --cluster-template k8s-cluster-template \
--master-count 1 --node-count 2

The deployment status can be checked and viewed from Horizon. There are two places where this can be done: the Container Infra -> Clusters tab and the Orchestration -> Stacks tab. This is because Magnum relies on Heat templates to deploy the user-defined resources. I find the Stacks option better because it allows the user to see all the resources and events involved in the process. If something goes wrong, the issue can easily be identified by a red mark.

In the end my cluster should look something like this:

root@kolla-ubuntu-cbsl:~# magnum cluster-show 2ffb0ea6-d3f6-494c-9001-c4c4e01e8125
+---------------------+------------------------------------------------------------+
| Property            | Value                                                      |
+---------------------+------------------------------------------------------------+
| status              | CREATE_COMPLETE                                            |
| cluster_template_id | 595cdb6c-8032-43c8-b546-710410061be0                       |
| node_addresses      | ['10.7.15.112', '10.7.15.113']                             |
| uuid                | 2ffb0ea6-d3f6-494c-9001-c4c4e01e8125                       |
| stack_id            | 91001f55-f1e8-4214-9d71-1fa266845ea2                       |
| status_reason       | Stack CREATE completed successfully                        |
| created_at          | 2017-07-20T16:40:45+00:00                                  |
| updated_at          | 2017-07-20T17:07:24+00:00                                  |
| coe_version         | v1.5.3                                                     |
| keypair             | kolla-controller                                           |
| api_address         | https://10.7.15.108:6443                                   |
| master_addresses    | ['10.7.15.108']                                            |
| create_timeout      | 60                                                         |
| node_count          | 2                                                          |
| discovery_url       | https://discovery.etcd.io/89bf7f8a044749dd3befed959ea4cf6d |
| master_count        | 1                                                          |
| container_version   | 1.12.6                                                     |
| name                | k8s-cluster                                                |
+---------------------+------------------------------------------------------------+

SSH into the master node to check the cluster status:

[root@kubemaster ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeUI is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

So there it is, a fully functioning Kubernetes cluster with 1 master and 2 minion nodes.

A word on networking

Kubernetes networking is not the easiest thing to explain, but I’ll do my best to cover the essentials. After an app is deployed, the user will need to access it from outside the Kubernetes cluster. This is done with Services. To achieve this, each minion node runs a kube-proxy service that allows the Service to do its job. A Service can work in multiple ways, for example via a LoadBalancer VIP provided by the cloud underneath Kubernetes, or via port-forwarding on the minion node’s IP.
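As an illustrative sketch (not part of the original deployment), a NodePort-type Service manifest, which kube-proxy would expose on each minion's IP, could look like this (the names and port are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-nodeport
spec:
  type: NodePort
  selector:
    app: my-release-wordpress
  ports:
    - port: 80          # port of the Service inside the cluster
      nodePort: 30080   # port opened by kube-proxy on every minion
```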

 

Deploy an app

Now that all is set up, an app can be deployed. I am going to install WordPress with Helm. Helm is the package manager for Kubernetes. It installs applications with charts, which are basically application definitions written in YAML. The documentation on how to install Helm can be found here.
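To give an idea of what a chart is, here is a minimal, hypothetical chart skeleton; Helm installs whatever manifests live under templates/, and the names below are purely illustrative:

```shell
# Create a bare-bones chart directory layout.
mkdir -p mychart/templates
tee mychart/Chart.yaml << EOT
apiVersion: v1
name: mychart
description: A minimal example chart
version: 0.1.0
EOT
# Deployment/Service YAML files would go under mychart/templates/,
# after which the chart could be installed with: helm install ./mychart
```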

 

I am going to install WordPress.

[root@kubemaster ~]# helm install stable/wordpress

The pods can be seen:

[root@kubemaster ~]# kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
my-release-mariadb-2689551905-56580     1/1       Running   0          10m
my-release-wordpress-3324251581-gzff5   1/1       Running   0          10m

There are multiple ways of accessing the contents of a pod. I am going to forward port 8080 on the master node to port 80 of the pod.

kubectl port-forward my-release-wordpress-3324251581-gzff5 8080:80

Now WordPress can be accessed via the Kubernetes node IP and port 8080

http://K8S-IP:8080

Kubernetes on OpenStack is not only possible, it can also be easy!

The post Easily deploy a Kubernetes cluster on OpenStack appeared first on Cloudbase Solutions.

]]>
37536
Hyper-V RemoteFX in OpenStack https://cloudbase.it/openstack-remotefx/ Wed, 05 Jul 2017 14:16:40 +0000 https://cloudbase.it/?p=35963 We’ve added support for RemoteFX for Windows / Hyper-V Server 2012 R2 back in Kilo, but the highly anticipated Windows / Hyper-V Server 2016 comes with some new nifty features for which we’re excited about! In case you are not familiar with this feature, it allows you to virtualize your GPUs and share them across…

The post Hyper-V RemoteFX in OpenStack appeared first on Cloudbase Solutions.

]]>
We’ve added support for RemoteFX for Windows / Hyper-V Server 2012 R2 back in Kilo, but the highly anticipated Windows / Hyper-V Server 2016 comes with some nifty new features which we’re excited about!

In case you are not familiar with this feature, it allows you to virtualize your GPUs and share them across virtual machine instances by adding virtual graphics devices. This leads to a richer RDP experience, especially for VDI on OpenStack, as well as the benefit of having a GPU in your instances, accelerating GPU-intensive applications (CUDA, OpenCL, etc.).

If you are curious, you can take a look at one of our little experiments. We’ve run a few GPU-intensive demos on identical guests with and without RemoteFX. The difference was very obvious between the two. You can see the recording here.

One of the most interesting features RemoteFX brings in terms of user experience is device redirection. This allows you to connect your local devices (USB drives, smart cards, VoIP devices, webcams, etc.) to RemoteFX-enabled VMs through your RDP client. A detailed list of devices you can redirect through your RDP session can be found here.

Some of the new features for RemoteFX in Windows / Hyper-V Server 2016 are:

  • 4K resolution option
  • 1GB dedicated VRAM (available choices: 64MB, 128MB, 256MB, 512MB, 1GB) and up to another 1GB shared VRAM
  • Support for Generation 2 VMs
  • OpenGL and OpenCL API support
  • H.264/AVC codec investment
  • Improved performance

One important thing worth mentioning is the fact that RemoteFX allows you to overcommit your GPUs, the same way you can overcommit disk, memory, or vCPUs!

All of this sounds good, but how can you know if you can enable RemoteFX? All you need for this is a compatible GPU that passes the minimum requirements:

  • it must support DirectX 11.0 or newer
  • it must support WDDM 1.2 or newer
  • the Hyper-V feature must be installed

If these requirements are met, all you have to do to enable the feature is run this PowerShell command:

Install-WindowsFeature RDS-Virtualization

Hyper-V has to be configured to use RemoteFX. This can be done by opening Hyper-V Manager, going to Hyper-V Settings and, under Physical GPUs, checking the Use this GPU with RemoteFX checkbox.

For more information about RemoteFX requirements and recommended RemoteFX-compatible GPUs, read this blog post.

In order to take advantage of all these features, the RDP client must be RemoteFX-enabled (Remote Desktop Connection 7.1 or newer).

Please do note that the instance’s guest OS must support RemoteFX as well. Incompatible guests will not be able to fully benefit from this feature. For example, Windows 10 Home guests are not compatible with RemoteFX, while Windows 10 Enterprise and Pro guests are. This fact can easily be checked by looking up the Video Graphics Adapter in the guest’s Device Manager.

RemoteFX inside a guest VM

After the RDS-Virtualization feature has been enabled, the nova-compute service running on the Hyper-V compute node will have to be configured as well. The following config option must be set to True in nova-compute‘s nova.conf file:

[hyperv]
enable_remotefx = True

 

In order to spawn an instance with RemoteFX enabled via OpenStack, all you have to do is provide the instance with a few flavor extra_specs:

  • os:resolution: guest VM screen resolution (e.g. 1920x1200).
  • os:monitors: number of monitors attached to the guest VM.
  • os:vram: guest VM VRAM amount in MB. Only available on Windows / Hyper-V Server 2016.

There are a few things to take into account:

  1. Only a specific set of resolution sizes is available for RemoteFX; requesting any other resolution will result in an error.
  2. The maximum number of monitors allowed depends on the requested resolution; requesting more monitors than the maximum allowed for that resolution will also result in an error.
  3. Only the following VRAM amounts can be requested: 64, 128, 256, 512, 1024.
  4. On Windows / Hyper-V Server 2012 R2, RemoteFX can only be enabled on Generation 1 VMs.

The available resolution sizes and maximum number of monitors are:
For Windows / Hyper-V Server 2012 R2:

1024x768:   4
1280x1024:  4
1600x1200:  3
1920x1200:  2
2560x1600:  1

For Windows / Hyper-V Server 2016:

1024x768:   8
1280x1024:  8
1600x1200:  4
1920x1200:  4
2560x1600:  2
3840x2160:  1
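The resolution, monitor, and VRAM constraints above can be captured in a small validation helper. This is an illustrative sketch only; the names and function are hypothetical, and nova performs the equivalent checks internally:

```python
# Illustrative helper encoding the RemoteFX constraints described above.
# These names are hypothetical; nova enforces the same rules internally.
MAX_MONITORS_2012R2 = {
    "1024x768": 4, "1280x1024": 4, "1600x1200": 3,
    "1920x1200": 2, "2560x1600": 1,
}
MAX_MONITORS_2016 = {
    "1024x768": 8, "1280x1024": 8, "1600x1200": 4,
    "1920x1200": 4, "2560x1600": 2, "3840x2160": 1,
}
ALLOWED_VRAM_MB = {64, 128, 256, 512, 1024}

def validate_remotefx(resolution, monitors, vram_mb=None, server_2016=True):
    """Return a list of error strings for the requested RemoteFX settings."""
    table = MAX_MONITORS_2016 if server_2016 else MAX_MONITORS_2012R2
    errors = []
    if resolution not in table:
        errors.append("unsupported resolution %s" % resolution)
    elif monitors > table[resolution]:
        errors.append("max %d monitor(s) at %s" % (table[resolution], resolution))
    if vram_mb is not None:
        if not server_2016:
            # os:vram is only configurable on Windows / Hyper-V Server 2016
            errors.append("os:vram is only settable on 2016 hosts")
        elif vram_mb not in ALLOWED_VRAM_MB:
            errors.append("unsupported VRAM amount %d MB" % vram_mb)
    return errors
```

For example, requesting two monitors at 2560x1600 on a 2012 R2 host fails, since that resolution only allows a single monitor there.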

Here is an example of a valid flavor for RemoteFX:

# nova flavor-create <name> <id> <ram> <disk> <vcpus>
nova flavor-create m1.remotefx 999 4096 40 2
nova flavor-key m1.remotefx set os:resolution=1920x1200
nova flavor-key m1.remotefx set os:monitors=1
nova flavor-key m1.remotefx set os:vram=1024
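The same flavor can also be defined programmatically. The sketch below is illustrative, uses python-novaclient, and assumes an already authenticated `nova` client object (credentials and session setup are not shown):

```python
# Sketch: define the RemoteFX extra specs as data, then apply them via
# python-novaclient. An authenticated `nova` client object is assumed.
REMOTEFX_EXTRA_SPECS = {
    "os:resolution": "1920x1200",  # must be one of the supported sizes
    "os:monitors": "1",            # within the per-resolution maximum
    "os:vram": "1024",             # MB; settable on 2016 hosts only
}

def create_remotefx_flavor(nova):
    """Create an m1.remotefx flavor and attach the RemoteFX extra specs.

    `nova` is an authenticated novaclient v2 Client instance.
    """
    flavor = nova.flavors.create("m1.remotefx", ram=4096, vcpus=2, disk=40)
    flavor.set_keys(REMOTEFX_EXTRA_SPECS)
    return flavor
```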


We hope you enjoy this feature as much as we do! What would you use RemoteFX for?

The post Hyper-V RemoteFX in OpenStack appeared first on Cloudbase Solutions.

]]>
Windows Server 2016 OpenStack Images https://cloudbase.it/windows-server-2016-openstack-images/ Wed, 17 May 2017 14:23:58 +0000 https://cloudbase.it/?p=36672 Windows Server 2016 is gaining significant traction in OpenStack and other clouds, thanks to the support for Windows Docker containers and lots of other new features. While there’s no OpenStack Windows Server 2016 image directly available for download, the good news is that our automated build scripts will do all the work for you. All you need is a…

The post Windows Server 2016 OpenStack Images appeared first on Cloudbase Solutions.

]]>
Windows Server 2016 is gaining significant traction in OpenStack and other clouds, thanks to the support for Windows Docker containers and lots of other new features.

While there’s no OpenStack Windows Server 2016 image directly available for download, the good news is that our automated build scripts will do all the work for you. All you need is a Windows Server 2016 ISO.

The automated build tools are publicly available on GitHub, allowing the generation of virtual (Hyper-V, KVM, VMware ESXi) or bare metal (MAAS, Ironic) images, including Cloudbase-Init, VirtIO drivers (KVM), latest Windows updates, etc.

You can kickstart the image generation on any Windows host (e.g. Windows 10 or Windows Server 2016) with Hyper-V enabled and the Windows ADK installed.

git clone https://github.com/cloudbase/windows-openstack-imaging-tools
cd windows-openstack-imaging-tools

Edit the create-windows-online-cloud-image.ps1 in the Examples directory to match your environment and requirements.

If you need to make any changes to the image generation (e.g. adding storage or networking drivers for bare metal servers), an extensive Readme will guide you through the entire process.

# This needs an elevated PowerShell prompt
./create-windows-online-cloud-image.ps1

For KVM, a frequent use case, the tool supports the latest Fedora VirtIO drivers, considerably improving the stability and performance of the guest OS.

You are now all set to generate your Windows Server 2016 images. Let us know if you have any questions!


The post Windows Server 2016 OpenStack Images appeared first on Cloudbase Solutions.

]]>