{"id":2658,"date":"2015-05-18T19:00:32","date_gmt":"2015-05-18T16:00:32","guid":{"rendered":"http:\/\/www.cloudbase.it\/?p=2658"},"modified":"2016-05-16T16:27:15","modified_gmt":"2016-05-16T13:27:15","slug":"enterprise-openstack-cloud","status":"publish","type":"post","link":"https:\/\/cloudbase.it\/enterprise-openstack-cloud\/","title":{"rendered":"The Open Enterprise Cloud \u2013 OpenStack\u2019s Holy Grail?"},"content":{"rendered":"
The way people think about enterprise IT is changing fast, calling into question many common assumptions about how hardware and software should be designed and deployed. These long-held tenets are being upended by the innovation brought on by OpenStack and a handful of other successful open source projects that have gained traction in recent years.<\/p>\n
What is still unclear is how to deliver all this innovation in a form that can be consumed by customers\u2019 IT departments without hiring an army of experienced DevOps engineers \u2013 a commodity as notoriously hard to find as unicorns, and one with a non-trivial impact on the TCO.<\/p>\n
The complexity of an OpenStack deployment is not just perception or FUD spread by the unhappy competition. It\u2019s a real problem that is sometimes ignored by those deeply involved in OpenStack and its core community. The industry is clearly waiting for the solution that can \u201cpackage\u201d OpenStack in a way that hides the inherent complexity of this problem domain and \u201cjust works\u201d. They want something that provides user-friendly interfaces and management tools instead of requiring countless hours of troubleshooting.<\/p>\n
This blog post is the result of our attempt to find and successfully productize this \u2018Holy Grail\u2019, featuring a mixture of open source projects that we actively develop and contribute to (OpenStack, Open vSwitch, Juju, MAAS, Open Compute) alongside Microsoft technologies such as Hyper-V, which we integrate into OpenStack and which are widely used in the enterprise world.<\/p>\n
We are excited to be able to demonstrate this convergence of all the above technologies at our Cloudbase Solutions booth at the Vancouver Summit, where we shall be hosting an Open Compute OCS chassis demo.<\/p>\n
Objectives<\/h3>\n Here are the prerequisites we identified for this product:<\/p>\n <\/p>\n Hardware<\/h3>\n Let\u2019s start at the bottom of the stack. The way server hardware is designed and produced hasn\u2019t really changed much in the last decade. But when the Open Compute Project kicked off, it introduced a set of radical innovations from large corporations running massive clouds, such as Facebook.<\/p>\n Private and public clouds have requirements that differ significantly from what traditional server OEMs keep offering over and over again. In particular, cloud infrastructures don\u2019t require many of the features found on commodity servers. Cloud servers don\u2019t need complex BMCs beyond basic power actions and diagnostics (who needs a graphical console on a server anymore?), too many redundant components (the server blade itself is the new unit of failure) or fancy bezels.<\/p>\n Microsoft\u2019s Open CloudServer<\/a> (OCS) design, contributed to the Open Compute Project, is a great example. It offers a half rack unit blade design with a separate chassis manager in a 19\u201d chassis with redundant PSUs, perfectly compatible with any traditional server room, unlike, for example, earlier 21\u201d Open Compute Project server designs. The total cost of ownership (TCO) for this hardware is significantly lower than for traditional alternatives, which makes it a very big incentive even for companies less prone to changing how they handle their IT infrastructure.<\/p>\n Being open source, OCS designs can be produced by anyone, but this is an effort that only the larger hardware manufacturers can effectively handle. 
Quanta <\/i>in particular is investing actively in this space, with a product range that includes the OCS chassis on display at our Vancouver Summit booth.<\/p>\n Storage<\/h3>\n \u201cThe Storage Area Network (SAN) is dead.\u201d This is something we keep hearing, and if the SAN is experiencing a long twilight, it\u2019s because vendors are still enjoying the profit margins it offers. SANs used to provide specialized hardware and software that has now moved to commodity hardware and operating systems, offering scalable and fault-tolerant options such as Ceph or the SMB3-based Windows Scale-Out File Server, both employed in our solution.<\/p>\n The OCS chassis offers a convenient way of housing SAS, SATA or SSD storage in the form of \u201cJust a Bunch of Disks\u201d (JBOD) units that can be deployed alongside regular compute blades with the same form factor. Depending on the requirements, typically inexpensive mechanical disks can be mixed with fast SSD units.<\/p>\n Bare metal deployment<\/h3>\n There are still organizations and individuals out there who believe that the only way to install an operating system is to connect a monitor, keyboard and mouse to a server, insert a DVD, configure everything interactively and wait until it\u2019s installed. In a cloud, whether private or public, there are dozens, hundreds or thousands of servers to deploy at once, so manual deployments simply don\u2019t work. Besides this, we need all those servers to be consistently configured, without the unavoidable human errors that manual deployments incur at scale.<\/p>\n That\u2019s where automated bare metal deployment comes in.<\/p>\n We chose two distinct projects for bare metal: MAAS and Ironic. We use MAAS<\/a> (to which we contributed Windows support and imaging tools) to bootstrap the chassis and deploy OpenStack using Juju<\/a>, including storage and KVM or Hyper-V compute nodes.<\/p>
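The shift from SANs to scale-out storage described above rests on deterministic data placement: every client can compute where an object lives, so no central controller is needed. The sketch below is only a toy illustration of that principle (it is not Ceph's actual CRUSH algorithm, and the OSD names are made up):

```python
import hashlib

# Toy illustration of the idea behind scale-out storage such as Ceph:
# replica placement is computed deterministically from the object name,
# so there is no central metadata controller (the classic SAN role).
# NOT Ceph's real CRUSH algorithm; the disk daemon names are hypothetical.

OSDS = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]

def place_replicas(obj_name: str, replicas: int = 3) -> list[str]:
    """Pick `replicas` distinct disks for an object by hashing its name."""
    start = int(hashlib.sha256(obj_name.encode()).hexdigest(), 16) % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(replicas)]

# Every client computes the same placement independently:
assert place_replicas("volume-1/chunk-42") == place_replicas("volume-1/chunk-42")
```

Because the mapping is a pure function of the object name and the disk list, losing any single node never loses the ability to locate data, which is why the blade (or disk) can become the unit of failure.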
The user can decide at any time to redistribute the nodes among the individual roles, depending on how many compute or storage resources are needed.<\/p>\n We recently contributed<\/a> support for the OCS chassis manager in Ironic, so users also have the choice of using Ironic in standalone mode, or as part of an OpenStack deployment, to provision physical nodes.<\/p>\n The initial fully automated chassis deployment can be performed from any laptop, server or \u201cjump box\u201d connected to the chassis\u2019 network, without the need to install anything. Even a USB stick with a copy of our v-magine<\/a> tool is enough.<\/p>\n OpenStack<\/h3>\n There are quite a few contenders in the IaaS cloud software arena, but none has managed to generate as much interest as OpenStack, with almost all the relevant names in the industry investing in its foundation and development.<\/p>\n There\u2019s not much to say here that hasn\u2019t been said elsewhere. OpenStack is becoming the de facto<\/i> standard in private clouds, with companies like Canonical, RackSpace and HP basing their public cloud offerings on OpenStack as well.<\/p>\n OpenStack\u2019s compute project, Nova, supports a wide range of hypervisors that can be employed in parallel in a single cloud deployment. Given the enterprise-oriented nature of this project, we opted for two hypervisors: KVM, the current standard in OpenStack, and Hyper-V, the Microsoft hypervisor (available free of charge). This is no surprise, as we have contributed and are actively developing all the relevant Windows and Hyper-V<\/a> support in OpenStack in direct coordination with Microsoft Corporation.<\/p>\n The most common use case for this dual hypervisor deployment consists of hosting Linux instances on KVM and Windows instances on Hyper-V. KVM support for Windows is notoriously shaky, while the Hyper-V components are already integrated in Windows and the platform is fully supported by Microsoft, making it a perfect choice for Windows workloads.<\/p>
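In practice, this Windows-to-Hyper-V and Linux-to-KVM split is typically driven by the Nova scheduler matching image metadata (for example the img_hv_type image property) against each host's hypervisor, via filters such as ImagePropertiesFilter. The following is a simplified standalone sketch of that matching logic only, with hypothetical host names, not Nova's actual implementation:

```python
# Simplified sketch of scheduler-filter style host selection: steer images
# tagged for Hyper-V to Hyper-V hosts and QEMU/KVM-tagged images to KVM
# hosts. Host names and the host table are illustrative assumptions.

HOSTS = {
    "kvm-node-1": "qemu",
    "kvm-node-2": "qemu",
    "hyperv-node-1": "hyperv",
}

def eligible_hosts(image_props: dict) -> list[str]:
    """Return the hosts whose hypervisor matches the image's requested type."""
    wanted = image_props.get("img_hv_type")
    if wanted is None:  # no constraint: any host is eligible
        return list(HOSTS)
    return [host for host, hv in HOSTS.items() if hv == wanted]

windows_image = {"img_hv_type": "hyperv", "os_type": "windows"}
linux_image = {"img_hv_type": "qemu", "os_type": "linux"}

assert eligible_hosts(windows_image) == ["hyperv-node-1"]
assert eligible_hosts(linux_image) == ["kvm-node-1", "kvm-node-2"]
```

The point of the design is that a single cloud presents one API to users, while image metadata silently routes each workload to the hypervisor that runs it best.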
On the Linux side, while any modern Linux works perfectly fine on Hyper-V thanks to the Linux Integration Services (LIS) included in the upstream Linux kernel, KVM is still preferred by most users.<\/p>\n Software defined networking<\/h3>\n Networking has enjoyed a large amount of innovation in recent years, especially in the areas of configuration and multi-tenancy. Open vSwitch (OVS) is by far the leader in this domain, commonly identified as software defined networking (SDN). We recently ported OVS to Hyper-V<\/a>, enabling the integration of Hyper-V in multi-hypervisor clouds with VXLAN as a common overlay standard.<\/p>\n Neutron also includes support for Windows-specific SDN, with both VLAN and NVGRE overlays in the ML2 plugin, allowing seamless integration with other solutions, including OVS.<\/p>\n Physical switches and open networking<\/h3>\n Modern managed network switches provide computing resources that were simply unthinkable just a few years ago, and today they\u2019re able to natively run operating systems traditionally limited to server hardware.<\/p>\n Cumulus Linux, a network operating system for bare metal switches developed by Cumulus Networks, is a Linux distribution with hardware acceleration of switching and routing functions. It integrates seamlessly with the host-based Open vSwitch and Hyper-V networking features outlined above.<\/p>\n Neutron takes care of orchestrating hosts and network switches, allowing a high degree of flexibility, security and performance, which becomes particularly critical as the size of the deployment increases.<\/p>\n
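For reference, the VXLAN overlay mentioned above keeps tenant traffic isolated by wrapping each tenant Ethernet frame in UDP with an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), as defined in RFC 7348. A small sketch of that header layout:

```python
import struct

# Build and parse the 8-byte VXLAN header from RFC 7348: one flags byte
# (0x08 = "VNI present"), 3 reserved bytes, a 24-bit VNI, 1 reserved byte.
# This illustrates the wire format only; real encapsulation is done by
# Open vSwitch / the kernel datapath, not application code.

def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Two 32-bit big-endian words: flags + reserved, then VNI << 8.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vni_of(header: bytes) -> int:
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = vxlan_header(5001)  # each tenant network gets its own VNI
assert len(hdr) == 8
assert vni_of(hdr) == 5001
```

The 24-bit VNI is what lifts the old 4094-network VLAN limit to about 16 million isolated tenant networks, which is why VXLAN works well as a common overlay across KVM, Hyper-V and hardware switches.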
Deploying OpenStack with Juju<\/h3>\n