Lee Calcote Jekyll 2025-05-08T17:24:39-05:00 https://gingergeek.com/ Lee Calcote https://gingergeek.com/ [email protected] <![CDATA[Talk - Service Meshes, but at what cost]]> https://gingergeek.com/2019/10/service-meshes-but-at-what-cost 2019-10-04T05:10:15-05:00 2019-10-04T05:10:15-05:00 Lee Calcote https://gingergeek.com [email protected] <p>As you learn of the architecture and value provided by service meshes, you’re intrigued and initially impressed. Upon reflection, you, like many others, think: <font style="color:grey;">“I see the value, but what overhead does being on the mesh incur?” </font></p> <p>Complicating the answer is the fact that there are more than 10 service mesh projects to choose from. While this presentation does not take an in-depth look at the <a href="https://layer5.io/landscape">landscape of service meshes</a>, it does introduce <a href="https://layer5.io/meshery">Meshery</a>, a utility for benchmarking service mesh performance that also provides a playground for familiarizing yourself with the various features of different service meshes.</p> <iframe src="//www.slideshare.net/slideshow/embed_code/key/3PGWiGNg12FP8O" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen=""> </iframe> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/leecalcote/service-meshes-but-at-what-cost" title="Service Meshes, but at what cost?" 
target="_blank">Service Meshes, but at what cost?</a> </strong> from <strong><a href="https://www.slideshare.net/leecalcote" target="_blank">Lee Calcote</a></strong> </div> <p>This talk was delivered at <a href="https://servicemeshday.com"><b>Service Mesh Day 2019</b></a>.</p> <p><a href="./assets/Meshery-Service-Mesh-Day-2019.pdf"> <b>Meshery - Service Mesh Day 2019</b></a></p> <p><button> <a href="./assets/Meshery-Service-Mesh-Day-2019.pdf" download="Meshery Service Mesh Day 2019"> Download </a> </button> <!-- /wp:file --></p> <p><a href="https://gingergeek.com/2019/10/service-meshes-but-at-what-cost/">Talk - Service Meshes, but at what cost</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on October 04, 2019.</p> <![CDATA[How to establish an open source program office]]> https://gingergeek.com/2018/12/how-to-establish-an-open-source-program-office 2018-12-18T06:41:15-06:00 2018-12-18T06:41:15-06:00 Lee Calcote https://gingergeek.com [email protected] <p>In many companies, open source programs start informally with a group of diligent engineers and a few legal people. The ad-hoc group soon realizes it needs a more formal program to scale to address the litany of important issues and achieve specific business goals. With such an office in place, businesses can establish and execute on their open source strategies in clear terms, giving their leaders, developers, marketers, and other staff the tools they need to make open source a success within their operations. In this talk, I discussed:</p> <ul type="disc"> <li>Why to create an open source program office.</li> <li>The role of the open source program office.</li> <li>Open source programs at prominent technology companies.</li> <li>How to establish an open source office.</li> <li>Program structure and management roles.</li> <li>Setting policy, processes and goals.</li> <li>Measuring and monitoring success.</li> </ul> <p>This talk was presented at InnoTech Austin 2018. 
Slides below &#8211;</p> <div align="center"><iframe src="https://calcotestudios.com/talks/decks/slides-innotech-austin-2018-establishing-an-open-source-office.html" width="640" height="420" frameborder="0" scrolling="no" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe><a href="https://calcotestudios.com/talks/decks/slides-innotech-austin-2018-establishing-an-open-source-office.html">open slides</a></div> <p><a href="https://gingergeek.com/2018/12/how-to-establish-an-open-source-program-office">How to establish an open source program office</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on December 18, 2018.</p> <![CDATA[Talk - Linux Containers 101 – from engines to orchestrators]]> https://gingergeek.com/2018/12/linux-containers-101-from-containers-to-orchestrators 2018-12-08T06:20:15-06:00 2018-12-08T06:20:15-06:00 Lee Calcote https://gingergeek.com [email protected] <p>This talk was presented at DeveloperWeek Austin 2018 as an introduction to the concept of Docker containers.</p> <p><strong>Slides:</strong></p> <div align="center"> <iframe src="https://calcotestudios.com/talks/decks/slides-developerweek-austin-2018-linux-containers-101" width="640" height="420" frameborder="0" scrolling="no" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe> <a href="https://calcotestudios.com/talks/decks/slides-developerweek-austin-2018-linux-containers-101">open slides</a></div> <p><a href="https://gingergeek.com/2018/12/linux-containers-101-from-containers-to-orchestrators">Talk - Linux Containers 101 – from engines to orchestrators</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on December 08, 2018.</p> 
<![CDATA[What it means to be Cloud Native]]> https://gingergeek.com/2018/04/what-it-means-to-be-cloud-native 2018-12-08T06:20:15-06:00 2018-12-08T06:20:15-06:00 Lee Calcote https://gingergeek.com [email protected] <p>This talk was presented at InnoTech San Antonio 2018 as a perspective of what it means to be Cloud Native &#8211; from containers to functions. See the <a href="https://gingergeek.com/wp-content/uploads/2018/11/cloud-native-evolution.gif">full-size</a> animated gif.</p> <div align="center"> <blockquote class="twitter-tweet" data-lang="en"> <p dir="ltr" lang="en">Preparing for a talk on &#8220;Establishing an Open Source Program Office&#8221; for <a href="https://twitter.com/INNOTECHAustin?ref_src=twsrc%5Etfw">@INNOTECHAustin</a> this week has me looking back at last year&#8217;s presentation &#8211; &#8220;What It Means to be Cloud Native?&#8221; The answer is cross-cutting&#8230; <a href="https://t.co/Z9hTZVVsVj">https://t.co/Z9hTZVVsVj</a> <a href="https://t.co/2w1lRCZxD2">pic.twitter.com/2w1lRCZxD2</a></p> <p>— Lee Calcote (@lcalcote) <a href="https://twitter.com/lcalcote/status/1046744435248254976?ref_src=twsrc%5Etfw">October 1, 2018</a></p></blockquote> <p><script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8" type="text/javascript"></script></p> </div> <p><strong>Slides:</strong></p> <div align="center"><iframe src="https://calcotestudios.com/talks/decks/slides-innotech-san-antonio-2018-what-it-means-to-be-cloud-native.html" width="640" height="420" frameborder="0" scrolling="no" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe></div> <p><a href="https://gingergeek.com/2018/04/what-it-means-to-be-cloud-native/">What it means to be Cloud Native</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on December 08, 2018.</p> 
<![CDATA[Innovate Summit 2017 - State of Serverless & the CNCF]]> https://gingergeek.com/2017/12/innovate-summit-2017-state-of-serverless-the-cncf 2018-12-08T06:20:15-06:00 2018-12-08T06:20:15-06:00 Lee Calcote https://gingergeek.com [email protected] <p>Presented at <a href="https://innovate.solarwinds.io">SolarWinds Innovate Summit 2017</a>.</p> <p><strong>Slides:</strong></p> <div align="center"><iframe src="https://calcotestudios.com/talks/decks/slides-innovate-summit-2017-state-of-serverless-the-cncf.html" width="640" height="420" frameborder="0" scrolling="no" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe></div> <p><strong>Video:</strong></p> <p><iframe title="The Serverless Landscape &amp; the CNCF - Innovate Summit 2017" width="500" height="281" src="https://www.youtube.com/embed/iEdltQgfNLU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></p> <p><a href="https://gingergeek.com/2017/12/innovate-summit-2017-state-of-serverless-the-cncf/">Innovate Summit 2017 - State of Serverless & the CNCF</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on December 08, 2018.</p> <![CDATA[Containers and Functions - Leveraging Ephemeral Infrastructure Effectively]]> https://gingergeek.com/2018/04/containers-and-functions-leveraging-ephemeral-infrastructure-effectively 2018-12-03T06:06:15-06:00 2018-12-03T06:06:15-06:00 Lee Calcote https://gingergeek.com [email protected] <p><i>Originally published on Container Journal on December 3rd, 2018.</i></p> <p>With containers, microservices and functions interweaving through modern application design, diligence is necessary to make sure you’re successfully navigating when to use containers and functions as application packaging technologies and 
how to employ post-deployment techniques.</p> <p>We all know this can be daunting—it’s an ephemeral world out there. Establishing a delivery pipeline and streamlining workflow for microservices is key to achieving benefits from containers and functions at both an operational level for confidence of resiliency, performance and so on, and at a strategic business level for competitive advantages of speed, flexibility and more.</p> <p>Let’s explore several universal best practices for succeeding in the ephemeral world of containers and functions, and walk through a few of the ins and outs of discerning befitting use of serverless computing. Then, in my next blog post, we’ll look at how to harness the value promised by incorporation of a service mesh into your stack.</p> <h2><strong>Incorporating Orchestration: Wrangling Container Management</strong></h2> <p>Given their near-ubiquitous adoption, container formats and their runtime engines have effectively standardized and stabilized as reliable and interoperable infrastructure. Organizations of all sizes have been running containers in production for a number of years now. Their success in operating containerized workloads in complex ways may be largely attributed to the capabilities of container orchestrators.</p> <p>As I described when contrasting and comparing container orchestrators, while use of a container orchestrator addresses much of this layer of infrastructure needs, it does not meet all application or service-level requirements. Isn’t that why we run infrastructure? To serve the application? With that rhetorical question asked, container orchestrators have necessarily focused first on infrastructure-level concerns, critical to ensuring robust management of the underlying substrate of distributed systems challenges.</p> <p>Unfortunately, this leaves a number of distributed systems concerns for developers to address. Until recently, developers have largely addressed these concerns by writing infrastructure logic into application code—things such as circuit breaking, timeouts and retries—employing client-side libraries to do so. In the second part of this series, I’ll highlight how DevOps teams can manage the layer of challenges unaddressed by container orchestrators using a service mesh. For now, let’s turn our focus to another ephemeral piece of infrastructure—functions.</p> <h2><strong>Costs and Benefits of Serverless Computing</strong></h2> <p>Many of you have become comfortable running multiple containers and are now looking to transcend containers and microservices, augmenting your stack by interweaving functions. Writing individual functions to complete specific tasks is appealing, as doing so facilitates faster startup times, better resource utilization, finer-grained management, flexible and precise scaling and no provisioning, updating or managing server infrastructure. However, certain use cases are better-suited for serverless computing than others. Testing, startup latency, debuggability and cost all must be considered when deciding if serverless is the right fit for an environment.</p> <p>The notion of running a function to perform a task and only paying for the execution time needed to run that task is very appealing. As functions gain a foothold with your applications, exercise caution with respect to serverless pricing models, as cost accumulates quickly. 
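To make this concrete, a back-of-the-envelope cost estimate can be sketched in a few lines. The per-request and per-GB-second rates below are hypothetical placeholders, not any provider's actual pricing, but the shape of the calculation mirrors the common pay-per-invocation-plus-compute-time model:

```python
# Rough serverless cost model: pay per invocation plus per GB-second of
# execution time. Rates here are illustrative, not real provider pricing.
PRICE_PER_MILLION_REQUESTS = 0.20   # dollars (hypothetical)
PRICE_PER_GB_SECOND = 0.0000166     # dollars (hypothetical)

def monthly_cost(invocations, avg_duration_s, memory_gb, fan_out=1):
    """Estimate monthly cost; fan_out models one function calling others."""
    total_invocations = invocations * fan_out
    request_cost = total_invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = total_invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 10M invocations/month, 200 ms each, 512 MB of memory:
base = monthly_cost(10_000_000, 0.2, 0.5)
# The same workload where each invocation triggers three downstream functions:
chained = monthly_cost(10_000_000, 0.2, 0.5, fan_out=4)
print(f"${base:.2f} vs ${chained:.2f}")
```

Note how fan-out multiplies the bill linearly: a function that transparently calls three others quadruples both the request and compute charges, which is exactly the accumulation effect to watch for.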
Costs can accrue in short order when a given function enjoys too much success (is invoked well beyond the number of times initially accounted for), particularly if the execution of one function in turn calls many other functions (or, perhaps, calls back to itself, creating an endless loop of execution). It’s therefore important to understand how many times the function is going to be invoked when deciding if serverless is the right fit. Functions are best-suited for a task that’s run under a short time period. Be conscientious when calling a function from another function: You run the risk of doubling your cost and increasing the complexity of debugging your software as it divides into more and smaller units of independent execution.</p> <h2><strong>FaaScinating Use Cases</strong></h2> <p>The architectural pattern of the use of functions follows an event-driven design, typically persisting output/results from a function to a datastore or queue that in turn triggers the next function (if needed). When subscribing to this pattern, treat all data as though it is in motion, not at rest, at any point during the execution of your function.</p> <p>It’s best to consider serverless when a workload is: asynchronous; concurrent; easy to parallelize into independent units of work; infrequent or with sporadic demand; with large, unpredictable variance in scaling requirements; stateless; ephemeral; without a major need for instantaneous cold start time; or highly dynamic in terms of changing business requirements that drive a need for accelerated developer velocity. Example workloads that readily benefit from serverless architectures include:</p> <ul> <li>Executing logic in response to database changes (insert, update, trigger, delete).</li> <li>Performing analytics on IoT sensor input messages, for example, as Message Queuing Telemetry Transport (MQTT) messages.</li> <li>Handling stream processing (analyzing or modifying data in motion).</li> <li>Managing one-time extract, transform, and load jobs that require a great deal of processing for a short time.</li> <li>Providing cognitive computing via a chat bot interface (asynchronous, but correlated).</li> <li>Scheduling tasks performed for a short time (e.g., cron or batch style invocations).</li> <li>Serving machine learning and AI models (retrieving one or more data elements such as tables or images and matching against a pre-learned data model to identify text, faces, anomalies, etc.).</li> <li>Continuous integration pipelines that provision resources for build jobs on-demand, instead of keeping a pool of build slave hosts waiting for jobs to be dispatched.</li> </ul> <h2><strong>Universal Tips for Successfully Navigating an Ephemeral World</strong></h2> <p>As application packaging technologies, both containers and functions have their own caveats, so knowing how and when to leverage them is key. In your organization, you can apply four universally applicable best practices to packaging, running, deploying and operating containers and functions, including:</p> <ul> <li><strong>Prioritize Observability:</strong> When writing an application for containers, particularly in the case of a microservices design, it’s crucial to ensure both your orchestration and application layers are observable to ensure they expose key metrics about the performance of your infrastructure and application, so that you may reason over their health as needed.</li> <li><strong>Adopt Modern Tooling:</strong> Containers, microservices and functions pose different application development patterns than you may have traditionally encountered, so the right tooling is not always available. However, it’s crucial to adopt monitoring and debugging tools that can support these application development patterns, to help ensure success in deployment and running workloads.</li> <li><strong>Application Design:</strong> The modern application development landscape is ephemeral; a function will come and go, a container will come and go, and applications must be designed to support this life cycle. For functions specifically, you can run into issues with incorrect logic and end up having functions fall into a vicious cycle of calling each other, billing spikes and generally not working effectively.</li> <li><strong>Fit Your Use Case:</strong> How many of the characteristics listed above apply to your use case? Is this use case or your application well-positioned for these ephemeral execution environments?</li> </ul> <h2><strong>Conclusion</strong></h2> <p>At first, the idea of running a function to perform a task and only paying for the execution time needed to run that task is attractive. However, this pricing model can become expensive if you are executing many functions or running a specific function millions of times. With that in mind, it’s crucial to understand how many times the function is going to be invoked when deciding if serverless is the right fit—lengthy batch-processing tasks may not be the best fit for use of a function; functions are better-suited for a task that’s run under a short time period.</p> <p>As containers, microservices and functions become even more integrated into hybrid and cloud environments, you must remain diligent to ensure you’re navigating these aspects of the modern application development landscape successfully. 
Implementing several universal best practices, including prioritizing observability, adopting modern DevOps monitoring tools, designing applications for an ephemeral life cycle and knowing your specific use cases, can all help you succeed in the world of containers, microservices and functions.</p> <p><a href="https://gingergeek.com/2018/04/containers-and-functions-leveraging-ephemeral-infrastructure-effectively/">Containers and Functions - Leveraging Ephemeral Infrastructure Effectively</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on December 03, 2018.</p> <![CDATA[Automation and Orchestration in a Container World]]> https://gingergeek.com/2018/09/automation-and-orchestration-in-a-container-world 2018-09-13T07:50:15-05:00 2018-09-13T07:50:15-05:00 Lee Calcote https://gingergeek.com [email protected] <p><a href="https://gingergeek.com/wp-content/uploads/2018/09/Automation-Orchestration-Container.jpg"><img class="alignright wp-image-2199 size-medium" src="https://gingergeek.com/wp-content/uploads/2018/09/Automation-Orchestration-Container-300x129.jpg" alt="" width="300" height="129" srcset="http://gingergeek.com/wp-content/uploads/2018/09/Automation-Orchestration-Container-300x129.jpg 300w, http://gingergeek.com/wp-content/uploads/2018/09/Automation-Orchestration-Container-768x329.jpg 768w, http://gingergeek.com/wp-content/uploads/2018/09/Automation-Orchestration-Container.jpg 800w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p><em><a href="https://containerjournal.com/2018/09/17/automation-and-orchestration-in-a-container-world/" target="_blank" rel="nofollow noopener">Originally published</a> on Sept. 17th, 2018 on Container Journal.</em></p> <p>Since Docker began popularizing containers about five and a half years ago, this technology has become a key element of digital business transformation. 
Characteristics such as portability, highly efficient sharing of system resources and broad support have made containers an increasingly popular choice. In fact, in March 2017, a <u><a href="https://i.dell.com/sites/doccontent/business/solutions/whitepapers/en/Documents/Containers_Real_Adoption_2017_Dell_EMC_Forrester_Paper.pdf">Forrester study </a></u>found 66 percent of organizations that had adopted containers experienced accelerated developer efficiency, while 75 percent of companies achieved a moderate-to-significant increase in application deployment speed.<span id="more-2196"></span></p> <p>These positive impacts are accelerating container adoption; in SolarWinds’ recent “<u><a href="https://www.solarwinds.com/company/press-releases/2018-q2/solarwinds-study-of-it-professionals-finds-cloud-computing-is-top-transformative-technology-and-main-cause-of-mounting-performance-challenges">IT Trends Report 2018</a></u>,” 44 percent of respondents ranked containers as the most important technology priority today, and 38 percent of respondents ranked containers as the most important technology priority three to five years from now. These industry statistics confirm that container technology interest and adoption are increasing with time.</p> <p>To successfully deploy containers, technology professionals must understand the impact this technology will have on various aspects of the way in which their applications and infrastructure run, and uplevel both their skills and tools accordingly. Even a nominally sized container deployment calls upon orchestration to help manage new aspects of this technology’s life cycle. With the rapid adoption of containers, container orchestrators saw a sharp rise in popularity as they streamlined management for IT administrators and broadly provided for the general caretaking (e.g., scheduling, service discovery, health checking, etc.) 
of a cluster of nodes (servers) running containers.</p> <h2><strong>Deploying Containers: Impact on the Network and Security</strong></h2> <p>The impact containers have on various aspects of an IT environment is in part contingent on the type of containerized workload deployed. To understand this better, let’s characterize two classes of containerized workloads being run as either a system container or an application container. System containers can be described as VM-like in nature, and generally contain a full operating system image and run multiple processes, while an application container is often lighter weight both in terms of footprint and number of processes running (ideally, only a single process). Both classes of containers use namespaces to deal with resource isolation and control groups (cgroups) to manage and enforce resource limits. However, the former lends itself to the containerization of pre-existing applications, while the latter is a best practices pattern for applications initially written to be run as a container.</p> <p>So, how does this distinction materialize in the administration of IT environments? For example, when a system container is deployed, the network may not be a significant consideration when containerizing an entire system and treating it similarly to a VM. By contrast, using application containers to deploy microservices requires requests and application traffic to transit several containers and hosts over the network—potentially even several different networks—making it crucial to have a network monitoring system in place to track latency and ensure requests are addressed in a timely and effective manner. This is easier said than done; although implementing microservices can allow teams to iterate more quickly, they introduce several different components of an application and can be difficult to monitor.</p> <p>As with the implementation of any new technology, security must be strongly considered. 
Vulnerability scanning and runtime protection should be woven into the security practices of any organization deploying containers. Vulnerability scanning is necessary when various packages and libraries are built into each container, whether those are pulled from open source repositories, from internal code repositories or even if the container images are reused from public container registries. Security scanning must be completed to verify that there are no inherent vulnerabilities within a given image’s layers.</p> <p>Whether containerizing an entire system or building an application container from scratch, static analysis of an image’s layers and its Dockerfile (a text document that contains all the commands a user could call on the command line to assemble an image) is key to identifying any vulnerabilities. The second step of maintaining security comes after the container images are verified to have a clean bill of health and are deployed: implementing runtime security. Containers can exhibit abnormal behavior that could be caused by an administrator not adhering to operational best practices, or a malicious hacker who has penetrated a container. Runtime security helps ensure that once deployed, containers function within their intended bounds of operation.</p> <p>With this in mind, how can technology professionals best manage container technology to reap the maximum benefits of leveraging it?</p> <h2><strong>Mitigating Challenges through Automation and Orchestration</strong></h2> <p>Leveraging automation and orchestration helps technology professionals save time and money, prevents issues, and ultimately deploys containers in the most effective way possible. Once a container deployment grows beyond a few hosts, technology professionals typically find the operational functionality provided by container orchestrators critical. 
Orchestrators commonly provide cluster management (host discovery and host health monitoring), scheduling (placement of containers across hosts in the cluster), service discovery (automatic registration of new services and provisioning of friendly DNS names) and so on. Orchestration is key to scaling deployments and to facilitating efficient collaboration between different engineering teams.</p> <p>As container implementation becomes more mainstream, clear standards in container technology have begun to emerge, particularly for the foundational components of containers. The <u><a href="https://www.opencontainers.org/">Open Container Initiative</a></u> (OCI), for example, calls for creating open industry standards around runtime and image specifications to ensure vendors are able to guarantee and deliver on the promise of portability, allowing containers to be effectively shipped and interoperable across different systems.</p> <p>Various tools and best practices in the realm of automation and orchestration can help facilitate successful container deployment and management. For example, leveraging a configuration management tool such as <u><a href="https://puppet.com/">Puppet</a></u> gives technology professionals a way to automatically inspect and deliver their software.</p> <p>Additionally, treating infrastructure as code can help companies that are working toward faster deployments, as this method calls for managing the infrastructure with the same tools and processes that software developers use, such as automated testing, continuous integration, code review, and version control. These enable infrastructure changes to be completed more easily, rapidly, safely, and reliably. Understanding the type of (and extent of) management that a given container deployment requires is also essential. 
In some cases, leveraging a solution such as <u><a href="https://www.docker.com/">Docker</a></u> in swarm mode is sufficient, while others may call for a tool such as <u><a href="https://kubernetes.io/">Kubernetes</a></u>.</p> <h2><strong>Successful Container Management and Deployment</strong></h2> <p>In addition to leveraging automation and orchestration, technology professionals should develop new skills and leverage tools and services when implementing container technology in their organization. Here are a few tips to help facilitate successful container deployment and management:</p> <ul> <li><strong>Get certified in third-party tools:</strong> For NetAdmins, SysAdmins, or those who are “container curious,” getting certified in Docker and Kubernetes can help uplevel container management skills. Whether certification is achieved or not, merely studying the curriculum provides a helpful guide to aspects of container management that a tech pro may be otherwise unaware of.</li> <li><strong>Monitor as a Discipline (MaaD): </strong>Companies expect performance guarantees, cost efficiency, and service availability from their IT departments. One effective way to meet these requirements while using container technology is by leveraging monitoring tools. Actively tracking the activity in environments when application traffic is transiting the network is crucial, and using a monitoring tool to set up automated alerts can also be beneficial when container deployments fail.</li> <li><strong>Conduct regular vulnerability scans: </strong>Organizations that choose to work with container technology will need to create a security framework and set of procedures that are consistently evaluated and updated to prevent attacks. Conducting regular scans of container images is key, as it provides visibility into their security, including vulnerabilities, malware, and policy violations. 
Even prominently and popularly used container images on public-facing container repositories are subject to being laced with vulnerabilities. Many container-related security vendors, such as <u><a href="https://www.twistlock.com/">Twistlock</a></u>, and container registries offer help to identify issues introduced by these vulnerabilities. Container orchestration systems and container runtimes enable regular health-checks by probing the software inside containers to ensure that the application is still healthy and functional.</li> </ul> <h2><strong>Words of Encouragement: Skill Up and Fear Not</strong></h2> <p>With containers increasingly becoming mainstream, it’s important for technology professionals to embrace them now, and avoid letting containers happen to them. But fear not—for tech pros hesitant to implement this new technology, containers are not as foreign or intimidating as they seem. While distinct from virtual machines, administrators familiar with VM management will find many of these paradigms reincarnated. To get started, administrators can even experiment with containers by leveraging the technology on their personal laptops.</p> <p>Implementing automation tooling and orchestration can enhance strategies and tactics to enable smooth container use and adoption, and several additional best practices can help ensure successful implementation and deployment. Beyond automation and orchestration, technology professionals without a background in software engineering should also look to learn scripting, as it will serve them well as deployments scale. Practicing efficient container image building, such as using multi-stage builds and sorting multi-line arguments alphanumerically, is also key. 
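As a minimal sketch of the multi-stage pattern just mentioned, a first stage can compile the application in a full toolchain image while the final image carries only the resulting binary. This example assumes, purely for illustration, a Go application; the base image tags and paths are hypothetical:

```dockerfile
# Stage 1: build the binary in a full toolchain image (tag illustrative).
FROM golang:1.11 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the compiled artifact into a minimal runtime image,
# keeping the final image small and free of build-time packages.
FROM alpine:3.8
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The payoff is that the shipped image contains none of the compiler, source, or intermediate layers from the first stage, which shrinks both the attack surface and the image size.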
Docker even offers <u><a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/">best practices and methods</a></u> for building container images.</p> <p>At the end of the day, as emerging technologies with higher-level capabilities such as functions and serverless platforms, service meshes, analytics, and machine learning are poised to augment container technology in the coming years, it’s important for administrators to skill up now more than ever.</p> <p><a href="https://gingergeek.com/2018/09/automation-and-orchestration-in-a-container-world/">Automation and Orchestration in a Container World</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on September 13, 2018.</p> <![CDATA[Now Available - The Enterprise Path to Service Mesh Architectures]]> https://gingergeek.com/2018/08/now-available-the-enterprise-path-to-service-mesh-architectures 2018-09-08T07:25:15-05:00 2018-09-08T07:25:15-05:00 Lee Calcote https://gingergeek.com [email protected] <p><a href="https://www.nginx.com/resources/library/the-enterprise-path-to-service-mesh-architectures?utm_source=calcote"><img class="alignleft size-medium wp-image-2089" src="https://gingergeek.com/wp-content/uploads/2018/08/The-Enterprise-Path-to-Service-Mesh-Architectures-200x300.png" alt="" width="200" height="300" srcset="http://gingergeek.com/wp-content/uploads/2018/08/The-Enterprise-Path-to-Service-Mesh-Architectures-200x300.png 200w, http://gingergeek.com/wp-content/uploads/2018/08/The-Enterprise-Path-to-Service-Mesh-Architectures-768x1152.png 768w, http://gingergeek.com/wp-content/uploads/2018/08/The-Enterprise-Path-to-Service-Mesh-Architectures-683x1024.png 683w" sizes="(max-width: 200px) 100vw, 200px" /></a>As someone interested in modern software design, you have likely heard of service mesh architectures in the context of microservices. 
Service meshes introduce a new layer into modern infrastructures, offering the potential for creating robust and scalable applications and granular control over them. Is a service mesh right for you?</p> <p>My newly published short book,&nbsp;<a href="https://learning.oreilly.com/library/view/the-enterprise-path/9781492041795/?utm_source=calcote"><em>The Enterprise Path to Service Mesh Architectures</em></a>, helps answer common questions about service mesh architectures through the lens of a large enterprise, addresses how to evaluate your organization’s readiness and the factors to consider when building new applications or converting existing ones to best leverage a service mesh, and offers insight into the deployment architectures used to get you there.</p> <p>This isn&#8217;t the only O&#8217;Reilly title available on the topic of service meshes, however. Before authoring&nbsp;<em>The Enterprise Path to Service Mesh Architectures</em>&nbsp;I happily provided technical review of two other excellent O’Reilly titles on the topic of service meshes: <a href="https://twitter.com/christianposta">Christian Posta</a>&#8217;s and <a href="https://twitter.com/burrsutter">Burr Sutter</a>&#8217;s <em><a href="http://blog.christianposta.com/our-book-has-been-released-introducing-istio-service-mesh-for-microservices?utm_source=calcote">Introducing Istio Service Mesh for Microservices</a></em>&nbsp;and <a href="https://twitter.com/gmiranda23">George Miranda</a>&#8217;s <em>The Service Mesh</em>. <span id="more-2079"></span>These three complement each other well, helping educate adopters and onlookers alike:</p> <div> <div class="" style="padding-left: 30px;"><i class="">Introducing Istio Service Mesh for Microservices</i>&nbsp;by Christian Posta and Burr Sutter does an excellent job of introducing Istio, specifically, and walks through examples of each of its core capabilities. Their book provides a unique perspective of Istio through an ever-so-faintly-tinted OpenShift lens.
The code samples incorporated into the book are clear and helpful in quickly ramping up on Istio.</div> <div class="" style="padding-left: 30px;"><i class="">&nbsp;</i></div> <div class="" style="padding-left: 30px;"><i class="">The Service Mesh</i> by George Miranda is an introduction to service meshes in general and successfully avoids the natural inclination to compare specifics between popular service mesh offerings. Instead, George gives real-world examples of where and how service meshes have benefited customers. There’s clear value in learning from George&#8217;s experience.</div> <div class="" style="padding-left: 30px;"><i class="">&nbsp;</i></div> <div class="" style="padding-left: 30px;"><i class="">The Enterprise Path to&nbsp;Service Mesh Architectures</i>&nbsp;by Lee Calcote focuses on the value of service meshes, how they contrast with container orchestrators and other microservices frameworks, what shape various deployment models take, and how to customize and integrate a service mesh into existing infrastructure.</div> <div></div> <p>&nbsp;</p> <p>As well as these three short books cover the space and specific technologies, much room is left for a deeper, post-Istio-1.0 book. And it&#8217;s exactly such a book that I and my two coauthors, <a href="https://twitter.com/baldwinmathew">Matt Baldwin</a> and <a href="https://twitter.com/zackbutcher?lang=en">Zack Butcher</a>, have set forth to produce:&nbsp;<i class="">Istio: Up and Running</i>. These short books and our forthcoming title complement one another with little overlap.</p> <div>Like other titles on emergent technology, there’s risk of a short shelf-life for some of the technical content. The post-1.0 publication of <em>Istio: Up and Running</em> certainly improves the&nbsp;durability of its content.
To further its usefulness (and shelf-life), we&#8217;re deemphasizing the specifics of installing Istio and focusing on aspects like advanced deployment models, multi-cluster and multi-tenant configurations, best practices, and case studies, lacing these with code samples throughout.</div> </div> <p>Stay tuned on&nbsp;<a href="https://gingergeek.com">my blog</a> for updates.</p> <p><a href="https://gingergeek.com/2018/08/now-available-the-enterprise-path-to-service-mesh-architectures/">Now Available - The Enterprise Path to Service Mesh Architectures</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on September 08, 2018.</p> <![CDATA[How to customize an Istio Service Mesh]]> https://gingergeek.com/2018/05/how-to-customize-an-istio-service-mesh 2018-05-09T08:20:15-05:00 2018-05-09T08:20:15-05:00 Lee Calcote https://gingergeek.com [email protected] <p><a href="https://pixabay.com/en/louvre-pyramid-mesh-perspective-2189967/"><img aria-describedby="caption-attachment-2050" class="wp-image-2050 size-medium" src="https://gingergeek.com/wp-content/uploads/2018/05/louvre-2189967_crop-98f00246a2776bae3338fdcb88b2badf-300x200.jpg" alt="" width="300" height="200" srcset="http://gingergeek.com/wp-content/uploads/2018/05/louvre-2189967_crop-98f00246a2776bae3338fdcb88b2badf-300x200.jpg 300w, http://gingergeek.com/wp-content/uploads/2018/05/louvre-2189967_crop-98f00246a2776bae3338fdcb88b2badf.jpg 720w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p id="caption-attachment-2050" class="wp-caption-text">Louvre mesh perspective (source: jraffin via Pixabay)</p> <p><em><a href="https://www.oreilly.com/ideas/how-to-customize-an-istio-service-mesh" target="_blank" rel="nofollow noopener">Originally published</a> on April 26th, 2018 on O’Reilly.</em></p> <p>Even though service meshes provide value outside of the use of microservices and containers, it&#8217;s in these environments that many teams first consider using a service mesh.
The sheer volume of services that must be managed on an individual, distributed basis with microservices (versus centrally for a monolith) creates challenges for ensuring reliability, observability, and security of these services.</p> <p>Adoption of a container orchestrator addresses a layer of infrastructure needs, but leaves some application or service-level needs unmet. Rather than attempting to overcome distributed systems concerns by writing infrastructure logic into application code, some teams choose to manage these challenges with a service mesh. A service mesh can help by ensuring the responsibility of service management is centralized, avoiding redundant instrumentation, and making observability ubiquitous and uniform across services.<span id="more-2049"></span></p> <h2>Choosing a service mesh</h2> <p>Factors such as your teams’ operational and technology expertise, existing observability, and access control tooling will influence the service mesh components, adapters, and deployment model you choose. Among others, Istio is a popularly adopted, open source service mesh. Some choose Istio (or any service mesh) for the automatic and immediate visibility it provides into top-line service metrics. In fact, many become hooked on service meshes for the observability they provide alone.</p> <p>As a microservices platform, Istio is extensible through the way in which it offers choice of adapters and sidecars. Istio envelops and integrates with other open source projects to deliver a full-service mesh, which both bolsters its set of capabilities and offers a choice of which specific projects are included and deployed. Whether through Mixer adapters for observability or through swapping sidecars, Istio allows you to choose which components to include in your deployment.</p> <h2>Customizing an Istio service mesh</h2> <p>There are multiple deployment models you can use to lay down a service mesh. 
One of the most popular options is to deploy your service proxies as sidecars. Sidecarring your service proxy offers benefits like fine-grained policy enforcement and intra-cluster service-to-service encryption. This deployment model is the model of choice for Istio. Other Istio deployment choices include:</p> <ul> <li>Mixer adapters: typically used for integrating with access control, telemetry, quota enforcement, and billing systems.</li> <li>Service proxies: abstract the network, translating requests between a client and service.</li> </ul> <p>Though Envoy is the default service proxy sidecar, you may choose another service proxy for your sidecar. While there are multiple service proxies in the ecosystem, outside of Envoy, only two have currently demonstrated integration with Istio: Linkerd and NGINX. The arrival of choice in service proxies for Istio has generated a lot of excitement. <a href="https://linkerd.io/getting-started/istio/" target="_blank" rel="nofollow noopener">Linkerd’s integration</a> was created early in Istio’s 0.1.6 release. 
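The sidecar pattern underlying all of these proxies can be illustrated with a toy sketch (hypothetical Python, not Envoy, Linkerd, or Istio code): every request to the service passes through the proxy, which applies telemetry and policy uniformly, keeping that logic out of the application itself:

```python
from typing import Callable

class SidecarProxy:
    """Toy sketch of a sidecar service proxy: it wraps the service,
    counting requests (a stand-in for telemetry) and checking callers
    against an allow-list (a stand-in for fine-grained policy)."""

    def __init__(self, service: Callable[[str], str], allowed: set):
        self.service = service
        self.allowed = allowed
        self.request_count = 0

    def handle(self, caller: str, payload: str) -> str:
        self.request_count += 1          # telemetry is recorded for every request
        if caller not in self.allowed:
            return "403 denied"          # policy enforced before the service is reached
        return self.service(payload)

# The application (str.upper here) stays unaware of policy and metrics.
proxy = SidecarProxy(service=str.upper, allowed={"billing"})
assert proxy.handle("billing", "ok") == "OK"
assert proxy.handle("unknown", "ok") == "403 denied"
assert proxy.request_count == 2
```

The design point is that the proxy, not the application, owns these cross-cutting concerns, which is why swapping sidecars (Envoy, Linkerd, NGINX) is possible without touching service code.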
Similarly, the <a href="https://github.com/nginmesh/nginmesh" target="_blank" rel="nofollow noopener">nginMesh</a> project has drawn much interest in the use of NGINX as Istio’s service proxy, as many organizations have broad and deep operational expertise built around this battle-tested proxy.</p> <p><em>Learn more about how to deploy your sidecar (Istio proxy) of choice in the free webcast &#8220;</em><a href="https://www.oreilly.com/pub/e/3926?intcmp=il-data-webcast-lp-webcast_new_site_how-to-customize-an-istio-service-mesh_end_body_link_cta" target="_blank" rel="nofollow noopener"><em>Istio &#8211; The extensible service mesh</em></a><em>&#8221;.</em></p> <p><a href="https://gingergeek.com/2018/05/how-to-customize-an-istio-service-mesh/">How to customize an Istio Service Mesh</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on May 09, 2018.</p> <![CDATA[The Hybrid Evolution of IT]]> https://gingergeek.com/2017/04/the-hybrid-evolution-of-it 2017-04-25T16:01:00-05:00 2017-04-25T16:01:00-05:00 Lee Calcote https://gingergeek.com [email protected] <p>“It’s a great time to be in Information Technology.” While this is a true statement, not everyone clearly understands why (or perhaps, has the fortitude to make it so). In the face of a massive movement to public cloud—by 2020, 92% of the world’s workloads will be in the cloud—68% in public and 32% in private<a href="http://www.cisco.com/c/dam/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.pdf"><sup>[1]</sup></a>—many in IT feel their value in the workplace eroding along with their identity. This feeling doesn’t need to be reality. Businesses are changing the way they operate and are transforming to leverage information technology more strategically.
IT has a real opportunity to <em>lead</em> this transformation, not let the transformation <em>happen to them</em>.<!--more--></p> <p>IT has led digital transformations before and can do it again. About ten years ago, the video security surveillance industry underwent a digital transformation wherein video security systems transitioned from coaxial cable networks to IP-based Ethernet, from analog video on tape to digitally-encoded video on disk, and from physically separate networks to consolidation into IT-run data centers. IT was the digital leader here, bringing many improvements to the way in which physical security functions. At the end of the day, the physical security guard remained, and in combination with their IT partners, delivered on their charter more efficiently than before.</p> <p>IT has an opportunity to drive digital transformation again, particularly as many businesses are changing the way they operate. Concerned with disrupting or being disrupted, many businesses are pivoting to become software companies. Yes, software is eating the world. As I arrive at the SolarWinds corporate headquarters each work day, I’m reminded of that fact by literal example—AMD, a leading chip designer, has shrunk its operations to share its campus with SolarWinds, a global software company. As businesses shift, CIOs are poised to help IT evolve from a cost center to a source of value that differentiates the business from other players in its industry. CIOs are positioned to play a highly strategic, visible, and collaborative role within the company.
A recent Harvard Business Review study<a href="https://hbr.org/resources/pdfs/comm/RedHat/RedHatReportMay2015.pdf"><sup>[2]</sup></a> shows that while nearly half of line-of-business leader respondents said they would like to learn more about digital trends from their CIO, unfortunately, close to two-fifths said their CIO does not seek to educate and empower line-of-business leaders when it comes to all things digital. Over a third of the organizations polled said IT does not provide useful knowledge about technology or understand which digital knowledge is important to specific business functions. Expectations of CIOs are changing, and it behooves IT to rise to the challenge.</p> <p>The white-knuckled hands of IT need to relax their grip and embrace internal customers as their lifeline, not shun those running shadow IT—be an accelerator, not an inhibitor. Understand that <em>convenience</em> drives retail consumer purchasing behavior more so than price does. Considering those same individuals bring their consumer behaviors (convenience = <em>agility</em>) to the workplace, it’s no wonder shadow IT is prevalent and always lurking. IT needs to develop holistic strategies in alignment with the business mission. IT organizations that are digital leaders don’t just let hybrid <em>happen to them</em>. Digital leaders are three times more likely to have a comprehensive, enterprise-wide strategy for hybrid cloud<a href="http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=GMW14087USEN"><sup>[3]</sup></a>.
In some cases, that strategy may involve factions of IT reporting into different LOBs (e.g. marketing and finance). Strategies of hybrid IT organizations embracing public and private cloud are evolving from infrastructure-centric thinking to application-centric thinking, recognizing that operations automation is friend, not foe.</p> <p>Implementing a strategy is not without challenge. Less than a third of the IT organizations polled in a recent SolarWinds<a href="http://it-trends.solarwinds.com"><sup>[4]</sup></a> study believe they have adequate resources to manage hybrid IT environments. <img class="aligncenter wp-image-1805 size-full" src="https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/04/solarwinds-study.png?resize=703%2C282" alt="" data-id="1805" srcset="https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/04/solarwinds-study.png?w=703 703w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/04/solarwinds-study.png?resize=300%2C120 300w" sizes="(max-width: 703px) 100vw, 703px" data-recalc-dims="1" />Fortunately, any business can excel at digital leadership and management, regardless of its size or budget. Strategies may include aggressively retiring legacy technology where the application and business case allow. Often it’s not technology impeding implementation of strategy, but people and process. CIOs can mitigate inhibitors to evolving into a hybrid IT organization by helping their people to set aside fear, insecurity, and politics. CIOs need to help individuals within their organization understand their changing jobs, migrate to new roles, and become champions of change, all while continuing to ensure security and continuity.</p> <p>The digital transformation of today is a hybrid evolution of IT. The broad-sweeping influence technology has on how businesses operate continues to accelerate and leaves no industry untouched. Organizations are learning how to become software companies.
Established businesses are being turned upside-down and inside-out as new players have a software-centric view of the world. Current market dynamics are fundamentally changing the relationship businesses have with their IT organization, and IT must evolve as business leaders need it more than ever. It’s an exciting future ahead and <em>a great time to be in information technology</em>!</p> <p><a href="http://www.cisco.com/c/dam/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.pdf"><sup>[1]</sup></a> <em>Cisco Global Cloud Index: Forecast and Methodology, 2015–2020</em></p> <p><a href="https://hbr.org/resources/pdfs/comm/RedHat/RedHatReportMay2015.pdf"><sup>[2]</sup></a> <em>Driving Digital Transformation: New Skills for Leaders, New Role for the CIO</em></p> <p><a href="http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=GMW14087USEN" name="_ftn3"><sup>[3]</sup></a> <em>Growing up Hybrid: Accelerating Digital Transformation</em></p> <p><a href="http://it-trends.solarwinds.com" name="_ftn4"><sup>[4]</sup></a> <em>IT Trends Report 2016: The Hybrid IT Evolution</em></p> <p><a href="https://gingergeek.com/2017/04/the-hybrid-evolution-of-it/">The Hybrid Evolution of IT</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on April 25, 2017.</p> <![CDATA[Understanding and Extending Prometheus AlertManager]]> https://gingergeek.com/2017/04/understanding-and-extending-prometheus-alertmanager 2017-04-25T15:34:08-05:00 2017-04-25T15:34:08-05:00 Lee Calcote https://gingergeek.com [email protected] <iframe width="560" height="315" src="https://www.youtube.com/embed/jpb6fLQOgn4" frameborder="0" allowfullscreen></iframe> <p>Presented at <a href="https://cloudnativeeu2017.sched.com/event/9Td7?iframe=no">CloudNativeCon + KubeCon EU 2017</a>.</p> <ul> <li>Tutorial – <a href="https://thenewstack.io/contributing-prometheus-history-alertmanager/">Contributing to Prometheus: An
Open Source Tutorial</a></li> <li>Sample code – <a href="https://github.com/leecalcote/alertmanager/">leecalcote/alertmanager</a></li> <li>Slide <a href="http://calcotestudios.com/kubecon-alertmanager">deck</a></li> <li>Talk <a href="https://youtu.be/jpb6fLQOgn4">video</a></li> </ul> <p><a href="https://gingergeek.com/2017/04/understanding-and-extending-prometheus-alertmanager/">Understanding and Extending Prometheus AlertManager</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on April 25, 2017.</p> <![CDATA[Create Great CNCF User-Base from Lessons Learned from Other Open Source Communities]]> https://gingergeek.com/2017/03/create-great-cncf-user-base-from-lessons-learned-from-other-open-source-communities 2017-03-30T23:55:15-05:00 2017-03-30T23:55:15-05:00 Lee Calcote https://gingergeek.com [email protected] <p style="text-align: center;"> Presented at <a href="https://cloudnativeeu2017.sched.com/event/9Tc3?iframe=no">CloudNativeCon + KubeCon EU 2017</a>.
</p> <p><a href="https://gingergeek.com/2017/03/create-great-cncf-user-base-from-lessons-learned-from-other-open-source-communities/">Create Great CNCF User-Base from Lessons Learned from Other Open Source Communities</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on March 30, 2017.</p> <![CDATA[Developer-defined application delivery]]> https://gingergeek.com/2017/03/developer-defined-application-delivery 2017-03-11T05:17:09-06:00 2017-03-11T05:17:09-06:00 Lee Calcote https://gingergeek.com [email protected] <div id="attachment_1741" style="width: 310px" class="wp-caption alignleft"> <a href="https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/03/ship-84139.jpg"><img class="wp-image-1741 size-medium" src="https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/03/ship-84139.jpg?resize=300%2C225" alt="" data-id="1741" srcset="https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/03/ship-84139.jpg?resize=300%2C225 300w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/03/ship-84139.jpg?resize=768%2C576 768w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/03/ship-84139.jpg?resize=1024%2C768 1024w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/03/ship-84139.jpg?w=2000 2000w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2017/03/ship-84139.jpg?w=3000 3000w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a> <p class="wp-caption-text"> Ship with tug (source: <a href="https://pixabay.com/en/ship-containers-products-shipping-84139/">tpsdave via Pixabay</a>). </p> </div> <p>Cloud-native applications are designed to draw upon the performance, scalability, and reliability benefits of distributed systems. Unfortunately, distributed systems often come at the cost of added complexity. 
As individual components of your application are distributed across networks, and those networks have communication gaps or experience degraded performance, your distributed application components need to continue to function independently.</p> <p>To avoid inconsistencies in application state, distributed systems should be designed with an understanding that components will fail. Nowhere is this more prominent than in the network. Consequently, at their core, distributed systems rely heavily on load balancing—the distribution of requests across two or more systems—in order to be resilient in the face of network disruption and horizontally scale as system load fluctuates.<!--more--></p> <p>As distributed systems become more and more prevalent in the design and delivery of cloud-native applications, load balancers saturate infrastructure design at every level of modern application architecture. In their most commonly thought-of configuration, load balancers are deployed in front of the application, handling requests from the outside world. However, the emergence of microservices means that load balancers play a critical role behind the scenes: i.e. 
managing the flow between services.</p> <p>Therefore, when you work with cloud-native applications and distributed systems, your load balancer takes on other role(s):</p> <p>Read the full article on <a href="https://www.oreilly.com/learning/developer-defined-application-delivery">O’Reilly</a>.</p> <p><a href="https://gingergeek.com/2017/03/developer-defined-application-delivery/">Developer-defined application delivery</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on March 11, 2017.</p> <![CDATA[Growing a Community – Leveraging Meetups to Educate, Grow and Facilitate]]> https://gingergeek.com/2016/11/growing-a-community-leveraging-meetups-to-educate-grow-and-facilitate 2016-11-10T02:10:58-06:00 2016-11-10T02:10:58-06:00 Lee Calcote https://gingergeek.com [email protected] <p><span class="embed-youtube" style="text-align:center; display: block;"></span></p> <p>Presented at KubeCon + CloudNativeCon 2016 on Nov. 9th, 2016 – <a href="http://calcotestudios.com/talks/slides-kubecon-growing-a-community-leveraging-meetups-to-educate-grow-and-facilitate.html">Slides</a></p> <div align="center"> </div> <p style="text-align: center;"> See also <a href="http://gingergeek.com/2016/09/cloud-native-ambassadors-and-docker-captains-navigate-users-through-the-container-ecosystem/">Cloud Native Ambassadors and Docker Captains navigate users through the container ecosystem</a>. </p> <p><a href="https://gingergeek.com/2016/11/growing-a-community-leveraging-meetups-to-educate-grow-and-facilitate/">Growing a Community &#8211; Leveraging Meetups to Educate, Grow and Facilitate</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on November 10, 2016.</p> <![CDATA[From Engines to Orchestrators]]> https://gingergeek.com/2016/10/from-engines-to-orchestrators 2016-10-04T16:34:05-05:00 2016-10-04T16:34:05-05:00 Lee Calcote https://gingergeek.com [email protected] <div align="center"> </div> <p style="text-align: center;"> Presented at ContainerizeThis 2016 on Sept. 30th, 2016. </p> <p>An introduction to container runtimes (engines) and an understanding of when container orchestrators enter and what role they play. We’ll look at what makes them alike, yet unique.</p> <p><a href="https://gingergeek.com/2016/10/from-engines-to-orchestrators/">From Engines to Orchestrators</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on October 04, 2016.</p> <![CDATA[Powering Microservices & Sockets with Nginx and Kubernetes]]> https://gingergeek.com/2016/09/powering-microservices-sockets-with-nginx-and-kubernetes 2016-09-17T02:05:45-05:00 2016-09-17T02:05:45-05:00 Lee Calcote https://gingergeek.com [email protected] <p>Microservices present challenges of coordination, SSL termination, and socket connection, among others. Looking to different cloud providers to assist with their load balancers leaves you wanting, as features such as socket connection support, SSL termination, and geo-distributed load balancing are often absent.</p> <p><span class="embed-youtube" style="text-align:center; display: block;"></span></p> <div align="center"> </div> <p style="text-align: center;"> Presented at Nginx Conference 2016 on Sept. 8th, 2016.
</p> <p><a href="https://gingergeek.com/2016/09/powering-microservices-sockets-with-nginx-and-kubernetes/">Powering Microservices &#038; Sockets with Nginx and Kubernetes</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on September 17, 2016.</p> <![CDATA[The Container Networking Landscape: CNI from CoreOS and CNM from Docker]]> https://gingergeek.com/2016/09/the-container-networking-landscape-cni-from-coreos-and-cnm-from-docker 2016-09-16T07:48:03-05:00 2016-09-16T07:48:03-05:00 Lee Calcote https://gingergeek.com [email protected] <p><a href="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/10/glen-canyon.jpg"><img data-id="1582" src="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/10/glen-canyon.jpg?resize=300%2C224" alt="glen-canyon" class="alignleft size-medium wp-image-1582" srcset="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/10/glen-canyon.jpg?resize=300%2C224 300w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/10/glen-canyon.jpg?resize=768%2C574 768w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/10/glen-canyon.jpg?w=960 960w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p style="text-align: right;"> <em>Originally published on <a href="http://thenewstack.io/container-networking-landscape-cni-coreos-cnm-docker/">The New Stack</a> on Sept. 4th, 2016.</em> </p> <p>There are two proposed standards for configuring network interfaces for Linux containers: the container network model (CNM) and the container network interface (CNI). Networking is complex, and there are many ways to deliver functionality. Arguments can be made as to which one is easier to adopt than the next, or which one is less tethered to their benefactor’s technology.</p> <p>When evaluating any technology, some important considerations are community adoption and support. Some perspectives have been formed on which model has a lower barrier to entry. 
Finding the right metrics to determine the velocity of a project is tricky. Plugin vendors also need to consider the relative ease by which plugins may be written for either of these two models.<!--more--></p> <h2 id="container-network-model">Container Network Model</h2> <p>The <a href="https://github.com/docker/libnetwork/blob/master/docs/design.md">Container Network Model</a> (CNM) is a specification proposed by Docker, adopted by projects such as <a href="https://github.com/docker/libnetwork/blob/master/docs/design.md">libnetwork</a>, with integrations from projects and companies such as <a href="http://contiv.github.io/">Cisco Contiv</a>, <a href="https://wiki.openstack.org/wiki/Kuryr">Kuryr</a>, Open Virtual Networking (OVN), <a href="https://www.projectcalico.org">Project Calico</a>, <a href="https://github.com/vmware/docker-volume-vsphere">VMware</a> and <a href="https://github.com/weaveworks/weave">Weave</a>.</p> <div id="attachment_1587" style="width: 310px" class="wp-caption aligncenter"> <a href="https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Drivers.png"><img data-id="1587" src="https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Drivers.png?resize=300%2C200" alt=" Figure 1: Libnetwork provides an interface between the Docker daemon and network drivers." 
class="size-medium wp-image-1587" srcset="https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Drivers.png?resize=300%2C200 300w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Drivers.png?resize=768%2C512 768w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Drivers.png?resize=1024%2C683 1024w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Drivers.png?w=1600 1600w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a> <p class="wp-caption-text"> Figure 1: Libnetwork provides an interface between the Docker daemon and network drivers. </p> </div> <p>Libnetwork is the canonical implementation of the CNM specification. Libnetwork provides an interface between the Docker daemon and network drivers. The network controller is responsible for pairing a driver to a network. Each driver is responsible for managing the network it owns, including services provided to that network like IPAM. With one driver per network, multiple drivers can be used concurrently with containers connected to multiple networks. Drivers are defined as being either native (built-in to libnetwork or Docker supported) or remote (third party plugins). The native drivers are none, bridge, overlay and MACvlan. Remote drivers may bring any number of capabilities. Drivers are also defined as having a local scope (single host) or global scope (multi-host).</p> <div id="attachment_1586" style="width: 310px" class="wp-caption aligncenter"> <a href="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Interfacing.png"><img data-id="1586" src="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Interfacing.png?resize=300%2C200" alt="Figure 2: Containers being connected through a series of network endpoints." 
class="size-medium wp-image-1586" srcset="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Interfacing.png?resize=300%2C200 300w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Interfacing.png?resize=768%2C512 768w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Interfacing.png?resize=1024%2C683 1024w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Model-Interfacing.png?w=1600 1600w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a> <p class="wp-caption-text"> Figure 2: Containers being connected through a series of network endpoints. </p> </div> <ul> <li><strong>Network Sandbox:</strong> Essentially the networking stack within a container, it is an isolated environment to contain a container’s network configuration.</li> <li><strong>Endpoint:</strong> A network interface that typically comes in pairs. One end of the pair sits in the network sandbox, while the other sits in a designated network. Endpoints join exactly one network, and multiple endpoints can exist within a single network sandbox.</li> <li><strong>Network</strong>: A group of endpoints. A network is a uniquely identifiable group of endpoints that can communicate with each other.</li> </ul> <p>A final, flexible set of CNM constructs are <strong>options</strong> and <strong>labels</strong> (key-value pairs of metadata). CNM supports the notion of user-defined <strong>labels</strong> (defined using the — label flag), which are passed as metadata between libnetwork and drivers. 
Labels are powerful in that the runtime may inform driver behavior.</p> <h2 id="container-network-interface">Container Network Interface</h2> <p>The <a href="https://github.com/containernetworking/cni">Container Network Interface</a> (CNI) is a container networking specification proposed by CoreOS and adopted by projects such as <a href="https://github.com/apache/mesos/blob/master/docs/cni.md">Apache Mesos</a>, <a href="https://github.com/cloudfoundry-incubator/guardian-cni-adapter">Cloud Foundry</a>, <a href="http://kubernetes.io/docs/admin/network-plugins/">Kubernetes</a>, <a href="http://kurma.io/">Kurma</a> and <a href="https://coreos.com/blog/rkt-cni-networking.html">rkt</a>. There are also plugins created by projects such as <a href="https://github.com/contiv/netplugin">Contiv Networking</a>, <a href="https://github.com/projectcalico/calico-cni">Project Calico</a> and <a href="https://github.com/weaveworks/weave">Weave</a>.</p> <div id="attachment_1589" style="width: 310px" class="wp-caption aligncenter"> <a href="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Interface-Drivers.png"><img data-id="1589" src="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Interface-Drivers.png?resize=300%2C200" alt="Figure 3: CNI is a minimal specification for adding and removing containers to networks."
class="size-medium wp-image-1589" srcset="https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Interface-Drivers.png?resize=300%2C200 300w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Interface-Drivers.png?resize=768%2C512 768w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Interface-Drivers.png?resize=1024%2C683 1024w, https://i1.wp.com/gingergeek.com/wp-content/uploads/2016/09/Chart_Container-Network-Interface-Drivers.png?w=1600 1600w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a> <p class="wp-caption-text"> Figure 3: CNI is a minimal specification for adding and removing containers to networks. </p> </div> <p>CNI was created as a minimal specification, built alongside a number of network vendor engineers to be a simple contract between the container runtime and network plugins. A JSON schema defines the expected input and output from CNI network plugins.</p> <p>Multiple plugins may be run at one time with a container joining networks driven by different plugins. Networks are described in configuration files, in JSON format, and instantiated as new namespaces when CNI plugins are invoked. CNI plugins support two commands to add and remove container network interfaces to and from networks. Add gets invoked by the container runtime when it creates a container. Delete gets invoked by the container runtime when it tears down a container instance.</p> <h3 id="cni-flow">CNI Flow</h3> <p>The container runtime needs to first allocate a network namespace to the container and assign it a container ID, then pass along a number of parameters (CNI config) to the network driver. The network driver then attaches the container to a network and reports the assigned IP address back to the container runtime via JSON.</p> <p>Mesos is the latest project to add CNI support, and there is a Cloud Foundry implementation in progress. 
The current state of Mesos networking uses host networking, wherein the container shares the same IP address as the host. Mesos is looking to provide each container with its own network namespace and, consequently, its own IP address. The project is moving to an IP-per-container model and, in doing so, seeks to democratize networking such that operators have freedom to choose the style of networking that best suits their purpose.</p> <p>Currently, CNI primitives handle concerns with IPAM, L2 and L3, and expect the container runtime to handle port-mapping (L4). From a Mesos perspective, this minimalist approach comes with a couple of caveats. The first is that the CNI specification does not specify any port-mapping rules for a container; this capability may be handled by the container runtime. The second is that while operators should be allowed to change the CNI configuration, the behavior of a container when its CNI configuration is modified is not accounted for in the specification. Mesos is addressing this ambiguity by having the agent checkpoint the CNI config associated with a particular container instance, so that the same configuration is applied upon restart.</p> <h2 id="cni-and-cnm">CNI and CNM</h2> <p>In many respects, these two container networking specifications democratize the selection of which type of container networking may be used, in that both are driver-based (plugin-based) models for creating and managing network stacks for containers. Each allows multiple network drivers to be active and used concurrently, in that each provides a one-to-one mapping of a network to that network’s driver. Both models allow containers to join one or more networks.
And each allows the container runtime to launch the network in its own namespace, segregating the application/business logic from the work of connecting the container to the network, which is delegated to the network driver.</p> <p>This modular driver approach is arguably more attractive to network operators than to application developers, in that operators are afforded the flexibility to select one or more drivers that deliver on their specific needs and fit into their existing mode of operation. Operators bear responsibility for ensuring service-level agreements (SLAs) are met and security policies are enforced.</p> <p>Both models provide separate extension points, aka plugin interfaces, for network drivers — to create, configure and connect networks — and IPAM — to configure, discover, and manage IP addresses. One extension point per function encourages composability.</p> <p>CNM does not provide network drivers access to the container’s network namespace. The benefit here is that libnetwork acts as a broker for conflict resolution. An example conflict is when two independent network drivers provide the same static route, using the same route prefix, but point to different next-hop IP addresses. CNI does provide drivers with access to the container network namespace. CNI is considering how it might <a href="https://github.com/containernetworking/cni/issues/147">approach arbitration</a> in such conflict resolution scenarios.</p> <p>CNI supports integration with third-party IPAM and can be used with any container runtime. CNM is designed to support the Docker runtime engine only. Given CNI’s minimalist approach, it’s been argued that it’s comparatively easier to create a CNI plugin than a CNM plugin.</p> <p>These models promote modularity, composability and choice by fostering an ecosystem of innovation by third-party vendors who deliver advanced networking capabilities. The orchestration of network micro-segmentation can become simple API calls to attach, detach and swap networks.
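</p>

<p>The static-route conflict mentioned above (two independent drivers supplying the same route prefix with different next hops) is easy to express concretely. The following detection helper is purely illustrative and is not libnetwork’s arbitration code:</p>

```python
def find_route_conflicts(routes_by_driver: dict) -> list:
    """Return (prefix, {driver: next_hop}) pairs where independent drivers
    provide the same route prefix but point at different next-hop IPs."""
    by_prefix: dict = {}
    for driver, routes in routes_by_driver.items():
        for prefix, next_hop in routes.items():
            by_prefix.setdefault(prefix, {})[driver] = next_hop
    return [(prefix, hops) for prefix, hops in by_prefix.items()
            if len(set(hops.values())) > 1]

conflicts = find_route_conflicts({
    "driver-a": {"192.168.50.0/24": "10.0.0.1"},
    "driver-b": {"192.168.50.0/24": "10.0.0.2"},  # same prefix, different hop
})
print(conflicts[0][0])  # 192.168.50.0/24
```

<p>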
Containers can belong to multiple networks, and each container can publish different services in different networks. The idea of different network constructs as first-class citizens is reflected in the ability to detach a network service from an old container and attach it to a new container.</p> <h2 id="summary">Summary</h2> <p>As vendors and projects continue to evolve, the networking landscape continues to shift. Some offerings have consolidated or combined, such as Docker’s acquisition of SocketPlane, and the transition of Flannel to <a href="http://thenewstack.io/project-calico-flannel-join-forces-policy-secured-networking/">Tigera</a> — a new startup that has <a href="http://thenewstack.io/project-calico-flannel-join-forces-policy-secured-networking/">formed around Canal</a>. Canal, a portmanteau of Calico and Flannel, combines those two projects. CoreOS will provide ongoing support for Flannel as an individual project, and will be integrating Canal with Tectonic, their enterprise solution for Kubernetes. Other changes come in the form of new project releases. Docker 1.12’s release of networking features, including underlay and load-balancing support, is no small step forward for the project.</p> <p>While there are a large number of container networking technologies and distinct ways of approaching them, we’re fortunate in that much of the container ecosystem seems to have converged and built support around only two networking models, at least for now.
Developers would like to eliminate manual network provisioning in containerized environments, and barring those who have misconceptions about their job security, network engineers are ready for the same.</p> <p>As with other resources, an intermediate step to automated provisioning is pre-provisioning, meaning network engineers would preallocate networks with assigned characteristics and services, such as IP address space, IPAM, routing, QoS, etc., and developers or deployment engineers would identify and select from a pool of available networks in which to deploy their applications. Pre-provisioning needs to become a thing of the past, as we’re all ready to move on to automated provisioning.</p> <p><a href="https://gingergeek.com/2016/09/the-container-networking-landscape-cni-from-coreos-and-cnm-from-docker/">The Container Networking Landscape: CNI from CoreOS and CNM from Docker</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on September 16, 2016.</p> <![CDATA[Container Networking: A Breakdown, Explanation and Analysis]]> https://gingergeek.com/2016/09/container-networking-a-breakdown-explanation-and-analysis 2016-09-14T07:36:24-05:00 2016-09-14T07:36:24-05:00 Lee Calcote https://gingergeek.com [email protected] <p><a href="https://i2.wp.com/gingergeek.com/wp-content/uploads/2016/09/container-rope.jpg"><img data-id="1577" src="https://i2.wp.com/gingergeek.com/wp-content/uploads/2016/09/container-rope.jpg?resize=300%2C169" alt="container-rope" class="alignleft size-medium wp-image-1577" srcset="https://i2.wp.com/gingergeek.com/wp-content/uploads/2016/09/container-rope.jpg?resize=300%2C169 300w, https://i2.wp.com/gingergeek.com/wp-content/uploads/2016/09/container-rope.jpg?resize=768%2C432 768w, https://i2.wp.com/gingergeek.com/wp-content/uploads/2016/09/container-rope.jpg?w=960 960w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p style="text-align: right;"> <em>Originally published on <a
href="http://thenewstack.io/container-networking-breakdown-explanation-analysis/">The New Stack</a> on Sept. 4th, 2016.</em> </p> <p>While many gravitate toward network overlays as a popular approach to addressing container networking across hosts, the functions and types of container networking vary greatly and are worth better understanding as you consider the right type for your environment. Some types are container engine-agnostic, and others are locked into a specific vendor or engine. Some focus on simplicity, while others on breadth of functionality or on being IPv6-friendly and multicast-capable. Which one is right for you depends on your application needs, performance requirements, workload placement (private or public cloud), etc. Let’s review the more commonly available types of container networking.</p> <p>There are various ways in which container-to-container and container-to-host connectivity are provided. This article focuses primarily on a breakdown of current container networking types, including:</p> <ul> <li>None</li> <li>Bridge</li> <li>Overlay</li> <li>Underlay</li> </ul> <!--more--> <h2 id="antiquated-types-of-container-networking">Antiquated Types of Container Networking</h2> <p>The approach to networking has evolved as container technology advances. Two modes of networking have come and all but disappeared already.</p> <h3 id="links-and-ambassadors">Links and Ambassadors</h3> <p>Prior to having multi-host networking support and orchestration with Swarm, Docker began with single-host networking, facilitating network connectivity via links as a mechanism for allowing containers to discover each other via environment variables or /etc/hosts file entries, and transfer information between containers. The links capability was commonly combined with the <a href="https://docs.docker.com/engine/admin/ambassador_pattern_linking/">ambassador pattern</a> to facilitate linking containers across hosts and reduce the brittleness of hard-coded links. 
The biggest issue with this approach was that it was too static. Once a container was created and the environment variables defined, if the related containers or services moved to new IP addresses, then it was impossible to change the values of those variables.</p> <h3 id="container-mapped-networking">Container-Mapped Networking</h3> <p>In this mode of networking, one container reuses (maps to) the networking namespace of another container. This mode of networking may only be invoked when running a Docker container like this: --net=container:<em>some_container_name_or_id</em>.</p> <p>This run command flag tells Docker to put this container’s processes inside of the network stack that has already been created inside of another container. While sharing the same IP and MAC address and port numbers as the first container, the new container’s processes are still confined to its own filesystem, process list and resource limits. Processes in the two containers will be able to connect to each other over the loopback interface.</p> <p>This style of networking is useful when performing diagnostics on a running container that is missing the necessary diagnostic tools (e.g., curl or dig). A temporary container with the necessary diagnostic tools may be created and attached to the first container’s network.</p> <p>Container-mapped networking may be used to emulate pod-style networking, in which multiple containers share the same network namespace.
Benefits, such as sharing localhost communication and sharing the same IP address, are inherent to the notion that containers run in the same pod, which is the behavior of rkt containers.</p> <h2 id="current-types-of-container-networking">Current Types of Container Networking</h2> <p>Lines of delineation of networking revolve around IP-per-container versus IP-per-pod models and the requirement of network address translation (NAT) versus no translation needed.</p> <h3 id="none">None</h3> <p>None is straightforward in that the container receives a network stack, but lacks an external network interface. It does, however, receive a loopback interface. Both the rkt and Docker container projects provide similar behavior when none or null networking is used. This mode of container networking has a number of uses including testing containers, staging a container for a later network connection, and being assigned to containers with no need for external communication.</p> <h3 id="bridge">Bridge</h3> <p>A Linux bridge provides a host internal network in which containers on the same host may communicate, but the IP addresses assigned to each container are not accessible from outside the host. Bridge networking leverages iptables for NAT and port-mapping, which provide single-host networking. Bridge networking is the default Docker network type (i.e., docker0), where one end of a virtual network interface pair is connected between the bridge and the container.</p> <p>Here’s an example of the creation flow:</p> <ol> <li>A bridge is provisioned on the host.</li> <li>A namespace for each container is provisioned inside that bridge.</li> <li>Containers’ ethX are mapped to private bridge interfaces.</li> <li>iptables with NAT are used to map between each private container and the host’s public interface.</li> </ol> <p>NAT is used to provide communication beyond the host. 
While bridged networks solve port-conflict problems and provide network isolation to containers running on one host, there’s a performance cost related to using NAT.</p> <h3 id="host">Host</h3> <p>In this approach, a newly created container shares its network namespace with the host, providing higher performance — near metal speed — and eliminating the need for NAT; however, it does suffer from port conflicts. While the container has access to all of the host’s network interfaces, unless deployed in privileged mode, the container may not reconfigure the host’s network stack.</p> <p>Host networking is the default type used within Mesos. In other words, if the framework does not specify a network type, the container is associated with the host network rather than with a new network namespace. Sometimes referred to as native networking, host networking is conceptually simple, making it easier to understand, troubleshoot and use.</p> <h3 id="overlay">Overlay</h3> <p>Overlays use networking tunnels to deliver communication across hosts. This allows containers to behave as if they are on the same machine by tunneling network subnets from one host to the next; in essence, spanning one network across multiple hosts. Many tunneling technologies exist, such as virtual extensible local area network (VXLAN).</p> <p>VXLAN has been the tunneling technology of choice for Docker libnetwork, whose multi-host networking entered as a native capability in the 1.9 release. With the introduction of this capability, Docker chose to leverage HashiCorp’s Serf as the gossip protocol, selected for its efficiency in neighbor table exchange and convergence times.</p> <p>For those needing support for other tunneling technologies, Flannel may be the way to go. It supports udp, vxlan, host-gw, aws-vpc or gce. Each of the cloud provider tunnel types creates routes in the provider’s routing tables, just for your account or virtual private cloud (VPC).
The support for public clouds is particularly key for overlay drivers given that among others, overlays best address hybrid cloud use cases and provide scaling and redundancy without having to open public ports.</p> <p>Multi-host networking requires additional parameters when launching the Docker daemon, as well as a key-value store. Some overlays rely on a distributed key-value store. If you’re doing container orchestration, you’ll already have a distributed key-value store lying around.</p> <p>Overlays focus on the cross-host communication challenge. Containers on the same host that are connected to two different overlay networks are not able to communicate with each other via the local bridge — they are segmented from one another.</p> <h3 id="underlays">Underlays</h3> <p>Underlay network drivers expose host interfaces (i.e., the physical network interface at eth0) directly to containers or VMs running on the host. Two such underlay drivers are media access control virtual local area network (MACvlan) and internet protocol VLAN (IPvlan). The operation of and the behavior of MACvlan and IPvlan drivers are very familiar to network engineers. Both network drivers are conceptually simpler than bridge networking, remove the need for port-mapping and are more efficient. Moreover, IPvlan has an L3 mode that resonates well with many network engineers. Given the restrictions — or lack of capabilities — in most public clouds, underlays are particularly useful when you have on-premises workloads, security concerns, traffic priorities or compliance to deal with, making them ideal for brownfield use. Instead of needing one bridge per VLAN, underlay networking allows for one VLAN per subinterface.</p> <h4 id="macvlan">MACvlan</h4> <p>MACvlan allows the creation of multiple virtual network interfaces behind the host’s single physical interface. 
Each virtual interface has unique MAC and IP addresses assigned, with a restriction: the IP addresses need to be in the same broadcast domain as the physical interface. While many network engineers may be more familiar with the term subinterface (not to be confused with a secondary interface), the parlance used to describe MACvlan virtual interfaces is typically upper or lower interface. MACvlan networking is a way of eliminating the need for the Linux bridge, NAT and port-mapping, allowing you to connect directly to the physical interface.</p> <p>MACvlan uses a unique MAC address per container, which may cause an issue with network switches whose security policies prevent MAC spoofing by allowing only one MAC address per physical switch port.</p> <p>Container traffic is filtered from reaching the underlying host, which completely isolates the host from the containers it runs: the host cannot reach the containers, nor can the containers reach the host. This is useful for service providers or multitenant scenarios and provides more isolation than the bridge model.</p> <p>Promiscuous mode is required for MACvlan. MACvlan has four modes of operation, with only the bridge mode supported in Docker 1.12. MACvlan bridge mode and IPvlan L2 mode are just about functionally equivalent. Both modes allow broadcast and multicast traffic ingress. These underlay protocols were designed with on-premises use cases in mind.
Your public cloud mileage will vary as most do not support promiscuous mode on their VM interfaces.</p> <p>A word of caution: MACvlan bridge mode assigning a unique MAC address per container can be a blessing in terms of tracing network traffic and end-to-end visibility; however, a typical network interface card (NIC), e.g., Broadcom, has a ceiling of 512 unique MAC addresses, an upper limit that should be considered.</p> <h4 id="ipvlan">IPvlan</h4> <p>IPvlan is similar to MACvlan in that it creates new virtual network interfaces and assigns each a unique IP address. The difference is that the same MAC address is used for all pods and containers on a host — the MAC address of the physical interface. The need for this behavior is primarily driven by the fact that a commonly configured security posture of many switches is to shut down switch ports with traffic sourced from more than one MAC address.</p> <p>Best run on kernels 4.2 or newer, IPvlan may operate in either L2 or L3 mode. Like MACvlan, IPvlan L2 mode requires that IP addresses assigned to subinterfaces be in the same subnet as the physical interface. IPvlan L3 mode, however, requires that container networks and IP addresses be on a different subnet than the parent physical interface.</p> <p>802.1q configuration on Linux hosts, when created using ip link, is ephemeral, so most operators use network startup scripts to persist configuration. With container engines running underlay drivers and exposing APIs for programmatic configuration of VLANs, automation stands to improve.
For example, when new VLANs are created on a top-of-rack switch, these VLANs may be pushed into Linux hosts via the exposed container engine API.</p> <h4 id="macvlan-and-ipvlan">MACvlan and IPvlan</h4> <p>When choosing between these two underlay types, consider whether or not you need the network to be able to see the MAC address of the individual container.</p> <p>With respect to the address resolution protocol (ARP) and broadcast traffic, the L2 modes of both underlay drivers operate just as a server connected to a switch does, by flooding and learning using 802.1d packets. In IPvlan L3 mode, however, the networking stack is handled within the container. No multicast or broadcast traffic is allowed in. In this sense, IPvlan L3 mode operates as you would expect an L3 router to behave.</p> <p>Note that upstream L3 routers need to be made aware of networks created using IPvlan. Network advertisement and redistribution into the network still need to be done. Today, Docker is experimenting with Border Gateway Protocol (BGP). While static routes can be created on the top-of-rack switch, projects like <a href="http://osrg.github.io/gobgp/">goBGP</a> have sprouted up as a container ecosystem-friendly way to provide neighbor peering and route exchange functionality.</p> <h4 id="direct-routing">Direct Routing</h4> <p>For the same reasons that IPvlan L3 mode resonates with network engineers, they may choose to push past L2 challenges and focus on addressing network complexity in Layer 3 instead. This approach benefits from leveraging existing network infrastructure to manage the container networking.
The container networking solutions focused at L3 use routing protocols to provide connectivity, which arguably interoperates more easily with existing data center infrastructure, connecting containers, VMs and bare metal servers. Moreover, L3 networking scales and affords granular control, in terms of filtering and isolating network traffic.</p> <p><a href="https://www.projectcalico.org">Calico</a> is one such project and uses BGP to distribute a route for every workload (a /32 per workload), which allows it to seamlessly integrate with existing data center infrastructure without the need for overlays. Without the overhead of overlays or encapsulation, the result is networking with exceptional performance and scale. Routable IP addresses for containers expose the IP address to the rest of the world; hence, ports are inherently exposed to the outside world. Network engineers trained and accustomed to deploying, diagnosing and operating networks using routing protocols may find direct routing easier to digest. However, it’s worth noting that Calico doesn’t support overlapping IP addresses.</p> <h4 id="fan-networking">Fan Networking</h4> <p>Fan networking is a way of gaining access to many more IP addresses, expanding from one assigned IP address to 250 IP addresses. This is a performant way of getting more IPs without the need for overlay networks. This style of networking is particularly useful when running containers in a public cloud, where a single IP address is assigned to a host and spinning up additional networks is prohibitive, or running another load-balancer instance is costly.</p> <h4 id="point-to-point">Point-to-Point</h4> <p>Point-to-point is perhaps the simplest type of networking and the default networking used by CoreOS rkt. Using NAT, or IP Masquerade (IPMASQ), by default, it creates a virtual ethernet pair, placing one on the host and the other in the container pod.
Point-to-point networking leverages iptables to provide port-forwarding not only for inbound traffic to the pod but also for internal communication between other containers in the pod over the loopback interface.</p> <h2 id="capabilities">Capabilities</h2> <p>Outside of pure connectivity, support for other networking capabilities and network services needs to be considered. Many modes of container networking either leverage NAT and port-forwarding or intentionally avoid their use. IP address management (IPAM), multicast, broadcast, IPv6, load-balancing, service discovery, policy, quality of service, advanced filtering and performance are all additional considerations when selecting networking.</p> <p>The question is whether these capabilities are supported and how developers and operators are empowered by them. Even if a container networking capability is supported by your runtime, orchestrator or plugin of choice, it may not be supported by your infrastructure. While some tier 2 public cloud providers offer support for IPv6, the lack of support for IPv6 in top public clouds reinforces the need for other networking types, such as overlays and fan networking.</p> <p>In terms of IPAM, to promote ease of use, most container runtime engines default to host-local for assigning addresses to containers as they are connected to networks. Host-local IPAM involves defining a fixed block of IP addresses to be selected from. Dynamic Host Configuration Protocol (DHCP) is not universally supported across container networking projects.
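</p>

<p>Host-local allocation from a fixed block can be sketched with Python’s ipaddress module. The class below is a hypothetical illustration, not any project’s actual allocator:</p>

```python
import ipaddress

class HostLocalIPAM:
    """Hand out addresses from a fixed, preconfigured block, as host-local
    IPAM does (a hypothetical sketch, not any project's actual allocator)."""
    def __init__(self, cidr: str):
        network = ipaddress.ip_network(cidr)
        self._pool = network.hosts()  # skips the network and broadcast addresses
        self.allocated: list = []

    def allocate(self) -> str:
        address = str(next(self._pool))  # raises StopIteration when exhausted
        self.allocated.append(address)
        return address

ipam = HostLocalIPAM("10.22.0.0/29")
print(ipam.allocate(), ipam.allocate())  # 10.22.0.1 10.22.0.2
```

<p>A real allocator would also persist its reservations to disk so that allocations survive daemon restarts.</p>

<p>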
Container Network Model (CNM) and Container Network Interface (CNI) both have IPAM built-in and plugin frameworks for integration with IPAM systems — a key capability to adoption in many existing environments.</p> <p><a href="https://gingergeek.com/2016/09/container-networking-a-breakdown-explanation-and-analysis/">Container Networking: A Breakdown, Explanation and Analysis</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on September 14, 2016.</p> <![CDATA[Cloud Native Ambassadors and Docker Captains navigate users through the container ecosystem]]> https://gingergeek.com/2016/09/cloud-native-ambassadors-and-docker-captains-navigate-users-through-the-container-ecosystem 2016-09-02T08:20:47-05:00 2016-09-02T08:20:47-05:00 Lee Calcote https://gingergeek.com [email protected] <p><a href="https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/US_Navy_100804-N-5483N-026_Capt._Karl_Thomas_commanding_officer_of_the_amphibious_command_ship_USS_Mount_Whitney_LCC-JCC_20_greets_U.S._ambassador_to_Portugal_Allan_J._Katz.jpg"><img class="size-medium wp-image-1507 alignleft" src="https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/US_Navy_100804-N-5483N-026_Capt._Karl_Thomas_commanding_officer_of_the_amphibious_command_ship_USS_Mount_Whitney_LCC-JCC_20_greets_U.S._ambassador_to_Portugal_Allan_J._Katz.jpg?resize=300%2C214" alt="100804-N-5483N-026 LIBSON, Portugal (Aug. 4, 2010) Capt. Karl Thomas, commanding officer of the amphibious command ship USS Mount Whitney (LCC/JCC 20) greets U.S. ambassador to Portugal Allan J. Katz before a reception highlighting the partnership between Portugal and the United States. (U.S. 
Navy photo by Mass Communication Specialist 2nd Class Sylvia Nealy/Released)" data-id="1507" srcset="https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/US_Navy_100804-N-5483N-026_Capt._Karl_Thomas_commanding_officer_of_the_amphibious_command_ship_USS_Mount_Whitney_LCC-JCC_20_greets_U.S._ambassador_to_Portugal_Allan_J._Katz.jpg?resize=300%2C214 300w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/US_Navy_100804-N-5483N-026_Capt._Karl_Thomas_commanding_officer_of_the_amphibious_command_ship_USS_Mount_Whitney_LCC-JCC_20_greets_U.S._ambassador_to_Portugal_Allan_J._Katz.jpg?resize=768%2C549 768w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/US_Navy_100804-N-5483N-026_Capt._Karl_Thomas_commanding_officer_of_the_amphibious_command_ship_USS_Mount_Whitney_LCC-JCC_20_greets_U.S._ambassador_to_Portugal_Allan_J._Katz.jpg?resize=1024%2C731 1024w, https://i0.wp.com/gingergeek.com/wp-content/uploads/2016/09/US_Navy_100804-N-5483N-026_Capt._Karl_Thomas_commanding_officer_of_the_amphibious_command_ship_USS_Mount_Whitney_LCC-JCC_20_greets_U.S._ambassador_to_Portugal_Allan_J._Katz.jpg?w=2000 2000w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p><em>Originally posted on <a href="http://www.networkworld.com/article/3114747/open-source-tools/cloud-native-ambassadors-and-docker-captains-navigate-users-through-the-container-ecosystem.html" target="_blank">Network World</a> on Sept. 6th, 2016.</em></p> <p>Navigating the container ecosystem can be confusing. Deciding where to dip your toes is challenging for those stepping into container and microservices waters. Even those who have already ventured knee-deep still wade through many questions as they progress in their cloud native journey. 
To help guide them through the ecosystem, the <a href="http://cncf.io/">Cloud Native Computing Foundation</a> (CNCF) recently launched a Cloud Native Ambassadors <a href="https://cncf.io/news/blogs/2016/08/ambassador-program-meetup-program-community-store-available-growing-cloud-native">program</a> at its inaugural <a href="http://events.linuxfoundation.org/events/cloudnativeday">CloudNativeDay</a> in Toronto.</p> <p>Recognized for their expertise, <a href="https://cncf.io/about/ambassadors">Cloud Native Ambassadors</a> are individuals who belong to a CNCF member organization and are selected based on their passion for cloud native technology and willingness to help others learn. Most ambassadors also organize or are involved in community meetups oriented toward technologies and projects governed by the CNCF. Forty-one meetups worldwide have joined the program to date (<em>disclaimer: I’m a CNCF Ambassador and an organizer of the <a href="https://www.meetup.com/Microservices-and-Containers-Austin/">Microservices and Containers Austin</a> meetup in Austin, TX.</em>).<!--more--></p> <p>Under a similar <a href="https://blog.docker.com/2016/04/docker-captains/">program</a>, Docker Inc. provides central community outreach and resources for 228 meetups around the world. The Docker community does not have Ambassadors; rather, it has Captains. <a href="https://www.docker.com/community/docker-captains">Docker Captains</a> are community leaders who demonstrate a commitment to sharing their knowledge of Docker open source or commercial offerings with others (<em>disclaimer: I’m an organizer of the <a href="http://www.meetup.com/Docker-Austin/">Docker Austin</a> meetup in Austin, TX.</em>).</p> <p>As you would expect, the two programs are alike in many ways. Both are purposeful in how they approach tech communities – each with an emphasis on engineers focused on containers, microservices, cloud native, distributed systems, mode 2, continuous delivery, etc.
Both tend to be open source-oriented and developer- and operator-friendly. Both provide project swag (t-shirts, stickers, annotated mugs, etc.) to their respective community organizers (Captains and Ambassadors). In that vein, the <a href="https://store.cncf.io/">CNCF Store</a> was also launched last week, and each Ambassador and their user groups were seeded with an initial dose of gear.</p> <p>Cloud Native Ambassadors and Docker Captains advocate and educate on behalf of their respective and highly related technologies. Docker Captains promulgate the many varied uses of <a href="https://www.docker.com/technologies/overview#/docker_projects">Docker’s projects</a>: Engine, Machine, Compose, Swarm, Registry, Kitematic and all of the plumbing projects around these. Likewise, Cloud Native Ambassadors disseminate and advocate use of the two current projects managed by the CNCF: <a href="http://kubernetes.io/">Kubernetes</a> (a system for automating deployment, scaling, and management of containerized applications) and <a href="https://prometheus.io/">Prometheus</a> (a systems monitoring and alerting tool).
There are additional proposed projects in the pipeline to be considered first for incubation and then for adoption, including <a href="https://github.com/nats-io/nats">NATS</a> (pubsub), <a href="http://www.fluentd.org/">Fluentd</a> (logging), <a href="https://github.com/twitter/heron">Heron</a> (real-time stream processing), <a href="https://www.minio.io/">Minio</a> (storage), <a href="http://opentracing.io/">OpenTracing</a> (distributed tracing), <a href="https://github.com/miekg/coredns">CoreDNS</a> (distributed systems-friendly DNS), <a href="https://github.com/cockroachdb/cockroach">CockroachDB</a> (distributed SQL DB) and more.</p> <p>Having spent a couple of years participating in and organizing technology meetups, I liken them to an underground conference circuit, where the quality of speakers and content varies much as it does at larger industry conferences. In general, technology meetups are surprisingly refreshing in their candidness, content, and convenience – three C’s that deliver value to me as an organizer and to regular attendees.</p> <p>While individual tech meetups don’t typically fall into the trap of being a low-cost marketing pulpit for large companies and small startups to get the message out about their commercial offerings, this occasionally happens. Most meetups screen speakers and their talks. Some have more luxury than others in being able to turn away blatant sales pitches; those that do tend to be located in cities with technology hubs. Regardless, meetups provide an alternative forum for direct feedback, cross-technology pollination and practitioner-to-practitioner interaction.</p> <p>One of the goals of the Cloud Native Ambassador and Docker Captain programs (whether via meetups or other forums) is to woo the developer. Developer advocacy in this sense goes by many names (e.g. evangelism, technical marketing, community organizing, etc.)
and is an emergent, purposeful practice established within many vendor organizations. Developer advocacy becomes critical once you understand that as developers write new software, they are defining the infrastructure of tomorrow. It follows, then, that if, as an industry, we’ve come to identify the application as king (and infrastructure less important in the face of software-defined everything), so might we identify the developer as queen. Even queens don’t define infrastructure in a vacuum or without collaboration from their Ops/Sec/IT partners, however.</p> <p><a href="https://gingergeek.com/2016/09/cloud-native-ambassadors-and-docker-captains-navigate-users-through-the-container-ecosystem/">Cloud Native Ambassadors and Docker Captains navigate users through the container ecosystem</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on September 02, 2016.</p> <![CDATA[Contrasting Swarmkit, Kubernetes, Mesos+Marathon]]> https://gingergeek.com/2016/09/contrasting-swarmkit-kubernetes-mesosmarathon 2016-09-01T08:20:05-05:00 2016-09-01T08:20:05-05:00 Lee Calcote https://gingergeek.com [email protected] <p style="text-align: center;"> </p> <p style="text-align: center;"> <a href="https://lcccna2016.sched.org/speaker/leecalcote?iframe=yes&amp;w=&amp;sidebar=yes&amp;bg=no">Presented</a> at <a href="http://events.linuxfoundation.org/events/linuxcon-north-america">LinuxCon+ContainerCon</a>, August 2016. Includes Swarm 1.12, Kubernetes, Mesos+Marathon. </p> <p style="text-align: center;"> (<a href="http://calcotestudios.com/ccka">slides</a>) </p> <p><a href="https://gingergeek.com/2016/09/contrasting-swarmkit-kubernetes-mesosmarathon/">Contrasting Swarmkit, Kubernetes, Mesos+Marathon</a> was originally published by Lee Calcote at <a href="https://gingergeek.com">Lee Calcote</a> on September 01, 2016.</p>