<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Systems Approach]]></title><description><![CDATA[Larry Peterson and Bruce Davie, authors of "Computer Networks: A Systems Approach" explain the Internet – its technology, architecture, and evolution]]></description><link>https://systemsapproach.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!k7GR!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff132855a-7971-49ac-af04-64862f7e8298_256x256.png</url><title>Systems Approach</title><link>https://systemsapproach.substack.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 13 Apr 2026 02:53:34 GMT</lastBuildDate><atom:link href="https://systemsapproach.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Systems Approach, LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[systemsapproach@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[systemsapproach@substack.com]]></itunes:email><itunes:name><![CDATA[Bruce Davie]]></itunes:name></itunes:owner><itunes:author><![CDATA[Bruce Davie]]></itunes:author><googleplay:owner><![CDATA[systemsapproach@substack.com]]></googleplay:owner><googleplay:email><![CDATA[systemsapproach@substack.com]]></googleplay:email><googleplay:author><![CDATA[Bruce Davie]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Systems Approach Has a New Home]]></title><description><![CDATA[Our Systems Approach newsletter is leaving Substack and you should already have the latest issue in your mailbox, unless it is sitting in your 
Spam folder.]]></description><link>https://systemsapproach.substack.com/p/systems-approach-has-a-new-home</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/systems-approach-has-a-new-home</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 08 Jan 2024 07:01:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/33923b78-49c6-4546-9817-52ed0ceb7bc5_1536x2049.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As people who have written a lot about the risks of centralized platforms, we always had a degree of uneasiness about using Substack to host our newsletter. Knowing that we could take our content and subscribers elsewhere was a feature we liked, and now we are availing ourselves of that option. You will not receive any more newsletters from us at Substack, and our latest newsletter (hosted on <a href="http://systemsapproach.org/newsletter/">WordPress</a>) should now be in your inbox. However, there are some challenges in sending thousands of emails out to a mailing list in our current world of aggressive spam-filtering, so there is a solid chance that your newsletter is sitting in Spam&#8211;please take a look if you don&#8217;t already have it. 
</p><p>If you&#8217;d like to understand why we moved, aside from our belief in the benefits of decentralized architectures, you won&#8217;t find a better summary of the problems at Substack than this post by First Amendment expert Ken White:</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:139893879,&quot;url&quot;:&quot;https://popehat.substack.com/p/substack-has-a-nazi-opportunity&quot;,&quot;publication_id&quot;:86716,&quot;publication_name&quot;:&quot;The Popehat Report&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6d0f415-4618-48a1-883f-f5c2087af66c_240x240.png&quot;,&quot;title&quot;:&quot;Substack Has A Nazi Opportunity &quot;,&quot;truncated_body_text&quot;:&quot;Substack has Nazis, because of course it does. Substack is on the internet, Nazis are on the internet, and if Substack doesn&#8217;t want Nazis it has to take affirmative steps to get rid of them. Flies don&#8217;t stop coming into the house because you want them to; they stop because you get off the couch and close the screen door. Any social media or blogging&#8230;&quot;,&quot;date&quot;:&quot;2023-12-21T20:25:33.018Z&quot;,&quot;like_count&quot;:1355,&quot;comment_count&quot;:513,&quot;bylines&quot;:[{&quot;id&quot;:14683966,&quot;name&quot;:&quot;Ken White&quot;,&quot;handle&quot;:&quot;popehat&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/3f6b6a1a-a4cd-45c0-9415-0356fadb88d6_315x315.jpeg&quot;,&quot;bio&quot;:&quot;Ken White is a criminal defense attorney and civil litigator in Los Angeles.  
He writes about criminal justice and free speech issues.&quot;,&quot;profile_set_up_at&quot;:&quot;2022-01-11T17:37:22.788Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:7138,&quot;user_id&quot;:14683966,&quot;publication_id&quot;:86716,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:86716,&quot;name&quot;:&quot;The Popehat Report&quot;,&quot;subdomain&quot;:&quot;popehat&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;A newsletter about law, liberty, and leisure. &quot;,&quot;logo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/d6d0f415-4618-48a1-883f-f5c2087af66c_240x240.png&quot;,&quot;author_id&quot;:14683966,&quot;theme_var_background_pop&quot;:&quot;#0068ef&quot;,&quot;created_at&quot;:&quot;2020-08-24T17:59:24.742Z&quot;,&quot;rss_website_url&quot;:null,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Ken White&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false}},{&quot;id&quot;:881295,&quot;user_id&quot;:14683966,&quot;publication_id&quot;:906465,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:906465,&quot;name&quot;:&quot;Serious Trouble&quot;,&quot;subdomain&quot;:&quot;serioustrouble&quot;,&quot;custom_domain&quot;:&quot;www.serioustrouble.show&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;An irreverent podcast about the 
law&quot;,&quot;logo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/631a99bb-b508-45b3-9f43-0653abfb11c2_256x256.png&quot;,&quot;author_id&quot;:96663804,&quot;theme_var_background_pop&quot;:&quot;#FF81CD&quot;,&quot;created_at&quot;:&quot;2022-05-26T18:53:26.424Z&quot;,&quot;rss_website_url&quot;:null,&quot;email_from_name&quot;:&quot;Josh and Ken from Serious Trouble&quot;,&quot;copyright&quot;:&quot;Very Serious Media&quot;,&quot;founding_plan_name&quot;:&quot;Founding Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:10000}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://popehat.substack.com/p/substack-has-a-nazi-opportunity?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!kboz!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6d0f415-4618-48a1-883f-f5c2087af66c_240x240.png"><span class="embedded-post-publication-name">The Popehat Report</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Substack Has A Nazi Opportunity </div></div><div class="embedded-post-body">Substack has Nazis, because of course it does. Substack is on the internet, Nazis are on the internet, and if Substack doesn&#8217;t want Nazis it has to take affirmative steps to get rid of them. 
Flies don&#8217;t stop coming into the house because you want them to; they stop because you get off the couch and close the screen door. Any social media or blogging&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">2 years ago &#183; 1355 likes &#183; 513 comments &#183; Ken White</div></a></div><p>Hopefully this all goes smoothly and your subscription will be uninterrupted, but you can always cancel here and <a href="http://systemsapproach.org/newsletter/">sign up again at our new site</a> if necessary.</p><p>As a pleasant side effect, our main <a href="http://systemsapproach.org">website</a> and newsletter now live in the same place, so we actually dropped another centralized hosting provider out of our system as part of this process. And we know that by choosing WordPress we are not in any way locked into any one hosting provider. Now that the basics are under control, we will start playing around with the <a href="https://fedi.tips/wordpress-turning-your-blog-into-a-fediverse-server/">Fediverse integration</a> that Automattic is developing.</p>]]></content:encoded></item><item><title><![CDATA[Holiday Reading]]></title><description><![CDATA[A collection of posts and books to tide us over until the New Year]]></description><link>https://systemsapproach.substack.com/p/holiday-reading</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/holiday-reading</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Sun, 24 Dec 2023 04:41:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a02b286c-5471-4fa2-9911-f79297acf41f_640x481.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As we noted in our last issue, we plan to have a bit of down time over the holidays, so we&#8217;re using this opportunity to suggest some reading material to keep you going over the break. 
Since many of our readers joined us fairly recently, we are resurfacing some of our old posts, and we&#8217;re also recommending some of our favorite reading from other writers.</p><div><hr></div><p>As people who write a lot (more than average for computer scientists, I would guess) we also spend a lot of time thinking&#8211;and reading&#8211;about language. One of the <a href="https://harpers.org/wp-content/uploads/HarpersMagazine-2001-04-0070913.pdf">best pieces I&#8217;ve ever read on language</a> is from David Foster Wallace, and that inspired my take on the &#8220;<a href="https://open.substack.com/pub/systemsapproach/p/light-reading-for-the-new-year?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">On Premises vs On Premise</a>&#8221; debate. As in much of my career, I managed to irritate plenty of people by taking a position between the two extremes, but mostly I wanted to inspire people to read the essay by Foster Wallace. He has a bit of a reputation (undeserved in my view) for being hard to read&#8211;perhaps due in part to his extensive reliance on footnotes&#8211;but if you want a gentle introduction to his work, I recommend starting with the essay noted above or &#8220;<em><a href="https://en.wikipedia.org/wiki/A_Supposedly_Fun_Thing_I%27ll_Never_Do_Again">A Supposedly Fun Thing I'll Never Do Again</a></em>&#8221;.&nbsp;</p><p>Larry has also written about his literary influences. 
These include his <a href="https://open.substack.com/pub/systemsapproach/p/the-tip-of-the-iceberg?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcome=true">Hemingway reference</a> a few weeks back, his <a href="https://systemsapproach.substack.com/p/stakeholders-and-2-pine">glowing review of Tracy Kidder&#8217;s </a><em><a href="https://systemsapproach.substack.com/p/stakeholders-and-2-pine">House</a></em>, and his discussion of <a href="https://systemsapproach.substack.com/p/omission-by-design">John McPhee&#8217;s &#8220;</a><em><a href="https://systemsapproach.substack.com/p/omission-by-design">Draft No. 4: Reflections on The Writing Process</a></em><a href="https://systemsapproach.substack.com/p/omission-by-design">&#8221;</a>.&nbsp;</p><p>Given our interest in language, it should come as no surprise that we are intrigued by large language models. I first started studying AI in 1984&#8211;yes, really&#8211;and I recently happened to crack open the book by <a href="https://en.wikipedia.org/wiki/Patrick_Winston">Patrick Winston</a> from that year, titled simply &#8220;Artificial Intelligence&#8221;. The preface is amazing in that it would look perfectly timely in a book published today:&nbsp;</p><blockquote><p>The field of Artificial Intelligence has changed enormously since the first edition of this book was published. 
Subjects in Artificial Intelligence are de rigueur for undergraduate computer-science majors, and stories on Artificial Intelligence are regularly featured in most of the reputable news magazines.&nbsp;</p></blockquote><p>Ah yes, our brilliant AI future was just around the corner in 1984, just as it is today, and the media was all over it.</p><p>I think my piece on &#8220;<a href="https://open.substack.com/pub/systemsapproach/p/looking-inside-large-language-models?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">looking inside LLMs</a>&#8221; is the one I spent the most time researching this year, and that&#8217;s because the picture is complex. While the view of LLMs as &#8220;<a href="https://dl.acm.org/doi/10.1145/3442188.3445922">stochastic parrots</a>&#8221; (or &#8220;spicy autocomplete&#8221;) is not far from my own, getting a closer look at how LLMs work internally gave me a better understanding of why there is so much confusion about what is possible, as well as optimism from many who work in the field. It is just unfortunate that there is so much hype and shallow thinking getting in the way of properly understanding the potential and current risks of these systems. We plan to produce a book about machine learning (as applied in the context of networking) next year, and it will be as hype-free as we can make it.</p><p>A lot of what we have written over the last two years has been about the increase in centralization of various aspects of the Internet. My own experience with SDN was one of leveraging the capabilities of modern distributed systems to create logically centralized abstractions as a way to overcome the limitations of fully decentralized network architectures. 
I talked about this with <a href="https://systemsapproach.substack.com/p/a-conversation-with-sdn-pioneer-martin-casado">Nicira founder Martin Casado</a>, and wrote about it in <a href="https://open.substack.com/pub/systemsapproach/p/sdn-and-the-alignment-of-planets?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">this post</a> and in our <a href="https://www.systemsapproach.org/books.html#sdnbook">SDN book</a>. So I&#8217;m not entirely anti-centralization, but the tradeoffs are subtle.&nbsp;</p><p>It&#8217;s now more than a year since we moved to Mastodon, and that&#8217;s very much a story about the <a href="https://open.substack.com/pub/systemsapproach/p/decentralization-strikes-back?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">renaissance of decentralization</a> in the social media sphere. Hardly a day goes by that we don&#8217;t feel good about the decision to leave the bird site. And in a reminder that there is more to decentralized social media than Mastodon, here is my talk on 60 Years of Networking (with a strong focus on decentralization) on <a href="https://peertube.roundpond.net/w/1hSTyT2J4cKrLaoU3sJeqT">peertube</a>.&nbsp;</p><p>In terms of the articles from our early days that have seen the greatest interest, I would say that every time we talk about <a href="https://systemsapproach.substack.com/p/its-tcp-vs-rpc-all-over-again">RPC vs TCP</a> we get lots of traffic. So too with anything questioning the <a href="https://open.substack.com/pub/systemsapproach/p/was-mpls-necessary?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">value of MPLS</a>. The reaction to the last newsletter on Outrageous Opinions makes me think I might need to write something about the rise and fall of ATM.&nbsp;</p><p>What else should you read over the holidays? I enjoyed this article on <a href="https://www.nybooks.com/articles/2023/06/22/life-is-short-indexes-are-necessary-dennis-duncan/">Indexes</a> from the New York Review of Books. 
For a literary magazine, the NYRB also does a nice job of covering issues of the Internet and society, as in this piece on the <a href="https://www.nybooks.com/articles/2023/03/09/private-eyes-the-fight-for-privacy-citron/">Internet and privacy</a>. (The <a href="https://www.nybooks.com/">NYRB</a>, by the way, is not related to other publications with New York in their title, and was founded in 1963 during a newspaper strike.)</p><p>We&#8217;re big fans of Cory Doctorow, whose rate of publication puts us to shame. His <a href="https://pluralistic.net/2023/12/19/bubblenomics/#pop">recent post on the AI bubble</a> is a good example of his work, and he may well have given us 2023&#8217;s word of the year: &#8220;enshittification&#8221;. A good place to start on this topic is <a href="https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys">here</a>.&nbsp;</p><p>My non-fiction reading for the holidays includes <em><a href="https://www.penguin.com.au/books/chernobyl-9780141988351">Chernobyl: History of a Tragedy</a></em>, which I was inspired to pick up after watching the HBO series on Chernobyl. I thought I had a basic understanding of how nuclear reactors work, but I had massively underestimated just how complex they are. This book helps expose that complexity and the fatal combination of technical and human failures that followed.&nbsp;</p><p>Finally, I made a concerted effort to read more novels this year, and one book deserves mention here: <a href="https://www.gutenberg.org/ebooks/2276">The Private Memoirs and Confessions of a Justified Sinner</a>. I think I can safely say that it&#8217;s the funniest 19th-century novel I&#8217;ve read, and surprisingly timely. Also Scottish, which is how I came to hear about it. 
Thanks to <a href="https://www.gutenberg.org/">Project Gutenberg</a>&#8211;a project aligned with our goals as open source publishers&#8211;for making it available.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our books and newsletters accessible to all. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div><hr></div><p>I had an exchange with Paul Francis about his work on distributed search after I quoted him in <a href="https://open.substack.com/pub/systemsapproach/p/outrageous-opinions?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcome=true">last week&#8217;s newsletter</a>. My enthusiasm for his work was perhaps a bit misplaced, if I take Paul at face value:&nbsp;</p><blockquote><p>Regarding Ingrid, distributed search could never work. First it would never be fast enough (or as fast as centralized alternatives). 
Even if you could solve this, it'd be hard as hell to deal with spam in a distributed system...Exploring the distributed alternative is the prerogative of the researcher, but it was a stupid idea.&nbsp;</p></blockquote><p>Well, it was a good talk in any case.&nbsp;</p><p>Thanks for supporting us this year and we will return with fresh content in 2024.</p><div><hr></div><p>Got an idea for something that Systems Approach should cover in either a book or a newsletter? <a href="mailto:discuss@systemsapproach.org">Send us a note</a>.&nbsp;</p><p>Follow us <a href="https://discuss.systems/@SystemsAppr">on Mastodon</a>. </p>]]></content:encoded></item><item><title><![CDATA[Outrageous Opinions]]></title><description><![CDATA[Reviewing some bold predictions from another era]]></description><link>https://systemsapproach.substack.com/p/outrageous-opinions</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/outrageous-opinions</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 11 Dec 2023 07:11:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6894fe09-dd1d-4348-b1fc-29955e441091_7954x3956.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The end of the year is often a time for people in tech to make predictions, but rather than making our own, this week we&#8217;re looking back on some of the bold predictions of the past&#8211;specifically the inaugural Outrageous Opinion session held at SIGCOMM in 1995. Coincidentally the event took place just a few months before we finished writing the first edition of &#8220;Computer Networks: A Systems Approach&#8221;. Some of the best talks that we recall from that era still resonate today.</p><div><hr></div><p>One of my most lasting contributions to the SIGCOMM community would have to be my 2003 Outrageous Opinion talk &#8220;<a href="https://www.slideshare.net/drbruced/mpls-outrage">MPLS Considered Helpful</a>&#8221;. 
Twenty years later I still run into people who remember the punchline &#8220;I&#8217;m not bitter&#8221;. I suspect this will be remembered much longer than my four-year tenure as SIGCOMM Chair.&nbsp;</p><p>A less well-known fact is that I chaired the first SIGCOMM outrageous opinion session at <a href="https://conferences.sigcomm.org/sigcomm/1995/techprog.html">SIGCOMM 1995</a>. (Another fun fact from that conference: Marc Andreessen was scheduled to give a tutorial at the same time as me, but he bowed out at the last minute, citing business pressures&#8211;the Netscape IPO had taken place two weeks earlier. I picked up a bunch of disappointed attendees at my tutorial on <a href="https://systemsapproach.substack.com/p/the-accidental-smartnic">SmartNICs</a> as a result.)</p><p>While outrageous opinion sessions subsequently took a turn towards stand-up comedy, in that first year there were a bunch of quite serious (and nevertheless funny) talks that have stuck with me. I often refer back to David Clark&#8217;s talk &#8220;We Should All Become Economists&#8221;. Already well established as &#8220;the architect of the Internet&#8221;, David had started to branch out into areas of economics and policy, with notable works including &#8220;<a href="https://nap.nationalacademies.org/catalog/4755/realizing-the-information-future-the-internet-and-beyond">Realizing the Information Future</a>&#8221;, which made the case to a broad audience that the Internet was the appropriate architecture for the &#8220;Information Superhighway&#8221;. This might seem an obvious truth today, but at the time there was plenty of support for alternative architectures based around evolved versions of either the telephone network (&#8220;Broadband ISDN&#8221;) or the Cable TV network (&#8220;imagine 500 channels!&#8221;). 
His later paper with Marjorie Blumenthal &#8220;<a href="https://groups.csail.mit.edu/ana/Publications/PubPDFs/Rethinking%20the%20design%20of%20the%20internet2001.pdf">Rethinking the Design of the Internet: End-to-End Arguments vs. the Brave New World</a>&#8221; is a wonderful examination of the tension between the idealized architecture of the Internet and the commercial pressures that came to bear on it once it became a mainstream communications platform. While there are few things more annoying than tech people discussing economics from a position of ignorance, David was making the case that we should do more to educate ourselves about economics, and I wish more of us had taken his advice.</p><p>Equally memorable was a talk by Paul Francis, another Internet pioneer whose influence spans topics as diverse as <a href="https://dl.acm.org/doi/10.1145/166237.166246">scalable multicast</a>, IPv6, and <a href="https://dl.acm.org/doi/10.1145/383059.383072">Distributed Hash Tables</a>. At that point we were in the early days of Internet search&#8211;Google was still three years away from being founded, and <a href="https://www.websearchworkshop.com.au/altavista-history.php">Alta Vista</a> was an internal project to index the entire Web at DEC (released to the world later that year). Paul approached me with two topics he was interested in presenting: his latest research on scalable Internet search, which he had named <a href="https://www.cs.cornell.edu/people/francis/Ingrid_%20A%20Self-Configuring%20Information%20Navigation%20Infrastructure.pdf">Ingrid</a>, and Network Address Translation (NAT). I was rather more favorable towards the latter, because it was a very hot topic at the time, Paul was credited with its invention, and he had an amusing way to present his argument. Creative person that he is, Paul found a way to weave the two topics together in one talk. 
The gist of the NAT part of his talk was that the prevailing IETF view on NAT at the time&#8211;don&#8217;t do NAT&#8211;should be viewed as analogous to abstinence-only sex education. No matter how much the people giving the advice might believe in the correctness of their position, they were not going to have much impact on the outcome. In hindsight, he was obviously correct: most of the world&#8217;s Internet users today sit behind one <a href="https://www.aussiebroadband.com.au/blog/what-is-cgnat/">or more</a> NAT devices, but his position was much more controversial at the time. In the end, living with NAT became more important than preventing it, and today there is a whole body of work on <a href="https://datatracker.ietf.org/doc/html/rfc8445">NAT traversal</a> that allows us to deal with it fairly painlessly.&nbsp;</p><p>I have had reason to refer back to the other part of Paul&#8217;s talk more often, because of the way it resonates today as we deal with the centralization of the Internet and the recent efforts to <a href="https://systemsapproach.substack.com/p/decentralization-strikes-back">re-decentralize</a> it. While early search engines were centralized&#8211;for example, Alta Vista was apparently developed to show off the capabilities of a powerful DEC database server&#8211;Paul argued that a decentralized approach to search was going to be necessary as the Web took off. (Recall this was the same year as the Netscape IPO.) Again, he was right, but Google would eventually deliver a hugely successful distributed system to index and search the Web, and put it behind a logically centralized front end. So while the technical solution was indeed decentralized, the user experience is centralized: just go to Google.com and ask your question. 
And of course that is where the Web mostly sits today, 25 years later: distributed systems do the work under the covers but the average user interacts with a handful of centralized entities, such as the social media titans and streaming services. I&#8217;m cautiously optimistic that we are seeing a reversal of this trend, especially with <a href="https://joinmastodon.org/">federated social media</a>, but I do look back nostalgically on the era when it seemed possible that search itself might be decentralized. Since it is now increasingly apparent that <a href="https://mastodon.social/@danluu/111506788692079608">search is getting worse</a>, we can still hope. Also, the idea that decentralized technologies alone do not protect us from the perils of centralized control of technology (e.g., the ownership of search or social media by a small number of companies) is something that I developed further in my &#8220;<a href="https://systemsapproach.substack.com/p/60-years-of-networking">60 Years of Networking</a>&#8221; talk earlier this year.</p><p>Finally, I can remember that there were multiple talks about the relative merits of ATM and IP. This seems hard to fathom today, where the Internet reaches something like half the world&#8217;s population and ATM is little more than a historical footnote in the large collection of layer 2 technologies that IP has accommodated. As my comments about &#8220;Realizing the Information Future&#8221; above suggest, it was far from clear in 1995 that this is how things would play out. I was placing an each-way bet at this point, having worked on ATM at Bellcore (owned by telcos) but believing by 1995 that ATM would be adopted as a substrate for part of the Internet, not a standalone networking technology that would replace IP. Indeed, it was my focus on IP-over-ATM that provided me with the opportunity to join Cisco later that year as they increased their investment in ATM switching. 
That would ultimately land me in the team that <a href="https://open.substack.com/pub/systemsapproach/p/was-mpls-necessary?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">developed MPLS</a>&#8211;another technology that, like NAT, faced a fair amount of opposition at the IETF, but is widely deployed today.&nbsp;&nbsp;</p><p>The fact that I remember so much of that one evening in 1995 (whereas I remember almost nothing of the tutorial I gave) has a lot to do with how provocative, forward-looking, and accurate many of the opinions were. I think some of the talks in later years may have been funnier, but the sheer amount of predictive accuracy of those I remember from 1995 is striking. (No doubt I&#8217;ve forgotten some talks that were totally off base.) David Clark&#8217;s comments about economists continue to resonate as we argue about network neutrality and hear <a href="https://ccianet.org/news/2023/02/network-fees-eu-commission-launches-consultation-on-telco-demands/">demands from telcos</a> that they be compensated not only by their customers but by the content providers. The increased centralization of the Internet is also partly an economic (winner-takes-all) phenomenon. In the current debates about the future of the <a href="https://sigcomm.quest/">SIGCOMM conference</a>, and more broadly as we look to shape the Internet of the future, I hope we keep a place for outrageous opinions.&nbsp;</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our books and newsletters accessible to all. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div><hr></div><p>While the next issue of this newsletter is due to land on Christmas Day, we&#8217;ll be taking some time off for the holidays (some of us have <a href="https://runnerstribe.com/features/in-isolation-at-falls-creek-a-column-by-len-johnson/">running camps</a> to get to) so expect a short newsletter in roughly two weeks time and we&#8217;ll return to our normal schedule in January. We found <a href="https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/?truid=&amp;utm_source=the_algorithm&amp;utm_medium=email&amp;utm_campaign=the_algorithm.unpaid.engagement&amp;utm_content=12-04-2023&amp;mc_cid=2cc05b263b&amp;mc_eid=2c62436d27">this article</a> on the energy cost of generative AI worth a read&#8211;the headline isn&#8217;t as interesting as the observation that generative models are wasteful compared to more specialized models in many cases. The <a href="https://www.theregister.com/2023/12/08/twitch_quits_south_korea/">departure of Twitch from South Korea</a> is the latest twist in arguments about who should pay for network traffic. And Bruce Schneier&#8217;s article on <a href="https://slate.com/technology/2023/12/ai-mass-spying-internet-surveillance.html">AI and privacy</a> makes a well-reasoned case for regulation of how AI is used for surveillance.&nbsp;To support decentralized social media you should follow us <a href="https://discuss.systems/@SystemsAppr">on Mastodon</a>. 
</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.systemsapproach.org/books.html&quot;,&quot;text&quot;:&quot;Books make great gifts!&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.systemsapproach.org/books.html"><span>Books make great gifts!</span></a></p>]]></content:encoded></item><item><title><![CDATA[The Tip of the Iceberg]]></title><description><![CDATA[A Personal Reflection on the Writing Process.&#160; In many ways, picking a technical book topic is no different than a researcher picking a problem to work on, a programmer deciding to build a new system or app, or a novelist deciding their next project.]]></description><link>https://systemsapproach.substack.com/p/the-tip-of-the-iceberg</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/the-tip-of-the-iceberg</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 27 Nov 2023 07:33:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/88cb86f2-698f-485d-b674-033a34940216_1920x1080.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As we enter our fourth year of running Systems Approach, LLC, we take another look at what goes into the process of producing technical books. How do we decide to focus on one topic versus another? There is more to this than just &#8220;write what you know&#8221; (actually a <a href="https://lithub.com/should-you-write-what-you-know-31-authors-weigh-in/">hotly debated</a> piece of writing advice) as Larry explains below.</p><div><hr></div><p>In our last newsletter, Bruce gave a retrospective of the last <a href="https://open.substack.com/pub/systemsapproach/p/three-years-of-systems-approach?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">three years of our Systems Approach journey</a>. 
This week I thought I&#8217;d chime in with my thoughts on how it is we decide what to write about. In many ways, this is no different than a researcher picking a problem to work on, a programmer deciding to build a new system or app, or a novelist deciding their next book project. It's mostly about personal tastes, but also includes an element of opportunity (e.g., to leverage some unique experience).</p><p>You can see these elements in the First Edition of our Computer Networks book. I had found the organizing principles of networking&#8212;along with the practice of building network software&#8212;to be a rewarding research avenue. Seeing Tanenbaum&#8217;s book as the current (in 1994) state-of-the-art in network textbooks, I realized that there was an opportunity to have impact by putting the Internet architecture front and center. The leverage was that I had spent time working on the TCP/IP stack (and adding <a href="https://open.substack.com/pub/systemsapproach/p/its-tcp-vs-rpc-all-over-again?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">RPC</a> to the mix), so I had a strong sense of how everything worked in practice. It was only after I got started that I understood (a) how daunting the task was, and (b) how many blind spots I had about the technology I was trying to write about. Fortunately, Bruce saved the day by joining me as co-author.</p><p>That 1st Edition made heavy use of the <em><a href="https://ieeexplore.ieee.org/document/67579">x</a></em><a href="https://ieeexplore.ieee.org/document/67579">-kernel</a> (my research project at the time) to illustrate how various protocols are implemented. That turned out to be a bootstrapping process: it provided the grounding that made it possible to speak with some authority about network systems, but (virtually) no one wants to learn about some obscure research OS as a prerequisite for learning about the Internet. 
The <em>x</em>-kernel all but disappeared in later editions, but the experience did teach me a valuable lesson, which I&#8217;ve come to think of as my version of the Hemingway <a href="https://en.wikipedia.org/wiki/Iceberg_theory">Iceberg Theory</a>:</p><blockquote><p><em>Hemingway said that only the tip of the iceberg showed in fiction&#8212;your reader will see only what is above the water&#8212;but the knowledge that you have about your character that never makes it into the story acts as the bulk of the iceberg. And that is what gives your story weight and gravitas. (Jenna Blum, 2013)</em></p></blockquote><p>(I also appreciate <a href="https://www.wordsthatsing.com.au/post/hemingway-rules">Hemingway&#8217;s writing style</a>, which shares much with technical writing: short/direct sentences and short/focused paragraphs. But I attribute the short-paragraph rule to Ms. Penny Wika, my high school journalism teacher, who insisted that the lead paragraph of any story be no more than 25 words, and never, ever start with &#8220;The&#8221; or &#8220;There&#8221;. I often break both rules, but doing so always weighs on me.)</p><p>I should bring this back to writing textbooks, but before I do, I can&#8217;t help but observe that even technical writing, while unlikely to be mistaken for a novel, can still include a narrative. In the books Bruce and I write, the narrative is just as important as the factual details. You can find the latter anywhere; it&#8217;s the former that helps you understand how to think about that information. In research papers the narrative is heavily prescribed&#8212;in systems, for example, it&#8217;s: (1) Introduction, (2) Design, (3) Implementation, (4) Performance, (5) Related Work. 
With textbooks you get a little more leeway to craft the narrative, and weave multiple themes through the text.</p><p>The key to a narrative is part organizational structure and part knowing what to omit, the latter being a challenge I elaborated on in an earlier <a href="https://systemsapproach.substack.com/p/omission-by-design">post</a>. It turns out the Iceberg Theory and knowing what to omit are two sides of the same coin. Having a real system underpinning a book&#8212;and being selective in how much detail you expose&#8212;is critical to constructing the narrative. A real system provides the depth you need to talk about abstract concepts, and it helps fill the gaps between the architectural elements. Both are essential if you&#8217;re going to try to construct an end-to-end narrative for the reader.</p><h2>Book Series</h2><p>Since starting our Systems Approach endeavor three years ago, we&#8217;ve produced four books on relatively narrow topics. There&#8217;s a unique case for why we selected each of these specific topics, but what they have in common is that each was (1) co-authored with a domain expert, and (2) backed by an open source implementation. I&#8217;ve come to believe both are essential: you not only need to have someone in the know to ask questions of, but you also need to know what questions to ask (and trying to understand code is a great way to fuel the latter).</p><p>Starting with the most obvious choice of topic, our <a href="https://www.systemsapproach.org/books.html#sdnbook">SDN book</a> is a natural continuation of our earlier interests: it describes an approach to implementing network software. Coupled with Bruce&#8217;s and my being in the thick of the industry building out SDN products and platforms (at Nicira/VMware and ONF), this made SDN an obvious candidate to write about. 
Having up-close access to a complete software stack&#8212;from programmable data planes, to network operating systems, to a range of control applications&#8212;gave us a unique perspective. No matter where you come down on the commercial viability of SDN, the kind of exposure the book gives you to the construction of network software is, in my judgment, essential to understanding networks.</p><p>Our <a href="https://www.systemsapproach.org/books.html#opsbook">Edge Cloud Operations book</a> is probably the least obvious choice in the series, especially if you&#8217;re expecting Internet-focused topics. But my experience working with network operators convinced me of two things: (1) that the Internet is rapidly evolving into a set of cloud services, and (2) that being able to operate these services is the chief challenge facing the industry. That&#8217;s still a tough sell in academic settings (where cloud operations is just the latest incarnation of network management, another neglected topic), but that doesn&#8217;t&#8212;again in my judgment&#8212;make it any less true. It also happens to be a rapidly evolving space, with countless offerings (both commercial and open source) vying for mindshare. And quickly stepping into that gap, today&#8217;s cloud providers say <em>&#8220;Don&#8217;t worry about this problem; it&#8217;s really hard, so we&#8217;ll take care of it for you.&#8221; </em>Personally, that is the loudest invitation to write a book that I can imagine.</p><p>Our 5G books (the original primer and our more recent <a href="https://www.systemsapproach.org/books.html#5gbook">Private 5G book</a>) were born out of the frustration of trying to understand the mobile cellular network. Initially this frustration was due to the many acronyms I encountered, but the closer I looked, the more the problem turned out to be one of trying to internalize a system that had evolved within an entirely different framework. 
There simply was no easy way to map the elements behind those acronyms onto a framework that made any sense to me. In 5G, the mobile cellular network is trying to redefine itself as a cloud service, which does at least provide a shared language (complicated by subtle differences in our definitions for terminology in that language). Sorting all of that out is essentially what our Private 5G book is about.</p><p>Finally, our <a href="https://www.systemsapproach.org/books.html#tcpbook">TCP Congestion Control book</a> is different from the other three, each of which could be characterized as covering an emerging topic. In contrast, TCP congestion control goes back over 30 years. But it is the one hard problem in networking that continues to demand attention even as the overall landscape changes. This makes it worth understanding.</p><p>Whether these four topics turned out to be the most useful to write about&#8212;or just satisfied our own personal interests&#8212;is difficult to say. We can say that our books are among the top non-sponsored search results for their respective topics and the online versions collectively receive thousands of visits each week. That&#8217;s gratifying, and helps motivate us to look for the next itch to scratch.</p><div><hr></div><p>Our Systems Approach books are always freely available on <a href="https://github.com/SystemsApproach">GitHub</a> and as <a href="https://book.systemsapproach.org/">web</a> versions, but we also make them available via <a href="https://www.systemsapproach.org/books.html">print-on-demand and as eBooks</a>. For the latter, we occasionally make a second printing available when the amount of new content becomes large enough, and last week we reached that milestone for our <a href="https://www.systemsapproach.org/books.html#5gbook">Private 5G book</a> with an expanded Appendix on deploying 5G using Aether. 
It&#8217;s currently discounted at both our site and Amazon.</p><p>The massive outage at Australian telco Optus continues to be <a href="https://www.kentik.com/blog/digging-into-the-optus-outage/">analyzed</a> and led to the resignation of the CEO. The <a href="https://www.aph.gov.au/DocumentStore.ashx?id=2ed95079-023d-49d5-87fd-d9029740629b&amp;subId=750333">Optus submission</a> to the Australian government was notable for its catalog of large failures by <em>other</em> telcos, a classic &#8220;everyone does it&#8221; defence.  And you probably don&#8217;t need us to tell you about what happened at OpenAI last week, but Gergely Orosz at <a href="https://newsletter.pragmaticengineer.com/p/what-is-openai?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">The Pragmatic Engineer</a> has a good overview. </p><p>Image this week by <a href="https://unsplash.com/@66north">66 north</a> on Unsplash.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our books and newsletters accessible to all. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Three Years of Systems Approach]]></title><description><![CDATA[Reviewing our progress as an open source publisher]]></description><link>https://systemsapproach.substack.com/p/three-years-of-systems-approach</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/three-years-of-systems-approach</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 13 Nov 2023 07:33:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e631c6b8-1c3c-404c-902e-a40c51b98c03_640x480.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few weeks ago we passed the three year anniversary of founding Systems Approach LLC. The Systems Approach, of course, is much older than that, but it seemed like a good time to look back and assess what we&#8217;ve been able to achieve since we made the decision to focus on producing books and other educational materials approximately full-time.</p><div><hr></div><p>As many of our readers will know, Larry and I have been writing books together since 1995, and the Systems Approach has been our guiding principle from day one. When Larry first approached me to come on board as a co-author, he already had the title picked out and a contract in place with Morgan Kaufmann Publishers. MKP in those days was a small, CS-focussed press whose main claim to fame was publishing &#8220;Computer Architecture: A Quantitative Approach&#8221; by Hennessy and Patterson. 
That book had already achieved legendary status and a huge chunk of the computer architecture market, so our goal was to be as much like them as possible. That included having a paper dust-jacket in a shade of beige and a similar structure to our title: hence &#8220;Computer Networks: A Systems Approach&#8221;.&nbsp;</p><p>To be honest, I needed Larry to explain what he meant by &#8220;Systems Approach&#8221; even though I&#8217;d been working on systems research for quite some time. As we say in the book&#8217;s <a href="https://book.systemsapproach.org/preface.html#what-is-a-systems-approach">preface</a>, the key to the Systems Approach is the &#8220;big-picture&#8221; view: you don&#8217;t get to optimize inside a single box without thinking about the bigger interactions among components. For me, one of the pivotal moments that turned me into a &#8220;systems thinker&#8221; was hearing David Clark present the ideas of <a href="https://dl.acm.org/doi/10.1145/99517.99553">Application Layer Framing</a> at the 1990 SIGCOMM conference, which made me realize that you shouldn&#8217;t just optimize a single layer of the protocol stack without thinking about how other layers depend on the services of that layer. In recent years, the rethinking of layers that is apparent in <a href="https://book.systemsapproach.org/e2e/tcp.html#alternative-design-choices-sctp-quic">QUIC</a> is a good example of how systems thinking is applied in protocol design&#8211;and also a reminder that rigid layering is often not the best way to think about networking.</p><p>The big change for us three years ago was that I left the corporate world (in a so-called &#8220;retirement&#8221; from VMware) while Larry moved to part-time at <a href="https://opennetworking.org/">ONF</a> and Princeton, so the work of Systems Approach LLC became our &#8220;main job&#8221;. 
Coincidentally, the <a href="https://systemsapproach.substack.com/p/escape-from-big-publishing">publisher</a> of our original book from 1995 had just pushed us to produce a <a href="https://www.systemsapproach.org/books.html">sixth edition</a> and so our first order of business was to complete the edits and handle all the back-end processing that happens when doing a book with a big publisher. The content edits were largely in place thanks to the open source nature of the book (we&#8217;d been taking inputs on GitHub since the fifth edition). One of the few valuable tasks performed by a big publisher is a professional copy edit, which takes a fair bit of time to review. So we were busy enough in those first few months putting the last finishing touches on the book. We also undertook a complete update to the ancillary materials for teachers (<a href="https://github.com/SystemsApproach/book/tree/master/6E-bottomupslides">lesson slides</a> and solutions to the exercises). Experience has shown that instructors struggle to get access to these materials through the publisher, so do reach out to us if you are looking for them.&nbsp;</p><p>Our main vision for Systems Approach, however, was not to keep on working for <a href="https://open.substack.com/pub/systemsapproach/p/escape-from-big-publishing?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">Big Publisher</a> but to produce our own books. So back in 2020 we drew up a list of possible topics to cover in the Systems Approach book series and in the last three years we have chipped away at that list, occasionally adding to it and sometimes shelving topics as they lose appeal. (I&#8217;m pretty sure a book on blockchains is off the table now.) 
Here are the books that we have managed to complete in the last three years: <a href="https://www.systemsapproach.org/books.html#sdnbook">Software-defined Networking</a>, <a href="https://www.systemsapproach.org/books.html#tcpbook">TCP Congestion Control,</a> <a href="https://www.systemsapproach.org/books.html#opsbook">Edge Cloud Operations</a>, and <a href="https://www.systemsapproach.org/books.html#5gbook">Private 5G</a>. Add in the revision of the big textbook and that&#8217;s five books in three years, which feels like a decent amount of progress. On top of that, we&#8217;ve created an <a href="https://www.edx.org/learn/computer-networking/the-linux-foundation-introduction-to-magma-cloud-native-wireless-networking">online class about Magma</a>, the open source mobile core project.&nbsp;</p><p>We are both fans of physical books, and we are benefiting from the increased quality and cost-effectiveness of print-on-demand that enables us to make print books without a big publisher. There is something very satisfying about seeing your work in a printed volume, and so we make the extra effort to get our books nicely formatted and ready for printing once we are fairly happy with the content. We&#8217;ve also worked with translators to produce foreign language versions of several of our books. A particular highlight was working with three of our Japanese colleagues to expand and customize our <a href="https://www.amazon.co.jp/-/en/Larry-Peterson/dp/4798172049/ref=sr_1_2?crid=3LVEZR7N8VI0B&amp;keywords=software-defined+network+davie&amp;qid=1656471131&amp;sprefix=software-defined+network+davie%2Caps%2C213&amp;sr=8-2">SDN book</a> for the Japanese market.&nbsp;</p><p>Producing physical books turns out to be something that we really enjoy. It is admittedly a bit annoying to get all the page breaks in the right place and wrestle with LaTeX to make everything look nice, but in the end we believe we&#8217;re producing a professional and aesthetically pleasing product. 
I also enjoy hunting around for the ideal cover art; we&#8217;ve made extensive use of the public domain images on <a href="https://unsplash.com/">Unsplash</a>. We very nearly ended up with a picture of a marijuana farm on the cover of the 5G book&#8211;which would have been fine, I guess, but we opted for something less potentially controversial.</p><p>Much of what we have written draws on our personal experience (especially Larry&#8217;s&#8211;he&#8217;s really quite good at building systems and then writing about them). Three of the above books are based on experience with open source projects such as <a href="https://docs.aetherproject.org/">Aether</a> and other SDN projects of the <a href="https://opennetworking.org/">ONF</a>. We also leveraged my experience with the open source projects <a href="https://www.openvswitch.org/">Open vSwitch</a> and <a href="https://magmacore.org/">Magma</a>. In our ideal world, every book we write would have a companion set of open source software so that students could get hands-on experience with whatever it is we are writing about. We&#8217;ve met that goal for all the books to date aside from TCP Congestion Control.&nbsp; Of course, there <em>is</em> open source code for congestion control, but it&#8217;s mostly in the Linux kernel and thus not ideal for our instructional purposes. &nbsp;As Larry pointed out in his post about <a href="https://systemsapproach.substack.com/p/democratizing-5g">Aether OnRamp</a>, there is more to making a technology accessible than open source software.</p><p>Because we make all our books freely available online, we&#8217;re not expecting to make much money from selling physical books and ebooks, and we are totally meeting those expectations! We&#8217;ve discussed the rationale behind this choice before&#8211;we&#8217;re more interested in impact than revenue. But of course it&#8217;s much easier to measure revenue. 
Once upon a time our publisher even measured course adoption rates for us (ah, those halcyon days of working with a small publisher). Today we are measuring things like the number of readers of this newsletter (growing nicely, thank you&#8211;tell your friends) and visits to our web sites (also doing well, although maybe a bit less well as Google search deteriorates). We definitely appreciate direct feedback from our readers. We even have our favorite Amazon one-star review (called &#8220;<a href="https://www.amazon.com/gp/customer-reviews/R1IVEX207N7WY8/ref=cm_cr_arp_d_rvw_ttl?ie=UTF8&amp;ASIN=B004VF6216">Wall of Text</a>&#8221;) that we use as inspiration to write more concisely and to draw more pictures.&nbsp;</p><p>The open source model is working well in my view. We&#8217;ve received dozens of corrections, issues, and pull requests from around the world, and we try to thank every contributor in the preface of each book. Sometimes it&#8217;s corrections to our grammar, sometimes it&#8217;s a bigger error like mixing up big- and little-endian (which was wrong in every edition of our big book until last month). Our books are better because we adopted this model. And occasionally one of us gets the satisfaction of resolving a merge conflict using git so we feel smart for a day.</p><p>At this point we have a number of books in the pipeline. Network security feels like a topic that we should cover, but will take some time given our relatively limited first-hand experience. There is part of me that wants to tackle quantum computing but that would definitely be a stretch. Machine learning is clearly close to our hearts and its application to networking is a topic that we plan to cover soon. 
We find that writing these books (and this newsletter) keeps us hungry to understand new technologies well enough to explain them to others, especially when there is a real system to underpin our learning.</p><p></p><div><hr></div><p>Got an idea for something that Systems Approach should cover in either a book or a newsletter? <a href="mailto:discuss@systemsapproach.org">Send us a note</a>.&nbsp;</p><p>In our periodic coverage of broken networks, Australian Telco Optus made a solid contribution last week with an <a href="https://www.theguardian.com/business/2023/nov/08/optus-outage-who-and-what-does-it-affect-australia-network-down-internet-of-things-iot">outage</a> lasting over eight hours. The best analysis we&#8217;ve seen so far comes from Mastodon user <a href="https://mastodon.au/@xrobau/111376847362633903">Rob Thomas</a> who used routing advertisements to deduce the root cause (suspected to be route reflector upgrades gone wrong). 
Optus has said little of substance other than to claim such outages are <a href="https://www.afr.com/chanticleer/optus-ceo-kelly-bayer-rosmarin-is-sorry-and-a-little-defiant-20231108-p5eilj">not unusual</a> (!). </p><p>Speaking of Mastodon, we will soon celebrate our first anniversary of leaving the dead bird site. Follow us <a href="https://discuss.systems/@SystemsAppr">here</a>. </p>]]></content:encoded></item><item><title><![CDATA[Applying a Systems Lens to Software Testing]]></title><description><![CDATA[What Should Our Acceptance Criteria Be?]]></description><link>https://systemsapproach.substack.com/p/applying-a-systems-lens-to-software</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/applying-a-systems-lens-to-software</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 30 Oct 2023 13:58:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/efaf3653-f468-4448-943e-083e43f98dcc_3024x4032.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A large chunk of the bandwidth of the Systems Approach team has been consumed in recent months by trying to increase the robustness and ease of use of Aether, an edge cloud platform for delivering private 5G service. And of course you can&#8217;t have a robust, usable platform without testing, which has proven to be something of a pain-point. So that has provided the inspiration for this week&#8217;s post, as we try to bring the Systems lens to testing.</p><div><hr></div><p>Perhaps the single biggest aspect of systems building I&#8217;ve come to appreciate since shifting my focus from academic pursuits to open source software development is the importance of testing and test automation. 
In academia, it&#8217;s not much of an overstatement to say that we teach students about testing only insofar as we need test cases to evaluate their solutions, and we have our grad students run performance benchmarks to collect quantitative data for our research papers, but that&#8217;s pretty much it. There are certainly exceptions (e.g., Software Engineering focused curricula), but my experience is that the importance placed on testing in academia is misaligned with its importance in practice.</p><p>I said I appreciate the role of software testing, but I&#8217;m not sure that I understand it with enough clarity and depth to explain it to anyone else. As is the nature of our <a href="https://systemsapproach.substack.com/p/defining-a-systems-approach">Systems Approach</a> mindset, I&#8217;d like to have a better understanding of the &#8220;whys&#8221;, but mostly what I see and hear is a lot of jargon: unit tests, smoke tests, soak tests, regression tests, integration tests, and so on. The problem I have with this and similar terminology is that it&#8217;s more descriptive than prescriptive. Surely an integration test (for example) is a good thing, and I can see why you could reasonably claim that a particular test is an integration test, but I&#8217;m not sure I understand why it&#8217;s either necessary or sufficient in the grand scheme of things. (If that example is too obscure, here&#8217;s another example posted by a <a href="https://www.reddit.com/r/Jokes/comments/prdi4x/a_software_tester_walks_into_a_bar/">Reddit user</a>.) 
The exception might be <a href="https://smartbear.com/learn/automated-testing/what-is-unit-testing/">unit tests</a>, where <em>code coverage</em> is a quantifiable metric, but even then, my experience is that more value is being put on the ability to measure progress than on its actual contribution to producing quality code.</p><p>With that backdrop, I have recently found myself trying to perform triage on the 700+ <a href="https://www.techtarget.com/searchsoftwarequality/definition/quality-assurance">QA</a> jobs (incurring substantial monthly AWS charges) that have accumulated over the last five years on the <a href="https://opennetworking.org/aether/">Aether</a> project. I don&#8217;t think the specific functionality is particularly important&#8212;Aether consists of four microservice-based subsystems, each deployed as a Kubernetes workload on an edge cloud&#8212;although it probably is relevant that the subsystems are managed as independent open source projects, each with its own team of developers. The projects do, however, share common tools (e.g., <a href="https://www.jenkins.io/">Jenkins</a>) and feed into the same <a href="https://ops.systemsapproach.org/lifecycle.html#design-overview">CI/CD pipeline</a>, making it fairly representative of the practice of building systems from the integration of multiple upstream sources.</p><p>What is clear from my &#8220;case study&#8221; is that there are non-trivial tradeoffs involved, with competing requirements pulling in different directions. One is the tension between feature velocity and code quality, and that&#8217;s where test automation plays a key role: providing the tools to help engineering teams deliver both. The best practice (which Aether adopts) is the so-called <em>Shift Left</em> strategy: introducing tests as early as possible in the development cycle (i.e., towards the &#8220;left&#8221; end of the CI/CD pipeline). 
But Shift Left is easier in theory than in practice because testing comes at a cost, both in time (developers waiting for tests to run) and resources (virtual and physical machines needed to run the tests).&nbsp;</p><h2>What Happens in Practice?</h2><p>In practice, what I&#8217;ve seen is heavy dependency on developers <em>manually</em> running component-level functional tests. These are the tests most people think of when they think of testing (and when they post jokes about testing to Reddit), with independent QA engineers providing value by looking for issues that developers miss, though even they often fail to anticipate critical edge cases. In the case of Aether, one of the key functional tests exercises how well developers have implemented the <a href="https://5g.systemsapproach.org/core.html#sd-core">3GPP protocol spec</a>, a task so complex that the tests are commonly purchased from a third-party vendor. As for automated testing, the CI/CD pipeline performs mostly <em>pro forma</em> tests (e.g., does it build, does it have the appropriate copyright notice, has the developer signed the <a href="https://github.com/SystemsApproach/book/blob/master/CLA.rst">CLA</a>) as the gate to merging a patch into the code base.</p><p>That puts a heavy burden on post-merge integration tests, where the key issue is to ensure sufficient &#8220;configuration coverage&#8221;, that is, validating that the independently developed subsystems are configured in a way that represents how they will be deployed as a coherent whole. Unit coverage is straightforward; whole-system coverage is not. For me, the realization that configuration management and testing efficacy are deeply intertwined is the key insight. (It is also why automating the CI/CD pipeline is so critical.)</p><p>To make this a little more tangible, let me use a specific example from Aether (which I don&#8217;t think is unique). 
To test a new feature&#8212;such as the ability to run multiple <a href="https://5g.systemsapproach.org/core.html?highlight=upf#functional-components">User Plane Functions (UPF)</a>, each serving a different partition (Slice) of wireless devices&#8212;it is necessary to deploy a combination of (a) the Mobile Core that implements the UPF, (b) the Runtime Controller that binds devices to UPF instances, and (c) a workload generator that sends meaningful traffic through each UPF. Each of the three components comes with its own &#8220;config file&#8221;, which the integration test has to align in a way that yields an end-to-end result. In a loosely coupled cloud-based system like Aether, integration <em>equals</em> coordinated configuration.</p><p>Now imagine doing that for each new feature that gets introduced, and either the number of unique configurations explodes, or you figure out how to get adequate feature coverage by selectively deciding which combinations to test and which to leave untested. I don&#8217;t have a good answer as to how to do this, but I do know that it requires both insider knowledge and judgment. Experience also shows that many bugs will only be discovered through usage, which says to me that pre-release testing and post-release <a href="https://systemsapproach.substack.com/p/observability-joins-the-list-of-essential">observability</a> are closely related. Treating release management (including staged deployments) as just another stage of the testing strategy is a sensible holistic approach.</p><p>Going back to where I started&#8212;trying to understand software testing through the systems lens&#8212;I don&#8217;t think I&#8217;ve satisfied my personal acceptance criteria. There are a handful of design principles to work with, but the task still feels like equal parts art and engineering. As for the triage I set out to perform on the set of Aether QA jobs, I&#8217;m still struggling to separate the wheat from the chaff. 
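To make the coordinated-configuration point concrete in code, here is a minimal sketch, in Python, of the kind of cross-config consistency check an integration test has to perform before the three components are deployed together. The component names are taken from the example above, but the dictionary keys, file layout, and addresses are hypothetical, not Aether's actual configuration schema:

```python
# Hypothetical configs for the three components; in practice these would be
# parsed from each project's own YAML files.
core_config = {"upfs": {"upf-1": "192.168.252.3", "upf-2": "192.168.252.4"}}
controller_config = {"slice-a": "upf-1", "slice-b": "upf-2"}
generator_config = {"targets": ["192.168.252.3", "192.168.252.4"]}

def misaligned(core, controller, generator):
    """Return a list of cross-config inconsistencies (empty if aligned)."""
    errors = []
    # Every slice must be bound to a UPF the Mobile Core actually deploys.
    for slice_name, upf in controller.items():
        if upf not in core["upfs"]:
            errors.append(f"{slice_name} bound to unknown UPF {upf}")
    # The workload generator must send traffic through every deployed UPF.
    for upf, addr in core["upfs"].items():
        if addr not in generator["targets"]:
            errors.append(f"no traffic directed at {upf} ({addr})")
    return errors

assert misaligned(core_config, controller_config, generator_config) == []
```

The hard part, as noted above, is not any single check like this but deciding which combinations of configurations to exercise, since the product of options across subsystems grows quickly.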
That&#8217;s a natural consequence of an evolving system, without a clear plan to disable obsolete tests. This remains a work-in-progress, but one clear takeaway is that both students and practitioners would be well-served by having a rigorous foundation in software testing.</p><div><hr></div><p>With energy costs comprising up to 10% of their operating costs, it should be no surprise that network operators are looking for ways to save energy. The <a href="https://opennetworking.org/sustainable-5g/">SMaRT-5G Project</a> at ONF has recently published a <a href="https://opennetworking.org/wp-content/uploads/2023/07/ONF-SMaRT-5G-White-Paper-v11.pdf">whitepaper</a> laying out a broad research agenda to address the challenge.</p><p>Preview photo this week by Bruce Davie</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our books and newsletters accessible to all. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Network Virtualization Revisited]]></title><description><![CDATA[The power of illusions created in software]]></description><link>https://systemsapproach.substack.com/p/network-virtualization-revisited</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/network-virtualization-revisited</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 16 Oct 2023 07:30:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/53cde748-124f-43d5-a6a2-d19d33654ce3_1920x2880.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When writing our Systems Approach books, we generally try to put ourselves in the shoes of a reader who doesn&#8217;t yet understand the topic we are trying to explain. This might seem obvious, but it means we need to be constantly checking our assumptions about what readers can be expected to know. Back in 1995 we couldn&#8217;t even assume that a reader would have spent much time on the Internet&#8211;we recently noticed a section of our big book that was quite overdue for an update since it still reflected our 1995 assumptions. 
In a related vein, some of our recent discussions have made us wonder how well our readers understand virtualization, especially when applied to networking, so we&#8217;re taking a look back at how we came to understand virtualization and its power.&nbsp;</p><div><hr></div><p>I have a vague memory of hearing about server virtualization for the first time in the early 2000s when I was working at Cisco. Most of my knowledge of operating systems had been picked up early in my career, so I had a working knowledge of concepts like virtual memory, but I was pretty surprised when I learned just how popular virtual machines had become. One thing I wondered was why virtual machines had become the <em>de facto</em> means to isolate applications rather than making use of process isolation capabilities of a single operating system. I wouldn&#8217;t get a satisfactory answer to that question until I joined Nicira many years later (more on that in a moment).</p><p>Even more perplexing to a networking person was to hear that the prevalence of virtual machines in data centers was driving a push towards big, flat layer-2 networks. One of the most remarkable (to me) consequences of virtualization is virtual machine migration: because a virtual machine is completely decoupled from the physical hardware on which it runs, it can be picked up and moved to another physical host without interruption. VMware&#8217;s version of live migration, vMotion, was released in 2003 to considerable acclaim, as it allowed a VM to be moved across a data center without interruption to the applications running on it. But there was an unfortunate networking-related side effect to VM migration: VMs retained their IP addresses as they moved. 
And this is what led to the push to build big, flat L2 networks in modern datacenters: so VMs could move around without finding themselves on a subnet that didn&#8217;t match their IP address.&nbsp;</p><p>By this time, I was convinced that scalable L2 networks were something of an oxymoron, so my first reaction was to argue that VMs should simply change their address to match the destination subnet when they moved across the DC. But that just illustrated what I didn&#8217;t know about real-world applications in datacenters. The consequences of an IP address change range from a dropped TCP connection to complete breakage of an application in the case where it has a built-in assumption of L2-adjacency to some other component. From the perspective of a datacenter operator, you simply can&#8217;t expect the application to respond correctly if the IP address is changing underneath it.</p><p>Fortunately, there are alternatives to attempting to build datacenter-scale L2 networks. One of the seminal efforts to tackle the problem is <a href="https://dl.acm.org/doi/pdf/10.1145/1592568.1592576">VL2</a>, described in 2009 by Greenberg <em>et al</em>. While this was not the first paper to use the term network virtualization, it did (I think) introduce the term in the way that it is most widely used today. Interestingly, Albert Greenberg (a SIGCOMM award winner) led the team that developed Azure&#8217;s network virtualization system, while his co-author James Hamilton later led the networking team at AWS. 
VL2 gives&nbsp;</p><blockquote><p><em>each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch&#8212;</em>a Virtual Layer 2<em>&#8212; and maintain this illusion even as the size of each service varies from 1 server to 10,000.</em></p></blockquote><p>Too often when I read something about network virtualization, it turns out to just be some way of partitioning network resources among different users (also known as <a href="https://open.substack.com/pub/systemsapproach/p/does-network-slicing-solve-a-real?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">slicing</a>). But the key word in the above quote is &#8220;illusion&#8221; because it gets to the heart of virtualization: creating an illusion of something that, strictly speaking, doesn&#8217;t exist. As the <a href="https://dl.acm.org/doi/abs/10.1145/361011.361073">seminal 1974 paper on virtualization</a> puts it, these illusions are &#8220;efficient, isolated duplicates&#8221; of the physical thing being virtualized. Virtual memory gives processes the illusion of a massive amount of address space, generally much larger than what is physically present, completely available to that process and protected from other processes. Virtual machines create the illusion of a complete set of computing resources (CPU, memory, disk, I/O) that are so fully independent of the underlying physical machine that they can be moved across a datacenter (or further). And virtual networks create the illusion of a private switched Ethernet (in the case of VL2) that can span the datacenter, even when the datacenter is a Layer 3 network built out of routers interconnecting many subnets. 
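To make the illusion mechanism a little more concrete, here is a toy sketch, hypothetical and greatly simplified rather than VL2's actual design, of the directory-based mapping that separates a VM's virtual address from its physical location. Because senders look up the current location on every send, the VM keeps its IP address when it migrates:

```python
# Directory mapping each virtual (application) address to the physical host
# where that VM currently runs. All names and addresses are illustrative.
mapping = {"10.0.0.5": "host-a.dc.example.com"}

def send(virtual_dst, payload):
    """Encapsulate a packet toward wherever virtual_dst currently lives."""
    physical_host = mapping[virtual_dst]  # directory lookup
    return {"outer_dst": physical_host,   # routed by the physical L3 fabric
            "inner_dst": virtual_dst,     # seen only by the virtual network
            "payload": payload}

def migrate(virtual_ip, new_host):
    """A VM moves; only the directory entry changes, not the VM's address."""
    mapping[virtual_ip] = new_host

pkt = send("10.0.0.5", "hello")                 # delivered via host-a
migrate("10.0.0.5", "host-b.dc.example.com")
pkt2 = send("10.0.0.5", "hello again")          # now delivered via host-b
```

The point of the sketch is that the physical network only ever routes on the outer address, so the inner (virtual) address can remain stable no matter where the VM lands.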
So while partitioning of resources is <em>part</em> of the story, it&#8217;s just one element in service of creating these illusions.&nbsp;</p><p>By the time I started working at Nicira in 2012, the team had settled on this view of network virtualization that mirrored the success of server virtualization. The idea of virtualization creating an illusion of something was core to the vision: just as a virtual machine creates the illusion of a physical machine so faithfully that an unmodified operating system and its applications can run on it&#8211;even as it migrates from one piece of hardware to another&#8211;so too, a virtual network should perfectly replicate the features of a physical network, while remaining independent of the actual underlying hardware. Just like servers, networks are complicated things with lots of moving parts, so Nicira&#8217;s product needed to do a lot more than what VL2 did: not just creating a virtual layer 2 switch, but virtualizing every layer of the network. That meant (eventually) virtual layer 3 routing, virtual firewalls, virtual load balancers and so on. I used to chuckle to myself about the prospect of a 50-person engineering team managing to recreate in software all the networking capabilities that had been developed over the previous several decades by companies like F5, Checkpoint and Cisco, but that is pretty much what eventually happened (thanks to the injection of considerable engineering resources over the following years).</p><p>You can find descriptions of network virtualization as implemented at <a href="https://www.usenix.org/conference/nsdi14/technical-sessions/presentation/koponen">Nicira</a>, <a href="https://www.usenix.org/system/files/conference/nsdi18/nsdi18-dalton.pdf">Google</a>, and <a href="https://www.usenix.org/system/files/conference/nsdi17/nsdi17-firestone.pdf">Azure</a>. 
It&#8217;s a bit harder to get details on how AWS does it, but a <a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html">VPC</a> is a form of virtual network; here is some of the information as <a href="https://youtu.be/JIQETrFC_SQ?si=oubAsLjjkerEAEy2">presented by James Hamilton</a>. We also cover network virtualization as a key use case of SDN in our <a href="https://www.systemsapproach.org/books.html#sdnbook">book</a>. And it&#8217;s not limited to the datacenters of the hyperscalers; <a href="https://www.vmware.com/au/products/nsx.html">VMware claims</a> that their network virtualization product (following on from the work at Nicira) is used in the datacenters of 91 of the Fortune 100 enterprises.</p><p>In some respects, network virtualization has followed the same path as server virtualization and for similar reasons. Nicira founder <a href="https://youtu.be/6dEUk9c8RBE">Martin Casado</a> has talked about how virtualization changes the &#8220;laws of physics&#8221;: the salient example for server virtualization is live VM migration, but there are others, such as snapshotting and cloning of VMs, made possible by recreating the illusion of a physical machine entirely in software. Not only did network virtualization bring similar capabilities to networking, but it facilitated entirely new ones such as <a href="https://open.substack.com/pub/systemsapproach/p/the-challenge-of-east-west-traffic?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">microsegmentation</a>, laying the groundwork for what was arguably the &#8220;killer&#8221; use case, <a href="https://open.substack.com/pub/systemsapproach/p/is-zero-trust-living-up-to-expectations?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">zero-trust networking</a>. We had a running joke at Nicira about the movie &#8220;<a href="https://www.imdb.com/title/tt1375666/">Inception</a>&#8221; (particularly when running virtual networks inside virtual networks). 
Network virtualization, with its own &#8220;laws of physics&#8221;, allowed us not only to recreate the capabilities of physical networks but to create new ones.</p><div><hr></div><p>We&#8217;re always interested in understanding how things go wrong with the Internet, and recent weeks have provided plenty of examples. </p><p>Cloudflare has an interesting <a href="https://blog.cloudflare.com/1-1-1-1-lookup-failures-on-october-4th-2023/">blog</a> about how they came to suffer an outage in part of their DNS infrastructure weeks after the root cause was introduced. The DDoS attacks leveraging a feature of HTTP2 have been well explained by both <a href="https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/">Cloudflare</a> and <a href="https://cloud.google.com/blog/products/identity-security/how-it-works-the-novel-http2-rapid-reset-ddos-attack">Google</a>: this feels like a case study for some future version of our book and a cautionary tale about implementing RFCs to the letter of the law. 
And it turns out that lots of commercial BGP implementations will <a href="https://blog.benjojo.co.uk/post/bgp-path-attributes-grave-error-handling">fail under fuzz testing</a>. </p><p>Preview photo this week by <a href="https://unsplash.com/@luca_nicoletti?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Luca Nicoletti</a> on <a href="https://unsplash.com/photos/-Mhr3UWS5n0">Unsplash</a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Democratizing 5G]]></title><description><![CDATA[We've been involved in multiple efforts to bring 5G to a wider audience, through both open source efforts (Aether, Magma, etc.) and our books. These efforts have served to remind us how much the mobile wireless architecture, in sharp contrast to the Internet, was designed with the needs of telco operators in mind. That said, 5G bases much of its architecture on standard cloud technologies, and it is now feasible for enterprise users&#8211;and the occasional]]></description><link>https://systemsapproach.substack.com/p/democratizing-5g</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/democratizing-5g</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 02 Oct 2023 07:15:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3b3ee205-7f75-4f14-bb21-672e092c57f5_2946x1625.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the last several years, we've been involved in multiple efforts to bring 5G to a wider audience, through both open source efforts (<a href="https://docs.aetherproject.org/master/index.html">Aether</a>, <a href="https://magmacore.org/">Magma</a>, etc.) and our&nbsp;<a href="https://www.systemsapproach.org/books.html#5gbook">books</a>. These efforts have served to remind us how much the mobile wireless architecture, in sharp contrast to the Internet, was designed with the needs of telco operators in mind. 
That said, 5G bases much of its architecture on standard cloud technologies, and it is now feasible for enterprise users&#8211;and the occasional <a href="https://open.substack.com/pub/systemsapproach/p/private-5g-as-easy-as-wi-fi?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">book author</a>&#8211;to deploy 5G. This places 5G in a long line of efforts to democratize access to technology as we discuss below.</p><div><hr></div><p>I recently attended SIGCOMM for the first time in many years, and was immediately reminded of the standard salutation when you run into someone at an academic conference: <em>&#8220;What are you working on?&#8221; </em>I wasn&#8217;t prepared with a succinct-yet-meaningful response, but quickly settled on <em>&#8220;Democratizing 5G&#8221;</em>. It wasn&#8217;t the expected research-focused answer, but it did have the quaint advantage of not directly involving ML. It seemed to do the trick, but also got me thinking about how common the word &#8220;democratizing&#8221; has become in our technical jargon.</p><p>I believe the first time I heard the word being used in a technical setting was in 2004, when Mike Freedman and colleagues published <a href="https://www.usenix.org/legacy/events/nsdi04/tech/full_papers/freedman/freedman.pdf">Democratizing Content Publication with Coral</a> at the first NSDI. And if the meaning wasn&#8217;t already intuitive, the paper gave a definition that nicely captures the spirit of what it means to democratize something technology-related:</p><blockquote><p><em>CoralCDN replicates content in proportion to the content&#8217;s popularity, regardless of the publisher&#8217;s resources, in effect democratizing publication.</em></p></blockquote><p>The Internet (like the printing press centuries before) is widely regarded as playing a central role in the democratization of knowledge&#8212;making information readily available among the wider population. 
CoralCDN focused specifically on the publishing side of the equation, making it easier for the wider population to also disseminate information. That&#8217;s now taken for granted with the explosion of various platforms (e.g., blogs, social media) that include built-in content distribution mechanisms, but was novel at the time. It also highlights the idea that there are multiple barriers to lower; in this case, both acquiring and disseminating content. Less obviously, it points to the role &#8220;access to resources&#8221; plays in the equation. CoralCDN was only able to democratize publication because it ran on a publicly-funded infrastructure, in this case <a href="https://systemsapproach.substack.com/p/its-been-a-fun-ride">PlanetLab</a>.</p><p>My use of the word democratization in the context of 5G focuses more on the underlying network infrastructure&#8212;with the intended audience being people who build and operate that infrastructure&#8212;than on end-users that benefit from that infrastructure. I tend to think of it as lowering the barrier to innovation, but it also shares much with the idea of <a href="https://freedom-to-tinker.com/">Freedom to Tinker</a>, where know-how is essential to crafting good policy. Either way, it&#8217;s about broadening the set of people able to participate in technologies that impact our lives.</p><p>For 5G, democratization turns out to be a multi-faceted challenge. A necessary condition is access to open source implementations of both the RAN and the Mobile Core, which, thanks to various open source organizations (<a href="https://opennetworking.org/aether/">ONF&#8217;s Aether</a> and <a href="https://openairinterface.org/">OAI</a> for example), now exist. 
But a quick perusal of the Git repos for those and similar projects will immediately convince you that the mere existence of open source software is not sufficient; users also need the wherewithal to deploy and operate the code if they have any ambition to take advantage of it. My experience is that &#8220;wherewithal&#8221; maps onto a combination of tooling and documentation, with the latter being the &#8220;long pole&#8221;. The result&#8212;our <a href="https://www.systemsapproach.org/books.html#5gbook">Private 5G</a> book, which includes a tutorial guiding the reader through the <a href="https://github.com/opennetworkinglab/aether-onramp">OnRamp</a> deployment toolset&#8212;is now available.</p><p>I&#8217;ve written about that topic <a href="https://systemsapproach.substack.com/p/onramp-incrementally-mastering-complexity">before</a>, so won&#8217;t rehash the challenges here, except to make two observations. The first is that in our haste to implement new functionality, we have created a nearly impenetrable mountain of configuration variables needed to manage that functionality; YAML files layered on top of Jinja2 templates overlaid on still other YAML files layered on top of JSON&#8230; I&#8217;m sure it&#8217;s not the intention to obfuscate the underlying code, but that is the practical effect. It&#8217;s almost as though we&#8217;ve created a problem only AI will be able to help us solve. The other option is to be a well-resourced company with a team of experienced engineers, but of course that flies in the face of our goal, which is to democratize access to that know-how.</p><p>The second observation, which follows from my experience trying to bring up a <a href="https://systemsapproach.substack.com/p/private-5g-as-easy-as-wi-fi">5G small cell</a>, is that the mobile cellular technology has operational complexity baked into its design. 
This makes sense (in a perverse way) when you consider that the technology was defined by MNOs that built businesses around their ability to operate the network on behalf of their subscribers. But this creates additional barriers that need to be lowered if you want to broaden participation. For example, programming a SIM is a required step to establishing a secure 5G connection, but being able to do that in turn requires having the necessary credentials (plus the know-how to correctly specify another few hundred lines of YAML). Fortunately, the broader <a href="https://sysmocom.de/products/sim/sysmousim/index.html">ecosystem</a> includes players that help on that front, but the main takeaway is that there is much more to the democratization of technology than initially meets the eye.</p><h2>Why are we doing this?</h2><p>But that all raises the question&#8212;typically following immediately after <em>&#8220;What are you working on?&#8221;</em>&#8212;which is: <em>&#8220;Why should I care?&#8221; </em>It&#8217;s a good question. If you&#8217;re going to put the effort into lowering the barrier-to-entry, there ought to be something important on the other side of those barriers. Again, there are a couple of parts to the answer.</p><p>The first part is to simply acknowledge that Internet access is going to be dominated by mobile wireless connectivity. Nearly 60% of all web traffic already comes from mobile devices, and that doesn&#8217;t yet take into account the tens of billions of IoT devices that are expected to connect to the Internet over the next few years. We&#8217;ve spent 40 years building the Internet out of open and accessible technologies, but going forward, democratizing Internet infrastructure is meaningless without also democratizing wireless access. 
That the mobile cellular industry has been so closed and proprietary for so long makes that goal all the more relevant.</p><p>The second part is to zero in on 5G vs. other wireless technologies, most obviously Wi-Fi. At the coding and modulation level, Wi-Fi 6 and 5G&#8217;s New Radio (NR) are converging on <a href="https://5g.systemsapproach.org/radio.html">OFDMA</a>. The difference is how the available spectrum is allocated by the two systems, which Bruce and his Magma collaborators discuss in depth in their <a href="https://arxiv.org/abs/2209.10001">NSDI paper on Magma</a>. Digging deeper, 5G scheduling includes the ability to dynamically change the size and number of schedulable resource units, including scheduling intervals as short as 0.125ms. This opens the door to making fine-grained scheduling decisions that are critical to predictable, low-latency communication. The 5G scheduler also allocates some of the available spectrum to a lightweight over-the-air interface that is simple enough for IoT devices to implement. These devices are not particularly latency-sensitive or bandwidth-hungry, but they often require long battery lifetimes, and hence, reduced hardware complexity that draws less power.</p><p>There&#8217;s reason to be skeptical, but as Bruce discusses in a <a href="https://systemsapproach.substack.com/p/why-5g-matters">previous post</a>, the application of cloud best-practices to 5G is a game changer. The hype around 5G has certainly gotten out in front of the reality, but it&#8217;s only just now the case that 5G is becoming viable in the enterprise (aka Private 5G). For example, the Aether project has only recently been able to certify a commercially available <a href="https://opennetworking.org/products/moso-canopy-5g-indoor-small-cell/.html#gnodeb-setup">small cell radio</a>, with ubiquitous 5G devices (other than smartphones) still to follow. 
Until these components become ubiquitous, many of the advantages outlined above will not be realized.&nbsp;</p><p>Bringing this discussion back to the question of what democratization means in the technical world, a personal takeaway is that I now see a direct line between &#8220;Democratizing X&#8221; and &#8220;X: A Systems Approach&#8221;. The Systems Approach has always looked at technology through the lens of deployed systems, using real implementations to explain the design decisions. Democratizing access to technologies is an almost inevitable consequence of how we help readers to understand them&#8212;by first deconstructing them into their elemental components and then showing how all the pieces are assembled into a coherent whole.</p><div><hr></div><p><a href="https://docs.aetherproject.org/master/onramp/overview.html">Aether OnRamp</a> is now available, providing what aims to be a relatively painless way to stand up a private 5G deployment. 
Watch the <a href="https://opennetworking.org/events/aether-v2-2-0-dev-techinar/">Techinar Video</a> and learn about the commercial <a href="https://opennetworking.org/products/moso-canopy-5g-indoor-small-cell/">5G small cell radio</a> that can be deployed with Aether. And for a different approach to democratizing 5G, see the <a href="https://www.helium.com/">Helium Network</a>. </p>]]></content:encoded></item><item><title><![CDATA[How Congestion Control Saved the Internet]]></title><description><![CDATA[Distributed resource management for the win]]></description><link>https://systemsapproach.substack.com/p/how-congestion-control-saved-the</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/how-congestion-control-saved-the</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 18 Sep 2023 08:05:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9e438816-fcdd-4e94-a605-e3fa0dc891ee_1689x949.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>With the annual <a href="https://conferences.sigcomm.org/sigcomm/2023/">SIGCOMM conference</a> taking place last week, we observed that congestion control still gets an hour in the program, 35 years after the first paper on TCP congestion control was published. So it seems like a good time to appreciate just how much the success of the Internet has depended on its approach to managing congestion.&nbsp;</p><div><hr></div><p>Following my recent <a href="https://peertube.roundpond.net/w/1hSTyT2J4cKrLaoU3sJeqT">talk</a> and <a href="https://open.substack.com/pub/systemsapproach/p/60-years-of-networking?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">article</a> on &#8220;60 years of networking&#8221;, which focussed almost entirely on the Internet and ARPANET, I received quite a few comments about various networking technologies that were competing for ascendancy at the same time. 
These included the OSI stack (anyone remember CLNP and TP4?), the <a href="https://en.wikipedia.org/wiki/Coloured_Book_protocols">Coloured Book protocols</a> (including the Cambridge Ring), and of course ATM (Asynchronous Transfer Mode), which was actually the first networking protocol on which I worked in depth. It&#8217;s hard to fathom now, but in the 1980s I was one of many people who thought that ATM might be the packet switching technology to take over the world. ATM proponents used to refer to existing technologies such as Ethernet and TCP/IP as &#8220;legacy&#8221; protocols that could, if necessary, be carried over the global ATM network once it was established. One of my fond memories from those days is of Steve Deering (a pioneer of IP networking) boldly (and correctly) stating that ATM would never be successful enough to even be a legacy protocol.</p><p>One reason I skipped over these other protocols at the time was simply to save space&#8211;it&#8217;s a little-known fact that Larry and I aim for brevity, especially since receiving a <a href="https://www.amazon.com/gp/customer-reviews/R1IVEX207N7WY8/ref=cm_cr_arp_d_rvw_ttl?ie=UTF8&amp;ASIN=B004VF6216">1-star review on Amazon</a> that called our book &#8220;a wall of text&#8221;. But I was also focused on how we got to the Internet of today, where TCP/IP has effectively out-competed other protocol suites to achieve global (or <a href="https://arxiv.org/abs/2209.10001">near-global</a>) penetration.&nbsp;&nbsp;</p><p>There are many theories about why TCP/IP was more successful than its contemporaries, and they are not readily testable. Most likely, there were many factors that played into the success of the Internet protocols. But I rate congestion control as one of the key factors that enabled the Internet to progress from moderate to global scale. 
It is also an interesting study in how the particular architectural choices made in the 1970s proved themselves over the subsequent decades.&nbsp;</p><h3>Distributed Resource Management</h3><p>In David Clark&#8217;s paper &#8220;<a href="http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-clark.pdf">The Design Philosophy of the DARPA Internet Protocols</a>&#8221;, a stated design goal is &#8220;The Internet architecture must permit distributed management of its resources&#8221;. There are many different implications of that goal, but the way that <a href="https://ee.lbl.gov/papers/congavoid.pdf">Jacobson and Karels</a> first implemented congestion control in TCP is a good example of taking that principle to heart. Their approach also embraces another design goal of the Internet: accommodate many different types of networks. Taken together, these principles pretty much rule out the possibility of any sort of network-based admission control, a sharp contrast to networks such as ATM, which assumed that a request for resources would be made from an end-system to the network before data could flow. Part of the &#8220;accommodate many types of networks&#8221; philosophy is that you can&#8217;t assume that all networks have admission control. Couple that with distributed management of resources and you end up with congestion control being something that end-systems have to handle, which is exactly what Jacobson and Karels did with their initial changes to TCP.</p><p>The history of TCP congestion control is long enough to <a href="https://tcpcc.systemsapproach.org/">fill a book</a> (and we did), but the work done at Berkeley from 1986 to 1988 casts a long shadow, with Jacobson&#8217;s <a href="https://dl.acm.org/doi/10.1145/52324.52356">1988 SIGCOMM paper</a> ranking among the most cited networking papers of all time. 
Slow-start, AIMD (additive increase, multiplicative decrease), RTT estimation, and the use of packet loss as a congestion signal were all in that paper, laying the groundwork for the following decades of congestion control research. One reason for that paper's influence, I believe, is that the foundation it laid was solid, while it left plenty of room for future improvements&#8211;as we see in the continued efforts to improve congestion control today. And the problem is fundamentally hard: we&#8217;re trying to get millions of end-systems that have no direct contact with each other to cooperatively share the bandwidth of bottleneck links in some moderately fair way using only the information that can be gleaned by sending packets into the network and observing when and whether they reach their destination.&nbsp;</p><p>Arguably one of the biggest leaps forward after 1988 was the realization by Brakmo and Peterson (yes, that guy) that packet loss wasn't the only signal of congestion: so too was increasing delay. This was the basis for the 1994 <a href="https://dl.acm.org/doi/10.1145/190809.190317">TCP Vegas paper</a>, and the idea of using delay rather than loss alone was quite controversial at the time. However, Vegas kicked off a new trend in congestion control research, inspiring many other efforts to take delay into account as an early indicator of congestion before loss occurs. Data center TCP (<a href="https://dl.acm.org/doi/10.1145/1851275.1851192">DCTCP</a>) and Google&#8217;s <a href="https://github.com/google/bbr">BBR</a> are two examples.&nbsp;</p><p>One reason that I give credit to congestion control algorithms in explaining the success of the Internet is that the path to failure of the Internet was clearly on display in 1986. Jacobson describes some of the early congestion collapse episodes, which saw throughput fall by three orders of magnitude. 
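The AIMD mechanism described above is simple enough to sketch in a few lines. This is an illustrative toy, not the BSD implementation: the window is counted in whole segments, slow-start, RTT estimation, and fast retransmit are omitted, and the 20-segment bottleneck is a made-up number.

```python
def additive_increase(cwnd: float) -> float:
    """Per RTT: roughly cwnd ACKs arrive, each growing the window by
    1/cwnd of a segment, for a net gain of about one segment per RTT."""
    return cwnd + 1.0

def multiplicative_decrease(cwnd: float) -> float:
    """On packet loss (the congestion signal), halve the window,
    but never shrink below one segment."""
    return max(cwnd / 2.0, 1.0)

# Toy trace: a hypothetical bottleneck drops a packet whenever the
# window exceeds 20 segments, yielding the classic AIMD "sawtooth":
# slow linear climb, abrupt halving, repeat.
cwnd = 1.0
trace = []
for _ in range(50):
    cwnd = multiplicative_decrease(cwnd) if cwnd > 20 else additive_increase(cwnd)
    trace.append(cwnd)
```

The asymmetry is the point: senders probe for spare capacity gently but back off aggressively, which is what lets millions of uncoordinated end-systems converge on a roughly fair share of a bottleneck.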
When I joined Cisco in 1995, we were still hearing customer stories about catastrophic congestion episodes. The same year, Bob Metcalfe, inventor of Ethernet and recent <a href="https://awards.acm.org/about/2022-turing">Turing Award</a> winner, famously predicted that the Internet would collapse as consumer Internet access and the rise of the Web drove rapid growth in traffic. It didn&#8217;t. Congestion control has continued to evolve, with the QUIC protocol, for example, offering both better mechanisms for detecting congestion and the option of experimenting with multiple congestion control algorithms. And some congestion control has moved into the application layer, e.g., Dynamic Adaptive Streaming over HTTP (<a href="https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP">DASH</a>).&nbsp;</p><p>An interesting side effect of the congestion episodes of the 1980s and &#8217;90s was the observation that small buffers were sometimes the cause of congestion collapse. An influential paper by Villamizar and Song showed that TCP performance dropped when the amount of buffering was less than the average delay &#215; bandwidth product of the flows. Unfortunately, the result held only for very small numbers of flows (as was acknowledged in the paper), but it was widely interpreted as an inviolable rule that influenced the next several years of router design. This was finally debunked by the buffer sizing work of <a href="https://dl.acm.org/doi/10.1145/1015467.1015499">Appenzeller et al.</a> in 2004, but not before the unfortunate phenomenon of <a href="https://www.bufferbloat.net/projects/">Bufferbloat</a>&#8211;truly excessive buffer sizes leading to massive queuing delays&#8211;had made it into millions of low-end routers. 
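The two sizing rules are easy to compare directly. A quick sketch, where the 10 Gb/s link, 100 ms RTT, and 10,000-flow count are made-up numbers chosen purely for illustration:

```python
import math

def buffer_bdp(bandwidth_bps: float, rtt_s: float) -> float:
    """The classic rule of thumb: one full bandwidth-delay product
    of buffering, returned in bytes."""
    return bandwidth_bps * rtt_s / 8

def buffer_sqrt_n(bandwidth_bps: float, rtt_s: float, n_flows: int) -> float:
    """Appenzeller et al. (2004): with N desynchronized long-lived
    flows, BDP / sqrt(N) of buffering suffices."""
    return buffer_bdp(bandwidth_bps, rtt_s) / math.sqrt(n_flows)

# Hypothetical core link: 10 Gb/s with a 100 ms average RTT.
classic = buffer_bdp(10e9, 0.100)             # ~125 MB under the old rule
revised = buffer_sqrt_n(10e9, 0.100, 10_000)  # ~1.25 MB with 10,000 flows
```

The hundredfold difference in this example shows why the one-BDP rule drove router designs toward buffers far larger than a busy link with many flows actually needs.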
The <a href="https://www.waveform.com/tools/bufferbloat">self-test for Bufferbloat</a> in your home network is worth a look.</p><p>So, while we don&#8217;t get to go back and run controlled experiments to see exactly how the Internet came to succeed while other protocol suites fell by the wayside, we can at least see that the Internet avoided potential failure because of the timely way congestion control was added. It was relatively easy in 1986 to experiment with new ideas by tweaking the code in a couple of end-systems, and then push the effective solution out to a wide set of systems. Nothing inside the network had to change. It almost certainly helped that the set of operating systems that needed to be changed and the community of people who could make those changes were small enough to see widespread deployment of the initial BSD-based algorithms of Jacobson and Karels.&nbsp;</p><p>It seems clear that there is no such thing as the perfect congestion control approach, which is why we continue to see new papers on the topic 35 years after Jacobson&#8217;s. But the Internet&#8217;s architecture has fostered the environment in which effective solutions can be tested and deployed to achieve distributed management of shared resources. In my view that&#8217;s a great testament to the quality of that architecture.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our work open to all. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>As fans of decentralization, we&#8217;ve moved our social media home to Mastodon&#8211;follow us <a href="https://discuss.systems/@SystemsAppr">here</a>. If you need any further reason to get off X/Twitter, <a href="https://open.substack.com/pub/theconnector/p/finishing-with-twitterx?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">this</a> may help. And you can read how <a href="https://chrlschn.medium.com/mastodon-is-rewinding-the-clock-on-social-media-in-a-good-way-8998f6d9f1aa">Mastodon is Rewinding the Clock on Social Media &#8212; in a Good Way</a>. Related to our <a href="https://open.substack.com/pub/systemsapproach/p/looking-inside-large-language-models?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">earlier post on LLMs</a>, Emily Bender has a <a href="https://faculty.washington.edu/ebender/papers/Envisioning_IAS_preprint.pdf">new paper</a> on the role of LLMs in information retrieval, making a compelling case that generative AI may not be what you want in a search engine. 
And given our interest in <a href="https://open.substack.com/pub/systemsapproach/p/verified-networks?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">network verification</a>, we were pleased to see <a href="https://dl.acm.org/doi/pdf/10.1145/3603269.3604866">this paper</a> on experience with Batfish at SIGCOMM last week.</p>]]></content:encoded></item><item><title><![CDATA[Abstraction Merchants Revisited]]></title><description><![CDATA[A response from Scott Shenker to Larry's post on rethinking our approach to the SIGCOMM conference.]]></description><link>https://systemsapproach.substack.com/p/abstraction-merchants-revisited</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/abstraction-merchants-revisited</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 04 Sep 2023 08:01:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ade8f14c-ce37-4a5a-9227-51c4151fd864_1221x975.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Four weeks ago Larry commented on an ongoing debate in the networking research community about how to improve SIGCOMM&#8217;s conferences. His post, entitled <a href="https://systemsapproach.substack.com/p/abstraction-merchants">Abstraction Merchants</a>, specifically referred to Scott Shenker&#8217;s CCR article <a href="https://dl.acm.org/doi/abs/10.1145/3577929.3577933">Rethinking SIGCOMM's Conferences: Making Form Follow Function</a>. When Scott approached us with his response, we thought it was a good time to try the first-ever guest post in Systems Approach. You can read his reply below with some concluding thoughts from Larry at the end.&nbsp;</p><div><hr></div><p>I appreciated Larry&#8217;s generous and thoughtful commentary on my CCR editorial, and I agree with much of what he wrote. 
However, I was surprised by his last paragraph where he suggests we should first reach consensus on how we should review nontraditional papers and then trust that our collective actions will follow suit.&nbsp;</p><blockquote><p><em>My takeaway is that focusing on the &#8220;mismatch between&#8230; SIGCOMM's goals&#8230; and what our current practices achieve&#8221; will not be fruitful until we are certain we understand our goals. Crafting a policy about how reviewers should evaluate papers describing new abstractions, new architectures, and new systems will not make a difference unless the community truly values that work, even when (1) it&#8217;s difficult to identify an immediate path to deployment or imagine the associated business model, and (2) it falls outside the traditional boundaries of what is considered to be in-scope. If we can reach consensus on this, then I believe the form (practice) will follow.</em></p></blockquote><p>At the outset, let me be clear about where Larry and I agree and disagree. We agree on the need for change in SIGCOMM&#8217;s conferences. We also agree on the need to engage in an ongoing discussion about our goals and how to review the kinds of papers Larry refers to above. Where we disagree is that I don&#8217;t see that conversation ever reaching consensus, nor do I think consensus is needed for achieving significant change. In the next few paragraphs I delve more deeply into why I believe in this non-consensus path for change, because I think this exchange between us raises the crucial question of how to achieve change in large and diverse communities such as SIGCOMM. My reasoning is built on four basic points.</p><p>(1) <strong>Inclusion, consensus, and change: pick two.</strong> As individuals and as a community, we in SIGCOMM believe that (i) it is important to include a wide range of voices in SIGCOMM&#8217;s deliberations and (ii) policy decisions should be driven by community-wide consensus. 
We also, at least in the past decade or so, have come to believe that we need to change our conferences so that they better support SIGCOMM&#8217;s intellectual goals. Unfortunately, achieving consensus in any large organization is hard, especially in a community as diverse and loosely coupled as SIGCOMM. For instance, several people strongly objected to the statement that the research goal of SIGCOMM was to &#8220;lay the intellectual foundation&#8221; of our field. I thus do not think that achieving consensus should be our initial goal, because we are almost certainly doomed to fail. Instead, we should focus on inclusion and change, sidestepping the need for community-wide consensus; I will return to this in my fourth point.</p><p>(2) <strong>Our reviewing practices are shaped more by our experience and our sense of community norms than by vague exhortations about how we should review.</strong> It has been widely observed that when you ask graduate students to review papers, they are often sharply if not brutally critical. This is understandable because seeing the overly-negative reviews of their own papers is their only contact with what the community expects from reviewers, and they are merely following what they perceive as the community&#8217;s norm. SIGCOMM has provided <a href="https://www.sigcomm.org/conference-planning/sigcomm-program-bcp/reviewing">guidance</a> for reviewers, but this guidance has been widely ignored for over a decade. While I think we, as a community, should emphasize this guidance more than we do, and be engaged in an ongoing dialog about the nature of such guidance, we should not fool ourselves that such vague directives by themselves will have the impact we hope for.&nbsp;</p><p>(3) <strong>Change is hard, and the alternative is the status quo.</strong> Creating change requires hard work, and merely registering a disagreement while remaining on the sideline is essentially a vote for the status quo. 
Over the past year, my three intrepid collaborators &#8211; Fabian E. Bustamante, Nate Foster, and Aurojit Panda &#8211; and I have talked with dozens of people in SIGCOMM (actively seeking out those who disagree with us), surveyed the community for input, and written thousands of words articulating our thinking. Our writings have described both some general principles and some specific incremental steps forward. You can see these efforts documented <a href="http://sigcomm.quest">here</a> and on the SIGCOMM Slack conference channel (which also contains valuable input from many others, including this <a href="https://docs.google.com/document/d/1EqpF3GWfB6P_nVKe-ziUytzklH7iGKhnVamEZh0wiAA/edit#heading=h.hitzcqxpmzxy">report</a>). Our process started with a fairly radical set of <a href="https://sigcomm.quest/principles.html">principles</a> and ended with (i) a specific incremental proposal (<a href="https://sigcomm.quest/proposal.html">described</a> and later <a href="https://sigcomm.quest/clarification.html">clarified</a>) and (ii) a general <a href="https://sigcomm.slack.com/archives/C012HHB2BV2/p1691443251402049">process</a> for ongoing change (which was developed in collaboration with the SIGCOMM Executive Committee). While our goals remain ambitious and far-reaching, we settled on a concrete first step and a process that could support ongoing progress, because we thought the former was the most we could achieve in the short-term, and the latter provided a solid foundation for future evolution. We greatly appreciate those who, while not agreeing completely with our ideas, were willing to lend their support to these efforts. 
From this I have learned the following lesson, modeled on Jon Postel&#8217;s famous <a href="https://en.wikipedia.org/wiki/Robustness_principle#:~:text=It%20is%20often%20reworded%20as,an%20early%20specification%20of%20TCP.">advice</a>: <em>Be conservative in what changes you propose, but liberal in what changes you&#8217;ll support.</em>&nbsp;</p><p>(4) <strong>Empowering positive voices may be sufficient.</strong> Larry and I agree that we need to value papers that describe new abstractions, new architectures, and new systems even if they are hard to evaluate and inconsistent with current practice. You might naturally ask: <em>How might our proposed set of actions cause the community to value such papers?</em> Our community norms of how to review have been strongly shaped by how others review, and by what papers are accepted by the conference. Seeing the negative reviews of papers that fall outside the current norms, and their high rejection rate, has led many to believe that they shouldn&#8217;t be accepted, and I don&#8217;t think we can change people&#8217;s minds by telling them otherwise. Instead, we want to empower the current minority of reviewers who value such papers via our <a href="https://sigcomm.quest/proposal.html">proposal</a> (alluded to above) to accept papers that have a single explicit champion. This will change the kinds of papers that people see at SIGCOMM conferences, thereby demonstrating that there are those in the community who value this kind of work; this will hopefully lead to more people writing such papers, and more reviewers willing to advocate for such papers. 
<em>I believe this gradual process of changing perceived norms, without reaching explicit consensus, is the path to change in organizations like SIGCOMM.</em> Note that this approach is not restricted to accepting a particular set of &#8220;nontraditional&#8221; papers, but instead leads to a greater diversity of papers; any set of papers that has at least a minority of voices in favor will receive a more favorable hearing from program committees. This is crucial for SIGCOMM, because I think our conferences should broaden our perspectives, not just reinforce our current preconceptions. On that, I think we can all agree.</p><div><hr></div><h4><em><strong>Larry&#8217;s Closing Thoughts</strong></em></h4><p>I like Scott&#8217;s focus on how to make change happen. Talk is cheap, and his attention to the process is something I can get behind. If the consensus I called for is manifest in concrete proposals for how SIGCOMM improves its conferences, then I&#8217;m happy. I do think this process is complicated by three things I talked about, which we need to acknowledge and address head-on. One is the breadth of methodologies we apply, which, to oversimplify a bit, range from theory to practice with plenty of ground in the middle. A second is the range of opinions about how we measure impact, which (to oversimplify again) range from changing perspectives to changing protocol specs. A third is the diversity of values we bring to the table, which is especially problematic when we conflate value with quality. As I wrote, I place a lot of value on the synthesis of complex systems that lead to new abstractions. But that&#8217;s just me. Others put their emphasis on policy questions, economic factors, and societal impact (to name a few). 
I&#8217;m not sure how to best reconcile all of that, but I&#8217;m encouraged by the fact that SIGCOMM is willing to try.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to making our work available to all. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>As we continue to follow the AI/ML space, an article on the <a href="https://slate.com/technology/2023/08/chatgpt-vs-algorithms-class.html">poor performance of ChatGPT</a> on an Algorithms exam caught our attention (and frankly didn&#8217;t surprise us). Thanks to user solstice333 on Github, we <a href="https://github.com/SystemsApproach/book/commit/83d06a1f49bbb376690d5e376d56c3d6a3a54dab">fixed a bug</a> in our main textbook that had lurked undetected for 28 years&#8211;score one for open source publishing. Thanks also to Richard Clegg who sent us <a href="https://www.youtube.com/watch?v=B14Gtm2Z_70">this video</a> about the history of putting compass points other than North at the top of maps. 
Finally, don&#8217;t forget to do your bit for Internet decentralization by joining us on <a href="https://discuss.systems/@SystemsAppr">Mastodon</a> or <a href="https://peertube.roundpond.net/c/systemsappr/videos">Peertube</a>.&nbsp; </p>]]></content:encoded></item><item><title><![CDATA[Looking inside Large Language Models]]></title><description><![CDATA[We like to think we can give our readers a bit of perspective on the field when there is no shortage of hyperbole and strong opinions. We&#8217;re not claiming to be AI experts but we&#8217;ve read many thousands of words on the topic and some of the central issues (in our view) are becoming clear]]></description><link>https://systemsapproach.substack.com/p/looking-inside-large-language-models</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/looking-inside-large-language-models</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 21 Aug 2023 07:04:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/639648bf-e16f-4c65-8f16-71da92ac58d9_1035x732.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While we&#8217;re not quite ready to give the full Systems Approach book-length treatment to AI and machine learning, we like to think we can give our readers a bit of perspective on the field when there is no shortage of hyperbole and strong opinions. We&#8217;re not claiming to be AI experts (although Bruce very nearly ended up studying in the legendary AI department at Edinburgh in the 1980s, as discussed in his <a href="https://peertube.roundpond.net/w/1hSTyT2J4cKrLaoU3sJeqT">recent talk</a>). We&#8217;ve read many thousands of words on the topic and some of the central issues (in our view) are becoming clear.</p><div><hr></div><p>A couple of questions put to me recently led me to think that perhaps it was time for another post on Large Language Models (LLMs) and the broader topic of AI. 
(As in an <a href="https://open.substack.com/pub/systemsapproach/p/putting-large-language-models-in?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">earlier post</a>, I&#8217;m going to use &#8220;AI&#8221; in the way the term is most widely used now, an umbrella term that includes LLMs and other machine learning systems, in spite of some pushback. Noted roboticist Rodney Brooks has a <a href="https://rodneybrooks.com/what-will-transformers-transform/">good piece</a> that touches on the change in meaning of &#8220;AI&#8221; since the term was coined over 60 years ago.) First, in a recent podcast, I was asked about the biggest challenge at Systems Approach, and I immediately answered &#8220;figuring out what&#8217;s important&#8221;.&nbsp; This goes way beyond AI, but it&#8217;s a challenge even to decide what aspects of AI warrant our attention. And when a person in the investing community asked for my opinion on AI, my response (which might have verged on a rant) was this: my biggest concern with AI is that too many people, whether they are journalists, investors, business decision makers or whatever, are focused on the wrong problem. This issue is well captured in a recent <a href="https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/">Scientific American article</a> by Emily Bender (of <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">Stochastic Parrots</a> fame) and Alex Hanna of <a href="https://www.dair-institute.org/">DAIR</a>. If you only read one article on AI, I&#8217;d suggest that one&#8211;even if that means skipping the rest of this post. 
To summarize, we should focus not on theoretical problems of some future superhuman intelligence, but on the harms that AI is already capable of causing, such as fostering discrimination in areas from housing to health care, or helping the spread of misinformation.&nbsp;</p><p>That said, I continue to find the inner workings of AI quite fascinating and it&#8217;s worth understanding them well enough to know what AI is and is not capable of. In the last month I&#8217;ve read a number of more in-depth articles on LLMs, and these have given me a little more insight into why the question of what is really going on inside these systems remains, for most people, a matter of debate. I mostly agree with the position that LLMs have no idea what they are doing, but as with most topics, there is a bit more to this one than meets the eye.&nbsp;</p><p>If you want to go deep into the internals of LLMs such as ChatGPT, Stephen Wolfram has written an excellent (if long) <a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/">article</a> that is also available in book form. One of the aspects that he drills down on is what it means to have a &#8220;model&#8221; of something. For example, if we had a set of data showing how long it takes a cannonball to fall to earth from various heights, we could fit a straight line to the data, and extrapolate or interpolate to predict times to fall from other heights not in the data set. But by choosing a straight line, we&#8217;ve adopted a model that&#8217;s not very accurate, and will be increasingly inaccurate as we go further outside the range of the original data. Knowing how gravity works, we&#8217;d be more inclined to fit a parabola, but that&#8217;s only possible because we already have a model for gravity.&nbsp;</p><p>With LLMs, words are modelled in a vector space with hundreds of dimensions. 
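Wolfram&#8217;s cannonball example above is easy to reproduce: a straight-line fit looks fine inside the measured range but extrapolates badly, while the physics model (t = sqrt(2h/g), i.e., the parabola) is accurate everywhere. A quick sketch, with heights chosen arbitrarily for illustration:

```python
import math

G = 9.8  # m/s^2

# Synthetic "measurements": time to fall from various heights,
# generated from the exact physics t = sqrt(2h/g)
heights = [5, 10, 15, 20, 25]                    # metres
times = [math.sqrt(2 * h / G) for h in heights]  # seconds

# Fit a straight line t = a*h + b by ordinary least squares
n = len(heights)
mean_h = sum(heights) / n
mean_t = sum(times) / n
a = sum((h - mean_h) * (t - mean_t) for h, t in zip(heights, times)) \
    / sum((h - mean_h) ** 2 for h in heights)
b = mean_t - a * mean_h

# Extrapolate well outside the data: fall time from 100 m
linear_guess = a * 100 + b
true_time = math.sqrt(2 * 100 / G)  # the physics ("parabola") model

# The linear model fits the measured range closely but is seconds
# off once we leave it
error = abs(linear_guess - true_time)
```

The line predicts a fall time more than two seconds too long at 100 m, even though it matches every data point it was fit on to within a fraction of a second: the model, not the data, is what was wrong.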
The impressive feat is that an exceptionally complex model (with over a trillion parameters in GPT-4) can be trained (using vast amounts of input text) to do a pretty good job of mimicking human writing. In effect, GPT builds a model of language that captures a lot of the complexity of how humans string words together. With that model in place, an LLM is then able to generate strings of text not in the training data set&#8211;which is exactly what we observe when we interact with a system such as ChatGPT. As we know, the generated text often looks pretty authentic. As Timothy Lee and Sean Trott pointed out in <a href="https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/">another very helpful article</a> at Ars Technica, LLMs deal with issues such as disambiguating the multiple meanings of words depending on context by passing the input text through multiple layers of neural networks. (&#8220;Fruit flies like a banana&#8221; is an example requiring some serious disambiguation.) Each layer is called a &#8220;transformer&#8221; (the T in GPT) and you can think of a line of text being passed through successive layers of transformers. Each layer adds metadata to the words: for example, having seen the sentence above, one transformer layer might add metadata to indicate that the word &#8220;flies&#8221; refers to insects rather than motion through the air.</p><p>There is a lot more in that article and I recommend you read it, but I had a disconcerting feeling as I was reading it that my confident assertion that LLMs have no understanding of the words they are producing was a bit overstated. At this stage, we all know of examples where LLMs have produced laughable results indicating a lack of understanding of the <em>world</em>, but the details of how they work show that they are very good at understanding <em>language</em>. 
I think the issue is the difference between understanding language (a set of symbols) and understanding the world. If a human understands language, we generally assume that they also understand the world, but making this extrapolation in the case of LLMs is a bridge too far. Here is a quote from the Ars Technica article that gave me pause:</p><blockquote><p><em>For example, as an LLM &#8220;reads through&#8221; a short story, it appears to keep track of a variety of information about the story&#8217;s characters: sex and age, relationships with other characters, past and current location, personalities and goals, and so forth.</em></p></blockquote><p>The description here comes awfully close to suggesting that the LLM &#8220;understands&#8221; what it is reading. <a href="https://rodneybrooks.com/what-will-transformers-transform/">Brooks</a> calls out the issue here: we mistake performance (producing realistic text) for competence (understanding the world). Since he&#8217;s a roboticist (he founded <a href="https://en.wikipedia.org/wiki/IRobot">iRobot</a>), I found his prediction that GPT won&#8217;t be used for robots, because they have to understand the real world, very compelling. (Brooks is good at making predictions and he boosts his credibility by keeping his <a href="https://rodneybrooks.com/predictions-scorecard-2023-january-01/">predictions online</a> for the long haul and reporting back on them.) To quote:</p><blockquote><p><em>&#8230; it will be bad if you try to connect a robot to GPT. GPTs have no understanding of the words they use, no way to connect those words, those symbols, to the real world. A robot needs to be connected to the real world and its commands need to be coherent with the real world. Classically it is known as the &#8220;symbol grounding problem&#8221;. GPT+robot is only ungrounded symbols.</em></p></blockquote><p>This is the key takeaway for me: having a model for language is different from having a model of the world. 
For example, we know that LLMs have a tendency to <a href="https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/">make up citations</a>. These citations look &#8220;correct&#8221; because they conform to the model of language (they have authors, realistic titles, journal names, etc.). But they fail a basic test: they are not drawn from real, legitimate publications. So the language model doesn&#8217;t understand &#8220;what is a legitimate citation&#8221;&#8211;a fact about the world that is pretty basic for a human to grasp.&nbsp;</p><p>So I remain convinced that we need to be cautious about how LLMs and other AI techniques are put to work. Not because they are going to achieve superhuman intelligence, but because they have serious limitations, and because humans are already using them in ways that cause harm. This is certainly not limited to AI, but the difficulty of understanding what AI systems are actually doing and the human tendency to assume greater competence than they really have present some unique challenges.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to making our content freely available. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>Another interesting example of the failures of today&#8217;s LLMs has been reported in <a href="https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/">The Register</a>, with ChatGPT proving itself to be quite poor at answering Stack Overflow-style questions. In more positive news, <a href="https://www.theguardian.com/technology/2023/aug/13/only-ai-made-it-possible-scientists-hail-breakthrough-in-tracking-british-wildlife">AI does seem to be helping</a> keep track of the flora and fauna species recorded around British train tracks.  You can go deeper on the risks of AI in <a href="https://publicinfrastructure.org/podcast/85-timnit-gebru/">a podcast with Timnit Gebru</a>, founder of the Distributed AI Research Institute and another of the co-authors of the Stochastic Parrots paper. Cory Doctorow has another great piece on <a href="https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means">&#8220;openwashing&#8221; in the AI field</a>. And don&#8217;t forget that you can do your part to decentralize the Internet again by <a href="https://discuss.systems/@SystemsAppr">following us on Mastodon</a>.&nbsp;&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[Abstraction Merchants]]></title><description><![CDATA[As computer networking researchers, we&#8217;ve been active members of the SIGCOMM community for most of our careers. 
So we are very interested in the ongoing discussion about the future of the flagship conference, which is really a discussion about the future of networking research.]]></description><link>https://systemsapproach.substack.com/p/abstraction-merchants</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/abstraction-merchants</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 07 Aug 2023 07:47:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8f120574-9f4f-401c-97cb-a3a4b320c714_961x1280.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As computer networking researchers, we&#8217;ve been active members of the <a href="https://www.sigcomm.org/">SIGCOMM</a> community for most of our careers. Last week, Bruce talked about origin story of Systems Approach in a <a href="https://hamilton-barnes.com/podcasts/e133-bruce-davie-at-systems-approach-llc/">podcast</a>, pointing back to the first edition of <em><a href="https://book.systemsapproach.org">Computer Networks: A Systems Approach</a>, </em>published in 1996. But in fact it was working together on a <a href="https://dl.acm.org/doi/10.1145/190314.190315">1994 SIGCOMM paper</a> that first gave us the idea that perhaps we might be able to collaborate on a book. So given our long history with SIGCOMM, we&#8217;re very much interested in the current discussion about the future of the flagship conference, which requires a discussion about the future of networking research.</p><div><hr></div><p>I recently reread <a href="https://dl.acm.org/doi/abs/10.1145/3577929.3577933">Scott Shenker&#8217;s essay</a> questioning whether the practices used by SIGCOMM-sponsored conferences are helping the community achieve its research goals. The essay has triggered much discussion around the practices, but what I found to be most interesting is how the article first takes a position on what those research goals should be. 
If form is to follow function (as the essay&#8217;s title suggests), then you need to start with a good handle on the desired function. My sense is that the ongoing debate about practices (form) are more strongly rooted in how we think about the research goals (function) than we acknowledge.</p><p>Scott starts with a perfectly good statement of what he believes our shared goal to be:</p><blockquote><p><em>SIGCOMM should strive to lay the intellectual foundation for future networks and networked systems.</em></p></blockquote><p>And then he teases apart the nuanced meaning of the key phrases. This reminds me of a similar discussion 20 years ago, when the National Research Council published a report entitled:&nbsp; <a href="https://nap.nationalacademies.org/catalog/10183/looking-over-the-fence-at-networks-a-neighbors-view-of">Looking Over the Fence at Networks: A Neighbor&#8217;s View of Networking Research</a>. Two things stand out about that report. One is that it was written at a time when the network research community was gnashing its collective teeth over the state of networking research, with companies like Cisco dominating what did or did not happen in the Internet. A second is that it purposely included input from people <em>outside</em> the networking community, providing a healthy dose of perspective. That history repeats itself is reason enough to take a fresh look at that two-decade old report, but I think there might be some other lessons to learn from both the report and Scott&#8217;s essay, and they (in part) have to do with the 800 pound gorilla in the room: the Internet.</p><p>For starters, the report lays out three promising research thrusts that are just as appropriate today: <em>Measuring</em> (understanding the Internet artifact); <em>Modeling</em> (defining a theory of networking); and <em>Making</em> (building disruptive prototypes). This is where the Internet is a double-edged sword. 
That it exists as a real-world phenomenon worthy of study is a boon for network research. That it exists as a multi-billion dollar industry makes it difficult to have impact by proposing new abstractions or architectures unless they can be justified and evaluated in the context of today&#8217;s artifact. I have a lot of respect for researchers who devise techniques to collect data about the Internet and then analyze that data to to gain a deeper understanding of its behavior, but I&#8217;ve spent my career more on the synthesis side than the analysis side of CS, and so I&#8217;ve been especially aware of the Internet as a barrier to research.&nbsp;</p><p>The report introduced the term <em>ossification</em> into our vocabulary, and motivated work like <a href="https://systemsapproach.substack.com/p/its-been-a-fun-ride">PlanetLab</a> (for me) and <a href="https://sdn.systemsapproach.org/">SDN</a> (for Scott and others). Building platforms to support innovative and disruptive systems research (which has a strong synthesis component in its own right) is surely a good thing, but it is a means to an end. The &#8220;end&#8221; is discovering the fundamental abstractions and design principles that are the &#8220;intellectual foundation&#8221; of networking. And I believe this point to be at the core of why we struggle to get the &#8220;conference practices'' right. Here are my personal observations.</p><p>First, ten years ago this month, at the other end of my PlanetLab experience, I gave the Keynote at SIGCOMM. It was entitled <a href="https://www.cs.princeton.edu/~llp/zana.pdf">Zen and the Art of Network Architecture</a>, inspired by Robert Pirsig&#8217;s <em>&#8220;Zen and the Art of Motorcycle Maintenance&#8221;</em>. As I noted in my talk, Pirsig held the world record for having been rejected by 121 publishers; the parallel to academic network architecture papers was too poignant to pass up. 
In my mind, an architecture is just a multi-faceted abstraction (or a suite of interconnected abstractions), where abstractions represent the &#8220;essence&#8221; of the fundamental ideas systems researchers are in the business of discovering. If I were to amend Scott&#8217;s goal statement, it would be to emphasize the research community&#8217;s role as <em>Abstraction Merchants</em> (reintroducing a term Dave Clark once used to describe what we do).</p><p>Second, it is interesting to compare the OS community (SIGOPS) with the networking community (SIGCOMM). At SOSP (SIGOPS&#8217; flagship conference) I generally feel qualified to review every paper submitted to the conference, and interested in every paper presented at the conference. It doesn&#8217;t matter what the system is&#8212;it could be a storage-related system, a network-related system, a compute-related system, and so on&#8212;I&#8217;m likely to learn about (1) an emerging mismatch between application requirements and technology constraints; (2) a new abstraction that fills that void; (3) the lessons the authors learned as they reduced the abstraction to practice; and (4) an evaluation of how the resulting mechanism stacks up across some subset of the -ities (scalability, reliability, availability, and so on). There will always be room to improve the mechanism, and write papers about those improvements, but the abstractions the community generates and is hammering on at any given time is what keeps the field vital. My sense, which is consistent with my reading of Scott&#8217;s position, is that SIGCOMM has become more focused on improving mechanisms <em>within the boundaries of the existing (Internet) architecture</em> than in introducing and exploring new abstractions. 
(There are exceptions, such as datacenter networking, but even in what could be an area that&#8217;s viewed as a greenfield, the inertia of <a href="https://systemsapproach.substack.com/p/its-tcp-vs-rpc-all-over-again">&#8220;that&#8217;s not how we do it in the Internet&#8221;</a> weighs heavy.)</p><p>Third, I believe the biggest risk the SIGCOMM community faces right now is a narrowing opportunity to have impact. This is related to the brief discussion about <em>&#8220;networks and networked systems&#8221;</em> in Scott&#8217;s essay (the question being whether SIGCOMM&#8217;s scope should include the latter). The answer is obvious to both Scott and me, but the fact that he raises it as an issue is telling. It also happens to be a tension I was keenly aware of in my keynote, having just spent the previous few years shepherding the <a href="https://www.networkworld.com/article/2309750/geni-looks-to-conjure-up-next-generation-network.html">GENI</a> project, and watching my colleagues on both sides of that question have heated arguments (and on one occasion, a shouting match) about the research opportunities inside the network versus on top of the network. There are important and hard inside-the-network research questions&#8212;we&#8217;ve been working on distributed resource allocation (aka TCP congestion control) for decades&#8212;but the more networking technology matures, the more the interesting research questions move up the stack. And as a corollary, today&#8217;s research problems may be best framed in terms of the &#8220;cloud architecture&#8221; rather than the &#8220;Internet architecture&#8221;; the vocabulary we use can itself be limiting.</p><p>My takeaway is that focusing on the <em>&#8220;mismatch between&#8230; SIGCOMM's goals&#8230; and what our current practices achieve&#8221; </em>will not be fruitful until we are certain we understand our goals. 
Crafting a policy about how reviewers should evaluate papers describing new abstractions, new architectures, and new systems will not make a difference unless the community truly values that work, even when (1) it&#8217;s difficult to identify an immediate path to deployment or imagine the associated business model, and (2) it falls outside the traditional boundaries of what is considered to be in-scope. If we can reach consensus on this, then I believe the form (practice) will follow.</p><p></p><div><hr></div><p>For an example of the creation of new abstractions, it&#8217;s hard to think of one more important than the packet, the origins of which feature in Bruce&#8217;s talk on <a href="https://www.ed.ac.uk/informatics/60-years-of-computer-science-and-ai/60-years-of-computer-science-and-ai-events/academic-and-industry/distinguished-lecture-bruce-davie">60 years of Networking</a>. On returning from Edinburgh he has made a recording of the talk available on <a href="https://peertube.roundpond.net/w/1hSTyT2J4cKrLaoU3sJeqT">Peertube</a> (which is the Fediverse&#8217;s streaming video platform, also discussed in the talk). You can hear about the creation of Systems Approach and what we think are our main current challenges in this <a href="https://hamilton-barnes.com/podcasts/e133-bruce-davie-at-systems-approach-llc/">podcast</a>. And sticking with our theme of the Internet&#8217;s evolution, we enjoyed <a href="https://pluralistic.net/2023/08/03/there-is-no-cloud/#only-other-peoples-computers">this piece</a> from Cory Doctorow, which talks about how, under different conditions and constraints, the old, good Internet could have given way to a new, good Internet. 
Perhaps we can still create those conditions.</p>]]></content:encoded></item><item><title><![CDATA[The Challenge of East-West Traffic]]></title><description><![CDATA[While one half of the Systems Approach team has been out bagging Munros in Scotland, we&#8217;ve found some time to reflect on the changing approaches to datacenter security, particularly the focus on East-West traffic, which is the topic of this week&#8217;s newsletter.]]></description><link>https://systemsapproach.substack.com/p/the-challenge-of-east-west-traffic</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/the-challenge-of-east-west-traffic</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 24 Jul 2023 07:57:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/981895b9-272b-4cba-aaf7-3784b399fc64_1663x1247.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While one half of the Systems Approach team has been out <a href="https://www.walkhighlands.co.uk/munros/">bagging Munros</a> in Scotland, we&#8217;ve found some time to reflect on the changing approaches to datacenter security, particularly the focus on East-West traffic, which is the topic of this week&#8217;s newsletter.</p><div><hr></div><p>One of the fun things about being an Australian living in the Northern hemisphere (which was my situation for over thirty years) is having repeated conversations about which way water rotates when it goes down the drain. OK, it becomes a bit less fun over time, but I was always surprised that few people actually tried the experiment of checking out a few different drains in their own hemisphere. It turns out that drain geometry and initial water movement in the sink, not the Coriolis effect, dominate the direction of rotation, so you will see both directions in either hemisphere. 
There is actually a nice <a href="https://www.youtube.com/watch?v=mXaad0rsV38">YouTube video</a> that shows this, and then impressively proceeds to show the effect of Coriolis force on water draining out of a pair of identical kiddie pools in the two hemispheres (thus removing the confounding factors in most sinks, toilets, etc.).</p><p>A similar amount of time goes into explaining that North is not actually &#8220;up&#8221;, it&#8217;s just shown that way on maps drawn by Northern hemisphere-based explorers and almost every other cartographer since then. My father, who travelled quite a bit to the Northern hemisphere in the 1970s, might have been one of the first to make custom maps that put South at the top, to point out to his overseas colleagues that their map-drawing conventions were just that&#8211;arbitrary conventions. Never mind the issues of projection which left me thinking Greenland was bigger than Australia until I learned about <a href="https://en.wikipedia.org/wiki/Gall%E2%80%93Peters_projection">alternatives</a> to Mercator.&nbsp;</p><p>All of this has been on my mind this week as (a) I returned to the Northern hemisphere for the first time since 2020 (see previous <a href="https://open.substack.com/pub/systemsapproach/p/60-years-of-networking?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">notes</a> about my <a href="https://www.ed.ac.uk/informatics/60-years-of-computer-science-and-ai/60-years-of-computer-science-and-ai-events/academic-and-industry/distinguished-lecture-bruce-davie">talk in Edinburgh</a>) (b) I spent a lot of time with maps when I was out <a href="https://elk.aus.social/aus.social/@Drbruced/110719094750175543">walking in the highlands</a> of Scotland (c) I found myself needing to explain the difference between East-West and North-South traffic to some colleagues in the context of datacenter security. 
I&#8217;m not exactly sure of the origins of this naming convention, but the idea is that the ingress/egress point of a datacenter carries the &#8220;North-South&#8221; traffic, while the traffic that flows between servers within the datacenter is the &#8220;East-West&#8221; traffic.</p><p>Why do we even make this distinction? One big reason is security. Historically, the simplest way to &#8220;secure&#8221; a datacenter was to put some set of appliances (firewalls, intrusion detection systems, etc.) at the ingress/egress point. This is the &#8220;perimeter&#8221; model of security, which became prominent for several reasons. First, the number of ingress points to a datacenter is small&#8211;maybe as low as one, certainly no more than a handful. So it is natural to place centralized security appliances next to those choke points so that all the traffic can be passed through them. Furthermore, the bandwidth involved at ingress is likely to be orders of magnitude lower than the total East-West bandwidth: traffic entering a datacenter is likely to be measured in gigabits per second, while East-West traffic can easily run into the terabits.&nbsp;Neither of these points means that perimeter security is a <em>good</em> model&#8211;just that it was for a long time the most practical approach.</p><p>It is worth taking a step back to ask why centralized appliances became the preferred way to apply security controls. One version of this story is that the original Internet architecture had no security, and that early efforts to add security followed the end-to-end argument, which makes a good case for putting security into end-systems. For example, encryption and authentication are security mechanisms that can be implemented in end-systems (provided you can find a way to manage key distribution, which has proven challenging). 
However, as David Clark (co-author of the end-to-end argument) pointed out in a <a href="https://dl.acm.org/doi/10.1145/383034.383037">2001 paper with Marjory Blumenthal</a>, a multitude of factors pushed the Internet towards the adoption of centralized appliances inserted into the path of traffic by the late 1990s, such as the rise of malware, the adoption of the Internet by unsophisticated users, and the unreliability of software implementations on end-systems (e.g., OS bugs). While many Internet purists lamented the decline of the end-to-end principle, Clark and Blumenthal adopted the position that we have to deal with the world we live in rather than some idealized parallel universe. Centralized firewalls became part of the landscape because they allowed IT administrators to gain some control over the security of their networks in a world of increasing threats, without depending on the impractical notion that every end-system would do the right thing.</p><p>By the time I came to be involved in datacenter networking around 2012, the idea of securing the &#8220;perimeter&#8221; of the datacenter&#8211;which essentially involved putting a number of appliances into the ingress/egress path&#8211;was well established. Unfortunately, it was also fast becoming clear that this approach was inadequate, as a lack of East-West security meant that a compromise of a single (perhaps non-critical) system inside the perimeter could provide the launching pad for a much more serious attack via lateral movement among systems. 
The poster child for this issue was the <a href="https://www.zdnet.com/article/anatomy-of-the-target-data-breach-missed-opportunities-and-lessons-learned/">2013 Target hack</a>, in which the initial breach took place via a refrigeration contractor&#8217;s computer, allowing the attackers to gain a foothold inside the perimeter of Target&#8217;s network, from where they were able, over a series of weeks, to move laterally among systems until they obtained the credit card details of about 100 million customers. There was no reason for the contractor portal (the original entry point for the attack) to have any connectivity to the systems that had credit card data. Nevertheless, because both systems were &#8220;inside the perimeter,&#8221; there were limited security controls between them. Lack of control over East-West traffic was the key to this and many other attacks.</p><p>Securing East-West traffic in 2013 was a fundamentally hard problem, because there is a vast number of paths between systems carrying massive volumes of data, and the traditional way to secure this would be to divide the network into a small number of zones with firewalls between them. Within a zone, traffic still flowed freely. It was either impractical or prohibitively expensive to place firewalls in such a way that all East-West traffic could be intercepted.&nbsp;</p><p>I came to be interested in this issue because of the evolution of network virtualization that was taking place at about the same time as the Target breach. Our early network virtualization product at Nicira virtualized layer 2 (switching) and layer 3 (routing) and we had long held the view that we would work our way up the layers to virtualize all of networking. 
A simple firewall operates at layer 4 (looking at transport protocol port numbers) and so this was the logical next step.&nbsp;</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Xgp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Xgp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 424w, https://substackcdn.com/image/fetch/$s_!6Xgp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 848w, https://substackcdn.com/image/fetch/$s_!6Xgp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 1272w, https://substackcdn.com/image/fetch/$s_!6Xgp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6Xgp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png" width="1309" height="246" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:246,&quot;width&quot;:1309,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Xgp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 424w, https://substackcdn.com/image/fetch/$s_!6Xgp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 848w, https://substackcdn.com/image/fetch/$s_!6Xgp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 1272w, https://substackcdn.com/image/fetch/$s_!6Xgp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b210879-e89a-468d-9883-4f7ead0d8e8c_1309x246.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Network virtualization enables an SDN-style implementation of a firewall. By &#8220;SDN-style&#8221; I mean that the data plane is distributed while the control plane is logically centralized. In the image above, the distributed data plane runs in the virtual switch of each server, inspecting the traffic entering and leaving each virtual machine. 
(Similar approaches can be applied to containerized or bare-metal workloads.) This means it is now possible to apply firewall policies to every single packet that traverses the data center&#8211;even packets that only pass from one VM to another in the same server. Since virtual switches can process packets as fast as the server can send them, it became feasible to have terabits of firewall capacity allocated to East-West traffic. But because the architecture is based on SDN, there is a logically centralized control plane that simplifies management of the distributed data plane. From a control plane and management perspective, the firewall still looks like a centralized device, where an IT administrator (or an automated system calling an API) can set the firewall policies for the entire datacenter. But the data plane scales out with server capacity, and there is no need for heroic efforts to force traffic to flow through some centralized appliance.</p><p>There is more detail about this aspect of network virtualization in <a href="https://sdn.systemsapproach.org/netvirt.html">our SDN book</a>. This is by no means the last word in East-West security; <a href="https://aviatrix.com/secure-egress/">Aviatrix</a>, for example, addresses East-West security for cloud workloads. And it&#8217;s important to do more than just inspect protocol ports as Thomas Graf shows in a <a href="https://youtu.be/k0KQz6JrKXc">talk on Cilium</a>. Overall, the creation of tools to efficiently provide security services to East-West traffic was one of the key components to implementing zero-trust security, a topic we&#8217;ve covered <a href="https://open.substack.com/pub/systemsapproach/p/is-zero-trust-living-up-to-expectations?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">previously</a>. 
It&#8217;s also one of the main reasons that network virtualization achieved mainstream adoption in enterprise datacenters: it became obvious that relying only on perimeter security and a handful of firewall zones was insufficient for today&#8217;s security challenges. There is plenty more to be done here, with <a href="https://systemsapproach.substack.com/p/service-mesh-and-the-goldilocks-zone">service meshes</a> being another area of active work addressing (among other things) East-West security. But at least we no longer punt on the problem by relying solely on centralized appliances at the ingress to focus only on North-South traffic.&nbsp;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our work open to all.  
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>Our previous notes on the importance of <a href="https://open.substack.com/pub/systemsapproach/p/why-apis-matter?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">APIs</a> and <a href="https://open.substack.com/pub/systemsapproach/p/observability-joins-the-list-of-essential?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">observability</a> caused us to take note of the recent <a href="https://blog.pragmaticengineer.com/building-an-an-early-stage-startup/">acquisition of Akita</a>, an API observability company. Our book on Private 5G is available at a discount if you buy the ebook <a href="https://www.systemsapproach.org/books.html">via our website</a>. You can also pick up Systems Approach <a href="https://www.systemsapproach.org/store/p5/Systems_Approach_Mug.html">coffee mugs</a>. And don&#8217;t forget to <a href="https://discuss.systems/@SystemsAppr">follow us on Mastodon</a>, the thinking person&#8217;s social network. 
</p>]]></content:encoded></item><item><title><![CDATA[OnRamp: Incrementally Mastering Complexity]]></title><description><![CDATA[We have been on an open source kick recently, and that continues with today&#8217;s look at the challenges involved in helping users come up-to-speed on an open source project.]]></description><link>https://systemsapproach.substack.com/p/onramp-incrementally-mastering-complexity</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/onramp-incrementally-mastering-complexity</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 10 Jul 2023 07:01:03 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/646bceae-8ce7-4a03-b17c-5c82cc03afc0_2668x2000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We have been on an open source kick recently, and that continues with today&#8217;s look at the challenges involved in helping users come up-to-speed on an open source project. The challenge is especially thorny when those users are exploring an unfamiliar topic  and the software is to be deployed as a scalable cloud service.</p><div><hr></div><p>Here&#8217;s a problem I&#8217;ve been struggling with for the last several months: How to make a complex system assembled from dozens of components easily consumable by a wide range of users. The system is the <a href="https://opennetworking.org/aether/">Aether</a> edge cloud, which serves as the blueprint for our <a href="https://5g.systemsapproach.org/index.html">Private 5G</a> book. The &#8220;dozens of components&#8221; include a long list of Cloud Native tools plus an open source implementation of 4G/5G Mobile Core and RAN. 
The &#8220;wide range of users&#8221; includes students who want hands-on experience with concepts they&#8217;re seeing for the first time, researchers who want to investigate narrow problems in the larger 5G/edge space, and organizations that want to deploy and operate Private 5G in everything from lab trials to commercial offerings. I&#8217;ll get to my definition of &#8220;easily consumable&#8221; in a moment, which is at the heart of why I find this to be an interesting problem in general.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>You could make the case that this is a self-inflicted challenge&#8212;Aether is available in GitHub and anyone with the technical chops is free to do with it what they will&#8212;but it follows from the goal of realizing the know-how/educational value of open source software I talked about in my previous <a href="https://systemsapproach.substack.com/p/open-source-another-value-proposition">post</a>. To provide a little background, a multi-site deployment of Aether has been running as a managed cloud service since 2020, in support of the <a href="https://prontoproject.org/">Pronto Research Project</a>. 
But that deployment depends on an ops team with significant insider knowledge about Aether&#8217;s engineering details. It has proven difficult for others to reproduce that know-how and bring up their own Aether deployments.</p><p>Offering &#8220;Aether-as-a-Service&#8221; comes with an ongoing obligation to provide operational support, but it is easier than releasing &#8220;Aether-as-Software&#8221;, complete with the machinery needed for others to deploy and operate Aether as their own service. This is well understood by anyone that has attempted to take that step, and familiar to me from my experience operating <a href="https://systemsapproach.substack.com/p/its-been-a-fun-ride">PlanetLab</a> (but never packaging it in a way that made it easy for others to replicate). In the case of PlanetLab, the biggest value was in the network effect&#8212;having access to compute resources contributed by others all over the world&#8212;so operating a multi-tenant service made sense. In contrast, the biggest value for Aether is having full ownership of your deployment.</p><p>A minimal version of Aether is also available in a package called <a href="https://docs.aetherproject.org/master/developer/aiab.html">Aether-in-a-Box (AiaB)</a>. Originally designed to give developers a streamlined modify-build-test loop they could run on their laptops, AiaB serves as a good way to get started. But there is a considerable gap between what it provides and an operational 5G-enabled edge cloud deployed in a particular environment. Or as I&#8217;ve been known to say on family road-trips: <em>You can&#8217;t get there from here.</em></p><p>As a &#8220;getting started&#8221; package, AiaB is straightforward to use. You set up a VM (on your laptop or in the cloud), clone the AiaB repo, and type <strong>make 5g-test</strong>. Doing so installs Kubernetes, brings up the Mobile Core, runs an emulated 5G RAN workload, and prints out the results. 
There are more intermediate Make targets users can explore, which helps from a learning perspective, but ultimately AiaB favors ease-of-use for canned configurations over ease-of-transition to more complex configurations that have been customized for a particular use case. That is the crux of the problem we try to address with <a href="https://github.com/SystemsApproach/aether-onramp">Aether OnRamp</a>: start with something as easy as AiaB, and then incrementally expose (and document) the information needed to take ownership of Aether in its full glory: a multi-site / hybrid cloud / managed service. OnRamp tries to do this in a way that supports more than one off-ramp, so that users who want to focus on a particular subsystem need not pay too steep an up-front price.</p><p>The general approach OnRamp takes is to draw crisp lines between different stages (e.g., development, integration, deployment, operations) and layers (infrastructure, services, applications, traffic sources), and then introduce them incrementally, coupled with documentation that calls out the relevant &#8220;decision points&#8221; and &#8220;configuration parameters&#8221;. We include a first version of <a href="https://5g.systemsapproach.org/software/overview.html">OnRamp with our Private 5G</a> book, with the caveat that there is still much work to be done. That version relies heavily on Makefiles, which means the key configuration parameters are exposed as ad hoc variables. This summer I have been working with Bilal Saleem and Muhammad Shahbaz at Purdue to transition OnRamp to use Ansible. This started as an effort to scale Aether from a single node to a multi-node cluster, but Ansible is proving to be a better way to manage the overall stepwise configuration strategy.&nbsp;</p><p>Although conceptually straightforward, this general approach is not easy to execute in practice. I have two takeaways from the experience so far. 
One is the importance of using tools that are powerful enough to get the job done, but not so heavy-weight as to obscure (abstract away) the know-how you&#8217;re trying to impart. This is important for two reasons: (1) so the novice can easily see what&#8217;s going on under the covers, and (2) so the expert can easily unwind engineering choices and apply different tooling. Ansible seems to be a good compromise in that it explicitly exposes the playbook, and provides a well-defined way to specify the relevant variables.&nbsp;There&#8217;s still a syntactic gap between an Ansible task and the corresponding kubectl call, but that gap is easy to document.</p><p>The second difficulty is to identify those relevant variables, which are often buried in a sea of configuration parameters. This process is complicated by developer bias&#8212;my term for codifying config variables (for example Helm Chart values files) that perfectly suit the developer&#8217;s needs, but obfuscate where a user trying to <em>deploy</em> the code needs to take ownership, so as to customize the parameters for their particular scenario. The only way I know to untangle this obfuscation is to try to document it, not just in a technical sense (i.e., most parameters are defined somewhere, <em>if</em> you know where to look), but rather, in an intuitive way that will make sense to a non-expert. Helping the non-expert understand what they can safely ignore goes hand-in-hand with identifying what they need to know.</p><p>Whether OnRamp hits the mark is a matter of judgment, where the proof of the pudding is in the eating. You can try our <a href="https://github.com/SystemsApproach/aether-onramp">first attempt</a> at helping users consume Aether today, and we expect to announce the availability of OnRamp-v2 by the end of the summer. Watch this site for an announcement. 
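</p><p>To make that Ansible-to-kubectl gap concrete, here is a hypothetical sketch of the kind of task involved (the <code>kubernetes.core.helm</code> module and its parameters are real Ansible, but the chart name, namespace, and variable are illustrative, not taken from the actual OnRamp playbooks):</p><pre><code># Hypothetical Ansible task: install a Helm chart into the cluster.
- name: Deploy SD-Core via Helm
  kubernetes.core.helm:
    name: sd-core
    chart_ref: aether/sd-core          # illustrative chart reference
    release_namespace: omec            # illustrative namespace
    values_files:
      - "{{ onramp_values_file }}"     # the kind of variable OnRamp exposes

# Roughly the corresponding CLI invocation:
#   helm install sd-core aether/sd-core -n omec -f &lt;values-file&gt;
</code></pre><p>The playbook makes the operation and its key parameters explicit, while the comment shows the one-line translation to the underlying tool&#8211;exactly the sort of mapping the documentation can spell out for both novices and experts.</p><p>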
Finally, I would be remiss if I didn&#8217;t mention all the excellent hands-on tutorials created by other open source projects; the <a href="https://kubernetes.io/docs/tutorials/">Kubernetes Tutorial</a> is a great example. Like what we&#8217;re trying to do with Aether, the goal is to lower the barrier to consuming open source software. My personal spin on that objective is to use such hands-on experience as a way to teach the underlying principles and concepts, and to enable research that builds on those concepts.</p><div><hr></div><p>With Bruce in Scotland presenting his lecture at Edinburgh University&#8217;s celebration of <a href="https://www.ed.ac.uk/informatics/60-years-of-computer-science-and-ai/60-years-of-computer-science-and-ai-events/academic-and-industry/distinguished-lecture-bruce-davie">60 years of Computer Science and Artificial Intelligence</a>, it&#8217;s up to his apprentice to see to it that this week&#8217;s post makes it to your mailbox. Confidence level: 7-out-of-10.</p><p>For those with a vested interest in SIGCOMM, you may want to read (and comment on) the community&#8217;s effort to <a href="https://sigcomm.quest/">establish a consensus</a> about how its flagship conference should evolve.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[60 Years of Networking]]></title><description><![CDATA[Next week I am heading to Edinburgh University to give a lecture as part of the events celebrating 60 years of Computer Science and Artificial Intelligence at Edinburgh. I realized that there was a lot going on in the networking world 60 years ago too, making for a nice tie-in to the event. My research for the talk inspired me to write this post. I&#8217;m sure I will be hearing from readers if I make any claims here (or in my talk) that can&#8217;t be backed up&#8211;some of the stories here are hard to verify, but there are some good primary sources cited in the following post.]]></description><link>https://systemsapproach.substack.com/p/60-years-of-networking</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/60-years-of-networking</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 26 Jun 2023 07:42:27 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d8c14301-b036-4abd-8f6e-12a60bd22221_1920x2864.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Next week I am heading to Edinburgh University, where I did my <a href="https://systemsapproach.substack.com/p/verified-networks">PhD</a> back in the 1980s, to give a lecture as part of the events celebrating <a href="https://www.ed.ac.uk/informatics/60-years-of-computer-science-and-ai/60-years-of-computer-science-and-ai-events/academic-and-industry/distinguished-lecture-bruce-davie">60 years of Computer Science 
and Artificial Intelligence</a> at Edinburgh. Having decided that I would speak about networking (stick to what you know, after all), I realized that there was a lot going on in the networking world 60 years ago too, making for a nice tie-in to the event. My research for the talk inspired me to write this post. I&#8217;m sure I will be hearing from readers if I make any claims here (or in my talk) that can&#8217;t be backed up&#8211;some of the stories here are hard to verify, but there are some good primary sources cited in the following post.</p><div><hr></div><p>Most people with some knowledge of networking history can name a few famous people from its history, such as the inventors of TCP/IP (Bob Kahn and Vint Cerf) and of the World Wide Web (Sir Tim Berners-Lee). While TCP/IP dates back to the early 1970s, and the Web to the 1990s, I wanted to go back further to cover the 60-year period being celebrated at Edinburgh. I found a wonderful resource in &#8220;<a href="https://groups.csail.mit.edu/ana/A%20brief%20history%20of%20the%20internet%20-%20p22-leiner.pdf">A Brief History of the Internet</a>&#8221; by Barry Leiner <em>et al.</em> (with co-authors including Kahn and Cerf among a long list of famous names). One of the striking things about that paper is just how many people were involved in the creation of the Internet. Fortuitously, Leiner <em>et al</em>. trace the Internet&#8217;s roots back to the early 1960s, lining up nicely with the timeframe I wanted to cover, and highlighting a number of developments I was unfamiliar with. I&#8217;ve leaned heavily on that paper for my talk.</p><p>First up is the invention of packet switching. It&#8217;s worth noting that the dominant network in the 1960s was the telephone network, which dealt in circuits. A small group of computer scientists recognized that there might be a form of networking that was better suited to computer communication. 
Len Kleinrock wrote a report on the queuing theory behind packet switching (without using the term packet) in 1961 (and later wrote books that were part of my education). Over the next few years Paul Baran in the U.S. and Donald Davies in the U.K. independently developed the main concepts of packet switching&#8211;they are generally recognized as packet switching&#8217;s co-inventors. Davies gets the nod for having coined the term &#8220;packet&#8221; to describe a bundle of bits packaged for transmission.</p><p>Arguably just as important was the work of <a href="https://en.wikipedia.org/wiki/J._C._R._Licklider">J.C.R. Licklider</a>, who seems to have been an amazing visionary across wide areas of computing. He proposed the idea of an &#8220;Intergalactic Computer Network&#8221; around the time he joined ARPA in 1962. These ideas took hold with Larry Roberts, who was appointed as the first program manager for ARPANET. The ideas of Kleinrock, Davies and Baran were adopted in the ARPANET design, with the &#8220;Interface Message Processors (IMPs)&#8221; being the first routers and ARPANET the first wide-area packet-switched network. Prior to this, Roberts apparently built a long-haul connection between two computers on opposite sides of the U.S. using a telephone circuit, illustrating by counterexample that packet switching really was the way to go. Leiner calls <em>this</em> the first wide-area computer network.
Realizing that the protocols that he was creating to support internetting were going to need to be implemented on various operating systems, Kahn teamed up with Vint Cerf, who had experience with the implementation of NCP (which can be viewed as the predecessor of TCP/IP) on ARPANET. The idea that TCP/IP had to support heterogeneous networks was another big departure from the networks of the past&#8211;telephone networks were much more homogeneous.</p><p>My favorite paper on how the Internet developed is the 1988 paper by David Clark: <a href="http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-clark.pdf">The Design Philosophy of the DARPA Internet Protocols</a>. With the benefit of 15 years of experience, Clark&#8211;often referred to as &#8220;The Architect of the Internet&#8221;&#8211;clearly lays out what he, Kahn, Cerf, and many others were trying to achieve when the Internet was designed. There was a lot more to the Internet than the TCP and IP protocols, and Clark lays out a detailed list of design goals, all in service of one top-level goal: to interconnect <em>existing</em> networks. That meant, importantly, that you couldn&#8217;t impose changes on the existing networks to make them &#8220;fit&#8221; the Internet architecture: the Internet had to be able to accommodate widely varying types of networks, e.g., those with no reliable delivery mechanisms or ability to provide notifications of failures. 
The ability to accommodate networks in spite of wide variations among their capabilities, including networks not yet invented, was quite a radical idea, and drove a lot of critical design decisions, such as the adoption of a best-effort service model.</p><p>The list of second-order design goals starts with:&nbsp;</p><blockquote><p><em>Internet communication must continue despite loss of networks or gateways.&nbsp;</em></p></blockquote><p>I feel obligated to point out that there is no mention in this paper of the Internet being designed to withstand nuclear attack. Unfortunately, I was responsible for perpetuating that misconception in an early edition of &#8220;<a href="https://book.systemsapproach.org/">Computer Networks: A Systems Approach</a>&#8221;. Leiner <em>et al.</em> debunk this myth and explain its origins in their paper, but it&#8217;s a remarkably persistent one. &#8220;[C]ontinue despite loss of networks or gateways&#8221; is as close as it gets.&nbsp;</p><p>Of the other design goals, the one that I keep coming back to is &#8220;distributed management of resources&#8221;. I&#8217;m personally very interested in the many ways in which the decentralized architecture of the Internet has been eroding, and have written prior posts on this topic <a href="https://systemsapproach.substack.com/p/decentralizing-the-internet-again">here</a> and <a href="https://systemsapproach.substack.com/p/what-can-we-learn-from-internet-outages">here</a>. As far back as 2016 I began to take an interest in blockchains as a possible path to a more decentralized approach, but these days I mostly find myself agreeing with <a href="https://www.mollywhite.net/">Molly White</a> and her ironic &#8220;<a href="https://web3isgoinggreat.com/">Web3 is Going Just Great</a>&#8221; site, which catalogs the steady stream of Web3 failures and grifts. 
Fortunately, there is more to the decentralization of the Internet than blockchains and Web3.</p><p>As we discussed <a href="https://systemsapproach.substack.com/p/decentralization-strikes-back">last year</a>, there are real signs of life for the decentralization of social media thanks to the emergence of <a href="https://www.w3.org/TR/activitypub/">ActivityPub</a> and the Fediverse. Just as I was putting finishing touches on my slide deck, there was yet another meltdown in the world of centralized platforms, with the CEO of Reddit deciding that <a href="https://mashable.com/article/reddit-api-apollo-subreddits-protest">sudden changes to API pricing</a>&#8211;to the point that lots of third-party applications become economically unsustainable&#8211;were such a good idea at Twitter that he would bring the same approach to Reddit. The response from volunteer moderators at Reddit&#8211;the unpaid community members who make the platform valuable to users&#8211;has been swift and, in some cases, <a href="https://www.theverge.com/2023/6/17/23764729/reddit-users-pics-gifs-subreddits-john-oliver">hilarious</a>. But the aspect of this story that really caught my eye was the rapid rise of ActivityPub-powered Reddit alternatives <a href="https://github.com/ernestwisniewski/kbin">Kbin</a> and <a href="https://join-lemmy.org/">Lemmy</a>. The number of users of these services, which are interoperable with each other and connect to the rest of the <a href="https://systemsapproach.substack.com/p/decentralization-strikes-back">Fediverse</a>, has roughly tripled in a week (off a low base). 
Furthermore, there is a flurry of activity to beef up the implementations so that moderation tools&#8211;an essential part of any social media platform&#8211;can keep up with the growth in users.&nbsp;</p><p>As an active user of another part of the Fediverse, <a href="https://aus.social/@Drbruced">Mastodon</a>, I can tell you that there is <em>lots</em> of discussion among Fediverse users at present around how best to deal with all this growth and what it would take to scale the Fediverse up to the size of the big centralized platforms (which are still orders of magnitude larger than the Fediverse). It&#8217;s really too early to tell how this will all play out, and there is certainly more to the Internet than social media platforms, but I do feel a degree of optimism that the pendulum is swinging back towards the decentralized approaches that enabled the Internet to grow and flourish over the last sixty years.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to making our work freely available to all. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>In a timely coincidence, Steve Bellovin (well known as an Internet security expert) has published a <a href="https://www.cs.columbia.edu/~smb/papers/netnews-hist.pdf">first-hand history of netnews</a> that is full of interesting details on how it developed in parallel with the Internet. And while I was digging around for bits of Internet history, I rediscovered the note from 2000 written by Bob Kahn and Vint Cerf regarding <a href="http://amsterdam.nettime.org/Lists-Archives/nettime-l-0009/msg00311.html">Al Gore&#8217;s role in the Internet</a> &#8211; an assessment that is more favorable than most people would expect. 
As they say &#8220;No one person or even small group of persons exclusively &#8216;invented&#8217; the Internet.&#8221;</p>]]></content:encoded></item><item><title><![CDATA[Open Source: Another Value Proposition]]></title><description><![CDATA[I must create a System, or be enslav&#8217;d by another Man&#8217;s; I will not Reason and Compare: my business is to Create.]]></description><link>https://systemsapproach.substack.com/p/open-source-another-value-proposition</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/open-source-another-value-proposition</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 12 Jun 2023 07:58:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f32e4863-61ac-4e3d-8abf-0b4d2b03e648_1920x2880.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><em>I must create a System, or be enslav&#8217;d by another Man&#8217;s; I will not Reason and Compare: my business is to Create. - William Blake</em></p></blockquote><p>That&#8217;s the quote we chose to open our first textbook back in 1995, in an effort to capture our belief in the importance of system-building. In a follow-up to last week&#8217;s <a href="https://open.substack.com/pub/systemsapproach/p/escape-from-big-publishing?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">post</a> about the writing process, this week we look at how building systems with open source software has shaped our books.&nbsp;</p><div><hr></div><p>Open source software has become an integral part of today&#8217;s technology marketplace. We&#8217;ve written about <a href="https://systemsapproach.substack.com/p/venn-diagram-engineering">weaponizing</a> open source software in the context of our experiences using SDN to disrupt the networking industry. 
It&#8217;s also common to hear people talk about business models for <a href="https://www.forbes.com/sites/forbestechcouncil/2022/05/31/how-companies-can-monetize-open-source-software/?sh=20d6f033a359">monetizing</a> open source software, for example, by selling support (think <a href="https://www.redhat.com">RedHat</a>) or cloud services (think <a href="https://www.databricks.com/">Databricks</a> operationalizing Apache Spark). Discussions of this kind typically reduce to an exercise in defining a value proposition for open source software; if you&#8217;re going through that exercise in a business setting, that value has to be direct and quantifiable.</p><p>But there&#8217;s another way to think about value, which struck me recently as we went to press with our <a href="https://5g.systemsapproach.org/">Private 5G book</a>. Without consciously adopting it as a strategy, I realized that all four of the Systems Approach books we&#8217;ve written recently are strongly influenced by&#8212;and in most cases, organized around&#8212;open source software. The <a href="https://5g.systemsapproach.org/">Private 5G</a> book describes Aether as a combination of SD-RAN, SD-Core, and SD-Fabric (all open source); the <a href="https://sdn.systemsapproach.org/">SDN</a> book describes a software stack that starts with a P4-programmable forwarding plane, and builds through a Switch OS, a Network OS, and a set of control applications (all open source); and the <a href="https://ops.systemsapproach.org/">Edge Cloud Operations</a> book describes how to build a cloud management platform from a combination of over 20 open source projects ranging from Kubernetes to <a href="https://github.com/keycloak/keycloak">Keycloak</a> to <a href="https://www.elastic.co/elastic-stack">Elastic Stack</a>. 
Even the <a href="https://tcpcc.systemsapproach.org/">TCP Congestion Control</a> book leans heavily on the implementation in the Linux kernel (which as I&#8217;ve argued in <a href="https://systemsapproach.substack.com/p/tcp-the-p-is-for-platform">another post</a>, is effectively the <em>specification</em> of the algorithm). And as Bruce reminded me in his last post, one of the reasons I decided to write the original <a href="https://book.systemsapproach.org/">Computer Networks</a> book was that I was able to leverage open source protocol stacks that I had been working with. I should have seen this pattern long before now; I thought I was just being opportunistic.</p><p>I&#8217;m not sure what name to put on that value, or even the right way to characterize it. Part of it is my own internal drive to understand how complex systems work, and part is wanting to share that understanding with anyone struggling with the same questions&#8212;but it definitely feels like value to me. I guess the easy label would be <em>educational value,</em> but with understanding comes empowerment to make the ideas your own, adapt them to your purposes, and ultimately, to innovate. None of these are easily quantifiable, and I have no idea how to go about computing the Return on Investment, but that doesn&#8217;t make the value any less real.</p><p>But I&#8217;m getting ahead of myself. I think there&#8217;s more than meets the eye to the conclusion that open source software leads to understanding and know-how. For example, it&#8217;s obvious that open source makes it possible to see the engineering details of a given software tool. That&#8217;s important, but what I have found to be equally true is that having access to a breadth of implementation detail is essential to having a deep conceptual framework for complex systems like the Internet or the cloud. 
Or said another way, seeing the &#8220;Powerpoint rendition&#8221; of a system often leads to a superficial understanding unless you also have an opportunity to look &#8220;under the hood&#8221;, and ideally, play with the code. Here are three other observations that I came to appreciate about the topics we&#8217;ve been reporting on in our book series.</p><p>Starting with SDN, it is well known that the objective was to catalyze a horizontal market around the historically vertical networking industry, and to this end, our book describes all the components that go into building an end-to-end Software-Defined Network. There comes a point about three-quarters of the way through the book where we acknowledge that everything discussed up to that point merely <em>&#8220;reproduce(s) functionality that already exists&#8221;</em>, at which point we are finally ready to start talking about SDN&#8217;s supposed value proposition: the ability to rapidly evolve and customize the network. But if you stop and think about it, that first 75% is a complete blueprint for how to build a modern high-speed L2/L3 switch, which until fairly recently has been the proprietary know-how of a handful of device and chip vendors. (And this understanding is now finding its way into undergraduate networking courses; see for example <a href="https://gitlab.com/purdue-cs422/spring-2023/public/tree/main/assignments">CS 422 at Purdue</a>.) It&#8217;s impossible to predict how a widespread understanding of the internals of packet forwarding will impact the future of networking (even at a time when the <a href="https://systemsapproach.substack.com/p/the-future-of-p4-one-perspective">commercial viability</a> of programmable hardware is being questioned), but I have no doubt that it will.</p><p>There is a similar story about 5G, arguably with the potential for even greater impact. An in-depth understanding of the mobile cellular network has been known only to a handful of incumbents for 40 years. 
The availability of an open, software-defined RAN and Mobile Core changes that dynamic. And even though the MNOs are starting to pull back from Open-RAN, presumably because the incumbent vendors have given them good business reasons to do so, I think it&#8217;s fair to say that there&#8217;s no turning back. It&#8217;s now the case that <a href="https://systemsapproach.substack.com/p/private-5g-as-easy-as-wi-fi">even I</a> am able to bring up a 5G network; surely others will figure out ways to leverage that know-how in innovative ways. Making it easier for anyone to do that is the motivation behind the book&#8217;s <a href="https://5g.systemsapproach.org/software/overview.html">hands-on appendix</a>, which is starting to find its way into courses like <a href="https://gaia.cs.umass.edu/cs596_spring_2023/">CS 596 at UMass</a>.</p><p>The final example is from how to operate an edge cloud. The know-how needed to operationalize an inert pile of code (whether it&#8217;s open source or proprietary) is substantial enough that we&#8217;re willing to pay other people to do it for us. Ease-of-use is often worth paying for, but doing so should not be due to a belief that wizardry is required. This is especially true since all the tools you need to operationalize cloud services are readily available as open source (complete with excellent tutorials). At the very least, you should know what you&#8217;re paying for when you outsource the problem. It&#8217;s also the case that the steepness of the learning curve is partially related to the newness of the technology, with lots of overlapping tools competing for mind-share. Once enough people understand the space in a principled way, we should expect to see a distillation that simplifies the toolset, hopefully lowering the barrier to entry.</p><p>At this point it&#8217;s fair to observe that <em>someone</em> has to pay for it, and it&#8217;s difficult to fund open source software for its educational value alone. 
Fortunately, the educational angle is usually not the whole story. Sometimes the technical people get out in front of the business people and make their code available without an obvious business model. That happened many years ago at Bell Labs, and still happens with researchers who value impact over monetary reward. Sometimes governments pay for it as a matter of public policy (or national security). That is what&#8217;s happening today in the 5G space. Sometimes companies take a long-term perspective as a way of growing a market rather than focusing on their share of an existing market. You could argue Google has done that by releasing tools like Kubernetes, or Nicira with <a href="https://www.openvswitch.org/">Open vSwitch</a>. But I suspect that most of the time, it&#8217;s software for which someone originally put together a business plan that ends up delivering this indirect and unquantifiable value (whether or not the business plan ever panned out).</p><p>These observations may be obvious in hindsight, but I think they are easy to overlook in a field so often focused on entrepreneurship and business value. There&#8217;s the marketplace of products, but also the marketplace of ideas, and how those ideas are manipulated impacts all of us. Learning from open source software&#8212;as a means of internalizing the know-how that went into building it&#8212;is one way to consume it. Our books aim to be an aid in finding and extracting that value. 
As we noted in <a href="https://open.substack.com/pub/systemsapproach/p/escape-from-big-publishing?r=cxpek&amp;utm_campaign=post&amp;utm_medium=web">our last post</a>, we are primarily motivated by our potential impact, and open source software is one of the best ways we&#8217;ve found to deliver it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our work freely available. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>Regular readers of our posts will know we like to stay on top of developments in <a href="https://systemsapproach.substack.com/p/quantum-reality">quantum computing</a>, and we were excited to read <a href="https://scottaaronson.blog/?p=7321">Scott Aaronson&#8217;s review</a> of a new book called &#8220;Quantum Supremacy&#8221;. It&#8217;s a highly entertaining take-down of what seems to be a deeply flawed book, which also serves as a quick summary of all the mistakes you should avoid when talking about quantum computing. We&#8217;ve toyed with writing a book on quantum computing ourselves but this review certainly showcases the perils facing the non-expert author. 
And in another oft-hyped topic area, we enjoyed this <a href="https://www.linuxfoundation.org/blog/are-open-ai-models-safe">guest blog post</a> at the Linux Foundation on the benefits of open source large language models over their proprietary counterparts.&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[Escape from Big Publishing]]></title><description><![CDATA[How Open Source provided the path to achieve our publishing goals]]></description><link>https://systemsapproach.substack.com/p/escape-from-big-publishing</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/escape-from-big-publishing</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 29 May 2023 07:56:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a38b886a-5aa5-478c-ab6f-50721b9656e8_1920x1281.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most of the time we write about technology topics close to our hearts, and open source software is certainly one of those topics. However, this post is also about the process of writing books, which we&#8217;ve been doing for almost 30 years now. How we came to embrace open source as part of our book publishing endeavors is a story going back to the early days of Systems Approach.</p><div><hr></div><p>There was a ripple in the world of academic publishing a couple of weeks back when <a href="https://www.theguardian.com/science/2023/may/07/too-greedy-mass-walkout-at-global-science-journal-over-unethical-fees">the entire editorial board of </a><em><a href="https://www.theguardian.com/science/2023/may/07/too-greedy-mass-walkout-at-global-science-journal-over-unethical-fees">Neuroimage</a></em><a href="https://www.theguardian.com/science/2023/may/07/too-greedy-mass-walkout-at-global-science-journal-over-unethical-fees"> walked out</a> in protest at the high publication charges being imposed by Elsevier. 
Our own story of freeing content from Elsevier is a bit less dramatic but still makes for an interesting tale. Let&#8217;s start with the origins of <em>Computer Networks: A Systems Approach</em>, which we first published in 1996, when both the Internet and the publishing world were very different places from what they are today.</p><p>Larry started the book on his own in 1994, and there was considerable interest among publishers in signing a contract for a new textbook on networking. The Internet had just become mainstream enough to make the cover of <em>Time</em> magazine, as Internet access started to spread beyond academic and research institutions. One of the distinguishing features of Larry&#8217;s proposal was that it put the Internet and its protocols at the heart of networking, whereas other books at the time treated TCP/IP as just one example among many competing approaches. In the end, Larry signed with then-independent <a href="https://en.wikipedia.org/wiki/Morgan_Kaufmann_Publishers">Morgan Kaufmann Publishers</a> (MKP), whose main claim to fame was that they also published the iconic <a href="https://books.google.com.au/books/about/Computer_Architecture.html?id=XX69oNsazH4C&amp;redir_esc=y">Computer Architecture</a> book by Hennessy and Patterson. Not only did we like the idea of being associated with that book, but working with a small publisher, we discovered, meant that you had the attention of the entire company when your book was getting ready for publication.&nbsp;</p><p>Larry brought me on as a co-author in 1995 when he realized what a large task he had bitten off. We&#8217;d collaborated on research projects and academic papers before, so we knew that we could work together effectively, but I don&#8217;t think either of us quite appreciated how much work it takes to produce 500-plus pages of text and the images to go with them. 
I found writing a book that will be in print for years to be qualitatively different from writing academic papers, where a reviewer is likely to tell you what you did wrong and small errors may well go unnoticed. This work consumed our weekends and evenings for months. Often I would motivate myself to keep going by imagining that one day our work would be read by students of networking who would be influenced as much by our book as I had been when reading the works of Comer, Tanenbaum, and other giants of the field.&nbsp;</p><p>We submitted the final manuscript to the publisher late in 1995. Not entirely by coincidence, I also ended up deciding that day to leave my research job at Bellcore and go to work at Cisco. Having just finished a book on networking, I knew what I cared about most in the technology landscape, and I was pretty sure Cisco was going to be a better place to push forward on that technology than Bellcore. I wasn&#8217;t wrong.</p><p>Once the book was published, the team at MKP set out to get it adopted by as many professors as possible, and the fact that we had taken a &#8220;<a href="https://www.systemsapproach.org/about.html">Systems Approach</a>&#8221; and centered our work on the Internet gave us a good position in the market. We never got to be the most popular networking textbook, but we carved out a good niche in what turned out to be a rapidly growing market.</p><p>One thing we didn&#8217;t quite appreciate was that textbook publishers <em>really</em> like to publish new editions every couple of years. For one thing, it gives their sales teams something new to talk about, but more importantly for the publisher, it stifles the used book market. If you make enough changes from edition N to edition N+1, then students will find they can&#8217;t really get by with a used copy, so they are pushed to buying the new edition. More book sales! 
We were happy enough to write the second edition in any case, since we&#8217;d found plenty of things in the first edition that we wanted to change&#8211;in fact Larry came to refer to the first edition as the Beta version. Security and wireless networks were two topics of note that first appeared in the second edition.</p><p>It was when we got asked to produce a third edition that Larry and I realized our incentives were not well-aligned with those of the publisher. Thinking back to what motivated us to write the first edition, it wasn&#8217;t about <em>selling</em> as many books as possible&#8211;it was about having an impact on the upcoming generation of students. We wanted our books to be <em>read. </em>Making used books obsolete was irrelevant to our goals. Furthermore, we&#8217;d worked hard to make sure our book focused primarily on timeless, fundamental aspects of networking. We didn&#8217;t aim to be the encyclopedia of networking as it exists at one point in time, but rather the guide for future networking engineers and protocol developers to help them build on the foundations laid in the last 30-plus years. So even though networking standards do change, the fundamentals change much more slowly.&nbsp;</p><p>In parallel with this development, a chain of acquisitions left us writing for Elsevier, not MKP. (Readers who&#8217;ve worked for tech startups may recognize this phenomenon.) Our original, very supportive editorial team had moved on. Now we were just a mid-sized book project in the giant publishing machine. This made the non-alignment of incentives ever more apparent. 
We settled into a multi-year, low-key struggle with our publisher in which they were always asking us for a new edition&#8211;sometimes before the last one had even made it into print&#8211;and we were always trying to delay the next update.&nbsp;</p><p>Larry&#8217;s involvement in open source software goes back a long way, including the <em>x</em>-kernel code that was included in the first edition. But it was in 2012 that I really started to pay attention to open source, thanks to my joining the Nicira team and making some small contributions to the <a href="https://www.openvswitch.org/">Open vSwitch project</a>. Coincidentally, I had to take a medical leave of absence from work in mid-2012 after a serious cycling accident, and since I was neither working nor sleeping much for a couple of months, I had a lot of time for creative thinking. It was during one of those sleepless nights that I came to the realization that an open source model for our textbook would be the best path forward for us. We could achieve our main goal of making the book accessible to the widest possible audience, while also leveraging the input of the broad networking community to keep on making updates to the book. Our vision was that the book would become an open source <a href="https://github.com/SystemsApproach/book">repository</a> just like a software project, taking contributions from the community, but with Larry and me still retaining the ability to steer the overall direction of the book. And every once in a while we could &#8220;release a version&#8221; to produce the next published edition for Elsevier.&nbsp;&nbsp;</p><p>The big problem with this plan was how to sell it to Elsevier. Clearly if the book was put online for free, that would negatively impact their book sales. But it would not drive them to zero. 
In fact Elsevier already had a series of books that were simply printed versions of RFCs on selected topics, which for some reason people paid for rather than reading the RFCs freely online. So we made the pitch that (a) the book would be better if we did it this way (b) they might not get another edition any other way, and (c) it wasn&#8217;t going to be the end of revenue for the book. To their credit, when we had the meeting to discuss this plan, one of the Elsevier editors made the analogy to the Oxford English Dictionary: a book that (apparently) loses money on every sale, but which is a great brand asset for its publisher (Oxford University Press). &#8220;We&#8217;re happy to be your OED&#8221;, we said. (It&#8217;s an open question whether we&#8217;ve done much to help the Elsevier brand, which obviously has bigger problems these days.)</p><p>It took <em>several</em> <em>years</em> to work out all the details, but today we are free to publish &#8220;<em>Computer Networks: A Systems Approach</em>&#8221; under a <a href="https://creativecommons.org/licenses/">Creative Commons license</a>, and we make small fixes to the online version frequently. A sixth edition, pulled from our repo, was published by Elsevier in 2021, and while sales aren&#8217;t anything like they were on the first few editions, they are indeed well above zero. There are now plenty of teachers and students using the <a href="https://book.systemsapproach.org/">latest version</a> of the book without waiting for the multi-year update cycles that a publisher can manage. We even had a professor reformat the book with a <a href="https://opendyslexic.org/">large font</a> that aids dyslexic students&#8211;a benefit of open source we had never anticipated. 
Getting control over our content also provided the starting point for a number of more focused <em>Systems Approach </em>books, such as our one on <a href="https://www.systemsapproach.org/books.html">TCP Congestion Control</a>.</p><p>So this is also the origin story of Systems Approach, LLC. Last week we published our fourth book in the <a href="https://www.systemsapproach.org/books.html">Systems Approach series</a>: <em>Private 5G: A Systems Approach</em>. Like all our books now, it is an open source project and free to use under a Creative Commons license. Some of the details about our choice of license are covered in our <a href="https://gaia.cs.umass.edu/sigcomm_education_workshop_2020/papers/sigcommedu20-final5.pdf">submission</a> to the 2020 Sigcomm Education Workshop. As we noted in that paper, open source software provides a model for textbooks that aligns well with where we believe the world of networking is heading, leveraging the input of the networking community and enabling us to share our efforts as widely as possible.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we remain committed to making our content freely available. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Another thing we can do with our open source content is license translations, and we recently did that with Edmar Candeia Gurj&#227;o producing a <a href="https://5g.smartideia.com.br/index.html">Portuguese translation</a> of our 5G book. That book is now complete enough that we have made it available in both <a href="https://www.systemsapproach.org/books.html#5gbook">print and eBook</a> editions. And if you always wanted a <a href="https://www.systemsapproach.org/store/p5/Systems_Approach_Mug.html">Systems Approach coffee mug</a>, now we have them too.</p><p>Our <a href="https://www.edx.org/course/introduction-to-magma-cloud-native-wireless-networking">course on Magma</a>, which we created two years ago, has now been revised and expanded. It&#8217;s a good place to start if you want to learn about open source approaches to mobile networking. </p><p>Bruce is gearing up to give a lecture about &#8220;60 years of Networking&#8221; as part of the <a href="https://www.ed.ac.uk/informatics/60-years-of-computer-science-and-ai/60-years-of-computer-science-and-ai-events/academic-and-industry/distinguished-lecture-bruce-davie">60th anniversary event</a> for Computer Science and AI at the University of Edinburgh. One of the themes will be the need to return to the decentralized roots of the Internet. 
Which is a reason why you should <a href="https://discuss.systems/@SystemsAppr">follow us on Mastodon</a> and admit that <a href="https://www.theatlantic.com/technology/archive/2023/05/elon-musk-ron-desantis-2024-twitter/674149/">Twitter is no longer fit for purpose</a>. </p>]]></content:encoded></item><item><title><![CDATA[The Future of P4, Revisited]]></title><description><![CDATA[The P4 Workshop was a couple weeks ago, and as General Chair, I went into it with a fair amount of trepidation. My concern was that Intel&#8217;s announcement earlier this year that they&#8217;re cancelling development of the Tofino 3 switching chip would have a chilling effect, not only on the Workshop, but also on the future of P4. That concern has been voiced in several forums]]></description><link>https://systemsapproach.substack.com/p/the-future-of-p4-one-perspective</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/the-future-of-p4-one-perspective</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 15 May 2023 07:55:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c7e86fc6-aedf-4426-9e0a-417840f7a078_2049x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The P4 workshop has now been chaired by both co-founders of Systems Approach, but this year the P4 landscape has shifted again with Intel&#8217;s announcement that Tofino 3, its flagship P4-powered switching chip, would not go ahead. There is much more to P4 than Tofino, however, as we explore in this week&#8217;s newsletter.</p><div><hr></div><p>The <a href="https://opennetworking.org/events/2023-p4-workshop/">P4 Workshop</a> was a couple weeks ago, and as General Chair, I went into it with a fair amount of trepidation. 
My concern was that <a href="https://www.world-today-news.com/tofino-network-solutions-were-surprisingly-cut-by-intel/">Intel&#8217;s announcement</a> earlier this year that they&#8217;re cancelling development of the Tofino 3 switching chip would have a chilling effect, not only on the Workshop, but also on the future of P4. That concern has been voiced in several forums, including <a href="https://sigcomm.slack.com/">SIGCOMM&#8217;s Slack workspace</a>, with members of the <a href="https://p4.org/governance/">P4 Advisory Board</a> making reassuring pronouncements in various settings. (See for example, Nick McKeown&#8217;s post to the <a href="https://groups.google.com/a/lists.p4.org/g/p4-announce/c/frXi_jjmawE?pli=1">P4 Forum</a>, and Nick along with Nate Foster and Jennifer Rexford discussing the future of Network Programmability on <a href="https://www.youtube.com/channel/UCAtFAG5JdQrHac6ArIWJ-hw">The Networking Channel</a>).</p><p>I won&#8217;t try to give a point-by-point replay of what Nick, Nate, and Jen and others have been saying, except to observe that at a high level it can be summarized as follows:</p><blockquote><p><em>Programmable Networks&nbsp; &gt;&gt;&nbsp; P4 Language&nbsp; &gt;&gt;&nbsp; Tofino Switching Chip</em></p></blockquote><p>They point out, for example, that Tofino is just one of many interesting backend targets for P4 programs (<a href="https://systemsapproach.substack.com/p/the-accidental-smartnic">SmartNICs</a> and <a href="https://systemsapproach.substack.com/p/infrastructure-processing-units-balancing?r=cxpek">IPUs</a> being the next &#8220;big deal&#8221;) and P4 is one of many tools being used to inject functionality into the end-to-end network path (DPDK and eBPF being two active projects that people are integrating with P4). Ultimately, the value of programmability comes from having visibility and control over the network, and there are many complementary approaches to making that happen. 
With that background, I do have three takeaways from what turned out to be an interesting and vibrant two days at the P4 Workshop (despite my initial concerns).</p><p>First, we&#8217;re often so focused on P4 as a tool to program the forwarding pipeline that we forget the other half of its value proposition: It also provides a way to specify the behavior of a pipeline (independent of how that pipeline is implemented). We talk about this idea, and the value of being able to auto-generate the Control API, in the P4 chapter of our <a href="https://opennetworking.org/events/2023-p4-workshop/">SDN Book</a>. Rob Sherwood made a <a href="https://www.youtube.com/watch?v=MPESVsy1Ejo">similar argument</a> at the P4 Workshop. It is now becoming a reality as companies like Google are starting to use such behavioral definitions as a Hardware Abstraction Layer (see <a href="https://www.youtube.com/watch?v=bk2i1Y42wls">Parveen Patel&#8217;s Keynote</a> at the Workshop). This makes me hopeful that we are rapidly approaching the day when a P4 program (plus the generated P4RT interface) will become the standard way network providers specify their requirements to network vendors, and proposed new features (whether proprietary or standard) will be specified by a P4 program (potentially augmenting the intuition and design rationale presented in an RFC).</p><blockquote><p><em>As an aside, I couldn&#8217;t help but notice the similarities between the architecture Parveen described and the way P4 has been used to program the forwarding plane of the <a href="https://5g.systemsapproach.org/core.html#p4-implementation">5G Mobile Core</a>.&nbsp; Both include a P4-based &#8220;abstract forwarding model&#8221; that&#8217;s independent of the underlying implementation details</em>.</p></blockquote><p>Second, it is common to divide forwarding pipelines into &#8220;programmable&#8221; versus &#8220;fixed function&#8221;, but this glosses over what might be the more important distinction: whether 
the pipeline is <em>open</em> or <em>closed</em>. Even &#8220;fixed function&#8221; pipelines are increasingly flexible&#8211;it&#8217;s just a question of how restrictive the vendor is about who they allow to make changes. This restriction may have the biggest impact on researchers who want to experiment with a new feature (especially ones that do not yet have a proven market), but maybe less so in the commercial world where incentives to make changes are (arguably) well-defined. Using P4 as the &#8220;spec language&#8221; (as I just outlined) has the potential to accelerate the process on the commercial side. On the research side, there is a strong argument in favor of using Tofino 2 to demonstrate the feasibility and value of new ideas (12.8 Tb/s still makes for a credible Proof-of-Concept), and repeating the refrain yet again, P4-as-spec makes for a compelling tech transfer story. If that were to happen, it would be interesting to see how vendors and chip designers adapt to reduce their spec-to-hardware implementation overhead. I would argue that programmable forwarding planes have a time-to-market advantage even for closed solutions.</p><p>Third, our focus on quantifiable metrics makes it easy to forget about the less quantifiable aspects of programmability. At its core, P4 is a programming language that does a good job of abstracting the essence of a packet forwarding pipeline. It is enormously impressive that a P4 program can be compiled onto a <a href="https://sdn.systemsapproach.org/switch.html?highlight=pisa#forwarding-pipeline">PISA-based</a> switching chip that has the same performance, die area, cost, and power consumption as a fixed-function ASIC (and that equivalency was probably necessary for P4 to be taken seriously), but hitting that quantifiable mark is not sufficient. Well-designed languages are software tools that bring clarity to the intellectual challenge of programming. 
For me, the biggest &#8220;aha&#8221; moment of the Workshop was when Chris Sommers (long-time P4 contributor and new co-Chair of the API Working Group) started rattling off all the functions he&#8217;d been involved in writing in P4, and remarking on how natural P4 makes that process. There is certainly room to add new language features as P4 expands its domain to include SmartNICs and IPUs&#8212;as Chris and the other WG chairs are now pursuing&#8212;but having an existing target to evolve is a great position to be in.</p><p>One common thread that weaves its way through these three takeaways is that Intel&#8217;s cancellation of the Tofino 3 chip is a potentially helpful forcing function: The P4 community has to demonstrate the value of the language without being buttressed by ever-improving performance numbers that have more to do with 7nm semiconductor technology than anything networking people have done. At last month&#8217;s workshop, I saw a lot of evidence that exactly that is happening. The march to programmable networks is inevitable (in my view), and I&#8217;m still optimistic that P4 will play a central role.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://systemsapproach.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Systems Approach is reader-supported and we are committed to keeping our books and articles open to all. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div><hr></div><p>We continue to run into people who want to translate our books into other languages, and if you are one of them, you should definitely reach out to us. The latest entrant is a <a href="https://5g.smartideia.com.br">Portuguese translation</a> of our Private 5G book by Edmar Candeia Gurj&#227;o. You can find other translations of our books <a href="https://www.systemsapproach.org/books.html">here</a>. </p><p>You can follow us on <a href="https://discuss.systems/@SystemsAppr">Mastodon</a>. </p>]]></content:encoded></item><item><title><![CDATA[How SDN Came to the Wide Area]]></title><description><![CDATA[While the early success of SDN was in local area networks (and particularly the special case of datacenter networks) it didn&#8217;t take too long before it made its impact in the wide area. 
The WAN applications of SDN can be further subdivided: there was the application of SDN to traffic engineering of inter-datacenter networks and then there is what came to be known as SD-WAN.]]></description><link>https://systemsapproach.substack.com/p/how-sdn-came-to-the-wide-area</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/how-sdn-came-to-the-wide-area</guid><dc:creator><![CDATA[Bruce Davie]]></dc:creator><pubDate>Mon, 01 May 2023 08:10:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ecd89d62-2720-45d9-a812-fb1c29361285_1920x1440.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week I recorded a podcast with John Spiegel and Jaye Tillson (available <a href="https://soundcloud.com/sse-forum/arc-of-networking-with-bruce-davie?utm_source=clipboard&amp;utm_medium=text&amp;utm_campaign=social_sharing">here</a>) and one of the first things we discussed was the emergence of SD-WAN (Software-Defined Wide Area Networks) as I saw it. It&#8217;s a story with a lot of history that provides the jumping off point for this week&#8217;s newsletter.</p><div><hr></div><p>While the early <a href="https://systemsapproach.substack.com/p/sdn-and-the-alignment-of-planets">success of SDN</a> was in local area networks (and particularly the special case of datacenter networks) it didn&#8217;t take too long before it made its impact in the wide area. The WAN applications of SDN can be further subdivided: there was the application of <a href="https://systemsapproach.substack.com/p/how-traffic-engineering-became-a">SDN to traffic engineering</a> of inter-datacenter networks (notably from <a href="https://dl.acm.org/doi/10.1145/2486001.2486019">Google</a> and <a href="https://dl.acm.org/doi/10.1145/2486001.2486012">Microsoft</a>) and then there is what came to be known as SD-WAN. 
This part of the story relates primarily to enterprise WANs&#8211;a huge business but not something that gets much coverage in the discussions of SDN at academic conferences. So let&#8217;s look a bit more closely at enterprise WANs and why SDN turned out to be a good fit for the challenges in that environment.</p><p>My introduction to enterprise WANs came in the early days of MPLS, around 1996. My team at Cisco had published the first drafts on Tag Switching at the IETF, which would eventually lead to the creation of the MPLS working group and all the RFCs that followed. As described in a <a href="https://systemsapproach.substack.com/p/was-mpls-necessary">previous post</a>, we were approached by a team at AT&amp;T whose main problem, in essence, was that their enterprise WAN business was too successful. Their core business was building WANs using Frame Relay virtual circuits to connect the offices and datacenters of enterprises. Deploying a WAN for one customer entailed provisioning a set of frame relay circuits to interconnect the sites, plus configuring a router at every site to manage the routing of traffic over the WAN. The complexity of managing these circuits and routing configurations was becoming overwhelming both because of the popularity of the service and because of the increasing desire to provide full-mesh connectivity (or something close to it) among sites.&nbsp;</p><p>At the time, one of the options that was being considered as an alternative to Frame Relay was to use IPSEC tunnels across the Internet to interconnect the sites. Leaving aside the fact that the Internet in 1996 was way less ubiquitous than it is today, the big downside of that approach was that it actually didn&#8217;t do much to reduce the complexity of configuration. You would need to configure as many IPSEC tunnels as Frame Relay circuits, and you still need to configure the routing overlay on top of all those tunnels to forward traffic between all the sites. 
In rough terms, both Frame Relay and IPSEC tunnels require <em>n</em><sup>2</sup> configuration steps, where <em>n</em> is the number of sites. MPLS/BGP VPNs came out ahead by reducing that configuration cost to order <em>n</em>, even with full mesh connectivity among sites. The full story of how that works is in <a href="https://datatracker.ietf.org/doc/html/rfc2547">RFC 2547</a> and in the <a href="https://www.amazon.com/MPLS-Technology-Applications-Kaufmann-Networking/dp/1558606564">book</a> I wrote with Yakov Rekhter. One thing that we made sure of was that our system had no central point of control, because in 1996 we knew that central control was not an option for any networking solution aiming to scale up.&nbsp;</p><p>A few years later (2003) I gave the SIGCOMM talk for which I am most well known &#8211;&nbsp; &#8220;<a href="https://www.slideshare.net/drbruced/mpls-outrage">MPLS Considered Helpful</a>&#8221; (in the outrageous opinion session of course). Talking to people afterwards I was struck by the lack of awareness of how widely deployed MPLS/BGP VPN service was at that point. There were over 100 service providers using it to provide their enterprise WAN services by 2003, but that was completely invisible to most of the SIGCOMM community (perhaps because universities don&#8217;t rely on such services and because enterprise network admins don&#8217;t show up at academic conferences).&nbsp;</p><p>Fast forward to 2012 and MPLS VPNs were the <em>de facto</em> choice for enterprise WANs. But many things had changed since 1996, and those changes were about to align to create the conditions for SD-WAN to emerge. Importantly, the idea that central control was a non-starter had been effectively challenged with the rise of SDN in other settings. 
Scott Shenker&#8217;s influential talk &#8220;<a href="https://www.youtube.com/watch?v=YHeyuD89n1Y">The Future of Networking, and the Past of Protocols</a>&#8221; had made the compelling case for why central control was needed in SDN, and developments in distributed systems had enabled scalable, fault-tolerant centralized controllers such as the one we built for datacenter <a href="https://www.usenix.org/conference/nsdi14/technical-sessions/presentation/koponen">SDN at Nicira</a>. The first time I heard about the ideas behind SD-WAN was when one of my Nicira colleagues told me he was leaving for another startup. The simple high-level sketch he gave of applying an SDN-style controller to the problem of enterprise WANs immediately made sense.&nbsp;</p><p>Whereas in 1996 we relied on a fully distributed approach using BGP to determine how sites could communicate with each other, an SDN controller allows policies for inter-site connectivity to be set centrally while still being implemented in a distributed manner at the edges. The benefits of this approach are numerous, especially in an era where high-speed Internet access is a widely-available commodity. Building a mesh of encrypted tunnels among a large number of enterprise sites no longer has <em>n</em><sup>2</sup> configuration complexity, because you can let the controller figure out which tunnels are needed and push configuration rules to the sites. Furthermore, there is no longer a dependence on getting a particular MPLS service provider to connect your site to their network: you just have to get Internet connectivity to your site. This factor alone&#8211;replacing &#8220;premium&#8221; MPLS access with &#8220;commodity&#8221; Internet&#8211;was decisive for some SD-WAN early adopters.&nbsp;</p><p>The other big change in the decades since RFC 2547 was the rise of cloud-based services as an important component of enterprise IT (Office 365, Salesforce, etc.). 
Traditionally, an MPLS VPN would provide site-to-site connectivity among branches and central sites. If access to the outside Internet was required, it would entail backhauling traffic to one of a handful of central sites with external connectivity and all the firewalls etc. needed to secure that connection. But as more business services were delivered from &#8220;the cloud&#8221;, and with SD-WAN leveraging Internet access rather than dedicated MPLS circuits to every branch, it started to make sense to provide direct Internet access to branch offices. This meant a significant change to the security model for networking. Rather than a single point of connection between the enterprise and the Internet (with a central set of devices to control attendant security risks), there are now potentially as many connection points as there are branches.&nbsp;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qjRq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7a73abd-2877-4408-9012-7ea4455017ee_1294x802.png" width="1294" height="802" alt="Diagram of an SD-WAN deployment, with a central controller accepting policies as input and pushing configuration to three sites"><figcaption class="image-caption">SD-WAN: central control over distributed forwarding, with direct cloud access from branches</figcaption></figure></div><p>So with SD-WAN and the rise of cloud services, you need a way to set and enforce security policy at all the edges of your enterprise&#8211;and now there is potentially an edge at every branch. In a sense, this aspect of SD-WAN contributed to the rise of SASE (secure access service edge): once you started putting SD-WAN devices at every site, you needed a way to apply security services at those sites. Fortunately, the &#8220;centralized configuration with distributed enforcement&#8221; model of SDN provides a natural way to address this issue. The SD-WAN device at the edge is not just an IPsec tunnel-terminating device, but is also a policy enforcement point to apply the security policies of the enterprise. And the central point of control for an SD-WAN system provides a tractable means of configuring the policies in one place even though they will be implemented in a distributed way. (Further complicating the picture is the fact that an SD-WAN edge device can also forward traffic to or from cloud-based security services.)</p><p>There is a lot <a href="https://sdn.systemsapproach.org/uses.html#software-defined-wans">more to SD-WAN</a> than I have space to cover here. For example, dealing with QoS in the presence of the Internet&#8217;s best-effort service turns out to be important and is one area where the commercial providers of SD-WAN equipment try to differentiate themselves. SD-WAN remains an area where open standards have yet to make much of an impact. 
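</p><p>As a sketch of what &#8220;centralized configuration with distributed enforcement&#8221; looks like for security policy (the rule format and names are invented for illustration, not any product's schema), each branch edge device locally evaluates a table authored once at the central controller:</p>

```python
# Hypothetical rule format: (app, destination, action); first match wins.
CENTRAL_POLICY = [
    ("salesforce", "internet", "allow"),    # direct cloud breakout from the branch
    ("erp",        "hq",       "allow"),    # site-to-site over the tunnel mesh
    ("*",          "internet", "inspect"),  # everything else via a security service
]

def edge_decision(app: str, dest: str, policy=CENTRAL_POLICY) -> str:
    """What a branch edge device does with a flow: the table is authored
    centrally and pushed to every branch; enforcement stays local."""
    for rule_app, rule_dest, action in policy:
        if rule_app in (app, "*") and rule_dest == dest:
            return action
    return "deny"  # default-deny at every enterprise edge

print(edge_decision("salesforce", "internet"))  # allow
print(edge_decision("bittorrent", "internet"))  # inspect
```

<p>The same table reaches every branch, so adding a branch adds an enforcement point but not a new policy to maintain.</p><p>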
But it certainly provides an interesting case study in how changes in adjacent technologies can make ideas that once seemed impractical (such as central control and VPNs built with meshes of encrypted tunnels) viable again.&nbsp;</p><div><hr></div><p>Following up on <a href="https://systemsapproach.substack.com/p/private-5g-as-easy-as-wi-fi">our prior post</a> about getting a 5G small cell up and running, Larry finally tracked down the last bug in his configuration and was able to get traffic flowing from his mobile devices to the Internet. It turned out that debugging networking among a set of containers connected by overlay networks isn&#8217;t that easy! We&#8217;re updating the appendix of our <a href="https://5g.systemsapproach.org">5G book</a> accordingly. There is still time to submit your bugfixes to the book via <a href="https://github.com/SystemsApproach/private5g/pulls">GitHub</a> to earn our thanks and a copy of the book. </p><p>You can find us on Mastodon <a href="https://discuss.systems/@SystemsAppr">here</a>. 
<a href="https://www.theverge.com/2023/4/20/23689570/activitypub-protocol-standard-social-network">The Verge</a> has a good article about the Fediverse. Or maybe you want to have a look at <a href="https://substack.com/profile/21727964-bruce-davie?utm_source=share-profile-sidebar">Substack Notes</a>, which we have just started to play with. Photo this week by <a href="https://unsplash.com/@aaronburden?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Aaron Burden</a> on <a href="https://unsplash.com/photos/gmy25xvSkq8?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[Private 5G: As Easy As Wi-Fi*]]></title><description><![CDATA[*Some Restrictions May Apply.&#160; It should come as no surprise that designing/implementing Private 5G is not exactly the same as deploying/operating Private 5G, and since we wanted our forthcoming book to help readers with the latter, we decided to take the system we had built out for a test drive.]]></description><link>https://systemsapproach.substack.com/p/private-5g-as-easy-as-wi-fi</link><guid isPermaLink="false">https://systemsapproach.substack.com/p/private-5g-as-easy-as-wi-fi</guid><dc:creator><![CDATA[Larry Peterson]]></dc:creator><pubDate>Mon, 17 Apr 2023 07:38:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a477027e-e537-468b-98a6-40bf7a837afb_1016x1202.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As we&#8217;re working towards finishing our Private 5G book, we&#8217;ve been completing the <a href="https://5g.systemsapproach.org/software/overview.html">Hands-On Experience</a> Appendix. 
That necessitated a bit more, er, hands-on work than we normally would do, which provided the impetus for this week&#8217;s post.</p><div><hr></div><p>Our <a href="https://5g.systemsapproach.org/">Private 5G book</a> is informed by our experience designing and implementing an open source Kubernetes-based edge cloud that hosts&#8212;among other edge workloads&#8212;a managed 5G connectivity service. Edge applications can take advantage of <em>local breakout,</em> meaning they can communicate directly with IoT devices (and the like) without their packets ever leaving the enterprise. This local <em>Connectivity-as-a-Service</em> is then offered as a managed cloud service (rather than a traditional Telco service), including an API and Dashboard that makes it easy to monitor and control connectivity at runtime. The hope is that Private 5G will be as easy to deploy and use as Wi-Fi is today. (For the reader prepared to argue that Wi-Fi is sufficient for all edge use cases, we leave the <a href="https://www.techtarget.com/searchnetworking/feature/A-deep-dive-into-the-differences-between-5G-and-Wi-Fi-6#:~:text=Together%2C%20Wi%2DFi%206%20and,with%20speed%2C%20reliability%20and%20flexibility.">5G vs. Wi-Fi debate</a> for another time.)</p><p>It should come as no surprise that <em>designing/implementing</em> Private 5G is not exactly the same as <em>deploying/operating</em> Private 5G, and since the main goal of the Appendix is to help readers with the latter, we decided to take the system we had built out for a test drive. But there&#8217;s an important qualifier before we get to that. The system we&#8217;re talking about, <a href="https://opennetworking.org/aether/">Aether</a>, is not a collection of isolated components that leaves the dirty work of operationalization to someone else. 
Aether includes all the integration glue needed to bring up an operational system in support of live traffic&#8212;a topic we&#8217;ve covered in other <a href="https://systemsapproach.substack.com/p/if-gitops-is-the-answer-whats-the">posts</a> and written an <a href="https://ops.systemsapproach.org/">entire book</a> about&#8212;but that doesn&#8217;t mean it&#8217;s easy for ivory tower architects like Bruce and myself to bring up Aether without a little friction. Some of the challenges were our missteps, but some point to the inherent difficulty in the Telco-to-Cloud transition Aether is trying to catalyze.</p><p>Step one is getting your hands on a 5G small cell radio, and they aren&#8217;t exactly available today at Best Buy. We&#8217;re using the <a href="https://opennetworking.org/products/sercomm-sce5164-b78/">Bridgestone Indoor 5G Sub-6G Small Cell</a> from Sercomm. We also have experience with Sercomm&#8217;s 4G counterpart (which is less expensive and easier to find). You&#8217;ll also need a starter set of UEs, and while several smartphones support CBRS (e.g., iPhone 11, Google Pixel 4, or newer), our recommendation is to include a <a href="https://www.apaltec.com/dongle/">5G dongle</a> that can be attached to a Raspberry Pi. Acquiring the 5G hardware is still a problem today, but that&#8217;s probably a short-term situation. The other bit of hardware you&#8217;ll need is a server (or VM) to run Aether on, but the requirements aren&#8217;t too steep (Quad-Core, 12GB RAM, running Ubuntu 20.04 or 22.04). Note that the approach I&#8217;m describing uses CBRS spectrum that is allocated in the US; other countries are in different stages of establishing similar allocations.</p><p>Step two is where 5G is the most unfamiliar to anyone who has installed a Wi-Fi AP: Configuring the small cell radio. There are three parts to this. 
The first part is setting the <a href="https://5g-tools.com/5g-nr-gscn-calculator/">RF-related parameters</a>, which I am wholly unqualified to do. Their names are cryptic (e.g., FreqSsb, Arfcn), their settings seemingly arbitrary (e.g., 3609120, 643356), and the formulas to compute them&#8230; not exactly intuitive:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!XNEv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a220146-e1b4-4a06-9b10-0cb0f54261fb_1230x343.png" width="1230" height="343" alt=""></figure></div><p>These (and other) parameters are related to the control the operator has over how the available frequency band is used, which is part of the value 5G brings. I clearly have more to learn, but fortunately, the out-of-the-box defaults work. The second part is connecting the small cell to the local network, which is straightforward, complicated only by the fact that the radio has two 802.3 ports: one known as WAN (but labeled 2.5G on the Sercomm 5G small cell) and the other known as LAN (but labeled 1G on the Sercomm 5G small cell). The WAN port is how the small cell connects to the Internet (indirectly via the Mobile Core, which we&#8217;ll get to in a moment). The LAN port is for connecting the radio to a management network, which is worth mentioning because you&#8217;ll eventually need to learn TR-069/TR-098 (in place of SNMP/MIB, respectively), since you&#8217;ll technically be managing on-prem Telco equipment instead of IETF-spec&#8217;ed Internet devices. 
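</p><p>For what it&#8217;s worth, the raster parameters are less arbitrary than they first appear: an NR-ARFCN indexes the global frequency raster defined in 3GPP TS 38.104, so a few lines of code (a sketch of the published mapping, not a substitute for a proper calculator) translate the cryptic numbers into frequencies:</p>

```python
def nr_arfcn_to_khz(n_ref: int) -> int:
    """Map an NR-ARFCN to a carrier frequency in kHz using the global
    frequency raster of 3GPP TS 38.104:
    F_REF = F_REF_Offs + dF_Global * (N_REF - N_REF_Offs)."""
    if 0 <= n_ref < 600_000:        # 0 .. 3000 MHz, 5 kHz raster
        d_f, f_offs, n_offs = 5, 0, 0
    elif n_ref <= 2_016_666:        # 3000 .. 24250 MHz, 15 kHz raster
        d_f, f_offs, n_offs = 15, 3_000_000, 600_000
    elif n_ref <= 3_279_165:        # above 24250 MHz, 60 kHz raster
        d_f, f_offs, n_offs = 60, 24_250_080, 2_016_667
    else:
        raise ValueError("NR-ARFCN out of range")
    return f_offs + d_f * (n_ref - n_offs)

# The Arfcn value 643356 quoted above lands in the 3550-3700 MHz CBRS band:
print(nr_arfcn_to_khz(643_356))  # 3650340 kHz, i.e., 3650.34 MHz
```

<p>Since FreqSsb is itself expressed in kHz, a configured value such as 3609120 can be checked against the band plan the same way.</p><p>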
There&#8217;s also an <a href="https://docs.o-ran-sc.org/en/latest/architecture/architecture.html">O1 Management</a> interface, which is the O-RAN approach to managing RAN elements, but I have not yet had an opportunity to use it. It&#8217;s probably better to have too many programmatic interfaces than too few, but I was able to do everything I needed to through the dashboard, which is enough to get started. The third part is configuring the Spectrum Access System (SAS), which is responsible for managing access to the <a href="https://www.fcc.gov/wireless/bureau-divisions/mobility-division/35-ghz-band/35-ghz-band-overview">three tiers</a> of the CBRS spectrum. You&#8217;ll need to familiarize yourself with the SAS requirements and get credentials from a SAS provider (we use <a href="https://www.google.com/get/spectrumdatabase/sas/">Google</a>) if you want to get past the &#8220;turn it on and see if it boots up&#8221; stage. (You&#8217;ll also need to connect a GPS antenna, which the radio needs so it can tell the SAS its precise location.)</p><p>Step three is interesting because it&#8217;s related to how you assemble a system out of building-block components. As I discussed <a href="https://systemsapproach.substack.com/p/whats-in-a-name">previously</a>, the mobile cellular network defines a global naming scheme that makes it possible for any two RAN-connected devices to communicate with each other. You need to configure both the small cell radio and the Mobile Core software stack so they know how to plug into that global network. This means defining the Mobile Country Code (MCC) and Mobile Network Code (MNC) that you plan to use. 
This <a href="https://en.wikipedia.org/wiki/Mobile_country_code">MCC/MNC pair</a> forms a Public Land Mobile Network (PLMN) code, where we&#8217;ve used two different IDs in different settings: <strong>315010</strong> constructed from MCC=315 (US) and MNC=010 (CBRS), and <strong>00101</strong> constructed from MCC=001 (TEST) and MNC=01 (TEST). And since you&#8217;ll technically be the MNO responsible for the Private 5G network you bring up, you&#8217;ll also need to burn the SIM cards that are to be inserted into all the UEs. The SIM cards include a unique identifier (called an IMSI), which is a 15-digit number with the PLMN code as its prefix. (You can buy a 5G SIM writer on Amazon, where one product description reads: <em>PLS Kindly Note: The cards be provided to professional engineers, PLS be professional, you need have knowledge about sim cards, if you don't have, PLS do not buy it!</em>)</p><p>Finally, in step four, you&#8217;ll be back in familiar IP-land, but your ability to juggle IP subnets, Linux bridges, and iptables rules will be taxed to the max. I won&#8217;t go through all the details, and your mileage will vary depending on how deeply you want to integrate the RAN into your enterprise network, but by my count, there are as many as seven subnets in play. This is in part because the Mobile Core is implemented in Kubernetes (with its own set of intra-cluster and service-visible addresses), in part because the backhaul that connects the small cell radios to the Mobile Core is an overlay network (for example, running on top of your local enterprise network), and in part because the forwarding plane of the Mobile Core&#8212;the User Plane Function (UPF) running as a Kubernetes-hosted microservice&#8212;is itself an IP router that forwards packets between the RAN and the rest of the Internet. 
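</p><p>Circling back to the identifiers from step three, the IMSI layout is simple enough to capture in a few lines (a sketch; the MSIN values below are made up, not real subscribers):</p>

```python
def build_imsi(mcc: str, mnc: str, msin: str) -> str:
    """Compose a 15-digit IMSI: 3-digit MCC, then a 2- or 3-digit MNC
    (together the PLMN code), then the subscriber's MSIN."""
    if len(mcc) != 3 or len(mnc) not in (2, 3):
        raise ValueError("MCC is 3 digits; MNC is 2 or 3 digits")
    imsi = mcc + mnc + msin
    if len(imsi) != 15 or not imsi.isdigit():
        raise ValueError("IMSI must be exactly 15 digits")
    return imsi

# PLMN 315010 (MCC=315, MNC=010) leaves 9 digits for the MSIN;
# test PLMN 00101 (MCC=001, MNC=01) leaves 10.
print(build_imsi("315", "010", "000000001"))   # 315010000000001
print(build_imsi("001", "01", "0000000001"))   # 001010000000001
```

<p>This PLMN prefix is exactly what has to agree between the SIM cards you burn and the configuration of the small cell radio and the Mobile Core.</p><p>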
You&#8217;ll certainly find having access to diagnostic tools such as ping, traceroute, and tcpdump to be essential (which is one reason we recommend connecting at least one Pi+Dongle UE).</p><p>I&#8217;m pretty sure Wi-Fi configuration was never this complicated. To some extent, this may be due to where the line is drawn between the customer and the provider: Telcos have strived to keep the end-system they sell subscribers simple, but have accepted operational complexity in the network devices (such as base stations) that they manage. In contrast, anyone who purchases a Wi-Fi AP from a vendor assumes it will be straightforward to install. One would expect that, with time, small cells deployed in enterprises (and maybe even homes) can be pre-configured before they are shipped or auto-configured after they are installed, but our goal with the book is to demystify 5G, including all the configuration steps. If you&#8217;re an enterprise system admin (or a hobbyist who wants to try out the technology at home), you will need to know about all of this. That&#8217;s why we wrote the book! It&#8217;s also why it&#8217;s important to have access to open source implementations of all this technology.</p><div><hr></div><p>As noted above, our <a href="https://5g.systemsapproach.org/index.html">Private 5G book</a> is close to completion. Not only does the appendix now describe our own hands-on experience, but we also just received a very kind <a href="https://5g.systemsapproach.org/foreword.html">foreword</a> from networking pioneer Jennifer Rexford. Congratulations to Jen on her recent <a href="https://www.princeton.edu/news/2022/11/22/jennifer-rexford-named-princetons-next-provost">ascension</a> to the position of Provost at Princeton. The final step before we can commit the book to print is a last round of bug-scrubbing. Once again, we are offering a free copy of the book to anyone who <a href="https://github.com/SystemsApproach/private5g/pulls">submits</a> two or more accepted bugfixes, and a thank-you in the Acknowledgments to everyone who submits a fix. </p><p>You can find us on Mastodon <a href="https://discuss.systems/@SystemsAppr">here</a>. </p>]]></content:encoded></item></channel></rss>