An Antenna Website
This is the default website created by the antenna init action.
https://rsdoiel.github.io/pages.xml
13 Mar 26 15:18 -0700
antenna/0.0.23-dev
https://cyber.harvard.edu/rss/rss.html

Markdown & Frontmatter: Taking Text Further
https://rsdoiel.github.io/blog/2026/03/04/markdown_and_frontmatter.html

---
author: R. S. Doiel
dateCreated: "2026-03-04"
dateModified: "2026-03-05"
datePublished: "2026-03-04"
description: |
  Explore the secret sauce of Markdown and frontmatter. The post discusses
  how they can transform content management by simplifying metadata,
  theming, and document structure. It shows how Markdown with YAML
  frontmatter can streamline static site generation, support flexible
  themes, and even express RSS feeds and OPML aggregations. Markdown and
  frontmatter can provide a straightforward way to manage content without
  getting tangled in templates or complex tools.
keywords:
  - Markdown
  - CommonMark
  - Frontmatter
  - YAML
postPath: blog/2026/03/04/markdown_and_frontmatter.md
title: 'Markdown & Frontmatter: Taking Text Further'
---

# Markdown & Frontmatter: Taking Text Further

By R. S. Doiel, 2026-03-04 (updated: 2026-03-05)

There is a secret sauce that came with the adoption of Markdown and CommonMark in static website generators. **Frontmatter** is used to express metadata about the document. Often, this is expressed as a [block of YAML](https://jekyllrb.com/docs/front-matter/). YAML is a language designed to express structured data, such as metadata. Early Markdown-oriented static site generators like Jekyll leveraged this language to simplify content management. Today, frontmatter and Markdown are used not only in static content management but also in the data science community, with dialects of [Markdown](https://daringfireball.net/projects/markdown/) like [RMarkdown](https://rmarkdown.rstudio.com/). What does attaching a complex structured data expression bring to the table? This post is being typed up as a Markdown document.
When I am ready to publish it to my blog, I will add YAML frontmatter. The frontmatter will be used in the site rendering process. It will include metadata such as title, authorship, dates, description, and keywords. Some of this metadata will end up in the HTML document, while some will be used to form other documents, like RSS feeds. Here’s a typical example of what I include at the top of my Markdown posts:

~~~yaml
title: Markdown and Frontmatter, taking text further
author: R. S. Doiel
dateCreated: "2026-03-04"
datePublished: "2026-03-04"
description: |
  A description of Markdown with frontmatter
  and a discussion of how it can be used
keywords:
  - Markdown
  - CommonMark
  - Frontmatter
  - YAML
~~~

This use case is typical in static content site generators. It allows you to keep the creation of page content simple. By attaching frontmatter at the top of the document, you also make the document’s metadata available for processing[^1].

[^1]: Front matter is the name of a practice adapted from print books. Printed books contain metadata at the start — title, authorship, publication date, publisher, copyright, preface, and table of contents. This allowed books to be easily aggregated in collections while also providing a small degree of provenance for the document. Applied in the electronic context, the words "front matter" have come to form the compound word "frontmatter" in English.

My content management tool, [Antenna App](https://rsdoiel.github.io/antennaApp), like many others, uses this information to determine how to add a post or a single web page. In a dynamic content management system like WordPress or Drupal, this information is collected from a web form and stored in a database. Static site systems avoid complexity by keeping the document metadata as part of the document itself. For a long time, I used frontmatter in my Markdown documents in just this way. Last year, I realized I could take it a step further.
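To make the processing step concrete, here is a minimal sketch in Python of how a site generator might split frontmatter from the Markdown body. This is not Antenna App's actual code; a real generator would hand the block to a full YAML parser, while this stdlib-only sketch reads just the simple `key: value` lines shown above.

```python
# Minimal frontmatter splitter: separates the YAML block from the
# Markdown body. A real generator would hand the YAML text to a full
# YAML parser; here we only read simple "key: value" lines.

def split_frontmatter(text):
    """Return (metadata_dict, markdown_body) for a document that may
    begin with a '---' delimited frontmatter block."""
    if not text.startswith("---\n"):
        return {}, text
    end = text.find("\n---", 4)
    if end == -1:
        return {}, text
    yaml_block = text[4:end]
    body = text[end + 4:].lstrip("\n")
    meta = {}
    for line in yaml_block.splitlines():
        # Skip indented continuations and list items in this sketch.
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip("'\"")
    return meta, body

doc = """---
title: 'Markdown & Frontmatter: Taking Text Further'
author: R. S. Doiel
datePublished: "2026-03-04"
---

# Markdown & Frontmatter: Taking Text Further
"""
meta, body = split_frontmatter(doc)
print(meta["title"])  # now available for the HTML <title>, RSS items, etc.
```

Once the metadata is a plain dictionary, the same values can feed page templates, RSS items, and sitemaps without the author ever leaving their text editor.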
Markdown and frontmatter can directly express the contents of an RSS feed (for example, a list of posts) and can express aggregations of feeds in [OPML](https://en.wikipedia.org/wiki/OPML) using a similar combination of Markdown with frontmatter. Over time, as I’ve continued to develop my content management tools, I’ve taken this approach even further. You can express many parts of an assembled web page as individual Markdown documents held in a folder.

One of the challenges I’ve seen people struggle with in content management systems over the years is that you often ask creators to commit to either a specific template language (for example, PHP) or restrict them to a theme system where specialists provide the templates used to transform content into pages and posts. Theme ecosystems can become a key factor in selecting a specific CMS and are often an indicator of the adoption and health of the CMS project. Many static site generators are mature and have a significant community around them[^2]. They do not compare in size to the communities formed around WordPress or Drupal. In either case, I am unaware of a common approach to expressing themes so that you can migrate or integrate these systems without duplicating your theming efforts. I think this is why we haven't seen collaboration between systems in exploring theming collectively.

The lack of portability of themes is rooted in part in theme engines that become bound up in the language implementation of the content system or in the specific data structures expressed by those systems. In some cases systems will embrace a more generic templating language like [Handlebars](https://handlebarsjs.com/), but their assumed data structures for organizing content prevent portability. Divergence has been a large enough problem that when HTML5 added templates, the W3C went with yet another approach.
Looking at web practice as HTML5 elements have become widely adopted suggests an opportunity to address theming challenges. With a smaller cognitive shift, focusing on the expression of Markdown and structured HTML, we can provide a middle ground for sharing themes across systems. This is particularly true for content systems that already embrace Markdown.

[^2]: For example, Hugo, Ghost, and Jekyll.

## Finding a middle path by embracing Markdown

**What is a theme?** Most theme engines include some content structure (for example, headers, footers, navigation elements, and a central content block). For visual layout and design, CSS has become the de facto means to achieve visual placement. But there remains the question of content structure that is common between pages on the website. The goal of a theme is usually to specify these common elements in a uniform way to meet the needs of the website.

In a language like PHP, you might output HTML with a pointer to a known CSS location. In a language like Python, you might use some template system to express this. In Go, you’d use Go’s own template language to do this or an extension of it, like that found in Hugo. Can we avoid a third language if we’re already using Markdown and a little YAML? The humble file system provides an **opportunity**.

In the theme engine of Antenna App, I’ve taken the approach that a theme is a CSS file and a set of elements based on Markdown documents, all contained within a folder. Here’s an example layout:

- theme folder
  - style.css
  - header.md
  - nav.md
  - footer.md
  - top_content.md
  - bottom_content.md

This folder represents a general structure. If one of these elements is missing, it will not be included when translating a Markdown document into HTML for presentation on the website. This simple approach means you can write your navigation element just like you write a blog post, using Markdown. The same applies to headers, footers, and any pre- or post-content information. Want to build a theme?
Create a new folder. Add the specific elements you require as Markdown documents to the folder and perhaps a CSS file, too. Then apply the theme to your Antenna App site.

This would be fine if Antenna App were used by a large number of people. It isn't; it's used by me. Still, I think this approach to theming sites can be applied to other static content systems. In Antenna App, when you apply a theme folder, it takes the directory structure and its documents and folds them into the site or collection configuration. Antenna App understands the theme folder layout. What if I was using Pandoc and wanted to theme a site built with it? Well, I think I could write a simple script to take the Markdown elements and render them as Pandoc templates that could then be used by Pandoc. It doesn't seem that hard. The theme folder becomes an intermediate representation available to other systems.

I picture a simple tool that works similarly to my Pandoc example. It could convert this structure into other theme and template expressions. You could then replicate them across content management systems, giving you more flexibility as your needs change and evolve. What I hope is that other people who write static content engines see the value in this approach too (and improve on it!).

This approach doesn't solve the challenge of writing CSS. Prior art like [CSS Zen Garden](https://www.csszengarden.com/) shows how CSS collections can be built around a known set of HTML elements. A web search for "CSS examples use with Markdown"[^3] will lead to articles and examples. Large language models can be used effectively to generate CSS for styling Markdown-generated HTML elements too. Does this solve CSS complexity? Not really, but it does offer a leg up by providing an option to borrow or adapt existing CSS that is more general purpose and could be applied to a common theme representation approach.
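To make the theme-folder idea concrete, here is a small sketch in Python — not Antenna App's actual implementation. The file names follow the hypothetical layout above, and `md_to_html` is a stand-in for a real Markdown converter such as Pandoc or a Markdown library:

```python
import pathlib

# Theme elements that wrap the page content, in order. Missing files
# are simply skipped, matching the behavior described above.
THEME_BEFORE = ["header.md", "nav.md", "top_content.md"]
THEME_AFTER = ["bottom_content.md", "footer.md"]

def md_to_html(markdown_text):
    # Stand-in for a real Markdown-to-HTML converter (Pandoc, etc.).
    # Here we only wrap paragraphs so the sketch is self-contained.
    paras = [p.strip() for p in markdown_text.split("\n\n") if p.strip()]
    return "\n".join(f"<p>{p}</p>" for p in paras)

def read_element(theme_dir, name):
    # Render one theme element, or return "" if the file is absent.
    path = pathlib.Path(theme_dir) / name
    return md_to_html(path.read_text()) if path.exists() else ""

def render_page(theme_dir, content_md):
    # Assemble: theme elements before, page content, theme elements after.
    before = [read_element(theme_dir, n) for n in THEME_BEFORE]
    after = [read_element(theme_dir, n) for n in THEME_AFTER]
    parts = [p for p in before + [md_to_html(content_md)] + after if p]
    return "\n".join(parts)
```

Applying a different theme is then just pointing `render_page` at a different folder, which is what makes the folder an intermediate representation another system could consume.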
[^3]: In my quick web search these two sites show examples of ready-made simple CSS for use with Markdown: <https://markdowncss.github.io/> and <https://simon.lc/sites/markdown/>. This approach suggests a means of collaborating in a community to provide out-of-the-box, Markdown-oriented CSS more generally.

## Using Pandoc as a model

Pandoc has been a tried and true converter of Markdown to HTML for a very long time. Pandoc implements CommonMark, which is a **unification** of Markdown extensions along with John Gruber's original specification. Pandoc is also really good at translating other document formats. It does so by leveraging an internal representation (an abstract syntax tree) so that an importer just needs to implement a single mapping, while an exporter does the reverse. What if this simple structure was used in a similar way to translate one theme system to another? While my implementation was designed for use with Antenna App, I could see translating it to work with Pandoc templates or Hugo. You'd just need to create the crosswalks.

## Wait, isn't Markdown hard to teach?

The other day, I read someone claim that Markdown is hard to teach. Having helped out with Data Carpentry courses from time to time, I'm not sure I buy that. First, when compared to HTML, there is just **far less text to manage**. You don't need to explain the SGML-inspired greater-than and less-than signs. You don't really need to explain structured documents initially. You start simply by asking someone to type a paragraph and see how it translates. This isn't surprising because Markdown was inspired by what people were doing in plain text email back in the day. Similarly, you can add elaboration as it is needed. There is no worry about opening and closing elements, no worry about quoting things just right. There are a few complex expressions to pick up (for example, links and embedded image references), but most plain text works as expected.
There is a second hidden power in Markdown: it was designed to be easy to read. This latter part is **crucial**. Markdown is far easier to edit than XML or HTML as a result of its readability. What it gives up in the range of nuanced expression, it gains in how easy it is to learn and use day to day. Of course, just like HTML, you can find WYSIWYG editors that support Markdown. If the editor also provides a **convenient** way to edit frontmatter, then all that I've stated before still applies. You learn one tool, and the rest just follows.

## Where does this leave us?

I think Markdown's application has, to a large extent, been a missed opportunity because it hasn't been fully embraced as a hypertext format. I think it should be. When combined with frontmatter, it can provide all that is necessary for driving content systems. It can be used to fully express the types of files needed in a modern website, such as RSS feeds, OPML for subscription lists, and sitemaps, too. I don't think my exploration of Antenna App is the complete answer. I wrote it to fit my writing preferences. I hope others see what I've done and take things a few steps further. It's time we take back the tooling around our own writing without relying on ever more complex software to do so.

Language Models, Services and the Edge
https://rsdoiel.github.io/blog/2026/03/03/LLM_service_and_edge.html

---
abstract: |
  I have been experimenting with two large language models recently. I've used them via their Web user interfaces as well as offline via Ollama. The two models are Mistral 3 and [Apertus](https://publicai.co/). Mistral is the more mature of the two. It is from the French company [Mistral AI](https://mistral.ai/). While the training material used for Mistral 3 is not disclosed, the model can be downloaded to use with Ollama. Mistral AI's models are in the same league as the better-known Claude models from Anthropic. Claude models have been developed over the last three years.
  Mistral AI's models have been in development for about two years. Apertus has been public for about six months. The post continues by exploring trade-offs and costs.
author: R. S. Doiel
dateCreated: "2026-03-03"
dateModified: "2026-03-03"
datePublished: "2026-03-03"
keywords:
  - LLM
  - Offline
  - Edge
  - services
postPath: blog/2026/03/03/LLM_service_and_edge.md
title: Language Models, Services and the Edge
---

# Large Language Models, Services and the Edge

By R. S. Doiel, 2026-03-03

I have been experimenting with two large language models recently. I've used them via their Web user interfaces as well as offline via Ollama. The two models are Mistral 3 and [Apertus](https://publicai.co/). Mistral is the more mature of the two. It is from the French company [Mistral AI](https://mistral.ai/). While the training material used for Mistral 3 is not disclosed, the model can be downloaded to use at the edge with Ollama. Mistral AI's models are in the same league as the well-known Claude models from Anthropic. Claude models have been developed over the last three years. Mistral AI's models have been in development for about two years. Apertus has been public for about six months.

Last fall the Web UI subscriptions for Claude and Mistral were notably different. Claude was pricey. Anthropic has recently dropped their prices to be more competitive (see <https://www.infoworld.com/article/4095894/anthropics-claude-opus-4-5-pricing-cut-signals-a-shift-in-the-enterprise-ai-market.html>). Mistral AI's Web UI offering presently runs a few dollars cheaper at $14.99 per month for the base plan and $5.99 per month for students. Switching models does mean re-learning the nuances of prompting for a given model.

Apertus is different. It is an effort by the Swiss and Singapore university communities to approach providing large language models as a public service. The models are trained on publicly available material with respect to licensing. It is a multi-lingual model from the get-go.
I expect this from the Swiss, European, and Singaporean contexts. Apertus is developed fully in the open. It is aligned with the practices of academic research that spawned it. It is auditable, and the effort can be replicated.

Both the Mistral 3 and Apertus models can be run on your own hardware using the Ollama command line program. The Mistral 3 model can be found on <https://ollama.com>. You can run it locally using `ollama run mistral`. Public AI doesn't directly supply the GGUF model image required by Ollama. Fortunately, [Michel Rosselli](https://registry.ollama.com/MichelRosselli) has converted the Apertus model into GGUF format. You can test Apertus using `ollama run MichelRosselli/apertus`. It runs reasonably well on my M1 Mac Mini with 16 GB of RAM. It can also run, albeit slower, on a Raspberry Pi 5 with 16 GB of RAM.

Offline or edge models are where things get interesting. You don't have to give up autonomy. Offline models let you exchange the surveillance and speed of the corporate data center for hardware under your control. I believe that the offline capabilities and performance of LLMs will become increasingly crucial over time. Right now, the Web UIs provide faster responses for the three models I've mentioned. Among them, Apertus stands out as the fastest, though its output quality is less mature compared to Mistral 3 or Claude Opus 4.6.

Anthropic's price adjustments suggest to me that we have entered an era of LLM services as commodities. If so, the commercial space may become cartel-like. When price stability occurs, expect to see rents rise. Remember that rents should be measured both in monetary terms and in terms of privacy costs. Like social media, the interaction between LLMs and humans will result in a massive body of material ripe for exploitation by companies, governments, and nefarious actors. As a response to risk and energy impact, I expect offline capability to become increasingly important.
This is why I consider using models at the edge as important as the convenience provided by a speedy Web user interface. Open models will be a critical relief valve for pricing pressures, and the better models will include provenance for the training data. Apertus is the first place I've seen this happen. Given time, the output quality will improve. I think it will follow a curve similar to the way the closed models deployed by corporate providers improved over time.

Here's why I think provenance will become key. The flood of generative AI content trained on publicly accessible Internet resources presents a significant problem. The adage 'garbage in, garbage out' is particularly important when it comes to training material for large language models. Provenance is a means of combating this issue. When provenance becomes a key feature, Apertus will hold a distinct advantage. Academia possesses a valuable corpus of open-access materials. It features rich metadata describing the corpus, including licensing terms. Apertus has been innovative in leveraging this resource. While the for-profit companies currently hold an edge, I think that will diminish. I look forward to the Apertus approach being adopted in the Americas as well as across the Pacific nations. Verify the model before you trust it.

A Simple Web We Own
https://rsdoiel.github.io/blog/2026/02/21/a_simple_web_we_own.html

---
author: R. S. Doiel
dateCreated: "2026-02-18"
dateModified: "2026-03-03"
datePublished: "2026-02-21"
description: |
  An exploration of what could happen if we choose a simpler,
  end-user-friendly, cooperative ownership model for the Web and Internet.
keywords:
  - web
  - markdown
postPath: blog/2026/02/21/a_simple_web_we_own.md
title: A Simple Web We Own
---

# A simple web we own

By R. S. Doiel, 2026-02-21 (revised: 2026-03-03)

> Tenant and product or co-owner and participant?

Today, the Web and Internet are owned and controlled by large for-profit corporations and a few governments[^1].
Corporate ownership combined with government policies has left us as tenant and product. It has given us a surveillance economy and enshittification[^2].

- What if I do not wish to be tenant and product?
- What can I do to change the equation?

Those two questions lead me to a bigger question.

- What happens when ownership and control of hardware and software shifts from the domain of corporations to a world where a significant percentage are owned by individual people and cooperatives?

[^1]: Many governments sub-contract the public Internet to corporations. The corporations are the ones with real control. You see this in domain registries, data lines, storage, and virtual services. The United States government today depends on Amazon. As a result, Amazon holds the keys to the systems.

[^2]: Cory Doctorow both defines the term and what brought it about, see <https://us.macmillan.com/books/9780374619329/enshittification/>

I think the answer is suggested by a corollary found in the history of labor movements. When a significant percentage of industries were unionized, the unions exerted a strong influence across the political economy. I think ownership of the hardware and software can mirror that impact on the Web and Internet. When a significant number of individuals and cooperatives own the hardware and use simpler software, we can impact the Web and Internet in a positive way. That's my hypothesis.

An observation and some common assumptions:

- Most content on the web is already created by individuals, not Big Co
- Big Co persuaded people that only Big Co could provide easy Web publication
- Big Co convinced many there was no point in looking for alternatives

The assumption that only Big Co can provide easy Web publication is just flat out wrong. These systems don't last for more than a decade before they decay. Each Big Co origin story is similar. They started small. They got to scale by having investors who fund and push rapid expansion.
Innovation slows, so they buy up any potential rivals. Big Co then either shuts them down or folds the rival system into their product lines. The last real innovation these companies introduced was decades ago. Lack of real innovation is one of the factors that drive the Big Co and Big Tech hype cycle. They proclaim a new shiny thing in order to maintain the circus that accumulates more money. Along the way Big Co insists on tax breaks and zero regulation as a prerequisite for innovation that isn't delivered. When they did innovate, they didn't have the breaks they insist on now; hell, they didn't have the investment or market lock they have now. They only need the hype cycle, not innovation, to keep the money rolling in. At the end of the day we wind up being the product, we wind up being exploited, and we get very little of use in return.

Folks, there is an alternative.

In 1992 authoring for the Web did require significant technical knowledge. HTML itself was very challenging to teach people. It was challenging to teach computer enthusiasts! I was involved in helping out at classes that taught HTML back in the early 1990s. I speak from first-hand experience. But a funny thing happened on the way to 2026. A tech writer (John Gruber and friends) came up with a simpler expression of hypertext called Markdown. You don't need to know HTML to create a web page or blog post today. You can write it or read it using Markdown. You can write it using the simple text editor that came with your operating system on the computer you own. You only need a program to flip Markdown into HTML. There are plenty of programs out there that do that.

Many efforts in the past to break free of Big Co have met with limited success. Usually the energy and effort have been spent re-creating the centralized systems as distributed systems. There was a sense we needed to offer the same experience as Big Co.
While ideally individuals and groups could easily run these distributed versions, the reality is that it remains challenging. I'm really happy to see some of them have some degree of success[^3]. It is an impressive effort. They have broken new ground and, importantly, they are playing an important role in the world today. I don't think they alone will get us to where we need to go. Even Cory Doctorow uses a system administrator to set up his system. Cory Doctorow is a smart technical guy. It should be easier to do (see <https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-a>).

[^3]: Currently Mastodon and BlueSky seem to be the most successful, with a possible path to longer-term persistence.

I think there is a simpler path. The Web itself is a decentralized system. What is needed is an easier way for individuals to create content for it. Markdown, I believe, is a significant piece of the solution. You start learning it just by typing. You add the little decorations as needed (for example, linking to another document or creating a bullet list). There are many software programs that can convert Markdown into an HTML page. [Pandoc](https://pandoc.org) is a brilliant example of that. There are WYSIWYG Markdown editors too (see <https://github.com/mundimark/awesome-markdown-editors>).

The typing bit is not the problem. It's the content management piece that becomes the barrier. A website is more than a single Web page; otherwise we'd be done. This is why content management systems were adopted on the Web. What you need is a way of getting to HTML by typing something easier to read and write. You need a simple way to manage the website structure for what you have written. Again, there are programs that do this today. Unfortunately many are complex and come with their own steep learning curve. The most popular content system on the web today is WordPress.
It was designed to run remotely on a server. It integrates with social web systems like Mastodon. It is open source software and you could run it on a personal computer in a pinch. Unfortunately, WordPress is complex to maintain. WordPress is really a bundle of software. It requires running an Apache or NginX Web server. It requires running a database like MySQL or MariaDB. It is built from a bunch of PHP, JavaScript, CSS, and templates. WordPress out of the box does some really nice things. But it comes with overhead too. If you are a developer, WordPress isn't a huge barrier. It's dandy. But running and maintaining it amounts to running and maintaining a whole bundle of interconnected software. That takes up computer resources like memory and computation time. That is problematic. It's challenging to set it up to use as simply as you use your text editor or word processor. You're stuck because it is designed to run on a remote server. If you only want to type up some Markdown to turn into a web page, WordPress adds a whole other level of complexity to that big kettle of fish.

Complex content management systems are what led to a renaissance in the popularity of static website generators. Static websites are simple to generate, cheap and easy to host, and can be surprisingly interactive. You can even hand-craft a static website page by page using Markdown and Pandoc. I did that for years. What Pandoc doesn't easily do for you is provide the trimmings like RSS feeds and sitemaps. It doesn't help manage the site structure. Many people build websites with more elaborate systems like Jekyll and Hugo because, like WordPress, they provide more in the way of content management. There are literally hundreds of other static website generators out there. Unfortunately, they don't completely solve the problem. The ones I've tried have been too complex or didn't run on the machines I wanted to do my writing on.
I think this is because most were created by developers like me. We grew up on large, complex content management systems. So when we build our own, they easily become large and complex too. That is a problem. As a writer you shouldn't need to put on a developer hat to produce a website. You shouldn't have to use Medium or Substack either. What is needed is different. What is needed is an easy way to go from Markdown documents to websites without extra knowledge. Ideally you'd only need to know Markdown to build a nice website.

This lack of simplicity disappoints me as a writer. The Web is over thirty years old. It is reasonable to expect a simpler writing system for the web. One that can run on small computers. One that doesn't make you use a text input box for writing. Yet the systems out there are stuck with complexity because they are solving the problem faced by professional Web developers decades ago. They are making old assumptions about requiring complexity. In a way, developers like me keep building Formula One race cars when what is needed is a single-speed bicycle.

How do we get to a simple web? I've been searching for an answer. I don't think any new invention is needed. The answer in 2026 is already built into the Web. What needs to change is the software wrapped around that technology. The Web can interconnect us. The software needs to take Markdown and generate the rest of the website so we can take advantage of that. I think we need to break the assumptions of complexity of use and complexity of multi-author or centralized models. The core software requirements include an easy way to express hypertext (Markdown) and an easy way to generate the HTML. It needs to make content syndication and discovery automatic (create RSS files and sitemaps). The Web browser will see HTML, CSS, JavaScript, RSS, and sitemap.xml, but the author only needs to work with Markdown documents. I've written experimental software to prove this is possible.
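As a sketch of what "automatic" can mean, here are a few lines of Python that derive a sitemap.xml from the list of pages a tool has just rendered. The base URL and paths are illustrative, not from any particular system; a real generator would emit an RSS file from the same list in the same spirit.

```python
import xml.etree.ElementTree as ET

def build_sitemap(base_url, page_paths):
    """Build sitemap.xml text from the pages a generator just rendered."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for path in page_paths:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = f"{base_url}/{path}"
    return ET.tostring(urlset, encoding="unicode")

# The author writes Markdown; the tool derives the XML the Web expects.
pages = ["index.html", "blog/2026/03/04/markdown_and_frontmatter.html"]
sitemap_xml = build_sitemap("https://example.org", pages)
```

The point is that the author never touches the XML; it falls out of the same list of documents the generator already knows about.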
My hope is that this post, by pointing at my own software contribution, will shed some light on how easy it could be. I hope it serves as an example so that this can become collectively understood.

## What is needed?

A simple web of our own has three core characteristics.

1. A computing device owned and controlled by an individual or cooperative
2. A network owned and controlled by an individual or cooperative
3. Simple to use software that empowers us to both read and write hypertext[^4] and syndicated content

[^4]: I will talk about software I am working on, but you should not limit your choice. My hope is that by showing this is possible, others will step up and provide their own solutions too. Choice is necessary for a thriving ecosystem of the Web.

## Examining the current state

The Web and Internet we have today isn't required by the technologies that created it. Human choices and human organizations, combined with past scarcity of knowledge and resources, are what led us to this point. That's good news moving forward. Between 1992 and 2026 resource scarcity has changed. Spreading knowledge through communication is the strength and purpose of the Web. These are solid foundations to build on if we choose.

### Changes in scarcity of resource and knowledge

Let me illustrate. In 1926 we didn't have a global e-waste problem. In 2026 it is a huge problem. In 1950 a computer filled a room and could only be afforded by governments and the largest corporations[^5]. They required special high-capacity power connections. They required cooling systems. They often required physical changes to the buildings (for example, sub-flooring for cable access and fire suppression systems). A single computer like the Raspberry Pi 400 runs $60.00 in the United States in 2026. It can run off a USB battery or wall socket. It can run in ambient temperatures. Throw in a monitor, power supply, and cables, and your computer budget comes in at about $200.00. This price includes the crazy United States tariffs.
It includes the crazy AI-hype-inflated memory pricing[^6]. A good desktop computer capable of producing Web content and hosting it costs far less than a smart phone, which you don't control.

[^5]: Timeline of Internet history, see <https://www.computerhistory.org/internethistory/>

[^6]: The price of RAM has risen dramatically since the start of 2026, especially after Big Co and their AI corporate paramours inked circular deals to loan, purchase, and sell to each other using assets that don't exist in reality. See <https://www.raspberrypi.com/news/more-memory-driven-price-rises/>.

### Exploring the possible, the value proposition of common nouns

Let's explore the Internet and Web not as proper nouns but as common nouns. The underlying technology is a distributed system. We happen to use it like a monolithic system. You see a similar pattern in computer operating systems. Windows is based on NT, which was in turn modeled on VMS. VMS was a mini computer based multi-user operating system. Linux and macOS are modeled on Unix. Unix was originally a mini computer based multi-user system. Similarly, our two most popular phone operating systems, Android and iOS, are built on top of Linux and macOS. They are multi-user systems used on single-user machines. We choose to use them as single-user systems to avoid thinking about their complexity. Similarly, we assume the Web must be run by Big Co because we avoid thinking about the complexity underlying it.

Abstraction, and the re-purposing of abstraction, is a common theme in software systems. Re-purposing abstraction allows us to move where the complexity lives. It allows us to experience a simple system. What's changed is we don't require Big Co to have a simple user experience. I am arguing for managing complexity through simple to use software running directly on a computer we control and own. It's not a remote service. It doesn't run until you tell it to.
When it does run it takes care of the complex details of generating the website HTML, RSS and sitemaps from the simpler expression of Markdown.

The Internet is a network of networks. An internet, as a common noun, is also a network of networks. Specifically, it is a network of one or more computers connected using Internet Protocols. The Internet Protocols provide for public facing networks and private ones. One that runs on your computer and is only available to your computer is called localhost. You can author a website and view it on your own computer using localhost. Localhost is a private network. If you are running macOS, Linux, Windows or Raspberry Pi OS it's already available to you. You only need to choose to use it. You have a private network the minute you turn on your computer. You can have a private piece of the Web if you choose.

If you are lucky enough to have Internet access at home, that network is probably set up as a private network. Your private network is then connected to your Internet Provider via a switch or cable modem. The Internet Provider connects your private network to the public Internet on your behalf[^7]. Both the public and private systems run using the same set of technologies and protocols. This is something we can leverage to our own ends.

[^7]: Be aware that the reason you can't share your private network with your neighbor isn't technical. Most terms of service issued by Internet Providers prohibit sharing your Internet connection with your neighbor. Remember that the next time your Internet access slows down or stops working. They are not allowing us to share.
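You don't have to take that localhost claim on faith. Here is a small sketch (Python, chosen only for brevity) that sends a message from your computer to itself over the loopback address. No cable modem, router or Internet Provider is involved; the private network exists the moment the operating system boots.

~~~python
import socket

# Listen on the loopback address. This "network" is built into the
# operating system; nothing outside this machine can see it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0 lets the OS choose a free port
server.listen(1)
port = server.getsockname()[1]

# Connect to ourselves across that private network and say hello.
client = socket.create_connection(("127.0.0.1", port))
conn, _ = server.accept()
client.sendall(b"hello, private web")
received = conn.recv(1024).decode()
print(received)  # hello, private web

for s in (client, conn, server):
    s.close()
~~~

The same loopback address, written as `127.0.0.1` or the name `localhost`, is what your web browser uses when you view a site hosted on your own machine.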
- The Internet is just a network of networks using Internet Protocols
- The network starts on your own computer
- Networks can be private or public
- We can own a private network and connect it to another public or private one

There are two versions of Internet Protocols running in parallel today, IPv4 and IPv6 (IP stands for Internet Protocol and "v" is followed by the version number). IPv6 provides a larger possible number of uniquely identifiable connections on the network. Each network connection can provide a Web destination. Much of the globe has already shifted to IPv6. The United States lingers with quite a bit of IPv4. We stopped innovating a long, long time ago. Our slow WiFi and copper wire networks reflect that. A Raspberry Pi computer[^8] running the Raspberry Pi Operating System supports both IPv4 and IPv6. As a Raspberry Pi computer owner you don't really need to worry about the distinction.

If you are connecting more than one computer you'll need a device called a switch or router. There are cheap hardware switches used to connect computers via Ethernet (faster) or WiFi (more convenient). They usually support both protocols. This means individuals can create a local internet (a network compatible with the Internet). When I checked the prices at my local appliance store, four port network switches started at under $50.00. Some were under $20.00. By comparison, when the Arpanet (the original Internet) started, it required a DEC PDP-1 mini computer to interconnect networks with the Arpanet. A DEC PDP-1 cost approximately $120,000.00 (1960s United States dollars). There was a huge change in cost from then to now. Raspberry Pis and inexpensive network switches are far more available than all the DEC PDP-1s ever made. They consume far less electrical power too. You can spend less than $500.00 to create a nice little Internet compatible network with a couple of computers.

[^8]: There are many different types of small computers.
I am focusing on Raspberry Pi because the 400, 500 and 500+ models present an all in one keyboard and computer combo which is easy to set up with little computing background. If you can attach your TV to your cable provider's box, you are likely able to set up one of these computers without much more effort.

Why do I keep pointing out prices? Back in the late 1980s, when I was a student and first encountered the Internet, the hardware and software used to connect to it cost a small fortune. The price of the Internet connected workstation I used at University was more than the value of my parents' suburban home! Creating an Internet compatible network at my home was not possible due to cost. I actually talked to the people who set up the University's network about doing this (I commuted from a long distance). Fast forward to 2026. Prices have changed. Computer availability has changed. In 1969 computers were still rare devices. Today there is one built into your TV and probably your toaster. The cost and availability have radically changed since the creation of the Web too. That should inform our expectations of how things can work. Something I couldn't do in 1989 is very doable in 2026.

In 2026 rural communities in the United States are forming their own Internet Provider cooperatives[^9]. These cooperatives are connecting homes using fiber optic cables. This transforms their access from none or slow to really fast and very reliable. It can also be done for a lower cost than relying on Big Co Internet Providers, if they even service the area. In 2026, in my city of 200,000 plus people, we don't have fiber optic connections to homes. In my case one Big Co paid another Big Co to stop expanding home fiber access anywhere in the county of Los Angeles. That includes my city. They've been paying the other Big Co for more than a decade. The Big Co created scarcity ensures their profit margins. They are like the railroad companies in the nineteenth and twentieth centuries.
Not about public service, not even about being effective transport corporations. It's all about profit at the expense of the public.

[^9]: Community Networks is a group that advocates for local network cooperatives, see <https://communitynets.org/content/cooperatives-build-community-networks>

### The Web on top of the Internet

Let's focus on the Web running on top of the Internet. What is it? The Web is a hypertext system built on top of the Internet. **Hypertext** is the key takeaway. It's the Web's original killer feature. The Web's hypertext system is built from a set of core technologies. These technologies are now mature. That collection includes things like HTTP, HTML, CSS, JavaScript and RSS. The two that go back to the beginning are HTTP and HTML. Let's take a look at where these started and where we are today.

HTTP stands for Hypertext Transport Protocol. It is a way of using the Internet protocol and text to reliably transfer hypertext between computers. The interaction model is a client (requester, web browser) and server (responder, web server). It is a call and response system. In 1992 this required specialized software. It required one or more skilled specialists to run it. Most websites ran on expensive multi user mini computers that cost the price of a suburban single family home. The computers required specialists to run and maintain them too. In short, it was an expensive luxury affordable only by large institutions with significant government funding[^10]. In 2026 most programming languages ship with a standard library that allows creating a web server in a few lines of code. You do not need to be a network systems programmer to create one. No networking engineer required either. Ethernet and WiFi are available as commodity hardware components that largely work plug and play. Today web servers run inside appliances. This allows them to be labeled as "smart" and to fetch a higher price.
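The "few lines of code" claim is easy to demonstrate. The sketch below uses only Python's standard library (Go, Ruby and others offer much the same); it starts a web server serving the current directory and then acts as its own first visitor. Letting the operating system pick the port is an assumption made only so the example runs anywhere.

~~~python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# A complete web server from the standard library. It serves the files
# in the current directory, much as an early 1990s web server served
# files from a directory on an expensive mini computer.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_port  # port 0 above let the OS pick a free port

# Run it in a background thread; on a dedicated machine you would
# simply call server.serve_forever() and leave it running.
threading.Thread(target=server.serve_forever, daemon=True).start()

# Be our own first visitor: request the front page over HTTP.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as response:
    status = response.status

print(status)  # 200, the HTTP code for a successful response
server.shutdown()
~~~

Point a browser on the same machine at that address and you have a working, private piece of the Web.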
You can do the same thing these embedded devices do using a $15.00 Raspberry Pi Zero 2W, a power supply and an SD Card for storage. A Raspberry Pi Zero 2W can even be configured to be a public WiFi access point[^11]. That's the impact of an abundance of computers and resources. Creating Web services is a solved problem.

[^10]: Originally the Internet, including the Web, was available to government funded research institutions. It was created to allow the sharing of science and technology among United States and allied institutions.

[^11]: See the Digital Free Library plans from 2014 at the AdaFruit website, <https://learn.adafruit.com/digital-free-library/what-youll-need>

The technology that originated back in the 1990s is still largely the same. It has just been updated slowly over time. That slowness has led many people not to notice the changes. They haven't fully revised the assumptions they made back in 1990, 2000, 2010 or 2020. None of what I discuss here is rocket science. It is clearly visible in computing history. It is through this historical view that you can see how things were and how they can be. I'm making the point that things have changed even when the collective wisdom of the Tech Bros and Big Co hasn't.

### Next neighbor opportunities

The Internet is built on next-neighbor connections. If I have a home internet owned by me and my neighbor has their own little home internet, we can connect them. I could literally pass an Ethernet cable through the fence if necessary. Doing so forms a slightly larger network. If we chose, we could split the costs of connecting to the public Internet, assuming we had a provider willing to allow that in their terms of service. Internet cooperatives take advantage of this simple relationship. The recurring bills are electricity and the common connection to a larger publicly connected Internet.
The way the Internet evolved is that each organization (university or research institution) paid to connect to their neighbor and agreed to carry their neighbor's traffic as well as their own. Larger organizations wound up having multiple connections to other institutions. They operated like hubs. Multiple connections enhanced reliability. Smaller institutions might connect to only one other Internet site. That was called a leaf connection on the network. Importantly, whether you were a hub or a leaf, you could reach any other available site on the network just by knowing its address.

The old metaphor, the Internet Super Highway, was based on the idea that each town paid for its roads and for a road connecting to the next ones. Roads interconnect. Traffic, in the form of cars, trucks and motorcycles, can follow the roads from one town to another. The road system can be expanded to include new towns, homes, cities or other destinations. Like the road system, the Internet is extensible. It can be expanded as needed just by adding connections. A home might be a computer, a town might be a local network with a small collection of computers, and cities might be large hubs with large data centers owned by Big Co.

In the real world most roads are owned collectively by the public. Some roads are private. Some private roads allow access for a toll. All are still just roads. The Internet today is built as a series of toll roads. There are few public roads. We all pay for access in cash, in loss of privacy and in loss of autonomy. Many commercial Internet Providers prohibit direct sharing of your network with your neighbors in their terms of service. These are human organizational choices. They are not technical choices or constraints. On the Internet today most people might own the device (for example, a phone) but they still rent access, with payments in the form of currency, lost privacy and lost autonomy.
When the companies wish, they can force the purchase of a new device by using the Internet to deliver software that disables the old one. This is the big reason I think we need to change our relationship. The country prospered when the public freeway system was created in the 1950s. The country could prosper if we had a real option of public Internet access mirroring our public roads. In the meantime we can take matters into our own hands. Own and control our computers and local networks. Form cooperatives for connecting to the Internet where appropriate.

## Changing the ownership model

It feels like a paradox. Ownership and control of our hardware give us agency to function better collectively. It reminds me of the adage, "you reach the global by first focusing on the local". What an interesting human concept. If we own our hardware and control it, we can choose to band together in cooperatives. We can change the equation and get out from under the thumb of Big Co and their toll system.

Many of us carry a smartphone in our pockets. These are computers, but most are not suited to creating a Web of our own. Why? If you are using an iPhone running iOS or an Android phone provisioned with Google's software, then Apple, Google or another Big Co controls your device. This is true even though you may have thought you purchased the phone. Case in point: I used to carry a Samsung phone. I really enjoyed it. It ran a version of Android controlled by Samsung. Samsung sent an update that bricked (disabled) the phone. When I reached out to them, the automated email reply indicated that since my device was over 3 years old I would have to buy a new one. My phone was five years old. It worked really well and I liked it. Samsung had made the decision to update the software on my phone knowing that it would make the phone inoperable. Needless to say, I haven't owned a Samsung phone since. I haven't trusted any Android device since. My Apple iPod mini faced a similar situation.
My point is that I owned the hardware but didn't control the software. It was really convenient that updates were pushed out. I really liked not paying attention to the details. My life is busy. That arrangement worked well right up until it didn't. If a corporation or government controls the software, then they also control the hardware. It doesn't matter how much you paid to purchase it. You don't really own it. Good to know.

So this is what I propose. We individually obtain computers where we control the software. The computers don't have to be powerful. I've done real computing (writing software) using a Raspberry Pi 400 and a Raspberry Pi 500. I have chosen to go with new computers because I keep them a really long time. I still have a Raspberry Pi 2 that works. Skipping Starbucks and some pizzas allowed me to save for these relatively inexpensive new computers. I understand that I'm privileged to be able to afford them.

You don't have to go with new machines. There are less expensive options. I have a ten year old and a fifteen year old Mac Mini. I can still use them. I got them used. I think I paid five dollars for one and the other was given to me. Since they speak IPv4 I can run them on my private network. I wouldn't run them on the public Internet. Apple stopped updating their OS for these machines years ago. They can be run safely on a private network. They don't run the latest web browser, but my website doesn't use the latest bells and whistles either. My point is they still work and can be used to curate or produce web content, even if another machine is used to make it available on the public Internet. There is a thriving market in refurbished and used machines. Companies and governments often lease hardware. When the lease is up after two or three years, all that equipment either goes to e-waste or is resold. Going refurbished and used has the advantage of not adding to the e-waste problem.
There are also civic groups that get refurbished equipment to people who need it at low or no cost. Getting a computer to write web content can be challenging, but it is possible even when you have limited means. You don't need a powerful machine, and you don't need the latest, fastest one either. You need one that has a text editor and can run software to turn Markdown into HTML.

Here's what I used for writing this post (it has the advantage of being portable to the nearest electrical plug).

- Raspberry Pi 500 and power supply
- Raspberry Pi Monitor and power supply
- Raspberry Pi Mouse
- cables to connect everything
- a wireless switch connected to a cable modem and my Internet Provider
- a Raspberry Pi 3B+ with a 3 gigabyte hard drive set up as a "server" (makes this site available on my home network[^12])
- I publish this site via the GitHub Pages service for public Internet access (I have the least expensive subscription for this)

[^12]: I can view my personal web on my home network from my phone, tablet and computers. So can the rest of my family.

The software I am using to write this post is as follows (all programs are open source software, free to share, free to use).

- Raspberry Pi OS (a Linux distribution based on Debian GNU Linux)
- Firefox (web browser)
- Mousepad (the text editor that ships with Raspberry Pi OS)
- antennaApp (a command line content system and web server I wrote)

With this software and hardware setup I can publish my blog (see <https://rsdoiel.github.io>) and I can aggregate the news (see <https://rsdoiel.github.io/antenna>). I run the most up to date copy of both on my private home network. I can view the home network copy on my phone as well as my computer. My family can view it too on the home network. I update the public copy periodically. That way when I am away from my home network I can still read the aggregated news. The setup provides a little corner of the Web which I own and control. It is not hard to replicate it for yourself.
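The phrase "software to turn Markdown into HTML" may sound like magic, but the core transformation is simple. Below is a toy sketch handling only headings, bold text and paragraphs. It is an illustration of the idea, not the actual code of Antenna App or any real converter; real tools implement the full CommonMark specification.

~~~python
import re

def tiny_markdown(text: str) -> str:
    """Convert a tiny subset of Markdown (headings, bold, paragraphs) to HTML.

    A toy illustration only. It exists to show that the core of
    "software to turn Markdown into HTML" is not complicated.
    """
    html_blocks = []
    for block in text.strip().split("\n\n"):
        block = block.strip()
        # Inline markup: **bold** becomes <strong>bold</strong>.
        block = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", block)
        heading = re.match(r"(#{1,6}) (.*)", block)
        if heading:
            level = len(heading.group(1))  # number of leading # characters
            html_blocks.append(f"<h{level}>{heading.group(2)}</h{level}>")
        else:
            html_blocks.append(f"<p>{block}</p>")
    return "\n".join(html_blocks)

html = tiny_markdown("# A post\n\nSome **important** text.")
print(html)
# <h1>A post</h1>
# <p>Some <strong>important</strong> text.</p>
~~~

Any machine that can run a text editor and a small program like this can produce web content.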
I don't need to use Yahoo News, Google News, Bing, Twitter/X, Facebook, Instagram, WhatsApp, Spotify or YouTube to know what is happening. I just check my own aggregations. Since I didn't implement an infinite scroll, and I aggregate on a slow schedule, I don't get sucked into doom scrolling. Slow news gives me more time for being with the humans I love and experiencing real life without distraction. When I read my aggregated site it feels much more like choosing to read a newspaper or magazine.

The open source software I created to make this easy to do is called [Antenna App](https://rsdoiel.github.io/antennaApp). You can run the latest version on macOS, Windows, Linux and Raspberry Pi OS machines. The Antenna App software is driven by Markdown files. Markdown is a really good expression of hypertext. Posts and pages are Markdown files. The list of websites I aggregate is defined by Markdown files containing a list of links to RSS news feeds. The Antenna App takes care of harvesting content and generating the HTML files, RSS and sitemaps used by your web browser. Antenna App is written as a command line tool. It could be re-implemented as a graphical system or interactive program. My software is released under an open source license, so anyone can build on what I've already provided as long as they respect the terms of the license (a GNU license). There are other software systems out there. I mention mine because it proves this is possible. You should look for one that works for you.

## Tiny computers are like tiny homes

I use two computers for my home network. One is a Raspberry Pi 500 and the other is a Raspberry Pi 3B+ with a USB hard drive. I write using the Pi 500, but the Pi 3B+ is where this blog lives. A public copy is managed via Git and GitHub. The public copy is where you are likely reading this. But the public copy is just that, a copy. It's cheap to copy bits in digital space. I could actually just use the one computer.
That's because operating systems like Raspberry Pi OS support the concept of localhost. Localhost presents the machine as if it is a network node. If I had a Linux based phone I could run the aggregation service directly on it. Then I would have my Web right there in my pocket. In the meantime, I am saving my pennies for a Linux-based phone.

Working with small computers is like living in a small or tiny home. It can be very cozy and comfortable. It will never be a mansion. Mansions and castles are fine for some people. While I've enjoyed visiting a few castles, I would not choose to live in one. They are really expensive to own, heat/cool and maintain. I like small and simple. I choose to live in a cottage. I accept that living in a small home isn't for everyone, just as running little computers isn't for everyone. That is why I don't say people should abandon the computer systems that work for them. I am speaking to people, like myself, who have a problem with the predatory Web and Internet we have today: assert ownership (individually or collectively) to correct our relationship. Collectively we need a Web and Internet where we are co-owners and participants. I am no longer interested in being a tenant and a product.