Jens Willmer In this blog I write about my projects and everything else that comes to my mind. https://jwillmer.de/ Fri, 27 Jun 2025 21:39:34 +0000 Fri, 27 Jun 2025 21:39:34 +0000 Jekyll v3.10.0 Gear Ratio Advanced: Velocity, Torque & Efficiency <p>This post builds on <a href="https://jwillmer.de/blog/tutorial/gear-ratio-basics">Gear Ratio Basics</a> and explains how gear ratios affect speed (velocity), torque, and efficiency. Understanding these concepts helps you design gear systems that work the way you need them to.</p> <h3 id="-gear-ratio-and-velocity">🔄 Gear Ratio and Velocity</h3> <p>The gear ratio controls how fast the output gear turns compared to the input gear. The formula for calculating this is shown below:</p> \[\text{Gear Ratio} = \frac{\text{Input Speed}}{\text{Output Speed}}\] <p>For example, if your input gear rotates at 200 revolutions per minute (RPM) and you use a 2:1 gear ratio, you can calculate the output speed like this:</p> \[\text{Output Speed} = \frac{200\, \text{RPM}}{2} = 100\, \text{RPM}\] <p>This means the output gear turns at 100 RPM. A higher gear ratio slows down the output gear because the driven gear rotates fewer times for each turn of the driver gear. That’s how gear reductions let you decrease speed when needed, like in a robot’s drivetrain or a power tool’s gearbox.</p> <h3 id="-gear-ratio-and-torque">💪 Gear Ratio and Torque</h3> <p>Gear ratios also affect torque, which is the twisting force transmitted through the gears. The relationship between gear ratio and torque is:</p> \[\text{Output Torque} = \text{Input Torque} \times \text{Gear Ratio}\] <p>For example, if your motor produces 1 newton-meter (Nm) of torque and you use a 3:1 gear ratio, you calculate the output torque like this:</p> \[\text{Output Torque} = 1\, \text{Nm} \times 3 = 3\, \text{Nm}\] <p>This means you get 3 Nm of torque at the output. By slowing the rotation, the gear system lets the same power apply more force. 
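Both relationships above are simple enough to encode directly; here is a short Python sketch that computes output speed and torque from a gear ratio, using the example values from the text:

```python
def output_speed(input_rpm: float, gear_ratio: float) -> float:
    """Speed drops by the gear ratio: ratio = input speed / output speed."""
    return input_rpm / gear_ratio

def output_torque(input_torque_nm: float, gear_ratio: float) -> float:
    """Torque grows by the gear ratio: output torque = input torque * ratio."""
    return input_torque_nm * gear_ratio

# The two examples from the text:
print(output_speed(200, 2))   # 200 RPM through a 2:1 reduction -> 100.0 RPM
print(output_torque(1, 3))    # 1 Nm through a 3:1 reduction -> 3 Nm
```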
In practical terms, a gear reduction gives you more torque at the cost of slower speed, which is essential when you need to move something heavy.</p> <h3 id="️-direction-reminder">⚠️ Direction Reminder</h3> <p>Whenever gears mesh, they change the direction of rotation. Every time you add a gear between the driver and the driven gear, the direction reverses again. For example, if you connect your driver gear to an idler gear, the idler reverses the direction, and the next gear will spin the opposite way compared to the driver. An idler gear does not change the gear ratio, but it flips the rotation direction once more. An even number of gears in the train means the output spins in the same direction as the input. An odd number of gears means the output spins in the opposite direction.</p> <h3 id="️-gear-efficiency">⚙️ Gear Efficiency</h3> <p>No gear system is perfectly efficient. Energy is lost through friction, heat, and flexing of materials, so it is important to understand gear efficiency. Spur gears usually operate at 95 to 98 percent efficiency for each gear mesh. Worm gears have much lower efficiency, often below 70 percent, because of the sliding friction that happens between the worm and worm wheel.</p> <p>To calculate the total efficiency of a gear train with multiple stages, multiply the efficiency of each gear pair together. For example, if you have two spur gear pairs with 97 percent efficiency each, you calculate the total efficiency like this:</p> \[\text{Total Efficiency} = 0.97 \times 0.97 = 0.9409 \quad (\text{or } 94.1\%)\] <p>This means your gear train will deliver about 94.1 percent of the input power to the output. Lower efficiency means more heat and less useful power, which can become a problem in systems where energy savings or heat management are important.</p> <h3 id="-summary">âś… Summary</h3> <p>A higher gear ratio reduces the output speed but increases the output torque. 
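The efficiency rule above also reduces to one line of code: multiply the per-mesh efficiencies of every stage. A short Python sketch using the 97 percent spur-gear example:

```python
from math import prod

def train_efficiency(mesh_efficiencies):
    """Total efficiency of a gear train: the product of each mesh's efficiency."""
    return prod(mesh_efficiencies)

# Two spur gear pairs at 97% each, as in the example above:
total = train_efficiency([0.97, 0.97])
print(f"{total:.4f}")  # 0.9409, i.e. about 94.1% of input power reaches the output
```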
Gears reverse rotation direction with each mesh, and the total number of gears in a train determines whether the output spins in the same or opposite direction as the input. Efficiency losses add up with every gear pair, so always include them in your calculations when designing a gear system.</p> Fri, 27 Jun 2025 21:21:00 +0000 https://jwillmer.de/blog/tutorial/gear-ratio-advanced https://jwillmer.de/blog/tutorial/gear-ratio-advanced gear ratio calculation mechanics Tutorial What is Retrieval-Augmented Generation (RAG) <p>Large language models are impressive, but they’re limited by what they were trained on. They can’t access your internal documentation, stay current with new data, or reliably distinguish fact from fiction.</p> <p><strong>Retrieval-Augmented Generation (RAG)</strong> addresses this gap. It augments a language model by giving it access to external data at runtime. When a question is asked, the system first retrieves relevant information from a knowledge base—usually a vector database of semantically indexed chunks. Only then does the model generate a response, grounded in this retrieved context.</p> <p>This enables more accurate, domain-aware, and verifiable answers without retraining the model. RAG effectively gives language models a dynamic memory—on your terms.</p> <h2 id="when-to-use-rag"><strong>When to Use RAG</strong></h2> <p>Retrieval-Augmented Generation is ideal when your application needs accurate, current, and domain-specific responses—but you don’t want to (or can’t) retrain the model.</p> <p>Use RAG when:</p> <ul> <li><strong>Your data changes frequently.</strong> Traditional fine-tuning locks knowledge at training time. RAG lets you update answers by simply changing the underlying documents.</li> <li><strong>You need traceability.</strong> With RAG, every response is backed by retrievable content. 
Users (or auditors) can trace outputs to their original source.</li> <li><strong>Your knowledge is proprietary.</strong> Whether it’s internal policies, customer reports, or technical documentation, RAG can surface private data securely at inference time.</li> <li><strong>You want modular updates.</strong> By storing and referencing chunks with unique IDs, you can update individual pieces of content without retraining or reindexing everything.</li> </ul> <p>RAG is especially useful for:</p> <ul> <li>Internal support agents</li> <li>Developer or product documentation assistants</li> <li>Compliance and legal tools</li> <li>Systems needing multilingual or version-aware responses</li> </ul> <p>If your app needs <em>dynamic answers with real references</em>, RAG is the right foundation. It’s not just a theoretical model—real systems are using it to solve hard problems today.</p> <p>Real-world systems already use this architecture to great effect. For instance, <a href="https://mem0.com/">Mem0</a> implements a memory layer built on RAG. It retrieves semantically indexed memory entries—rather than relying on fragile prompt chains—enabling consistent, personalized responses over time.</p> <p>At the infrastructure level, vector search engines like <a href="https://qdrant.tech/">Qdrant</a> power these retrieval systems. Qdrant supports hybrid filtering, payload scoring, and fast nearest neighbor search, making it ideal for large-scale, production-grade RAG systems.</p> <h2 id="preparing-data-for-ingestion"><strong>Preparing Data for Ingestion</strong></h2> <p>To make RAG work reliably, your content must be structured for retrieval and usable by a language model. This isn’t about dumping documents into a vector database—it’s about shaping the content so the model can reason over it effectively.</p> <p>Start by <strong>extracting clean content</strong> from your sources. Remove layout artifacts, navigation elements, and anything irrelevant to the actual information. 
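To make the later steps of this section concrete — clean text, descriptive metadata, and a stable ID all attach to each chunk — here is a minimal, purely illustrative sketch of such an ingestion record (the field names and helper are assumptions, not a specific library's schema):

```python
import hashlib
import json

def make_chunk(text: str, source_url: str, section: str, version: str) -> dict:
    """Build one ingestion record: cleaned text plus queryable metadata and a
    stable ID derived from the source location (not the text), so the entry
    can be updated in place when the content changes."""
    stable_id = hashlib.sha256(f"{source_url}#{section}".encode()).hexdigest()[:16]
    return {
        "id": stable_id,
        # Context is prepended so the embedding encodes where the text belongs:
        "text": f"From the {section} section of {source_url}: {text}",
        "metadata": {"source_url": source_url, "section": section, "version": version},
    }

chunk = make_chunk(
    "The system will reject login attempts after 5 failed tries.",
    "https://example.com/security-policy", "user authentication", "2.1",
)
print(json.dumps(chunk, indent=2))
```

Because the ID depends only on the source location, re-ingesting a revised document overwrites the old entry instead of duplicating it.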
You want concise, plain-language text that reflects what a human would read to understand the topic.</p> <p>Next, <strong>normalize the language</strong>. Rewrite content — especially code snippets, config files, or logs — into complete, natural sentences. Instead of embedding a raw string like <code>user_limit: 250</code>, convert it to “The maximum number of users allowed is 250.” The goal is natural language that the model can easily process and use in a response.</p> <p>Every chunk should include <strong>descriptive metadata</strong>. This includes values like the source URL, section title, date, author, product or version, and any tags or classifications you use internally. Metadata can be stored separately or embedded into the text depending on your system design, but it must be consistent and queryable.</p> <p>Finally, and critically, assign a <strong>unique and stable ID</strong> to every chunk or document. This lets you update or delete specific entries later without affecting the rest of your dataset. It’s essential for keeping your index maintainable over time.</p> <p>RAG is only as good as the data it retrieves—so preparing your content with care is the foundation for everything that follows.</p> <h2 id="improving-retrieval-with-context"><strong>Improving Retrieval with Context</strong></h2> <p>Once your data is clean, natural, and enriched with metadata, the next step is making it <strong>retrievable in a meaningful way</strong>. This is where context comes in. A single paragraph or sentence often lacks enough information on its own to match a user’s query effectively. By embedding <strong>context into each chunk</strong>, you improve both recall and precision during retrieval.</p> <p>Inspired by <a href="https://www.anthropic.com/news/contextual-retrieval">Anthropic’s contextual retrieval approach</a>, one method is to <strong>prepend a short description</strong> that explains what the chunk is about. 
For example, instead of storing:</p> <blockquote> <p>“The system will reject login attempts after 5 failed tries.”</p> </blockquote> <p>You might store:</p> <blockquote> <p>“From the user authentication section of the security policy: The system will reject login attempts after 5 failed tries.”</p> </blockquote> <p>This extra framing helps the embedding model encode <em>why</em> this text matters and <em>where</em> it belongs. It also improves match quality for more abstract or high-level questions like “What are our login security rules?”</p> <p>Context can come from:</p> <ul> <li>Section headings or document structure</li> <li>File paths or category tags</li> <li>Summaries or topic labels</li> <li>Manual annotations (if scale allows)</li> </ul> <p>In addition to prepending context to text, you can enrich your vector index with structured metadata. Many systems support hybrid search—combining vector similarity with keyword filters. For example, you can restrict results by audience (<code>developer</code>), language (<code>de</code>), or document type (<code>release_notes</code>).</p> <p>The key idea is: <strong>make the meaning explicit</strong>. Give the system as much information as possible, up front, to help it retrieve the right content later.</p> <h2 id="conclusion-why-rag-matters"><strong>Conclusion: Why RAG Matters</strong></h2> <p>RAG brings structure, memory, and accountability to generative systems. 
It bridges the gap between static models and real-world knowledge — without the overhead of retraining.</p> <p>To recap:</p> <ul> <li>Use RAG when your data changes often, needs to stay private, or must be cited.</li> <li>Prepare your data with clean, readable language and real metadata.</li> <li>Track unique IDs for each entry so your dataset stays maintainable.</li> <li>Add contextual information to each chunk to improve retrieval precision.</li> </ul> <p>Done right, RAG systems are more flexible than fine-tuning, more trustworthy than standalone LLMs, and more adaptable to your evolving needs.</p> <p>If you’re building AI systems that need to be smart <em>and</em> reliable, RAG isn’t just an option—it’s the standard.</p> Tue, 20 May 2025 06:20:00 +0000 https://jwillmer.de/blog/tutorial/retrieval-augmented-generation https://jwillmer.de/blog/tutorial/retrieval-augmented-generation rag llm agent ai vector-search memory-layer semantic-search retrieval-augmented-generation Tutorial Docker Networking Pitfalls <p>Docker networking is powerful but can be confusing, especially when dealing with communication between the host machine and Docker containers. Many developers assume that network behavior inside a Docker network works the same as on the host, leading to unexpected issues. In this post, we’ll explore common pitfalls when running services with Docker Compose and how to handle them correctly.</p> <hr /> <h2 id="the-basics-host-vs-docker-network">The Basics: Host vs. Docker Network</h2> <p>Docker Compose creates an isolated network for services to communicate. Each container gets a unique DNS name corresponding to its service name in <code>docker-compose.yml</code>. 
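These DNS rules mean that the correct hostname depends entirely on where a request originates. The decision logic covered in this post can be condensed into a small, purely illustrative Python helper (it encodes the rules of thumb, it is not a Docker API):

```python
def target_host(source: str, destination: str, service_name: str = "app-store") -> str:
    """Return the hostname to use for a given communication direction.

    source/destination are "host" (the machine running Docker) or
    "container" (a service on the Compose network).
    """
    if source == "host" and destination == "container":
        return "localhost"             # plus the port mapped in docker-compose.yml
    if source == "container" and destination == "container":
        return service_name            # Docker's internal DNS resolves service names
    if source == "container" and destination == "host":
        return "host.docker.internal"  # requires extra_hosts on Linux
    return "localhost"                 # host talking to itself

print(target_host("container", "host"))  # host.docker.internal
```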
However, the way networking works inside Docker is different from how it works on the host machine.</p> <h3 id="host-perspective-external-access"><strong>Host Perspective (External Access)</strong></h3> <p>When accessing a service from the host machine (outside Docker), you must use <code>localhost</code> and the mapped port:</p> <pre><code class="language-bash">curl http://localhost:8080 # Accessing a container-bound service via a mapped port </code></pre> <p>You <strong>cannot</strong> use Docker-internal DNS names (such as <code>service_name</code>) or <code>host.docker.internal</code> from the host machine.</p> <h3 id="docker-container-perspective-internal-communication"><strong>Docker Container Perspective (Internal Communication)</strong></h3> <p>Containers within the same Docker network can refer to each other by service name:</p> <pre><code class="language-bash">curl http://app-store:5000 # Works inside Docker </code></pre> <p>But if a container needs to communicate with a service running <strong>on the host machine</strong>, it must use <code>host.docker.internal</code>:</p> <pre><code class="language-bash">curl http://host.docker.internal:3000 # Works inside Docker to reach host </code></pre> <hr /> <h2 id="common-pitfalls-and-how-to-solve-them">Common Pitfalls and How to Solve Them</h2> <hr /> <h3 id="pitfall-1-trying-to-access-a-container-using-service-names-from-the-host"><strong>Pitfall 1: Trying to Access a Container Using Service Names from the Host</strong></h3> <h4 id="-incorrect">❌ Incorrect:</h4> <pre><code class="language-bash">curl http://app-store:5000 # Won't work from the host machine </code></pre> <p>The <code>app-store</code> service name is only resolvable inside Docker’s internal network.</p> <h4 id="-correct">✅ Correct:</h4> <pre><code class="language-bash">curl http://localhost:5000 # Use localhost + mapped port from the host </code></pre> <p>This works if the port is correctly mapped in <code>docker-compose.yml</code>:</p> 
<pre><code class="language-yaml">docker-compose.yml: services: app-store: ports: - "5000:5000" </code></pre> <hr /> <h3 id="pitfall-2-using-localhost-inside-a-container-to-reach-another-container"><strong>Pitfall 2: Using <code>localhost</code> Inside a Container to Reach Another Container</strong></h3> <h4 id="-incorrect-1">❌ Incorrect:</h4> <pre><code class="language-bash">curl http://localhost:5000 # Won't work inside a container </code></pre> <p>Inside a container, <code>localhost</code> refers to <strong>itself</strong>, not other services in the Docker network.</p> <h4 id="-correct-1">✅ Correct:</h4> <pre><code class="language-bash">curl http://app-store:5000 # Use the service name inside Docker </code></pre> <hr /> <h3 id="pitfall-3-a-container-trying-to-access-a-host-service-without-hostdockerinternal"><strong>Pitfall 3: A Container Trying to Access a Host Service Without <code>host.docker.internal</code></strong></h3> <h4 id="-incorrect-2">❌ Incorrect:</h4> <pre><code class="language-bash">curl http://localhost:3306 # Won't work inside Docker </code></pre> <p>Since <code>localhost</code> inside a container refers to the container itself, this won’t connect to a host service.</p> <h4 id="-correct-2">✅ Correct:</h4> <pre><code class="language-bash">curl http://host.docker.internal:3306 # Correct way inside Docker </code></pre> <hr /> <h3 id="pitfall-4-web-applications-generating-incorrect-urls"><strong>Pitfall 4: Web Applications Generating Incorrect URLs</strong></h3> <p>Web applications often generate links for users dynamically based on their environment. 
If the application is running inside a Docker container, it may generate links using internal Docker service names, which are not accessible to users.</p> <h4 id="-incorrect-3">❌ Incorrect:</h4> <pre><code class="language-html">&lt;a href="http://web-service:8080"&gt;Click here&lt;/a&gt; &lt;!-- Won't work for the user --&gt; </code></pre> <h4 id="-correct-3">✅ Correct:</h4> <p>Ensure the application differentiates between:</p> <ul> <li><strong>Internal URLs</strong> (used by services within Docker, such as <code>web-service:8080</code>)</li> <li><strong>External URLs</strong> (used by users, such as <code>localhost:8080</code> or a domain name)</li> </ul> <hr /> <h2 id="understanding-hostdockerinternal-availability">Understanding <code>host.docker.internal</code> Availability</h2> <p><code>host.docker.internal</code> is a special hostname that resolves to the host machine’s IP address from within a Docker container. However, its availability depends on the operating system:</p> <table> <thead> <tr> <th>Platform</th> <th>Availability</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td><strong>Windows</strong></td> <td>✅ Available</td> <td>Works out of the box</td> </tr> <tr> <td><strong>Mac (Intel &amp; M1/M2)</strong></td> <td>✅ Available</td> <td>Built-in since Docker Desktop 18.03</td> </tr> <tr> <td><strong>Linux</strong></td> <td>❌ Not Available</td> <td>Requires manual setup via <code>extra_hosts</code></td> </tr> <tr> <td><strong>WSL 2</strong></td> <td>✅ Available</td> <td>Works with Docker Desktop</td> </tr> <tr> <td><strong>iOS</strong></td> <td>❌ Not Available</td> <td>No official support</td> </tr> </tbody> </table> <p>For Linux users, a workaround is required:</p> <pre><code class="language-yaml">docker-compose.yml: services: my-service: extra_hosts: - "host.docker.internal:host-gateway" </code></pre> <p>This maps <code>host.docker.internal</code> to the host gateway.</p> <hr /> <h2 id="visualizing-the-networking-model">Visualizing the 
Networking Model</h2> <table> <thead> <tr> <th>Source</th> <th>Destination</th> <th>Works?</th> <th>Solution</th> </tr> </thead> <tbody> <tr> <td><strong>Host → Container</strong></td> <td><code>localhost:port</code></td> <td>✅</td> <td>Use mapped port in <code>docker-compose.yml</code></td> </tr> <tr> <td><strong>Host → Container</strong></td> <td><code>service_name</code></td> <td>❌</td> <td>Won’t resolve</td> </tr> <tr> <td><strong>Container → Container</strong></td> <td><code>service_name</code></td> <td>✅</td> <td>Works within the same Docker network</td> </tr> <tr> <td><strong>Container → Host</strong></td> <td><code>localhost</code></td> <td>❌</td> <td>Refers to itself</td> </tr> <tr> <td><strong>Container → Host</strong></td> <td><code>host.docker.internal:port</code></td> <td>✅</td> <td>Works correctly</td> </tr> <tr> <td><strong>Web App → User Link</strong></td> <td><code>service_name:port</code></td> <td>❌</td> <td>Won’t work for users</td> </tr> <tr> <td><strong>Web App → User Link</strong></td> <td><code>localhost:port</code></td> <td>✅</td> <td>Use proper externally accessible URLs</td> </tr> </tbody> </table> <hr /> <h2 id="key-takeaways">Key Takeaways</h2> <figure> <img src="https://jwillmer.de/media/img/2025-02-02-docker-networking-pitfalls.drawio.png" /> <figcaption>Overview of the communication flow</figcaption> </figure> <ul> <li><strong>Use <code>localhost:port</code> to access Docker services from the host.</strong></li> <li><strong>Use service names for inter-container communication inside Docker.</strong></li> <li><strong>Use <code>host.docker.internal</code> for Docker-to-host communication (if supported).</strong></li> <li><strong>Linux requires manual configuration for <code>host.docker.internal</code>.</strong></li> <li><strong>Mapped ports are crucial for external access.</strong></li> <li><strong>Ensure web applications generate URLs users can actually reach.</strong></li> </ul> <p>By keeping these rules in mind, you can avoid 
common networking pitfalls when using Docker Compose. Share this guide with your colleagues to clarify how Docker networking works and improve debugging efficiency!</p> Sun, 02 Feb 2025 21:15:00 +0000 https://jwillmer.de/blog/programming/docker-networking-pitfalls https://jwillmer.de/blog/programming/docker-networking-pitfalls docker docker-compose networking devops containers port-mapping Programming Interactive Motion Extraction with JavaScript <h1 id="motion-extraction-in-action-real-time-video-processing-with-javascript">Motion Extraction in Action: Real-time Video Processing with JavaScript</h1> <p>In this blog post, we’ll explore a motion extraction technique inspired by an approach presented in <a href="https://www.youtube.com/watch?v=NSS6yAMZF78">this YouTube video by Steve of CodeParade</a>. Steve’s work showcases impressive concepts in image processing, and this implementation applies similar ideas using JavaScript to manipulate video frames in real time. You can interact with the code embedded below or visit the GitHub gist for the complete snippet.</p> <h2 id="how-motion-extraction-works">How Motion Extraction Works</h2> <p>Motion extraction refers to isolating differences between frames in a video or stream. 
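At its core the effect is per-pixel arithmetic: invert the previous frame and average it with the current one, so unchanged pixels settle at mid-grey while changed pixels stand out. A dependency-free Python sketch of the same math the JavaScript version performs on canvas pixel data (a simplification of the idea, not the actual gist code):

```python
def extract_motion(prev_frame, curr_frame):
    """Blend each pixel of the current frame with the inverted previous frame.
    Static pixels average out to mid-grey; changed pixels deviate from it."""
    return [
        [((255 - p) + c) // 2 for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

# Two tiny 1x4 greyscale "frames": only the last pixel changes (200 -> 40).
prev = [[10, 10, 200, 200]]
curr = [[10, 10, 200, 40]]
print(extract_motion(prev, curr))  # [[127, 127, 127, 47]]
```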
This technique can be useful in scenarios like surveillance, where detecting motion helps trigger specific events, or for analyzing movements in sports or other activities.</p> <p>This example utilizes the following core steps:</p> <ol> <li><strong>Frame Capture</strong>: We take snapshots of a video feed (from the camera or an uploaded file).</li> <li><strong>Color Inversion</strong>: Each frame is inverted to enhance the contrast between motion and background.</li> <li><strong>Motion Detection</strong>: By comparing consecutive frames, we cancel out parts that remain static, highlighting only the regions that exhibit change (motion).</li> <li><strong>Display</strong>: The processed frame is displayed, and the effect is repeated in intervals to create continuous motion detection.</li> </ol> <p>Users can adjust parameters like <strong>resolution</strong> and <strong>delay</strong> to experiment with the results.</p> <h2 id="key-features-of-the-code">Key Features of the Code</h2> <ul> <li><strong>Live or Uploaded Video</strong>: You can toggle between your camera feed and an uploaded video file.</li> <li><strong>Freeze Frame</strong>: This feature freezes one frame so that all subsequent motion is shown with respect to the frozen frame.</li> <li><strong>Inverted Color Blending</strong>: The code uses color inversion and blending between consecutive frames to cancel out static areas, highlighting only moving objects.</li> </ul> <h3 id="freeze-frame-example">Freeze Frame Example</h3> <p>Imagine you activate the freeze-frame option, and at that moment, a grey background is captured, like in the image below:</p> <p><img src="/media/img/2024-10-13-motion-extraction_no-motion.jpg" alt="Frozen Grey Scene" /></p> <p>Now, when you remove an object from the scene—such as a highlighting pen—the motion extraction process will clearly detect what has changed. 
In the following image, you can see the pen, which was removed, now highlighted by the motion extraction algorithm:</p> <p><img src="/media/img/2024-10-13-motion-extraction_missing-item.jpg" alt="Motion Detected: Removed Pen" /></p> <p>This technique is highly effective in identifying what has been removed or changed in the scene by comparing the captured freeze-frame with subsequent frames.</p> <h3 id="try-it-out">Try it Out</h3> <p>Here is the live version of the code, which you can interact with to test motion extraction:</p> <iframe src="/assets/posts/motion-extraction.html" width="100%" height="800px"></iframe> <p>For those who want to dive deeper into the code, you can check out the <a href="https://gist.github.com/jwillmer/06d5d71f794cac86a45f40c18ae21fc1">GitHub Gist</a>.</p> Sun, 13 Oct 2024 19:15:00 +0000 https://jwillmer.de/blog/programming/motion-extraction https://jwillmer.de/blog/programming/motion-extraction motion extraction detection javascript video processing surveillance Programming Personal Docker Cheat Sheet <p>This is a personal reference for Docker commands that are not used often but frequently need to be looked up. 
Instead of searching every time, this list provides direct access to those less common yet essential commands.</p> <h4 id="remove-all-stopped-containers"><strong>Remove All Stopped Containers</strong></h4> <pre><code class="language-bash">docker rm $(docker ps -a -q) </code></pre> <ul> <li><strong>Explanation</strong>: This command removes all stopped containers.</li> <li><code>docker ps -a -q</code>: Lists the IDs of all containers (stopped and running).</li> <li><code>docker rm</code>: Removes the listed containers.</li> </ul> <h4 id="remove-all-docker-images-not-in-use"><strong>Remove All Docker Images Not In Use</strong></h4> <pre><code class="language-bash">docker rmi $(docker images -q) </code></pre> <ul> <li><strong>Explanation</strong>: This command removes all Docker images.</li> <li><code>docker images -q</code>: Lists the IDs of all images.</li> <li><code>docker rmi</code>: Removes the listed images.</li> </ul> <h4 id="check-docker-disk-usage"><strong>Check Docker Disk Usage</strong></h4> <pre><code class="language-bash">docker system df -v </code></pre> <ul> <li><strong>Explanation</strong>: Shows detailed information about Docker disk usage, including volumes, images, containers, and how much space they occupy.</li> </ul> <h4 id="remove-all-unused-volumes-except-specified-ones"><strong>Remove All Unused Volumes Except Specified Ones</strong></h4> <p>Test Command (preview volumes to be deleted):</p> <pre><code class="language-bash">docker volume ls -qf dangling=true | grep -vE 'volume1|volume2' </code></pre> <p>Actual Removal Command:</p> <pre><code class="language-bash">docker volume ls -qf dangling=true | grep -vE 'volume1|volume2' | xargs -r docker volume rm </code></pre> <ul> <li><strong>Explanation</strong>: This command removes all unused (dangling) Docker volumes except the specified ones.</li> <li><code>docker volume ls -qf dangling=true</code>: Lists all unused volumes. 
<ul> <li><code>-q</code>: Shows only the volume names.</li> <li><code>-f dangling=true</code>: Filters to list only the volumes that are dangling (unused).</li> </ul> </li> <li><code>grep -vE 'volume1|volume2'</code>: Filters out the volumes you want to keep (replace <code>volume1</code>, <code>volume2</code>, etc.).</li> <li><code>xargs -r docker volume rm</code>: Passes the remaining volume names to <code>docker volume rm</code> and removes them. The <code>-r</code> option ensures the command is only run if there are volumes to delete.</li> </ul> Thu, 19 Sep 2024 13:35:00 +0000 https://jwillmer.de/blog/tutorial/docker-cheat-sheet https://jwillmer.de/blog/tutorial/docker-cheat-sheet docker commands cheat sheet Tutorial Zero Downtime Deployment with Docker Rollout <p>This guide demonstrates a <strong>Zero Downtime Deployment</strong> using <a href="https://github.com/Wowu/docker-rollout"><strong>Docker Rollout</strong></a>, featuring an intentional failure, rollback scenario, and a successful rollout. 
Additionally, I demonstrate the use of a <a href="https://traefik.io/traefik/">Traefik reverse proxy</a> to expose the service ports.</p> <h4 id="prerequisites">Prerequisites:</h4> <p>Ensure you have:</p> <ul> <li><strong>Docker</strong> and <strong>Docker Compose</strong> installed.</li> <li><a href="https://github.com/Wowu/docker-rollout#installation"><strong>Docker Rollout</strong> installed</a>:</li> </ul> <pre><code class="language-bash"># Create directory for Docker cli plugins mkdir -p ~/.docker/cli-plugins # Download docker-rollout script to Docker cli plugins directory curl https://raw.githubusercontent.com/wowu/docker-rollout/master/docker-rollout -o ~/.docker/cli-plugins/docker-rollout # Make the script executable chmod +x ~/.docker/cli-plugins/docker-rollout </code></pre> <h4 id="directory-setup">Directory Setup</h4> <p>Create the following files in your working directory:</p> <p><strong>Dockerfile-1</strong> (working version):</p> <pre><code class="language-Dockerfile"># Use the official NGINX base image FROM nginx:latest # Create a custom text file with "Hello World V1" RUN echo "Hello World V1" &gt; /usr/share/nginx/html/index.html # Expose port 80 (the default NGINX port) EXPOSE 80 </code></pre> <p><strong>Dockerfile-2</strong> (intentional failure):</p> <pre><code class="language-Dockerfile"># Use the official NGINX base image FROM nginx:latest # Create a custom text file with "Hello World V2" RUN echo "Hello World V2" &gt; /usr/share/nginx/html/index.html # Expose port 8080 (causing health check failure due to mismatch) EXPOSE 8080 </code></pre> <blockquote> <p><strong>Note:</strong> Version 2 exposes port 8080, which will cause the health check to fail initially as it is configured to check port 80.</p> </blockquote> <p><strong>docker-compose.yml</strong>:</p> <pre><code class="language-yaml">services: web: image: static-site:1 restart: always healthcheck: test: ["CMD-SHELL", "curl -f http://localhost:80 || exit 1"] interval: 10s timeout: 5s 
retries: 2 start_period: 5s networks: - my_network test: image: alpine container_name: test command: sh -c "apk add --no-cache wget &amp;&amp; tail -f /dev/null" networks: - my_network networks: my_network: driver: bridge </code></pre> <h4 id="build-docker-images">Build Docker Images</h4> <ol> <li>Build version 1:</li> </ol> <pre><code class="language-bash">docker build -t static-site:1 -f Dockerfile-1 . </code></pre> <ol> <li>Build version 2 (the failing one):</li> </ol> <pre><code class="language-bash">docker build -t static-site:2 -f Dockerfile-2 . </code></pre> <h4 id="run-the-initial-version">Run the Initial Version</h4> <p>Start the services using Docker Compose:</p> <pre><code class="language-bash">docker-compose up -d </code></pre> <p>This will start version 1 (<code>static-site:1</code>) of the web service.</p> <h4 id="test-the-initial-deployment">Test the Initial Deployment</h4> <p>Verify that version 1 is running by executing <code>wget</code> inside the <code>test</code> container:</p> <pre><code class="language-bash">docker exec -it test wget -qO- http://web </code></pre> <p>You should see:</p> <pre><code>Hello World V1 </code></pre> <h4 id="update-the-web-service-version-for-rollout">Update the Web Service Version for Rollout</h4> <p>Before running Docker Rollout, update the image of the <code>web</code> service to <strong>version 2</strong> in your <code>docker-compose.yml</code> file:</p> <pre><code class="language-yaml">web: image: static-site:2 # rest of the configuration remains the same </code></pre> <blockquote> <p><strong>Important:</strong> Updating the version in the <code>docker-compose.yml</code> file is necessary before executing the rollout command to initiate the update.</p> </blockquote> <h4 id="perform-zero-downtime-deployment-with-docker-rollout">Perform Zero Downtime Deployment with Docker Rollout</h4> <p>Run Docker Rollout to perform the zero downtime deployment:</p> <pre><code class="language-bash">docker rollout web --file 
docker-compose.yml </code></pre> <h3 id="observations-during-rollout">Observations During Rollout:</h3> <ul> <li><strong>No downtime for the web service</strong>: The web service remains fully operational during the update process.</li> <li><strong>One container becomes unhealthy</strong>: Since version 2 exposes a different port (<code>8080</code>), the new version’s container will fail the health check, causing the deployment to fail.</li> <li><strong>Automatic rollback</strong>: Docker Rollout will automatically roll back to version 1, ensuring the service remains healthy.</li> </ul> <h4 id="update-the-health-check-for-a-successful-rollout">Update the Health Check for a Successful Rollout</h4> <p>To observe a successful rollout, update the health check in the <code>docker-compose.yml</code> file to match port <code>8080</code>, which version 2 is using:</p> <pre><code class="language-yaml">healthcheck: test: ["CMD-SHELL", "curl -f http://localhost:8080 || exit 1"] interval: 10s timeout: 5s retries: 2 start_period: 5s </code></pre> <ol> <li>Update the <code>docker-compose.yml</code> file with the health check for port <code>8080</code>.</li> <li>Run the rollout again with:</li> </ol> <pre><code class="language-bash">docker rollout web --file docker-compose.yml </code></pre> <h3 id="observations-during-successful-rollout">Observations During Successful Rollout:</h3> <ul> <li>The service will now update successfully from version 1 to version 2, as the health check matches the correct port.</li> <li>You should now see the new version output by running:</li> </ul> <pre><code class="language-bash">docker exec -it test wget -qO- http://web </code></pre> <p>Output:</p> <pre><code>Hello World V2 </code></pre> <h4 id="setting-up-traefik-reverse-proxy-for-http-access">Setting up Traefik Reverse Proxy for HTTP Access</h4> <p>You can use <a href="https://traefik.io/traefik/"><strong>Traefik</strong></a> to expose your services externally without needing to define specific 
ports or container names in your Docker Compose file. This is especially useful when working with the Docker Rollout plugin, as it imposes <strong>limitations similar to Docker Swarm</strong>, where ports or container names cannot be explicitly defined. Traefik simplifies this by handling service discovery and routing dynamically, allowing you to maintain external accessibility while supporting zero downtime deployments.</p> <p>By routing traffic based on service labels instead of container specifics, Traefik enables seamless updates and rollbacks without requiring manual changes to the container’s network settings.</p> <p><strong>docker-compose.yml</strong> with Traefik added:</p> <pre><code class="language-yaml">version: '3'
services:
  web:
    image: static-site:1
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:80 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 2
      start_period: 5s
    networks:
      - my_network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web.rule=PathPrefix(`/`)"
      - "traefik.http.services.web.loadbalancer.server.port=80"
  traefik:
    image: traefik:v2.4
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    networks:
      - my_network
  test:
    image: alpine
    container_name: test
    command: sh -c "apk add --no-cache wget &amp;&amp; tail -f /dev/null"
    networks:
      - my_network
networks:
  my_network:
    driver: bridge
</code></pre> <h3 id="summary">Summary</h3> <p>The Docker Rollout plugin enables near zero downtime deployments by updating service instances one at a time, ensuring that previous versions remain active until the new ones pass health checks.
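</p> <p>To watch a rollout from a consumer’s perspective, you can poll the service from the <code>test</code> container while <code>docker rollout web</code> runs in another terminal. This is only a sketch built from the commands already used in this tutorial; the one-second interval is an arbitrary choice:</p> <pre><code class="language-bash"># Poll the web service once per second and log any failed request.
# Stop with Ctrl+C once the rollout has finished.
while true; do
  docker exec test wget -qO- http://web || echo "request failed at $(date)"
  sleep 1
done
</code></pre> <p>During a healthy rollout every request should return a response; a dropped request would show up as a logged failure line.</p> <p>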
While the plugin strives to eliminate downtime entirely, a small window of downtime during the transition is still possible, as noted in <a href="https://github.com/Wowu/docker-rollout/issues/21">Issue #21</a>, which is still being worked on.</p> <p>Additionally, the <code>restart: always</code> policy is discouraged in favor of <code>restart: unless-stopped</code>, as outlined in <a href="https://github.com/Wowu/docker-rollout/issues/25">Issue #25</a>. This prevents conflicts with Docker Rollout’s management of container lifecycles during updates.</p> <p><strong>Key features of Docker Rollout:</strong></p> <ul> <li><strong>Sequential updates</strong>: New instances are spun up one at a time, with health checks ensuring each instance is functioning properly before proceeding.</li> <li><strong>Health check validation</strong>: Only instances that pass their health checks are considered live, reducing the risk of introducing faulty versions.</li> <li><strong>Automatic rollback</strong>: In the event of a failure, Docker Rollout triggers a rollback to the last known good version, maintaining service stability.</li> <li><strong>Zero downtime goal</strong>: Though there may be brief downtimes in specific scenarios, Docker Rollout aims to keep services uninterrupted.</li> <li><strong>Flexible configuration</strong>: It supports custom timeouts, wait times, and health check parameters for diverse deployment needs.</li> </ul> Fri, 13 Sep 2024 14:40:00 +0000 https://jwillmer.de/blog/tutorial/zero-downtime-deployment-with-docker-rollout https://jwillmer.de/blog/tutorial/zero-downtime-deployment-with-docker-rollout docker zero-downtime deployment rollback Tutorial Securely Expose Local Ports with Tailscale Funnel <p>When developing applications that need to interact with external services such as OAuth providers or webhooks, it’s often necessary to expose your local environment to the internet.
Tailscale Funnel provides a quick, secure, and hassle-free method to do this, allowing any port on your local machine to be accessible over the internet with minimal configuration. This guide will walk you through setting up Tailscale Funnel to expose your application’s port, making it ideal for developers who need a temporary public endpoint for testing.</p> <h3 id="what-is-tailscale-funnel">What is Tailscale Funnel?</h3> <p><a href="https://tailscale.com/kb/1223/funnel"><strong>Tailscale Funnel</strong></a> allows you to expose a local service running on your machine to the internet via HTTPS, leveraging Tailscale’s secure VPN. This feature is especially useful for scenarios where you need a public endpoint, such as testing OAuth callbacks, receiving webhooks, or sharing a development environment temporarily.</p> <h3 id="key-benefits-of-tailscale-funnel">Key Benefits of Tailscale Funnel</h3> <ul> <li><strong>Quick Setup</strong>: The process is straightforward and quick, making it perfect for rapid testing and debugging.</li> <li><strong>Secure and Easily Disabled</strong>: Public access can be enabled or disabled with a single command, ensuring your local environment remains secure when not in use.</li> <li><strong>Automatic HTTPS and MagicDNS</strong>: Tailscale handles HTTPS provisioning and DNS management, simplifying the setup.</li> </ul> <h3 id="prerequisites">Prerequisites</h3> <p>To use Tailscale Funnel, you need the following:</p> <ol> <li><strong>Tailscale account</strong>: Sign up at <a href="https://tailscale.com">Tailscale</a>.</li> <li><strong>Tailscale client</strong>: Installed and running on your development machine.</li> <li><strong>An application running locally</strong>: Your application should be running on a specific port (e.g., <code>localhost:5000</code>).</li> </ol> <h3 id="step-by-step-guide-to-set-up-tailscale-funnel">Step-by-Step Guide to Set Up Tailscale Funnel</h3> <h4 id="1-install-and-configure-tailscale">1. 
Install and Configure Tailscale</h4> <p>Make sure Tailscale is installed and configured on your machine:</p> <ol> <li><strong>Download and install the Tailscale client</strong> from <a href="https://tailscale.com/download">Tailscale Downloads</a>.</li> <li><strong>Log in</strong> to your Tailscale account using the Tailscale client.</li> <li><strong>Authorize your device</strong> in the Tailscale admin console.</li> </ol> <h4 id="2-provision-a-certificate-for-your-device">2. Provision a Certificate for Your Device</h4> <p>To ensure your service is <a href="https://tailscale.com/kb/1153/enabling-https">accessible over HTTPS</a>, you need to provision a certificate for your device using Tailscale:</p> <ol> <li><strong>Open the DNS page</strong> in the Tailscale admin console.</li> <li><strong>Enable MagicDNS</strong> if it is not already enabled for your tailnet.</li> <li>Under <strong>HTTPS Certificates</strong>, select <strong>Enable HTTPS</strong>.</li> <li>To obtain a certificate on your machine, run the following command in the terminal:</li> </ol> <pre><code class="language-bash">tailscale cert
</code></pre> <p>This step is crucial as it enables secure connections to your public endpoint through HTTPS.</p> <h4 id="3-configure-access-controls">3. Configure Access Controls</h4> <p>To enable Funnel, you need to adjust your Tailscale network’s access control list (ACL) settings:</p> <ol> <li><strong>Open the Tailscale admin console</strong> and go to <strong>Access Controls</strong>.</li> <li>Modify your ACL configuration to include the Funnel attribute:</li> </ol> <pre><code class="language-json">{
  "nodeAttrs": [
    // Adds the "funnel" attribute to all devices in your network
    {
      "target": ["autogroup:member"],
      "attr": ["funnel"]
    }
  ],
  "acls": [
    // Allow all connections.
    {
      "action": "accept",
      "src": ["*"],
      "dst": ["*:*"]
    }
  ]
}
</code></pre> <h4 id="4-enable-tailscale-funnel-for-your-application">4. 
Enable Tailscale Funnel for Your Application</h4> <p>Now, enable Funnel to expose your application:</p> <pre><code class="language-bash">tailscale funnel &lt;port&gt;
</code></pre> <p>This command will make your application accessible via a public URL over HTTPS.</p> <h4 id="4-obtain-and-test-your-public-url">5. Obtain and Test Your Public URL</h4> <p>Once Funnel is enabled, Tailscale generates a public URL for your service. It will be in the format:</p> <pre><code class="language-bash">https://&lt;device-name&gt;.ts.net:&lt;port&gt;
</code></pre> <p>For instance, if your device name is <strong><code>my-laptop</code></strong> and your application is running on port <strong><code>5000</code></strong>, the URL will be:</p> <pre><code class="language-bash">https://my-laptop.ts.net:5000
</code></pre> <h4 id="7-use-the-url-for-external-integrations">6. Use the URL for External Integrations</h4> <p>With your application now accessible online, you can:</p> <ul> <li><strong>Test OAuth callbacks</strong>: Configure your OAuth provider’s redirect URI to your Tailscale Funnel URL (e.g., <code>https://my-laptop.ts.net:5000/callback</code>).</li> <li><strong>Receive webhooks</strong>: Set the Funnel URL as the endpoint for services needing to send data to your application.</li> <li><strong>Collaborate easily</strong>: Share your development environment securely for team testing or demonstrations.</li> </ul> <h4 id="8-disable-funnel-when-done">7. 
Disable Funnel When Done</h4> <p>After testing, you can easily disable Funnel to secure your environment:</p> <ol> <li><strong>Open your terminal</strong> or command prompt.</li> <li>Run the command to disable Funnel:</li> </ol> <pre><code class="language-bash">tailscale funnel disable &lt;port&gt;
</code></pre> <h3 id="nice-to-know-using-tailscale-funnel-to-share-files">Nice to Know: Using Tailscale Funnel to Share Files</h3> <p>In addition to exposing your local development environment, Tailscale Funnel can be used to <a href="https://tailscale.com/kb/1247/funnel-examples">share files quickly</a> over the internet. This is particularly useful when you need to send files directly from your device without using an external file-sharing service. Here’s how to use Funnel for file sharing:</p> <pre><code class="language-bash">tailscale funnel /tmp/public
</code></pre> <p>This command will activate file-sharing mode, and Tailscale will automatically generate a public URL for you.</p> Tue, 03 Sep 2024 15:35:00 +0000 https://jwillmer.de/blog/tutorial/securely-expose-local-ports-for-testing-with-tailscale-funnel https://jwillmer.de/blog/tutorial/securely-expose-local-ports-for-testing-with-tailscale-funnel tailscale funnel development networking testing app Tutorial UXG-Lite router and Vigor-167 modem configuration <p>I’m using the UniFi <a href="https://eu.store.ui.com/eu/en/pro/category/all-cloud-keys-gateways/products/uxg-lite">UXG-Lite</a> as a router in my home network. It connects to my DrayTek <a href="https://www.draytek.com/products/vigor167/">Vigor 167</a> modem (previously <a href="https://www.draytek.com/products/vigor130/">Vigor 130</a>). In this post I publish the settings to configure the two devices and describe how you can reach the modem once the setup is running.</p> <h2 id="modem-configuration">Modem configuration</h2> <p>My configuration is Telekom-specific. In particular, the VDSL2 tag <code>7</code> can differ depending on the provider.
I configured it on the modem, but it can also be configured on the router; it makes no difference. The modem has a static IPv4 address assigned: <code>192.168.0.1</code></p> <div class="album"> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-mpoa-settings.png" /> <figcaption>MPoA Settings</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-pppoe-settings.png" /> <figcaption>PPPoE Settings</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-wan1-ipv6-settings.png" /> <figcaption>IPv6 Settings</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-wan1-vdsl2-settings.png" /> <figcaption>WAN Settings</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-physical-connection.png" /> <figcaption>Physical Connection Overview</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-lan-settings.png" /> <figcaption>LAN Settings</figcaption> </figure> </div> <h2 id="router-configuration">Router Configuration</h2> <p>The internet configuration on the router is a no-brainer. Open the Internet settings, and if you haven’t configured the VLAN ID on the modem, you have to select it in the router. In the IPv4 configuration section, select PPPoE and enter the username and password provided by your ISP (Telekom).</p> <h3 id="telekom-username">Telekom username</h3> <p>The construction of the username is very cumbersome.
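</p> <p>Because it is easy to drop a digit, the concatenation can also be scripted as a sanity check. This is just a sketch: the digits are the same placeholders as in the format description below, and <code>example.com</code> is a stand-in domain, not the real provider suffix:</p> <pre><code class="language-bash"># Assemble the PPPoE username from its parts (placeholder values only).
ANSCHLUSSKENNUNG="111111111111"   # always 12 digits
TEILNEHMERNUMMER="33333333333"    # here: the 11-digit variant, so '#' is required
MITBENUTZERKENNUNG="0001"

USERNAME="${ANSCHLUSSKENNUNG}${TEILNEHMERNUMMER}#${MITBENUTZERKENNUNG}@example.com"
echo "$USERNAME"
</code></pre> <p>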
Below is the format you have to generate:</p> <pre><code>Anschlusskennung:   111111111111
Teilnehmernummer:   222222222222 (12 digits) or 33333333333 (11 digits)
Mitbenutzerkennung: 0001

Teilnehmernummer with 12 digits example:
[email protected]

Teilnehmernummer with 11 digits example:
11111111111133333333333#[email protected]
11111111111133333333333\#[email protected]
</code></pre> <h2 id="connecting-to-the-modem">Connecting to the Modem</h2> <p>Once everything is configured and in place, the modem is no longer reachable. On a UXG Gateway you can create the following configuration to make it work:</p> <pre><code class="language-json">{
  "interfaces": {
    "pseudo-ethernet": {
      "peth0": {
        "address": ["192.168.0.2/24"],
        "description": "Access to Modem",
        "link": ["eth1"]
      }
    }
  },
  "service": {
    "nat": {
      "rule": {
        "5000": {
          "description": "MASQ Modem access",
          "destination": { "address": ["192.168.0.1"] },
          "outbound-interface": ["peth0"],
          "type": "masquerade"
        }
      }
    }
  }
}
</code></pre> <p>However, the UXG-Lite does not have the option to configure it permanently.
What you can do instead is to log in to the UXG-Lite via SSH and add the routing for eth1 manually:</p> <pre><code class="language-bash"># Assign a static IP to eth1
sudo ip addr add 192.168.0.2/24 dev eth1

# Bring the interface up
sudo ip link set eth1 up

# Add a static route
sudo ip route add 192.168.0.0/24 dev eth1
</code></pre> <p>After that you can create an SSH tunnel to your UXG-Lite and set up a SOCKS5 proxy in your browser in order to reach the modem.</p> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-putty-tunnel.png" /> <figcaption>PuTTY tunnel</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2024-01-10-uxg-lite-router-vigor-167-modem-configuration-firefox-socks5-proxy.png" /> <figcaption>Firefox SOCKS5 proxy</figcaption> </figure> <h3 id="revert-the-changes-to-the-routes">Revert the changes to the routes</h3> <p>This is not strictly necessary: after a reboot of the device the settings are lost. I also noticed that any modification to the WAN settings on the UXG-Lite will remove the route. However, if you’d like to remove it manually after you are done, you can execute the following command:</p> <pre><code class="language-bash">sudo ip route del 192.168.0.0/24
</code></pre> Wed, 10 Jan 2024 20:10:00 +0000 https://jwillmer.de/blog/tutorial/uxg-lite-router-vigor-167-modem-configuration https://jwillmer.de/blog/tutorial/uxg-lite-router-vigor-167-modem-configuration UXG-Lite gateway modem router connectivity Tutorial Tiny Cube Satellite <p>A couple of months ago I found this <a href="https://www.bhoite.com/sculptures/tiny-cube-sat/">tiny cube satellite build</a> on the internet and was intrigued.
So I started to source materials to build my own.</p> <aside> <figure class="right"> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-idea.png#right" /> <figcaption>Mockup</figcaption> </figure> </aside> <p>The only thing I wanted to do differently was to add a mountain base to it. I found an open-source model of <a href="https://en.wikipedia.org/wiki/Monte_Civetta">Monte Civetta</a> and sliced it to fit my base.</p> <p>Sadly I don’t have the source anymore. I remember it was quite tricky to find. The model was also huge, meaning I had to reduce the mesh significantly before I could even start to modify it.</p> <p>My printer needed 9 hours to print the hollow(!) mountain in ABS, after which I vapor-smoothed it to get a really nice finish.</p> <p>I bought a glass dome with a white wooden base and used some hard wax oil to make the base grey. I covered the part of the base that the mountain sits on with aluminium foil to increase the reflection of the surface. The goal was to have the satellite illuminate the mountain once in a while.</p> <h3 id="electronics-and-source-code">Electronics and source code</h3> <p>I spent most of my time developing the code for the ATtiny85. The final breadboard design is in the picture below. The real breadboard looked way messier, since I also attached my ISP and <a href="https://jloh02.github.io/projects/connecting-attiny85-serial-monitor/">serial reader</a> to it.</p> <figure> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-breadboard.png" /> <figcaption>Breadboard</figcaption> </figure> <div class="tip"> <p>The bottom plus line of the breadboard is connected by a <a href="https://en.wikipedia.org/wiki/Schottky_diode">Schottky diode</a> with the plus line on the top.</p> </div> <p>I flashed my ATtiny85 with a clock speed of 1 MHz.
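</p> <p>Handy to know: 1 MHz is also the effective factory-default clock of the ATtiny85 (8 MHz internal oscillator with the CKDIV8 fuse set, i.e. a low fuse of <code>0x62</code>). If you ever need to write that fuse back, an avrdude call along these lines should work; the <code>usbtiny</code> programmer is only an example, substitute whatever ISP you use:</p> <pre><code class="language-bash"># Restore the factory-default low fuse (1 MHz effective clock) via an ISP.
avrdude -c usbtiny -p attiny85 -U lfuse:w:0x62:m
</code></pre> <p>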
I also <a href="https://www.best-microcontroller-projects.com/attiny-ultra-low-power.html">experimented with lower clock speeds</a> but ended up soft-bricking one of my ATtiny85 chips.</p> <p>My next project will be to <a href="https://www.electronics-lab.com/recover-bricked-attiny-using-arduino-as-high-voltage-programmer/">rescue the bricked ATtiny85 microcontroller</a> by <a href="https://www.engbedded.com/fusecalc/">resetting the fuses</a>. Going below 1 MHz seems interesting, but I think it will be too messy: sleep no longer works correctly, so you have to implement your own sleep routine, which also means the code for the LED duration needs to be modified.</p> <p>That said, I am quite pleased with the 1 MHz clock speed, since the microcontroller seems to always have enough battery left to light up. The code I ended up using on my ATtiny85 is hosted as a Gist on GitHub called <a href="https://gist.github.com/jwillmer/5c22c1071e47aff0efba87d8b5bec1ef">CubeSat.ino</a>. Feel free to comment on the code. I am by no means an expert in Arduino - I usually work with .NET.</p> <p>The microcontroller uses the solar cells’ voltage to detect whether it is day or night. If it is night, it will run the glow effect once every ~10 minutes, as long as the total voltage of the system does not go below a threshold that would prevent the LED from lighting up.</p> <p>The glow effect can be seen in the following video. Please keep in mind that it had to be dark in order to trigger the effect.
That means the video is a little bit noisy.</p> <video width="416" height="640" controls="" autoplay="" muted=""> <source src="https://jwillmer.de/media/file/2022-06-17-attiny-cube-satellite-lighting.mp4" type="video/mp4" /> </video> <h3 id="soldering">Soldering</h3> <aside> <figure class="right"> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-cube.jpg#right" /> <figcaption>Brass cube</figcaption> </figure> </aside> <p>I am usually not bad at soldering and know how to get a good joint. But the scale of the cube sat is tiny, and I haven’t found any good clamp system that was not fiddly.</p> <p>Most parts float in midair while you solder them or, in the case of the cube, require you to solder 3 parts at once at a <em>perfect</em> angle.</p> <p>My cube has an edge length of 15 mm, and I use 0.8 mm round brass rods. I started by bending most of the edges of the cube out of one rod, but I did not like the curved edges. That’s why I had to solder 3 legs at a time.</p> <p>Long story short: for a perfectionist, the soldering part is a nightmare. And I am reluctant to post close-ups. If anyone has a good setup to solder a perfect cube - I’m looking at you, <a href="https://www.bhoite.com/about/">Mohit Bhoite</a> - then by all means enlighten me!</p> <h3 id="bill-of-materials-bom">Bill of Materials (BOM)</h3> <div class="tip"> <p>Make sure you buy the ATtiny85 <strong>10PU</strong> version.
Only that one can operate on very low voltage (starting at 1.8 V).</p> </div> <h4 id="electronics">Electronics</h4> <ul> <li>SM141K04LV-ND MONOCRYST SOLAR CELL 123MW 2.76V</li> <li>2085-TPLC-3R8/10MR8X14-ND 10F 3.8V T/H</li> <li>ATTINY85V-10PU</li> <li>SMD LED DIODES 3528 1210 WHITE</li> <li>1N4148 DO-35 IN4148 AXIAL LEAD SWITCHING SIGNAL DIODE</li> <li>100KΩ AND 10KΩ RESISTORS</li> <li>0.8 MM ROUND BRASS RODS</li> </ul> <h4 id="base">Base</h4> <ul> <li>2MM ACRYLIC ROD</li> <li>ONE WAY MIRROR WINDOW PRIVACY FILM BLUE</li> <li>GLASS DOME WITH WOODEN BASE 20CM</li> <li>LIGNOCOLOR HARD WAX OIL ANTHRACITE GREY</li> </ul> <h3 id="pictures">Pictures</h3> <div class="album"> <figure> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-front.jpg" /> <figcaption>Cube Satellite Diorama</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-top.jpg" /> <figcaption>Satellite top view</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-details.jpg" /> <figcaption>Satellite close-up</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-close-up.jpg" /> <figcaption>Satellite close-up glowing in the dark</figcaption> </figure> <figure> <img src="https://jwillmer.de/media/img/2022-06-17-attiny-cube-satellite-dark.jpg" /> <figcaption>Satellite glow in the dark</figcaption> </figure> </div> <h3 id="additional-references">Additional references</h3> <ul> <li><a href="https://www.marcelpost.com/wiki/index.php/ATtiny85_ADC">ATtiny ADC explained</a></li> <li><a href="https://www.gadgetronicx.com/attiny85-adc-tutorial-interrupts/">ATtiny ADC Interrupts</a></li> <li><a href="https://electronics.stackexchange.com/questions/396345/is-the-led-drop-voltage-difference-between-colors-linked-to-the-different-wavele">Information about LED brightness and voltage</a></li> </ul> Fri, 17 Jun 2022 12:10:00 +0000 
https://jwillmer.de/blog/projects/tiny-cube-satellite https://jwillmer.de/blog/projects/tiny-cube-satellite satellite cube tiny attiny arduino electronics diorama model Projects Server setup for the Amazon Container Registry (ECR) <p>We recently moved our docker images to the <a href="https://aws.amazon.com/ecr/">Amazon Container Registry (ECR)</a>. To access ECR in docker you need to install and configure the <a href="https://github.com/awslabs/amazon-ecr-credential-helper"><code>amazon-ecr-credential-helper</code></a>. This post contains a short writeup of the necessary steps.</p> <h2 id="installation">Installation</h2> <pre><code class="language-bash">$ apt update
$ apt install amazon-ecr-credential-helper
</code></pre> <h2 id="configuration">Configuration</h2> <p>Set the AWS credentials:</p> <pre><code class="language-bash">$ mkdir ~/.aws
$ nano ~/.aws/credentials

[default]
aws_access_key_id=ABCDEFGABCDEFG
aws_secret_access_key=abcdefgabcdefgabcdefgabcdefg
</code></pre> <p>Set the AWS default options (yours might be different):</p> <pre><code class="language-bash">$ nano ~/.aws/config

[default]
region=eu-north-1
output=json
</code></pre> <p>Update the docker configuration (replace <code>1234567890</code> with your registry id):</p> <pre><code class="language-bash">$ nano ~/.docker/config.json

{
  "credHelpers": {
    "1234567890.dkr.ecr.eu-north-1.amazonaws.com": "ecr-login"
  },
  "credsStore": "ecr-login"
}
</code></pre> Thu, 09 Sep 2021 12:51:00 +0000 https://jwillmer.de/blog/tutorial/configuration-of-amazon-container-registry https://jwillmer.de/blog/tutorial/configuration-of-amazon-container-registry amazon container registry docker setup ecr configuration Tutorial