[{"content":"This talk was held at Python meetup Zagreb on February 10, 2026.\nPrevious Next \u0026nbsp; \u0026nbsp; / [pdf] View the PDF file here. ","permalink":"https://tmarice.dev/talks/nix-loves-python/","summary":"\u003cp\u003eThis talk was held at \u003ca href=\"https://www.meetup.com/python-hrvatska/events/313159184\"\u003ePython meetup Zagreb on February 10, 2026\u003c/a\u003e.\u003c/p\u003e\n\u003cscript type=\"text/javascript\" src= '/js/pdf-js/build/pdf.js'\u003e\u003c/script\u003e\n\n\u003cstyle\u003e\n #embed-pdf-container {\n position: relative;\n width: 100%;\n height: auto;\n min-height: 20vh;\n \n }\n \n .pdf-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n #the-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n \n .pdf-loadingWrapper {\n display: none;\n justify-content: center;\n align-items: center;\n width: 100%;\n height: 350px;\n }\n \n .pdf-loading {\n display: inline-block;\n width: 50px;\n height: 50px;\n border: 3px solid #d2d0d0;;\n border-radius: 50%;\n border-top-color: #383838;\n animation: spin 1s ease-in-out infinite;\n -webkit-animation: spin 1s ease-in-out infinite;\n }\n \n \n \n \n \n #overlayText {\n word-wrap: break-word;\n display: grid;\n justify-content: end;\n }\n \n #overlayText a {\n position: relative;\n top: 10px;\n right: 4px;\n color: #000;\n margin: auto;\n background-color: #eeeeee;\n padding: 0.3em 1em;\n border: solid 2px;\n border-radius: 12px;\n border-color: #00000030;\n text-decoration: none;\n }\n \n #overlayText svg {\n height: clamp(1em, 2vw, 1.4em);\n width: clamp(1em, 2vw, 1.4em);\n }\n \n \n \n @keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n @-webkit-keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n \u003c/style\u003e\u003cdiv class=\"embed-pdf-container\" id=\"embed-pdf-container-e367faf6\"\u003e\n \u003cdiv class=\"pdf-loadingWrapper\" 
id=\"pdf-loadingWrapper-e367faf6\"\u003e\n \u003cdiv class=\"pdf-loading\" id=\"pdf-loading-e367faf6\"\u003e\u003c/div\u003e\n \u003c/div\u003e\n \u003cdiv id=\"overlayText\"\u003e\n \u003ca href=\"Nix-loves-Python.pdf\" aria-label=\"Download\" download\u003e\n \u003csvg aria-hidden=\"true\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 18 18\"\u003e\n \u003cpath d=\"M9 13c.3 0 .5-.1.7-.3L15.4 7 14 5.6l-4 4V1H8v8.6l-4-4L2.6 7l5.7 5.7c.2.2.4.3.7.3zm-7 2h14v2H2z\" /\u003e\n \u003c/svg\u003e\n \u003c/a\u003e\n \u003c/div\u003e\n \u003ccanvas class=\"pdf-canvas\" id=\"pdf-canvas-e367faf6\"\u003e\u003c/canvas\u003e\n\u003c/div\u003e\n\n\u003cdiv class=\"pdf-paginator\" id=\"pdf-paginator-e367faf6\"\u003e\n \u003cbutton id=\"pdf-prev-e367faf6\"\u003ePrevious\u003c/button\u003e\n \u003cbutton id=\"pdf-next-e367faf6\"\u003eNext\u003c/button\u003e \u0026nbsp; \u0026nbsp;\n \u003cspan\u003e\n \u003cspan class=\"pdf-pagenum\" id=\"pdf-pagenum-e367faf6\"\u003e\u003c/span\u003e / \u003cspan class=\"pdf-pagecount\" id=\"pdf-pagecount-e367faf6\"\u003e\u003c/span\u003e\n \u003c/span\u003e\n \u003ca class=\"pdf-source\" id=\"pdf-source-e367faf6\" href=\"Nix-loves-Python.pdf\"\u003e[pdf]\u003c/a\u003e\n\u003c/div\u003e\n\n\u003cnoscript\u003e\nView the PDF file \u003ca class=\"pdf-source\" id=\"pdf-source-noscript-e367faf6\" href=\"Nix-loves-Python.pdf\"\u003ehere\u003c/a\u003e.\n\u003c/noscript\u003e\n\n\u003cscript type=\"text/javascript\"\u003e\n (function(){\n var url = 'Nix-loves-Python.pdf';\n\n var hidePaginator = \"\" === \"true\";\n var hideLoader = \"\" === \"true\";\n var selectedPageNum = parseInt(\"\") || 1;\n\n \n var pdfjsLib = window['pdfjs-dist/build/pdf'];\n\n \n if (pdfjsLib.GlobalWorkerOptions.workerSrc == '')\n pdfjsLib.GlobalWorkerOptions.workerSrc = \"https:\\/\\/tmarice.dev\\/\" + 'js/pdf-js/build/pdf.worker.js';\n\n \n var pdfDoc = null,\n pageNum = selectedPageNum,\n pageRendering = false,\n pageNumPending = null,\n scale = 3,\n canvas = 
document.getElementById('pdf-canvas-e367faf6'),\n ctx = canvas.getContext('2d'),\n paginator = document.getElementById(\"pdf-paginator-e367faf6\"),\n loadingWrapper = document.getElementById('pdf-loadingWrapper-e367faf6');\n\n\n \n showPaginator();\n showLoader();\n\n \n\n function renderPage(num) {\n pageRendering = true;\n \n pdfDoc.getPage(num).then(function(page) {\n var viewport = page.getViewport({scale: scale});\n canvas.height = viewport.height;\n canvas.width = viewport.width;\n\n \n var renderContext = {\n canvasContext: ctx,\n viewport: viewport\n };\n var renderTask = page.render(renderContext);\n\n \n renderTask.promise.then(function() {\n pageRendering = false;\n showContent();\n\n if (pageNumPending !== null) {\n \n renderPage(pageNumPending);\n pageNumPending = null;\n }\n });\n });\n\n \n document.getElementById('pdf-pagenum-e367faf6').textContent = num;\n }\n\n \n\n function showContent() {\n loadingWrapper.style.display = 'none';\n canvas.style.display = 'block';\n }\n\n \n\n function showLoader() {\n if(hideLoader) return\n loadingWrapper.style.display = 'flex';\n canvas.style.display = 'none';\n }\n\n \n\n function showPaginator() {\n if(hidePaginator) return\n paginator.style.display = 'block';\n }\n\n \n\n function queueRenderPage(num) {\n if (pageRendering) {\n pageNumPending = num;\n } else {\n renderPage(num);\n }\n }\n\n \n\n function onPrevPage() {\n if (pageNum \u003c= 1) {\n return;\n }\n pageNum--;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-prev-e367faf6').addEventListener('click', onPrevPage);\n\n \n\n function onNextPage() {\n if (pageNum \u003e= pdfDoc.numPages) {\n return;\n }\n pageNum++;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-next-e367faf6').addEventListener('click', onNextPage);\n\n \n\n pdfjsLib.getDocument(url).promise.then(function(pdfDoc_) {\n pdfDoc = pdfDoc_;\n var numPages = pdfDoc.numPages;\n document.getElementById('pdf-pagecount-e367faf6').textContent = numPages;\n\n \n 
if(pageNum \u003e numPages) {\n pageNum = numPages\n }\n\n \n renderPage(pageNum);\n });\n })();\n\u003c/script\u003e","title":"Nix ❤️ Python: Maybe You Don't Need Devcontainers After All"},{"content":"I really like Django. I would pick Django over any other option for setting up a website regardless of expected complexity. In my opinion, if you fully embrace Django, it will allow you to focus on the product and not fight an uphill battle against the computer.\nI do a bit of freelancing on the side, and it saddens me that I rarely see Django projects in the wild. Wherever I join and I\u0026rsquo;m lucky enough that it\u0026rsquo;s a Python gig, it\u0026rsquo;s usually Flask or FastAPI.\nWhen I ask why, it\u0026rsquo;s usually something along the lines of: \u0026ldquo;Oh, we don\u0026rsquo;t need Django, it\u0026rsquo;s too complex. We just need a simple API\u0026rdquo;.\nYet, they need database access, and ORMs are nice so they brought in SQLAlchemy. And they need user authentication, so they roll their own roles and permissions. And they need JWTs because the frontend is a React app with its own stack. And they need caching so they roll their own. And they need request validation and OpenAPI Javascript client generation so they bring in Pydantic. And of course they need horizontal scalability so they deploy everything on Kubernetes. And then: \u0026ldquo;We don\u0026rsquo;t need Celery, it\u0026rsquo;s too complex.\u0026rdquo; so they add APScheduler. And then it turns out they do need simple workflows and CPU-heavy processing so they roll their own background task manager.\nAnd here I am, looking at this amalgamation of bytes created in the name of simplicity, thinking: \u0026ldquo;What a poor reimplementation of Django.\u0026rdquo;\n","permalink":"https://tmarice.dev/blog/the-illusion-of-simplicity/","summary":"\u003cp\u003eI really like Django. I would pick Django over any other option for setting up a website regardless of expected\ncomplexity. 
In my opinion, if you fully embrace Django, it will allow you to focus on the product and not fight an\nuphill battle against the computer.\u003c/p\u003e\n\u003cp\u003eI do a bit of freelancing on the side, and it saddens me that I rarely see Django projects in the wild. Wherever I\njoin and I\u0026rsquo;m lucky enough that it\u0026rsquo;s a Python gig, it\u0026rsquo;s usually Flask or FastAPI.\u003c/p\u003e","title":"The Illusion of Simplicity"},{"content":"Intro The internet is full of articles on CEOs declaring their companies \u0026ldquo;AI-first\u0026rdquo;, in the name of increasing efficiency. After all, why have 50 engineers if you can have 5 managing a swarm of AI agents?\nI am a heavy user of GenAI assistive tools and they truly do help me achieve results faster. But another thing that helps you achieve results faster is not having to fight an uphill battle against whichever text editor you\u0026rsquo;re using. If you have to lift your hands from the keyboard, you\u0026rsquo;re wasting time.\nYet I never heard any company issuing a \u0026ldquo;Vim keybindings mandate\u0026rdquo; and declaring themselves \u0026ldquo;modal-editing first\u0026rdquo;, even though these tools are over 30 years old.\nWhat is programming? If we squint hard enough, defining programming is very simple \u0026ndash; we solve problems by translating abstract processes into textual artifacts we feed into our magic-rune-inscribed melted-sand tablets so they can understand and execute them.\nThis process is not linear: we do not first come up with a complete solution in our head, and then type it out, then run it, and call it a day. We iteratively think about the problem, type in some of the code, then think about it some more, then maybe look up something in the documentation, then type some more, etc.\nOur brains are quite efficient at thinking, so most of the friction happens in other steps: typing text and looking things up. 
For now, the keyboard is still the best tool we have for this. From seasoned engineers to vibe coders, everyone has to move the text from their heads to the computer by typing out code (or prompts!).\nIf every time you notice a typo you have to hunt for the mouse, or smash those arrow keys for 10 seconds, you risk losing your train of thought and falling out of the flow state. The more you fight with inputting text, the less you focus on the problem itself.\nOur duty as software engineering professionals is to reduce this friction as much as possible. We learn the ins and outs of programming languages so we can think in appropriate abstractions and avoid frequent lookups. We learn multiple languages so we can deliver efficient solutions without performing acrobatics in languages not appropriate for the task. We utilize GenAI tools to handle boilerplate and cruft. And we optimize our text editors so we can input text as fast as possible.\nWhy Vim? Vim tries to reduce the friction in text editing as much as possible. Your hands stay on the keyboard, on the home row, all the time. The editor comes with a rich set of built-in shortcuts encompassing every text manipulation you can imagine. For specialized tasks there are plugins that address them, and you can define custom shortcuts for your own workflow needs.\nBuilding your own configuration is a crucial part of the process, where you get acquainted with the editor and the ecosystem. Realistically, the migration to Vim will be a J-curve: at first you will be less productive, then after a week or two you\u0026rsquo;ll be where you were before, and if you push through, only after a couple of weeks of use will you start seeing gains in productivity.\nThe worst and the best part is that if you embrace Vim, it will ruin all other software for you. You will start looking at software through the lens of keyboard-only usability. 
If you have to reach for the mouse, or if it has its own silly shortcuts, you won\u0026rsquo;t use it. And it really is a slippery slope: Vim is just a gateway drug, leading to tiling window managers and terminal multiplexers, and before you know it, you\u0026rsquo;ll be using Nix and wondering how on Earth you managed to get anything done before.\nWhere does this leave us? The \u0026ldquo;AI mandates\u0026rdquo; bullshit is purely performative, and the sad thing is everyone knows this \u0026ndash; the CEOs know it, the employees know it, the rest of us observing from the sidelines know it. LLMs truly are wonderful technology, and the productivity gains are real, but if the true goal is productivity, there are already many many ways it can be improved, and no one wrote memos about it. The truly efficient companies are staffed with conscientious engineers who do not have to be mandated to use the best tools available; they seek them out themselves.\nTime is the only real currency in this world, and you\u0026rsquo;re leaving money on the table if you\u0026rsquo;re not using Vim. After all, Vim won\u0026rsquo;t take your job. 
But someone using Vim will.\n","permalink":"https://tmarice.dev/blog/vim-mandates-gt-ai-mandates/","summary":"\u003ch2 id=\"intro\"\u003eIntro\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://x.com/tobi/status/1909251946235437514\"\u003eThe internet\u003c/a\u003e \u003ca href=\"https://www.linkedin.com/posts/duolingo_below-is-an-all-hands-email-from-our-activity-7322560534824865792-l9vh/\"\u003eis full\u003c/a\u003e \u003ca href=\"https://x.com/michakaufman/status/1909610844008161380\"\u003eof articles\u003c/a\u003e \u003ca href=\"https://www.bloomberg.com/news/videos/2024-12-12/klarna-ceo-on-us-banking-ambitions-video\"\u003eon CEOs\u003c/a\u003e declaring their companies \u0026ldquo;AI-first\u0026rdquo;, in the name of increasing efficiency.\nAfter all, why have 50 engineers if you can have 5 managing a swarm of AI agents?\u003c/p\u003e\n\u003cp\u003eI am a heavy user of GenAI assistive tools and they truly do help me achieve results faster. But another thing that\nhelps you achieve results faster is not having to fight an uphill battle against whichever text editor you\u0026rsquo;re using.\nIf you have to lift your hands from the keyboard, you\u0026rsquo;re wasting time.\u003c/p\u003e","title":"Vim Mandates \u003e\u003e AI Mandates"},{"content":"If your project is not fully containerized, but you still want to use PostgreSQL in your GitHub Actions workflow, you can use the services feature of GitHub Actions to easily spin up a PostgreSQL container.\nHowever, the services functionality restricts what you can configure declaratively in the workflow file \u0026ndash; namely you cannot configure the PostgreSQL server parameters that you would usually set up in the postgresql.conf file. 
Fortunately, most of these can be configured through ALTER SYSTEM commands.\nFor example, this is how to configure max_locks_per_transaction:\njobs:\n  build:\n    runs-on: ubuntu-latest\n    services:\n      postgres:\n        image: postgres:17\n        env:\n          POSTGRES_USER: postgres\n          POSTGRES_PASSWORD: postgres\n          POSTGRES_DB: production\n        ports:\n          - 5432:5432\n        options: \u0026gt;-\n          --name pg_container\n          --health-cmd pg_isready\n          --health-interval 10s\n          --health-timeout 5s\n          --health-retries 5\n    steps:\n      - name: Set up environment\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y \\\n            postgresql-client \\\n            wait-for-it\n      - name: Configure PostgreSQL\n        env:\n          PGUSER: postgres\n          PGPASSWORD: postgres\n          PGHOST: 127.0.0.1\n          PGPORT: 5432\n          PGDATABASE: template1\n        run: |\n          psql -c \u0026#34;SHOW max_locks_per_transaction;\u0026#34;\n          psql -c \u0026#34;ALTER SYSTEM SET max_locks_per_transaction = 128;\u0026#34;\n          docker restart pg_container\n          wait-for-it localhost:5432 --timeout=30 --strict -- echo \u0026#34;PostgreSQL is up\u0026#34;\n          psql -c \u0026#34;SHOW max_locks_per_transaction;\u0026#34;\n      # ... the rest of the workflow\nOf course, if you need heavy customization, it makes more sense to skip services and run your own container through a docker run step or docker-compose, but for simple use cases, this is a quick way to get started.\n","permalink":"https://tmarice.dev/blog/configuring-postgres-on-github-actions/","summary":"\u003cp\u003eIf your project is not fully containerized, but you still want to use PostgreSQL in your GitHub Actions workflow,\nyou can use the \u003ca href=\"https://docs.github.com/en/actions/tutorials/use-containerized-services\"\u003e\u003ccode\u003eservices\u003c/code\u003e\u003c/a\u003e feature of GitHub\nActions to easily spin up a PostgreSQL container.\u003c/p\u003e\n\u003cp\u003eHowever, the \u003ccode\u003eservices\u003c/code\u003e functionality restricts what you can configure declaratively in the workflow file \u0026ndash; namely you\ncannot configure the PostgreSQL server parameters that you would usually set up in the \u003ccode\u003epostgresql.conf\u003c/code\u003e file.\nFortunately, most of these can be configured through \u003ccode\u003eALTER SYSTEM\u003c/code\u003e commands.\u003c/p\u003e","title":"Configuring PostgreSQL server parameters on GitHub Actions"},{"content":"This talk was held at Python meetup Zagreb on June 11, 2025.
","permalink":"https://tmarice.dev/talks/taskmaster/","summary":"\u003cp\u003eThis talk was held at \u003ca href=\"https://www.meetup.com/python-hrvatska/events/303511934/\"\u003ePython meetup Zagreb on June 11, 2025\u003c/a\u003e.\u003c/p\u003e\n\u003cscript type=\"text/javascript\" src= '/js/pdf-js/build/pdf.js'\u003e\u003c/script\u003e\n\n\u003cstyle\u003e\n #embed-pdf-container {\n position: relative;\n width: 100%;\n height: auto;\n min-height: 20vh;\n \n }\n \n .pdf-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n #the-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n \n .pdf-loadingWrapper {\n display: none;\n justify-content: center;\n align-items: center;\n width: 100%;\n height: 350px;\n }\n \n .pdf-loading {\n display: inline-block;\n width: 50px;\n height: 50px;\n border: 3px solid #d2d0d0;;\n border-radius: 50%;\n border-top-color: #383838;\n animation: spin 1s ease-in-out infinite;\n -webkit-animation: spin 1s ease-in-out infinite;\n }\n \n \n \n \n \n #overlayText {\n word-wrap: break-word;\n display: grid;\n justify-content: end;\n }\n \n #overlayText a {\n position: relative;\n top: 10px;\n right: 4px;\n color: #000;\n margin: auto;\n background-color: #eeeeee;\n padding: 0.3em 1em;\n border: solid 2px;\n border-radius: 12px;\n border-color: #00000030;\n text-decoration: none;\n }\n \n #overlayText svg {\n height: clamp(1em, 2vw, 1.4em);\n width: clamp(1em, 2vw, 1.4em);\n }\n \n \n \n @keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n @-webkit-keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n \u003c/style\u003e\u003cdiv class=\"embed-pdf-container\" id=\"embed-pdf-container-bd7d3630\"\u003e\n \u003cdiv class=\"pdf-loadingWrapper\" id=\"pdf-loadingWrapper-bd7d3630\"\u003e\n \u003cdiv class=\"pdf-loading\" id=\"pdf-loading-bd7d3630\"\u003e\u003c/div\u003e\n \u003c/div\u003e\n \u003cdiv 
id=\"overlayText\"\u003e\n \u003ca href=\"Taskmaster.pdf\" aria-label=\"Download\" download\u003e\n \u003csvg aria-hidden=\"true\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 18 18\"\u003e\n \u003cpath d=\"M9 13c.3 0 .5-.1.7-.3L15.4 7 14 5.6l-4 4V1H8v8.6l-4-4L2.6 7l5.7 5.7c.2.2.4.3.7.3zm-7 2h14v2H2z\" /\u003e\n \u003c/svg\u003e\n \u003c/a\u003e\n \u003c/div\u003e\n \u003ccanvas class=\"pdf-canvas\" id=\"pdf-canvas-bd7d3630\"\u003e\u003c/canvas\u003e\n\u003c/div\u003e\n\n\u003cdiv class=\"pdf-paginator\" id=\"pdf-paginator-bd7d3630\"\u003e\n \u003cbutton id=\"pdf-prev-bd7d3630\"\u003ePrevious\u003c/button\u003e\n \u003cbutton id=\"pdf-next-bd7d3630\"\u003eNext\u003c/button\u003e \u0026nbsp; \u0026nbsp;\n \u003cspan\u003e\n \u003cspan class=\"pdf-pagenum\" id=\"pdf-pagenum-bd7d3630\"\u003e\u003c/span\u003e / \u003cspan class=\"pdf-pagecount\" id=\"pdf-pagecount-bd7d3630\"\u003e\u003c/span\u003e\n \u003c/span\u003e\n \u003ca class=\"pdf-source\" id=\"pdf-source-bd7d3630\" href=\"Taskmaster.pdf\"\u003e[pdf]\u003c/a\u003e\n\u003c/div\u003e\n\n\u003cnoscript\u003e\nView the PDF file \u003ca class=\"pdf-source\" id=\"pdf-source-noscript-bd7d3630\" href=\"Taskmaster.pdf\"\u003ehere\u003c/a\u003e.\n\u003c/noscript\u003e\n\n\u003cscript type=\"text/javascript\"\u003e\n (function(){\n var url = 'Taskmaster.pdf';\n\n var hidePaginator = \"\" === \"true\";\n var hideLoader = \"\" === \"true\";\n var selectedPageNum = parseInt(\"\") || 1;\n\n \n var pdfjsLib = window['pdfjs-dist/build/pdf'];\n\n \n if (pdfjsLib.GlobalWorkerOptions.workerSrc == '')\n pdfjsLib.GlobalWorkerOptions.workerSrc = \"https:\\/\\/tmarice.dev\\/\" + 'js/pdf-js/build/pdf.worker.js';\n\n \n var pdfDoc = null,\n pageNum = selectedPageNum,\n pageRendering = false,\n pageNumPending = null,\n scale = 3,\n canvas = document.getElementById('pdf-canvas-bd7d3630'),\n ctx = canvas.getContext('2d'),\n paginator = document.getElementById(\"pdf-paginator-bd7d3630\"),\n loadingWrapper = 
document.getElementById('pdf-loadingWrapper-bd7d3630');\n\n\n \n showPaginator();\n showLoader();\n\n \n\n function renderPage(num) {\n pageRendering = true;\n \n pdfDoc.getPage(num).then(function(page) {\n var viewport = page.getViewport({scale: scale});\n canvas.height = viewport.height;\n canvas.width = viewport.width;\n\n \n var renderContext = {\n canvasContext: ctx,\n viewport: viewport\n };\n var renderTask = page.render(renderContext);\n\n \n renderTask.promise.then(function() {\n pageRendering = false;\n showContent();\n\n if (pageNumPending !== null) {\n \n renderPage(pageNumPending);\n pageNumPending = null;\n }\n });\n });\n\n \n document.getElementById('pdf-pagenum-bd7d3630').textContent = num;\n }\n\n \n\n function showContent() {\n loadingWrapper.style.display = 'none';\n canvas.style.display = 'block';\n }\n\n \n\n function showLoader() {\n if(hideLoader) return\n loadingWrapper.style.display = 'flex';\n canvas.style.display = 'none';\n }\n\n \n\n function showPaginator() {\n if(hidePaginator) return\n paginator.style.display = 'block';\n }\n\n \n\n function queueRenderPage(num) {\n if (pageRendering) {\n pageNumPending = num;\n } else {\n renderPage(num);\n }\n }\n\n \n\n function onPrevPage() {\n if (pageNum \u003c= 1) {\n return;\n }\n pageNum--;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-prev-bd7d3630').addEventListener('click', onPrevPage);\n\n \n\n function onNextPage() {\n if (pageNum \u003e= pdfDoc.numPages) {\n return;\n }\n pageNum++;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-next-bd7d3630').addEventListener('click', onNextPage);\n\n \n\n pdfjsLib.getDocument(url).promise.then(function(pdfDoc_) {\n pdfDoc = pdfDoc_;\n var numPages = pdfDoc.numPages;\n document.getElementById('pdf-pagecount-bd7d3630').textContent = numPages;\n\n \n if(pageNum \u003e numPages) {\n pageNum = numPages\n }\n\n \n renderPage(pageNum);\n });\n })();\n\u003c/script\u003e","title":"Taskmaster: Solving Deployment Headaches 
Caused by Long-Running Celery Jobs"},{"content":"Having no documentation is often less harmful than having inaccurate documentation.\nLike code, documentation degrades over time. What was once accurate may now be obsolete. And practices we once ignored might now be part of our daily workflow. Unless maintaining documentation is an intentional process, it will rot, maybe beyond saving.\nIn this article I\u0026rsquo;ll outline a few guidelines that worked well for me in the past for keeping the developer documentation usable. They\u0026rsquo;re certainly not universal, but they are a good starting point for a small team of experienced developers. We’ll skip the philosophical debates and focus on real-world practices that work for small, fast-moving engineering teams - the aim is to get the most value with the least effort. Documentation doesn\u0026rsquo;t pay the bills.\n1. Keep Documentation in the Codebase If your documentation is for developers, the natural thing to do is to keep the documentation close to the code. Create a docs/ folder in the project root and add some markdown files to it. Everyone already has a documentation viewer \u0026ndash; their code editor. As a cherry on top, GitHub renders markdown files in the browser so you can visually browse the documentation. Add a README.md with a short description to every subfolder to make it easier to navigate on the web.\nGreppability is a huge plus \u0026ndash; searching for a class name will render both code and documentation results.\nI usually read the raw markdown files because most of our articles are text-only, but in case of multimedia, I can view it either using my editor\u0026rsquo;s markdown preview, or navigate to GitHub.\n2. Flat is generally better Engineers tend to value neatness. It\u0026rsquo;s very tempting to start developing a complex hierarchy of folders, enumerating all possible domains, adding placeholder folders and articles. The thing is, documentation is meant to be read. 
If the structure is too complex, the information becomes fragmented and hard to find, even though it\u0026rsquo;s seemingly well-organized.\nWhen organizing the documentation, ask yourself how you would find this if production were down and finding this information were the only thing that could help.\nAnother benefit of flatter organization is surfacing \u0026ldquo;unknown unknowns\u0026rdquo; — useful insights you wouldn’t have searched for but found by proximity to other information.\nNot so good structure:\n/docs\n  /architecture\n  /infrastructure\n    servers.md\n    linux.md\n    dns.md\n    cdn.md\n  /backend\n    django.md\n    celery.md\n  /frontend\n    components.md\nBetter structure:\n/docs\n  README.md\n  local_setup.md\n  infrastructure.md\n  deployment.md\n  troubleshooting.md\n3. Keep It Company-Specific Your documentation should contain company-specific notes, not general write-ups of engineering concepts. If we have a special way of handling CORS, write about our specifics; do not explain what CORS is. There are plenty of articles on that written by people who know more about it than you.\nNot so good article:\nCORS stands for Cross-Origin Resource Sharing. It is a mechanism that allows ...\nBetter article:\nCORS is handled by `django-cors-headers`. See `settings.py` for the list of allowed origins.\nIf it\u0026rsquo;s really important to understand the concept well, link to an external source, but keep in mind that links rot as well, and you will need to update them periodically. Prefer well-established sources like MDN, Django documentation, etc.\n4. Someone Has To Be the Documentation Police Good documentation is a process, not a one-off task — and every process needs an owner.\nEven in small teams, there are people with different priorities. If we\u0026rsquo;re moving fast, it\u0026rsquo;s OK to skip writing documentation for a while, but someone needs to keep the documentation debt in mind, and schedule updates, deprecations and additions. This can be anyone who is willing. 
Tenured engineers are probably a better choice than engineering managers, who are more removed from day-to-day operations.\n5. Continuous Improvement The best test of your documentation is a new person joining the team. They come from a different way of doing things, and haven\u0026rsquo;t yet grown accustomed to \u0026ldquo;our way\u0026rdquo;. Their first months in the company should be used to re-evaluate the usefulness and correctness of the documentation.\nThey might suggest new additions because we\u0026rsquo;re taking some knowledge as a given. They will quickly catch things that are no longer true because they will be following the steps in the documentation to the letter. It would be a shame not to use this opportunity for improvement.\nIt\u0026rsquo;s especially a good idea to make documentation review a part of the onboarding process: tell the new hires it\u0026rsquo;s one of their first duties to review the documentation and suggest improvements. Make sure to follow up on their suggestions, providing feedback on acceptance or rejection, and let them implement the changes.\n6. Periodic Revision Closely related to the previous point, but still different. You will probably not reorganize the documentation because one person found it unclear. But if it\u0026rsquo;s a pattern, schedule time for it on the roadmap and approach it seriously.\nA 6- or 12-month cadence seems reasonable. Dedicate some time to gather feedback. Take a high-level overview:\nWhich articles are rarely used?\nWhat\u0026rsquo;s outdated?\nAre any links broken?\nAre any articles too long or unfocused?\nAre we missing any key topics?\nAsking publicly in Slack what people think of the current state of the documentation is a good start.\n7. Make Documentation a Habit Unused documentation is a sign of bad documentation.\nPromote the documentation. Link to articles instead of providing the answer directly; this increases the chance of discovering unknown unknowns. 
If the answer is not in the documentation, then write it down. Encourage your team colleagues to do the same. Build the culture of writing documentation.\nWhen done right, usable documentation is a force multiplier that makes onboarding faster, reduces support noise, and helps everyone move faster with confidence.\n","permalink":"https://tmarice.dev/blog/on-usable-documentation/","summary":"\u003cp\u003eHaving no documentation is often less harmful than having inaccurate documentation.\u003c/p\u003e\n\u003cp\u003eLike code, documentation degrades over time. What was once accurate may now be obsolete. And practices we once ignored\nmight now be part of our daily workflow. Unless maintaining documentation is an intentional process, it will rot, maybe\nbeyond saving.\u003c/p\u003e\n\u003cp\u003eIn this article I\u0026rsquo;ll outline a few guidelines that worked well for me in the past for keeping the developer\ndocumentation usable. They\u0026rsquo;re certainly not universal, but they are a good starting point for a small team of\nexperienced developers. We’ll skip the philosophical debates and focus on real-world practices that work for small,\nfast-moving engineering teams - the aim is to get the most value with the least effort. Documentation doesn\u0026rsquo;t pay the\nbills.\u003c/p\u003e","title":"On Usable Documentation"},{"content":"What\u0026rsquo;s CSRF? Cross site request forgery is a type of attack where a malicious website tricks a user into performing actions on another site where they\u0026rsquo;re authenticated. This is usually done by embedding a form in the malicious site, and submitting it to the target site.\nAn example of this would be a card game website where, when you hit the \u0026ldquo;Play\u0026rdquo; button, it sends a POST request to another site with the payload to change your login email address to the attacker\u0026rsquo;s. 
Since you\u0026rsquo;re logged in to the target site, the request goes through and you lose access to your account.\nHow Does It Work in Django? By default, Django serves you a cookie with the CSRF token on the first request. This token (in a masked form) is embedded in every form that Django generates, and is unique to the user and the session.\nThe form token is checked on every unsafe request (POST, PUT, DELETE, PATCH). If the token is missing, invalid, or does not match the token in the cookie, the server responds with a 403 Forbidden response.\nThis way Django ensures that the request is coming from the site itself, and not from a malicious third party, since no other server can generate valid CSRF tokens.\nThe Problem The scenario is as follows:\nYou open a website in one tab\nYou open the same website in another tab\nYou log in in the second tab, and start using the website\nYou go back to the first tab, and try to do something that requires a POST request (like submitting a form)\nYou get a 403 Forbidden CSRF Error response\nFor security reasons, Django cycles CSRF tokens on every login. This means that the token embedded in the form in the first tab is now invalid since it was generated before your login in the second tab.\nDjango, being the best web framework out there, even warns you about this if you have DEBUG = True and you get a CSRF failure.\nSince this can happen to regular users, it\u0026rsquo;s not just a security problem, but also a UX problem. Users are most likely to encounter it on the login page, because it is one of the few public forms every site has, and a successful login cycles the token.\nSolution #1: Pure Django Solution Django allows setting a custom CSRF failure handler view via the settings.CSRF_FAILURE_VIEW variable. For a seamless UX, in case this happens on the login view, you could redirect the user back to the referrer page. 
Since they\u0026rsquo;re already logged in, they will be able to access it.\nAs a bonus, let\u0026rsquo;s add a nice template for the CSRF failure view that explains what happened and offers a button to go back to the previous page.\n# settings.py CSRF_FAILURE_VIEW = \u0026#39;myapp.views.csrf_failure\u0026#39; # views.py from http import HTTPStatus from django.shortcuts import redirect, render from django.urls import resolve def csrf_failure(request, reason=\u0026#34;\u0026#34;): referer = request.META.get(\u0026#34;HTTP_REFERER\u0026#34;, \u0026#34;/\u0026#34;) if resolve(request.path).url_name == \u0026#34;login\u0026#34;: return redirect(referer) return render(request, \u0026#34;csrf_failure.html\u0026#34;, context={\u0026#34;referer\u0026#34;: referer}, status=HTTPStatus.FORBIDDEN) Solution #2: JavaScript Another solution would be to use JavaScript to periodically check if the CSRF cookie changed since the initial page load and warn the user if it did.\n// csrf.js const COOKIE_NAME = \u0026#39;csrftoken\u0026#39;; function getCookie(name) { const value = `; ${document.cookie}`; const parts = value.split(`; ${name}=`); if (parts.length === 2) return parts.pop().split(\u0026#39;;\u0026#39;).shift(); } function checkCSRFChange() { const currentToken = getCookie(COOKIE_NAME); if (currentToken \u0026amp;\u0026amp; currentToken !== initialToken) { alert(\u0026#34;Your session has changed or expired. Please reload the page to avoid losing changes.\u0026#34;); } } const initialToken = getCookie(COOKIE_NAME); setInterval(checkCSRFChange, 5000); ","permalink":"https://tmarice.dev/blog/handling-csrf-login-errors-gracefully-in-django/","summary":"\u003ch1 id=\"whats-csrf\"\u003eWhat\u0026rsquo;s CSRF?\u003c/h1\u003e\n\u003cp\u003eCross-site request forgery is a type of attack where a malicious website tricks a user into performing actions on\nanother site where they\u0026rsquo;re authenticated. 
This is usually done by embedding a form in the malicious site, and\nsubmitting it to the target site.\u003c/p\u003e\n\u003cp\u003eAn example of this would be a card game website where, when you hit the \u0026ldquo;Play\u0026rdquo; button, it sends a POST request to\nanother site with the payload to change your login email address to the attacker\u0026rsquo;s. Since you\u0026rsquo;re logged in to the\ntarget site, the request goes through and you lose access to your account.\u003c/p\u003e","title":"Handling Csrf Login Errors Gracefully in Django"},{"content":"Every engineer who loves Django and has a blog has at least one of these posts.\nDjango\u0026rsquo;s ORM is excellent, but given enough time it\u0026rsquo;s easy for approaches that weren\u0026rsquo;t mistakes to grow into mistakes. This is a great thing, because it usually means your company didn\u0026rsquo;t go bankrupt, you\u0026rsquo;re still here and can fix things, and the company is doing well because the scale increased (hopefully your compensation as well).\nThis is a recap of my recent experience optimizing Celery tasks that started out as non-problematic, but with the passage of time became problematic, causing server and database stability issues.\nprefetch_related + iterator() problem Up until Django 4.1, calling iterator() on a queryset with prefetch_related() caused the prefetched data to be dropped, causing an N+1 queries problem.\nIn Django 4.1, iterator() started respecting prefetch_related() as long as its chunk_size argument is provided, which allows us to get the best of both worlds \u0026ndash; avoid pulling the entire dataset into memory while avoiding the N+1 queries problem. Or at least turn the N+1 into N / chunk_size + 1, which is considerably better. 
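As a quick sanity check on that arithmetic, here is a toy model of the query counts (purely illustrative numbers, not Django internals):

```python
import math

def approx_query_count(n_rows: int, chunk_size: int) -> int:
    # One query for the main rows, plus one prefetch query per chunk.
    return math.ceil(n_rows / chunk_size) + 1

# Fetching related data row by row degenerates to the classic N+1:
assert approx_query_count(100_000, 1) == 100_001
# Prefetching per 2000-row chunk only needs N / chunk_size + 1 queries:
assert approx_query_count(100_000, 2_000) == 51
```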
But chunk_size\u0026rsquo;s default value is None, which reverts to the old behavior of silently discarding the prefetched data.\nThe pseudocode from the problematic task looked something like this:\nprofiles = Profile.objects.filter( community=community ).select_related( a bunch of joins here ).prefetch_related( \u0026#39;user__groups\u0026#39;, a bunch of other m2ms ) BATCH_SIZE = 1000 records = [] for profile in profiles.iterator(): group_names = [g.name for g in profile.user.groups.all()] primary_group = determine_primary_group(group_names) denorm_profile = DenormProfileRecord( user_id=profile.user_id, primary_group=primary_group, groups=group_names, ) records.append(denorm_profile) if len(records) == BATCH_SIZE: DenormProfileRecord.objects.bulk_create(records) records = [] DenormProfileRecord.objects.bulk_create(records) This task took hours, and used up a lot of database resources. When it was originally written, there were not that many profiles and there were not many other tasks demanding database resources so it worked well. But we were lucky: people registered in increasing numbers, other business processes took their own chunk of the database\u0026rsquo;s resources, and this really became a bottleneck.\nMy first optimization attempt was:\nIncrease the batch size to 10k, to reduce the number of bulk_create() calls Use only() on the Profile queryset to avoid fetching unnecessary data Since I believed only() reduced the memory requirements enough, I dropped the iterator() to enable prefetch_related() to do its thing This optimization turned out to be \u0026hellip; less than optimal.\nOne week and one server and database outage later, I was forced to revisit my optimization.\nMy first failing was in not examining the entire context in which this code is run. 
It\u0026rsquo;s part of a Celery task executed by a Celery worker with --concurrency=4, meaning that it\u0026rsquo;s possible that we try to refresh 4 big communities at the same time.\nSecond, I failed to account for some communities having 100s of thousands of profiles. Removing the iterator() call means all of these profiles are loaded into memory at once.\nThird, I underestimated the difference in memory consumption between Python model instances (which are still constructed when you use only()) and Python built-in types.\nThe second optimization attempt was:\nRecognizing that we don\u0026rsquo;t really need model instances from the prefetched relations, we just need certain values \u0026ndash; we can get much better performance by using the PostgreSQL-specific ArrayAgg only() only marginally reduced the memory footprint and since we don\u0026rsquo;t actually need the Profile model instances, we can get a huge benefit from enumerating all required fields in a values_list() call and avoid constructing the model instances completely The final version looked something like:\nprofiles = Profile.objects.filter( community=community ).annotate( group_names=ArrayAgg( \u0026#39;user__groups__name\u0026#39;, filter=Q(user__groups__isnull=False), distinct=True, default=[] ), ...other values that can be aggregated as well, ).values_list( \u0026#39;user_id\u0026#39;, \u0026#39;group_names\u0026#39;, ...all other fields that were necessary for DenormProfileRecord, named=True ) BATCH_SIZE = 10000 records = [] for profile in profiles.iterator(): primary_group = determine_primary_group(profile.group_names) denorm_profile = DenormProfileRecord( user_id=profile.user_id, primary_group=primary_group, groups=profile.group_names, ) records.append(denorm_profile) if len(records) == BATCH_SIZE: DenormProfileRecord.objects.bulk_create(records) records = [] DenormProfileRecord.objects.bulk_create(records) This way we could keep the iterator() call since there were no prefetch_related() calls. 
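The gap between model instances and built-in types is easy to underestimate. A rough shallow-size comparison (FatRow is a hypothetical stand-in for a model instance, and sys.getsizeof undercounts both sides, but the direction of the gap holds):

```python
import sys

class FatRow:
    # Stand-in for a model instance: per-object __dict__ attribute storage.
    def __init__(self, user_id, group_names):
        self.user_id = user_id
        self.group_names = group_names

fat = FatRow(1, ["admins", "editors"])
slim = (1, ["admins", "editors"])  # roughly what values_list() hands back

fat_size = sys.getsizeof(fat) + sys.getsizeof(fat.__dict__)
slim_size = sys.getsizeof(slim)
assert slim_size < fat_size  # real model instances carry far more state still
```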
The values_list() optimization wasn\u0026rsquo;t actually necessary because we only had a single row of Profile data in memory at the same time, but I kept it just in case.\nThis reduced the memory strain on the server from \u0026ldquo;fills up the RAM and swap and causes OOM killer to go on a rampage\u0026rdquo; to \u0026ldquo;unnoticeable\u0026rdquo;. The runtime dropped from several hours to ~30s.\nOptimizing Redis Access This one isn\u0026rsquo;t really Django ORM related, but it was done in the same batch of optimizations so I\u0026rsquo;ll touch on it. We utilize Redis to keep a sorted set of video name prefixes allowing live autocomplete while typing in the search box on the site.\nPopulating this Redis sorted set is done on a daily basis: it\u0026rsquo;s completely dropped and recreated from scratch.\nThe function looks something like:\nfrom redis.client import Redis AUTOCOMPLETE_REDIS_KEY = \u0026#39;autocomplete\u0026#39; MAX_PREFIX_LENGTH = 8 def update_autocomplete_prefixes(): redis = Redis.from_url(REDIS_CACHE_LOCATION) redis.delete(AUTOCOMPLETE_REDIS_KEY) for video_title in Video.objects.values_list(\u0026#39;title\u0026#39;, flat=True): title = video_title.lower().strip() for i in range(1, MAX_PREFIX_LENGTH + 1): redis.zadd(AUTOCOMPLETE_REDIS_KEY, {title[0:i]: 0}) Upon investigating the redis library we use, it turns out each zadd() call is a single network request. As the number of videos grew, the number of network requests grew as well, until this task took about 15 minutes to complete.\nThe optimization approach here was to collect all Redis updates in a single dictionary and push it in a single network call. 
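The accumulation step is plain Python and easy to verify in isolation. A minimal sketch (collect_prefixes is a hypothetical helper; the constant name is borrowed from the snippets above):

```python
MAX_PREFIX_LENGTH = 8

def collect_prefixes(titles, max_len=MAX_PREFIX_LENGTH):
    # Merge every title's prefixes into one dict, so a single
    # zadd() call can later push them all in one network round trip.
    prefixes = {}
    for title in titles:
        cleaned = title.lower().strip()
        prefixes |= {cleaned[:i]: 0 for i in range(1, max_len + 1)}
    return prefixes

prefixes = collect_prefixes(["Cats", "Castles"])
assert "cat" in prefixes and "castles" in prefixes
assert len(prefixes) == 9  # shared prefixes like "ca" are stored once
```

The dict also deduplicates for free: prefixes shared between titles, or repeated because a title is shorter than the maximum prefix length, end up as a single member.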
This approach also allowed moving the delete call much closer to the single zadd() call, reducing the time where the autocomplete prefixes were only partially available.\nOne small database-related improvement was pushing the strip() and lower() calls to the database utilizing the Trim() and Lower() database functions.\nThe rewritten task looks something like:\nfrom django.db.models.functions import Lower, Trim from redis.client import Redis AUTOCOMPLETE_REDIS_KEY = \u0026#39;autocomplete\u0026#39; MAX_PREFIX_LENGTH = 8 def update_autocomplete_prefixes(): prefixes = {} titles = Video.objects.annotate( clean_title=Trim(Lower(\u0026#39;title\u0026#39;)) ).values_list(\u0026#39;clean_title\u0026#39;, flat=True) for video_title in titles: prefixes |= {video_title[0:i]: 0 for i in range(1, MAX_PREFIX_LENGTH + 1)} redis = Redis.from_url(REDIS_CACHE_LOCATION) redis.delete(AUTOCOMPLETE_REDIS_KEY) redis.zadd(AUTOCOMPLETE_REDIS_KEY, prefixes) This reduced the runtime of the task from 15 minutes to ~5 seconds.\nI also entertained the idea of using memoryviews to avoid constructing new string objects for each prefix, but there were risks associated with handling unicode characters (which were present in the video titles, and memoryviews operate on bytes), and not really being familiar with how the redis Python library handles the passed data (it could quite easily cast these memoryviews back to strings, annulling any gains).\nOptimizing Deletion of Old Records For debugging purposes, we retain a copy of each email sent to our users. Since we like to keep things simple, this data is kept within a table in PostgreSQL, and the old records are purged from the table daily. Retention policy is 2 weeks, so every day there is a Celery task that identifies old records and deletes them:\ndef remove_old_emails(): old_mailer_ids = list( Mailer.objects.filter( sent__lte=timezone.now() - relativedelta(weeks=2) ).values_list(\u0026#39;id\u0026#39;, flat=True) ) old_emails = Email.objects.filter(mailer_id__in=old_mailer_ids) old_emails._raw_delete(old_emails.db) This regularly took 20-30 minutes, even with the _raw_delete optimization. 
The table is not humongous; it definitely should not take that long.\nFor historical reasons (when the table was humongous and a join would kill the database) the table doesn\u0026rsquo;t have any foreign key constraints, and all other tables are referenced through soft foreign keys (e.g. mailer_id is an integer column with an index, without a foreign key constraint). In the meantime we introduced a data retention policy to manage the table\u0026rsquo;s size.\nThe problematic part quickly emerged upon inspecting the SQL query: the list of old mailer IDs has 100k members, and is growing daily since it\u0026rsquo;s a list of all mailers ever sent. This makes Postgres\u0026rsquo; life hard, and degrades every query to a full table sequential scan.\nThe solution is clear: reduce the list of mailer IDs to something manageable. Since the task is run daily, it\u0026rsquo;s safe to reduce the list to mailers that were sent between 2 weeks ago and 2 weeks and 1 day ago. We want to have some redundancy so we increased that range to 2 weeks and 3 days ago, in case something prevents the task from running for a day or two.\nPostgres started using an index scan instead of a sequential scan, and things sped up drastically \u0026ndash; the runtime dropped from 20-30 minutes to 3-5 minutes.\nOptimizing Creation of New Many-to-Many Records Using Model.objects.create() in a for loop is a surefire way to degrade the performance of your database \u0026ndash; every create() call is a network request to the database with an INSERT command.\nfor_date = date(2024, 8, 24) impression_data = Impression.objects.filter( created__date=for_date ).values( \u0026#39;content_type\u0026#39;, \u0026#39;object_id\u0026#39; ).annotate(total_impressions=Count(\u0026#39;id\u0026#39;)) for content_impression in impression_data: DailyImpressionStats.objects.create(**content_impression) One of the basic ways to improve performance, if memory allows it, is to accumulate unsaved model instances in memory and then 
create them all at once using a bulk_create call:\nfor_date = date(2024, 8, 24) impression_data = Impression.objects.filter( created__date=for_date ).values( \u0026#39;content_type\u0026#39;, \u0026#39;object_id\u0026#39; ).annotate(total_impressions=Count(\u0026#39;id\u0026#39;)) daily_impressions_list = [] for content_impression in impression_data: daily_impressions_list.append(DailyImpressionStats(**content_impression)) DailyImpressionStats.objects.bulk_create(daily_impressions_list) This is a fairly common optimization, but here\u0026rsquo;s a follow-up problem: what if we also have to set a many-to-many relationship on the model we want to bulk create? At first, it seems like we cannot use the bulk_create() approach anymore:\nfor content_impression in impression_data: daily_impression = DailyImpressionStats(**content_impression) tags = get_tags_for_content( content_type=content_impression[\u0026#39;content_type\u0026#39;], object_id=content_impression[\u0026#39;object_id\u0026#39;], ) daily_impression.tags.set(tags) # ERROR: daily_impression has to be saved before we can set the M2M relationship Luckily, digging a bit deeper into how Django implements the many-to-many relationship offers an answer. When we define a many-to-many relationship between models, Django creates an intermediary table with an accompanying through model accessible via Model.m2m_field.through. This allows us to also accumulate the through model instances in another list and bulk create them as well.\nIf the id field is declared as an AutoField, PostgreSQL, MariaDB and SQLite set the id field on model instances when using bulk_create(). 
Our many-to-many records reference these instances so we can first bulk create the model instances, and then bulk create the many-to-many instances:\nfor_date = date(2024, 8, 24) impression_data = Impression.objects.filter( created__date=for_date ).values( \u0026#39;content_type\u0026#39;, \u0026#39;object_id\u0026#39; ).annotate(total_impressions=Count(\u0026#39;id\u0026#39;)) daily_impressions_list = [] daily_impressions_tag_list = [] for content_impression in impression_data: daily_impression = DailyImpressionStats(**content_impression) daily_impressions_list.append(daily_impression) tags = get_tags_for_content( content_type=content_impression[\u0026#39;content_type\u0026#39;], object_id=content_impression[\u0026#39;object_id\u0026#39;], ) daily_impressions_tag_list.extend( DailyImpressionStats.tags.through( dailyimpressionstats=daily_impression, tag=tag ) for tag in tags ) DailyImpressionStats.objects.bulk_create(daily_impressions_list) DailyImpressionStats.tags.through.objects.bulk_create(daily_impressions_tag_list) Epilogue It\u0026rsquo;s easy to dismiss the problems with the original code snippets as \u0026ldquo;skill issues\u0026rdquo;, and sometimes they really are. But we need to keep in mind that equally often code starts out as performant and ends up as a bottleneck. Conditions change, scale increases, tech stack evolves. If you tried to do some of the described optimizations in the initial code push, I would probably be the first one to invoke YAGNI and ask for a simplification. 
Business first, tech second \u0026ndash; every minute you spend on optimizing a query means the business might not make enough money and not live to see the day your optimization pays off.\nIt\u0026rsquo;s not important to write the optimal code in the first go, it\u0026rsquo;s important to be able to write it once it becomes problematic, and in the meantime, build things that matter.\n","permalink":"https://tmarice.dev/blog/better-living-through-optimized-django/","summary":"\u003cp\u003eEvery engineer who loves Django and has a blog has at least one of these posts.\u003c/p\u003e\n\u003cp\u003eDjango\u0026rsquo;s ORM is excellent, but given enough time it\u0026rsquo;s easy for approaches that weren\u0026rsquo;t mistakes to grow into mistakes. This is a great thing, because it usually means your company didn\u0026rsquo;t go bankrupt, you\u0026rsquo;re still here and can fix things, and the company is doing well because the scale increased (hopefully your compensation as well).\u003c/p\u003e","title":"Better Living Through Optimized Django"},{"content":"@property decorator is an excellent way to reduce the readability of Python code. It obfuscates a perfectly good function call and tricks readers into thinking they\u0026rsquo;re performing a regular attribute access or assignment.\nUnless there\u0026rsquo;s a really good and explicit reason to do this, don\u0026rsquo;t.\nList of Good and Explicit Reasons: Refactoring That\u0026rsquo;s pretty much it.\nIf you need to turn something that (rightfully so) started out as a simple attribute, but with time accrued some more complex logic, @property is a good way to gracefully transition from attributes to function calls.\nVersion 1 We start out with a simple attribute. You can get it, you can set it. As a consenting adult, you\u0026rsquo;re free to do with it whatever you want.\nclass Client: def __init__(self, value): self.value = value Version 2: The project gains traction. 
You need to add two new features:\nEmit an event whenever the Client.value attribute is accessed, so other parts of the code can listen to it and do their own thing You want a central place to validate values being assigned, to avoid littering the rest of your codebase with error handling Because we\u0026rsquo;re a self-aware smol brain developer, we like plain old functions. We craft a plan to change the class interface to use getter/setter functions instead of direct attribute access. But since we\u0026rsquo;re also responsible and respectful to our colleagues/clients, we don\u0026rsquo;t just change the API abruptly. No, we will be emitting a deprecation warning for some time, and only introduce breaking changes in the API after we\u0026rsquo;ve given everyone ample time to migrate.\nimport warnings class Client: def __init__(self, value): # We add a private attribute to hold the value self._value = None self.set_value(value) @property def value(self): # We can now emit a deprecation warning on # each access, urging our users to migrate to the new API warnings.warn(\u0026#34;Client.value is deprecated, use Client.get_value() instead!\u0026#34;, DeprecationWarning) # ... and offload the act of retrieving the value # to the newly-introduced function value = self.get_value() return value @value.setter def value(self, new_value): warnings.warn(\u0026#34;Client.value is deprecated, use Client.set_value() instead!\u0026#34;, DeprecationWarning) self.set_value(new_value) # We add getter/setter functions with the new logic def get_value(self): self._emit_event(\u0026#39;value_access\u0026#39;) return self._value def set_value(self, new_value): self._validate_value(new_value) self._value = new_value Version 3: Time has passed, and people have migrated to the new API. We\u0026rsquo;re ready to make our lives easier, and simplify the codebase by removing the dirty @property. 
Life is good again.\nclass Client: def __init__(self, value): self._value = None self.set_value(value) def get_value(self): self._emit_event(\u0026#39;value_access\u0026#39;) return self._value def set_value(self, new_value): self._validate_value(new_value) self._value = new_value Going a Bit Deeper @property is an example of a descriptor. Descriptors are a neat Python construct that \u0026ldquo;lets objects customize attribute lookup, storage, and deletion\u0026rdquo;. Some of the nicer things in life I enjoy are made using descriptors, namely Django\u0026rsquo;s ORM.\nBut just because you can doesn\u0026rsquo;t mean you should. We always strive for the least complex option, and if you\u0026rsquo;re certain descriptors will make everyone\u0026rsquo;s (not just yours!) lives easier, then go for it. Most of the time, though, plain functions are the way to go.\nStop worrying and learn to love the function call.\n","permalink":"https://tmarice.dev/blog/on-pythons-property-decorator/","summary":"\u003cp\u003e\u003ccode\u003e@property\u003c/code\u003e decorator is an excellent way to reduce the readability of Python code. 
It obfuscates a perfectly good\nfunction call and tricks readers into thinking they\u0026rsquo;re performing a regular attribute access or assignment.\u003c/p\u003e\n\u003cp\u003eUnless there\u0026rsquo;s a really good and explicit reason to do this, don\u0026rsquo;t.\u003c/p\u003e\n\u003ch2 id=\"list-of-good-and-explicit-reasons\"\u003eList of Good and Explicit Reasons:\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eRefactoring\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eThat\u0026rsquo;s pretty much it.\u003c/p\u003e\n\u003cp\u003eIf you need to turn something that (rightfully so) started out as a simple attribute, but with time accrued some more\ncomplex logic, @property is a good way to gracefully transition from attributes to function calls.\u003c/p\u003e","title":"On Python's @property Decorator"},{"content":"Instead of\ndef do_something(a, b, c): return res_fn( fn(a, b), fn(b), c ) I do:\ndef do_something(a, b, c): inter_1 = fn(a, b) inter_2 = fn(b) result = res_fn(inter_1, inter_2, c) return result The first version is much shorter, and when formatted properly, equally readable.\nBut the reason I prefer the second approach is because all intermediate steps are saved to local variables.\nException tracking tools like Sentry, and even Django\u0026rsquo;s error page that pops up when DEBUG=True is set, capture the local context. On top of that, if you ever have to step through the function with a debugger, you can see the exact return value before stepping out from the function. 
This is the reason why I even save the final result in a local variable, just before returning it.\nAt the performance cost of a couple of extra variable assignments, and a couple of extra lines of code, this makes debugging much easier.\n","permalink":"https://tmarice.dev/blog/why-i-always-assign-intermediate-values-to-local-variables-instead-of-passing-them-directly-to-function-calls/","summary":"\u003cp\u003eInstead of\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"\u003e\u003ccode class=\"language-python\" data-lang=\"python\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003edef\u003c/span\u003e \u003cspan style=\"color:#a6e22e\"\u003edo_something\u003c/span\u003e(a, b, c):\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e \u003cspan style=\"color:#66d9ef\"\u003ereturn\u003c/span\u003e res_fn(\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e fn(a, b),\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e fn(b),\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e c\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e )\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eI do:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"\u003e\u003ccode class=\"language-python\" data-lang=\"python\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003edef\u003c/span\u003e \u003cspan style=\"color:#a6e22e\"\u003edo_something\u003c/span\u003e(a, b, c):\n\u003c/span\u003e\u003c/span\u003e\u003cspan 
style=\"display:flex;\"\u003e\u003cspan\u003e inter_1 \u003cspan style=\"color:#f92672\"\u003e=\u003c/span\u003e fn(a, b)\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e inter_2 \u003cspan style=\"color:#f92672\"\u003e=\u003c/span\u003e fn(b)\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e result \u003cspan style=\"color:#f92672\"\u003e=\u003c/span\u003e res_fn(inter_1, inter_2, c)\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e \u003cspan style=\"color:#66d9ef\"\u003ereturn\u003c/span\u003e result\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eThe first version is much shorter, and when formatted properly, equally readable.\u003c/p\u003e\n\u003cp\u003eBut the reason I prefer the second approach is because all intermediate steps are saved to local variables.\u003c/p\u003e\n\u003cp\u003eException tracking tools like Sentry, and even Django\u0026rsquo;s error page that pops up when \u003ccode\u003eDEBUG=True\u003c/code\u003e is set, capture the local context. On top of that, if you ever have to step through the function with a debugger, you can see the exact return value before stepping out from the function. This is the reason why I even save the final result in a local variable, just before returning it.\u003c/p\u003e","title":"Why I Always Assign Intermediate Values to Local Variables Instead of Passing Them Directly to Function Calls"},{"content":"This talk was held at Python meetup Zagreb on October 10, 2023.\nPrevious Next \u0026nbsp; \u0026nbsp; / [pdf] View the PDF file here. 
","permalink":"https://tmarice.dev/talks/using-jupyter-outside-of-data-science/","summary":"\u003cp\u003eThis talk was held at \u003ca href=\"https://www.meetup.com/python-hrvatska/events/296540003\"\u003ePython meetup Zagreb on October 10, 2023\u003c/a\u003e.\u003c/p\u003e\n\u003cscript type=\"text/javascript\" src= '/js/pdf-js/build/pdf.js'\u003e\u003c/script\u003e\n\n\u003cstyle\u003e\n #embed-pdf-container {\n position: relative;\n width: 100%;\n height: auto;\n min-height: 20vh;\n \n }\n \n .pdf-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n #the-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n \n .pdf-loadingWrapper {\n display: none;\n justify-content: center;\n align-items: center;\n width: 100%;\n height: 350px;\n }\n \n .pdf-loading {\n display: inline-block;\n width: 50px;\n height: 50px;\n border: 3px solid #d2d0d0;;\n border-radius: 50%;\n border-top-color: #383838;\n animation: spin 1s ease-in-out infinite;\n -webkit-animation: spin 1s ease-in-out infinite;\n }\n \n \n \n \n \n #overlayText {\n word-wrap: break-word;\n display: grid;\n justify-content: end;\n }\n \n #overlayText a {\n position: relative;\n top: 10px;\n right: 4px;\n color: #000;\n margin: auto;\n background-color: #eeeeee;\n padding: 0.3em 1em;\n border: solid 2px;\n border-radius: 12px;\n border-color: #00000030;\n text-decoration: none;\n }\n \n #overlayText svg {\n height: clamp(1em, 2vw, 1.4em);\n width: clamp(1em, 2vw, 1.4em);\n }\n \n \n \n @keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n @-webkit-keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n \u003c/style\u003e\u003cdiv class=\"embed-pdf-container\" id=\"embed-pdf-container-6b449905\"\u003e\n \u003cdiv class=\"pdf-loadingWrapper\" id=\"pdf-loadingWrapper-6b449905\"\u003e\n \u003cdiv class=\"pdf-loading\" id=\"pdf-loading-6b449905\"\u003e\u003c/div\u003e\n 
\u003c/div\u003e\n \u003cdiv id=\"overlayText\"\u003e\n \u003ca href=\"Using_jupyter_outside_of_data_science.pdf\" aria-label=\"Download\" download\u003e\n \u003csvg aria-hidden=\"true\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 18 18\"\u003e\n \u003cpath d=\"M9 13c.3 0 .5-.1.7-.3L15.4 7 14 5.6l-4 4V1H8v8.6l-4-4L2.6 7l5.7 5.7c.2.2.4.3.7.3zm-7 2h14v2H2z\" /\u003e\n \u003c/svg\u003e\n \u003c/a\u003e\n \u003c/div\u003e\n \u003ccanvas class=\"pdf-canvas\" id=\"pdf-canvas-6b449905\"\u003e\u003c/canvas\u003e\n\u003c/div\u003e\n\n\u003cdiv class=\"pdf-paginator\" id=\"pdf-paginator-6b449905\"\u003e\n \u003cbutton id=\"pdf-prev-6b449905\"\u003ePrevious\u003c/button\u003e\n \u003cbutton id=\"pdf-next-6b449905\"\u003eNext\u003c/button\u003e \u0026nbsp; \u0026nbsp;\n \u003cspan\u003e\n \u003cspan class=\"pdf-pagenum\" id=\"pdf-pagenum-6b449905\"\u003e\u003c/span\u003e / \u003cspan class=\"pdf-pagecount\" id=\"pdf-pagecount-6b449905\"\u003e\u003c/span\u003e\n \u003c/span\u003e\n \u003ca class=\"pdf-source\" id=\"pdf-source-6b449905\" href=\"Using_jupyter_outside_of_data_science.pdf\"\u003e[pdf]\u003c/a\u003e\n\u003c/div\u003e\n\n\u003cnoscript\u003e\nView the PDF file \u003ca class=\"pdf-source\" id=\"pdf-source-noscript-6b449905\" href=\"Using_jupyter_outside_of_data_science.pdf\"\u003ehere\u003c/a\u003e.\n\u003c/noscript\u003e\n\n\u003cscript type=\"text/javascript\"\u003e\n (function(){\n var url = 'Using_jupyter_outside_of_data_science.pdf';\n\n var hidePaginator = \"\" === \"true\";\n var hideLoader = \"\" === \"true\";\n var selectedPageNum = parseInt(\"\") || 1;\n\n \n var pdfjsLib = window['pdfjs-dist/build/pdf'];\n\n \n if (pdfjsLib.GlobalWorkerOptions.workerSrc == '')\n pdfjsLib.GlobalWorkerOptions.workerSrc = \"https:\\/\\/tmarice.dev\\/\" + 'js/pdf-js/build/pdf.worker.js';\n\n \n var pdfDoc = null,\n pageNum = selectedPageNum,\n pageRendering = false,\n pageNumPending = null,\n scale = 3,\n canvas = document.getElementById('pdf-canvas-6b449905'),\n ctx = 
canvas.getContext('2d'),\n paginator = document.getElementById(\"pdf-paginator-6b449905\"),\n loadingWrapper = document.getElementById('pdf-loadingWrapper-6b449905');\n\n\n \n showPaginator();\n showLoader();\n\n \n\n function renderPage(num) {\n pageRendering = true;\n \n pdfDoc.getPage(num).then(function(page) {\n var viewport = page.getViewport({scale: scale});\n canvas.height = viewport.height;\n canvas.width = viewport.width;\n\n \n var renderContext = {\n canvasContext: ctx,\n viewport: viewport\n };\n var renderTask = page.render(renderContext);\n\n \n renderTask.promise.then(function() {\n pageRendering = false;\n showContent();\n\n if (pageNumPending !== null) {\n \n renderPage(pageNumPending);\n pageNumPending = null;\n }\n });\n });\n\n \n document.getElementById('pdf-pagenum-6b449905').textContent = num;\n }\n\n \n\n function showContent() {\n loadingWrapper.style.display = 'none';\n canvas.style.display = 'block';\n }\n\n \n\n function showLoader() {\n if(hideLoader) return\n loadingWrapper.style.display = 'flex';\n canvas.style.display = 'none';\n }\n\n \n\n function showPaginator() {\n if(hidePaginator) return\n paginator.style.display = 'block';\n }\n\n \n\n function queueRenderPage(num) {\n if (pageRendering) {\n pageNumPending = num;\n } else {\n renderPage(num);\n }\n }\n\n \n\n function onPrevPage() {\n if (pageNum \u003c= 1) {\n return;\n }\n pageNum--;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-prev-6b449905').addEventListener('click', onPrevPage);\n\n \n\n function onNextPage() {\n if (pageNum \u003e= pdfDoc.numPages) {\n return;\n }\n pageNum++;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-next-6b449905').addEventListener('click', onNextPage);\n\n \n\n pdfjsLib.getDocument(url).promise.then(function(pdfDoc_) {\n pdfDoc = pdfDoc_;\n var numPages = pdfDoc.numPages;\n document.getElementById('pdf-pagecount-6b449905').textContent = numPages;\n\n \n if(pageNum \u003e numPages) {\n pageNum = numPages\n }\n\n 
\n renderPage(pageNum);\n });\n })();\n\u003c/script\u003e","title":"Using Jupyter Outside of Data Science"},{"content":"This talk was held at Python meetup Zagreb on June 14, 2022.\nPrevious Next \u0026nbsp; \u0026nbsp; / [pdf] View the PDF file here. ","permalink":"https://tmarice.dev/talks/hacking-analytics-with-postgres/","summary":"\u003cp\u003eThis talk was held at \u003ca href=\"https://www.meetup.com/python-hrvatska/events/286414733\"\u003ePython meetup Zagreb on June 14, 2022\u003c/a\u003e.\u003c/p\u003e\n\u003cscript type=\"text/javascript\" src= '/js/pdf-js/build/pdf.js'\u003e\u003c/script\u003e\n\n\u003cstyle\u003e\n #embed-pdf-container {\n position: relative;\n width: 100%;\n height: auto;\n min-height: 20vh;\n \n }\n \n .pdf-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n #the-canvas {\n border: 1px solid black;\n direction: ltr;\n width: 100%;\n height: auto;\n display: none;\n }\n \n \n .pdf-loadingWrapper {\n display: none;\n justify-content: center;\n align-items: center;\n width: 100%;\n height: 350px;\n }\n \n .pdf-loading {\n display: inline-block;\n width: 50px;\n height: 50px;\n border: 3px solid #d2d0d0;;\n border-radius: 50%;\n border-top-color: #383838;\n animation: spin 1s ease-in-out infinite;\n -webkit-animation: spin 1s ease-in-out infinite;\n }\n \n \n \n \n \n #overlayText {\n word-wrap: break-word;\n display: grid;\n justify-content: end;\n }\n \n #overlayText a {\n position: relative;\n top: 10px;\n right: 4px;\n color: #000;\n margin: auto;\n background-color: #eeeeee;\n padding: 0.3em 1em;\n border: solid 2px;\n border-radius: 12px;\n border-color: #00000030;\n text-decoration: none;\n }\n \n #overlayText svg {\n height: clamp(1em, 2vw, 1.4em);\n width: clamp(1em, 2vw, 1.4em);\n }\n \n \n \n @keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n @-webkit-keyframes spin {\n to { -webkit-transform: rotate(360deg); }\n }\n \u003c/style\u003e\u003cdiv 
class=\"embed-pdf-container\" id=\"embed-pdf-container-782bae57\"\u003e\n \u003cdiv class=\"pdf-loadingWrapper\" id=\"pdf-loadingWrapper-782bae57\"\u003e\n \u003cdiv class=\"pdf-loading\" id=\"pdf-loading-782bae57\"\u003e\u003c/div\u003e\n \u003c/div\u003e\n \u003cdiv id=\"overlayText\"\u003e\n \u003ca href=\"Hacking_analytics_with_Postgres.pdf\" aria-label=\"Download\" download\u003e\n \u003csvg aria-hidden=\"true\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 18 18\"\u003e\n \u003cpath d=\"M9 13c.3 0 .5-.1.7-.3L15.4 7 14 5.6l-4 4V1H8v8.6l-4-4L2.6 7l5.7 5.7c.2.2.4.3.7.3zm-7 2h14v2H2z\" /\u003e\n \u003c/svg\u003e\n \u003c/a\u003e\n \u003c/div\u003e\n \u003ccanvas class=\"pdf-canvas\" id=\"pdf-canvas-782bae57\"\u003e\u003c/canvas\u003e\n\u003c/div\u003e\n\n\u003cdiv class=\"pdf-paginator\" id=\"pdf-paginator-782bae57\"\u003e\n \u003cbutton id=\"pdf-prev-782bae57\"\u003ePrevious\u003c/button\u003e\n \u003cbutton id=\"pdf-next-782bae57\"\u003eNext\u003c/button\u003e \u0026nbsp; \u0026nbsp;\n \u003cspan\u003e\n \u003cspan class=\"pdf-pagenum\" id=\"pdf-pagenum-782bae57\"\u003e\u003c/span\u003e / \u003cspan class=\"pdf-pagecount\" id=\"pdf-pagecount-782bae57\"\u003e\u003c/span\u003e\n \u003c/span\u003e\n \u003ca class=\"pdf-source\" id=\"pdf-source-782bae57\" href=\"Hacking_analytics_with_Postgres.pdf\"\u003e[pdf]\u003c/a\u003e\n\u003c/div\u003e\n\n\u003cnoscript\u003e\nView the PDF file \u003ca class=\"pdf-source\" id=\"pdf-source-noscript-782bae57\" href=\"Hacking_analytics_with_Postgres.pdf\"\u003ehere\u003c/a\u003e.\n\u003c/noscript\u003e\n\n\u003cscript type=\"text/javascript\"\u003e\n (function(){\n var url = 'Hacking_analytics_with_Postgres.pdf';\n\n var hidePaginator = \"\" === \"true\";\n var hideLoader = \"\" === \"true\";\n var selectedPageNum = parseInt(\"\") || 1;\n\n \n var pdfjsLib = window['pdfjs-dist/build/pdf'];\n\n \n if (pdfjsLib.GlobalWorkerOptions.workerSrc == '')\n pdfjsLib.GlobalWorkerOptions.workerSrc = \"https:\\/\\/tmarice.dev\\/\" + 
'js/pdf-js/build/pdf.worker.js';\n\n \n var pdfDoc = null,\n pageNum = selectedPageNum,\n pageRendering = false,\n pageNumPending = null,\n scale = 3,\n canvas = document.getElementById('pdf-canvas-782bae57'),\n ctx = canvas.getContext('2d'),\n paginator = document.getElementById(\"pdf-paginator-782bae57\"),\n loadingWrapper = document.getElementById('pdf-loadingWrapper-782bae57');\n\n\n \n showPaginator();\n showLoader();\n\n \n\n function renderPage(num) {\n pageRendering = true;\n \n pdfDoc.getPage(num).then(function(page) {\n var viewport = page.getViewport({scale: scale});\n canvas.height = viewport.height;\n canvas.width = viewport.width;\n\n \n var renderContext = {\n canvasContext: ctx,\n viewport: viewport\n };\n var renderTask = page.render(renderContext);\n\n \n renderTask.promise.then(function() {\n pageRendering = false;\n showContent();\n\n if (pageNumPending !== null) {\n \n renderPage(pageNumPending);\n pageNumPending = null;\n }\n });\n });\n\n \n document.getElementById('pdf-pagenum-782bae57').textContent = num;\n }\n\n \n\n function showContent() {\n loadingWrapper.style.display = 'none';\n canvas.style.display = 'block';\n }\n\n \n\n function showLoader() {\n if(hideLoader) return\n loadingWrapper.style.display = 'flex';\n canvas.style.display = 'none';\n }\n\n \n\n function showPaginator() {\n if(hidePaginator) return\n paginator.style.display = 'block';\n }\n\n \n\n function queueRenderPage(num) {\n if (pageRendering) {\n pageNumPending = num;\n } else {\n renderPage(num);\n }\n }\n\n \n\n function onPrevPage() {\n if (pageNum \u003c= 1) {\n return;\n }\n pageNum--;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-prev-782bae57').addEventListener('click', onPrevPage);\n\n \n\n function onNextPage() {\n if (pageNum \u003e= pdfDoc.numPages) {\n return;\n }\n pageNum++;\n queueRenderPage(pageNum);\n }\n document.getElementById('pdf-next-782bae57').addEventListener('click', onNextPage);\n\n \n\n 
pdfjsLib.getDocument(url).promise.then(function(pdfDoc_) {\n pdfDoc = pdfDoc_;\n var numPages = pdfDoc.numPages;\n document.getElementById('pdf-pagecount-782bae57').textContent = numPages;\n\n \n if(pageNum \u003e numPages) {\n pageNum = numPages\n }\n\n \n renderPage(pageNum);\n });\n })();\n\u003c/script\u003e","title":"Hacking Analytics With Postgres"},{"content":"Here are some of the projects I\u0026rsquo;ve worked on:\nDevenv.sh Raycast Extension Browse devenv.sh docs from the comfort of your Raycast command line.\nFast Masked Mail Creator An unofficial Chrome extension for the easy creation of new, single-purpose Fastmail masked emails.\ndjango-timed-tests Django test runner that pinpoints the slowest tests with precise timing reports, enabling faster, more efficient test suites.\n","permalink":"https://tmarice.dev/projects/","summary":"\u003cp\u003eHere are some of the projects I\u0026rsquo;ve worked on:\u003c/p\u003e\n\u003chr\u003e\n\u003ch1 id=\"devenvsh-raycast-extension\"\u003e\u003ca href=\"https://www.raycast.com/tmarice/devenv-docs\"\u003eDevenv.sh Raycast Extension\u003c/a\u003e\u003c/h1\u003e\n\u003cp\u003eBrowse devenv.sh docs from the comfort of your Raycast command line.\u003c/p\u003e\n\u003chr\u003e\n\u003ch1 id=\"fast-masked-mail-creator\"\u003e\u003ca href=\"https://github.com/tmarice/fast_masked_mail_creator\"\u003eFast Masked Mail Creator\u003c/a\u003e\u003c/h1\u003e\n\u003cp\u003eAn unofficial Chrome extension for the easy creation of new, single-purpose Fastmail masked emails.\u003c/p\u003e\n\u003chr\u003e\n\u003ch1 id=\"django-timed-tests\"\u003e\u003ca href=\"https://github.com/tmarice/django-timed-tests\"\u003edjango-timed-tests\u003c/a\u003e\u003c/h1\u003e\n\u003cp\u003eDjango test runner that pinpoints the slowest tests with precise timing reports, enabling faster, more efficient test suites.\u003c/p\u003e\n\u003chr\u003e","title":"Projects"}]