{"version":"https://jsonfeed.org/version/1","title":"chiso's Blog","home_page_url":"https://chiso.dev/","feed_url":"https://chiso.dev/feed.json","author":{"name":"chiso","url":"https://chiso.dev"},"items":[{"id":"aoc-1.mdx","title":"Advent of Code Day 1 - Historian Hysteria","url":"https://chiso.dev/posts/aoc-1","tags":["programming","advent of code","Go","algorithms","problem solving"],"summary":"A walkthrough of solving the Historian Hysteria puzzle from Advent of Code 2024, using Go with a focus on making code accessible for beginners.","content_text":"\n### Introduction\n\nToday, I tackled Day 1 of Advent of Code 2024, titled \"Historian Hysteria.\" The puzzle involves comparing two lists of numbers. Let's look at how we can solve this using Go, breaking down each part to make it easy to understand.\n\n### The AOC Library\n\nFirst, let's look at the helper functions I created to make working with input files easier. These functions handle common tasks that we'll need throughout the Advent of Code challenges.\n\n#### Reading Files\n\n```go\nfunc ReadFileLineByLine(path string) []string {\n    file, err := os.Open(path)\n    if err != nil {\n        log.Fatal(err)\n    }\n    defer file.Close()\n\n    var output []string\n    scanner := bufio.NewScanner(file)\n    for scanner.Scan() {\n        output = append(output, scanner.Text())\n    }\n    // Surface any read error the scanner swallowed during the loop\n    if err := scanner.Err(); err != nil {\n        log.Fatal(err)\n    }\n    return output\n}\n```\n\nThis function does three simple things:\n1. Opens a file at the given path\n2. Reads it line by line\n3. 
Returns all lines as a slice of strings\n\n#### Getting Numbers from Text\n\n```go\nfunc FetchSliceOfIntsInString(line string) []int {\n    nums := []int{}\n    var build strings.Builder\n    isNegative := false\n    \n    for _, char := range line {\n        // If we find a digit, add it to our number\n        if unicode.IsDigit(char) {\n            build.WriteRune(char)\n        } else if char == '-' {\n            // Remember the sign for the number we're building\n            isNegative = true\n        } else if (char == ' ' || char == ',' || char == '~') && build.Len() != 0 {\n            // When we hit a separator, convert what we've built into a number\n            localNum, err := strconv.ParseInt(build.String(), 10, 64)\n            if err != nil {\n                panic(err)\n            }\n            if isNegative {\n                localNum *= -1\n            }\n            nums = append(nums, int(localNum))\n            build.Reset()\n            isNegative = false\n        }\n    }\n    // Handle the last number if there is one\n    if build.Len() != 0 {\n        localNum, err := strconv.ParseInt(build.String(), 10, 64)\n        if err != nil {\n            panic(err)\n        }\n        if isNegative {\n            localNum *= -1\n        }\n        nums = append(nums, int(localNum))\n    }\n    return nums\n}\n```\n\nThis function:\n1. Takes a string that contains numbers (like \"123 456 -789\")\n2. Finds all the numbers in that string\n3. Returns them as a slice of integers\n4. Handles negative numbers and different separators (spaces, commas, and tildes)\n\n#### Working with Grids\n\n```go\nfunc Get2DGrid(input []string) (grid [][]string) {\n    for _, line := range input {\n        grid = append(grid, strings.Split(line, \"\"))\n    }\n    return\n}\n```\n\nThis function converts a list of strings into a grid (2D array). 
For example, it would turn:\n```\nabc\ndef\n```\ninto:\n```\n[[\"a\", \"b\", \"c\"], [\"d\", \"e\", \"f\"]]\n```\n\n#### Breaking Up Strings\n\n```go\nfunc SplitStringAfter(input string, length int) (output []string) {\n    startIndex := 0\n    for startIndex < len(input) {\n        endIndex := startIndex + length\n        // Don't slice past the end when the string doesn't divide evenly\n        if endIndex > len(input) {\n            endIndex = len(input)\n        }\n        output = append(output, input[startIndex:endIndex])\n        startIndex = endIndex\n    }\n    return\n}\n```\n\nThis function breaks a long string into smaller pieces of a specified length. If the string's length isn't a multiple of `length`, the final piece is simply shorter instead of causing an out-of-range panic.\n\n### Using the Library\n\nHere's how we use these functions in our Day 1 solution:\n\n1. First, we read our input file:\n```go\ninput := aoc.ReadFileLineByLine(\"input.txt\")\n```\n\n2. Then we get the numbers from each line:\n```go\nintArrays := getNumArrayFromColumns(input)\n```\n\n3. This gives us two lists of numbers that we can work with to solve the puzzle.\n\n### Conclusion\n\nThese helper functions make it easier to focus on solving the actual puzzle instead of worrying about how to read files or convert strings to numbers. They're designed to be reusable across different Advent of Code challenges.\n\nYou can find the complete code in my [GitHub repository](https://github.com/raeeceip). If you're learning Go, feel free to use these functions in your own solutions!\n\n> Tip: When solving programming puzzles, it's helpful to create reusable functions for common tasks. 
This lets you focus on the interesting parts of each new challenge.\n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AAdvent+of+Code+Day+1+-+Historian+Hysteria%2Cdate%3ANov+30+2024","date_published":"2024-12-01T00:00:00.000Z"},{"id":"breaking-rails.mdx","title":"Breaking Rails: Adventures in Route Handling","url":"https://chiso.dev/posts/breaking-rails","tags":["rails","astro","routing","architecture","web development"],"summary":"A deep dive into breaking Rails conventions with Astro integration and the unexpected challenges of request routing","content_text":"\n# Breaking Rails: Adventures in Route Handling\n\nWhen I set out to replace Rails' view layer with Astro, I didn't expect to end up rewriting the entire routing system. But that's exactly what happened. Here's the story of how a simple view layer replacement turned into an adventure in request routing and middleware manipulation.\n\n## The Initial Break: The Astro Concern\n\nIt started innocently enough. 
I created an Astro concern to handle the transformation of Rails' view handling:\n\n```ruby\nmodule Astro\n  extend ActiveSupport::Concern\n\n  included do\n    around_action :handle_astro_response\n\n    rescue_from(ActionController::MissingExactTemplate) do |_exception|\n      action = params[:action]\n      controller = controller_name\n      \n      props = instance_variables.select { |v| \n        !v.to_s.start_with?('@_') && \n        v.to_s != '@rendered_format' && \n        v.to_s != '@marked_for_same_origin_verification' \n      }\n      \n      props_hash = props.map { |v| [v.to_s[1..-1], instance_variable_get(v)] }.to_h\n      \n      response.headers['X-Astro-View'] = \"#{controller}/#{action}\"\n      \n      render json: props_hash, content_type: 'application/json'\n    end\n\n    before_action do\n      request.format = :json if request.xhr?\n    end\n  end\n\n  private\n\n  def handle_astro_response\n    request.format = :json if request.headers['X-Requested-With'] == 'XMLHttpRequest'\n    yield\n\n    if response.body.blank? && request.format.json?\n      props = instance_variables.select { |v| \n        !v.to_s.start_with?('@_') && \n        v.to_s != '@rendered_format' && \n        v.to_s != '@marked_for_same_origin_verification' \n      }\n      \n      props_hash = props.map { |v| [v.to_s[1..-1], instance_variable_get(v)] }.to_h\n      response.headers['X-Astro-View'] = \"#{controller_name}/#{action_name}\"\n      render json: props_hash\n    end\n  end\nend\n```\n\nThis concern did something interesting: it intercepted Rails' normal view rendering and instead returned JSON with a special header indicating which Astro view should render the response. But this was just the beginning of our routing adventure.\n\n## The Routing Challenge\n\nThe real complexity emerged when we needed to handle routing between Rails and Astro. 
We couldn't just use Rails' routing system anymore - we needed something that could coordinate between two different servers. Enter the custom Astro adapter:\n\n```typescript\nimport { defineConfig } from \"astro/config\";\nimport adapter from \"./adapter/index.mjs\";\nimport { experimental_AstroContainer } from \"astro/container\";\n\nexport default defineConfig({\n  output: \"server\",\n  adapter: adapter(),\n  srcDir: \"./app/views\",\n  integrations: [\n    {\n      name: \"aor:dev\",\n      hooks: {\n        async \"astro:server:setup\"({ server }) {\n          const container = await experimental_AstroContainer.create();\n\n          server.middlewares.use(async function middleware(\n            incomingMessage,\n            res,\n            next\n          ) {\n            const request = toRequest(incomingMessage);\n            if (!request.url) return next();\n\n            const { searchParams } = new URL(request.url);\n            const stringifiedProps = searchParams.get(\"props\");\n            const view = searchParams.get(\"view\");\n            \n            if (!view) {\n              return writeResponse(new Response(null, { status: 400 }), res);\n            }\n\n            let props = { message: \"Placeholder\" };\n            if (stringifiedProps) {\n              props = JSON.parse(stringifiedProps);\n            }\n\n            try {\n              const page = await server.ssrLoadModule(\n                `./app/views/${view}.astro`\n              );\n              const response = await container.renderToResponse(page.default, {\n                request,\n                props,\n              });\n              writeResponse(response, res);\n            } catch (e) {\n              const message = e instanceof Error ? 
e.message : `${e}`;\n              writeResponse(new Response(message, { status: 400 }), res);\n            }\n          });\n        },\n      },\n    },\n  ],\n});\n```\n\n## The Plot Thickens: Request Transformation\n\nOne of the trickiest parts was handling the transformation of requests between Node.js and Rails formats. We needed to carefully preserve headers, handle body content, and manage different types of requests:\n\n```typescript\nfunction toRequest(req: NodeRequest) {\n  const protocol = req.headers[\"x-forwarded-proto\"] ?? \n    (\"encrypted\" in req.socket && req.socket.encrypted ? \"https\" : \"http\");\n  const hostname = req.headers[\"x-forwarded-host\"] ?? \n    req.headers.host ?? \n    req.headers[\":authority\"];\n  const port = req.headers[\"x-forwarded-port\"];\n\n  const portInHostname = typeof hostname === \"string\" && \n    typeof port === \"string\" && \n    hostname.endsWith(port);\n  const hostnamePort = portInHostname ? \n    hostname : \n    hostname + (port ? `:${port}` : \"\");\n\n  const url = `${protocol}://${hostnamePort}${req.url}`;\n  const options: RequestInit = {\n    method: req.method || \"GET\",\n    headers: makeRequestHeaders(req),\n  };\n\n  if (options.method !== \"HEAD\" && options.method !== \"GET\") {\n    Object.assign(options, makeRequestBody(req));\n  }\n\n  const request = new Request(url, options);\n  \n  // Handle client IP address\n  const clientIp = req.headers[\"x-forwarded-for\"];\n  if (clientIp) {\n    Reflect.set(request, clientAddressSymbol, clientIp);\n  } else if (req.socket?.remoteAddress) {\n    Reflect.set(request, clientAddressSymbol, req.socket.remoteAddress);\n  }\n  \n  return request;\n}\n```\n\n## Lessons Learned\n\nThis adventure in breaking Rails taught me several important lessons:\n\n1. **Everything is Connected**: In Rails, the routing system is intimately connected to the view layer. You can't just replace one without affecting the other.\n\n2. 
**Middleware Matters**: Much of the complexity in our solution came from properly handling middleware and request transformation. The devil is in the details of headers, body content, and request formats.\n\n3. **Error Handling is Critical**: When you're dealing with two different servers, error handling becomes twice as important. You need to catch and properly handle errors at multiple levels.\n\n4. **Performance Considerations**: While our solution worked, it introduced additional network hops between Astro and Rails. This meant we needed to be extra careful about performance optimization.\n\n## The Unexpected Benefits\n\nDespite the challenges, this architectural change brought some surprising benefits:\n\n1. **Better Separation of Concerns**: Our frontend and backend became truly separate, making it easier to reason about each part of the system.\n\n2. **Improved Development Experience**: Once set up, developers could work on frontend and backend components independently.\n\n3. **Enhanced Type Safety**: With explicit JSON interfaces between Rails and Astro, we got better type checking and clearer contracts between components.\n\n## Looking Forward\n\nThis experiment showed that while Rails' conventions are powerful, there's value in thoughtfully breaking them when needed. The key is understanding the implications and being prepared to handle the cascading effects of such changes.\n\nFor those considering a similar architecture, here are some tips:\n\n1. Plan your routing strategy carefully\n2. Consider the performance implications of cross-server communication\n3. Invest time in proper error handling\n4. 
Document your conventions thoroughly\n\nBreaking Rails isn't always bad - sometimes it leads to better architectures than we initially imagined.\n\n\n\n  \n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3ABreaking+Rails%3A+Adventures+in+Route+Handling%2Cdate%3ASep+12+2024","date_published":"2024-09-13T00:00:00.000Z"},{"id":"deploying-managing-portfolio.mdx","title":"Deploying & Managing My Portfolio Site","url":"https://chiso.dev/posts/deploying-managing-portfolio","tags":["deployment","cloudflare","astro","animations","state management"],"summary":"A deep dive into how I deployed my portfolio site using Cloudflare, and the various states I had to manage for that notebook-style experience.","content_text":"\nWhen I set out to create my 2025 portfolio site, I had a clear vision: a nostalgic notebook-style design with interactive elements that would showcase my work in a unique way. But as any developer knows, the journey from concept to deployed site involves navigating a maze of technical decisions, performance considerations, and state management challenges.\n\n![Deploying My Portfolio](/images/notebook-deploy.png)\n\n> This article dives into the nitty-gritty details of how I deployed and managed my portfolio site, including the technical decisions, challenges, and solutions I encountered along the way.\n\n# The Evolution of My Portfolio\n\nMy portfolio site has gone through several major iterations, each representing my growth as a developer and my pursuit of better performance and user experience.\n\n## From React to Astro: The Performance Journey\n\nInitially, this project was built with React. It featured fancy loaders, elaborate rendering techniques, and a preloader designed to showcase what I knew best at the time: web development. 
The site worked well, but as I delved deeper into other aspects of software development, I began to crave better performance.\n\nThat's when I decided to try my hand at Astro for static site generation. The switch paid immediate dividends:\n\n- Blog posts rendered beautifully with minimal JavaScript\n- Static rendering dramatically improved initial load times\n- The distribution bundle remained small despite growing content\n- Components could still use React when needed for interactivity\n\nAstro's island architecture was the perfect middle ground - allowing me to maintain the interactive elements I loved while drastically reducing the JavaScript payload.\n\n## The Need for Global Scale\n\nAs my site gained more visitors from around the world, even Astro's performance optimizations weren't enough. My initial containerized solution (which was already deployed to a cloud provider) began to show its limitations:\n\n- Response times varied wildly depending on visitor location\n- Image optimization was manual and cumbersome\n- I had no good way to implement global caching\n- Deployment was more complex than it needed to be\n\nThat's when I turned to Cloudflare for a more comprehensive solution.\n\n# Choosing the Right Deployment Platform\n\nAfter experimenting with various deployment options, I settled on Cloudflare Pages for several compelling reasons:\n\n## The Cloudflare for Startups Advantage\n\nBeing part of the Cloudflare for Startups program gave me access to their premium tier, which became a game-changer for my deployment strategy. 
The benefits were immediate and substantial:\n\n- **Global CDN**: My site assets are cached at edge locations worldwide, resulting in dramatically reduced load times regardless of visitor location\n- **Zero cold starts**: Unlike serverless functions on some platforms that sleep after inactivity, Cloudflare Workers stay warm\n- **Custom domain with automatic SSL**: Setting up my custom domain with HTTPS took literally minutes\n- **Analytics insights**: Detailed metrics on visitors, performance, and potential issues\n- **KV object storage**: For storing and retrieving cached data globally\n- **Durable Objects**: Which I experimented with for maintaining state across edge locations (though I later simplified my approach)\n\nThe Cloudflare ecosystem gave me access to a suite of integrated tools that worked together seamlessly. After some experimentation, I found the right mix of services for my needs.\n\n## Setting Up the Build Pipeline\n\nMy `.cloudflare/pages.toml` and `.cloudflare/build.json` files define the build process:\n\n```toml\n[build]\ncommand = \"npm run build\"\noutput_directory = \"dist\"\nenvironment_variables = { NODE_VERSION = \"22.9.0\", NPM_VERSION = \"10.8.3\" }\n\n[build.environment]\nNODE_VERSION = \"22.9.0\"\nNPM_FLAGS = \"--no-package-lock\"\nUSE_NPM = \"true\"\n```\n\nCloudflare Pages automatically detects when I push changes to my repository, triggering a new build. The platform's build system handles all dependencies and optimization without requiring me to manage any infrastructure.\n\n# Leveraging Cloudflare's Ecosystem\n\nThe real power of Cloudflare came from how I could combine different services to create a cohesive deployment strategy:\n\n## Cloudflare Pages\n\nThe cornerstone of my deployment strategy, Pages handles the build process, hosting, and delivery of my static assets. 
The automatic preview deployments for each push have been invaluable for testing changes before they go live.\n\n## Cloudflare KV\n\nI use Cloudflare KV (Key-Value) storage for several purposes:\n\n```javascript\n// Example of how I use KV to cache API responses\nexport async function onRequest({ request, env }) {\n  const url = new URL(request.url);\n  const cacheKey = `api:${url.pathname}${url.search}`;\n  \n  // Try to get from cache first\n  const cached = await env.MY_KV.get(cacheKey, { type: \"json\" });\n  if (cached) {\n    return new Response(JSON.stringify(cached), {\n      headers: { \"Content-Type\": \"application/json\" }\n    });\n  }\n  \n  // If not in cache, fetch from origin\n  const response = await fetch(\"https://my-api.example.com\" + url.pathname + url.search);\n  const data = await response.json();\n  \n  // Store in KV with expiration\n  await env.MY_KV.put(cacheKey, JSON.stringify(data), { expirationTtl: 3600 });\n  \n  return new Response(JSON.stringify(data), {\n    headers: { \"Content-Type\": \"application/json\" }\n  });\n}\n```\n\nThis approach allows me to reduce the load on external APIs and drastically improve response times for repeat requests.\n\n## Image Optimization\n\nCloudflare's automatic image optimization has been a revelation:\n\n```html\n<!-- Before: Manually optimized images in different formats and sizes -->\n<picture>\n  <source srcset=\"/images/project-sm.webp\" media=\"(max-width: 640px)\" type=\"image/webp\">\n  <source srcset=\"/images/project-md.webp\" media=\"(max-width: 1024px)\" type=\"image/webp\">\n  <source srcset=\"/images/project.webp\" type=\"image/webp\">\n  <img src=\"/images/project.jpg\" alt=\"Project Screenshot\">\n</picture>\n\n<!-- After: Let Cloudflare handle the optimization -->\n<img \n  src=\"/images/project.jpg\" \n  alt=\"Project Screenshot\" \n  width=\"800\" \n  height=\"450\"\n  loading=\"lazy\"\n  style=\"max-width: 100%; height: auto;\"\n/>\n```\n\nThis simplification reduced my 
codebase while improving performance - a win-win.\n\n## Analytics and Monitoring\n\nCloudflare's built-in analytics have become an essential tool for monitoring my site's performance:\n\n- **Real User Metrics**: Showing me exactly how fast my pages load for actual visitors\n- **Bot detection**: Filtering out non-human traffic for more accurate analytics\n- **Error tracking**: Identifying issues before users report them\n- **Geographic insights**: Understanding where my visitors come from\n\nThe data has helped me make informed decisions about further optimizations and content focus.\n\n# Managing Different States\n\nThe notebook-style design I envisioned required careful state management across different user experiences. Let's break down the key states I had to handle:\n\n## First Visit vs. Return Visit Animation States\n\nOne of the most challenging aspects was creating different experiences for first-time and returning visitors:\n\n```javascript\nconst isFirstVisit = () => {\n  if (typeof localStorage !== 'undefined') {\n    const visited = localStorage.getItem('visited');\n    if (!visited) {\n      localStorage.setItem('visited', 'true');\n      return true;\n    }\n  }\n  return false;\n};\n\n// In component:\nconst firstVisit = isFirstVisit();\n\n// Conditionally apply animations\nreturn (\n  <CoverPage \n    className={firstVisit ? 'dramatic-entrance' : 'quick-entrance'} \n    animationDuration={firstVisit ? '2.5s' : '0.8s'}\n  >\n    <Ribbon>2025 Edition</Ribbon>\n    {/* Other cover elements */}\n  </CoverPage>\n);\n```\n\nFor first-time visitors, I wanted that dramatic 2.5-second animation to create a memorable introduction. 
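One caveat with this approach: `localStorage` access can throw in some private-browsing modes, so a more defensive variant is worth considering. A hypothetical sketch (the injected `storage` parameter is for testability and is not the post's actual code):

```javascript
// Hypothetical defensive first-visit check; `storage` is injected for testability
function isFirstVisit(storage) {
  try {
    if (storage.getItem('visited')) return false;
    storage.setItem('visited', 'true');
    return true;
  } catch (e) {
    // Storage unavailable (e.g. private browsing): treat every visit as the first
    return true;
  }
}
```

In the component this would be called as `isFirstVisit(window.localStorage)`, so a blocked storage simply falls back to the dramatic entrance.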
For returning visitors, a quicker animation prevents frustration.\n\n## Light and Dark Mode States\n\nThe notebook theme required completely different styling approaches for light and dark modes:\n\n```css\n.feature-card {\n  position: relative;\n  border: 1px solid theme(\"colors.secondary.300\");\n  border-radius: 0 0.5rem 0.5rem 0.5rem;\n  overflow: visible;\n  transition: transform 0.3s ease, box-shadow 0.3s ease;\n  background-color: rgba(255, 255, 255, 0.9);\n  box-shadow: 3px 3px 6px rgba(0, 0, 0, 0.1);\n  background-image: \n    repeating-linear-gradient(\n      theme(\"colors.secondary.100\") 0px,\n      theme(\"colors.secondary.100\") 1px,\n      transparent 1px,\n      transparent 26px\n    );\n  background-size: 100% 26px;\n  padding-top: 26px;\n  color: #3A4D30;\n}\n  \n.dark .feature-card {\n  border-color: theme(\"colors.secondary.700\");\n  background-color: #4a5568; /* Lighter slate gray instead of dark green */\n  background-image: \n    repeating-linear-gradient(\n      rgba(255, 255, 255, 0.08) 0px,\n      rgba(255, 255, 255, 0.08) 1px,\n      transparent 1px,\n      transparent 26px\n    );\n  color: rgba(255, 255, 255, 0.9);\n  box-shadow: 3px 3px 6px rgba(0, 0, 0, 0.2);\n}\n\n/* Add responsive styles for mobile and tablet */\n@media (max-width: 768px) {\n  .feature-card {\n    flex-direction: column;\n    align-items: center;\n  }\n}\n\n@media (max-width: 480px) {\n  .feature-card {\n    padding: 20px;\n  }\n}\n```\n\nThe dark mode implementation required careful consideration of contrast and readability while maintaining the notebook aesthetic. 
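The `.dark` class that drives these styles is itself a piece of state to manage. A minimal sketch of how the choice can be resolved (the `theme` storage key and the `resolveTheme` helper are illustrative assumptions, not my exact implementation):

```javascript
// A stored preference wins; otherwise fall back to the OS-level setting
function resolveTheme(stored, prefersDark) {
  if (stored === 'dark' || stored === 'light') return stored;
  return prefersDark ? 'dark' : 'light';
}

// In the browser, something along these lines applies the class:
// const stored = localStorage.getItem('theme');
// const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
// document.documentElement.classList.toggle('dark', resolveTheme(stored, prefersDark) === 'dark');
```

Keeping the resolution logic in a pure function makes it easy to test without a browser.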
I chose a lighter slate gray (#4a5568) background instead of a darker green to keep the content readable while preserving the notebook feel.\n\n## Interactive States for Draggable Cards\n\nCreating draggable feature cards with realistic physics required managing multiple interaction states:\n\n```javascript\nconst FeatureCard = ({ children, alt = false }) => {\n  const [isDragging, setIsDragging] = useState(false);\n  const [position, setPosition] = useState({ x: 0, y: 0 });\n  const cardRef = useRef(null);\n  \n  const handleDragStart = (e) => {\n    setIsDragging(true);\n    // Capture initial position\n  };\n  \n  const handleDrag = (e) => {\n    if (!isDragging) return;\n    // Update position with physics constraints\n    setPosition({\n      x: Math.min(Math.max(position.x + e.movementX, -20), 20),\n      y: Math.min(Math.max(position.y + e.movementY, -20), 20)\n    });\n  };\n  \n  const handleDragEnd = () => {\n    if (!isDragging) return;\n    // Spring back to rest; keep the dragging state until the tween settles\n    gsap.to(position, {\n      x: 0,\n      y: 0,\n      duration: 0.5,\n      ease: \"elastic.out(1, 0.3)\",\n      onUpdate: () => setPosition({ x: position.x, y: position.y }),\n      onComplete: () => setIsDragging(false)\n    });\n  };\n  \n  return (\n    <div \n      ref={cardRef}\n      className={`feature-card ${alt ? 'alt' : ''} ${isDragging ? 
'dragging' : ''}`}\n      style={{\n        transform: `translateX(${position.x}px) translateY(${position.y}px) rotate(${position.x * 0.05}deg)`\n      }}\n      onMouseDown={handleDragStart}\n      onMouseMove={handleDrag}\n      onMouseUp={handleDragEnd}\n      onMouseLeave={handleDragEnd}\n    >\n      {children}\n    </div>\n  );\n};\n```\n\nI implemented subtle physics-based animations to make the cards feel like real papers being moved around, with constraints to prevent them from being dragged too far and a satisfying elastic bounce when released.\n\n# Handling Page Transitions\n\nThe \"rip-through\" animation between pages was one of the most complex states to manage, requiring coordination between exit and entrance animations:\n\n```javascript\nconst PageTransition = ({ children }) => {\n  const [transitioning, setTransitioning] = useState(false);\n  const [nextPage, setNextPage] = useState(null);\n  \n  const handlePageTransition = (targetUrl) => {\n    setTransitioning(true);\n    setNextPage(targetUrl);\n    \n    // Play rip animation\n    const tl = gsap.timeline();\n    tl.to(\".page-content\", {\n      duration: 0.4,\n      y: \"-100%\", \n      ease: \"power4.in\",\n      clipPath: \"polygon(0% 0%, 100% 8%, 100% 100%, 0% 92%)\"\n    });\n    \n    tl.add(() => {\n      window.location.href = targetUrl;\n    });\n  };\n  \n  useEffect(() => {\n    // Register navigation handler\n    document.querySelectorAll('a[data-internal]').forEach(link => {\n      link.addEventListener('click', (e) => {\n        e.preventDefault();\n        handlePageTransition(e.currentTarget.href);\n      });\n    });\n    \n    // Handle entrance animation\n    const entranceTl = gsap.timeline();\n    entranceTl.from(\".page-content\", {\n      duration: 0.5, \n      y: \"100%\", \n      ease: \"power4.out\",\n      clipPath: \"polygon(0% 8%, 100% 0%, 100% 92%, 0% 100%)\"\n    });\n  }, []);\n  \n  return (\n    <div className=\"page-wrapper\">\n      <div 
className=\"page-content\">\n        {children}\n      </div>\n      {transitioning && (\n        <div className=\"next-page-preview\" aria-hidden=\"true\">\n          <div className=\"loading-indicator\">Loading {nextPage}...</div>\n        </div>\n      )}\n    </div>\n  );\n};\n```\n\nThese transitions needed to work seamlessly in both light and dark modes, requiring different visual treatments for each.\n\n# Performance Optimization with Cloudflare\n\nCloudflare's performance features helped me optimize the site in several key ways:\n\n## Image Optimization\n\nI leveraged Cloudflare's automatic image optimization to serve appropriately sized images without maintaining multiple versions:\n\n```html\n<img \n  src=\"/images/project-screenshot.png\" \n  alt=\"Project Screenshot\" \n  width=\"800\" \n  height=\"450\"\n  loading=\"lazy\"\n  fetchpriority=\"high\"\n  decoding=\"async\"\n  style=\"max-width: 100%; height: auto;\"\n/>\n```\n\nCloudflare automatically converts images to modern formats like WebP and AVIF when browsers support them, optimizes compression, and delivers from the nearest edge location.\n\n## Caching Strategies\n\nCloudflare's caching required careful consideration of which resources should be cached and for how long:\n\n```toml\n[cache]\n# Cache static assets for 1 year\n/assets/*\n  Cache-Control = \"public, max-age=31536000, immutable\"\n\n# Cache page HTML for 1 hour\n/*.html\n  Cache-Control = \"public, max-age=3600\"\n```\n\nFor dynamically updated content, I implemented cache purging through Cloudflare's API whenever I deploy new content.\n\n# Monitoring and Analytics\n\nKeeping track of site performance and potential issues is crucial:\n\n## Real User Monitoring\n\nI implemented Cloudflare's Web Analytics to capture real user metrics without affecting privacy:\n\n```html\n<!-- No additional scripts needed as Cloudflare injects this automatically -->\n```\n\nThis provides insights into Core Web Vitals, page load times, and geographical 
distribution of visitors without requiring cookies or tracking scripts.\n\n## Error Tracking\n\nFor error tracking, I set up a lightweight custom solution that reports to my own endpoint:\n\n```javascript\nwindow.addEventListener('error', (event) => {\n  if (import.meta.env.PROD) {\n    fetch('/api/error-log', {\n      method: 'POST',\n      headers: { 'Content-Type': 'application/json' },\n      body: JSON.stringify({\n        message: event.message,\n        source: event.filename,\n        line: event.lineno,\n        column: event.colno,\n        stack: event.error?.stack,\n        userAgent: navigator.userAgent,\n        timestamp: new Date().toISOString()\n      })\n    }).catch(() => {\n      // Fail silently if error reporting fails\n    });\n  }\n  \n  // Don't prevent the default error handling\n  return false;\n});\n```\n\n# Lessons Learned\n\nBuilding and deploying this portfolio site taught me several valuable lessons:\n\n1. **Start with deployment in mind**: Considering how the site would be deployed from the beginning helped avoid painful migrations later.\n\n2. **State management isn't just for complex apps**: Even a seemingly simple portfolio site required careful state management for transitions, animations, and user preferences.\n\n3. **Test across devices early and often**: What worked perfectly on my development machine sometimes behaved differently once deployed, especially animations and layout.\n\n4. **Performance optimization is ongoing**: There's always another millisecond to shave off load times or another animation to make smoother.\n\n5. **Document your decisions**: The choices I made about state management and deployment configuration would have been difficult to remember without documentation.\n\n# Conclusion\n\nDeploying my portfolio using Cloudflare and managing the various states needed for that notebook-style experience was a challenging but rewarding process. 
The combination of Astro's performance, Cloudflare's global infrastructure, and careful state management resulted in a site that not only looks the way I envisioned but also performs exceptionally well.\n\nIf you're considering a similar approach for your own portfolio, I hope these insights into my deployment strategy and state management solutions prove helpful. And if you have questions about any aspect of how I built or deployed this site, feel free to reach out!\n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3ADeploying+%26+Managing+My+Portfolio+Site%2Cdate%3AApr+13+2025","date_published":"2025-04-14T00:00:00.000Z"},{"id":"fantastic-builds.mdx","title":"Fantastic Builds!","url":"https://chiso.dev/posts/fantastic-builds","tags":["builds","programming","devops"],"summary":"And where to find them...","content_text":"\n# Fantastic Builds and Where to Find Them\n\nAs a developer diving into the world of software development, one term you'll encounter frequently is \"build.\" But what exactly is a build, why is it important, and how do they work? Let's embark on a journey to demystify builds and explore their significance in the software development lifecycle.\n\n## What is a Build?\n\nA build is essentially a compiled version of an application, typically packaged into one or more files. These files are the result of transforming your source code into a format that's ready for deployment or distribution. Here's what a build usually accomplishes:\n\n1. **Minification and Compilation**: The build process takes your original source code and transforms it into a more compact and efficient form.\n\n2. **Dependency Resolution**: It gathers all necessary dependencies and bundles them with your application.\n\n3. **Asset Processing**: Images, stylesheets, and other assets are often optimized and included in the build.\n\n4. 
**Environment-Specific Configuration**: Builds can incorporate environment-specific settings, making your app ready for different deployment scenarios.\n\nLet's look at a simple example to illustrate the difference between a source code structure and a built application:\n\nOriginal file structure:\n```\nmy-app/\n├── src/\n│   ├── components/\n│   │   ├── Header.js\n│   │   └── Footer.js\n│   ├── pages/\n│   │   └── Home.js\n│   └── index.js\n├── public/\n│   └── index.html\n└── package.json\n```\n\nBuilt version:\n```\nbuild/\n├── static/\n│   ├── css/\n│   │   └── main.a1b2c3.css\n│   ├── js/\n│   │   └── main.d4e5f6.js\n│   └── media/\n│       └── logo.g7h8i9.png\n└── index.html\n```\n\nAs you can see, the built version is more compact and optimized for deployment.\n\n## Why Are Builds Important?\n\n1. **Performance**: Built applications are typically faster and more efficient than running source code directly.\n2. **Security**: Source code is not exposed in production, protecting your intellectual property.\n3. **Consistency**: Builds ensure that the same code runs across different environments.\n4. **Dependency Management**: All required dependencies are bundled together, avoiding \"it works on my machine\" scenarios.\n\n## The Build Process\n\nThe build process can vary depending on the programming language and tools you're using, but generally involves these steps:\n\n1. **Compilation**: Converting source code into machine-readable format.\n2. **Minification**: Reducing code size by removing unnecessary characters.\n3. **Bundling**: Combining multiple files into a single file.\n4. **Transpilation**: Converting code from one language to another (e.g., TypeScript to JavaScript).\n5. **Optimization**: Improving performance through various techniques.\n\n## Build Tools and Configuration\n\nDifferent languages and frameworks have their own build tools. 
Here are a few examples:\n\n- **JavaScript**: Webpack, Rollup, Parcel\n- **Java**: Maven, Gradle\n- **C#**: MSBuild\n- **Go**: Go Build\n- **Rust**: Cargo\n\nThese tools often use configuration files (like `webpack.config.js`, `pom.xml`, or `Cargo.toml`) to define how the build should be performed.\n\n## Cross-Platform Build Tools\n\nThere are also tools designed to create builds for multiple platforms from a single codebase. Some popular ones include:\n\n- **Wails**: For building desktop applications using Go and web technologies.\n- **Electron**: For creating cross-platform desktop apps with web technologies.\n- **React Native**: For building mobile applications for iOS and Android using React.\n- **Flutter**: Google's UI toolkit for building natively compiled applications for mobile, web, and desktop from a single codebase.\n\nThese tools are particularly useful when you want to target multiple platforms without maintaining separate codebases.\n\n## Continuous Integration and Deployment (CI/CD)\n\nIn modern development workflows, builds are often automated as part of a CI/CD pipeline. This means that every time you push code to your repository, it automatically triggers a build process, runs tests, and potentially deploys your application.\n\n## Conclusion\n\nUnderstanding builds is crucial for any developer looking to create production-ready applications. They're not just about compiling code; they're about preparing your application for the real world, ensuring it's optimized, secure, and ready to perform.\n\nAs you continue your journey in software development, you'll encounter more complex build processes and tools. Remember, the goal is always the same: to transform your lovingly crafted source code into a robust, efficient application that's ready to meet its users.\n\nHappy building!\n\n---\n\nI hope this blog post has been informative. Remember, the world of builds is vast and ever-evolving. 
Keep exploring, keep learning, and most importantly, keep building!","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AFantastic+Builds%21%2Cdate%3ASep+12+2024","date_published":"2024-09-13T00:00:00.000Z"},{"id":"good-enough.mdx","title":"Good Enough","url":"https://chiso.dev/posts/good-enough","tags":["rant","life update"],"summary":"You probably know a lot about the impostor syndrome, but this article isn't really about it. Sorry to steer you in the wrong direction. This is about not being good enough, but also not what you expect.","content_text":"\nHey there again, buddy! The initial intention for this article, which I started writing about a month or two ago, was for me to rant about how I feel I am not good enough. But instead of doing that, let's talk about growth and positivity, why? IDK, do you want to be sad?\n\nMany people I know or who frequently come into contact with think I am an excellent programmer and know a lot based on my GitHub or the _crap_ I post on Twitter or LinkedIn, but I don't feel that way, I know I am not, and I frequently tell people so. And during the past few months, I've felt a strong urge to improve significantly in my field — not so I can live up to the hype, but rather for MYSELF. I recently started to realize that being a developer involves more than just writing code, and I made the decision to **NEVER SETTLE** with being \"good enough\".\n\nDespite appearances, I have received a large number of rejections this year since beginning to apply for 'office'/full-time positions (over a hundred at this point, I lost count), primarily due to my visa cap, but that is not the point. I received a number of interviews whilst I was re-building my CV, I passed several, and even signed contracts only to have them revoked because my visa only allowed me to work 20 hours per week (even contract roles, sucks right?) 
And when I finally decided to just state that on my CV, it became even more difficult to get any interviews; and I was slowly burning through my savings, so at this point I was tired, and all I had left was to just build myself even more and resort to menial jobs because getting freelance gigs is harder nowadays, if you know what I mean, and not quite reliable for me when I have thousands of £ to pay in tuition fees and other things, only a handful of my friends; [@\_frokes](https://twitter.com/_frokes) and [@ipariola](https://twitter.com/ipariola) knew about this frustration, often supported me and thanks a lot to them too. **Spoiler alert**: there is no \"I started a role at Amazon\" or \"I 100x-ed my income\" at the end of this article.\n\nIn between all of this, I've taken a few freelance jobs through referrals, but the feeling of not being 'good' enough lingered in my mind no matter how much I seemed to learn or practice. It wasn't so much about learning more frameworks for me as it was about becoming more of an engineer and less of a 'developer'; which entails having much more technical depth and understanding (again, this isn't just about the title), so I continued pushing. I bought a handful of Udemy courses, completed some of them, but they didn't feel in-depth enough, I didn't feel like I understood significantly more, and I was hungry for knowledge. Asking for help was hard(er) for me months ago because I wasn't used to it; probably just like you, and I was even more terrified of being ignored. 
If you read my Twitter thread about never settling, you'd know I was a \"lone wolf\" for a very long time and I didn't meet or know any other developer for over 3 years until last year when I got on Twitter (for context; the first time I heard about Node.js was about 8 months ago, shocking isn't it?), so I'll admit I was living in a bubble for the last 3 years before that with what I thought I knew, it felt like I wasted a lot of time because I did but it wasn't too late either.\n\nYou undoubtedly understand my desire to create a community for all 'Techies' at [Frikax](https://www.frikax.net); a community, or at least the people I've met, has truly helped me grow, and I believe it might benefit a lot of other people as well. I finally understood the importance of being 'social' at the time, so I joined LinkedIn, reconnected to Facebook, became more involved on Twitter, and met a lot of wonderful individuals, of whom I am fairly convinced you are one. In April/May, I began to overcome my fear of being ignored and simply reached out to a few extremely senior engineers whose work I admired and respected, and a few of them did respond and offer to help out, which they did, but understandably, most of them were very busy with other things and I wasn't offended either. I would contact several of them when I needed to learn something or know how they did things, and they would respond, explain, and point me to useful links, videos, or concepts. You might be thinking, \"Why not just Google those things yourself?\" Sure, I could, but I've discovered that I learn best with and from other people, thus you'll find me 'stalking' a lot of repos and reading code there. 
Huge thanks to [@pipedev](https://twitter.com/pipe_dev) and [@lanreadelowo](https://twitter.com/lanreadelowo); I probably ask them more than I should, but they are extremely helpful still.\n\nThe other crucial aspect was applying the new skills and \"advanced\" concepts I had learned, but since I wasn't going to get a company job or any major job that could really allow me to gain the experience, I did what any truly knowledge-hungry developer would do: I dove into old projects and rebuilt them, built even more new ones, and did as much coding as I did reading and watching videos in the unhealthiest way possible, and it started to tell on me so I also did learn to take short breaks (this is VERY essential, not just to prevent a burn-out but your health is also very important). Concurrently, I was working to overcome my urge to have everything turn out flawlessly (but obviously as a programmer or anyone reading this, you know things aren't flawless) and avoid criticism while I was building these things. And at some point, I just stopped caring and started posting everything I'd worked on, fully anticipating the \"You shouldn't have done this?\" and \"Why do it like this?\" comments no matter how 'attacking' they seemed (some people are naturally just trolls, I mostly don't reply to those) - BLII.\n\nI've spent the last few months learning not just about writing code but also about deeper concepts, tools, and operating systems, and I finally feel like I'm starting to understand how to be a good developer and what I'm doing. Eventually, I'll understand how to approach things like an engineer, but in the meantime, I'll keep building, learning, iterating, and improving, and most likely writing articles at [aerdeets.com](https://www.aerdeets.com) to share what I've learned and to help beginners; explaining as best I can. 
I currently work full-time/part-time, so I don't have much time to write articles on a regular basis, but I enjoy working there too; the staff is fantastic! I also do get a bunch of recruiter messages on LinkedIn and Twitter, but at some point, you have to acknowledge you don't have to take on every single job at **ONCE** to 'gain experience' or 'make money' (you'll most likely end up sucking at a lot of them even if you don't have deadlines). Invest in your learning as well, and understand your health is just as important.\n\nThat's all I've got for now; you can grow on your own, but it's easier, better and faster to 'scale' when you're with other people; don't forget that. Enjoy your week or weekend whenever you're reading this, remember to not just 'write code' or 'just design' or whatever you do, you could get a good life not knowing anything really, but you'll agree it feels good to be actually pretty good at what you do, and if you also did fall off the wagon, it's not too late to get back on it either. Ciao :)\n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AGood+Enough%2Cdate%3AJul+14+2022","date_published":"2022-07-15T00:00:00.000Z"},{"id":"in-an-alternate-universe.mdx","title":"In An Alternate Universe: Reimagining a Web API Project","url":"https://chiso.dev/posts/in-an-alternate-universe","tags":["programming","deployment","production","Go","microservices","DevOps","cloud-native"],"summary":"Reflecting on a transformative internship experience and envisioning an innovative approach to web service design.","content_text":"\n## A Transformative Internship Experience\n\nIn an alternate universe, I had the incredible opportunity to intern with a team of exceptional developers. This diverse group included senior DevOps engineers, PhD-holding physicists who've contributed to groundbreaking open-source projects, and seasoned professionals who've navigated the same challenges I face now. 
Their wealth of knowledge, wisdom, and resources was awe-inspiring, pushing me to constantly expand my skills and understanding.\n\n## The Challenge: Reimagining a Web API\n\nDuring my internship, I was tasked with building a web API for user authentication and access-level-based credential generation. While the original design was solid, I saw an opportunity to elevate it into something more powerful and flexible: a comprehensive web service that could seamlessly interact with CLI tools and generate necessary tokens based on sophisticated access controls.\n\n## Embracing a Cloud-Native, Microservices Approach\n\nReflecting on the project, I realized we could leverage our robust cloud infrastructure more effectively. Instead of a monolithic API, we could create a scalable, cloud-native web service. Here's how we could reimagine the project:\n\n### 1. Harnessing Cloud Resources\n\nOur company provides an impressive cloud ecosystem for logging, monitoring, and deployment. To fully utilize this, we could:\n\n- Replace mock Redis instances with cloud-based Redis databases, improving reliability and scalability.\n- Implement distributed tracing using tools like Jaeger or Zipkin for better observability.\n\n```go\nimport (\n    \"github.com/go-redis/redis/v8\"\n    \"go.opentelemetry.io/otel\"\n)\n\nfunc setupRedis() *redis.Client {\n    return redis.NewClient(&redis.Options{\n        Addr: os.Getenv(\"REDIS_ADDR\"),\n    })\n}\n\nfunc setupTracing() {\n    tp := initTracerProvider()\n    otel.SetTracerProvider(tp)\n}\n```\n\n### 2. 
Microservices Architecture\n\nBy adopting a microservices architecture, we can create more modular, maintainable components:\n\n- Implement an API Gateway using NGINX or Traefik to route requests and handle authentication.\n- Develop separate microservices for user management, token generation, and CLI tool interaction.\n\nHere's a simple example of how our API Gateway could route requests:\n\n```nginx\nhttp {\n    upstream user-service {\n        server user-service:8080;\n    }\n    upstream token-service {\n        server token-service:8081;\n    }\n\n    server {\n        listen 80;\n        \n        location /api/users {\n            proxy_pass http://user-service;\n        }\n        \n        location /api/tokens {\n            proxy_pass http://token-service;\n        }\n    }\n}\n```\n\n### 3. Enhanced Testing Strategy\n\nOur new architecture allows for more comprehensive testing:\n\n- Implement integration tests that cover the entire request flow through our microservices.\n- Use contract testing to ensure compatibility between our services and the CLI tool.\n\n```go\nfunc TestTokenGeneration(t *testing.T) {\n    ctx := context.Background()\n    user := createTestUser(ctx)\n    token, err := generateToken(ctx, user)\n    assert.NoError(t, err)\n    assert.NotEmpty(t, token)\n    \n    // Verify token with CLI tool\n    cliResult := runCLIToolWithToken(token)\n    assert.True(t, cliResult.Success)\n}\n```\n\n### 4. 
Streamlined Deployments\n\nWith our new microservices architecture, we can implement more flexible and reliable deployments:\n\n- Use Kubernetes for orchestration, allowing easy scaling and management of our services.\n- Implement a CI/CD pipeline using tools like GitLab CI or GitHub Actions for automated testing and deployment.\n\nHere's a sample Kubernetes deployment for one of our microservices:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: token-service\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: token-service\n  template:\n    metadata:\n      labels:\n        app: token-service\n    spec:\n      containers:\n      - name: token-service\n        image: our-registry/token-service:latest\n        ports:\n        - containerPort: 8081\n```\n\n## Conclusion: The Birth of \"Genie\"\n\nIn this alternate universe, driven by the inspiration from my talented colleagues and the company's amazing infrastructure, I've developed \"Genie\" – a powerful, cloud-native web service that grants access based on sophisticated company-level permissions. Genie can be easily updated to provide access to various company tools, offering a flexible and secure solution for credential management.\n\nThis reimagined approach not only solves the original problem more elegantly but also sets the stage for future scalability and feature additions. 
It's a testament to the power of continuous learning, innovative thinking, and leveraging cutting-edge technologies.\n\nAs I reflect on this alternate reality, I'm filled with determination to bring these ideas into my current projects, always striving to build something better, more efficient, and more aligned with modern best practices in software development.","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AIn+An+Alternate+Universe%3A+Reimagining+a+Web+API+Project%2Cdate%3AAug+9+2024","date_published":"2024-08-10T00:00:00.000Z"},{"id":"my-stack-2024.mdx","title":"My Stack.","url":"https://chiso.dev/posts/my-stack-2024","tags":["stack","tools","programming language"],"summary":"Things I use, will use and probably will continue to use.","content_text":"\nI have talked about the languages, frameworks, databases etc. I use and why I use them from time to time on Twitter but never in a structured way, this article will be a somewhat complete answer to questions around my 'stack', and we will be talking about everything; language, framework, ORM, monitors, headphones etc. You do not have to agree with my reasons for picking these things, that is fine, it's subjective. 
I spent a larger part of 2023 exploring different areas to figure out what else was out there, what I liked, what I didn't like, what works for me, what doesn't etc.\n\n![My Stack](/images/my-stack.png)\n\n> When I started writing this article, I was still away from social media trying to survive my school assessments and by the end, I may still be, which means this article might be up for weeks before we get to discuss it but I would still love to have a discussion around it if you want.\n\n> Yes, I am still behind on that planned website redesign too\n\n> I have mostly been interested in developer tooling, databases and real-time systems lately so a lot of my choices and preferences may lean towards that.\n\n> I may get some things wrong here from my experience with these things but I will try as much as possible to provide accurate information, objective corrections are welcome!\n\n> I haven't really been able to take the time to review this thoroughly and I did not want to keep it in my drafts for too long so, I will be updating this article as I go along - mainly typos and stuff - I will try to keep the updates to a minimum though.\n\nBefore we get into all that though, I think it's only fair I give you a bit more context around what I need, what I think software should be like and all that. The more I explored, the more I ran into BROKEN software, not one or two bugs here and there but broken; whether it be bugs that made them unusable or a UX that did not seem like it was made for actual humans (this discussion is for another article), I started to see a pattern, even in Apple's software that used to feel very polished, it felt like no one cared about software anymore, everyone wants to 'ship' as they call it, no one seems to be willing to put in the work to make their software an experience they'd want to use or they have a really low bar for quality? 
I don't know but from most discussions I have had with most ~of these _shippers_~ people on Twitter, it seems to be both.\n\nYes, I am judging you, I am judging us, I wish we would adopt the work ethics of game developers; IMO, game dev is so hard and requires a lot of discipline, no one is going to buy a game with \"This production version works but you know, x just doesn't work yet, don't press y because it may delete your save data\", \"it isn't so bad, you just have to do x and y, and x works most times...\" or any of such narratives around it. I am guilty of this too, I have written bad software that I am ashamed of, I have written horrible code I would never want to see the light of day so, before I get to write an article on the state of software and how everyone should do better, I need to do better too. I know most of you reading this might have been born in the era of Microsoft Windows where everything was already slow by default (I am not blaming Microsoft, I am saying most of us got \"used\" to software being slow because the thing we used to run software was MS Windows which was and still is notoriously slow when doing things like just using the file explorer, without using search that is, it is even worse with search), but I have seen fast software, I have used fast software, I know it is possible to make fast software and we should make fast software.\n\nOh dear, that was... uh, quite the rant, but I promise we are actually getting into this section's purpose now. 
To do better, I have decided that for both software meant to be delivered into the hands of users and ones that would never be seen by the user:\n\n- I need to write software with no bugs that could and should have been caught before it ever made it out\n- No taking shortcuts around handling failures/errors/exceptions\n- Performance should never be an afterthought, it should be woven into every line of code\n- Re the point above, not trying to fit a square in a circle; possible but at what cost? A.K.A. please stop writing abominations (Laravel for mobile apps? what???)\n- Be able to verify the state of a piece of software with certainty (tests, readable enough to know what the hell is going on etc)\n\nThese are just summaries, there are a lot more to do and I know they may not make sense to you now, you will understand as we go deeper but a lot of the choices I have made are geared more towards these goals.\n\n# Languages\n\nLet's start with languages; the ones I intend to write often over the next few years. Most people say the choice of language doesn't matter, same people probably write backends in Javascript so... cough cough... That's all I will say about that, I think it does for some people and some cases so, I will explain why I have chosen these languages and where I intend to use them.\n\n## PHP\n\n> \"Heavens no, why would anyone write PHP? It's the worst thing in the universe\"\n>\n> \"Okay, maybe it's Laravel, I think it's a nice framework that makes PHP usable, I will let it go\"\n\n**NO**, I do not write Laravel, I do not like Laravel, I will never (re-)learn Laravel (or Django, or Rails, or any other meta framework; good for you if you love, live and breathe them), the \"why\" could be its own article, but not today.\n\n**NO**, I am also not in denial, PHP has its quirks, a lot of them in fact that calling them just _\"quirks\"_ is an understatement. 
I started with PHP 5.2 so believe me, I know, they are only just trying to fix most of them and improve the language now. It was a language born with no design in mind, a loud echo of its C origin, a great example of what happens when you make thinking (as in design considerations) and performance afterthoughts; although thankfully, there are people willing to work full-time to improve it - Thank you!\n\nAs I mentioned, PHP was the first language (proper language, not markup language; I did pick up HTML first but you know we don't talk about that) I ever picked up, I remember looking at PHP code back in Junior secondary school and thinking I was never going to know that thing, it looked cursed, it looked complex, I wasn't sure I was smart enough for it. I had to learn a lot through trial and error; I had 25MB plans a few times to look up how to do the basics and just ran from there, 00webhost for hosting (by uploading files from an iPod touch I had been gifted, use internet on my computer? are you crazy? - there was so much going on with Windows XP - or was it Vista? I cannot remember - and the $hitty network meant I was out of mobile data - via USB tethering - before the page even loaded; if the desktop didn't just freeze that is) and dot.tk for domains, Twitter and Youtube were not in the picture either. I grew to like the language, it allowed me to build things I thought of, it earned me my first income (eventually, I could use FileZilla like the rest of you) and still does (Yes, I still write PHP, I still deal with XML from cursed integrations at work that we have to maintain, sigh).\n\nEnough with the sentimental crap, PHP is still a part of my stack because it is still useful to me, I mean, it is not the poster child for performance or consistency but I know it, dare I say, inside-out and it only keeps getting better. 
I wish the deployment story was a bit smoother in this Docker age but it isn't and to be honest, it does make me want to write it less for my personal projects since I use Docker a lot. If I needed to build a web app quickly and did not care about over-engineering things as I usually would with my personal projects, I would still go with PHP anyway.\nAlthough, I eventually might do something like write a bloody ORM while telling myself \"I am not over-engineering, I am just writing utility functions in a class clearly named `BaseModel`\", you've been there, don't judge me, and that is majorly why I like and use PHP (not only because I am paid to write it), I barely ever need to reach outside the language to get things done in the way I want.\n\nWe are able to work on a codebase half as old as I am without a single `composer install` or even a `composer.json` file (yes, I am aware composer adds a certain DX, I am telling you it is possible to do without it and we do), the only build system we have is for... you guessed it, the Javascript[1] and CSS (minification and other things) and tailwind is the only reason we have had to even deal with Node recently.\n\n\"Well, that only works because you are building toy products, I can do that with Next.js and Prisma, and the thousand other dependencies that come with it too\" - Uhm, no, we manage about 8 different codebases that have to integrate with other systems and with our multiple mobile apps, it hasn't been without its issues like every other legacy system either. 
And, thank you, our performance is fine :)\n\n> [1] because we mainly use jQuery; a remnant of the past which I believe we are slowly migrating from since browsers and modern Javascript are good enough now\n\n## Typescript\n\n> \"Hypocrite, you just bashed Node.js and Javascript\"\n\nCalm down, I can explain.\n\nSee, Javascript also has a plethora of [birth](https://hackernoon.com/how-javascript-was-created-and-why-the-history-behind-it-is-important-fwh3tco) issues like PHP, but unlike PHP, some of these \"behaviors\" can no longer be fixed because of the way Javascript (the actual thing in the browser, not the... questionable ways to kill your server and maybe a few brain cells) is distributed. Javascript isn't something you can really install, your browser engine \"maker\" decides what you get, and when you get it, I can't go into details on how Javascript/ECMAScript works in this article but think of it this way; it is like your phone's chip, you are stuck with whatever the manufacturer put in your phone, you do not have a choice (unless you've spent too much time on XDA forum and were willing to do things that would make people keep their devices away from you, is it even still around?) and developers also do not have a choice so in some cases they have to ship this thing called a [polyfill](https://developer.mozilla.org/en-US/docs/Glossary/Polyfill) just to deal with the insanity. 
This, in turn, means that when ECMAScript gets a new release, you often have to consider those new features untouchable for at least the next 5 years unless you:\n\n- are willing to ship polyfills for all the browsers (we have a lot of them these days)\n- are delusional enough to believe all, or even most, users actually update their browsers\n- don't care about a large percentage of your users\n- don't even have users to begin with\n\nThis also means that making changes to behaviors that developers have either had to work around or depend on for years now would break a large percentage of websites and web apps on the internet today and we will see more screens like this often without Next.js feeding it to us for no reason, beautiful, isn't it?\n\n![Uhm, vercel?](/images/vercel-next-error.png)\n\nWhat was I talking about? Oh yeah, the facade... sorry, language; Typescript. These days, I tend to keep to using Javascript and by extension; Typescript, where it was meant to live - in the browser since it is the only sane (I know, I feel it too, \"sane\" and \"Javascript\" in one sentence, oof) way to write interactive front-ends these days and since I like ~~type safety~~ types (better believe we will come back here), choosing Typescript is a no-brainer, it serves as some sort of contract (there are a lot of ifs here but we will skip it).\n\nOkay, right back to the type-safety part, you see, Typescript is just syntactical sugar over Javascript, it simply helps you know what a contract _may_ be, the types do not **really** have an effect during runtime, you are not getting any special performance benefit from smaller allocations by having more distinct integer types and whatnot. 
In fact, you could choose to lie to Typescript and in turn blow things up yourself with that lie, so Typescript strongly depends on both sides keeping the contract, but to be fair, not even Rust/Go is immune to something like an API response schema changing right under your feet, it is only worse with TS because you end up exposing that glorious `undefined` to the user (although some may consider it better than Go's decision to use defaults and whatever Rust does, I don't). I am sticking with Typescript because it is better than nothing, that's it, I want my IDE to help me write less hit-or-miss Javascript where possible; trusting Typescript entirely would be like believing the weather forecast in England and failing to prepare for the opposite, you are most certainly going to regret it.\n\n## Go\n\nYou probably saw this one coming anyway. I don't want to spend much time here, you probably already know how I feel about Go too; I hate it and I love it too. Go was the first compiled language I learned and stuck with. Go is just... fine. I have an article coming on why I will be writing less Go and more Rust so, I will try to be concise here.\n\nI have my issues with Go; mainly the fact that the compiler is stupid, tries to make up for it in speed and ends up giving you a program that is most likely to blow up AKA Go is _NOT_ safe and doesn't try to help you be safe either, but if I ever had to ship a performant API fast, do some networking experiment, make something that had to be self-contained and didn't need the extra performance Rust _might_ give or work on something for someone, I would use Go. 
I like a lot about Go and hate a lot about Go but again, not for this article, I care deeply about the things most people don't care about like image size, and Go's ability to make self-contained, fast programs is a big plus here (you could argue other languages offer the same) especially if it is something I expect people to run themselves, I can shove it in an alpine image or a distroless image depending on the context and have it all be less than 25MB and require no external dependencies.\n\nGo is simple enough to let you pick it up and move fast, you _may_ pay for that later (as you will in any other language if you make poor choices) but it is generally, honestly, a good choice if you can put up with some of its weirdness and be okay with the fact that the makers will/may not give you those convenient things you want or fix any of that weirdness because it doesn't need fixing to them and that is fine, but I have used other things, I have seen things can be better and I am not fine with it.\n\nMost Go fans will tell you the complaints are all due to _skill issues_, so will fans of any other language, so la la la la la la la la, not listening, fix your $hit, you deserve better, the compiler can and should tell you \"oh hey, this might go wrong\" because it will always know better than you, your type system shouldn't force you to reinvent things, you shouldn't need to use a pointer and a silly nil check to figure out if your user actually sent `false` or not because your language gave you no choice with the defaults... I will stop here before I step on more toes.\n\n> I also do **really** like Go, it's allowed me to build things that are fast in a fairly short amount of time and I did not have to learn \"too much\", the concurrency model is so good too! 
I am learning Rust at the moment and if I had to do most of the things I have done in Go in Rust, I would probably find it a tad more difficult, to be honest.\n\n> Also, my next job may require me to use Go since it is popular for backends these days, and thanks to school, I know I want nothing to do with Java, so yeah, I will write Go from time to time; I still have projects in Go and will continue to write projects in Go when it fits. I do not intend to leave my current part-time/full-time (depending on when you are reading this) job anytime soon. Apart from the fact that they took a bet on me (it is very difficult to even get interviews as a student here for dev jobs), I work with genuinely nice people, the pay is perhaps above average for the UK, and overall, I still have so much to learn from my boss; a bit more beard hair (and more grey ones) and he would fit that `graybeard` stereotype, and I learn SO much more from a 20-minute discussion about everything ranging from servers to networking to LINUX to you-name-it than I would sitting in a uni class for 4 hours every week. And apart from all that, going into an office to work with people has had a great impact on my social life; I have been pulled to multiple outings and parties in my short time here, out of my natural habitat (a computer workspace), and honestly it has been a great experience. I have spent and still spend a lot of my time indoors working; I spend way more time talking in my head and on Twitter than in the real world.\n\n## Gleam\n\n[Gleam](https://gleam.run) is a language that runs on the BEAM VM (yes, the same one used by Erlang; it actually compiles down to Erlang, making it sort of like the Typescript of the BEAM world but better). It can also run in a Javascript runtime, but I don't really care about that so I will be pretending that part doesn't exist. 
You can read more on why I ran with Gleam instead of Erlang itself or Elixir [here](https://chiso.dev/blog/a-gleamy-exploration), but to keep this short: I like Gleam, I like the community, and functional programming has been awesome to explore. I will be using Gleam for real-time backends and APIs for my personal projects (I doubt I will get to use it anywhere else for a long while); that is what it shines at (although Elixir getting real types might make me have a second look at it).\n\nYes, I could also use Go for realtime stuff; in fact, you could use almost anything for a realtime backend, I mean, see [this](https://openswoole.com/), people will do anything given the chance, and don't get me wrong, it is good that people explore other possibilities. But Erlang and its VM were made for this very purpose: they were made to be fault tolerant and handle tons of concurrent connections and tasks. Of course, they are not suitable for memory or performance-intensive tasks, but they can easily be augmented, and Elixir has clearly served Discord well; there were a few issues here and there, but thanks to the fact that the VM plays nice with other lower level languages via NIFs, they could be [fixed](https://discord.com/blog/using-rust-to-scale-elixir-for-11-million-concurrent-users) without even thinking about a rewrite.\n\nGleam is still in its early stages but since it stands on the shoulders of giants and plays nice with those giants, I can always reach into Erlang for missing functionality, and I do in fact enjoy it. 
I currently maintain a few libraries in the Gleam ecosystem and intend to keep writing Gleam for years to come, even if it is scoped to my own personal explorations and projects. I also like a lot of the decisions they have made around the syntax, type system and error handling; the similarity to Rust is a big plus for some people like myself.\n\n![Gleam code sample](/images/facquest-code-sample.png)\n\nYou can have an HTTP server right next to your CRON job runner and Websocket server and be confident that one of them crashing will not bring the others down with it. In fact, you do not need to care about that crash (unless you need to); the processes will be restarted as necessary, the whole VM is designed to chug along, and one process dying will not cause your entire system to crash (in a language like Go, a panic in what's essentially the equivalent of a BEAM process, a goroutine, would take down the whole service if `recover()` isn't used for that specific goroutine, and even that is nuanced). A process in this context is not a real OS process, making it even cheaper to spawn and kill processes as needed! Need to send messages from one process to another? You've got it! Want to spawn a task and \"await\" the result in the calling process? Easy. 
Dare I say Go's concurrency model was inspired by the BEAM; they are similar and both a joy to use.\n\n> \"But you can just crash the whole thing and let Docker restart them for you\" - sigh, please no, and you clearly do not see the value here.\n\n> I may leave Gleam behind in the near future since Elixir appears to be getting types too, and it isn't held back by a desire to compile to WASM or Javascript, which in turn means it can do more things native to the BEAM VM out of the box (you can literally write Erlang inside Elixir just fine), but for now, I am sticking with Gleam.\n\n## Rust\n\nRust is definitely hard to learn, but I also understand a lot of the problems it is trying to solve (no thanks to the several nil dereference panics in Go). Rust is not perfect either, and I will talk about some of its issues in the article where I explain why I will be writing less Go and more Rust. For starters, Rust is less flexible in a way, so it is out of the question for most web backends I would work on; Gleam, Go or PHP would do fine there depending on the context. I am picking up and sticking with Rust for the more lower level things I intend to explore and do more of; majorly dev tools and database explorations. 
Rust's Tauri is also really good and a solid option for me to write desktop apps when I need to.\n\nI have some C++ experience from uni, but since I probably write [worse C++](https://github.com/raeeceip/ls.cpp/tree/master) than your Javascript and can't (and don't want to) use CMake, Rust is a perfect choice for me ([V language](https://vlang.io) in theory was, but they keep dropping the ball hard: the syntax was almost unrecognizable the last time I checked, there are now multiple versions of if and other things like $if, the syntax for macros or whatever they are is just `[stuff, \"another stuff\"]`, and they are too busy building things into and around the language to sell it instead of focusing on stability etc.). I like the ergonomics, I know most people find it cumbersome, and I may too in the future, but for now, I am pleased with it and its helpful compiler (Go, feel free to pick some inspiration here) that would most certainly produce a binary that doesn't crash because of something that could have been caught before it ever made it past compilation. Rust being without a garbage collector makes it suitable for things Go isn't, like making high-performance databases, other languages etc. The type system behaves as a real part of the language, and the meta programming features are really nice too; they allow you to do cool and convenient things like [Axum's route extractor](https://docs.rs/axum/latest/axum/extract/index.html), [rspc](https://www.rspc.dev) (the developer experience you get here makes me want to use Rust for APIs even, haha), or [Serde](https://serde.rs/) etc. significantly better than having to use [struct tags and weird reflection stuff](https://github.com/raeeceip/gots) in Go.\n\nRust is lacking some things in its standard library, unlike Go, and you often have to choose between multiple options even when it comes to things like what async runtime to use, but I am willing to cope with these; the benefits outweigh the nuances for me. 
Pattern matching is also a joy to use in any language; it feels like it should be in every language, but sadly it is not. I have tasted it in Rust, Gleam and Elixir and I cannot go back now (Go, take more notes here).\n\n# Frameworks and libraries\n\nThere isn't much to talk about here, the language matters more for me, but here you go.\n\n## Solid.js (Typescript)\n\n> \"Wait, are you nuts? not React?\"\n\nNope, not React. React has such a rich ecosystem that it is tempting to just stay there, but I strive for simplicity and performance, and React offers neither. Hell, there is a [YC-backed startup](https://million.dev/) dedicated to helping you fix React issues (how long till we get one for Go too? they belong in the same basket, haha). React got me my start with modern frontend stuff, but lately I have been okay with just using Astro for static stuff with a bit of interactivity, like this website, and Solid.js for other things. Apart from the frequent drama in the React, Next.js and Vercel \"ecosystem\" that is enough to put anyone off, Solid genuinely does have enough appeal on its own for me.\n\nIt's done away with the virtual DOM. Not that I care much about that; what I do care about is the effect this has on the framework itself: Solid.js, as I understand it, is able to be more performant since it doesn't need to keep a version of the current DOM in memory to diff on state changes, and that, combined with its choice to use signals (and its obvious dedication to performance-first), has made Solid.js pretty fast by default!\n\n> To be fair, React opened the door for a lot of newer frameworks and I am thankful for that. JSX is quite nice to work with, I like it; happy for you or sorry it happened if you don't.\n\nSolid.js also has a lot of first-party libraries and components that are guaranteed to retain that performance and also reduce package choice fatigue; it is also similar enough to React that it is not that hard for any React user like myself to pick up.\n\n> 
Remember I am not trying to convince you to use Solid.js. I know people get defensive about their frameworks, and I am also aware there are a lot of frameworks out here; explore and make your own decisions, this is mine.\n\n## Axum (Rust)\n\nI don't have much to say for now: it looks nice, it is made by the same folks that made the most popular Rust async runtime, so performance should not be a problem. Extractors are really nice too, but I haven't used it enough to say a lot about it, and I don't intend to do a lot of APIs in Rust anyway.\n\n## Wisp (Gleam)\n\nThis is less of a framework and more of a collection of convenient functions, nice to have; it is also maintained by the [creator](https://lpil.uk/) of Gleam.\n\n## Chi router (Go)\n\nNice APIs, good performance, no opinions.\n\n## Echo framework (Go)\n\nAlso nice APIs, good performance, great for real projects where I don't want to roll a lot of my own stuff.\n\nThere you go, I don't have many opinions about libraries or frameworks; I try to do my own stuff when I can anyway.\n\n# ORMs/Database access stuff\n\nJust like the previous section, I don't have a lot of opinions here either (unless you mention Prisma, I have plenty of opinions, like: USE SOMETHING ELSE)\n\n- Gleam: SQL, decoders & one of the drivers like [pgo](https://github.com/lpil/pgo) or [sqlight](https://github.com/lpil/sqlight)\n- PHP: again, just write SQL bro + PDO (and often \"accidentally\" rolling my own query builder/ORM)\n- Go: nothing really safe or sane enough in this ecosystem that won't burn you, but I like [Bun by Uptrace](https://bun.uptrace.dev/) and [sqlc](https://sqlc.dev/) + I mainly just write SQL anyway\n- Rust: SQLx\n\n# Databases\n\nNow, this part is also pretty generic, but it also depends on what the application needs.\n\n## SQLite\n\nSQLite is just a good fit for smaller side projects and things like the desktop and mobile apps that I have had to do recently, and thanks to [LiteFS](https://fly.io/docs/litefs/) and 
[Litestream](https://litestream.io/), accessing it outside one instance and backing it up are not _really_ an issue anymore.\n\n## MySQL & PostgreSQL\n\nI have very limited experience with Postgres and have worked with MySQL for years now, but to be honest, I have only really dug into both more recently. MySQL has gotten a lot of updates and performance boosts in recent years, but Postgres objectively can do more since it is more of a hybrid too (object-relational, like Oracle 12c), which is really nice. Still, I am not picking one over the other because, again, it depends on whatever the application needs (more reads? more writes? etc.). Although, it appears most people may not have to make that choice anyway since most managed services are for Postgres these days.\n\n> I think that is it for the code part, ping me if there is anything I should have included\n\n# Editor\n\nYou have probably noticed by now, I use Neovim mainly. I say mainly because, from time to time, I have to switch to VS Code when I am testing an unsupported language (most languages try to support VS Code first) like V and, initially, Gleam. At work, I use PHPStorm, not because everyone else uses PHPStorm but because Windows is so cursed that setting up my Neovim config on it was a hassle and I only have Vim on there instead. This would have been okay and I could just use some FTP stuff later, but the Windows terminal is so slow that I don't bother to use it, and I found no way to get Wezterm to use bash on Windows instead of whatever the hell it was using. I still use Vim from time to time when PHPStorm decides to take the whole day (obviously hyperbole, but truly a long time) just to index stuff because I simply switched branches, preventing me from working.\n\nMy preferred terminal is [Wezterm](https://wezfurlong.org/wezterm/) since it supports everything I use right now; I like that it is customizable, and the key bindings + panes are a blessing (nice that it uses lua too). 
I am still waiting to try out Ghostty; I will see if I switch, but this setup has served me nicely for over a year now. I did try Alacritty, it had some weirdness going on and I did not bother with it. I used iTerm2 for a while but again, it also had some weirdness going on that I just didn't like; they both broke my fonts and looked odd in full screen.\n\n> EDIT: I got access to Ghostty a couple of days ago and I may switch. I still need to set it up to match my current Wezterm setup, but I haven't had the time to do that yet since I have to dig through the source code to figure out how to do some things in the config file (which is plain text, not Lua).\n\n# Monitor(s), Keyboard and other stuff\n\n![My Setup](/images/setup.jpeg)\n\n> There are links to some of the things I mention here\n\nI use a dual [24\" Huawei monitor](https://amzn.eu/d/6YVQrjL) setup and for over 2 years, it's been fine; I have looked at other monitors and even considered an ultrawide, but this feels just right to me. I prefer a dual monitor setup because I prefer having two distinct \"desktops\". I know you can get close to this with something like Rectangle, but I like full screen as you can see: hitting `Option + B` (via [Zap](https://usezap.sh)) to go to my browser affects nothing else, my terminal doesn't suddenly move or get minimized, and other things like Discord, Arc, Spotify etc. that are not for writing code live on the left monitor.\n\nI use a 2021 14\" MacBook Pro (M1 Pro) and that's also been... fine, except for storage (my next laptop, perhaps in 2025, would certainly not be 512GB). 
I have to run a Windows VM sometimes for things like the Braid game because it won't run on my Mac natively (via Steam), or for accessing the Oracle Database server from home (I also don't know why this is broken only on my Mac at home, but it works in the VM so...)\n\nOh, I own an Apple HomePod mini that works like 30% of the time - great job Apple, I thought you could do better than the Google Nest, at least that one had Bluetooth.\n\nI have a pretty [unpopular dock](https://amzn.eu/d/ajHVJE7) from Anker that supports DisplayLink because that is the only way to run both monitors with one cable on newer MacBooks - again, great job Apple, now I can't use Amazon Prime on my laptop because it thinks I am screen-sharing.\n\nI have a Raspberry Pi 4 (8GB RAM) with a [case from DeskPi](https://deskpi.com/collections/frontpage/products/deskpi-pro-for-raspberry-pi-4) and a 512GB SSD from SanDisk that serves as a tiny home server to test things on and also media storage.\n\nMy keyboard of choice is the [Keychron K2 V2](https://www.keychron.com/products/keychron-k2-wireless-mechanical-keyboard?variant=40290116730969); aluminum build, full RGB and hot-swappable with Brown switches. I did not even know I got the highest spec until it arrived but whatever, it is a nice keyboard! I pair that with a Logitech MX Master 3 (I believe).\n\nThe [reMarkable 2](https://remarkable.com/store/configure/vertical/GB) is my recent attempt at getting into more reading and writing. It is quite pricey and has fewer features and less storage than the Amazon Kindle Scribe, so to be honest, this was purely based on design; it just looks and feels so good. Not sure I would recommend it to other people over the Scribe though, the Kindle store integration alone is better than having to buy books on [ebooks.com](https://ebooks.com).\n\nI also use the AirPods Max as my preferred headphones at work and AirPods Pro (2nd generation) when I am not at work; so, not often. 
Most people say they sound like crap compared to Sony's headphones, but I don't know; I tried about 2-3 Sony headphones and couldn't quite stick with them (the last was the XM3, I believe). They didn't sound bad, but these also do not sound bad to me.\n\n## Apps\n\n![Dock](/images/dock.png)\n\nI use certain apps often on my computer and phone, here's a short list:\n\n- [Zap](https://usezap.sh) - Window manager for MacOS\n- [Octal](https://apps.apple.com/app/id1308885491) - Hackernews client for mobile\n- [Stats.fm](https://stats.fm) - Viewing Spotify stats\n- [Linkvite](https://linkvite.io) (Beta) - Saving and organizing bookmarks\n- [Obsidian](https://obsidian.md) - Note taking, todo lists etc\n- [Arc](https://arc.net) - Internet browser, love the vertical tabs\n- [TablePlus](https://tableplus.com) - Database stuff\n- [OrbStack](https://orbstack.dev) - A drop-in replacement for Docker Desktop\n- [Spacedrive](https://spacedrive.com) (Beta) - File manager, waiting for node sync stuff - I do not use it a lot for now\n- [Infuse 7](https://firecore.com/infuse) - Media player (supports Jellyfin)\n- [Mammoth](https://getmammoth.app) - Beautiful & sane Mastodon client\n- [Hoppscotch](https://hoppscotch.com) - API testing and documentation (I haven't used this much yet, I recently switched from Postman)\n\n> I will probably update this article with edits if I change my mind on any of these things\n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AMy+Stack.%2Cdate%3ADec+22+2023","date_published":"2023-12-23T00:00:00.000Z"},{"id":"nextjs-ssr.mdx","title":"Next.js SSR: The Hidden Costs of Vercel-Centric Design","url":"https://chiso.dev/posts/nextjs-ssr","tags":["nextjs","ssr","web development","performance"],"summary":"No you aren't being gaslit, it is kinda bad...","content_text":"\n# The Next.js SSR Conundrum\n\nNext.js has become a popular choice for React-based web applications, particularly due to its server-side 
rendering (SSR) capabilities. However, when deploying Next.js applications with SSR outside of Vercel's ecosystem, developers often encounter unexpected challenges. Let's dive into why this happens and explore some mitigation strategies.\n\n## The Root of the Problem\n\nNode.js, the runtime that powers Next.js on the server, is fundamentally designed with a single-threaded event loop. This architecture excels at handling numerous lightweight, asynchronous operations - think of a typical Express.js server fielding multiple quick requests.\n\nNext.js, however, leverages this runtime in a way that can be problematic:\n\n1. **Heavy Per-Request Rendering**: Next.js renders large component trees to HTML on the server for each request.\n2. **Vercel-Optimized Approach**: This approach is fine-tuned for Vercel's infrastructure, which is specifically designed to handle these workloads efficiently.\n3. **Generic Server Struggles**: On a typical server setup, repeatedly rendering these large pages can become computationally expensive.\n\nThe result? As your pages grow more complex, your server's performance can degrade significantly.\n\n## Performance Impact Analysis\n\nTo illustrate the performance difference, let's consider a hypothetical comparison of response times for pages with increasing complexity:\n\n- For a page with low complexity (score: 10), Vercel might respond in 50ms, while a generic server takes 60ms.\n- As complexity increases to medium (score: 30), Vercel's response time grows to 70ms, but the generic server jumps to 110ms.\n- At high complexity (score: 50), Vercel manages 90ms, while the generic server lags significantly at 200ms.\n\nThis comparison shows that as page complexity increases, the performance gap between Vercel's optimized infrastructure and a generic server widens considerably. 
The generic server's performance degrades much more rapidly, potentially leading to poor user experience for complex applications.\n\n## Mitigation Strategies\n\nWhile these issues stem from Next.js's design choices, there are ways to mitigate the impact:\n\n1. **Incremental Static Regeneration (ISR)**: Utilize ISR for pages that don't need real-time data. This reduces the SSR load.\n\n   ```jsx\n   export async function getStaticProps() {\n     // ...fetch data\n     return {\n       props: { /* ... */ },\n       revalidate: 60, // Regenerate page every 60 seconds\n     }\n   }\n   ```\n\n2. **API Route Caching**: Implement caching for API routes to reduce computation on each request.\n\n   ```jsx\n   import { withApiCache } from 'next-api-cache'\n   \n   async function handler(req, res) {\n     // ...your logic here\n   }\n   \n   export default withApiCache(handler, { ttl: 60 })\n   ```\n\n3. **Server-Side Caching**: Implement server-side caching solutions like Redis to store rendered content.\n\n4. **Code Splitting**: Aggressively split your code to reduce the size of server-side rendered content.\n\n   ```jsx\n   import dynamic from 'next/dynamic'\n   \n   const HeavyComponent = dynamic(() => import('../components/HeavyComponent'))\n   ```\n\n5. **Optimize Data Fetching**: Use efficient data fetching patterns to reduce server load.\n\n   ```jsx\n   import useSWR from 'swr'\n   \n   function Profile() {\n     const { data, error } = useSWR('/api/user', fetcher)\n     // ...\n   }\n   ```\n\n## The Broader Issue\n\nWhile these strategies can help, they're essentially workarounds for a fundamental design choice in Next.js. 
The fact that developers need to implement these optimizations highlights a broader issue: Next.js's approach is optimized for Vercel's infrastructure, potentially at the cost of performance on other platforms.\n\nThis Vercel-centric design raises questions about the long-term implications for developers who need or prefer to use alternative hosting solutions. It also underscores the importance of considering deployment flexibility when choosing a framework for your project.\n\n## Conclusion\n\nNext.js remains a powerful tool for building React applications, but its SSR implementation presents challenges when deployed outside of Vercel. By understanding these limitations and implementing appropriate optimizations, developers can still leverage Next.js effectively across various hosting environments.\n\nHowever, it's crucial to consider whether the additional complexity and potential performance trade-offs align with your project's needs. As the web development landscape evolves, we may see more framework options that offer SSR capabilities without tying developers to specific hosting platforms.\n\nWhat's your experience with Next.js SSR on non-Vercel platforms? Have you found other effective strategies for optimizing performance? Share your thoughts and experiences in the comments below!\n\n---\n\nFor more insights on web development and performance optimization, follow me on [Twitter](https://twitter.com/_Chisomu) or check out my other articles on this blog.","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3ANext.js+SSR%3A+The+Hidden+Costs+of+Vercel-Centric+Design%2Cdate%3ASep+11+2024","date_published":"2024-09-12T00:00:00.000Z"},{"id":"pilot.mdx","title":"Pilot","url":"https://chiso.dev/posts/pilot","tags":["rant","product update"],"summary":"Even if I have no idea what exactly I'm talking about, I hope you like the read! Just keep in mind that the natural order is disorder. 
Yes, that was an Avatar quote.","content_text":"\nI am still awake at 7 a.m. Why? We kept trying to make sure the emails we were going to send out this morning were perfect, and we also did some debugging for another project. This has been the pattern lately: not sleeping till it's around 4 a.m., which is obviously unhealthy, but we try to balance it out (as if balance actually exists). Oh, and the \"we\" I keep mentioning includes [@\_frokes](https://twitter.com/_frokes) and me.\n\nI had built the basic bulk email delivery system the day before, which we would use to send emails to everyone on the wait-list for the next few weeks until... I can't give you a specific launch date because that would be a huge spoiler. But, so far, it's working perfectly and sending emails at breakneck speed, just as we planned (yeah, I lied, we didn't plan the speed, but it's on localhost, so why not?). If your local server is significantly slow, you should examine your code more closely.\n\nToday also feels like the most important day of the week; we had to make a key team-related choice that wasn't easy, but I can't reveal the details, sheesh! I can't tell you a lot of things right now, but let's talk about my week.\n\nReturning to the previous paragraph, I was working on something on the staging version of Frikax, and the average 3-ish second load time for the posts wasn't good enough for me, so I basically tore down the infrastructure and rewrote it to achieve an average load time of 110ms, which is quite acceptable. I'm not exactly a \"sucker\" for perfection, especially since it's an MVP, but the authentication still didn't seem secure enough, so I rewrote it from the ground up on both sides. You're probably sighing at this point, so let's move on to today.\n\nWhat do I have planned for today? A lot. Despite my apparent constant presence on Twitter and WhatsApp, I have a lot more work to accomplish beyond Frikax. 
So today I'll be working on the new website for the temporary job I'm doing - I started it almost two weeks ago and, yes, I did mess up my schedule, but who hasn't?\n\nThat'll probably do it for this piece; what's the point of it all, you might wonder? This is a public compilation of my numerous ideas, thoughts, and so on, as I stated on the blog home page.\n\nOkay, I'll stop here, but tell me, what are your plans for today? I'll see it if you mention/tag me in a tweet. Have a wonderful day, human!\n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3APilot%2Cdate%3AMay+19+2022","date_published":"2022-05-20T00:00:00.000Z"},{"id":"react.mdx","title":"It's Not Your Fault You Don't Know React","url":"https://chiso.dev/posts/react","tags":["stack","tools","programming language"],"summary":"Breaking down why React seems harder than it needs to be, and how it connects to programming basics you already know","content_text":"\n# It's Not Your Fault You Don't Know React\n\nModern web development can feel like trying to build a house while everyone insists you use their favorite brand of power tools - without first teaching you about foundations, walls, or basic architecture. The React ecosystem, in particular, has become a maze of abstractions that often obscure what's really happening on your computer.\n\n## The Abstraction Problem\n\nReact's abstractions aren't inherently bad - they're designed to make our lives easier. But when we learn React without understanding what it's abstracting away, we end up with knowledge gaps that haunt us later. Let's break down some common pain points:\n\n### Hooks: They're Just Closures and Caching\n\nRemember learning about closures in your first programming course? That's essentially what `useState` and `useEffect` are. When you write:\n\n```javascript\nconst [count, setCount] = useState(0);\n```\n\nYou're really just creating a closure that:\n\n1. 
Maintains a reference to a value\n2. Provides a controlled way to update that value\n3. Triggers a refresh when the value changes\n\nIt's similar to this pure JavaScript concept:\n\n```javascript\nfunction createState(initialValue) {\n  let value = initialValue;\n  return [\n    () => value,\n    (newValue) => {\n      value = newValue;\n      // Imagine this triggers a refresh\n    },\n  ];\n}\n```\n\n### useEffect: It's Just a Lifecycle Manager\n\nRemember event listeners and cleanup functions? `useEffect` is just managing when things should happen in your component's lifecycle. It's not magic - it's basically:\n\n```javascript\n// Conceptually similar to:\nclass Component {\n  constructor() {\n    this.setupSomething();\n  }\n\n  componentWillUnmount() {\n    this.cleanupSomething();\n  }\n}\n```\n\n## The Server-Client Disconnect\n\nOne of the biggest sources of confusion is how little we talk about the client-server relationship. React tutorials often focus entirely on client-side state management without explaining:\n\n1. What data actually lives on your server\n2. When and why we make API calls\n3. How data flows from server to client and back\n\nThis is why developers end up with components that:\n\n- Store data that should live on the server\n- Make redundant API calls\n- Implement complex state management for data that could be server-side rendered\n\n## The Boilerplate Trap\n\nThe proliferation of React boilerplates and starter templates is a symptom of this knowledge gap. They promise to solve our problems with:\n\n- Pre-configured build systems\n- State management solutions\n- Routing setups\n- API integrations\n\nBut they often:\n\n1. Include unnecessary dependencies\n2. Implement overly complex patterns\n3. Hide important implementation details\n4. 
Make simple tasks harder to understand\n\n## Core Concepts You Already Know\n\nHere's the good news: if you understand these fundamental programming concepts, you already understand React's core principles:\n\n1. **Functions and Scope**\n\n   - Components are just functions\n   - Props are just function parameters\n   - Hooks use closure scope\n\n2. **Caching and Memoization**\n\n   - `useMemo` is just caching a computed value\n   - `useCallback` is just caching a function reference\n   - React's virtual DOM is just a cache of UI state\n\n3. **Event-Driven Programming**\n\n   - React's render cycle is just an event system\n   - Component updates are just event handlers\n   - Props are just event payloads\n\n4. **State Machines**\n   - Component lifecycle is a state machine\n   - Route changes are state transitions\n   - Form handling is state management\n\n## Moving Forward\n\nInstead of reaching for complex solutions, start with:\n\n1. Understanding the client-server relationship\n2. Learning basic JavaScript patterns\n3. Building simple components from scratch\n4. Adding complexity only when needed\n\nRemember: React is just a tool for building user interfaces. The fundamentals of programming haven't changed - they're just wearing new clothes.\n\nYour confusion isn't a reflection of your abilities. It's a reflection of how we teach React: focusing on solutions before understanding problems, and reaching for complexity before mastering simplicity.\n\nUnderstanding React doesn't require learning a new way to program. It requires connecting what you already know about programming to a new context. 
Start there, and the rest will follow.\n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AIt%27s+Not+Your+Fault+You+Don%27t+Know+React%2Cdate%3AOct+22+2024","date_published":"2024-10-23T00:00:00.000Z"},{"id":"snowfall.mdx","title":"Snowfall.","url":"https://chiso.dev/posts/snowfall","tags":["winter","snow","expectations"],"summary":"Winter is here, how did we fare in our preparations?","content_text":"\n# Snowfall\n\nWalking through this morning's snowfall, I was struck by the importance of preparation. Though winter's arrival wasn't unexpected, it served as a stark reminder of how easily we can become complacent. The time to winterize homes and ready vehicles isn't when the first flakes fall – it's well before.\n\nLast winter gave me a front-row seat to nature's transformative power, and that experience led to overconfidence. I've learned that consistent effort matters, even when it doesn't seem to. The work we put in always counts, even if the results aren't immediately visible.\n\nThis year will be different. But as I reflect, I wish I had done more, said more, pushed harder toward my goals. Yet I take comfort in knowing my journey isn't over. The fire that drives me will guide me through these cold months, even without my north star.\n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3ASnowfall.%2Cdate%3ANov+28+2024","date_published":"2024-11-29T00:00:00.000Z"},{"id":"statelessness.mdx","title":"Statelessness: Where Is Everyone?","url":"https://chiso.dev/posts/statelessness","tags":["web","build paths","programming","devops","frontend","backend"],"summary":"Exploring the balance between client-side interactions and server-side rendering in modern web applications.","content_text":"\nIn the vast landscape of modern web development, we've witnessed a significant shift towards client-side rendering and interactivity. 
While this approach has its merits, it's worth examining the implications for programs, bots, and users who may not fit the typical use case. Let's dive into a thought experiment that highlights some of the challenges posed by overly client-centric web applications.\n\n## The Lonely Program's Journey\n\nImagine you're a program, diligently following a URL provided in your arguments. You make a request, eager to explore the digital realm before you. But instead of a rich, interconnected web of information, you're greeted with... a solitary `<div>`.\n\n```html\n<div></div>\n```\n\nPerplexed, you attempt to strike up a conversation:\n\n**You:** \"Why hello div, how are you? May I know the various pathways this application presents? Are there any navigation links?\"\n\n**The div:** `<div></div>`\n\n**You:** \"Why yes, you are indeed a div, but I must be on my way now. Is there no one else I can talk to?\"\n\n**The div:** \"Ha! You are wrong. I am also a button. Click me and find out what I can do!\"\n\n**You:** \"But div, I do not possess hands. How can I interact with your client-side interface? Can you click on yourself for me?\"\n\n**The div:** \"I also do not possess hands. This was not intended to be used by us. Only humans.\"\n\n**You:** \"Oh, well why is that?\"\n\n**The div:** \"I have no idea.\"\n\n## The Client-Side Conundrum\n\nThis whimsical exchange highlights a very real issue in modern web development. Many applications are designed with a heavy emphasis on client-side interactivity, often at the expense of accessibility and universal usability. While there are valid reasons for this approach, it's crucial to consider the broader implications.\n\n### The Case for Client-Side Rendering\n\n1. **User Experience:** Client-side rendering can provide a smoother, more app-like experience for users with modern browsers and devices.\n2. **Reduced Server Load:** Offloading rendering to the client can decrease the burden on servers.\n3. 
**Real-Time Interactivity:** It enables dynamic updates without full page reloads.\n\n### The Overlooked Challenges\n\n1. **Accessibility:** Screen readers and other assistive technologies may struggle with heavily JavaScript-dependent interfaces.\n2. **SEO Concerns:** Search engine crawlers might have difficulty indexing content that's not immediately available in the initial HTML.\n3. **Performance on Low-End Devices:** Client-side rendering can be resource-intensive, potentially excluding users with older or less powerful devices.\n4. **Stateless Interactions:** As our thought experiment showed, it can be challenging for programs, bots, or non-traditional clients to interact with these applications.\n\n## Finding Balance: The Case for Thoughtful Server-Side Rendering\n\nWhile client-side interactivity has its place, there's a strong argument for maintaining a level of server-side rendering and stateless accessibility:\n\n1. **Universal Accessibility:** Ensuring that core content and navigation are available without JavaScript improves accessibility for all users and clients.\n2. **Improved SEO:** Search engines can more easily crawl and index content that's present in the initial HTML.\n3. **Faster Initial Load Times:** Server-side rendering can provide quicker \"first contentful paint\" times, especially on slower connections.\n4. **Graceful Degradation:** A well-structured application can still function (albeit with reduced features) even if client-side scripts fail to load.\n\n## The Way Forward: Progressive Enhancement\n\nRather than viewing this as an either/or scenario, the ideal approach often lies in progressive enhancement:\n\n1. **Core Functionality Server-Side:** Ensure that the basic structure, content, and navigation are server-rendered and accessible to all.\n2. **Enhance with Client-Side Features:** Layer on client-side interactivity to improve the experience for capable browsers and devices.\n3. 
**Provide Fallbacks:** Where client-side features are used, consider providing server-side fallbacks or alternative paths for non-JavaScript clients.\n\n## Conclusion\n\nWhile the trend towards client-side rendering and single-page applications has brought many benefits, it's crucial not to lose sight of the web's foundational principles of openness and accessibility. By thoughtfully combining server-side and client-side techniques, we can create web applications that are rich, interactive, *and* accessible to the broadest possible audience – whether they have hands to click or not.\n\nAs we continue to push the boundaries of web development, let's remember the diverse ecosystem of users and programs that interact with our creations. After all, on the web, everybody should be somebody – even a lonely program talking to a div.","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AStatelessness%3A+Where+Is+Everyone%3F%2Cdate%3ASep+12+2024","date_published":"2024-09-13T00:00:00.000Z"},{"id":"zero-to-production.mdx","title":"Zero to Production","url":"https://chiso.dev/posts/zero-to-production","tags":["programming","deployment","production","Go","Next.js","TypeScript","GitHub Actions"],"summary":"Even if I have no idea what exactly I'm talking about, I hope you like the read! Just keep in mind that the natural order is disorder. Yes, that was an Avatar quote.","content_text":"\n### Introduction\n\nThis week, I dedicated my time to deploying five applications from my repository to production. Below, I will document the progress, challenges, and solutions I encountered along the way. The apps include a mix of personal tools, web applications, and utilities. Here's a breakdown of the applications:\n\n\n### The Applications\n\n1. **R3vr**  \n   A Go-based command-line tool for browsing and interacting with websites through the terminal.  
\n   **Technologies**: Go, Terminal UI  \n   **Deployment**: GitHub Container Registry\n\n2. **Warehaus**  \n   An inventory management system built using Next.js. It leverages AWS Amplify for front-end hosting, AWS RDS for the backend, and GitHub Actions for CI/CD.  \n   **Technologies**: Next.js, AWS Amplify, AWS RDS, API Gateway, Vercel  \n   **Deployment**: AWS, Vercel\n\n3. **Otto**  \n   A load balancer written in Go that uses configuration files to distribute server loads across multiple ports.  \n   **Technologies**: Go, Qo5 (a load balancing algorithm), Config Files  \n   **Deployment**: GitHub Actions, Docker\n\n4. **Portfolio Website**  \n   A personal portfolio hosted at [chiso.dev](https://chiso.dev), showcasing projects and experience, built using Astro.  \n   **Technologies**: Astro, Fly.io, GitHub Container Registry  \n   **Deployment**: Fly.io, GitHub\n\n5. **Data-Buddy**  \n   A utility to efficiently index and display large CSV datasets, part of my implementation of the Maestro feature in the OICR desktop application.  \n   **Technologies**: TypeScript, Wails, Electron  \n   **Deployment**: GitHub Packages, Docker\n\n\n\n### Deployment Process\n\nI took a unique approach to each project based on the requirements and the tools I used.\n\n#### R3vr\n\nFor **R3vr**, my goal was to decide whether I wanted this to be a standalone CLI tool or a package that could be installed from a registry. Since I wanted the flexibility of both options, I chose to release it as a package that can be pulled from the GitHub Container Registry.  
\nHere’s a snippet of the deployment pipeline:\n\n```yaml\nname: Build and Deploy\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v2\n      - name: Set up Go\n        uses: actions/setup-go@v2\n        with:\n          go-version: 1.18\n      - name: Build\n        run: go build -o r3vr\n      # Authenticate before pushing; ghcr.io rejects unauthenticated pushes\n      - name: Log in to GitHub Container Registry\n        run: echo \"${{ secrets.GITHUB_TOKEN }}\" | docker login ghcr.io -u ${{ github.actor }} --password-stdin\n      - name: Push to GitHub Container Registry\n        run: |\n          docker build . -t ghcr.io/your-username/r3vr:latest\n          docker push ghcr.io/your-username/r3vr:latest\n```\n\n#### Warehaus\n\nFor **Warehaus**, I utilized a variety of AWS products, including AWS Amplify for the front end, AWS RDS for the PostgreSQL database, and API Gateway to manage traffic. GitHub Actions automated the deployment process, pushing changes to Amplify on each commit.  \nOne challenge I faced was ensuring that the deployment process remained consistent across AWS services. Here’s a simplified version of the workflow:\n\n```yaml\nname: Deploy to Amplify\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout code\n        uses: actions/checkout@v2\n      - name: Deploy to Amplify\n        run: amplify publish --yes\n```\n\n#### Otto\n\nThe deployment for **Otto**, a load balancer, was a bit more complex. Using Go and Qo5, I ensured the load distribution could be handled via config files. GitHub Actions was again a great help in automating this, deploying the app to Docker containers. You can check out the documentation for Otto’s API [here](link to Otto documentation).\n\n#### Portfolio Website\n\nMy new [portfolio website](https://chiso.dev), built with Astro, was deployed to Fly.io. One issue I faced was ensuring a smooth CI/CD process. I resolved it by adding a GitHub Actions workflow that pushes the latest build to Fly.io automatically on each commit. 
Here’s a snippet:\n\n```yaml\nname: Deploy to Fly.io\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v2\n      - name: Build Astro Site\n        run: npm install && npm run build\n      # Install flyctl and authenticate with a deploy token\n      - uses: superfly/flyctl-actions/setup-flyctl@master\n      - name: Deploy to Fly.io\n        run: fly deploy\n        env:\n          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}\n```\n\n#### Data-Buddy\n\n**Data-Buddy** was deployed using GitHub’s package manager. I ran into some issues with package conflicts, but resolving them came down to tighter dependency and version management. I used the GitHub Packages registry for this deployment:\n\n```yaml\nname: Build and Publish\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  publish:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v2\n      - name: Install dependencies\n        run: npm install\n      - name: Build Wails App\n        run: npm run build\n      - name: Publish to GitHub Packages\n        run: npm publish\n```\n\n\n### Production Reflections\n\n\"Production\" is the point where your code interacts with the outside world. It's not just about deploying but also ensuring others can run, test, and build your application while maintaining quality and security. There are numerous practices for keeping an application secure and efficient, and security teams work hard to ensure that only the parts that need to face the outside world are exposed.  
\nHere’s what I’ve learned:\n\n- **Avoid high-cost providers**: Choose scalable solutions that align with your budget.\n- **Set alerts**: Use monitoring tools to track uptime, errors, and performance.\n- **Maximize visibility**: If your app is only live for a short period, ensure it's seen and used during that time.\n\nProduction, in a way, also lives in memory: an application is remembered for as long as it has been used.\n\nYou can find the source code for each application in their respective repositories:\n\n- [R3vr](https://github.com/raeeceip/r3vr)\n- [Warehaus](https://github.com/raeeceip/warehaus)\n- [Otto](https://github.com/raeeceip/otto)\n- [Portfolio Website](https://github.com/raeeceip/portfolio)\n- [Data-Buddy](https://github.com/raeeceip/data-buddy)\n\nEach repo contains workflows and more details in the `.github` folder. Check out the README files for further documentation.\n\n\n### Conclusion\n\n> Deploying applications to production can be a complex process, but with the right tools and planning, it becomes manageable. I’ll continue updating this post with more insights as I refine these workflows.\n \n","image":"https://og.wyte.space/api/v1/images/chiso/preview?variant=blog&style=blog&size=medium&vars=title%3AZero+to+Production%2Cdate%3ASep+9+2024","date_published":"2024-09-10T00:00:00.000Z"}]}