reaper's rants, notes and stuff — https://reaper.is

Getting better at development
https://reaper.is/writing/06012020-Getting-better-at-development.html

First, as Samuel L Jackson would say, Happy New Year, Mother... !

Now, there are about a million posts on this topic, and they share a lot of good information because they're written by developers far more skilled than I am.

I probably can't add much value on top of those, but I'm still going to give it a try, so bear with me.

Things that I think are important

  • Algorithms (No shit!)
  • Understanding of the actual language
  • Your ability to humble down
  • Learn by Teaching so you can understand what you actually learned (Sorry, What?)

Algorithms

Pretty self-explanatory, and probably hyped enough in a world where everyone who wants to join FAANG (Facebook, Apple, Amazon, Netflix, Google) or similar companies already breaks their head over problem solving and algorithm practice.

Which is good and should be done, but not just for FAANG. Do it for your own goddamn improvement: you might never want to join any of those companies, but getting better at algorithms will help you solve problems in real-life applications quite a bit.

A very simple use case:

You're building a cab booking app. You create an order and start a payment through something like Braintree or Stripe, but a second after the payment intent fires, the user decides to cancel the order. Guess what: you now have a race condition. The payment gateway might win, or the cancel request might win. Either way, you have two dependent actions running in parallel, and that leads to junk data or complete failure.

The Client Sided Solution

The client can hide the cancel button once the payment has been initiated, which works and is fine, no big issues, but we should've thought of a better solution to start with. A user force-killing the app could still create issues for other dependent processes (a rare scenario, so let's ignore it for now).

The Better Solution

Queues!

Most CS students will already know where I'm going with this, but for the self-taught humans: you add these requests to queues, or some implementation of a channeled queue where you control the concurrency of certain categories of requests to avoid processing them in parallel. Think Redis + rsmq, Apache Kafka, and so on.

You have worker instances on the lookout for such requests, and they complete one request before starting the next. If a transaction was initialised, wait for it to complete, then cancel it and refund accordingly. This works because even if the client app crashes or is killed by the user, your queue isn't botched and is processed as intended.
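To make that concrete, here's a minimal, hypothetical sketch of the idea in JavaScript: a per-order FIFO so the payment and the cancel for the same order can never run in parallel. processPayment and processCancel are stand-ins for the real gateway calls, and a real system would back the queue with Redis + rsmq or Kafka rather than an in-process promise chain:

```javascript
// Hypothetical sketch: serialize dependent actions per order so a
// payment and a cancel can never run in parallel. A real system would
// back this with Redis + rsmq or Kafka instead of in-process promises.
const queues = new Map(); // orderId -> tail of a promise chain (a FIFO)

function enqueue(orderId, task) {
  const tail = queues.get(orderId) ?? Promise.resolve();
  const next = tail.then(task);
  queues.set(orderId, next);
  return next;
}

// Stub actions standing in for the real payment gateway calls.
const log = [];
const processPayment = async (id) => log.push(`paid:${id}`);
const processCancel = async (id) => log.push(`cancelled:${id}`);

// Even if both requests are fired at almost the same moment,
// the cancel only runs after the payment has settled.
enqueue("order-42", () => processPayment("order-42"));
enqueue("order-42", () => processCancel("order-42")).then(() => {
  console.log(log); // -> [ 'paid:order-42', 'cancelled:order-42' ]
});
```

The same shape scales out: the Map key becomes the queue or channel name, and worker instances pull from it instead of a promise chain.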

I can go into more detail, but for now the point is this: you need to know which problems have already been solved at a base level so you can find the advanced implementations and use them to build features more easily. That's basically what algorithms are: solutions that already exist in the wild that you just need to understand and implement, or use a tool that takes care of the implementation for you.

Understanding the Language you use

Now, I'm not going to argue over which language is the best, they were all built for specific use cases and have been adapted for various use cases over the years.

The best language is the one you already know, though that doesn't mean you should argue with the internet trying to prove that whatever language you know is the best.

Learn as many programming languages as you can to see the differences that help you choose a language for a given scenario; Drew wrote a good overview of this over at his blog.

Yes, JS is used by SpaceX, but because it's easier and cheaper to find web devs, not because it was the most efficient choice. That statement will fire up all the JS devs around the globe, so I'm not going into the depths of it for now.

To the actual point, what do I mean by understand the language?

Nah, not the syntax, nor the keywords. What you need to understand is how the language evaluates what you instruct it to do; syntax is just how you give the instruction. For example:

function one(obj) {
  return obj.one
}

or

  type exampleType struct {
    one bool
  }

  func one(obj *exampleType) bool {
    return obj.one
  }

or

same thing in Python, C, etc. I'm not typing each and every snippet; I don't have to show off the number of languages I know.

Back to the explanation: both snippets try to access one from an object or a struct, and both will fail if obj doesn't exist. In JS, undefined can be passed in since there's no type checking; and even though Go has type checking, you'll still fail at runtime because the pointer can be nil. Either way, I have a runtime issue. This has nothing to do with syntax, nor is it something people will tell you to memorise; you add it to your set of checks as you grow into using the language.

The fixed version of the above is a simple check on the existence of obj. The JS version is short-circuited with AND and OR, though you can write it in a simpler, more readable if (obj) {} fashion as well; the Go version needs an explicit nil check, since pointers aren't booleans in Go.

function one(obj) {
  return (obj && obj.one) || false
}

or

  type exampleType struct {
    one bool
  }

  func one(obj *exampleType) bool {
    if obj == nil {
      return false
    }
    return obj.one
  }
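If you're on a modern JS runtime, the same guard can also be written with ES2020's optional chaining and nullish coalescing, which skips the && / || dance entirely:

```javascript
// Same guard as the JS snippet above, using ES2020 optional chaining:
// obj?.one evaluates to undefined (instead of throwing) when obj is
// null or undefined, and ?? substitutes the fallback in that case.
function one(obj) {
  return obj?.one ?? false;
}

console.log(one({ one: true })); // -> true
console.log(one(undefined));     // -> false
```

Note that ?? only falls back on null/undefined, so a legitimate false stored in obj.one still comes through as false, same as the original.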

Now, understanding the language can be done proactively (learn and read about the interpreter and/or compiler while learning the syntax) or reactively (learn by seeing the code break).

I'd recommend doing both: read about it, then deliberately go break the code so the error gets set up in your brain (don't do this in production!).

That's it for understanding the language. Yeah, simple examples; we don't have to go too deep to understand the importance, though if you need more tips, you're free to hit me up over email.

Ability to Humble Down

I'm going to shout right now so, one sec.

IT'S OKAY TO ASK FOR HELP!

Keep reading the above till you understand it.

As a programmer, you're always going to jump in on something new, something really old, or something that's irritating before you even get to it.

I've tried learning quite a few languages and been successful with a lot of them, but then there's Rust. I had a hard time understanding that language, gave up on it twice, and got back on it again and again till I at least understood its core. I still haven't written anything useful in Rust (then again, I haven't written anything useful in any language...).

Readers will already know that I've been using Go extensively. But then, how do I know my code is efficient? Who reviews it?

Do I have a mentor? Nope. Do I ask random strangers to review my code? Yes.

It's a simple thing: go to Reddit and request a review. Someone might be kind enough to actually review your codebase. Chances are they're new as well, but now you have the things they learnt added to your own knowledge. Win-win.

If I go, "Nah, I'm way too good at JS, I don't need anyone to help me learn this fancy new language, I can handle it", I'm literally pushing away all the free knowledge I could've gained. I'm not kidding: someone reviewed my commitlog codebase, which is written in Go, and also wrote about the mistakes I made; you can read it here.

There are some really simple things I messed up, but then I'm not yet settled into the language's conventions. I wasn't smart enough to check existing Go repositories, read through them, and see how things were structured, like I did for my JS/TS projects.

In my defence, it was a POC (proof of concept) implementation and was written accordingly. Still, I got to learn a lot, such as how it's better to structure one package across multiple files instead of creating a package out of every folder, which is a habit I carried over from JS projects.

Long story short, BE HUMBLE! and learn from wherever and whoever you can!

Learning by Teaching

There's a great deal of psychological research and plenty of articles you can find on why this works; I'm just going to give you the gist of it.

If you've learned something and you can teach it to someone else, i.e. explain it well to another person, then you've successfully learnt and understood that concept.

This is basically the premise of the rubber duck debugging method: if you can explain your code to the duck, you understand your code, and you've possibly even found the bug while doing so.

If you're explaining something to someone and get stuck at a concept you can't explain, then guess what: you didn't understand it, so go back and try again!

That's about it for now,

Adios!

Mon, 06 Jan 2021 00:00:00 +0000

Simple Design Tips for Developers
https://reaper.is/writing/07-09-2020-Simple-Design-Tips-for-Developers.html

I've got a very small list of hobbies: reading, writing, coding, gaming, and designing. Coding is my ikigai (a Japanese concept meaning "a reason for being"). Designing, on the other hand, was something I picked up to make cool monograms for myself, but while learning it I did learn a few things that I think you, as a developer, can adopt pretty easily without a lot of design knowledge.

There's been numerous posts about this already and probably better ones but let's give it a try, shall we?

This post uses https://reaper.is and https://tillwhen.barelyhuman.dev as references for a lot of things, because the points I mention are implemented on both. TillWhen still has some UI inconsistencies, since it has other UI kits involved and I can't just throw them out without getting the functionality done first.

There are basically 3 concepts I want to cover here: spacing, typography, and layout and alignment.

Spacing

If you go through https://reaper.is for a brief second, you can see everything follows a consistent spacing standard. This is what I call spacing harmony; it's not a new concept, just something very simple that people forget.

If you observe, there's the same amount of vertical and horizontal spacing between the 2 rows of buttons, and that's not the only place where this is used. Point being, picking a base spacing amount and using its multiples to decide all spacing works wonders.

8px is my magic number for spacing, and its multiples are used around the website. Some people start off with 4px since it works well on mobile devices, but I'd recommend using device-specific spacing measurements when working with mobile screens to avoid making the spacing too small on HiDPI devices.
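One small trick for sticking to the multiples is deriving every spacing value from the base instead of hand-typing pixel numbers. A hypothetical helper sketch (the 8px base is just my number; swap in your own):

```javascript
// Hypothetical spacing helper: every value is a multiple of one base,
// so the layout can't drift out of the 8px rhythm.
const BASE = 8; // px

const space = (n) => `${n * BASE}px`;

console.log(space(1)); // -> 8px
console.log(space(3)); // -> 24px

// e.g. element.style.padding = space(2); element.style.marginBottom = space(4);
```

The same idea works as CSS custom properties or as a theme token in whatever UI library you use.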

Why is it important ?

Spacing and typography create a sense of hierarchy between elements: typography tells you what to read first, while spacing tells you which items belong together and which don't. This gives you a general idea of what the relationship between elements is.

A good example is the space that splits the navbar from the page title; the page title and subtitle then have a minimal gap to show they're grouped, followed by a massive gap between them and the data cards. That makes a clear separation between elements that belong together and elements that are subsets.

Typography

I've already mentioned the role typography plays in design: if you get good with typography and spacing, you don't really need a lot of design knowledge to get a good-looking design up and ready.

I'm by no means a master of this, but I've been experimenting with it for a while now, and you're reading a blog that's been testing this very approach.

Let me guess: you saw the logo first, the name next, then the contact button, and the subtitle last? If that's the order you had, then I was successful in maintaining a good visual hierarchy. If not, then...

Getting control over what the user reads and how they flow through your content makes it easier for you to decide what you want them to do.

This is as helpful in apps as it is on blogs.

Layout and Alignment

This is probably the only tip that almost all frontend developers already know. The thing about Layouts is that one layout never works for everyone and everything.

But, a general rule of thumb is to make sure everything lines up perfectly. (Thanks, captain obvious.)

Though that doesn't change the fact that almost every design we see today is over-designed, with each element following its own sense of alignment. (You can preach about minimalism later...)

Forget others: TillWhen's login email card is centered, with text left-aligned and justified as needed, plus a huge center-aligned button (hypocrite much, reaper?).

These design choices can change based on what pleases you aesthetically, but following a grid makes it all easy, and a good grid system always comes in handy if used properly. I've seen people nest rows inside rows inside rows with Bootstrap. Don't! Just don't! Learn to use flexbox instead.

Layouts help you get prototypes out quicker, which is probably why CSS libraries like Bulma and Bootstrap have been drop-in defaults for most web frontend projects from the dawn of their existence, even today.

I'm not sure there's anything else I can add to this.

Adios!

Mon, 07 Sep 2020 00:00:00 +0000

Updates - 8th and 9th May
https://reaper.is/writing/09052021-Updates---8th-and-9th-May.html

First off,

Yeah, I haven't posted for a while. The reason is that it's hard to work and do everything from a Raspberry Pi, and I was stuck with office work.

Second,

I announced a new app on LinkedIn that was supposed to be launched by now, but due to the unforeseen circumstance of my MacBook dying on me, I had to redo a lot of its structural changes. I expect it to be done by the end of this month. Hopefully.

I'll talk more about it.

Music

Fixed a small hiccup in Music that was causing tracks to break and stop during playback, which is unacceptable for a music player.

I think I should add a proper backend to this app and allow people to create accounts, but I'm not sure anyone else even uses it, so I'm going to keep it as a single-queue player.

TillWhen

As promised, the Project Deadlines addition went live during the past week, though I did have a bit of a testing blockage due to other work. This update includes the ability to modify project details as well.

Small Additions

coming-soon: a simple template I created for Taco's teaser site. As always, very minimal and to the point; it includes a countdown if you need one.

grator: a simple CLI runner for SQL migration files. It was built to work with Hasura's migration files, but those are just a set of SQL queries run in order, so you can use it for pretty much anything. It can also be extended to build other tools if you wish; it uses knex internally, so you could literally build the tool yourself.

Taco

Lastly, the product I plan to complete this month. It's a very simple project that shouldn't really take a month, but I only work on these things on weekends, so it'll be the end of May before it's ready.

Tasks and Collaborations / Taco is going to be the simple project management app I wish the other management apps were. It will definitely take inspiration from existing apps, because all the research into task and project organisation leads down the same path, but there are certain minor details I wish the others had, and they'll get added here since I have the freedom to decide what stays and what doesn't.

As always, it's being built for my personal use and requirements, which might not match everyone's, but do give it a try when possible.

It's in very early development; I don't even have a staging server for it yet. I literally just finished designing the base website layout and how the pages will look.

That's all the updates I have for you right now.

Adios!

Mon, 09 May 2021 00:00:00 +0000

Deciding on Technologies
https://reaper.is/writing/09092020-Deciding-on-Web-Technologies.html

NOTE: This post won't give you options to choose from, but instead teaches you how to choose.

In an ideal world, there'd be one set standard we could follow for building apps, but there's no such thing as an ideal world in the first place. We're in an era where a new frontend framework arises every few days; if you monitor GitHub like I do, you'll see one pop up every few hours.

So then, how do you decide on what to use and what not to use?

The Comfortable one

No bullshitry here: the best answer is to go with the one you're already comfortable with. If I had to instantly select a framework to build a frontend app, I'd choose Next.js without a second thought. I'd obviously have to make hacky changes later in development to patch up stuff that doesn't work well with the framework, but most of the time you can just go with this option.

You can stop reading here if you already have a framework or library in mind at this point. The rest of the post focuses on understanding requirements and choosing the next most comfortable option.

You can use jQuery with .NET, or PHP without Laravel, for all I care, as long as you can get the work done without tearing your hair out.

But!

There's always a time when something becomes obsolete, either because the community says so or because the maintainers gave up. I'd know: I gave up jQuery for React because someone told me to, then got into RiotJS because I thought it was interesting, and then came to Vue.js because Angular 1 was a decent framework and Vue inherits a lot of its templating aspects. But then Angular 2+... let's not talk about that. The same goes for backend options: I've shifted from Python's Flask to Django to Node + Express to LoopBack to, shameless plug, ftrouter. I was comfortable with LoopBack 3, but then it hit EOL, and now I have my own because I went crazy.

How do you decide then?

There are 3 major factors:

Nature of the Project

If this is a production project, skip ahead to Requirements, as that dictates the choice in this case. If it's a personal project, something you want to toy around with, you can literally pick any new tech you want to try or that looks interesting and go ahead with it; the other points might not even make much of a difference.

Requirements

Let's get on how these would change my decision.

We need a portable binary that has both frontend and server running from it.

With that as a requirement, my obvious go-to would be Go, with or without a frontend framework. But what if this requirement comes in after you've started building the base with Node and React, and you've only built the Node side while the React part hasn't started? I'd switch to Vue or Svelte, because the output SPA HTML is smaller, which reduces my overall binary size, and that's always good.

We need it built in the next 4 days for a prototype

Now we'd use something like Angular, because it's easy to find resources online from which you can copy-paste templates and set things up pretty quickly, and it's easy to scale on as well, so that's that.

Point being, your requirements and expected output dictate what you use. If I had an absurd requirement like

It should be crazy fast! like I don't care the time you take but it should be epic fast

At this point, we all know WebAssembly for computation, and maybe even rendering (if I have the time), would be a good selection. Or you can go old school and use servers to render plain HTML, with form actions for HTTP requests and validations. AdonisJS, RoR, Django, and so on are the easier ways to get this done, or maybe a custom Go server responding with HTML via some templating library. Point is, it depends.

Team

While requirements dictate a good percentage of the selection, and if you're working alone the above section should be enough, when teams are involved you've got to consider how well the team already knows the selected tech. If you suddenly decide the whole team is going to work in Go or Rust just because it's the cool new thing, you'll be hindering development speed and maybe even developer motivation, because there's still a deadline the devs need to hit.

Choose something the team already knows and grow with it. You can change smaller things, like using Koa instead of Express for better async/await support, but don't decide to change the language without actually consulting the team.

Still, there are always cases where you have to make the shift because a needed requirement isn't available in the current tech stack. For example, there's no official Stripe library for Flutter, but there's a slightly more dependable one for React Native, so I'd go with React Native on this one.

In such cases, take into consideration how much of a shift it is for everyone. You might adapt to the new structure with no issues, but can everyone? Weigh that, then make the decision to move. If the learning curve is just too high, then maybe change the deadlines, or you'll end up with some frustrated devs.

To all developers reading this: there's no harm in learning more languages. Programming languages are a skillset, and you shouldn't be scared to learn a new one; it opens a lot of doors. Have one primary language you try to master, but keep others on the side for cases where that knowledge comes in handy. Knowledge of Haskell and Elm has helped me make apps a little more bulletproof and my code better abstracted while staying composable.

Adios!

Mon, 09 Sep 2020 00:00:00 +0000

Simplifying Dev for the rest of the world
https://reaper.is/writing/10012021-dev-in-simple-terms.html

I've had a few instances where people asked what I do and what my role means. Writing a post for just that one thing would be short and useless, so we're going to go over a few things that are common among devs/engineers, and I'll try to explain them in the simplest language I can.

I'll give you an overview of what I do and I'll get to the remaining details.

As of writing this post, I'm a Principal Developer at a startup called Fountane. I take care of deciding the architecture, tech stack (no, we don't have the same stack for every project), toolchains, process pipelines, automations, and CI/CD, which are then used by the devs to write and build apps based on client requirements.

Software and Software Engineers

Now, these are really common terms and most people understand the gist, so I won't go into the depth of everything a software engineer does.

Software: a packaged set of digital instructions that runs on various platforms; in other words, it's just an app. The platform might be desktop (PC, Mac), embedded (chipsets, microwaves, fridges, etc.), mobile (Android, iOS), and so on. I say packaged since it's a combination of a lot of instructions that are given to the underlying platform, and sometimes to a layer that talks to the underlying platform.

Software Engineers: the amazing humans who take responsibility for writing these instructions, and who often also look for optimal ways to make those instructions best use the hardware.

Software engineers can be divided into a lot of roles based on a lot of variables; the most common classification, which you'll find almost everywhere, is based on the platform the engineer works on.

So examples would be

  • Embedded Systems Software Engineer
  • iOS Developer
  • Android Developer
  • Web Developer
  • IoT Developer (sometimes falls under embedded development)

After this, there's another variable that makes these roles a little more specific, where people start adding details of what they really want. A common add-on to the above is the language the person works with.

For example, the iOS Developer and Android Developer roles can be split into

  • iOS Swift Developer
  • iOS ObjectiveC Developer
  • Android Java Developer
  • Android Kotlin Developer

A lot of the time, the same person has both languages in their skillset, but hiring companies specify the language when the codebase already exists and they're looking for a language-specific role. The actual engineer, on the other hand, might just put "iOS Developer" in their resume/profile.

Languages

Since I brought up languages in the previous point, I'll explain what they are.

These are basically how the instructions are written for the software to be created.

Developers specialise in these languages and take pride in them. The technical term is programming languages: Python, JavaScript, Go, Rust, C, C++, Objective-C, Fortran, LISP, Crystal, and D are a few examples. There are over 300 (maybe more) such programming languages (and still none were built specifically for desktop UI development...).

Going further, we can classify some of these into even smaller parts by adding which side of the work they do. A general classification is whether the developer works on "Business Logic" (BLOC) or on the User Interface (UI).

A BLOC developer works on server-side code, logic shared between various systems, and similar things. These are the folks who work with algorithms a lot; they're also called backend/back-office developers.

A UI developer works on making sure what the user sees is functional, and in certain cases wires the shared code developed by the BLOC dev into the interface to make sure the app follows the requirements. They're generally called UI developers or, in certain cases, frontend developers.

In an ideal case, these 2 should be enough to get an app out in the market (though this can vary a lot based on a lot of things).

Oh, and examples of these follow the pattern

  • Platform - BLOC(Backend/Back-office) | UI(Frontend) - Framework / Language

  • Web Developer - Frontend - React / Javascript

  • Web Developer - Frontend - Angular / Javascript

  • Web Developer - Frontend - Vue / Javascript

  • Web Developer - Backend - Express / Node

  • Web Developer - Backend - Buffalo / Golang

  • iOS Developer - Frontend - SwiftUI / Swift

  • iOS Developer - Frontend

Now, obviously there are people who enjoy doing both BLOC and UI, and these people are called Full Stack Developers, though a full stack developer is never limited to just those 2 things. More often than not, they're expected to know how the app's architecture works and how to improve it.

While today a full stack dev is often limited to a single language, and people are proud of that, I remember mentioning in the previous post: don't limit your skillset! If I were a full stack dev, I'd be learning every language there is, working on projects handling both frontend and backend, and understanding architecture as well.

Since we're done with the classification part, let's get to the general roles that are assigned to someone in a company.

  • Junior Dev / The Trainee
  • Senior Dev / The Guide
  • Lead Dev / The Mentor
  • Principal Dev / The Overlord
  • CTO / The Elder

The names beside the roles are just something I've come up with; I'll try explaining what they are and why.

Junior Dev

This person is here to learn. You can have junior devs who've been in the industry for over 10 years and prefer having mentors and keep learning (a very good thing to do!), but yeah, these are the trainees. They almost always like having a mentor above them, and they enjoy continually learning new things and growing.

A developer should always keep this mentality, no matter what role they actually get in a company. I've made quite a few attempts to stay in this role, but I don't know, I always get pushed up...

Senior Dev

This person has made enough fuck-ups in their dev life to understand what to look for and where to look for solutions, and can guide you on how to approach a problem. Again, these can be people who just started development or people who've been doing it for 10+ years; there's no limit to how good you can get at a skill, and it's different for everyone.

Lead Dev

This person's role is to make sure the other two can find resources and docs easily, and they act as the person who removes road blocks. Their work is to manage your work and review it; 90% of the time they act as an indirect quality-check engineer, and they're the person both juniors and seniors go to when trying to understand the architectural decisions that were made and how the code should or could be structured to perform better. 8 times out of 10, this is where the full stack developer ends up after learning how to handle both people and code fuck-ups.

Principal Dev

They're there with the development from start to finish; they take care of architecture, the development toolchain, dev processes, automations, and programming standards/principles, and they oversee and plan stuff.

An interested developer grows toward this mentality every day without even knowing it. At this point, you're the person who researches a lot more than they implement (unless you're at a startup, where you might wear more than one hat anyway). This person has plans for plans that act as backups for the backup of a backup, while being a backup.

I tagged this role The Overlord because that's just what it is: plans everything, has people acting on those plans, and doesn't get into the field unless it's absolutely necessary.

I'm nowhere close to a good Principal Dev yet, but we're getting there.

CTO - Chief Technology Officer

The Elder, a.k.a. the know-it-all. That's all I'm going to put here; there's nothing more to say about the CTO.

This is one huge post....

I've probably missed a few role specifics, so don't consider this a hard limit on what you can or cannot do; a passionate developer can never be limited to a single role anyway.

You can be a junior working toward becoming a senior, or a junior who has moments where you help a principal developer make decisions. It's always possible; not everyone knows everything, even if I just said the CTO is supposed to be the know-it-all.

That's it for now,

Adios!

Mon, 10 Jan 2021 00:00:00 +0000

Moving away from web apps - Story of Another experiment
https://reaper.is/writing/10122020-Moving-away-from-web-apps---Story-of-Another-experiment.html

The world is moving toward making everything available via the browser, and I'm moving the other way: I want to build desktop apps.

I'm not the only one that thinks this way.

Also, it's always easier to build apps that don't need a web server to exist and can handle data and work in and around the system. That makes it easier to keep personal data safe, and thus there are fewer things for the developer to worry about. Again, just my opinion.

Yeah okay, What experiment are we talking about?

So I've been crying and ranting for a really long time now about Electron being heavy, apps being RAM-hungry, and my Mac giving up its life trying to keep up with everything I multitask with.

On the other hand, I've mentioned learning Rust and Go quite a few times as well. Quick update: I haven't been working on Rust much, but I've definitely tried out a little more Go, and this post is basically going to lead to that and why it was the chosen solution.

Reaper and Laziness

I'm quite a lazy person, which is why sitting in one place while typing lines and lines of code every day is something I enjoy, but this laziness can get to your head, and then you start using your head more than you use your body.

Example: thinking of every possible way to get the remote from the dining table without moving anything but your hand and the broomstick that was lying on the floor in front of you.

A similar situation led to this experiment; this time the distance was a full king-sized bed, with a table on one side and my old phone connected to the charger on the other.

This is a daily setup: this phone is put on charge after my main device is charged overnight and is responsible for playing my spotify playlists over the day. The problem is, to change playlists or tracks I need to open it on the laptop or my main device, which is again in some corner of the bed because I don't like distractions when I'm working.

Last option? The open.spotify.com web app, but this page takes a good 300-400MB worth of RAM on Safari and close to 290-340MB on chrome, with the GPU helper using another 50-100MB, so back to 300-400MB of RAM usage. I could afford that if I wasn't running anything else on the system, but in my normal dev workflow there's a few code editors, a few docker containers, figma for design reference (which takes 1GB because of the amount of designs in that one window), and a few simulators, because I do too much React Native dev right now for office projects.

In summary,

  • RAM available: 8GB (ignoring the paging cache)
  • RAM used: 7.1GB (swap prevents the rest from being used)
  • Swap used: 1.5-3GB (based on which emulator/simulator is running)

At that point, opening another webpage that takes up 400MB worth of RAM is basically pushing the macbook to its limit, which in turn slows down everything else that's moved to swap. Certain electron apps don't like working from swap because they weren't built with that in mind, but that's on the devs and not electron.

So now, I either get up, walk to the other side of the bed and change the tracks, or find my main device and change it on that. Either way, it wastes time and I'm too lazy.

Spotify Lite!

The name is as original as it can get; that reminds me, I still need to put a disclaimer on the readme for this.

Back to the topic: so what did I build? A simple GUI with 4 buttons and a text label showing the currently playing track.

Oh, and a config screen that you'll see when you first open the app. It's available only for Mac right now because it was built in 5 hours; how much do you expect from me!?

Anyway, yeah, it's a mac client for now, but since both the framework and the language used can cross-compile, I might get linux and windows releases out soon as well, though I'd first sit and clean up my code; right now it's a disaster.

Close to what a 5-year-old would scribble all over the house: only he understands it. Same vibe with my code.

The basic principle behind building it is already explained above, but why build a desktop app over a web app that could've done the same thing in half the build time?

To Learn.

I wanted to learn go and get better at it. I always slow down the moment strict typing comes into the picture, and I'd like to change that. Yes, I've worked with TypeScript, but I just write any and escape from situations where I don't want to define the types, which is bad, both for code and speed, so to improve my speed with types involved, this was necessary.

The experience was good. I had a good 5 hours to spare since I woke up early; I did hit some segfaults because of my lack of knowledge of channels and goroutines, but I have a better understanding now, so it's all good.

Also, most web-app OAuth flows still don't have the PKCE protocol well implemented and need a redirect URL, which a desktop app can't really provide, so I ended up going with the usual OAuth2 flow, but the user has to set it up for themselves. I think that's fine since it adds to the security: your client token and secret are not stored on some external server but on your own device, and you can delete the app from your dashboard whenever you want to stop Spotify Lite from using it.

All teasing aside, I'll link you to the repo.

https://github.com/barelyhuman/spotify-lite-go

That's all for now, Adios!

]]>
Mon, 10 Dec 2020 00:00:00 +0000
https://reaper.is/writing/1112020-Update-October---2020.html Update October - 2020 https://reaper.is/writing/1112020-Update-October---2020.html Update October - 2020

Yeah, I haven't posted in quite some time and really don't have anything that I'd like you to know at this point, but this update is just a chore.

We'll be going over things regarding

  • TillWhen - Time Tracker
  • Pending - Kanban Board / To be Project Tracker
  • A bit about what's up with me

TillWhen

First time reader? TillWhen is a time tracker built as an alternative to the pricey time trackers out there. It would like to stay simple, unless someone requests some crazy cool feature, which I'm not sure I'll build or not; for now, it's simple.

What's the update? It now has a beta version of Tasks added to it, so you don't have to use another tool to manage your tasks per project. It's beta because I still haven't added project-based filters and a per-task time logger.

Project Based Filters

Simple filters that'll help you segregate tasks by project; a pretty simple feature, but I haven't pushed it to the live version yet.

Per Task Time Logger

So, there's a dilemma with this feature: should I add a time tracker per task, or just redirect the user to the time logger and follow the normal flow? The per-task option would allow users to run multiple tasks simultaneously, which would be nice, since I'm never actually doing just one task at a time, but the number of timers running on the page could slow things down on certain older browsers, and I wouldn't want that, so this feature is still under consideration.

There's no roadmap for TillWhen, but a much-awaited feature would be a bot you could integrate into slack and telegram, and that's something I plan on doing soon. The Telegram version is under tests and was quite easy to go about; I haven't built a slack bot before, so I'll have to go through that soon.

Pending

I doubt that anyone knows about the existence of Pending (now shut down), mainly because I never really talked about it, but it's a simple Kanban board at this point. The reason it's in this update is that Pending will be getting an overhaul soon.

It was meant to be browser-based storage, but the peer-to-peer data sharing would create friction for normal users, and that's something I didn't want. I mean, having people store their project data offline on their own system is fine, but it normally involves a team, and thus the requirement of a distributed/central server. I wanted to go the distributed route, but I'll start with a central architecture because it'll be easier to start with.

TLDR;

Pending is going to turn into a basecamp like central project board instead of being just a browser storage kanban board.

What's Up with You?

Not much actually. Got better at Go; going through the documentation of Gio to be able to build decent GUI apps with Go, and then maybe build something small to test the waters.

Also, planning to move towards building system-level tooling and get into kernels and build tools, so web development can be just a part of the skillset and not the whole skillset. Kinda tired of frontend development for now; really into CLI tools, and that's probably because I use the terminal more than I use GUI apps.

Experimenting with moving music to a CLI version so I don't have to open the browser, because browsers take a lot more RAM than needed.

A few apps that I switched to, to save some RAM.

  • Chrome to Safari (increased battery life...)
  • VS Code to Sublime (People use Vim without any plugins, it's not that hard to change back to something as fast as sublime)
  • Source Tree / Sublime Merge - I use them alternatively, normally have source tree open but sometimes I use sublime merge for bigger files. This was added to arsenal because I got rid of VS Code. Both Apps Combined don't use as much RAM.
  • Hyper to iTerm
  • Table Plus to Postico (not much of a memory usage difference, but I just don't like the bothersome pop-up from Table Plus every time I try to open a new tab; replace the existing tab instead of bugging the user?)

The search for a good cross-platform GUI has ended: Gio, as I mentioned above, but since I'm still learning, it'll take time to get used to. Another good option is V lang, which comes with a ui package that's still in alpha but decent for GUI apps; the language itself is pretty easy to learn, but I'm going to wait for the package to mature a bit more. For people who don't want to learn a new language, the options include libui bindings for your specific language; they're pretty limited in terms of usable widgets but can get the job done.

That's about it from me

Adios!

]]>
Mon, 01 Nov 2020 00:00:00 +0000
https://reaper.is/writing/12042021-This-Weekend-in-BarelyHuman-Dev-Labs---March-10th-and-11th.html This Weekend in BarelyHuman Dev Labs - April 10th and 11th https://reaper.is/writing/12042021-This-Weekend-in-BarelyHuman-Dev-Labs---March-10th-and-11th.html This Weekend in BarelyHuman Dev Labs - April 10th and 11th

Project Updates

statico

I mentioned that we had a static generator written specifically for reaper.is; all I did was pull it out into a separate repository, now under the name statico. While it can be used for basic markdown-to-html page generation while maintaining folder structure1, it's still a very simple generator and not as generic as it should be; it will be as the project progresses.

commitlog

There was a little progress on the release command addition to the cli. Not a lot, and no, it hasn't been pushed; the older release command is still in the codebase and, while functional, it doesn't increment the tag as I expected it to, so those cases are still being handled.

vscode-monoclean-theme

I started writing a vscode theme a long time back, just completed it this weekend, and also added a light variant. It's only been tested on Go and JS, so I'm not sure about other languages; while it shouldn't cause many issues in other languages/workflows, do raise an issue if you see one. You can find the install instructions and screenshots on github.

Life Updates

Wasn't able to post last week since there wasn't any progress; I've been sick and couldn't pick anything up. Somehow I've caught covid while staying home all day. Anyway, I don't see any direct symptoms, so I'm not that worried, but we'll see how it goes.

In terms of business setup, the TillWhen codebase needs to be cleaned, and I need a good project management workflow set up so I don't treat TillWhen like a side project and just abandon it, which will definitely happen if I don't set up a workflow forcing me to sit and work on it.

That's about it for now,

Adios!


  1. You can go through the reaper.im codebase to see what I mean by a simple structure ↩︎

]]>
Mon, 12 Apr 2021 00:00:00 +0000
https://reaper.is/writing/12102020-The-Fight-with-Ones-brain---My-Sleep-Solution.html The Fight with One's brain - My Solution to Sleep https://reaper.is/writing/12102020-The-Fight-with-Ones-brain---My-Sleep-Solution.html The Fight with One's brain - My Solution to Sleep

I've had problems sleeping for as long as I can remember. This has made it hard to keep my brain functioning properly and got me addicted to solutions like hot chocolate with caffeine and/or coffee to keep me up for as long as needed for work.

While this was getting me through the day with enough productivity, there are always times when I wish I could just fall asleep, but then I have a startup job that needs me to be there for a decent number of hours before I can give myself permission to fall on the bed.

That said, the past few months during the lockdown made it worse. You see, when I had to go to the office there was a fixed schedule I had to follow, and I'd be tired from the travelling, so I'd normally fall asleep somehow by 2-3 AM. But since there's no such physical effort required anymore and I literally just roll over my bed to reach my workstation, it's hardly tiring, even after 8-10 hours of continuous work.

Internet Solutions

The solutions I see all over the place are the likes of

  • Get away from your devices 30-40 mins before bed time
  • Listen to white noise or rain sounds
  • Drop the room's temperature to something you find comfortable (generally about 18°C)
  • Drink some kind of sleep inducing herb tea or something
  • And the final recommendation: melatonin (please consult a doctor first!)

While they all do work in one way or another, because to sleep you've got to get comfortable and your body needs to feel like it needs rest, the device and white-noise tips aren't what helped me. They might work for you, but they didn't work for me. I ended up trying things I used to do as a kid when I couldn't sleep.

My Attempted Solutions

These are things I tried that worked sometimes but not every time.

  • Arms Straight, head straight, look at the god damn fan till you fall asleep
  • Work out a bit just before I went to sleep

I haven't recorded the events of the day so can't really confirm why and when they work.

The One

Now this is probably the simplest and most obvious thing for most of you, but with a head like mine, constantly bombarded with memories, issues, possible bug fixes, project ideas, regrets, theoretical analyses and their results, it's kinda hard (no, I'm no smarty-pants, I just think a lot).

I applied something I do for getting rid of or getting into a habit, conscious control.

Let me explain.

When trying to learn something or get rid of a habit, you have to consciously tell yourself that you are to avoid or do this thing. If I'm learning a new language, I force myself to talk in that language. If learning a programming language, the same applies: force myself to build something with it.

You can't force yourself to sleep, though; the point of all the tips mentioned is to get yourself physically and mentally comfortable and calm. Yeah, I know, meditation is the best solution for that, but it doesn't always work for everyone.

For certain people, getting rid of the thoughts means going through them; for some it's just focusing on their breath; for some it's forgetting the present and living in the white noise. And while you can call what I do meditation, since I still follow the base principle of redirecting focus to something else, it's a little different from your conventional meditation.

The trick is to shift focus to a blank or shift your thoughts to plain nothingness, not my breath, not the story a meditation guru is narrating, nor trying to listen to white noise. Just blank out!

How though?

I don't know... I don't even think I can explain it properly but I'll give it a try.

Let's say I'm thinking about a project: the features I'd like, the issues it'll bring, the things that can go totally wrong, the amount of time it'd take for me to perfect it, the amount of research I have to do, and you can see the fall towards the negative. Normally at this point you go towards a "Forget it" and try imagining a blank, black view. There's nothing there: no project, no humans, no positive, no negative, nothing; and apparently this relaxes your body automatically.

There's no tension in the shoulders, the legs stop shaking, my fingers stop typing in the air (yes, I do this), but not on the first go, obviously.

My brain is notorious enough to take me back to something else, like how I fell off the stairs while trying to impress a girl, or how I botched a deployment and almost broke the entire code flow. It's all there in my memory and it keeps coming back, and I guess it happens to most of us, considering the amount of memes on this topic. All I have to do is repeat the process: go back to blank.

The amount of time you have to stay in the blank depends on how the day went. Too much on your mind? It'll take a little longer than 10-15 mins. Tiring day? You might just fall asleep on the first transition. It really differs, even for me, but it's better than not sleeping at all, or sleeping at 6 and waking up at 9 to start working again.

Adios.

]]>
Mon, 12 Oct 2020 00:00:00 +0000
https://reaper.is/writing/13012021-Commitlog---an-unneeded-utility.html commitlog - An unneeded changelog generator utility https://reaper.is/writing/13012021-Commitlog---an-unneeded-utility.html commitlog - An unneeded changelog generator utility

Ahoy people!

Another post, another project, another story as to why it was built.

We'll be talking about commitlog. Fork or star the project just to let me know you liked it or are using it, so I know whether I should dedicate my weekend hours to the project or just focus on experimenting with newer projects.

I've been working on mini tools/projects a lot for the past year, somewhere around the time I announced TillWhen and a few UI repositories I wrote to support it. They all depend on the tags-as-releases git flow, and as anyone using it knows, changelogs are an important part of that flow.

The changelogs in this flow are pretty much the difference between 2 tag references, plus an added description if anything breaking was introduced. Most of these are nodejs-based projects, so vercel's release was my primary solution to the problem of generating a changelog.

I was too lazy to copy-paste commit messages from one tag to another and had a tool do it for me, while I write posts worth 300 words. Ironic, I know.

Anyway, writing a changelog by hand is a redundant task and a lot of companies moved away from doing so a while back. vercel's solution was built for their internal usage and ended up getting adopted quickly by the node dev community. Here's what release does for you, if you haven't used it before.

It's a cli utility that takes all your commits from a certain point to another, generally between the last semver change and the current commit, ignoring any (.dev, .canary) type tags, and asks you to specify whether a commit is a (major/minor/patch) change. You're asked about it per commit, so if your last semver change was 100 commits away, good luck!

You shouldn't have such a large gap anyway, but I can see myself having 100 atomic commits, or canary tags separating the 2 tags, maybe 10-20 of them with 3-4 commits each on average, and that's a lot of questions already. This data is then pushed to the project's GitHub release creation page, with the new semver/dev/canary tag created for you and the changes pushed.

It generates and fills the github release description for you in this fashion

## Major

c13e227 - some commit message for major change

## Minor

c13e227 - some commit message for minor change

## Patch

c13e227 - some commit message for a patch type change

Now, there's definitely an easier way to avoid the questions: you include the change type in the commit message itself, so the message can be some commit message for major (major) and the tool won't ask you to specify that commit's type. Or you can tell the cli to not ask any questions at all, in which case it doesn't classify the commits, resulting in something like

c13e227 - some commit message for major change
c13e227 - some commit message for minor change
c13e227 - some commit message for a patch type change

Now, while I like release and have been using it for node projects, as I started moving to golang and gomod-based projects I hit a small hiccup: release depends on package.json to check for the semver definition, which means I'd have to initialise my go projects with a package.json just to maintain changelogs. That wasn't really pleasant. I could've just forked it, fixed this case and used that, but there were a few other things I thought needed changing, so I ended up writing commitlog.

So, now let's list the issues I had

  • Needs a node project init on any other programming language (not a fan of the node_modules folder in the local setup)
  • has its own standard for classification of commit types (not a deal breaker, but still an issue)
  • limited to GitHub releases (I'm moving to personal git servers and sourcehut so....)
  • asks too many questions when you forget to specify the type of change in the commit message (can be silenced with -s but then the others aren't classified either)

4 issues. That's good enough reason to write one for myself and learn how to do it in a language I'm not yet comfortable in, aka Go.

Commits Standard

I use a simple extract from commitlint, where commits are prefixed with a simple commit type. That's a generic commit format almost everyone should be following anyway; you don't have to add commitlint to your project, just writing messages in that standard is good enough. Again, it's not strictly required, but it's an easier way to track commits. No more dealing with 100 questions, just making sure I write proper commit messages; even if I don't, they still get classified, under Other Commits, and the proper ones get classified as they're supposed to be.

No Language / Git Platform Dependency

It's a binary that can be run directly, without any other programming language or any file to maintain the semver for you. It doesn't do anything GitHub/GitLab specific, and hence doesn't need any network access to complete its task; instead, it uses the existing tags to define what's considered the start point and end point, classifies all commits in-between, and outputs markdown to stdout, so you can pipe it to other programs like grep, or redirect it to a file with > on unix systems.

No type? No problem

You forgot to mention the type in a commit, or a certain commit has no dedicated type? It'll be categorised into Other Changes and you still just run the command; nothing blocks you, which makes it easier for CI/CD, aka it runs on any platform that can run the binary.

The only example you need to see is commitlog's own repository, every release changelog you see in the repository was generated by commitlog itself and you can check the GitHub actions workflow YAML to see how the binaries are packaged / how the changelogs are generated and published.

Did I really need to make one?

Nope, the creators of commitlint have their own cli tool as well; you can check it here: conventional-changelog/conventional-changelog, and it solves almost all the problems mentioned above.

I don't have to justify why I made one, I've done that for a lot of tools already, but yeah, the go version's binary is a lot more portable, though still 5MB, because 3MB of it is the go runtime packaged with the binary.

Though I should probably make a version in portable C to reduce the binary from 5MB to something in KBs, and maybe a much smaller memory footprint. A project for later.

For now. Adios!

]]>
Mon, 13 Jan 2021 00:00:00 +0000
https://reaper.is/writing/15112020-Crypto-Nonce-and-Association-Flow---Integrating-Slack-Bot-with-Internal-Services.html Crypto Nonce and Association Flow - Integrating Slack Bot with Internal Services https://reaper.is/writing/15112020-Crypto-Nonce-and-Association-Flow---Integrating-Slack-Bot-with-Internal-Services.html Crypto Nonce and Association Flow - Integrating Slack Bot with Internal Services

This is the base note for the slack bot integration that TillWhen is being built on, for future reference (in case I forget how I did it).

This is just for the internal testing of the app in my workspace, the official published slack bot will have the oauth flow.

The Base Flow

  1. The slash command sends a request with the user_id in the POST body, which is used to generate a login URL with a crypto nonce involved.
  2. The login URL redirects to the existing app, which sends an app login request if the session doesn't already exist and also takes care of pairing the slack user id with the app's user id. If already paired, skip to Step 4.
  3. If the user is paired for the first time with the service, a DM is sent to the user confirming the association and also the nonce based login page will show a pairing successful message.
  4. Generate an API Access token for the associated user to be used with the slack bot's slash commands and use that to identify the session and even block the session if needed.

The Changes needed in App

  1. Add an integrations table (database) and tab(TillWhen Dashboard)
  2. Add an Integrations Login page to the flow (TillWhen App)
  3. Create APIs for generating integration logins and API access tokens for access handling.
  4. Write an auth check middleware for integrations and assign it to each slash command
  5. Make sure to prioritise expiry on both the API tokens and the integration's crypto nonces.
]]>
Mon, 15 Nov 2020 00:00:00 +0000
https://reaper.is/writing/15112020-Figma---Design-Tokens---Easing-up-handling-themes-between-apps.html Figma - Design Tokens - Easing up handling themes between apps. https://reaper.is/writing/15112020-Figma---Design-Tokens---Easing-up-handling-themes-between-apps.html Figma - Design Tokens - Easing up handling themes between apps.

As a designer, there are often times when I prototype something and then go ahead and handle the colors in a constants file.

It's mostly for colors because that lets me handle theme changes from a single source of truth.

Manual Process

The manual process is pretty common: you create a small block that consists of all the colors and then copy the colors out to a simple file which can export them. In the case of JavaScript it's a simple JSON file, since javascript natively supports handling JSON. In other languages, like Go, I prefer having it written with stricter types and/or enums.

Semi-Automatic

The other process is to use a design system library that generates the colours into a usable format for you. You can write a script to use with Illustrator, and since I just started using Figma, there are a few plugins the figma community has built that make it a little easier.

The official term for this is a "Style Dictionary", and the most followed standard is Amazon's Style Dictionary. I don't really want to go in depth on how style dictionaries work and what they are; there are better posts about it on the web.

We'll just setup Figma so it can export the styles we define as a style dictionary to be used with various services. I'm focusing on React Native and Web frameworks for this so the JSON output works fine. Other languages can use a JSON decoder to get it in the language's native map/hashmap/dictionary format.

Figma and Design Tokens

People who've been using figma for a while might already know about this plugin, but this is coming from a person who just started moving towards using Figma for interface design. I came across this plugin a few weeks back but couldn't recommend it till I was done with my own set of tests.

The plugin is still in dev so both the creator and I'd like you to keep that in mind while using the plugin.

Design Tokens is a simple plugin that you can add to your figma workflow that will allow you to export all your style definitions to a json file.

We'll start small. Let's say I set up 2 styles in my Figma file,

  1. for the color #121212 as Black
  2. one for the basic font Roboto as Font-Normal

Once installed, I can just right click in the file > Plugins > Design Tokens > Export to JSON, and I'll be prompted to download a JSON file with the following content:

{
  "black": {
    "category": "fill",
    "value": "rgba(18, 18, 18, 1)",
    "type": "color"
  },
  "font-normal": {
    "fontSize": {
      "value": 12,
      "type": "number",
      "unit": "pixel"
    },
    "textDecoration": {
      "value": "none",
      "type": "string"
    },
    "fontFamily": {
      "value": "Roboto",
      "type": "string"
    },
    "fontStyle": {
      "value": "Regular",
      "type": "string"
    },
    "letterSpacing": {
      "value": 0,
      "type": "number",
      "unit": "percent"
    },
    "lineHeight": {
      "value": "normal",
      "type": "string",
      "unit": "auto"
    },
    "paragraphIndent": {
      "value": 0,
      "type": "number",
      "unit": "pixel"
    },
    "paragraphSpacing": {
      "value": 0,
      "type": "number",
      "unit": "pixel"
    },
    "textCase": {
      "value": "none",
      "type": "string"
    }
  }
}

As you can see, that's basically every property that you could've edited per style.

Simplifying the Copy Paste of JSON

The plugin also allows you to set up a GitHub repository or a server where the updated tokens can be sent, and this makes it a painless process for both design and dev: if the designer decides to change the theme, they can do so in the figma styles and just export it to the URL you've provided in the settings.

This can point to an existing code repo or a separate design repository that's used as a submodule, and you don't have to manually check whether the theme works, since the standard format is followed.

I'll write about more plugins and more design to dev process simplification as I get more and more tools involved in my design process.

Till then, Adios!

]]>
Mon, 15 Nov 2020 00:00:00 +0000
https://reaper.is/writing/17052021-Updates-15th-and-16th-May.html Updates 15th and 16th May https://reaper.is/writing/17052021-Updates-15th-and-16th-May.html Updates 15th and 16th May

I think I should post more tutorials and tips rather than just update logs, but I don't know, I just prefer using the blog as an update log. I will try writing more tips and considerations from now on.

As for now.

This update log is mostly focused on Taco, 'cause that's all I worked on this weekend. There is a certain parser I wrote for a bit of office work, and since it's a small update, let's start with that.

Mobile Version Sync

So, at Fountane we've been getting a lot of hybrid app requests for mobile development recently, and it's a pain to keep the NPM, Android (Gradle) and iOS versions and build numbers in sync; this causes issues when things like an update server are involved.

I did look up some NPM alternatives for this, but they weren't being maintained much, and I didn't want to sit on a fork of a simple tool I could've written myself (or so I thought), so I got onto it. I already knew iOS just needed updates in the Info.plist file, so that would be quick.

  • Read the plist file, parse the xml, edit the CFBundle properties for the version and build, and write it back to the plist. Easy.
  • Same goes for Android: read the build properties, update the versionName and versionCode, write it back, and done.

Contrary to what I thought, there was no parser for gradle files in golang, so I ended up writing one, which took me almost a day because I wanted to handle a few edge cases (not all) that would cause issues in this use case.

It's a tightly coupled parser, part of the version-sync cli for now; I'll decouple it as soon as I'm done with Taco. I don't want to pause work on Taco and pick up new projects right now.

If you wish to understand the logic behind the parser, I wrote a small prototype in JS before the actual golang parser, so you can find the JS version at barelyhuman/gradle-parser-proto and the golang cli tool at barelyhuman/mobile-version-sync. Again, it's in development and being used for a very specific use case; generalising it is the plan, but not till I'm done with the existing project.

Taco

Trust me, I try to keep these posts short. They just end up being big since I unconsciously add in stuff that I think people should know.

Back to the topic, we've got the

  • Auth (will be changing to passwordless before release)
  • Projects
  • Tasks and Tasks Status
  • Settings and Plans
  • Integration with TillWhen (This one might come as a shock)

Auth

I'm basically done with the auth part. It uses traditional email and password for now, but I'll move it to the magic-link setup that TillWhen has once the rest of the features are done. I went this route because it's quick, and I didn't want to reuse TillWhen's code: I wanted this project's code to be concise, and TillWhen has patchy code I didn't want to just copy over.

Projects

Simplest one to implement: creation, assignment, and visualisation of projects are done. Here's a preview.

Preview Projects Taco

What you see is the light version of the screen, the creation is using keyboard shortcuts for now because this is a developer preview and not what the end version will have. You'll have proper buttons to create projects but the keyboard shortcuts are most likely going to come bundled as well.

The alert icon actually pulses when you're approaching a project deadline; I can't really show that in a picture.

The menu on the bottom allows you to go to the project details to check on members and tasks as needed.

Tasks and Status

I've already implemented this quite a few times now, with todo (not shut down), and TillWhen also has a beta task list, so I don't think I need to explain what this is. It's not Kanban, though. I really don't see why someone working should have to drag items around a board instead of just marking a task as done or any other status. That's just my thinking; Asana does boards well, but it gets very bloated in terms of things you can do, and you accidentally end up clicking on something you didn't want to. Not what I want from a project manager.

Here's what the tasks look like right now.

Preview Tasks Taco

Again, creation is keyboard-only for now, and assignment to users and projects is still in progress, since I need to add filters for it in a way that doesn't add visual bulk. I'm still working on the UX for that.

The squircle dots on the left are the visual representation of the task's current status, and clicking them moves it up or down. Since the concept of a backlog doesn't exist right now, it's not in the menu. I do plan to add that later, since I use a backlog more than the actual task list.

Settings and Plans

Nothing huge here: just simple notification settings for now, and obviously the ability to update your username. The plan defaults to Hobby for everyone right now, since the payment gateway and such will need a little more work, and that's not going to be part of the initial release anyway.

Preview Setting General Taco

Preview Setting Profile Taco


Also, since payments are involved, I'll have to prepare legal documents for them, so billing and card addition are going to be part of future releases, not the initial one.

Integration with TillWhen

Sad to say, but I might shut down TillWhen after this particular feature is done and released. It's still in the works, since both apps need updates to support it. The overall plan is for you to export all your data from TillWhen and import it into Taco, which then also handles time logs. So yes, while it's a big thing to do, it's something I have in mind, mainly because it'll be hard to maintain and work on two decently sized projects as an indie developer.

Or I can move everything Taco does into TillWhen, but that's more patches here and there, which I don't want either. Why is TillWhen patchy? It was built in 2 days, what do you expect?

I also didn't plan for scaling, integrations, or anything else. I got lucky with them because of how I write, but that's about it. It's got enough hacky, patchy implementations. The Slack integration is a joke, which is exactly why multi-workspace Slack support hasn't been done yet; I don't want to break an existing implementation that's working well.

That was a long post...

Fortunately, that's all.

Adios!

]]>
Mon, 17 May 2021 00:00:00 +0000
https://reaper.is/writing/18042021-Phone-Skins-in-India.html Phone Skins in India https://reaper.is/writing/18042021-Phone-Skins-in-India.html Phone Skins in India

I've been a fan of skins for a really long time, and dbrand has been the favourite of a lot of people, mainly because of the amount of customisation you can do when selecting skins: you can mix and match various textures and patterns for various parts of the phone.

  • A different texture for the camera cutout
  • one for the back of the phone
  • one for the front
  • with or without the logo cutout

and others.

Now, as someone residing in India, it's hard to get the skins considering the whole pandemic situation, and they also get a little pricey if you replace skins often. I've had a few custom ones printed at local stores that do this, and while that's a nice and cheap option, those stores aren't very near me. So I ended up playing a risky hand by trying out skins from online stores in India that do textures similar to dbrand's.

Alternatives

I'm just going to provide some alternatives for someone who'd like to still apply skins but for a little less.

To the point, here's the two I've used quite a few times and I'm satisfied with both.

I've tried and used GadgetShieldz and CapesIndia, mostly because they both roughly replicate the dbrand customisation web app, and the quality of the skins I received was good enough to stay on the phone for about 7 months, and even then only because I decided to rip the skin off and apply a different one.

Here's the one I applied on my phone last week, a leather one from CapesIndia. It's pretty easy to apply, plus you get an extra camera cover in case you botch the first attempt, like I did when I first applied a marble skin on an older phone; I totally ripped the side flaps because I wasn't concentrating.

leather-phone-skin

That's it for now. I'm away from home right now and can't do much tech work, so this post is just me staying consistent at blogging.

Adios!

]]>
Mon, 18 Apr 2021 00:00:00 +0000
https://reaper.is/writing/19052021-Code-Deployments-and-Security.html Code Deployments and Security https://reaper.is/writing/19052021-Code-Deployments-and-Security.html Code Deployments and Security

A simple post focusing on which CI/CD methods I think are worth spending time on, and what you can do to avoid leaving them out in the open for hackers.

This post is specific to server-based apps; I'll cover mobile app setups in a separate post.

Here are a few CI/CD methods that I know of, and the ones we'll be talking about:

  • Self Hosted CI/CD and Deployments via SSH
  • Docker Containers and Deployments via SSH
  • AWS's Systematic deployments
  • Git style deployments (Heroku, Dokku, etc)

Self Hosted CI/CD and Deployments via SSH

This is the traditional way of doing deployments. The idea is to have a bastion host and then this host has access to your deployment environments and takes care of running builds and pushing the builds to the needed server or even better, triggering a build on the needed server and monitoring the progress.

A good example is a Jenkins setup.

Quick note: never, and I mean never, leave the bastion host open on default ports. Close port 80 as well, and use non-standard ports for both SSH and HTTP.
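As a sketch of the port change (done here on a local example file, not a live server; 49152 is an arbitrary port number, pick your own and remember to open it in the firewall):

```shell
# on a real server you'd edit /etc/ssh/sshd_config and restart sshd;
# the contents below are an illustrative fragment, not a full config
printf 'Port 22\nPermitRootLogin no\nPasswordAuthentication no\n' > sshd_config.example

# move sshd off the default port (GNU sed, in-place)
sed -i 's/^Port 22$/Port 49152/' sshd_config.example

cat sshd_config.example
```

After a change like this you'd connect with `ssh -p 49152 user@host`. Non-default ports won't stop a targeted attacker, but they do cut down the background noise of automated scans.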

The Cons

  • The setup can be time-consuming and financially intensive, since you set up multiple instances: one for Jenkins / GoCD / Drone CI, and then one or more for projects. You then have to scale memory and storage up based on how many builds you run, which can get costly.
  • You technically have a single point of failure: someone hacks the bastion host and you've basically gifted them access to all the other instances.
  • You get a limited number of parallel deployments due to the hardware limitation. Easily fixed by adding RAM and so on, but not the best solution for everyone; not everyone can throw in extra money.

The Pros

  • SSH keys live on the actual bastion host and are machine-specific, so you don't have to worry about them leaking unless someone actually hacks the bastion host, which is rare if you're using a good provider, and that single point of failure can be hardened further.
  • Good user management, tracking, logging, and everything else every self-hosted CI/CD pitch will tell you. On to the next setup.

Docker Containers and Deployments via SSH

With everything moving to cloud-native setups, this is a lot more common among devs. It mainly involves the runners/containers that most CI/CD services provide today; examples would be Buddy Works, GitLab CI, GitHub Actions, Circle CI, and so on.

These basically give you an isolated container that you run your script in, which reduces the chance of stale caches affecting a fresh build (caching is supported if you really need it). These are amazing and obviously a lot more scalable, since each runner can handle a deployment. You're still limited in how many you can run based on the service you use, but with GitHub Actions and GitLab Runners I've had 3-4 run in parallel, and I can't complain about that count on a free plan.

The Cons

  • The total isolation makes builds longer if you use generic images from Docker Hub, since you set up the environment again and again, compared to a one-time setup in the self-hosted / bastion method. This is easily fixed by writing your own Docker images and using those, if the service is Docker-based. If the service has a container system of its own, the build times are going to stay (really irritating during monkey-patch deployments).

  • It also brings a learning curve, since the scripts need additional syntax. If you haven't worked with YAML, you'll have to learn it; if you're on AWS ECS, there's AppSpec JSON, and so on. While these config formats are quite simple to learn, they still add to the setup time, so I'm counting this as a con.

  • The SSH setup and deployments from these normally need a private key added to the actual container using some form of config from the platform, which differs per service. People just blindly add it because a post on some blog told them it was okay to throw their private keys around like that.

The Pros

  • Clean environment, so no leftovers from earlier runs cause issues, and you can be sure of fresh builds from the most recent clone of the repository.
  • You don't have to set up the environment again and again: if you create a Docker image for one CI/CD service, it's easily usable in any other, since almost all of them follow a similar pattern in terms of deployment requirements. I've reused multiple GitHub Actions workflows in GitLab by changing just the config syntax (which, again, adds time, so still a con).

AWS's Systematic deployments

The nightmare of deployment setups, you start with 100 services and then link those services and then hope everything works but no you forgot to change the security group for one service and now you'll have to fix that but oh no you forgot which service was to be restarted to make it run so you restart one by one and it's been 4 hours.

Sarcasm aside, it's a major contender for a DevOps environment, and technically they've taken care of all the security concerns I normally worry about. Still, the amount of setup it takes is better done with a Terraform config; doing it manually really takes a lot of time. There's a reason Terraform exists, and DevOps folks love it.

Anyway, no cons and pros here, because it's basically a good standard as long as the ports aren't left on defaults. Mail me things you think should be added to improve security here, and I'll test them out and edit the post as needed.

Git style deployments (Heroku, Dokku, etc)

This is one of my favorite ways of doing deployments, since there's no involvement of keys; Heroku can hook onto your git repository and take care of deploying on changes.

On the other hand, there's Ansible which can also handle this while using a local ssh key so your private key isn't gifted to any service.

Dokku is another alternative and works in a similar fashion and you can push to a dokku repository and it'll build it there for you.

The only flaw with Heroku is that it's protected by email and password, and people use amazing passwords and don't add 2FA, so... for that I'll blame the developer, not the service. But if someone hacks the account, you pretty much lose your source code, not all your data, because in these cases the data lives in a separate service.

One other problem is the rollback mechanism, but that's only on Dokku, since I have to trigger another rebuild. I can use plugins to cache the rollback build, but it isn't provided by default, so I'm going to rant about it. Kidding: it's a simple fix, and Dokku has enough plugins to take care of a lot of things.

You can't modify a single line and restart the server to test it; any change needs a commit and push. That might not be ideal for services that need to quickly monkey-patch security vulnerabilities, but it works for most other scenarios.

The Cons

  • Builds and rollbacks are equally time-consuming in case of emergencies. Not an issue if you just push changes and don't have a full-fledged build phase, or if you add a plugin that caches rollback builds for up to 2-3 successful builds.
  • It again uses containers, so the container caveats above apply here to an extent.
  • Can get pricey if you use Heroku; fixable by using self-hosted Dokku, or Ansible with an EC2 instance.

The Pros

  • Simplest to set up, since buildpacks adapt to your stack and all you need is a Procfile that declares the command to run.
  • Easily scalable, thanks to isolated services and containers.
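For reference, a Procfile really is just a file mapping process types to commands. A minimal sketch for a hypothetical Node app (the `server.js` entry point is made up):

```shell
# Heroku/Dokku read this file at deploy time to know what to run;
# "web" is the process type that receives HTTP traffic
printf 'web: node server.js\n' > Procfile

cat Procfile
```

You can add more process types (e.g. a `worker:` line for background jobs), and the platform scales each type independently.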

Security

While I would love for everyone to go ahead with Heroku and/or Dokku, a lot of people already have a setup and wouldn't like to change it just because I said something else is better. So instead, let's try to fix the minor issues with those setups.

The first setup, with a bastion, has the problem that the bastion can do anything and everything with the connected resources. This can be solved as follows:

  • Don't use the default ports for SSH and HTTP on the bastion
  • Limit the bastion host's keys so they can only run one command or one script on the connected or deployable servers. If you want full access to the actual servers, keep a second SSH keypair whose authentication is protected by a password. In short: the non-password-protected key is limited to running a single script on the server, and the password-protected key gives you proper access to everything.

"Reaper, they can hack into the password too!", yes they can.

But the password will take significant time to crack, and you can store logs of those attempts to monitor whether someone unauthorized tried to log in. Also, password attempts can be limited on most SSH daemons, which adds even more blockage.
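For reference, that throttling maps to a couple of sshd_config directives. The values below are illustrative examples, not recommendations:

```shell
# fragment of /etc/ssh/sshd_config
MaxAuthTries 3      # abort the connection after 3 failed auth attempts
LoginGraceTime 20   # give the client only 20 seconds to authenticate
```

Both are standard OpenSSH options (defaults are 6 attempts and 120 seconds); restart sshd after changing them.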

The same solution applies to container runners like GitLab's and GitHub's, where you provide your private key to the runner:

  • Don't use your personal ssh key and add that to the runner!
  • Generate a dedicated key pair for the particular requirement and add it as a base64-encoded string to maintain integrity. Then decode the masked base64 string in your build config files (.gitlab-ci.yml, .github/workflows/action.yml, etc.), and again limit the key on the server so it can only execute a single script.
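To make that bullet concrete, here's a sketch of the key handling. The file names and the idea of a CI "secret" are generic; `base64 -w0` (disable line wrapping) is the GNU coreutils flag:

```shell
# 1. generate a dedicated deploy keypair; never reuse your personal key
ssh-keygen -t ed25519 -f deploy_key -N "" -C "ci-deploy" -q

# 2. base64-encode the private key so it pastes cleanly into a CI secret
base64 -w0 deploy_key > deploy_key.b64

# 3. inside the CI job, decode the secret back into a usable key file
base64 -d deploy_key.b64 > id_deploy
chmod 600 id_deploy

# sanity check: the round-tripped key matches the original
cmp -s deploy_key id_deploy && echo "key restored intact"
```

In the actual pipeline, step 3 reads the secret from an environment variable the CI service injects, rather than from a file in the repo.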

HOW DO I LIMIT IT TO EXECUTING ONE SCRIPT!?

Good question.

There's a file called ~/.ssh/authorized_keys where you add the public keys that are allowed to connect to that user through the SSH daemon. You can add a few parameters before a key to limit what it has access to. You can find all the parameters in the authorized_keys section of the OpenSSH sshd manual; one of them is command. Basically, you provide a string of command(s) that the system executes whenever that key connects. So, in our case, we'll add a command that runs the execute-deploy.sh script.

So, it'll look a little something like

command="echo 'hello'" ssh-rsa [.....]

Now, every time the holder of this key tries to SSH in, they will just see hello and the session will end. If they send any other command, the system will still just execute the echo and end the session.

In a real-life scenario you'll have something along the lines of:

command="bash -ic /home/reaper/execute-deploy.sh" ssh-rsa [.....]

To break it down

-c: run the commands from the following string.

-i: run an interactive shell. Without it, the script runs in a non-interactive shell, so the commands/binaries and custom PATH setup you configured on the server might not be available to the script you execute.

Now all I need to do is pass this private key to the runners, and when the runner connects, it executes a deploy. If GitLab or GitHub ever leak the private key, the most an attacker can do is execute the script, unless they brute-force their way into the server and then decide to kill everything; sadly, I don't have the knowledge to prevent that right now.

You'll know when I have more ways to help improve things like these.

As for now,

Adios!

]]>
Mon, 19 May 2021 00:00:00 +0000
https://reaper.is/writing/20-09-2020-Building-stuff-you-dont-like.html Building stuff you don't like https://reaper.is/writing/20-09-2020-Building-stuff-you-dont-like.html Building stuff you don't like

In the past I've advocated working on projects that you'd like to learn to build, and on projects that you think you'll use, but there's another category of projects: the ones we don't like. It should be pretty obvious that this category is pretty dense.

This can include projects you don't like at all, and also projects you thought you'd like but then decided otherwise, either because you lost confidence in the project or, to be precise, got hit with the realisation that there's a similar project built a lot better than you ever could.

This is quite common for me, so let's look at the solutions I've discovered.

Let's start with the projects you don't like at all. I'd say just put them in a project bucket list and don't touch them for now; we can approach them when we're out of ideas for things to build.

Now to the projects that we started but then left them half baked.

I can always bring in TillWhen as an example here but I won't this time. I will link you to it though. TillWhen - Just a Time Tracker

So let's see, how can I explain this without sounding like a jerk, which is very hard to do.

It's okay to drop projects in the middle, and the main culprit most of the time is the amount of effort you have to put into the project. At least, that's what my research into how I drop projects shows. The general process I follow when dealing with an idea is to note it down somewhere with the set of MVP features I'd need.

The next step is to start the project and decide the stack for it. This part takes a good amount of time because I have two options: use something new in order to learn it, or use something that's already set up properly and go with that instead.

After deciding the above, I end up either creating or dumping the project based on two factors:

  • Nature of the Project
  • Time of acceptance

Nature of Project

Now this is very simple: if the project is more of an experiment, then the new-stack, new-architecture approach works really well and I learn a few things, but I most commonly end up dropping the project when the new stack creates more problems than necessary. It's not that the stack can't handle the issue; it's the amount of work I'd have to put in to fix something small that I could've solved with a stack I'm used to.

If I take up a project that's supposed to be a proper, released product, then using a new stack and experimenting with it is a bad idea, because I know I'll lose interest for the reason above.

On the other hand, if you want to try building a small addition, don't start from scratch. Add it to an existing product to see how it works out, and do it on a separate branch of your version-control workflow, so you don't have to commit to the experimental addition.

Time of Acceptance

I'm the kind of guy who likes projects that put up a challenge. A simple CRUD app to manage team projects is no challenge at all, but I ended up trying to build one and have been forcefully pushing myself to build it.

This whole post is a way to just remind me that it's okay to drop the project. But, I also have to realise that not all projects that are out there can be left this way.

I should focus on extending existing projects, and maybe even on integrating different projects with each other. This is where time of acceptance comes into play. It differs from project to project, but there's a general average you can work out. Mine is 3 days: if I can't build an MVP in 3 days, you can forget that project. There's no way I'm working on it longer than that, unless it's a challenging problem and I'd like to break my head on it longer.

Simply put, time of acceptance is the amount of time you take to accept that a project isn't worth it anymore. This can be a subconscious decision, and it might take the conscious mind longer to catch up.

Avoiding Half Baked Projects

It's good to try new stacks, new architectures, and new languages, but when you force yourself to build a minor part of a production project in a new stack, it can be quite daunting and push you away from it. So, as I mentioned a few posts back, have a default stack you fall back to for building production apps. Only try new stacks on something small you've already built, so you know which parts need attention; then you can look at how that language handles the issue, or how the new stack makes it harder or easier to approach.

If I had done the project management app in an existing stack, with login, tabs, and the other architecture and cosmetics figured out, I would've built it in a few hours. But since I've been slacking off by trying out new languages and new database schema design techniques, the project is just a huge experiment.

And that's mainly because it wasn't started with the honest intention of actually being a product for people; it started as an experiment and is going to be left as one.

The only way to avoid this is to reduce the effort you put into building it. Use generators, project templates, and/or tools that already do something you're trying to do. Ask yourself: can a Telegram bot achieve this? Can I use an existing program and just integrate with it to make things easier?

If building JARVIS is something you'd like to do, start with an existing NLP engine instead of going crazy like me and attempting to build one from scratch. The latter would help you learn about processing text efficiently, and about data structures for working with huge buffers of existing data to figure out what the entered text meant, but that's about it. You could've just read about it for future reference, but no, you decided to build it again because using Facebook's Wit.AI wasn't a good option back then.

Most products you see from huge companies are built well because a team was paid to work on them dedicatedly. You have the option to jump from project to project, and that'll always get in the way unless you have razor-sharp focus, which I don't.

Also, the solutions above are all with respect to people who work as solo developers and build things because they think it's fun, without really expecting a return.

Proof of this? TillWhen

The donation button there is just a small add-on that I don't even expect people to use. It's there on the website as a formality.

The UX was intentionally designed so that the page sits outside the actual app environment; you can't see it until you log out. I didn't really want people to pay for something like a time tracker, and just added it so that anyone who'd like to help the developer can.

I do update TillWhen and add improvements for performance and stuff but that's because I can see TillWhen growing, not at a very high pace but it's cool and that acts as a return for me.

On the whole, your side projects can turn into disasters and it's okay.

You can end up not wanting to complete them and that's okay too.

To solve it though, use smart arch decisions to reduce your initial work for the prototype.

Don't try building everything from scratch for something you might not even be working on a few days from now.

but!

This doesn't mean you shouldn't try out new languages, tech stacks, etc. Have dedicated projects to build with during the learning process, so it doesn't affect a potentially good project you could've built.

Examples of this?

A music player, a todo app, a god damn random colour hex generator if you want.

Now, these projects are small enough to be built in a few days, or maybe even hours, but the point is that they're easy, and that gives you time to learn something new and/or experiment.

I needed to figure out better ways to handle ACLs, and I created a whole project for it when I could've just tested it on an older project in a dummy branch. That stupid decision to build the whole base architecture contributes to this post just as much as the project management app I talked about in the first few paragraphs.

That's it for this one

Adios.

]]>
Mon, 20 Sep 2020 00:00:00 +0000
https://reaper.is/writing/20042021-Why-I-use-Next.js-for-everything-and-why-you-shouldnt-.html Why I use Next.js for everything and why you shouldn't! https://reaper.is/writing/20042021-Why-I-use-Next.js-for-everything-and-why-you-shouldnt-.html Why I use Next.js for everything and why you shouldn't!

You can directly read the summary if you'd like to avoid the explanation, Summary

Context

I became a Vercel (formerly known as Zeit) fanboy, and someone who wanted to join their team, somewhere around April 2019, when I first discovered Zeit.co for app deployments and found out that most of the libraries and tools I used were actually built by them.1

I've attempted writing to them a few times about joining the team, with no response, so I'm just going to assume it's a no. Getting back to the point.

Next.js

This is the foundational framework of TillWhen, a lot of my web apps and mini tools, and also the reason I mentioned Vercel first. Vercel is responsible for the open-source framework Next.js, a React-based framework with static site generation (SSG), much like Gatsby. The reasons I picked Next.js were simple:

  • Easy base app scaffolding (yarn create next-app <app-name>)
  • Don't have to add a router as the page structure defines routes (an inspiration for the statico generator and also ftrouter)
  • In built API handlers which are written as modules and are pretty scalable if you understand how to structure the routes2
  • The generated HTML works decently with or without JS, unless you decide to handle routing programmatically, which you shouldn't, but if you do, it might not work without JS.
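To illustrate the second point: Next.js derives routes from the file layout under `pages/`, so there's no router configuration at all. A sketch with made-up page names:

```shell
# create the page files a small Next.js app might have
mkdir -p pages/posts
touch pages/index.js pages/about.js 'pages/posts/[id].js'

# Next.js maps these files to routes by convention:
#   pages/index.js       ->  /
#   pages/about.js       ->  /about
#   pages/posts/[id].js  ->  /posts/:id   (dynamic segment)
ls -R pages
```

Adding a route is just adding a file, which is what inspired the same convention in statico and ftrouter.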

These points aren't unique to Next.js anymore, but the newer frameworks don't really offer anything significant enough to make me switch. If I had to jump out of Next.js, I'd jump to Nuxt, which is the Vue equivalent, and just use React as-is for other projects. I've given Gatsby a try, but I guess I just prefer Next.js.

Why shouldn't I use Next.js for everything like you do?

Well, I always say you should decide on a tech stack based on the intention of what you're building and the target you wish to achieve while building it.

If the target is to learn, you're better off choosing a tech you haven't used ever. This is how you understand where the tech prevails and where it's going to be a bad option.

If you're building a quick prototype, choose the tech you're most familiar with. It can be really old tech, probably obsolete at this point, but if you're just testing an idea, or seeing whether the product gains any traction, you still use what you know has worked for you in the past.

If you're building for production, you take the prototype or the concrete requirements and experiment with tech that was built for this stuff.

Eg: I built a static generator from scratch for my blog; that doesn't mean I'd do it for a client. I'd pick something battle-tested (WordPress, Ghost, etc.) as the base and hand them that. It can be a bit heavier, but it's a lot more stable in the long run. And while doing this, you also get ideas on how to improve your own scratch-built tools to fit more and more scenarios.

But you use next for everything

True. The reason is that most of what I build acts as a quick prototype of some idea I had, or of something I want to re-implement just to understand what's going on in the code and whether I can improve it. Good examples are Hen, Colors, Pending, and other mini tools I've built over the years. I do use each of them, but they aren't unique concepts; they exist because something from some app inspired me and I wanted to build a clone to understand it. This is where it's fine to use a fallback default stack, which in my case happens to be Next.js.

The stack may differ for everyone. You might be an Angular + Phoenix, React + Koa, or RoR + React/Angular monolith person; it doesn't matter.

Example

Let's take Hen into consideration here, it's a simple live preview for component code written in React.

  • Do I need a SSG for this? Nope.
  • Do I need an inbuilt router ? Nope.
  • Do I need n number of imports that come with the added framework ? Nope.

I could build Hen with just vanilla JS, or just React, and that's more than enough. As a matter of fact, Hen is built with just React; Hen Experimental, on the other hand, is the repository holding the unstable, testing code for the original live-preview attempt, and was written in Next.js.

Point to take from this?

Prototype with a fallback stack, then build or rebuild it, with the shortcomings and advantages in mind, using a stack that finds a nice middle ground.

You may or may not have to do this with every app you build, but it's generally a good idea to figure out what the app requires and build a dummy version with tech you're quick with; this can even reuse code from previous projects. You get the idea.

Again, you might not have to switch the entire stack, and as your experience with various technologies grows, you might not even have to build a prototype for a certain set of functionality; you'll just know what will work well and what won't. But to get there, you'll have to experiment first and not take people's word for it (conditional, but it generally helps to build your own knowledge and an independent opinion).

Summary

  • Have a fallback stack
  • Don't use it for everything and anything just because you can, unless it fits with the requirement.
  • Have a Prototype First, Production Later mindset to make sure you build something that's a lot more scalable while avoiding monkey patching stuff in the future.
  • Experiment with different technologies to build both knowledge and independent opinion, which can then help you make better decisions (I'm saying this with regards to programming, but you can apply this to life in a way as well)

  1. I used to have Hyper as my primary terminal, pkg to create executable node binaries, and ms to handle millisecond formatting. No, I wasn't sponsored to say this; I actually did use all these libs a lot back then. I had already tried Next.js at this point, but it still wasn't production level for me, as a few issues stopped me from using it in prod. Oh, and there was release, and I also got inspired by their design system a few times ↩︎

  2. You can create a huge mess if you write long API paths and don't follow entity-based standards while writing routes. ↩︎

]]>
Mon, 20 Apr 2021 00:00:00 +0000
https://reaper.is/writing/2020-08-28-Projects-Delays-and-Go-Lang.html Projects Delays and Go Lang https://reaper.is/writing/2020-08-28-Projects-Delays-and-Go-Lang.html Projects Delays and Go Lang

I've been slacking off on personal projects for a while now, and it's not because I want to throw these projects out of the window, but because I've been learning Go Lang for the past few weeks and am not able to dedicate time to actual coding just yet.

Sorry, What?

Umm, I'm trying to say that I've been reading books and blogs about Go for the past few weeks, and that is pulling time away from building new projects and/or optimising and improving TillWhen.

While I myself say that doing actual projects helps you learn faster, that doesn't apply when you want to directly build a huge project in a new language.

It is a necessary step to learn the language inside out, since just treating it like an equivalent of another language I already know would limit me from understanding the actual use case of the new language, and it also blocks your path when reaching out for help.

If you don't know what the language calls a particular type or function and you search on Google based on your previous language experience, you are going to get stuck.

Why Go Lang?

I've been switching back and forth between Node (JavaScript), Rust, and Go for a while now, and every time I hit a roadblock in Rust or Go, I immediately jump back to Node to complete the project or tool. This has been a pattern because I lack the theoretical knowledge of the language and have been treating it like a replacement for Node, which it's not, and this thinking has been hindering my progress.

A few weeks back I decided to build a project management portal for solo developers, which I didn't continue because it was going to have its own Git server and more. The git implementation pushed the total size of the binary past 70MB, and I wouldn't want someone to self-host something that takes up 70MB while delivering just 2% of the features. This happens because it is hard to get rid of node_modules completely when bundling to a binary. Then I remembered pgweb, which is just 8MB and ships a full-fledged Postgres web UI.

I liked that, and since Go's portable binaries are really well optimised, scaling to desktop apps using an RPC server in Golang would make building native apps a little easier. But the history of me jumping away from the language every time there's even a minute inconvenience was going to get in the way.

I started researching how various people moved from language to language and what they had learned from the move. The things they wish they'd considered, the things that blocked them or took them time to understand, everything.

The answer I got was that I'd need more theoretical programming knowledge specific to this language to be able to avoid most of the blocks; the only remaining ones would depend on the newer packages I'd add to the codebase. That makes sense, and hence, I started reading.

I'm almost done with the book so I'll be working on small tools and maybe even rebuilding tools just to get better at the language and then get to the project management app again.

The Point of the Post?

There's never one.

Adios!

]]>
Mon, 28 Aug 2020 00:00:00 +0000
https://reaper.is/writing/2020-09-02-Things-I-wish-I-did-sooner.html Things I wish I did sooner https://reaper.is/writing/2020-09-02-Things-I-wish-I-did-sooner.html Things I wish I did sooner

Being a self-taught developer, a lot of the time the path you follow is arbitrary, and there's a very rare chance that two developers follow the same learning train.

One might start with frontend development, stick to it, and excel only at that; another might be curious and jump from language to language, trying to learn everything he/she possibly can.

Neither one is a bad developer.

I wish I'd had a mentor; it might have changed the way I look at code altogether, and maybe I'd even be a better coder. But then, I was always the guy who could teach himself rather than the one who'd learn better from teachers.

Now, backstory aside, here's a list of things I wish I'd done sooner.

Add these to your Tool-belt

  • Git - I shouldn't even have to mention this anymore, but GIT!
  • Unix basic commands - ls, cd, cat, grep, ps, sed, mv, pwd
  • Docker - Makes your life a little easier
  • Dokku - Makes deployments and app management easier

Have a Go To Stack

Ironic coming from the person who changes his tech stack every 2 weeks, but have a go-to tech stack that you can depend on when confused about what to choose.

This helps when you just want to test the waters and create a prototype, or something where you wouldn't want to invest a lot of time setting up architecture, just getting the base functionality up and running to test. I've previously made the mistake of trying to set up the perfect architecture and failing to ever start the project because of that mental block.

Here's a few web stack examples

These are stacks that I've personally worked with.

Express Starter (for Beginners / Intermediates)

  • Express (Web/Rest Server)
  • PGSQL (DB)
  • Sequelize (ORM)
  • Angular / React / Vue / Svelte / RiotJs (View Layer) (Suit yourself here...)

Hapi Starter (for Beginners / Intermediates)

  • HapiJS (newer versions)
  • MySQL (DB)
  • Sequelize (ORM) / Knex (Query Builder)
  • Angular / React / Vue / Svelte / RiotJs (View Layer) (Suit yourself here...)

Obviously you could add Redis, ES, etc. for additional requirements, but I'll leave this as the base you start with.

What Stack do you use?

The one I use has an experimental server layer so I wouldn't recommend people use it right now but here goes,

  • ftrouter (webserver layer)
  • pgsql / mongo (db layer)
  • knex.js (query builder)
  • Next.js (view layer)

While I know Next.js can be used for the server part as well, I choose not to; I like keeping the UI and server far away from each other. I did build a full monolith using Next.js, but I figured that both the amount of control and the deployment time could be improved if I split them.

Maintain a resource collection

Now, this is one step people avoid because everything is available on StackOverflow and various other blog posts, and it's one of the reasons I think there's a slowdown in learning, but that's a rant I can pick up later.

Have a place where you maintain code snippets, libraries, everything that you find useful. You can even store StackOverflow answers if you'd like.

The point of having a resource collection is to avoid fumbling around the web when you've already solved a problem before. I've written time formatters multiple times now; while not proud of it, I still end up writing most of my logic again and again when I could've just picked it up from a previous codebase. But then, looking through 100 repositories for it is a bad idea, and I could write the formatter again in less time.
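For the record, the kind of time formatter I keep rewriting is only a few lines; this is a hypothetical version of it, not a snippet from any of my repos:

```javascript
// milliseconds -> "HH:MM:SS"; the snippet worth keeping in a collection
function formatDuration(ms) {
  const totalSeconds = Math.floor(ms / 1000);
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  return [hours, minutes, seconds]
    .map((part) => String(part).padStart(2, "0"))
    .join(":");
}

console.log(formatDuration(3661000)); // → "01:01:01"
```

Trivial, but that's exactly the point: it's trivial enough that I rewrite it instead of fetching it, which is what a collection fixes.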

If you've not seen it yet, reaper.is has a collections section where I now store snippets I use a lot or type again and again.

You don't have to do this on your own website; use something like GitHub Gist to store these snippets and something like Pocket to store blog and website links.

Learn to write tests.

While it's mostly a luxury and shouldn't be done for prototype projects, it is still mandatory for you to learn how to write tests and learn at least one test engine / test suite. It can be a full batteries-included test suite, or a combination of a test runner and a test functions library.

When the codebase moves forward from the prototype phase, it's necessary that you spend minimum time trying to figure out what broke in an already-working flow. Adding new flows/functionalities will take longer since you now have to write tests for them, but you only have to do it once, and that one time should be done properly.

It's ironic that I'm giving this suggestion when none of my repos actually have test cases, but that's because none of them have moved out of the prototype stage to begin with. The one that is in beta is TillWhen, and it has test cases to make sure the base functionality works at all times.

Master your Code Editor

Last but not least, and probably the most important of the bunch: keep digging through what your code editor has to offer. If you're using VSCode, learn to use its task runners, learn to use its in-built debugger, add visualiser plugins to test the performance of code, etc.

If you're using Vim, learn to create macros, get to know how to manipulate larger chunks of text faster, and again, add plugins to speed up repetitive tasks.

Every famous code editor has a few tricks up its sleeve that can help your productivity and make you faster at coding. As a programmer, the editor is your home and something you spend your precious hours with.

Learn It, Practice It, MASTER IT!

Where Do I look for these features?

Read through the release notes, check if a release has added something useful, try it out, and check if it's easy to reach for while working on code. I use eslint's tasks in VSCode almost instinctively at this point: Command + P, Run Task, Lint whole folder; my fingers do it in seconds, and I don't have to wait for VSCode's internal plugin runners to do the linting and lint fixes, as that slows down the editor on my low-power hardware.

Yeah, I code on multiple hardware at once. Shocker!

Similarly, I've gotten so used to Vim's macro recorder that it's the first thing I think of when looking at repetitive tasks on text.

The End

Adios.

]]>
Mon, 02 Sep 2020 00:00:00 +0000
https://reaper.is/writing/2021-02-22-Go-Lang-and-Web-Scraping.html Go Lang and Web Scraping https://reaper.is/writing/2021-02-22-Go-Lang-and-Web-Scraping.html Go Lang and Web Scraping

Scraping websites is fun, but I rarely ever tried doing it, since setting up Chrome or Firefox to run headless on Heroku is a daunting task and quite time-consuming in terms of testing. I did do it when I made this Epic Games Online Store Scraper, and as you can see it's quite slow because it scrapes live. While I could've set up a database to store the prices, the problem is I'd have to run a sync every time anyway, because the store changes prices and new free games get added at any point, so a live scraper was the plan. But then Epic started changing the view structure every now and then, and it became hard to maintain, so it's now just there as a reference in case I ever need to create a scraper using Node.js and puppeteer again.

Why talk about it now?

There's no reason to bring a project that's about a year old into the picture now, right? Well, if you know how this blog works internally, it's just a Next.js app that renders the markdown text into React components (I should shift to mdx though). I will be cleaning up the site's code soon; it's quite repetitive right now and should be refactored.

So, back to topic. The whole blog section is just 2 .js files that render content from .md files in the same repo, and while I like the approach, since everything I write is in the same repo, it can be taken offline and you can run the blog locally if you like my posts that much... doubt that.

But I think being able to take it offline is a big advantage. Though, I was going through BuyMeACoffee.com the other day and remembered that I made 2 posts about my released projects there when this blog wasn't really deployed, and I liked their editor.

This is where the scraping comes in. They do have exposed APIs, but they don't expose the user posts; quite a few people have asked for that, so they might add it in future releases. For now, though, we are going to scrape the posts I've got there.

You wrote a scraper with Go, why should I care ?

Ah, because it didn't need a huge ass browser to be run on heroku.

The whole

Build everything with the language you wish to learn

concept really works well for me, and since we've been on the Go Lang train for the past few months, I gave it a shot and used a browser package the Go community offers called surf. It's basically a programmatic browser, so it doesn't need a headless instance in the first place and acts like a browser. If I'm not wrong, someone will have already made a testing suite for web apps using its API. The best part: it's a simple Go package, so it got compiled with the rest of the program, and now I have a single binary file of a few MBs running on Heroku with no other deps needed.

No Linux setup required, no waiting for 20 more apt updates and packages to install so that Chrome can run headless; one single binary, and it's fast. The current instance of the worker is running on the free tier, so we have the 10-second wake-up time from Heroku, but that's about it. It doesn't take any longer than that.

You can check the API by clicking the links below. Obviously, wait the 10 seconds if the container has to cold start, but after that your reloads will take no time: it spawns a browser, opens the URL, scrapes, and shows the results in under 2 seconds. I'm impressed; there's no caching logic in that code either (should add some though).

Posts - All Posts

Post Data - Example Post Data

And so I think I'm going to do a lot more scraping now, since it was a breeze to set up and use, and quite performant without any direct optimizations on my part.

You forgot to tell what you're going to do with the scraper.

Oh yeah. Now that I have the data from there, I realised that their post engine is kinda limited in terms of markdown recognition, and also, reading directly from the API conflicts with my thinking on offline availability. So I plan on running a GitHub Action every 3-4 days to go through this worker, get the data, create a markdown file from the received HTML data, and save it into the repo if it doesn't already have the post. This sticks to the offline thinking for someone who'd like to be able to go through the posts offline, and I also get to use BuyMeACoffee's editor to make decent posts. I won't use it for posts that need code snippets, etc., since that won't work with their current editor.
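The scheduled sync could be wired up with a GitHub Actions cron trigger; here's a sketch, where the workflow name, the script path, and the posts folder are all hypothetical placeholders:

```yaml
name: sync-bmc-posts
on:
  schedule:
    - cron: "0 3 */3 * *" # roughly every 3 days, at 03:00 UTC
  workflow_dispatch: {}   # allow manual runs too
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # hypothetical script: fetch from the worker, convert HTML -> markdown,
      # write new files into the posts folder if they don't exist yet
      - run: node scripts/sync-bmc-posts.js
      - run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add posts/
          git diff --cached --quiet || git commit -m "sync: buymeacoffee posts"
          git push
```

The `git diff --cached --quiet ||` guard keeps the job green on runs where nothing new was scraped.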

That's it for now.

Adios!

]]>
Mon, 22 Feb 2021 00:00:00 +0000
https://reaper.is/writing/20210206-the-most-used-tool-i-ever-wrote.html The most used tool I ever wrote. https://reaper.is/writing/20210206-the-most-used-tool-i-ever-wrote.html The most used tool I ever wrote.

I've written quite a few mini tools

  • Hen - Interactive React Playground
  • Format - A simple in browser code formatter
  • Mark - A markdown editor (cause there weren't already 1000's of these online)
  • commitlog - Change-log generator

There are a few more, but we'll limit the show-off to this much for now.

Which one am I proud of the most? None actually.

They were all ideas that already existed all over the web, things already built by various developers and in a much better form than mine. The reason for building them was pretty simple: to learn how they worked internally. I've re-invented the wheel quite a few times in these cases just to understand what's going on behind the scenes, and that's all they were for.

On the other hand, while I'm still not proud of it, there's one specific web app that I use a lot. It's like Google for me at this point: I open this web app first, look for what I need, and then redirect myself to the needed solution accordingly.

Enough with the mystery, what is it!?

The same website that you are reading this post on. This is the repository that has had the most commits to date, and it's something I should just turn into a homepage at this point, since I end up on it every now and then anyway.

Power Menu

The best addition to the website was the power menu, or command palette, that I added a few days back. You can trigger it right now by pressing CMD/Ctrl + K (sadly only possible with a physical keyboard). It opens a simple suggestion menu that can be used to browse through the main site. I added this because I avoid using the mouse as much as I can, but then I didn't have keyboard support on my own website; ironic, much?
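That shortcut boils down to a single keydown check. A framework-free sketch; isPowerMenuShortcut and togglePowerMenu are hypothetical names here, not the site's actual code:

```javascript
// detect the Cmd/Ctrl + K chord from a keydown-like event object
function isPowerMenuShortcut(event) {
  return (event.metaKey || event.ctrlKey) && event.key.toLowerCase() === "k";
}

// in the browser, the wiring would look like:
//   document.addEventListener("keydown", (event) => {
//     if (isPowerMenuShortcut(event)) {
//       event.preventDefault();  // keep the browser's own Ctrl+K out of the way
//       togglePowerMenu();       // hypothetical: open/close the palette
//     }
//   });

console.log(isPowerMenuShortcut({ metaKey: true, ctrlKey: false, key: "k" })); // → true
```

Keeping the predicate separate from the DOM wiring also makes it trivially testable without a browser.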

Collections

The other addition is the collections part of the website, where I keep snippets, checklists, and other lists that I might need to browse through. The blog itself has a few posts that work as references for various issues I've had, or other idiotic decisions I've made while developing stuff, and it works well in that I can browse through them when needed.

Big Deal, I've seen even better websites!

Oh, definitely! I've seen websites that are even more minimal. For example, check out Leo from Vercel's dev team: his website is really, really simple and uses external tools for basically everything. But then, I don't use social media much, so I'm limited to having everything here instead of using Twitter as a micro blog. His GitHub repos speak for how much better a developer he is.

I show off a little bit because I'm not as good, so the marketing is kinda needed.

On the more functional and beautiful side, there are websites from Sindre Sorhus and Paco Coursey - his website has no nav; it's all power-menu based.

That's not all of them, and obviously there are tons more devs that are better than me. I wouldn't mind if one of them became a mentor of mine, though the point of this specific post was the tool I use the most.

Final words: not just for me, but I think every dev should have a good website, built with the simple thinking that they are going to use it more than anyone else. If you do build it with that thinking, chances are you'll add a lot of useful things to the website that others might just enjoy as well.

On that note, I should probably style the power menu to look a lot better. It's super flat like the rest of the site, which is nice, but then there's no depth even during actions, and that's not nice. Any frontend dev that has ideas regarding the power menu or the remaining website's style guide, let me know or send a pull request.

That's about it for now. Adios!

]]>
Mon, 06 Feb 2021 00:00:00 +0000
https://reaper.is/writing/20210300-Why-do-I-end-up-re-designing-everything.html Why do I end up re-designing everything ? https://reaper.is/writing/20210300-Why-do-I-end-up-re-designing-everything.html Why do I end up re-designing everything ?

I've done this quite a few times when it comes to implementing minimal design across the tools and apps I build. In this post I talked about why I re-designed the todo app: all I wanted to do was add a filtering functionality, but then I redesigned the whole thing.

And technically, the idea for the colors was taken from the UI I was writing for the markdown editor, which then ended up being the inspiration for me to modify the UI of commitlog-web, through which I created a small library called themer. And now I'm sitting here writing this post, indirectly back-linking to everything I've created in the past 1.5 days, just because I liked the input style I made for a markdown editor.

And this has happened before, I got inspired by lancerlist.co and ended up cloning that color scheme to all the apps I built that week/month.

From memory, I probably built a CSS resets library that cloned the color scheme of lancerlist, maybe a dummy hiring network, and a covid tracker that probably doesn't work anymore.

I know it's not bad to make something look good.
But! I then think that I end up spending a lot of time on this re-designing and making things look clean, rather than actually working on something meaningful that'd help a lot of people instead of just building stuff that I'd use. Though none of them were ever launched or promoted as products/tools for the greater good, so maybe I'm to blame for that.

Like, I could write a new repl.it clone that doesn't need you to sign up and create a new repl before getting to coding; just open a URL and boom, a repl to test snippets on. But then obviously the other guy wants to make a living out of it, so I can't blame him. But nope, I'm going to write a dark mode setup library (cause there aren't enough of them already).

I mean, I learn a lot while building these smaller tools and apps, but it's not something I can go to the market with or actually earn from. Obviously there are developers like Sindre and Drew DeVault that actually make a living out of open source projects, both with totally different methods, but both are full-time open source developers.

Argh, Offtopic!

I was talking about building useful stuff for people to use. That reminds me: commitlog reached about 48 stars at the time of writing this, so that's really nice, but let's see if I can build a lot more tools that work well in everyone's workflow.

I want to try writing a bundler for once. The existing ones are good, but they all assume a lot about the developer workflow, other than webpack and rollup, which are really extensible, and hence people create wrappers around them. Anyway, I need to know: what would be a better idea, an online instant repl environment or a bundler?

I guess you can mail in the answer, but for now.
Adios!

]]>
Mon, 07 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210301-Git-Workflow.html Git Workflow https://reaper.is/writing/20210301-Git-Workflow.html Git Workflow

There are quite a few ways to work with git, but here's how I go about maintaining the repositories I work with or maintain as a solo developer while building personal projects. I specify personal projects since the git workflow differs when working with a team.

Also, I've actually talked about this flow one-on-one with a lot of people, so it's easier to write it down once for people to refer to.

Commands

Let's get over the list of commands that you need to know and understand to go ahead with this flow.

  • git pull (combined with the --rebase flag, used when re-syncing with the remote)
  • git remote (manipulate the existing remote urls or add a remote)
  • git push (push to the upstream/remote)
  • git merge (rarely used, but good to know anyway)
  • git rebase (specially the -i option, to edit and re-arrange the commit history)
  • git commit (make commits; I don't use -m, I prefer writing descriptions for my commits)
  • git status (current working tree status)
  • git diff (check diffs in the terminal when working on the raspberry pi)
  • git branch (creating and deleting branches)
  • git checkout (to move from branch to branch)
  • git restore (in newer versions of git, so upgrade if you don't see this command or it errors out)

Those are basically all the commands I use. git log would be another, but I use commitlog now, so that's out of the picture for the most part right now.

When I say know and understand, I expect that you've at least tried out the command with a few flags that each of these provide.

I'll get into detail. For example, you've all probably used git pull and git push enough by now. I would like you to create a test repository, make some changes on the remote (either via GitHub's UI or on a different computer), and then execute a normal git pull in your repository.

At this point, type git log and you'll see the last commit is a merge commit, created because your local branch and the remote had diverged. Now, a lot of people don't realise this, but git defaults to merging in case of overlaps, and this creates a lot of unneeded history in your repository, and also confusion when browsing through it. The option to use? git pull --rebase. You could just set git to default to rebasing, but we'll get to why I avoid that and use the flag explicitly.

Flow

Getting to the flow, let's start with an empty repository.

  • Create a dev branch using the git branch command
  • Push it to the remote if it isn't already pushed.
  • Create a feat/<new-feature> or fix/<new-fix> branch as needed, with <new-feature> and <new-fix> replaced with the name accordingly.
  • Best part, write the implementation or fix.
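The first steps above, run against a throwaway repo (branch names are the placeholders from the list; the push is commented out since this sandbox has no remote):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"

git branch dev                         # 1. create the dev branch
# git push -u origin dev               # 2. push it (needs a real remote)
git checkout -qb feat/new-feature dev  # 3. feature branch off dev
git branch --show-current              # prints: feat/new-feature
```

From here, step 4 is just writing the implementation on that branch.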

At this point, if I'm working on GitHub, I'd create a pull request, then rebase-merge with the base branch if possible; if not, it's a squash merge locally when the number of commits to be rebased is over 10.

Reason - rebasing 10+ commits, going through each commit incrementally, might not work if you've been working on the feature for a long time or shifting between projects and there are code snippets you don't fully understand any more, so rebasing through conflicts over 10 commits might not be easy to go ahead with.

Though this is rarely the case with the repos I maintain, I did have this issue before when I was still learning git.

Now, the current feature implementation, when maintained on a separate branch, has a max of 2-3 commits that are all separated properly. How and why?

git commit --amend and git rebase -i are the commands I use the most in my workflow right now. I start off with implementing a prototype for a feature; once the prototype is done and on a feature/fix branch, it's tested right then and there and pushed to the remote, because I work from multiple devices and can't just keep a local copy.

Yes, I push uncompleted features to the upstream! But these are branches that aren't merged and are WIP.

Once I'm back on the same implementation and making fixes to it, the changes are made and git commit --amend comes into the picture. --amend basically modifies the previous commit and creates a new commit SHA with the combined changes. Now, anyone experienced with git knows that a new commit SHA on the local and a different commit SHA on the remote is a call for problems, but this is basically why I never do any of this on the master/main branch, even when I don't work with a team.
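You can watch --amend rewrite the SHA in a throwaway repo, which is exactly why the force push becomes necessary afterwards:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "wip" > feature.txt
git add feature.txt
git commit -qm "feat: add feature"
before=$(git rev-parse HEAD)

echo "done" > feature.txt
git add feature.txt
git commit -q --amend -m "feat: add feature"  # same message, combined changes
after=$(git rev-parse HEAD)

echo "before: $before"
echo "after:  $after"   # a different SHA: history was rewritten
```

Since the remote still holds the old SHA, a plain git push gets rejected after this; hence the force push on a branch only you touch.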

To push this modified commit you use git push -f, i.e. force push to the branch. Note: Never, I mean NEVER, force push on a branch that multiple people might be working from!

So now the remote is updated with the needed changes, and we can make a PR or rebase onto the main branch locally. Then I delete the branch both locally and on the remote, and I have one clean commit that implements the whole feature.

Obviously there are always going to be bugs, cases you missed, brain farts during commits that ended up on the base branch. What next, force push on the base again? NOPE!

The only point of using force push was to keep the remote in sync with a branch that I might work on from various devices, not to edit remote history (that's a side effect of working from multiple devices).

If you are working from a single device, your amends are always going to be local until the feature implementation is complete, and this is how patches are supposed to work. I follow a more git-friendly workflow when working on sourcehut.

I work with email patches when working on something like sourcehut instead of GitHub, and making amends to commits is okay there because nothing is added to the actual repo; it's a simple email with the diffs that can change again and again. But limited to GitHub's architecture, force pushing on feature/fix branches is my only option right now.

Next step: so you now have a bug you need to fix. You create another branch following the fix/random-feature pattern and do the same thing: work locally, amend locally as much as you can, then push to the remote and raise a PR. Wait for the PR to merge, delete the branch, and update the local branches from the base branches:

 > git checkout main && git pull --rebase origin main

The overall point is to keep commits atomic and self-sufficient. This allows you to cherry-pick onto other branches when needed and to get rid of features when needed (while not always possible). Maintaining good git discipline can help you avoid a lot of problems, and if you are a power git user, git bisect can help you a lot here.

Rebasing and Branchless workflow

This workflow is something I picked up a few months ago, since I use the MacBook for everything right now because I don't move around that much. The most it changes is from the MacBook to the Raspberry Pi that I run and test the CLI apps on.

In this workflow, I don't really need to create branches or push to the remote to sync with other devices. I picked this up from Drew DeVault.

Basically, you don't push a commit unless it's complete; you just keep moving it up in interactive rebases or squash it while working on something else. So origin only points to features that have been tested and are okay to be added to the upstream; everything else stays local.

Drew talks about it in detail in his post My unorthodox, branchless git workflow, which is basically what I'm using, other than the patch part, since most projects of mine are on GitHub. I will move them to sourcehut when I can, but I don't want to move every repository I have, just the ones I think are suitable enough to be maintained longer. Just to have a cleaner collection.

For now, that's it. BTW, we've got comments now, scroll down to add one.

Adios!

]]>
Mon, 08 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210301-My-Linux-Setup-.html My Linux Setup https://reaper.is/writing/20210301-My-Linux-Setup-.html My Linux Setup

I haven't talked much about my Linux setup on this blog, so let's go through a few things I wanted to answer.

Favorite Linux?

No specific one; I've hopped through a lot of them over the years. The most used ones are a custom spin of Debian, and Arch. I do have backup thumb drives with actual OS installs - not live Linux USBs, but drives that contain the entire system (dev environment, needed setup, drivers, etc.).

Let's list them out

  • 128GB Sandisk Drive - Arch Linux
  • 64GB Sandisk Drive - Fedora
  • 128GB HP Drive - Custom Debian

and then the internal HDD of my laptop dual-boots between Windows 10 (cause I play games and the Linux drivers for the hybrid hardware are still iffy) and Arch Linux (this is there just as a backup in case my drives aren't with me).

Setup?

I've basically got 2 setups, one using Openbox and one using Sway. The Openbox one is on most of the installs since I've only just started using Sway, though I'm liking the i3 environment. I've used i3 before for a lot of setups, but I keep jumping back to Openbox because I feel comfortable with it, and i3 needs you to remember a good set of keyboard shortcuts, which I do mess up sometimes, so I like the option of being able to use the mouse in case of doubt. Obviously, it becomes second nature once you've used i3/sway enough.

The setup is pretty minimal. There's the basic desktop following the duotone colors I use everywhere: an off-white tone ranging in #dddddd - #eeeeee and a dark gray tone ranging in #121212 - #333333 (I should probably create a proper color palette to share with people). The accent color is a random color I picked from colors and changes quite frequently, so it's also an environment variable; I can change it at any time by echoing export ACCENT_COLOR=colorhex into the shell profile and reloading the sway/openbox config.

This is basically what the profile looks like after a while.

export ACCENT_COLOR=#3CA60C
export ACCENT_COLOR=#B07AB2
export ACCENT_COLOR=#9FD0B1
... goes on till I clear it up

The primary application launcher on both is dmenu, and sometimes I set up rofi when I have the time. The browser of choice is Firefox, with Chromium as backup (because I develop web apps); it's the same on the Mac, with Safari as an additional test browser.

I don't use a login manager / display manager (lxdm, lightdm, etc.); I use the shell profile to check for a logged-in user and then take over the display. This either launches Openbox via xinitrc or launches Sway with the Wayland environment in place.

Wallpapers are just solid colors on the Linux setups, again a dark gray in the above range. On the Mac though, I change them frequently with a random one from wallhaven.

New User, what do I use?

I'd direct you towards Linux Mint to start off with. There's also Solus, but it may or may not work with your hardware directly, so try both of them out with a live disk before initiating an install. Unresponsive wifi/network hardware is quite common, so you might need more than just the default install to get either of them working. Linux Mint generally hasn't given me an issue yet, but still, don't execute an install without testing your hardware on the live disk.

I guess that's about all I wish to say about Linux right now. I will be going over Linux distros that I found interesting but didn't make the list of favorites.

]]>
Mon, 22 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210301-This-Weekend-in-the-BarelyHuman-Dev-Labs---March,-20th-and-21st.html This Weekend in the BarelyHuman Dev Labs - March, 20th and 21st https://reaper.is/writing/20210301-This-Weekend-in-the-BarelyHuman-Dev-Labs---March,-20th-and-21st.html This Weekend in the BarelyHuman Dev Labs - March, 20th and 21st

This weekend was rather lazy. I had a few projects I wanted to complete, but I slept through most of the day on both days and spent about 4 hours watching the Snyder Cut of JL and 1 hour thinking about what on earth WB was thinking with the initial release.

Back to the update.

SpotSync

This is a very niche project I picked up that serves the purpose of keeping my Spotify library and a playlist I share with people in sync. The concept is simple: it syncs the library to a playlist, that's it, nothing else. It's basically done but not tested, and no, you won't find it on my GitHub because I won't be making it public until I'm done testing it, which will be sometime this week. It isn't deployed yet either, so no links right now. Told you, lazy weekend.

MyTag

I don't normally contribute to other open source projects, I just use them to learn stuff, but I thought I'd help a fellow developer out this time, so I spent a few hours on a few issues of this project: MyTag

Pretty great concept, though the app needs a little more polishing and tweaking to be feature complete, which the original developer is working on. Hoping I can find more time to help such projects instead of just re-creating my own versions of various existing tools. (talking about TillWhen)

TillWhen

Minor security patches and version upgrades for a lot of things. I do have to make the UI a lot more consistent, so I'm kinda stuck on whether I should work on multi Slack workspace support or work on the teams concept. I also kinda understand now why almost all free online services have ads on them: people won't support something with donations when it's provided for free.

Anyway, guess I'll be working a day job forever if I want to keep the projects free to use.

Making TillWhen Opensource?

I did announce, and also mentioned on the about page, that TillWhen will be getting a self-hosted version, and I do plan on making it open source, but I wanna make sure I get rid of any security vulnerabilities I can find myself before I hand it over to the community to see. I trust the community with my other projects because they are tools and store data mostly in the browser; this one has people's daily logs, and since an attacker could do a lot of damage, it's something I need to worry about and not something I can just let loose.

Overall point: yes, TillWhen will be open source and come with a self-hosting alternative after I'm sure that any edge case (hopefully all) security concerns are out of the picture. While there's barely any personal information tracked by the app, for someone who routinely logs onto TillWhen, the data could be used maliciously.

The Blog

RSS feeds were added to the blog on request by a friend from here. I also re-built barelyhuman.dev to follow the new design style I have on all apps and got rid of any dependency on JavaScript other than the themer (dark and light mode toggle)

Rest of the life

Overall just played CS:GO all night and worked on the above mentioned things in the morning, pretty fun weekend.

Adios!

]]>
Mon, 22 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210301-Weekend-Updates----March-6th-and-7th,-2021.html Weekend Updates - March 6th and 7th, 2021 https://reaper.is/writing/20210301-Weekend-Updates----March-6th-and-7th,-2021.html Weekend Updates - March 6th and 7th, 2021

TillWhen

This week, I wasn't really feeling like developing or working on TillWhen, so there are no specific updates there other than a small patch that adds to the authentication check during the first render of the app.

New Additions

The past week was mostly me working on themer, making fixes and making the code cleaner and more readable in case someone else wants to fix something that isn't working as they expected.

Another project created yesterday was conch, a micro library to handle sequential batch processing of promises, for systems that can't handle processing a hundred promises at once, and also something I needed for cases where I have to preserve memory usage, though the current logic can be improved a lot.

Modified / Feature Upgrades

Other than adding themer to most maintained projects, music got a UI upgrade and now supports importing Spotify playlists into the queue; you can either replace the entire queue with the playlist or add the tracks from the playlist to the existing queue. The keyboard shortcuts it had still work but are not listed on the UI right now, since I haven't figured out a good way to show them in the current UI style without making it look odd.

Minor work includes template updates to the ftrouter-templates repo, including an experimental CLI command that was added to the current master branch of ftrouter. It now allows you to init a folder, though this is not in any of the official release tags. I also finally plan to sort out ftrouter's CLI using some CLI tool framework before I release ftrouter onto the npm registry. You can still install it using the git repo, as the repo contains the compiled source.

mkdir -p new-project
cd new-project
ftrouter --init # which is a shorthand for `ftrouter --init -d .`

For an example of how ftrouter works, you can check the music-server or the minimal template's example, which includes both query and param cases as well.

That's all that I was able to do this weekend, though I plan on focusing a little more on TillWhen next week, let's see if I do.

]]>
Mon, 15 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210302--Handling-heavier-tasks-.html Handling heavy memory tasks asynchronously https://reaper.is/writing/20210302--Handling-heavier-tasks-.html Handling heavy memory tasks asynchronously

This is going to be both a post to point you to a library I made and also point you to a solution for handling heavier tasks.

Disclaimer: This is one of many solutions you can use. I'm just talking about this one since I end up using it a lot more than the other solutions, though I will mention those as well.

As programmers we are given problems that need to be solved, or we discover a problem and then sit down to solve it. One such problem has always been load management, and we've got various tools that handle it for us at different levels.

You've got load balancers for network requests, and you've got the ability to scale vertically and horizontally in terms of parallel containers or k8s nodes, but let's come to the small section of this: handling load in your actual code.

I'm not sure if you've come across this, but a lot of times you've got hundreds of functions that need to run in parallel or sequentially, and even cases where the memory can't handle the run at all.

Context

A few days back I was working on a small dummy replication server on my Raspberry Pi, which hardly had 100-250MB of RAM left with all the other hosted services running on it. The replication server was to connect to my Spotify, get all playlists, go through each playlist to get all the tracks, match these on Apple Music, and then sync them to my Apple Music. Why do this? I accidentally deleted my Apple Cloud library (my bluetooth keyboard malfunctioned and instead of deleting a single track, I deleted the entire library) and I needed all the tracks back, so I wasn't going to do it one by one. I could use SongShift, but it's slow and I'd have to set up multiple runners on it for each playlist and then run them manually, so I figured I'd just write a small server for it.

Problem

  • Limited amount of RAM: 100-250MB
  • Number of tracks to process: 2500
  • Matching each track takes about 1s
  • Total time: ~2500 seconds if done sequentially
  • Memory usage per track search: ~500KB, so about 1.25GB if all were processed at once

The problem? I have close to no RAM left on the Pi when this runs, and using the other hosted services becomes hard because they need to shift to swap and then lag.

Solution?

There's quite a few solutions here

  • Message Queues
  • In Memory Batches
  • DB Queues

Message queues

Good solution, but then I'd need to add RabbitMQ or Redis plus a message queue, set up a worker to listen to the queue and a server sending instructions to it, and limit the queue to not process more than 10-20 tracks at once.

DB Queues

Similar to message queues, but instead I add rows to a queue table in any existing database on the device, and the worker checks it every 10-20 mins to see if there are any new queued tasks, marking them complete when done. This is simpler to implement and is a viable solution if you need something crash resistant. Message queues might lose their messages on crash in certain cases, so this might be the most reliable solution to go with.
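As a rough sketch of that polling worker (a plain array stands in for the queue table here; a real setup would run SQL against whatever database the device already has):

```javascript
// Hypothetical DB-queue worker loop, not actual production code.
// The array below stands in for a real `queue` table.
const queueTable = [
  { id: 1, payload: 'track-a', status: 'queued' },
  { id: 2, payload: 'track-b', status: 'queued' },
]

async function pollOnce(processRow) {
  // in SQL: SELECT * FROM queue WHERE status = 'queued' LIMIT 20
  const pending = queueTable
    .filter((row) => row.status === 'queued')
    .slice(0, 20)
  for (const row of pending) {
    await processRow(row)
    // in SQL: UPDATE queue SET status = 'done' WHERE id = ?
    row.status = 'done'
  }
  return pending.length
}

// a worker would call pollOnce on an interval, e.g.
// setInterval(() => pollOnce(handleTrack), 15 * 60 * 1000)
```

Since rows survive a crash in the database, the worker just picks up wherever it left off on restart, which is exactly the crash resistance mentioned above.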

In Memory Batches

Now, as the name says, it's similar to the above two but done in-memory. Yes, I'll lose all progress on a crash, but this particular use case doesn't need to know if it started again, so we can go ahead with this; it's simpler if you're doing the same operation again and again.

Implementation

You cut the array of items into slices of a certain size. Let's say I want to process only 20 tracks at once: I run the async process and wait for the first 20 to complete, then add the next 20, process them, and continue in the same manner, all in memory. This reduced the usage to about 10MB while running, leaving the remaining memory free for the other hosted apps.

For synchronous work you could just slice the array and run a while loop, waiting for one slice to complete before the next starts, but in the async case this can be a little more code to write than normal, so you might want to use a library.
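The batching described above can be sketched like this (a hypothetical helper to show the concept, not any library's actual API):

```javascript
// Process `items` in sequential batches of `size`; `task` maps one
// item to a promise. Only one batch's worth of work is in flight at
// a time, which is what keeps the memory footprint bounded.
async function processInBatches(items, size, task) {
  const results = []
  for (let i = 0; i < items.length; i += size) {
    const batch = items.slice(i, i + size)
    // wait for the whole batch before starting the next one
    results.push(...(await Promise.all(batch.map(task))))
  }
  return results
}
```

For the track-matching case above you'd call something like `processInBatches(tracks, 20, matchTrack)` and let it churn through all 2500 in bounded memory.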

I would recommend using one of the following: p-map, bluebird's Promise.map, or conch.

Though p-map and bluebird's Promise.map will let you maintain a queue that always has the mentioned number of promises waiting to complete, so they work like an in-memory queue. conch, on the other hand, maintains batches and processes one batch after the other instead of a continuous queue. Both work well in this case, I just prefer the batching.

Another live example of this being used is music's import logic, which does something similar to search YouTube with the provided Spotify link.

]]>
Mon, 16 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210304-Getting-Better-at-Development---Part-2.html Getting Better at Development - Part 2 https://reaper.is/writing/20210304-Getting-Better-at-Development---Part-2.html Getting Better at Development - Part 2

I've written about getting better at development before, and while I don't know what traction that post got, it doesn't matter since I think it won't hurt to write another one.

We'll be talking about a few things here,

  1. Master your development environment
  2. Learn to use Vim
  3. Prototype

Master your Development Environment

A very simple concept that I did write about a while ago but it was more of a rant than an explanation so let's go through it again.

Your dev environment may not be the most optimal one out there and will always have things that can be improved.

I personally depend on the terminal a lot even when my code editor can do the same tasks with a plugin, and I should get used to using the code editor, but then there's a conflict: I change my editor quite often. I shift from Vim to Sublime, Sublime to VSCode, VSCode to Vim / Sublime, or try a new code editor. It's a never ending cycle.

While this looks like I'm going to contradict my own point, mastering your environment doesn't always mean mastering the tool you use, but being able to move freely and quickly through the things you use most. In cases where you are using a tool a lot, though, make sure you go ahead and become a power user. Why?

Let's take an example: git is a very common tool today, but the most I see people use it for is

  • git add
  • git commit
  • git push
  • git clone
  • git pull

That's all they know, and they never get curious about what other options they have. You don't have to go through the whole Pro Git book or read the entire git-scm doc (I'll still link it in case you do want to), but at least try to figure out how much more powerful git can be: you can manipulate history, you can manipulate commits by travelling back in time! Yet people are stuck with just Pull Requests on the web portals instead of learning how to use patches.

And I get it, not everyone's workflow involves all of this, but everyone has been at the point where they don't want to commit a file but don't know how to unstage one they accidentally staged. The answer is git reset <filepath>, or git restore --staged <file-path> if you've got a newer version.

But you wouldn't know that unless you actually went ahead and got curious about everything the tool you're using can offer. The same applies to the code editors, database GUIs, and even the source control tools you use. For the longest time I used Sourcetree from Atlassian for handling projects and commits just because I like their diff UI, then started using Sublime Merge for the same reasons, and the merge conflict handling in Sublime Merge is amazing. I hardly get conflicts on personal projects (cause I develop alone, not cause I'm that good!), but the tool comes in handy when working with a team at work. Still, I prefer referring to the git doc every now and then to see any new sub-command additions or check what other commands I have available.

This helps me decide if I should do a normal squash or a fixup on commits in my git workflow.

There's a million reasons to learn the tool properly but just take this one.

Learn it to improve your own skills and not wait for it to become a requirement you get from projects.

Learn to use Vim!

I really don't have to get into detail here, but learn to use a terminal editor; it can be emacs, vim, nano, anything! The same as above applies here as well: learn how to extend its usability, learn as many keyboard shortcuts as possible, learn how to modify configs to better match your workflow. As a minimalist I can work with just the ctrlp plugin and vim for basically everything, but that doesn't mean I shouldn't know what other plugins I could add to improve usability and speed of dev.

A coder with a good typing speed is definitely a plus, but that combined with being able to move around vim/emacs like it's no big deal to manipulate text is an amazing skill to have; you might get a lot faster just from that.

Two simple reasons to learn the above.

  • almost all environments have one of them available
    • the GRUB boot editor uses emacs-style keybindings
    • vi/vim is available on all linux setups, whether you install it or not it's most likely in the base package
    • nano is available for most sub-set linux utilities and a lot more beginner friendly if vim is too scary
  • they help you think in patterns
    • once you start using vim macros, you start thinking about how a task can be done just once. When you get better with the keyboard shortcuts, you reach a point where you finally understand that insert mode isn't really needed to manipulate existing text

Prototype

Enough about tools and things to learn, let's see how you can improve your project building mentality. The simple answer is prototype it.

The detailed version though.

Build something with the sole purpose of testing the idea, not with the mentality of perfecting it. This is to

  • test whether the thought or idea you had is possible
  • see if there is something limiting or blocking it
  • find any blocker in terms of requirements that you didn't think of while plotting the idea; you'll often realise that something you wanted to build requires more features than you initially thought

All of this comes naturally once you have written a prototype version, where things aren't built to be perfect but to test the idea. You can also test the market if you've built something that doesn't exist out there and you want to gauge the attention; again, it doesn't have to be perfect, it's a prototype after all.

The point is, after building this you have a good idea of the things that are required, the things that work, the things that are iffy, and the conflicts in thought you need to be aware of, and finally the base scope of the project is crystal clear.

Next step? Rebuild it from scratch while keeping everything in mind, or refactor the code to match all of this. Both approaches have their own issues.

Refactor

We'll start with the easier one: refactoring the code to be well structured, where you add or remove services, tech, and code to accommodate the new requirements and be a little more robust for later expansion. If you've been doing this enough, chances are the initial prototype already had your general scalability in mind and this is a very easy phase for you.

Chances are you already knew that the language you're using is perfect for the use case, you had a good idea of what was needed from the start, and the additional requirements you figured out while building are just hiccups and not a huge pain to fix.

Starting from Scratch

This is hard for a lot of people, but you can always start from scratch once you are done with the prototype. This approach needs you to make sure you understand why you are starting from scratch:

  • Is there a better tech that can handle the work you've done?
  • Do you need to change the language altogether?
  • Is it easier to re-write than patch broken stuff?

If you say yes to two or more of the above, it's better to start from scratch.

If there's a better stack for it, you should definitely pick it up and set up a solid base for the project, which will be good in the long run. If you need to change the language altogether, then it's easier than writing interop code that will need a lot more maintenance while you're doing the migration, plus you'd end up breaking and fixing a lot more than needed.

A lot of devs wouldn't want to start from scratch because

  1. It's boring to do it again
  2. You can do it with well-thought-out refactoring (which could get hectic in a very big project)

Either way, use your best judgement or ask people for an opinion on what they would have done. The point is: always prototype before building the real thing, and know that you might have to scrap it or heavily refactor it to make a polished product which can then be pushed to market.

A non-prototyped product is going to have more failure points and patches than one that was built again with all of this in mind.

Keep an open mind to learning and that's about it for now.

Adios!

]]>
Mon, 25 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210304-If-youre-seeing-this-it-all-worked!.html If you're seeing this, it all worked! https://reaper.is/writing/20210304-If-youre-seeing-this-it-all-worked!.html If you're seeing this, it all worked!

https://reaper.is now has its own inline editor for me to write blog posts with.

Supports markdown

and obviously since you do see this post up on the blog right now, it all worked out well.

]]>
Mon, 04 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210304-Themer-and-how-you-can-handle-dark-mode-a-lot-more.html Themer and how you can handle dark mode a lot more gracefully https://reaper.is/writing/20210304-Themer-and-how-you-can-handle-dark-mode-a-lot-more.html Themer and how you can handle dark mode a lot more gracefully

A few days back I was redesigning the long lost todo app from my repositories, and I ended up liking my selected color scheme and its dark variant. This led to a simple dark and light toggle that I wrote in about 20 lines of JS, simply changing a key in local storage and handling that change and its edge cases accordingly.

10 mins after this, I realised that commitlog-web could take advantage of the new color scheme. The web version of it is written in golang and html templates, so I needed something vanilla, and I just ended up using the above code from the todo implementation. At this point it's all good, but then a small issue: it would take the stored theme instead of the system preferred theme, and for someone whose theme changes automatically over the course of the day, this was a problem.

Now most people would be fine with just the prefers-color-scheme media query, but I don't want to assume what scheme the user would want for my particular app; I want them to be able to choose between system, light, and dark, and this is where themer got created.

It's like 200 lines and you can probably understand it by reading the source code, but I'll go through the algorithm just in case.

Source Code

Also, you can just install themer and use it if you'd find that easier but here goes.

Requirements

  1. Ability to switch between system,light,dark.
  2. As a developer, the experience of just adding one button, pointing the library to it, and having it work seamlessly.
  3. As a developer, the ability to customize the toggles when needed so a function export that can handle the same context.
  4. Permanent storage of the selected theme.

The Plan

  1. Since there's a need for context, we are going to use a Prototype Function declaration for this library (more on that in a few mins).
  2. Ability to customize the button, so the button won't be created dynamically but picked from the config provided to the library. I wanted a quick setup, though, so the library will handle the icons inside the button, just not the button's creation and styling.
  3. Write a function that can be exposed to the instance so that if needed, the person can create custom toggles programmatically.

Code Flow

  1. We define a prototype function first. A prototype function is basically the vanilla JS way of making/writing classes; it gives you the ability to add pre-defined methods to an instance created via the function as a constructor. An example of this would be Date.

So, first piece of code.

function Themer() {}
  2. We need it to accept a config so that we can select whether we handle the toggle ourselves or let the user handle it for us. Also, we check whether the user has an existing theme value or not.
function Themer(config) {
  let element = config.trigger
  if (element) {
    // Check if the trigger was passed a class string or an id string and convert it to a proper html node ref
    if (typeof config.trigger === 'string')
      element = document.querySelector(config.trigger)
  }

  // existing state for the theme , fallback to system if nothing is found
  const defaultState = localStorage.getItem('theme') || 'system'
}
  3. Now, for the actual toggle, all we do is set the body tag to have an attribute called data-dark-mode. If this is present, your CSS can override the default light mode variables, or you can write custom CSS with this as a selector.
body[data-dark-mode] button {
  background: white;
  color: #121212;
}

Though just resetting the variables would be easier; you can find an example here

  4. All that's left is to find out which theme we are on and which one is supposed to be next, and this is done on the click of the trigger. Also, remember we have to expose the function, so we isolate that logic, and we need to make sure the same functions are executed when the system preference changes while the set theme is system.

No use posting the snippet since that's basically the whole index.js, which you can read.
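Still, a rough sketch of that toggle logic, assuming the same localStorage key and data-dark-mode attribute from earlier (this is the shape of it, not themer's exact code):

```javascript
const THEMES = ['system', 'light', 'dark']

// cycle system -> light -> dark -> system
function nextTheme(current) {
  return THEMES[(THEMES.indexOf(current) + 1) % THEMES.length]
}

function applyTheme(theme) {
  // resolve 'system' to whatever the OS currently prefers
  const systemDark = window.matchMedia('(prefers-color-scheme: dark)').matches
  const dark = theme === 'dark' || (theme === 'system' && systemDark)
  if (dark) document.body.setAttribute('data-dark-mode', '')
  else document.body.removeAttribute('data-dark-mode')
}

// on trigger click: advance, persist, apply
// element.addEventListener('click', () => {
//   const next = nextTheme(localStorage.getItem('theme') || 'system')
//   localStorage.setItem('theme', next)
//   applyTheme(next)
// })

// re-apply when the OS preference changes while the theme is 'system'
// window.matchMedia('(prefers-color-scheme: dark)')
//   .addEventListener('change', () => {
//     if ((localStorage.getItem('theme') || 'system') === 'system') {
//       applyTheme('system')
//     }
//   })
```

The click wiring and the media-query listener are commented out because they need a DOM; the two small functions carry the actual decision logic.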

Hope you liked the post,

Adios!

]]>
Mon, 11 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210305-UI-Overhaul.html UI Overhaul https://reaper.is/writing/20210305-UI-Overhaul.html UI Overhaul

I'm not really good at post titles, so going to keep them short from now.

A really old project of mine started gaining a bit of attention the past few days, and since I've been planning on making it look good for quite a while, I thought I should just sit down and redesign it once and for all.

So, when I woke up at 4 , unable to go back to sleep, I decided to make a design change.

Old Todo , this is what it looked like and here's the new one, Todo

I'll add an image for everyone else to have a look at

So let's go through everything that's there.

Features

  • the obvious one first, Dark Mode Support
  • Addition of tasks, it's a todo list (what were you expecting?)
  • Filter by state, you toggle between, All, Completed, Pending
  • Power menu to change theme and filter by state(coming soon)
  • PWA, you can install it if you're using chrome on desktop and Safari (Add to Home) / Chrome (Add to Homescreen) on iOS and Android respectively
  • Offline support (cause pwa), and your tasks stay in your browser, there's no sync, though you can share an encoded link to people who you'd want to share the tasks with.
  • Import tasks from shared URL into your task (coming soon)

And as always, it's free. Though you should try supporting the DEV!

]]>
Mon, 05 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210305-reaper.im---Seamless-posts-addition.html reaper.im - Seamless posts addition https://reaper.is/writing/20210305-reaper.im---Seamless-posts-addition.html reaper.im - Seamless posts addition

We've had this blog for a while now, and the typical flow of writing a post is me going to Mark, writing the post markdown, exporting the file, adding the metadata to the file (title, publish status, date), and then pushing the repository after checking that all of the above were done properly.

Mark exists just because I've used other tools and Typora is the only one that comes close to being lightweight and aesthetic. While I do use Typora when I'm on the Mac, I write a lot of these posts from an iPad, and since Mark is just a web app it works well there. As for pushing the repo and creating the file, it's all done using gitpod. Pretty easy to do, but yeah, a good amount of window switching.

Adding Integrations to Mark

I like how the new UI on it looks, so my second plan was to add the ability to log in via GitHub on Mark, select a repository you'd like to add the markdown to, and then give the path in the repository. That would've been great and I probably will do it sometime, but I wanted a little more automation, since the metadata addition would still be needed, and I wouldn't want to generalise Mark to have a datepicker when it's just for a niche use-case. I do have a plan for something similar, so let's hope I get enough time this weekend to start on it.

Scraping BuyMeACoffee

The first approach was something I mentioned in this post, which involved scraping post data from another site whose editor I liked. While that would work, we'd lose the offline capabilities of the repository, which I didn't want, and adding a scheduled sync action wouldn't be optimal either.

The easiest approach

The last approach was to just use a password to log into the site, add posts from a simple text area, and then push them into the repository using GitHub's API, though there were a few security risks.

  1. The password could be bruteforced.
  2. The attacker could throw as many files as he wanted to my repo.
  3. Obviously, he could post whatever he wanted

So, we put a little more thought into it and ended up blocking this a bit. The site uses an OTP approach instead: it mails one of my random non-public emails an OTP that lasts for 45-60 seconds. This kinda gets rid of the bruteforce, but it's still just 6 digits and computers can kinda get through that, so the next block was to push all these posts to a separate branch and create a PR for the main branch.

This does 2 things.

  1. You cannot post directly to the deployed public version.
  2. I'm notified for the PR, so I'd know if there was activity that wasn't from me.

Again, there are still things an attacker could do, but a little consideration in blocking them for a while is better than leaving an open door.

Thus, the last post you saw was just me testing the whole flow after writing it all. There's still work in terms of security that could increase the friction for an attacker, but I've got other tools I need to work on, so we'll get to it as soon as I get time.

Adios!

]]>
Mon, 05 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20210329-update-29-03-2021.html This Weekend in BarelyHuman Dev Labs - March 27th and 28th https://reaper.is/writing/20210329-update-29-03-2021.html This Weekend in BarelyHuman Dev Labs - March 27th and 28th

Quite an eventful week I tell you, did enough to satisfy my hunger for building tools.

Let's get to what the week started with and what we ended at.

mytag1

The same app I mentioned last week had a few more additions; again, I'm just helping the developer out and haven't made enough contributions yet to make a difference. The app is a very simple implementation of Google Photos' tagging feature, being able to detect objects. This is done to avoid sending and storing data on Google's servers, using your own device's storage to handle the searching and indexing for normal users.

Still getting developed so let's wait for it.

reaper.im2

Yeah, this blog. If it isn't obvious, the blog was redesigned to handle dark mode and to generate html files instead of using a live router to handle the routes; each markdown file is now generated as a simple html file. Yeah, it's a pretty old concept and basically how most SSG toolsets work, which then have their own router to handle some routes and act as a middleware for the browser requests.

Now, the reason for doing this was

  1. I needed to build my own static generator to experiment some ideas with
  2. It's fun

Now this has definitely broken the older links everywhere, and I might have to sit and fix them everywhere, but instead I'll probably set something up to redirect all old links to old.reaper.im.

musync3

Another small tool I built last week. It started as a web app, but heroku and other existing free solutions wouldn't work well since the scheduling would become very limited to just the architecture I use it with, so I ended up moving it to a simple binary that does it for me. The app is just a simple library syncer, as I mentioned in last week's update post: it takes in your client creds from Spotify and moves all the tracks from the user library to the given target playlist.

Not something everyone needs but I like to have a shareable playlist that has all the tracks I have added to my library.

That's basically all I worked on last weekend. I also planned for tillwhen to have a minimum charge instead of depending on donations, since I plan on putting all the other mini projects aside and focusing on tillwhen as a dedicated business. Just a plan, so let's see where that goes.

That's it for now, Adios

]]>
Mon, 29 Mar 2021 00:00:00 +0000
https://reaper.is/writing/20220418-the-esm-cjs-problem.html The ESM and CJS Problem https://reaper.is/writing/20220418-the-esm-cjs-problem.html The ESM and CJS Problem

Disclaimer: I understand the advantages of moving to ESM and support people doing so, but I'm not a fan of moving everything to new tech while breaking tech that was already working. Users share the blame for not checking release notes, but finding solutions is part of our work, hence this post.

The problem

tldr;

There's a huge number of bundlers, each with their own implementation of the ESM spec, so we either need to support each one or at least find common ground that covers most bundler setups. (Most bundlers are now on par with the official spec; this post goes through things you can do to reduce friction in older setups.)


Creating ESM and CJS packages that work in most environments: I wasn't even aware this was an issue until I started writing my own packages.

The problem comes from how bundlers handle these files and the different versions of ES syntax available today.

Most user(developer) setups involve some form of configuration to decide what ES syntax they can write in their code.

Example.

const array = [1, 2, 3]
const shallowCloned = [...array]

// this might not work for you if the ES version your
// transpiler targets doesn't support spread syntax.

So, as a package maintainer, we have to be sure of what version of ES syntax we plan to support and compile/transpile our code so the user setups can handle them.

I wish it ended there, but it doesn't, as we have to keep moving forward with the standards. A new standard emerged a while back, one that browsers had already been using for a while, called EcmaScript Modules (ESM). ESM is basically a way of treating modules as asynchronous sources of javascript code / behaviour.

This makes it possible to use cached modules from a remote source (at least in browsers). Having it in node environments would give us a more unified language standard and reduce the need for bundlers, as the esm package could be shipped as is.

The support for this spec was added behind an experimental flag in earlier versions of Node 12.

A few package maintainers decided to move to ESM right away, and their newer package versions (major versions, so no breaks for existing users unless they ran npm install pkg@latest) would break setups that weren't respecting the ESM standards.

Users (developers) would just run npm install <package-name>, which installs the latest version, and half of them never read docs, so they had no idea what was going on.

Oh, you should've seen the number of "cannot use ESM in CommonJS" issues that were raised by devs during this phase.

Bundlers that added support for ESM as patches handled the issue pretty well. But bundlers that were undergoing changes in architecture and API took a little longer to get it working, and most people who reported these compat issues were on those bundlers.

"Where's the problem? You're just ranting, Reaper". More like giving context, but this is where the problem is: the diverse nature of setups in the javascript world is what's responsible for the existence of this problem. How we mitigate it is up next.

The Workarounds

"Workarounds", cause these are not concrete solutions.

If you wish to support both sides of the party, CJS and ESM, the points mentioned here might help you both as a maintainer and as a library user, though there are certain behaviours I didn't spend much time researching. Everything mentioned here is based on my personal work with these kinds of packages and, in some cases, on browsing the codebases of bundlers where such an issue was raised.

The simplest one (for maintainers), not so much for users.

For the maintainer: name the files .mjs and ship the package. This will trigger errors in the user (developer)'s bundler, and they can add support for .mjs accordingly. You can also keep the extension as .js and instead set the type field in package.json to "module":

{
  // @filename: package.json
  "type": "module"
}

For the user: most bundlers come with configuration to handle such cases. What you're looking for is a way to add support for custom extensions and transpile them as normal JS/ES syntax.

NOTE: If the library is using very specific ESM syntax like import x from 'node:fs', you might need to check whether your bundler supports protocol-based imports. If not, talk to the maintainer and figure out if they're willing to help you with it, or if they even have it in scope; if not, you might want to look for an alternative library or pin an older version of the lib.

Taking the middleground

This is where I'll be standing till the ecosystem has stabilised (at least for me), a decision I've taken mainly due to the bundlers and setups used by the frameworks I work with the most, in my case React and React Native, plus a lot of system-level CLI libraries.

Like everyone else rooting for ESM, I'm also waiting for the point where I won't have to use a transpiler anymore. Sounds like an amazing place to be, but I'm not the only developer, not everyone is on the same page, and people want CJS to stay for longer, so we're going to work for both sides.

Before actually writing your package, you need to decide where the library is going to be used.

Library for just react native?

Write it like you already do; babel will take care of it. You don't have to deal with ESM vs CJS for now.

Library just for the web?

Write it in ESM; it already works in all major browsers. But maybe add an IIFE/UMD version just in case, it's not that hard to generate these from existing code.
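Generating that extra IIFE build doesn't take much. For example, with esbuild it's a one-line script (a sketch; the entry path and global name here are placeholders):

```json
// @filename: package.json
{
  "scripts": {
    "build:iife": "esbuild src/index.js --bundle --format=iife --global-name=MyLib --outfile=dist/index.iife.js"
  }
}
```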

Universal package?

Well, this is where the fun is , isn't it?

Let's look at the bundlers that should be able to work with our package.

  1. Webpack 4/5
  2. SWC
  3. esbuild
  4. rollup (microbundle, wrap, etc.)
  5. Parcel
  6. Metro Bundler
  7. sandpack (codesandbox's implementation)
  8. Skypack (literally has its own patched version of react for ESM, and of other major libraries)

There are a few more actually, but 8 sounds like a nice number to stop at; daunting enough already. You can add Typescript's tsc to the list, as you can kind of use it to generate a single file.

Cool, getting to the fun part.

Configuration

Starting with package.json

The entry point of your package decides what most bundlers see and this is what they look for, when trying to figure out what should be allowed to import and what should be ignored.

This segregation helps with handling private dependencies or internal code that you don't want exposed.

This can also be done by having a single index.js file exporting modules that should be exposed, which is the easier way out.

But, when you work with packages that might need multiple entries, you'll have to configure a few things.

A good example of this is jotai and zustand, which have imports like the following:

import {} from 'jotai'
import {} from 'jotai/utils'
import create from 'zustand'
import {} from 'zustand/middleware'

This gives the user a clean import and makes it obvious as to what's being used and from where.

How do we achieve this?

This is how the package.json for something like this would look like if all you were writing for was ESM.

// @filename: package.json
{
  "exports": {
    ".": "index.js",
    "./middleware": "middleware.js"
  }
}

We aren't working with just ESM, so let's compile a few CJS versions using whatever bundler and add them to the entry points as well.

// @filename: package.json
{
  "exports": {
    ".": {
      "import": "index.js",
      "require": "index.cjs"
    },
    "./middleware": {
      "import": "middleware.js",
      "require": "middleware.cjs"
    }
  }
}

This is what Node's spec calls conditional exports. We're basically asking the bundlers to make sure they import the right file when working with our package.

Just doing this should solve the issue in your user's code editor, because most editors rely on Typescript's language server, and this satisfies the conditions for that to work.

Later versions of Node 12 need path specific exports, so we'll have to change the exports a bit.

// @filename: package.json
{
  "exports": {
    ".": {
      "import": "./index.js",
      "require": "./index.cjs"
    },
    "./middleware": {
      "import": "./middleware.js",
      "require": "./middleware.cjs"
    }
  }
}

The change: adding path specifiers ("./") to the file references. This should still work with the LSP engine, but the bundlers need a little more configuration, so let's fix that.

Webpack 4 Support

Webpack 4 uses the module field to find the ESM files, so add module to the exports:

// @filename: package.json
{
  "exports": {
    ".": {
      "import": "./index.js",
      "module": "./index.js",
      "require": "./index.cjs"
    },
    "./middleware": {
      "import": "./middleware.js",
      "module": "./middleware.js",
      "require": "./middleware.cjs"
    }
  }
}

If you're working with a single entry file, you can add the module field at the top level of your package.json:

// @filename: package.json
{
  "name": "pkg",
  "module": "./index.js",
  "main": "./index.cjs"
}

In certain cases you might have to tell babel-loader to treat .mjs as a .js file; you can find this online by searching "how to configure webpack 4 for .mjs files".

Metro Bundler Support

I work with react native a lot, and most of my packages start as a utility for one of my work-related apps and are then turned into a generic package for all platforms.

If you can keep the library limited to one entry file, you don't have to write the exports section at all. You can do something like this, and it should work in both webpack and metro, no issues.

// @filename: package.json
{
  "main": "./dist/index.js",
  "module": "./dist/index.mjs",
  "types": "./dist/index.d.ts"
}

Multi-entry packages? Don't worry, I'm still here.

Set everything up as in the Webpack 4 section, then add one additional field and one extra export. Also, if you used .cjs like me for commonjs files, you'll have to let the user know that they need to tell metro to treat .cjs as a valid format.

// @filename: package.json
{
  "exports": {
    // as stupid as it looks, it's needed for metro or it'll complain that it can't import cause it wasn't exported
    "./package.json": "./package.json",
    ".": {
      "import": "./index.js",
      "module": "./index.js",
      "require": "./index.cjs",
      "default": "./index.cjs",
      // you can also add types if you wish to, bundlers might not, but the ts engine does so it should work in most cases.
      "types": "./index.d.ts"
    },
    "./middleware": {
      "import": "./middleware.js",
      "module": "./middleware.js",
      "require": "./middleware.cjs",
      "default": "./middleware.cjs"
    }
  }
}

Metro configuration if you used .cjs; the same goes if you used .mjs for ESM:

// @filename: metro.config.js
module.exports = {
  resolver: {
    // note: metro expects extensions without the leading dot
    sourceExts: ['mjs', 'cjs', 'js'], // <= the user will have to add this
  },
}

That's all the information you need to support most of them. esbuild, rollup, parcel, and sandpack handle the generic exports spec pretty well, so releasing mjs and cjs files just works out of the box for them.

But there are always people on a setup like Node 10, and if I were maintaining something like Jotai, I wouldn't want to leave them hanging. There are other steps that were taken in libs like these, and you can read about them in How Jotai handles package entries.

End note

These points are just to mitigate the issues when you're using or writing packages. I wish there were better ways to do things, but this is the closest you can get right now.

I'd very much like for the majority of the ecosystem to get compatible with esm without having to configure things in each setup like I do right now.

I think the newer versions of metro should already treat .mjs and .cjs as normal.

Also, if you are reading this and are still using webpack 4, please, upgrade to webpack 5, please!

]]>
Mon, 18 Apr 2022 00:00:00 +0000
https://reaper.is/writing/20220426-mac-ci.html Setting up a remote Mac machine for Gitlab CI https://reaper.is/writing/20220426-mac-ci.html Setting up a remote Mac machine for Gitlab CI

There are a thousand posts on this topic online, and not one of them explains that you need to modify the location of the launching plist for the runner to survive sudden restarts.

Except maybe this post from symflower

Even that has it at the end, so you hardly ever find it on first search unless you're specifically looking for it, and hence, this post.

This is a reference post and may not be detailed enough for a beginner; if you do need help with it, you can just contact me and I'll try to help you out.

Let's get done with the basic steps. These are all to be done on the remote screen / vnc app and not via ssh.

  1. Start the Mac instance, obviously.
  2. Set up Xcode, Android Studio, or whatever the hell you feel you need for your builds to work.
  3. Next up, brew.sh. By the way, if you didn't already know, you can actually install android studio with brew: brew install --cask android-studio
  4. Use brew to install gitlab-runner
  5. Register the runner; you can find the steps in the first google result for "registering self hosted gitlab runner"
  6. Start the service brew services start gitlab-runner

Now we get into the part where I can do things over SSH.

Now connect via SSH and copy the below XML to /Library/LaunchDaemons/homebrew.mxcl.gitlab-runner.plist.

You can now restart the machine via SSH, and even if the user isn't logged in via the GUI, the service should still start.

Replace #user# with the username of the CI user, preferably a non-root user.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/opt/homebrew/bin:/opt/homebrew/sbin:/usr/bin:/bin:/usr/sbin:/sbin</string>
  </dict>
  <key>KeepAlive</key>
  <true/>
  <key>Label</key>
  <string>homebrew.mxcl.gitlab-runner</string>
  <key>LegacyTimers</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/opt/gitlab-runner/bin/gitlab-runner</string>
    <string>--log-format=json</string>
    <string>--log-level=debug</string>
    <string>run</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/Users/#user#/logs/gitlab-runner.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/#user#/logs/gitlab-runner.log</string>
  <key>UserName</key>
  <string>#user#</string>
  <key>WorkingDirectory</key>
  <string>/Users/#user#</string>
</dict>
</plist>
]]>
Mon, 26 Apr 2022 00:00:00 +0000
https://reaper.is/writing/20220503-multithread-in-nim.html Multithreading in Nim - Notes https://reaper.is/writing/20220503-multithread-in-nim.html Multithreading in Nim - Notes

Well, this topic has made a few people leave Nim, and a few others just couldn't find enough examples to help them with it, so they gave up.

Either way, the nim developers who stuck around have found different ways to get it done. A few of them involve changing the memory model of the compiler, and if you wish, you can even change the garbage collector separately to make multithreading a little easier.

Now, I'm not going deep into any of the topics since this is just a collection of notes and snippets that I can refer back to instead of searching the documentation again for something similar.

High Level Threading

High level threading can be done by importing threadpool and then using spawn to create threads; the running of these is handled by the threadpool.

A simple example would look like this

## compile with --threads:on
import "std/threadpool"

proc worker(value: int): int =
    return value

let result: FlowVar[int] = spawn worker(1)
var fromFlow = ^result
echo fromFlow

To explain.

  1. "std/threadpool" adds in the macros and types required for handling FlowVar and the ^ syntax.
  2. the procedure worker is what's going to run in a thread; for now we're just running one thread, which returns the value passed to it, pretty useless but gets the point across.
  3. result, as mentioned above, gives us a FlowVar generic which has been cast to the type int, since we're expecting a number from the thread.
  4. spawn is used to create a thread, and worker runs with the passed value 1
  5. We wait for the thread to finish its work using ^, which is a blocking operation; if you have a sequence of such FlowVars, spawn all the workers first and only block afterwards, when collecting the values returned from the spawned threads.

Low Level threading

This is the approach I chose for mudkip. For those who don't know, mudkip is a simple doc generator I'm writing for consistency and speed when writing documentation.

The approach is similar to the above one but we're going to dig deeper since instead of the macros handling the pointer passing for us, we'll be doing it manually.

The below snippet is from mudkip at this point and might change in the future but for now let's look at the code.

## Import the needed standard lib helpers
import "std/os"
import "std/times"
import "std/locks"

## Define the base structure of what the file meta
## is going to contain
type FileMeta = object
  path: string
  lastModifiedTime: times.Time
  outputPath: string

## We then take in consideration of what the thread would contain.
## in this case we want the thread to have access to a
## shared state between threads, the point of the shared state is
## so that if a new file gets added and the chunks change, the threads
## process the new chunks accordingly
type ThreadState = object
  page: int
  statePtr: ptr seq[seq[FileMeta]]

var
  ## we now declare a channel variable, which will be needed to pass through a very
  ## simple string message around, you can solely depend on channels for passing
  ## data around in the threads but that can get complex to handle very quickly.
  chan: Channel[string]

  ## Next we are defining `threads` of the type `ThreadState`, this is necessary when
  ## working with low level procedure `createThread` since you need to cast the type
  ## of the parameters you send with it.
  threads: array[4, Thread[ThreadState]]

  ## Up next, we have the actual state and a pointer reference to it.
  ## The pointer reference is what we'll be passing to the workers
  ## while the original is guarded with a lock so that when locked
  ## no other thread can modify it. In our case the threads only
  ## read and never write to the state, but if you do plan to write
  ## to it, make sure you `acquire` and `release` the lock around the mutation.
  lock: Lock
  sharedFilesState {.guard: lock.}: seq[seq[FileMeta]]
  sharedFilesStatePtr = addr(sharedFilesState)

## Just taking the length which I use for creating chunks
## of the list of files I need to process
let maxThreads = len(threads)

## This is the worker function which takes in the `ThreadState`
## param that we defined before and it handles the base of the file
## watching logic.
## Each worker runs at an interval of 750ms to check if any file
## in the batch it's been given has changed.
## if it has changed then it'll send the main thread a request to
## update the state and also to reprocess the file.
## This is done with the channel, the channel sends the path to the file
## and since I'm not handling anything else in the thread, the main thread
## reprocessed the path that it gets and updates the state for me
proc watchBunch(tstate: ThreadState){.thread.} =
  while true:
    let data = tstate.statePtr[]
    let batchToProcess = data[tstate.page]

    for fileMeta in batchToProcess:
      let latestModTime = getLastModificationTime(fileMeta.path)
      if latestModTime != fileMeta.lastModifiedTime:
        chan.send(fileMeta.path)

    # force add a 750ms sleep to avoid forcing commands every millisecond
    sleep(750)

## This is not the entire implementation so you can read the whole thing on
## the repo.
## If the procedure gets a `poll` param with true, we open the channel to start
## listening for messages.
proc mudkip(poll: bool) =
    if poll:
      echo "Watching: ", input
      chan.open()

    ## Then we go through each index of threads and use `createThread` to cast to
    ## type `[ThreadState]` and pass it a new threadstate which tells the worker
    ## which index of the chunk does it have to work with. Which is passed with
    ## the `page` property in state and the whole set of chunks are passed as
    ## a reference pointer, `sharedFilesStatePtr` on the `statePtr` property.
    for i in 0..high(threads):
      createThread[ThreadState](threads[i], watchBunch, ThreadState(
        page: i,
        statePtr: sharedFilesStatePtr
      ))

    ## Then we on the main thread keep waiting for messages on the channel
    ## as soon as we receive one, we update the shared state in `updateFileMeta`
    ## and we process the file with `fileToHTML`
    ## this one is timed 500ms instead of 750ms to have a small rest period for
    ## the CPU, or we'll overload it with polling instructions.
    while true:
      # wait on the channel for updates
      let tried = chan.tryRecv()
      if tried.dataAvailable:
        updateFileMeta(input, output)
        echo getCurrentTimeStamp() & info("Recompiling: "), tried.msg
        fileToHTML(tried.msg, output)
      sleep(500)


proc ctrlCHandler() {.noconv.} =
    ## Finally we close the channel, sync all the threads and deinit the lock as it
    ## won't be used anymore.
    chan.close()
    deinitLock(lock)
    quit()

## This just registers the above handler as the ctrlC action
setControlCHook(ctrlCHandler)

That's definitely a lot to read, but it's still a simple implementation. I could've made it more functional with a threadpool, but in my case the threadpool would still block the main thread with ^ and I didn't want that. I wanted the main thread to also listen for messages, since it's what actually handles processing the files; the threads are just there to listen for changes, not to process the files themselves, since that would need a lot more communication between the threads to work properly.

Also, since the main thread only processes one file per thread request, it's still fast enough on most systems.

]]>
Mon, 03 May 2022 00:00:00 +0000
https://reaper.is/writing/20220509-graphql-in-digital-studios.html Full Stack Development with GraphQL in a Digital Studio https://reaper.is/writing/20220509-graphql-in-digital-studios.html Full Stack Development with GraphQL in a Digital Studio

GraphQL is what a lot of people are building on right now. The advantages, at a high level, are the ones below.

  1. Concise, client-decided payloads
  2. No route and parameter handling
  3. Faster responses since it's accessing a graph instead of a regex matching algorithm
  4. Type-safety

and a few more that you can find

I agree with each of the above, and these would make a serious difference if writing REST wasn't easy, and I'm talking about REST backends that some senior developer didn't decide to make complex for you. Because, trust me, I can make it very hard to even write REST APIs.

So, how did I actually make GraphQL a little more worthy and easy to write in, for a digital studio that deals with quite a few projects in parallel?

Context

To bring you up to speed, I work with a creative/digital studio that creates UI/UX mockups and, if you feel like it, the functional app based on them as well.

Now, this business works because people need apps, and not everyone can find a development team easily; we already have one, so we solve the technical side of things for you.

Considering the rise in apps and the demand for more web-based products, we deal with quite a few clients, so we have no option but to make it very easy for ourselves to create APIs in seconds. I wouldn't go all the way into a No Code environment, cause my control complex isn't that easy to subdue.

That's also the reason why Loopback 3 was the base of all my projects: amazing mixins and middlewares, plus the ability to handle file uploads and signed-URL fetching using those same mixins, while keeping all logic modifiable in case the requirements change.

Loopback 3 had it's EOL a few years back.

Now, to replicate a subset of the same features, I did write a few internal tools that'd work with express to handle the same things, but the CRUD generation tied me to a single ORM and I wanted a quicker approach, at least for crud generation.

The following is basically what the current setup looks like for us, with its pros and cons.

The CRUD Advantage

The first framework of choice was Hasura

Hasura gave the following advantages

  • Auto generated CRUD
  • A great db definition website portal
  • Ability to autogenerate migrations by using the Hasura CLI's proxy connection
  • Handling permissions to limit the access type based on custom user roles.

All this was good for most applications we were building, but that's all we had Hasura for. For anything that needed business logic, we had 2 options

  1. Use the hasura webhooks to use a single interface for handling the request and response
  2. Writing a simple REST Service for the custom business logic

We went with the first approach, to be able to scale to a good microservice arch with hasura as the API gateway.

This went poorly because of the limitations of Hasura's request types at that point in time: we often fell back to the JSON and String primitives when dealing with huge nested outputs that were already typed in hasura and had to be retyped when writing the REST service. This added to the development time, and after a while we were just using the JSON primitive wherever possible.

The 2nd approach is what we're using right now, and it's a lot simpler in terms of handling, but since the database for hasura is also containerized in our setup, we had to do a few hacks to be able to use the same database url when developing.

Now, this whole advantage is limited to a few frameworks, but because graphql is defined with types, it's easier to write generators for it.

Which is something you can get most ORMs to do, by using the type reflections of their data models.

Another example of this is Prisma + TypeGraphQL + the TypeGraphQL generator for Prisma.

And so, we started moving away from Hasura and creating something similar using existing tools and writing generators where needed.

Streamlining Developer Experience

The second part of working with loopback 3 was that a junior dev never really had to worry about any of the mixins or middleware that were set up. Their work was limited to writing business logic and, in some cases, writing replication engines for analytical use cases.

Now, this wasn't because of loopback 3 itself, but because the entire setup was that well built. Anything that didn't need to be taught or explained was handled by the setup and maintained by one of the senior devs, or by juniors who got curious and tried to understand the setup.

  1. Generate SDK? Don't worry, I'll do it for you.
  2. Update Database to sync with your model changes? Done
  3. You mentioned a relation with the files table? here's your signed url in the response
  4. Using Dynamic Enums? Here's what the enum points to

All this handling was done using model mixins and loopback's boot scripts.

Getting a similar experience with existing tools would need a hacky approach but let's go through how that was handled.

Boot Scripts

Most boot scripts were also defined in package.json because I needed to run them separately in certain cases.

{
  "scripts": {
    "boot:models": "ts-node boot/models.ts"
  }
}

Next, there's a boot/index.ts file, basically an npm-run-all script that runs all the npm scripts prefixed with boot:

const runner = require('npm-run-all')

const commands = [
  {
    label: 'Running Boot Scripts',
    cmd: 'boot:*',
  },
  {
    label: 'Starting Server',
    cmd: 'start',
  },
]

const info = (msg: string) => `\x1B[36m${msg}\x1B[0m`
const success = (msg: string) => `\x1B[32m${msg}\x1B[0m`

console.log(
  success('> Running in sequence:\n    ') +
    info(commands.map(x => `>> ${x.label.trim()}`).join('\n    '))
)
runner(
  commands.map(x => x.cmd),
  {
    parallel: false,
    stdout: process.stdout,
    stderr: process.stderr,
  }
)

To simplify: every time the server starts, the prisma models are re-generated and the generators for dynamic enums and files are executed. There's more stuff, but that's very specific to the use case.

Dynamic Enums

The need for dynamic enum types / app-layer enums is to avoid having to write migrations for every enum change you make, which involves having the migration handle deletion and recreation of the enum. These are better done in transactions, and I wanted to avoid doing this altogether.

We call these Options or Constants, and they were handled via model mixins in loopback 3.

The flow would be something like this

We'd define options like so

const options = {
  TRANSACTIONSTATUS: {
    paid: {
      value: 1,
      label: 'Paid',
    },
    pending: {
      value: 2,
      label: 'Pending',
    },
  },
}

TRANSACTIONSTATUS is the grouping identifier, paid is the enum accessor, and value is what will be saved in the DB against the field, in this case transaction_status.

So, if I had an order with transaction_status as 1, that basically means it's paid. Mixins were then used to provide the client side with the label it's supposed to show.

The mixin would run before the model was accessed and add an additional field in the response.

It'd look for fields mapped with Options and use the identifier TRANSACTIONSTATUS (defined in the model definition) to match the value and return a new property in the response.

{
  "transaction_status": 1,
  "transaction_status_label": "Paid"
}

These can be implemented in GraphQL using field resolvers, which you can either write manually for every option mapping or generate using ts-morph or something similar, hooked into the boot scripts.

The boot script would do the following things.

  1. Generate a field resolver class for every entity in option mappings.
  2. Add imports for these resolvers in the graphql schema entry point. (depends on how you process your graphql requests)

The mapping and option definitions are defined in a single file to avoid having to change context for something so simple. It looks like this.

export const options = {
  TRANSACTIONSTATUS: {
    paid: {
      value: 1,
      label: 'Paid',
    },
    pending: {
      value: 2,
      label: 'Pending',
    },
  },
}

export const optionMappings = {
  entity: 'Order',
  mappings: [
    {
      identifier: 'TRANSACTIONSTATUS',
      field: 'transaction_status',
    },
  ],
}

And the generator script goes through both values to create the field resolvers
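Framework aside, the lookup such a generated field resolver performs boils down to a few lines (a plain-JS sketch using the definitions above; in the real setup this logic would live inside a generated type-graphql field resolver, and `resolveLabel` is a hypothetical name):

```javascript
// Mirror of the options map defined above.
const options = {
  TRANSACTIONSTATUS: {
    paid: { value: 1, label: 'Paid' },
    pending: { value: 2, label: 'Pending' },
  },
}

// Find the option whose stored numeric value matches, and return its label.
function resolveLabel(identifier, value) {
  const group = options[identifier]
  if (!group) return null
  const match = Object.values(group).find(o => o.value === value)
  return match ? match.label : null
}

console.log(resolveLabel('TRANSACTIONSTATUS', 1)) // Paid
```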

Automatic File URL's

Handling files is no different from the above options/constants setup; the difference is the source of data for the field resolvers.

Instead of using a file for the definitions, it uses the database's table data and services/storage.ts file to get a signed url of the file that's in the files table.

  1. Check if the property belongs to the files table
  2. Then generate a field resolver for that entity's properties that are connected to files.
  3. The field resolver gets the signed URL using that relational data and adds a field with the suffix _asset_url.

So eg:

----
User Model
----
profile_pic 12


----
Files
----
id  12
path  /path/to/object

Once the generator runs it allows the graphql client to fetch the following properties

{
  profile_pic # 12
  profile_pic_asset_url # https://s3.signed.object.url/
}
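That resolution step can be sketched like this, with the files table and the storage service stubbed out as in-memory stand-ins (the real code reads the database and services/storage.ts):

```javascript
// Illustrative stand-ins for the files table and the signing service;
// the actual code queries the DB and services/storage.ts instead.
const filesTable = { 12: { id: 12, path: '/path/to/object' } }
const getSignedUrl = path => `https://s3.signed.object.url${path}`

// What a generated `profile_pic_asset_url` field resolver boils down to:
// follow the relational id into the files table, then sign the stored path.
function resolveAssetUrl(fileId) {
  const file = filesTable[fileId]
  return file ? getSignedUrl(file.path) : null
}
```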

Now, most people use a polymorphic schema for files, and that actually makes it easier to handle them, but since Node doesn't have an ORM that can automatically handle polymorphic relations, the field resolver approach is what works for us and is easier to reason about.

The 2 scripts, the options resolver and the files resolver, each make sure not to create a duplicate resolver for the same entity. Each looks for the entity's resolver and adds its field resolver code to it; if it doesn't exist, one of them will create it.

Eg: Users.ts is already generated by the type-graphql + Prisma generator, so anything extra is added in UsersExtended.ts, which is what the generators will look for

SDK Generation

This was the hackiest part of all. LoopBack 3 came with its own SDK generation utility; we, on the other hand, had to create this manually to work in a similar fashion.

  1. Setup URQL in core mode.
  2. Use graphql-codegen to generate a generic set of utilities using graphql document definitions
  3. Write a graphql requestor module for the above generated graphql request function
  4. Move this to a separate package in the repo to make sure it generates the sdk again as soon as the documents change (since this is supposed to be re-used into other clients that might come up in the future, like a mobile app)
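Step 3's requestor module could be sketched like this, with the transport injected so any client can supply its own fetch implementation; this is an assumed shape, not the actual SDK code:

```javascript
// Hypothetical requestor: takes a fetch-like transport so each client
// (web, future mobile app) can supply its own. Illustrative, not the real SDK.
function createRequester(fetchImpl, url) {
  return async function request(query, variables) {
    const res = await fetchImpl(url, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ query, variables }),
    })
    const { data, errors } = await res.json()
    // Surface the first GraphQL error instead of returning partial data.
    if (errors && errors.length) throw new Error(errors[0].message)
    return data
  }
}
```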

The file tree for this looks something like this

-| server
-| shared
---| sdk/
-----| generated/
-------| codegen.ts
-----| documents/
-------| order.graphql
-----| index.ts
-| client

This shared folder has its own scripts, but they are triggered by the client folder: as soon as you start the dev server, it starts a watcher for the documents/*.graphql files and regenerates the codegen.ts file every time one changes, giving the client a tRPC-like setup as soon as they add a new document.

Obviously, tRPC doesn't need the client to do this modification but in our case that's the added redundant work (will find a solution for it as well)

Pros of this setup

  • Less context switching for a full-stack developer
  • Full-fledged local backend and frontend setup for all developers, which drops the need for a dev server; you just need a staging server that replicates data from production to debug issues with actual user data.
  • More of a Prisma pro, but it gives us fewer migration files and lets us just push to the DB when working on an unconfirmed requirement
  • Full-fledged GraphQL: unlike Hasura you don't need to set up more services to handle responses, it's all in one, and if needed you can add REST endpoints for other stuff in the same codebase
  • Most of it can be modified to match your use case since it isn't an abstracted framework; the only parts that depend on others are the GraphQL request translation engines (Apollo, Helix, etc.), the ORM, and the frontend framework. These can be replaced, but the joining piece is Prisma, and that's something I'd like to change, though I'd lose my CRUD generation.

Cons of this setup

  • Hacky and needs a maintainer to know what the setup is built on, to be able to fix anything related to the custom generators
  • A frontend developer will have to dive a bit into the backend to handle db updates and seeding of data
  • The backend developer has to rethink how they write the schema to match the generator's standard, which means you'll have to set a standard procedure for writing and designing to make sure implementing features doesn't add much friction
  • Obviously has a lot of work for the initial setup. (boilerplate for this will be made available publicly soon)
  • Tightly coupled to Prisma for CRUD generation, though if you don't need CRUD, you can get rid of that dependency as well and use something like knex.js
Mon, 09 May 2022 00:00:00 +0000
GraphQL isn't that amazing
https://reaper.is/writing/20220525-graphql-isnt-that-amazing.html

I really didn't think it would come to having to explain this but people think that GraphQL is the holy grail at this point and that REST isn't needed anymore.

I can just say "You're wrong" and let it be, or I can get into the details and bring some sense to the standard. Let's do the 2nd one since I haven't ranted in a while.

Why does it exist?

This is the specification for GraphQL, a query language and execution engine originally created at Facebook in 2012 for describing the capabilities and requirements of data models for client-server applications

Source: https://spec.graphql.org/October2021/

So, Facebook needed a way for the client and server to communicate the data model. HTTP was still the protocol of communication, so they were limited to GET and POST requests, but they wanted the client to be able to talk to the server and explain itself better.

Here's what people imagine it to be.

Before GraphQL

client: /users
server: Here's all the users, go bonkers
client: Um, i also need the profile pics...
server: the backend developer didn't add it in the response, sorry, make a request to /users/:id

After GraphQL

client: /graphql => query users { users {id name email} }
server: ...
client: Um, i also need the profile pics... so query users { users {id name email profile_pic} }
server: ...

And honestly, I'd blame the blog posts that make it seem like this; adding GraphQL doesn't magically make it easier.

The above REST implementation would be considered bad. Talk to the backend dev and get the field added; how hard is that?

The backend developer can still make the above fail by not including the profile_pic in the attributes that I get from the db

And now, you get an empty string in the profile_pic every time.

BUT I CAN GET RELATIONAL DATA!!?
The backend developer still has to define the types for it and include it in the response. The work there hasn't been reduced; if anything it has increased, since the types have to be defined.

  1. Define types for the response, and request
  2. Define types for the relations
  3. Define types for the models of the DB

I don't see any reduced time here

So, Should we use it or not?

Well, every developer who's tested quite a few stacks would say, it depends.

But depends on what?

Here are a few things that GraphQL makes easier for a backend developer

  1. Request Model Validation is done for you, since you define it for input.
  2. Response fields are auto documented due to your types so you save time on internal documentation. (external documentation still needs work and honestly, I wouldn't want that automated)
  3. In most implementations, you don't need to worry about routing anymore since it's all just one handler processing the DSL. If this sounds like RPC then don't be shocked, it did take inspiration.
  4. Provides a way to also handle realtime setup because you get subscriptions support from most graphql engines

Now, let's get to the client side of things, or how the frontend developer's job is made easier

  1. Since model definitions are passed through the schema, you get type definitions that graphql clients can use to provide typesafety a.k.a browsing the documentation is reduced to just looking for exposed operations.
  2. Selective data, but this isn't an immediate thing: you have to learn to write fragments, which give you the ability to include additional fields based on the screen/view/page you are working on. Just creating an SDK wrapper on top of all fields doesn't really make any difference; you reduced the network load but increased the memory usage of the app, which is not really an advantage.

Eg:

# this doesn't really make it any better
query users {
  users {
    id
    name
    addresses{
        id
        street
        state
        country{
            id
            name
            shortCode
        }
    }
  }
}


# you'll have to learn to divide them so that
# they can be composed so you can selectively get
# the needed data

fragment UserFields on User {
  id
  name
}

fragment AddressFields on Address {
  id
  street
  state
}

fragment CountryFields on Country {
  id
  name
  shortCode
}

query userBaseDetails {
  users {
    ...UserFields
  }
}

query userDetails {
  users {
    ...UserFields
    addresses {
      ...AddressFields
      country {
        ...CountryFields
      }
    }
  }
}

and well, ideally I'd only create a query when it's needed more than once with the same fields, else I just use the field fragments based on the view's requirement.

Now, the client gets the exact amount of data that it needs on that exact view while still being able to scale.

Creating a single query with all fields being used everywhere basically feels like REST that was written on the frontend, and beats the whole argument of "fetching only what's needed".

Fragment writing can be a pain and redundant so shameless plug but you can use gqlfragments to generate them for you.

  3. The third advantage is not having to deal with URL suffixes, and this is both an advantage and a disadvantage. An advantage during development, as it reduces the amount of code you write for network requests; a disadvantage when you have to go through 100 /graphql requests in the network tab to find which one is making an invalid data call and then look for the operation it was passing, which can get quite irritating.

There's a few more but none that actually make any big difference.

Better than REST?

Uh... in a way, yes.

But most of what's above can be done in REST

  1. Selective fields? Easily implemented with middlewares
  2. Model validation? There are tons of libraries for that
  3. Url suffixes? OpenAPI / swagger => sdk generator. Done.
  4. Type Safety? If you are working with a language that's statically typed, this shouldn't be hard to do. JS has tRPC, which does this with shared types and reflection. You can just have a types package in your modern monorepo to do it for you.
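For point 1, once a middleware has parsed something like ?fields=id,name, the filtering itself is tiny; here's a sketch (not any specific library):

```javascript
// Keep only the requested fields of a response object; a REST middleware
// would apply this to the payload before res.json(). Illustrative sketch.
function pickFields(obj, fields) {
  // No fields requested means no filtering.
  if (!fields || !fields.length) return obj
  return Object.fromEntries(
    Object.entries(obj).filter(([key]) => fields.includes(key))
  )
}
```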

I've left out the realtime setup, because I'm sure there's enough WebSocket information online.

You'll have to add routing for REST which is definitely redundant but it's okay, you can spend 10 extra seconds to define a route definition.

BUT THAT'S SO MUCH WORK!? True, REST implementations require quite a bit of setup to get everything working, but so does GraphQL. Say it's just about how much time it'd take to start an HTTP server that can respond to a /ping request.

I'm sure you can imagine that a simple HTTP REST server would take me 30 seconds, while GraphQL would need a little more than 2-3 minutes when typed out.

const express = require('express')

const app = express()

app.get('/ping', (req, res) => {
  res.send('pong')
})

app.listen(3000, () => {
  console.log('listening')
})

Okay, took 43 seconds...

Let's try Go

package main

import (
    "fmt"
    "net/http"
)

func main(){
    http.HandleFunc("/ping",func (w http.ResponseWriter, req *http.Request){
        fmt.Fprintf(w,"pong")
    })
    http.ListenAndServe(":3000",nil)
}

About a minute.

You're telling me you'll be able to define a resolver, define its return type, add the GraphQL engine imports, implement the resolver and the processor, then add the HTTP server that reads the request and passes it to GraphQL in the same time? Nope. Not happening.

Anyway, I'm not here to bash GraphQL; I wanted to clarify what the advantages are, and clearly the above 2 examples are just jokes. I'd have to write much more code to get the same features that the GraphQL engine would give me.

  1. Validation
  2. Doc generation
  3. Playground etc, etc

So yes, the time spent to set up GraphQL is worth it; it is better than any simplistic REST implementation. But no, there's no winner here, it's a use-case thing.

It depends on how much work you're willing to put up for the setup. I did write about the amount of work it took to find a decent GraphQL working solution for work, which you can read about. Full Stack Development with GraphQL in a Digital Studio

REST is easier to iterate on without breaking the entire engine, and you can find mature sets of tools to help that backend be more stable. Making the argument that "the client decides the response" is an advantage? Dude, the backend developer can still change the response type definition and you're done for. You need to collaborate to get the app out; it's not just one of you working on it.

To other writers: stop using bad REST API implementations as examples for moving into GraphQL; I can write bad GraphQL APIs as well.

Mon, 25 May 2022 00:00:00 +0000
Contextual Helpers are easier to write but ...
https://reaper.is/writing/20220530-contextual-helpers-generic-helpers.html

Clickbait title for the win.

Anyway, what's this about?

I've only been a programmer for about 4.5 years (as of writing this post), and there's no doubt that we all grow after making enough mistakes that we reflect upon and correct.

True for everything in life but I'm going to keep it limited to programming

Contextual Helpers

Or, helpers that are limited to a context or more specifically business logic. These are small functions that are very dependent on the data structure that's specific to the application you are building. To simplify, let's say I have a job listing app and I'm working on the page where these are to be visualized.

interface JobListing {
  role: string
  joiningDate: Date
  name: string
  // ... remaining fields
}

Assuming we have the above structure/type for whatever I receive from the backend, I might have a formatter for the response to model it for consistency across the app, or I might just use these fields as is. Though, often I'll need to write helpers that are very specific to this data.

For example, let's say I need to show all the Software Developer roles with the hex #18181b then what?

import { Text } from '@components'
import { standardDate } from '@utils/date'

const getPositionStyledText = (role) => {
  let color = '#000'
  switch (role.toLowerCase()) {
    case 'software developer': {
      color = '#18181b'
      break
    }
  }

  return {
    style: { color },
  }
}

const JobRoleText = ({ role }) => {
  const textStyle = getPositionStyledText(role)
  return <Text style={textStyle.style}>{role}</Text>
}

const JobListingCard = ({job})=>{
    return <>
        <p>{job.name}</p>
        <p>{standardDate(job.joiningDate)}</p>
        <JobRoleText role={job.role} />
    </>
}

Now we have 2 things that are very specific to this project: the component and the helper for the component. You could move the function inside the component, but then it would get redefined on every render and would need to be inside a useCallback to avoid that, so it's easier to just keep it outside.

Back to the point. This is a BLoC helper, or a business logic helper, and these are often limited to the app you write them for; moving them to other apps might need a lot of modification, so these are left alone and people generally start from scratch.

Generic Helpers

As the name suggests, these are more geared towards being reused and don't really have business logic tied to them. Thing is, these are a little harder to write compared to the contextual ones, because here you have to decide and design the API of the helper in a way that makes it generic enough to be reused.

I'm going to give a small example and reuse the above styling helper again but this time written with a more generic API.

import { Text } from '@components'

type ColorMap = Record<string, string>

function createColorMap(colorMap: ColorMap, defaultColor: string) {
  return (toMatch: string) => {
    if (!colorMap[toMatch]) return defaultColor

    return colorMap[toMatch]
  }
}

// and the usage would look like so
const roleColor = {
  'software developer': '#18181b',
}

const roleColorMatcher = createColorMap(roleColor, '#000')

const JobRoleText = ({ role }: { role: string }) => {
  const textColor = roleColorMatcher(role.toLowerCase())
  return <Text color={textColor}>{role}</Text>
}

before we go to the explanation

  1. The above is not the best API to write a colorMap, it can be improved a ton more
  2. This is an example, take it like one!

To the mounta.. explanation.

We asked 3 things, which will help you create most of the helpers you write.

  1. What is the base operation of the helper?
  2. How do I get the data?
  3. Can it even be made generic?

Let's start with the 3rd question first, because that's important to understand.

Can it be generic?

Not all helpers can be made generic, and even if they can be, the API of the helper might not be as simple as the one with the context. What do I mean by that?

Back to examples: let's say the provider of a job can be an organization (type = 1, possibly a startup) or a middleman (type = 2).

function getJobNameWithProvider(job) {
  if (job.type === 1) {
    if (job.org.type === 'startup')
      return `${job.org.name} | ${job.name} (Startup)`
    return `${job.org.name} | ${job.name}`
  }

  if (job.type === 2) return `${job.poster.name} | ${job.name}`
}

I could also use a switch statement to add all this to a single string but for now, this is complicated enough to explain what I'm trying to.

Here, if I make a generic helper, I'll be picking arbitrary fields based on conditions; it would look pretty much like the createColorMap function, but it's the API that's troublesome.

function createConditionalPicker(pickerMap) {
  return (passerObj, condition) => {
    if (pickerMap[condition]) return pickerMap[condition](passerObj)
    return null
  }
}

// and usage would look like so
const jobNamePicker = createConditionalPicker({
  1: job =>
    job.org.type === 'startup'
      ? `${job.org.name} | ${job.name} (Startup)`
      : `${job.org.name} | ${job.name}`,
  2: job => `${job.poster.name} | ${job.name}`,
})

const job = {
  type: 1,
  org: {
    name: 'BarelyHuman',
    type: 'startup',
  },
}

const jobName = jobNamePicker(job, job.type)
// BarelyHuman | undefined (Startup), since I haven't handled null cases above

You can technically use the createConditionalPicker to even create the color mapper above, but that's not the point.

So, here we have 2 things,

  1. You are still writing your own business logic
  2. The API needs to be explained well to a new developer joining the project.

Looks like I almost wrote a monad though...

Back to the concept: in cases like these it's easier to read and modify the original contextual helper than to try to make it generic. This judgement comes with practice, so you'll be making unneeded generic helpers quite often at first.

How do I get the data?

The 2nd question dictates how you design the API.

The above 2 helpers were curried functions, since the setup data stays the same and it'd make no sense to resend the entire object again and again when its reference could be used; the returned function takes in only the parameters that actually access that reference.

If you are going to be working with different data every time, then you are better off with simpler functions.

If you are working with modifications on data, then we would need helpers that allow pipes, for example the above name picker could be written with something like @useless/asyncPipe

import asyncPipe from '@barelyhuman/useless/asyncPipe'

const jobDetails = await asyncPipe(
  async () => await getJobDetails(jobId),
  async job => {
    if (job.type === 1) {
      if (job.org.type === 'startup')
        job.jobNameWithProvider = `${job.org.name} | ${job.name} (Startup)`
      else job.jobNameWithProvider = `${job.org.name} | ${job.name}`
    }
    if (job.type === 2)
      job.jobNameWithProvider = `${job.poster.name} | ${job.name}`

    return job
  }
)

Here asyncPipe is the generic helper; we aren't really creating a generic helper for the job name, instead we make the modifications on the source data, which is how I would handle the jobName field anyway, but I had to think of something simple to explain this question.

Now, people would ask, why would you write these functions inside a pipe? Good question.

The point of using a pipe is to make sure the structure is modifiable, because the pipe assumes a set of data is passed down at all times. This isn't close to the original functional programming pipe; it's closer to how CoffeeScript implements pipes.
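A minimal asyncPipe can be sketched in a few lines (the actual @barelyhuman/useless implementation may differ):

```javascript
// Each function receives the previous function's resolved value;
// the first one starts from undefined, so it acts as the source.
async function asyncPipe(...fns) {
  let acc
  for (const fn of fns) {
    acc = await fn(acc)
  }
  return acc
}
```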

The above in production code looks like this.

async function addProviderNameToJob(job) {
  if (job.type === 1) {
    if (job.org.type === 'startup')
      job.jobNameWithProvider = `${job.org.name} | ${job.name} (Startup)`
    else job.jobNameWithProvider = `${job.org.name} | ${job.name}`
  }
  if (job.type === 2)
    job.jobNameWithProvider = `${job.poster.name} | ${job.name}`

  return job
}

const jobDetails = await asyncPipe(
  async () => await getJobDetails(jobId),
  addProviderNameToJob
)

At this point addProviderNameToJob is optional: I can remove it or add it anywhere in the pipe and still expect the same result, because conceptually you pass the same job down the pipe. The asyncPipe from @useless isn't tightly tied to a source, for reasons based on functional concepts: you have one source and multiple sinks for that source.

The sinks are what consume the source, make modifications to it, and return it. I can add another modification in the middle as long as it does the same thing: consumes the source, modifies it, and returns it.

async function addProviderNameToJob(job) {
  if (job.type === 1) {
    if (job.org.type === 'startup')
      job.jobNameWithProvider = `${job.org.name} | ${job.name} (Startup)`
    else job.jobNameWithProvider = `${job.org.name} | ${job.name}`
  }
  if (job.type === 2)
    job.jobNameWithProvider = `${job.poster.name} | ${job.name}`

  return job
}

async function addBarelyHumanToJob(job) {
  job.isFromBarelyHuman = true
  return job
}

const jobDetails = await asyncPipe(
  async () => await getJobDetails(jobId),
  addBarelyHumanToJob,
  addProviderNameToJob
)

Pretty stupid as an example, but I hope you get the point: my business logic can be separated into chunks and still be added or removed at will.

Don't be fooled, you can do all that without even using asyncPipe but it's a little more structured for my mental model.

What is the base operation of the helper?

Last question: what is the base operation? The base operation for the color mapper was to compare a string to another string (switch cases or if statements), which can be turned into looking up a value in a map.

You basically move out the base operation and write that into a generic function and then move the data dependent decisions out to the developer using the API.

Combine this with the answers to the other 2 questions and you'll have a helper design in your head or on paper. Then run a few tests and voila, generic helpers!!

Conclusion

It's always going to be easier to write business-logic-specific helpers, but if you see something that can be split into a generic helper recurring in most projects you write, spend a little more time and make a generic helper out of it.

Do keep in mind that not everything needs to be generic, some things are easier to read and modify when left with their context.

Mon, 30 May 2022 00:00:00 +0000
We're polishing TillWhen, finally.
https://reaper.is/writing/20220606-tillwhen-incremental-updates-and-additions.html

About 2 years ago (25 May 2020), I decided to speed code a timetracking app in a weekend.

  • Picked the login system off of my older web projects in Koa
  • Used simple layout components without any styles
  • Picked the crud to table helpers based on user ownership from older projects as well

The amount of code I wrote for that app was close to 200-300 lines and that was mostly the tab animation which I removed right after cause I wasn't happy with it.

Either way, the MVP of the app was done in 2 days.

The next few weeks it had a few additions from various libraries, both for UI and backend functionalities.

Admittedly, it's the simplest app I've built and it's not that hard to build, but then I promised to polish the app to be a lot more consistent and was never able to keep up with that.

Excuses

Cover Up: I got busy with work related stuff and other projects

Truth: I gave up on the product since I didn't use it myself, and in my opinion, if I can't be on the dogfooding side of the product there's a good chance I won't find the issues that customers might be facing.

Why return to it?

Recently, my primary domain reaper.im lost its original nameservers, and the owner of the TLD has probably changed, making it hard to get back (obviously, since you are probably reading this on another domain).

This issue led to mailer.reaper.im breaking, and that was the service I used for sending mail; it's just nodemailer as an API and was being used to send the magic links that TillWhen uses for login.

To fix that, I moved the TillWhen code to use its own nodemailer instead of the service. While doing that my OCD kicked in, I saw a lot of stuff I could improve, and I instantly created a v2 branch and started working on some minor stuff.

Stack

The stack is still the same as before

  • Next.js
  • Postgres
  • AntD and Semantic React for UI Elements

As you can see, I have 2 huge libraries being bundled with the app. Most of you won't notice the speed lag because almost all pages are server rendered and Next.js is good at handling caching for these kinds of things.

Updates

As for what we're changing

  1. rewriting the UI elements with tailwind and more generic css
  2. rewriting the APIs so you can write extensions for the app
  3. moving a lot of the validation to the database layer
  4. writing faster queries for strictly server rendered pages

UI

The most work is going to be removing the AntD and SemanticUI components and rewriting more elements in my own style/theme.

The initial version was just

copy component code => change color => randomly place it on the right side => done

That's not how I prefer it, but then I needed the app up in 2 days to challenge myself. Now, since we have over 200 users, I really think they deserve a little more than just a working tracker.

The 2nd part to focus on was the refetched data that isn't being cached or even stored in global state, so we've got jotai and jotai-form, which are libraries I sometimes help with. jotai-form doesn't have a full-fledged form validation solution yet, which is my responsibility; I'm writing the experimental solution as a part of TillWhen, and if it works out as expected, it'll be added to the official package.

As for the API requests, it's mostly written in axios and whether I wish to change it or not is totally dependent on the time I decide to spend on them, since they are just being used in 3 components and I could rewrite them in fetch pretty quickly.

Backend

The backend code is a huge set of Koa and Express API handlers glued to Next.js' API routes, which, while it works, basically follows a flow like so.

Page => Next.js Route => API => express handler => Page's serverSideProps => Page

Which is honestly not how anyone should write Next.js code. I wrote it this way because it was quicker and I didn't want to touch the context-level code, which would have meant changing the Express handlers greatly; this glue worked fine for the initial expectation of 0 users.

Since there are more users now, memory usage and request overhead need to be reduced, so we'll be rewriting the APIs properly, in a way that also lets you write extensions if you wish to.

This also includes writing faster raw queries instead of using the knex-generated queries for pages that are strictly server rendered and not dependent on client code; one such page is the /dashboard stats. The page has only 1 client interaction, and even that just re-renders the page from server data. Right now it runs 3 queries in parallel to get the data, which can be reduced to 2 simpler and faster SQL queries. I'm a big fan of ORMs, but that doesn't mean raw queries are to be avoided altogether; there are places where they're the best option to speed things up.

Process

How are we making these changes?

The older code is still on the main branch and the new code is on a branch called v2/initial where the changes are being made.

No, I haven't deleted the whole codebase in that branch, it's still incremental. So the APIs have a new subfolder v2 and the newer APIs go in there.

For the UI components, there were minor abstractions I wrote in the previous codebase which I can now move to a folder called old inside components, and VSCode took care of changing the import paths for me.

The new components were then added as needed, first being coded on the tailwind playground and then being imported into the component file.

Since the database will not be changed and only have more constraints added to it, that'll be done with SQL scripts that I can run using gator or using knex's raw query runners.

Probably will be using gator since it was written for things like these.

While the code in the current codebase is a mess, a lot of things were placed properly and the containing folders were named appropriately, which made it easy to find stuff when modifying it.

Also, I've started using ESLint a lot more than Prettier, so that's handling a lot of code style for me. I had avoided ESLint since all I needed most of the time was a code formatter, not a modifier, but there were a few tricks I used various CLIs for that are all handled by ESLint now, so I think I'll be using it more.

Conclusion

There's no deep meaningful statement to be made here, just stating the things I've been doing.

Mon, 06 Jun 2022 00:00:00 +0000
Notes on fixing react native 0.61.5 on a M1 mac
https://reaper.is/writing/20220613-react-0.61.5.html

Here are a few notes on things that broke while trying to make an old React Native project work on my MacBook M1.

Undefined / Unknown symbol : , expected )

This happens when Babel is unable to parse Flow grammar. You can remove Flow from your source code, but there's a good chance Babel still won't be able to compile imported code; you can lock Babel to resolve to a particular version by following the link below.

https://github.com/babel/babel/issues/14139#issuecomment-1011836916

Cannot build simulator for arm64

Well, Xcode 12.5 adds a few defaults with regard to supported architectures, and this breaks due to incompatibility. In short, you'll have to do the following steps

  1. Remove VALID_ARCHS from build settings if it's been added
  2. Add Excluded Archs for the iOS simulator SDKs and specify the arch as arm64
  3. Modify Podfile to add the same configuration to any packages that might get installed using the snippet below
post_install do |installer|
  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=iphonesimulator*]"] = "arm64"
  end
end

For a more detailed explanation, read this blog post: https://khushwanttanwar.medium.com/xcode-12-compilation-errors-while-running-with-ios-14-simulators-5731c91326e9

No Images or assets on iOS 14 +

The rendering path changed over iOS versions and this specific react native version doesn't handle the change, so you'll have to patch it manually using patch-package, creating a patch file for the addition below in the file react-native/Libraries/Image/RCTUIImageViewAnimated.m

else {
    [super displayLayer:layer];
}

and this should handle the rendering for you once you run patch-package again and re-install the Pods.

More details about this on this issue thread: https://github.com/facebook/react-native/issues/29268

cannot initialize a parameter of type 'NSArray....'

There's a good chance you start with this issue, where XCode fails to handle basic types due to differences in the SDK version and paths.

You can fix this by manually replacing these with the older definitions, adding the lines below to your post_install script in the Podfile as well

find_and_replace("../node_modules/react-native/React/CxxBridge/RCTCxxBridge.mm",
  "_initializeModules:(NSArray<id<RCTBridgeModule>> *)modules", "_initializeModules:(NSArray<Class> *)modules")
find_and_replace("../node_modules/react-native/ReactCommon/turbomodule/core/platform/ios/RCTTurboModuleManager.mm",
      "RCTBridgeModuleNameForClass(module))", "RCTBridgeModuleNameForClass(Class(module)))")

find_and_replace isn't a predefined function, so you'll have to define it as well, which you can do at the end of the Podfile with the following code

def find_and_replace(dir, findstr, replacestr)
  Dir[dir].each do |name|
      text = File.read(name)
      replace = text.gsub(findstr,replacestr)
      if text != replace
          puts "Fix: " + name
          File.open(name, "w") { |file| file.puts replace }
          STDOUT.flush
      end
  end
  Dir[dir + '*/'].each(&method(:find_and_replace))
end

You can read more about this on this issue thread https://github.com/facebook/react-native/issues/28405

In case you're greeted by both of the XCode >= 12.5 issues, this is what your Podfile's post_install task will look like.

Note: I like to move my post_install out of the target block. If you keep it inside the block, make sure the find_and_replace function is still defined at the very outside

target "project" do
#  pod Stripe
# ... etc
end


def find_and_replace(dir, findstr, replacestr)
    # ...
end

So the final output would look a little something like this

post_install do |installer|

  find_and_replace("../node_modules/react-native/React/CxxBridge/RCTCxxBridge.mm",
  "_initializeModules:(NSArray<id<RCTBridgeModule>> *)modules", "_initializeModules:(NSArray<Class> *)modules")
  find_and_replace("../node_modules/react-native/ReactCommon/turbomodule/core/platform/ios/RCTTurboModuleManager.mm",
      "RCTBridgeModuleNameForClass(module))", "RCTBridgeModuleNameForClass(Class(module)))")


  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=iphonesimulator*]"] = "arm64"
  end

end

def find_and_replace(dir, findstr, replacestr)
  Dir[dir].each do |name|
      text = File.read(name)
      replace = text.gsub(findstr,replacestr)
      if text != replace
          puts "Fix: " + name
          File.open(name, "w") { |file| file.puts replace }
          STDOUT.flush
      end
  end
  Dir[dir + '*/'].each(&method(:find_and_replace))
end
]]>
Mon, 14 Jun 2022 00:00:00 +0000
https://reaper.is/writing/20220614-the-monorepo-experiment.html An attempt to reduce the monorepo complexity

Note: This involves ESM quite a bit so you might wanna read about ESM first and then get to this.

I made a tweet yesterday with regards to getting a unified component API working on both React Native and React (not react native web)

Tweet

Now, this might not seem like a huge achievement or anything groundbreaking to most people, since they've kinda been doing this with typescript all along; typescript handles the module aliases and path resolution, and it's neat.

Plus, ESM has been around long enough that someone else might have already done it without my knowing, so I got excited. But well, let's get to what the post is actually about.

Monorepos

I have a love-hate relationship with Monorepos. I love them because they reduce context switching compared to a multi-repo setup, and I hate them for the sheer complexity that comes with setting them up.

To be fair, Jared Palmer did kinda solve this with Turborepo and no I wasn't paid to promote it, I'm not paid shit anywhere for any of my work.

Now, the solution turborepo brings is more along the lines of initial setup and an opinionated monorepo architecture, and it's fast since everything is remotely cached for you. This makes it faster to install and rebuild your app, even on CIs.

As to why I still have problems with Monorepos: the complexity hasn't just gone away.

Yarn v2+ (Yarn 3) has made it easier to create and work with them without needing lerna, and even npm handles this well now, but there are still things you need to configure when working with both Javascript and Typescript setups.

A few examples,

  • Compiler configurations to make sure your package works in other packages (in the repo itself)
  • Hoist and No-Hoist issues from various dependencies (mostly react and react-native dependent stuff)
  • Managing dev and peer deps when working with stuff that shouldn't be bundled with your code (ex: styled-components)
  • Configuring scripts and linters to make sure a new junior dev doesn't break this entire setup with a single line

a.k.a, the abstraction of the tooling is very necessary and that's still something that people are figuring out.

What does your tweet have to do with all this?

be patient!!

When ESM came into the picture, a few developers jumped on the train and moved their modules to pure ESM right away. Sindre Sorhus was one of them, and he did list out a couple of reasons why it was the right thing to do. Most points were valid, but I was still concerned, cause a lot of my work-related stuff was still on Node 8 at the time.

This is also why I've spent so much time making sure my ESM and CJS packages work everywhere, as much as possible.

One point that people missed was the reduced need for transpilers and bundlers. It should've been obvious, since deno is literally proof of this, but I'm dumb so I didn't sit down to think about it.

Anyway, I was working on a monorepo we have at work. There's no UI component sharing there, but there is business logic sharing: an API SDK and general computation used on all sides (backend / frontend / mobile). Metro bundler decided to fail on symlinked packages, so I had to fix its configuration again, and while doing that the above concept hit me. With my impulsive nature in place, we now have...

  • an ESM for the whole SDK and computation logic
  • another one for a unified component API layer

Now all that's left in the repo in terms of bundlers are Metro, Vite and Typescript, which is, well, necessary: type-graphql needs typescript, react native needs Metro, and the react app needs some bundler, at least till I'm done creating a solution that avoids having to use a bundler for simple web apps as well

We had rollup for the shared logic and SDK, which is now no longer needed, and if you've built component libraries before, you've seen the configuration they come with.

Hence, the tweet. Not so exciting for everyone, but I kinda removed the need to handle multiple bundlers and transpilers in the monorepo; only the absolutely necessary ones are left.

This also simplified the metro.config.js: since nothing is symlinked any more, I no longer have to add every folder to watchFolders. I can just add extraNodeModules, give it the path to the neighbouring folder, and done. All the complicated symlink handlers and monorepo helpers: gone, removed, destroyed.

Why not use Typescript for everything? Do you not know me!?

As for the unification layer this is how my imports for the shared components are now.

//  in react
import { Button } from '@barelyhuman/ui'

// in react native
import { Button } from '@barelyhuman/ui/native'

and the simplified components look like this

// src/button/button.js
import styled from 'styled-components'

export const Button = styled.button`
    ... styles
`

// src/button/button.native.js
import styled from 'styled-components/native'

const _Button = styled.TouchableOpacity`
    ... styles
`

const _ButtonLabel = styled.Text`
    ... styles
`

export const Button = ({ children, ...props }) => {
  return (
    <_Button>
      <_ButtonLabel>{children}</_ButtonLabel>
    </_Button>
  )
}

The source folder has 2 index files for the actual exports

// src/index.js
export * from './button/button.js'

// src/index.native.js
export * from './button/button.native.js'

and then at the root of the package we have another index.js and a native/index.js to make the imports a little cleaner. You could do the same with the exports field in package.json and expect it to work, but metro fails to recognize it (rarely, but it does happen), so it's easier to just have the package layout itself act as the import path

// index.js
export * from './src/index.js'

// native/index.js
export * from './src/index.native.js'
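For reference, the exports-field alternative mentioned above would look roughly like this in the package's package.json (a sketch; as noted, metro occasionally fails to resolve it):

```json
{
  "name": "@barelyhuman/ui",
  "type": "module",
  "exports": {
    ".": "./src/index.js",
    "./native": "./src/index.native.js"
  }
}
```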

The same is done for the business logic, but since there's no difference in implementation there, just simple functions, the aesthetics are kept minimal.

Here's a demo snippet:

import { useEffect } from 'react'
import { useOptionStore } from '@barelyhuman/shared/store'
import { Checkbox, CheckboxLabel } from '@barelyhuman/ui'

function ListOptions({ selected, onChange }) {
  const populateOptions = useOptionStore(x => x.populate)
  const options = useOptionStore(x => x.options)

  useEffect(() => {
    populateOptions()
  }, [])

  return (
    <>
      {options.map(optionItem => (
        <li key={optionItem.value}>
          <Checkbox
            value={optionItem.value}
            checked={selected === optionItem.value}
            onChange={v => onChange && onChange(v)}
          >
            <CheckboxLabel>{optionItem.label}</CheckboxLabel>
          </Checkbox>
        </li>
      ))}
    </>
  )
}

So, not much was reduced in terms of complexity but it was plenty considering:

  • No extra configurations to handle
  • Remove the build scripts from the root for the same
  • The setup doesn't break due to an abstraction that no one other than the person who setup the code arch understands. (a.k.a easier to look for documentation when the configurations are much simpler)
  • Easier to compose stuff since it's just javascript.

Cons

  • Jest will need a babel step in the middle to work (use uvu, ava, tape, etc. if you're setting up a new project)
  • You still have to be considerate of dev and peer dependencies since react doesn't like multiple instances, neither does styled-components
  • Not all the packages you use might work with type:"module", so there's a good chance you'll have to look for hybrid or pure ESM packages.

The last one might irritate you quite a bit during development, but most of what I use comes from people I know write hybrid packages; it's mostly the nested/deep dependencies that I'm worried about.
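For context, a hybrid package ships both CJS and ESM entry points side by side, so it works from both require and import. A minimal sketch of the package.json for one (the file names are illustrative):

```json
{
  "exports": {
    ".": {
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    }
  }
}
```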

Don't worry, I'm not shifting my packages (ESM + CJS) to pure ESM; even if there's only one person using them, that's still a user, and it's going to stay that way. If that bothers you, you're free to fork the packages and create pure ESM versions of them. All of them are MIT licensed for this very reason.

]]>
Mon, 15 Jun 2022 00:00:00 +0000
https://reaper.is/writing/20220616-fastlane-in-javascript.html Writing fastlane scripts in Javascript

3 posts back to back!? Yes, lots of content out there right now.

tldr;

  • You install fastlanejs using npm
  • You create a new Fastlane instance out of it

import { Fastlane } from 'fastlanejs'
const flane = new Fastlane()

  • flane can then be used to trigger any fastlane native action

The longer version

I made a recent post about getting an older react native codebase back up on fairly new hardware and the next step was to add fastlane to make sure getting builds for debugging would be easier.

For those who don't know, fastlane is a very extensible tool written to automate most of the work you'd do to create builds, and since at the end of the day it's just ruby, you can extend it with the numerous gems available online.

Why write it in Javascript then?

Well, you see there's a tiny set of issues that need to be addressed to understand the usage of js here.

  1. You cannot split lanes into different files
  2. All helper functions have to be defined in the same Fastfile

Now, this isn't a problem everyone faces, since not everyone has react native apps that use the variant approach to create dev and prod builds separately. I do, and so my Fastfile has quite a bit of code.

These are the functions that my Fastfile has, which are then called by a lane definition

# ios
deploy_ios
sign_ios
build_ios
dev
prod

# android
deploy_android
sign_android
build_android
dev
prod

The structure is actually very simple: the deploy_ functions call the sign and build functions, then call the dev or prod function based on the params passed.

fastlane ios dev # would create a dev build
fastlane ios prod # would create an appstore build

Now this is necessary since I deal with apps that aren't just going to be uploaded to testflight with the prod api; we have staging servers and QA needs to test against them, so dev builds are unstable/untested and can't be on testflight, or someone's bound to create an accidental release out of them

And when I said multi-variant: there are 2 bundle identifiers, com.example.app and com.example.app.dev, and the above fastlane lanes take that into consideration when building. Basically, each function reads the input parameter.

So, there's a few more helper functions to find out if the given parameter is for dev or prod

let's say the app's ios scheme is named productOne ,then the dev scheme is named productOneDev

Each of those would create a different build, one would trigger a development certification signing and upload itself to diawi and notify slack with the link to the installable app.

The other would upload to testflight and notify slack once the app is out of the processing phase

The flow is similar for android, which means there are helpers to find out whether the given param was for dev or not

def is_ios_dev(scheme)
    if scheme.end_with?("Dev")
        true
    else
        false
    end
end

def is_android_dev(bundle)
    if bundle.end_with?(".dev")
        true
    else
        false
    end
end

These could be combined into a single function, but it's easier to modify them this way than to have 2 nested if conditions.

Also, yes I know implicit returns are to be avoided in ruby but that's a pretty simple function!!

Now, all this is easy, and ruby is a pretty good language to learn and use, but fastlane doesn't really allow importing ruby files. So if I need to modularize any of this, I'd have to write custom actions that import my ruby code, which adds a lot of glue code but is definitely something I'll be doing to stay close to the source of the tool.

Till then, JAVASCRIPT

I like the enthusiasm JS community has to move everything into JS.

Ray Deck decided to create an auto-generated fastlane API layer for javascript. The concept is pretty simple and even better executed.

  1. Run a fastfile with all the actions to generate all the possible inputs and outputs into a json file
  2. Setup a tool that can talk to the fastlane runner using the socket server that fastlane has
  3. Use the json from the 1st tool to create a typescript api of the same
  4. Wrap this all up in a library and done!

Obviously took a lot of work to get it all working so KUDOS!

Now, I found this library while checking whether someone had already done it, because the more verbose approach I was going to take was writing a child_process based wrapper for all the documented fastlane commands, which would be a lot more work than writing something like this. I'm not very smart...

Now, how does this solve my problem?

We get to write smaller functions that are just that: functions, and importable. Each function is an API call to the socket server that passes the parameters in as a serialized payload and gets the result back, and it's all promises, so you can mix in more async code.

Let's get to how to use the library.

Installation

  1. You still need fastlane so go through their docs to set it up
  2. Creating Appfile and Matchfile will reduce the amount of code you write in lanes so do keep them intact or write them up first.
npm i fastlanejs
# or
yarn add fastlanejs

Basic Usage

import { Fastlane } from 'fastlanejs'

const fastlane = new Fastlane()
;(async () => {
  await fastlane.getVersion()
  await fastlane.close()
})()

Real life usage

The API is fully typed so your IDE will help you out a lot with what's valid and what's not.

Here's what the dev version build I mentioned about above would look like

#!/usr/bin/env node

import { Fastlane } from 'fastlanejs'
import process from 'node:process'
import dotenv from 'dotenv'
import { upload } from 'diawi-nodejs-uploader'

dotenv.config({ path: '../.env' })

// dynamic variables to control behaviour over the file
const isDev = true
const flane = new Fastlane()
const buildtype = 'development'
const scheme = 'productDev'
const workspace = 'ios/product.xcworkspace'
const project = 'ios/product.xcodeproj'
const certType = 'development'

await run()

async function run() {
  await flane.updateCodeSigningSettings({
    useAutomaticSigning: false,
    path: project,
  })

  await setup()
  await sign()
  await build()
  const lcRes = await flane.laneContext()
  const lc = JSON.parse(lcRes)

  const uploadResponse = await uploadToDiawi(lc.IPA_OUTPUT_PATH)

  if (!uploadResponse.link) {
    return
  }

  await notifySlack(uploadResponse.link)
  await flane.close()
  return
}

async function setup() {
  await flane.createKeychain({
    name: process.env.KEYCHAIN_NAME,
    password: process.env.MATCH_PASSWORD,
    unlock: true,
  })

  await flane.match({
    gitUrl: process.env.MATCH_CERTIFICATES_URL,
    teamId: process.env.APPLE_TEAM_ID,
    keychainName: process.env.KEYCHAIN_NAME,
    keychainPassword: process.env.MATCH_PASSWORD,
    readonly: flane.isCi,
    forceForNewDevices: true,
    type: certType,
  })
}

async function notifySlack(link) {
  const gitBranch = await flane.gitBranch()
  await flane.slack({
    message: 'Automation Engine: iOS \n' + link,
    success: true,
    payload: { Git: gitBranch },
    useWebhookConfiguredUsernameAndIcon: true,
    slackUrl: process.env.SLACK_HOOK,
  })
}

async function uploadToDiawi(filePath) {
  console.log('Uploading to diawi, please wait...')
  const result = await upload({
    file: filePath,
    token: process.env.DIAWI_TOKEN,
    wall_of_apps: 'false',
  })

  return result
}

async function sign() {
  await flane.registerDevices({
    devicesFile: './fastlane/devices.txt',
    teamId: process.env.APPLE_TEAM_ID,
  })
  await flane.match({
    gitUrl: process.env.MATCH_CERTIFICATES_URL,
    teamId: process.env.APPLE_TEAM_ID,
    keychainName: process.env.KEYCHAIN_NAME,
    keychainPassword: process.env.MATCH_PASSWORD,
    readonly: flane.isCi,
    forceForNewDevices: true,
    type: certType,
  })
}

async function build() {
  await flane.incrementBuildNumber({
    buildNumber: process.env.BUILD_NUMBER,
    xcodeproj: project,
  })

  await flane.gym({
    configuration: 'Debug',
    workspace: workspace,
    scheme: scheme,
    clean: true,
    outputName: scheme,
    silent: true,
    destination: 'generic/platform=iOS',
    outputDirectory: 'builds',
    exportMethod: buildtype,
  })
}

The above handles the following

  1. Creating keychains
  2. Signing the app
  3. Building the app
  4. Uploading it to a distribution system
  5. Notifying slack

and if you observe closely, only fastlane's own actions are used; plugins like fastlane_diawi have been replaced with a node package instead

I can add parameters to each of these functions, export them from a utils.js file, and reuse them to write the prod script, where the only things that change are the parameters at the top, since everything else is read from environment variables.

How do I find the parameters I can pass?

As mentioned, this is all code generated from the original fastlane documentation; you can look the actions up there, camelCase the params, and you're done.

Overall this is a nice thing for quick fixes and scripts I'd like to experiment with, and while I've mentioned that I'd like to stay close to the source, I'll probably write custom actions that make it easier to move a fastlane configuration from one project to another without having to handle minor details like bundle identifiers and everything else that can be programmatically extracted (which fastlane already does, but with no clear API for it yet)

Till then, this seems like a viable option; since I've got generated javascript code everywhere, writing something similar will be a lot easier.

That's all for now, Adios!

]]>
Mon, 16 Jun 2022 00:00:00 +0000
https://reaper.is/writing/20220701-iterative-graphql.html Iterative GraphQL

I wrote about the actual pros and cons of using GraphQL in a previous post.

This was primarily for people just walking around social media throwing it around like they did for typescript.

Just installing a new tool doesn't solve problems! It might mitigate them a bit, but it doesn't solve the inherent problem of you not using it properly.

I could go on this rant, or I can talk about the actual topic.

As always, I was stalking github repositories to learn stuff, and github explore kept showing repos from the-guild.dev. Since I'd already gone through most of their tools I kept skipping them, but then I didn't find anything else interesting, so I did end up at their site again.

Apparently, I did miss a library. It's named GraphQL Yoga, and it's one of the few libraries that is functionally minimalistic.

Stuff that it has covered for you

  1. Setting up a graphql server
  2. Works with envelop.dev plugins by default
  3. DSL first - which most of them are, though most people just prefer the graphql-js approach (type-graphql or graphql pothos), which makes modularising the code easier but is still complicated, since the same thing could often be written in the DSL in one line (anyway, personal preference, so can't complain.)

You actually like a graphql library now?

I've never hated graphql; it's fun and handles most of the boilerplate code, but I wouldn't say it's an all-out solution to every problem you face with REST.

and I've talked about this in the previous post about GraphQL, so you can read it there. The same goes for typescript: I use it where it's the better choice; more about this in a future post.

GraphQL Yoga

This section is more about how the DSL can actually be quite easy to use and allows you to iterate faster.

so here's what a simple ping query would look like in the graphql dsl

type Query {
    ping: String
}

and that's it, you have a graphql schema done.

Yeah, we aren't kids, we know how the DSL works. Then why not use it!?

Moving forward. The next thing is to make the DSL executable so it can link to programmatic resolvers.

This can be done in all the graphql server libraries out there: instead of buildSchema you can just pass the schema to the server creation instance.

I'm going to go through doing this with Yoga cause it's my blog.

import { createServer } from '@graphql-yoga/node'

const server = createServer({
  schema: {
    typeDefs: `
      type Query {
        ping: String
      }
    `,
    resolvers: {
      Query: {
        ping: () => 'pong',
      },
    },
  },
})

server.start()

That would take about 30-45 seconds to write, so: a graphql ping in 30 seconds (a facepalm for the younger me who said it couldn't be possible.)

The example is pretty much self explanatory but let's see how we can extend this.

Auth

A very basic use case is going to be authentication and passing around context.

This isn't very different from other graphql server implementations but here's the additions you'd do

import { createServer } from "@graphql-yoga/node";
+ import { useGenericAuth } from '@envelop/generic-auth';

const server = createServer({
  schema: {
    typeDefs: `
      type Query {
-        ping: String
+        ping: String @skipAuth
+        privatePing: String @auth
      }
    `,
    resolvers: {
      Query: {
        ping: () => "pong",
+        privatePing: () => "another pong",
      },
    },
+   plugins:[useGenericAuth({
+      resolveUserFn,
+      validateUser,
+      mode: 'protect-granular',
+    })]
  },
});

server.start();

and now you have granular control over what needs authentication and what doesn't.

You can also use graphql-shield to add authorization controls, but I prefer writing my own helpers and using them in the resolver, since more often than not I need the resolved data to decide whether the requester should have access to it.
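A minimal sketch of that "write my own helpers" approach, assuming a hypothetical requireOwner helper and context shape (none of this is a library API); the authorization check lives in the resolver because it needs the resolved record:

```javascript
// throws unless the resolved record belongs to the requesting user
function requireOwner(record, userId) {
  if (!record || record.ownerId !== userId) {
    throw new Error('Not authorised')
  }
  return record
}

const resolvers = {
  Query: {
    order: async (_parent, { id }, ctx) => {
      const order = await ctx.db.findOrder(id)
      // authorization happens after resolution, using the data itself
      return requireOwner(order, ctx.user.id)
    },
  },
}
```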

Why is this iterative?

  1. Reduced time spent getting typescript decorators to work. (type reflection isn't always perfect)
  2. It's now just a function invocation so I can add more composable stuff than having to depend on the structure of a library (ex: class types from type-graphql)

overall, more time is spent writing logic than getting the tooling to work, which is what we've been trying to achieve all this time, right?

But, but but!

It's not all amazing

You obviously saw this coming.

  1. You still need to define input and output types, which are rather easy to do since they're all in one .graphql file that you can read through to find the types.
  2. The context switching is a little higher, but manageable. You move between 2-3 files at any given point to write a resolver, import it, and define it in the schema.gql file. This would be just 1-2 files with something like type-graphql, since most of it would already be autocompleted for you.

Though, in my opinion, these can easily be made a little easier by adding typescript just for autocompleting the function definitions.

That's about it for this post. Adios!

]]>
Mon, 01 Jul 2022 00:00:00 +0000
https://reaper.is/writing/20220708-creating-a-blog.html Creating a blog

If you've been around, you know this very blog has been re-done about 4 times now.

It started with a simple nodejs script turning markdown to html, then came the next.js version that I played around with a lot, then the statico version that actually kinda sufficed, and now we're back to the over-powered astro framework for something so simple.

Any sensible developer wouldn't spend time doing this again and again, but then, me and sensible don't really go together, do we?

What do I use to create my blog?

Well, this question has been coming up a lot recently, directed towards me by my peers, and to be fair, I really don't know.

There's so much cool stuff to use that it's hard to decide but I'll still try to minimize the options for you so it's easier to decide.

Quick and Simple

The quickest way to get a blog is using hugo, and the themes I like are Paper and Anubis; they really are very minimal and easy to read.

For the Terminal Club

I sometimes go into full terminal mode and do all my work from the terminal, though this keeps changing over time, so having a blog the same way sounds like a fun idea. It's not for me though, because I'd get bored after a while.

Either way, the option is prose.sh. The basic concept is that you ssh into the domain to get a username, and you can then use that username to log into a directory that'll create a blog for you on prose.sh. You don't manage anything, just content.

I like the overall idea, and as a minimalist I find the blog's style quite lovely, but as I said, I'd get bored with it quickly and would move the content around again, so it makes no sense for me.

For the Frontend Devs

Now, this is the club that needs to show it all off because the website does act like a portfolio and they have to make a good impression.

You do frontend too, don't you!? Why bash us!? I do, and no one's bashing you.

Now, lately most frontend developer websites have started to look the same, since everyone just picks up Chakra UI and Next.js, and done.

Oh wait, change theme colors and done.

That's a fair approach if you aren't used to designing, and it looks pretty decent; and yes, if I'm interviewing a frontend dev I do expect them to have a website or a blog.

So, what do you do? I would recommend building something with simple html, css and js without involving react, but then again, go ahead and use react/vue/solid/alpha/beta/gamma/.... whatever.

Point being, try to keep compatibility with older browsers, say at least 0.5% browserslist compatibility.

For the Backend Devs

Most backend devs that I know would be better off not touching code at all for stuff like this.

Go ahead with hugo and a markdown editor that can commit to git. That should cover most of your use case.

If you do wish to spend a little time on it, then do a little more work: get a frontend that renders notion's pages, write on notion, and let it index on your website. This gives you the freedom to write from any device notion can be installed on, and you hardly have to worry about publishing or build processes.

End Notes

You can use astro as well, though astro is still being actively developed and I wouldn't recommend it to developers who are serious about blogging.

I use it cause I'll lose 0 readers even if my blog stops working tomorrow, since there's only 1 person that reads it.

Also, if it does break, all posts are in the source repo.

Though, back to serious blogging.

90% of the time your blog's looks don't really matter (unless you're also creating a portfolio), and people will still read it if you're providing value.

Simple example would be an example post from danluu

Though, I'd recommend at least adding a few styles for readability, examples would be

which are by people who put a lot of thought into their prose and its design so you can enjoy reading it.

as always, 0.5 value from the post and that's about it for this one.

Adios.

]]>
Mon, 08 Jul 2022 00:00:00 +0000
https://reaper.is/writing/20220709-atomic-forms.html Atomic Forms in React

I've worked with Formik for most of my react form validations, and the one thing that always feels out of place is the re-initialization of state based on the initial values.

Now, I understand why it needs to be a flag, since you don't want a field to be considered dirty unless it's been touched. This works, but it also causes a few rendering issues, and as a human who can forget, you often end up writing stuff that causes a hundred re-renders.

I could've sat down and thought out a solution for this, but I never did, cause I was busy churning out library after library that no one needs (it's fun though, you should try it).

Either way, on Apr 25th , 2022 , Dai Shi decided to play with an idea and created jotai-form.

Initial Announcement

Now, this looked interesting

What's interesting about this?

Well, that'd need me to explain why and where you should be using jotai, so for now I'll just leave it at that. It's interesting!

Writing atoms that store form state isn't that hard, but adding validations to them means adding derived atoms, handling async and sync validators separately requires more work, and a form-group-level validator requires even more. You'd end up copying these derived atoms and validator utils into every project you use jotai in, or you could use jotai-form.

The library is in its initial development right now, but it basically handles atomic form state for you.

To elaborate: atomic state with its validator isolated from other states. You don't have to worry about the form's state needing a re-init since it isn't abstracted away from you; it's just state like you'd use with useState, except you use useAtom here.

But this would make the dirty field logic break!?

True, which is also why the utils from the library let you change the logic used for the dirty field calculation. You can modify it to match your way of working with the atoms.
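To make that idea concrete, here's a framework-free sketch of a swappable dirty check. This is a hypothetical helper (`makeDirtyCheck` is my own name for it, not jotai-form's actual API): dirtiness is just "the current value differs from the initial one", and the comparator is the part you swap out.

```javascript
// Hypothetical sketch, NOT jotai-form's actual API: a dirty check where
// the comparison logic can be swapped to match how you work with atoms.
function makeDirtyCheck(initialValue, isEqual = Object.is) {
  // returns a function that reports whether the current value
  // differs from the initial one according to `isEqual`
  return (currentValue) => !isEqual(currentValue, initialValue)
}

// default comparator: strict sameness
const isNameDirty = makeDirtyCheck('')

// custom comparator: ignore surrounding whitespace
const isBioDirty = makeDirtyCheck('hello', (a, b) => a.trim() === b.trim())
```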

Done with the teasing? Show me an example!

Sure, since the docs aren't added yet, you can use this example as a quick start or use the repository's examples folder instead.

import { atomWithValidate } from 'jotai-form'
import { useAtom } from 'jotai'
import * as Yup from 'yup'

// define an atom that needs to be validated
const nameAtom = atomWithValidate('', {
  validate: name => {
    if (name === 'Reaper') {
      // throw an error to say that the value is invalid
      throw new Error("Nah, invalid name, you can't be Reaper")
    }
    return name
  },
})

// you can also use form validation libraries if you wish to
// yes it supports async validations
// you can even use a backend API for the validation
const nameYupAtom = atomWithValidate('', {
  validate: async name => {
    return await Yup.string().required().validate(name)
  },
})

const Form = () => {
  const [name, setName] = useAtom(nameAtom)
  return (
    <>
      <input value={name.value} onChange={e => setName(e.target.value)} />
      <span>{!name.isValid && `${name.error}`}</span>
      <button disabled={!name.isValid}> Save Name </button>
    </>
  )
}
  • The state is still just simple singular state.
  • I get to add async validations for cases where the validation logic depends on data from the database
  • I can add an export on the atom and make it a globally available form state and I get validation with it wherever I wish to use it. (conditions apply)

This might look like a lot of boilerplate code, but it's actually not. It's separation of concerns, with each concern named as an atom.

I've been using the above library for TillWhen and it's what's responsible for the multiple timers running on the screen. Though you won't know that since I haven't published that version yet.

Anyhow, being able to segregate form validation from the component's own render cycles makes it a lot more functionally oriented, and in my opinion that's good for not having to guess if your form is the reason for the re-renders.

I mean, the form might be the reason, but that's much easier to check since the state is exposed to you rather than abstracted away (which is what makes it hard to debug which change is causing the re-render). You can just add a useEffect on the form state to see if it's the one causing the undefined behaviour.

Anyway, I like the library, which is also why I'm contributing to it. That's about it for the post.

Adios!

]]>
Mon, 09 Jul 2022 00:00:00 +0000
https://reaper.is/writing/20220719-docker-compose-prisma.html Docker Compose, Prisma and the M1 https://reaper.is/writing/20220719-docker-compose-prisma.html Docker Compose, Prisma and the M1

Pointers,

  • Make sure you have links defined for the app service
  • The connect_timeout is a necessary param for the connection string if you are on M1

Everything else is pretty much standard docker compose

version: '3.9'

services:
  app:
    container_name: app
    build: .
    ports:
      - '4321:4321'
    depends_on:
      - db
    links:
      - db
    env_file:
      - .env
    environment:
      DATABASE_URL: 'postgres://postgres:password@db/dbname?connect_timeout=300'
  db:
    container_name: db
    image: postgres:13-alpine
    expose:
      - 5432
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'dbname'
    volumes:
      - pgdata1:/var/lib/postgresql/data

volumes:
  pgdata1: {}

This post is a simple note to my future self, since I ended up spending quite a bit of time researching why the prisma client wouldn't connect during migration commands in docker compose, but would work when the application ran in the same composed container.

]]>
Mon, 19 Jul 2022 00:00:00 +0000
https://reaper.is/writing/20220726-the-lab-folder.html The lab folder https://reaper.is/writing/20220726-the-lab-folder.html The lab folder

I've probably already written about this somewhere though I don't think it was on the blog so we're adding it here now.

The post is based on my workflow of how things end up in git, and it also kinda proves the level of dumbness that I'm at.

Ideation

I'm a do first, think later kinda guy and it shows almost everywhere. Most of the work you see has gone through a filtration stage (you might not believe it since hardly anything I write is useful).

Web

The first step in my not so organized life is to take this tiny little idea down somewhere, which is mostly the iOS notes app since that's what's around me almost always, either on the mac while I'm working or on the iPad later when I'm watching something to chill.

These ideas fill up a folder in the notes app and stay there till I choose to go through them again.

Tooling and Packages

The tools and packages on the other hand go into an existing project that I'm working on and are built as a part of that project first. Then they are moved out into a package if I feel like it's something that devs might need at some point.

Prototype

Don't let it fool you, there's no actual structure in here; it's literally just a pick up whatever and do whatever thing. The 2nd stage is basically writing down the code for whatever the idea was.

There's not much difference here since both these things go into the lab folder.

lab folder?

Being an impulsive guy, it's really easy to try to do everything at once and the probability of messing up things with this is quite high. I can't change my nature but I can work with it.

For a smarter person this could be easily solved by changing workflows to be very focused, but I'd like to avoid that restrictive workflow, and this is what got me to the approach we're going to be talking about.

Everything goes first to the lab folder, a simple folder named lab, nothing else. It doesn't have special scripts (it does have scripts, they just aren't special). This folder is a dump of ideas, a place of experimentation with no restrictions, and I can fill it up with whatever on earth I want, since it never goes public.

If you thought "reaper has 256+ git repos and only 4 usable tools", trust me, I could've made 1000's and chose not to (I could've hogged up github's storage if I wished to).

The scripts here are basically bash scripts that let me select the kind of project I wish to start. Each script gets updated every time I pick up a new language, so it handles the steps to create a new project. There are no templates, no specific method, just a bash script that runs the commands I'd otherwise type in the shell.

For example, here's what the go.sh script looks like:

#!/usr/bin/env bash

set -euxo pipefail

mkdir -p "$1"
cd "$1"
go mod init "github.com/barelyhuman/$1"
touch main.go
echo "package main" > main.go
nvim .

This would do the following, in the order of the commands:

  1. use bash
  2. set the script to fail on the first error, print each command as it's executed, fail on unset variables, and propagate failures through pipes
  3. make a directory of the input name ex: ./create/go.sh commitlog would create a directory for commitlog
  4. change into the directory
  5. initialize a go module
  6. safe create a file to avoid overriding if the file already exists
  7. echo the base package line into the file
  8. open the editor of choice (might change as I change editors)

The JS one is similar: it doesn't set up tooling or anything else, just the main index file the code will go into, since before it even becomes a package it's supposed to be functional.

Also, I've gone through various packaging methods for nodejs packages over the years to learn the shortcomings of each, and so I spend more time writing the package scripts and tool configurations than I spend writing the actual package (which isn't a good thing, but it's fun).

Publish / Deploy / Push

Before anything is pushed, the idea is filtered out (yes, after doing it), the reason being that there's stuff people could write themselves in a few lines of code, and that would be a waste to publish as a package.

Second, more often than not, someone's already built a much better, more robust solution and has been spending time maintaining it; me making a useless version and throwing it out there doesn't make sense. Building it is still worthwhile since I learn how the core logic of that implementation works, but that doesn't mean I have to post it out there.

A good example would be the toyvm I wrote. While it's public, the sole reason for writing it was to learn; it's on github because I'd like to refer to it later, and since I work on multiple devices, having that reference everywhere is rather convenient.

8/10 times that's the reason the experimental project is up there.

Next up, once the prototype is ready, the way of working changes a bit based on the project.

  1. A pure client side webapp gets dummy-deployed using vercel's CLI, checked, and then permanently deployed by linking it to the git repo
  2. A server and client side webapp gets pushed onto a compute instance (Digital Ocean, AWS, etc) using docker's remote context
  3. An npm package gets a CD script to publish to the registry
  4. A go package just gets pushed with a tag (the simplest god damn flow!)
  5. A go binary gets CI/CD to build the releases on tags, and then I spend the next hour figuring out why a certain OS + arch combo doesn't work for that specific binary.

code folder

Once the above is done and I've pushed to a remote git server (github, personal server, etc), the project is deleted from the lab folder and a fresh clone is taken from the git server into the code folder (when I decide to polish or improve the app). Till then, there's no record of that project on disk.

This is done for 2 reasons: a cluttered code folder makes it hard to find similarly named projects, and they hog up space; node_modules is not a joke! (just kidding, I use pnpm)

But, to be honest, as long as the clutter is away it does help me get to code quicker. The code folder normally also has projects that I'm contributing to, so combined with my own projects it still has quite a few.

Polishing

Unless you're like me, there's a good chance you're working on just 1 or 2 projects, and you might pick up the polishing phase a lot quicker than I do.

But, more often than not, your project has probably reached this state

  • functional
  • looks hideous
  • has 1 person interested in using it (should be you, the others can join in later)

This is where you might spend time polishing it, though I rarely do unless I've been using the app/tool/package quite a lot. Examples would be

  • commitlog
  • conch / useless library
  • mark
  • CRI

These are my primary tools and the ones that get updated significantly over time.

  1. commitlog has a whole new CLI
  2. conch got a perf boost
  3. mark got a UI Revamp and usability improvement
  4. CRI migrated the whole thing to a proper database and improved the APIs being used.

If I sit down to improve everything I've ever written, trust me, I won't have time to sleep (not that I can sleep for more than 4 hours).

anyway, this is basically how having a separate dump of ideas helps an impulsive monkey like me

  • learn stuff
  • filter experiments out from actual prototypes
  • add a tiny bit of structure to whatever I'm doing

And that's about it for the post, Adios!

]]>
Mon, 26 Jul 2022 00:00:00 +0000
https://reaper.is/writing/20220807-plugins-lua-alvu.html Plugins, Lua, Alvu https://reaper.is/writing/20220807-plugins-lua-alvu.html Plugins, Lua, Alvu

Now, this was going to go into kb but would better serve as a proper post so let's get to it.

Plugins

Most people might already know what plugins are, but for those who don't: a plugin is a mechanism to extend the original implementation with additional features, mechanisms, and so on.

Writing a truly open API for plugins is pretty hard and can get complicated, and finding a language that most plugin authors could easily learn and use is also a challenge.

Not everything needs plugins, but they definitely take the load off the developer having to build everything into the tool/app/whatever themselves.

I assume most readers are coders, so you already know all this, and the closest example would be your code editor: whether sublime, vs code, or vim, there's plugins everywhere.

Next, moving onto the reasons to choose lua as a language and things I learned.

Lua

I barely knew lua existed; I'd heard of it being the embedded scripting language for a lot of platforms (games, micro hardware, etc) but never really learned it.

I recently started moving from vscode to nvim, and lua was the configuration language of choice for a friend whose configuration I'm using, so I just stuck with it.

Basically, I started learning lua and soon realised that the language is pretty tiny, easy to remember, and very easy to read even in larger codebases.

It's simple to extend with other functionality by just adding additional .lua files to your project and importing them into your code, or you can use global system-level libraries with luarocks, which can even involve libraries written in C that build against lua.

Thus, the language can be easily extended.

Now, choosing it for alvu was a no-brainer cause alvu itself barely has any functionality. The entire codebase is like 500 lines in a single main.go right now.

It just takes in a directory and converts it to html; everything else is done by the developer. That sounds like a pain, but let me tell you what the developer actually does.

As a user of alvu, you will rarely ever write the entire functionality of the blog / wiki / static site that you are building. You'll mostly just bring in content.

Everything else is already going to exist as a template for you to use; alvu is just a tool that knows how to process these templates for you. This gives us a way to keep the base tool tiny and still infinitely extendable, and considering how small the Lua VM is, extending it is not that hard.

Alvu

You might already know what it is, or you might not; to put it in a single statement:

Alvu is a static site generation engine which can be extended with plugins written in lua.

We call these plugins hooks; basically, they're the primary drivers of how things look post-render.

The base idea is implemented, but it's going to change over time, with the lua hooks taking over the entire functionality of alvu; the only thing alvu would do then is give the hooks information about what to process.

Now, this also does shape the direction for other tools that I have and where needed there will be ways to extend the tools (while keeping the base tool as tiny as possible)

Problems

Nothing comes without its own set of problems, so here are the things that caused issues while working on it and where I wish I could improve performance.

  1. I was unable to set up a waiting queue algorithm without making the program overly large, so I had to give up on that algo altogether (preferably I'd build it as a package for later use).
  2. Right now it uses a combination of channels and mutexes (exclusion locks), and this code can get messy very quickly; mine did actually get a little complex to read. You might want to watch out for that when working with the Lua VM, since it isn't thread safe and you can have one lua state instance per goroutine.
  3. There are quite a few repetitive tasks that an alvu template developer has to do. That's solvable, but it is a problem as of right now, so don't just go on the promises I'm making.
  4. Lua as a language has its own shortcomings: no regex support, limited string manipulation, etc. While I can extend lua to add these features, that would increase the size of the tool, which is something we're trying to avoid, so it counts as a problem.

Well, that's about it for the post. Adios!

]]>
Mon, 07 Aug 2022 00:00:00 +0000
https://reaper.is/writing/20220812-projects-for-self.html Projects for Self https://reaper.is/writing/20220812-projects-for-self.html Projects for Self

For me coding has been both a hobby and a way to distract myself from other problems that I might have.

Which led me to working on a lot of projects, many of them just experiments. While I'm still proud of them, since I learned quite a bit from constantly experimenting, I recently found out that a lot of people don't pick up projects simply because they don't see any value in them, or think no one will ever use them.

To simplify this entire post,

Treat programming as art.

and I mean actual art, not the one used for money laundering, the one artists do for themselves.

Writing even a small, graceful hello world program should be treated as something you do for yourself. Don't expect appreciation for it; anyone can do it, and that's not the point of doing it.

You do it because it's fun, it keeps you sane and you don't have to force yourself to focus on doing it.

Now, there's a good chance that no one's going to use it. It's fine. You yourself might not use it. That's also fine.

Maybe your opinion of the project changes over time, that's also fine.

That's the whole point!

Make those paintings, make them till you think it's done or leave them in the middle, no one cares!!! That's the fucking beauty of it.

But, I want to make useful tools! Everything you build has an audience, and there are quite a few factors that impact this term "useful":

  • How big of a community does the niche have?
  • Are there other tools that do a lot more ?
  • How big is your following ?
  • etc
  • etc
  • etc

A few of these variables can be controlled and a few can't be, and that's okay, you'll forget about these when you start having fun with what you build.

Cause at the end of the day, the entire purpose of life is to have fun (and eat, eating is actually more important.)

I'm probably not a good source of knowledge if you're building a business...

anyways, that's about it for this, Adios!

]]>
Mon, 12 Aug 2022 00:00:00 +0000
https://reaper.is/writing/20220817-modern-react-a-mess.html Modern React, is a mess. https://reaper.is/writing/20220817-modern-react-a-mess.html Modern React, is a mess.

Clickbait title, I know. It's kinda true though.

Every 2 days there's a new thing about react and how to use it, to the point that the whole "it's just javascript" statement makes no sense to me anymore.

I've made a few statements about moving back to class components and, I've actually done that for a lot of components at work.

Another one of your futile rants?

Maybe, idk, I'm just a dumb guy.

A few months back I figured out that context shouldn't be used for state, and a few days after that, I read a post about it from library maintainers stating the same thing.

A little later, while maintaining a project, I found a component that had excessive re-renders due to a missing dependency in one of its useEffects. That was an easy fix, but then we find out that we shouldn't be doing data fetches in useEffect altogether.

Source Tweet

Now the thread involves quite a few known people in the react community and I get the point of not doing it in useEffect but we were introduced to useEffect as an alternative to componentDidMount and if I wish to get data on mount I will put it in there.

I mean, we can use a data fetching library which also does this, but then we have to make sure that we don't update the fetched data into a state primitive, due to internal react update paradigms (also why micro state libraries are needed).

I could be wrong, I didn't write react, nor did I wish for it to be so complicated.

It doesn't end there though; how you do stuff in react has changed quite a bit over the past few years, and it feels like it's no longer just a UI library.

It's got its own way of doing things, and this brings back the Angular environment, where the community would just go crazy because something wasn't done the "Angular Way".

Now, react and react native are something I use on a daily basis and I can't just throw it up into the air since there's no other alternative to react native for me (I'm going to let flutter mature a little more before I jump on that train).

So, any solution after all this rant? Apparently not. The options include using libraries that handle these flows for you, falling into dependency hell, and reading more documentation than you write code. That's bearable, I guess.

Unless. Something breaks.

Then your options are to fork and fix, and then you might do something wrong because "that's not how you do it in react!"

Satire aside, the options really are to pick up good libraries to help you with most tasks and if possible stop using hooks altogether.

Why would I stop using hooks?

Based on posts and reading codebases / documentation, the primary reason for hooks to exist is to help library developers set up a good flow for you.

For example,

useSWR: a library to fetch and cache keyed data, or more formally an implementation of stale-while-revalidate, a spec that describes the mechanics of returning stale data while revalidating for new data in the background.

The library is amazing and can handle most cases today because of the simple API that allows you to pass in a fetcher which is basically the function that'll define the data fetching logic.

Now, could you implement this yourself? Sure.

To simplify the behavior,

  1. A fetcher function that depends on a key
  2. The key is what decides what something will be cached under, like an identifier
  3. The hook returns the error and data values; if there's neither, the network call is still active, i.e. it's in a loading state.
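Stripped of react entirely, those three points sketch out to something like this. This is a naive illustration (my own `naiveSWR`, nowhere near swr's actual implementation): serve the cached value immediately if it exists, revalidate with the keyed fetcher, and surface errors.

```javascript
// Naive illustration of the simplified behaviour; NOT swr's implementation.
const cache = new Map()

async function naiveSWR(key, fetcher, onData, onError) {
  // return stale data immediately if the key has been fetched before
  if (cache.has(key)) onData(cache.get(key), /* stale */ true)
  try {
    // revalidate: run the fetcher keyed by `key` and cache the result
    const fresh = await fetcher(key)
    cache.set(key, fresh)
    onData(fresh, /* stale */ false)
  } catch (err) {
    // neither data nor error yet means "loading"; an error ends that state
    onError(err)
  }
}
```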

Simple right? Yeah, no...

The library isn't 500 lines for something that simple.

  1. It has to maintain a global state for you to be able to fetch the same stale data in other components that might also be in the view.
  2. You should be able to manage dependencies of each useSWR hook separately.
  3. The passed in fetcher needs to be cached so it's passed to a useRef and monitored.
  4. The key is also cached and monitored similarly.
  5. You need to see if it's the component's first mount and if the above 2 (key,fetcher) have changed since that first mount since a react component can have up to 4 renders on the "initial" component render.
  6. The error and data aren't state (useState values) inside the custom hook but are maintained in the cache and read from it to be sent to you, because if they were plain state primitives, swr would cause a lot more renders on every mutate() call you trigger.
  7. There's also cases where you have to handle cancellation of these fetch calls since if the calls complete for an unmounted component then that's considered a data leak
  8. and this goes on (totally not trying to avoid explaining the remaining 100 lines)

So, can a frontend developer whose work was just to write a simple UI render after getting data from an API function or SDK do all this in every project?

But they only have to do it once and then copy it everywhere!

Yeah and then copy the fixes back to the older projects, right?

Don't claim it to be beginner friendly when it's not, and it's definitely no longer just javascript.

All that bashing to promote your library?

Nah, my library doesn't even solve all the above problems, and it probably has more issues than I can even imagine right now; I'll only find out about them as it starts getting used by more and more people.

So, no.

My libraries have nothing to do with this. Also, I haven't even mentioned any of them in the post yet.

But it works!

It does, and it works beautifully. I've mentioned it before, I don't hate the library, but that doesn't mean I'm not irritated by the decisions.

Luckily I'm not smart enough to sit and write my own UI library that'd work everywhere (web,ios,mac,windows) so, I'm going to have to adjust to the decisions taken by the devs of react but, I can sure put it out there that something is wrong.

The devs of react will have convincing reasons for those decisions and I might have missed them while going through the RFC's , so that's wrong on my part.

Just use class components then!

Ah yeah, about that. I already am.

Sadly, the number of HOCs I have to write to get data from libraries that only have hooks is rather high, but that's okay, it's still a manageable way to do things (at least, right now).

The only magic in class components was the this.setState call; everything else was just plain javascript. I could control what caused renders with componentDidUpdate and componentWillReceiveProps, and honestly that control is missing now, so yes, class components are a good option. (Yeah, React.memo exists, but the docs ask you to refrain from depending on it.)

So, valid options are

  1. Class Components + HOC's to get data/actions from hooks
  2. Good libraries that you can trust the devs are actively maintaining (so, nothing from my github!)

Finally, here are resources that'll help you be a better react developer, in case you wish to know the right way to do things.

Oh also, on the contrary, the react docs do ask you to make ajax/api calls in useEffect.

  1. https://reactjs.org/docs/faq-ajax.html#example-using-ajax-results-to-set-local-state
  2. https://tkdodo.eu/blog/avoiding-use-effect-with-callback-refs
  3. https://tkdodo.eu/blog/use-state-for-one-time-initializations
  4. https://kentcdodds.com/blog/how-to-use-react-context-effectively
  5. https://kentcdodds.com/blog/application-state-management-with-react
  6. https://www.joshwcomeau.com/react/why-react-re-renders/

You can find more by yourself, but hopefully this has been a nice rant for you to read.

Adios!

Update: Added another link (6.) above as it does explain quite a bit visually

]]>
Mon, 17 Aug 2022 00:00:00 +0000
https://reaper.is/writing/20220822-open-source.html My Open Source https://reaper.is/writing/20220822-open-source.html My Open Source

Almost everything that I love is tied down with Tech, and honestly that sounds kinda boring but for me at least, it's not.

I've been with programming for quite a long time now.

I started when I was in 10th (about 15 years old) with tiny programs in C to teach my younger sister for her computer classes.

Then I got into custom roms for android and built a few roms (pretty lame ones). The journey surprisingly continued to be in my favour as I got my Bachelors in Computers.

Then I decided that I could learn the rest by myself, and I was both wrong and right.

Wrong, because not everything on the web can replace an actual mentor. A mentor smooths out a lot of the issues for you, and this is a gap I still feel, since most of what I do involves a lot of experimenting, which adds "time spent to learn" that I could have avoided with a mentor.

It has its own perks as well. You feel like a scientist, because you're trying to find solutions to problems where you don't like the existing options, either because they don't align with your values or because you think they can be improved.

I was right, because I got to learn from resources I wouldn't just find in a library. The open source code repositories have been my teacher for the longest time now.


I've talked about owing a lot to the open source community, and why I try to make almost everything I do open source.

But, what is open source?

Seems like a pretty easy question to answer and yet, there's different answers from different people and I just wanna put it out there as to what it means to me right now.

I stress "right now" because my perspective has changed over time and might keep changing as my values around software and programming change.

There's different perspectives and here's a few:

  • For some, open source is just idiots giving out free code that they can copy/redistribute to make their job easier and keep making money from.

  • For others, it's an amazing set of developers who code for the love of coding and they'd like to be one of them

Some love it enough to work on OSS projects alongside their day job, and some have helped the community so much that they're able to earn off the donations from their work.

There are a few other perspectives, and they're all harmless to OSS developers, mostly because we're busy figuring out fixes and have no time for your judgement.

I've mentioned that I think of programming as art (people on the RSS feed would know), so my current stand is to do it because I love it, or something like that. In all honesty, it's the skill I've spent the most effort improving.

There's no other skill that I'm even half decent at so coding it is.

Open Source for Me

  • is being able to share and read code from other, much better developers, which has taught me to keep my options open and my opinions loose. I can be wrong, I can be super wrong, and that's fine, I don't mind being corrected
  • is a source of amazing ideas that can be re-built and/or improved on. In most cases, you'd be helping the original author out and they would appreciate any help they get

My stand on Donations

I can't really say much since I hardly have any experience with sponsors or donations

But, I recently landed on How to Support NetNewsWire. It's what made me draft this post.

It also made me want to delete my sponsors and all the donation links that I've had on my repos. I didn't delete them because, adding sponsors was a decision taken so I could sponsor other devs, the ratio I wanted to maintain was 1:2 or 1:3 but it's 1:5 right now and I think that's fine since I'm not sponsoring huge amounts.

Point being, I'm not putting a price on my own work but trying to make it sustainable for both me and other OSS devs to continue doing what they like doing.

I picked that up from Drew Devault, and there's obviously things we both might not agree on, but there are things that do make sense to me and I'd like to act on them when possible.

He does the above with funds collected from SourceHut and his other endeavors, I don't have such an amazing product or idea to help me with it but I've got my day job for now.

If I ever stop working for companies and start freelancing/consulting, I'll have to reevaluate how much of this mentality would still make sense and if I could even move forward with it but that's a problem I can solve when it comes

That's about it for this post, Adios!

]]>
Mon, 22 Aug 2022 00:00:00 +0000
https://reaper.is/writing/20220912-do-we-need-js-frameworks.html Don't use UI libraries/frameworks for everything! https://reaper.is/writing/20220912-do-we-need-js-frameworks.html Don't use UI libraries/frameworks for everything!

Do we need JS UI libraries / frameworks?

Not always.

That's the answer, and I'm going to go through why I think so, with a short example of the things people claim a framework would handle better for me.

The general arguments that come forward when supporting frameworks are

  • Ease of use
  • Performance Optimized
  • Extendable

and I'd say all of them are true, but you can also make things less abstract by not using one. I'd say differently when working with something a lot more complex where you need to handle a lot of rendering.

Let's get to a simple app example,

I wrote typer as a fun side project that I can spawn at any time to warm up on a new keyboard or for my daily typing practice. I've got a decent typing speed, which could be increased slightly with some nice coffee.

Either way, I don't intend to get any faster, and doing the same exercise on something like monkeytype forces me to put in my all to beat my own average WPM every time; doing that every day when it's just supposed to be a warmup makes no sense to me.

Also, I just hate numbers as the answer to “Am I good enough?”. They might help others see how much they've improved, but I'm fine with being average, so I'd rather stay away from subconsciously answering that question every time I open a typing test website.

Why is this a good example?

Well, because the approach I took for the typing website is to tokenize the text and then modify each token to show whether what was typed was correct or not.

I could write this in react with something similar to this.

function CharNode({ correct, children }) {
  // `correct` is treated as a tri-state: true, false, or undefined (not typed yet)
  let classList = []
  if (correct === true) classList.push('correct')
  if (correct === false) classList.push('incorrect')
  return <span className={classList.join(' ')}>{children}</span>
}

const isCorrect = (source, character, input, pos) => {
  // logic to see if the character exists in the words typed and matches with what was expected
}

function Typer({ words, inputValue, currPos }) {
  return words
    .split('')
    .map((char, index) => (
      <CharNode key={index} correct={isCorrect(words, char, inputValue, currPos)}>
        {char}
      </CharNode>
    ))
}

This is definitely smaller than the vanilla JS implementation, and the optimizing factor, or what keeps the renders small, is that the same value of the correct prop will not re-render a CharNode (assuming the component is memoized with something like React.memo). So I could have 100 characters and this would be fast, and at 1000 characters it would only start to slow down very slightly.
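The "will not re-render" part deserves a caveat: a plain function component re-renders whenever its parent does, and React only skips it when it's memoized (e.g. with React.memo). The underlying mechanism is just a shallow prop comparison, which can be illustrated without React at all (this is a sketch of the idea, not React's internals):

```javascript
// Illustrative sketch: skip recomputing a render when the props are
// shallow-equal to the previous call, which is what React.memo does.
function memoize(render) {
  let lastProps = null
  let lastResult = null
  return function (props) {
    const sameProps =
      lastProps !== null &&
      Object.keys(props).length === Object.keys(lastProps).length &&
      Object.keys(props).every(key => props[key] === lastProps[key])
    if (sameProps) return lastResult // skip the "render" entirely
    lastProps = props
    lastResult = render(props)
    return lastResult
  }
}

let renders = 0
const charNode = memoize(({ correct, char }) => {
  renders++
  return `<span class="${correct ? 'correct' : 'incorrect'}">${char}</span>`
})

charNode({ correct: true, char: 'h' })
charNode({ correct: true, char: 'h' }) // same prop values: cached, no re-render
```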

The same on the other hand in Vanilla JS would require something like this

const words = 'hello world'
let nodes = []

function createCharNode(char) {
  const node = document.createElement('span')
  node.innerText = char
  return node
}

function createContainerNode() {
  const container = document.createElement('div')
  return container
}

function install() {
  nodes = words.split('').map(x => createCharNode(x))
  const container = createContainerNode()
  container.append(...nodes)
  document.body.appendChild(container)
}

function update(inputValue) {
  words.split('').forEach((char, index) => {
    // clear the previous state so corrections are reflected
    nodes[index].classList.remove('correct', 'incorrect')
    if (index >= inputValue.length) return // not typed yet

    if (char === inputValue[index]) {
      nodes[index].classList.add('correct')
    } else {
      nodes[index].classList.add('incorrect')
    }
  })
}

That's definitely a lot more code, and also not the first implementation a beginner would think of. They'd instead re-render the entire node tree every time instead of manipulating each node by index or by id.

Funny thing is, this is going to get slower once the number of nodes crosses about 500. The update tasks modify the tree, the browser has to re-render it, and the whole cycle slows down when a lot of such updates go through.

The thing about react / vue / any other framework that works with a VDOM is that the above manipulations are done on a programmatic representation of the DOM, which is faster to manipulate than the actual rendered DOM. Once the updates are done, a diff of DOM vs VDOM can be generated to run a single update on the DOM.

The advantage I'd have here over the framework is the granular control I get over optimizing this. For example, this would get slow at 500 nodes because of the continuous update invocations.

I can throttle those to execute only once every 100ms, making it easier on the browser's execution stack. I could also delay each update asynchronously with something like a debounce to further reduce the continuous load on the browser.
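A throttle like that is a few lines of vanilla JS. Here's a sketch (names are my own), assuming we also want a trailing call so the final keystroke is never dropped:

```javascript
// Illustrative sketch: invoke `fn` at most once per `interval` ms, with a
// trailing call that always runs with the most recent arguments.
function throttle(fn, interval) {
  let last = 0
  let timer = null
  let latestArgs = null
  return function (...args) {
    latestArgs = args
    const now = Date.now()
    if (now - last >= interval) {
      last = now
      fn(...latestArgs)
    } else if (!timer) {
      timer = setTimeout(() => {
        timer = null
        last = Date.now()
        fn(...latestArgs) // latest arguments, not the ones from scheduling time
      }, interval - (now - last))
    }
  }
}

// e.g. wrapping the `update` function from the vanilla example:
// const throttledUpdate = throttle(update, 100)
// input.addEventListener('input', e => throttledUpdate(e.target.value))
```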

Another advantage, or more a side effect, of the granular control is that when profiling this app I'd only be dealing with code I've written for the updates, and I can make it faster by avoiding even more redundant paints, instead of dealing with abstractions that I can't fix and have to wait for the framework's team to fix for me.

Which brings us to the point, should you use one?

And the answer is: not always. Using them makes sense when you're dealing with UI that has a lot of complicated rendering logic (which is most SPAs today), but for simple behaviour additions vanilla JS should be fine.

I don't need to add the entire react and react-dom to handle a page which is just going to search through a small list of items and render them, that's not a complicated flow.

Neither is it needed for something like a portfolio website where the only reason you have framework is for the SPA routing solution for your chosen library/framework (like, really?)

There have been vanilla JS solutions for routing for ages! I don't recommend using window.history directly, since the History API may not be fully supported in every browser you wish to target, so a library is recommended here, just because the polyfilling can be delegated to it, or you can polyfill it yourself.

Using SSG will be its own separate rant; I saw someone with a portfolio written in Angular, so that's going to be a long one.

But, I want reactivity!!

  • Yeah, okay, you can write node renders based on events by adding a simple pub sub model.
  • Storing all shared nodes in a memory tree or even fetching them again with an id where applicable
  • Or, create observables and move node creation into modules
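The pub/sub model from the first bullet really is simple. A dependency-free sketch (topic names and helpers are illustrative):

```javascript
// Minimal pub/sub bus: subscribers register per topic, publishers notify
// every handler registered for that topic.
function createBus() {
  const topics = new Map()
  return {
    subscribe(topic, handler) {
      if (!topics.has(topic)) topics.set(topic, new Set())
      topics.get(topic).add(handler)
      return () => topics.get(topic).delete(handler) // unsubscribe function
    },
    publish(topic, payload) {
      const handlers = topics.get(topic)
      if (handlers) handlers.forEach(fn => fn(payload))
    },
  }
}

// Usage sketch: nodes re-render themselves when the state they care about changes.
// const bus = createBus()
// bus.subscribe('input:change', value => update(value))
// input.addEventListener('input', e => bus.publish('input:change', e.target.value))
```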

Quite a few such patterns have existed for a while. The new hyped library you're using probably still benchmarks itself against the update speed of the browser DOM. You'll probably have an easier time with an abstraction that handles the render diffing for you instead of writing specific diffing logic every time; then again, custom diffing can be optimized to be a lot faster for your specific implementation, since you don't have to bring in custom primitives like state / observables, thus reducing code. But then, custom primitives also help with structure in the code, so ...

If you now have a headache, you're welcome :)

Anyway, back to the original answer. It depends on what you are trying to do.

My general rule of thumb: can this be done with a simple function? If the answer is yes, it's done with a generic javascript implementation and not library-specific methods.

I use library specific methods or instructions when it's something that has to be done with it.

For example, fetching data in react cannot be done with a simple javascript function, because you have to store the result in the contextual tree. For this, people use redux, useSWR, etc., or store it in component state directly and use it from there, but it can't be a simple function execution; it has to be invoked from an effect with access to the state.

Vue, on the other hand, lets you execute a function and add the data to a reactive variable the component is listening to, so it's considerably simpler. Same goes for svelte, solid, etc.

In vanilla JS, you choose when to re-render the node so the magic part of the libraries / frameworks can be ignored in this case and everything is either reacting to a DOM event or is a function invocation.

The reason I prefer vanilla over libraries is that, in most cases when working with JS, I end up breaking the app because a certain library decided it was okay to ship breaking changes in patch versions. I've always pinned deps, but security vulnerabilities might end up forcing a refactor anyway.

So, when you're like me and might not look at a codebase again for quite a while, you come back to it and everything is breaking. The solution you find online is "Update to the latest version!", you read the docs, see 10 breaking changes, leave the issue as is, and go "Hah, the users probably won't see that bug that often".

Whereas with a server-rendered setup I can avoid having JS handle complex behaviour and use vanilla to handle something that'll work in most browsers and can be easily picked up after a while, since it's "just javascript" with no magic.

Though I still end up using react a lot because of my work with react native and shared react code, and it does frustrate me when something tiny breaks and all I can do is wait for the team to fix or review a PR. The only good thing is that something like patch-package exists, and I mostly add such fixes to the codebase directly.

]]>
Mon, 12 Sep 2022 00:00:00 +0000
https://reaper.is/writing/20221011-lower-level-projects.html Lower Level Projects https://reaper.is/writing/20221011-lower-level-projects.html Lower Level Projects

Running away from complex projects is something most of us do. It's mostly because you don't know what you're supposed to do; it involves a lot of trial and error, and if you just can't handle that, you start searching for existing solutions and settle.

Being interested in the most basic things (in terms of tech) has helped me build things based on assumptions and that helps me look for solutions faster since I kinda know what I'm looking at. It's not a perfect approach but it does help you speed up things.

Also, it's the only way I know that has helped me learn quite a few things: build them on assumptions about the implementation, then go into detail to correct myself. The correction part is quite important.

A recent project, which deals with creating a middle layer between react native and a pure javascript API to manipulate views, has let me talk to quite a few people I've envied over the years. Honestly, even if it doesn't end up being a successful project, having worked on something so low level made me realise the complexity that has been abstracted away from us.

This has significantly helped in improving my thinking process while writing code and I appreciate what lower level implementations have saved us from. If I had to deal with all of that when I first started, I wouldn't be a programmer, I'd be crying in the corner somewhere.

Not that I suddenly have a higher IQ, but I'm smarter than I was 2 weeks ago with regards to the domain I've been working in.

I guess at some point in your developer life, no matter what you do, (frontend, backend, etc) , getting down to writing implementations of things you've always taken for granted will help you get better at development.

]]>
Mon, 11 Oct 2022 00:00:00 +0000
https://reaper.is/writing/20221108-neovim-configuration.html NeoVIM Configuration https://reaper.is/writing/20221108-neovim-configuration.html NeoVIM Configuration

Yeah, no one asked for this,

Just a couple of steps to reproduce my minimal shell configuration and neovim configuration.

Also, mvllow and I use the same keymaps so you might want to go through the lil-editing and other lilvim files to see if you need to change the keymaps

  1. Install Neovim and iTerm2
brew install neovim --HEAD
brew install --cask iterm2
  2. Change iTerm2's background color to the hex #111111
  3. Clone https://github.com/mvllow/lilvim.git and copy the .config/nvim folder from the repository to ~/.config/nvim
  4. Change the colors in ~/.config/nvim/color/un.lua to match the following in the dark variant
        error = "#eb6f92",
        warn = "#f6c177",
        hint = "#9ccfd8",
        info = "#c4a7e7",
        accent = "#ffffff",
        on_accent = "#191724",
        b_low = "#111111",
        b_med = "#181819",
        b_high = "#222222",
        f_low = "#959595",
        f_med = "#aaaaaa",
        f_high = "#bebebe",
  5. Now go to ~/.config/nvim/lua/lil-ui and comment out / remove the rose-pine setup altogether
  6. and in ~/.config/nvim/init.lua add the following statement
vim.cmd("colorscheme un")
  7. Hopefully you saved all the changes; now run :PackerSync, close the editor and open it up again, wait for Treesitter to install all its dependencies, then close the editor and open it up again.

Note: You could try the :so % command to reload the config, but it hardly ever works with packer, so just quit and reopen

  8. Run a final :PackerCompile and :PackerSync and you are done.
]]>
Mon, 08 Nov 2022 00:00:00 +0000
https://reaper.is/writing/20221204-decisions-and-updates-november-2022.html Decisions and Updates November 2022 https://reaper.is/writing/20221204-decisions-and-updates-november-2022.html Decisions and Updates November 2022

Here's what has happened in the past month and a summary of why they happened.

Preact Native

I took on an ambitious project to get preact working on react native and, for the most part, got the rendering working with the react native SDK, but soon realised that getting rid of the react dependency would lead to rewriting most of what the react team has built over the years.

So, this project has been slowed down immensely, as a matter of fact I haven't pushed the minor fixes I made for almost 3 weeks at this point.

I did start writing a tiny port of the navigation libraries and it's still nowhere close to being remotely functional, but hopefully we get there.

Reason for writing the port was to make sure if someone does decide to help or pick up the project, there's enough groundwork done to move forward with it.

The major problem is still the children-update logic on android, which is handled with hacks inside the react native codebase; with our DOM-styled abstraction it's hard to keep that domain logic separate from the tree.

Minit and Twitter

I moved away from using twitter extensively and only open it once a week or so just to check for any updates from people I did join twitter for.

We're mostly back to posting updates as blog posts, but for smaller and simpler things I built minit, which is just an ephemeral dumpster where you can throw in random stuff and it'll get deleted after 24 hours. It's text-based right now, but I think I'll add image support. It's written in the traditional fashion, with maybe a little bit of JS for minor interactions.

Mudkip

mudkip, the mini documentation generation tool, got a small update this month where I added search index generation for the documentation. It's no algolia-level stuff, just a simple score-based search index using the search behaviour that sublime text uses, and the index is bundled as a javascript file with the output.
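I can't speak for mudkip's actual scoring code, but sublime-style fuzzy matching generally means: the query's characters must appear in order in the candidate, with consecutive hits and word-boundary hits scoring higher. A rough sketch of that idea:

```javascript
// Illustrative sketch (not mudkip's actual implementation): a Sublime-Text
// style fuzzy score. Query characters must appear in order; consecutive
// matches compound and word-start matches get a bonus. Returns 0 on no match.
function fuzzyScore(query, candidate) {
  const q = query.toLowerCase()
  const c = candidate.toLowerCase()
  let score = 0
  let ci = 0 // next index in the candidate to search from
  let streak = 0
  for (const ch of q) {
    const found = c.indexOf(ch, ci)
    if (found === -1) return 0 // character missing, not a match
    streak = found === ci ? streak + 1 : 1 // contiguous with the last hit?
    score += streak // consecutive matches compound
    if (found === 0 || c[found - 1] === ' ' || c[found - 1] === '-') {
      score += 2 // word-start bonus
    }
    ci = found + 1
  }
  return score
}
```

Ranking results is then just scoring every indexed string and sorting descending, dropping the zeros.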

If you don't have JS enabled on your browser the doc website works as normal and the search input isn't shown either.

I enjoyed writing JS in nimlang and it felt like I could do a lot more powerful stuff with it but then I haven't spent that much time on side projects the past month or so due to the day job's work.

Other minor contribs

The contributions to other oss projects were close to none, I might have fixed tiny bugs in a few and maybe done some chores here and there but nothing more.

Hopefully this month I'm able to spend more time on stuff. I do wish to add a few arch changes to tillwhen and also finish the preact-native-navigation port which is private right now but will be OSS once I'm done.

]]>
Mon, 04 Dec 2022 00:00:00 +0000
https://reaper.is/writing/20221218-crystal-lang.html Playing around with Crystal Lang https://reaper.is/writing/20221218-crystal-lang.html Playing around with Crystal Lang

I'm no professional "Programming Language Reviewer" but I think readers at this point understand that I jump around multiple languages just cause it's fun.

There's no meaning or purpose behind this. It has given me the advantage of being able to pick a language for a specific usecase, but that's basically it; most consumers, in most cases today, don't care. I do, so I pick what I like and what works for the situation, to the point where I would write something in Assembly if required.

Reasoning

Picked it up because, at least in my head, it's now gaining traction in and around the community, and I see a lot of higher-level stuff being available in the language at this point.

I wished to see whether I would like the language or not (which is based on just my opinions, hold your horses).

Language Setup

The setup was pretty simple and straightforward, but then what isn't if it's tied to brew? I think I should also release commitlog and mudkip on brew to make them easier for me to install later; I spent so much time finding the curl script for mudkip specifically.

brew install crystal

and you're done, they have package manager setups on linux distros as well, so my attempt with alpine was equally simple.

Overall Experience

Syntax

Python has a beautiful syntax and there's no denying it. Cleanly indented languages with simple syntax are something that just works for me personally.

Examples: lua, nim lang, ruby

And I think I can add Crystal to the list of syntaxes I enjoy writing. It was a no-brainer, since the language's syntax picks off of ruby, so it was always going to be something I like to type; there were no hiccups in translating ruby syntax knowledge (thanks fastlane) to this.

Size

Now, the breaking factor that moved me to nim from most languages I've tried is the output binary size. Even here it's ~1 MB for a simple "hello world" program, and while that's smaller than golang's output, I think I'll still stick to nim for future CLI tools, since output size matters a lot to me when creating them.

For others, that don't really care about that, you can go ahead

Web Dev

I've not aggressively tested the language but I was able to setup a basic web server with the following features

  • View Rendering (HTML, Mustache)
  • The Static server (obviously)
  • Basic Auth
  • Basic queries from DB
  • Worker Threads1

CLI Dev

This was impressive: I've worked with nim and go for CLIs, and they both provide a way to parse CLI options in the std library; Crystal does the same, so this was an easy winner.

Also, it's really easy to construct and handle the different cases almost as easy as bash, so I guess that's a win.

On the other hand, user input, at least text, is pretty simple to work with as well, literally the same as ruby, so I think for basic cli tooling you should be fine. Unless you wish to build something like Astro's Houston.

Thoughts

I'm going to stick to Go lang for microservices and Nim for CLI tooling for now, but I will keep looking at Crystal for both, since I'm still not able to confirm that all cases of Database invocations work or let's just say I'm not comfortable switching to it permanently yet. Looks like a good language to jump to though.


  1. native concurrency, a little tricky but not that hard; or I'd say it's more about remembering that you can ask the Fibers to yield at will or they will wait for the event loop. I think the closest I can compare that to would be the timer functions in Javascript (setTimeout and setInterval)↩︎

]]>
Mon, 18 Dec 2022 00:00:00 +0000
https://reaper.is/writing/20221218-decentralisation-movement.html Re: On the current decentralisation movement https://reaper.is/writing/20221218-decentralisation-movement.html Re: On the current decentralisation movement

This is like a reply or more like, my thoughts after reading the post from Manu https://manuelmoreale.com/on-the-current-decentralisation-movement

I've been on the same train for a while, but did end up creating a twitter account, which helped me get a few more ideas to build on from people I envy or wish to be like1 in the developer space. I think twitter, or now mastodon, is an important gathering platform for multiple people, with rather low friction compared to building a blog, and that's what people and devs are probably running towards again?

Like, having my own website and blog is significantly easy for me, but that might not be the case for everyone. I've seen devs who haven't made a simple portfolio for themselves online because it's too hard for them to understand DNS, hosting, etc., yet they have a Twitter account. I guess it's the convenience of setting up that makes the whole situation a problem devs want to solve.

So, in my head, the solution people are chasing isn't decentralisation but how to make it convenient for the majority of humans.

But then the argument stands: if you do host it somewhere, you can still be thrown out based on the hosting provider's rules. So maybe get a Raspberry Pi, a battery module and a solar module and run it off of that; but then the site might have downtime.

That doesn't really matter if you have an RSS feed for your blog, since it's a single file that readers will fetch once (per interval), so if it's online once per day, that should be enough.

Overall, I think the majority wouldn't care about it and never really cared about it and the convenience of being able to connect with a ton of people does just work for them.

Maybe like Manu, I'm just some dumb guy overthinking this all out cause I've been using the solution he provided for a while but then other people might just prefer the social media vibe?

I'm not good at figuring out humans, tech is much much easier.


  1. preact-native being one such example ↩︎

]]>
Mon, 18 Dec 2022 00:00:00 +0000
https://reaper.is/writing/20230101-update-december-2022.html Decisions and Updates December 2022 https://reaper.is/writing/20230101-update-december-2022.html Decisions and Updates December 2022

Happy New Year humans!

Honestly, I haven't done much this entire month, and to be fair, I almost gave up on development and coding altogether for a while.

It has nothing to do with getting overwhelmed, but for once I just forgot the whole reason I wrote code in the first place. I started feeling like I wasn't enough for whatever I was doing and wasn't useful to anyone, and that was my mental state for a week or so.

It isn't fun when you start getting impressed by everything that others have built and then see that whatever you've built isn't as impressive.

The reset for all of this was basically remembering that I had to update one of my packages to implement something that was due a while back. I just started working on it, and while doing it I forgot all about feeling inadequate.

Partly because most of that package deals with color conversions and maths so I had no time to think about whether I was useless or useful, I was too busy getting the functionality to work.

So, here are the decisions and updates for the past month.

@barelyhuman/tocolor

Worked on updates for the L*a*b* and XYZ color variants. The base implementations are done, but I still need to update the documentation, so it's on the next tag on npm right now.

typeapi

A simple typescript based website, that can simply list the exported types and functions from a node package.

This was built for @barelyhuman/tocolor; since I'm writing stuff in typescript, I wanted to reuse the exported types data.

Example API Reference for @barelyhuman/tocolor

This isn't open source right now because it's not up to the MVP stage yet and I'm still working on it. It can't handle relative exports and the like yet, and I still have to add all of that, but it's already usable for most tiny libraries that export all their types from one file.


Other minor stuff that I need to complete now is the libraries that I normally contribute to, which were also pushed back due to the aforementioned stuff.

]]>
Mon, 01 Jan 2023 00:00:00 +0000
https://reaper.is/writing/20230116-on-javascript-rest-apis.html On Node JS and REST API Frameworks https://reaper.is/writing/20230116-on-javascript-rest-apis.html On Node JS and REST API Frameworks

I've talked about how I like both REST and GraphQL as for me both of them are just an interface to expose functionality.

I've also ranted a bit about GraphQL not being a silver bullet for all your REST problems, but either way, it's good to learn, definitely useful, and actually kind of faster to set up if you're working with the GraphQL DSL instead of Typescript.

This post is more about me still being irritated about loopback 3 reaching EOL; it was such a well-balanced framework. It's been 3+ years since then and I'm still irritated, but at this point I've ended up replicating a lot of what loopback provided.

Either way, as a programmer, being irritated that something doesn't work the way you want is never the solution; the solution is to fix whatever isn't working, and me being me, I wrote a set of libraries that mimic the things I liked about loopback.

This is all going to be part of the http packages that I'll release over the course of the next few months. I could release them right now, but they don't have a settled API style yet, and I'm making mods to them while carving the chunk out of TillWhen's backend.

Features or stuff I wish to achieve.

  • An app state handler all over the app
  • Easy module injection, so no dependency injection is needed
  • Ability to handle app boot state while being able to modify the datasources if needed
  • Being able to generate a SDK from simple REST Templates for external API's
  • Model Level Mixins

The app state thing is something most libraries and frameworks already provide so I'm going to reuse that from there.

Module Injection

The point is to have all required functions, models, etc. on the above app state handler so that you can access them from anywhere in the application source code.

This is pretty verbose in the current version, but I wish to add a graph-like tree to avoid circular deps breaking when the app boots.

Bootables

These are functions I wish to run on app boot to sync with external services or data sources, run migrations, etc. This also already works, but is verbose; the whole thing is right now part of TillWhen's rewrite.

SDK

The SDK generation already exists as a library, httpsdk. While it's not published to NPM and is obviously not HTTP-spec compliant right now, it can handle basic request-body and request-header SDK generation.

Mixins

More like datasource level metadata which was something I used a lot in loopback to be able to handle custom model properties.

For example, let's say you wish to populate a field computedField based on normalField's value:

  if normalField is 0 => computedField = `PENDING`
  if normalField is 1 => computedField = `PAID`
  if normalField is 2 => computedField = `COMPLETED`

The problem right now is that we'd have to write a function that stores this mapping and injects it when fetching data from the table; or, if your orm supports load hooks, you add it there.

And yes, you could use ENUMs, but then changing/modifying an enum requires a whole other migration.

But what if you have such fields all over the place, because it's easier to use numbers instead of enums? You can't do this everywhere, and this is where a Mixin and a Meta Property come into place.

Let's say I define a computedIdentifier in the property definition for the model, and then all models share the same onLoad function that looks up the computedIdentifier and matches the integer value to automatically map the string status.

This is something you could do in loopback with the Model, and I can kind of achieve it with ObjectionJS, but it's still not as seamless.
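As a hypothetical sketch of that idea (none of these names are real loopback or ObjectionJS APIs; computedIdentifier, STATUS_MAPS and this onLoad signature are made up for illustration):

```javascript
// Hypothetical sketch of the mixin idea: property definitions carry a
// `computedIdentifier` pointing at a status map, and one shared onLoad hook
// fills in the computed field for every model the same way.
const STATUS_MAPS = {
  paymentStatus: { 0: 'PENDING', 1: 'PAID', 2: 'COMPLETED' },
}

const orderModel = {
  properties: {
    normalField: { type: 'integer' },
    computedField: { computedIdentifier: 'paymentStatus', from: 'normalField' },
  },
}

// One generic hook shared by all models, instead of per-model load logic.
function onLoad(model, row) {
  const out = { ...row }
  for (const [name, def] of Object.entries(model.properties)) {
    if (def.computedIdentifier) {
      out[name] = STATUS_MAPS[def.computedIdentifier][row[def.from]]
    }
  }
  return out
}
```

The win is that adding a new computed field is just metadata on the model plus an entry in the map; no new load logic anywhere.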

Getting these to work would require me to write an abstraction over knex, and that's the complicated part that's stopping me from releasing the http package. And since TillWhen has been my place for toying with concepts, you'll have to wait till I'm done...

Oh, btw, this isn't anything new. Fastify has a graph-based plugin manager, and then there's Hapi, which has had this for years. I'm just building these because a few of them need to be coupled with each other, and that isn't something I can do in the existing libraries; or maybe I just didn't go through enough iterations with them to achieve it.

This is basically what's going on right now, so hopefully I can finally have a cleaner more manageable backend for tillwhen compared to the initial hacky codebase that I wrote the project in.

]]>
Mon, 16 Jan 2023 00:00:00 +0000
https://reaper.is/writing/20230124-commitlog-a-recap.html commitlog - A recap https://reaper.is/writing/20230124-commitlog-a-recap.html commitlog - A recap

It's been about 2 years since I wrote commitlog#b0f1b1d2bc4265cb72b70b3ae5b60f8e65f47b12 and it's gone through a few changes from when it was first written.

The post basically goes through things learnt during these additions.

Reasoning

The reason is the same as what's written in the README: I built it as a replacement for commitlint and the commitlint changelog generator, but for all languages, since I work with quite a few. It was also my first project in golang.

As expected, the initial codebase was a mess, and since it was my first project I requested a review from other gophers over reddit. This got the initial 2 merge/pull requests that corrected a few things.

The Initial idea and feature(s)

The initial feature set was singular.

  • Generate a categorized log loosely based on the commitlint standards.

Specifically since I did follow the commitlint standards a bit.

The CLI basically used system level git bindings thanks to go-git and built a categorizer that output stuff in markdown.

Growth

v0.X

This version moved up to handling git revisions, reference names similar to git's own sub-commands, and tag-based separation. A few short codes were also added to help you define exactly what the range of commits would be.

The next addition was release management since that was also something that differed between languages and I'd like something simple that handled it all.

This is where .commitlog.release entered the picture, and at this point most of my projects use it for maintaining versions.

v2

Over time my usage of the commitlint standards went down, and the generalised categorization was something I rarely used, though I understood the need for it. This is where v2 enters, with the following changes:

  • ability to define custom categorization patterns , can be scaled to support monorepos
  • ability to handle the semver versioning spec
  • better handling of git revisions
  • removed all the fluff from the previous version
  • a lot more structure to the codebase
  • the package is simpler in terms of being used programmatically, if needed

All of these don't seem like much, but since they were added slowly, they were done properly.

Lessons

  • People prefer CLI to handle most of everything for them without having to pass options and this is where v2 failed since it added an additional step to be able to do the categorization.
  • Dumping your idea down and being able to consistently maintain it is easier when you use the tool everyday.
  • A good CLI is a silent one. Being able to turn on verbose mode is definitely required but it's a lot more important for the CLI to not spam the terminal unless asked for.

And the advice I always give: build something you enjoy building; people liking it is the last thing you need to worry about. Unless you're building a business out of it, then definitely try to get people to like it, but I'm not that smart in that area, so you might want to look to someone else for tips on that.

Future

There's no reason for me to stop using it.

I did feel like I'd need something for monorepos and should add monorepo support to it, but it can already generate logs for monorepos if you maintain a standard for writing commits per package. You can already do commitlog -g --categories='feat(commitlog):,fix(commitlog)' and it should lay out the commits that start with feat(commitlog) and fix(commitlog) for you. It's that simple. If you need more than categorized commits, well, then you do need specific monorepo tools that handle more context based on the languages they support. commitlog tries not to tie itself to a specific language, so it makes no sense for me to add contextual categorization.

You are always free to fork and add that if you do think it's something you wish to do. It's licensed MIT for that very reason.

As for maintenance and fixes to the project, there are always tiny knick-knacks that come up while I'm using it, and I do tend to fix them locally first; the releases you see are normally a few months later, once I'm sure of that thing working as expected.

Either way, if you do start using it for some reason, do raise an issue for anything that you think needs to be fixed.

That's all for now I guess,

Adios!

]]>
Mon, 17 Jun 2023 00:00:00 +0000
https://reaper.is/writing/20230207-decisions-and-updates-january-2023.html Updates and Decisions January - 2023 https://reaper.is/writing/20230207-decisions-and-updates-january-2023.html Updates and Decisions January - 2023

Time as always seems to be moving pretty fast and one more month vanished into thin air.

To the updates, forks folks.

Personal

Started with giving a little more time for another hobby, the guitar started gaining a bit of attention again, no idea how long that's going to last though.

Dev

Back to what I normally write about, development and shit.

Module Engine / App Engine / whatever you'd like to call it

Most of my time was spent on the app-engine/module-loader, whatever you wish to call it, that was mentioned in one of my previous posts.

A version of this was added inline in the barelyhuman/preact-ssr-starter project in case someone wishes to understand the usecase.

For others, it's a simple module loading engine, and a pretty verbose one if you ask me. People who work with plugins will understand the requirement pretty quickly, but the idea is to make sure I can inject data / functionality into a dumb object and then use it anywhere the dumb object is imported.

A helper library for the above was also written, called barelyhuman/typeable. The point of this was to generate types at runtime for the above engine, so that if you have different modules adding different properties, there's at least some way to figure out what properties already exist on the instance. And since it's generated as an ambient type file, you can import it with JSDoc and take advantage of this even in older projects with no typescript, or if you're like me and try to avoid typescript but like the intellisense.
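To make the idea a little more concrete, here's a rough sketch of what such a module engine boils down to: a shared "dumb" object that every module mutates. All the names here (loadModules, dbModule, logModule) are hypothetical and not the actual engine's API.

```javascript
// The dumb object every file imports; it starts with nothing on it.
const app = {}

// A module is just a function that injects data / functionality
// into the shared object.
function loadModules(target, modules) {
  for (const mod of modules) mod(target)
  return target
}

// Hypothetical example modules
const dbModule = app => {
  app.db = { find: id => ({ id, name: 'demo' }) }
}
const logModule = app => {
  app.log = msg => console.log('[app]', msg)
}

loadModules(app, [dbModule, logModule])

// Anywhere `app` is imported, the injected functionality is available.
app.log(app.db.find(1).name) // prints "[app] demo"
```

The real engine does a lot more (ordering, verbosity, type generation via typeable), but the core contract is this: modules extend one shared instance, and consumers only ever import that instance.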

Preact SSR

Yeah, Astro exists, NextJS exists, this, that, whatever!

I enjoy all that and I've built at least one thing with each of these frameworks. They are great projects and provide a great developer experience, but most of it is really dependent on either one person's decision or a community's judgment of what is supposed to be "standard", and so it gets hard when that changes over time. We've had one such incident with react already, like I mentioned in Modern React, is a mess.

Either way, you end up having to either lock your codebase to a certain version to stick to your expectation of the framework, or fight the update's breaking changes till everything "just works".

The whole thing is based on a small trigger: next 13 and react 18 are no longer compatible with preact.

This is what delayed the tillwhen update, cause I was fighting the decision of whether to update or leave tillwhen at the version it was at. The older version of Tillwhen was using preact/compat to minimize the overall bundle size, and that noticeably improved page load times on slower browsers and networks.

Tillwhen's page load size was now in danger. The app has 250+ users, and while that count might not mean much to you, the point is that the page load time would increase for lower network areas, and I would like to avoid that.

The preact team may or may not work on compat this time, and that makes sense, cause to be fair it's like a cat chasing a laser at this point.

Either way, I had to move it up to next 13 for the simple reason of not having to break my head later when the breaking changes are more and the update process gets even more tedious, so better now than later.

This is where the SSR template came into place.

There are other templates, but I didn't really feel like using any. I wanted one that was simpler to modify and work with, and I wanted to make sure that if I ever get time to migrate Tillwhen from NextJS to this, at least the functionality and pages could be copy-pasted with little to no modification.

Ended up combining a few things I've written before and shoved them all into one repository that you can clone and modify every inch of. It's built over tiny tools and libs working with each other, so you can remove one and add a new one if needed.

Ex: I replaced express with polka in that repository in ~15 minutes.

You can check it out here barelyhuman/preact-ssr-starter

TillWhen

No major update in the actual application other than a color scheme change. I finally made the UI components a lot more consistent overall and got rid of a few extra chunks of code (~30K lines).

It was fun to delete stuff that wasn't being used and only added to the complexity for no reason.

It was also getting hard to keep track of which code and libraries had already been cleaned up, so I ended up writing all the rewritten lib functions and sdks in coffeescript. That made them easier to locate, and then I could restructure the folders for them.

I then compiled these files to normal javascript, and that was the little brain hack I wanted to share.

I should probably look for a tiny language that compiles to javascript but only has one way to do things, instead of the 10 different ways to do stuff in JS and coffeescript; it would simplify the decision cycle people like me end up in.

Editor Switch

Ah, this has happened so many times that it's basically a joke at this point, but we went down to VIM and Sublime again. Slowly switched the editor to be Sublime Text again because of RAM issues. (No, I'm not getting a new laptop with extra RAM, not a solution!)

And VIM, cause while I have neovim installed and all configured, I deal with remote systems a lot and it's easier to copy my one-file config and let it auto-setup everything, as compared to waiting for neovim to load everything. I could probably create a single-file configuration for neovim as well... will do that someday. For now it's just 2 commands

sudo apt install vim ripgrep
# or
brew install vim ripgrep

curl -sf https://gist.githubusercontent.com/barelyhuman/16285b2195cfd25d8c84356676cc807d/raw/3770a3f039aca45a4ad91102eafc03dcfc8606cb/.vimrc > .vimrc

and then just start vim and it'll handle the setup for me.

Again, it's just something I was already comfortable with, so I did that. If you have a single-file neovim setup similar to this, I would like to know and try it out.

Sad Shit

idk man you’re gonna have people coming at you with pitchforks if you don’t finish preact-native 😉 ~ mvllow

So um, I'm thinking of sunsetting the idea due to my own incapability to manage time and multiple projects, but as always, the project remains as is. You are free to fork and use the base if needed; that's the whole point of open source in the first place.

But it was a fun project nonetheless. The main reason for this decision is the continuous changes going on in the react and react native repositories, making it hard to keep track of what the correct way to do things is. It's a similar issue to why it's so hard for people to write new Typescript compilers: an ever-changing source of truth.

The better way out might be for me to write a JS - iOS View SDK and a JS - Android View SDK from scratch. Not sure if I'm capable enough to pick up something that huge, but if I do, you'll know. Either way, I think I'll have to find ways to get Nativescript to satisfy my requirements in the future.

That's about it, for now.

Adios!

]]>
Mon, 07 Feb 2023 00:00:00 +0000
https://reaper.is/writing/20230220-going-bonkers-over-islands.html Going bonkers over island architecture https://reaper.is/writing/20230220-going-bonkers-over-islands.html Going bonkers over island architecture

If for some reason you stalk my github, you'll see the past few days have been just commits upon commits to a repo dealing with creating your own islands architecture setup using existing tools, instead of using a framework that does it for you.

In most cases, I do suggest you set up something with astro or fresh, cause they already handle most edge cases and provide a better DX.

The point of building this, or spending any time on this project, was to make sure there's at least one repo out there that teaches you how it all works and ties together.

Currently, when searching for an explanation of how to get islands working, you'll most likely be redirected to Jason Miller - Islands Architecture, which explains the concept but has no reference implementation. So for someone who doesn't understand the basics of SSR, partial hydration (or even hydration) makes no sense. For them, Astro and Fresh are doing something revolutionary, when in hindsight this has been the norm for devs working with Ruby on Rails and Django, with the exception that they selectively write what JS loads on what page.

For most of us, that's too much work since we're used to Next.js / Astro / SvelteKit / Remix / <Insert another framework here> handling it for us. And it's all good and great, but these frameworks are heavily dependent on what their community decides for them, and in most cases you end up with technical debt just because upgrading is a problem. I've talked about this in a previous post, and if I continue on this explanation, I'll most probably repeat everything I've said in that post.

Anyway, it started as a tiny implementation of selectively deciding what chunks of JS go to the client. I asked Jason Miller to review whether that felt simple enough as an implementation, and he replied with a code snippet to avoid having to manually mount islands.

This led to the creation of variants in the project repo, each variant having 2 types.

  1. Automatic
  2. Manual

The Automatic type utilises the provided code snippet, with some modifications, to let you just write .island.js files; each one is converted to a web component that automatically pulls the required chunk from the server.

The Manual type, on the other hand, requires you to specify what chunks to load, and you get more control over lazy loading a chunk vs sending it with the original bundle.

Each has its own advantages, because too much lazy loading is also a thing.

In most cases you won't have to worry about "too much lazy loading", because the generated files in the Automatic variants are only the ones that are actually being used by the server. So if you have a component or island that isn't being rendered anywhere, then a chunk for it is never generated. Thanks to the bundler for that, not something I've done.
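For a rough idea of what the Manual type boils down to, here's a hypothetical sketch (registerIsland, mountIsland, and the mount contract are made-up names for illustration, not the repo's actual API): you register a loader per island and decide yourself whether it ships eagerly or lazily.

```javascript
// Registry of island name -> { loader, lazy } entries.
const islands = new Map()

// You explicitly say which chunk backs which island, and whether it's lazy.
function registerIsland(name, loader, { lazy = true } = {}) {
  islands.set(name, { loader, lazy })
}

// Mounting resolves the chunk (a dynamic import in a real setup)
// and hands the target element to the island's mount function.
async function mountIsland(name, el) {
  const island = islands.get(name)
  if (!island) throw new Error(`unknown island: ${name}`)
  const mod = await island.loader()
  return mod.mount(el)
}

// Example: a loader standing in for `() => import('./counter.island.js')`
registerIsland('counter', async () => ({
  mount: el => `mounted counter into ${el}`,
}))

mountIsland('counter', '#counter').then(console.log)
// prints "mounted counter into #counter"
```

The Automatic variants replace the manual registry with a custom element that reads the chunk path off its own attributes and self-hydrates, which is what the donated snippet enables.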

Moving forward, there are currently 4 variants.

  1. esbuild
  2. webpack
  3. esbuild-auto-inject
  4. webpack-auto-inject

Each of the *-auto-inject ones is the Automatic type. The esbuild version is slightly smaller in deps, though esbuild-auto-inject does add quite a few, because esbuild doesn't provide a way to modify the AST; I had to add in a parser and transformer, and they are bigger than I'd like. I could take the approach of using regexes to do the replacements, but that's too many cases to handle as compared to manipulating the AST directly.

Still, why!?

Um, I don't actually get anything out of it, other than maybe having a starter that's easier to just pick up and move forward with. I mostly already do this with my go based services using alpinejs, though that's more full hydration than partial; but then alpinejs itself is super tiny, so I'm not that worried about the total javascript on the page.

An example of this would be minit.barelyhuman.xyz which was written with Go and AlpineJS.

So, overall, it still doesn't make much sense, since the other frameworks will be putting in more effort maintaining their work and moving forward while doing that.

This "moving forward" may align with your own goals, but if it doesn't, you're basically stuck. And I don't like that, so it's easier to use tools that are bound by a scope than ones that aren't.

Either way, hopefully this helps others. If not, no biggie; it's not like I'm deleting the repos.

Here's the project the whole post is about.

barelyhuman/preact-islands-diy

That's about it for now, Adios!

]]>
Mon, 20 Feb 2023 00:00:00 +0000
https://reaper.is/writing/20230324-the-web-libraries.html The Web Libraries https://reaper.is/writing/20230324-the-web-libraries.html The Web Libraries

Another rant? Not really.

We're just going through a few thoughts on the recent evaluations I did with regard to frontend development, since I wasn't feeling like working on anything serious.

TLDR;

None of them are perfect. Go with the one you find the least friction with; for me that's preact + @preact/signals, and for something simple, cycle.js or just vanilla js with some reactive streaming library.


The whole idea started with a simple thought of writing a simpler rendering library, one which would keep the state, network, and dom away from the view side of things. I ended up writing something similar to how cyclejs does things, but instead of streams, it was mostly callbacks to start with.

This moved ahead to a simple h or hyperscript implementation donated by Jason, which I used for a bit, modified to handle a simpler implementation of signals, and then got rid of. I didn't want to build another jsx library. I get that people like JSX and that it's made it easier for them to imagine the view, but I honestly prefer template + directives (VueJS, Angular 1).

The preference arises from the clear separation of logic and view, with the reactivity hidden underneath. I think the traction svelte has gained proves my point about this.

Either way, after spending a few days on it, I felt like I was overthinking it and should just use a reactivity implementation and work with the DOM directly.

This is where the current version of typer stands. It uses a simple pull-push signal context similar to Solid.js (inspired by the author's own blog posts), and I wrote typer using that. The friction of writing it with effects and signals was basically zero. Everything just worked. All I had to be sure about was that, at the end of the day, it's JS and there are no functional programming optimizations that the interpreter does for me, so I should be wary of exceeding the stack size.
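For the curious, a signal-and-effect system of the kind described above can be sketched in a few lines. This is purely illustrative and not typer's actual implementation; real libraries like @preact/signals and Solid handle cleanup, batching, and memoization on top of this.

```javascript
// The effect currently being registered, if any.
let activeEffect = null

function signal(value) {
  const subscribers = new Set()
  return {
    get value() {
      // pull: whoever reads this signal inside an effect gets tracked
      if (activeEffect) subscribers.add(activeEffect)
      return value
    },
    set value(next) {
      value = next
      // push: re-run everything that read this signal
      subscribers.forEach(fn => fn())
    },
  }
}

function effect(fn) {
  activeEffect = fn
  fn() // first run registers the dependencies
  activeEffect = null
}

// Usage
const count = signal(0)
let doubled
effect(() => {
  doubled = count.value * 2
})
count.value = 5
console.log(doubled) // prints 10
```

The stack-size caveat in the text comes from exactly this shape: a careless effect that writes to the signal it reads would recurse through the setter until the stack blows.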

I spent like 10 mins refactoring it from its previous implementation, which used element polling, to work with the signal implementation, and it was good to go. This was pushed and is what powers the current Typer implementation. It doesn't end there though; at this point I wanted to see which libraries I could recreate this with, and with what level of friction.

PreactJS

I'm leaving react out of the picture, cause it's basically going to take the same amount of time.

First up, preactjs. The implementation got even smaller because I no longer had to monitor effects; I just had to make sure my props were correct and that the signal's value was set.

The best part about using (p)react for something like this is that handling resets becomes really easy, since you just reset the state and everything renders itself accordingly.

Doing this in vanilla JS requires you to clear quite a few DOM elements and regenerate them manually, which you can accidentally make recursive, and then it'd slow down the app once there are enough instances in memory.
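The reset point can be sketched framework-agnostically: when the view is purely a function of state, resetting is just replacing the state. This is an illustrative sketch, not typer's code; (p)react does the actual re-rendering and DOM diffing for you.

```javascript
// All state in one place; the view derives from it.
let state = { words: ['alpha', 'beta'], typed: '' }

// A render function that only reads state; no hidden DOM bookkeeping.
function render(s) {
  return `<div>${s.words.join(' ')} | typed: ${s.typed}</div>`
}

// Reset is trivial: replace the state, render again.
// No handlers to reattach, nothing to manually clear.
function reset() {
  state = { words: ['alpha', 'beta'], typed: '' }
  return render(state)
}

state.typed = 'alp'
console.log(render(state)) // prints "<div>alpha beta | typed: alp</div>"
console.log(reset()) // prints "<div>alpha beta | typed: </div>"
```

In hand-written DOM code, "reset" instead means finding and clearing the right nodes yourself, which is where the accidental recursion mentioned above creeps in.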

SolidJS

The experience with Solid js isn't much different, since I was already using preact with signals, so this was pretty much the same, except that I didn't wrap my head around the prop pass-down: since in SolidJS the function only renders once, I ended up making the tiny mistake of writing the computation in the definition phase instead of the render phase of the function.

function Component({ signalProp }) {
  const x = signalProp * 2
  return <>{x}</>
}

People who write Solid can see the mistake already: signalProp is never computed again, and x is rendered with the same value it was first rendered with.

But that's on me and not the library, so that's okay. We've spent enough time figuring out which hook effect executes and which doesn't, a dozen times before. It's all good.
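For reference, the fix is to keep the computation lazy so it re-reads the prop at render time. In actual Solid you'd avoid destructuring and read props.signalProp inside the JSX; the sketch below mimics that in plain JS (a getter stands in for the reactive prop, a thunk stands in for the JSX expression) so it runs anywhere without Solid's compiler.

```javascript
function Component(props) {
  // wrong: const x = props.signalProp * 2  — evaluated once, never updates
  // right: keep it a thunk, evaluated on every render
  const x = () => props.signalProp * 2
  // the returned thunk plays the role of the JSX render phase
  return () => `<>${x()}</>`
}

// Simulate a reactive prop with a getter over mutable state.
let current = 2
const view = Component({
  get signalProp() {
    return current
  },
})

console.log(view()) // prints "<>4</>"
current = 5
console.log(view()) // prints "<>10</>"
```

With the "wrong" line instead, both calls would print the first value, which is exactly the once-rendered behaviour described above.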

Svelte

One of the current favorite libraries of web devs and surprisingly even backend devs, which is rare.

The only way to start, or the recommended way, was to use ViteJS with the svelte plugin or just use SvelteKit, which is fine, but I'm not a fan of the +page filenames. I remember devs going, "Don't make too many index.js files, it becomes hard to find things!" and now I have a shitload of folders all with their own +page.js and +page.server.js files, and I honestly made changes in the wrong file twice while writing this simple thing.

Either way, the friction of actually writing the app was close to none, and the adaptors help with output, so I guess I got the client-only output I wished for.

Cycle.js

I've rarely used cyclejs for one main reason: the amount of thought you have to put into the streams when you are dealing with complicated cases is something you can avoid with the normal imperative coding styles of the other libraries.

Don't get me wrong, I don't mean streams are weak or harder to work with; you just have to switch your mental model to think in streams, and I'll give you a simple example.

I was building Typer with cyclejs, and here's a simple version of what typer does, or basically how typer works.

const words = getRandomWords(5)
const spanNodes = wordsToNodes(words)
renderSpanNodes(spanNodes)

const input = getInputElement()
input.on('keypress', evt => {
  if (evt.code === 'Escape') resetState()
  updateSpanNodes(evt.target.value)
})

Now, this is pseudocode, but that's mostly what the app does.

If you see, we maintain the state external to the render and event handlers, so the reset actually just resets the DOM and everything else stays as is. I don't have to attach handlers again or re-render the entire tree, and this is the cool thing about writing in plain DOM. The reason for libraries to exist is that this can get quite tedious if you're building an app out of it (still doable though, just hard).

Next up, how do you think I'd do this in cycle.js, or to be specific, with reactive streams?

// create a stream of input value events
const input$ = xs.of(inputEvents).startsWith('')
const escape$ = xs.from(input$).map(event => event.code === 'Escape')

const words$ = xs.from(escape$).fold((acc, i) => {
  if (i === true) return generateRandomWords()
  return acc
}, generateRandomWords())

const value$ = xs.from(input$).map(e => e.target.value)

xs.combine(value$, words$).map(([inputValue, words]) => {
  // view construction
  return div()
})

Confused? Yeah, let me explain.

When working with streams, you have to figure out which areas of data are going to change over time. In our case, the input value and the words will change over time.

  1. So I need 2 streams, one that's the words and one that streams the input's value.
  2. Next, I also need a stream that can inform if the escape key was pressed, we use this to restart the typer.
  3. So, overall I need 3 streams, one for input, one for words and one for the escape key presses.

We've basically created those 3 streams. Each one of them is dependent on another, because I can't reset words until escape is pressed, so the words$ stream is listening to the escape$ stream, which gives true or false based on what keycode is being pressed.

Similarly, escape$ cannot exist without the original input$ stream.

This can now be used to generate our views using inputValue and words, since those are the 2 deciding factors for this app.

It's not that hard, but it does require you to understand how fold works, because you can't just add a map over that stream: a map would reset words$ every time you pressed Esc, but also reset it to the older set of words if you pressed anything else after the Esc key.
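Here's the fold behaviour sketched with a tiny array-based stand-in for a stream (illustrative only, not xstream itself): the accumulator carries the previous words forward when the event isn't an Escape, which a plain map has no way to do.

```javascript
// A toy fold over a list of events, mimicking stream.fold(reducer, seed):
// it emits the accumulator at every step.
function fold(events, reducer, seed) {
  let acc = seed
  return events.map(e => (acc = reducer(acc, e)))
}

// Deterministic stand-in for generateRandomWords, just for the demo.
let n = 0
const generateRandomWords = () => `words-${++n}`

// escape$ as booleans over time: only the third event is an Esc press.
const escape$ = [false, false, true, false]

const words$ = fold(
  escape$,
  (acc, isEscape) => (isEscape ? generateRandomWords() : acc),
  generateRandomWords()
)

console.log(words$)
// prints [ 'words-1', 'words-1', 'words-2', 'words-2' ]
```

Note how the words only change on the Esc event and then stay stable; a map would have had to recompute (or wrongly reset) on every keypress, since it has no accumulator to hold the last set of words.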

Either way, I wrote the typer with cyclejs, and it's always been fun to write stuff in cyclejs, apart from the problem I mentioned above, which is the change in mental model.

It did take me longer than the signals version to write this; I think I spent an hour, because I messed up the computation for valid and invalid characters, but I guess it's okay to spend an hour to refactor the whole thing.

We also added a speed$ stream, which is also a combination of the input$ and the escape$ streams, and which you can consider a replacement for the computed property in signals. Except mine runs on every input event.

Overall, if you had to use cyclejs for a larger application, you are better off with the reducer-styled state instead of pure state streams. You can read about that in the @cycle/state docs.

Vanilla JS + DOM

And finally the last one, which is no library at all. This solution is probably the easiest one, with a huge amount of documentation all over the web. Best part, no tooling required!!

Jokes aside, it's fun to write in Vanilla JS, and to spice it up I rewrote the same thing in FP (Functional Programming) style without using any of my usual libraries (monet.js, ramda). It did end up being longer because of the IO Monad and destructuring the IO Monad every 2 lines, but I guess that's the whole point of FP.

You move the side effects as far away from the actual code as possible. The frictional part is that the effect needs to run for you to debug, so you end up with a lot of functions and effects running before you are even done with the implementation. This is easily solvable with a quick refactor after you've implemented everything, but I'd really recommend not using FP when working with the DOM; it makes things really hard to deal with unless you wrap everything in a Maybe or Either and avoid using IO altogether. It's not that you shouldn't, it's just that it takes too much energy to make sure it's all working, and you end up debugging twice as much. FP does give you the confidence you need, as long as you remember the type defs and return types of each function you write, which in JS is hard to do. Try out elm lang if you wish to do pure FP with the HTML DOM; you write in pure FP and elm takes care of talking to JS and the DOM for you.
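For anyone who hasn't met the IO monad, here's a minimal JS sketch of the pattern (illustrative; monet.js's IO differs in its details): effects are wrapped as thunks and nothing actually happens until you call run at the edge of the program.

```javascript
// A tiny IO: wraps an effectful thunk, composes lazily, runs on demand.
const IO = effect => ({
  map: f => IO(() => f(effect())),
  chain: f => IO(() => f(effect()).run()),
  run: () => effect(),
})

// Side effects stay at the edges, described but not executed.
const getText = () => IO(() => 'hello world')
const log = msg =>
  IO(() => {
    console.log(msg)
    return msg
  })

const program = getText()
  .map(s => s.toUpperCase())
  .chain(log)

// Nothing has happened yet; the effects only fire here.
const result = program.run() // prints "HELLO WORLD"
```

This is also where the debugging friction in the text comes from: until run is called, you can't observe anything, so you keep executing the whole chain just to inspect an intermediate value.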

After all this, I still didn't really do anything productive, but I do have an imaginary implementation in mind where

  • JSX is optional
  • I can write stuff as simple functions
  • DOM is abstracted for me (could be streams, could be generators, idk)
  • I don't need build tooling
  • If possible template directives.
  • network and async aren't an afterthought, or there's a ports / adaptors approach to easily offload network instead of being forced into a library-specific dynamic.

I think the closest to all of that is Alpine.js; though it doesn't promote itself as an SPA library, you can write one with it. It's easier to use it for server-sided rendering though.

I guess that's about it for today.

Adios!

]]>
Mon, 24 Mar 2023 00:00:00 +0000
https://reaper.is/writing/20230401-minimalism-in-workouts.html Minimalism in workouts https://reaper.is/writing/20230401-minimalism-in-workouts.html Minimalism in workouts

Not a topic I've talked about much on here but I do work out (sometimes) to make sure my body stays functional.

And that's all I do workouts for. I used to be a big fan of having abs, heavy interval training, maintaining a diet and all that, but I was also someone who liked to enjoy all the delicacies that the world had to provide.

Overall, I did end up giving up the whole train-always life for food, let it get out of hand, and had a weight scale that was stuck at 100kg for quite a while.

Fast-forward 2 years and the scale is now stuck at around 80-85kg; I don't go lower than that and, surprisingly, never go over 85kg.

It has a lot to do with my natural metabolism, but there's also the factor of added metabolism, which is basically how much movement you've added to your life; that in turn increases energy consumption, which has an impact on your overall weight loss.

So, what do I do to add that additional metabolism? It's a few things, and involves me pushing my minimalism principles (or just laziness principles) into it.

Minimalism and Calisthenics

I use calisthenics as my choice for the exercises.

Calisthenics, put simply, is using your body as resistance instead of weights (barbells, dumbbells, kettlebells, etc). There are different theories and beliefs out there on its effectiveness, so I'm not going to go deep into that, because I'm not that smart to begin with.

Either way, the choice to do calisthenics was because it was easy for me to pick up, as compared to working with dumbbells, which need you to focus on what you are doing and can cause serious injuries if you aren't careful. This is true for any workout method actually, but I just thought calisthenics was easier when I started, so it stuck with me. The method described below can be adapted to weight training as well.

Getting to the actual part which is working out.

The workout pattern laid out below is just a tinier version of the multi-exercise pattern used by athletes for endurance training.

The Method

We'll be using compound movements, so you are limited to exercises that work more than one area of your body. Exercises like these are common in calisthenics, but you have quite a few such exercises in barbell training as well. (google is your friend)

The method is as follows

  1. Pick one of Push, Pull, Core, and Legs for your workout day. We want to spread them throughout the week (Push on Monday, Pull on Wednesday, Legs on Friday, etc). You might want to include Core training every day, but if you lack the time you can do it after Leg day, without rest.
  2. Find an exercise for whatever you picked. Example: Push day could be Pushups, Dips, Bench presses, etc.
  3. The idea is to do 4 sets,
    • The 1st set we do the hardest, and then for the remaining 3 sets we do a more adaptable version of the exercise.
    • Each week we add more reps/weight to all the sets, till you can do about 20 reps of both the harder and the easier exercises.
  4. And that's it.

No, this won't help you get a super physique. Nope, it's not the best way to gain strength either.

It's a simplistic approach to having a really simple workout plan that keeps you moving; a maintenance workout. That's about it. That's also why calisthenics works well for this, since it can be done almost anywhere, and even the variations just change the lever/position of the body.

Here's what I'd do for someone who's just starting and can, let's say, do only 1 pushup.

Let's say Monday is your Push Day.

  1. Set 1: Push Ups, 1 rep.
  2. Set 2: Kneeling Push Ups, your max reps minus 2
  3. Repeat Set 2 for 2 more sets
  4. Try to add 1 rep every week, and when I say try, push yourself even if you can only do a quarter of the full movement with that weight/exercise. It still counts as an attempt.

If your body adapts quickly, you should be up to 4 pushups by the end of the month, or maybe just 1; it doesn't matter. You now have a tiny workout that hardly takes 10-15 mins a day.

That's an embarrassingly small goal!

Yep, it is, and in my life and experience, that's how you create habits. You achieve something that's simple and easy enough for you that you keep doing it. And when it's about handling health and working out, being able to stick to working out almost every other day is a lot more important than doing it for like 6 months and then quitting cause you went back to your old habits.

Anyway, it's not for people who wish to achieve impressive physiques or strength, it's for people who'd like to add a little more movement into their life.

Also, I'm like 177-178cm tall, so 82kg is considered overweight by BMI, but I just have to lose 3 kgs to get to BMI's standard of normal weight, which isn't hard if I stop eating the stuff I enjoy eating. So, I give up 3kg for the tasty stuff I like to eat. You could say I'm fine with it.

I guess that's about it. Though, before you pick this up: if you are someone who's medically obese, your first step should still be consulting a doctor and getting your diet in check instead of working out.

]]>
Mon, 01 Apr 2023 00:00:00 +0000
https://reaper.is/writing/20230414-turn-the-bass-up.html Turn the Bass up https://reaper.is/writing/20230414-turn-the-bass-up.html Turn the Bass up

This post has nothing to do with music, I'm sorry.

We aren't solving a big problem here, but when have I not given the context of what was bothering me and what helped mitigate or reduce that frustration?

CI isn't the most pleasant music

The scripts I write are primarily in bash so that they can be tested locally; most of them are written in a portable manner or comply with POSIX, so that they can run on both Unix and Linux in most cases.

This works as a solution in most cases, but testing them for a CI still requires you to run them on the CI, which, well, consists of 100s of commits and at least 3 cups of coffee till you get it running.

The count of commits and coffee doesn't matter if it was a 1-time operation but we all know it never is.

Adding Docker for a little more musical harmony

The other solution I experimented with was writing docker images that run the code's scripts; these images could be run locally or on remote CIs with very little setup, and that worked great.

This is probably the simplest solution, but docker wasn't built for running scripts directly, so you end up having to write a script that builds the image and runs it for you, with the log then streamed to your terminal.

You don't have to write the log streaming part but it makes it easier to work with.

This story starts somewhere in late 2021, when I first found out about earthly.dev and tried it out for a few web apps.

In the past 2 years (presently 2023), I've seen quite a bit of growth in this space of local CI/CD solutions. There's act for running Github Actions locally, there are virtual runners for Gitlab which are still complicated to set up but usable, and there are also the BuildKit-based solutions.

BuildKit, the reason behind the melodies

Docker builds are great and all, but as mentioned, setting them up yourself per project might not be ideal, considering the maintenance for each of these might become cumbersome, especially when you are working in a small team. It's easier to delegate this to a tool that just does this, and Docker releasing BuildKit out in the open helped quite a bit.

BuildKit is responsible for detecting the caching and build stages for the Dockerfile, and it also exposes an SDK for various languages that can be used to trigger, or even wrap around, such instructions programmatically.

One of the consumers of this is Docker itself, but since it's an SDK, we now have other amazing devs who've made use of it.

Dagger.io and the Golden chords

I was introduced to Dagger.io by Alex, who was aligned with their idea of how CI processes should be both local and remote. It was still under development and only available for Go when I was introduced to it.

It's now available for quite a few of the mainstream languages if you wish to try it, and it's one of the solutions for the problems mentioned above.

Your CI is now a piece of code that can produce hermetic builds (buzzword for consistent and reproducible builds).

The process is simple: you tell Dagger programmatically what the environment is, similar to Docker, and then what scripts or lines are to be executed.

Dagger would take care of

  • Pulling and setting up images
  • Copying the required context
  • Preparing the environment
  • Executing and Streaming the logs for you

All of this happens in a BuildKit container instance instead of creating a new image and running it every single time. This reduces the overall time compared to the original approach I mentioned, which needed a build and a run on each invocation and would leave behind quite a few dangling images.

Example

import { connect } from '@dagger.io/dagger'

// Connect dagger's buildkit instance
connect(
  async client => {
    const containerDef = client
      // name the pipeline
      .pipeline('test')
      // create a container
      .container()
      // from the following image
      .from('node:16-alpine')
      // then execute the following command with the next set of args
      .withExec(['npm', '-v'])

    // read the command's stdout (this also triggers the execution)
    const result = await containerDef.stdout()
    console.log(result)
  },
  { LogOutput: process.stdout }
)

Now, to add the Bass

Finally, the other solution, and the one I've just started using, is called Bass. It's not a CI solution but more a language that uses BuildKit as the target runtime. Hopefully the readers of this blog understand my liking for new languages, though I'd never actually worked with LISP/Scheme based languages before.

So, it was a little tricky for me to pick up the semantics of Bass, but somehow I was able to learn enough of it to write a few tiny scripts.

#!/usr/bin/env bass

;define that this run should memoize the thunks
(def memos *dir*/bass.lock)

;define that the function receives an argument `src`
(defn test [src]
    ; use the `node:16` image
    (from (linux/node :16)
      ; cd into the src argument
      (cd src
        ; run the sequence of commands
        ($ npm i -g pnpm)
        ($ pnpm i)
        ($ pnpm test))))

; Main is the entry function
; so here we define the args we might get
; from stdin

(defn main _
  (for [{:src src} *stdin*]
    ; we then go through the args of stdin, take the value for `--src` and pass it
    ; to the function test
    (run (test src))))

Strings that broke

The counterproductive part here is that I still need to set up a docker/BuildKit environment on the CI machine (Circle, GitHub Actions, GitLab Runners, etc), but most of them provide a way to connect to a docker setup. After that, all you need to do is install the Dagger SDK (when working with Dagger) or the Bass binary (to run the Bass script), which is something your language's package manager or a simple curl script can handle, and is a step that rarely breaks.

Overall, I believe the whole local-first CI space will grow more and more in the next few years and save me from having to test infinite theories of why my CI scripts were breaking.

Bonus solution

Another one here is Nix. It's a language and a package repository that can cache and recreate the same environment everywhere for you. Though, based on experience and reviews from a lot of people who have worked with Nix, the language can get a little confusing to learn compared to something like Bass.

I'll add an example from someone's gist here because my nix config files are rather unimpressive.

This will set up react native and the android SDK for you as soon as you activate your shell in the project folder.

That's basically about it for now. Adios!

]]>
Mon, 14 Apr 2023 00:00:00 +0000
https://reaper.is/writing/20230506-linux-and-install-disks.html Linux and Installation Disks https://reaper.is/writing/20230506-linux-and-install-disks.html Linux and Installation Disks

Due to a recent unknown mishap, my macbook's LCD decided to go blank. So, for portability, I pulled out my linux "play"-station, which I use for various experiments with linux. The best part about it is that it gets formatted every 2 days, so I've never really used USB disks to install; it's mostly done in the following manner.

The Initial Requirement

The expectation is to at least have one system, or at least grub, on the machine. If by any chance you have nothing and are on Windows, I don't really have a solution for you right now.

  • Download a linux image.
  • Mount it and note the paths to vmlinuz and initrd, we'll need these later.

Setting up the Installation Partition

At this point, you can do 1 of 2 things.

  1. Copy the kernel and initrd images out, add them to your grub entry, and then boot from it.
  2. dd the image onto an empty partition and then boot into it using grub's CLI.

I normally prefer the 2nd one since it's faster, and I make sure there's enough space to create a partition that can hold a linux install image (~4GB or more).

Now, create a partition using your favorite tool and dd the image onto it:

$ dd if=/image.iso of=/dev/sdXN bs=4M status=progress

Replace sdXN: X with the disk letter and N with the partition number.

Now you have a drive that can act as an installation drive.
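A small optional sanity check before rebooting, just to confirm the ISO actually landed where you intended; /dev/sdXN is the same placeholder as above.

```shell
#!/usr/bin/env sh
# Confirm the partition now carries an ISO9660 signature.
# /dev/sdXN is a placeholder; the check is skipped if no such
# block device exists.
TARGET="/dev/sdXN"
if [ -b "$TARGET" ]; then
  # `file -s` reads the filesystem signature of a block device;
  # expect something like "ISO 9660 CD-ROM filesystem data".
  sudo file -s "$TARGET"
fi
```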

The Boot process.

At this point, there are 2 things we need to do.

  1. Load the installation into memory
  • Load the linux kernel into the RAM
  • Load the ram disk
  2. Install it.

Let's get started.

  1. Let the grub menu appear and then press c (generally the shortcut to open the grub CLI).
  2. ls on the CLI to see the available disks
  3. ls (hd0,gpt1) replace hd0,gpt1 with whatever was listed by grub.
  4. Continue going through the list till you see the Label of the image that was duplicated onto the partition.
  5. When you find it, take note of the disk and now we boot the linux kernel out of it.

Set the root partition

grub> set root=(hd0,gpt1)
grub> linux /casper/vmlinuz toram quiet

Replace /casper/vmlinuz with the path to vmlinuz for the distro you are using; /casper/vmlinuz is the usual path for ubuntu based images.

Next up, we add in initrd

grub> initrd /casper/initrd

The same applies here: replace the path /casper/ with the one for your distro that you noted down at the start.

At this point, all that's left is to boot into this drive.

grub> boot

If it all works well, and the kernel supports it, you should now have a linux system running off your RAM, and you can start the installation as a normal one, or keep it this way as a recovery partition.

If you wish for it to be a recovery partition, it'll be easier to add this in as an entry to your grub config.

Hopefully this helps someone out who has no CD/DVD or USB thumbdrive handy and needs to do a fresh install or just jump to a new distro.

]]>
Mon, 06 May 2023 00:00:00 +0000
https://reaper.is/writing/20230516-ignoring-backend-productivity.html Ignoring Backend Productivity in JS https://reaper.is/writing/20230516-ignoring-backend-productivity.html Ignoring Backend Productivity in JS

This isn't a rant but more on the lines of the thing I'm struggling with.

I've had quite a few rant posts about backend development and issues with consistency with existing tech in Javascript.

Most of them do not involve "Full Stack Frameworks" but rather makeshift solutions that I've built and libraries that were built around them.

Here's the list of posts, if you wish to go through the rants.

A common sequence of events is that companies and individuals want to build their own frameworks to make apps, and this has led to a bunch of amazing products.

I'm also guilty of this, since I've had at least 20 iterations of how I could handle dependency injection in backend apps, and I've written my own router implementations and resource wrapper implementations.

All of this is great and good, but I often forget that not everything needs to be published out in the open. This habit of mine has led to a problem where I can't be productive anymore when I'm building any app's backend.

I start with writing the database schema, I get to the part where I need to write the functionality and then get stuck on building wrappers and abstractions for things that don't need abstractions.

This definitely helps since I build tools for the open source world, but it removes any chance of building a web app / SaaS for myself. I start writing code for the SaaS and the next moment I have another abstraction over knex to

  • create a simpler ActiveRecord pattern in JS
  • create a mongoose like API on top of knex
  • create another Objection.js but with better models (in my opinion)
  • idk, I've made a few more random ones as well

That's not the only place I fuck up though. I've built engine and module loading systems. This did lead to a library called typeable, which generates types for a dynamic object for serializable and native types. That's nice for any use cases others might have, but the app engine was built for a modification of Taco's backend core and I never completed that PR.

The users of the libraries I build are probably in the single digits, but if they do help them, that's still nice. They clearly aren't helping me, since there's a new idea and way to solve that issue in my head every other day.

How do we solve it?

I'm not sure; I'm looking for answers myself. One such answer would be to use something like Adonis/Sails, and while they'd help, there are still things that bother me in both and I'll end up creating helper libraries again.

]]>
Mon, 17 Jun 2023 00:00:00 +0000
https://reaper.is/writing/20230710-the-change.html The Change https://reaper.is/writing/20230710-the-change.html The Change

I've always been the kind of person who helps others with things I understood, and it's never profitable when you are of that mentality.

I was fine with that since having a day job took care of the finances for my life, and I didn't have to worry about them when writing open source tools and reference codebases. Most of what I've built is either a smaller reference to larger problems and things I wished I understood; the rest are tools that I wish existed or were still maintained.

The past ~5 years of my work life have been comfortable, and that has helped me help other full time open source developers. I never thought I'd reach a stage in life where I could also become a full time open source dev. I did wish I could do it, but I never thought I'd actually be able to.

Primarily because I was never able to commit to an ambitious project completely, and this was because of the day job's workload getting on my head or not getting enough time over the weekends. There's also my own curiosity in different domains that can be blamed for this. I spend way too many hours trying to figure out how something is being done, and forget that I have projects that need to be improved and maintained. This isn't terribly bad since there aren't many users, so it's okay.

How would switching jobs help mitigate this?

I'm not very sure but having more time to work on open source would definitely allow me to spend more time looking for things I can improve on existing projects. Though there's a good chance it might not work as I think it would.

Either way, the reason for the switch is to be able to build something of my own. I might become a proper indie hacker, though I need to research India's legalities around this first, or I might do consulting work to keep the finances flowing for a while. The initial plan was to work for a company that does open source work commercially and use that as an opportunity to both grow and learn about building such companies.

Though, most of these companies (Vercel, Netlify, Ghost, Upstash) prioritise their own communities and contributors over external applications, which is amazing since it's an easy entry for people who have already contributed to their repos. But me spending time building my own tools and libs, and handling my day job, has led to the problem of not being able to contribute to anything besides Poimandres and a few other OSS projects.

So, I kinda don't stand a chance there. This takes us back to finding a solution where I own what I do, and I'm just deciding between indie hacker and consultant. Might end up being both, but still, the change I'm looking for might not be that easy. Then again, nothing is.

If you are someone from India who has been doing indie hacking, I'd like to get in touch, so do send a mail to [email protected]. I've got a few questions that you might be able to help with.

Well, that's been all for the day; just writing my thoughts down to find a possible solution.

Adios!

]]>
Mon, 10 Jul 2023 00:00:00 +0000
https://reaper.is/writing/20230716-my-craft.html My Craft https://reaper.is/writing/20230716-my-craft.html My Craft

Every developer I envy has a particular craft: some like designing really cool interactions, some are great at building layers of functionality that ease work, and some build entire systems like it's nothing.

You can also bring in serial indie hackers, whose craft is being able to ship ideas like it's nothing.

I've not talked about my history but I've tried to be one of each over the course of about 8 years.

I've been working professionally for about 5 years now, but before that I was really into micro interactions. That's what I really wanted to do, and I got into it by replicating on the web a lot of the effects and animations that iOS had at the time.

Next up, I got bored once I could do all that and moved onto backend work for a new learning experience. I got decent at that and built quite a few things that are somewhere in my archived repos at this point.

  • An invoice management system
  • An idea (mvp -> release) management app
  • A calisthenics workout tracker
  • A minimal job listing system
  • Nodemon as a service

and there's quite a few more. Point being, I wanted to be the guy who had made an open source alternative to everything that exists. Got bored of that, moved onto learning programming languages. Started with Python, got to Java, then to Kotlin, then back to C, then to Nim and V, and in parallel built some stuff with Go.

I thought that learning languages and being able to build things in multiple languages was cool and that'd be my craft. Got bored of that soon. Jason talked to me about building preact native and whether the idea was achievable. He was building something similar for internal tooling at Shopify and wished to see at what level the abstraction could exist, so I ended up writing the base abstraction of preact native. I got really excited by low level work and dealing with communication between systems. It felt like this is what I wished to do, system level programming, but guess what: I got bored of that too after a bit, and react native was moving so fast that it was hard to keep up with their implementation changes while building preact native.

Because of both pragmatic reasons and boredom, I ended up sunsetting that project (still open source), and this led to me getting interested in a frontend technique called islands. We got to building an abstraction that would allow any bundler and setup to create islands, instead of it being tied to just Deno's Fresh or AstroJS, and this was done specifically for Preact.

I'm still working on improving it over time and hopefully I don't sunset it before I'm done with at least a feature complete version.

So, You've done a lot of it, showing off are we?

I wish, but trust me, all I'm trying to do here is write down the things I've done while looking for "My Craft", which apparently is exactly what I've been doing. I enjoy experimentation; I enjoy researching tech that amazing developers are building, replicating it by reverse engineering, and seeing if it could be done in a simpler manner. A lot of times these just vanish into my repositories (primarily since I create a lot of repositories).

Some might consider me an unreliable source for any libraries, and they might be right in a way, because I actually have abandoned a lot of cool stuff. I could defend it by saying that the work that did have users is still being maintained; examples of this would be alvu, commitlog, themer, and a few other libs that have at least one user.

Most of what's been abandoned are libs that didn't get any attention or provided any value and that's fine. When you build so much not everything is a valid idea. Sometimes you build it just to get it out of your system. I've been an advocate of that idea for a long time now.

Make it, just to get it out of your head.

You really think the world needed another typing test? There's enough out there. Or another static site generator?

Nope. I wanted to build my own typing effect, built that, and then used it to build a typing app instead. Similarly, I randomly thought it would be a nice idea if I could extend a markdown engine based on each project's requirements, and built alvu out of that. I've built like 6 markdown-to-static-html generators, some as scripts and some as proper CLI tools.

They don't really provide value to anyone other than me and so it's my craft. Having a craft just means that you build/make/draw/paint/do things you like. It doesn't have to provide value or be perfect either.

Don't make your craft your career though. What do we even call it? Tech evaluator? Idk, anyone have an opening for something like this xD

I'm kidding, also that's all the story telling for now,
Adios.

]]>
Mon, 16 Jul 2023 00:00:00 +0000
https://reaper.is/writing/20230807-an-imposter.html An Imposter https://reaper.is/writing/20230807-an-imposter.html An Imposter

Most skills have an easier progression to start with, and as you get better the harder progressions get easier to pick up. It's no different with programming either.

Though, the feeling of "I've not accomplished enough" is somehow greater and a lot more common in programmers.

I can't speak for everyone, but it does get to me. When it does, I stop coding for a while; I can't do that for the day job, but I just stop going near my workstation at times. I'm not providing a solution here, but I think you should still read through.

On most days I just end up re-building older projects to distract myself from this feeling, and it has worked, since the mind moves to something else after a bit.

The recent feeling of being an imposter, or more like not having built anything significant, is mostly due to my current mentality shift where I wish to try being an indie hacker again. The problem is that the separation of "I need to do this for money" and "this is being done for fun" is very hard for me to make.

You see, most of what I've built over the years was done for fun. None of those projects were ever built to show off, but just to be there as references of things that caught my attention. You think the world needed another typing test? Nah, I built one because I was curious how MonkeyType achieved its typing animation, and I sat down to figure it out without looking at their code; to put it simply, reverse engineering it.

Which is one of the better things I can say about myself: I'm good at reverse engineering. It's a skill that does help with whatever I'm doing, but I'm also limited in terms of original ideas. Most of my ideas revolve around improving existing software, and since a lot of it is closed source, I end up having to build things from the ground up.

The idea of moving away from a day job and jumping into the world of indie hacking has made me rethink how I code. It's now about competing for users and making $X ARR/MRR, and that's just not something I'm able to think properly about. It might just take longer, but I will need to think hard about it.

I've been applying to other companies that are in the open source world and hire remotely worldwide. I haven't gotten a response from most of them, and when I have gotten a first round, there's been no response to my follow-ups, which suggests the company, or the humans working there, don't really care about it. So, that's that.

The mentality I should be aiming at is a separation of what I do: I should very clearly define the boundaries of what's fun and what's professional, and work on each project while keeping that in mind. If I'm unable to do that, it's going to be really hard for me to ever reach the goal of being financially free and still be able to work on programming projects, treating them as art.

I could be selling templates, if there are people out there who like my typography-centered UX, which I think is very niche. If anyone out there thinks they'd be willing to pay for it, do let me know.

Remembering why I started coding has actually helped get me back to normal quite a few times. I code because I find it cool; there are things I don't understand, so I sit down to think about how they could've been made. This is not limited to code but applies to anything I find cool, it could be suspension bridges for all I care. The fun of being able to decode all of this mentally and build it yourself is something that really stimulates my brain.

The attempt to simply answer the question, "Can I also make this?" is what has led to 80% of what I've built.

Did you need another color conversion library in node? Nah, I built one just to dabble in the math behind it. Another static site generator? Lol, nope; same reason, I had fun making one. An on-the-fly Go CLI builder? That one was probably useful, but I found more use for it than others did, so, still worth it.

Going through these things again made me realise why I do it and what I'm doing wrong. Finding a balance between work and fun is something that I'll have to master to be able to be at peace.

That's about it, there's not much more to say here since there might not be much value that I can provide with posts like these.

Adios!

]]>
Mon, 07 Aug 2023 00:00:00 +0000
https://reaper.is/writing/20230811-docker.html Docker https://reaper.is/writing/20230811-docker.html Docker

Docker has been a part of my life for about 4.5 years now. It started as an experiment to run a simple postgres instance locally, not for its intended purpose of creating reusable app containers.

It has been a great experience when working with it on Linux based systems, since the virtualization doesn't need a new qemu instance and is mostly based on the isolation and permission primitives of the Linux kernel (namespaces, cgroups, and so on).

It's a little more work on Mac based systems, where direct permission based container creation is hard, so the solution involves running a qemu instance on the system and then running the docker engine layer on top of it.

The actual concept is a lot more complicated than what I've explained but it's what's got that container engine running and the docker-desktop app is responsible for setting this up on installation.

What do I do with this information? It's not even the whole thing...

That's the context for why I've moved to alternatives to the official docker-desktop over the course of ~6 months.

Docker desktop's RAM usage started affecting my other work, and if you know me, then you know that I have a good set of editors, servers, and apps open at all times. On a system with just 8GB of RAM, everything is basically fighting for memory.

And ?

And, I'd like to introduce people to 2 alternatives that I've been using on 2 macbooks. They take slightly different approaches, but both end up layering things in one of the following ways

  • Setup Mac's native Virtualisation System + Virtualize a linux Image + add docker to it
  • Tiny QEMU running as a service + docker engine

Here are the alternatives I've been using

colima

It works on top of Lima, which is basically a configuration driven utility for standing up linux VMs. It's similar to WSL2 in terms of goals and is built with the assumption that every VM shares a few common things: ports, network, volumes, etc.

Colima takes this and adds in proper docker aliases for you to be able to run docker like you normally would, but against the colima VM instead of the docker-desktop one.

It's faster than docker desktop and lower in memory consumption, but the initial start of colima takes about as long as docker desktop. That's when colima first downloads and creates a new linux base image, which is understandable.
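For context, the day-to-day colima flow looks roughly like this; the resource values are arbitrary examples (colima's `start` command does accept `--cpu` and `--memory` flags):

```shell
#!/usr/bin/env sh
# Example colima session; guarded so it's a no-op on machines where
# colima isn't installed. CPU/memory values are arbitrary.
CPU=2
MEM=4
if command -v colima >/dev/null 2>&1; then
  # boot the Lima VM with a docker runtime
  colima start --cpu "$CPU" --memory "$MEM"
  # the docker CLI now talks to colima's daemon
  docker ps
  colima stop
fi
```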

orbstack

The other macbook has been running orbstack for about the same amount of time now and I think I prefer orbstack over colima for the following reasons.

  • GUI

That's it.

Yes, orbstack has a lot more features, is faster to start, smaller to install, and is being actively developed by a really capable developer. But at the end of the day, the convenience the GUI provides, with the menu bar actions and the ability to start, stop, and create machines really quickly, is what made it a joy to use.

I type at a decent speed (~100WPM), so doing the same in colima takes no time either, but the menubar UI is just faster when I need to quickly look at running instances and their exposed ports. This is a single command in docker, but the information is an ascii table and it's not always pleasantly displayed on a tiny screen.
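For reference, the single docker command being compared against the menubar UI is something like this; the `--format` template shown is just one of many options:

```shell
#!/usr/bin/env sh
# Print running containers with just names and ports, which is the
# information the orbstack menubar surfaces. Guarded for machines
# without docker.
FMT='table {{.Names}}\t{{.Ports}}'
if command -v docker >/dev/null 2>&1; then
  docker ps --format "$FMT"
fi
```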

Final Thoughts

For the general use case of standing up services and running virtual linux machines to test out scripts and other cross platform stuff I write, both have been easy, so there's no winner here; based on what I'm doing, I might use either.

If I'm setting up a remote mac for CI, it's going to be easier to set up and control colima, since a GUI is not something I can use there. But for my personal computing and work, it's an easy choice to just go with OrbStack.

That's about it for this post, Adios!

]]>
Mon, 11 Aug 2023 00:00:00 +0000
https://reaper.is/writing/20230901-typing-routine.html Typing routine https://reaper.is/writing/20230901-typing-routine.html Typing routine

I'm no typing speed expert, nor am I the fastest typist in the world. My typing speed varies from 110-120WPM depending on the amount of coffee consumed.

Anyway, I've set up a daily routine that helps me warm up before I start working, and it has also helped increase my speed over time, to the point where 110WPM is a very easy goal for me to reach.

I don't have any scientific tips but these are basically things I did to help with motor/mechanical skills in gaming and they seemed to help with typing as well.

Just give me the routine...

  • Slow down
  • Slightly speed up
  • Speed up more
  • Slow back down

Slow down

The first instance is for you to type very comfortably without trying to race the timer, just be as slow as you can. You don't have to force yourself to type one letter at a time but just let your fingers glide onto keys and type as you feel comfortable.

Slight speed up

Now, we intentionally increase our speed a bit while still trying to be comfortable. What I do at this stage is type a word in one stretch, wait a bit, and then type the next one, and keep going at that till I'm done.

Speed up all the way

At this point, type like your life depends on it, as quickly as you think you can. I shift focus to the next letter while typing the currently focused one and keep going. I normally do this twice instead of once, just because it's fun.

Slow back down

Let the brain and fingers rest at this point and go back to simple glide based typing.

Tools

I use Typer for most of the steps since it doesn't time your speed by default, though it does show an overall speed just so you know where you are at. I do monkeytype occasionally to check my current typing speed; it's definitely the better tool in terms of options and settings.

]]>
Mon, 01 Sep 2023 00:00:00 +0000
https://reaper.is/writing/20230903-me-and-numbers.html Me and Numbers https://reaper.is/writing/20230903-me-and-numbers.html Me and Numbers

I announced rawjs.xyz this past week, and due to the nature of where it's heading as a project, I had to add analytics to it.

For people worried about privacy, I'm using plausible so, no, I'm not tracking you.

Why do we need analytics though? Did no one figure out a way to know if users are interested in your app or site without secretly recording whether the site is being visited? Wait, it's not like me to blame people, so let's think of solutions we can use.

I'm not a fan of numbers: likes, follows, stars, visitors; I try to run away from them. I did end up trying to build a simple hit counter for goblin, because that's one of the most obvious products where a hit counter doesn't matter: visiting the website is a one-off thing, and there's no count or tracker if you are actually using the curl scripts that goblin provides.

These numbers change mentality. You end up thinking about increasing these numbers instead of actually building or creating stuff that you originally liked doing. The number starts dominating your decisions after a while.

We already make most life decisions based on the amount of money we're getting as income, and that number already controls the decisions we make. Do I really need another number deciding what I should and shouldn't do?


It's not like me to blame it on others even when I'm ranting, so let's see: what solutions do I have that could help me gauge user interest without doing the analytics thing? It would be helpful if you could reply to this post via email or hit me up on Twitter with your feedback, but overall here are a few things I think I can do

  • Open up a newsletter to understand how many people are interested but, statistically mails like these start getting ignored after the initial interest
  • Add in a feedback panel on the site that people can use to tell me if they like or dislike the content or if there's something new they'd like to see on it.
  • Forget the numbers and just do what I like and see if it works out - I like this, but if I wish to turn rawjs.xyz into a serious thing, this is possibly the worst way to do it.

Point 3 basically depends on word of mouth, and that's already an expectation we put on rawjs: if it does help you learn, then you should help us promote it, since the whole point is to not keep knowledge bound to just one person.

]]>
Mon, 03 Sep 2023 00:00:00 +0000
https://reaper.is/writing/20230916-pm2-traefik.html Moving from PM2 and Nginx to Traefik https://reaper.is/writing/20230916-pm2-traefik.html Moving from PM2 and Nginx to Traefik

I use pm2 with Caddy, or nginx for the more complicated apps, to manage my personal projects, simply because I can pretty quickly fix and update things if anything breaks. That's limited to my personal projects; I can't do this at my day job, since those apps are mid to large sized and it's not a good idea to skip any kind of fault tolerance there.

Either way, the personal projects have been doing fine and I don't really need to change to traefik, but I've been using traefik for a long time at this point and I thought I could make the deployments a bit easier.

You can find the final resulting repository of this on barelyhuman/easy-deploy-template

Before the migration

Even though most of my apps are already container based and I use docker for almost everything, being able to deploy by simply pointing DOCKER_HOST at the remote machine and typing docker compose up --build -d is a really nice flow to have. The images are built locally and transferred to the remote host.

The secrets are also transferred securely, so you don't have to worry about that either, but for people doing this for the first time, here are the things you need to verify.

  • The app you are building is stateless. Basically, it shouldn't depend on the local filesystem; it should be configurable from the environment and, finally, it should be self sufficient. If not, you are going to have a hard time creating a container out of it.

  • We need to make sure the server that we are deploying all of this on, has docker and docker compose setup.

  • Finally, remove apache and nginx from the server so they don't conflict with traefik over the HTTP ports

The Plan

I still wish to keep it super simple to deploy locally, so I'm going to write bash scripts and a Makefile to make it easy to run the deploy and rollback commands.

  1. compose.yml for setting up traefik
  2. compose.yml for the app and its dependencies
  3. Makefile scripts for running deploys and local builds

Execution

The folder structure of this looks like this

|--| ~
|--| traefik
|--|--| traefik.yml
|--|--| compose.yml
|--| app
|--|--| compose.yml
|--|--| main.go

Traefik

We simply set up traefik as a closed box waiting for docker services to ask it to deploy stuff.

Let's start with traefik.yml, the config file that we'll be passing to its docker image.

# traefik.yml
providers:
  docker:
    watch: true
    exposedByDefault: false
    network: 'proxy'
    endpoint: 'unix:///var/run/docker.sock'

api:
  insecure: true

To explain it briefly, we've asked traefik to use the docker provider and to only look for services on the network named proxy. exposedByDefault: false stops traefik from picking up every running docker service; only containers that opt in via labels get routed. You can run this locally if it's too scary to run on a remote machine.

Let's get to the compose.yml for this.

version: '3.8'

services:
  traefik:
    image: traefik:v2.5
    ports:
      - 80:80
      - 8080:8080
    volumes:
      - ./traefik.yml:/etc/traefik/traefik.yml
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    restart: unless-stopped

networks:
  proxy:
    external: true
    driver: bridge
    name: proxy

Going to keep that simple as well, we've done the following things:

  • Made sure the service is using the host's 80 and 8080 ports
  • Tied it to the network proxy
  • and have instructed docker to keep restarting this image unless explicitly stopped by us.

If you are doing this locally, you should now be able to go to http://localhost:8080 and see a dashboard from traefik that shows the currently running services, routers, etc.

App

For the app, I'm going to use a tiny Go program that has just 2 routes and a simple database migration.

package main

import (
	"fmt"
	"net/http"

	"github.com/barelyhuman/go/env"
	"github.com/joho/godotenv"
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

type Product struct {
	gorm.Model
	Code  string
	Price uint
}

func main() {
	godotenv.Load()
	pgDSN := env.Get("DATABASE_URL", "")
	db, err := gorm.Open(postgres.Open(pgDSN), &gorm.Config{})
	if err != nil {
		panic("failed to connect database")
	}

	db.AutoMigrate(&Product{})

	http.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprint(w, "hello")
	}))

	http.Handle("/healthz", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))

	http.ListenAndServe(":3000", nil)
}

You don't have to understand the app, just know that it runs on port :3000 and responds to the /healthz and / routes. You could use a simple nodejs express/fastify app if you're trying this out right now.

We need to define how the app is going to be built, so we'll write a custom Dockerfile for this.

FROM golang:1.20

WORKDIR /app

COPY . .

RUN go build -o app .

CMD ["./app"]

Pretty simple stuff, we copy the source, build it and run the app at the end.

At this point, you can test if your app works or not locally and now we can just add in the compose.yml file for the app as well.

version: '3.8'

services:
  app:
    # tell docker compose to build this `Dockerfile` that's in this folder
    build:
      context: .

    # set these labels on the container
    # these are what traefik uses to identify configuration
    labels:
      - traefik.enable=true

      # enable the below label when working locally
      #   - traefik.http.routers.app.rule=Host(`127.0.0.1`)
      - traefik.http.routers.app.rule=Host(`goblin.run`)
      # provide information to traefik so it knows what to use to check if the instance is up or not
      # and if up, on what port
      - traefik.http.services.app.loadbalancer.healthcheck.path=/healthz
      - traefik.http.services.app.loadbalancer.server.port=3000

      # if the instance ever goes down, retry up to 5 attempts
      # before failing, starting with a 200ms interval between attempts
      - 'traefik.http.middlewares.test-retry.retry.attempts=5'
      - 'traefik.http.middlewares.test-retry.retry.initialinterval=200ms'
      # the middleware only takes effect once it's attached to the router
      - 'traefik.http.routers.app.middlewares=test-retry'

    # Tie the app to the following networks, in this case the internal network is
    # going to be used for any other services that you might not want to expose to the system, a database maybe?
    # on the other hand we need to provide `proxy` so that traefik and the app are on the network, this is what
    # we defined in the traefik configuration
    networks:
      - proxy
      - internal

    env_file:
      - .env

    # The deploy settings are used to define how many internal instances do we wish
    # for docker compose to create for this one image, here it says 3 but you can do
    # just fine with 2.
    deploy:
      mode: replicated
      replicas: 3

      # we also define the update_config, to tell docker how to handle rollback and updates
      # here we specify that it needs to update one container at a time instead of parallely updating them all
      update_config:
        parallelism: 1
        order: start-first
        failure_action: rollback
        delay: 5s

    restart: always

# Here we just define the networks and
# whether they are supposed to be external networks (exposed to the system)
# or docker internal network (limited to the docker container instances that are in the network)
networks:
  internal:
    external: false
  proxy:
    external: true
    driver: bridge
    name: proxy

And that's it. Now how you decide to move this to the remote server is up to you. I point docker at the host over SSH (DOCKER_HOST=ssh://user@host docker compose up --build -d) and it works just fine for me in most cases. Though if you wish to do it in a more seamless manner, you might be interested in setting it up with watchtower and a docker image registry.

Post migration

Got rid of Caddy and let docker and traefik handle the network requests for me. I don't have to expose any other ports from my VPS as it's all handled by traefik inside docker containers.

You might want to move up to the 4GB or 8GB RAM instances for this one if you're working with images that you have no control over. If you can find alternatives that use alpine linux as the base image, you might save some megabytes there.

That's it for today, Adios!

]]>
Mon, 16 Sep 2023 00:00:00 +0000
https://reaper.is/writing/20230925-hate-and-responsibility.html Hate and Responsibility https://reaper.is/writing/20230925-hate-and-responsibility.html Hate and Responsibility

Disclaimer: None of this is with regards to anything that has happened to me, I've just been a witness to it.

Sometimes I wish I was famous so the shit I say would get some attention,

I'll rant about it anyway. The hate I've been seeing towards devs online has gotten worse day by day. People acting like OSS devs owe them something. No, we don't.

The entire premise of OSS is, "I've built something, here, you can use it too", not "Here lord, have this piece of code I've written that now makes you my master. I'll now do as you say"

I considered programmers to be smart, no idea where the entitlement comes from though. Luckily the people who are this negative are a minority, but they make up for it with their weird way of communicating.

There are cases where the OSS author might take things the wrong way, and that's fine, human communication isn't the easiest thing anyway. I'm not ranting about those cases.

As for the nicer people, you can't cancel out the negativity, since a hundred "Nice job!" comments don't block out the one "This is shit!" comment from your head. Thanks for trying though. A better way would be to contribute to projects you work with. Trust me, it's not that hard. I can assure you there are smaller projects that would love your contribution.

I understand that not everyone has the time or even interest to sit and write more code, and that's fine. You can always support in other ways, promoting the library, a donation to the author if they accept it, star the repo, etc.

The attention helps developers to understand that what they've built is actually needed by the community and not something random. This doesn't mean you go and star every repo you see someone promote, we still need quality control...

A small gesture gives a tiny boost of motivation to the author. This is good for you in the long run, since that motivation is what keeps them maintaining the project.

Don't go and star every repo I've made just because of this post, or randomly donate. If it's been useful, then yes; if not, then don't!

I still need solutions for dealing with people who have the "World owes me shit!" attitude.

]]>
Mon, 25 Sep 2023 00:00:00 +0000
https://reaper.is/writing/20230929-versioned-xcode-fastlane.html Multiple XCode version on Fastlane https://reaper.is/writing/20230929-versioned-xcode-fastlane.html Multiple XCode version on Fastlane

This is a short write up on how to have multiple Xcode versions on your local CI without using vagrant images and virtual boxes.

For context, I handle simplification of tech and automation at Fountane and the following is basically how we handle macOS CI boxes. We've got two setups: one is my own macbook and the other is a Mac Mini provisioned by Scaleway. The Scaleway box uses vagrant and docker to spin up gitlab runners and do the builds. On my macbook though, the virtual box images make it hard to run concurrent processes, so I had to switch back to running things the old way.

The other tool involved in this process is Fastlane. Fastlane has been my go-to tool for handling iOS certificates and Android pre-build automation. Though, when working with iOS, there's this tiny problem that you need a macOS-compatible system. The other problem is that certain Xcode versions cannot be installed on certain macOS versions.

If you need to deal with the latter, where you need to update to the latest mac and still be able to build older codebases, then you are better off going the vagrant way. Luckily for me the codebases are rather new and can be built from Monterey with Xcode versions up to 14.2. I'll have to upgrade the codebases for working with Ventura and Xcode 15, but for their current state, we can manage for a few more months.

Solution

  1. To start with, we need xcodes, a CLI tool that allows you to install multiple Xcode versions. You can install it with homebrew
$ brew install xcodesorg/made/xcodes
  2. Install the required version of Xcode, in my case that's 14.2
$ xcodes install 14.2
  3. We need to let fastlane know that it needs to use a specific version. For the sake of the example, I'm hard-coding the version; on an actual runner you'd pick this up from an environment variable so the Fastfile stays reusable across newer setups
platform :ios do
	desc "Create a development build and upload it to TestFlight (create a backup upload on diawi)"
	desc "then notify slack for it"
	lane :beta do
		xcodes(version: "14.2", select_for_current_build_only: true)

		# to keep it more reusable over time
		# xcodes(version: ENV["XCODE_VERSION"] , select_for_current_build_only: true)

		# remaining actions / statements
		# ...
	end
end

While this works, it's more of a hack than a solution, because the environment is going to be dirty and you'll have to make sure your runner cleans up after itself. Since this was done only for my local macbook, it's fine. I would recommend properly setting up vagrant if you plan to do this on a remote machine that's being provisioned for builds like these.

]]>
Mon, 29 Sep 2023 00:00:00 +0000
https://reaper.is/writing/20231013-programming-languages-i-like.html Programming Languages I like https://reaper.is/writing/20231013-programming-languages-i-like.html Programming Languages I like

Disclaimer: Not going to declare a "best programming language" here, you can find that drama pretty much all over the web

If you don't know me, short context about stuff I do when I'm bored: I pick up simpler projects that I can find or have built before, and build them again in another programming language that seems interesting. A lot of these projects are things you can build as well, so I don't need to go deep into them; this is just a list of languages that've stuck around for more than a "Hello World" project.

Ones that stuck

For the ones that don't need a story.

Now for the others,

I'm someone who started out writing cat and ls clone programs in C in school as practice, because it was part of our curriculum (not because I was some genius in school). That just stuck with me for some reason. I liked picking up hobbies because they were fun; I tried origami, drawing, Plaster of Paris based pottery, etc etc.

Programming was one of the hobbies I picked up really quickly (also surprisingly, something I'm still doing...). Anyway, I picked up Python next and then a little bit of Java. There's no reason to mark them as bad languages, I just don't like OOP, so Java is something I've never picked up again. Python though, I did do some GUI work with it for a college mini project, and then one of the devs of the Numix Theme group convinced me to write the whole thing again in Electron.

And since then I've been shoveling down the rabbit hole even after it ended.

JS is your primary language?

Nah, JS stuck because of the number of available jobs, pure and simple. A lot of jobs in India needed react and AngularJS 1.6 developers when I got out of college. I started doing react and picked up Angular for a tiny startup that I was working with (~2017). Post that, it's just been react and more react and a lot more angular and some more react, and that led to fighting with Typescript and Javascript's quirks every day for almost 3 years, day in and day out.

Think of it as a language that started as a skill to earn and stuck because I went too deep into it.

Then came the time where I wished to do a little more in terms of open source; instead of just browsing through, I wanted to contribute, and since I didn't know which project to contribute to, I just made my own. I really liked Linux and CLI tools because they worked everywhere!

As long as you have a terminal, you can run the app/tool/software. I started doing this with C because the hackers online told me that it'd be easier to write portable CLIs in C. Wrote a few of these, but writing the base layer in C is actually a lot of work. Though, after you're done with that, you pretty much can do whatever you wish with it.

Anyway, the goal was to find another language where I could avoid writing the base layer and still get a good std lib to work with. This is where Zig, V, Go, and Nim came into the picture.

I worked with tiny apps before building something serious and here's what the result was.

  • Zig
    • Verbose
    • Straightforward flow
    • the std lib could be a little more intuitive (haven't tested it again in the past 2 years, so this is from my experience in 2019)
    • There was no package manager back then and it took a lot of looking for good libraries when working with it.
  • Go
    • Straightforward flow
    • Really great standard library
    • The straightforward flow goes to hell when working with channels or anything concurrency related, which is technically a problem with most languages and hard to simplify, so understandable
    • I wish the versioning didn't need a new module path, once you release something greater than version 1
  • V Lang
    • Very close in terms of syntax to Go
    • A lot simpler, since it doesn't allow you to do one thing in 100 different ways.
    • The ecosystem is highly productive
  • Nim Lang
    • Loved the syntax, cause it reminded me of Python but with typescript
    • Really dense standard library
    • A good package manager, one of the problems of working with C was to figure out what libs are available on what operating systems
    • Easy to complicate with macros but not always needed.

Out of these, Go and Nim stuck with me due to 2 reasons that aren't listed here.

  • Binary Size
  • Compilation Ease

Zig also checks those boxes, but I wasn't a fan of the verbosity, since I could just write C and it's simpler that way. Personal preference; I'd recommend you try the language yourself to decide if you like it.

I'm by no means a master of any of these languages. I've worked with JS enough to know how to deal with its quirks without having to rely on TS, but I still use TS where applicable so that users can enjoy using the libraries in their code editors.

Bonus Languages

These are languages that I like but didn't really make it into the list of my first choices.

  • Ruby
    • I use it to write Fastfiles in fastlane, and also any custom behaviour that might be needed to make the automations smoother to work with.
    • There's some gems that are well built and maintained in Ruby as compared to other languages so in cases where it makes sense it's used as script / tool to handle certain scenarios
  • Crystal
    • It's very similar to ruby but offsets the speed issue that interpreted languages have
    • Mostly didn't use it since the std library was still growing, I should probably give it a try again soon
  • Lua
    • I kinda write a lot in lua since my entire blog runs on hooks written in Lua
    • It's a tiny language and you can actually learn it pretty quickly. It's pretty fast as well and addition of libraries is as simple as copying the .lua file.
    • It comes with a package manager so it's not that hard to extend either, I've just not gotten too deep into lua to know any more than the basics of writing functional/control flows.
  • Zig
    • Same reasons as I mentioned in the original list above.
  • ReScript / ReasonML
    • These compile to JS and are actually great languages. I tried ReasonML again recently and I like it, though the setup of the compiler/transpiler itself has gotten a little longer, with dependencies on Melange, making sure esy works on the node version you are on, etc etc.
    • ReScript on the other hand is much simpler to work with, though most of my work revolves around experimental libraries and stuff, so it was easier to just write JS than a functional-programming based solution, since I'm mostly unaware of how I'd be solving the problem going in. For something more defined, like a server with APIs and everything, it's actually a good choice
  • Swift
    • A lot of people like Swift as a language as compared to writing ObjectiveC and there's a few reasons for it.
    • It's a very practical language and has just the right amount of quality of life features.
    • This is also what makes writing UIs in SwiftUI a joy to do; there are still some glitches in SwiftUI that are being worked on, but the language itself isn't the reason for them
    • As for why I'm not using it much, it's probably because I rarely develop desktop applications anymore. The last one was WallSync and I haven't even had to update it since it works on the MacOS version that I'm on and I doubt anyone else uses it

I guess that's about it, Rust / CrabLang is probably the only one that I just didn't like because of the amount of mental gymnastics you need to do but people do like the language so I guess I'm the odd one out in this case.

That's all the languages I like and why I like them, any more details about the "Why" would make this a book and I'd like to avoid that.

]]>
Mon, 13 Oct 2023 00:00:00 +0000
https://reaper.is/writing/20231024-arrow-render-to-string.html Announcing arrow-render-to-string https://reaper.is/writing/20231024-arrow-render-to-string.html Announcing arrow-render-to-string

It's rare for me to build something for the external world but here's a package I ended up building while working on another project which I'll talk about later.

ArrowJS

ArrowJS is a simple UI library which takes the most minimal approach to writing UI libraries. Something I truly wish I had come up with, but I'm happy it exists. If you've seen my tweets/Xeets for the past few weeks, I've been praising ArrowJS for a while.

I bet on its simplicity enough to build a side project with it, and while I was building it, it hit me that a server rendered and client re-hydrated approach would make it much snappier to use. So I went on to write a tiny utility to render the html templates that arrowjs has into pure html strings. You'll see this in action once knex-studio gets the other set of updates, but for now we're just releasing the renderToString utility to the world.

arrow-render-to-string

barelyhuman/arrow-render-to-string is a simple JS module that should run anywhere that JS does because it's not using anything runtime specific and is just Javascript.

The current scope of the project is to help you stringify ArrowJS's html templates, and do that really quickly. ArrowJS comes with its own view mounting in place so there's no need for a hydration utility, but over time I do plan to create a tiny framework that standardises how you write ArrowJS island based apps.

Well, that's all I have for the announcement. Any feedback you have, or any issues you face, just raise them on the repository and I'll check them out!

]]>
Mon, 24 Oct 2023 00:00:00 +0000
https://reaper.is/writing/20231127-the-decent-developer.html The Decent Developer https://reaper.is/writing/20231127-the-decent-developer.html The Decent Developer

Over the years, this post has taken a different form every time I wished to write about it. It's been written as a guide for other developers to read through. It's been a cry for help from a developer who wasn't able to achieve anything. There was also a version where I just felt I couldn't really produce any value for anyone anyway and was going to rant about not having a direction/purpose in life.

I don't know if I'll publish this version either, because I don't know where this version is headed either. I never do; most of what I write here is an immediate jot-down of whatever I'm thinking. I don't know if that justifies the spelling and grammatical errors, but they are mostly because what you are reading is just a long-form thought.

Defining Value

A lot of what I've built over the years were based on simple ideas I wished were accessible to people, it started off as a journey to build tiny, no nonsense apps for everyone to use at a low price tag or even no price tag.

This changed to building for developers, since I thought I could understand that market better and maybe do something for them. Over the last few years I moved to just building for myself. The feeling of freedom when you start doing this is really something else; it did fade for a while since I couldn't give these things time due to my day job, but I'm hoping I'll be able to change that or get it under control over the next few months.

"What value does it provide for the general public?" I'm tired of this question at this point. Does it have to? If a problem is a niche one, I don't think I need to build something that solves it for everyone, every time. Let me enjoy my hobby a bit.

You think musync is an app anyone needs? It's a spotify to spotify sync, since spotify doesn't let you share your Liked Songs as a playlist. So I basically automated that.

A category fluid changelog generator? Nope, no one needs that either. There's at least one version of that implementation in every programming language. A release manager? Nope, also something every programming language provides.

What about the 100 other things I've built? Nope, there's better and more polished versions out there.

Point being, I don't plan to provide value when I'm building for myself, and most projects built for fun do have a section stating that they were packaged/coded/released just for me, or because I was tired of copying them around.

Usefulness

Are they useless? Mostly no.

They do exactly the amount of work I wished for them to do. I don't stop pushing to anything I'm building until I feel it can do the basic thing I wished for it to do. I develop MVPs of my ideas as fast as I can and then move to the other things that are blocking my work.

An example of this would be jinks. It's got 15 commits and a single release; it's a simple implementation that doesn't need more messing with. Why would I keep adding or removing things from something that's already done and working? Its API was stable the moment I released it and has been ever since. Is it a useless library? Hell no, it's been in production for over 2 years now.

But that raises the question, does anyone else need it? Nope, they don't. It's not that hard to write to begin with, and no senior developer writing an app with the requirement of being able to inject links would need something like this. They'd rather use a lexical editor's block model so they can allow editing such descriptions.

I on the other hand wrote it for an app where I knew there wouldn't be a lexical editor at all and it'd make no sense to add a full lexical rich text parser for something so simple.

That brings us to the best part

Requirement Understanding

A lot of your work as a developer is going to involve going through boring parts and using boring software and sometimes you'll have to write targeted software and evaluate new tech. Some developers find the latter more interesting and some find the former the fun part of their work.

One of them is more focused on enjoying what they do and one of them is focused on completing their work. Both parties are right, and have different goals so comparing them makes no sense in the first place.

The point I wanted to make was, being a developer isn't always fun, 80% of your time is spent thinking and predicting problems that might show up and controlling yourself from solving them too early. The remaining 20% is spent thinking about variable names, which honestly isn't that fun either.

It's not a magic formula but know that a lot of what you code isn't going to make sense to anyone else unless they understand the requirement and understand the scope of your implementation. In which case you have 2 things that you can do.

  1. Build it for you and then document the hell out of it so people understand the why and what.
  2. Write a super generic solution for the problem and then provide it as a solution so the others can extend on it.

Or there's a bonus 3rd one.

  • Build it to learn, call it an experiment and then forget it. Also let people know that it was an experiment so they can use the code as a reference instead of depending on it altogether

There's more stuff that I could share, but it might get controversial if it followed the points I mentioned here, so I'm going to break it into another post and publish that someday.

That's all for now in terms of growing to be decent at what you do, keep doing it for yourself first, you can do it for others once you are happy with what you build for yourself.

Adios!

]]>
Mon, 27 Nov 2023 00:00:00 +0000
https://reaper.is/writing/20231216-personality-in-a-webpage.html Personality In a Webpage https://reaper.is/writing/20231216-personality-in-a-webpage.html Personality In a Webpage

Websites are a beautiful concept, being able to make art using digital pixels and content is something I really wish more people could do and a lot of people are working on making websites easy for people as well. I'd say Framer is one of the few services doing this today.

Though, the post isn't about Framer.

It's about the people who are really fond of working on websites and polishing their intricate details to the point of perfection. These programmers are really fond of their craft and I'm really envious of them. That leaves me with 2 options.

  1. Replicate and Learn from them
  2. Be proud of my fellow developers

I chose to do both. There are things I've taken up as "inspiration", and to help these developers get a little more of an audience, I built something.

Announcing minweb.site, a really simple collection of websites that Arne and I find either pleasing to look at, or that show the designer's effort and personality while staying minimal.

Yes, the collection is going to be opinionated but if you like the concept please help spread it. If you are someone who would like to submit their website to the list, please do so!

I can't guarantee adding every website I get in a request, because I have to absolutely like it for it to be part of the list, but submit them anyway!

At the end of the day, it's your personality and your website so it doesn't matter if I like it or not but I have to like it to be a part of the list 😂

]]>
Mon, 16 Dec 2023 00:00:00 +0000
https://reaper.is/writing/20240116-my-framework-space-2023.html My Framework Space 2023 - 2024 https://reaper.is/writing/20240116-my-framework-space-2023.html My Framework Space 2023 - 2024

What is the one thing that the web dev world has too much of? Frameworks.

It's basically (n+1)-d frameworks, where d represents the humans with mixed opinions (thus reducing the count) and n is the original number of opinions. It'd be worthless to spend time finding an answer to that expression.

Anyway, everyone seems to have a favorite framework and meta-framework, so let's get to the ones I use/wrote.

Base

Contrary to my work in Javascript, a lot of my webapps start with a simple main.go file serving a directory of html, js and css files. I add APIs to the go server and use them in the above mentioned html as needed.

This is what I call, Base. It's a very simple setup and can be created in under a minute. The client side code can be embedded into the resulting go binary and it's easy to deploy and use on pretty much any straightforward server today.

The reason for its existence is simple: not getting too involved in what the right tech is supposed to be when I'm testing an idea. This is what minweb.site was initially built on.

DIY Boilers

I've got a few of these and most of them are experiments or evaluations of the various tech available in the full stack web dev space. They are simple nodejs/go/nim projects that have the framework's source code placed into the repository itself.

This allows me to avoid generalisation and build the DIY Framework to be very specific to what's available in the actual project folder. It also makes it simpler for the user of this framework to change and improve the whole thing while working on the actual project without having to wait for me to add the improvements to the original repository.

I use these at Fountane while working and a lot of these have changed over time based on requirements, so much so, that it's hard to even compare them to the original source.

Nomen

This is something I've been working on, and it ties into the concept of keeping the mental model very simple and close to plain NodeJS.

It exists because a lot of what we do today is tied to bundlers and magical setups. A few established frameworks that fall into the "it's just magic" bucket, from the list of things I use, would be Nuxt.js, Next.js and Astro.

There is no problem using them, since the teams behind them are very dedicated and love what they do, but a lot of the issues I've faced while working with them come from the shift in mental model they require.

Eg:

You're used to a setup where the database connection is shared by defining a file like the following:

export const db = connectToDb(dbConnectionConfig)

And now you are using this connection in the different server functions provided by let's say nuxt, next, astro, qwik, etc.

Each one of them has a differently targeted build. I've used Next the most, so I'll go with the unneeded problems I had to solve there.

In Next.js this connection is not shared; it's impossible for it to be shared, both in development and in production. So much so that I had to write a post about it, since the question came up so often in their discussions.

Another solution available today is provided by the Prisma docs, since a lot of people started using Prisma with Next.js.

Now they claim that it isn't needed in production, but I've hit the same error in production since API functions are still initialised the way a lambda would execute. This is faked by a local server in development and happens for real if you deploy to Vercel.
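The usual workaround, and roughly what those docs suggest, boils down to caching the instance on a global so re-evaluating the module (dev reloads, per-function bundles) reuses the same connection. A minimal sketch, where connectToDb and its config are stand-ins rather than a real driver API:

```javascript
// connectToDb is a placeholder so the sketch is runnable; swap in your
// real driver (pg, knex, prisma, etc.)
function connectToDb(config) {
  return { config, id: Math.random() } // fake "connection" object
}

// Cache the connection on globalThis so repeated module evaluations
// hand back the same instance instead of opening a new one.
function getDb() {
  if (!globalThis.__db) {
    globalThis.__db = connectToDb({ host: 'localhost' })
  }
  return globalThis.__db
}

const db = getDb()
const dbAgain = getDb() // same object, no second connection
```

This is a sketch of the pattern, not Next.js-sanctioned API; the point is only that module-level state can't be trusted to survive, so you pin it somewhere that does.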

Tiny nitpick, I know, but you lose your basic Node.js server mental model. It changes again for how Nuxt.js works and for how Astro works (depending on adapters).

A few more such setups led to people using external services for everything. You need cron jobs? Use a cron job service. You need delayed triggers? Use a webhook service.

Things that are pretty trivial to write now require you to pay for services, and that's fine if you like working that way. It just doesn't work for me.

Hence, Nomen. It's still a Node.js server, the same server you'd write without the bundlers, but you can write client-side code for a few supported UIs and it'd work the way a normal Node.js server would. We try not to modify any paths or add in any magic.

Adex

Probably hypocritical after that tiny rant up there, but Adex uses Vite's magical environment to provide a simple way to write ArrowJS apps. I needed something that was quick to write with while still ending up with a close-to-Node.js setup. I was able to achieve that with a custom plugin I wrote for Vite, and that turned into what's now called Adex.

What happened to all that magic talk?

Adex is an open-ended setup; you can change its entire runtime, request handling and rendering by just adding your own custom entry files. So, it's more a meta-meta-framework than a meta-framework.

But, it defaults to the ArrowJS libraries so you can say that it defaults to magic and then you can remove it if needed.

As for why add a bundler? It's mostly because of the existing integrations available for Vite: Tailwind, UnoCSS, UnImport and other amazing DX improvements that Vite already has and that'd take Nomen a bit more time to establish. I need people to like ArrowJS before Nomen ships with it as the default rendering engine.

ArrowJS is one of the few libraries that don't need a bundler since there's no JSX to transform; you can use just TypeScript to transpile and go ahead with it, and that's what Nomen does with its arrow module.

I guess, I wanted to try out what Vite could do and I ended up building it.

Both Nomen and Adex are still under development, with Adex receiving a little more effort right now since I need to understand and modify the Vite plugins quite a bit to make sure I don't add or remove too much of the user's code.

Both of them are also written so that you don't need to write APIs for route data loading, similar to how you'd do it in Remix, except Remix creates a fetch request while these both pass that data as JSON to the client, similar to Next.js.

Basically, I picked things that made sense from whatever I've used and built a monstrosity that I like.

That's all the stuff I use right now, I sometimes still pick up Astro when I don't feel like dealing with the bugs I need to fix in Adex and Nomen but we circle back to fixing them anyway since it's important in the longer run.

]]>
Mon, 16 Jan 2024 00:00:00 +0000
https://reaper.is/writing/20240213-i-circled-back-to-preact.html I circled back to Preact https://reaper.is/writing/20240213-i-circled-back-to-preact.html I circled back to Preact

I've mentioned liking ArrowJS and working with it for simple, straightforward web apps with minimal interaction. The concept of being able to do it all without build tools sat well with my goal of plainjs.

The overall concept of reactivity as a primitive instead of the entire framework isn't new and is also something SolidJS has proven to be a good base to work with.

I liked this and wanted to see what I could do to make it simpler to work with ArrowJS, both without build tooling and with it.

Without build Tooling

The first iteration of this is nomen which allows you to choose your rendering engine to be one of the following UI libraries - preact, arrow and vanilla js.

Nomen doesn't build or transform anything for arrow; it just bundles it all up so you don't have to. The bundling is done at application startup to avoid runtime overhead.

You also have a build cache which would be used in production environments to avoid rebuilding if the build is already in place.

With build tooling

The other iteration was what most modern meta frameworks do and it was to use Vite to handle the transformation and final distribution.

This was a lower effort since I didn't have to implement most of the basics of a meta framework from scratch and instead just write a plugin for vite. This didn't sit with the plainjs mentality but was necessary to be able to quickly evaluate the usability of my love for ArrowJS.

ArrowJS

Let's actually look at the good, the bad and the things that I think made me move back to preact.

What's good?

  • Primitives
  • Speed

Primitives

The basic primitives are very extensible and work really well. This is what small and well contained utilities can do.

The entire library works off of the reactive and watch primitive functions exposed by it.

And they do exactly what their names suggest: one creates a reactive proxy and the other provides a way to watch these reactive items.
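To illustrate the shape of those two primitives, here's a toy Proxy-based version. This is emphatically not arrow's implementation, just the concept of a reactive proxy plus a watcher:

```javascript
// Toy reactivity primitive: a Proxy that notifies watchers on every write.
function createReactive(initial) {
  const watchers = new Set()
  const state = new Proxy({ ...initial }, {
    set(target, key, value) {
      target[key] = value
      for (const fn of watchers) fn(target)
      return true
    },
  })
  const watch = fn => {
    watchers.add(fn)
    fn(state) // run once immediately, as most watch primitives do
  }
  return { state, watch }
}

const { state, watch } = createReactive({ count: 0 })
const seen = []
watch(s => seen.push(s.count))
state.count = 1
state.count = 2
// seen is now [0, 1, 2]
```

Arrow's real primitives additionally track which properties a watcher reads so only relevant watchers re-run; the sketch naively notifies everyone.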

Speed

The whole thing is super fast. You can check this in their demos as well, but using it in a live app made me realise how big a tree it was constructing while still not flinching one bit.

What's bad?

  • Readability
  • Composability

Readability

The overall nesting that html template literals create gets hard to track and work with over time.

While you can create simple components with arrow's html function, it's still hard to work with direct html strings.

To help with development, you can use lit-html's syntax highlighter, and it definitely helps, but it's still not as pleasant to work with as, say, Vue's SFC syntax or JSX.

Composability

Keeping track of what's reactive and what's not can get hard. If you have a static html string and the nested reactive html component, there might be cases where it doesn't update itself because the root html component was static.

Giving the control of what is and isn't reactive to the developer doesn't work well in larger trees and you end up having to go through every insertion to find which template is typed wrong.

Luckily while using adex the above was very rare but that doesn't mean the problem doesn't exist.

I still like the concept and would continue to use it for simple apps where I don't need build tooling but this does change the priority of what nomen and adex will focus on.

Development for both of them will now move towards making Preact easier to work with, since it's going to get hard for Preact's team to keep up with every new change React brings to the stage, and I want first-class frameworks and libraries for Preact instead.

I personally think it's about time the preact ecosystem starts building itself slowly and steadily to avoid piggybacking on the react ecosystem.

Final point: I am back to working more on what can be done with Preact instead. That doesn't mean the ArrowJS utilities created in the meanwhile will be abandoned.

That's all for now, adios!

]]>
Mon, 13 Feb 2024 00:00:00 +0000
https://reaper.is/writing/20240406-updates.html Updates April 2024 https://reaper.is/writing/20240406-updates.html Updates April 2024

Haven't posted for a while now...

Site Updates

Moving forward, I won't have a twitter for a bit and all updates will be a part of this website. I don't plan on cluttering the Writing section with all the developer updates; it will instead keep the same post index it always had.

On the other hand, for people who'd like to follow the development updates or read a feed of the thoughts I get on a daily basis, those are now available as Developer Logs; you can visit the page at /devlogs or go to the Writing section and click Dev Logs →

Sometimes I forget that alvu was built to be amazingly useful for all my cases.

Development Updates

We've started making things dedicated to Preact; the first iteration came as a beta release of a Date and DateRange picker in the form of PreachJS/Datepicker.

PreachJS itself isn't a super big project right now since it's a solo mission; hopefully more people join in to help build an ecosystem around Preact.

Other than that, esbuild-multicontext now has a close-to-stable API and is being used to build a few more utilities around the various tiny libs that I write. This is one of the repositories where I have auto-merging of dependabot updates, and it seems to be going well for now. If the strategy of auto-merging backed by core unit and integration tests keeps doing well, I might add it to other libraries as well; it'd save both users (if any) and me the time spent on maintenance releases.

The last one is knex-types, a fair attempt at generating flexible types from an existing schema; it acts as the building block for knex-based data models. The goal is to have well-defined data repositories generated for you using just a tiny layer atop knex. This should simplify development while keeping the fun of using a query builder alive.

I haven't been very productive the past months so the list is pretty short but hopefully there's more value provided in the next few months.

Posts like these would become a daily feed on this site, so it should be easier for me to make logs of the idiotic shit that I do.

]]>
Mon, 06 Apr 2024 00:00:00 +0000
https://reaper.is/writing/20240417-poor-mans-fuzzy-search.html Poor man's fuzzy search https://reaper.is/writing/20240417-poor-mans-fuzzy-search.html Poor man's fuzzy search

Been a while since I suggested how to do things or what alternatives exist for a particular problem.

We are dealing with Search today.

Why? Recently I had to research (no pun intended) search algorithms and various platform-specific solutions for fuzzy searching. One of the requirements was for a large dataset and the other was for mudkip.

The Problem

You see, searching itself is a very simple concept: you browse through the given data to find matching patterns. Example: if I have 10 names and I need to find how many have the letter S in them, then I'll have to scan all 10 names, going letter by letter to see if each one has the letter S.
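In code, that naive scan is just a filter over every entry (the names here are made up for illustration):

```javascript
// Linear scan: every name is checked, character by character, for the letter
const names = ['Sam', 'Riya', 'Steve', 'Arjun', 'Lisa', 'Noah', 'Sana', 'Mira', 'Omar', 'Seth']
const withS = names.filter(name => name.toLowerCase().includes('s'))
// withS is ['Sam', 'Steve', 'Lisa', 'Sana', 'Seth']
```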

Now, mudkip has a pretty fast search even though it's a linear run; it stays fast because it uses a very simple scoring algorithm and no string-similarity algorithm.

The problem is that this can be a very bad idea when working with a large dataset, which is why you would've seen me use something like Pagefind, where the search index is generated at build time.

Mudkip has no such index and that makes it slower when working with large sets. So, the next option is to build something that can handle fuzzy search, really fast.

The Solutions

Server Side

The large dataset on the server can be searched in various ways, the one I like the most is pgvector and vector similarity algorithms, since they are fast and can handle most fuzzy edge cases.

Recently someone also created a rust version called pgvector.rs which seems to be heading in a great direction. The other option is to combine this with a Materialized View that refreshes as necessary, this helps with controlled searches and controlled updates.

Next up is using Redis. There are various ways to do it, and I created a tiny library that allows you to use both of the approaches below. It's available as @dumbjs/search and handles the methods I'll be explaining.

I can't go into too much detail about the solutions since there are better articles doing exactly that, but to give an overview, they can be divided into two:

  1. Phonetic Search
  2. Prefix Keys

A phonetic search is based on what a word might sound like instead of the whole word.

For example:

  • Take the words hello world; the phonetic values here would be HL and ARLT or FRLT. These are similar in the sense that a similar-sounding word like worl would result in ARL and FRL, making it easier to search the keys built from them using SCAN in Redis.

We store these under keys like search-index:HL, along with whatever data you want to identify with this entry.
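To make the key-building concrete, here's a deliberately crude toy encoder. It is my own stand-in, not Soundex or Metaphone and not what @dumbjs/search actually uses: keep the first letter, drop the remaining vowels.

```javascript
// Toy phonetic key: first letter + consonant skeleton, uppercased.
// Similar-sounding words collapse toward similar keys.
function phoneticKey(word) {
  const w = word.toLowerCase()
  return (w[0] + w.slice(1).replace(/[aeiou]/g, '')).toUpperCase()
}

// Prefix the key the same way the post describes storing them
const redisKey = word => `search-index:${phoneticKey(word)}`

redisKey('helo')  // 'search-index:HL'
redisKey('hello') // 'search-index:HLL'
```

A real phonetic algorithm also collapses doubled consonants and maps sound-alike letters (c/k, f/ph), which is why hello and helo would land on the same key there but only on nearby keys here.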

Prefix Keys

The prefix keys are more simplistic in the sense that they are just indexes constructed from words and then multiple words can be a union or intersection of data from other keys.

For example: for the same statement hello world, I'd save a few indices. hello => ['h','he','hel','hell','hello'], creating a key for each of them; now I can use ZRANK to find any of those, and all of them hold the identifiers I'd need in my usage. Because of how sets work, I can also compute an intersection for a search that might look like hel wor, where the results are the intersection of the values available in the keys search-index:hel and search-index:wor.
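The same idea as an in-memory sketch, with Maps and Sets standing in for Redis keys and sorted sets (the real thing would go through ZADD/ZRANK and a set intersection):

```javascript
// All prefixes of a word: 'hello' -> ['h','he','hel','hell','hello']
function prefixesOf(word) {
  const out = []
  for (let i = 1; i <= word.length; i++) out.push(word.slice(0, i))
  return out
}

const index = new Map() // 'search-index:<prefix>' -> Set of document ids

function addToIndex(sentence, id) {
  for (const word of sentence.toLowerCase().split(/\s+/)) {
    for (const prefix of prefixesOf(word)) {
      const key = `search-index:${prefix}`
      if (!index.has(key)) index.set(key, new Set())
      index.get(key).add(id)
    }
  }
}

// Each query term maps to one key; the result is the intersection of the sets
function search(query) {
  const sets = query
    .toLowerCase()
    .split(/\s+/)
    .map(term => index.get(`search-index:${term}`) ?? new Set())
  const hits = sets.reduce((a, b) => new Set([...a].filter(x => b.has(x))))
  return [...hits]
}

addToIndex('hello world', 'doc1')
addToIndex('help wanted', 'doc2')
search('hel wor') // ['doc1'] — only doc1 matches both prefixes
```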

Pretty bad explanation but then the library handles most of it for you.

Mudkip

Back to mudkip. The solution for improving mudkip's search is easy: we create a more optimized index at build time and a tiny cache in the browser to make similar searches faster.

One approach is to "get inspired" by what wade does, which is to create a trie of the possible tokens. This should allow easy traversal and improve scoring while keeping mudkip's original scoring logic, which is taken from Sublime Text.
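A minimal sketch of such a token trie might look like this; it's my own rough take, not wade's or mudkip's actual structure:

```javascript
function createNode() {
  return { children: Object.create(null), terminal: false }
}

// Walk/extend one path per character; mark the last node as a complete token
function insert(root, word) {
  let node = root
  for (const ch of word) {
    node = node.children[ch] ?? (node.children[ch] = createNode())
  }
  node.terminal = true
}

// Descend to the prefix node, then collect every complete token below it
function tokensWithPrefix(root, prefix) {
  let node = root
  for (const ch of prefix) {
    node = node.children[ch]
    if (!node) return []
  }
  const out = []
  const collect = (n, suffix) => {
    if (n.terminal) out.push(prefix + suffix)
    for (const ch of Object.keys(n.children)) collect(n.children[ch], suffix + ch)
  }
  collect(node, '')
  return out
}

const trie = createNode()
for (const token of ['search', 'sear', 'select', 'score']) insert(trie, token)
const matches = tokensWithPrefix(trie, 'se')
// matches contains 'sear', 'search' and 'select'
```

The win over a flat token list is that a query prefix narrows the candidate set in one walk, so the scoring logic only runs against tokens that can actually match.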

I'll probably be writing that as a tiny package that can be shipped with mudkip so you should have an easier way to use whatever I end up building...

That's mostly it for this log, Adios!

]]>
Mon, 17 Apr 2024 00:00:00 +0000
https://reaper.is/writing/20240426-fossfox-shutdown.html FossFox and its shutdown https://reaper.is/writing/20240426-fossfox-shutdown.html FossFox and its shutdown

The job market isn't a fun place for most people. I personally had a hard time looking for a job before I joined NearForm, and the fact of the matter is that it didn't matter what my resume had or how skilled I might be, because most applications didn't even get a single reply.

I don't want to dig up what all happened but UpStash and Nearform were the only 2 replies I got from about 190 applications. I applied to every possible OSS and Services company that I wished to work with.

Though, I did find a good website that I liked, fossfox.com (the link's now dead), which provided 2 features I really appreciated:

  • Listed all the OSS companies that were hiring for roles
  • Scheduled newsletter of any opportunities that match your request.

The other things that existed were,

  • No login
  • Simple Straightforward UI

I probably would've posted a screenshot on twitter but I don't have a twitter anymore, so that's that.

Anyway, fossfox just vanished into thin air, so I'm not sure what happened. The point of the post is that if something so well built was shut down because no profit was being made, then there's something wrong with what people consider an industrial product.

I don't have the energy to rant about small, feature-complete apps going out of business, because it's a problem with a lot of variables and a lot of luck involved. Though, I'd like for someone to pick it up again and build it.

When you do, do send me a mail with the link to the app so I can help with anything that I can, but for now, fossfox is dead and I don't like it!

Adios!

]]>
Mon, 26 Apr 2024 00:00:00 +0000
https://reaper.is/writing/20240501-i-rewrote-alvu.html I rewrote Alvu https://reaper.is/writing/20240501-i-rewrote-alvu.html I rewrote Alvu

Readers who've been around might know about the tool that builds this website. It's called alvu and it was my little attempt at building a content generation system.

Wasn't the first static site generator I built but is probably the last one since the idea has worked so well for the past few years that I don't think I need something else unless my needs for it increase.

Alvu itself is very simple: it takes in 3 basic folders (public, hooks, pages), each carrying the files for its purpose, and then you just run

alvu

That's mostly it.

There's definitely more to it than just doing this, since it does generate the website you see here. You're free to generate dynamic content at build time and can add various steps during the build. I didn't spend time writing a Tailwind plugin in Lua, but you could probably do that and have it generate a Tailwind CSS file during the build as well.

So, why rewrite it?

I was dumber then

To be fair, quite a few of the decisions were made by a less informed and less knowledgeable developer when it was first written. It was written in a single file, and surprisingly, Go is fun to write in a single file.

Things were too attached to each other

Over time, I realised that a lot of the code was very tied together, and that stopped me from making modifications that became necessary but weren't thought of when I started writing it. One such issue was reloading when a hook changed: since the hooks would have to be read and parsed again, this needed changes in how the build flow was written. The new version isolates it all pretty well, so it's a trivial modification after the change.

Naive implementations

The other limiting factor was how simplistic the CLI flags were. Since I was the only user, it made no sense to use a proper CLI helper package, but I did add one in the rewrite since there are now a few users who like alvu and are using it.

Needs to be more robust

The final reason was that quite a few things were flaky: the websocket connection and live reload were flaky, the hooks' cascading worked most times but had the odd one-off case where it didn't, and the markdown transformations were flaky when a new piece of content came in with weird tabs/spaces.

All of this is now solved, and I can hopefully provide it as a drop-in replacement (aka, no breaking changes). All the improvements being added come with a deprecation warning for the previous implementation.

Now the bad parts.

It's larger

As it now holds a little more code and functionality than before, it's larger in size.

17MB -> 23MB unpacked and ~8MB Zipped.

So, the download times on most build systems are going to increase slightly. If you were running it on a local build system on a Raspberry Pi or something, I'm really sorry; once the deprecations are removed over the next few versions, it'll be smaller again.

It's slower

Now, this one I'm not going to leave for the next few versions, but I removed concurrency while processing files (for now); this is a temporary change to make sure things are stable before I make them concurrent again.

This is also why it's a post and I haven't linked you to a release. The current release of alvu is going to stay as is and you can continue using it, and I don't mind adding patches to the original release if there are any issues. The rewrite is going to take a bit longer since I'll have to profile it enough to make sure it's as fast as or faster than the current version.

I might need help

I know it's a personal project and I don't usually ask anyone to help with my personal projects, but it'd be nice if someone could pick up issues and profiling on Windows and Linux systems as they find time. If you can help with development, that'd also be nice.

]]>
Mon, 01 May 2024 00:00:00 +0000
https://reaper.is/writing/20240530-may-updates-2024.html Updates (May 2024) https://reaper.is/writing/20240530-may-updates-2024.html Updates (May 2024)

A few things happened this month and we are now done with whatever work I had outside of my development life.

Since we are back, a few things have gotten attention and are being worked on, here's the log for that.

jotai-form

There isn't a huge update here; I just started fixing existing issues and adding a few more DX helpers, one of which is being able to use zod schemas to create the form atom for you.

eslint-plugin-valtio

This plugin has always been slow in terms of my development focus, but it looks like there have been a lot of changes in both eslint and the original implementation of valtio, so there are things that need fixing here; working on that.

alvu

The tool that powers this website is now in its new form, and I'm still profiling and migrating this website to it first to understand all the areas I've missed in the rewrite.

goblin

As always, goblin is gaining attention, and that calls for performance and caching improvements. It currently caches the binary by letting Go handle the build cache and then releasing it after 12 hours, so there are fresh builds even for older versions of a package (storage size limitations, since it's run out of my own pocket for now and no one has to pay anything for it).

Though, a caching implementation using MinIO is complete and under stress testing to make sure parallel build requests don't create issues.

That's mostly it in terms of OSS work as of now, it's been slow for the month because of other responsibilities but hopefully I get to have more fun in the next few months with all the projects I have.

That's it for now, Adios humans!

]]>
Mon, 30 May 2024 00:00:00 +0000
https://reaper.is/writing/20240620-barelyhuman.xyz.html The barelyhuman.xyz crash https://reaper.is/writing/20240620-barelyhuman.xyz.html The barelyhuman.xyz crash

I don't know how long the sites have been down, since I haven't had the chance to work on any of my public projects.

If you are someone who uses any of the following for your use case or for learning and it was down, I apologize for the downtime and they are now back up.

https://og.barelyhuman.xyz https://music.barelyhuman.xyz https://barelyhuman.dev https://pdf.barelyhuman.xyz https://hits.barelyhuman.xyz https://minweb.site

For anyone else that'd like to know what happened to them, the short answer is Nothing.

The longer one is: the in-memory pm2 process restarted during an ongoing maintenance phase at DigitalOcean, and due to a version mismatch between the various Node versions on the server, there was a restoration conflict and pm2 couldn't do anything to restore the processes.

This is now fixed with a system-level Node install and a system-level pm2 install, though there are 2 Node-based apps on that server and both need a little maintenance update to keep something like this from happening again.

All in all, my bad for not being able to keep up with my own speed of creating and launching projects and apologies for that.

]]>
Mon, 20 Jun 2024 00:00:00 +0000
https://reaper.is/writing/20240626-failure.html A Failure https://reaper.is/writing/20240626-failure.html A Failure

We talk so much about success and celebrating it, and we forget that there is a big hidden part behind it. We all have different definitions of success, and I'm glad we have that choice, though I'm not so sure about how we all try to hide our failures.

A big part of our life involves going through small and random failures, and yet we decide it's necessary to keep them hidden to look good. Which makes sense, since you wouldn't get a job if your resume were just filled with failures.

Though, I'm not fond of the fact that it makes us seem like we aren't human and that failure isn't okay.

I've heard people say this to me a few times in the past few weeks in light of recent layoffs: "I see that you aren't worried. Oh obviously, you'll just get another job like it's nothing". Usually, I ignore it and move on, but considering the frequency of this statement, I had to give it another thought. I realised that people don't know the other side of the story at all, and that's on me because, like everyone else, I never spoke much about the things that didn't go right.

Let's try to change that a bit. Let's see what the generic definition of success is:

The achievement of something desired, planned, or attempted

The gaining of fame or prosperity

That's from Google. Here's a list of a few failures of mine, based on that

  1. I'm going to involve things that I do sometimes wish I had
    1. Not a master of any domain
    2. 0 successful side projects
    3. 0 successful oss projects
    4. Haven't built anything remotely impactful
  2. Lol, not famous at all, so a big failure here

Now, let's put down my definition of success (the post would be of no meaning otherwise)

Growth in overall knowledge and a decent income to be able to code and make things I love

A pretty boring and "trying to sound smart" kinda definition but now let's see the chain of failures since the goal is a long running one.

2017

  • Lack of knowledge of integrating Google API's in Python cost me my internship
  • Wasn't a decent developer in Angular 1.6 resulting in incomplete work and low quality output

2018

  • No job after about 200 applications
  • 0 idea of how the market in India worked
  • Failed to even land an interview

2019

  • Too dependent on the senior to help solve the issue
  • Sucked at debugging the most basic things
  • Never worked with a system at scale and used that as an excuse to justify my dumbness

To be fair, there's a lot more as I got deeper into the tech industry but, that's not the point of this.

If you came to meet me and these are the only points I told you about, you'd probably think I'm dumb, depressed and just like complaining and blaming the world. You wouldn't expect me to be working for NearForm or someone maintaining quite a few OSS projects.

The point isn't that I've failed both in the generic sense and by my own definition, but to put the failures down and see that they alone don't show the complete picture.

Putting down just one side of the story doesn't make sense and never will; the only kind of people impressed by just listening to all your achievements are people living in a land of delusion where every expert just wants to join their firm/company because it's the best there is.

I probably should write my achievements, I'd look like a depressed lunatic if I didn't. Though, readers of the blog know that I'm not depressed and probably just a lunatic.

End of the day,

  • understand that it's okay.
  • understand that blaming the world or other people will not work for you.
    • (on a different note, I've seen it work for people who wish to climb up the corp ladder and if that's the definition of success for you then I'm probably the wrong person for advice)
  • understand that you need to acknowledge the failure to be able to work on it.

I wouldn't be able to learn and tap into different domains, work on random projects that no one but I use if I just ignored my past and was entitled to whatever little I had accomplished. It's fun to celebrate the little achievements, just don't get lost in it.

Also, don't get lost in things that don't go as planned; not everything has to work out according to your plans, and that's the fun part about it. It'd get repetitive and boring otherwise.

That's it for the now, Adios!

]]>
Mon, 26 Jun 2024 00:00:00 +0000
https://reaper.is/writing/20240713-making-something-july-2024.html Making something https://reaper.is/writing/20240713-making-something-july-2024.html Making something

]]>
Mon, 13 Jul 2024 00:00:00 +0000
https://reaper.is/writing/20240804-the-pixel-3a.html The Pixel 3a https://reaper.is/writing/20240804-the-pixel-3a.html The Pixel 3a

No, this isn't a review for the Pixel 3a.

Context

If you've met me in real life, you've seen me with my tiny phone, the 12 mini, and as much as I like the phone itself, the battery degradation has frustrated me more than once, so as always, I carry a backup phone. The backup phone has been the Nokia 5310 (2020), which is a very powerful dumbphone for what it costs, but it is no longer a viable option since the 2G towers in India have started shutting down and the 3G towers were never that great anyway.

Anyway, that's part of the reason but I would need a new backup phone sooner or later and I saw a listing on Amazon for Pixel 3a for a price of 13k INR and that's all it took.

After effects of De-googling

You saw that coming, didn't you?

Back to topic. Pixels are one of the few phones where installing a custom ROM doesn't take too much work and is fairly well documented, and I didn't need another device tracking the shit out of me, so I had to do this. First, I browsed through GrapheneOS to find it no longer supported the Pixel 3a, so the next option was to check out CalyxOS, and I got lucky.

I'm not going to write the entire process of installing the ROM since their website does a really good job of explaining everything. Also, not responsible if you end up bricking your device.

We now have a device that doesn't have any software from Google and instead provides various ways to install the things you might need, including apps that are only available through the Play Store.

In my case, here are the apps I ended up installing that weren't FOSS:

  • Waze (Navigation and Maps)
  • 9Gag (memes)
  • Uber Lite (no other lighter option)

And nothing else, that's all that's on that device from the Play Store (Aurora Store), and here's a short experience of using this for over 2 months.

Battery Life

Surprisingly, it lasts more than 2 days on idling and light usage of the above-mentioned apps. It's been the perfect phone to just have in the pocket, and since it charges over USB-C and my MacBook charges over USB-C too, I didn't have to add another cable to the bag (I already carry 3 USB-C cables and a Lightning adapter for the phone).

Calyx adds open alternatives for most things, so while Waze works, there are glitches where the location data doesn't update; the same goes for Uber and Uber Lite, where the app's directional data isn't updated properly, so this might be a bit of an issue. The default OpenStreetMap-based maps app is good enough to help you around if you can navigate using a map, but if you are very dependent on apps like Google Maps, you might have to rethink using this.

Just to reiterate, Waze does work, but there can be certain glitches. It's easy to just close the app and open it again to get them sorted, but they're there and might not be convenient for everyone else.

Calling

It's a 4G LTE compatible phone so there's no problem with the bands and connectivity wherever I was going and I wasn't worried about having to figure out a harder way to get in touch with people if needed. The LTE speeds are not as good since the frequencies have had some changes since the device was originally released. It also has ESIM support if you wish to use that instead.

Music

Manually archiving music onto the phone has definitely been fun, but it doesn't have expandable storage, so that's still a limitation you have to deal with.

Summary

De-googling to this extent is probably not for everyone and becomes rather inconvenient for most, but I guess I've lived with it for most of my life, so it wasn't that hard. Do I recommend someone else doing this? Not really. You might think there's too much initial effort for very low ROI, but if you aren't worried about that, then do go for it!

The ideal

I still think the ideal phone for me would be something that looks like a Blackberry and comes with basic AOSP Android. Android itself is customisable enough at the base level that you don't have to spend money creating a custom OS and then charge the crowd for it.

Anyway, it's an ideal, but let's see if I get frustrated enough to build one at some point in life.

That's all for now, Adios

]]>
Mon, 04 Aug 2024 00:00:00 +0000
https://reaper.is/writing/20240816-faux-js.html Faux Javascript https://reaper.is/writing/20240816-faux-js.html Faux Javascript

I wouldn't be the first to think of a language that could be functional and compile to JS and I probably won't be the last one.

A lot of what's good about functional languages doesn't work well with Javascript when working with limited hardware.

If you're someone who finds RAM and CPU to be very cheap today, this post isn't for you

Let's take a tiny JS snippet and set a little context.

Here we have a very common bit of logic: transform, then filter based on a predicate.

;[1, 2, 3].map(x => x * 2).filter(x => x > 2) // [4,6]

It looks harmless, but let's see a few of the operations the runtime will be doing:

  • 1 function reference for .map
  • 1 function reference for .filter
  • 1 new intermediate array allocated by each .map and .filter call
  • 2 full iterations of the array

And all of that holds memory. It looks fine for 3 items in an array, but what if it was 100 items? Let's add a little more to that: what if this was running on a server and you end up running this transform and filter for every user's request?

You're now holding a substantial amount of memory for what looked harmless. It might still be fast depending on what the items inside the array are, but the memory still gets used, so that's that.
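As an aside (not something I covered above): generators give you a middle ground where you keep the functional shape but process one item at a time, so no intermediate array is ever materialised. A minimal sketch:

```javascript
// Generator-based map/filter: lazy, single pass, no intermediate arrays.
function* lazyMap(iter, fn) {
  for (const x of iter) yield fn(x)
}

function* lazyFilter(iter, pred) {
  for (const x of iter) if (pred(x)) yield x
}

// Only the final result array is allocated.
const result = [...lazyFilter(lazyMap([1, 2, 3], x => x * 2), x => x > 2)]
console.log(result) // [ 4, 6 ]
```

The trade-off is that each yield is still a function-call boundary, so this helps memory more than it helps raw speed.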

Now if I had to write the same thing in raw JS it'd look like so

const arr = [1, 2, 3]
const result = []
for (let i = 0; i < arr.length; i++) {
  const mul = arr[i] * 2
  if (mul <= 2) continue // match the filter: keep only mul > 2
  result.push(mul)
}

Memory is now allocated for

  • 2 arrays (input and result)
  • 1 index variable
  • 1 mul variable every iteration
  • 1 full pass over the array

So the advantage the JS engine got here is that it didn't have to hold an intermediate copy of the array in memory and didn't have to run through the array twice.

The 2nd code block might give you a slight perf boost as well, depending on the size of the array.
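If you want to verify the difference yourself, a rough micro-benchmark sketch looks like this. The numbers vary wildly by engine, array size, and JIT warm-up, so treat it as a starting point, not proof:

```javascript
// Compare the functional chain vs the raw loop on a larger array.
const data = Array.from({ length: 1_000_000 }, (_, i) => i)

let t = performance.now()
const functional = data.map(x => x * 2).filter(x => x > 2)
console.log('functional:', (performance.now() - t).toFixed(2), 'ms')

t = performance.now()
const procedural = []
for (let i = 0; i < data.length; i++) {
  const mul = data[i] * 2
  if (mul <= 2) continue // same predicate as the filter
  procedural.push(mul)
}
console.log('procedural:', (performance.now() - t).toFixed(2), 'ms')
```

For serious measurements you'd want a proper harness that runs many iterations and discards warm-up runs.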

I never told you why all of this matters, why I'm talking about it, or what building a language that compiles to JS has to do with any of it.

  • Functional JS is slower than procedural JS
  • It holds up more memory than procedural JS

Now, before people come running at me with pitchforks: I'm not saying you should quit writing functional-style JS, I'm just stating a fact. I've personally been a fan of functional programming for a very long time, I still write most of my initial solutions in functional JS, and I have a lot of helpers for the same.

But that doesn't change the fact that I end up rewriting areas of it to either speed things up or use less memory. There's a whole bunch of optimizations you can do to make things faster, and a lot of them come down to reducing the amount of work the engine has to do: unneeded function references, heavy recursive functions, object clones, and so on.
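A tiny example of what that rewriting usually looks like in practice (the tax helper below is hypothetical, just for illustration): hoist the callback so a fresh closure isn't allocated on every call, and skip the per-item object clone when you own the data.

```javascript
// Allocates a new closure and a cloned object for every item, on every call.
const withTaxCloned = items =>
  items.map(item => ({ ...item, tax: item.price * 0.18 }))

// Hoisted callback (allocated once) that writes into the item instead of
// cloning it. Faster and lighter, but it mutates the input, so only do this
// when you own the array.
const addTax = item => {
  item.tax = item.price * 0.18
  return item
}
const withTaxInPlace = items => items.map(addTax)
```

Same output shape, but the second version trades immutability for fewer allocations, which is exactly the kind of rewrite I end up doing in hot paths.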

Let me add in a few posts that dive deeper into these concepts

Moving back to the original reason for this post.

I like the idea of building meta languages that compile to JS or to a faster functional backend, and a lot of people have tried to make it work really well, but at the end of the day you add a lot of additional abstraction that might not be apt if you are compiling to JS.

The only exception is probably Nim, since it uses pointer math instead of human-readable JS, but it also doesn't support Node.js properly, so that's that.

One language to rule them all

Here are a few languages built on the concept of "one language for everything", specifically the ones that I've worked with enough:

  • ReasonML / ReScript
  • Gleam

In both of them, the compiler comes with backends that let you compile to a native binary or to Javascript.

The reasoning is that you can write your server and client in the same language and compile to different backends to take advantage of each platform's strengths.

The problem?

A boatload of javascript polyfills.

It is no surprise that Javascript's functional helpers are limited, and the language has a lot of in-place modifications that need to be worked around (ex: Date). So the functional language has to implement its own set of helpers around the knick-knacks of Javascript to keep things immutable where they are supposed to be immutable, and a little of the type information is lost when writing interop code for Javascript.

This is necessary and not something that can be avoided, unless you are making something like CoffeeScript, which is just an alternative syntax and doesn't introduce functionality that JS doesn't already support.
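Date is a good concrete case of why those helpers exist: its setters mutate in place, so every reference sees the change. The withUTCHours wrapper below is a hypothetical sketch of the kind of immutable helper a functional language's stdlib has to ship:

```javascript
// Date setters mutate the instance, so every reference sees the change.
const d = new Date('2024-08-16T10:00:00Z')
const sameRef = d
d.setUTCHours(0)
console.log(sameRef.getUTCHours()) // 0: the "other" reference changed too

// An immutable-style wrapper: copy first, then modify the copy.
const withUTCHours = (date, h) => {
  const copy = new Date(date.getTime())
  copy.setUTCHours(h)
  return copy
}
```

Every one of those copies is an allocation the raw JS version wouldn't have made, which is the polyfill tax the post is talking about.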

An example might help visualize the additions made by these functional languages. We'll take the original example with just the .map function.

;[1, 2, 3].map(x => x * 2)

The ReasonML version of this will look like so

List.map(x=>x*2, [1,2,3])

Not that different, is it? Now let's look at the compiled output of the above

var List = require('./stdlib/list.js')

List.map(
  function (x) {
    return x << 1
  },
  /* :: */ [1, /* :: */ [2, /* :: */ [3, /* [] */ 0]]]
)

A simple transform requires the List module and its methods from the stdlib to be part of the final distributable package, which may or may not be desirable depending on the size of the business logic. If you want to use JS's native arrays instead, you'll have to write a slightly longer version. Notice that the order of the data and the function has changed; Reason handles this internally, and the |> operator automatically decides where the data goes, but you have to keep in mind that all ReasonML functions take data last, while JS native types might not.
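To make the data-last point concrete in plain JS, here's roughly what |> buys you, sketched with a hypothetical pipe helper and data-last wrappers (these names are mine, not part of any stdlib):

```javascript
// pipe threads a value through a list of functions, left to right.
const pipe = (x, ...fns) => fns.reduce((acc, fn) => fn(acc), x)

// Data-last, curried wrappers in the ReasonML spirit.
const map = fn => xs => xs.map(fn)
const filter = pred => xs => xs.filter(pred)

const out = pipe(
  [1, 2, 3],
  map(x => x * 2),
  filter(x => x > 2)
)
console.log(out) // [ 4, 6 ]
```

Because the data argument always comes last, the pipeline never has to say where the data goes; that's the convention |> relies on.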

Js.Array2.map([|1,2,3|],(x)=>x*2)

The OCaml output would now need the implementation details of the above function from Melange, but that's okay; we can talk about that in a different post if needed.

So then...

When do we use languages like these? It depends.

The conditions might be obvious to some but I'll still point them out for the sake of it.

  1. Don't use them for single-sided development.
    • If you're using reason-react just for the client side, just write JS. It's going to be easier to find devs and optimize code; add in TS if the team is going to be larger, to avoid the most obvious mistakes.
    • If you're just using it to have a friendlier syntax for OCaml, then please go ahead.
    • The same applies to Gleam, ReScript, or any other faux Javascript language.
  2. If working in a team, please talk to the team before introducing a super huge learning curve. Real functional programming is quite different and might not be easy for everyone.
  3. You can use them for experiments, tiny libraries, and computation, since a lot of what's defined in functional languages are simple expressions, and so static values can be simplified by the compiler. Below is an example of Reason doing that.
let add = (x,y) => x+y

Js.Console.log(add(1,2))
Js.Console.log(add(2,3))
// output
function add(x, y) {
  return (x + y) | 0
}

console.log(3)

console.log(5)

exports.add = add
  4. Consider where you are running this. If it's a low-powered, low-memory device, you'll have better luck writing in a language that doesn't do GC in the first place (C, Rust, Zig, etc.), but if you have to use Javascript then go ahead and use it, and make sure you profile it for memory I/O issues first and then for execution time.

At the end of the day, use whatever the fuck you want if it's fun. I wouldn't have been able to write all this if I'd never tried them out, so use them and figure out if you like the experience. But know that if you work with tight optimization constraints, the path of compile-to-JS functional languages might not be as good an idea as people make it seem.

That's all for now, Adios

]]>
Mon, 16 Aug 2024 00:00:00 +0000
https://reaper.is/writing/20240818-re-on-hobbies-side-projects-and-money.html Re: On hobbies, side projects, and money https://reaper.is/writing/20240818-re-on-hobbies-side-projects-and-money.html Re: On hobbies, side projects, and money

A recent post from Manuel Moreale (On hobbies, side projects, and money) made me realise that the boundary between side projects and hobbies in my case is very thin.

While a lot of what I write comes with the intention of "I'm building it for myself and it might help someone else out", the result of most of what I build never reaches the masses, or isn't convincing enough for anyone to use.

This leads to a pro and a con

  • I'm a developer who dabbles in everything
  • I'm just another developer who dabbles in everything.

It's also been brought to my attention that

  1. Most people don't understand what I'm building when I'm working on random experimental projects
  2. The ones they do understand are already built in a much better way by someone else.

I could justify both points but that is not what the post is about and neither is it something that needs justification because it was built for me to begin with.

But, and it's a big BUT: the things I do build for others (a.k.a. dumbjs) also end up not getting much attention, for the same reason, and that does sometimes make it feel like the effort wasn't worth it (and then I find something else that's fun to build and the cycle continues).

The post and a tiny discussion with Manuel made it clear that at some point I'll have to talk about money.

It's not hard to talk about money when I'm donating it to others but when it comes to asking for it, I seem to freeze up.

Now that there's a bit of context on what this is and where it's coming from, I'd like to let people know about Manuel Moreale's One a Month movement for his People and Blogs newsletter series, and also that my Github Sponsors page doesn't have a minimum-amount tier. It's been that way forever, and I would like to believe that people simply aren't aware of it and that's why there aren't many sponsors (to keep my sanity).

The second part of this is for OSS devs and the users of their products/libraries/etc. If you are a user and you think I'm not worth the $1 sponsorship, you obviously have free will and can choose not to sponsor, but I'd like you to at least sponsor someone in the OSS circle whose work you depend on. If you are an OSS dev reading this, unless you are someone who's living off of the sponsorships, please do lower the barrier of entry for donations so more people can contribute financially.

End of the day, whether you contribute with code/money it helps the authors realise that what they are doing is actually helping someone. It's understandable that not everyone can help financially but I agree with Manuel about $1 being low enough to set and forget.

That's mostly it for today,
Adios!

]]>
Mon, 18 Aug 2024 00:00:00 +0000
https://reaper.is/writing/20240827-one-a-month.html One a Month https://reaper.is/writing/20240827-one-a-month.html One a Month

This is a simple addition to the previous post Re: On hobbies, side projects, and money.

The Ko-Fi page for barelyreaper also has 1 a month enabled so if you are someone who doesn't have a Github account and would like to support using something else then you can use the Ko-Fi page.

Supporters

You can find more details about this and the list of supporters on the Supporters Page

]]>
Mon, 27 Aug 2024 00:00:00 +0000
https://reaper.is/writing/20240908-managing-uptime.html Managing Site Uptime and dealing with the state of urgency https://reaper.is/writing/20240908-managing-uptime.html Managing Site Uptime and dealing with the state of urgency

I've had my fair share of downtime with the side projects that I build, and since there aren't a lot of users the downtime mostly goes unnoticed unless I need the app for something.

Recently, goblin.run started gaining some traction and it's become important to have the site running all the time. The number of users is unknown because there's no analytics on the site, but the assumption that there are a few users is based on the fact that the repository has started gaining attention. It's also listed on air's (a live-reload tool for Go) README, and I think that's the reason for the rise.

Keeping the site up isn't that hard, but there's always shit that goes south very quickly. Things change, the environment running the app changes, etc. An easier way is to have some kind of notification that informs you that something's not working, and that's basically what uptime monitoring systems are for.
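The core of such a system is tiny. This is not how Ping is actually implemented, just a minimal sketch of the idea, with notify standing in for whatever alerting channel (email, webhook) you'd use:

```javascript
// Poll a URL and call notify when it looks down (hypothetical sketch).
const check = async (url, notify) => {
  try {
    const res = await fetch(url, { method: 'HEAD' })
    if (!res.ok) await notify(`${url} returned ${res.status}`)
  } catch (err) {
    await notify(`${url} is unreachable: ${err.message}`)
  }
}

// Run it on an interval, e.g. once a minute:
// setInterval(() => check('https://example.com', console.error), 60_000)
```

Everything a hosted product adds on top of this (retries, escalation, status pages, multi-region probes) is where the monthly pricing comes from.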

I started looking for alternatives and found a few, ranging from $7/month to $24/month for the most basic features, with additional and advanced features on top of that. These are all business-oriented apps, so it made sense to price them around that range, but I wasn't sure I wanted to pay that much for 1 site.

A few of them also offered a ton of features for free but I didn't really like the whole "GIVE ME YOUR CARD!" approach for a "free" signup and so we moved onto checking the self-hosted options.

Self-hosted + OSS had much better options that were super feature-packed, but I found a few written in PHP and one in Haskell that I really liked, except those would make it hard for me to fix the uptime tool itself, and I'd like to avoid that considering I'm already short on time nowadays.

After evaluating a few, I decided to write something super tiny and functional to satisfy the need for goblin, and created a tiny service called Ping.

It's in beta right now and you don't need to sign up to use it. There are limits on how many sites you can register, but that's because I'd like to avoid getting DoS'd as soon as it goes up. There's no catch to it being free; I would happily open the whole thing up to a 100-200 site limit, but servers aren't free unless DigitalOcean decides to sponsor the project (which I don't think is happening any time soon).

The other part is that the project is part of the $1 a month movement, so you are free to help increase the overall limit. If the project gets some traction I might add a more robust set of uptime monitoring and handling, though I'm not sure how I'd find out whether it's getting traction, because just like every other project I've made, this one doesn't have analytics either (I'm both lazy and ethical at the same time... I suck at business development though).

Though it did lead me to question the whole thing: what if the site is down for a few hours? What if goblin.run doesn't build a certain binary for your system because the Go version used by the CLI itself is too new and I haven't verified that the build system builds successfully at that version? Do I rush to get it back up, or do I do it properly after figuring out the issue that caused the crash in the first place?

Most people would want me to rush and just get it back up if there's a constant crash, though that shouldn't happen unless something big has changed, in which case I wouldn't have deployed it in the first place. As a solo dev, it's idiotic to throw changes into production without checking them locally or running a simulator to verify things work. The state of urgency shouldn't exist, but in case it does, I need to be able to deploy and fix things as fast as I can, which in turn creates a gap in the development process: you might push something you aren't very sure of just to get rid of the crash or system failure.

Something worse could also happen: the server hosting it might go down, and I'd have to set the whole thing up again from scratch. These thoughts led to another set of issues around simplifying the process of deploying an app from the ground up on a new server, or even shipping hot patches. I'm still figuring out ways to make it simple and fast without relying too much on the existing set of standards, because they are slow. I'll get into the alternatives and findings in a follow-up post soon.

For now, Adios!

]]>
Mon, 08 Sep 2024 00:00:00 +0000
https://reaper.is/writing/20240918-announcing-ping.html Announcing Ping https://reaper.is/writing/20240918-announcing-ping.html Announcing Ping

Announcing Ping, a simple no sign up solution for uptime notifications.

It's a micro app I built for my own use case but it's here for everyone to use. If you wish to understand why I built it you can read the previous post.

I don't have a 100-word short essay to describe what it does and doesn't do. It's a very tiny service and pretty self-explanatory.

The features were built to be non-intrusive and less irritating than existing implementations; let's just call it minimal.

Enough said, just go and

Check it out

]]>
Mon, 18 Sep 2024 00:00:00 +0000
https://reaper.is/writing/20240925-things-i-have-tried.html Things I've tried https://reaper.is/writing/20240925-things-i-have-tried.html Things I've tried

I often talk about things going wrong but let's talk about the things I've tried, no failure, no success, just things I found cool and tried to do.

Note: Everything here is still in the realm of programming

I don't know if I should do this in terms of a timeline or alphabetically by project so I'll just go with timeline.

I don't plan to bore you by explaining how a to-do app I built right out of college got me onto the path of success; this is more a timeline of the domains and things in development that got me into it.

2013

Mid 2013, right before I started college (~17 years old), I witnessed one of my classmates playing around with the HTC Forest, and he was talking about this custom ROM that he'd installed on it. Found that cool, got home, took my Galaxy Y, and got to work for the next few weeks till I built my own custom ROM somewhere at the end of the month.

  • DS V1.0- Well SSSidGGG => Siddharth Gelera - Very creative, I know.

It was weeks of hacking through Java code, reverse engineering the Android system to fix and tweak things for the ROM. The ROM itself was pretty stable, but I didn't really give it any more time of day and moved on to other things in life.

2014

Still interested in programming and development, but I wasn't sure what I wanted to do (still don't really know, though...). Someone introduced me to Arch Linux; I was someone who just preferred Ubuntu, specifically because of how smooth Gnome 2 was. I had a pretty beat-up PC, so performance mattered, and learning about Arch made a small part of my dream come true: making my own OS. Now, the "own OS" part is pretty far-fetched for someone who wasn't as smart as the other kids, but it was a fun dream to have. I'm pretty sure most developers wish to do it at some point.

And well, I was able to make that small part come true, as I built a POC distribution of what I wished Arch was. There were a few patches here and there to make sure the older kernel ran with the shitty WiFi adapter I had, but that was 2014.

  • DevilzArch, don't ask me why I named it that; I just had a weird liking toward names like devil, ghost, etc. I still have my phone named Ghost Mini...

2015

3rd year of college; I didn't do much but kept hopping between different Linux distributions and playing around with Python to build a few tiny backend apps. The primary one was a clone of Facebook, because it was famous back then and I didn't understand how the PubSub messaging server it used worked.

So I built a tiny chat server replicating something very similar to Facebook's Messenger at the time. It was just the backend; I sucked at design at this point.

2016-2017

Most of my time went into projects and submissions, so I didn't get to build or pick up anything fancy, though this was when I built something similar to what you might call a GUI for brew. Except it wasn't for brew.

I was still into Linux, but package management for most source-built apps was shit, and brew didn't work for Linux I guess, or I didn't know about Homebrew back then; either way, the package manager approach didn't work for a lot of software that was out in the open.

The idea was very similar to Homebrew, where I'd maintain a manifest of how something should be built and where it needs to be installed, all scripted in Python. It had a GUI that showed progress, similar to what you'd see in the Mac App Store around 2016.

I pretty much built the POC and a decent-looking GUI before a fellow Linux enthusiast asked me to switch the whole thing to Electron because it was cooler. So, I did. First time writing Javascript and I start with Electron...

Luckily, I was able to get it all working with the existing Python backend to install the packages, and it's also what I submitted as my college project for the 3rd year.

2017-2018

Getting into Javascript opened the door to full-fledged web development and the start of me torturing myself for the next 7 years. I still submitted a version of the Arch Linux distro as my project for the 4th and final year at college, but that was the last time I played around trying to fix Linux internals on different hardware. Huge respect for the folks that maintain the kernel; I almost broke my brain trying to figure everything out.

2017 is also the year I graduated and got a job working for a firm doing casual web dev work. The usual fighting with Angular 1.6 till you do it the "non-Angular way" and push it to production. React was gaining a lot of popularity in India during this time, and jobs started shifting from the MEAN stack to the MERN stack.

For the react lovers, you have 0 idea how much boilerplate code the initial versions of redux and redux-saga took.

Fortunately or unfortunately, I stuck to web development till 2020. I found other things that were cooler, but I didn't really get time away from work, since I worked at startups as the founding / core engineer and hardly gave any importance to sleep.

2020-2021

The "I can build everything" phase started around here. I had lost my marbles completely around this time. I built random websites, decided that everyone else was just stupid and didn't know how to build products and that any product that I found online, I could build it alone.

This gave life to a few products listed below

  • An invoicing system - Invy (never went live because I forgot about it after spending 2 weeks building it and decided the MVP was all that was needed)
  • An idea tracker - this did go live with heroku's free plan and I used it for a while till I figured I was better off just writing things in Apple Notes.
  • An open source job listing system - I should probably bring it back but this was one of the few that I kept live for a while and just posted random jobs from everywhere to see if I would get users. Hard luck there because I didn't really post about it anywhere other than maybe dev.to and my blog.
  • A time tracker - TillWhen, the 4th product during my craziness spree. This did have users, since Toggl was super costly, and the primary 20 users were from the company I worked for, Fountane. It slowly got a few more users, but I wasn't able to keep development up with the breaking changes from NextJS, and I didn't really use the time-tracking app myself; it was just something I built in the craze of building stuff. We shut it down after ~2.5 years of keeping it up, with around 180 users, of which I guess ~100 were active.

2021-2024

In this duration, I built a POC for a programming language, a few macOS apps, a few Go-based CLI tools (including the one that builds this website), some Go-based build tooling (Goblin.run), tried to port react-native's raw SDK to support rendering with Preact, helped maintain a few OSS libraries that involve working with the Javascript AST, and wrote libraries that manipulate ASTs to generate web components that become interactive on load (Preact islands tooling).

Simple stuff.

On the whole, I've tried systems programming, web development, mobile development, native desktop app development, developer tooling and a few more things that I might've forgotten about and I'm yet to find the one thing that I enjoy.

So, just in case you think you're lagging behind and aren't able to find the one thing you'd like to do for the rest of your life: trust me, you aren't the only one, and it's totally fine.

That's it for now, Adios!

]]>
Mon, 25 Sep 2024 00:00:00 +0000
https://reaper.is/writing/20241015-reminisce.html Reminisce https://reaper.is/writing/20241015-reminisce.html Reminisce

Often think of things I've made.
Things that made me feel cool.
Things that made it feel fun.
Life now keeps me busy and steals the fun away.

]]>
Mon, 15 Oct 2024 00:00:00 +0000
https://reaper.is/writing/20250101-2024-analysed.html 2024 Analysed https://reaper.is/writing/20250101-2024-analysed.html 2024 Analysed

Another 365 days done. I did a few things, failed at a few other things, am figuring out some things, and am happy about some things.

The biggest change this year was probably my day job; I moved from Fountane to NearForm, which changed a few things. I now have more time during the day to work on OSS, and it's a little less hectic due to the time discipline imposed by the company.

I got married, so I need to dedicate some time there and that's been going well, which is nice.

In terms of development and OSS, I launched Ping and kept goblin.run running for the folks that like and use it. Hopefully I'm able to keep it running forever.

Made an attempt to join the $1 a month club to see if I was making things of financial value; it hasn't made any progress, so probably not.

Got back into writing random things like resume-lang, to figure out why such things don't already exist, and understood the larger problem writing something like this brings. A.k.a., my experiments-first brain still works, and that's really good.

Finally realised that I've made too many things and it's impossible for me to manage everything; it would be easier to sunset some of them or look for contributors for a few.

Sat down and wrote esbuild plugins to create NextJS-like processing for my simple use cases, like in cri, which makes it impossible to deploy on Vercel; but I should be able to move it to one of my tiny servers soon, since the site doesn't have a lot of visitors.

That's mostly everything that I've done / changed / handled this year, and I hope to do more and do better.

That's all for now, Adios!

]]>
Mon, 01 Jan 2025 00:00:00 +0000
https://reaper.is/writing/20250204-low-traction-products.html Low traction products https://reaper.is/writing/20250204-low-traction-products.html Low traction products

I wanted to write a rant, but I guess we are going to be wholesome for this one.

Firstly, I'm grateful for all the users that decided to try out https://ping.tinytown.studio and also to all the devs that use goblin.run.

The side effect of building things for your niche use cases is that, in most cases, no one other than you might use it. I used to build with that mentality when I first started, and that has changed a tiny bit since then. I'm at a stage in life where I really wish I could leave the financial part of my life on automatic, but I'm not someone with enough knowledge of what people would like to use.

Fortunately, because I've grown as a person, I now don't jump into code every second of the day to build something that I might not end up using, and instead, things have become more intentional. I now build because I see a use case and I see myself using that tool/app at least once a week. There you go, there's at least one user for each product now.

Another side effect is that my own standard of privacy ends up reflecting in the way I plan how the app works, which also doesn't scale well in terms of business.

You can't really run email campaigns with your users if you don't know who they are...

I mean, I can go to resend.com (the transactional email service I use) and check who's signed up for Ping, but that's not something I do.

I've built a decent simulator to make sure the app works, the scope of the app is small enough for me to do that, and I know that if emails are failing to deliver, it's not the app but resend.com, and so I don't have to go check it at all.

The low traction that these kinds of products get makes it easier to keep them running off my own pockets, and while it would be nice if people could support it if it was helping them in any way, it's not mandatory and that makes it less sustainable.

The conflict is that I need to set up multiple streams of passive income, and I suck at asking for money when I build stuff.

I'll try in the next few months to make something that I can muster up the courage to ask money for and see if it works. Don't worry, Ping is both free and complete and that won't be changing.

]]>
Mon, 04 Feb 2025 00:00:00 +0000
https://reaper.is/writing/20250304-state-of-preact.html State of Preact 2025 (Q1) by reaper https://reaper.is/writing/20250304-state-of-preact.html State of Preact 2025 (Q1) by reaper

While there isn't an official "State of Preact", here are updates since the last time I mentioned working on the ecosystem.

The Preact ecosystem continues to thrive, largely due to the effectiveness of its React compatibility layer. While there's not much to discuss in that aspect, let's look at a few new tools that were built.

preachjs/popper

A helper library that enables building component primitives for Preact without adding bulk to the overall bundle. It's low-level, tiny, and gets the job done.

barelyhuman/adex

A really tiny plugin that integrates backend, frontend, SSR, and islands for Preact to be used in any Node-compatible environment. We've made significant progress since the original post, and I've explored various rendering and tooling options. This remains the only integrated package that I'm maintaining with incremental updates. While I hope to add features matching other frameworks, my current focus is maintaining stability.

Initial NextJS Migration Scripts

I successfully migrated an old NextJS project to a Preact-based SSR while preserving the NextJS API. This is just the beginning; I plan to enhance the build script to facilitate easier migration for users still using the pages style router and old non-edge API functions.

While the NextJS ecosystem has moved from pages to app router, I'm constrained by time and resources to provide comprehensive migration solutions.

barelyhuman/uma

This is a simple starter template combining production-grade tools for Preact SSR apps. It's nearly complete, with a few pending additions to enhance its utility as a SaaS starter. While not strictly part of the ecosystem, it emerged from existing efforts in developing Node.js and Fastify-based side projects.

Future

My goal is to provide sufficient primitives in Preact for creating beautiful UIs without reimplementing browser-provided components. I collaborate with the Preact team when possible, but given our small team size, we welcome community support through code contributions or financial assistance.

That's all for now, adios!

]]>
Mon, 04 Mar 2025 00:00:00 +0000
https://reaper.is/writing/20250416-thinkpad.html Thinkpad https://reaper.is/writing/20250416-thinkpad.html Thinkpad

I should add tags to better organise things on this blog, but for now, let's get to the topic at hand.

People have seen me advocate minimalism in design on this site, though that's not the only minimalist approach I follow.

Out of habit, or due to the constant nagging from my parents, I'm also someone who believes in spending only as much as is actually needed; in short, frugal living. This has led me to buy things that are just good enough to serve their purpose. A good example of this is that I've been using a MacBook Air M1 (8GB) since it was released. That was the entry-level MacBook, and the goal was to have a portable macOS system rather than a powerhouse that would then have its internals consumed by Docker.

Now, with that boast out of the way, I recently switched to using a ThinkPad. I just happened to see a really good deal on a ThinkPad T14 1st Gen (approximately 6 years old), which made a lot more sense than spending on a new MacBook—especially since the one I own can no longer handle the basic tools, because every other developer building tools thinks that memory is very cheap!

Yeah... no. Apple charges almost $200–$300 for a higher-memory device, and I don't think that's cheap; I'd rather spend that money sponsoring OSS projects.

Memory is cheap for cloud systems, so running your apps on the cloud is definitely less expensive than it used to be. This is not really true for consumer-grade laptops, even those that are not from Apple (looking at you, HP), and most new laptops aren't very upgradable either (Apple normalised this, and I'm not very happy about that either).

Usage and Issues

Circling back to the ThinkPad—a great overall device to have—it cost me approximately $250 / INR 22,000 to get a used one, and the condition of the laptop itself was excellent; everything, including the touchscreen, worked great. The only issue was that the touchpad did not work in Linux, specifically in Linux Mint.

Now, the problem wasn't with the touchpad itself but rather with the generic drivers. The touchpad was very responsive on Windows but experienced a very abrupt lag on the Linux system. I tried different friction and sensitivity settings, and it somewhat worked, but it was still not smooth.

The final solution was to just use the ThinkPad's touchscreen or use it in clamshell mode while it's docked on the table.

Migrating to Linux

Even though I've been using GNU/Linux (before someone complains about not prefixing it with GNU) forever—on servers and my Raspberry Pi—most of that experience was gained with a keyboard-only workflow, using just the terminal to handle everything and using GNOME for a few years. The single TTY obviously doesn't work when you are a web developer, though.

My new workflow has been to run everything from a terminal emulator and keep a few browsers running on an i3 setup. I tried Sway (Wayland), but the display started glitching, so I switched back to i3 (X11) until I could debug the cause of the glitches.

Most development work migrated pretty quickly because it was already written with Linux in mind.

It did take me longer to readapt to the Ctrl-first keyboard shortcuts, which I had lost the habit of. Before someone suggests Kinto, note that it doesn't work well with my Bluetooth keyboard (NuPhy Air); the profiles need to be switched continuously, and it doesn't even recognise the keyboard at times, making it pretty unreliable for my particular device.

Missing Apps

Nothing specific is missing, since I didn't really use Mac-specific apps to begin with. Apple Music might be a candidate, but then Spotify works, and my Spotify library syncs with my Apple library on a daily basis.

Vim (Neovim), Sublime, and Alacritty transitioned over fine. I had to configure Alacritty, but that's okay since I always used Mvllow's old dotfiles anyway.

Perf

It's noticeably slower than the M1, but the 16GB of RAM does allow me to run a few more Docker containers to work on more things in parallel (which isn't ideal when you get distracted as much as I do).

I was able to solve a few SQLite build issues I had with older apps that were written with an older version of better-sqlite3, since it was easier to debug with multiple terminals than by opening multiple tmux SSH sessions on the deployment machine.

Apps are slower to launch, which is something I'm not sure how to debug or fix yet.

Overall, I guess getting back into Linux and i3 was long overdue, since I stopped working on macOS-specific apps a while back, and I could save myself the $1200 upgrade price.

I do have a MacBook Pro from my job, which is a lot more powerful than anything I could ever need, but it isn't mine and will be returned once I change my job, so I will have to start getting more comfortable with my Linux machine.

That's mostly it; I don't have much more to say.

Adios!

]]>
Mon, 16 Apr 2025 00:00:00 +0000
https://reaper.is/writing/20250507-migrating-from-cloud-photos.html Migrating from Cloud storage solutions https://reaper.is/writing/20250507-migrating-from-cloud-photos.html Migrating from Cloud storage solutions

A few weeks back, I started getting the 99% storage notification from Google Photos, which reminded me that I had a PiGallery server running on my Raspberry Pi that I hadn't used in a while.

It got me thinking about dropping the dependency on Google Photos as a backup, considering that a small change in how countries function could lead to me losing all my photos overnight.

I knew I was going to self-host, but the Raspberry Pi I had was underpowered, and running software similar to Google Photos, with face tagging and image processing, would overheat it and probably even crash it.

There were also things that I had to consider, which led to the PiGallery setup getting left out:

  • Convenience of usage: The ease of using services like Google Photos and iCloud Photos backups.
  • Sync scripts: These need maintenance, and any API changes end up adding more friction to the process.
  • Migration from existing services: This is hard, and quite a few people just drop the idea because there are very few blogs and posts online that show you the options to migrate.

The post might be slightly longer than my usual stuff, so bear with me.

Requirements

  • Simple Image and Video Hosting
  • Ability to sync from phone to the server
  • Bonus points if it supports Facial and Location tagging and the ability to search using the same

Solutions

  • OneDrive / Box / Dropbox:

    • Syncing gets limited for mobile users since iOS requires manually moving everything to the cloud folder in Files.
    • It can easily get pricey and would just make Google Photos both easier and cheaper to use.
  • Self-hosted gallery:

    • You could self-host a few tools that take care of syncing and visualizing, though the setup itself is slightly time-consuming since you have to get a few tools to work together.
  • A tool that does it all:

    • There are a few of these that handle everything from the mobile app to syncing and facial tagging for you, and I ended up using one called Immich.

Setup

The setup itself is pretty simple if you have already worked with Docker and Docker Compose. If not, you should probably try setting that up locally before setting it up on a server.

As for me, I used my ThinkPad as the server since I wasn't going to expose the whole thing to any network outside my house, so a local broadcast of the service was all I needed to do.

Here are the things that were done:

  • Make sure the Linux system is connected to the network with a static IP.
  • Configure Docker and Docker Compose properly so Linux can start them up after a restart.
  • Configure an auto-login user, so if the battery gives up during a power outage, it's easy to just press the power button and leave it to boot everything.

Next up:

  • We run the Docker Compose file for Immich as its documentation explains, with close to no changes. I've just changed the port and database password.

Wait for it to start running, and then access the web version to set up the admin account.

Migration

Google Photos

This took two attempts. The first involved me writing my own script to download every photo on Google Photos using Playwright, which worked but was kind of time-consuming, and verifying that everything got downloaded was a lot of work. So, I would suggest you skip this and just use Google Takeout instead.

The problem with Google Takeout is that each photo's metadata, specifically the date and location, gets split out into sidecar JSON files and hence needs to be re-added using an EXIF modifier. This makes migration to other apps a little more time-consuming, but Immich has a supporting CLI tool called immich-go that uses the same API key model as Immich to upload Takeout zips directly to the server while fixing the images as they come through. This makes the migration super easy. You might need to keep the laptop active while this is going on.
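immich-go does this fixing for you, but for the curious, here's roughly where the date hides in a Takeout sidecar. The sample object below is made up; the field names reflect what typical Takeout exports look like:

```javascript
// A made-up sample of the sidecar JSON Google Takeout writes next to
// each photo; the capture time lives here (Unix seconds) rather than
// in the image's EXIF, which is why an EXIF fix-up step is needed.
const sidecar = { photoTakenTime: { timestamp: "1672531200" } };
const takenAt = new Date(Number(sidecar.photoTakenTime.timestamp) * 1000);
console.log(takenAt.toISOString()); // prints: 2023-01-01T00:00:00.000Z
```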

Steps

  • Create an API key for your account from the account settings on the Immich web portal
  • Use this key with the immich-go CLI commands
  • Make sure you use the right command to import from Google Takeout zips
$ immich-go upload from-google-photos --server=http://your-ip:2283 --api-key=your-api-key /path/to/your/takeout-*.zip

iCloud Photos

This was easier since all you need to do is set up the Immich app on your mobile device, connect it to the LAN IP of the server (in my case, the ThinkPad), and select the folders you wish to back up and click on Start Backup. You can turn on auto backup and background backup jobs in the settings as well.

You might want to change iCloud Photos settings on your device to Download and Keep Originals to ensure the high-quality images are uploaded to your Immich server.

In case you don't have a phone with iCloud Photos and want to do it from your MacBook, you can modify the same preference from the Photos app preferences and then select all images and export them to a folder on your laptop. Then you can use the immich-go CLI mentioned in the Google Photos section to upload the photos to the Immich server.

Cleanup

The iCloud cleanup is pretty easy. You keep the originals and then turn off iCloud Photos for your device, which will delete all the photos from your iCloud account. In my case, I wanted to reduce the used storage for iCloud but not completely delete everything, so I just got rid of all the photos older than one year. For Google Photos, however, I deleted everything, so now my mail started syncing properly again, and I have enough space for new immediate backups if I need them.

iCloud keeps the images in shared albums intact, but if you use a lot of shared albums, make sure you delete images carefully, or use filters on macOS to delete only images that are not in any album.

]]>
Mon, 07 May 2025 00:00:00 +0000
https://reaper.is/writing/20250630-introducing-store.html Store https://reaper.is/writing/20250630-introducing-store.html Store

I've been building and sharing random tools, scripts, and experiments for a while now. Some of them get used, some just sit in a repo collecting digital dust.

To add to that digital dust, introducing the Store.

No, it’s not a Shopify clone or some drop-shipping hustle. It’s just a simple page where you’ll find things I’ve built—apps, scripts, maybe a zine or two—some free, some paid, all made with the same “let’s see if this works” energy. We're starting off with some HTML templates to keep it simple.

Why a store?

  • Makes it easier for people to find stuff without digging through old blog posts or GitHub gists.
  • Lets me experiment with pricing, bundles, and maybe even the occasional discount code (because why not).
  • Gives me a way to see if any of this is actually useful enough for someone to pay for.

If you’ve ever used something I made and thought, “Hey, this saved me some time,” or just want to support more experiments, check it out. If not, that’s cool too. Most of the good stuff will still be open source or free to use.

If you don't want to buy something and just want to support, you can always go to http://reaper.is/supporters

That’s all for now. Go poke around: https://reaper.is/store

Adios!

]]>
Mon, 30 Jun 2025 00:00:00 +0000
https://reaper.is/writing/20250710-another-one-about-build-setups-part-i.html Another one about build setups Part - I https://reaper.is/writing/20250710-another-one-about-build-setups-part-i.html Another one about build setups Part - I

Hey, the name's reaper. I work on various experimental solutions for working with web apps, ASTs, and generally play around with languages. Mentioning all this because I'm not some random dude who started working on the web yesterday and is teaching you things today.

Moving on, I've advocated the use of custom setups and simple build tooling quite a bit on my twitter/X profile, and so this post is also in the same direction—except I plan to show you how simple it actually is.

This is a 2 part long-ass post, so you might want to start this when you have a clear mind and can follow along. You don't need to, but it'd be nice.

TOC

Meta Frameworks

I'm sure you're aware of what meta frameworks are and when to use them. We aren't here to call them bad or call my approach good—it's a simple post going through the process of simplifying what the meta frameworks abstract.

Features

What do meta frameworks solve that we need to replicate?

  • UI Composition
  • SSR (for content-based sites)
  • Code transforms, Bundling, Minification
  • Client hydration
    • Serving Assets
  • Routing
  • a shit ton of developer experience

The last one is what makes them really attractive to most developers, beginners and advanced devs alike. Still, we won't be adding any developer experience in this post; let's focus on the other things.

UI Composition

There's a ton of libraries for this: React, Preact, Vue, Svelte, Solid, Hyperscript, yada yada ya...

We'll use Preact, 'cause it's my post.

Here's what UI composition would look like in Preact:

import { useState } from "preact/hooks";

const Layout = ({ children }) => <div className="container">{children}</div>;

const Counter = () => {
  const [count, setCount] = useState(0);
  return (
    <Layout>
      <button onClick={() => setCount(count + 1)}>{count}</button>
    </Layout>
  );
};

Now, let's say I wanted to render this in the browser; I'd use the render function from the lib:

import { render } from "preact";

//... Counter Component

render(<Counter />, document.getElementById("app"));

Great, we have UI composition. Let's target SSR next.

SSR

SSR, or Server-Side Rendering, is a way to flatten the composed UI into a form the target can consume. There can be different renderers; we are targeting the browser, so our renderer produces something a browser can handle/render.

The point is to be able to construct structured trees of what needs to be rendered, and each of the UI libraries provides some way or another to do this. Vue and Svelte provide a compiler; (P)react creates a VDOM object. This makes it easy for us to traverse the structure and use it to our liking, a.k.a. the renderer can be anything.

For example, we could convert them into serializable JSON and then convert that JSON back to a tree and then re-construct the component and ask it to render in the browser (some libs do this, research to find out).

To keep things simple, we are going to convert this structured/vdom tree into HTML using preact-iso. You can also use preact-render-to-string (preact-iso uses the same rendering approach under the hood but provides other things that we need in this post).
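To make "vdom tree in, HTML out" concrete, here's a toy renderer (not preact-render-to-string's actual implementation, just a sketch of the idea): it walks a vnode-shaped tree and emits an HTML string.

```javascript
// Toy SSR: walk a vnode-shaped tree ({ type, props: { children } })
// and emit an HTML string. Real renderers also handle attributes,
// escaping, function components, and more.
function renderToString(node) {
  if (node == null || typeof node === "boolean") return "";
  if (typeof node === "string" || typeof node === "number") return String(node);
  if (Array.isArray(node)) return node.map(renderToString).join("");
  const children = renderToString(node.props && node.props.children);
  return `<${node.type}>${children}</${node.type}>`;
}

const tree = { type: "button", props: { children: 0 } };
console.log(renderToString(tree)); // prints: <button>0</button>
```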

Let's see what a simple SSR with Node.js would look like:

// server.js
import { createServer } from "node:http";
import { prerender } from "preact-iso";
import { useState } from "preact/hooks";

const Counter = () => {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
};

const handleRequest = async (req, res) => {
  const { html } = await prerender(<Counter />);
  res.setHeader("content-type", "text/html");
  return res.end(html);
};

createServer(handleRequest).listen(3000, () => {
  console.log("Listening on :3000");
});

It's a little more code than I'd like to explain, but here's what we are trying to do:

  • Create an HTTP server with node:http for Node.js
  • Instruct the server to respond with an HTML representation of the Counter component.

What you'll see here is that when you run this with node server.js, you get an error like so:

  return <button onClick={() => setCount(count + 1)}>{count}</button>
         ^

SyntaxError: Unexpected token '<'
    at compileSourceTextModule (node:internal/modules/esm/utils:339:16)
    at ModuleLoader.moduleStrategy (node:internal/modules/esm/translators:168:18)
    at callTranslator (node:internal/modules/esm/loader:428:14)
    at ModuleLoader.moduleProvider (node:internal/modules/esm/loader:434:30)
    at async link (node:internal/modules/esm/module_job:87:21)

This is where the third feature comes in: the meta frameworks take care of code transforms, bundling, and minification.

Code transforms, Bundling, Minification

Since JS doesn't come with macros or a way to make modifications at build time, we have build tools to help us with that. You see, JSX isn't exactly a part of the JS spec, so we need something that can build this file into pure JS so we can run it with Node.

We'll use esbuild to help us with this. It's not the only tool for this task, but I want to establish tools that are written by people who are very thoughtful about how they add features, and that keeps me from having to update this post every few months.
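For intuition, here's a rough sketch of what the automatic JSX transform does (a toy stand-in, not esbuild's exact output): elements become plain function calls that build objects.

```javascript
// Toy stand-in for the automatic JSX runtime (NOT esbuild's exact output).
// After the transform, <button onClick={fn}>{count}</button> becomes a
// call shaped roughly like jsx("button", { onClick: fn, children: count }).
const jsx = (type, props) => ({ type, props });

const count = 0;
const vnode = jsx("button", { onClick: () => {}, children: count });
console.log(vnode.type); // prints: button
```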

// build.js

import esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["./server.js"],
  format: "esm",
  jsx: "automatic",
  outdir: "./dist",
  loader: {
    ".js": "jsx",
  },
  bundle: true,
  platform: "node",
  jsxImportSource: "preact",
  external: ["preact", "preact-iso"],
});

With this, we run node build.js first and then node ./dist/server.js, and you now have a server on localhost:3000 that renders a button. The button doesn't do much, but it's there: the button you created on the server. SSR!!!

Moving on...

Client hydration

The button's useless without the counter increasing, which is what we wanted it to do. Because we just pushed some HTML to the browser, it doesn't know how to make the button interactive, so we'll have to fix this.

This is something most frameworks abstract away beautifully, so most people who've never written a custom implementation feel like there's a super complicated setup behind this.

We need to accomplish three things now:

  1. Make the UI logic isolated
  2. Make the server render this isolated UI logic (already have the skeleton for this)
  3. Make the browser re-add the counter logic.

Let's move the UI logic to a separate file. I'll call it App.jsx, and I'm going to rename the component to App instead of Counter.

// App.jsx

import { useState } from "preact/hooks";

export const App = () => {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
};

Moving to the second step, our server.js now looks like so:

import { createServer } from "node:http";
import { prerender } from "preact-iso";
import { App } from "./App.jsx";

const handleRequest = async (req, res) => {
  const { html } = await prerender(<App />);
  res.setHeader("content-type", "text/html");
  return res.end(html);
};

createServer(handleRequest).listen(3000, () => {
  console.log("Listening on :3000");
});

If you do a node build.js and node ./dist/server.js, things should still be working.

Now, step three: I need the browser to mount this app again. Let's ask Preact to hydrate or mount the app again.

// browser.jsx

import { hydrate } from "preact-iso";
import { App } from "./App.jsx";

hydrate(<App />, document.getElementById("app"));

We have a document.getElementById("app") here, which doesn't exist in the HTML we've been sending from the server, so let's fix that in the server.

import { createServer } from 'node:http'
import { prerender } from 'preact-iso'
import { App } from './App.jsx'

const handleRequest = async (req, res) => {
  const { html } = await prerender(<App />)
+  const finalHTML = `
+    <body>
+     <div id="app">
+       ${html}
+     </div>
+   </body>
+  `
  res.setHeader('content-type', 'text/html')
-  return res.end(html)
+  return res.end(finalHTML)
}

createServer(handleRequest).listen(3000, () => {
  console.log('Listening on :3000')
})

We added wrapper HTML code that contains the element with the id of app. Does that solve our problem though? Try building and running again.

We still have a useless button...

Why? Because while we added the wrapper, there's no way the browser knows that it needs to fetch the browser.jsx file. And another problem: it's a file with JSX code, which means we need to transpile/compile it to pure JS so the browser can understand it.

Let's deal with the latter problem first. We'll modify the build.js to also create a bundle for the browser, and the output will be in dist/client. You can check the output of both the server and the browser builds to see what's being generated—should help you understand a bit more about the build tools.

// build.js
import esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["./browser.jsx"],
  format: "esm",
  jsx: "automatic",
  outdir: "./dist/client",
  loader: {
    ".js": "jsx",
  },
  bundle: true,
  platform: "browser",
  jsxImportSource: "preact",
});

await esbuild.build({
  entryPoints: ["./server.js"],
  format: "esm",
  jsx: "automatic",
  outdir: "./dist",
  loader: {
    ".js": "jsx",
  },
  bundle: true,
  platform: "node",
  jsxImportSource: "preact",
  external: ["preact", "preact-iso"],
});

Cool, now we have a browser build and a server build where we can write JSX. Back to the original problem: how does the browser know where to fetch the client file from?

Two steps:

  1. Set up static serving for files
  2. Add the location for the served files in your HTML wrapper

Serving Assets

We need to make some mods so that the browser can ask the server to send a specific file. This is how it would get CSS files in the future, but for now we want it to get the compiled browser.js file from the ./dist/client folder.

Let's use a simple library called send to help us with this. A few things: we are writing this code with the assumption that the folder structure looks like this:

.
├── App.jsx
├── browser.jsx
├── build.js
├── server.js
├── package.json
└── dist                # build output
    ├── client
    │   └── browser.js
    └── server.js

So, the code we run with node dist/server.js belongs in the directory dist, and so does the client folder. Hence, we need to program the server to keep that in mind.

Next, we want to ask the server to make sure it only serves the static files when the request URL starts with /assets.

Let's make these modifications.

// server.js
import send from "send";
import { fileURLToPath } from "node:url";
import { dirname, join } from "node:path";

const __dirname = dirname(fileURLToPath(import.meta.url));

const handleRequest = async (req, res) => {
  if (req.url.startsWith("/assets")) {
    // remove the prefix `/assets` and only use the rest of the path to serve the file
    // eg: /assets/index.js will become `/index.js` and it will send the `index.js` file
    // in the `client` folder which is defined as the `root` option for send.
    return send(req, req.url.slice("/assets".length), {
      root: join(__dirname, "./client"),
    }).pipe(res);
  }

  //... SSR code
};
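The prefix-stripping logic is worth seeing in isolation. This hypothetical helper mirrors the slice call above:

```javascript
// Hypothetical helper mirroring the prefix-stripping in handleRequest:
// "/assets/browser.js" -> "/browser.js"; non-asset URLs -> null.
function assetPath(url) {
  const prefix = "/assets";
  if (!url.startsWith(prefix)) return null;
  return url.slice(prefix.length);
}

console.log(assetPath("/assets/browser.js")); // prints: /browser.js
console.log(assetPath("/")); // prints: null
```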

At this point you should be able to build and run the server, open the browser to localhost:3000/assets/browser.js, and see your bundled JavaScript code. Let's change the wrapper HTML code to send this file as a part of our rendered app:

const handleRequest = async (req, res) => {
  if (req.url.startsWith("/assets")) {
    // remove the prefix `/assets` and only use the rest of the path to serve the file
    // eg: /assets/index.js will become `/index.js` and it will send the `index.js` file
    // in the `client` folder which is defined as the `root` option for send.
    return send(req, req.url.slice("/assets".length), {
      root: join(__dirname, "./client"),
    }).pipe(res);
  }

  const { html } = await prerender(<App />)
  const finalHTML = `
  <body>
    <div id="app">
      ${html}
    </div>
+   <script type="module" src="/assets/browser.js"></script>
  </body>
  `
  res.setHeader('content-type', 'text/html')
  return res.end(finalHTML)
}

Also make sure to update the build.js file to add send to the external deps in the server's build.

import esbuild from "esbuild";

await esbuild.build({
    entryPoints: ["./browser.jsx"],
    format: "esm",
    jsx: "automatic",
    outdir: "./dist/client",
    loader: {
        ".js": "jsx",
    },
    bundle: true,
    platform: "browser",
    jsxImportSource: "preact",
});

await esbuild.build({
    entryPoints: ["./server.js"],
    format: "esm",
    jsx: "automatic",
    outdir: "./dist",
    loader: {
        ".js": "jsx",
    },
    bundle: true,
    platform: "node",
    jsxImportSource: "preact",
-   external: ["preact", "preact-iso"],
+   external: ["preact", "preact-iso", "send"],
});

Now, if you run node build.js and node ./dist/server.js, you should have a working button that increases the counter!!!

In a way, we're kinda done. Next up, there are things like routing, adding more platform support, abstracting away Node's server layer so you can use it in different runtimes (for example, Cloudflare or Deno). I'll cover that in the next post.

]]>
Mon, 10 Jul 2025 00:00:00 +0000
https://reaper.is/writing/20250713-another-one-about-build-setups-part-ii.html Another one about build setups Part - II https://reaper.is/writing/20250713-another-one-about-build-setups-part-ii.html Another one about build setups Part - II

Hey, reaper back for Part II. So far we've done UI composition, SSR, bundling, and hydration; what we don't have yet is a functional server and client router. Let's map things so they are a little more functional.

TOC

Routing

Client-Side Routing

You can actually skip this section for your setup, as this is mostly needed to avoid downloading too much JavaScript on every page load.

I'll try to explain that statement. When we bundle, we end up combining everything into one single JS file (No shit, Sherlock!). To rephrase, every page component you write is now downloaded by the client, even if the client never visits that page. This might be fine for smaller apps, but as the component trees grow larger and the number of pages increases, you end up stuffing the client with a huge JavaScript file.

So, what does client side routing have to do with this?

Nothing really, but this section is about making the page-component imports dynamic so the bundler can split them, and making the router understand that certain components are to be loaded lazily. Fortunately, esbuild and preact-iso already come with all the tools we need.

Let's modify App.jsx to read a magical import variable called pages (which we'll inject in a bit using some build magic). For now, all you need to know is that import.meta.pages is an array of { path, withoutExt } entries for the files in the pages directory.

import {
    ErrorBoundary,
    lazy,
    LocationProvider,
    Route,
    Router,
} from "preact-iso";

// Dynamically import each page under pages
const routes = (import.meta.pages || []).map(({ path, withoutExt }) => {
    const Component = lazy(() => import(`./pages/${path}`));
    const routePath = "/" + withoutExt.replace(/index$/, "").toLowerCase();
    return <Route path={routePath || "/"} component={Component} />;
});

export function App({ url = "" }) {
    return (
        <ErrorBoundary>
            <LocationProvider url={url}>
                <Router>{routes}</Router>
            </LocationProvider>
        </ErrorBoundary>
    );
}

We go through each page item and map it to a component that is dynamically imported using an import() statement. We do this because esbuild already supports dynamic-import splitting, so every import() call creates a split point, and the lazy call around it from preact-iso lets the client-side router know the component needs to be fetched/loaded before routing to the page.
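The path-to-route mapping can be checked in isolation. This is a hypothetical helper with the same logic as the routePath expression in App.jsx:

```javascript
// Hypothetical helper: turn a page file path (minus its extension)
// into a route path, the same way the routes array does.
function toRoutePath(withoutExt) {
  return "/" + withoutExt.replace(/index$/, "").toLowerCase();
}

console.log(toRoutePath("index")); // prints: /
console.log(toRoutePath("About")); // prints: /about
```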

Now, as mentioned, let's update the build step to provide us with this pages array.

In build.js, let's make some mods

import esbuild from "esbuild";
import glob from "tiny-glob";
import { extname } from "path";

// collect pages metadata
const pages = await glob("pages/**/*.{js,jsx,ts,tsx}").then((files) =>
    files.map((fp) => {
        const p = fp.replace("pages/", "");
        return { path: p, withoutExt: p.replace(extname(p), "") };
    })
);

await esbuild.build({
    entryPoints: ["./browser.jsx"],
    format: "esm",
    jsx: "automatic",
    outdir: "./dist/client",
    loader: {
        ".js": "jsx",
    },
+   splitting: true,
+   define: { "import.meta.pages": JSON.stringify(pages) },
    bundle: true,
    platform: "browser",
    jsxImportSource: "preact",
});

await esbuild.build({
    entryPoints: ["./server.js"],
    format: "esm",
    jsx: "automatic",
    outdir: "./dist",
    loader: {
        ".js": "jsx",
    },
+   define: { "import.meta.pages": JSON.stringify(pages) },
    bundle: true,
    platform: "node",
    jsxImportSource: "preact",
    external: ["preact", "preact-iso", "send"],
});

We've basically injected the pages as a JSON array of file paths with and without extensions and added splitting for the browser build.

Now when you run the node build.js script, you'll see that the dist/client folder has a lot more files than it did before. There's still the browser.js file but now you have a file for each page as well.

Next up, let's modify the server for pre-rendering the client router. If you were to run node ./dist/server.js on the last build, you'd see an error on the terminal saying location is not defined. This is because the router depends on the browser API, and you'll have to stub it for it to work on the server. Some frameworks and routers do this for you, but since we are here to teach, let's stub it ourselves.

// server.js

// ...remaining code
const handleRequest = async (req, res) => {
    if (req.url.startsWith("/assets")) {
        // remove the prefix `/assets` and only use the rest of the path to serve the file
        // eg: /assets/index.js will become `/index.js` and it will send the `index.js` file
        // in the `client` folder which is defined as the `root` option for send.
        return send(req, req.url.slice("/assets".length), {
            root: join(__dirname, "./client"),
        }).pipe(res);
    }

    globalThis.location = new URL(req.url, "http://localhost");

    const { html } = await prerender(<App url={req.url} />);
    // ...remaining code
    return res.end(finalHTML);
};

To reiterate, we now have a global location value that partially imitates the browser's Location construct. You can obviously create the entire construct if you wish, but to keep things minimal and simple, a URL works just as well. We build it from req.url and prefix it with the base URL of http://localhost. You might want to add an optional process.env.HOST check if working in Node, but this alone works for running things locally.
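As a quick sanity check, a plain URL instance already carries the fields most routers read from location, which is why the stub works:

```javascript
// A URL object exposes pathname, search, and href like the browser's
// Location does, so it's a good-enough stand-in for SSR.
const loc = new URL("/about?tab=1", "http://localhost");
console.log(loc.pathname); // prints: /about
console.log(loc.search);   // prints: ?tab=1
console.log(loc.href);     // prints: http://localhost/about?tab=1
```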

Another thing we've done is pass a url param to the App component, which is passed down to the LocationProvider in the component tree. This is done so the Router knows what the initial path is and what needs to be pre-rendered.

Finally, let's add a page to render, for example, pages/index.jsx:

export default function HomePage() {
    return <h1>Hello World</h1>;
}

and our file structure or tree should look like so:

.
├── App.jsx
├── browser.jsx
├── build.js
├── package.json
├── pages
│   └── index.jsx
└── server.js

A quick node build.js and node dist/server.js should now give you the contents of pages/index.jsx in your browser.

This is by no means a complete router. A complete one would also handle dynamic parameters and catch-all routes, and if we wish to go fully into what's possible, something like TanStack Router or React Router allows defining route-level data, permission guards, and additional layers of error checking for your safety. Not always needed, but good to have.

It might sound like I'm making a counterpoint to writing your custom solution, but no. The same setup can be written with the above-mentioned routers integrated. The point of the tutorial is to explain what goes on inside so it isn't a big black box for everyone. We did that to auth; I don't want that happening to other things.

Developer Experience

Okay, moving on, there's

  • Seamless auto complete and intellisense for the most common parts
  • Live reload or better yet, hot module replacement
  • Multi Platform support
  • A more seamless API and routing layer.

There's no end to improving the developer experience. But since the first three are required for any basic app, we'll add them to our example repo that you can refer to and get working with.

The repo also makes 2 tiny changes,

  1. It uses the fetch standard as the interface for request and response
  2. There are different entry files for Cloudflare and Node so you can use this in either environment; in a more realistic situation you'd only want one, but it's there as an example to help portray the logic.
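On the first point, a handler written against the fetch standard looks roughly like the sketch below. The /health route is invented for illustration, but Request and Response are the real web-standard globals available on both Cloudflare Workers and modern Node:

```javascript
// Sketch of a platform-agnostic handler: it only touches the standard
// Request/Response types, so the same function can be wired up to a
// Node server adapter or exported as a Cloudflare Worker fetch handler.
function handle(request) {
  const url = new URL(request.url);
  if (url.pathname === "/health") {
    return new Response("ok", { status: 200 });
  }
  return new Response("not found", { status: 404 });
}
```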

That's it from me and I hope this helps someone either understand the basics of it or convinces people to not be scared of writing their own if needed.

]]>
Mon, 15 Jul 2025 00:00:00 +0000
https://reaper.is/writing/20250911-self-hosting-might-not-be-for-you.html Self hosting might not be for you https://reaper.is/writing/20250911-self-hosting-might-not-be-for-you.html Self hosting might not be for you

I host quite a few things, both on Digital Ocean as droplets (their name for a compute instance) and on my Thinkpad for keeping things in my control. The post is for people who find things like these cool and would like to self host and provide apps for free to others.

To summarise, it all costs money and it's not a small amount.

Tiny example, I have a service called Ping and here's what it costs to keep it running

  • Server - $12/month
  • Database Backups - $5/month

Which is about $204 a year to run and makes me $0. I'm not adding domain charges since the domain isn't specific to the project, but that costs about $25/year.

Similar to this, let's say you own the server like an old laptop. That adds in electricity charges, and to avoid exposing your IP to the internet, you need something like Cloudflare Tunnels (free), or you can set up a ~$4 droplet that can do it for multiple such self-owned servers.

My current Thinkpad takes about 60W to trickle charge, and the current battery holds up for 4-5 hours. I'll leave the rest of the calculation to you, since the cost differs from place to place.
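To make that electricity math concrete, here's the formula as a tiny sketch. The 60W figure is from the post; the hours and the per-kWh rate below are placeholder assumptions, since the rate really does differ from place to place:

```javascript
// Illustrative only: rough monthly electricity cost for an always-on box.
// energy (kWh) = watts / 1000 * hours; cost = energy * price per kWh.
function monthlyEnergyCost(watts, hoursPerDay, pricePerKWh, days = 30) {
  const kWh = (watts / 1000) * hoursPerDay * days;
  return kWh * pricePerKWh;
}

// e.g. 60W running 24h/day at an assumed $0.15/kWh is about $6.48/month.
```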

Things like goblin.run, Ping, and my other microservices (hits, music, etc.) cost about $60/month in total, which is $720 a year.

None of them earn me a single penny. This post is not to brag but to let people know that it's not cheap to run these. There are alternatives to minimize costs, like using Cloudflare / Oracle's free tiers, but they all have their own issues which would further add to my maintenance time and cost.

I could also have a single large VPS for $60 and run it with something like selfprivacy.org, coolify, or /dev/push that would also provide decent centralized logging and observability, which I hardly ever add to the free products. I'm not a fan of tracing and telemetry and would rather spend time writing better simulators to identify the issues than constantly track user actions somewhere (a topic for another day).

Point being, it's fun, but don't think it'll be truly free to run it all yourself. People seem to think infra is cheap, and yes, you can use the Vercel / Netlify free tiers for something small, but as soon as traffic reaches a point where uptime matters for your service/app, you end up with a large bill.

That's all for now, Adios!

]]>
Mon, 11 Sep 2025 00:00:00 +0000
https://reaper.is/writing/20250912-dont-forget-the-layers.html Don't forget the layers https://reaper.is/writing/20250912-dont-forget-the-layers.html Don't forget the layers

It’s easy to get tunnel vision when working on a codebase that you think is simple and straightforward. But there’s this sneaky layer that most of us ignore until it bites us: the abstractions.

So, I was working on Ping. Everything was fine until I noticed my Resend account had been sending a lot of mails over the past week, to the same email addresses again and again. Normally I'd be okay with this since it's a notification service and it's supposed to notify when a site goes down, but Ping only notifies when the provided URL has just gone down; if it stays down, no further notifications are sent, both to avoid spamming users and to save a little on email costs.

I did what any developer does: blame the network, blame the app, maybe even blame the weather. After poking around, I remembered that Node's fetch implementation runs on the net module, figured something might be going wrong there, and that's where I stumbled on getDefaultAutoSelectFamilyAttemptTimeout. Fancy, right? Turns out it decides how long your system waits before giving up on picking an IP family (IPv4 or IPv6) for a connection attempt.

The default timeout is 250ms, which was too short for our use case: the server was far, the connection quality varied, and fetch was tapping out before it even had a chance. I first tried changing fetch's own timeout, but that wasn't the issue; undici was failing well before the AbortSignal was triggered. That's what led me to reading the code and going through how the core connection logic works, where I tried a few different timeouts and finally found the problematic one, getDefaultAutoSelectFamilyAttemptTimeout, and fixed it.

Here’s the thing: abstractions are great. They save time, hide complexity, and let us build cool stuff without reinventing the wheel. But if you treat them like black boxes, you’re setting yourself up for pain. The answers are almost always buried in the details—the docs, the source, the weird config options nobody talks about.

So next time you hit a wall, don’t just slap on a quick fix or curse at your screen. Take a breath, dig into the abstraction, and figure out what’s really going on. You’ll probably save yourself a headache—and maybe learn something you can brag about in your next blog post.

]]>
Mon, 12 Sep 2025 00:00:00 +0000
https://reaper.is/writing/20260131-the-fear.html The Fear https://reaper.is/writing/20260131-the-fear.html The Fear

Ahoy humans!

I've not written for a few months now.

I don't have anything to write about anymore.

Nor am I confident that I'll become the cool developer I'd be if I followed the path I had laid out.

The meaning of development has changed a lot; it's taken away the thinking, the control, the joy of solving problems. I put the problem down and it gets solved automatically. I'm not scared of the LLM; I'm scared of being surrounded by people who have no thoughts of their own but rather echo whatever an LLM told them was correct.

I have started re-working on old projects to make sure I complete at least a few of them with the little time that I get nowadays. I want to enjoy the feeling of solving problems. I want to find various ways of doing the same thing so I can learn to judge things better.

The fear of losing my critical thinking is larger than the fear of losing my job. The job is a survival option; I/we can always find other skills or things that'll keep us relevant. Yes, the pay will change, and the skills you spent years on might not mean as much, but watching a core skill I spent years working on slip away freaks me out. I love being the problem solver.

This is by no means a cry for help, just putting down thoughts as they come. My job isn't currently at risk, nor am I in a bad financial position. All of these are just thoughts that rushed through while I was sitting idle, thinking about life, what I want to do, what I should do for passive income, etc.

As always, I don't just come with a problem; there are possible options (old habits, sorry). I could let go of programming and find another profession that involves solving problems / puzzles, or move to a simpler life that doesn't involve dealing with fire all the time. There's no real purpose to life, so there's no point in me holding onto something this trivial and fearing the outcome. The outcome itself might not be that bad to begin with, and this might all be me overthinking it.

Running away might not be an option for some, and that's kinda okay. Anyway, I'm going to go think more about this and follow up when I have an answer / path forward. For now, that's all I have.

]]>
Mon, 31 Jan 2026 00:00:00 +0000
https://reaper.is/writing/20260210-the-re-launch-of-tillwhen.html The Relaunch of TillWhen https://reaper.is/writing/20260210-the-re-launch-of-tillwhen.html The Relaunch of TillWhen

I kinda announced this on twitter a few days back but here's a tiny announcement for the readers. TillWhen(https://tillwhen.barelyhuman.dev) is back up and running.

I don't have any of the old data because, at the time I shut it down, I didn't really think I'd be in a state to bring it back up, but life has been great and I have enough funds right now to keep it up.

On that note, I do have to plan how to keep it running and maintained, and I don't want finances to be the reason it goes down, so it will have a paid plan, and I plan to spend some time over the weekends adding features that are worth the money.

For now, the app is free while in beta, and also because it was built by a naive developer, a lot of things can be improved in UI/UX and in how things are structured. The codebase will also be made open source so people can contribute, or fork and host their own if the paid plan feels like too much.

Understand that the plan is to make sure the cloud costs can be handled by the users so my financial state doesn't impact the app.

That's it, adios.

]]>
Mon, 10 Feb 2026 00:00:00 +0000
https://reaper.is/writing/24052021-Updates-22nd-and-23rd-May.html Updates 22nd and 23rd May https://reaper.is/writing/24052021-Updates-22nd-and-23rd-May.html Updates 22nd and 23rd May

It's been a super productive week and weekend, both in terms of work and stuff I learned while going through a lot of articles and setups; we'll discuss a few of those things over the course of the next few posts.

Let's get back to the dev logs.

Taco Datepicker

We'll start with the datepicker. In the previous log I talked about the progress on Taco, and the datepicker was picked from TillWhen's implementation; while it suffices for the base requirement of an alpha project, the codebase is very hacky because I built it too quickly.

Anyway, I had to create a better and cleaner version, both in terms of design and code, so I picked this up, and it's probably the first time I've worked on something after office hours and not felt lazy.

Anyway, this is now a separate repository on my GitHub, and while I plan to release the whole Taco-UI as a package, for now the Taco datepicker will be the only component available to the public, because the rest are pretty unstable and I wouldn't want people hitting issues the moment they start using them.

Taco

Next up is the actual product, Taco. In terms of updates, I mostly spent time working on ways to make the monorepo work with the deployment process and avoid having to create Docker images for every small change. I have a process in place now; let me know if you'd like to hear about the deployment process and setup.

For now, the alpha servers that I planned to release on the 1st of June are up and usable with a testing account. You can create your own account, but the feature set isn't complete and a lot of things are disabled by default since it is a testing instance. You're better off waiting till the 1st for me to make the instance usable; after 2-3 weeks of monitoring on that instance, I'll launch it properly.

Hopefully, it beats TillWhen...

Statico

This is the static markdown folder converter that I use for this very blog, and the changes are still being tested and worked on. The main reason for the change is so people can easily generate indexed and custom markdown pages, and also because I want to move these weekly logs into their own section instead of mixing them up with the normal posts. The same applies to the checklists: they are technically part of the posts, but the primary link-backs are from the Misc section, so it makes no sense to have them in the posts as well.

So overall the static generator is going to have a better config file to work with and while I'm doing that I can work on making it faster.

Commitlog

The one project I spent a lot of time dreaming about finally has a proper direction. The changelogs are now going to be cleaner, and it has a release subcommand that can help manage the version, though it's limited to working with just git. I want to make it use a .commitlog.version file so it can decide from that how to increase and decrease the version, instead of doing all the heavy lifting of reading the repo again and going through the revs to find the latest revision.
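The version-file idea boils down to a plain semver bump on a stored string. This is a hypothetical sketch of that logic, not commitlog's actual code:

```javascript
// Hypothetical sketch: bump a semver string read from something like a
// .commitlog.version file, instead of walking the repo's revisions.
function bump(version, kind) {
  const [major, minor, patch] = version.split(".").map(Number);
  if (kind === "major") return `${major + 1}.0.0`;
  if (kind === "minor") return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}
```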

The support for commitlint standards stays, since I still use the standards personally, though not strictly, and that's why commitlog doesn't force you either.

Smaller Projects

add-config

I've been working on a lot of projects at work, and I end up making a similar type of config file to manage switching between environments; since you shouldn't put any secret data in the frontend anyway, the config method works well. Manually doing it again and again gets exhausting, so add-config is a simple Node.js CLI that does it for you. "Why not write it in Go or a bash script!?" Simply because most of the projects I work with are Node-based, and I don't need to download a binary over curl to set this up on a new system; it's a simple npx invoke and I have the config in place. I could write a bash script, but then I'd have to copy that script around as well. The point is to be quick with things like these so you can spend time on other things.

dokcli

This tool has been in the arsenal for a while now, and I use it on almost all dev environment setups, since dokku's app creation and config handling isn't as unattended as I'd like it to be, so a simple CLI I built as a toy to learn Go has been coming in handy. There was an error, though: I had left the URL scheme in the creation script and forgot that dokku won't allow adding a domain with the scheme in the string (http:// or https://), so I added a simple parser to handle that, and now the scripts won't give you an error. There are also separate domain and app setup scripts, because dokku doesn't recommend setting the domain before a proper deployment, so you can run the domain setup later. The same has been added to the non-config execution, where dokku will ask you about the domain. And since I was doing all this, I also added Let's Encrypt support to both the config and non-config invocations, so the scripts will set up Let's Encrypt for the domain as well.
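The scheme-stripping fix mentioned above is a one-liner in spirit. Here's an illustrative sketch of the idea (dokcli itself is written in Go; this just shows the transformation):

```javascript
// Illustrative: dokku wants a bare domain, so drop any leading
// http:// or https:// before handing the string over.
function stripScheme(domain) {
  return domain.replace(/^https?:\/\//, "");
}
```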

That was a lot of stuff, I'm impressed, anyway that's it for now.

Adios!

]]>
Mon, 24 May 2021 00:00:00 +0000
https://reaper.is/writing/24092020-Setting-up-a-personal-git-server.html Setting up a personal git server https://reaper.is/writing/24092020-Setting-up-a-personal-git-server.html Setting up a personal git server

This post is not a full write-up; it just consists of the steps I went through to set up a git server on my home server (Raspberry Pi, for now).

File content is available below.

  1. Setup WIFI - Make sure it auto connects. (Default: Debian CLI Wifi)
  2. Install NGINX, FastCGI and set up the gitweb config in /etc/gitweb.conf
  3. Setup the default nginx to point to the gitweb config and gitweb.cgi
  4. Setup Git Daemon to point to the projects directory and --enable=receive-pack as the additional parameter.
  5. Restart services, nginx, fastcgi, git daemon.

Files.

etc/gitweb.conf

our $projectroot = "/home/pi/git-repos";
our @git_base_url_list = qw(git://192.168.31.162 [email protected]:git-repos);

add this to the server listener in nginx

location /gitweb.cgi {
    include fastcgi_params;
    gzip off;
    fastcgi_param SCRIPT_FILENAME /usr/share/gitweb/gitweb.cgi;
    fastcgi_param GITWEB_CONFIG /etc/gitweb.conf;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
}

location / {
    root /usr/share/gitweb;
    index gitweb.cgi;
}

Commands to create a repository.

git init --bare --shared <project-name>
cd <project-name>
touch git-daemon-export-ok
cp hooks/post-update.sample hooks/post-update
chmod a+x hooks/post-update
echo "Project Description" > description

Adios!

]]>
Mon, 24 Sep 2020 00:00:00 +0000
https://reaper.is/writing/25122020-Updates-December-2020.html Updates December 2020 https://reaper.is/writing/25122020-Updates-December-2020.html Updates December 2020

Ahoy Human!

It's Christmas and I've slept for 18 hours now.....

Anyway,

Back to what we are here for: updates (cause I'm some celebrity with a huge following waiting for me to post this). I won't start with TillWhen this time; people are already bored of it, so I'll save it for last.

Language Updates

Most people already know that I've been trying to move out of the web dev scene for a while, and while I did use JavaScript for desktop apps while hating on Electron (hypocrite much!?), recently I've started using Fyne with Golang to build apps, the latest example being the Spotify Lite app, about which you can read here.

  • Start practicing Golang more frequently
  • Attempt to build CLI and desktop apps in Go
  • Do some light learning of Rust while I'm at it, because Rust is a little harder to get used to.
  • JS and TS are still in a decent place, though I've started properly adding types to my TypeScript apps instead of using the any type just because I'm too lazy to specify a type definition.

Project Updates

New / Still Planning

Pfer

Another app that I'll be building, but this one is going to be a web app, since I need it available on all platforms and I don't have the patience to wait for Apple to approve everything. It's a simple playlist sync and transfer app for Spotify playlists.

I'm not sure about you guys, but I have this thing where I add everything from my Saved Library to a playlist so I can share that playlist around. It's a good amount of work if you don't use the desktop app, and I wouldn't want to download a 100MB app, log in, sync the playlist with saved songs, delete the app, and repeat whenever I want to do this sync.

Pfer is basically going to be a simple web app for that purpose. There's no link for you to visit right now; I'm still deciding how I'll build it. I want to make it serverless and drop the whole thing on Vercel's servers, though the 10-second function timeout might be a blocker in certain cases, so I'm still thinking about how I'll split the actions.

Existing Ones

Spotify Lite

  • Changed the auth mode to use PKCE, since the client credentials flow doesn't allow refreshing and I totally forgot about that when writing the initial app; it's fixed in the current release, which at the time of writing is 0.4.0.
  • Added a few helper and loading screens. This was needed since I cannot use the Spotify Web APIs for playback control unless the user is a premium user; I probably missed that when reading their reference, but my premium ended and the app stopped letting me control playback, so that's fixed for you guys. This is basically why using something you've built helps you find issues no one else will.

TillWhen

Here's our favorite one! TillWhen's Slack app is being tested by real people on the beta instance right now, and you can try it as well: sign up on https://beta.tillwhen.barelyhuman.dev and try out the Slack integration from Profile > Integrations > Add to Slack. Remember it's a beta instance; I might clear the database or break the server whenever I want, so don't use it to save time logs that matter to you.

Oh, and make sure you report those issues on https://github.com/barelyhuman/tillwhen-issues/issues so I can move on to fixing them.

I'm sad, though, that no one's donated yet... I'm kidding. It's good that TillWhen has 140 users right now, considering I only promoted it on Product Hunt and maybe, just maybe, on HackerNews and LinkedIn.

Sleep and Work Updates

It's all going well: work is going great, and I've improved my sleep cycle quite a bit with the solution I wrote about a few posts back (you can find it at /posts/12102020-The-Fight-with-Ones-brain---My-Sleep-Solution.html). No, I'm still not on any social websites. My self-hosted email server idea didn't really go well, though I might use SourceHut's email lists and modify them a bit, but I'm still not sure about it.

As for repositories, I've started mirroring most repos that I have or care about on GitLab and Bitbucket; the Bitbucket ones are still private. I like Bitbucket's overall interface, but the runners on GitHub and GitLab are more functional in my opinion, so it's a little hard for me to jump away from them immediately.

That's about it for now.

Adios!

]]>
Mon, 25 Dec 2020 00:00:00 +0000
https://reaper.is/writing/26042021-This-Weekend-in-BarelyHuman-Dev-Labs---April-24th-and-25th.html This Weekend in BarelyHuman Dev Labs - April 24th and 25th https://reaper.is/writing/26042021-This-Weekend-in-BarelyHuman-Dev-Labs---April-24th-and-25th.html This Weekend in BarelyHuman Dev Labs - April 24th and 25th

I don't really have to write this post, because all I did these 2 days was figure out ways to bring a dead machine back to life until I finally gave up.

So...... my MacBook is dead. Like, literally dead: no response, no glitching screen, no abnormal boot. It's just dead, and I'm stuck with a Raspberry Pi and my old system, which is too unstable to work from and generally used to test various distributions of Linux, since it randomly shuts itself down; it's a good test machine for seeing the performance of various Linux distros, so I didn't throw it away.

So there's no real update in terms of projects since I didn't work on any, but I did test the deadline-and-edit project patch / trivial feature that I added to TillWhen's beta instance mid last week, and I planned to move it to live, which I can do from the Raspberry Pi, so I'll do that sometime this week.

What did you do to kill a macbook?

No damn idea. I sent out a few iOS builds to TestFlight before shutting it down on Friday, and it had about 10% battery at the time. I should've connected it to a charger for 10-20 minutes to offset the battery drain from the always-connected SSD, but I just left it like that since it seemed like no big deal.

Woke up to a machine that wouldn't charge, boot into recovery, or boot into single-user mode. I tried various methods for 2 days, short of opening the back panel, because then the Apple Service Centre wouldn't even touch it. So that's that.

Anyway, that's about it. Luck is a .... never-mind.

Adios!

]]>
Mon, 26 Apr 2021 00:00:00 +0000
https://reaper.is/writing/27082023-announcing-rawjs-xyz.html Announcing rawjs.xyz https://reaper.is/writing/27082023-announcing-rawjs-xyz.html Announcing rawjs.xyz

Tiny post.

I made something: https://rawjs.xyz

The goal of the project is to help all kinds of JS developers while keeping the knowledge as unopinionated as possible. We also plan to let others contribute articles about topics that need attention and aren't feeding the current hype.

The current state of JS is widely dominated by libraries and frameworks, which makes it really hard for people to actually focus on JS rather than on each framework's changing paradigms and usage.

I've talked more about it on the actual project page, so you can read it there.

Announcement

I hope you find something to learn on it, that's all for now. Adios!

]]>
Mon, 27 Aug 2023 00:00:00 +0000
https://reaper.is/writing/27092020-Updates-27th-September.html Updates - 27th September https://reaper.is/writing/27092020-Updates-27th-September.html Updates - 27th September

Let's look at things I've done in the past few weeks.

TillWhen

  • To-dos now in beta
  • Export Project data to CSV

I added the above two to TillWhen; they're on the beta environment as I'm posting this, and you can expect them on the live instance in about 2 days.

Cor

Haven't touched Cor for a while now. Cor is a project management app that I planned to build but gave up on midway; I might get back to building it soon, but I'm not motivated enough to work on it right now.

PUI

A Postgres UI client that was built as a toy project will take precedence over Cor and TillWhen next week. It's usable for local connections for now, but it basically has no validations, so I wouldn't recommend using it for cloud connections yet; plus, I haven't added SSL support, so it's going to fail on various hosted instances anyway.

Others

I've still been working on the text editor research, and I've been practicing Golang and C a little more, with the plan to get better at both before I use them in production-grade stuff. The minimal code editor that I've been planning to build all along is the motivation for these languages: C for its cross-platform availability and Go for its easy interop with Swift and GTK. While the Linux edition of the editor might come easy with just one language, the Mac version takes precedence, since I'll be testing the editor while using it myself.

I also plan on building a transaction tracker and payment system for people to host, something like Drew's fosspay but my take on it. Not sure when I'll start, but you can follow my GitHub to see if I do.

I set up a personal git server for TillWhen's source code. Not that I worry about the source getting out, because I will make it open source once the app is stable with all the features I want and once I've removed all the heavy UI libraries it has as dependencies for smaller components. I don't want the community to start patching torn elements of a codebase that's not elegant to deal with, so stay tuned for that.

]]>
Mon, 27 Sep 2020 00:00:00 +0000
https://reaper.is/writing/30092020-Postgres-ECONNREFUSED-on-127.0.0.1.html Postgres ECONNREFUSED on 127.0.0.1 https://reaper.is/writing/30092020-Postgres-ECONNREFUSED-on-127.0.0.1.html Postgres ECONNREFUSED on 127.0.0.1

Postgres fails to start properly if there's an obsolete postmaster process running, and services fail to connect to it; just remove the pid file if it exists so the service creates a new process.

rm -f /usr/local/var/postgres/postmaster.pid
]]>
Mon, 30 Sep 2020 00:00:00 +0000
https://reaper.is/writing/31052021-Side-projects-thatve-proven-their-worth.html Side projects that've proven their worth https://reaper.is/writing/31052021-Side-projects-thatve-proven-their-worth.html Side projects that've proven their worth

I'm no "HUSTLE EVERYDAY!" kind of person, but I like keeping myself busy, or I fall into the feedback loop and ruin my mood for no logical reason.

This has led to a lot of hobbies in the past, and a lot of side projects after I found my liking for code and being able to build stuff. This includes all the custom rooms I built for the hell of it, the mini tools I've built over the past few years, things I learnt, things I want to learn now; the list is never-ending.

Over the years, I've rebuilt quite a few things, and I still do. They say "Don't reinvent the wheel", which is true, but only when you are building something that involves the existing concept of the wheel.

Let's say I'm building an e-shop. I could go with existing partners like Shopify, WordPress plugins, or other alternatives I don't know of, and it's totally fine to go with these options; your time to market is reduced exponentially compared to building from scratch, and it totally makes sense if you are limited by time.

But now, if I want to compete with the existing solutions in the market by building a custom framework that's better than them in some way, I will have to reinvent the wheel. I'll have to understand the concepts, go through every problem these products faced, and understand their decisions a lot more. This might even change my approach to building a new competitor in the first place, but the point remains.

You have to reinvent the wheel when you want to understand how the wheel works.

This might take longer but will make sure you understand it better and that's what the end goal is, at least for me.

The chance of me being the best developer there is is quite low, but the chance of me at least learning everything I wish to is a lot greater if I move towards it.

With the whole philosophy out of the way, it's time to discuss the actual title: the side projects I'm currently proud of.

A few names come to mind when we talk about them:

  • commitlog
  • Hen
  • Mark
  • Music
  • Statico

There are obviously others that I've built, but these are the top 5 right now, the sole reason being that they are the ones I use the most. We'll go through them one by one; the whole first part was just to bring to your notice that each of them was "re-invented" with my understanding of how it would've been done.

commitlog

A really simple concept: use commit messages as a changelog. It now has release version management built in as well, though how it manages the version needs some modification right now.

The concept isn't new or innovative; there are a dozen tools on GitHub that do the same. The sole reason was to move away from the NodeJS-only release, which needed a node package to be initialised for version management, meaning I'd have to add package.json files to non-NodeJS projects, for example a Ruby or a Golang project. That led me to creating a CLI that works with basically any project, because its only dependencies are Git and itself.

Hen

The alternatives to this are way more powerful than Hen is ever going to be, because each of them is expected to be a business and they have to put in all that to compete with each other. Hen, on the other hand, is a quick REPL for me to test out new component designs that don't really need a full environment and file management; it's a single code editor panel which renders the output on the right.

This started off as a simple experiment to see how easy or hard it would be to create a live renderer for React components while maintaining isolation of concerns and not letting people do whatever they want to the website. Obviously, you can't use Hen's code in production, since it assumes the tool is frontend-only, and while the rendering is done in an iframe that isolates the executed code, the website's cookies can still be accessed, since the iframe acts as part of the domain. The alternatives handle this with various headers that identify the source of the execution, which you can obviously add to the code if you plan to use it, but Hen doesn't, since there's nothing on the site for people to exploit.

Mark

My primary markdown editor. I'm still improving things on it, and at some point it'll have an inline live preview instead of a side-by-side preview, but that needs me to focus on it; right now all I do is use it to write posts. This very post you're reading was written in it, and so are the devlogs on barelyhuman.dev.

This started off as another random project that I thought would be super simple to implement, considering there are so many markdown parsers available out there, though I do plan on writing my own markdown parser someday.

I've tried Typora and I do like it, but it's going to become a paid app once they are done with the beta. If I can afford the price I will buy it, but if I can't, Mark isn't going anywhere now, is it?

Music

The oldest of the bunch. It was named Orion to start with and is now just music.reaper.im. The concept was that I needed something that could loop YouTube videos while I'm gaming, and something super light that would work in the Steam browser, which was quite limited back then. Even though the app is now a little heavier compared to its older versions, Steam handles it quite well, and I did add the ability to import Spotify playlist tracks into the player a few weeks back.

Statico

I was called stupid for building this from scratch but then I've been called stupid for a lot of things...

Anyway, the concept.

Build a small and quick tool that could convert a folder full of markdown files into HTML files you could simply serve. This started as the base for reaper.im, which was powered by Next.js before and was my hub for experimenting with things and features, but then I realised the site was unnecessarily heavy in terms of what it actually accomplished. With the other generators I'd have to use themes provided by others, or sit and write the theme to the standards they've set, which is pretty easy considering my design preferences, but I thought: how hard could it be to write a simple tool that does this?

That brought the base idea to life, with reaper.im being the sole focus of support for the tool. Literally just to support reaper.im, I had written an SSG (Static Site Generator), and now there are 2 sites that I know of that use it.

Obviously both sites belong to me, but barelyhuman.dev needed modifications to the tool so it could adapt to more setups than just the one this site had, and now Statico can be used for basically any simple SSG requirement (complex ones I can't speak for, because I've not tested them). The deploy time with it is 12-14 seconds: about 10 seconds is the environment getting ready, cloning the repo, etc., and the binary itself runs for 2-4 seconds based on the number of files you have and the number of indexes it needs to generate.


Overall, build whatever you wish to, as long as you see the need for it, and it might just work out for the best. This is just a cut-down list; the total number of things I've built, both small and big, would take too long to go through in a blog post. You can go through them on my GitHub if you'd like to, and follow or star any repo you like.

Oh, also: the weekly updates will now be on the barelyhuman logs section, since that's the identity most of these tools are built under. This blog is mostly going to be posts like these and a few educational ones when possible.

That's it for now,

Adios!

]]>
Mon, 31 May 2021 00:00:00 +0000
https://reaper.is/writing/31052021-Tests-vs-No-Tests.html Tests vs No Tests https://reaper.is/writing/31052021-Tests-vs-No-Tests.html Tests vs No Tests

Every developer has mixed feelings about writing tests: some think it's for the testers to do, some would like to avoid writing them at all costs, and some would judge you if you didn't write any.

Then there's people like me who know that you cannot sit at one end of the spectrum in this case. Tests are important for every piece of software, but there's also a time, place, and project where tests should be written.

We are going to go through the tools I use for tests, and when I take writing tests seriously.

Tools

Let's start with tools, because that's going to be a really small set compared to the mentality part.

Most test suites come with everything packed, but in others the test runner and the assertions are taken care of by 2 different packages. We've had Jest in the JS community for a while, which takes care of everything for you, but I don't really go for the whole batteries-included setup for anything other than survival gear.

So, I use a combination of Mocha and Chai for most of my testing. I have used Ava for certain cases, but it's mostly Mocha and Chai; older setups used Karma, but I've not worked with it in the past few years.

Backend / API Testing

As mentioned, it's mostly Mocha and Chai, and Chai comes with a plugin, chai-http, which is what I use for testing the APIs.

I'm not a TDD person. I write tests mostly after I'm done with the actual base API, rather than writing the fail cases first and the feature next, because most of my workflow depends on an incremental and iterative approach to the solution, so TDD is more like torture in my case. It works well for people who work on stricter paths, just not for me.

Frontend / Web Render Testing

This is the hard part. Testing every click and action for a web app can be time-consuming and has sometimes taken longer than the actual implementation, so I just set up the tests to check for renders instead of everything. The other stuff, namely:

  • event handling
  • state changes

is tested manually, and I write the cases down in the readme to make sure I test them accordingly.

So I use snapshots of the render, a concept I picked up from Sindre, where you render the component using react-test-renderer and then test whether the needed props are making the needed change in behaviour. This can be hiding/showing based on a prop, rendering a certain prop in a certain element, changing a certain state, or triggering a certain prop.

Considering the atomic nature of how React components are written, this takes care of almost everything that could break. The only thing that remains is the business logic, which can be simple functions you export from helper packages and then test as well.

This blocks most cases of failures.

You can obviously check event handlers as well but, as I said, I prefer doing them manually.

This is for React; I've not tried testing setups for other UI libraries or frameworks, but my approach would be similar if I did.

When do you write tests?

There are quite a few people who think that everything should have tests, and that otherwise you are a bad developer.

In that case, I am a bad developer.

But then, I'm not going away without an explanation (you should know that by now).

I say the decision to write tests depends on a few factors:

  • Requirements
  • Deadline
  • Nature of the project

Requirements

If the requirements you are working with are variable in nature, i.e. if they are bound to change at various points in the development cycle, then writing tests is going to be a huge waste of time, as the tests may get invalidated as you go. Still, once you see that a certain requirement is going to stay for longer, it's preferable that you write a test for it, since the other changes you make over time shouldn't break that requirement.

On the other hand, if you are clear about what you are building and what the end product is going to prioritise, you are better off writing tests to maintain that stability and reduce the manual load of checking trivial stuff.

Deadline

This is pretty self-explanatory: the less time you have, the less you can focus on tests, because if you don't have something to test in the first place, the test scripts make no sense.

Nature of the project

Is your project a simple single-functionality tool? You don't need to write tests, but that doesn't mean you shouldn't: you can go ahead and write a test if it's a small thing, and it builds up to being a good habit later.

If your project is a prototype that you will throw in the trash right after testing the concept you had in mind, then tests are totally unnecessary, though there's a "but" in this case: if you are going to keep building on that prototype to become the final product, write tests for it before you add more features.


I don't mind tests, as they reduce your load, and that's good, since you don't have to keep confirming things that should already be working. This saves you from the whole "Don't deploy on a Friday night" problem to a limit, because the test coverage is going to fall short by some percentage every now and then, but if you can avoid the majority of setbacks it still gives you a lot of peace of mind.

I can say that, considering tests have caught failing TillWhen deployments and saved me a few times.

That's all for now,

Adios

]]>
Mon, 31 May 2021 00:00:00 +0000
https://reaper.is/writing/asahi.html Asahi Linux Fedora Checklist https://reaper.is/writing/asahi.html Asahi Linux Fedora Checklist
  • $ -> Normal user mode
  • $! -> Root user mode

Post Install Fedora

  • Setup the root password for admin access for other tasks

    • $ sudo su
      $! passwd
      
  • If you wish to re-map the keys of the macbook, run the following

    • $! echo 'options hid-apple swap_fn_leftctrl=1' > /etc/modprobe.d/keyboard.conf
      $! dracut -f # will get stuck for a bit, let it work everything out
      
  • Edit GRUB to give you a timeout of at least 5 seconds

    • $! vim /etc/default/grub
      
    • update GRUB_TIMEOUT to be around 5
    • regenerate the GRUB config (Fedora ships grub2-mkconfig rather than Debian's update-grub)
    • $! grub2-mkconfig -o /boot/grub2/grub.cfg
      
]]>
Mon, 18 Aug 2024 00:00:00 +0000
https://reaper.is/writing/browsers-code-editors.html Browsers and Code Editors https://reaper.is/writing/browsers-code-editors.html Browsers and Code Editors

There was a hint in the previous post where I mentioned I'd be building a lot of the tools that I use from the ground up, and since I normally build stuff with my requirements in mind, they turn out to be very minimal.

A lot of stuff that I've built is either tied to this website or can be found on the work page, and I was told that my idea of reinventing the wheel for almost all my tools was stupid, but also interesting, but mostly stupid.

While I agree with that, and you shouldn't be reinventing the wheel when working on production apps for clients, it's okay if you are doing it to learn. I consider myself really lucky that all these resources are out there for me to learn from, and it actually makes it easier to find better explanations when the man page or documentation of a certain library or tool isn't well written.

How's your learning going to be helpful to us?

It won't. You might not be a fan of the minimal approach to things, but if you are, you might have another option when choosing code editors. I still think VIM is the best there is, but then that's just my opinion.

If you've observed the pattern in my repositories, you'd notice that the things I've built, or forked to study, are steps toward building a code editor. If that's unclear, let me guide you through the mini-tools I've rebuilt.

  • Apex (Mini Web Code Editor)
  • Mark (Markdown Editor)
  • Snips (My goto code snippets)
  • Hen | Hen-Experimental (React Code playground)
  • Nova (my half baked attempt at the code editor)

and the forks include

  • Atom (For learning how they structured the modular approach)
  • Carbon (to figure out how the syntax highlighting was working; I should've just inspected it and I'd have known it was CodeMirror, but then I went through a few other things like global config and a few nice ideas for better maintainability). I might have deleted this fork though, I don't remember.

Obviously no one is that observant, but it's been going on for a while. I also wanted to take on building a browser, but I'll get to why I didn't later in the post.

Each of the above mini-tools is an attempt at figuring out how I could abstract a part of the final code editor into a module of its own, thus making the final editor a lot more pluggable with various tools, and at getting a basic understanding of how plugins can be made a lot more scalable.

Apex takes on adding tab spaces etc. to an existing textarea on the web, which isn't that helpful when you build the editor with something like Go, but I just wanted to see how the syntax highlighter actually worked. And yes, I did go through CodeMirror's and Prism.js's codebases to see how they were doing it, and to work on improving where I could.

Mark, on the other hand, was a failure because I didn't complete it to the point that I wanted to. I wanted it to be a WYSIWYG editor where the markdown would update as you typed and be reflected in the same editing space, something like Typora, but I never went ahead with the idea and instead left it out there like every other web-based markdown editor. So, technically, I didn't learn much from it, but it was still in the right direction, so I'm going to give it a break.

Similarly, Hen and Snips were experiments to figure out visualisation methods, and while doing that I added a mini playground for React components. I took 2 approaches to making it, one with an iframe and one with a contained div. I think the iframe approach is a lot more secure but, either way, I've got helper code for both now.

That's slow.

It is, yes. That's also the reason why I didn't really try making a browser. There are terminal-based options out there that I do like using, but I like the dev tools experience of Firefox, so it's hard for me to actually think about building one right now with a day job.

Building both a browser and a code editor in parallel would get me burnt out in no time, but don't worry, there will be a day when I attempt making a browser.

Also, it's going to be slow because

  1. I'm not making it to earn from it; I'm only making it because it helps me learn, and as a result there might be one more editor for people to use.
  2. Expecting to live off of donations is a very ambitious thing to do unless you build something really useful, which doesn't happen every day, at least not for me. I rarely have good ideas for products.
  3. If I do build this in a hurry, I won't be satisfied with the result, won't use it, and the project will sink down into my git repositories like the other abandoned projects I thought I'd waste time building.
Tip: For people who are expecting to earn via Open Source,
your best bet is to go the Open Core way, which a
lot of Open Source purists won't like; the
other option is to offer managed services
while giving away a self-hosted solution for free.
Gitlab, SourceHut, Mongo, you get the idea.

What's wrong with the current ones?

Oh, there's nothing wrong with the current ones. As I said, I like building these things both to learn and to have my own take on a solution, while trying not to kill the RAM. Thinking I was going to build it with Electron was the first mistake, but I have a better approach now; it's a very old one, but still the best way to go about building desktop tools, in my opinion.

My requirements for the editor, though, are very minimal, and the existing solutions, Atom, VSCode, end up offering a lot more than I'm ever going to use. I'm not kidding: I made a post long back about my VSCode setup, and I've since gotten rid of the other themes, am only using Min Theme now, and don't even have Polacode or Music Time anymore. Also, Sublime and VIM are great alternatives, and I do use them right now more than I use VSCode. I still want to try building one.

To sum it up, this is all I want

  • Bracket Matching
  • Syntax highlighting(optional but nice to have it)
  • Decent Duo Tone or Monochromatic Dark/Light theme

Why don't I use any other plugins? Basically because I have everything else handled by tools that run when I commit, or that are set up to run as a GitHub action.

My linters run during commit; formatters run during commit and also as a GitHub action, to make sure all files are formatted and not just the ones that were staged. I sometimes edit directly from GitHub's web editor, so those changes are formatted for me automatically by the action.

Stuff like this doesn't need to be done in realtime, because I've been coding for a long time now and silly mistakes like typos are rectified during testing. You'll still see a lot of typo messages in my commits, because I commit the functionality before I test it and then add fixes as their own separate commits. I like being verbose about the mistakes I made.

Anyway, building an editor with just the above functionalities should be simple, right? Well, that's the thing. I tried doing it with Nova with zero knowledge of how large text buffers are to be handled and what data structures I was supposed to use, and that botched the project before I even started. That's when I went the route of building every module slowly and understanding the things I need to learn and think about before jumping into building it.
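For the curious, one classic structure editors reach for when handling text buffers is the gap buffer (piece tables and ropes are the other usual suspects). A toy sketch, nowhere near what a real editor ships, just to show why insertion at the cursor stays cheap:

```javascript
// Toy gap buffer: two stacks around an implicit cursor.
// Inserting/deleting at the cursor is O(1); only moving the
// cursor shifts characters between the stacks.
class GapBuffer {
  constructor(text = '') {
    this.left = text.split('') // chars before the cursor
    this.right = []            // chars after the cursor, stored reversed
  }
  insert(ch) {
    this.left.push(ch) // no shifting of the rest of the buffer
  }
  backspace() {
    this.left.pop()
  }
  moveLeft() {
    if (this.left.length) this.right.push(this.left.pop())
  }
  moveRight() {
    if (this.right.length) this.left.push(this.right.pop())
  }
  toString() {
    return this.left.join('') + this.right.slice().reverse().join('')
  }
}

const buf = new GapBuffer('helo')
buf.moveLeft() // cursor now between 'hel' and 'o'
buf.insert('l')
console.log(buf.toString()) // "hello"
```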

Don't worry, you'll get updates on my progress with it after I'm done with dark, which is the current project I'm working on. I haven't really told anyone what it's going to be, but I will once I have an MVP ready for it.

Adios.

]]>
Mon, 23 Aug 2020 00:00:00 +0000
https://reaper.is/writing/callbacks-promises-generators-js.html Callbacks, Promises, Generators and Why a JS Developer needs to understand them. https://reaper.is/writing/callbacks-promises-generators-js.html Callbacks, Promises, Generators and Why a JS Developer needs to understand them.

I've had my fair share of ups and downs with Javascript, and it is always frustrating when the language you've been using for a while doesn't behave the way you assumed it would, but I've continued using JS for almost everything I built over the past 2-3 years.

Why Javascript?

I'm not biased towards the language, I would like to get back to being a C developer or maybe be a little modern and be a Go/Rust Developer but as I mentioned in my previous post, I've been a little bitch about it and keep running back to JS for moral support.

But, I still think that learning JS is a valuable skill. Learning any programming language at this point is.

We are, though, going to start with JS, because I can explain it a little better than I can explain C, Go, or Rust.

It all Started with him... the dreaded one

Callbacks

Veterans love them, newbies fear them and others have no idea what's going on.

The thing about callbacks is that we are all using them in almost every JS codebase and still fail to realize that we are.

Anyway, getting to the basics.

What are they?

It's a function. A function trapped inside another one to be precise but it's still a function so let's treat them like one.

function functionOne(callbackFunction) {
  const a = 1
  callbackFunction(a)
}

function functionTwo(num) {
  console.log(num)
}

// Variation One
functionOne(value => {
  functionTwo(value)
})

// Variation Two
functionOne(functionTwo)

Now, before I explain the above, I'm assuming you understand that functions in JS aren't treated differently from other values, and thus you can pass them down to other functions.

This is allowed in other languages as well, so it should not come as a surprise to people who've been jumping between languages or dealing with JS a lot.

Let's go through the code snippet now.

We've got 2 functions to start with, functionOne and functionTwo. functionOne takes in a parameter called callbackFunction, which could be anything: a string, a number, or even a boolean or an object/array for that matter, but I'm going to keep it simple for us to understand and not add type checks at this point (which you should add if you are writing plain JS; ignore this if you use TypeScript).

functionTwo, on the other hand, has the same parameter signature, i.e. it accepts the same number of arguments as functionOne.

If we now look at the inner code of these two, we see that one declares a variable a, then executes callbackFunction and passes in that value (again, functionOne assumes that callbackFunction is going to come in as a function, and so blindly executes it).

functionTwo's inner code logs the passed parameter to the console/stdout (depending on where you execute this snippet).

Execution

After the declarations we have 2 variations for the execution of our functions, one being a little verbose and the 2nd being my definition of readable code.

  1. The first variation calls functionOne and passes it another function as a parameter, which is called an anonymous function (guess why). This anonymous function surprisingly has a value parameter; we didn't declare it, so how does it get it? functionOne passes it to the anonymous function when it makes the callbackFunction(a) call, since callbackFunction now points to our anonymous function, because that's what we passed as a parameter. Then we just call functionTwo and pass it the received value.

  2. The second variation is used when there's only one function that needs to be executed with the incoming value from functionOne; you should still go with variation one if you're going to use the value more than once. This works because we are still passing functionOne a callbackFunction which takes in one value and, similar to the 1st variation, it accepts the value and runs its logic with it.

You can copy the above code and run it on any JS playground and you should see that the number 1 is printed twice.

Why use Callbacks?

As I said, you're using them everywhere in JS without realizing it, but as to why use them? It's a very simple answer.

Scoped Data Access

If you've not gone through the internals of JS this might be a little hard for me to explain but I'll give it a try.

Like most languages, JS has data scopes that are maintained by the interpreter or compiler, which is why you can access variables only under certain conditions.

If you go back to the above example, you can see that const a = 1 is defined inside functionOne and thus can only be used within functionOne's scope, or by code that is inside functionOne. But what do you do if you want that data to be accessible to other functions? If you write everything inside one function, it defeats the point of having functions or thinking about creating modules altogether.

This is where callbacks excel and this is why JS is very async friendly.

async - asynchronous programming, I'll explain this in detail in another post.

When you write code with async programming in mind, the chances of you blocking the execution thread are very low, unless you hit a deadlock between two callbacks calling each other, or you forgot to break a loop.

So if we go back to our example, we see that a is passed to functionTwo from functionOne's scope, and then functionTwo just prints it. That is a very naive example; in real-life code, callbacks aren't that clean and easy to read.

When dealing with dependent data and data from the network, you'll probably see your code go south like this.

function dataFetch() {
  const data = someNetworkRequest()
  formatFetchedData(data, (err, formattedData) => {
    if (err) {
      console.error(err)
      return err
    }

    processFormattedData(formattedData, (err, processingResult) => {
      if (err) {
        console.error(err)
        return err
      }

      sendResultBackToServer(processingResult, (_err, serverResult) => {
        // let's end this with a console.log
        console.log(serverResult)
      })
    })
  })
}

function formatFetchedData(param, callback) {
  // relevant code
}

function processFormattedData(param, callback) {
  // relevant code
}

function sendResultBackToServer(param, callback) {
  // relevant code
}

dataFetch()

A 3-level callback dependency can be readable, but obviously a complex app won't stop at 3. I could write something cleaner with an async chaining utility (a very famous one is async.js, and we could use its waterfall method to keep passing upper dependencies down to the lower functions); it's a little more manageable, but still messy in a larger codebase.
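To illustrate the idea (this is a simplified sketch, not async.js's actual waterfall, whose first task receives only a callback), a helper like this keeps the chain flat by feeding each result forward through node-style err-first callbacks:

```javascript
// Run err-first tasks in order; each task gets the previous result.
// Any error short-circuits straight to the final callback.
function waterfall(tasks, done) {
  let i = 0
  function next(err, result) {
    if (err) return done(err)
    const task = tasks[i++]
    if (!task) return done(null, result) // chain finished
    task(result, next)
  }
  next(null, undefined) // first task receives undefined as input
}

// hypothetical node-style step for demonstration
const addOne = (n, cb) => cb(null, (n || 0) + 1)

waterfall([addOne, addOne, addOne], (err, result) => {
  if (err) throw err
  console.log(result) // 3
})
```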

The Solution to the Living Hell

Enter Promises

The above-mentioned chaining is still the solution to avoiding the triangular callback code but with a little more magic handled by the wrappers.

You see, someone wrote a library called q, which was the initial concept of what promises have grown to be today; it was followed by Bluebird's promise polyfills, which overall implement the same thenable spec.

Thenables

We are still going to have callbacks in our life, but we are going to put a little makeup on them so we can bear them for longer sessions.

A thenable, explained simply, is a stateful container that can be chained with .then caller functions, where each caller function creates another thenable. A recursive chain of wrappers, to be precise.

I'll explain with the same example

// Variation One
dataFetch()
  .then(data => {
    return formatFetchedData(data)
  })
  .then(formattedData => {
    return processFormattedData(formattedData)
  })
  .then(processingResult => {
    return sendResultBackToServer(processingResult)
  })
  .then(serverResult => {
    return console.log(serverResult)
  })

// Variation two
dataFetch()
  .then(data => formatFetchedData(data))
  .then(formattedData => processFormattedData(formattedData))
  .then(processingResult => sendResultBackToServer(processingResult))
  .then(serverResult => console.log(serverResult))

// Variation Three
dataFetch()
  .then(formatFetchedData)
  .then(processFormattedData)
  .then(sendResultBackToServer)
  .then(console.log)

If you understand the verbose example, you can understand how the other 2 variations work, and this is obviously much neater than plain callbacks. But as I said, we are still going to continue using callbacks, because the language depends on them; promises are just a better way to handle them.

As visible, we still pass functions down to a wrapper/caller function that takes the returned data and passes it to the next then in the chain, because every value returned inside a .then is treated as another Promise and hence can be chained as such.

I'll try to simplify how Promises work internally. The Promise constructor maintains a state for itself: pending | fulfilled | rejected. These 3 states per promise decide whether the call was successful or failed, and based on that, a .then or a .catch is called.

new Promise((resolve, reject) => {
  if (1 > 0) return resolve(true)
  else return reject(false)
})
  .then(value => {
    console.log(value)
  })
  .catch(err => {
    console.error(err)
  })

To explain this we'll consider the above example. I create a new Promise using new Promise, and this constructor takes in a callback that is passed 2 params: resolve and reject.

Pause.

At this point we have a Promise with the state pending, because nothing has actually succeeded or failed yet, and it'll stay that way till you, the person writing this promise, decide.

resolve tells the constructor that the run was successful and that it can execute your .then function's callback and pass it the data it received. In our case, true is passed to .then.

reject, on the other hand, calls the .catch with the passed value, and this is where the state changes to rejected.

You didn't mention the state changing to fulfilled !!

I know. Patience, human.

The fulfilled state is updated, but under certain conditions. If there was only one .then call, the main constructor is now fulfilled, but if you chained it with more and more .thens, each one has its own state; even though the first constructor might have resolved and changed to fulfilled, you'll still see pending in the console, because each of the chained ones has its own Promise instance whose state is still pending.
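A quick way to see that each .then really does create a separate promise with its own state:

```javascript
// .then returns a brand-new promise, which is why a chained promise
// can still be pending even after the first one has fulfilled
const first = Promise.resolve(1)
const chained = first.then(
  v => new Promise(resolve => setTimeout(() => resolve(v + 1), 50))
)

console.log(first === chained) // false: two different Promise instances
chained.then(v => console.log(v)) // 2, after ~50ms
```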

In the end, we have a few callbacks, and a constructor wrapped around our callbacks to make this chaining possible and the code a lot more readable.

Generators

This is a huge topic, so I'm going to explain it in another post sometime in the future, but for now all you need to know is that generators are special kinds of functions that allow you to iterate over and over until you decide to end the function altogether. This is the actual concept that async and await work on; you can write your own async-await implementation using a few generators and promises.

The Chosen One

Async|Await

Now, even though I'm someone who likes promises more than async-await, mainly because I keep forgetting to add the async keyword to my functions and I'm too used to writing thenables to control my async flow, as programmers we've still got to learn what's new. New isn't always better, but if you know your options you can choose the one that suits the situation.

As mentioned in Generators, you can create a custom async-await wrapper if you'd like to, since the interpreters will actually compile your async-await code into generators anyway.

Generators allow you to iterate over themselves and use a keyword called yield, which lets you throw a value out of the generator and take in another value for the next yield, till you decide to end its life with a return, much like normal functions.

function* infinite() {
  let index = 0

  while (true) yield index++
}

const generator = infinite() // "Generator { }"

console.log(generator.next().value) // 0
console.log(generator.next().value) // 1
console.log(generator.next().value) // 2

With this in place, I can have a generator run, get an async value, yield it out, then pass it back in, and it can run the next yield scope, and so on till it decides to stop. You need to understand that while yielding we are still resolving promises; async/await is nothing more than syntactic sugar for creating and resolving promises, and as always, it's also based on the concept of callbacks, hence each of them adds a slight delay compared to async functions written with just callbacks. But developer experience and sanity need to be kept in check, else we'll have all JS developers in some asylum, yapping about callbacks all day long.
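To make that concrete, here's a minimal sketch of driving a generator that yields promises, roughly the trick the async/await sugar compiles down to (error handling is kept deliberately simple here):

```javascript
// run() drives a generator: every yielded value is treated as a
// promise, and the resolved value is fed back into the generator
// until it returns
function run(genFn) {
  const gen = genFn()
  return new Promise((resolve, reject) => {
    function step(nextValue) {
      let result
      try {
        result = gen.next(nextValue)
      } catch (err) {
        return reject(err)
      }
      if (result.done) return resolve(result.value)
      Promise.resolve(result.value).then(step, reject)
    }
    step(undefined)
  })
}

// usage: reads like async/await, but it's just a generator
const double = n => Promise.resolve(n * 2)

run(function* () {
  const a = yield double(2) // 4
  const b = yield double(a) // 8
  return a + b
}).then(total => console.log(total)) // prints 12
```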

In technical terms, Javascript treating functions as first-class values improves composition, which is something functional programming languages generally have.

Composition works well, but then you've got to limit the level of abstraction you create (a rant for later).

Overall, a general idea of how the internals of the language dictate your control flow helps you make better choices. This can be seen in callbags, a spec that André Staltz came up with, which builds a Pub/Sub model using just callbacks and uses them for streams (again, I'll make a post about this as well).
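As a taste of that idea (a loose sketch, not the actual callbag spec), a pub/sub can be built from nothing but callbacks:

```javascript
// Minimal pub/sub where subscription, publishing, and even
// unsubscription are all just callbacks being passed around
function createTopic() {
  const listeners = []
  return {
    subscribe(fn) {
      listeners.push(fn)
      // the unsubscribe handle is itself another callback
      return () => listeners.splice(listeners.indexOf(fn), 1)
    },
    publish(value) {
      listeners.forEach(fn => fn(value))
    },
  }
}

const topic = createTopic()
const seen = []
const unsubscribe = topic.subscribe(v => seen.push(v))
topic.publish(1)
topic.publish(2)
unsubscribe()
topic.publish(3) // no longer received
console.log(seen) // [1, 2]
```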

Make sure you don't create a callback hell inside a thenable though.

That was long....

Adios!

]]>
Mon, 01 Sep 2020 00:00:00 +0000
https://reaper.is/writing/column-null.html COLUMN “” CONTAINS NULL VALUES | Postgres https://reaper.is/writing/column-null.html COLUMN “” CONTAINS NULL VALUES | Postgres

I don't maintain migrations, so I end up hitting this error almost every time I add a new column that's not nullable.

Here are a few queries you can execute for this.

Integer

ALTER TABLE public.<tablename> ADD COLUMN <column_name> integer NOT NULL default 0;

String

ALTER TABLE public.<tablename> ADD COLUMN <column_name> VARCHAR NOT NULL default ' ';

Boolean

ALTER TABLE public.<tablename> ADD COLUMN <column_name> boolean NOT NULL default false;
]]>
Mon, 18 Feb 2020 00:00:00 +0000
https://reaper.is/writing/covid-tracker.html Covid Tracker Attempt https://reaper.is/writing/covid-tracker.html Covid Tracker Attempt

I created this because I was bored and didn't have anything creative to make, and since everyone seems to be posting theirs, here's a minimal version that gets you the counts of various incidents.

Link

]]>
Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/css-resets-mnmlcss.html Consistent UI's with CSS Resets and announcing Mnml.css https://reaper.is/writing/css-resets-mnmlcss.html Consistent UI's with CSS Resets and announcing Mnml.css

Another post, another day. or was it supposed to be the other way....

Anyway, CSS Resets. Everyone's heard of Normalize.css? No? Are you serious!

Never mind. So, Normalize has this amazing set of CSS properties that someone decided to write so that the CSS we write is consistent across browsers, and this makes things easier.

Now, since this is possible, some people figured it'd be nice to have stylized resets, so you could just link them in the head element and voila! Beautiful HTML without any additional overhead.

A few examples would be

And these are actually really good, but there are certain things that I wanted to restyle and certain helpers that I keep adding to my CSS. I know Tailwind provides them! But I'm not going to add a full CSS library and then set up PurifyCSS when I can type it out in the same amount of time. So I picked one up and modified it to have my own set of resets, and here's where the shameless plug comes in.

Just like the above two, it uses its own resets. There are other projects where I managed to use this, and since they're all available on my GitHub, you can check the code to see the implementations.

You can check the git repo for the available classes and the resets that were modified.

]]>
Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/custom-roms-and-index.html Custom Roms and the need for a custom rom index https://reaper.is/writing/custom-roms-and-index.html Custom Roms and the need for a custom rom index

You already know where this post will lead: some side project I ended up doing during the weekend. But there's a twist, this wasn't done on a weekend. I did it on a weekday, and that's the only twist. Let's get to the post.

It all started with a general day at work: writing features, fighting bugs, with my brain just imagining me to be a superhero. Once I logged out, I'd just go sit with my parents and eat dinner or talk about random crap. During this I noticed Dad constantly patting the back of his phone like it was a TV remote and hitting it would magically make it work.

I grabbed his phone and randomly started doing intensive tasks, only to see that the phone wasn't just slowing down but literally begging for resources to work properly. That shouldn't happen to a decent phone, but it might just have been a faulty piece, and my dad doesn't really tell me when it's not working because then I'd instantly order a new one instead of trying to get it fixed.

Anyway, with the new knowledge that the king's telecommunications weren't working, the prince went ahead to look for another phone. I remembered that I could just swap the current ROM with a custom one to see whether it was the hardware or the software optimizations causing the issue. I reset the phone; it still hung itself to death and then came back to life after 2-3 minutes. Pretty irritating!

I then went ahead with the custom ROM approach, only to find out that the phone doesn't support bootloader unlocking, so I couldn't do anything but stare into darkness with it in my hand. Next steps?

Look for a phone that's fairly new, like released last year, and supports at least one of my 3 favorite ROMs:

  • Lineage
  • Pixel Experience
  • Evolution X

Sadly, there's no way to combine those 2 searches, so I ended up going through each of their websites, looking up the devices they support and the release date of each device. Luckily, each of those ROMs has a website that provides all the information I needed, but it was way too many clicks. That makes sense, because their audience is people who already own the phone, searching to see if their phone is supported.

Not people who plan to buy a phone only if it's supported by these 3. I sat for a few minutes doing this, making note of all the phones I'd checked, their release timelines, and the stability of the ROM, and then I was like, "Dude... you are a programmer. Script this out!" So I wrote a simple script that could scrape the LineageOS wiki and device release data and show it in descending order, and I could easily just select the device I could get for Dad.

Brain didn't really like the idea of a CLI script and went, "You know, you can build a website, right?" The rest of the story doesn't need to be explained. I ended up building https://cri.reaper.im, which has about 5 ROMs and the devices that support them, making it as easy as a click to find what to buy.

Check it out, or don't, who cares? I'm going to go get Dad a Pixel 4a.

]]>
Mon, 17 Aug 2021 00:00:00 +0000
https://reaper.is/writing/ec2-setup-checklist.html Checklist for EC2 Base Setup https://reaper.is/writing/ec2-setup-checklist.html Checklist for EC2 Base Setup










Document DB Checklist

Note: VPC Constrained - Won't work for public access





]]>
Mon, 18 Feb 2021 00:00:00 +0000
https://reaper.is/writing/efficient-software-instead-of-software-that-works.html Efficient Software vs Software that just works https://reaper.is/writing/efficient-software-instead-of-software-that-works.html Efficient Software vs Software that just works

I landed on this post today.

Link

This is a post from 2018, and I landed on it looking for a good way to build multi-platform apps while maintaining a really small footprint. The arguments and discussions in its comments were hilarious and kinda pleasing; it led me to a Rust-based webview for pure web apps on the desktop.

Anyway, I was close to locking in on programming the whole thing in C/C++ and using some abstractions to support all 3 platforms. All this headache because I realized that a lot of the tools I have installed are built with the Chromium engine as a window layer with Electron on top. A few to name would be VS Code, Postman, Spotify, and WebTorrent, which end up taking a lot of space on my disk versus software like Sublime and Transmission that hardly takes up any space. The only reason I don't use Sublime is that the packages I need are obsolete and need a little tweaking to get them to work. So I just end up letting VS Code eat my RAM and battery.

In case you're wondering, the disk/RAM/battery usage is important to me because I have an entry-level MacBook Pro with just 121GB of user storage, and that fills up quickly when you are a Node.js developer. Well, just 20 repositories have GBs' worth of node_modules, and then I even decided to download Xcode and Android Studio because someone wanted a Flutter and a React Native application as well.

I agree with tux0r; being a fellow minimalist, it would be nice if software was efficient and more focused on being performant while staying small, instead of:

"Yeah, this library, that lib, add all of them up and ship it, if the user needs the software he'll download the 200GB installation candidate!"

Now, tux0r's argument was specific to websites being huge RAM hogs, but in the comments he ends up supporting desktop apps for heavier tasks, and most people end up supporting Electron and JS as a language.

Now, I know it's not going to be ideal for businesses, as they want everything quick and not always done right, but a lot of us do build projects for fun, and we should actually try to build something that is a lot faster to use while not being a RAM hogger. A lot of people don't upgrade for a long time, and otherwise they can't really enjoy the software you built.

On a different note, I'm still unable to decide what I should be doing for that multi platform desktop app I wanted to build...

]]>
Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/embedding-files-go-lang.html Embedding Files in your Go Lang binary https://reaper.is/writing/embedding-files-go-lang.html Embedding Files in your Go Lang binary

We talked about how I built https://status.barelyhuman.dev and what I was doing to handle adding the html data into the final binary so that the hosting provider could render the html from a single binary.

You can read that post here

The previous solution works and was quite easy to do, but it obviously took away my speed since there's no syntax highlighting for CSS or HTML, as they are all just Go strings now. Plus, to format the HTML or CSS I have to move it to a separate file, format it there, and paste it back, which is really not something I wish to do if I have to maintain the repo for a long time.

So I started looking up ways, and while I knew about go:embed, I was waiting for the embed filesystem (FS) to get a little more stable; then I forgot about it and never checked the pull requests that were to be merged for embed FS. I happened to randomly check two days back, so I'm now going to talk about how embed works and how status is using it to handle the HTML and style templates.

html/template

This assumes that you've worked with html/template in Go before, and we build up on that. The basic html/template code flow is something like this:

  1. Parse the templates, either as files, a glob, or an entire file system with a certain pattern.
  2. Execute the template with the needed variable data for the actual compilation to be done.
  3. Write the compiled template to an io.Writer instance, in our case the HTTP socket.

embed

embed provides two ways to go about embedding files or other static data

  1. Read the file into a byte slice
  2. Read the file into the embed filesystem

If we were to use the first one the code would look a little something like

package extras

import (
	_ "embed"
	"html/template"
)

//go:embed templates/home.html
var homeHTML []byte

The directive you need to look at is go:embed. It allows you to give a path relative to the current file; you cannot use files from folders outside the directory of your current .go file, which is a current limitation of the pattern matching, but I really hope that changes in the future.

The compiler then reads and embeds the data into the variable homeHTML, and you can continue using the file like it's already been read. But in our case we have more than one template, and while you could add a directive for each variable, just know that go:embed has to be at the top level and cannot be inside a function (at least, as I'm typing this).

The other way involves using the embed FS, which basically does the same thing but instead creates an in-binary filesystem that you can read through. This is built on Go's existing FS interface, so you can use the embed FS wherever the FS interface is supported, which html/template does. We are going to use that, since I don't want to manually add an embed directive for every file when I am parsing all templates anyway.

The code for the same would look something like this

package extras

import (
	"embed"
	"html/template"
)

//go:embed templates/*.html styles/*.html
var embedFS embed.FS

// GetTemplates - parse and get all templates
func GetTemplates() (*template.Template, error) {
	allTemplates, err := template.ParseFS(embedFS, "templates/*.html", "styles/styles.html")
	if err != nil {
		return nil, err
	}
	return allTemplates, nil
}

The line of focus is again the go:embed comment, but also look at the next line this time: we now have a variable of type embed.FS which, as mentioned before, is an implementation of the filesystem interface that Go already has for other IO interfaces. Since html/template lets you parse from a filesystem, and the parameters after the FS are patterns (it's variadic, ...string, so effectively infinite patterns on the FS), everything gets parsed into allTemplates.

At this point, all I do is call GetTemplates from wherever I need them, and follow the normal template code flow

  1. Compile them with the required variable data
  2. Pass them to the io.Writer interface, in our case the http socket

And we're basically done. As a user, this won't make a difference in what you see when you open status, but as a dev, it made it a lot easier for me to handle changes in the HTML and style files.

]]>
Mon, 22 Jul 2021 00:00:00 +0000
https://reaper.is/writing/fixing-mjs-imports-webpack-4.html Fixing .mjs imports with ESM libraries in Webpack 4 https://reaper.is/writing/fixing-mjs-imports-webpack-4.html Fixing .mjs imports with ESM libraries in Webpack 4

No long story for this one.

If you are working with Webpack 4, with or without TypeScript, there's a good chance that webpack is complaining about .mjs files and not being able to import stuff from them.

2 solutions.

  1. Migrate to Webpack 5
  2. Configure Webpack 4 to remove the strict module loading so it can just bundle the .mjs files as normal .js files

For those who'd go with the 2nd one, here's how.

Considering the below is your webpack config, add another rule to the array of rules.

const config = {
  module: {
    rules: [
      // ... all your loaders/rules
      // add the below rule
      {
        type: 'javascript/auto',
        test: /\.mjs$/,
        use: [],
      },
    ],
  },
}

If you are using create-react-app, the default babel-loader tries to load the .mjs files, but other .mjs rules conflict with it. So instead of trying to handle every conflict, just let webpack know that it should treat .mjs files as plain JavaScript files to compile as normal, and it'll take care of it.

Hope that helps someone.

]]>
Mon, 30 Jul 2021 00:00:00 +0000
https://reaper.is/writing/get-it-out-of-your-head.html Make it, just to get it out of your head. https://reaper.is/writing/get-it-out-of-your-head.html Make it, just to get it out of your head.

Now, I've been the guy who throws out repositories left, right and center, hasn't completed projects, and mostly leaves them pending for a really long time unless something really motivates me to work on that project again.

I still end up making these things again and again just because I think that it's good to get that thought out of your head and out into code because you get to the point where you realize whether that project is actually going to be something you yourself would use.

I ended up building a lot of stuff that I do personally use and maybe no one else does because I haven't published anything about those projects anywhere and I'm the only one who knows about their existence. On the contrary there's double the number of repos that even I don't care about anymore.

The whole process of taking a thought and getting it out into the world of code walks you through scenarios that make you realize, and sometimes learn, things you might be doing wrong.

I've got this really old music player that I built using RiotJS: 0 folder structure, 0 modularity, just plain working code. When I went back to it to make changes or add features, I realized that I don't plan out my projects and jump straight to prototyping. The next time, I built the same thing in vanilla JS, and the code was 100 times better because I had a mental image of the things I needed to separate and the things that needed to be maintained to avoid spaghetti code in the later stages of maintenance.

These subconscious decisions help you out in the later stages a lot.

I don't use that player anymore, but it taught me something. So did the todo list (which everyone has built at some point in their coding life), and so did the other 100 repos that I have on my GitHub.

And after all of this reading, the moral is basically the title of this post.

]]>
Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/gitscaff-git-repos-boilerplate.html Using existing GitHub repositories as boilerplate https://reaper.is/writing/gitscaff-git-repos-boilerplate.html Using existing GitHub repositories as boilerplate

GitScaff is a tool I made: a simple wrapper around the git clone command that makes it a little simpler to clone repositories as templates from existing GitHub/GitLab repositories. It supports private repositories and GitLab repository grouping, so if your boilerplate is a private repository, you can still use this utility, because at the end of the day it is still a simple git clone.

Why though?

No real reason. I was using degit for a while, but it became an issue because I couldn't use it to clone private template repositories, and it didn't support GitLab's repository grouping either. It kinda bummed me out to have to clone and then get rid of the .git folder and all, which I could do with a single command using an alias (which I should have...) in Linux.

Anyways, I had time to kill so I built this to do that for me.

]]>
Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/hello-world.html Writing cleaner state in React and React Native https://reaper.is/writing/hello-world.html Writing cleaner state in React and React Native

Ever since hooks got introduced in React, it became a lot easier to handle composition in React components, and it also helped the developers of React handle the component context a lot better. As consumers of the library, we could finally avoid having to write this.methodName = this.methodName.bind(this), a redundant part of the code that led a few developers to write their own wrappers around the component context.

But that's old news, why bring it up now?

Well, as developers, there are always some of us who just go ahead and follow the standard as-is even when it makes maintenance hard, and in the case of hooks, people seem to ignore the actual reason for their existence altogether.

If you witnessed the talk that was given during the release of hooks, this post might not bring anything new to your knowledge. If you haven't seen the talk:

  1. You should.
  2. I'm serious, go watch it!

For the rebels who are still here reading this, here's a gist of how hooks are meant to be used.

Context Scope and hook instances

If you've not seen how hooks are implemented, then put simply: a hook gets access to the component it's nested inside and has no context of its own. That gives you the ability to write custom functions that contain hook logic, and now you have your own custom hook.

Eg: I can write something like this

import { useEffect, useState } from 'react'

function useTimer() {
  const [timer, setTimer] = useState(1)

  useEffect(() => {
    const id = setInterval(() => {
      // functional update: no stale closure over `timer`,
      // so the interval is created once instead of on every tick
      setTimer((t) => t + 1)
    }, 1000)

    return () => clearInterval(id)
  }, [])

  return {
    timer,
  }
}

export default function App() {
  const { timer } = useTimer()

  return <>{timer}</>
}

And that gives me a simple timer, though the point is that now I can use this timer not just in this component but any component I wish to have a timer in.

The advantages of doing this

  • I now have an abstracted stateful logic that I can reuse
  • The actual hook code can be separated into a different file and break nothing, since the hook's logic and its internal state are isolated.

This gives us smaller Component code to deal with while debugging.

What does any of that have to do with state!?

Oh yeah, the original topic was about state... The other side of having hooks is the sheer quantity of them people spam the component code with, and obviously the most used one is useState.

As mentioned above, one way is to segregate it into a separate custom hook, but if you have like 10-20 useState calls because you are using a form and for some weird reason don't have Formik set up in your codebase, then your custom hook will also get hard to browse through.

And that's where I really miss the old setState from the days of class components. There have been various attempts at libraries that recreate setState as a hook, and I also created one, which we'll get to soon. The solution is basically letting the state clone itself and modify just the fields that changed, not that hard, right?

You can do something like the following

const [userDetails, setUserDetails] = useState({
  name: '',
  age: 0,
  email: '',
})

// in some handler
setUserDetails({ ...userDetails, name: 'Reaper' })

And that works (mostly), but it also adds that additional ...userDetails every time you want to update state. I say it works mostly because these objects come with the same limitations as any JS object: the cloning is shallow, and nested state will lose data unless cloned properly. That's where it's easier to just use libraries that handle this for you.
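
To see the shallow-cloning pitfall concretely (with a made-up nested userDetails shape for illustration):

```javascript
const userDetails = {
  name: 'Reaper',
  contact: { email: 'hi@example.com', phone: '123' },
}

// The top-level spread copies only the first level,
// so replacing `contact` drops the nested keys we didn't re-specify
const next = { ...userDetails, contact: { email: 'new@example.com' } }

console.log(next.contact.phone) // undefined: the nested data is gone
console.log(next.name) // 'Reaper': top-level fields survive
```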

I'm going to use mine as an example but you can find more such on NPM.

import { useSetState } from '@barelyhuman/set-state-hook'
import { useEffect } from 'react'

function useCustomHook() {
  const [state, setState] = useSetState({
    nested: {
      a: 1,
    },
  })

  useEffect(() => {
    /* 
      setState({
        nested: {
          a: state.nested.a + 1
        }
      });
    // or 
    */

    setState((prevState, draftState) => {
      draftState.nested.a = prevState.nested.a + 1
      return draftState
    })
  }, [])

  return { state }
}

export default function App() {
  const { state } = useCustomHook()
  return <div className="App">{state.nested.a}</div>
}

I can use it like I would the default class-style setState, but if you go through it carefully, I actually mutated the original draftState. That's fine because @barelyhuman/set-state-hook creates a clone for you, so you can mutate the clone, and when you return it, it still triggers a state update without actually mutating the older state.
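
Framework aside, the core clone-then-merge idea such a hook relies on can be sketched in plain JS. This is a rough illustration, not the library's actual implementation:

```javascript
// Rough sketch of the clone-then-merge logic behind a setState-style hook.
// Not the actual @barelyhuman/set-state-hook implementation.
function applyUpdate(prevState, updater) {
  // deep clone so callers can mutate the draft freely
  const draft = JSON.parse(JSON.stringify(prevState))
  if (typeof updater === 'function') {
    // callback form: hand over (prevState, draftState)
    return updater(prevState, draft)
  }
  // object form: shallow-merge the patch over the draft
  return Object.assign(draft, updater)
}

const prev = { nested: { a: 1 } }
const next = applyUpdate(prev, (prevState, draftState) => {
  draftState.nested.a = prevState.nested.a + 1
  return draftState
})
// next.nested.a === 2, while prev.nested.a is still 1
```

A real hook would wrap this in useState so that returning the clone triggers a re-render.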

Summary

  • Use custom hooks to avoid spaghetti state and effect management code
  • Use a setState replicator if you are using way too many useState hooks

Make it easier on your brain to read the code you write.

]]>
Mon, 30 Aug 2021 00:00:00 +0000
https://reaper.is/writing/i-owe-the-oss-community.html I owe the OSS Community https://reaper.is/writing/i-owe-the-oss-community.html I owe the OSS Community

I've been asked a few times how I learn things and why I support open source so much that I'd like to work for a company with a dedicated OSS team. I could answer this in one line, or I could write a whole essay.

One Line Answer

I learn from open source repositories and study them to figure out stuff and things that exist out there.

The Essay

We all started somewhere: the point in time where we knew basically nothing. I like calling it the Dumb Point, and every person you know has been there, and then their decisions got them to where they are today.

Dumb Points are basically where you might know enough to do something but you don't fully understand what's going on. And thinking that once you've grown at a certain skill you won't have dumb points is another mistake people make. I've been with JS for about 2.5 years industrially, and about 4 years if you count the projects I did in college to learn JS in the first place; taking into consideration that I was coding in Python way before that would turn it into 8 years' worth of experience with code, but I still have dumb points.

I had one last month when I was learning Rust. Now, Rust isn't hard or anything, but there are obvious differences it has compared to the languages I'm used to.

I've worked with statically typed languages before: Java, C, and in recent years TypeScript (I know it's a toolset and not an actual language, but you get the idea). But Rust has the concept of pattern matching and of every line being a typed expression, which does create friction when you start off, and there are numerous other things that make it a little harder to get used to Rust code: macros, variable shadowing, etc. Point being, I needed help to understand what was going on, and like anyone else I'd turn to Google for this, but I don't open up Stack Overflow or blogs for these anymore.

My first point of learning is the docs, always. That's where I look for answers, and most of the time good tools have sufficient documentation for you to refer to. The second point of learning for me is git repositories using that particular tool. Taking Rust as an example again, I'd look for GitHub repositories using Rust and try to go through them as much as I can. A lot of the time people add helpful comments on functionality that is a little more complex, and that helps you out too.

Overall, I study codebases to understand approaches, and since I end up going through a variety of repositories, I find various ways to implement similar things. I then go with the approach that's both easy to read and performant; another approach might be more performant, and I'd use it when building something for production, but for initial learning and understanding I'll pick the readable version.

Now, since I do this a lot more with JS libraries and frameworks right now, I get to go through the package.json files, which contain the dependencies the projects use. I normally have numerous tabs open, one for each npm package I don't know about, and this opens up even more repositories for me to study. Obviously, I now also know of more packages that can help with a certain task if needed in the near future.

And even though the knowledge I've gained in years is not even close to 1% of what's out there, I still owe a debt to the open source community and hence most of what I build is open source.

]]>
Mon, 02 Aug 2020 00:00:00 +0000
https://reaper.is/writing/importance-of-development-tools.html Importance of Development Tools https://reaper.is/writing/importance-of-development-tools.html Importance of Development Tools

It's hard to not talk about this topic, I've always been the kind of person who'd prioritize learning over everything and have no regrets. I've done that while building most of my projects. Better project structures, easier ways to implement a certain functionality while maintaining a clean code, all of these have come after a lot of trial and error and there's still times where I'd give up on clean code just to get a hacky version of my idea up and working.

But this post isn't supposed to be about learning. This post is about why development tools are an important part of a developer's life and why they should focus on growing into a power user of every tool in their dev toolchain. For the people who think this is supposed to be obvious: you'll find out why I'm writing this.

I've seen people use VSCode like they were using notepad++. No hard feelings towards notepad++ but the whole point of having something feature rich like VSCode is to make it easier and faster for you to work with. People seem to be surprised when I use multi-cursors in the editor to edit multiple lines of code.

The code editor is not the only tool I'd like to focus on. Database GUI clients, various browser debugging environments, platform-specific IDEs (Android Studio / Xcode), deployment tools, CI/CD: all of these have a ton of features that can help you automate, and if not automate, maybe just make things easier and sometimes faster for you.

Even though I am trying to convince you to use tools, I don't want people to stop going through the raw basics of each of the technologies that they use the tool for. I still want you to learn to use something like vim, I still want you to learn raw sql queries, you should already know why the linters need to exist.

Examples

  • Instead of Browsing for a file through the file tree, use the super search provided by your code editor.
  • Use a GUI to manage your remote and local database instances instead of writing redundant create/delete queries for it. The GUI tool does allow custom queries, and the tables are visually a lot more pleasing to go through. (That doesn't mean you stop learning SQL/NoSQL queries altogether and just depend on the tool.)
  • Learn to use a git diff tool like Sublime Merge or GitKraken to make merge conflicts and git history tracking a breeze. (That doesn't mean you shouldn't learn git basics.)

TLDR

  • Learn the basics (Advanced topics too, if you can) of all the technologies that you use.
  • Find tools that suit your requirements, and don't use heavy tools if you're not going to use their features. You are just wasting your system memory in most cases.
  • Grow to be a power user of each tool you use, learn the keyboard shortcuts, if the tool can be configured, figure out various ways to configure it. If the tool has a plugin system, maybe even build a plugin to know about it inside out.
  • Don't skip the 2nd point!
]]>
Mon, 03 Mar 2020 00:00:00 +0000
https://reaper.is/writing/is-life-hard.html Is Life really hard ? https://reaper.is/writing/is-life-hard.html Is Life really hard ?

This might be just another life lesson post on the internet, but it also might be worth a read.

Let’s GO!

Life is easier than you think and harder than you imagine. Your everyday decisions are responsible for what happens with you and also with people connected to you.

Every decision that you take, leads you to an experience. This experience might not be a good one but if you do accept the fact that you were wrong then this will help you learn a lot.

Let’s imagine life as an Open World Game, where you decide whether you’re going to go for the main quest or complete a side quest.

If you chose to complete the side quest, it might lead to another side quest or might give you a clue about the main quest or might get you armed with weapons or other utilities that might just make it a little bit easier for the main quest.

If you chose to complete the main quest, then you end up progressing in the story and learn a little more about the other characters around you. During the completion of the story your opinion will change based on the characters, you develop smarter ways to complete the quest and then the game’s main story line is basically done. You missed the side quests but completed the story successfully anyways.

Unfortunately, life doesn’t guide us the way the game does so here our own decisions help us get through it. These decisions when wrong will help you learn about what has to be improved in your life and when correct, will help you advance in your journey.

This journey that I'm talking about doesn't have to have a target or a goal. We set targets based on things we'd like to accomplish. We make life harder for ourselves by trying to compare ourselves with other people. We mentally amplify our own insecurities and end up leading a life where we stress about everything we do and regret things that we did.

We try to convince others and ourselves that we are better than the person who has achieved something. When the reality is that, it doesn’t matter what you’ve achieved. Let’s try to understand this.

Will a Lamborghini make you happy?

Most of you are straight off going to say "YES! IT WILL!", and then there will be people who don't like the car and will say "Nah, insert car name here is better, and that'll make me happy", and then there might be people who just don't like cars and prefer bikes, and this goes on. The point is, it's a luxury.

These things make us feel good about what we have accomplished. Owning a costly watch, car, bike, et cetera! Which isn't what makes us happy. I mean, maybe for a few days or months, but that's about it. The fact that we all crave financial comfort more than mental comfort is what makes life harder.

The fact that we don’t have to worry about financial problems anymore is what makes you think that you’ve achieved success.

Now, success is defined differently by different people. Some define it as Fame, some may define it as Fortune and some would like to define it as Happiness. If you’ve achieved all 3, then I envy you and respect you.

If you’ve achieved either one or two, I still respect you. It doesn’t matter. There is no difference as to what you think of success to be , because there is no rule book that says that you’ve got to be rich, you’ve got be strong, you’ve got to be a genius. The world has never limited a person to anything. The standards we set in our mind for “Good Life / Happy Life” is what makes us feel that we haven’t reached to the point we deserve to be on. There is no competition unless you think of it as a competition. If you’d like to compete will people for fortune, do it if it makes you feel good.

So, Is life really hard? I don’t think so. I’ve had my down times but they all helped me out. It is a diplomatic answer but trust me, there’s no easier way to explain it. Life is as hard as you make it. We make it hard by overthinking about stuff, we jump to conclusions without a proper thought. We blame others to convince ourselves that what we decided to do was right.

The day we stop worrying so much about what will happen and what has happened, life will get a lot easier. Just know that it doesn’t matter what you choose to be or achieve. You’ll achieve it sooner or later and that’ll motivate you enough to aim for something a little bigger.

Anyway, have a great one.

]]>
Mon, 01 Oct 2017 00:00:00 +0000
https://reaper.is/writing/love-faded-wallpapers.html My Love for Faded Wallpapers

I'm the kind of guy who changes his wallpapers a lot. Like... A LOT!

But, there's this thing that I fancy. Almost all wallpapers that I download, I modify them to have a slightly washed out or faded look, if they weren't already that way.

A few examples of what I'm talking about can be found here (note: these are not mine, just examples).

And since it's normally the same set of adjustments in GIMP for almost all images, I thought I'd write a service. Which I did somewhere around 3rd Jul 2019, exactly a year ago, and I've been using it since.

Never really shared it since I didn't think many people would want to use it, but that's not something I should be deciding, so I'm going to post the link to it here.

Washed

]]>
Mon, 02 Jul 2020 00:00:00 +0000
https://reaper.is/writing/macbook-setup-checklist.html Checklist for Macbook Setup

I wipe my Macbook quite often as a chore, once every 4-5 months; it helps me get rid of stuff on the SSD that I probably don't use anymore but that's there just because I'm not browsing through the entire SSD.

This isn't a blog type post but more like a checklist of things that I need to make sure I do before I send everything on the drive to hell.

Pre-Wipe

Things before wiping the system.

  • Backup .ssh folder, them keys are important!
  • Make sure you make a Time Machine backup to an external ssd / flash drive
  • Move notes from various folders to the SSD (make a good notes app, idiot! Stop putting everything here and there on the system)
  • Don't forget to grab the vimrc and vscode config; update an existing gist or, for the sake of god, be a little smarter and put them up on your website, the collections section is there for a reason.

Post-Wipe

  • Setup Homebrew

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    
  • Update Brew and install Git, Wget and other base level tools

    brew install git yarn make fastlane
    
    # now for the UI tools
    brew install clean-me visual-studio-code google-chrome iterm2 docker vlc postgres adoptopenjdk/openjdk/adoptopenjdk8
    
  • Add ZSH Suggestions

      git clone https://github.com/zsh-users/zsh-autosuggestions ~/.zsh/zsh-autosuggestions
    
      # Add the following line to .zshrc
      source ~/.zsh/zsh-autosuggestions/zsh-autosuggestions.zsh
    
    
    
  • Next up! Programming Language Support

    • Go Lang: https://golang.org/dl/

    • Node

      curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | zsh
      
      nvm install --lts
      nvm install 10
      nvm install 12
      nvm alias default 12 # most projects depend on this to be the min version for me right now
      
  • Copy back the .ssh folder in place and do a dummy connect to a certain project system for the ssh identities to be loaded automatically

  • Disable Keyboard corrections and other improvements from the keyboard settings

  • Oh, btw, did you enable opening apps from identified developers? Do it then!

  • Let's set up both the editors: restore the backed up editor configs from the pre-wipe, download the needed fonts for vscode and vim, and while we are at it, download Sublime Merge as well.

    • Not done yet!! Who is going to install vim-plug? You think the plugins will just start working!?

    • Download plug.vim and put it in the "autoload" directory and then run the below command

      curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
          https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
      
      
    • then we just open vim and run :PlugInstall

  • Xcode and Android Studio: download them or check if the SSD has the latest version; if they're already there, create symlinks from there to $HOME/ExternalApplications so we can save some space on the SSD.

  • Open Xcode, change the derived data and archives folders to point to the external disk

  • Install ngrok - brew install ngrok

That's about it reaper, go sleep now, it's 4 in the morning, maybe start doing this a little early the next time.

]]>
Mon, 05 Feb 2021 00:00:00 +0000
https://reaper.is/writing/my-vscode.html My VSCode, Visual Settings (Theme, tab layouts, etc)

Now this doesn't really need a huge write up but I'll just link you to a Gist that'll give you the whole settings.json.

Changed my mind, I'm gonna link a few things that I'm using anyways.

Theme

I switch between these 3 themes

Material Theme

Ayu Theme

Min Theme

Font

It's a custom Ubuntu Mono font with ligatures added. You can find it at lemeb/a-better-ligaturizer, or customize any of your own fonts to use ligatures as well.

Fira Code is another one that I like.

Plugins

Music Time - Control Spotify with your code editor

PolaCode - For Epic Code screenshots

And that's about it, I do have other language specific plugins but I don't think I need them posted here.

]]>
Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/nextjs-pg-knexjs.html Solution to multiple connections with knex while using Next.js

The Issue

Using next.js has its own advantages, and I'm not going to go through them in this post, but one major blocker while building TillWhen was the number of database connections each API request was creating. Initially I thought it was just the constant restarts of the server I was making that led to the 30+ connections, but I remember setting PG to disregard idle connections after a minute.

Anyway, soon it was obvious that the knex connections I created weren't getting destroyed and there was a new connection every time I made a request.

Now, even though this could be easily solved for mysql using serverless-mysql, which manages connections for serverless environments, and I could even use the pg version of the above, serverless-pg, we already had the whole app built with knex.js and I didn't wanna rewrite every query again, so I had to find a better way.

I had 2 solutions at this point.

  • Memoize the connection.
  • Destroy the connection on request end.

Solution #1 - Memoize

Now, I assume that you have one file that you maintain the knex instance in, if not, then you are going to have to do a lot of refactoring.

Let's get to creating a knex instance, but with a simple variable that stores the connection instance, so that on the next request the same one is sent back to the handler using the db instance.

utils/db-injector.js

const dbConfig = require('knexfile')
const knex = require('knex')

let cachedConnection

export const getDatabaseConnector = () => {
  if (cachedConnection) {
    console.log('Cached Connection')
    return cachedConnection
  }
  const configByEnvironment = dbConfig[process.env.NODE_ENV || 'development']

  if (!configByEnvironment) {
    throw new Error(
      `Failed to get knex configuration for env:${process.env.NODE_ENV}`
    )
  }
  console.log('New Connection')
  const connection = knex(configByEnvironment)
  cachedConnection = connection
  return connection
}

We now have a variable cachedConnection that either already holds an instance or, if not, has a new one created and assigned to it. Now let's see how you would use this in the request handlers.

controllers/user.js

const { getDatabaseConnector } = require('utils/db-injector')

controller.fetchUser = async (req, res) => {
  try {
    // the connector returns the (possibly cached) knex instance;
    // the where clause is just an illustrative filter
    const data = await getDatabaseConnector()('users').where('id', req.params.id)
    return res.status(200).send(data[0])
  } catch (err) {
    console.error(err)
    throw err
  }
}

At this point you are almost always getting a cached connection. I say almost always because utils/db-injector.js itself might get re-initialized by next.js, and you'll have a connection that's still hanging out with knex for longer than intended. This isn't much of an issue, but if you are like me and don't want this to exist either, let's get to the second solution.
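A common workaround for that module re-initialization caveat (my addition, not from the original solution) is to cache the instance on globalThis instead of module scope, since globals survive the module being re-evaluated. Here's a dependency-free sketch where createConnection stands in for knex(configByEnvironment):

```javascript
// Sketch: memoize a factory's result on globalThis so a module re-evaluation
// (what Next.js does on hot reload) still finds the cached instance.
// `createConnection` is a stand-in for `knex(configByEnvironment)`.
function memoizeOnGlobal(key, factory) {
  if (!globalThis[key]) {
    globalThis[key] = factory()
  }
  return globalThis[key]
}

let connectionsOpened = 0
const createConnection = () => ({ id: ++connectionsOpened })

const first = memoizeOnGlobal('__dbConnection', createConnection)
const second = memoizeOnGlobal('__dbConnection', createConnection)
console.log(first === second) // → true
console.log(connectionsOpened) // → 1
```

Even if the module is re-evaluated and memoizeOnGlobal is redefined, the __dbConnection key on globalThis still points at the old instance, so no second connection is opened.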

Solution #2 - Destroy!

Yeah, we mercilessly destroy the connection with the database after each request to make sure that there's always only one connection per request. The peak of optimization! This should've been handled by knex, but let's not blame knex!

Anyway, the 2nd solution required a simple higher-order function that would

  • take in the request handler
  • give it a connection instance
  • wait for it to complete the request
  • destroy the connection

We start by modifying the db-injector to create a new instance every time instead of caching, because the cached instance won't exist anymore and will give you an unusable knex connection or no connection at all. Let's do that first.

utils/db-injector.js

const dbConfig = require('knexfile')
const knex = require('knex')

export const getDatabaseConnector = () => {
  return () => {
    const configByEnvironment = dbConfig[process.env.NODE_ENV || 'development']
    if (!configByEnvironment) {
      throw new Error(
        `Failed to get knex configuration for env:${process.env.NODE_ENV}`
      )
    }
    // a fresh connection per call; the caller is responsible for destroying it
    return knex(configByEnvironment)
  }
}

We now have a new connection on every request; let's write the higher-order function so it can destroy the connection and relieve the DB of its misery.

The higher-order function, as said, is going to be very simple: it just takes in the handler, waits for it to complete the request, and then we destroy the connection.

connection-handler.js

import { getDatabaseConnector } from 'utils/db-injector'
const connector = getDatabaseConnector()

export default (...args) => {
  return fn => async (req, res) => {
    req.db = connector()
    try {
      await fn(req, res)
    } finally {
      // destroy even if the handler throws, so connections never leak
      await req.db.destroy()
    }
  }
}

Why do I pass in req.db? Reason being that if the handler keeps importing the db itself, the higher-order function has no way to destroy that exact instance; hence we init the db instance and destroy it here. It's a simple form of self-cleaning.

pages/api/user/index.js

import connectionHandler from 'connection-handler'

const handler = async (req, res) => {
  try {
    if (req.method === 'GET') {
      const { currentUser } = req
      const data = await req
        .db('users')
        .leftJoin('profiles as profile', 'users.id', 'profile.user_id')
        .where('users.id', currentUser.id)
        .select(
          'profile.name as profileName',
          'profile.id as profileId',
          'users.id',
          'users.email'
        )
      return Response(200, data[0], res)
    } else {
      return res.status(404).end()
    }
  } catch (err) {
    return res.status(500).send({ error: 'Oops! Something went wrong!' })
  }
}

export default connectionHandler()(handler)

And finally, I'm showing a generic Next.js handler here instead of the full-fledged controller from the earlier example, since the higher-order function is added here and not in the controllers. So the only modification you'll have to make to all the route handlers is: instead of exporting the handlers directly, export a version wrapped in the higher-order function.
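If you end up with more than one of these wrappers (say, a hypothetical withAuth that sets req.currentUser, plus the connection handler), a tiny compose helper keeps the export readable. This is my own sketch, not something from the post:

```javascript
// Compose higher-order handler wrappers right-to-left, so the first
// wrapper listed is the outermost one and runs first.
const compose = (...wrappers) => handler =>
  wrappers.reduceRight((wrapped, wrap) => wrap(wrapped), handler)

// Tiny demo wrappers standing in for withAuth / connectionHandler():
const order = []
const withAuth = fn => async (req, res) => {
  order.push('auth') // e.g. attach req.currentUser here
  return fn(req, res)
}
const withDb = fn => async (req, res) => {
  order.push('db') // e.g. attach req.db, destroy it afterwards
  return fn(req, res)
}
const handler = async (req, res) => {
  order.push('handler')
  return 'done'
}

// in a real route: export default compose(withAuth, withDb)(handler)
compose(withAuth, withDb)(handler)({}, {})
console.log(order) // → [ 'auth', 'db', 'handler' ]
```

reduceRight wraps from the inside out, so compose(withAuth, withDb) means auth runs before the db wrapper, which runs before the handler.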

]]>
Mon, 29 May 2020 00:00:00 +0000
https://reaper.is/writing/off-grid-digitally.html Off-Grid Digitally

People who know me have a brief idea of how opinionated I am as a person and how I'd normally push back on something I think is illogical in the broader picture, no matter how emotionally attached the other person might be to the decision, and this leads to a lot of arguments or discussions with certain people.

Now, why do you need to know that about me and how does it relate to this post?

Because that is one of the reasons why I avoid social media and talking to people in general. Having a blog with no comments section gives me the power to not have to worry about what the response is going to be, and that freedom lets me speak out clearly. While I can do that in person too, here the whole internet won't jump on me for something I think is right.

What's with going Off-grid?

Off-grid itself means living without dependence on public utilities: farming/hunting your own food, building everything yourself, basically being able to get anything and everything done the tribal way and living off of your own skills.

While I might end up that way sometime in the future, I can't do it right now, so instead I'm onto something I can do: getting rid of stuff that's unnecessary and that takes my time away from things I like to do, and that's the use of social media.

I've been moving away from social media for a while now: destroyed my facebook account during college in 2017, stopped double tapping on instagram around early 2019 and tested micro blogging on twitter for the past few weeks.

The only "social" profile I have had active for the longest time is LinkedIn and that's because it's easy to forget about and just have the profile exist as an online resume.

Not that I need one anymore, because if I do ever resign from the current job it'll be to work on solo projects, much like Drew DeVault.

Yeah, I might delete my linkedin soon too. Not sure how soon, but soon. And this doesn't just include social media; it also includes instant messaging platforms like WhatsApp, which I've been on and off multiple times now because people force me back onto it for that 5 min conversation we could've had on a call. But anyway, that's gone this time, and permanently.

How do we contact you then?

The people who work with me already have slack for that, and for everyone else, this website isn't going anywhere and gives you a mode of contacting me, right on the homepage.

Self Hosting stuff

The real reason I'm going to be able to sustain this is by having tools available to me on my local servers. I've had multiple local servers just sitting there working as side runners to multiple personal apps I've built over the years.

Since the plan is to get away from dependencies on online SaaS services. I have decided to move to self hosted options of most of what I use and while I will be using these self hosted apps to start with, I'll be moving to versions I build for each of these.

  • Github - Gogs | Sourcehut | Gitlab Self Hosted
  • Gmail - Mailcow
  • Digital Ocean Containers - Raspberry Pi Clusters
  • 9Gag/Reddit - I don't think I need accounts to browse these, so I'll be fine
  • Dev.to - I've got this blog and I mostly post everything here so Dev.to account makes no sense anyway.
  • Vercel/Heroku - Ah, these I'm going to miss but, I do have dokku setup in most deployments so it's okay.
  • Github Actions - The automated runners; luckily sourcehut provides these and so does gitlab.

Running away Loner?

I wish. I've got a god complex, I know, you know, everyone knows.

Let's get back to the first paragraph now. In my opinion, it's easier for me to be productive when things are under my control and not under some guy with a lifetime of data on me, and hence, without giving two cents about what people will think, I planned all of this.

Why this whole post then?

Oh, this is so I don't have to explain it again to each person that has a question, "Why no whatsapp?" or "Why can't I find you anywhere online?", and also for the never ending list of "friends" who go "No, I don't need anything, just casually texted" quite often (remember, the homepage has you covered).

Security

Won't you be giving away your IP address when you set everything up locally? Most of what I build is already open source and doesn't really take in any user data other than an email for verified logins.

Also, my IP/VPN is tracked by my ISP, google... everything that's online, so it makes no sense to try to hide it anyway. Can't get to a zero digital footprint when I'm a coder anyway.

Plus, after this movement I plan on making everything I ever built easily self hostable on various online providers like heroku and vercel for people who'd like to use them; then the security of using them becomes your concern and not mine anymore.

Adios.

]]>
Mon, 19 Aug 2020 00:00:00 +0000
https://reaper.is/writing/opensource-hiring-platform.html My take on an open source job listing platform?

Yeah yeah, another hiring platform, we've got so many, why would you build another? And it looks so crappy, why would anyone use it?

Will answer all of that, give me a sec!

Now, Let's see. Click on the link below to get to it.

HireMe

What's different!?

Nothing actually, it's a lot worse than most of the sites out there at the moment. Reason being that I spent only a few hours building this monstrosity.

Why would I use it?

You don't have to. I built it because of all the people that have been laid off by their employers due to recent events, and I thought it'd be nice to have another platform for them to try out.

And I want people to actually get in touch with the candidate instead of just judging and throwing away a profile based on what's written on it.

I know it saves them time to filter out unneeded profiles, but then you miss out on candidates that might be worth the time. This is pretty opinionated, so let's not get into a huge debate about it, cause at the end of the day I'd still apply for a job that states 5 years of experience even when I hardly have 3.

You really think huge companies would get on it?

I don't know, and I don't have that expectation, not when I haven't added all the features I think the platform should have. It's an MVP, enough to get 2 people connected. There's no email verification, so yeah, I could have huge amounts of junk signups by tomorrow morning, thus pushing people away from actually using it.

Some might even say I built a platform that's built for spammers to abuse. While I do realize that these issues exist. I'm still throwing it out and will be working on improving it as soon as I can.

But still, the point of its existence is to help people out; whether people use it for the reason it was built or make it a junkyard is on them.

I'll keep improving the site as I get more and more time, but for the time being, since I need Iron Man playing in the background to work at full speed, this is going to be it.

People are allowed to raise issues and send PR's.

It's open source for a reason!!

Update: The app now has Magic Link based login and signup, but sendgrid seems to have a heavily loaded queue due to the twilio hackathon, so I don't think the emails are getting delivered at the speed they should. Will shift to nodemailer later. The UI has been beautified a little bit too, if that matters.

Update: Sendgrid servers seem to be back to normal and the app's login is now functional.

]]>
Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/product-development-and-a-developers-role-2.html Product Development and a Developer's Role - Part 2 https://reaper.is/writing/product-development-and-a-developers-role-2.html Product Development and a Developer's Role - Part 2

Do read the first part before you continue to read this.


We stopped at the stack last time and now what remains is the evaluation part. Most of you would be like, why another post for just the evaluation part, why not just add it into the first part?

The reason is that while the post is targeted towards the developer's role in each phase, this phase has a few ways to do things and I wanted to go through each one of them.

The developer's role stays constant here, mostly within the boundaries of bug fixing and hacking together solutions for things that need to be handled right before the initial launch, or whatever phases the product is to go through before the targeted users get to it.

These are the ones I've worked with, and there are obviously better or worse ways to evaluate, but these are based on just my knowledge at this point; as that changes, you can expect a better post later on.

Evaluation Methods

To be fair, there's like a lot of these,

  • UAT - User Acceptance Testing
  • Dogfooding
  • Automations

These are all good, but we have to understand where each works and where another would be a better alternative.

If there are others you wish I'd cover, do consider emailing me or reaching out on my twitter handle @barelyreaper.

UAT

UAT (User Acceptance Testing) is one of the first methods I learned about, through the first startup I worked for, and it worked fine, except that people took it very seriously and would fix and deploy things in such a hurry that it would normally take a few rounds of deploys to see them finally rest. While that's okay, I guess a bit of unit testing would've reduced it, but it was a very small startup, the deadlines were hard, and I need to find better excuses.

Anyway, the point of UAT is to make sure the users actually understand the app and that it makes sense for their business logic (in B2B) or is intuitive enough for users to browse through easily (in B2C).

Pros

  • The feedback is almost always immediate
  • Reduces the risk of having defects in production
  • Users already understand the system before actually using it for the intended use

Cons

These are more like things people end up doing and barely an issue with the evaluation method itself.

  • The developers sometimes go crazy on trying to fix the bugs.

Solution: Calm down humans! It's just an evaluation phase, the point of it is to break!

  • The environments might not match production; this is something the developers should make sure they set up at the start. If it's a project that changed hands over time then this might be unavoidable, but still try to maintain a similar environment to avoid issues during the move to production.

Solution: Docker, K8s, They exist for a reason, use them!

Dogfooding

Saw this one coming, didn't you?

This is something I picked up a while back without knowing what it was called. Readers know that I build tools very specific to my requirements, and then 90% of the time I'm the one using them; this is the basic principle of dogfooding.

The builders of the product/tool/app use the app internally before the publish/go live.

This is something basecamp has been doing from the start, and the evaluation method works, but it requires good version management to go with it.

Version management discipline will make sure you have checkpoints throughout the codebase to identify what's still under evaluation and what is stable enough to move forward with.

If you are using semver, a good way is to handle it with pre-release tags, which look a little something like:

vX.X.X-<pre-id>.X

eg: v0.0.1-alpha.2

which translates to "this is the alpha.2 version before the 0.0.1 release and not the 0.0.2 release".

This gives you a clear idea that everything in alpha is being evaluated and everything with a stable, non-alpha tag is used in the stable releases.

This also means that you don't have to hurry to fix something; go through the alpha releases slowly to make sure the defects are at a minimum in the stable releases.
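To make the tag structure concrete, here's a small hand-rolled parser for the vX.X.X-&lt;pre-id&gt;.X shape used above (a deliberate simplification; the real semver grammar allows richer pre-release identifiers, and the `semver` npm package handles all of it):

```javascript
// Parse the simplified `vX.X.X-<pre-id>.X` shape, e.g. "v0.0.1-alpha.2".
// Simplified on purpose: real semver allows richer pre-release identifiers.
function parseTag(version) {
  const match = version.match(/^v?(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z-]+)\.(\d+))?$/)
  if (!match) throw new Error(`Not a valid tag: ${version}`)
  const [, major, minor, patch, preId, preNum] = match
  return {
    release: `${major}.${minor}.${patch}`,
    // null prerelease means this is a stable tag
    prerelease: preId ? { id: preId, num: Number(preNum) } : null,
  }
}

console.log(parseTag('v0.0.1-alpha.2'))
// → { release: '0.0.1', prerelease: { id: 'alpha', num: 2 } }
console.log(parseTag('v0.0.1').prerelease) // → null
```

So "v0.0.1-alpha.2" is the alpha.2 build leading up to the 0.0.1 release, and anything with a null prerelease is a stable tag.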

Bugs are inevitable; there's always a corner case, there's always a library that decided to change something, there's always a new requirement. Don't rush to fix bugs and never fix them with the first solution that comes to mind; go through the problem and check whether it's a problem in the implementation you are looking at, or whether something else is the root of the issue.

All code is buggy. It stands to reason, therefore, that the more code you have to write the buggier your apps will be.

- Rich Harris (Creator of Svelte)

Pros

  • Easier to find bugs as the users are using it with the actual intention
  • Lower cost of handling since it can be done on a single environment and doesn't need separate dev/uat/production environments, just dev/production is fine.

Cons

  • Doesn't work for fast paced companies that are on harder deadlines
  • Screwing up the version management can screw up the entire concept, needs discipline to follow through.

Automations

A lot of people depend on various automations for UI testing and API testing. I've talked about this before in a post about testing, where I explain how I do it, and in terms of whether I like this or not, here's a single-line answer.

Doesn't work when requirements change constantly, you are better off manually testing this instead.

That statement aside, you should still make it a habit to write tests for your APIs if you have the luxury of an open deadline. If on a hard deadline, you can spend that time writing the feature to be as robust as possible.

You can read about how I handle testing here - Tests vs No Tests

]]>
Mon, 29 Jun 2021 00:00:00 +0000
https://reaper.is/writing/product-development-and-a-developers-role.html Product Development and a Developer's Role - Part 1

As developers and CS students, we are normally taught about the waterfall method in college, and we just naturally grab the "Development => Maintenance" phase as the part where the developer is going to be active, and we leave it at that. We take that to be the ownership part of the project, which is the first big mistake. Anyway, we'll get to the whole thing in a min.

Talking about product development can lead to more than one or two posts, so I'm going to let the response to this one decide whether I need to write more about the topic.

If you've observed, I never really mention my workplace projects in my portfolio; it's mostly the tools and apps I've built over time, which are all available on my GitHub. The reason I do this is that my role during product development in most of the places I've worked was limited to the development phase. I normally had no say in what the designer would do or what the client wanted, or even the ability to push back on the client if a requirement they mentioned overlapped existing requirements, thus delaying the project more.

Not to blame the management of any of these companies; it was my own thinking back then that it wasn't part of my job to do that. My job was to inform the managers that so and so was happening and then they'd do the above, but sadly that didn't always happen.

Anyway, to the product development knowledge and the reason you need to know about this is so you as a developer can rethink where and how you are supposed to be there in the project to handle the respective problems.

These are basically the stages I take into consideration before I start building anything (might come as a surprise, but no, I don't just jump into writing code without going through these).

  • Requirement Notes - Writing down the overall scope of the project
  • Requirement Filtering - Figure out what part of the core that needs to be built first
  • Team Selection - I work alone so.... this is the shortest phase
  • UX and UI - Pick up the filtered requirements and build the base UX around them; if it's a prototype then I avoid going through the UI phase and just use existing designed components
  • Decide the stack and arch - decide the tech that'd go with the requirement, this one takes a bit of to and fro
  • Code - the favorite part of the whole thing
  • Evaluate - this is either easy or irritating based on how I'm evaluating
  • Repeat

Requirement Notes

This isn't publicly talked about because this is mostly scraps of paper that I kept writing on; then I switched to the iPad and wrote on that instead, and recently I've moved to using the Notes section I added to Taco for the same. Basically built that for this.

The phase involves making note of everything you currently have in mind for the project's overall scope. This can be all fancy or all basic in terms of features, but the point is to have it written down so you know what the app is going to be about; otherwise it's just an idea you jumped to build and then forgot what all you wanted to build, and that backfires pretty quickly. I seem to remember things I don't want to, but then important things like these just slip past when I need them, so just note it down!

Developer's Involvement: Helps clarify what is technically feasible, whether the requirement is even valid to go through with, and whether the deadline proposed by the client is even achievable

This can then be reduced to a set of core requirements, which is what the next phase is about.

Requirement Filtering

Here you throw the idea down the drain, stop thinking of it as "my baby product", take a logical approach and ask yourself a few questions.

  • What parts of these requirements can be taken care of by manual work?
  • Are there existing tools that I can use to handle certain things?
  • How important is this requirement right now?
  • Do I have enough money to spend on so and so?

Based on the answers you will normally have a good idea as to what can be kept as core functionality and what can be added later to make it easier for the targeted user.

Example

I used linear as a task manager while I was building the base of taco, which is basically another issue tracking / task tracking app. I had a few less detailed tasks written on the iPad. There were even plans to integrate with existing platforms and the ability to import data from around every other app. I had a lot of fancy stuff I wanted to add, but obviously the idea of scaling could go all the way to the moon, and that's not always needed. The problem Taco wanted to solve was to have all the basic tools in place to handle tasks and collaboration between teams, not to be the most feature rich project manager out there.

I cut down the requirements to the absolute basics: the tasks were built first, the projects section was built next, and the teams part is under work right now.

Developer's Involvement: This part can mostly be avoided since it's mostly the business perspective that matters here, but the developer can still help clarify things or even provide solutions based on past experiences

Next, you set up priorities based on your evaluation method and then get to deciding who's going to do what.

Team Selection

If this was being done in a company then you'd have to figure out who's available and what's to be handed to whom, but since I work alone on these projects, the team selection is pretty simple.

  • Architecture - Reaper
  • Design - Reaper
  • Theming - Reaper
  • Backend - Reaper
  • Frontend - Reaper
  • Sleeping - Reaper

Developer's Involvement: Helps give a 'base idea' of the timeline and how much extra work might be needed

You get the idea.

Though the Architecture phase is a bit time-consuming, since that base decides how the project will flow and scale, and one setup doesn't always work for every project no matter how similar the projects are; we'll get to that.

This step is where I decide how much time it'd take me to build the whole thing, considering the arch would take about a week to solidify, and design would involve about 2 weeks or so for the core features, since it's just the tasks part that needs to be built first. That gets all the base design components built by then, so I can reuse them in other places.

You have deadlines for your own projects? How do you think Taco's first alpha was built in 6 days!? It has a testing mode, profile management, project relations, task handling, and animations for the toggles and the project deadline pulse, with components accompanying each and APIs handling them.

That's in 6 days with about 4-6 hours of work each day. That's not the fastest in the world, nor am I boasting. I'm just saying the deadline is responsible for keeping me on track for the project; my productivity can go down significantly based on the following:

  • No music playing while working
  • Constantly getting pulled into something else
  • No deadline to raise my anxiety

There's definitely more but that's all you need to know for now.

UX and UI

This phase either lasts forever or is done within a week, based on what I'm doing. If I'm building the core, then I have a set number of pages that I need to design the UX for, and based on that, the components that need to be designed.

Then I just reuse these components as much as possible, because minimalism.

Back to using Taco as an example.

The tasks page has

  • Banners

  • List Elements

  • Side bar Navigation

  • Accordions (the expanded task view)

  • Status Menu

  • Search Input

  • Buttons

  • Navigation Menu

  • Page Headers

  • Task Type Headers

Now let's group them in terms of things I can make common styles out of:

  • Menus [Navigation, Status]
  • Buttons
  • Headers [Page and Task]
  • Banners [List Elements, Banners, Accordions]
  • Inputs [Search]
  • Sidebar Navigation

So my menu component has to be dynamic in terms of its trigger, but the menu style is going to be the same; the buttons are, well, going to be generic; headers will have a font size and weight based on where they're placed; the banner style can be used for the list items and the actual banners, and can even scale to alerts by changing the background colour; inputs and the sidebar navigation structure can be reused in other pages, like the settings page.

Developer's Involvement: Feasibility of which components can be built in the given base timeline and what needs to be added as extra based on the complexity of the designs; this can increase or decrease the timeline significantly.

So now I have elements that cover 80% of the app's UX; the remaining 20% includes cards, tables, and graphs that will be part of the projects section and dashboard. This would've taken 2-3 days to sketch and finalise and then 1-2 days to implement individually.

The Stack

Though I mentioned implementing the components before I decided the stack, the reason is that the stack is actually decided after a prototype phase, which I've explained quite a bit in previous posts.

The prototype gives you an idea of what needs to exist and what doesn't and where it would create an issue, in my case I've built enough todo apps to know what's going to break where and TillWhen gave me the remaining needed knowledge for the scaling issues.

So the stack was already in my head thus implementing the components was going to be in React and that's what I did. The stack phase is inclusive of the setup that needs to be done for the same.

This includes the codebase setup, the configurations , CI/CD for the same and environments that you'd be deploying the project on.

In my case

  • Alpha Instance
  • Live Instance

The codebase setup can be a time-consuming process if you're setting it up for the first time, but if you've done it before you can manage, or create templates on GitHub to reuse; I'm not sure how many people know about this, so I'm just mentioning it here. I have quite a few templates up. I picked up my monorepo template, made a few modifications to the folder structure, added the needed dependencies, and then got onto the configuration phase.

A big misconception developers have is that there should be different branches maintaining different configurations: master maintains the production configuration and dev maintains the development environment config. NO!

The codebase and configuration are to remain the same; only the values of the configuration change based on where you're deploying. Remote configs can help you with this, but you're better off understanding how to implement it yourself. The branch approach adds the extra overhead of making sure you have the right config before you deploy, and if you're using CI/CD for everything, that's a disaster when your unchecked configuration deploys to prod. Don't add that headache to your work.

I can be blamed for telling people to do that, but in my defence, the point of maintaining different branches is to make sure that the code on those branches matches what you want to deploy.

Example

master has the code that's been tested and can be sent through to prod

dev is the codebase you add untested merges to, and what deploys to the staging environment

This doesn't mean you add differing hard-coded configurations on each branch; one wrong git rebase and boom, your configurations need to be set up again.

Make it easier for yourself to handle such cases before you even start coding the project.
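To make that concrete, here's a minimal sketch of the environment-driven config I mean, in Node. The variable names here are purely illustrative, not from any real project:

```javascript
// config.js - a minimal sketch of environment-driven configuration.
// The file is identical on every branch; only the environment differs.
const config = {
  env: process.env.NODE_ENV || 'development',
  apiUrl: process.env.API_URL || 'http://localhost:3000',
  dbUrl: process.env.DATABASE_URL || 'mysql://localhost/app_dev',
}

module.exports = config
```

Each deployment (alpha, live) injects its own values, so the codebase itself never differs between branches.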

The decision on the actual stack can vary based on requirements, so that could be a long post in itself; I think I'll write about it sometime. A quick list of what needs to be checked:

  • How well does the community support the languages or frameworks you chose?
  • If you're going to use existing libraries or codebases, are they still being supported by the maintainers, or do you need to fork them to make fixes? (This decides the timelines.)

There's no Developer's Involvement section here since the developer is going to be doing this part. Though, as a developer, make sure you add the project setup time to the timeline; people don't realise it's a good amount of work, and it's your responsibility to make sure you don't forget about it.

That's all for this post; the next one should be out in the next few days.

Adios.

Mon, 21 Jun 2021 00:00:00 +0000
https://reaper.is/writing/public-temporary-file-storage-api.html Why I built a temporary file storage and left the API Public?

Now, this isn't a very complex project and anyone who's worked on basic web servers can build this.

But Why did I build it?

I had this project where I needed to upload images, and Firebase's storage was my first choice. I kept using it, but then the storage quota asked me to go find a better solution: the moment you upload 15-20 images, you've used up your storage quota, and now I'm stuck with upload code that won't work because I can no longer upload images to Firebase.

My next alternative was Imgur's API for uploading private images, but that limits me to a certain number of uploads per minute.

Then I started looking for a service that'd allow me to upload images for the sake of testing and delete them after a certain threshold time. Sadly, I didn't find any, or I just sucked at searching. Either way, my last search result was file.io, which was kind of what I wanted, but it would delete the file the moment I requested it for the first time.

So I ended up building one. For the people who'd like to know what's being used: it's just a combination of Next.js for the demo page and Node.js and MySQL for the back-end. All of this is hosted on Heroku's free tier for now; since I'm the only one using it, both Heroku and the DB would stop working if someone were to overload the API, so that's that. I know it's not a good solution, but it works since I'm the only one using it.

But without any authentication, people are going to use up all your storage!

I understand that, but as of now I'm the only person who knows about its existence, so that's that. If people do start using it, let me know in the comments so I can add a full-fledged API-key-based access system and move it to a DigitalOcean server to increase the storage and each file's time to deletion.

Files uploaded from the demo site have a 30 second life and will be deleted after that.

Here's a link to the demo

Mon, 22 Apr 2020 00:00:00 +0000
https://reaper.is/writing/remember.html remember
  1. Do one thing at a time
  2. Learn to listen and learn to ask questions
  3. It's okay to build things for yourself and not have any users
  4. Build for learning over earning
  5. Help people as much as you can
  6. Learn to obsess over quality
  7. Build at your own pace and be consistent about it
Mon, 10 Sep 2024 00:00:00 +0000
https://reaper.is/writing/should-you-use-hasura.html Should you use Hasura?

For those who don't know, I work with Fountane, and we are a creative studio that helps businesses gain a digital presence, or even a technical one.

This normally involves either auditing their existing websites for better SEO and user experience, or building web/mobile apps for them from the ground up.

This obviously requires a bit of work, but a lot of clients are in a hurry to get into the market for reasons we might not understand. A lot of the projects that come to us have really tight deadlines, and this is where Hasura came into the picture.

From Scratch REST APIs

We used to build APIs like any other firm: optimized, REST-standard APIs. While this worked, it was slow, since a lot of the code couldn't be reused (other than maybe the auth part), and writing CRUD for every model again and again often pushed the deadline a bit, which was sub-optimal for a fast-growing studio.

Loopback

A lot of developers who have had similar problems moved to Nest, Loopback, or other frameworks that try to replicate the Angular architecture, where everything is pluggable into the original base and not very hard to do.

I used and liked Loopback 3 for a project, and it worked great. Generating CRUD was no longer needed, and I could use the Angular services generated by Loopback to automate SDK creation for API calls. But this went south when Loopback 3 announced EOL; Loopback 4 was a drastic change with more of a learning curve than I'd like the team to go through.

I could've just forked Loopback and maintained a personal copy, but Loopback is not just the core repo; it's the connectors, the additional pluggable modules, everything. That would be a project in itself.

The requirements and Hasura

Well, my buddy Gautam, aka https://backend.engineer, liked the Loopback arch and wanted GraphQL for the newer apps. We ended up deciding on Hasura, trying it out for a project or two to see the mileage of the framework, and Pandey was given the work of architecting a monolith (webhook server and Hasura, no web app here) for this.

A week or so later, this was done and ready to be transferred to projects that were still in the arch phase

At this point, we have 4 projects that use Hasura, and what you're going to read next is my review of whether you should use Hasura or not, with the occasional knowledge transfer as to where it makes sense and where it doesn't.

The good parts

These are simple and pretty much why people pick it up.

  1. Easy CRUD for all generated models
  2. A Web UI to handle most of the work
  3. Handles permissions, roles, and migrations for you
  4. Comes with the ability to construct actions / queries / mutations for you

You can literally read about these on their website, since that's what they market and also the reason we picked it up in the first place.

The pain points, or things I didn't like

This list is purely based on my experience and things I wish could be improved; obviously, I have alternatives I'd prefer instead.

1. The Migration and Metadata System

While most of it is automated, looks graceful, and is atomic if done well, it's not very intuitive and goes against the norms of migrations you'd follow with something like Knex. But then that's the case with every framework; Loopback had its own way of handling table updates and migrations too, so I can be a little lighter on this.

The actual pain point, though, is that migrations get overwritten very easily when 2 or more people are working on their local systems. Since their migrations can be timed the same, they can conflict, and there's no way to find out what went wrong. This in turn breaks the metadata of the entire console, and now you have missing permissions and actions. After this, you have to go through your git history, read the metadata to find out what went wrong, go through the migrations, and merge everything manually.

If the developer is good with SQL, then modifying and merging these .sql migration files shouldn't be an issue, but then I don't see why I need a framework, because it does add to the total work time and can sometimes take a lot longer than expected.

The other part is the structure of the metadata. While it's a simple JSON you can browse through, it can get super messy when the application gets a bit bigger, and it's the worst when you accidentally mess up the permissions in the metadata.

2. No Dynamic value comparison in Permissions

Yeah yeah, I can send new headers with X-Hasura-<variable-name> and then use those for the dynamic comparison, but that's a super hacky way to get it done.

It beats the point of already sending all the needed data in the payload and then sending it again in the header, because the framework decided it can't support a $payload.id in the permissions metadata. And you can't do the above without modifying your auth handlers to handle the additional dynamic variables.

This not only changes how people handle permissions but is also very limiting for a full-fledged industrial application. If you have a super complex permission check, one that requires dynamic comparisons based on the dependent outputs of other tables, it becomes very hard to maintain and unreadable; you're better off just writing it as a custom action instead of asking Hasura to handle it.
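For context, the session-variable comparison Hasura does support looks roughly like this in the permission metadata (a sketch from memory; the role, table, and column names are made up):

```json
{
  "role": "user",
  "permission": {
    "filter": {
      "user_id": { "_eq": "X-Hasura-User-Id" }
    }
  }
}
```

Comparing a column against a session header like this is fine; anything beyond it, like the $payload.id case above, is where the model runs out.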

3. Closed Box model

You can say this is because I liked Loopback a bit more, so I expect Hasura to be the same, but I understand each framework has its own set of opinions, depending on how the creators wanted to build it. I also see why they did it this way: most of it is in Haskell, and extending it would need knowledge of Haskell.

Though being able to extend a model's methods would actually align perfectly with how GraphQL works when you build one from scratch. It would also reduce the amount of permission handling they have in the Web UI; those could instead be simple middleware functions you write based on the context of the query.

That would help maintain a smaller, more maintainable architecture compared to what it is now. The Hasura arch was built with the thought of having microservices in place and gateways that handle those microservices, but when most of the app you're building requires a lot of logic and permissions, this closed box becomes counterproductive, and you end up spending more time figuring out what goes where instead of actually finishing the application.

Solution

I don't have a concrete solution to match this right now, but if CRUD and models were all I wanted to solve, it would be better to set up Prisma and express-graphql (if you still use Express), or use AdonisJS with Apollo Server, or basically anything with Apollo Server or your language-specific from-scratch GraphQL implementation.

This could be a combination of Python + Flask/FastAPI/Django + GraphQL + an ORM + a CRUD generator plugin (which you can find for most setups at this point, and it's not hard to write reusable CRUD handlers that replicate the actions for the ORM you're using).

This gives you the ability to scale on a small arch and manage things while keeping control over what it can do. I can't do that with Hasura without picking up a new language, forking it, and then changing all of this.

Your opinion on Hasura might totally differ, but to conclude:

Should you use Hasura?

It's not a bad framework at all; it just doesn't meet the requirements of our use case, and the overhead of hacking around the problems that come up isn't worth spending time on when you're on tight deadlines.

On the other hand, if you want a quick way to market with simple, trivial implementations, it fits the use case perfectly. You can reuse most of the work you did before very easily, and most of it can be set up as interconnected Docker setups, so it's pretty easy to replicate the environment no matter where you deploy it.

Though the same can be done if you create your own setup, and that's a lot easier to maintain than a framework you need to hack into to make things work. (I made grator because the migrations would fail when moving to new systems, and I don't want to keep making such tools just because the framework wasn't handling it.)

Mon, 20 Aug 2021 00:00:00 +0000
https://reaper.is/writing/shutting-down-tillwhen.html Shutting down TillWhen

3 Years...

TillWhen came into existence as an alternative to Toggl. It didn't try to compete with Toggl but just had to have the bare minimum features to be called a Time Tracker/Logger.

It was a project where I experimented with designs I thought would look good, typography that would look nice, and in some cases also built internal libraries and tools.

Overall, the project had a good run. It had quite a few active users when it started, and that number probably dropped over the years. But I got to experience development as an Indie dev, and I can't lie, it felt great.

Why shut it down?

It's mostly because of my inability to focus on the project, and the future reaper might not be able to support the project both mentally and financially.

You see, I plan to move into freelancing instead of a day job, and that's going to lead to a big change in my lifestyle since I can't expect the same income as soon as I start doing it.

TillWhen is cheap to host. It costs $140 per year, taking into account the Transactional Email Service and hosting costs. I've avoided all other costs by using my personal domain names, avoiding any file upload-based features, and overall keeping the app very small to be able to run on a simple $10 Digital Ocean instance. There were no ads because I don't like them, so I'm not interested in users seeing them either.

Overall, the decision is based on where I'll be in the next few months. I would prefer that users transition to other platforms slowly instead of me abruptly shutting it down one day (I shut down Taco in a day, and that wasn't nice).

The other issue is that TillWhen can't go through the dogfooding process, so issues are slow to get resolved and often overlooked since I don't even know they exist. I'm not a fan of time tracking, so I never use it, and the feedback loop is the only way to improve it. As mentioned, that's very slow for a product that doesn't really provide much value other than being simpler.

The last issue is my failure to maintain it properly. I can also blame it on the tech since upgrading that tech is another hassle right now, but it still is my fault for not writing it well enough to be able to refactor it properly. The codebase is a mess with all the experiments, my dumbness, and things I didn't understand about the stack when I started working with it. Now that I do understand them, the entire application would be better off rewritten than spending time refactoring it. I've tried to do it a couple of times, but then every other project that I use and provides me value ends up taking precedence. You can say I'm being selfish and not thinking about users, and you may be right, but they also deserve to use a product that's at least going to be maintained.

Does this affect other services like Goblin and CRI?

Not exactly. Goblin is something I use on a weekly basis for setting up my own tooling and other Go binaries that I often use. It's something I can maintain, so the chances of it shutting down anytime soon are quite low! And CRI is a self-maintained app and doesn't need me to mess with it unless the auto-population starts breaking, which is simple to fix. It's also small enough to be able to run on Vercel for free, so there's not that much work for me in it. So no, this doesn't affect both those services.

Why not make it a paid app?

I'm not sure how to make good productivity tools. I'm not a super organized or productive person, and these tools just cause friction in my workflow. So it's hard for me to keep building on a tool that I have no interest in and that will further impact the users. It's not just about the money, or I would've done that.

Hopefully, some of it makes sense. But well, it was a nice learning experience, helped me grow as a developer, and specifically as an indie developer.

Mon, 01 Jul 2023 00:00:00 +0000
https://reaper.is/writing/sleep-gave-up.html Sleep Update: I gave up

A few weeks back I made a post about not being able to sleep at all, and guess what?

It's gotten worse.

I now sleep at around 8PM-10PM, and my body kicks me back up by 12AM-2AM. After that, I can't do anything that'll shut me down.

Based on research, a normal human gets sleepy around 2AM and 2PM. My body, on the other hand, starts getting drowsy at 6AM (can't sleep then, because I'd never wake up for work on time) and 6PM. I somehow pull myself through to 8 or 10 using coffee to complete work and stuff.

Overall point, my internal clock is totally botched.

Looks like I'll have to research more and find ways to get the clock back to normal.

Sleep at 6 PM then!

Based on my sleep cycles, I can only sleep for 3 cycles. Each cycle is 90 minutes, making that about 4.5 hours. So if I sleep at 6, I'll be up by 10:30 and then won't be able to sleep at all.

Bad Idea!

The title says you gave up.

About that: I've kind of started accepting the shift. I start working on personal projects at 2AM-2.30AM and then give the rest of the day to office work, which generally runs from around 4AM to 9AM. I plan to follow this till I figure out a reset.

Till then, I'm a proper reaper (at least in terms of not sleeping)

Also, if you've got suggestions on how to deal with this without any kind of medication/drugs, let me know.

Adios.

Mon, 07 Sep 2020 00:00:00 +0000
https://reaper.is/writing/status-vercel-and-how-did-it.html Status, Vercel and How I built it.

By Status, I mean this website: barelyhuman/status

The reason I built it was simple: I needed a simple setup to see if my web services were all up and running. The list is quite small because those are the current ones I'd like to keep track of.

Now, this was built in a quick 60-80 minutes, and I'm going to explain the things I did and why I did them.

Stack

  • Go (templates and HTTP server)
  • Vercel

That's basically it. No seriously, that's about it.

Mental Model

The idea was to have a simple Go binary that would render an HTML page with each website that needs to be checked and its particular status. This is basically why the website takes a bit longer to load after the browser cache has expired.

The Go binary is the microservice that Vercel uses as an entry point. On a very basic scale, since I won't be using code to show how this is done, we'll go through the list of things it does.

  1. Get the list of sites to be queried
  2. Ping each site to check if it's up; in certain cases, ping the backend API of the web service to check if the API is up
  3. Check for the status to be 200; if not, the site/API is probably down
  4. Render the html with the above data
  5. Upload this as a microservice endpoint to Vercel
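The steps above can be sketched in JavaScript like this (the real service is a Go binary; the site list and function names here are made up for illustration):

```javascript
// A hedged sketch of the pinger flow; the real thing is written in Go.
const sites = ['https://example.com', 'https://example.org'] // step 1: illustrative site list

// Step 3: anything other than a 200 is treated as down.
function statusFor(code) {
  return code === 200 ? 'up' : 'down'
}

// Step 2: ping every site; a network error also counts as down.
async function pingAll(fetchFn) {
  return Promise.all(
    sites.map(async url => {
      try {
        const res = await fetchFn(url)
        return { url, status: statusFor(res.status) }
      } catch {
        return { url, status: 'down' }
      }
    })
  )
}

// Step 4 would feed these results into the HTML template,
// and step 5 is just deploying the handler to Vercel.
```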

Problems

  1. Vercel doesn't allow a root endpoint without configuration, so the endpoint for the microservice would be /api/pinger and the final URL would be https://status.barelyhuman.dev/api/pinger. While not a big deal, that's a few more keystrokes than just https://status.barelyhuman.dev, and that's one issue.

  2. The HTML template files are additional assets, and those don't always work in Vercel's API functions; that iffy deployment would be a deal breaker for a service that's trying to show the current status of other services.

  3. Vercel has a 10-second time limit on each request, after the initial execution request, so about 12-13 seconds in all. If I ping a website on an Indian server from a US server, the services hosted in EU/IN add about 2 seconds, making a website show as Timed Out if it accidentally crosses the 12-second mark.

  4. Avoid showing people what the URLs of each of these APIs are. While you can dig a bit and find out, I'd prefer to have at least a layer of protection on top.

Solutions and Hacks

  1. The first one was simple: I rewrite the / URL to /api/pinger. I checked if Vercel supported that, and boom, done.
{
  "rewrites": [{ "source": "/", "destination": "/api/pinger" }]
}
  2. The templates are just strings at the end of the day, and Go builds everything into a single binary, so if I just write them as strings assigned to variables, I can achieve the same result. I tried that and it worked as expected. I've explained it in this discussion thread.

  3. Pinging each site in a linear fashion would be a bad idea, but that was the initial prototype; I just wanted to test that everything worked. I checked that it did and then went ahead and deployed the prototype. Right after, I realised these checks would fail when the request came from the US server to the Indian-hosted backend, making it slower, and that's exactly what happened: it pinged barelyhuman's blog properly but failed to get the status for the Taco backend and crashed. A reload showed both results, but I don't want it to crash for something so trivial. So I set up a goroutine to parallelize fetching each service's status, and now the site handles 4 without any issue. Not a big number, but since it's parallel, it theoretically shouldn't create problems for more either. I could optimize this further by limiting each response to under 4 seconds, but then a busy server might not respond at once, so I'm still deciding whether I should do it.

  4. The HTML render and the website links are all fetched from environment variables, so they're a secret on the Vercel portal for now; as long as no one hacks into Vercel, I'm a little safe. Obviously, you can look through each app, but those are rate-limited proxies (not boasting, there are ways around each), so for now that's a simple barrier to keep the URLs safe.
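On the 4-second cap mentioned above, the idea would look something like this; a sketch, not code from the project:

```javascript
// Race a status check against a timer; a slow server just reports the
// fallback value instead of holding up the whole render.
function withTimeout(promise, ms, fallback = 'timed-out') {
  let timer
  const timeout = new Promise(resolve => {
    timer = setTimeout(() => resolve(fallback), ms)
  })
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer))
}
```

The trade-off stays the same as described: a busy but otherwise healthy server would show as timed out.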

Mon, 03 Jul 2021 00:00:00 +0000
https://reaper.is/writing/testing-tools-method-mentality.html Testing - Tools, Methods, Mentality

If you're in a dilemma about whether to write tests or not, I covered a bit of that in a previous post.

As for what this post is going to cover, here's the overview

  • The tools (the ones I normally use)
  • The methods (what method of testing will you need for a certain scenario)
  • How you think in tests

No other fluff, to the topic!

Tools

Almost every testing tool out there, at least for NodeJS, is mostly agnostic in terms of what you can use it for: web, mobile (react-native, ionic, etc.), or even desktop (electron). The setup for each varies, and each setup could be its own post; I might just write them someday. The tools I normally end up picking are the following:

  • uvu - A test runner by Luke Edwards which supports most standard setups today; it's the fastest and, most of the time, the first one I pick
  • avajs - Another one, but this is what I switch to when working with React-based projects, since ava supports 2 separate babel configs by just adding them to the package.json file. While this is possible in uvu, it requires a separate programmatic setup.js file, which I avoid since ava makes it a little simpler
  • jest - When everything else fails to cover the use case, or if you don't want to set up test runners at all. Jest normally comes with the create-react-app setup, so this is what I use when working with full-fledged React apps, instead of the above two

The overview

  • uvu, ava -> Small Libraries, cli apps, tooling , anything other than a react web app
  • jest -> Full scale react apps

There's obviously a whole bunch of other test runners out there; the above 3 are my choices because the test runner and assertions are handled by the same tool. If you want to use other assertion libraries, you're free to do so with uvu and ava; I've not tried that with jest, so I wouldn't know.

Overall, I like uvu's assertion API so I end up using that in most places.

Now, the other kind of testing setup I've used in the past is Mocha and Chai as a combination: one is a testing framework, the other an assertion library. This combination is battle-tested and part of really huge projects, if you'd like to take that stability into account. On the other hand, while uvu and ava are stable, their source code is easier for me to go through, and thus to fix, if I need to at any given point.

Methods

Getting to how and when you'd use the tools, no matter which one you choose: I'm not going to go through every method there is, only the ones I've worked with, because I'm limited by my knowledge.

  • Functional/Unit
  • Snapshots
  • Event Flows

Each can be used for a specific type of project, or you can mix and match; they're just methods, so when you use them is up to your own instinct. As for when I use each, you'll get to know that anyway.

Functional / Unit Testing

The most common use case is writing tests for singular functions or base input => output flows, where the same input gives you the same output. This can include anything from library functions to HTTP requests to the goddamn space.

You'd add these to stress-test your functions with null values, invalid values, random values, and obviously the happy (valid) values, to make sure you get the correct output for each.

You can find examples of this in the tocolor repo's tests; here's one from it:

test('hex to rgb', () => {
  Object.keys(colors).forEach(color => {
    const convertedColor = hexToRGB(colors[color])
    const ref = MATCH_MAPPERS[color].rgb
    assert.equal(convertedColor.r, ref.r)
    assert.equal(convertedColor.g, ref.g)
    assert.equal(convertedColor.b, ref.b)
  })
})

Here, test defines or describes what's going to be tested, and assert.equal checks whether the value generated by the helper equals the expected value. The test fails if they don't match, letting me know that hex-to-RGB conversion is broken.

Snapshots

While working with the web and webviews, snapshots start to become common. A snapshot is a simple JSON structure in some test libraries and the entire view DOM in others.

The point of snapshots is that you're comparing 2 structures that should be exactly the same; if they differ, you need to either fix the code or update the snapshot. An example would be from sindresorhus/react-extras, as I don't have any public repos with snapshot testing:

const snapshotJSX = (t, jsx) => t.snapshot(render.create(jsx).toJSON())

test('<If>', t => {
  snapshotJSX(
    t,
    <If condition={true}>
      <button>🦄</button>
    </If>
  )
  snapshotJSX(
    t,
    <If condition={false}>
      <button>🦄</button>
    </If>
  )

  let evaluated = false
  snapshotJSX(
    t,
    <If
      condition={false}
      render={() => <button>{(evaluated = true)}</button>}
    />
  )
  t.false(evaluated)
})

react-extras provides a simple If utility component which takes in a condition and renders based on it. The above is basically a test for that: it takes two predefined values and makes a snapshot out of each, or to put it simply, a JSON structure of the rendered DOM.

If the utility is working correctly, the first If renders the 🦄 as expected and the second renders nothing, which creates a snapshot where only one button exists. If the utility fails and both are rendered, the test runner will give you an error stating that the snapshots have changed. Based on that you either fix your utility, or, if you've intentionally made a change that adds a new rendered element, you update the snapshots by forcing the runner to accept the new output as correct and update its stored data.

A few of these tools create a markdown file of the snapshots as well, so you can inspect them visually; you can find one for the above code snippet here.

Event Flows

This isn't specific to the web; it applies to anything that depends on async or event based code. There's not much to change here compared to Functional / Unit Testing, except that instead of checking values instantly, you wait for the events to occur (which should be obvious).

On the web you can use the DOM handler utilities that the frameworks provide; Angular has these for checking whether something has rendered, or whether a certain reactive variable has changed.

React comes with its own set of utilities, specifically the act function; you can read more about it in the docs. To simplify things I use an abstraction library on top of this called testing-library/react, by Kent C. Dodds, which adds a lot of flow based utility functions on top of the existing React testing helpers.

The same team provides similar abstractions and flow helpers for other frameworks like Svelte and Vue as well, so the concepts below can be applied to those too.

Let's take barelyhuman/react-async as an example for this

test('<AsyncView data={fetchPostSuccess} />', async t => {
  const networkData = await fetchPostSuccess()
  const { getByText } = render(
    <AsyncView data={fetchPostSuccess}>
      {({ data, loading }) => {
        if (loading) {
          return <p>loading</p>
        }
        return <p>{data.title}</p>
      }}
    </AsyncView>
  )
  await waitFor(() => {
    t.truthy(getByText(networkData.title))
  })
})

The above test exercises the success case of an API call through the AsyncView component provided by the library. The flow is as follows:

  1. Fetch data with a fetch request
  2. Use the same fetcher in AsyncView
  3. Render the above AsyncView to the DOM
  4. Wait for the fetched title to appear on the DOM

The fetcher basically pulls data from a mock server, and the test compares it to what AsyncView renders once the data has arrived. If they're the same, AsyncView is working as intended.

You could, however, use snapshot testing for this. It would suffice as a valid test case here; the only reason I didn't is to avoid storing the extra markdown and snapshot files.

The other case is triggering events on the DOM, which is also common. You could have similar use cases when working with a backend, where you'd want to trigger a redis message and handle it from the test case; there you'd use the redis listener and dispatcher in the same test case and wait for each to complete like any other async function.

For the web, however, I do have an example you can refer to:

test('useAsync | refetch', async t => {
  const { getByText, queryByText } = render(<RefetchView index={0} />)
  await waitFor(() => {
    t.truthy(getByText('hello'))
  })
  const buttonEl = queryByText('hello')
  buttonEl.click()
  await waitFor(() => {
    t.truthy(getByText('world'))
  })
})

The above is for the useAsync utility from the same library, and to test the hook we create another component that renders the data from the hook. We're using static data here, but since we need to check whether refetch gets new data based on existing params, the rendering is done a little differently. This is mostly how you'll be testing hooks, since they need the React context to work. For any other function you could just go with unit testing, but because hooks depend on the React context for reactivity and handling changes, you're limited to actually rendering them inside a component.

As for the test itself, the flow is as follows

  • Render the original component
  • Wait for the original rendered data after loading has completed.
  • Grab the button using utilities
  • Fire a click event on the button to trigger a refetch call
  • Wait for the new data to be rendered on the DOM

All of these are open source, so you can check them out in the tests folder of each repo.

Mentality

Writing tests isn't hard, but figuring out how a certain flow can be tested can get to you pretty quickly, so it's easier to do TDD. But then TDD requires you to write the tests first, and that does take time.

The general mistake is testing the entire flow in a single test, which backfires pretty quickly and is very hard to maintain anyway.

While writing tests, there's a small checklist I'd like you to consider

  • Test case is isolated
  • Cleanup/Resets handled
  • Dependent Data is handled separately

Isolated test cases && Handling Dependent Data

As you may have observed, the above test cases handle very specific and very contained cases, which is natural when working with components. But even when you're working with a lot of helper/library functions, you keep your test cases limited to one function each. The whole point of unit testing is to pinpoint which exact unit is causing issues, not to test the entire flow.

Though I understand that testing the entire flow can be necessary, and this is where dependent-data based testing comes in. A very simple example would be working with REST APIs, GraphQL, or whatever network based data interface you use right now. The general flow of any data based transaction looks like the following:

  1. Send a request with certain params
  2. Get response with a structured data
  3. Use the structured data to visualise or manipulate again.

Point 3 can be handled via unit tests by passing in valid, invalid, and arbitrary data, which cuts out failures for those particular cases. Points 1 and 2 deal with reading a datastore, which can give dynamic results, so it's not the exact result that matters but the behaviour and the shape of the final response.

So, say I want to create a todo and list todos: my request takes in the task and its status, and when calling the listing API I get back {id, task, status} as the base parameters I'll be using. How would you test this?

  1. Write a single test case sending and receiving the todo data and validate it.
  2. Write 2 tests, one to test creation and one to test listing and validate that the required response parameters are there

Most people pick option 1, and while this works well for smaller cases like the todo app, it's a disaster for larger flows, like a checkout flow which needs you to validate the following:

  • Items
  • Order data
  • Payment Credentials
  • Payment success
  • Order Confirmation

one single test, doing all that... amazing.

On the other hand, if you go with option 2, you now have unit tests that can be replicated as much as needed for various cases, and to manage the dependent data, you don't clean the data on each test but after the final test instead.

Putting the above checkout flow in the picture:

  1. Clean DB
  2. test to add items into the cart
  3. test to list the items
  4. test to create an order
  5. ... create payment request
  6. ... validate payment request
  7. ... check payment success / failure
  8. ... confirmation of the order

Now, why would you spend time writing 7 tests instead of 1?

  • Your tests can now be modified individually without risking breaking the other tests, since they all look for data from a single source instead of whatever you defined inside each test itself.
  • Easier to remove cases when you change flows, since one doesn't depend on the other and all just depend on the source of data.

Eg: The client comes in and says it's going to be a free product, so you don't need the payment gateway anymore.

  • Make changes in the code to remove the payment gateway
  • Add test.skip, or whatever your tool specifies for skipping tests, to Points 5-7, and leave the rest as is.

I still have the test cases in case we decide to add payments again, and I didn't have to rip tests apart the way I would have if everything were in one test case.

Cleanup/Resets

This is something even I forget to do when writing tests in a hurry, and it's why I don't personally like TDD: you start hurrying because the feature is a lot more important than the test. The client won't see the tests, he's going to see the feature, so I rush through writing the tests and forget the cleanup or reset cases.

Most testing tools provide a way to handle these:

  1. beforeAll
  2. beforeEach
  3. afterAll
  4. afterEach

These are hook functions that get triggered as their names suggest. If I wrote a single checkout.test.js for the checkout flow mentioned above, I'd have a beforeAll hook that cleans the DB, and then everything would execute as described.

If, on the other hand, I have multiple checkout test cases to handle, I'd have a specific before call on the Point 1 test case to clean up the DB before starting, and the next set would do the same.

Another case is where you want to test every case against a fresh, clean DB; there you add a beforeEach hook, which executes before each test case, giving you a clean DB every time.

How these hooks are invoked depends on the framework you use, so read up on them in each framework's documentation.

That's about it for this post, hopefully it's helped someone.

Adios!

Mon, 06 Dec 2021 00:00:00 +0000
https://reaper.is/writing/the-budget-mechs.html Mechanical Keyboards - India - Mid 2020

Mechanical Keyboards

This is going to be a really short writeup on what mechanical keyboards are and then we'll get to the options available to us right now in India, mid 2020.

You can go all the way down to the buying options and skip the other info if needed.

What are we talking about again?

Mechanical keyboards are the much more robust version of your day to day keyboard. The keyboards you normally see around are membrane keyboards, which are cheaper to produce and use digital contacts to figure out which key was pressed.

On the other side we have Mechanical Feel keyboards, which are basically the same membrane keyboards but with clicky keys trying to imitate mechanical keyboards. These are also available at a decent price range; you can invest in these if you don't really want to spend much on mechanical keyboards, but we do have a few budget mechanical options at around the same price.

Finally, Mechanical Keyboards. I'm just going to go over the important components that'll get you enough knowledge to get started.

We've got 4 components

  • The box / case
  • PCB - Circuit board
  • Switches - Key Mechanism
  • Key Caps

The Case

It's the casing for the actual keyboard; there's not much to tell here other than the variety out there. You can get 3D printed cases, metal cases, plastic cases, wooden cases, a whole bunch, and each has its own cost. When going the budget route, the obvious choice is the plastic case. Don't worry, there are a few cases that are really sturdy even in plastic.

PCB

Now this is what the cable/bluetooth/wireless receiver add-on sits on, and your system connects to this to detect the passed signals. In even simpler words, this circuit board takes care of passing the digital signals the system needs to detect what's being typed.

Hot Swappable vs Soldering

There are 2 types of PCBs, or circuit boards: one that lets you directly plug in the switches, and one that needs you to solder the switches to the board at the connection points. The ones you can plug into are called hot swappable boards and the others are called solder boards.

Switches

Now for part 1 of the fancy stuff. Keyboard tech has improved over time and you've got a lot of options for the switches you'd like to use, from really clicky switches to really silent ones. Unfortunately, there's not much you can get directly in India; you're limited to a few brands, though you can import a lot of the others from various places online. The options in this post will be limited to stuff you can buy from Amazon and/or local online mechanical keyboard resellers like PCStudio and Meckeys.

Short explanation of the switches,

Clicky - Keys with a tactile bump that also make a nice loud click sound, just like a typewriter.

Tactile - Keys with a noticeable bump at the actuation point, but without the loud click.

Linear - Smooth travel all the way down with no bump; these are normally preferred for games where the action needs to be fast and consistent.

  • Blue - Tactile and Very Clicky
  • Brown - Tactile (mostly found in gaming keyboards)
  • Red - Linear
  • Red Speed - Linear but with a shorter actuation distance, so keys register with less travel

There are obviously more types of switches, you've got blacks, silvers, jades, but these are the most common ones and most prebuilt keyboards will come with one of these.

Now, the most talked-about thing in the mech keyboard community is Cherry MX switches, and while the German engineered switches are a great choice overall, Gaterons are another set that can be your first choice when spending a good amount of money on a custom mech. They might be a little harder to obtain in India, but as always we've got alternatives.

You can find all of these on various international markets online, but since it's a little hard to obtain them right now without paying huge amounts in customs, we'll stick to the options below.

Namely

  • Outemu
  • Kailhs

Now each of them has the same colors available, and each has its own level of tactility and actuation force, but they try to replicate the original color pattern I mentioned above, so you should be fine. These are available at different rates online and you can obtain most of them from Meckeys and Amazon. These are all good switches; anything else is probably a clone (ironically, Outemu is actually a clone itself, I'll let you do the research on that).

Keycaps

And part 2 of the fancy stuff, the keycaps. The switches are basically your medium to connect the keycaps to the PCB, and once they're in place all you have left is adding the keycaps. Keycaps have a wide range and are just a search away on Amazon and Meckeys.

There are so many out there that I didn't even want to add them in the options below, but I guess I'll just add the ones I like and you can go ahead and get others if you want to.

Yes, you can buy a cheap mechanical keyboard and replace the keycaps to make it look better than the cheap legends/font it comes with.

What Can I Buy?

These are based on the date of the post, and prices have been fluctuating on Amazon for a while now, so they may move out of these price brackets, but the point of this post is to list valid buying options.

You can also look at Keychrons, which are available at different price ranges on their website keychron.in. These keyboards mostly come with Mac support, so they're preferred by Mac users, but you can use almost any keyboard mentioned below by just switching the modifier keys in the macOS keyboard preferences.

2000 - 4000 INR

The tightest bracket but you've got a few options.

Set your Expectations

  • Plastic builds
  • Cheap keycaps (Not all of them but most of them)
  • Blue and Brown switches ( Outemu / Kailhs )
  • Static Backlighting (might be multicolored but rarely full rgb)

Links

6000 - 10000 INR

You'll find a good amount of prebuilt options in this range and can also build a custom one if you'd like to.

Set your Expectations

  • Plastic builds (but known brands)
  • Good Keycaps
  • You can get options for red
  • Most of them will have RGB but expect static lightings in certain options as well

Links

  • Ducky One 2 TKL Link
  • Ducky One 2 Full Sized Link
  • DUCKY ONE 2 MINI Link Link
    • You can find more options in terms of case and keycaps on Meckeys.com
  • Hyper X Alloy Pro TKL Link 1 Red Blue
    • Has stock at the time of posting, but about 10, so might change by the time you read this

10000+ INR

At this price you can go for most options on Meckeys.com and Amazon, but the recommended brand would be Varmilo, and if you can, build a custom one.

Custom Keyboard?

Yeah, you can build a custom one. You can find various videos online to walk you through this, but the simplest route would be the options below.

Base

GLORIOUS GMMK CUSTOMIZABLE TKL RGB - Link. Comes with the PCB and the case already, so you don't have to mess with that when starting out.

Switches

Kailh Box Switches - Link. These come in packs of 10; you'll need about 90 keys, so round up to 100 for spare switches to use later.

Keycaps

ABS BACKLIT DOUBLE SHOT KEYCAP – WHITE - Link

I like the off-white tone of these, so I chose them; you can find a lot of options over at Meckeys and Amazon.

Now this should cost you anywhere around 10k to 14k based on what you choose, and that sums up our post.

Mon, 19 Aug 2020 00:00:00 +0000
https://reaper.is/writing/the-unproductive-weekend.html Be Unproductive?

It's okay to be unproductive.

That's it, that's the post.

But seriously, why do people think you have to have something really good going on in your life? Like everything is about having some form of achievement, like there's nothing else in life to do. Have some god damn fun!

but but but, know that fun is different for different people.

But you're the one that posts your weekly achievements?

Yes, I do, because it's mostly the discipline of not missing a post; I need to maintain the habit.

I could post random stories instead of the updates I made to each project, which you can find in each project's changelog on GitHub anyway.

I could post about things that might help you write better code, if that's something people want to read, but the idea is to keep writing, and weekly updates are very easy content to post. For any other topic I'd have to craft posts in advance and maintain a list of posts to make, which adds friction and might push me off the habit of writing altogether.

Why spend your weekends coding so much?

It's a hobby of mine; every small library and project I've written was a part of me having fun. I spend time with my hobbies most weekends, and I write about coding since that's what both blogs are about. A programmer.

Sharing about me practicing the guitar, or drawing for hours, or binging on anime will make no sense on a "developer log" post.

Yeah okay, Why this post?

Well, to tell people that it's okay to sleep 2 whole days doing nothing, if that's what you wish to do. I don't do it every weekend, but if you do, that's not a bad thing. The point is to spend time with the things/people/hobbies that make you happy.

In my case it's coding, so I end up being a little productive on weekends, but to each his own. The last weekend wasn't productive though; I did basically what I typed in the first line of this section.

I did spend 60-90 mins learning zig though...

Repeating the point, have fun in your free time, might not get it back.

Mon, 29 Nov 2021 00:00:00 +0000
https://reaper.is/writing/thinking-straight.html What I do when I can't think straight.

I assume that I'm not the only one who has moments like these where it's hard to come to a concrete decision.

I can only talk as a developer, and I have these moments where I'd like to build something but I start overplanning to avoid all shortcomings, so much so that I end up with a dilemma: should I start the project, or is it a waste of time?

I remember posting a contradicting article titled "Make it, just to get it out of your head". But this scenario is a little different, because I did start making what I wanted to; the decisions I had to go through to make sure it didn't fall apart were just too much to handle.

How do I handle it ?

The solution I use isn't something I'm sure about; it works for me but might not work for you, though you can take the general idea into consideration.

Let's redefine the problem: I can't think straight about a new project I want to build, due to which I'm unable to write any code or even set up the architecture to start it off.

At this point, my general thought process goes down the negative loophole of you just aren't good enough. What I forget to add to that statement is "yet"; that one word makes a difference in thinking, but when you've already lost control, such simple things don't come to mind. It's very easy for motivational speakers to use the do this when this happens kind of talk, and I say it myself, and I realise how wrong that is.

So, I can't think straight and I can't tell my brain to chill out either. What do I do now? Do something else.

Trick your brain

Yeah, you read that right. I end up doing something I'm already good at. Obviously it's not the first thought that comes to mind, but since I've done it so many times, it's more of a habit now; I instantly switch to doing it rather than waiting for my brain to calm down.

Obviously, not everyone has been doing this, so let's go back to when I couldn't. I'd spend over 2-3 hours just getting back to normal while my brain kept spinning: what programming language should I use to build this, what framework makes more sense, this will make it a larger app, this will use more RAM, this will break, this won't work for people with slower computers, etc, etc, etc.

There's no stopping my overthinking cycle once it gets out of control, and the only way to slow it down to the point where I can think a little more logically is blasting music at the loudest volume into my eardrums, and even that only works if I'm in the mood for that genre.

Give me the solution already!

As mentioned before, this might only work for me, but the solution is singing along to music while trying to calm my brain down, and then taking something I've already built before and rebuilding it.

How can you replicate that?

Start off with an attempt to calm yourself down. This can involve music, working out, sleeping, staring at the ceiling for the next 30 mins, anything that works for you (sadly, you'll have to figure this out).

Once you're done with that, we can move to stage 2: doing something we're already good at. Draw something you've already drawn, sing a song you know, code something you've coded multiple times before. The idea is to let yourself feel that slight comfort of knowing you can build and improve on stuff you've done before.

Example

I've wanted to build a postgres client for all platforms, but I wanted to avoid using Electron, and building it in C++ would take time since I'd have to learn a lot before actually building the application. I also wanted the RAM usage and total size of the app to be on the lower end of the scale, which is never the case with Electron. I ended up going round and round on such points and ended with nothing.

What I rebuilt.

I'd been thinking about improving Orion, a music player I built years ago, but I never did anything because it worked well and didn't need any changes functionality wise. I didn't really like the design, though, and had been wanting to change it, but I kept myself busy building new stuff instead.

Finally, I trashed the previous code of Orion and rewrote it properly this time, removing all the amateur coding standards I used back then. Changed the design to a new level of minimalism and voila!

While I was at it, I wanted to test a file-tree based router I wrote, and ended up rewriting the server side of it as well, replacing Express with Routex.

Sane Thinking and Final Decision

After this was all done, I decided I'd go with my usual choice of Electron + Vue and use it for the builds. If Hyper can compete with iTerm in terms of memory management, then so can I. The size of the app, though, I'll have to figure out. I normally try to avoid external dependencies, so I might be able to get away with keeping the total app size in the range of 60-100MB.

TLDR;

  1. Reduce the Overthinking Cycle with whatever works for you

  2. Do something you're already good at; it can be anything: gaming, singing, dancing, drawing.

  3. Try making a decision at this point; if you fall back into the overthinking loop, start again from step 1.

Mon, 22 Jun 2020 00:00:00 +0000
https://reaper.is/writing/twilio-coversations.html Working with Twilio Coversations

Complaining about docs is a universal rant by now, but then not everyone has the time and manpower to write documentation. I don't know why Twilio skipped out on this, but the only way to figure out Conversations is the incomplete documentation of the basic event flow, plus the somewhat documented types in their TypeScript generated typedoc reference.

Now, I actually got through my use case pretty easily, since I'd worked with it before and knew the mistakes I'd made, so it hardly took a day or so to get the additional features and integrations done. But I got feedback on a Discord channel regarding a helper library I built for Twilio Conversations, and the feedback was:

While I understand what your library is trying to do, I don't think it's easy to look for documentation on the original twilio reference since most of the time you don't really know what is to be used for what.

There's more to the feedback, but this made sense, since the first time I worked with Programmable Chat and Conversations, I had to take reference from the example repositories then available that used these services. That's generally a secondary approach to learning something; the primary approach for most people is referencing the documentation or Stack Overflow.

Anyway, since this comment got a few more supporting comments about similar issues, we're going to walk through the base flow of what to look for when building a simple chat app, plus a few additions in case you're working with a more complex one.

Note: There's actual code sample after the theory is done for reference, if just reading doesn't help you

Nomenclature

Clearing this up will help with the rest of the explanation:

  • Conversations - to simplify, these are your chat rooms; people join and leave conversations, and that's how you control who's talking to whom. There's a lot more you can do here, but let's keep it simple for now.
  • Participants - every identity/person that joins is a participant. Messages are assigned to these participants using something called an identity, and you can look up participants by ID to perform further actions.
  • Messages - self explanatory, but these always live inside a conversation, so you can get the conversation back from a message if you want to update something on it, for example the last-read message time, or to simplify, an unread message counter feature.

These are the 3 basic resources you need for a very simple chat app, and each can be manipulated further to create something more complex once you understand its limitations.

Twilio Client and Initialization

The basic functionality requires you to get a Twilio client instance and then listen to various events on that client. There's a good amount of debate on when you should initialize the client; we'll get to the simplest options in a bit, but first the flow of working with the client:

  1. You request a server for the auth token with the needed twilio grants
  2. You use this token to initialize the client
  3. You add listeners to handle the conversation events.

That's the flow for client init. You'll generally initialize the client at the earliest possible point in your app, based on a few variables involved.

  • You do the initialization after the authentication step, generally on the page that handles your redirection. Say you enter the creds and log in: you take the person to a loading page first and then to the actual chat room, so your client is created on the loading page, just created. You also add a failsafe for when people share the chatroom link, routing them through this loading page before they see the actual chat room. Alternatively, you can create the client on the chat room page itself and show a loader on the chat screen; your call.
  • Say you're working with a more complex app, something with more features than just a login page and a chat page. In this case the earliest point is going to be the dashboard or overview page that comes up after you're done with the signup flow, and since this page loads every time the app loads, it makes sense to create the client here. Again, just create the client; you don't necessarily have to wait for its initialization/connection to complete.

Conversations and Listing them

While a lot of backend developers would just create the token and hand it over for you to add listeners for everything Twilio provides, it's generally easier for the backend to do the conversation listing for the app initialization stage.

This helps with a few things: in most cases the conversations are going to be 1 on 1, and you'd need more than just the conversation name. Your app might need the user's full details, profile pics, etc., which "surprisingly" the backend has faster access to. So here's what I'd recommend you get from the backend for the initial chat list:

  • List of conversations (obviously)
  • User details for each conversation (both Participants, will include details like, name, avatar, last message etc)
  • And make sure this is at offset 0, so basically the last 10 conversations, sorted by last message in descending order

This lets the client/frontend make one initial call to get the list of conversations to show, which is faster to render than doing something like:

  1. Wait for twilio client to connect
  2. get conversation list
  3. check the identity for each conversation.
  4. retrieve user details from the identity of each conversation.
  5. fetch avatar url using the user details

This doesn't mean the frontend won't add listeners. Once the client is connected, you're better off handling the events below on the frontend:

  • messageAdded - you add a listener for this and update the conversation's last message and unread count based on the event.
  • conversationUpdated - this event can help you with updates on the conversation, like removing a participant from a group chat when you want to show that on the list; if it's a named group chat, the same event helps there too. You take the updated data and update the state of just that one chat instead of fetching the entire list from the API, so the two work in conjunction.
  • conversationAdded - this event fires when you're added to a new conversation, so you'll want to update the list and bring it to the top. You might have to fetch other details for it, which is fine: you fetch for one conversation instead of the entire list. Still, keep the state update non-blocking so the user doesn't see a loader when it isn't needed.

And there's a ton more of these events for messages and conversations. There are also typing events if you wish to add that to the conversation list, though you'll have to modify how your chat works for this, which is next.

Conversations and sending messages

This also initially loads data from the backend while the listeners get added. The initial load involves sending the backend your participant id and getting back the last 4-5 messages to list, unless you plan to do something like Slack and keep the past N messages of each channel cached in IndexedDB (for web) or SQLite (in the mobile apps), which is also fine if you ask me. Let's say we don't want to do that for now; we can have the backend send through the last few messages, avatar data (if you aren't caching that), etc.

And then we have the listeners we'd like to add.

  • messageAdded - this one is not on the client but on the conversation instance, so you'll only be notified about messages added to this very conversation.
  • messageUpdated - if you have added an edit-message functionality, this is the event to monitor
  • messageDeleted - same as above but for delete events, self explanatory

Done with the boring parts. Based on the above, you update your state with the data needed to show the message. As for read/unread handling: you have to update the last read message time on the conversation object using the method setAllMessagesRead or updateLastReadMessageIndex; unless you call one of these, the getUnreadMessagesCount method on the conversation will always return null.

Now for handling typing indicators. That's simple and actually documented, so I could just redirect you, but since I've already written so much: you use the typing method on the conversation instance, which triggers a typing event lasting 3-4 seconds unless another typing event is triggered. So you can call it from your input's keypress handler, and as long as the person is actually typing the event keeps firing, letting you show that the person is typing.

Finally, sending the message is as simple as calling the sendMessage method on the conversation instance with the text or a media message. Do use the optional attributes parameter of sendMessage to add unique trackable values if needed.
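Put together, the read-horizon, typing, and send pieces look roughly like this. The method names on `conversation` are the SDK methods mentioned above; everything around them (`input`, the returned helpers) is a hypothetical sketch, not a fixed API:

```javascript
// Sketch combining the conversation methods discussed above.
// `conversation` is a connected Conversation instance,
// `input` is a hypothetical handle to your text input.
async function setupConversationView(conversation, input) {
  // mark everything read so getUnreadMessagesCount stops returning null
  await conversation.setAllMessagesRead()
  const unread = await conversation.getUnreadMessagesCount()

  // keep firing the typing signal while the user types;
  // repeated calls extend the indicator, so per-keypress is fine
  input.onKeyPress = () => conversation.typing()

  // send with optional attributes carrying your own trackable values
  const send = text =>
    conversation.sendMessage(text, { localId: `temp-${Date.now()}` })

  return { unread, send }
}
```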

All the reading aside, let's see it in an example. I'll be using my helper library to reduce the work here.

// app-init.js

import {
  createClient,
  onInit,
  onTokenAboutToExpire,
} from '@barelyhuman/twilio-conversations'

let client

export async function initializeTwilio() {
  if (client) return client

  const token = await fetchTwilioToken()
  client = createClient(token)

  onInit(() => {
    console.log('Twilio client connected')
  })

  onTokenAboutToExpire(ttl => {
    // not using the ttl, but it's there if you need it
    fetchTwilioToken().then(nextToken => client.updateToken(nextToken))
  })

  return client
}
// chat-list.js
import { onInit, onMessageAdded } from '@barelyhuman/twilio-conversations'
import { initializeTwilio } from '../app-init.js'

async function ChatList() {
  const conversationList = await fetchConversations() // initial list from our backend

  // render the list on screen
  render(conversationList, {
    onClick: conversation =>
      navigateTo('Chat', { conversation: conversation.sid }),
  })

  const client = await initializeTwilio()
  // if the client isn't connected, wait for it to connect
  if (client.connectionState !== 'connected') {
    onInit(() => {
      rerenderChatList()
    })
  }

  // add the needed listeners
  onMessageAdded(message => {
    const conversationSId = message.conversation.sid
    const _convToUpdate = conversationList.find(x => x.sid === conversationSId)
    _convToUpdate.lastMessage = message

    // update state with the new element for the specific item in the list
    updateRenderForKey(_convToUpdate.sid, _convToUpdate)
  })
}
// Chat.js

import { findConversations } from '@barelyhuman/twilio-conversations'
import { initializeTwilio } from '../app-init.js'

async function Chat(conversationSid) {
  const existingChatMessages = await fetchMessages(conversationSid)

  render(existingChatMessages, {
    onSend: text => {},
  })

  const client = await initializeTwilio()
  // if the client isn't connected, wait for it to connect
  if (client.connectionState !== 'connected') {
    onInit(() => {
      rerenderChat()
    })
  }

  // the parameter is named `conversationSid` to avoid clashing with the
  // `conversation` instance destructured from the resource below
  const conversationResource = findConversations(conversationSid)

  // this is different as it's from the helper library
  const {
    conversation,
    onMessageAdded: onMessageAddedToConv,
    onTypingStarted: onTypingStartedInConv,
    onTypingEnded: onTypingEndedInConv,
  } = conversationResource

  // update render since we have the resource now
  render(existingChatMessages, {
    onTextChange: text => {
      conversation.typing()
    },
    onSend: text => {
      // render the outgoing message optimistically with a temporary id
      // until the messageAdded event delivers the confirmed message
      const _formattedMessage = {
        id: `pending-${Date.now()}`,
        text: text,
        status: 'pending',
        user: {
          id: myUserId,
        },
      }
      updateRenderForKey(_formattedMessage.id, _formattedMessage)
      conversation.sendMessage(text)
    },
  })

  // add the needed listeners
  const { unsubscribe: unsubMessageAdd } = onMessageAddedToConv(message => {
    const _formattedMessage = {
      id: message.sid,
      text: message.body,
      status: 'sent',
      user: {
        id: message.author,
      },
    }

    // update the state for this specific message, as its status has now changed
    updateRenderForKey(message.sid, _formattedMessage)
  })

  const { unsubscribe: unsubTypingStart } = onTypingStartedInConv(
    participant => {
      isTyping = true
    }
  )

  const { unsubscribe: unsubTypingEnd } = onTypingEndedInConv(participant => {
    isTyping = false
  })

  // clear the listeners before your component unmounts to avoid piling up event listeners
  onComponentDestroy(() => {
    unsubMessageAdd()
    unsubTypingStart()
    unsubTypingEnd()
  })
}

Everything you see here is doable with most frameworks and the actual Twilio library; the helper only creates a global context so the client can be used even where it isn't importable, and so the client functions can be chained as needed. I repeat: you can do everything I've done above with the Twilio library. No, the above code won't run anywhere as-is, since it's pseudocode and there's no framework with the render functions I used; these are examples to be read as examples.

]]>
Mon, 23 Dec 2021 00:00:00 +0000
https://reaper.is/writing/typescript-jsdoc-managing-code-style.html Typescript, VSCode, JSDoc - An overview on managing coding style and restricting developers https://reaper.is/writing/typescript-jsdoc-managing-code-style.html Typescript, VSCode, JSDoc - An overview on managing coding style and restricting developers

JavaScript and TypeScript. The unneeded debate as to which is better and which should be used has been around for as long as CoffeeScript has existed, and the debate shifted to TypeScript once TypeScript started gaining a lot more traction.

Anyway, none of my business.

Getting to what I use and why I use it. Oh and before we start

"JS DEVELOPERS DON'T LIKE STRICT TYPING!!!"

is an argument you can use somewhere else, I work with Go and Rust and I like strict typing so that argument doesn't work with me.

Types and Strict Typing

The problem we have with JS is that variables can change type, which causes issues that make debugging hard. It's not that JS allows you to do it that's the problem; it's that developers need to understand what they're writing and when it's acceptable. Obviously this understanding helps as a reviewer, but then the actual developer doesn't understand the reason for the restriction.

eg:

let a = 1

// in code somewhere
a = 'string value'

// in some other function
a += 2

You can see the issue here because I've segregated it into 3 specific lines where the mutation causes the unexpected behaviour, but this won't be easy to do when working with a larger codebase, and this is where you have 2 options.

  1. Use typescript and strictly type everything
  2. Understand that the issue is with your code style and improve it.

1. Typescript

This is the easy way out: you can leave it to the ts-server to decide how it helps you, depending on configuration, and I'd advise you learn TS whether or not you use it. The problem is that without the ts-server you are basically on your own and have to fall back on your own coding skills to make sure the code is safe. Still, it's always good to have some form of completion to help you, so we'll get to that as well.

2. Improving your code style

  • The easiest step is to treat everything as immutable, which you can start doing by using const instead of let for your declarations.

eg:

const a = 1 // cannot be changed.

a = 'string value' // will throw an error

const b = a + 2 // expected behavior b = 3

Now, the trade-off here is memory usage, but since most JS runtimes have a GC (garbage collector) that keeps memory in check, it's still worth it. Just know that doing this extensively in the global scope is not a good idea: keep allocations inside smaller scopes, which brings us to the next point, smaller broken-down logic blocks.

  • The next step is to break logic down as much as you can to keep it reusable. I don't mean strict DRY coding, because that ends up creating a lot more issues in larger projects, but breaking code into functions / blocks that can be cloned when needed.

eg:

// file: a.js
async function fetchUser() {
  const userId = 2 // would mostly come from the calling function as a param
  const resp = await serverReq(userId)
  return resp
}

// file: b.js
async function fetchUserWithImage() {
  const userId = 2 // would mostly come from the calling function as a param
  const resp = await serverReq(userId)
  const imageURL = await imageReq(resp.profile_pic_id)
  return { user: resp, imageURL }
}

Yes, yes, I could import the first one and use it in the second, and that's a good solution in this scenario, but I'm giving an example of what I mean by code that can be easily cloned: things that don't need much modification to reproduce similar behaviour.

What does this do? You now have 2 functions whose memory usage is defined by themselves; resp doesn't clear up totally since the reference is passed to the calling function, but the internal definitions are cleared as soon as the block ends.

So, in a way, a little more control over memory usage (not as granular as something like C, but it's okay for now).

  • After breaking things down, there's another issue: a lot of developers don't understand the concept of references in JS, so that's what I'll cover now. There are obviously better posts out there that go into detail on this topic, but let's take a brief look.
const a = [1, 2, 3]
const b = a
b[0] = 99

console.log(a, b)

// next snippet
const x = {
  y: 1,
}

const z = x
z.y = 3

console.log(x, z)

People who understand the issue here already understand references. For people who think that changing b has no effect on a, or that changing z.y has no effect on the value of x.y, here's what's going on.

JS uses references for complex types, which in the generic runtime are arrays and objects; internally these return a reference point for the runtime.

eg:

const a = [1,2,3] // returns referencePoint x12132 <= some random address in the runtime memory

when you assign it to another variable, the variable takes the reference.

const b = a // `b` now points to `x12132`

And now any changes you make to b are made on the actual reference, hence the unexpected results when reusing the original array or object.

How do you avoid this?

The solution is cloning, and that could be a whole different post, but for now know that you create a new reference of a complex type either using additions from the newer TC39 proposals (ES6 through ESNext) or using libraries like lodash or underscore to create clones for you.
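For example, here's the difference between a shallow and a deep clone (structuredClone needs a reasonably recent runtime, e.g. Node 17+; lodash's cloneDeep does the same job on older ones):

```javascript
const original = { y: 1, nested: { z: 2 } }

// spread creates a new top-level reference (shallow clone)
const shallow = { ...original }
shallow.y = 3 // does NOT touch original.y

// but nested objects still share a reference
shallow.nested.z = 9 // original.nested.z is now 9 too

// a deep clone copies nested references as well
const deep = structuredClone(original)
deep.nested.z = 42 // original.nested.z stays 9
```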

Once you understand this, you avoid most of the issues around cleaning up arrays and similar requirements. This concept also helps with React and its state management, or Angular and its ngChange directives, since both use reference comparison to see if something changed.

Next?

Next would be combining these few points and seeing the difference. The other thing that bothers developers is that code style needs to be managed and consistent. For this, if anyone has observed, most of my projects have a formatting action that runs standard / prettier and commits back to the code in case of a PR or an edit from the GitHub editor.

This makes sure I can make changes from anywhere and my actions take care of the code style; standard is also in the commit hooks to keep me from making obvious mistakes. Do I need to make it super restrictive? Not really. The point of linters and code formatters is to show you what a simple set of rules can do for you; depending on them to police other people's code will just stop those developers from learning what went wrong.

A few people can learn just from a given explanation, but others need to practically see the code break to understand what they did wrong, and 9/10 times they won't repeat it.

But but! I like the autocompletion from Typescript!?

Um, if you structure your app and code well enough, VSCode is smart enough to help you with autocompletion. I have no issues with TypeScript; I have issues with it trying to be its own language. It went from being a strict type engine to a full-fledged superset, and that adds up work.

eg:

export type PartialState<
  T extends State,
  K1 extends keyof T = keyof T,
  K2 extends keyof T = K1,
  K3 extends keyof T = K2,
  K4 extends keyof T = K3,
> =
  | (Pick<T, K1> | Pick<T, K2> | Pick<T, K3> | Pick<T, K4> | T)
  | ((state: T) => Pick<T, K1> | Pick<T, K2> | Pick<T, K3> | Pick<T, K4> | T)

The above is an implementation of a PartialState type that allows the keys to be any of the keys of the State type, extends the type T if the key is from T, and then allows picking them, so autocompletion works as expected.

The problem here? This needs to be learned and can get more and more complex over time, compared to simply writing the type of what the particular object represents. This is trying to be more than a type: it's trying to be the entire logic of how the code can be accessed, written specifically to restrict how people write function parameters. But is it readable over time? If I come back to this 2 years later, will I understand what I was thinking? Probably not.

But yes, I understand that intellisense is a huge part of a developer's life today, so I use an alternative: I don't write types this complex. I write smaller types that are easy to read, and I use them in my .js files.

Wait, WHAT?

You read that right, I use the types in my .js files. The ts-server is a very powerful tool, and an even more powerful tool today is the code editor VSCode, which handles TypeScript natively since it's built on the same tech.

But it supports the general JS ecosystem too, so it supports JSDoc, and TypeScript itself supports JSDoc since that was the de facto way of writing documentation for JS before all of this came up.

Get to the goddamn example already!

Cool. Since the ts-server can handle both, I can write types in TS and keep using JS for my logic without ever having to set up TypeScript in the project. I don't need a tsconfig, I don't need to compile my code or sit and solve type issues that aren't supposed to be blocking when it's Friday and you have to deploy the project in an hour.

So, this is how I have it set up, as a small example.

// app.js

/** @returns {import("./types").SomeTypeDef} */
function printSomeTypeDef() {
  return {
    name: 'hello', // autocompletes if I type `n` since the return type is defined
    age: 25, // needed to satisfy SomeTypeDef, which declares both fields
  }
}

// types.d.ts
declare interface SomeTypeDef {
  name: string
  age: number
}

Nah, don't get scared: the import statement will be autofilled by VSCode, so it's okay. Also, a minor detail: you can directly write {SomeTypeDef} without importing when you have a single ambient type declaration file; if any of your declaration files use exports instead of ambient declarations, you'll have to use the import syntax.
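The same trick works for parameters and variables, not just return types. A quick sketch, reusing the SomeTypeDef interface from above (the greet function and its values are just my illustration):

```javascript
// JSDoc annotations the ts-server picks up in plain .js files.
// SomeTypeDef is the ambient interface declared in types.d.ts above.

/**
 * @param {import("./types").SomeTypeDef} person
 * @returns {string}
 */
function greet(person) {
  // `person.` now autocompletes with `name` and `age`
  return `hi ${person.name}, you are ${person.age}`
}

/** @type {import("./types").SomeTypeDef} */
const me = { name: 'reaper', age: 25 }
```

The annotations are only comments at runtime, so there's still nothing to compile.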

That's a lot of typing.

Not really, it's the same amount of typing you'd do to import a type in TS and assign it to a function, and most of the JSDoc syntax is autocompleted by VSCode without the need for a ts-server. The issue with this approach is the need for VSCode: for someone like me who works with Vim or Sublime as and when I feel like it, the autocompletion breaks, and that's fine because I'd rather fall back on my own coding skills than totally depend on TypeScript to decide what I can do in my code.

Finally,

  • Do learn TypeScript; it's good to understand the things that are missing in JS and the things you can avoid
  • Don't write types that make the code complex to read - it's useless to be smart for 10 seconds and then go dumb 2 years later, understanding nothing of what you've written
  • Restrictions are to be understood, not imposed on developers - if a developer is just going to follow orders without understanding the reason or applying his own reasoning, then I don't see a good developer; I see a good bot that can write code

I've probably typed a lot of things that will offend people, but these are based on my experiences and on things I've screwed up while learning to code. Your mileage may vary, and you may have valid counterpoints to everything I've written; I'd honestly like to hear them. Till next time...

]]>
Mon, 23 Sep 2021 00:00:00 +0000
https://reaper.is/writing/useful-dokku-commands.html Useful Dokku Commands https://reaper.is/writing/useful-dokku-commands.html Useful Dokku Commands

A micro post for commands that I usually look for, while using dokku.

Create App

dokku apps:create <app-name>

Database

sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git

dokku postgres:create <db-name>

dokku postgres:link <db-name> <app-name>

Logs

dokku logs <app-name>

dokku logs <app-name> -t #to tail the logs

SSL

sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git

dokku config:set --no-restart <app-name> DOKKU_LETSENCRYPT_EMAIL=<email>

dokku letsencrypt <app-name>

Domain

dokku domains:add <app-name> <domain>

Process Management

Restart

dokku ps:restartall
dokku ps:restart <app-name>

Rebuild from source

dokku ps:rebuildall
dokku ps:rebuild <app-name>

Deploy

git remote add dokku dokku@<host>:<app-name>

git push dokku master

or

git push dokku <local-branch>:master
]]>
Mon, 28 May 2020 00:00:00 +0000
https://reaper.is/writing/why-a-look-into-my-brain.html Why are you providing it for free? https://reaper.is/writing/why-a-look-into-my-brain.html Why are you providing it for free?

Intro

Every time someone gets to know about what I'm building, a few questions arise.

  • Why?
  • It's so simple, anyone can make that, why are you wasting your time on it?
  • Who would use that?
  • You should charge for it.
  • You are just wasting your time.
  • Isn't that going to cost you?

I'd like to answer a few.

Let's start.

Simple | To the Point Apps

The simplest example of this would be the task list I created about 2 years ago. It has had a few improvements, like the ability to share the task list with others, but that's about it.

The task list lives in your browser and is built with minimal, well-thought-out code and no web frameworks. That makes it easier for someone to later improve the code for performance, or add functionality without disrupting what exists. I rarely have to worry about bugs with something this small; it's hardly 200 lines of code.

If you'd like to check the source (https://github.com/barelyhuman/rmnd-r)

Normally, that should be enough for people to understand why, but for others, I'll get a little into details.

  1. A decided flow of business logic (rare among projects done for clients).
  2. Less code to maintain, aka fewer bugs to fix.
  3. Easy to scale, if you maintain the modular approach.

While the above 3 are things every project should follow, developers normally mess up at point 2.

Why? A lot of developers have given up on the point of writing efficient software because "Everyone has 8GB ram and a TB of storage, so my 200MB APP with a full Chromium browser shouldn't be a problem for you." and obviously the other group with their "VS Code is so fast. It uses electron too!"

Now, while both of them are kind of right, they forget to see the whole picture. A full stack developer like me has XCode, Android Studio, VS Code or Atom (I use Atom so...), Chrome, PostBird for Postgres, SequelPro for MySQL, RoboT for Mongo, Hyper for the terminal and... you get the idea: your tool isn't the only thing I have on my system. The 200MB keeps adding up, and my RAM keeps getting eaten by your so-called "everyone has enough RAM nowadays" argument.

This is not just against electron. It's against the general mentality that the client can handle this amount of ram usage and this amount of space usage.

While VS Code is actually performant, people forget the amount of time it took Microsoft to get it to this point, go through their talks on it and you'll notice how much effort and hacking it took them to squeeze out that performance.

Why does this relate to my argument? People prefer writing code that just works instead of efficient or well structured optimised code. Why? No time, strict deadlines, constant changes from clients. There's a huge list of factors.

What does this do to the developer?

The developer is now coding just to make the software work. He won't look at the other things that need checking; he'll just get this change done, and this laziness ends up adding multiple lines of dependent code to a module that's used by 10 different modules. Should I blame him? Nope, I can't. But the approach to modifying modules can be changed.

Not everyone has the patience and a large company backing them up for developing and optimising their software project. Plus, people try to make one app do everything, to the point where things depend on each other so much that it becomes hard for a developer to make sure what he writes doesn't impact another piece of code.

People with larger projects know this and have partially solved the issue with tests and code reviews. I say partially because, when you actually go through this, you still spend time rewriting tests for something you built because someone else's code change made your part of the implementation a little close to, uh... un-reactive.

I don't wanna be a hypocrite, so I'll let you know that I myself have developed apps on Electron. I don't hate Electron; I just think it's a little excessive to package the whole browser with the app. Most people have Chrome installed on their systems and we could use the installed version, but that's a discussion for another day.

Now how do you solve this? Treat everything as a package. Everything that can be reused should be its own package. You don't literally have to turn everything into an NPM package or a Rust crate and publish it, but treat them that way: you write it once, and the functionality is frozen. Everything else is built on top of it, not as a modification to the original package. Yes, your new package now depends on the previous one, but if there are 3 other components using it, you don't have to worry about breaking those components.

Modules and components exist so you write isolated code that can be reused by other modules without breaking its own flow. DRY (Don't Repeat Yourself), heard of it?

Hence the small apps. I don't have to spend a lot of time deciding what goes where and can maintain a good, sane structure. By small I don't mean apps that do just one thing, but apps that follow the above-mentioned everything-is-a-package approach.

By the way, we were talking about this Task List

Why don't you market them !?

There's not much that I can write for this question. I'm just not good at talking people into using something. I'm not skilled enough to act like I built a Lamborghini when I know I built a Maruti.

Also, most of the time I build things because I think they'd help me reduce my work. I've had the stupidest ideas and I just built them to get them out of my head. Never have I thought that any of my ideas would help the industry or others.

  • Youtube based music player
  • Idea Storage app
  • The markdown editor
  • Minimal CSS Resets

Just to name a few.

I don't know anyone who'd need these, because there are so many better-built apps being sold by companies with professional sales teams. I can't compete with them. But I can use my own apps.

So, ideas like these don't need marketing and if you do end up at my GitHub page , you'll see a lot of such apps.

For Free? Really? Isn't it costing you?

I'd put it this way: my education played a very minute part in my development life. I was coding before I got into college, and almost everything I learned came from blog posts and tutorials by the kind people serving the open source community. I've been a fan of open source and of services that provide a hobby/free plan for the longest time. Why do you think I keep dreaming about joining GitHub and Mozilla?

But GitHub has a paid plan, TillWhen doesn't!

I realise that, but GitHub also has a huge user base and needs powerful servers, which you can't afford on a day job's salary, hence the pricing model. Also, GitHub has been making more and more features free for solo developers as its revenue from team-based users has been increasing.

On the other hand, TillWhen has hardly 20+ users; I don't need a state-of-the-art server and can handle it with my day job's salary. You can donate to it if you'd like, but that's optional.

But, providing it for free helps me sleep better. Less money to worry about!

The other projects I have are all hosted for free on platforms like Heroku and Netlify with their shared resources, and since I'm the only one using these apps, the usage doesn't break the free limits on either platform.

The only thing I pay yearly is the domain charges. Everything else I've managed to obtain for free. Thus, I don't mind giving it away for free.

]]>
Mon, 10 Jun 2020 00:00:00 +0000
https://reaper.is/writing/why-cant-i-sleep.html Why can't I sleep ? https://reaper.is/writing/why-cant-i-sleep.html Why can't I sleep ?

This post has no informational content and is just me sharing my thoughts and opinions.

I've had countless nights where, no matter what I do, I just can't fall asleep. I've tried light music, meditation, no music, working out, banging my head on the wall.

Okay, I might not have tried working out, but the point is, I've tried enough "sleep hacks" to know that nothing actually works when my brain isn't tired. The problem is, I don't know how to get it tired enough to fall asleep instantly. I don't mind sleeping at random times during the day, but that hinders my personal projects, since I have a dedicated time in the evening after office where I think/learn/build.

You know, there's this "hack" online which says the temperature of the room relative to the temperature of the body plays a role in you falling asleep. I'd believe that, but it didn't work for me, so I'm going to discard it.

Insomnia ?

Nah, I don't think so, but it might be. I mean, I've had days where I've slept for more than 12 hours and days where even 2 is a hard task. The brain runs wild on those days is all I've observed, and it seems to be a problem for a lot of people.

No, I don't have my phone in front of my eyes while I'm trying to sleep. It's just there beside my pillow, connected to a charger, playing the music I normally listen to while trying to sleep.

I guess the thoughts I have while trying to sleep need to be controlled for me to actually fall asleep, but that isn't always conscious, and at some point I do lose control. I can't meditate for hours on end waiting for the body to finally give up. Even if the body gives up, the mind doesn't.

My biggest problem: it forces me into thinking about things I want to forget. Things I'd like to delete from my memory, but nope! Not going to let that happen. You know why? Cause someday, someone might want to know how I tried to copy my swimming coach's underwater swimming and almost drowned. Imagine a kid who just joined swimming trying to outdo his coach.

I did manage to learn underwater swimming quite quickly though... Anyway, I don't understand why it's hard for me to sleep when there are days I can literally drop on the bed and instantly fall asleep.

I guess I'll have to observe a lot more to figure that out. As for now, I'm writing this at 5 in the morning with 0 sleep last night and I should probably just lay down for a few hours so I can at least stay up during the office hours.

]]>
Mon, 10 Aug 2020 00:00:00 +0000
https://reaper.is/writing/workation-oct-2021.html Workation https://reaper.is/writing/workation-oct-2021.html Workation

There's supposed to be a devlog today but there's basically nothing that I did in the past 7 days that would count as enough content for it. There was progress but not enough.

Anyway, The founders planned a trip to Ghej Bid , a small village in Gujarat, India. The plan started about a month ago and as always my "Who cares, I don't want to go" ass just went back to code review and handling merge requests for the day. About 15 days after that, my manager was like "come na, it'll be fun" and so I just thought I'd give it a chance. I've been told enough times that I don't go out and that's why I'm not social. People don't realise it's the other way around.

Back to the trip, So 7 days back from today, I travelled with 3 people, we got to the farm house, ate amazing breakfast, slept at around 2 AM or 3 AM almost every day, made offensive jokes as always, was the only one laughing (obviously!) but end of the day, I think I found peace in the house.

No, no, no! Not because of people but the change in scenery was nice. I normally would just sit in my room or watching something on Netflix and the cool breeze there just made it a lot more peaceful. This lasted like 2 days cause then it was 17 people in total instead of just us 4 and the social anxiety kicked in real quick, which I should be used to by now but it still makes me uncomfortable. Whatever, didn't matter. I still had a room I could just sit in and silence the noise with my headphones. I'd watch,listen or code something that would calm me down.

Stayed in the crowd for a day, ended up almost getting into trouble but that's just how it is. If I could predict everything and anything I would be God and not just a guy with a god complex.

With all this, and me being in vacation mode, all I was able to complete was office work and I ended up writing prototype code for statico's next version since the current one can't be scaled anymore without a major refactor or patching in and around the code. In other news, Razorpay didn't like Taco, I'll have to change the payment implementation to be either Paddle or Stripe and then handle taxes manually (too much work!!!)

All in all, the 3 things I enjoyed the most:

  1. Driving around and also getting lost while driving, because Google Maps and my iPhone decided they didn't want GPS working in offline maps. The detour meant more driving, so more fun for me.
  2. Sitting alone + music + cool breezes every now and then
  3. A Columbus ride we took where I ended up almost losing my voice with all the screaming. Nah, I wasn't scared of the ride, we were all screaming the name of the company. Probably the point where the founder was like "are these the people that handle projects?"

That's where the trip ended. I got back to my relatives' place and that's about it.

Those are the things that happened. On the other side, things that didn't change:

  • Still don't like people
  • Still enjoy food, though my stomach doesn't
  • Still suck at uphill driving and will need to find better ways to improve my car handling when going uphill. To clarify what I mean, the clutch control is still not at the point I'd like it to be

Overall, nice experience.

Wasn't as productive for my side projects, but I guess I needed the quiet time without work to feel refreshed.

]]>
Mon, 25 Sep 2021 00:00:00 +0000
https://reaper.is/writing/working-with-async-code-in-react.html Working with Async Code in React https://reaper.is/writing/working-with-async-code-in-react.html Working with Async Code in React

Due to the nature of my work, I end up working with a lot of libraries, and there's a lot of POC work and experimenting that goes on to help junior devs not cry their eyes out when working with the code.

This part of setting up good DX for everyone on the team requires failing miserably while doing the same for yourself first.

And the most irritating part about hooks has been the amount of work you have to do to handle async functions, which is very common when working with frontend apps. That's also the reason a lot of companies worked on creating good async hooks, though each of them thought of their own use case, and now those are the standard.

Full disclosure: this post kinda tries to market a tiny npm package I made

Let's take swr for example. It keeps a global cache, keyed by hashing the params given to the useSWR function. The same approach is implemented in the react-native version, nandorojo/swr-react-native by Fernando Rojo, which makes modifications for react-navigation. You can use the original vercel/swr in react-native, though focus revalidation won't work since the focus logic is different when working with react-native-navigation.
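To make the cache-keying idea concrete, here's a minimal plain-JS sketch of the concept. This is not swr's actual code; swr has its own hashing, and JSON.stringify is just a naive stand-in here.

```javascript
// A minimal sketch of the caching idea behind swr (not swr's actual code):
// responses are cached under a key derived from the fetcher's params.
const cache = new Map()

function cacheKey(params) {
  // swr uses its own hashing; JSON.stringify is a naive stand-in
  return JSON.stringify(params)
}

async function cachedFetch(fetcher, params) {
  const key = cacheKey(params)
  if (cache.has(key)) {
    // cache hit: the network is skipped entirely
    return cache.get(key)
  }
  const result = await fetcher(params)
  cache.set(key, result)
  return result
}
```

Two calls with the same params hit the network once; a different params object produces a different key and a fresh fetch.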

Back to the point, if I use swr

  • it's going to handle caching for me
  • handle retries if configured to do so
  • handle revalidation (refetching data from network and checking if it's different from the cache and re-rendering if it is) at a polling interval and also on focus of the view/page

All amazing features that you'd want to have, with a really simple API. The only case where the above makes no sense is when you're already working with a library that does most of this.

I work with urql and apollo in almost every GraphQL-backed app that we have, and they already provide hooks and a connection client that handles caching + network and polling (if needed). They don't have revalidation, but it's not that hard to set up.

Now this is where react-async comes into the picture. Having 2 network clients maintain caches based on hashing does hinder performance when working with larger apps, and to be fair, swr and react-query don't advise you to use them alongside other clients; their docs pair them with the vanilla graphql client. I'm not really blaming the libraries, they just don't cover this use case.

Now, the reason this wrapper has to exist is that the hooks provided by the libs (urql and apollo) are rather messy to work with and are limited to a single query. You'll need a custom hook implementation when working with multiple data queries, or you end up writing a new query that uses fragments from both, which leads to a lot of redundant code.

Example:

const FETCH_USER_QUERY = gql`
  query fetchUser {
    fetchUser {
      id
      email
      name
    }
  }
`

const FETCH_USER_ADDRESS_QUERY = gql`
  query fetchUserAddress {
    fetchUserAddress {
      street
      city
      state
      country
    }
  }
`

function ReactExampleComponent() {
  // using a hook generated by Apollo for the above query
  const {
    data: addressData,
    error: fetchingAddressError,
    loading: isAddressDataLoading,
  } = useFetchUserAddress()

  // the URQL equivalent of the same hook returns a tuple instead:
  // const [addressResponse, refetchAddress] = useFetchUserAddress()
  // const {
  //   data: addressData,
  //   error: fetchingAddressError,
  //   loading: isAddressDataLoading,
  // } = addressResponse

  // using another hook to fetch user
  const {
    data: userData,
    error: fetchingUserError,
    loading: isUserDataLoading,
  } = useFetchUser()

  return <>{/*...*/}</>
}

This is a very naive example, but if you work with something like hasura then this can get quite common, since you often end up joining unrelated dependent data for rendering. You either end up writing a lot of useFetch<QueryName> calls and handlers for each, or you write a custom hook to isolate this logic so you can reuse it later, something like below.

function useUserDataAndAddress() {
  // using 2 different hook formats as an example, in reality you'll have just one format based on your lib
  const [addressResponse, refetchAddress] = useFetchUserAddress()

  const {
    data: addressData,
    error: fetchingAddressError,
    loading: isAddressDataLoading,
  } = addressResponse

  const {
    data: userData,
    error: fetchingUserError,
    loading: isUserDataLoading,
    refetch: refetchUser,
  } = useFetchUser()

  return {
    data: {
      user: userData,
      address: addressData,
    },
    loading: isAddressDataLoading || isUserDataLoading,
    error: fetchingAddressError || fetchingUserError,
    refetch() {
      refetchAddress()
      refetchUser()
    },
  }
}

// and then use it in the following manner

function ReactExampleComponent() {
  const { data, error, loading } = useUserDataAndAddress()

  if (loading) {
    return <Loader />
  }

  if (error) {
    // handle error, show toast, render error page etc
    return <></>
  }

  return <>{/*....*/}</>
}

This is what I normally do, and it can get really redundant really quickly, since you're doing nothing but organizing hook data. So I instead decided to go the other route: these libraries (urql and apollo) provide a core client that you can use to write fetchers, and those are simply functions. Now I can write SDK-style functions that I can chain for data, so the above code with react-async would look like

async function fetchUserAndAddress() {
  const user = await SDK.fetchUser()
  const address = await SDK.fetchAddress()
  return { user, address }
  // any rejection from either call propagates up,
  // so useAsync can catch it and expose it as `error`
}
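Since fetchUser and fetchAddress don't depend on each other, the same fetcher could also fire them concurrently. A small sketch of that variant, where the SDK object is a hypothetical stand-in for the client-backed fetchers, not part of react-async:

```javascript
// Hypothetical SDK stand-ins for the fetchers described in the post
const SDK = {
  fetchUser: async () => ({ id: 1, name: 'Reaper' }),
  fetchAddress: async () => ({ city: 'Mumbai', country: 'India' }),
}

// Independent queries can run concurrently instead of one after the other.
// If either promise rejects, Promise.all rejects too, so a wrapper like
// useAsync would still surface the error.
async function fetchUserAndAddressParallel() {
  const [user, address] = await Promise.all([
    SDK.fetchUser(),
    SDK.fetchAddress(),
  ])
  return { user, address }
}
```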

function ReactExampleComponent() {
  const { data, error, loading } = useAsync(fetchUserAndAddress)

  if (loading) {
    return <Loader />
  }

  if (error) {
    // handle error, show toast, render error page etc
    return <></>
  }

  return <>{/*....*/}</>
}

Thus reducing my overall redundant code, and I write the actual fetchers just once, when I'm writing the gql query. That's also something you can automate if you wish to, though I'm fine writing a single line of

// client is the urql client in this case
async function fetchUser(payload) {
  return client.query(FETCH_USER_QUERY, { payload }).toPromise()
}

Obviously, the other side is that if you work with a traditional GraphQL server where the queries are limited, then yes, you can go with the above custom hook approach. You might end up with a few extra custom hooks, but that's about it, and it's still a clean way to handle async data in your apps.

This just makes it easier since I now have

  • a reusable fetcher
  • a reusable and copyable function that doesn't depend on the react context for data
  • an API I don't need to remember cause it literally takes in 2 params useAsync(fetcher, options)

The library is still being used and tested internally at work, so the options part isn't available in the released version. You can use @barelyhuman/react-async@beta to get it, and it's typed, so you'll get the options accordingly, but it's still not documented, so I'd wait till I release the next patch version.

To stay updated you can sign up for the newsletter, follow on twitter or add this blog's RSS, idk, your call.

]]>
Mon, 23 Nov 2021 00:00:00 +0000
https://reaper.is/writing/writing-cleaner-state-in-react.html Writing cleaner state in React and React Native https://reaper.is/writing/writing-cleaner-state-in-react.html Writing cleaner state in React and React Native

Ever since hooks got introduced in React, it's been a lot easier to handle composition in react components, and it also helped the developers of react handle the component context a lot better. As consumers of the library, we could finally avoid having to write this.methodName = this.methodName.bind(this), a redundant part of the code that pushed a few developers to write their own wrappers around the component context.

But that's old news, why bring it up now?

Well, as developers, there's always some of us who just go ahead and follow the standard as is, even when it makes maintenance hard, and in the case of hooks, people seem to ignore the actual reason for their existence altogether.

If you witnessed the talk that was given during the release of hooks, this post might not bring anything new to your knowledge. If you haven't seen the talk,

  1. You should.
  2. I'm serious, go watch it!

For the rebels, who are still here reading this, here's a gist of how hooks are to be used.

Context Scope and hook instances

If you've not seen how hooks are implemented, then put simply: a hook gets access to the component it's nested inside and has no context of its own. That gives you the ability to write custom functions containing hook logic, and now you have your own custom hook.

Eg: I can write something like this

import { useEffect, useState } from 'react'

function useTimer() {
  const [timer, setTimer] = useState(1)

  useEffect(() => {
    // functional update, so the effect doesn't need `timer` as a dependency
    // (and the interval isn't torn down and recreated on every tick)
    const id = setInterval(() => {
      setTimer(t => t + 1)
    }, 1000)

    return () => clearInterval(id)
  }, [])

  return {
    timer,
  }
}

export default function App() {
  const { timer } = useTimer()

  return <>{timer}</>
}

And that gives me a simple timer, though the point is that now I can use this timer not just in this component but any component I wish to have a timer in.

The advantages of doing this

  • I now have an abstracted stateful logic that I can reuse
  • The actual hook code can be separated into a different file and break nothing, since the hook's logic and its internal state are isolated.

This gives us smaller Component code to deal with while debugging.

What does any of that have to do with state!?

Oh yeah, the original topic was about state... The other part of having hooks is the sheer quantity that people spam the component code with, and obviously the most used one is useState.

As mentioned above, one way is to segregate it into a separate custom hook, but if you have like 10-20 useState calls because you're using a form and for some weird reason don't have formik set up in your codebase, then your custom hook will also get hard to browse through.

And that's where I really miss the old setState from the days of class components. There have been various attempts at libraries that recreate setState as a hook, and I also created one, which we'll get to soon. The solution is basically letting the state clone itself and modify just the fields that changed, not that hard, right?

You can do something like the following

const [userDetails, setUserDetails] = useState({
  name: '',
  age: 0,
  email: '',
})

// in some handler
setUserDetails({ ...userDetails, name: 'Reaper' })

And that works (mostly), but it also adds that additional ...userDetails every time you want to update state. I say it works mostly because these objects come with the same limitations any JS object has: the cloning is shallow, and nested state will lose data unless cloned properly. That's where it's easier to just use libraries that handle this for you.
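To make the shallow-cloning pitfall concrete, here's a plain-JS sketch; the field names are made up for illustration:

```javascript
// The spread only clones the top level; nested objects stay shared references.
const userDetails = {
  name: 'Reaper',
  address: { city: 'Mumbai', pin: '400001' },
}

// Replacing a nested object without re-spreading it drops its sibling keys:
const broken = { ...userDetails, address: { city: 'Pune' } }
// broken.address.pin is now undefined

// The fix is to spread every level you touch:
const fixed = {
  ...userDetails,
  address: { ...userDetails.address, city: 'Pune' },
}
// fixed.address.pin is still '400001'
```

Doing that second spread for every nested field on every update is exactly the boilerplate the libraries below get rid of.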

I'm going to use mine as an example, but you can find more like it on NPM.

import { useSetState } from '@barelyhuman/set-state-hook'
import { useEffect } from 'react'

function useCustomHook() {
  const [state, setState] = useSetState({
    nested: {
      a: 1,
    },
  })

  useEffect(() => {
    /* 
      setState({
        nested: {
          a: state.nested.a + 1
        }
      });
    // or 
    */

    setState((prevState, draftState) => {
      draftState.nested.a = prevState.nested.a + 1
      return draftState
    })
  }, [])

  return { state }
}

export default function App() {
  const { state } = useCustomHook()
  return <div className="App">{state.nested.a}</div>
}

and I can use it like I would the default class-style setState. But if you go through it carefully, I actually mutated the original draftState, and that's because @barelyhuman/set-state-hook creates a clone for you, so you can mutate the clone, and when you return it, it still creates a state update without mutating the older state.
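The clone-then-update idea can be sketched outside React entirely. This is just an illustration of the concept, not the library's actual implementation; structuredClone needs Node 17+ or a modern browser.

```javascript
// A rough sketch of the clone-then-update idea behind a setState-style hook.
function applySetState(prevState, updater) {
  if (typeof updater !== 'function') {
    // object form: shallow-merge into a copy, like the class setState
    return { ...prevState, ...updater }
  }
  // function form: hand the updater a deep clone it can freely mutate
  const draft = structuredClone(prevState)
  return updater(prevState, draft)
}
```

Mutating the draft is safe because the previous state object is never touched, which is what keeps React's reference-equality checks working.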

Summary

  • Use custom hooks to avoid spaghetti state and effect management code
  • Use a setState replicator if you are using way too many useState hooks

Make it easier on your brain to read the code you write.

]]>
Mon, 30 Aug 2021 00:00:00 +0000