[{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/blog/","section":"","summary":"","title":"","type":"blog"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/tags/maintenance/","section":"Tags","summary":"","title":"Maintenance","type":"tags"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/tags/new-features/","section":"Tags","summary":"","title":"New Features","type":"tags"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/tags/refactoring/","section":"Tags","summary":"","title":"Refactoring","type":"tags"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/tags/scalability/","section":"Tags","summary":"","title":"Scalability","type":"tags"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/tags/technical-debt/","section":"Tags","summary":"","title":"Technical Debt","type":"tags"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/tags/technical-stack/","section":"Tags","summary":"","title":"Technical stack","type":"tags"},{"content":"A key challenge in scaling software is avoiding unnecessary friction with your tech stack. As systems grow more complex and teams expand, engineers naturally question their architectural choices. While this questioning helps improve systems for the long term, it becomes problematic when their maintenance and evolution consume so much time that they prevent you from focusing on actual business needs.\nWhy do people feel the need to change? # *\u0026quot;*We should use X, it would be so much easier than all this nonsense!\u0026quot; — Sound familiar? This common reaction emerges when system maintenance becomes challenging. Teams often gravitate toward starting fresh with shiny new technology, seeking a sense of progress and hoping old problems will magically vanish. If you hear these conversations repeatedly, your current stack likely has some issues:\nHigh complexity: The architecture has grown too convoluted, with only a few people understanding it fully. Component changes can trigger unexpected side effects that remain hidden during feature development, leading to longer development cycles and frustration. Poor developer experience: This manifests in various ways—costly and time-consuming deployments, inability to run or test the system locally or in shared environments. When development becomes frustrating, people resort to shortcuts. Production becomes the default testing ground, making developers hesitant to refactor or improve the codebase. Lack of context: People and systems evolve. In long-running projects, team members come and go, creating cycles of knowledge transfer where crucial context gets lost. This creates uncertainty around changes and leads to poorly informed decisions. It\u0026rsquo;s a misconception that simply adopting newer technology will improve performance or developer experience. Yet it\u0026rsquo;s equally wrong to assume it won\u0026rsquo;t. Every technology shift involves tradeoffs. 
The real challenge lies in balancing modern industry standards with immediate business needs.\nShould we change it or not? # The answer depends on your situation. While some organizations give their engineers time and space for proper implementation, most prioritize shipping products. As Dan McKinley notes in his presentation, you can\u0026rsquo;t focus on product requirements if you\u0026rsquo;re debating whether to use Cassandra (1). Before proceeding, consider these key questions with your team:\nDoes it fix something your current stack can\u0026rsquo;t? # Sometimes new functionality requires new technology, such as adding a caching layer or message queue. In these cases, it\u0026rsquo;s better to invest early rather than implement a temporary solution that won\u0026rsquo;t scale within months.\nCan your team use the new stack, and do they want it? # Regardless of who proposes it, ensure the entire team supports this change and has relevant experience with the new stack. Don\u0026rsquo;t waste energy convincing people that X is better than Y when both tools are equally capable. Without team buy-in, you\u0026rsquo;ll face friction and the maintenance burden will fall on those who proposed the change.\nDo you have support within the company? # Are there colleagues available to help with issues? In smaller companies, the answer is typically no, or people are too busy. While some argue that containerization makes the technology stack less relevant, this mainly applies to large organizations like Google or Microsoft. In a 30–40 person engineering team, alignment is crucial.\nIs the tradeoff worth it? # Can this be done quickly, or will the migration prevent the team from delivering other work? If there\u0026rsquo;s already pressure to deliver product features, it might be better to address existing problems with your current stack rather than create new ones with different technology.\nDoes your infrastructure support it (pipelines, deployments, libraries)? # If you\u0026rsquo;ve decided to proceed, verify that the new stack has been used successfully before. Setting up operational boilerplate for building, testing, and deploying code requires significant effort, not to mention creating domain-specific shared libraries. If no one has done it before, your team will need to build this foundation.\nFor how long will you need to maintain both stacks simultaneously? # Can you perform a complete replacement? If not, how long will parallel maintenance last? After adopting a new stack, you need a plan to decommission the old one or at least remove it from critical paths. The longer you wait, the harder maintenance becomes.\nAnswering these questions thoroughly will help your team understand the costs and benefits of introducing a new tech stack and make an informed decision about moving forward.\nClosing Remarks # There are no absolute right or wrong answers when choosing a new stack—only tradeoffs. Progress happens when someone is willing to accept these tradeoffs at the right moment. However, exclusively using stable technology is the opposite extreme, which can slow you down long-term through hiring challenges, technical stagnation, and difficult evolution.\nMost teams are better off sticking to what they know. This approach allows them to focus specifically on product development and deliver faster—particularly crucial in early stages when finding market fit is paramount. As companies scale, the need for new technology naturally emerges. 
At these critical moments, someone must invest in pushing the infrastructure forward. The real challenge lies in identifying these moments and accepting the right tradeoffs without disrupting business operations.\nWhen considering new technology, don\u0026rsquo;t let personal preferences cloud your judgment. Make a thorough assessment with your team and ensure everyone supports the decision to move forward.\nReferences # (1) Choose Boring Technology, Dan McKinley\n","date":"November 28, 2024","externalUrl":null,"permalink":"/blog/the-stack-of-less-resistance/","section":"","summary":"A key challenge in scaling software is avoiding unnecessary friction with your tech stack. As systems grow more complex and teams expand, engineers naturally question their architectural choices. While this questioning helps improve systems for the long term, it becomes problematic when their maintenance and evolution consume so much time that they prevent you from focusing on actual business needs.","title":"The stack of less resistance","type":"blog"},{"content":"","date":"November 28, 2024","externalUrl":null,"permalink":"/","section":"Welcome to the jungle","summary":"","title":"Welcome to the jungle","type":"page"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/bitcoin/","section":"Tags","summary":"","title":"Bitcoin","type":"tags"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/crypto/","section":"Tags","summary":"","title":"Crypto","type":"tags"},{"content":"The idea of a borderless decentralized currency remains a very romantic one, but we have yet to see global adoption. After almost 15 years of Bitcoin, rather than taking over central banks, crypto seems to be filling different gaps in economies around the world.\nAfter the recent scandals, namely the FTX downfall, and the US regulators choking crypto businesses, there was a massive purge in the industry. VC money mostly shifted from crypto to AI and adoption is down overall in developed countries, which already have an accessible offer of banking services.\nEnd-user adoption is low # Even with rising inflation and an economic recession around the corner, in places like the US, UK, or EU there isn’t an incentive strong enough to push people away from their financial system:\nEntry barriers are high: Setting up a crypto wallet and sending and receiving tokens requires more effort when compared to using Revolut, Wise, or Monzo, for example. Moreover, people are used to paying for everything in a single domestic currency, which confuses them when introducing a new one. It’s way more natural to charge $10 for a beer than 0.000028 BTC, for example; Lack of solid regulation: Despite significant developments in the EU and the US, we’re yet to see properly established regulation frameworks that can level crypto with other financial instruments and are clear to users, especially in what regards to tax; Banks are relatively safe: Even with all the power exercised by banks and governments, it is safe to have money in the bank, given that deposits in the EU are protected by legislation, as well as in the US with the FDIC and credit unions; Not ready for payments: The existing payments industry is tightly coupled with banking and we don’t have yet the right tools to make it easier and cheaper for any merchant to accept crypto payments; All things considered, banking is still pretty much a no-brainer because it’s accessible and effective. 
The end users don’t care about blockchain, bitcoin, or whatever technology holds their money, as long as it’s easy to use and reliable.\nFinancial institutions are pushing forward # At the same time, as the finance industry responds with competition, it is also starting to adopt and integrate crypto assets both on the commercial and technical sides in several different ways:\nRegulated investment assets: With the improvements in regulation, big asset managers like Grayscale and Blackrock are starting to get into digital asset markets. 2024 should see a major flow of institutional money into crypto after the upcoming spot bitcoin and other crypto ETFs are approved; Enabling faster settlements: Today’s money transactions are very slow. Most payments might seem instant to the end user but that’s just because one of the parties is bearing the liability. Several apps like Strike, PayPal, Circle Pay, or BitPay are already using blockchain technology (lightning network and stablecoins) behind the scenes to improve their settlement speeds, and ultimately the success rate because fewer liability checks are required; Transfers between institutions: Whenever a payment is made, there are usually several transactions between different institutions like card issuers, banks, payment acquirers, or any other entity taking part in the business. Not only are those transactions slow, but there also need to be a lot of controls in place for these institutions to trust each other. Stablecoins can play a key role here, not just to improve speeds but also to provide a shared audit trail that simplifies trust between all of them. Visa and Paypal are two of the companies experimenting with stablecoins for this purpose; Decentralized Finance: Although there is still a long way to go on the regulatory side, DeFi is still on the rise, with platforms like Compound, AAVE, or Curve offering new financial products with better interest than bonds or savings accounts. We expect some of these companies to become fully regulated at some point; however, institutions like JP Morgan will get there first by adding DeFi products to their offer; Final Remarks # The financial freedom idea is still pretty much alive among Bitcoiners, and it\u0026rsquo;s proven itself as a strong alternative in weaker economies. However, in the developed world, it has found more resistance from both regulators and users. There is still a big gap between blockchain technology and the actual consumer products that could lead to mass adoption. End users want a product that\u0026rsquo;s easy and legal to use, and don\u0026rsquo;t really care if it\u0026rsquo;s powered by blockchain or not, even if it means that somebody else can take control of their money, simply because that\u0026rsquo;s not common in developed countries.\nCryptocurrencies are making their way into the finance industry, mostly in the form of speculation vehicles, but also on the technological side, with banks and financial institutions looking into using blockchains to increase settlement speeds, security, and the audit trail of their existing products.\nPretty much like banking and currencies aren’t the same in different parts of the world, cryptocurrencies are filling different gaps in economies, while at the same time offering a competitive alternative to existing financial instruments. 
Although it’s not straightforward, we could say that cryptocurrencies are improving the financial industry, rather than taking it over.\n","date":"December 13, 2023","externalUrl":null,"permalink":"/blog/crypto-adoption-developed-economies/","section":"","summary":"After almost 15 years of bitcoin, rather than taking over central banks, crypto seems to be filling different gaps in economies around the world. In the developed world, end user adoption is low, but institutional adoption is on the rise.","title":"Crypto adoption in developed economies","type":"blog"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/crypto-etfs/","section":"Tags","summary":"","title":"Crypto ETFs","type":"tags"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/defi/","section":"Tags","summary":"","title":"DeFi","type":"tags"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/finance/","section":"Tags","summary":"","title":"Finance","type":"tags"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/categories/finance/","section":"Categories","summary":"","title":"Finance","type":"categories"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/paypal/","section":"Tags","summary":"","title":"Paypal","type":"tags"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/stablecoins/","section":"Tags","summary":"","title":"Stablecoins","type":"tags"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/tech/","section":"Tags","summary":"","title":"Tech","type":"tags"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/categories/tech/","section":"Categories","summary":"","title":"Tech","type":"categories"},{"content":"","date":"December 13, 2023","externalUrl":null,"permalink":"/tags/visa/","section":"Tags","summary":"","title":"Visa","type":"tags"},{"content":"","date":"September 3, 2023","externalUrl":null,"permalink":"/tags/career-ladder/","section":"Tags","summary":"","title":"Career Ladder","type":"tags"},{"content":"","date":"September 3, 2023","externalUrl":null,"permalink":"/tags/culture/","section":"Tags","summary":"","title":"Culture","type":"tags"},{"content":"","date":"September 3, 2023","externalUrl":null,"permalink":"/tags/leadershio/","section":"Tags","summary":"","title":"Leadershio","type":"tags"},{"content":"","date":"September 3, 2023","externalUrl":null,"permalink":"/tags/mentoring/","section":"Tags","summary":"","title":"Mentoring","type":"tags"},{"content":"","date":"September 3, 2023","externalUrl":null,"permalink":"/tags/relationships/","section":"Tags","summary":"","title":"Relationships","type":"tags"},{"content":"","date":"September 3, 2023","externalUrl":null,"permalink":"/tags/self-improvement/","section":"Tags","summary":"","title":"Self-Improvement","type":"tags"},{"content":"","date":"September 3, 2023","externalUrl":null,"permalink":"/tags/software-engineering/","section":"Tags","summary":"","title":"Software Engineering","type":"tags"},{"content":"A little over a year ago, I decided to leave a staff engineer position at my then-current company and start fresh as a senior developer. 
It felt like I was throwing away years of investment in developing my people and technical leadership skills, but the allure of working in the crypto industry had been tempting me for some time and ultimately led me to make the move.\nIt took me a while to fully process the tradeoff, but eventually, I was able to let go of the feelings of guilt and focus on my growth and self-improvement. As I traverse this path once again, several things strike me in a different way than they did before. In this article, I will outline a couple of random thoughts that I’ve been collecting as I experiment and improve my approach to software engineering.\nTechnical skills # It’s easy to forget. Always keep one foot on the ground.\nAs we move up the career ladder, it\u0026rsquo;s common to spend less time doing technical work. However, in my opinion, it\u0026rsquo;s crucial to keep an eye on this metric and make sure it doesn\u0026rsquo;t drop to zero. Throughout my career, I\u0026rsquo;ve been closer to delivery, and I\u0026rsquo;ve always been critical of architecture astronauts and their unfeasible ideas, until one day I was also floating in outer space without even realizing it.\nDuring my first few weeks as a developer, I felt a mixture of excitement and embarrassment as I made the same mistakes I had repeatedly warned my previous teams against. As a rule of thumb, whatever your level of seniority is, if you have a technical leadership position, you should spend some time yourself doing fieldwork. Keeping one foot on the ground will make you aware of the issues and limitations affecting your team, while at the same time giving you the field knowledge required to make good and realistic decisions.\nRelationships # Work on building relationships as if they were a required skill.\nEstablishing oneself in a group setting varies depending on each individual\u0026rsquo;s personality, but it is not solely dependent on it. I have found that intentionally connecting with others, doing frequent catch-ups, one-on-ones, pair programming, or whatever promotes spoken communication, goes a long way in earning your place in the team, especially in this post-COVID era where everyone is just a portrait on Zoom. Building trust within the team increases the likelihood of your ideas and contributions being taken into account, and it also increases the quality and honesty of the feedback you receive from your peers. As a side effect, teams with strong bonds between members tend to be healthier, more motivated, and more resilient to external pressure.\nMentoring # Mentors and sponsors will speed up your growth.\nLearning and growth opportunities are everywhere, provided that you are willing to take them. However, having trust and credibility might not be enough if you don\u0026rsquo;t know how to identify those opportunities, which is likely to happen if you are new to the company. It is much easier to navigate if you are getting oriented by someone who has walked that path before. Therefore, always try to surround yourself with more experienced people and allow them to help you steer your career. This principle applies to pretty much every role, regardless of the level of seniority. You\u0026rsquo;ll be surprised to find that most people are willing to help if you approach them with humility.\nCulture # Culture is not just about being nice. You should prepare for it.\nAs humans, we all have different ways of speaking, thinking, and doing things. 
This diversity can have a powerful impact on your growth, whether positive or negative. Companies that create a safe space and encourage cultural exchange will benefit from their employees experimenting more and sharing knowledge, resulting in both individual and collective growth. In contrast, toxic cultures hinder growth, as people become more afraid to fail and are less encouraged to experiment and share ideas.\nWhat I’ve come to realize is that the company can only meet you halfway. Working in a diverse environment goes beyond being open to differences in the sense that it requires you to study them so that you don’t send the wrong message or draw the wrong conclusions when interacting with your peers. A friend of mine recommended The Culture Map, which I found to be quite a useful tool for providing context when interacting with someone from a different culture.\nDifferent approaches # You’ll learn different ways of doing things you already did.\nAs we progress in our careers, we tend to develop strong opinions and habits on how to approach certain problems. Starting a new job can bring different perspectives that challenge our established approaches. As the authors of Software Architecture: The Hard Parts argue: There is no right or wrong, just different trade-offs and ways of thinking. Avoid making assumptions and listen to other opinions before forming your own. Remember, the company already existed before you arrived.\nStarting over # You’re not starting from zero nor throwing away what you’ve achieved so far.\nSwitching companies and roles is a big undertaking, especially at a psychological level. The idea of giving up something that cost us so much to achieve and starting over can easily feel like a downgrade to our career. While to some extent that may be true, you can carry past experiences to the new position, which will give you a head start and make the next attempt easier. Moreover, you can improve and perfect your skills, pretty much like doing something for the second time. Just like we rarely get a computer program to work the first time, it\u0026rsquo;s okay to make career mistakes. Instead of focusing on feeling like you\u0026rsquo;re moving backward, retrace the steps from your past experiences, see where you can improve, and do it better and faster until you\u0026rsquo;re happy and proud of what you\u0026rsquo;ve accomplished.\nComfort zone # Don\u0026rsquo;t be afraid to stay in your comfort zone. This could also mean growth.\nMost self-improvement content will tell you that leaving your comfort zone is necessary for personal growth, but I never quite fully agreed. In my opinion, it makes sense to push yourself out of your comfort zone in terms of intensity, such as taking on bigger and more complex challenges or working harder. However, I found that I produced better output by eliminating nonessential tasks that made me uncomfortable, such as public speaking or people management, and instead focusing on what I enjoy most, such as coding and architecture.\nWe tend to perform better at tasks we enjoy because we are motivated and willing to experiment, learn, improve, and repeat as many times as necessary. In a way, this can also be seen as stepping out of our comfort zone, as we are constantly pushing ourselves to improve in areas we are passionate about.\nFinal remarks # Looking back on the past year, I am glad I made the move, but the feeling did not come immediately. 
As I returned to work and experimented with different approaches to my career, I realized that I had forgotten what this feeling was like, and in some cases, had never experienced it before. Although I kept the same goals, I took a step down instead of making a career change. Whether there will be two steps up or not, only time will tell. For now, I can say that this move allowed me to rediscover the joy in software development while also reflecting on areas where I can improve and, bit by bit, become a better engineer.\nReferences in this article # Don\u0026rsquo;t let architecture astronauts scare you High performing teams don\u0026rsquo;t leave relationships to chance Finding your staff sponsor The Culture Map Software Architecture: The Hard Parts ","date":"September 3, 2023","externalUrl":null,"permalink":"/blog/what-ive-learned-from-moving-down-the-career-ladder/","section":"","summary":"A reflection on moving back from staff to senior engineer. How it affected me personally and what impact it had on my career.","title":"What I’ve learned from moving down in the career ladder","type":"blog"},{"content":"","date":"June 5, 2023","externalUrl":null,"permalink":"/tags/integration-testing/","section":"Tags","summary":"","title":"Integration Testing","type":"tags"},{"content":"One of the many controversial aspects of software development is the way we test. Thanks to containers and the idea of one database per microservice, testing a running application that’s connected to a database is now an easy task, considered by many a best practice, one that allows greater confidence in the deployable.\nWe define integration testing as the activity of testing an application and its database together, which might be controversial given that the database is also part of the application. However, in this article, we focus on the specifics of rocket and sqlx rather than the testing philosophy. Available tools # Once again, rocket and sqlx have the right tools to do the job but we\u0026rsquo;re missing a bit of support or compatibility between both as well as documentation on how to build an integration testing setup using these crates.\nSQLx test macro # We\u0026rsquo;ll use the sqlx::test macro, which provides all the boilerplate required for integration testing:\nCreates a disposable database for each test case Provides utilities to connect to those databases Cleans up the used test databases Executes database migrations if needed This looks almost perfect as it pretty much does everything we need outside testing, but the actual problem is wiring these test databases with the rocket testing framework.\nRocket local client # Rocket tests use the Local Client, which takes an initialized rocket instance as a parameter. However, we usually configure a static database address in one of the configuration sources (by default env vars or Rocket.toml file) and we can\u0026rsquo;t override rocket_db_pools with the connections created by sqlx::test.\nCreating a special instance for tests # Fortunately, one of the test modes supported by sqlx::test takes the connection details as a parameter, instead of the connection pool itself. We can use that testing mode to get the generated database name and create a dynamic configuration that will be passed to the rocket initialization.\nDeclaring test cases # As mentioned above, we\u0026rsquo;ll be using the testing mode where the database connection details are passed to the test instead of the connection itself. 
We\u0026rsquo;ll then pass those details to a function that will produce a local client and a rocket instance that is connected to the database provided by sqlx::test. This functionality must live on a separate function so that we can reuse it to create a new rocket instance on each test case.\nThere is one caveat in this approach though: For sqlx::test to orchestrate the testing databases, a DATABASE_URL variable needs to be declared in a .env file for postgres and mysql as mentioned in the docs so that it knows how to connect. We\u0026rsquo;ll be using postgres in this article, which may have a slightly different implementation from mysql and sqlite.\n#[sqlx::test] async fn my_test_case( _pg_pool_options: PgPoolOptions, pg_connect_options: PgConnectOptions, ) -\u0026gt; sqlx::Result\u0026lt;()\u0026gt; { let client = async_client_from_pg_connect_options(pg_connect_options).await; ... Building a new rocket config # The next step is to create a new connection string and build a configuration object that can be used by rocket to connect to the database created by sqlx::test. PgConnectOptions only allows us to get the database name (using the get_database() method), so we\u0026rsquo;ll have to hardcode the rest of the connection string or read it from the .env file. This only applies to the testing setup, so it should be an acceptable tradeoff.\nWith the new connection string (db_url) we can use the rocket::config API to create a config map that is understood by rocket (db_config). We then merge it with rocket::Config::figment(), which is the default configuration that is read from the env and Rocket.toml.\nRocket config uses the Figment crate, which is a library for loading configurations from multiple sources and merging them according to a given hierarchy of precedence. pub async fn async_client_from_pg_connect_options( pg_connect_options: PgConnectOptions, ) -\u0026gt; Client { let db_url = format!( \u0026#34;postgres://postgres:example@localhost:5432/{}\u0026#34;, pg_connect_options.get_database().unwrap() ); let db_config: Map\u0026lt;_, Value\u0026gt; = map! { \u0026#34;url\u0026#34; =\u0026gt; db_url.into(), }; let figment = rocket::Config::figment() .merge((\u0026#34;databases\u0026#34;, map![\u0026#34;mydb\u0026#34; =\u0026gt; db_config])); let client = Client::tracked(rocket_from_config(figment)) .await .expect(\u0026#34;valid rocket instance\u0026#34;); return client; } Once the configuration object is built, the only thing left to be done is to create a new rocket instance using this new config, which is what the rocket_from_config method will do.\nCreating a rocket instance with custom config # The hardest part is done, we now just need to change the way we create the rocket instance from using rocket::build to using rocket::custom. This can be done in many different ways, but to make sure that the app under test is as close as possible to the production one, the best we can do is to have a single function rocket_from_config handle the initialization code, and then the default rocket launcher passing Config::figment(), which is the default figment used by rocket::build().\n#[launch] fn rocket() -\u0026gt; Rocket\u0026lt;Build\u0026gt; { rocket_from_config(Config::figment()) } fn rocket_from_config(figment: Figment) -\u0026gt; Rocket\u0026lt;Build\u0026gt; { rocket::custom(figment) .attach(MyDb::init()) .attach(AdHoc::try_on_ignite(\u0026#34;SQLx Migrations\u0026#34;, run_migrations)) .mount( \u0026#34;/\u0026#34;, routes![ ... ], ) } Voila! 
Our integration testing setup is complete and ready to use. Once the tests are run, sqlx will start creating sqlx_test_xx numbered databases, and each test deletes the previous one, so a quick way to validate that it worked is to check for a database with a name like that on the local db instance. Happy Testing!\nReferences in this article # Rocket Web Framework SQLx Database Toolkit SQLx Test Macro Rocket Local Client Figment Config Library Rocket Config ","date":"June 5, 2023","externalUrl":null,"permalink":"/blog/integration-testing-rocket-sqlx/","section":"","summary":"A simple guide on how to build an integration testing setup for rocket microservices using the sqlx crate to leverage clean databases for each individual test","title":"Integration testing with rocket and sqlx","type":"blog"},{"content":"","date":"June 5, 2023","externalUrl":null,"permalink":"/tags/rocket/","section":"Tags","summary":"","title":"Rocket","type":"tags"},{"content":"","date":"June 5, 2023","externalUrl":null,"permalink":"/tags/rust/","section":"Tags","summary":"","title":"Rust","type":"tags"},{"content":"","date":"June 5, 2023","externalUrl":null,"permalink":"/tags/sqlx/","section":"Tags","summary":"","title":"Sqlx","type":"tags"},{"content":"","date":"June 4, 2023","externalUrl":null,"permalink":"/tags/database-migrations/","section":"Tags","summary":"","title":"Database Migrations","type":"tags"},{"content":"One of the most common setup tasks for starting a new microservice project is to automate the database setup so that it gets out of the way whenever there\u0026rsquo;s a new schema update or the need to move data around.\nThe rocket and sqlx libraries are a popular combination for building microservices in rust and both of them offer great tooling to create an automated database migration setup.\nSetting up Rocket # Out of the box, rocket only provides you with a database init fairing for initializing the rocket_db_pools connection pool, as documented in the official guide. The rocket_db_pools crate supports the most popular databases but doesn\u0026rsquo;t include anything about migrations.\nA Fairing is a middleware that you can hook to certain stages of the rocket application and request lifecycle and execute custom callbacks.\nCreating a Fairing for the migrations # Without built-in fairings for sqlx database migrations, we need to implement our own, either by creating a struct that implements the Fairing trait or by using the AdHoc fairing, which takes a callback as input.\n#[derive(Database)] #[database(\u0026#34;mydb\u0026#34;)] pub struct MyDb(sqlx::PgPool); async fn run_migrations(rocket: Rocket\u0026lt;Build\u0026gt;) -\u0026gt; fairing::Result { // run the migrations } We chose to use the AdHoc fairing for the sake of simplicity, but the struct approach would probably be easier to reuse or bundle in a library. The first step is to implement a function that takes a Rocket\u0026lt;Build\u0026gt; instance as a parameter and returns a fairing::Result. 
We\u0026rsquo;ll focus on the implementation later but, for now, we only care about the function signature, which is the minimum required to create the fairing.\nRunning the Fairing in the ignite phase # With the function in place, we can now pass it to the AdHoc::try_on_ignite method, which will create a fairing that will execute the run_migrations callback on the ignite phase and prevent the application from starting if the callback fails with an error.\n#[launch] fn rocket() -\u0026gt; Rocket\u0026lt;Build\u0026gt; { let migrations_fairing = AdHoc::try_on_ignite(\u0026#34;SQLx Migrations\u0026#34;, run_migrations); rocket::build() .attach(MyDb::init()) .attach(migrations_fairing) .mount( \u0026#34;/\u0026#34;, routes![ ... ], ) } The last step is to attach the fairing to the application using the rocket builder and validate if it works properly by starting the app. The console output will reveal the attached fairings and the phases they are configured to run at.\n📡 Fairings: \u0026gt;\u0026gt; \u0026#39;mydb\u0026#39; Database Pool (ignite, shutdown) \u0026gt;\u0026gt; SQLx Migrations (ignite) \u0026gt;\u0026gt; Shield (liftoff, response, singleton) SQLx setup # Now we just need to run the migrations on our run_migrations callback, and for that, we use the sqlx::migrate! macro which takes a directory with .sql files and executes them against MyDb at runtime.\nThe migrate! macro doesn\u0026rsquo;t require the .sql files to be present at runtime, because it will load them to strings and embed them in the application binary during compile time. async fn run_migrations(rocket: Rocket\u0026lt;Build\u0026gt;) -\u0026gt; fairing::Result { match MyDb::fetch(\u0026amp;rocket) { Some(db) =\u0026gt; match sqlx::migrate!(\u0026#34;./src/db/migrations\u0026#34;).run(\u0026amp;**db).await { Ok(_) =\u0026gt; Ok(rocket), Err(e) =\u0026gt; { error!(\u0026#34;Failed to run database migrations: {}\u0026#34;, e); Err(rocket) } }, None =\u0026gt; Err(rocket), } } And that\u0026rsquo;s it! If all went well, the application should start successfully and a new table sqlx_migrations was added to the db to keep track of what was already executed. Happy migrations !\nReferences included in this article # Rust programming language Rocket web framework SQLx database toolkit Rocket database guide Rocket database connection pool Database init fairing Rocket fairing Rocket AdHoc fairing Rocket Phases SQLx migrate! macro ","date":"June 4, 2023","externalUrl":null,"permalink":"/blog/db-migrations-rocket-sqlx/","section":"","summary":"A simple guide on how to setup sql database migrations in rust using the rocket web framework and the sqlx toolkit crates","title":"Database migrations with rocket and sqlx","type":"blog"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/c#/","section":"Tags","summary":"","title":"C#","type":"tags"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/control-flow/","section":"Tags","summary":"","title":"Control Flow","type":"tags"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/error-handling/","section":"Tags","summary":"","title":"Error Handling","type":"tags"},{"content":"The try-catch idiom is the most common approach to error handling and one of the first things you\u0026rsquo;re taught when learning how to code. The idea is simple: There\u0026rsquo;s a safe scope where errors can happen, and when they do, the runtime ensures that the program jumps (remember goto ?) 
into a contingency scope where those errors can be handled. While this is a pretty simple concept, it can get messy pretty quickly. The alternative we discuss, treating errors as return values, is not a new idea, but it\u0026rsquo;s recently getting more attention due to modern languages like Golang and Rust partially abolishing exceptions.\nfile, err := os.Open(\u0026#34;/hello.txt\u0026#34;) if err != nil { fmt.Println(err) return } The simplest example of error handling in Golang\nlet file_result = File::open(\u0026#34;hello.txt\u0026#34;); let file = match file_result { Ok(file) =\u0026gt; file, Err(error) =\u0026gt; panic!(\u0026#34;Problem opening the file: {:?}\u0026#34;, error), }; The equivalent in Rust\nExceptions in Typescript # Typescript, on the other hand, has really poor support for error handling. Pretty much anything can be thrown (including numbers, strings, etc \u0026hellip;), and because of that, there is no type safety. The error parameter in the catch block can either be any or unknown, so you have to check for each different error type that can be thrown.\ntry { // ... } catch (err) { if (err instanceof CustomerNotFoundError) { console.log(\u0026#34;Could not find customer\u0026#34;, err.customerId); } if (err instanceof Error) { console.log(err.message); } } Which is not the end of the world when compared to Java or C#\u0026rsquo;s multiple exceptions or catch blocks. What\u0026rsquo;s more uncomfortable, though, is that there are no checked exceptions either, so there is no way to enforce exhaustive error handling at compile time. We can either use a broader catch-all or simply trust that we\u0026rsquo;ll always remember to handle every possible case.\nAre exceptions bad practice? # Not at all, if used in moderation. The problem usually lies in the abuse of exceptions and the complexity of the codebase, something we\u0026rsquo;re not made aware of by that try-catch example in the slides back in Programming 101, where the catch block is almost immediately after the line that may produce an error. In bigger projects, things can get a bit messier, and we\u0026rsquo;re often confronted with functions where an error is handled several lines away from where it was thrown, or, to avoid that, a stupid amount of smaller try-catch blocks that make the code unreadable.\nOn top of that, if the language doesn\u0026rsquo;t offer any support for checked exceptions, or if we\u0026rsquo;re throwing runtime exceptions, it becomes very easy to miss a try-catch block or a special condition for error sub-types.\nThe thing with result objects # The result object is a simple idea that allows you to treat errors as values. Unlike try-catch blocks, result objects alone don\u0026rsquo;t do much; you have to change your programming style to use them. Some languages have mechanisms to ensure the error case is handled, but in JavaScript/TypeScript, since there\u0026rsquo;s no native support for results, it cannot be enforced, so it\u0026rsquo;s really about how defensive your programming style is. Personally, when writing code, I find it easier to handle all failure cases before moving to the next context. Once all cases are handled, it means a particular step is done, which closes a metaphorical box in my head, making room for more thinking. In addition, whoever reads that code next gets to do it from top to bottom, without needing to jump around the call stack to understand what happens in case of errors. 
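Since the snippets below assume that some Result type already exists, here is a minimal sketch of what declaring one could look like (the names Ok and Err and the isError discriminant are just one possible convention, picked to match the usage shown next; a library such as ts-results could be used instead):\ntype Ok\u0026lt;T\u0026gt; = { isError: false; value: T }; type Err\u0026lt;E\u0026gt; = { isError: true; error: E }; type Result\u0026lt;T, E\u0026gt; = Ok\u0026lt;T\u0026gt; | Err\u0026lt;E\u0026gt;; const Ok = \u0026lt;T\u0026gt;(value: T): Ok\u0026lt;T\u0026gt; =\u0026gt; ({ isError: false, value }); const Err = \u0026lt;E\u0026gt;(error: E): Err\u0026lt;E\u0026gt; =\u0026gt; ({ isError: true, error });\nTypeScript keeps types and values in separate namespaces, so the Ok and Err constructors can share names with the Ok and Err types, and narrowing on isError gives type-safe access to either value or error.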
The main idea is to return an error result instead of throwing it and then handle the returned error immediately after the function was called, before proceeding to the next context.\nconst readFile = () =\u0026gt; { if (!fs.existsSync(file)) { return Err(\u0026#34;File not found\u0026#34;); } else { return Ok(fs.readFileSync(file, \u0026#34;utf-8\u0026#34;)); } } ... const fileResult = readFile(); if(fileResult.isError) { console.log(fileResult.error); return; } const file = fileResult.value; ... Libraries like ts-results, practica, and true-myth add support for result objects, but there are also several articles like this one or this one with implementations as simple as declaring a type.\nRemarks # Result objects are just a code style that, in my opinion, makes the code easier to read (top to bottom), and help to create good habits of handling error cases before continuing the happy path. They don\u0026rsquo;t fully replace the idea of exceptions, even in Golang and Rust there are panic macros that will force the program to halt and can be used for non-recoverable errors.\nRecoverable errors on the other hand can benefit from this approach but in cases where the underlying code might throw an exception (i.e. using a library or legacy code - excluding Golang and Rust here), the try-catch block should still exist, even if only in the boundaries of our system, to make sure the application doesn\u0026rsquo;t enter a corrupted state.\nHaving that said, even for languages that don\u0026rsquo;t natively support them, I believe result objects are a good thing to have, especially in complex code bases. Not only do I find that functions become easier to read (top to bottom, no jumps), but also easier to write (dealing with one context at a time). Exceptions can still happen, so try-catch blocks should also be used whenever we don\u0026rsquo;t want a failure to interrupt the program, and also at system\u0026rsquo;s boundaries to ensure graceful failures otherwise.\n","date":"April 9, 2023","externalUrl":null,"permalink":"/blog/error-handling-with-results-in-typescript/","section":"","summary":"What are the main takeaways of error handling with result objects and how does it compare to the try-catch idiom? Looking into different approaches and some examples.","title":"Error handling with results in Typescript","type":"blog"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/exceptions/","section":"Tags","summary":"","title":"Exceptions","type":"tags"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/golang/","section":"Tags","summary":"","title":"Golang","type":"tags"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/java/","section":"Tags","summary":"","title":"Java","type":"tags"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/result-object/","section":"Tags","summary":"","title":"Result Object","type":"tags"},{"content":"","date":"April 9, 2023","externalUrl":null,"permalink":"/tags/typescript/","section":"Tags","summary":"","title":"Typescript","type":"tags"},{"content":"","date":"March 12, 2023","externalUrl":null,"permalink":"/tags/arrow-functions/","section":"Tags","summary":"","title":"Arrow Functions","type":"tags"},{"content":"In most situations throughout my career, I\u0026rsquo;ve implemented business logic in the simplest possible way, using if conditions and while loops. 
However, in specific cases, where the business logic is too complex, or changes frequently, it helps to build a flexible system that is easy to read and maintain. A very common solution to this kind of problem is to implement a rules engine. Let\u0026rsquo;s explore how to build one using Typescript.\nExample Problem # Consider the following example problem: We\u0026rsquo;re running a pizza delivery service for the characters of Super Mario. They don\u0026rsquo;t choose their pizzas, however, there are some limitations to what each of them prefers.\n// Mario doesn\u0026#39;t like pizza with salami or olives // Princess Peach only likes pizza with salami or peppers but not both // Yoshi only likes pizza from domino\u0026#39;s with pineapple and banana // Luigi only likes pizza with either mushrooms, peppers or fluffy dough One way we could represent these preferences is to use some basic functions for checking the ingredients, dough, and food chain, along with logic operators to combine them. Each character would then be linked to one or more rules using a tuple.\nconst rules = [ [\u0026#34;Mario\u0026#34;, None(has(\u0026#34;salami\u0026#34;), has(\u0026#34;olives\u0026#34;))], [\u0026#34;Yoshi\u0026#34;, All(has(\u0026#34;pineapple\u0026#34;), has(\u0026#34;banana\u0026#34;), source(\u0026#34;dominos\u0026#34;))], [\u0026#34;Luigi\u0026#34;, Some(has(\u0026#34;mushrooms\u0026#34;), has(\u0026#34;peppers\u0026#34;), dough(\u0026#34;fluffy\u0026#34;))], [\u0026#34;Princess Peach\u0026#34;, One(has(\u0026#34;salami\u0026#34;), has(\u0026#34;peppers\u0026#34;))], ]; The idea here is to iterate the rules array, and for each entry (a tuple) execute the rule and mark the character as eligible or not for a given pizza. The functions has, source, and dough are the building blocks for the rules, and they determine if the pizza (the input) matches the condition.\nThe logical operators are reducer functions that combine the result of multiple rules into a single one:\nNone returns true if none of the inner rules returns true; All returns true if all the inner rules return true; Some returns true if at least one of the inner rules returns true; One returns true if exactly one of the inner rules returns true; And there could be more, but just these 4 operators should be enough to build pretty complex rule validation logic.\nImplementing the rules engine # Now let\u0026rsquo;s see how this could easily be implemented. We\u0026rsquo;ll start by defining what a rule is, and to make it suitable for any input type, we can use generics.\ntype Rule\u0026lt;T\u0026gt; = (input: T) =\u0026gt; boolean; Which means a function that receives any kind of input T (the pizza in our case) and returns true or false. 
With only this line as our engine, for now, we can already implement the types and building blocks for the pizza use case.\n// types type Ingredient = | \u0026#34;salami\u0026#34; | \u0026#34;olives\u0026#34; | \u0026#34;peppers\u0026#34; | \u0026#34;mushrooms\u0026#34; | \u0026#34;pineapple\u0026#34; | \u0026#34;banana\u0026#34;; type Dough = \u0026#34;thin\u0026#34; | \u0026#34;fluffy\u0026#34;; type Source = \u0026#34;dominos\u0026#34; | \u0026#34;pizzahut\u0026#34;; type Pizza = { ingredients: Ingredient[]; dough: Dough; source: Source; }; // building blocks const has = (ingredient: Ingredient) =\u0026gt; (pizza: Pizza) =\u0026gt; pizza.ingredients.includes(ingredient); const dough = (dough: Dough) =\u0026gt; (pizza: Pizza) =\u0026gt; pizza.dough === dough; const source = (source: Source) =\u0026gt; (pizza: Pizza) =\u0026gt; pizza.source === source; We can have these building blocks as complex as we need them to be, the important thing here is to make sure that they hide the complexity of validating a specific rule, and that what they do is perceivable by just looking at the function call. When we call, has('salami'), for example, what is expected is that the return value is a function (Rule\u0026lt;Pizza\u0026gt;) that receives a pizza as the input and returns true if the pizza contains salami or false otherwise.\nTo glue the building blocks together we need another bit of framework, the logical operators. There\u0026rsquo;s not much here to do, thanks to the already existing Array class methods. We just need to build some wrappers that make more sense in this context and are even easier to use.\nconst All = \u0026lt;T\u0026gt;(...rules: Rule\u0026lt;T\u0026gt;[]) =\u0026gt; (input: T) =\u0026gt; rules.every((r) =\u0026gt; r(input)); const Some = \u0026lt;T\u0026gt;(...rules: Rule\u0026lt;T\u0026gt;[]) =\u0026gt; (input: T) =\u0026gt; rules.some((r) =\u0026gt; r(input)); const One = \u0026lt;T\u0026gt;(...rules: Rule\u0026lt;T\u0026gt;[]) =\u0026gt; (input: T) =\u0026gt; rules.filter((r) =\u0026gt; r(input)).length === 1; const None = \u0026lt;T\u0026gt;(...rules: Rule\u0026lt;T\u0026gt;[]) =\u0026gt; (input: T) =\u0026gt; rules.filter((r) =\u0026gt; r(input)).length === 0; A good example that functional syntax and code readability are not the best friends, so let\u0026rsquo;s dissect one of them:\n\u0026lt;T\u0026gt;() =\u0026gt; A generic arrow function of type T \u0026lt;T\u0026gt;(...rules: Rule\u0026lt;T\u0026gt;) =\u0026gt; A generic arrow function that receives a variable number of arguments of type Rule\u0026lt;T\u0026gt; \u0026lt;T\u0026gt;(...rules: Rule\u0026lt;T\u0026gt;) =\u0026gt; (input: T) =\u0026gt; rules.some((r) =\u0026gt; r(input)); A generic arrow function that receives a variable number of rules and returns another function that receives an input of generic type T and returns true if at least one (Array.some) of the rules passed to the enclosing function also returns true, given the input T. 
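One nice property of this design, worth noting before wiring everything together, is that the operators themselves return a Rule\u0026lt;T\u0026gt;, so they can be nested to express more elaborate preferences. As a sketch (this picky character is hypothetical and not part of the original rule set), someone who only wants thin-crust pizza with peppers but neither salami nor pineapple could be written as:\nconst pickyRule: Rule\u0026lt;Pizza\u0026gt; = All(dough(\u0026#34;thin\u0026#34;), has(\u0026#34;peppers\u0026#34;), None(has(\u0026#34;salami\u0026#34;), has(\u0026#34;pineapple\u0026#34;)));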
Finally, we need something to execute the rules for a given pizza, and because we\u0026rsquo;re keeping it as small as possible, we\u0026rsquo;ll again make use of generics and arrow functions.\nconst ruleRunner = \u0026lt;T, R\u0026gt;(rules: [R, Rule\u0026lt;T\u0026gt;][]) =\u0026gt; (input: T) =\u0026gt; rules.filter(([_, rule]) =\u0026gt; rule(input)).map(([output, _]) =\u0026gt; output); Here we\u0026rsquo;re declaring a generic arrow function of types \u0026lt;T, R\u0026gt; (the pizza and the character) that receives an array of tuples of type [R, Rule\u0026lt;T\u0026gt;] (the character and the pizza preference rules), and returns another function that receives an input of type T (pizza), runs all the rules against it and returns the objects of type R (character) belonging to the tuples with rules that trigger.\nRunning the example case # With everything together, we can create a new rule runner with the rules we imagined above, and execute it against any set of pizzas.\nconst getEligibleForPizza = ruleRunner(rules); const pizzas: Pizza[] = [ { ingredients: [\u0026#34;salami\u0026#34;], dough: \u0026#34;thin\u0026#34;, source: \u0026#34;dominos\u0026#34; }, { ingredients: [\u0026#34;salami\u0026#34;], dough: \u0026#34;fluffy\u0026#34;, source: \u0026#34;dominos\u0026#34; }, { ingredients: [\u0026#34;mushrooms\u0026#34;], dough: \u0026#34;thin\u0026#34;, source: \u0026#34;pizzahut\u0026#34; }, { ingredients: [\u0026#34;pineapple\u0026#34;], dough: \u0026#34;thin\u0026#34;, source: \u0026#34;pizzahut\u0026#34; }, { ingredients: [\u0026#34;pineapple\u0026#34;, \u0026#34;banana\u0026#34;], dough: \u0026#34;fluffy\u0026#34;, source: \u0026#34;dominos\u0026#34; }, { ingredients: [\u0026#34;salami\u0026#34;, \u0026#34;peppers\u0026#34;], dough: \u0026#34;fluffy\u0026#34;, source: \u0026#34;pizzahut\u0026#34; }, ]; for (const pizza of pizzas) { console.log(getEligibleForPizza(pizza)); } // [ \u0026#39;Princess Peach\u0026#39; ] // [ \u0026#39;Luigi\u0026#39;, \u0026#39;Princess Peach\u0026#39; ] // [ \u0026#39;Mario\u0026#39;, \u0026#39;Luigi\u0026#39; ] // [ \u0026#39;Mario\u0026#39; ] // [ \u0026#39;Mario\u0026#39;, \u0026#39;Yoshi\u0026#39;, \u0026#39;Luigi\u0026#39; ] // [ \u0026#39;Luigi\u0026#39; ] Check out the full code for this blog post. All feedback welcome.\nFinal Remarks # Although the usage looks nice, the readability of the engine completely sucks. Coming from a Java background, I\u0026rsquo;ve always struggled a bit with functional syntax and the abuses that js/ts allow in the name of simplicity. Combine it with generics, and it may or may not look like pure spaghetti. That\u0026rsquo;s for you to decide; I reckon that this is purely a personal opinion and many folks think the opposite.\nThe other consideration is whether something like this would be needed or not. On several occasions, I\u0026rsquo;ve had other engineers advising me to keep it simple, stupid, especially if the project is in the early stages, or under tight delivery schedules. I\u0026rsquo;ve also given the same advice to teammates in similar circumstances. But I like to overengineer stuff if I can afford it. Not because I don\u0026rsquo;t believe in simple solutions (which I do), but because it\u0026rsquo;s an enriching exercise. It forces you to think more about the problem domain and the solution you\u0026rsquo;re building; it also enables you to experiment more, and in the next iterations, you\u0026rsquo;ll be ready to make more appropriate decisions. 
It comes at the expense of time, so you should probably avoid overengineering if you\u0026rsquo;re tasked with a time-sensitive deliverable, and in some cases, the problem might not justify the solution, so also be ready to drop it if, in the end, it doesn\u0026rsquo;t make sense. The knowledge and experience, however, don\u0026rsquo;t go anywhere, and for me, that\u0026rsquo;s already profitable.\n","date":"March 12, 2023","externalUrl":null,"permalink":"/blog/simple-rules-engine-ts/","section":"","summary":"Building a lightweight, general purpose and easy to use rules engine under 30 lines of Typescript. We\u0026rsquo;ll start with an example use case and build the engine around it.","title":"Building a simple rules engine in Typescript","type":"blog"},{"content":"","date":"March 12, 2023","externalUrl":null,"permalink":"/tags/code-readability/","section":"Tags","summary":"","title":"Code Readability","type":"tags"},{"content":"","date":"March 12, 2023","externalUrl":null,"permalink":"/tags/generics/","section":"Tags","summary":"","title":"Generics","type":"tags"},{"content":"","date":"March 12, 2023","externalUrl":null,"permalink":"/tags/overengineering/","section":"Tags","summary":"","title":"Overengineering","type":"tags"},{"content":"","date":"March 12, 2023","externalUrl":null,"permalink":"/tags/rules-engine/","section":"Tags","summary":"","title":"Rules Engine","type":"tags"},{"content":"","date":"March 12, 2023","externalUrl":null,"permalink":"/tags/super-mario/","section":"Tags","summary":"","title":"Super Mario","type":"tags"},{"content":"Twitter is among the biggest social media platforms ever made. Since it first went live in 2006, It went from being a strange sms-like web service with no apparent utility to one of the most popular communication tools used for pretty much everything from news and entertainment to politics and marketing. Like other internet giants, Twitter is also in the spotlight for controversy around centralization, privacy, and freedom of speech, however, its dominance in the microblogging space only seems to have increased. Several web3 and decentralized app builders are now taking over the mission to tame this 350 million-user beast. A bold stance, but to what extent is it possible? Do people even want it?\nProblems that need solving # All centralized platforms share the same set of problems including data breaches, censorship, privacy violations, server outages, and others. Twitter is no different and since it became one of the mainstream social media apps, there have been several controversies regarding such problems:\nGovernment Surveillance: NSA requests go back to the Snowden era, or even before. It’s no surprise that with such a large amount of data on conversations from all over the world, government agencies sought to add Twitter to their surveillance toolkit; Hacks and Data Breaches: Account hijacking is a recurring theme, and while users are the biggest attack surface, honeypots of data like Twitter and Facebook are always under fire. They\u0026rsquo;re obviously highly secure, but the problem is that the attackers only need to succeed once to leak the entire user base; Censorship: Because they can do it, Twitter has on several occasions ‘silenced’ users by suspending or banning accounts that publish content considered harmful. 
Some episodes include the Donald Trump and @elonjet bans; Last year was one of Twitter’s most controversial years, ending with the platform being sold to Elon Musk, who believes the product and its user base are worth $44 billion. While the majority is patiently waiting to be amused by his next move, the first months were not bright, and decentralized alternatives gained new momentum.\nNOSTR # Notes and Other Stuff Transmitted by Relays is an open protocol that can be used for creating a decentralized, censorship-resistant social network. The idea is very simple: Each user has their own public/private key pair, signs messages, and broadcasts them to multiple relays. Relays are just dumb servers that receive and forward messages from users. They don’t talk to each other and don’t need to be trusted since the signatures are created and verified client-side.\nThe protocol is still in its early days and has very few contributors but was interesting enough to have caught Jack Dorsey’s attention. One of the most promising nostr clients is Damus, an iOS app launched in January that looks and feels like Twitter, despite some usability issues like users being identified by their public keys.\nMastodon # A social media app that also bears a striking resemblance to Twitter, but with a unique twist. The platform boasts federation capabilities thanks to the ActivityPub protocol. This means that instead of centralizing data in one location, it is distributed among multiple server operators. Users can choose which Mastodon server to use and trust their data with, providing them with more options regarding privacy and censorship resistance. Thanks to the federation, users from different Mastodon servers can communicate with each other, creating a shared communication space akin to that of Twitter.\nMastodon experienced a surge in popularity in 2022 due to the controversies surrounding Twitter. Despite this, it\u0026rsquo;s not uncommon for Mastodon users to return to Twitter to advertise their Mastodon accounts and engage with their followers. Mastodon is a nonprofit organization that fully supports open-source, making it a unique and promising alternative to Twitter.\nBluesky / AT # Jack Dorsey has been advocating for Twitter to become a decentralized internet standard rather than a social network, citing the platform\u0026rsquo;s centralization problems. He established Bluesky, a team tasked with working on this vision, and they came up with the Authenticated Transfer Protocol. The Bluesky social network, though still in private beta, will likely be its first implementation.\nThe protocol aims to create large-scale (Twitter-like) distributed applications, and while it is a lot more complex than this paragraph can convey, we can strip it down to a combination of the simplest ideas from Mastodon and NOSTR. Users sign and publish their social behaviors (posts, comments, likes, etc.) on signed data repositories, and are free to choose the server operator to use or host their own. Repositories can federate with each other, and users from different repositories can interact thanks to the lexicon, which includes URL schemas that identify the repository for almost every social interaction. Unlike NOSTR, in ATP, signature verification is done independently by the DIDs, the protocol’s identity servers that contain the registry of user certificates.\nLens # A web3 social content sharing and storage protocol that provides a backbone for developers to build decentralized apps on top of. 
Each user profile is an NFT, owned by a wallet that is authorized to publish content and execute other social behaviors like comments, follows, and likes.\nAll data is permanently recorded on the Polygon blockchain to ensure that apps have no control over it. Although everything is public and on-chain, Lens profiles are not interoperable and each app needs to have its own deployment of the Lens contracts. The protocol is built and maintained by the team behind AAVE, with Stani Kulechov as its main ambassador.\nMinds # The whitepaper describes Minds as a decentralized, freedom-of-speech and privacy-oriented social network, but it is not clear where the actual content is stored. The differentiating factor is the MINDS token, an ERC-20 utility token issued on the Ethereum blockchain that is used to create an internal economy where, as users, we can spend MINDS to post content and boost other users’ posts (like), while also being rewarded with the token by other users who boost our posts.\nChallenges ahead # While these decentralized social media platforms have exciting features and potential, they face significant challenges in competing against established giants in the industry. Moreover, the wider public is often indifferent to their main motivation of privacy and censorship resistance, and may not be directly affected by such problems. To gain traction, these platforms need to approach and grow their audience differently. Additionally, these products are still very technical and difficult to use compared to industry standards like Twitter, Facebook, and Instagram. As most of the technology is in the early stages of development, it will take many cycles and experiments to reach the same level of maturity.\nWhat sets them apart # Making a mark in the space is certainly challenging, not only due to the scalability requirements but also because of how dominant the industry giants are. Social networks, however, are one good example where the problems of centralization become visible quite fast, especially when governments and politics are in the mix. This is the differentiating factor for all of the above projects, which were built from the ground up with decentralization in mind.\nPrivacy and censorship resistance: Probably not something that could be found on Twitter or Facebook’s to-do list 15 years ago, and now, with an enormous user base and potential for marketing/propaganda, and certainly governments chipping in, it’s more likely that they will try to shift users’ attention to something else rather than fix it; Full open-source: While no one can contest that all Internet giants are among the biggest open-source contributors, they still fall behind in terms of transparency when compared to platforms and protocols that are open from the get-go, and in some cases even before they were an actual product; Protocol first: An open ecosystem where any developer can build a different app that interoperates with the others, like email/SMTP, is the groundwork for a diverse ecosystem with fair competition, or at least freedom of choice; Open to participation and DIY: A consequence of the previous two points is that anyone can participate in the network by hosting and federating their own nodes. Certainly not a feature for the average user, but an important step against centralization. 
Not only is the data scattered across multiple node operators, but users can also choose which one to trust; What happens next # With the web yet again reinventing itself, decentralized everything is a hot topic and one that is likely to shape the future of the internet. We can’t tell if this generation of Twitter killers will succeed, and even if we get to have an internet standard for decentralized social networking, it’s not guaranteed to kill centralization. SMTP is often used as an example in this narrative, yet, with hundreds of available email service providers, Google and Microsoft host the vast majority of email accounts in the world.\nOne thing we can be sure of is that decentralized social networking is now more than just a few open-source projects, and people are talking about it, partly fuelled by centralization problems in recent years like the Cambridge Analytica scandal, or the Twitter bans. The opportunity is there for the taking, but the opponent has a $44 billion weapon and will do everything it can to stay dominant. It’s a long shot, but we’ve already seen bitcoin put the entire financial industry on edge without a roadmap or a CEO, so it’s fair to say that it can be done.\nIf nothing else, the protocols are already an amazing contribution and proof that decentralized social networks are possible. We can look at them as the groundwork for trustless communications over the internet, free of censorship, and open to everyone.\n","date":"February 26, 2023","externalUrl":null,"permalink":"/blog/a-new-breed-of-twitter-killers/","section":"","summary":"Brief analysis of the most popular decentralized social network projects. How they work, what the biggest pros and cons are, and what they offer that differentiates them from industry giants.","title":"A new breed of twitter killers","type":"blog"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/at-protocol/","section":"Tags","summary":"","title":"AT Protocol","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/bluesky/","section":"Tags","summary":"","title":"Bluesky","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/lens/","section":"Tags","summary":"","title":"Lens","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/mastodon/","section":"Tags","summary":"","title":"Mastodon","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/minds/","section":"Tags","summary":"","title":"Minds","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/nostr/","section":"Tags","summary":"","title":"Nostr","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/social-networks/","section":"Tags","summary":"","title":"Social Networks","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/twitter/","section":"Tags","summary":"","title":"Twitter","type":"tags"},{"content":"","date":"February 26, 2023","externalUrl":null,"permalink":"/tags/web3/","section":"Tags","summary":"","title":"Web3","type":"tags"},{"content":"2021 ended with a series of highs in the crypto space, with both Bitcoin and Ethereum seeing major bull runs and NFTs making their way into the mainstream. However, this momentum didn’t carry far into 2022. 
Prices came tumbling down, putting an end to the speculative frenzy, hacks and bankruptcies took center stage every couple of weeks, and crypto “was dead” more than a thousand times. It was also a year of experimentation for a lot of companies in the space trying to diversify their offering into web3 and NFTs, as well as consolidation for major blockchains, which are mostly working to scale and secure the infrastructure in preparation for the future. It was a lot easier to keep up when CryptoKitties were the only cool thing around 😂. Keep reading for a small digest of the most impactful events of the past year.\nPrices down # The market capitalization of crypto assets declined 62% from $2.2 trillion to about $835 billion. Other industries and asset classes also had a tough year as the world, still recovering from the downturn of the COVID-19 pandemic, was suddenly hit by the Russia-Ukraine conflict, which paved the way for a massive energy crisis and subsequent high inflation and interest rates.\nSource: CoinDesk Research NFTs also took a major hit, peaking at $35 billion in March to end the year at around $20 billion. A big loss for the majority of investors, but still an increase in total market cap when compared to early 2021.\nSource: NFTGO Hacks and Collapses # Ronin Hack # The Ronin network, which supports the popular Axie Infinity blockchain gaming platform, became the victim of a cryptocurrency hack, one of the largest ever, which resulted in the theft of around US$625 million worth of Ethereum and the USDC stablecoin.\nTerra/Luna Crash # TerraUSD or UST (a stablecoin on the Terra network) de-pegged from the U.S. dollar following the withdrawal of $2 billion worth of UST from the Anchor Protocol, a decentralized money market on Terra. This caused a lot of panic, leading people to sell their UST and thus increasing the supply of LUNA due to the algorithmic nature of this stablecoin. Shortly after the crash, exchanges started to delist the LUNA/UST pair, and LUNA became worthless.\nUltimately, this incident impacted several companies, among them crypto lenders Celsius, Voyager, and the hedge fund Three Arrows Capital (3AC), which ended up filing for bankruptcy alongside their affiliates.\nDo Kwon, the Terra network’s founder, fled the country and is nowhere to be found, while Sam Bankman-Fried (SBF) took the opportunity to bolster his position as a major industry player, bidding and offering lines of credit to distressed assets from the Terra-LUNA collapse.\nFTX Collapse # Throughout November, FTX, one of the largest cryptocurrency exchanges in the world, was burned to the ground. The spark was a CoinDesk report revealing that Alameda Research, a quant trading firm run by SBF, held a $5 billion position in FTT, the native token of FTX, and had been borrowing millions of dollars with FTT as collateral. 
These documents raised concerns in the industry about the undisclosed leverage and solvency of SBF’s companies.\nBinance’s CEO Changpeng Zhao (CZ) tweeted that Binance would be unwinding its position on FTT, which led to a panic sell-off across the entire market, forcing FTX to pause withdrawals shortly after, causing even more chaos in the crypto market.\nAfter Coinbase and OKX turned down SBF’s requests for a bailout, he turned to CZ for help, but the acquisition fell apart after Binance’s due diligence revealed that “there was nothing they could do to help”.\nAfter filing for bankruptcy and stepping down as FTX’s CEO, SBF remained in the Bahamas, where he had been running the business with a couple of former co-workers and colleagues from MIT. He was later arrested by the Bahamian authorities but was released on a $250 million bond and awaits trial in the US.\nMajor Brands experimenting with NFTs # Whether it is for business diversification or just fear of missing out, big companies started following each other and expanding their offering to NFTs to increase their customer base. Some of the big names included Starbucks, Instagram, Nike, Reddit, and Disney, among others. The top projects are building exclusive content and experiences on almost every type of medium, all of them seeking to diversify the offering and increase reach, pretty much like when everyone started building a social media strategy 10-15 years ago. Some notable events during the year were Budweiser’s Super Bowl ad, which featured a Noun NFT, and Reddit’s collectible avatars, which saw widespread adoption across the Reddit community. Polygon was a big winner by capitalizing on Ethereum’s high gas fees and the competitors’ lack of web brand experience, and was able to onboard most of these brands.\nBored Apes on Fire # The team behind Bored Ape Yacht Club (BAYC), Yuga Labs, is one of the most prominent companies in the NFT space and was all over the place in 2022. They made a big splash throughout the year with a series of events:\nAcquisition of CryptoPunks, one of the earliest and most expensive NFT collections (whose price, unlike everything else, wasn’t affected this year); A $450 million funding round led by a16z at a $4 billion valuation; Launch of ApeCoin, an Ethereum-based token built to support the evolution of gaming, art, and entertainment; Launch of Otherside, a metaverse game. Almost all of their products revolve around the Bored Ape NFTs, and due to their high value, all of those launches captured the attention of most people in the community, ultimately resulting in rises and falls of the BAYC NFT price.\nTech Improvements # Ethereum Merge # Ethereum recently completed The Merge, a major upgrade that merged the original Mainnet with a separate proof-of-stake blockchain called the Beacon Chain, creating a single chain that uses proof-of-stake. The merge has reduced Ethereum\u0026rsquo;s energy consumption by 99.95%.\nThe upgrade had been a milestone on the roadmap for a long time and took several years to build. It was heavily tested on the three testnets (Ropsten, Goerli, and Sepolia), and there was also a bug bounty during the entire testing phase. The actual mainnet merge was rolled out without incidents, and while it had no impact on the price and re-ignited discussions about centralization, it was an amazing demonstration of how to prepare and coordinate critical software updates.\nBitcoin Taproot # The Taproot update is Bitcoin’s most significant upgrade since Segregated Witness (SegWit) was activated in 2017. 
It was designed to enhance the network’s privacy and efficiency on a larger scale by making transactions cheaper, faster, and easier to deploy. The Taproot upgrade is also expected to promote the use of smart contracts on the Bitcoin network.\nTaro Alpha Release # Lightning Network infrastructure firm Lightning Labs has released a test version of the Taro daemon, a new piece of software that will allow Bitcoin users to create, send and receive assets on the Bitcoin blockchain.\nTaproot Asset Representation Overlay (Taro) is a protocol proposed by the same company that enables the issuing of digital assets on the Bitcoin blockchain. These assets can be fungible or non-fungible, the equivalent of Ethereum’s ERC-20 and ERC-721, and could be sent over the Bitcoin network through on-chain transactions or over Lightning when deposited into a channel.\nLeveraging the Taproot upgrade, Taro positions Bitcoin to compete with Ethereum as the base layer for stablecoins, NFTs, DeFi, and other kinds of decentralized apps.\nPolygon zkEVM testnets # Zero Knowledge (ZK) proofs are mechanisms that allow one party to prove knowledge of some information to another party without revealing the information itself. One of their first applications, in the Zcash blockchain, was hiding transaction details, but recently ZK proofs have been gaining traction in the Ethereum scaling effort.\nIn the making for over 3 years, Polygon zkEVM includes the first EVM-compatible implementation using ZK proofs, which is expected to vastly improve Ethereum’s transaction costs and speeds by allowing nodes to validate transactions by just looking at the ZK validity proof instead of replaying the smart contract execution or storing the data. It can also help to bridge the gap between off-chain data and real-world assets in crypto.\nFinal notes # 2022 was a tumultuous year for the crypto market, with several events that will have a lasting impact on the industry. One of the key takeaways from the year was the fragility of regulation and law enforcement in the sector. Despite increased efforts to combat crypto crime, scams, hacks, and loopholes in regulation, this year registered a record high of $20 billion in losses from hacks and scams. Additionally, crypto assets were hit harder than other asset classes, revealing their high volatility. NFTs, in particular, experienced a significant downturn in prices, and the community is still searching for the perfect use case, while many companies are experimenting with royalties, exclusive content, and experiences in music, art, television, and gaming. However, it wasn’t all bad news, as funding for crypto companies grew to around $30 billion, up from $23 billion in 2021. There was also notable progress in terms of technology, with the two main blockchains becoming more mature and future-proof. 
As we enter 2023, the speculative bubble appears to be shrinking, but this is still a trillion-dollar industry, which is why hopes and expectations for crypto’s future remain high.\n","date":"January 16, 2023","externalUrl":null,"permalink":"/blog/2022-crypto-year-review/","section":"","summary":"A digest of 2022\u0026rsquo;s most impactful events in the crypto and web3 space, including hacks, fraud, bankrupcy, and the latest tech developments in Bitcoin and Ethereum.","title":"2022 crypto year in review","type":"blog"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/tags/bored-ape-yacht-club/","section":"Tags","summary":"","title":"Bored Ape Yacht Club","type":"tags"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/tags/ethereum/","section":"Tags","summary":"","title":"Ethereum","type":"tags"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/tags/ftx/","section":"Tags","summary":"","title":"FTX","type":"tags"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/tags/luna/","section":"Tags","summary":"","title":"Luna","type":"tags"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/tags/nft/","section":"Tags","summary":"","title":"Nft","type":"tags"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/categories/nft/","section":"Categories","summary":"","title":"Nft","type":"categories"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/tags/polygon/","section":"Tags","summary":"","title":"Polygon","type":"tags"},{"content":"","date":"January 16, 2023","externalUrl":null,"permalink":"/tags/yuga-labs/","section":"Tags","summary":"","title":"Yuga Labs","type":"tags"},{"content":" Apes and Kitten in the Music Industry Technology has disrupted the music industry several times in history, changing the way we consume it, how it’s distributed and even how it’s made. With the introduction of radio, vinyl records and then portable audio players, our relationship with music has become part of everyday life, much more intimate and diverse, creating a multi billion dollar industry that’s constantly reinventing itself to extract the maximum potential from artists and meet our demands as consumers.\nIn the past couple of years there’s been a great debate around the applicability of non-fungible tokens (NFTs), and the music industry is currently the stage for lots of experimentation with popular artists like Snoop Dogg, Steve Aoki and Mike Shinoda making bold moves in the space. In this article we’ll reflect about the good, the bad, and the ugly facts around NFTs in the music industry.\nStatus Quo # For decades, the music industry was controlled by publishers and studios which would act as gatekeepers to what got recorded and distributed. During that period, recording music was an expensive process that required upfront investment. On the other end, the marketing and distribution machine ensured that whatever got recorded was sold. Artists that were not able to get publishing deals had nearly zero chances of entering the market.\nThat all changed with technology, recording is now so cheap that you can do it on almost any laptop, distributing to the major streaming platforms is cheap and accessible, and marketing can be achieved through social networks (TikTok, for example is one of the most common marketing vehicles). 
Even with all these shifts, the music industry is still largely controlled by the major publishers because of the huge machine behind them that markets, promotes, and distributes new music, concerts, social accounts, etc.\nSource: International Federation of the Phonographic Industry Streaming has been the dominant distribution format for some years now. In 2021, it accounted for around 65% of global recorded music revenues, with Spotify alone reaching almost half a billion users and almost 200 million paying subscribers.\nFor the large majority of content, digital platforms negotiate streaming rights with major record labels. Sony BMG, Warner, and Universal together hold the rights for around 67.5% of the global recorded music market, which shows us how much power and influence these companies exert in the industry. Artists are at the end of the value chain creating content, performing live, and securing the last slice of the pizza. While this is the status quo for most, technology improvements have been consistently enabling artists to explore DIY alternative ways to build a music career and make a decent living from their work. In recent years, NFTs have captured the attention of many, who ventured down the path of independent music making.\nBut taking a step back, what actually is an NFT? Originally proposed and built as an Ethereum smart contract, non-fungible tokens are a specification for representing ownership and trading of unique assets. Technology-wise, the NFT itself is a simple collection of digital documents that can be stored and consumed using normal computers and smartphones. What’s more interesting about them is the fact that they’re built on top of distributed ledgers, which makes them very hard/expensive to tamper with, allowing for a trustless and transparent economy where users are able to trade directly with each other, and the whole transaction record stays public for anyone to audit and verify the ownership of any given NFT.\nThe Good # Royalties # There is a large number of hops in the value chain between artists and their fans. The machine that makes it possible for commercial music to play on our earphones is a complex one, with a lot of different entities profiting in between. Streaming services have been heavily criticized for low payouts, and record labels were always known for monopolizing rights with unfair contracts that force artists to give up ownership of their songs to the label.\nSource: Data Observatory What NFTs and the crypto asset economy are creating is the possibility of bypassing some of those hops, naturally changing the way business is done. While most of what we see is still experimentation, maintaining rights ownership is what seems to be the main discussion topic.\nOn top of selling their content almost directly to fans, NFT transactions make it possible to automate royalty enforcement mechanisms for artists to earn creator shares on the secondary market. NFT platforms like OpenSea, Rarible, Sound.xyz, and Catalog already include royalties as part of their functionality, but the biggest game changer is that these royalties could actually be enforced by the blockchain protocol itself, further ensuring that artists receive a fee when their music is re-sold. On top of this, licensing also becomes more efficient as such a public registry can help avoid disputes and overlapping royalty claims.\nAlternative Ways of Monetization # NFTs make it possible to skip the middleman and enable a different business model where artists can sell content by themselves. 
There’s not yet a standard recipe for “what” is actually sold with the NFT, which is why artists are experimenting with different combinations. Electronic dance music star 3LAU was among the first to venture into the NFT space and made a small fortune with an NFT sale to mark the third anniversary of his album Ultraviolet. Content included several exclusive items like a custom song, access to never-before-heard music online, custom art based on the music, and new versions of Ultraviolet’s 11 original songs. Another popular NFT sale was carried out by Grimes, who launched a collection of digital art accompanied by exclusive music. More conservative approaches are also showing up, and in this case, Kings of Leon were one of the first to add NFTs to the list of already available distribution channels like Spotify and iTunes, along with some special perks for NFT holders.\nAnother interesting thing is again rights, but from the transferability perspective. With no record label contract in place, it is up to the artists to decide whether to protect their contents by keeping the rights to themselves or give them away to NFT holders. There was a popular discussion on Twitter between Mike Shinoda and one of his fans, where the artist was very clear about non-commercial use when asked if Ziggurat could be reproduced in social media posts. On the other hand, should artists choose to embed copyrights in the NFT, it would open the rights market to the general public looking out for potential investments and portfolio diversification, making it more valuable in the short term.\nScarcity and New Ways to Interact With the Fans # The element of scarcity is known to generate lucrative profits on pretty much everything that can be sold. Paintings and limited product editions are common examples of scarcity augmenting the price of a given asset. The music industry is no stranger to this marketing technique, and the unique nature of NFTs makes them the perfect vehicle for artists to appeal to scarcity in new ways, since they are the ones controlling the number of copies in circulation. One example would be Shawn Mendes launching his NFT collection, which includes rare digital versions of his signature accessories such as his Fender guitar and gold ring. While such assets can easily be copied and illegally reproduced, the proof of ownership made possible by NFTs also acts as a guarantee of authenticity, and can be used to build experiences like backstage access, digital collectibles and other exclusive assets for the most dedicated fans. This is what the platform aok1verse (Steve Aoki’s take on NFTs and the metaverse) tries to create by granting exclusive perks to the fans like access to merchandise, real-life events, early access to unreleased music and even artistic collaboration.\nThe Bad # NFTs and blockchain promise a fairer and more equal future in many industries, but at the moment the underlying technology is still pretty immature, and we don’t know on what terms it will stick around. If something changes and all of a sudden Ethereum stops being the dominant NFT platform, users might move to a different technology, and it remains unclear what happens to all of those who already invested in NFTs.\nThere’s also a lack of standardization and interoperability, with new NFT specifications and blockchains constantly coming in. 
As the ecosystem continues to evolve, we’re likely to see different, incompatible platforms coming to life, potentially leading to an excessive number of distribution channels for artists to worry about.\nThe usability is also not there yet, and most of these technologies are not accessible to everyone, leaving less technical users out, while at the same time creating an opportunity for new middleman companies that, much like producers and distributors, will profit from the artist’s success.\nOn top of all of this there is the crypto asset market, which is in its early days and is frequently the center stage for price speculation, volatility and fraud. Most of the above problems are related to a general lack of regulation of the market itself, which naturally has an impact on how the NFT asset class is perceived.\nThe Ugly # Even with all the above changes, the music industry at its core still has the biggest piece of the pie. Publishers and distributors are still the gatekeepers of success for most artists. Even the ones that found success through social networks still end up getting signed by a major publisher in order to scale their brand and get their music to as many listeners as possible.\nSome of the previously discussed experiments with NFTs are being driven by players in the music industry, not by newcomers to the space. Thus, we are yet to understand if NFTs will be just another tool in the publishers’ belt to maximize profits, or something else that can be leveraged at scale by new, unsigned artists.\nFinally, most attempts and experiments seem like marketing stunts to leverage the hype of crypto. This has the potential to undermine the future value of NFTs for music both technically and economically.\nWrap Up # The whole crypto asset industry is still in its early days and essentially lacks definition and standardization, which can be both good and bad for its role in the music industry. On one hand it created a space for the brave, defiant ones to experiment with new approaches to their careers, but on the other the infrastructure is not built yet, which restricts NFTs to tech-savvy users and creates space for new middlemen. We should also be mindful that the current price speculation and fraud with NFTs and crypto assets in general doesn’t allow a clear forecast of what the monetary value of music NFTs will be, nor whether it will be possible to make a living out of them.\nWhat stays on the record for now is that by democratizing the commercial trades of digital assets, NFTs have definitely captured some attention in the music industry, helping artists build a more independent career and sell their music directly to the fans. Major companies in the industry have also noticed this and will eventually start chipping in as the whole ecosystem is yet to be properly regulated. The latter also poses an interesting question that goes straight to the core of what NFTs and blockchain stand for: if there is a centralization of the system, in this case major studios/publishers controlling most of these technologies, does the community really benefit from decentralized technology?\nEven though NFTs have huge potential for the music industry and art in general, we think that the biggest unfair advantage will probably remain the same as it always has been: the ability to make something stand out from the crowd. 
And this is where the major studios and publishers win because of their amazingly oiled machine that can pick up almost anything and market/distribute it to a global audience, with or without NFTs.\n","date":"December 22, 2022","externalUrl":null,"permalink":"/blog/20221222-apes-and-kitten-in-the-music-industry/","section":"","summary":"Technology has disrupted the music industry several times in history, changing the way we consume it, how it’s distributed and even how it’s made. In this article we’ll reflect about the good, the bad, and the ugly facts around NFTs in the music industry","title":"Apes and Kitten in the Music Industry","type":"blog"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/mike-shinoda/","section":"Tags","summary":"","title":"Mike Shinoda","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/music/","section":"Tags","summary":"","title":"Music","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/open-sea/","section":"Tags","summary":"","title":"Open Sea","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/rarible/","section":"Tags","summary":"","title":"Rarible","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/record-labels/","section":"Tags","summary":"","title":"Record Labels","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/sony-bmg/","section":"Tags","summary":"","title":"Sony BMG","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/sound.xyz/","section":"Tags","summary":"","title":"Sound.xyz","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/spotify/","section":"Tags","summary":"","title":"Spotify","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/steve-aoki/","section":"Tags","summary":"","title":"Steve Aoki","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/tiktok/","section":"Tags","summary":"","title":"TikTok","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/universal/","section":"Tags","summary":"","title":"Universal","type":"tags"},{"content":"","date":"December 22, 2022","externalUrl":null,"permalink":"/tags/warner/","section":"Tags","summary":"","title":"Warner","type":"tags"},{"content":"","date":"November 13, 2022","externalUrl":null,"permalink":"/tags/describe.each/","section":"Tags","summary":"","title":"describe.each","type":"tags"},{"content":"","date":"November 13, 2022","externalUrl":null,"permalink":"/tags/input-combinations/","section":"Tags","summary":"","title":"Input Combinations","type":"tags"},{"content":"","date":"November 13, 2022","externalUrl":null,"permalink":"/tags/javascript/","section":"Tags","summary":"","title":"Javascript","type":"tags"},{"content":"","date":"November 13, 2022","externalUrl":null,"permalink":"/tags/jest/","section":"Tags","summary":"","title":"Jest","type":"tags"},{"content":"One challenge that I\u0026rsquo;ve been facing recently is how to write readable jest tests that exhaustively test my functions with all possible input combinations. 
In the early stages this is usually simple, but as the functions evolve and get more input parameters, the test code starts to get bloated and confusing even to the person who wrote it (yours truly).\nA practical example # Let\u0026rsquo;s imagine that we have the following callback computeIngredients(cake, vegan): Ingredient[] which returns the list of ingredients required to make a cake, supporting both vegan and non-vegan recipes. We want to implement a business rule that requires PeanutButter to always be in the list of ingredients regardless of the cake or whether it\u0026rsquo;s vegan. Given that we only support ChocolateCake and OrangeCake for now, it would be fairly simple to cover all test cases:\ndescribe(\u0026#34;Compute Ingredients\u0026#34;, () =\u0026gt; { test.each` cake | vegan ${Cakes.ChocolateCake} | ${true} ${Cakes.ChocolateCake} | ${false} ${Cakes.OrangeCake} | ${true} ${Cakes.OrangeCake} | ${false} `( `Should have peanut butter cake=${cake}, and vegan=${vegan}`, ({ cake, vegan }) =\u0026gt; { const result = computeIngredients(cake, vegan); expect(result).toContainEqual(Ingredients.PeanutButter); } ); }); If we add another cake recipe, MangoCake, and glutenFree to the modifiers, the number of entries in our test.each table triples, making the list of test cases much bigger and harder to understand.\ntest.each` cake | vegan | glutenFree ${Cakes.ChocolateCake} | ${true} | ${true} ${Cakes.ChocolateCake} | ${true} | ${false} ${Cakes.ChocolateCake} | ${false} | ${true} ${Cakes.ChocolateCake} | ${false} | ${false} ${Cakes.OrangeCake} | ${true} | ${true} ${Cakes.OrangeCake} | ${true} | ${false} ${Cakes.OrangeCake} | ${false} | ${true} ${Cakes.OrangeCake} | ${false} | ${false} ${Cakes.MangoCake} | ${true} | ${true} ${Cakes.MangoCake} | ${true} | ${false} ${Cakes.MangoCake} | ${false} | ${true} ${Cakes.MangoCake} | ${false} | ${false} `; Exploiting describe.each # A simple solution if you really need to generate exhaustive combinations for your test case would be to exploit describe.each by using it to loop over the possible values of each argument. Each describe creates a new scope, and we can nest as many describe blocks as we like. 
Using each in combination with describe allows us to achieve a similar behavior to nested for loops\ndescribe(\u0026#34;Compute ingredients\u0026#34;, () =\u0026gt; { // Repeats tests for each cake describe.each(Object.values(Cakes))(\u0026#34;When cake = %s\u0026#34;, (cake) =\u0026gt; { // Repeats tests for vegan/non vegan describe.each([true, false])(\u0026#34;And vegan = %s\u0026#34;, (vegan) =\u0026gt; { // Repeats tests for gluten free/non gluten free describe.each([true, false])(\u0026#34;And glutenFree = %s\u0026#34;, (glutenFree) =\u0026gt; { test(\u0026#34;Should have peanut butter\u0026#34;, () =\u0026gt; { // Do the actual test const result = computeIngredients(cake, vegan, glutenFree); expect(result).toContainEqual(Ingredients.PeanutButter); }); }); }); }); }); Where the results would be something like this\nCompute ingredients When cake = OrangeCake And vegan = true And glutenFree = true ✓ Should have peanut butter (3 ms) And glutenFree = false ✓ Should have peanut butter And vegan = false And glutenFree = true ✓ Should have peanut butter And glutenFree = false ✓ Should have peanut butter When cake = ChocolateCake And vegan = true And glutenFree = true ✓ Should have peanut butter And glutenFree = false ✓ Should have peanut butter And vegan = false And glutenFree = true ✓ Should have peanut butter (1 ms) And glutenFree = false ✓ Should have peanut butter (1 ms) When cake = MangoCake And vegan = true And glutenFree = true ✓ Should have peanut butter (1 ms) And glutenFree = false ✓ Should have peanut butter (1 ms) And vegan = false And glutenFree = true ✓ Should have peanut butter And glutenFree = false ✓ Should have peanut butter (1 ms) Conclusion # We should always try to keep our tests as simple as possible, easier to read and to maintain. Nesting describe.each scopes provides a simple way to generate the highest possible number of argument combinations and test our code against all of them, giving us more confidence to release. On the other hand, abusing this functionality might also mean that our interfaces are too complex or not well-structured and the wisest thing to do could be to rethink them.\n","date":"November 13, 2022","externalUrl":null,"permalink":"/blog/test-input-combinations-jest/","section":"","summary":"A simple method to generate exhaustive input combinations for test cases while maintaining a healthy and readable test codebase","title":"Multiple test input combinations with jest","type":"blog"},{"content":"","date":"November 13, 2022","externalUrl":null,"permalink":"/tags/software-testing/","section":"Tags","summary":"","title":"Software Testing","type":"tags"},{"content":"","date":"November 13, 2022","externalUrl":null,"permalink":"/tags/test-case/","section":"Tags","summary":"","title":"Test Case","type":"tags"},{"content":"","date":"November 13, 2022","externalUrl":null,"permalink":"/tags/test.each/","section":"Tags","summary":"","title":"test.each","type":"tags"},{"content":" This article was originally published externally, read the original here. The product manager role has been gaining popularity in the tech industry over the recent years. As more companies add PMs to their organisation charts, there is still a lot of experimentation with team setups to find the best alignment possible between product and engineering. 
These two functions now work closer together than they ever did, and while this is said to be the recipe for high-achieving teams, lots of companies are still struggling to achieve good levels of collaboration.\nStrategies to make it work are well covered on Martin Fowler’s website; in this article I’ll focus more on the engineer’s perspective and what our expectations are for a good product manager.\nWhat Not to Expect # As an engineer myself I have observed friction coming from both sides, but also some productive partnerships, and while one can argue that the team or the organisation can influence the outcome, it mostly depends on how much each function is willing to collaborate with the other.\nLet’s do an exercise and think about these expectations in reverse. I believe the PM role is still in its early days, and because of that, it assumes different shapes, especially in less mature companies that are starting to build their product development strategies. If you work in the tech industry right now, you might be familiar with some of the following stereotypes.\nExcel Manager # An ace with macros and a master at reporting progress in the weekly steering. The whole project looks like a geometric piece of art with bars stacked over each other, horizontally aligned with a column of red cells that turn green once you type in the word “Delivered”. The Excel manager cares very little about the product lifecycle and will spend all of her chips on getting the devs to commit to those deadlines.\nFeaturist # Top specialist in 360 market research. She knows all about Steve Jobs and the story of the iPod, and cares about the product lifecycle, but can’t afford to lose time building strategies because “Details matter, it’s worth waiting to get it right”. As long as you build a responsive design, with social share buttons, cash will start flowing. Be sure to check out “Avoiding Featurism”, which I stole the word featurist from.\nRetired Programmer # Displeased with the idea of being a code monkey forever, she abandoned engineering in search of happiness and success. Looking with regret at the life she left behind, the retired programmer is a good ally and is willing to manage leadership expectations and push back deadlines in exchange for sharing some ideas for the software architecture, and also that story about getting an A+ on a programming project at university.\nKing’s Hand # Why share one’s ideas when we’re all here to serve a greater purpose? Like an all-pass filter, the king’s hand is taking no chances of fingers pointing in her direction. She’s just the messenger and if you don’t agree, be sure to expect some escalations so that you can all ask the master for guidance.\nThe Single Idea of Product Management # Now [puts serious tone], what the above stereotypes have in common (intentionally) is that all of them delegate business calls to the leadership layer, which I think is the biggest game changer about the product manager role. More than design or implementation, the PM is accountable for the entire product lifecycle from idea through implementation to customer feedback and market performance. A good PM will act as a lower-level CEO, holding herself accountable for the success of a small part of the business (the product), offloading this responsibility from leadership. An even better PM will share this accountability across the entire product engineering team.\nThat being said, how the partnership is implemented is a different story. 
The most successful teams I’ve been part of are the ones where the PM is embedded in the team, sometimes even under the same leadership/reporting line. The less friction there is between the two functions, the better the results. PM \u0026amp; EM: Rules of engagement establishes 3 foundational rules which I believe can be extended beyond PM and EM to product and engineering: Trust, joint accountability and separate ownership. While it is important that each function plays a different part, working as a team with shared responsibility will increase everyone’s alignment on what they’re doing and why, as well as reducing the bureaucracy required to get something done, giving the PM better flexibility to build and adapt the product strategy iteratively.\nOnboard the Team Into the Business # One of the things that always bothered me is how little engineers know about the products they’re building. Surprising or not, it is possible to work an entire lifetime without knowing who uses the software you’re building and how much money it makes. One of the advantages of doing business at a lower level is that this barrier can be broken. Recurring discussions with the team about the product’s performance, as well as the north star or other relevant KPIs, are a powerful way of fostering innovation and keeping motivation levels high.\nOne Roadmap to Rule Them All # Building a technical roadmap while working on a product team was one of the most counterproductive experiences I’ve had. While it’s important to keep track of tech debt that needs to be paid, if there’s no buy-in from product, experience tells me that those tasks are never going to be implemented.\nSqueezing a couple of tasks into ongoing projects is not sustainable, and at some point neither will be the code base, leading to developer burnout and degraded product performance. A good PM is able to understand the cost of not paying technical debt and will include it as part of the product strategy.\nTarget Dates, Not Deadlines # If you want to stress out an engineer, ask him for an ETA or to commit to a deadline set by leadership. Building software under pressure only causes harm to the business in the sense that it will force people into making more mistakes that can cost money or decrease the team’s velocity later on.\nWhile it’s also not acceptable that engineers are free to waste large amounts of time, the PM should be flexible enough to either allow the target date to move or the scope to be reduced.\nFinal Remarks # What the perfect PM should be like is still an open question, but it is clear that if both product and engineering work towards building an effective partnership, the results can be far better than working in silos (grouped by activity instead of outcome). Building the product from inside the team is key to building a better collaboration environment where high levels of alignment, motivation and productivity are more likely, similar to startup companies. From an engineer’s point of view, the ideal PM is not a stakeholder but a peer instead, pretty much like the CEO of a small startup inside the wider company.\n","date":"November 2, 2022","externalUrl":null,"permalink":"/blog/20221102-engineering-friendly-pm/","section":"","summary":"The rules of engagement in a solid partnership with engineering. 
The do\u0026rsquo;s and don\u0026rsquo;ts as seen from a software developer\u0026rsquo;s perspective","title":"Engineering Friendly Product Manager","type":"blog"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/engineering-management/","section":"Tags","summary":"","title":"Engineering Management","type":"tags"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/excel/","section":"Tags","summary":"","title":"Excel","type":"tags"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/innovation/","section":"Tags","summary":"","title":"Innovation","type":"tags"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/ipod/","section":"Tags","summary":"","title":"iPod","type":"tags"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/martin-fowler/","section":"Tags","summary":"","title":"Martin Fowler","type":"tags"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/organization/","section":"Tags","summary":"","title":"Organization","type":"tags"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/product-management/","section":"Tags","summary":"","title":"Product Management","type":"tags"},{"content":"","date":"November 2, 2022","externalUrl":null,"permalink":"/tags/product-teams-new-features/","section":"Tags","summary":"","title":"Product Teams New Features","type":"tags"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/amazon/","section":"Tags","summary":"","title":"Amazon","type":"tags"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/google/","section":"Tags","summary":"","title":"Google","type":"tags"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/leadership/","section":"Tags","summary":"","title":"Leadership","type":"tags"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/organizations/","section":"Tags","summary":"","title":"Organizations","type":"tags"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/self-organizing/","section":"Tags","summary":"","title":"Self Organizing","type":"tags"},{"content":"Tech startups are good examples of productivity. With small teams on limited resources, every member usually exercises multiple competencies from engineering to product and leadership. While this is in most cases not an option, it creates a positive outcome of high accountability and alignment with the company\u0026rsquo;s mission. The ones that succeed and need to scale will most likely at some point struggle to maintain this cohesion and control the chaos that naturally comes with more people onboard.\nImage by Jason Goodman on Unsplash Scaling up is a painful and uncertain process, which is why, with no surprise, most organization models we see in scale-ups try to mimic the best practices and success stories from other companies that already walked the same runway. Spotify squads and Amazon 2-Pizza teams, two of the most popular role models, were discussed in every place I\u0026rsquo;ve worked at for the past 6-7 years, alongside with Google SRE book and Airbnb engineering culture. 
These 4 companies alone have likely influenced most engineering organizations around the world, which is probably one of the reasons we\u0026rsquo;re solving similar problems everywhere.\nIn a very (very) simplistic way, all of these models advocate for autonomous, self-organized small teams, partly to avoid them getting in each other\u0026rsquo;s way, but also because they are usually more productive and accountable than larger groups.\nHowever, each company is different, and so are the conditions required to thrive, which is why in a lot of cases we see these models falling short and not working as expected. Speaking of self-organizing teams only, what seems to be the most common problem is that even though they might be structured into clearly different domains, autonomy is not always possible, either because the product requires deliverables from multiple teams (e.g. an app team and an API team) or because it relies on cross-team services (e.g. infrastructure or billing) that need changes to meet the requirements for a specific feature.\nBig companies like Google and Amazon built their way to supporting hundreds of products with standalone teams, at a state of maturity that is generally an exception. Most companies will reach the scale-up stage with only one product to support and an in-development technical stack that “will one day” allow teams to build their own products in a self-service fashion.\nEven those that eventually get there will at some point build products or features that require two or more teams to work together, and that\u0026rsquo;s usually where the model fails because the sum of several self-organizing teams isn\u0026rsquo;t usually a self-organizing group of teams. And if we think about it, typical agile teams are built in a way that prioritizes their own process and roadmap over external interference. While this might not always be true, I believe that the hive mind expectation that self-organized teams will make the best decisions for the organization won\u0026rsquo;t work here, because as humans in a group, we tend to wait for someone else to take the first steps and assume the lead. When combined with the teams\u0026rsquo; individual roadmaps, the collective initiative is likely to get second priority. Quoting Amazon SVP Dave Limp: “The best way to fail at inventing something is by making it somebody\u0026rsquo;s part-time job.\u0026quot;\nOne thing that is not discussed as often as autonomous team organization models is their failures and what some companies have done to fix them, when they realized that their teams were pulling the rope in all sorts of different directions because there was a leadership gap across the board.\nGetting multiple teams to work together and deliver these initiatives successfully comes with an impact on every team\u0026rsquo;s roadmap, so without filling the gap it is unlikely that any of the teams will feel accountable for the whole project and take the lead. 
Some companies create temporary \u0026ldquo;project squads\u0026rdquo;, others hire specific project management staff to lead these products, but the important thing here is to make sure that someone will be in charge of getting the project done, and that someone has the power and ability to drive all the required teams and accommodate the project deliverables in their agendas.\nStandalone teams are still in my opinion a good way to scale tech organizations, and work perfectly for contained initiatives, but when it comes to dependencies, the model needs to be flexible enough to allow direction overrides without affecting the team\u0026rsquo;s boundaries and self-organization on business-as-usual activities. Finding the right trade-off between autonomy and collaboration is the key to succeeding at a collective level while still achieving good levels of productivity.\n","date":"October 9, 2022","externalUrl":null,"permalink":"/blog/self-organizing-misconception/","section":"","summary":"Reflecting on the common problem of leadership across multiple teams in scale-up organizations","title":"Self-organizing teams misconception","type":"blog"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/site-reliability-engineering/","section":"Tags","summary":"","title":"Site Reliability Engineering","type":"tags"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/teams/","section":"Tags","summary":"","title":"Teams","type":"tags"},{"content":"","date":"October 9, 2022","externalUrl":null,"permalink":"/tags/two-pizza-teams/","section":"Tags","summary":"","title":"Two Pizza Teams","type":"tags"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/code-formatting/","section":"Tags","summary":"","title":"Code Formatting","type":"tags"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/code-review/","section":"Tags","summary":"","title":"Code Review","type":"tags"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/comments/","section":"Tags","summary":"","title":"Comments","type":"tags"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/commits/","section":"Tags","summary":"","title":"Commits","type":"tags"},{"content":" Code reviews are an essential part of the software development lifecycle. Virtually all tech companies, from giants to startups, advocate for code reviews in the pipeline, and it is generally accepted that they contribute to better code quality and more accountability over what is released to production.\nWhile code reviews were (thankfully) standard practice in most open-source projects long ago, it was GitHub\u0026rsquo;s Pull Request feature that sparked mass adoption across the IT industry.\nWhy is this important? # The main idea behind a code review is to ensure that at least two engineers collaborated on any change in the code base, increasing quality and accountability. 
As a developer you\u0026rsquo;re probably reviewing your peers\u0026rsquo; code as much as writing your own (At least you should 😀), and pull/merge requests are usually the stage where you\u0026rsquo;ll exchange more feedback and knowledge with your peers about the feature being delivered.\nThe distributed nature of the pull request process allows code reviews to be written and asynchronous, which is what happens most of the time, so it is important to make them as clear and concise as possible to make sure that your peer understands what\u0026rsquo;s being done and facilitate the review. Obscure pull requests lead to misunderstandings, lack of context and ultimately bugs. This becomes even more evident on remote teams with weaker communication habits, or low timezone overlap.\nOptimizing your PR\u0026rsquo;s readability # We all think and communicate differently from each other, even within the same cultural boundaries. Translation and interpretation issues happen all the time, so the best strategy to get a smooth code review, with no surprises, is to start preparing it from the moment you begin writing code.\nTidy up your commit history # When building software, one thing we usually do is to use a divide and conquer approach and implement the feature step by step. If we\u0026rsquo;re able to mirror this strategy into the commit history, it will help the reviewer understand the timeline of events, and review the commits individually. Additionally, good commit messages will make the history self-descriptive, and the reviewer will know what to look for.\nLet\u0026rsquo;s suppose we have a user microservice, and our task is to include the user\u0026rsquo;s last name in the get user details endpoint. Here\u0026rsquo;s what I think it would be a good commit history:\n* 54bfc23ab Add lastName to UserEntity and update UserRepository to fetch from db * 43ba765a5 Add lastName to UserDTO and update UserController fetch from UserRepository The idea is to create small commits where a single component is changed, as well as the tests it affects. The commit message itself gives a hint of what files were changed, so that whoever reads it can easily understand what to look for. After clearing that commit, the reviewer can mentally close that context and jump to the next as if she was developing the feature herself.\nTo make this point clear, here are some patterns that you probably have come across, which I think aren\u0026rsquo;t review friendly (The names are made up):\nDo then test # * 54bfc23ab Add lastName to user * 43ba765a5 Fix tests * 32aa54db7 Fix tests I\u0026rsquo;m not advocating for TDD here, that\u0026rsquo;s a touchy subject 🔥, but I just think that on a step-by-step approach it makes sense to change all the files related to a specific context and the tests are part of it. If the changes made to UserRepository test are in the last commit, it will force the reviewer to context switch back to a step that was already cleared.\nSniper # * 54bfc23ab Add lastName to user One shot commit histories are also not spectacular to review because they contain too much information, as opposed to a divide and conquer approach where we would review small changes step by step. 
It\u0026rsquo;s also difficult to know where to start, since all the files were changed in the same commit.\n#YOLO # * 54bfc23ab First draft * 43ba765a5 Fix bug * 32aa54db7 Fix tests * 24ab567a8 Fix another bug This is the typical case where someone didn\u0026rsquo;t test the code locally and only realized they\u0026rsquo;d made a mistake when the pipeline failed or a bug showed up in the sandbox environment. While this obviously can happen even if you took all precautions, you should avoid polluting the commit history while trying to fix mistakes. If there is no other way, it can be a good idea to use rebase and squash the commits before creating the PR.\nIf you\u0026rsquo;re able to guide the reviewer with your commit history, half of the work is done because she will be able to keep up with your line of thought. Descriptive messages are key to this, but organizing commits into logical steps can also be very helpful.\nFix conflicts ahead of submitting the PR # Merge conflicts are a frequent issue, especially in projects with lots of developers and small code bases. While it might not impact the review itself, it is a good practice to keep your code up to date with the main branch, so that the merge runs smoothly. Depending on your team\u0026rsquo;s process, you might find yourself in a situation where you need to fix the merge conflicts and ask the reviewer to go back and approve the PR again because you couldn\u0026rsquo;t merge it in the first place.\nUse Rebase instead of merge # A very common way of keeping a branch updated is to merge the main branch into the feature branch ahead of the review or at some point in time. While this is standard practice for many, there is a downside: the merge commits it adds to the history.\n* 54bfc23ab Add lastName to UserEntity and update UserRepository to fetch from db * 43ba765a5 Add lastName to UserDTO and update UserController fetch from UserRepository * e672eab17 Merge branch \u0026#39;master\u0026#39; into \u0026#39;feature/add-last-name-to-user\u0026#39; Rebase, on the other hand, does a slightly different thing: it replays the feature branch\u0026rsquo;s commits on top of the main branch, without polluting the history with merge commits (one less thing for the reviewer to care about).\n* 54bfc23ab Add lastName to UserEntity and update UserRepository to fetch from db * 43ba765a5 Add lastName to UserDTO and update UserController fetch from UserRepository The downside to this approach is that it will force you to use push --force-with-lease if your feature branch is already published on the origin. A minimal sketch of this workflow is shown below.
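As a rough sketch of that rebase-and-squash workflow (assuming the same master-based branch setup used in the examples above), the sequence before opening the PR would look something like this:\n\u0026gt; git fetch origin\n\u0026gt; git rebase -i origin/master\n\u0026gt; git push --force-with-lease\nThe interactive rebase replays the feature branch\u0026rsquo;s commits on top of the latest master and lets you squash \u0026ldquo;Fix tests\u0026rdquo;-style fix-ups into the logical step they belong to, while --force-with-lease updates the already published branch without silently overwriting anyone else\u0026rsquo;s work.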
Don\u0026rsquo;t commit code formatting changes # Including code formatting changes in a PR will cost the reviewer a significant amount of time figuring out what parts of the code have actually changed. If you find yourself doing this it\u0026rsquo;s a sign that either there is a misalignment between you and your team, or the project doesn\u0026rsquo;t have an automated code formatter yet. There are a bunch of tools that you can plug into your IDE or commit hooks, like Prettier and EditorConfig, and avoid committing formatting changes. The only important requirement for this to work is that everyone in the team uses an equivalent formatting setup so that all code is in the same format when committed.\nSide note: Don\u0026rsquo;t fall into the trap of actually discussing those standards, as you and your team will most certainly burn precious amounts of time not reaching an agreement. I find it more practical to use community-accepted ones like the Google, Facebook or Airbnb code formatting standards.\nComment and document your code # Use comments wherever the code itself doesn\u0026rsquo;t make it obvious why you made such a decision. This might even help other developers that come across your code in the future and need some context on why something is happening.\n+ for(User u : users) { + if(u.isActive()) { + // Interrupt the entire process because no user should be active at this point + throw new IllegalUserStateException(u); + } + deleteUser(u); + } Just keep in mind that a comment is another line to review, and too many comments make the code less readable, so try to avoid redundant comments.\n+ // If the user is active, throw an exception + if(u.isActive()) { + throw new IllegalUserStateException(u); + } + // Delete the user + deleteUser(u); Documentation, on the other hand, is a bit different: typically, public classes and methods should be properly documented so that you can generate decent API docs. However, it is also very easy to exaggerate and add documentation to stuff that\u0026rsquo;s self-explanatory, like getters and setters:\n+ /** + * Gets the user + */ + public User getUser(); My opinion for cases like this is that you\u0026rsquo;re better off with a simple description in the class header, or no description at all, as the comment doesn\u0026rsquo;t add any value to whoever is reading it.\n+ /** + * Provides CRUD functionality over Users on the data store + */ + interface UserRepository { + public User getUser(); + public User createUser(); + public void deleteUser(User u); + } Include every relevant detail in the PR\u0026rsquo;s description # I found several guides for creating pull request descriptions, all of them slightly different from each other but with the same base idea: try to provide as much context as possible so that the reviewer looks in the right places. I also believe that descriptions with too many sections and details run the risk of being ignored, so I try to keep them short and objective.\nContext # Explain why this PR exists, what the current problem is and what needs to be fixed. Most teams already have this information on their issue tracker, so it\u0026rsquo;s obviously a good practice to include a link to it, but it\u0026rsquo;s an even better one to write a small paragraph so that the reviewer doesn\u0026rsquo;t need to click it.\nThis PR closes issue CONN-1023, which is a feature request to add the user\u0026#39;s last name to the get users endpoint in the users microservice What was done # The smaller this section is, the better. Usually, I use a list of items to guide the reviewer. These might be similar to the commit list but with complementary information on the design patterns or algorithms used, for example. If these are UI changes, you can record a video or include screenshots, which will be very helpful for whoever is trying to reproduce the fix.\nCommunicate in a positive way # I had to put this here, but the idea was actually taken from Anton Chuchkalov\u0026rsquo;s article. As I mentioned in previous articles, creating a positive culture and having a good relationship with your peers is key to engagement and collaboration.
Pull requests or code reviews are just another instance where you and your team need to work together in order to get somewhere. This remarkable quote from Anton\u0026rsquo;s article sums it all up:\nRemember: Code changes are ephemeral - relationships with teammates are what matters\nRecap # Code reviews are standard practice in the tech industry and pull/merge requests on version control platforms are the most common tools for conducting them. The process itself is written and asynchronous, and with the tendency to build geographically distributed teams we are more vulnerable to lack of context and communication issues, which is why it\u0026rsquo;s so important to write good pull requests.\nWhat do these look like is a widely discussed subject on the internet, and while the above strategies have worked for me, there are certainly different approaches. I believe that every organization and individual is different and therefore my ultimate recommendation is that each should explicitly discuss how to address pull requests and code reviews, in order to develop the framework that best fits their reality.\nOther Resources # Martin Fowler - Pull Request Github - How to write the perfect pull request Anton Chuchkalov - How to make a perfect pull request Sajal Sharma - How to write a pull request description Gonzalo Bañuelos - Writing a great pull request description Hugo Dias - Anatomy of a perfect pull request ","date":"August 13, 2022","externalUrl":null,"permalink":"/blog/creating-optimal-pull-requests/","section":"","summary":"An essay on the role of pull requests and their importance in the software development cycle and what strategies can we use to improve them","title":"Creating optimal pull requests","type":"blog"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/git/","section":"Tags","summary":"","title":"Git","type":"tags"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/github/","section":"Tags","summary":"","title":"Github","type":"tags"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/pull-requests/","section":"Tags","summary":"","title":"Pull Requests","type":"tags"},{"content":"","date":"August 13, 2022","externalUrl":null,"permalink":"/tags/software-development-process/","section":"Tags","summary":"","title":"Software Development Process","type":"tags"},{"content":"","date":"July 1, 2022","externalUrl":null,"permalink":"/tags/docusaurus/","section":"Tags","summary":"","title":"Docusaurus","type":"tags"},{"content":"This website was recently migrated from Docusaurus to Hugo. The initial idea was to not dive into complex web development, but as I continued to produce more content and use cases, some customizations to the theme were needed, which proved to be a bit of a struggle in Docusaurus. After exploring Gatsby and Hugo as alternatives, I ended up sticking with the latter. In this article, I\u0026rsquo;ll go through the main evaluation steps, comparing the three frameworks along the way and explain why Hugo was the final decision.\nRequirements # The main idea for this website is to share some articles and create a writing habit, but also to add custom pages like my resume or projects portfolio, without adding too much complexity to the final solution. 
The ideal framework needs to:\nSupport Markdown, which is what I\u0026rsquo;m comfortable with as a software engineer who produces a lot of documentation;\nProvide simple content creation methods (adding a file to a folder will suffice), so that I can focus on the content, rather than on the code and surrounding infrastructure;\nInclude built-in themes and templates, which would allow creating a website with boilerplate like menus and mobile compatibility without putting a lot of effort into it;\nAllow the creation of custom pages as well as customizing the existing styles and templates, so that minor adjustments can be made;\nGenerate static pages and not rely on any backend, so that the entire application can be deployed on traditional web hosting services;\nThis was more or less the bar set by Docusaurus although, as mentioned above, it doesn\u0026rsquo;t excel at some of these points.\nThe Frameworks # Docusaurus # It was the initial choice because I was using it for API documentation at work and knew my way around it, so it seemed like a no-brainer at the time. Docusaurus is a JavaScript/TypeScript static website framework built by the Facebook team. While there isn\u0026rsquo;t a specific target use case on their website, it excels at wiki-style documentation, but also supports blogs and custom pages using React or Markdown. It is the engine behind the documentation of most Facebook projects as well as many others.\nGatsby # The best way to create static websites, according to the React documentation. Gatsby has been around since 2015 and includes an enormous set of features. It\u0026rsquo;s oriented to content-driven applications (like blogs), and it includes integrations with major CMS vendors like Contentful and WordPress. It also supports Markdown and React, but it doesn\u0026rsquo;t include many prebuilt styles or themes, making it more of a web development framework than a blogging platform.\nHugo # The oldest of the three, Hugo is a general purpose website framework like Gatsby, but it includes a large catalogue of community-developed themes, as well as commercial ones, which allow users to build a variety of applications like resume/portfolio pages, blogs, wikis, and even online stores. It supports Markdown and HTML templating but not React.\nCommunity and Maintenance # Good support and active maintenance are always desirable when picking up a website framework, especially if we intend to continuously evolve and maintain our project. All three of them have slightly different support models:\nDocusaurus is part of the Meta Open Source Community (formerly Facebook Open Source) and has over 30k stars and 900 contributors on GitHub. Having one of the world\u0026rsquo;s tech giants behind it was enough to build a large community. Despite being a big open-source project, it looks like it\u0026rsquo;s mainly sponsored by Meta, and there aren\u0026rsquo;t many community-contributed plugins, themes or add-ons yet;\nGatsby is fully open source, but it is part of an ecosystem of commercial products from Gatsby Inc, which supports the development and monetizes all sorts of Gatsby-related infrastructure and premium support. Their GitHub project has over 50k stars and 3k contributors. There are also several plugins and themes listed on their docs, most of them officially supported either by Gatsby Inc or third parties;\nHugo is a bit of an underdog in comparison to the others, but it\u0026rsquo;s probably the closest to a true open-source project.
It\u0026rsquo;s sponsored by a couple of well-known companies like Brave and Linode, but there aren\u0026rsquo;t obvious commercial motives behind it. There are almost 60k stars and over 700 contributors on GitHub, and there are a bunch of community-built themes as well as commercial ones. It\u0026rsquo;s like the WordPress of modern days;\nDocumentation # There are two types of documentation that we typically look for: on-boarding and customization/advanced features.\nOn-boarding is usually the first thing we dive into when getting started with a new framework, to learn how to use the product and what features are available. All three frameworks have great on-boarding documentation, and it takes roughly 10 minutes to get started with any of them.\nWhen it comes to customization, things are a little different. Docusaurus has a couple of guides that teach you the very basics of changing styles, replacing components and developing custom pages. Higher levels of customization usually come at the cost of navigating through the code in order to find out how it can be done.\nGatsby is by far the most complete, and this is probably because it\u0026rsquo;s a web development framework, and it\u0026rsquo;s obviously in Gatsby Inc\u0026rsquo;s interest that developers know how to work with it, so necessarily their docs are filled with development and customization guides and examples so that there is very little friction when creating a new Gatsby project.\nHugo\u0026rsquo;s docs are fairly comprehensive when it comes to customization, offering all API references, the component rendering strategy and some customization guides. These are also built by the community, like Docusaurus\u0026rsquo;s, but are a bit more mature. The themes themselves also have their own docs, which are supported by the theme developer, so here it\u0026rsquo;s a bit of a gamble since not all themes have good documentation.\nCustomization # Still on the subject of customization, all three of them offer high levels of customization, which slightly differ from each other in terms of what can be done and how difficult it is:\nDocusaurus # Custom Pages # Docusaurus understands any js/ts, jsx/tsx or md/mdx file. All of them (apart from md) support React components natively, and it is possible to do pretty much anything since you\u0026rsquo;re building the page yourself. There is a limited set of data available in the custom page\u0026rsquo;s context, enabling stuff like reading the configuration, documents or blog posts.\nStyling # The Infima framework is shipped with Docusaurus, and all available components already have proper styles defined, which will be coherent with the design of the remaining pages. If you choose not to use Infima\u0026rsquo;s building blocks, then CSS or SASS for these components is required, which adds a bit more work to the process. It is also possible to override global styles by providing additional stylesheets.\nBuilt-in Templates # The built-in templates (docs wiki and blog, base layout etc.) are a little harder to customize, because there\u0026rsquo;s just not a lot of documentation on how to do it. Each of the plugins is extensively configurable, and the React components they use can be overridden using a technique called Swizzling. The plugins themselves can also be overridden, but they provide very limited guidance on how to do so. A rough example of swizzling is sketched below.
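To give a rough idea of what swizzling looks like in practice (the Footer component is just an illustrative pick, and the exact command may differ between Docusaurus versions), ejecting a theme component copies its source into the project, where it can then be freely edited:\n\u0026gt; npm run swizzle @docusaurus/theme-classic Footer -- --eject\nAfter that, the copy living under src/theme/Footer is the one Docusaurus renders instead of the component shipped with the theme.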
Gatsby # Custom Pages # Pretty much everything is custom in Gatsby, since it\u0026rsquo;s closer to general purpose web development than to blogging. Pages can be developed in React using js/ts, jsx/tsx files and markdown, with the mdx plugin which enables support for md/mdx files. The officially supported method of loading data relies on a globally available graphql interface that enables executing complex queries over the content and configuration.\nStyling # The basic starter doesn\u0026rsquo;t provide you with built-in styling, so it\u0026rsquo;s a bit greenfield here, pretty much like normal web development. It supports adding everything that can be used with React, and they provide official support for a bunch of styling frameworks like Theme UI, Styled Components, SASS, etc. Through the plugin mechanism, there\u0026rsquo;s also support for some prebuilt/opinionated themes like Gatsby-theme-ui-preset, but they don\u0026rsquo;t seem to be very popular.\nBuilt-in Templates # Once again, through the use of plugins it\u0026rsquo;s also possible to not start from scratch, and in the case of blogging there\u0026rsquo;s a very simple starter, which is officially supported and can be used to start a very simple blog. It generates a new Gatsby project (or adds to an existing one) with the required files to create a ready-to-use blog with built-in styles and navigation.\nHugo # Custom Pages # The concept of custom pages in Hugo is a bit different, since every page needs to be an md file, making markdown (mdx is not supported, but we can use HTML inside md files) the only option to create content. However, pages are rendered using Go Templates, which basically allow writing the actual HTML (no React support) that prints the page. For each new page we can specify a different template, so it is actually pretty simple to add custom pages for whatever purpose we need.\nStyling # Styling highly depends on the theme that\u0026rsquo;s being used and the support it has. Because themes may contain both CSS and HTML templates, what can be done really depends on those files. However, Hugo provides an asset bundling mechanism, which will fetch any custom CSS or SASS that was added to the project and add it to the final assets bundle.\nBuilt-in Templates # Hugo provides a couple of templates for simple text pages, articles and article listings, which were sufficient for the purpose of this blog. These templates are made up of a bunch of HTML/Go Template files and all of them can be overridden by mimicking the same folder structure in the current project. Themes can also contain additional templates, so basically we can install themes for other purposes like wikis, documentation pages, portfolios, stores, etc.\nPerformance # This is a very simple comparison of build and page load times across the three frameworks. All tests were performed on the same machine (my laptop), and the numbers are relative to its hardware specs.\nStarting with build times, there are 3 different types of build that were measured: production (optimized, minified), development (source maps, uncompressed code and images, etc.) and hot reloading (like development but just for the changed files).\nBuild times comparison for production, development and hot reload. Notice that I changed the Y axis to a logarithmic scale, such is the difference between Hugo and the rest.
This is likely related to the fact that Hugo runs natively (golang) rather than in the Node.js runtime, but mostly to the fact that Hugo does much less than the other two at build time, since it doesn\u0026rsquo;t need to perform all the heavy lifting required to build a React application.\nThe generated builds are very close to each other in terms of performance. In this case, I just measured the time to load the blog listing page on all three frameworks, and the results are similar, so the Y axis has a linear scale here.\nPage load times comparison for production and development builds. Because Hugo\u0026rsquo;s generated pages are tightly coupled with the theme, these values might have been different if I were using a different theme. Nevertheless, it\u0026rsquo;s possible to conclude that in terms of performance, the end result is more or less the same, but with Hugo, development cycles and iterations are much faster.\nFinal Remarks # I started looking for alternatives to Docusaurus ultimately because I wanted to move away from their theme, and customize the page layout and styles (i.e. change fonts, add a list of recent posts, etc.). While all of this is possible in Docusaurus, it\u0026rsquo;s not straightforward, and definitely not well covered in the docs. I\u0026rsquo;d probably still use it for building API, SDK or open source project documentation because that is exactly what it was built for, and these are use cases where typically we\u0026rsquo;re ok with standard styles and layout.\nGatsby excels where Docusaurus fails, allowing all the customization I needed and much more. It\u0026rsquo;s a very complete web development framework, but on the other hand, it requires much more development effort to get something out of it (it fails where Docusaurus excels). I\u0026rsquo;d probably go with Gatsby if I wanted to build a static website with its own design, and specific functionality, from the ground up, without built-in styles and components.\nUltimately I ended up sticking with Hugo because it was the framework that offered the least friction to set up the blog/webpage that I had in mind. It\u0026rsquo;s obviously less flexible than Gatsby, but the component override system is really simple and well documented (sketched below). Did I already mention that development cycles are blazing fast?
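As a rough sketch of that override mechanism (theme and file names are illustrative): copying a template out of the theme into the project\u0026rsquo;s own layouts folder, keeping the same relative path, is enough for Hugo to prefer the local copy over the theme\u0026rsquo;s version.\n\u0026gt; mkdir -p layouts/_default\n\u0026gt; cp themes/my-theme/layouts/_default/single.html layouts/_default/single.html\n\u0026gt; hugo server\nFrom here on, edits to layouts/_default/single.html change how individual articles are rendered, while the rest of the theme stays untouched.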
I\u0026rsquo;m using the PaperMod theme, one of the most common, and while I probably won\u0026rsquo;t be able to move away from these styles and dramatically change the appearance, I can override HTML templates and reuse CSS classes or add new ones, without much trouble.\nIf you\u0026rsquo;re looking into Hugo and want to try it out, be sure to check out Nuno Coração\u0026rsquo;s getting started guide, which is an awesome place to start playing around with it.\n","date":"July 1, 2022","externalUrl":null,"permalink":"/blog/docusaurus-gatsby-hugo/","section":"","summary":"A comparison between Docusaurus, Gatsby and Hugo, as blogging frameworks and why I chose to use Hugo for implementing this blog","title":"Docusaurus vs Gatsby vs Hugo","type":"blog"},{"content":"","date":"July 1, 2022","externalUrl":null,"permalink":"/tags/gatsby/","section":"Tags","summary":"","title":"Gatsby","type":"tags"},{"content":"","date":"July 1, 2022","externalUrl":null,"permalink":"/tags/hugo/","section":"Tags","summary":"","title":"Hugo","type":"tags"},{"content":"","date":"July 1, 2022","externalUrl":null,"permalink":"/tags/markdown/","section":"Tags","summary":"","title":"Markdown","type":"tags"},{"content":"","date":"July 1, 2022","externalUrl":null,"permalink":"/tags/papermod/","section":"Tags","summary":"","title":"PaperMod","type":"tags"},{"content":"","date":"July 1, 2022","externalUrl":null,"permalink":"/tags/react/","section":"Tags","summary":"","title":"React","type":"tags"},{"content":"","date":"July 1, 2022","externalUrl":null,"permalink":"/tags/static-website/","section":"Tags","summary":"","title":"Static Website","type":"tags"},{"content":"Content sharing is today\u0026rsquo;s most common internet activity, made easy and accessible to everyone thanks to cloud providers, social networks, blogging platforms etc. They spoil us with cheap, reliable and highly available services, while in return we give them our data and our attention. Most of us still don\u0026rsquo;t grasp to what extent these companies exploit us, but the awareness and conversation around internet centralization are growing, and the latest cryptocurrency boom brought a wave of investment and the promise to build the so-called decentralized internet or Web3 and bring the control back to the users. Web3 is not what this article is about, but it\u0026rsquo;s the reason why it happened. There\u0026rsquo;s a lot of excitement and people working in the web3 scene at the moment, building groundbreaking technology and products, attempting to make web3 a reality. One such example is the Inter Planetary File System (IPFS) that aims to fix the problem of content addressing and streaming on a peer-to-peer network. Drawn by the hype, and curiosity, I\u0026rsquo;ve set myself on the journey to build an IPFS cluster - what this article is actually about - for hosting this website. 
Below I\u0026rsquo;ll discuss the ups and downs and some interesting (or not) conclusions about the current state of website hosting on IPFS.\nSummary # I strongly believe self-hosting is somewhere down the road to decentralization and tried to go that route; Soon it became obvious that full-blown self-hosting would create other complex problems and distract me from the original goal, so I\u0026rsquo;ve accepted the tradeoff of using cloud servers to run the IPFS cluster; DNSLink and public IPFS HTTP gateways try to bridge the gap between Web2 and Web3, and the combination is not much different from the traditional web stack, only dramatically slower; Cloud providers, CDNs and DNS registrars can still take my website down or at least degrade it (So much for self-hosting and decentralization); I ended up falling back to the traditional static website hosting on the cloud, but still offer a solution for users wanting to use their IPFS HTTP gateway or a public one, using tools like Brave or IPFS Companion; We need to change the way modern browsers access online content so that we\u0026rsquo;re able to move away from the traditional client-server architecture to a more peer-to-peer content sharing model; Hosting # The whole idea behind peer-to-peer is the users\u0026rsquo; ability to share/serve their own content, which is how the internet started. If we want to be fundamentalists about it, self-hosting is the only way to have full control of our online presence. However, because I\u0026rsquo;m not a fundamentalist, and because I lack the resources to set up a reliable physical server infrastructure and network connection in my garage, I made the first tradeoff before even starting the project, and succumbed to the wonders of the cloud.\nThe idea was to set up an IPFS cluster on \u0026ldquo;my\u0026rdquo; cloud nodes, which would be used to upload/pin the website, simultaneously with Piñata and Infura as a complementary solution to make streaming and resolution faster.\nUnlike bitcoin and other blockchain-based systems that replicate the whole content across the network, files on IPFS are stored in different nodes, and their address/identifier (CID) is broadcast to a distributed hash table (DHT) that will be used to locate the content when someone tries to access it. If there isn\u0026rsquo;t at least one node in the network serving the content, it will not be available, so one good practice is to upload/pin the content to multiple nodes, which is possible due to the way content is resolved in the IPFS network. The more nodes in the network are hosting your files, the faster it will be to find and retrieve them, and consequently, the lower the chances of downtime.\nThough I strongly believe that self-hosting your own stuff is on the path to decentralization at some point in the future, the entry barriers it presents are still too high, and it would take a significant effort to build the infrastructure I needed, eventually changing this project into something else. In his article about self-hosting and decentralization, Chris McCormick states that this friction created a market for big companies like Google and Amazon to transform web hosting into a commodity, ultimately leading to internet centralization.
Even most of the websites on IPFS are (probably) hosted by Piñata and Infura, which are awesome and reliable (and probably run on AWS) but exist because it\u0026rsquo;s still complicated for an average user to self-host their own content with the same level of availability and performance established by the industry leaders.\nBuilding the infrastructure # Spinning up an IPFS node is quite simple, and the official docs have a very comprehensive guide. It\u0026rsquo;s a good place to start, but if you\u0026rsquo;re aiming towards a production environment with more than one server or component, I recommend going for one of their supported orchestration tools so that you can replicate configurations with confidence across multiple servers.\nIf I were to start over today, I\u0026rsquo;d probably use docker for the whole setup (lots of official resources available), but at that moment, my ansible chops were still fresh, and I bumped into ansible-ipfs-cluster, which works quite well and was a good opportunity to make some contributions (pending merge) to the project itself.\nThe ipfs daemon, by default, connects to other peers in the network to serve its own content and exposes a management API both over HTTP and libp2p. All of the clustering features are implemented in the ipfs-cluster daemon, which runs alongside ipfs and operates it using the API. ipfs-cluster daemons communicate between themselves (only the cluster peers) to synchronize the content across all nodes in the cluster using RAFT or CRDT consensus.\n{{\u0026lt; mermaid \u0026gt;}} flowchart LR; ipfs-cluster1 \u0026lt;--\u0026gt; ipfs-cluster2 \u0026lt;--\u0026gt; ipfs-cluster3 \u0026lt;--\u0026gt; ipfs-cluster1 subgraph node1 [node1] ipfs-cluster1 --\u0026gt; ipfs1 end subgraph node2 [node2] ipfs-cluster2 --\u0026gt; ipfs2 end subgraph node3 [node3] ipfs-cluster3 --\u0026gt; ipfs3 end ipfs1 \u0026lt;--\u0026gt; ipfs2 \u0026lt;--\u0026gt; ipfs3 \u0026lt;--\u0026gt; ipfs1 ipfs1 --\u0026gt; other-ipfs-peers ipfs2 --\u0026gt; other-ipfs-peers ipfs3 --\u0026gt; other-ipfs-peers {{\u0026lt; /mermaid \u0026gt;}} Exposing the website over HTTP # The IPFS network stack (libp2p) is built for peer-to-peer communication and is not compatible with HTTP at the moment. While it\u0026rsquo;s still unclear whether the way we browse the web will change or not, as things stand each user would have to run an IPFS node in order to consume content from the network. This is a major dealbreaker for adoption at the moment, and because of that, IPFS ships with an HTTP Gateway to bridge the gap between traditional web2 and their network.\nCompanies like Cloudflare, Pinata, and many others adopting IPFS host public gateways to make it easy for everyone to access files on IPFS. To make it all even more seamless, a standard called DNSLink was proposed by Protocol Labs, the company behind IPFS, to allow linking a DNS record to any type of content, including IPFS CIDs.\nSince we can rely on public IPFS gateways, there\u0026rsquo;s no need to expose an HTTP endpoint on the IPFS cluster, and therefore no need to worry about TLS certificates, caching, DDoS, etc. As long as the files are stored by a node, any gateway is able to retrieve and expose them over HTTP, hiding all of this complexity from the user, and most importantly, from me.\nHaving said all of that, I chose Cloudflare\u0026rsquo;s DNS Registrar and public IPFS Gateway (cloudflare-ipfs.com) because they have quite good documentation and support\u0026hellip; and it\u0026rsquo;s almost free.
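To make the DNSLink part concrete, here\u0026rsquo;s a rough sketch with an illustrative domain and CID (not the real values): the domain gets a TXT record on the _dnslink subdomain pointing at the current content hash, and any public gateway can then resolve the site through its /ipns/ namespace.\n_dnslink.example.org. TXT \u0026#34;dnslink=/ipfs/QmYourSiteRootCid\u0026#34;\n\u0026gt; curl https://cloudflare-ipfs.com/ipns/example.org/\nEvery new deployment then just means pointing that TXT record at the new root CID, which is exactly the kind of update the deployment tooling discussed further below automates.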
The image below illustrates the first setup, and it pretty much resembles a common web stack: self-hosted files, hidden behind Cloudflare.\n{{\u0026lt; mermaid \u0026gt;}} graph LR; browser self-hosted-ipfs-cluster subgraph Cloudflare cloudflare-ipfs-gateway cloudflare-ipfs-nodes end browser --\u0026gt; cloudflare-ipfs-gateway cloudflare-ipfs-gateway --\u0026gt; cloudflare-ipfs-nodes cloudflare-ipfs-nodes --\u0026gt; self-hosted-ipfs-cluster {{\u0026lt; /mermaid \u0026gt;}} Deploying files to the cluster # Adding files to the IPFS nodes is fairly simple when done manually, provided that they are already available on a storage volume that the ipfs daemon is able to reach. There\u0026rsquo;s also the webui app, which connects directly to your node\u0026rsquo;s HTTP API and allows uploading files from the browser. Both are ok for experimenting and casual utilisation, but impractical for automated deployments on a CI/CD workflow like github actions, for example. One of the recommended tools is ipfs-deploy, an npm package that is able to pin content to several pinning services, and change the DNSLink (TXT record) using Cloudflare\u0026rsquo;s API.\nGetting it to work with the current setup was a different story, not because of the tool itself but because it would invalidate everything I said about not needing to expose HTTP endpoints. Up until now, the ipfs cluster\u0026rsquo;s HTTP API wasn\u0026rsquo;t publicly available because it wasn\u0026rsquo;t needed for accessing the files. However, ipfs-deploy and pretty much any other ipfs deployment tool will need the IPFS cluster API to upload and pin the files. I\u0026rsquo;ve started with a standard approach, nginx + letsencrypt, to at least have some access control and security (Auth and TLS termination), however an issue with go-ipfs and nginx, which obviously only presented itself after everything was properly setup, forced me to directly expose the cluster API using its security features.\nThe last problem down the road was related to certbot, turns out that certbot runs as root, and if the user running ipfs and ipfs-cluster can\u0026rsquo;t escalate privileges like standard nginx installations, it won\u0026rsquo;t be able to use the certificates. It was solved with a good old chmod 644, which I\u0026rsquo;m not very proud of.\nAfter all of this madness, the setup to host and deploy this website using IPFS was up and running, and madoke.org was available on IPFS. If you want to build something similar, I\u0026rsquo;ve shared the ansible-playbook that does all of this, hoping that it might be useful at some point in the future.\nResults # Building this infrastructure was an educative and challenging journey that helped me to better understand the whole IPFS ecosystem and how it is evolving to become the standard web3 approach to dApp and content hosting. The results however, weren\u0026rsquo;t as satisfying as one would expect. Overall, the whole setup became a bit more complicated, less performant and still depending on third parties.\nCensorship Resistance # It\u0026rsquo;s one of the greatest discussions in the internet, and also one of the things web3 aims to fix. In this particular setup however, the website is still vulnerable a number of external parties:\nCloud provider: The first tradeoff made, to skip the hassle of building the physical server infrastructure. Consequently, if my cloud provider decides to take the IPFS cluster down, I won\u0026rsquo;t be able to guarantee that the content is being served. 
There are some workarounds however:\nUpload to multiple IPFS services (as discussed above), so that there is never only one source of truth, which is probably the easiest and the best approach;\nActually build a physical server infrastructure and serve the IPFS cluster from my garage. This one is high-cost, not suitable for everybody, and adds the Internet Service Provider to the parties that could interfere with the website.\nPublic IPFS Gateway: Pretty much like with their HTTP CDN, Cloudflare can still take the website down by blocking it on their gateway. Alternatively, I could:\nHost my own gateway: High cost of setting up a reliable HTTP gateway, but at least it would make sure that regular web browsers can access it;\nDon\u0026rsquo;t use a gateway at all: This is where the mindset of IPFS wants to take the web, but unfortunately it would require everyone to run their own local IPFS node to access the website, which given the current state of the web would basically lock me out of more than half of the internet users.\nDNS: As far as I\u0026rsquo;m able to understand, you can\u0026rsquo;t easily run away from DNS registrars, so I really didn\u0026rsquo;t invest any amount of time looking for alternatives. However, some non-ICANN registrars are starting to appear (Ethereum Name Service, Namecoin, Unstoppable Domains), claiming to fix this problem on the blockchain, but it would probably force me to move away from madoke.org, which I intend to hold on to for dear life :D\nPerformance # It\u0026rsquo;s amazing once the content is cached on a lot of nodes, and in theory a major step towards scalability. The problem is that this is not instant, and because this website doesn\u0026rsquo;t generate significant traffic, it isn\u0026rsquo;t likely to become cached on a lot of nodes. After every new deployment, content resolution would take up to 5 minutes (with DNSLink adding significantly more latency) until all page loads were instant.\nI tried a lot of different combinations here, like pinning the files not only to my nodes but also to Piñata and Infura, and direct, explicit peering with Cloudflare\u0026rsquo;s and Piñata\u0026rsquo;s nodes, but the behavior was always the same on different public gateways (Cloudflare, IPFS, Piñata).\nA couple of searches revealed that this issue is happening with more websites, and we can confirm that by installing IPFS Companion or Brave and forcing all IPFS-enabled websites (the ones with DNSLink TXT records) to use the IPFS gateway rather than direct HTTP access. Even docs.ipfs.io is sometimes slow.\nWhat everyone seems to be doing here is not really a workaround but rather to keep giving users the traditional web2 alternative until the ecosystem has evolved to a state where it\u0026rsquo;s possible to just remove it without damaging the user experience.\nConclusions \u0026amp; Tradeoffs # In the end, the setup was a bit more complicated than the initial one, but to be fair, after the IPFS cluster is properly set up you don\u0026rsquo;t need to touch it anymore, so I\u0026rsquo;ve decided to continue to make the website available on IPFS, but without dropping the static hosting on Cloudflare. Thanks to GitHub Actions, this happens seamlessly after every commit. Users with Brave or IPFS Companion can use their own IPFS node/gateway to access the site if they want.
This is possible because adding the DNSLink TXT record to madoke.org will make those tools prompt the user if they want to navigate using IPFS.\n\u0026gt; dig +short TXT _dnslink.madoke.org \u0026#34;dnslink=/ipfs/QmbKyCXSYiJPQdvB5Z2MEBNGfjTbTfAA9x4THsS1PSLcPX\u0026#34; As to the public gateway, I\u0026rsquo;ve decided to drop it because the website is still being served on Cloudflare Pages, and not only the idea of a centralized gateway is pretty much the same, but it also goes against the web3, and IPFS\u0026rsquo;s own principles. The problem with public gateways, as exposed by Matt Ober, is that they were built to bridge the gap to web2, but they are fundamentally incompatible with distributed, peer to peer systems.\nAll in all, madoke.org can\u0026rsquo;t be exclusively on IPFS for now, so let\u0026rsquo;s just say that it is also available on IPFS rather than migrated, which was the original goal. It was a very interesting project that taught me a lot about IPFS, and made me understand that in order to truly benefit from peer to peer and distributed file sharing protocols, the tools that we for browsing the internet need to change their approach, but most importantly adopt these standards so that technologies like IPFS can be accessible to the average user.\n","date":"October 12, 2021","externalUrl":null,"permalink":"/blog/an-ipfs-adventure/","section":"","summary":"Excited with all the Web3 and self-hosting hype, I\u0026rsquo;ve set myself on a mission to self-host this blog on a self-hosted IPFS cluster. Here\u0026rsquo;s what I learned from it","title":"An IPFS adventure","type":"blog"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/cloudflare/","section":"Tags","summary":"","title":"Cloudflare","type":"tags"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/decentralized-storage/","section":"Tags","summary":"","title":"Decentralized Storage","type":"tags"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/dht/","section":"Tags","summary":"","title":"DHT","type":"tags"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/dnslink/","section":"Tags","summary":"","title":"DNSLink","type":"tags"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/infura/","section":"Tags","summary":"","title":"Infura","type":"tags"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/ipfs/","section":"Tags","summary":"","title":"Ipfs","type":"tags"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/pinata/","section":"Tags","summary":"","title":"Pinata","type":"tags"},{"content":"","date":"October 12, 2021","externalUrl":null,"permalink":"/tags/protocol-labs/","section":"Tags","summary":"","title":"Protocol Labs","type":"tags"},{"content":"","date":"August 6, 2021","externalUrl":null,"permalink":"/tags/legacy/","section":"Tags","summary":"","title":"Legacy","type":"tags"},{"content":"","date":"August 6, 2021","externalUrl":null,"permalink":"/tags/micro-services/","section":"Tags","summary":"","title":"Micro Services","type":"tags"},{"content":"","date":"August 6, 2021","externalUrl":null,"permalink":"/tags/monolith/","section":"Tags","summary":"","title":"Monolith","type":"tags"},{"content":"","date":"August 6, 2021","externalUrl":null,"permalink":"/tags/refactor/","section":"Tags","summary":"","title":"Refactor","type":"tags"},{"content":"Maintenance and evolution of legacy systems is a popular challenge in 
software engineering. Every time we produce a line of code, and it reaches production, we’re already creating legacy for our future selves or somebody else to look after. If not properly maintained, most systems won’t age well, and therefore most of us will eventually at some point in our careers face a rusty old behemoth that’s important to the business and simply cannot be shut down. Circumstances are different in every company, but there are some common milestones and pitfalls during the recovery or replacement of an old system. In this article we\u0026rsquo;ll go through some situations and try to understand the reasons that motivate them, as well as their potential solutions.\nThe business cannot stop # Generally, most people in a company will be unaware of the systems stability and scalability assuming the products are working normally until actual issues are visible. Apart from small businesses, engineers and other participants in the business tend to be organized in different reporting structures with different agendas, so if there isn\u0026rsquo;t a good communication and alignment strategy in place, this kind of issues escalates with the company\u0026rsquo;s size.\nAs soon as the product is out, the company will naturally try to acquire customers and make money to pay whatever investment was made to build the product. At this point, the engineering team should already be prepared with a maintenance and evolution strategy that allows them to allocate time to refactor the infrastructure at the same time (not speed) that they\u0026rsquo;re building new features. Following this strategy, the team minimizes the chances of losing control over the system.\nLosing control, however is also frequent and can happen for different reasons like for example the absence of a long term strategy, limited team resources or lack of communication between areas of the business. If this goes for long enough, it\u0026rsquo;s easy to imagine unhappy customers demanding stability (or their money back), unhappy management demanding solutions, and an unhappy engineering team that wants to delete the project and start over. Stopping that bleeding will only be possible if there\u0026rsquo;s a coordinated response from all the participants. Not only is this possible, but it\u0026rsquo;s also a good opportunity to reflect on how the product is delivered and operated, as well as the underlying organization of the people doing it.\nI lost control, what now? # What to do entirely depends on the circumstances of the product and the company. 
In some rare cases, the engineering team receives the \u0026ldquo;Get out of jail free card\u0026rdquo;, which allows them to start over from scratch (every developer\u0026rsquo;s dream), while on other cases there\u0026rsquo;s the mythical \u0026ldquo;stop everything until this is fixed\u0026rdquo;, but the most common scenario is the realistic one: There are live customers, there are new customers in the pipeline, and we need to fix all of this at the same time.\nWhile it\u0026rsquo;s possible to reach a safe harbor without radical moves, it will take a coordinated effort, so the first thing an engineer should do in such situations is to immediately assess the damage and raise awareness for it, so that the business as a whole is able to make a plan, and weigh each new decision in the light of these new conditions.\nNobody knows what\u0026rsquo;s in there # Engineers are always moving between companies, leaving a trail of software behind which makes working on something that already exists far more common than building something new. If you inherited a giant whale of code, developed by a team of unknown heroes who left nothing but a couple of empty documentation pages and good luck wishes behind, it’s time to get acquainted with it.\nIt\u0026rsquo;s not a simple process and certainly not fun, but you need to do it before you can actually have a plan. Otherwise, you will be changing your plan every time a new variable comes up. Gather everyone that was involved with the project, do workshops, ask questions and start building the documentation that never existed. Some points that might guide you in this quest:\nWhat does your app do? What is it used for? Who are your customers? What features do they use, and what don\u0026rsquo;t? What are the core/critical features? How can you operate (deploy, test, troubleshoot) it? What is the domain/architecture, and how coupled is it? With greater knowledge about the platform/application you will be better equipped to decide how to act over the system and build a technical/strategical plan that’s more realistic and useful to the business.\nNo space left for maintenance # The ‘legacy’ label is just around the corner for any system and can be applied shortly after a new product release. The company has already moved on to the next big project before you even notice, and nobody outside the team will think about maintenance. This is the point where we tend to make mistakes and also forget about maintenance, because the next big thing is now the top priority. You can (and should) build the most resilient and scalable software, but if there isn\u0026rsquo;t a maintenance process in place, chances are that after a while you or someone else will be forcefully remembered of this software\u0026rsquo;s existence because something started to fail. The problem now is that its framework and runtimes are outdated, or your CI/CD has changed, and you can no longer operate it like the other services because there’s a big technical debt to clear. For most systems, this is inevitable and most certainly will happen because a company’s workforce is limited. However, it is definitely possible to mitigate the impacts in the components that we believe that are core to the business and will be around for a while.\nThere are two important things to understand here:\nMaintenance is not a feature, so if you really need your system to be reliable and scalable, there is a permanent stream of work that should be kept going in parallel with everything else. 
The less effort you put into it, the bigger the debt to pay further ahead; The responsibility belongs to the whole company, rather than the engineering team alone. I’ve tried to handle maintenance and scalability in the past by “self organizing” the team or overestimating other projects, and it didn’t last long because the business would eventually reclaim the time we spent clearing tech debt with more aggressive dates or workload. It really needs to be part of the team’s public roadmap, so that everyone understands what’s going on. As a software engineer, leader or not, it is your responsibility to make it very clear to everyone why maintenance is needed and how much will it cost, as well as what are the consequences of not doing it. Because this goes against all that “fail fast” and “time to market” startup jargon, people outside engineering will be more reluctant to accept trading speed for stability if you don\u0026rsquo;t give them a clear reason to do it. A good strategy that’s helped me so far is to produce a document with the overall technical strategy and discuss the risks and benefits with everyone that can have an impact on the team\u0026rsquo;s agenda, including the team itself. Keep in mind that you shouldn\u0026rsquo;t do a typical sales pitch that people forget after 5 minutes. You want all the relevant stakeholders to participate and contribute as much as possible, so that this document is not only part of their work, but also a commitment to support you in later stages of the roadmap. Also make sure that it\u0026rsquo;s available to everyone, so that new joiners or people that didn\u0026rsquo;t participate can quickly catch up with what was decided. If the need is clear and everyone (that matters) buys in, then you can officially allocate capacity for maintenance and scalability on every system owned by your teams.\nNobody wants to touch it # As developers, we always prefer to start from scratch, rather than picking up somebody else’s code, even if it’s brilliantly designed. Improving or refactoring an application built ages ago, in a language or technology that’s not sexy anymore is something that won’t naturally motivate young teams, especially if they need to clean up messy code and processes. The following approach can help you maximize the team\u0026rsquo;s experience:\nAlways have a visible plan and keep reminding everyone about the end game. Pretty much like the technical roadmap is needed to justify your team’s allocation, you need to establish a mission and a contract with your team, so that those boring and unappealing tasks can be seen as a necessary evil to a greater good. Include senior developers in the project. More experienced people will be able to understand your vision more quickly and help you out spreading it. We’re all social creatures, and tend to follow good examples. Look out for refactoring opportunities that may come up during feature requests or bug fixes. It\u0026rsquo;s hard to refactor everything in one go, but if you are able to split your monolith in small domains, there may be an opportunity to build a new service for feature A and migrate some existing functionality too. The reasons for a legacy system to become uninteresting are typically related to bad technology choices, bad code or, as previously mentioned, lack of maintenance. 
The more of those factors you have in the mix, the harder it is for people to like it, so the idea here is to try to remove as much friction as you can (making the system easy to operate), and create an environment where your engineers are able to understand the end game and the opportunities to develop themselves.\nCan I refactor this? # If you think your software is up-to-date, just wait a couple of months. It\u0026rsquo;s fairly easy to build stable and long-lasting software that can remain untouched and reliable for years in production. However, it\u0026rsquo;s a different story to keep it updated, and it usually cannot be done without proper maintenance. As you read this article, there are probably people working on improving the JVM internals, a new release of the golang compiler, bugfixes in kubernetes, 15 new JavaScript frameworks, and the list goes on. The tech ecosystem evolves at a much faster pace because there are thousands of people working on it, whereas your app only depends on you or your team. That said, outdated tech stacks are the most common scenario for legacy software, and this is a problem for your team because it will keep on increasing the technical debt and the maintenance overhead. When paired with bad code practices, the mixture is explosive. So what to do? Refactor or reimplement the whole thing in \u0026lt;NEW_TRENDY_TECH\u0026gt;, right? It\u0026rsquo;s tempting to do a File \u0026gt; New Project straight away (I\u0026rsquo;ve done it hundreds of times), however things can quickly get out of hand because there is a very high cost associated with redoing something from scratch (just think about everything that\u0026rsquo;s not the actual code: tests, logs, monitoring, CI/CD, etc.). If the system is critical, and you can\u0026rsquo;t operate it, you probably want to invest some time in fixing it first, or at least making it easier to use and get acquainted with. There are some popular actions that usually apply to most cases:\nIntegrating with the existing CI/CD pipeline for automated builds, testing and deployments. It will speed up the development cycles and reduce confusion if every project shares the same CI/CD infrastructure; Using the same runtime and infrastructure as your modern services (e.g. Kubernetes). Like the point above, it\u0026rsquo;s easier if there is only one kind of infrastructure to operate; Upgrading frameworks and runtimes to the latest LTS version, to keep the system clear of potential security threats and eventually enable some features that you might need for refactoring later; Writing documentation for every maintenance process that needs to take place more than once, even if it\u0026rsquo;s just once a year. This will give everybody a way to find out how to operate the system; If you can clear some of these, the legacy system becomes no harder to operate than any other service. It will lower the maintenance overhead and create better conditions for refactoring and reimplementing should the opportunity present itself.\nFinal remarks # When starting at a new company, role or project, usually we think about painting on a blank canvas because it\u0026rsquo;s associated with starting fresh. While this can be true in many cases, it is more frequent to pick up already existing systems. If not properly maintained, a software system can get out of control pretty quickly, which will generate increasing technical debt and eventually customer failures.
Whether dealing with any kind of system, legacy or not, as engineers we should always raise awareness of the importance of keeping technical debt at safe levels and of securing space for maintenance in the team’s calendar.\nSystems in bad shape can be a big technical challenge, but also a thankless one. In order to properly evolve a legacy system, the whole team needs to be on board, and that’s usually hard to get because maintaining legacy is not something everybody is happy to do. In situations like this, it’s tempting to imagine a new system, start fresh and sell that idea to the team, but that can be a dangerous, if not impossible, path, especially if we’re talking about an already productive system with live customers. Before making any kind of decision it is very important to perform due diligence on the whole system and domain, so that nothing is left out of the maintenance and re-qualification plan.\nWhatever the strategy for evolving the technical platform is, it should be discussed and accessible to everyone, so that there is a clear goal and people don’t start pushing in opposite directions. It will also give the team a purpose and help with motivation.\nDepending on the current state of the product, the team might have periods with more space for refactoring and rebuilding components, but there may also be the need to prioritize critical bugs that are affecting customers, costing money, and need to be fixed long before the code they live in gets reimplemented. Whatever the case, it is very important that the team is able to operate the platform (deploy, test, troubleshoot), and that will probably require some effort to get in place. However, it’s probably the first thing that should be done, because it’s about creating conditions for every team member to work and reducing friction in the overall process. With knowledge about the full domain, and the coupling between its components, some feature requests or bug fixes might be a good opportunity to refactor or reimplement some parts of the system.\nAll in all, dealing with legacy monoliths can be hard, and it will require more than just technical effort to get right. The standard “delete and start over” approach is an illusion in most cases, because you can’t stop a business that’s already operating; so, in addition to juggling the team’s and the company’s interest in maintaining and modernizing legacy, the customers need to have a smooth experience while the transformation happens in the backend. 
With all these variables at play, I’d say that participating in this kind of operation is far more challenging and rewarding than building a new system from scratch.\n","date":"August 6, 2021","externalUrl":null,"permalink":"/blog/surviving-at-the-bottom-of-the-iceberg/","section":"","summary":"A classic tale of a team that deluded themselves with the mirage of breaking a monolith into shiny new microservices on a limited budget, in an understaffed and fast-paced environment","title":"Surviving at the bottom of the iceberg","type":"blog"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/business/","section":"Tags","summary":"","title":"Business","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/career/","section":"Tags","summary":"","title":"Career","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/devops/","section":"Tags","summary":"","title":"DevOps","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/documentation/","section":"Tags","summary":"","title":"Documentation","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/logging/","section":"Tags","summary":"","title":"Logging","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/monitoring/","section":"Tags","summary":"","title":"Monitoring","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/observability/","section":"Tags","summary":"","title":"Observability","type":"tags"},{"content":"","date":"May 9, 2021","externalUrl":null,"permalink":"/tags/product/","section":"Tags","summary":"","title":"Product","type":"tags"},{"content":"Technology and automation are tapping into every possible industry and the demand for software engineers has never been higher. Building successful companies gets harder all the time because competition creates new challenges everywhere, and as more people try to get themselves a good slice of the giant IT pizza, the need for differentiation between engineers becomes crucial to get a taste of that pepperoni.\nThe software engineer role # But what exactly do SWEs do? There are many areas of specialisation like embedded systems, AI, data or web, all of them different from each other but sharing the same engineering foundations. In the dot-com bubble days, IT companies defined clear borders between developers and every other role in the business. A typical setup would include different people for infrastructure, systems, databases, security, technical support, and software development, which was the accepted formula in other industries like construction, electronics or automotive. Around 20 years later, a huge and competitive internet (4.6 billion users, 60 percent of the world population) pushed software systems and the underlying technology to different levels, and the SWE role had to evolve in order to keep up with the new reality. While the classic setup is still around in many companies, the borders between roles are blurrier and we see SWEs taking responsibility for every part of the stack. 
That’s because our expected deliverables are now much closer to the resulting products and services, rather than “just code”.\nDelivering Products and Services # Ahead of this paradigm shift, large IT companies started picking up signals from the growing number of successful startups. How did organisations with fewer people and resources manage to deliver better products within a shorter time to market? Part of it was because their teams were small and cohesive. When you’re on a tight budget you can’t afford to staff a traditional IT structure, so on one side SWEs at startups had to build the full technology stack to support the product vertical, and on the other they didn’t suffer from the hassle of team dependencies or communication issues. Cloud service and infrastructure providers also played a key role in this transformation, lifting away a significant amount of the work required to build production-ready runtimes and making life easier for everybody whose product depends on a database, CDN, message queue, container scheduler, etc. Eventually, companies like Amazon, Airbnb, Spotify and Valve iterated on engineering team setups, creating different and innovative patterns that most organisations follow today. Having said all of this, the current state of the art indicates that today’s SWEs are required to work on a scope that goes beyond their code. This could materialise in different forms but, put simply, companies prioritise people with diverse skills rather than specialised ones.\nDifferentiating # How can we differentiate then? How do we reinvent ourselves to be more aligned with market demands? This is one of those vague questions that have no single correct answer; however, having worked in a couple of different industries, I found that there are some simple vectors we can follow to maximise our value to the company and, ultimately, as engineers. Following this set of principles over the last few years has helped me perform better both as a developer and as a tech lead.\nUnderstand the company and the product # ‘I Get It Wow’ by @dannynewtv on Giphy I cannot stress how important this is. In the early days of my career, I was eager to get cracking with new programming languages and frameworks, struggling to write better code than my peers, without ever really thinking about the real-world applications of what we were creating. While that had some limited value, I often ended up disagreeing with specific tasks, delivering features of no use to the customers, or having to redo work because the deliverables were not fit for purpose. The IT industry created a psychological barrier between engineers and the outside world, and it was actually after breaking that barrier that I began to understand the real value and the cost of the things we do. Companies appreciate engineers who are able to deliver what they want to sell and minimise wasted money, which is much easier when you understand the product and the company. 
The other hidden benefit of understanding what you are doing is that it will make you feel part of the product and more accountable for it, which can ultimately lead to being more motivated and performing better, or to choosing a different career path because you don’t agree with your company’s strategy.\nWork collectively, not individually # ‘Stop Motion Animation’ by @aardman on Giphy There’s an anti-social, competitive, misfit aura around SWEs (most people see us as The Big Bang Theory characters) because, well, some of us prefer to be around machines rather than people. That’s also a thing of the past. Nowadays, engineering is one of the most culturally diverse and inclusive qualified professions, and it has to be, because the market is way too big and competitive for one-man bands. There are exceptions, the unicorns of course, but the norm for scaling up businesses is to get more people, and in almost every case, successful products are the coordinated result of multiple teams. Collective work brings social and cultural challenges that go beyond technical skills. Throughout my career I have felt the frustration of disagreement or of others standing in my way, and most likely I caused frustration in them too. Luckily enough, I’ve crossed paths with several wise people who taught me to put myself in other people’s shoes before forming an opinion. In the end it is the company’s success that is on the line, and one’s success or failure will eventually end up reflected in everybody’s paycheck. If you always see things from that perspective, it’ll be easier to break what I call the FarmVille mindset (working in silos, unaware of the big picture). A positive attitude towards others delivers immediate value to the company and long-term value to yourself. Moreover, it’s wrong to think that it will interfere with your personal agenda; you’ll just have to figure out a different way to carry on with it, and when you do, you’ll have become a better engineer.\nDocument your work # ‘Singing Homer Simpson’ by www.simpsonsworld.com on Giphy Some time ago, at company X, I was doing a one-man job on a bespoke project that our product manager asked me to do. The delivery was a success, the customer was happy, documentation: zero. After a while I wasn’t around anymore and the customer requested some changes. You can figure out the rest, right? Unless you want to stay in the same job forever (not criticising), your presence everywhere will be temporary, so you need to cover for that by leaving tracks for the team to follow. Documentation is hard and boring, especially because it doesn’t always deliver short-term results. It is even harder if you start piling up items to document, so the best option is really to embrace it and consider it part of your deliverables. As a developer I never really cared much about documentation, and it took me a while to understand that if you always invest some time in making things easier for the next person, there will be fewer questions and dependencies on you in the future. 
Moreover, it makes your work more visible and exposed to the rest of the company, so in the end it’s not only important for the team and the company’s ability to use it, but also an investment in yourself.\nTest every deliverable # ‘Car Slamming’ by Unknown on Giphy This is probably one of the most popular discussions in the software engineering space, yet, due to the stressful urgency to deliver and get to market as soon as possible, companies tend to commit to aggressive timelines, and tests are one of the first things discarded to match those dates. The consequences are somewhat predictable, so this is one of those no-brainers that depend more on culture than on skills. I’ve worked on projects at both extremes (no tests vs. excessive tests) and, as expected, having no tests led to production issues affecting customers more often, while excessive tests led to fewer surprises but longer, costlier development cycles. Not investing time in testing is obviously worse for the company, but the aggressive timelines will always be there, since very few businesses have time and money to waste. It’s the SWE’s job to find an effective testing strategy that fits the team’s reality and strikes the right balance between missing test coverage and extreme testing. With a good testing strategy established and enforced by everyone, changes will be smoother and less prone to errors, increasing product reliability and your credibility as an engineer.\nMonitor your systems # ‘Twin Peaks’ by @twinpeaksonshowtime on Giphy It’s not really important during development, nor is it on your local machine. But have you ever thought about what it would be like if there were no fire alarms, airbags or thermostats? Monitoring is how you know what’s happening in your systems, and it’s what enables you to anticipate or mitigate outages. Traditional software development teams didn’t usually have these kinds of concerns because the maintenance and operation of the production runtime would sit under another team’s scope, and that team would take care of it. Monitoring isn’t a new thing; it has been around for as long as computer programs have existed. We’re just seeing it more on every team because the culture is changing and, consequently, pretty much every framework or application has a way to export metrics. Whether it’s through logs, metrics or tracing, your team will welcome a good observability setup on the features you deliver, because it will make everybody’s life easier when maintaining them.\nDesign secure systems # ‘Break In’ by @hulu on Giphy Not so long ago I was on a mission to deliver a new product. The team had been running against the clock to match the release date and, by the time it arrived, the product was visually ready for launch, so we launched. With very little experience in security and privacy, I didn’t pay much attention to the APIs being exposed, but after so much effort, clicking that deploy button felt like doing a good thing for the company. But was it really? Within 72 hours of launching we suffered an enumeration attack and had to mobilise lots of people to fix the breach and pick up the pieces. Rookie mistakes like this can be damaging, even if you’re not building customer-facing products, so it’s really one of those concerns that will make you a better SWE. 
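To make that rookie mistake a bit more tangible, here is a small, hypothetical sketch of two basic defences that help against enumeration attacks: non-guessable identifiers and uniform error responses. The names and data are illustrative only, not taken from the product mentioned above.

```python
# Hypothetical illustration of two simple anti-enumeration measures:
# 1) expose random UUIDs instead of sequential integers, and
# 2) return the same error whether a resource is missing or belongs to someone else.
import uuid

def new_order_id() -> str:
    """Random, non-guessable public identifier."""
    return str(uuid.uuid4())

ORDERS = {new_order_id(): {"owner": "customer-123", "total": 42}}

def get_order(order_id: str, requester: str) -> dict:
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != requester:
        # Identical response for "not found" and "not yours": probing leaks nothing.
        raise LookupError("order not found")
    return order
```

Measures like these are no substitute for proper authentication, rate limiting and a real security review, but they remove the cheapest attack of all: guessing the next id.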
Security is not as popular a discussion topic as testing, monitoring and documentation, but it’s probably more important given what can happen when things go wrong. Following standard authentication and encryption methods is a good place to start, but it’s not enough. It’s hard to build a secure system, but it’s harder to fix an insecure one and deal with the potential consequences, so don’t keep yourself from bringing the topic to the table when building software with your team.\nTake full ownership of your domain # ‘I got this Anna-Kat Otto’ by @abcnetwork on Giphy Product teams are like mini startups inside a company, responsible for the full product lifecycle from development to maintenance. In order to maximise the team’s effectiveness in doing so, every team member should be well aware of the product’s technical borders and be comfortable operating inside them. The thing is that, as I mentioned several times in this article, those borders will at a certain point go beyond your code. In other words, as a modern SWE, you need to be prepared to do more than just software development tasks. Depending on the context, this can range from orchestrating infrastructure and databases to attending meetings with relevant stakeholders or coordinating with other teams that the product depends on. This doesn’t mean that SWEs should do everything these days, but they should aim towards keeping up with all of the team’s responsibilities. Not doing so will eventually slow the group down and create management issues because person A doesn’t do X. Those cases happen very often because every team has a different setup, and there are always new people on board, some with less experience, others with more. An SWE with the right attitude will make the most out of the situation and ask the team for mentoring or training in order to close the gap and acquire new skills.\nAutomate repetitive tasks # ‘GIF’ by @jerseydemic on Giphy Because of the multidisciplinary setup that we’re creating in modern software engineering teams, a significant amount of what we do today is not building software, and in almost all cases it can be improved by adding some automation magic. Every time we automate something, be it a deployment procedure, data cleanup, backup, testing or customer invoicing, we are creating more time for people to focus on other tasks. It is ultimately an optimisation of the company’s resources and will result in overall better performance. Because we’re already comfortable coding and using APIs, creating small programs to deal with repetitive tasks can be a low-hanging fruit that immediately brings added value to the team and to yourself. Every case is different, of course, but as an SWE you should always try to get rid of repetition and routine, as it’s a waste of your skills and the team’s availability.\nFinal remarks # The principles discussed above resemble several industry trends like the DevOps culture and Site Reliability Engineering, which I purposely avoided naming in this article. They are the reflection of having worked with multiple individuals and companies trying to adopt those trends, presented in an unstructured, unscientific way that I believe better conveys my personal opinion. 
Over the years I’ve been working on all of them to improve my value as a software engineer and I feel it has paid off, hence I decided to share them, hoping they can help other fellow engineers. I also expect that, as good SWEs, readers will go beyond this article and get more input from peers or state-of-the-art literature, so that you can form your own opinions based on multiple sources, maximising your efficiency in becoming a more complete software engineer.\n","date":"May 9, 2021","externalUrl":null,"permalink":"/blog/the-complete-software-engineer/","section":"","summary":"What does it take to excel as a software engineer and what other aspects beyond programming can help us differentiate?","title":"The complete software engineer","type":"blog"},{"content":"","date":"March 15, 2021","externalUrl":null,"permalink":"/tags/credibility/","section":"Tags","summary":"","title":"Credibility","type":"tags"},{"content":"","date":"March 15, 2021","externalUrl":null,"permalink":"/tags/decision-making/","section":"Tags","summary":"","title":"Decision Making","type":"tags"},{"content":"","date":"March 15, 2021","externalUrl":null,"permalink":"/tags/delegation/","section":"Tags","summary":"","title":"Delegation","type":"tags"},{"content":"","date":"March 15, 2021","externalUrl":null,"permalink":"/tags/feedback/","section":"Tags","summary":"","title":"Feedback","type":"tags"},{"content":"Roughly 3 years have passed since I became a lead software engineer (or tech lead) and it’s been an exciting run with many achievements and pitfalls. In this article, I try to synthesize the most important lessons I’ve learned so far on my journey, hoping that they can be useful to other leads who are trying to develop their craft. Having led a couple of teams so far, I still have a lot to learn, but I certainly approach things differently than I did some years ago, and the results present themselves in the most unexpected ways. There’s no secret formula for leadership; it’s a learning process, pretty much like working with a new language or framework. I started off by reading some blogs and books, but the greatest lessons actually came from trial and error, and from constructive feedback from others.\n“A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves.” - Lao Tzu\nAccepting that we’re human # Coming from a computer engineering background, it took me a while to understand that leading people is different from writing software programs. As human beings we are unpredictable, and each of us is likely to react differently to the same stimulus. Modern product teams often include people from different cultures and countries, and they’re not exclusively made of software engineers anymore: UX, design and product are essential skills that the team needs in order to be complete. Teams are more diverse, each of us has different expectations, and for the tech lead the challenge is to establish a common culture and mindset that allows everyone to be coherent and aligned. Conflict is part of human nature but, on the other hand, history proves that, given the proper conditions and motivation, we can adapt to pretty much everything. I struggled to improve my social skills, and I still do, but it was probably the best investment I have made in my career up until now. 
Being able to connect and communicate with my teams, both individually and collectively, carried me a long way on this journey.\nBuilding respect and credibility # ‘Respect’ by Stillness InMotion on Unsplash Nobody gives a sh*t about your CV. Literally. And why should they? Haven’t you been defensive or skeptical about a new manager or teammate before? It’s the human factor coming into play. Experience is important, but it’s not going to make you an authority by itself. Referring to your past achievements will add very little, if not negative, value to the team dynamics because it’s something you did outside the group. It’s the way you relate to your current team and the experience you gain by building stuff together that counts, so even if you’re arriving from Google or NASA, you should always be humble with your problems and tackle them with proper collective analysis and conclusions. Every start is a brand new one, so don’t ignore the importance of building trust within a group that doesn’t know you. This has been my approach on every team that I’ve joined so far, both as tech lead and developer, and it is one of the hardest things to get because it will take time, experience, and a great deal of humility. This should be one of your main goals as a lead engineer, and in my opinion, one that will make the difference. Teams with respected leaders are always the most motivated and consequently the most productive.\nMaking decisions # This is something that will come up very often, and it’s probably what other people think your job is. It might seem like a no-brainer as assessment requests start to land in your hands.\n‘It’s a Trap’ by @starwars on Giphy Don’t forget that those decisions will result in tasks to be executed by the team. When asked to plan or design a new feature, I usually find myself tangled up in a few recurring challenges:\nPeople not feeling included in technical decisions: Software architecture is not just black and white, and the more you learn, the more opinions you have. Again, because we are all humans, there will be different opinions. I struggled a lot with this throughout my career as a tech lead, but I had also been on the other side, so most of the time I offload the ownership of the decision from myself to the group. This becomes impractical to execute on every single decision, given that you will have other people around like product and engineering managers poking you with dates, but it is something that you should do whenever possible. Spikes in the Scrum framework exist exactly for this, and the team members will not only feel included when doing them, but also improve technically and understand your side of things. If you get everybody to agree and understand what needs to be done, sprints are more likely to be successful.\nTechnical decisions being challenged: This probably goes back to the previous point, but understandably, not everything can be decided as a group, for the sake of simplicity. Whether you make those decisions yourself or delegate them to other team members, they should be properly justified and documented. Doing so will not only help others understand why those decisions were made, but also prevent them from being challenged again in future situations. 
It will give you credibility and make your team feel that they are learning new things.\nYour teammates and peers cannot make a decision together: This is where you’ll need to be practical because, at the end of the day, the ownership will be yours, so if you feel that the conversation is not headed towards a healthy outcome, you probably need to do something. What to do totally depends on the situation itself, but it usually means reducing scope, clarifying requirements, reducing the number of people in the conversation, or bringing a more senior teammate into the discussion to break the deadlock.\nYou’re not entirely sure of what to do: There is nothing wrong with this. Tech leads don’t need to be experts on every single matter, so make sure that you always bring people with the required skillset into the discussion. If there are too many unknowns, the situation probably justifies creating a spike for research and getting input from other teams or more senior engineers.\nYou or your team made a wrong decision: It will happen, and when it does, because the ownership is yours, you’ll need to deal with it, so be prepared to roll back and get back to the drawing board with your team. It is good practice to always draft a quick rollback plan for most decisions, because, well, you might need it.\nYour decision impacts other teams: As the tech lead, you’ll probably be the one with the best understanding of how the company and the technology are structured. Product teams often depend on other teams (services, SDKs, APIs, etc.) to deliver their own product, so this might be a common scenario. Ideally, you should try to avoid dependencies as much as possible, but if that isn’t possible, then you should reach out to your peers and make sure the roadmaps are aligned and there is an SLA between both teams.\nIn the end, it all boils down to finding the right balance between group and individual decisions. Making decisions as a team is vital, and will give everyone a sense of ownership and motivation that they will not have if you design everything alone. However, group decisions are more expensive, so it is impossible to decide everything together. I haven’t yet succeeded in finding the perfect setup for decision-making in a team, and currently the approach I find correct is to make team decisions as much as possible using the sprint spikes, and to delegate individual decisions to appointed team members depending on the project. Using this approach, the role of the leader is just to assist others in the decision-making process and to make sure that every decision is presented to the team and documented.\nMentoring by delegation # ‘Mentoring’ by Nadir sYzYgY on Unsplash Following the line of thought in the previous conclusion, this is probably the second most important goal that you should chase as a tech lead. It didn’t strike me at first and, as a consequence, in the early days my teams were heavily dependent on me because I simply filled in the gaps for everyone. If the team is delivering results and meeting deadlines you probably won’t notice immediately, but the signs are there: productivity drops when you’re off or unavailable, complex tasks start to pile up, and the team is not able to scale or respond to pressure. 
It can be very tempting to step in and help the team give that final push and deliver the project, but when this starts to become the norm, you’re already in trouble. The team needs to be able to work without you, and the main focus of your work should not be to deliver an assessment for everything or to build the perfect architecture, but to give your team the tools to do it without you. From the moment you start your role as a tech lead, you should try to make yourself obsolete. And you do that by mentoring, organising, documenting, delegating. Building an autonomous team is a hard process, and it will take you a lot of time, repetition and perseverance. My current approach is to make small improvements to the team’s autonomy on every task that comes to me by asking the same question every time: How would the team do this if I were not here? And the answer pretty much depends on the situation. It might be writing a tutorial or documentation, it might be pair programming with someone, it might be reviewing someone’s code and pointing out the details, or ultimately it might be delegating smaller bits of leadership to other team members (by project, for example) while overseeing everything from above. In order for you to be successful, the engineering manager should share this mindset with you and help you out on a daily basis, so make sure that you always keep him on board.\nLeaving no man behind # From the moment we understood that we needed to hunt in groups in order to survive, we developed a tribal culture that is present pretty much everywhere in today’s society, for example in football, religion, music genres, countries or, not surprisingly, engineering teams. When taken in moderation, a bit of rivalry and competition between teams is good for the organisation, but what’s really important is that your team members feel part of the team and are willing to do the best job possible to represent it. This goes back to the motivation subject, but my point here is that you should make everyone feel represented and comfortable enough to make mistakes. It’s not a question of if, but rather of when your team will make a mistake (we’re humans, remember?) that has consequences for the outside environment, potentially a customer or another team. These are the moments where good organisations will demand an explanation of what happened, why it happened and how your team will fix it. This kind of scrutiny is crucial for improving the company and the team, but if not done properly it can be damaging to the team members and, consequently, to the team. In my opinion, the tech lead or a senior team member should be able to take responsibility in these situations and take action towards avoiding the same error in the future:\nThe error was caused by something inside the team domain: This is the moment to gather everyone round and, as a team, understand what went wrong, what should have been done to avoid it, and develop a mitigation plan. It is also the place to make it very clear that the responsibility belongs to the whole team, not to a single member, or to anyone outside the team.\nThe error was caused by something outside the team domain: Another common situation in microservice architectures. 
In these situations, it is very important that you, or a senior team member, reach out to the team responsible for that service, do the root cause analysis and develop an action plan that may involve both teams.\nBy taking accountability as a group and making other teams accountable (when that’s the case) you will make your teammates feel represented and, hopefully, engaged and proud to be a part of the team. Consequently, as said before, this is what makes the team strong and productive.\nGetting Feedback # Be honest with yourself, be humble with others. The more you read, and the more you fail, the better you will be in your role, but nothing replaces direct feedback from those who work with you. The engineering manager can give you very useful feedback as someone who is also in a leadership role; he is also the one who works closest to you and can capture feedback from the 1:1 meetings with the rest of the team. However, I also actively try to get feedback from other sources: first from my teammates, but also from other tech leads, staff or principal engineers, product managers or even the CTO.\nFeedback is important and it should be the starting point when trying to perform better. The more serious and methodical you are about it, the more useful it will be to you. There is always something to improve; if you don’t feel that way, you’re probably wrong.\n","date":"March 15, 2021","externalUrl":null,"permalink":"/blog/thoughts-on-swe-leadership/","section":"","summary":"An overview of some personal experiences on the engineering leadership track and opinionated advice derived from them","title":"Thoughts on software engineering leadership","type":"blog"},{"content":"","date":"August 14, 2019","externalUrl":null,"permalink":"/tags/amazon-sns/","section":"Tags","summary":"","title":"Amazon SNS","type":"tags"},{"content":"","date":"August 14, 2019","externalUrl":null,"permalink":"/tags/apple/","section":"Tags","summary":"","title":"Apple","type":"tags"},{"content":"","date":"August 14, 2019","externalUrl":null,"permalink":"/tags/firebase/","section":"Tags","summary":"","title":"Firebase","type":"tags"},{"content":"","date":"August 14, 2019","externalUrl":null,"permalink":"/tags/message-queueing/","section":"Tags","summary":"","title":"Message Queueing","type":"tags"},{"content":"","date":"August 14, 2019","externalUrl":null,"permalink":"/tags/my-truphone/","section":"Tags","summary":"","title":"My Truphone","type":"tags"},{"content":"","date":"August 14, 2019","externalUrl":null,"permalink":"/tags/push-notifications/","section":"Tags","summary":"","title":"Push Notifications","type":"tags"},{"content":" This article was originally published externally, read the original here. Sending push notifications to a device or browser implies that, in some way, the remote server is able to initiate a conversation with the client and not the other way around. Since the typical app/client does not expose a publicly available endpoint, this is achieved by maintaining an active connection between the client and the server (e.g. a TCP socket in accept mode) that is sending the notifications. The concept is really simple but, depending on how it is implemented, it may raise a variety of scalability problems. If every app on a phone were to implement it, the device’s resources would be quickly drained. 
Moreover, app developers would need to deal with network failures, the app going into the background and, most importantly, security; otherwise, such a connection could become an attack surface. Android and iOS solve this problem at the framework level by managing the connection to Google’s or Apple’s push notification services themselves, while making the functionality available to all apps on the device. Not only is this far more secure and efficient, but it also makes push notifications a breeze to implement.\nSetup # For an app to take advantage of this connection, and be able to receive push notifications, some configuration steps are required. These, of course, depend on the operating system/platform, but assuming that we’re talking about Android and iOS, these configurations would involve Firebase Cloud Messaging (FCM) and/or Apple Push Notification Service (APNS).\nThe act of receiving push notifications happens in two different moments: Registration and Push. The first usually happens on app boot after prompting the user for permission to send notifications, while the second is where the notifications are actually sent to the device, which could be whenever your business logic decides. In the My Truphone app, for instance, you get a push notification whenever your data plan is nearing depletion, or when your subscription is auto-renewed.\nRegistration # When registering to receive push notifications, the first step is to obtain a device token, which can be done with both the Android and iOS SDKs. The app developers should at that point store the token on the server side, so that it can be used as a reference to the device at a later time, when the app will most likely be closed or running in the background. At Truphone, we built a Token Registration Service for this purpose, where the recently acquired device token is stored in association with other business logic identifiers such as the customer id, the device id, the ICCID and the different MSISDNs associated with a Truphone eSIM profile. By having all of these associations, we can later pick up a business event and find the corresponding token, should we need to send a push notification.\nPush # Sending a notification to the device may require different implementations, according to the use case. However, the last part of the path is shared by all of them. Notifications usually start with a business event. For the My Truphone app, for example, this can be a marketing communication, or a plan that has triggered the auto-renew functionality or is nearing depletion/expiration. There are a plethora of ways in which these business events can be handled but, for the sake of simplicity, let’s just assume that there is a message queuing system like ActiveMQ, RabbitMQ or Kafka that delivers all of these business events to a handler, which will then process them and determine whether to send a push notification or not.\nAfter the business event is processed and the handler has decided what it wants to say to the user, the notification is ready to be sent to the device. The first step is to go back to the Token Registration Service and convert the business identifier into a push notification token. Once this is done, the notification can be sent to FCM/APNS and forwarded to the device. 
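To make that flow concrete, here is a minimal, hypothetical sketch of the handler side: a business event comes off the queue, the business identifier is converted into a device token, and the payload is dispatched to the right platform. The names (BusinessEvent, lookup_device, send_apns, send_fcm) are illustrative stand-ins, not the actual Truphone implementation.

```python
# Hypothetical sketch of the "business event -> token lookup -> push" path described above.
from dataclasses import dataclass

@dataclass
class BusinessEvent:
    customer_id: str
    kind: str  # e.g. "plan_near_depletion" or "plan_auto_renewed"

# Stand-in for the Token Registration Service: business identifier -> device token.
TOKEN_STORE = {
    "customer-123": {"platform": "ios", "token": "apns-device-token"},
}

def lookup_device(customer_id: str) -> dict | None:
    """Normally an HTTP call to the Token Registration Service."""
    return TOKEN_STORE.get(customer_id)

def send_apns(token: str, payload: dict) -> None:
    print(f"APNS -> {token}: {payload}")  # a real implementation would call Apple's API

def send_fcm(token: str, payload: dict) -> None:
    print(f"FCM  -> {token}: {payload}")  # a real implementation would call Firebase

def handle_event(event: BusinessEvent) -> None:
    """Consume one business event from the queue and decide whether/what to push."""
    device = lookup_device(event.customer_id)
    if device is None:
        return  # no registered device, nothing to send
    payload = {"notification_id": event.kind}  # only an id is sent (see i18n below)
    if device["platform"] == "ios":
        send_apns(device["token"], payload)
    else:
        send_fcm(device["token"], payload)

handle_event(BusinessEvent(customer_id="customer-123", kind="plan_near_depletion"))
```

In practice the two send functions would sit behind a single façade, which is exactly where the next paragraph picks up.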
Since these are two different services with two different implementation methods, most app developers end up using a façade that provides both implementations under the same interface, such as AWS Simple Notification Service, which we chose for My Truphone.\nLocalisation (i18n) # While most apps don’t implement i18n, once you’ve translated your app to a different language, let’s say German, you certainly don’t want your notifications to be sent in English, and vice versa. A common mistake at this stage is to add logic to your backend to determine what text will be sent to the device. Why?\nIt makes your server-side code more complex; It makes it difficult to add more translations to the app, because there is one additional place with text to translate; Your server doesn’t know the phone’s language/locale. We followed Apple’s recommendations for implementing i18n and defined an id for each of the notifications. Instead of sending the text, only the notification id is sent. The actual content and translations for these notifications are bundled with the app, as well as the remaining copy. While this solves all three problems mentioned above, it makes the process less flexible, requiring the developers to release a new app version if they want to add a new notification type.\nThe work behind sending push notifications is usually underestimated, probably due to its fairly small expression in the UI (after all, it’s just a message that pops up on the user’s phone like many others). However, if you don’t plan ahead and build a solid infrastructure with a clean architecture that is able to support different use cases, you might end up with scattered bits of unmaintainable code, especially if you have a complex backend architecture where different domains can generate business events that will ultimately lead to sending a notification to the user.\n","date":"August 14, 2019","externalUrl":null,"permalink":"/blog/the-journey-of-a-push-notification/","section":"","summary":"Lessons learned from building a push notification system for a cross-platform mobile application at Truphone","title":"The journey of a push notification","type":"blog"},{"content":"","externalUrl":null,"permalink":"/categories/crypto/","section":"Categories","summary":"","title":"Crypto","type":"categories"},{"content":" Software Engineer ","externalUrl":null,"permalink":"/authors/davidsimao/","section":"Authors","summary":" Software Engineer ","title":"David Simão","type":"authors"},{"content":"","externalUrl":null,"permalink":"/categories/engineering/","section":"Categories","summary":"","title":"Engineering","type":"categories"},{"content":" Staff Product Manager ","externalUrl":null,"permalink":"/authors/nunocoracao/","section":"Authors","summary":" Staff Product Manager ","title":"Nuno Coração","type":"authors"},{"content":"","externalUrl":null,"permalink":"/categories/software-development/","section":"Categories","summary":"","title":"Software Development","type":"categories"},{"content":"","externalUrl":null,"permalink":"/tags/steve-jobs./","section":"Tags","summary":"","title":"Steve Jobs.","type":"tags"},{"content":"","externalUrl":null,"permalink":"/categories/web3/","section":"Categories","summary":"","title":"Web 3","type":"categories"}]