<![CDATA[Alien's Website]]>https://aien.me/https://aien.me/favicon.pngAlien's Websitehttps://aien.me/Ghost 6.19Fri, 24 Apr 2026 19:12:10 GMT60<![CDATA[How to make your Win11 look like 2046!]]>https://aien.me/how-to-make-your-win11-look-like-2046/690e3c37e6f4c6000142946aFri, 07 Nov 2025 18:55:43 GMT

I have always had a passion for making my desktop look fancy. It all began back in the days of GNOME 2, when people were enjoying Compiz! It felt like my desktop belonged to me, much more than just changing a wallpaper.

Now, long story short, I was looking around for tools to customize my desktop. A few things were, and still are, the most important to me:

  1. Performance! The tools I use should have little to no resource overhead. I run my IDE, Docker, and Unreal Engine on this machine, and I also play games, so I didn't want the tools getting in my way.
  2. Open Source! I wanted to at least know what is happening in the code and be able to change it if I ever have to. Being open source also gives me a bit of a sense of security.
  3. Customization! I want to be able to do almost anything with the tools, from changing colors to making sure Aero (the window blur effect) is applied correctly.

Now, for this, I eventually made this minimal setup:

[Video: a 28-second clip of the finished desktop]

I mean, I know! It doesn't really look like 2046 😃 but it's still minimalist enough for me.

For this I used the following tools:

  1. Yasb: A highly configurable Windows status bar written in Python.
    1. I installed the app, then used the "Aero Glass" theme.
    2. This adds the upper taskbar you see. However, it doesn't include the audio visualizer effect, so when someone talks or you play a sound, the top bar won't react with any color. We will get to this.
  2. Windhawk: The customization marketplace for Windows and programs.
    1. Installed the "Windows 11 Taskbar Styler" and "Slick Window Arrangement" plugins.
    2. For the taskbar styler, I am using the 21996Taskbar theme. No other config there.
  3. Wallpaper Engine: Use stunning live wallpapers on your Windows desktop.
    1. Then I installed the "Simple Spotify" wallpaper.
    2. It is important to enable the "Taskbar" option in the settings:
[Screenshot: the "Taskbar" option in Wallpaper Engine's settings]

You can configure the rest the way you like.

That was it. Just wanted to show you some settings I have and how I customized my desktop.

Have a great day.

]]>
<![CDATA[AI in everyday Life (For non-techies)]]>https://aien.me/ai-in-everyday-life-for-non-techies/6819fb24227b3900015d35c1Wed, 07 May 2025 10:03:59 GMT

Artificial intelligence has moved from science fiction to an everyday reality that shapes our lives in countless ways. From the moment you wake up to when you go to sleep, AI works quietly in the background, making recommendations, filtering information, and even helping control your home. In this post, I'll explore how AI has evolved, where it shows up in your daily routine, and practical ways to make it work better for you.

But before I begin, what exactly is AI?

At its core, artificial intelligence is technology designed to mimic human thinking. Modern AI systems learn from data to recognize patterns and make predictions. Unlike traditional software that follows fixed instructions, AI can adapt and improve over time, similar to how we learn from experience. This ability to "learn" is what makes AI so powerful in our everyday tools.

Believe it or not, AI has been around since the 1950s! But the AI we know today, the kind that powers your phone's voice assistant or suggests what movie to watch, is relatively new.

In the 1990s, AI hit some growing pains. Early systems relied on rigid rules and struggled with real-world complexity. Think of them like overly strict recipe books that couldn't handle unexpected ingredients. These systems were expensive to maintain and often disappointed.

During this challenging period, AI research continued quietly behind the scenes. Scientists began focusing less on rigid rules and more on teaching computers to learn from examples. By the mid-90s, the field was shifting toward systems that could adapt and improve with experience, laying the groundwork for today's AI revolution.

The real breakthrough came in the last decade when AI finally became part of our everyday lives. In 2011, Apple introduced Siri to the iPhone, and suddenly it felt like our phones could actually understand us. Just a year later, in 2012, researchers created a much-improved learning system that could recognize images with surprising accuracy.

This moment sparked what we now call the AI boom! Almost overnight, technologies that seemed like science fiction became possible: cars that could drive themselves, phones that could understand natural speech, and homes with smart devices that adjust to our habits. These innovations paved the way for the AI tools we now use daily without even thinking about it. From voice assistants that play our favorite songs to streaming services that somehow know exactly what show we'd like to watch next.


AI in the 2020s

AI is often the invisible sidekick in everyday gadgets and apps. Let's explore some familiar examples of AI in action in daily life, and how they make things easier or more personal.

Voice Assistants

Voice assistants such as Siri, Alexa, and Google Assistant have become our digital companions. When you say "Hey Siri, tell me a joke" or "Alexa, play some music", AI is listening. It understands your voice and then finds an answer or plays a song for you.

Think of it like talking to a helpful friend who listens, searches for info, and talks back. Behind the scenes, the assistant has learned from thousands of voice examples so it knows what your words mean and what replies make sense. Over time it even adapts to your speech patterns, getting better at understanding your accent or favorite commands.

This means mornings can be smoother: you can set alarms, check the weather, or send texts by voice, without lifting a finger.

Smart Home Devices

AI also powers gadgets around our home. For example, a smart thermostat uses AI to learn our comfort preferences (warm in the morning, cooler at night) and adjusts itself, potentially saving energy without us thinking about it.

Recommendation Systems

Have you ever wondered how Netflix knows what shows you might like, or why your Spotify playlist feels just right? That's AI watching your choices. Every time you watch, listen, or shop, AI takes note. It notices patterns such as "this person likes mysteries" or "that person jams to something on Mondays" and then suggests new content it thinks you'll enjoy.

These smart suggestions save you time and help you discover things you might have missed on your own.
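To make the pattern-spotting idea concrete, here is a toy sketch in Python: it ranks unseen titles by how often their genres appear in a viewer's history. The titles, genres, and scoring are all made up for illustration; real services use far more sophisticated models than this.

```python
# A toy sketch of how a recommender notices patterns.
# The catalog and viewing history below are made-up examples.

def recommend(watch_history, catalog, top_n=2):
    """Score each unseen title by how many of its genres the user has watched."""
    # Count how often each genre appears in the user's history
    genre_counts = {}
    for title in watch_history:
        for genre in catalog[title]:
            genre_counts[genre] = genre_counts.get(genre, 0) + 1

    # Score every title the user hasn't seen yet
    scores = {
        title: sum(genre_counts.get(g, 0) for g in genres)
        for title, genres in catalog.items()
        if title not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

catalog = {
    "Knives Out":            ["mystery", "comedy"],
    "Gone Girl":             ["mystery", "thriller"],
    "The Girl on the Train": ["mystery", "thriller"],
    "Se7en":                 ["thriller", "crime"],
    "Paddington":            ["comedy", "family"],
}
history = ["Knives Out", "Gone Girl"]
print(recommend(history, catalog))  # mystery/thriller titles rank first
```

The "this person likes mysteries" hunch is literally just counting: the genres you watch most push similar titles to the top of the list.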

Spam Filters

Ever noticed how very few spam emails reach your inbox nowadays? That's AI at work. Email services use intelligent filters to spot unwanted messages (ads, phishing attempts, etc.) and sweep them into spam folders. It's a lot like having a vigilant door guard for your mailbox, who checks every letter and tosses out anything suspicious.

The AI "learner" behind these filters has seen millions of spam examples, so it's extremely good at catching junk. Google says its AI-powered spam filter stops over 99.9% of spam, phishing, and malware from getting to Gmail users.

In practice, that means you get mostly the good mail you care about, which really cuts down on annoyance and risk.
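For a feel of what "learning from millions of spam examples" means, here is a deliberately tiny sketch: it counts which words showed up in known spam versus known good mail, then flags new messages whose words lean toward the spam side. The training messages are made up, and real filters (like Gmail's) use vastly richer models.

```python
# A toy sketch of a "learned" spam filter: score words by how often they
# appeared in known spam vs. known good mail. The training data is made up.

SPAM_EXAMPLES = ["win free money now", "free prize claim now", "cheap meds now"]
HAM_EXAMPLES  = ["lunch at noon tomorrow", "project notes attached", "see you at noon"]

def train(examples):
    """Count how often each word appears across the example messages."""
    counts = {}
    for message in examples:
        for word in message.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

spam_counts = train(SPAM_EXAMPLES)
ham_counts = train(HAM_EXAMPLES)

def looks_like_spam(message):
    """Flag a message if its words showed up more in spam than in good mail."""
    words = message.split()
    spam_score = sum(spam_counts.get(w, 0) for w in words)
    ham_score = sum(ham_counts.get(w, 0) for w in words)
    return spam_score > ham_score

print(looks_like_spam("claim your free prize now"))  # True
print(looks_like_spam("lunch at noon"))              # False
```

The more (and better) examples the filter has seen, the sharper that word-by-word judgment gets, which is why services with billions of messages do so well.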


How AI Really Works

Each of these examples might feel complex, but at heart they're powered by AI "learning" from lots of data. Let's break it down in simple terms: AI is like a very trusty sidekick who watches what you do, takes notes, and then tries to help next time.

It learns that you say "play metal" when you wake up, or that you tap heart emojis on workout posts, and adjusts its behavior to match. In everyday use, you can think of AI as your friendly helper who keeps getting smarter the more it learns about you.

Ethics and Practical Concerns

It's awesome having smart helpers, but it's also good to be aware of the trade-offs. Here are a few key concerns to keep in mind, not to scare you, but so that you're informed.

Privacy & Data Security

AI features usually need data to work well. For example, a personalized news feed needs to know which articles you like, and a voice assistant needs to record your voice commands. All that information can be sensitive. The more data these systems collect (like location, browsing history, health info, etc.), the more we need to trust that it's handled safely. Experts warn that AI's convenience can come with risks like data breaches or misuse of information.

In other words, your personal "profile" is what trains these AI systems for you, so it's wise to check privacy settings: see what data an app really needs and turn off anything extra. Keep your software updated too, since updates often patch security holes. And even more: Who's handling this data? Can this person or organization be trusted?

Algorithmic Bias

Sometimes AI can pick up human prejudices. If an AI system learns from data that's unbalanced (for example, mostly data about one group of people), its suggestions or decisions might end up unfairly favoring or disfavoring some people.

One famous concern is that some facial-recognition software works better on lighter skin tones because its training set was unbalanced. It's like if a teacher graded mostly students of one background better because she was used to teaching them: it's unfair.

The good news is that researchers are aware of this and are working on it. But as users, it's good to remember that AI suggestions can reflect quirks in the data they learned from. That's why it's smart to take any AI recommendation with a grain of salt and remember that a human check is always important.

Over-reliance

When life gets easier with AI, it's tempting to let it do everything. Relying too much on AI can make us a bit less sharp in some skills. For instance, always following GPS might weaken your own sense of direction, or over-trusting a smart assistant might make us less careful about double-checking information.

It's great to use AI for convenience, but every now and then, it's healthy to double-check things yourself or take a route you know by heart.

Think of AI as a helper, but stay the boss in charge.

Being aware of these concerns lets you enjoy AI's benefits while staying smart about the trade-offs. Remember, like any powerful tool, AI works best when we guide it responsibly.


Tips for Non-Tech Users to Get the Most from AI

If you're thinking, "This is cool, how do I get more out of AI (and stay safe)?", here I have some friendly tips for you, coming from a tech guy (me).

Customize Your Experience and Train Your AI

Many AI services such as ChatGPT, Claude, or Spotify improve if you give feedback. For example, on Netflix or Spotify, thumb-up your favorite songs or shows so it knows what you like. Over time, your recommendations get more on target.

Similarly, voice assistants let you teach them your pronunciation or favorite commands. Spend a few minutes in the settings of an app: set your preferences (like news topics you care about) or tell the assistant your name and home address. This upfront effort pays off in more useful results.

Protect Your Privacy and Take Control of Your Data

Peek into your app permissions and privacy settings. You might find options to turn off location tracking when it's not needed, or to delete past voice recordings. For instance, you can often tell your smart speaker not to store your voice commands permanently.

Use strong, unique passwords for your devices, and enable two-factor authentication if possible. Even better, use a password manager such as 1Password or Bitwarden, and never, ever, ever use the same password for different apps or websites! Also, keep your software up to date; updates often include security fixes, so update your phone and apps when prompted. These small steps help keep your personal info from wandering off.

Be Selective With Personal Data and Choose What You Share

Only link AI services to accounts you really trust. For example, if a fitness app asks permission to access your health records or contacts, decide if that's really needed or just a convenience. Sometimes skipping extra permissions or using alternatives that require less data can be a safer choice.

Learn Basic Commands and Discover Hidden Features

Spend a moment learning what your AI assistant can do. For example, Google Assistant has "Routines" you can set (like saying "good night" to turn off lights and set an alarm). Discovering these features can save you time.

Stay Curious, Ask Questions and Be an Active User

If an AI answer seems weird, feel free to ask "why" or search for a second opinion. AI is smart, but it doesn't have a human's common sense. It can summarize and compute, but it doesn't truly understand context like a friend would. Remember that it has been trained on data, which can be biased or incomplete.

If something sounds off, verify. This keeps you in control and helps "train" your own knowledge.

Basically, treat AI tools as partners. Tweak their settings to your liking, give them good data (like rating content), and keep an eye on what information you share. Use them to enhance your routine, but don't feel locked in. You can always turn off a feature or switch services if it doesn't feel right.


Using AI Tools in Everyday Life

Cloud AI tools have big advantages: they run on company servers, so you don’t need a fancy computer and they’re always updated with the newest models. For example, using ChatGPT or Claude is as simple as opening a browser or smartphone app and chatting.

However, there are trade-offs. All of the text you type is sent over the internet to their servers. That means your data travels out of your computer, which raises privacy concerns. Also, the most advanced features or larger models (like GPT-4o on ChatGPT) require a paid subscription. In short, cloud AI is very convenient, but less private and sometimes costs money for premium power.

On the other hand, you can run AI locally on your own computer or device. This means downloading a program and using models that live right on your machine, without ever sending your data online. There are now user-friendly tools like LM Studio and Ollama for this. These let you "easily run LLMs like LLaMA or DeepSeek on your computer" with no expertise required.

The big benefit here is privacy and control: everything stays on your device, and you don’t pay per query.

Here are some specific tools to check out, ranging from easy cloud apps to offline models:

  • ChatGPT (OpenAI): A very popular chatbot you can use on openai.com or via the mobile app. It has a free tier (GPT-3.5) and a paid "Plus" plan for GPT-4. It's great for writing assistance, answering questions, tutoring help, and more.
  • Claude (Anthropic): Another conversational AI you can try at claude.ai. It's built to be "helpful, honest, and harmless," and excels at tasks like summarization, creative writing, and Q&A. For example, Claude can assist with summarizing articles or drafting ideas, often with a very natural style. Quora even offers Claude within their Poe app (so you can chat with multiple AIs in one place).
  • Poe (Quora): An all-in-one AI chat app (on web and mobile) that connects you to several bots, including ChatGPT, Claude, and others. Poe makes it easy to switch between different AI assistants without leaving the app. There's a free version and optional paid features.
  • Perplexity AI: A search/chat tool that uses AI to answer questions with citations. It's free with pro options, and it's handy for research or fact-based queries. For example, you can ask Perplexity to gather quick facts or compare options when making a decision.
  • Open-source models (LLaMA, Mistral, etc.): These are free-to-use AI models created by research groups (Meta's LLaMA models, and Mistral AI's releases, among others). They aren't "apps" by themselves but are often available through local tools (such as LM Studio).
  • LM Studio (Windows, Mac): An app for downloading and chatting with open-source LLMs offline. It lets you pick models like LLaMA, DeepSeek, or Mistral and run them on your PC. The website even says you can do it "with no expertise required". (Good for someone who wants local AI without complex setup.)
  • Ollama (macOS/Linux/Windows): Another local LLM platform. It provides a simple command-line or app interface to run models like Llama 3, Mistral, and Gemma on your machine. Ollama focuses on privacy (since it's local) and ease of use for developers.

Both LM Studio and Ollama make offline AI much more accessible. You can try them if you're curious about "AI without the internet," but for most everyday users, starting with cloud services is easiest.
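If you do go the local route, it can be surprisingly approachable even from a few lines of code. Here is a minimal sketch of talking to a locally running Ollama server over its REST API (by default it listens on `http://localhost:11434`, and its `/api/generate` endpoint takes a model name and a prompt). The model name `llama3` is just an example; use whatever model you have pulled with `ollama pull`.

```python
# A minimal sketch of querying a local Ollama server. Nothing leaves your
# machine: the request goes to localhost, where Ollama runs the model.
import json
import urllib.request

def build_request(model, prompt):
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    # stream=False asks for one complete JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model, prompt):
    """Send a prompt to the local Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires Ollama to be installed and the model pulled first):
#   print(ask_local_model("llama3", "Explain photosynthesis to a 12-year-old."))
```

You don't need to write any code to use Ollama day to day (the app and `ollama run` command cover that); this is just to show that "local AI" really is an ordinary program answering ordinary requests on your own computer.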

How a Good Prompt Works

Think of prompting as having a conversation with a very smart, but sometimes literal friend. The way you ask questions makes all the difference in getting helpful answers! Here's how to get the most from AI assistants:

The Anatomy of an Effective Prompt

A good prompt has a few key ingredients that help the AI understand exactly what you want:

  1. Be specific and clear: Instead of asking "Tell me about dogs," try "Explain the health benefits of walking a dog daily for a middle-aged person". The more specific you are, the more tailored the response will be.
  2. Provide context: Let the AI know why you're asking or how you'll use the information. For example: "I'm planning a 3-day trip to Seattle with young children. What are some family-friendly activities there?"
  3. Specify format when needed: If you want a list, table, or particular structure, just ask! For example: "Create a 7-day meal plan for a vegetarian with high protein needs. Format it as a day-by-day breakdown with each meal listed separately".
  4. Use examples: Sometimes showing is better than telling. If you want a certain writing style or approach, include a short example: "Write a product description for my handmade candles in the same friendly tone as this: 'Our cozy blankets wrap you in cloud-like comfort, perfect for those quiet Sunday mornings with a book and tea.'"

Prompt Patterns That Work

Here are some tried-and-true prompt patterns that consistently deliver good results:

  • The role prompt: "Act as a [role] and help me with [task]". For example: "Act as a fitness coach and create a beginner-friendly 30-minute home workout".
  • The step-by-step request: "Explain how to [do something] in simple steps". For instance: "Explain how to troubleshoot a slow computer in simple steps".
  • The comparison prompt: "Compare [X] and [Y] in terms of [aspects]". Such as: "Compare meal kit delivery services and grocery shopping in terms of cost, convenience, and environmental impact".
  • The refinement prompt: After getting an initial response, you can say "That's great, but can you make it more [characteristic]?". Like: "That's helpful, but can you make it more casual and add some humor?" (The trick here is encouraging the AI)
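Since these patterns are really just fill-in-the-blank templates, they can be sketched as tiny functions. The exact wording below is illustrative; any phrasing with the same ingredients works just as well.

```python
# The prompt patterns above, written as simple templates.
# The wording is one example phrasing, not the only valid one.

def role_prompt(role, task):
    """The role prompt: 'Act as a [role] and help me with [task]'."""
    return f"Act as a {role} and help me with {task}."

def step_by_step_prompt(goal):
    """The step-by-step request: 'Explain how to [do something] in simple steps'."""
    return f"Explain how to {goal} in simple steps."

def comparison_prompt(x, y, aspects):
    """The comparison prompt: 'Compare [X] and [Y] in terms of [aspects]'."""
    return f"Compare {x} and {y} in terms of {', '.join(aspects)}."

print(role_prompt("fitness coach", "a beginner-friendly 30-minute home workout"))
print(comparison_prompt("meal kit delivery", "grocery shopping",
                        ["cost", "convenience", "environmental impact"]))
```

You would never need actual code for this, of course; the point is that a good prompt has named slots (role, task, aspects) that you consciously fill in rather than leaving the AI to guess.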

Common Prompt Mistakes to Avoid

Even smart AI can get confused. Here are pitfalls to avoid:

  • Being too vague: "Write something good" doesn't give the AI enough direction.
  • Contradictory instructions: Asking for "a comprehensive guide that's also very brief" sends mixed signals.
  • Forgetting to specify audience: The explanation suitable for a 5-year-old is very different from one for a college student.
  • Information overload: Including tons of unnecessary details can distract the AI from your main question.

Remember, prompting is a skill that improves with practice. Don't hesitate to refine your prompt if the first response isn't quite what you wanted. Often the best approach is to start a conversation and iterate, just like you would when explaining a task to a human assistant!

AI Use Cases in Daily Life

AI chatbots can help with tons of everyday tasks. Here are some categories and examples:

Productivity

AI can tackle routine chores so you can focus on what matters. For instance:

  • Email Drafting: Stuck on how to word an email? Tell ChatGPT the main points and it can craft a clear, polite message or improve your draft.
  • To-Do Lists: Ask an AI to help organize your day. It can turn a bulleted shopping list or notes into a well-structured to-do list.
  • Summarizing Long Texts: Paste a long article, report, or set of notes, and ChatGPT or Claude can give you a concise summary.
  • Scheduling Help: Need a meeting agenda or reminders? You can ask your AI to propose a schedule or generate reminders (though you'd still add them to your calendar manually).

These tools save time. The result? Many users report completing writing and planning tasks faster and better than before.

Learning

AI can be like a personal tutor or explainer. Try things like:

  • Study Help: Ask the bot to explain a concept simply (e.g. "Explain photosynthesis to me like I'm 12 years old"). It can break down topics step-by-step, provide analogies, or quiz you with practice questions.
  • Language Learning: Practice a new language by having conversations or translating phrases. The AI can correct your sentences or help build vocabulary.
  • Tutoring/Explanations: If you're curious about something or need a quick lesson, chat with it. For example, ChatGPT or Claude can guide you through basic math, science topics, or even give a "crash course" on personal finance or other subjects.
  • Homework Brainstorming: If you're stuck on ideas (say for a school project or learning how to code), the AI can suggest approaches or resources.

Think of AI as a patient study buddy: it won't judge wrong answers, and you can ask it to clarify or repeat things in different words until you understand.

Creativity

AI can kickstart your creative projects. For example:

  • Writing Inspiration: Need ideas for a blog post, story, or social media caption? You can brainstorm with ChatGPT or Claude. Give it a topic or title, and ask for an outline or catchy hooks.
  • Content Writing: Once you have an outline, AI can help draft paragraphs. It can also check your writing style (e.g. make it friendlier or more professional).
  • Image Prompts: If you use AI art tools (like DALL·E or Midjourney), you can ask ChatGPT to generate a detailed text prompt for the image you want.
  • Jingles & Poems: For fun, you can have the AI compose a quick poem, song lyrics, or even a short story.
  • Editing: AI can proofread and suggest edits to your writing. It catches grammar mistakes or awkward phrasing.

These creative use cases work especially well with cloud AIs. You can ask things like "help me write a friendly travel blog introduction" or "give me 10 blog post ideas about gardening," and get instant suggestions.

Budgeting & Personal Finance

AI can make money management easier:

  • Spending Summaries: You might not give an AI access to your bank, but you can manually share spending totals. For example, tell ChatGPT your expenses and income, and it can help you categorize them or suggest a budget.
  • Savings Tips: Ask AI for general advice. It can explain budgeting categories, debt-payoff strategies, and more.
  • Budgeting Apps: There are AI-powered apps like Cleo (a chatbot) or Rocket Money that hook into your accounts. They automatically analyze where your money goes and can even humorously alert you if you overspend. These apps do the number-crunching for you.
  • Planning Purchases: You can also use a bot to plan for big purchases. For instance, ask it to compare prices in different stores or to plan a grocery list that fits your budget.
  • Simple Tracking: If you prefer not to link accounts, just jot down a list of expenses and let an AI make sense of it. It can summarize totals and trends, and suggest if you're spending too much on one category.

In short, AI tools can handle the grunt work of math and lists, so you can focus on the big decisions. The AI won't replace a human financial advisor, but for everyday budget planning and tips, it's a handy assistant.
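The "grunt work of math and lists" is simple enough to sketch: given a jotted-down expense list, total each category and flag the biggest one. The numbers here are made up; an AI chatbot does essentially this when you paste it your expenses, just with natural language on top.

```python
# A tiny sketch of the number-crunching an AI (or any program) can do with a
# manually jotted-down expense list. The amounts below are made up.

def summarize(expenses):
    """Total spending per category and flag the biggest one."""
    totals = {}
    for category, amount in expenses:
        totals[category] = totals.get(category, 0) + amount
    biggest = max(totals, key=totals.get)
    return totals, biggest

expenses = [
    ("groceries", 62.40), ("dining out", 28.00), ("transport", 15.50),
    ("dining out", 41.25), ("groceries", 33.10),
]
totals, biggest = summarize(expenses)
print(totals)    # per-category totals
print(biggest)   # groceries has the biggest total here
```

Whether a chatbot or five lines of arithmetic does it, the value is the same: the tedious adding-up happens for you, and you spend your attention on the decision ("is that dining-out number okay?") instead.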

Health & Lifestyle

AI can also help with personal well-being routines:

  • Meal Planning: Tell ChatGPT your dietary preferences (e.g. "I'm vegetarian and want high-protein meals"), and it can suggest a week's worth of recipes and shopping list. Always double-check for allergies or medical issues, but it's great for inspiration.
  • Habit Tracking: Describe habits you want to build (like daily exercise or reading) and ask for tips to stick with them. The AI can suggest a plan or tools, such as apps or printable trackers.
  • Workout Suggestions: You can get basic workout plans (e.g. "30-minute home workout for beginners with no equipment"). The AI will give sample exercises or routines.
  • General Advice: Ask for quick tips on sleep, stress relief, or time management. The answers won't be tailored medical advice, but can be helpful reminders (like "drink more water" or "try a short walk").

Remember, the AI offers ideas and motivation, but don't skip consulting professionals for medical or serious health issues.


Get started

Now that we know about where we can use AI in our lives, let's see what our options are for using them.

Free vs. Paid Options

Most of these tools have free tiers. For instance, ChatGPT and Perplexity let you try basic features for free, Poe lets you use multiple bots for free, and Claude offers a free plan with limits. However, advanced features often cost extra: ChatGPT's top model (GPT-4) and Claude's highest version are behind a paywall (usually around $20/month for ChatGPT Plus).

Paid plans can be worth it if you use AI a lot or need the best results. Local tools like LM Studio and Ollama are generally free to install and use (though you might buy a faster GPU to run big models).

In choosing free vs. paid, consider how much you rely on AI. If you only ask quick questions once in a while, the free versions are great. If you're a heavy user (e.g. doing research, long documents, or business tasks), a paid upgrade can save time and give better answers. And if privacy is a top concern, using local free models might be the best "cost" answer.

Installing LM Studio


Installing LM Studio is as easy as visiting their website and installing the app for your operating system.

After that, open the model browser (by clicking on the purple magnifying glass icon) and search for open-source models that are compatible with your system. Compatible models are usually recognizable from this notification:

[Screenshot: LM Studio's model compatibility notification]

I personally recommend installing "DeepSeek R1 Distill (Qwen 7B)", which runs on most of the devices we have in our homes.

Also make sure that you set the UI to "User" to make it simpler for yourself.


Getting Started Tips

  • Just Ask! The hardest part is often thinking of a question. Start with simple tasks: "Help me draft a friendly birthday email," or "Explain how photosynthesis works in simple terms." Use conversational language as if you were talking to a knowledgeable friend.
  • Be Specific: Give the AI context so it can tailor answers. E.g., "I have 30 minutes. What exercises can I do at home?" versus "Give me exercises."
  • Use Web or Apps: Non-tech users should start with websites or mobile apps. Go to chat.openai.com for ChatGPT, claude.ai for Claude, or download the ChatGPT/Claude app on your phone. Poe and Perplexity have easy websites too. No installation needed for cloud services.
  • Try Both Cloud and Local (if curious): If you want more privacy, try a local AI app. LM Studio is user-friendly: install it, pick a model, and chat without an internet connection. Ollama works similarly on Mac or PC. There's usually documentation to walk you through the first run.
  • Learn as You Go: Don't worry about using "smart" prompts. Start with everyday language. Watch some tutorial videos or read quick how-tos if needed (there are many beginner guides online). The AI often corrects or asks follow-up questions if it needs more info.

Remember, these AI tools are designed for everyone, not just tech experts. You don't need to know coding or machine learning. Just talk or type, and you'll see how they can become helpful parts of your daily routine!

]]>
<![CDATA[Believing isn't Knowing]]>https://aien.me/believing-isnt-knowing/681369a9f860d300016203fdFri, 02 May 2025 04:50:42 GMT

Germans have a good thought-provoking saying: "Glauben ist nicht Wissen" which translates to "Believing isn't Knowing". I really enjoyed the directness it has when I first heard it. But what does it really mean?

At first this might seem obvious to you! Of course belief and knowledge are different things. Yet in our daily lives, we often mix up the two, treating our beliefs as if they were established knowledge or facts and defending them with the same conviction and certainty.

This distinction between believing and knowing is not just pure semantic wordplay. It cuts to the core of how we understand the world, make decisions, and interact with others who hold different perspectives. The gap between what we believe and what we know shapes our personal worldviews, our social discourse, and even our societal structures.


What Does it Mean to Believe?

Let's talk about belief. What exactly happens when you say "I believe that..." something? At its core, believing is accepting something as true without necessarily having concrete proof. It's a mental stance we take toward an idea or proposition.

Belief: a state or habit of mind in which trust or confidence is placed in some person or thing - Merriam Webster

Think about it this way: when you believe something, you're essentially saying, "Based on what I've experienced, heard, or felt, I accept this as true." Pretty simple, right? Yet beliefs come in all shapes and sizes. Some are small and everyday: "I believe it'll rain tomorrow" or "I believe this restaurant has good food". Others are massive and life-defining: religious beliefs, political ideologies, or fundamental views about human nature.

How Beliefs Form

So how do we end up believing stuff? It's actually a fascinating mix of:

  • Personal experiences: That time I got food poisoning from seafood? It might lead me to believe all seafood is risky, when I might just be allergic to seafood (unless diagnosed otherwise).
  • Cultural programming: Growing up in a particular family, community, or culture fills our head with beliefs before we even realize it.
  • Emotional comfort: Sometimes we believe things simply because they make us feel good, safe, or like the world makes sense.
  • Social influence: We're social beings. When everyone around us believes something, it's really hard not to fall in line.
  • Pattern recognition: Our brains are wired to spot patterns and make predictions, sometimes seeing connections that aren't really there.

The thing about beliefs is they often form without us consciously examining them. They sneak into our minds through the back door. Someone says something confidently, we hear it a few times, and suddenly we're believers without ever asking for evidence.

The Structure of Belief Systems

Beliefs don't exist in isolation, they hang out together in what we call belief systems. These are like mental frameworks where various beliefs support and reinforce each other.

Think of a belief system like a spider web. Every part is related to the others, creating an intricate, interconnected network. Pull on one strand, and the whole thing shifts. That's why changing one core belief can sometimes cause a domino effect that reshapes your entire worldview.

But here's what's wild about belief systems: they're self-protecting. Once you've built up a network of beliefs, your brain works overtime to defend it. You'll naturally seek out information that confirms what you already believe (that's called confirmation bias) and find ways to dismiss anything that challenges your views. The more fundamental the belief is, the stronger the domino effect.

This isn't because you're stubborn or close-minded, it's just how human brains work. Believing is actually the default setting for humans. It's cognitively efficient. It takes way less mental energy to believe than to constantly question everything.

And that brings us to a key insight: believing is easy! Really easy! You can believe something without putting in much work at all. Someone you trust says something, it seems reasonable, it makes you feel good, boom, you believe it. However, there's a flip side to this coin. The ease of believing means we accumulate all sorts of beliefs that might not be accurate. We believe things not because they're true, but because they're comfortable, convenient, or socially rewarded.

Why Do People Maintain Religious Affiliations?

Religious belief and participation can certainly provide significant social rewards, which can be an important factor in why people maintain religious affiliations. Here are some examples of how being religious can be socially rewarding:

In childhood, religious involvement often provides:

  • Immediate community and friendship connections through youth groups, Sunday school, or religious education classes
  • Regular social gatherings and celebrations tied to religious holidays and events
  • Structured activities and mentorship relationships with trusted adults outside the family
  • A sense of belonging and shared identity with peers who have similar values and practices

In adulthood, religious participation frequently offers:

  • Built-in social networks when moving to new areas (a new church/mosque/temple provides instant community)
  • Support systems during major life transitions like marriage, parenthood, or bereavement
  • Opportunities for leadership and meaningful volunteer work that's recognized by the community
  • Regular rituals and gatherings that combat isolation and loneliness
  • Cross-generational relationships that might otherwise be difficult to form in modern society

Believing isn't Knowing

What Does it Mean to Know?

So we've talked about believing. Now let's address the more challenging question: what does it mean to actually know something?

This is where things get interesting. Philosophers have been arguing about the definition of knowledge for thousands of years (seriously, since Ancient Greece), but the most widely accepted view is that knowledge is "justified true belief." Let's break that down in plain English:

  • It's something you believe (check!)
  • It's actually true (not just in your head, but in reality)
  • You have good, solid reasons for believing it (justification)

See how knowing is like believing, but with extra steps? That's the key difference. When you know something, you can point to evidence, reasoning, or reliable methods that back it up. You're not just saying "trust me on this" or "it feels right to me." You're saying, "Here's why this checks out, and here's how you could verify it too."

How Knowledge Forms

Unlike beliefs, which can form passively and unconsciously, knowledge typically requires active work. Here's what goes into building knowledge:

  1. Evidence gathering: Collecting facts, data, observations, or experiences that relate to what you're trying to understand.
  2. Critical thinking: Analyzing that evidence, looking for flaws in reasoning, considering alternative explanations.
  3. Testing and verification: Checking if your understanding holds up in different situations or when challenged.
  4. Peer review: Having others examine your reasoning and evidence to catch any biases or mistakes you might have missed.
  5. Revision: Being willing to update what you think you know when new evidence emerges.

This process isn't a one-time thing, it's ongoing. Real knowledge involves constantly checking and rechecking our understanding against reality.

Think about how scientists know things. They don't just say, "I've got a feeling about how gravity works." They develop theories, test them with experiments, invite others to poke holes in their ideas, and adjust their understanding when the evidence demands it.

Science isn't just another belief system, it's our best tool for turning beliefs into knowledge.

Science is basically humanity's most successful method for moving stuff from the "I believe" column to the "I know" column.

The Structure of Knowledge

While belief systems can be pretty flexible and personal, knowledge tends to be more structured and connected to reality in verifiable ways.

Picture knowledge like building a house. You need a solid foundation of basic facts and principles. Then you construct walls of reasoning and evidence. Each new piece of knowledge has to fit with the existing structure or you need to rebuild parts of the house.

Knowledge also has built-in quality control mechanisms:

  • Consistency: If two pieces of knowledge contradict each other, at least one must be wrong.
  • Explanatory power: Good knowledge helps explain other things we observe.
  • Predictive ability: Real knowledge lets us make accurate predictions about what will happen.
  • Practicality: Knowledge typically works when applied in the real world.

The cool thing about knowledge is that it's cumulative and collaborative. Unlike personal beliefs, which can vary wildly from person to person, knowledge builds on itself across time and cultures. That's how we've gone from thinking the earth is flat to landing robots on Mars.

But the reality is that knowing is hard work. Really hard. It takes intellectual honesty, time, effort, and a willingness to be wrong. It means fighting against our natural tendency to believe what's comfortable or convenient, and that's why most of what we think we "know" might actually just be things we believe!

Believing isn't Knowing

What's the Real Difference?

Let's get right to the heart of it: believing and knowing aren't just slightly different, they're fundamentally different approaches to understanding the world.

The beautiful thing about beliefs is that you're completely free to believe whatever you want. No one can stop you from believing the Earth is flat, that your favorite sports team is the best despite their awful record, or that eating carrots gives you night vision. Beliefs don't need permission or verification. They just need you to accept them.

Knowledge, on the other hand, has constraints. It's bound by reality, by evidence, by logic. You don't get to "know" whatever you feel like knowing. You have to earn knowledge through observation, testing, and rational analysis and, if necessary, peer review. Knowledge has to conform to what's actually true, not what you want to be true.

So, to briefly summarize the differences:

Believing | Knowing
"This feels true to me" | "I can demonstrate why this is true"
Often formed passively | Requires active effort
Can vary widely between people | Tends to converge across different people
Comfortable with contradictions | Requires consistency
Can ignore contrary evidence | Must account for all evidence
Based on trust, intuition, tradition | Based on verification, testing, reasoning
Can form instantly | Develops gradually
Provides certainty and comfort | Often includes uncertainty and nuance

When Opinion Disguises Itself as Truth

Here's where things get tricky in real life. Most of us walk around saying "I know" when what we really mean is "I strongly believe."

  • "I know the stock market will go up next year."
  • "I know my kid is the smartest in her class."
  • "I know [political figure] is corrupt/honest."

But do we really know these things? Or do we just believe them? Could we demonstrate them to someone who disagrees with us using evidence they'd accept?

The problem isn't having beliefs, we all have them and need them. The problem is when we confuse our beliefs for knowledge. When we do this, we close ourselves off from learning, from growth, from reality itself.

The Freedom Paradox

There's a fascinating paradox here. Believing gives you freedom of choice, you can believe anything. But that freedom can become a mental prison if you're not careful. When you insist on believing something despite mounting evidence to the contrary, you trap yourself in a worldview that just can't evolve.

True knowledge, while more constrained and limited by reality, actually frees you in a deeper way. It connects you with how things really work. It gives you power to predict, to understand, to solve problems effectively.

But to be honest, most of us prefer believing to knowing. Believing is comfortable. It requires less work. It lets us maintain our sense of certainty.

Final Word

So where does this leave us? With the understanding that both believing and knowing have their place in human life.

The key is being honest with ourselves and others about which is which.

There's nothing wrong with saying "I believe this, but I don't know it for certain." That's not weakness, it's actually intellectual honesty. It leaves the door open for growth, for connection with people who believe differently, and for a relationship with reality that can evolve as we learn more.

Perhaps the wisest approach is to hold our beliefs a little more lightly and to work a little harder at knowing what can be known.

Because as that German saying reminds us: believing isn't knowing. But maybe the most important knowledge of all is understanding the difference, and understanding each other.

]]>
<![CDATA[Why Developers Should Advocate for BlueSky and AT Protocol]]>https://aien.me/why-developers-should-advocate-for-bluesky-and-at-protocol/680f30ddf860d30001620354Mon, 28 Apr 2025 10:56:57 GMT

I've been exploring the world of decentralized social media lately, and BlueSky (or Bluesky, or bsky) has caught my attention in a big way. If you've been watching social media platforms evolve over the past couple of years, you've probably noticed the growing interest in alternatives to the big centralized platforms, especially after Elon Musk's Twitter takeover and the subsequent rebrand to "X".

More Than Just Another Twitter Alternative

What's fascinating about BlueSky isn't just that it's a Twitter-like platform with a cleaner interface (though that's nice too). The real magic lies in what powers it: the AT Protocol (Authenticated Transfer Protocol).

Unlike traditional social media platforms where your identity, relationships, and content are locked inside one company's servers, AT Protocol aims to create an open standard where users truly own their digital presence. Your identity isn't tied to BlueSky as a company; rather, it's portable and verifiable across any service that implements the protocol.

Why This Matters (For Everyone)

For users, this is important stuff. Think about it: when Twitter/X makes changes you don't like, your options are basically "deal with it" or "leave and lose everything". With AT Protocol, if you don't like how one service is handling things, you can take your entire social presence, your handle (basically username), content history, and follower relationships, to another provider. It's like being able to port your phone number between carriers, but for your entire social identity.

For developers, AT Protocol opens up a whole new world of possibilities. Instead of being at the mercy of Twitter/Meta/etc. changing their APIs or limiting what you can build, you're working with an open protocol designed for interoperability from the ground up. You can create new client apps, innovative feed algorithms, or specialized services that all work within the larger ecosystem.

The protocol itself uses DIDs (Decentralized Identifiers) and a signed data repository that cryptographically verifies content. This essentially means users can trust that posts they see are authentic and haven't been tampered with. It also means developers can build tools with confidence that the data they're accessing maintains its integrity.
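To make "self-authenticating data" a bit more concrete, here's a minimal Python sketch of the idea. This is not the real AT Protocol implementation (which uses public-key signatures over records in a user's repository); an HMAC stands in for the signature, and the DID value is made up, but the property it demonstrates is the same: any tampering with a record invalidates its signature.

```python
import hashlib
import hmac
import json

def sign_record(record: dict, signing_key: bytes) -> dict:
    """Attach a signature over the canonical bytes of a record."""
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "sig": sig}

def verify_record(envelope: dict, signing_key: bytes) -> bool:
    """Recompute the signature and compare; tampering changes the bytes."""
    payload = json.dumps(envelope["record"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

key = b"demo-key"  # stand-in for a user's signing key
post = {"did": "did:plc:example", "text": "Hello from my PDS"}  # made-up DID
envelope = sign_record(post, key)

assert verify_record(envelope, key)
# An edited post no longer matches the original signature:
tampered = {"record": {**post, "text": "edited"}, "sig": envelope["sig"]}
assert not verify_record(tampered, key)
```

The point of the sketch is that trust comes from the data itself, not from which server happens to host it.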

Perhaps most importantly, supporting AT Protocol means supporting a vision of the internet that's more aligned with its original promise: open, user-empowering, and not dominated by a handful of corporate walled gardens. When we build on open protocols, we're creating a more sustainable and equitable digital future.

The Challenge: Adoption

Here’s the kicker, though: it’s the same uphill battle every new social network faces, showing tangible benefits and proving there’s real value beyond the hype.

Without a vibrant, engaged user community, Bluesky feeds seem empty, and all the magic of AT Protocol (portable identity, cryptographic guarantees, self-authenticating data) stays stuck in a sandbox. Even tech-savvy early adopters hit a wall when their friends aren’t there. The onboarding process can be tough when it involves managing credentials or federated servers, and many miss the simple analytics dashboards and monetization hooks they take for granted on other platforms.

We’ve seen sparks of excitement in hackathons and GitHub commits, but turning that into "everyday" appeal means solving adoption barriers at every layer:

  • Seamless UX so non-technical users aren’t scared off by federation or key management (or even the terms).
  • Robust moderation to keep bots, trolls, and misinformation at bay from day one.
  • Built-in analytics & revenue paths so creators and businesses have stake in the game.
  • Clear trust signals such as verification, uptime guarantees, and transparent policies to shake the "just another alpha" label.

This is exactly where we, as developers, designers, and community builders, come in. By shipping intuitive apps, crafting moderation tooling, demoing analytics integrations, and evangelizing portable identity, we can turn Bluesky’s "walled garden with open gates" into a truly thriving, interoperable social ecosystem. Only by tackling these adoption hurdles head-on will AT Protocol break free of its alpha origins and redefine what social media can be.

Why This Matters to All of Us

As developers, we're not just code monkeys (though sometimes that's how we feel after debugging for 12 hours straight, especially with all the recent "vibe coding"). We're the shapers and builders of the digital world. The choices we make about which technologies to support and build upon determine the future of online spaces.

The centralized model of social media has shown its flaws:

  • Users becoming products rather than customers
  • Platform lock-in with no credible exit options
  • Content moderation based on corporate decisions
  • Algorithmic manipulation designed for engagement, not wellbeing

AT Protocol and similar protocols can offer a better alternative that puts power back in users' hands. AT Protocol's unique characteristics include:

  • Self-authenticating data through cryptographic signatures
  • User-controlled algorithmic feeds
  • Portable identity that doesn't depend on a specific server
  • Guaranteed data portability

Beyond AT Protocol: Finding Common Ground

One especially interesting development comes from Robin Berjon, who proposed that ActivityPub (the protocol powering Mastodon and the Fediverse) could potentially run atop an AT Protocol Personal Data Server (PDS).

As Robin points out:

Both the Activity standards and ATProto break siloing in different ways. Activity are built around URLs and can sort of "socialise" more or less anything on the Web, which is great, but they don't touch the underlying substrate. [...] ATProto, on its side, provides a good initial foundation for an extensible PDS designed around user agency and credible exit.

Instead of getting caught in protocol wars, we might find that these different approaches solve complementary problems. ActivityPub excels at federation and connecting disparate parts of the web, while AT Protocol provides stronger guarantees for user agency and data sovereignty.

How We Can Make a Difference

So, what can we as developers do to support this evolution? Here are some practical ways to get involved:

  1. Experiment with AT Protocol: Build apps, tools, or services that implement AT Protocol. Even small experiments help expand the ecosystem.
  2. Create custom feeds: One of AT Protocol's coolest features is customizable algorithmic feeds. Try building your own feed generators.
  3. Bridge technologies: Work on projects that help connect AT Protocol to other decentralized technologies like ActivityPub, bridging communities rather than fragmenting them.
  4. Contribute to open-source implementation: The atproto repository is open source and welcomes contributors.
  5. Educate others: Write articles, create videos, or host workshops about AT Protocol and decentralized social media.
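As a taste of point 2, here's a minimal sketch of the filtering logic behind a custom feed. A real feed generator serves the `app.bsky.feed.getFeedSkeleton` endpoint and consumes posts from the network firehose; the posts and `at://` URIs below are invented, and only the core select-and-return-URIs step is shown.

```python
# Hypothetical posts; a real feed generator would consume the firehose.
posts = [
    {"uri": "at://did:plc:alice/app.bsky.feed.post/1",
     "text": "Shipping my atproto side project"},
    {"uri": "at://did:plc:bob/app.bsky.feed.post/2",
     "text": "Lunch photos"},
    {"uri": "at://did:plc:carol/app.bsky.feed.post/3",
     "text": "New ATProto lexicon draft"},
]

def get_feed_skeleton(posts: list, keyword: str) -> dict:
    """Return the skeleton shape a feed generator serves: post URIs only.

    Clients hydrate the URIs into full posts themselves; the generator
    only decides *which* posts appear and in what order.
    """
    matches = [p for p in posts if keyword.lower() in p["text"].lower()]
    return {"feed": [{"post": p["uri"]} for p in matches]}

skeleton = get_feed_skeleton(posts, "atproto")  # matches Alice's and Carol's posts
```

Even a toy like this shows why custom feeds are powerful: the ranking logic lives outside the platform, so anyone can publish an alternative algorithm.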

Avoiding Another "Walled Garden"

The decentralized social media landscape is still transforming, and there's a risk that we'll end up with multiple competing protocols that don't talk to each other, creating new silos instead of breaking them down.

By supporting AT Protocol while also exploring ways to make it interoperable with other open standards, we can work toward a truly user-centric social web where people control their digital presence, rather than corporations.

The internet was originally designed as an open network of networks. Somewhere along the way, we lost that vision. With protocols like AT Protocol and the developers who support them, we have a chance to reclaim it.

What do you think? Are you working on anything with AT Protocol? Have you used BlueSky? Drop a comment and let's discuss how we can build a better social web together! And by the way, you can find me on Bluesky too: @aien.me


P.S. If you're curious about trying BlueSky, it's now open for public sign-ups at bsky.app. If you're interested in learning more about AT Protocol, check out the documentation.

]]>
<![CDATA[The Difference Between Agentic AI and AI Agents]]>https://aien.me/the-difference-between-agentic-ai-and-ai-agents/68018e24f860d300016202b6Thu, 17 Apr 2025 23:47:48 GMT

Have you ever wondered about the terms "Agentic AI" and "AI Agents" that seem to be popping up everywhere in tech discussions? While they might sound similar, they actually refer to different concepts in the artificial intelligence landscape. Let's break down these terms in a way that's easy to understand and appreciate.

Agentic AI: The Orchestra Conductor

Photo by Andrea Zanenga / Unsplash

Think of Agentic AI as the conductor of an orchestra. It's not just a single musician but rather the overarching system that coordinates everything to create beautiful music. Agentic AI represents a paradigm or approach to artificial intelligence where systems can:

  • Work autonomously with minimal human input
  • Plan and reason through complex, multi-step problems
  • Orchestrate workflows by potentially coordinating multiple AI components
  • Learn continuously through feedback loops and reinforcement learning

Agentic AI is about the big picture, a holistic design philosophy that enables AI systems to pursue goals independently. It's like having a personal assistant who not only follows instructions but anticipates needs, breaks down complex tasks, and figures out the best way to accomplish them without constant supervision.

For example, NVIDIA's Blueprints framework exemplifies this approach by enabling sophisticated multi-agent workflows for data analysis and autonomous decision pipelines. Similarly, systems like Anthropic's Claude Research can integrate web search and internal documents to autonomously gather information and craft comprehensive responses.

AI Agents: The Individual Musicians

Photo by Geo Chierchia / Unsplash

If Agentic AI is the conductor, then AI Agents are the individual musicians in the orchestra. They're the concrete software entities or programs that perform specific tasks. An AI agent:

  • Perceives its environment through inputs like text or sensors
  • Makes decisions based on its programming and goals
  • Executes actions such as API calls or generating content
  • May adapt over time with proper learning mechanisms

AI agents can be simple or complex, ranging from basic rule-following bots to sophisticated programs that learn from interactions. They're the practical implementations that do the actual work within a larger system.
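The perceive-decide-act loop described above can be sketched in a few lines of Python. This is a toy rule-following agent (a thermostat), with a log list standing in for real actuator or API calls:

```python
class ThermostatAgent:
    """A minimal rule-based agent: perceive -> decide -> act."""

    def __init__(self, target: float):
        self.target = target
        self.log = []  # record of actions taken

    def perceive(self, reading: float) -> float:
        return reading  # input from a hypothetical temperature sensor

    def decide(self, temp: float) -> str:
        # Simple rules with a 0.5-degree dead band around the target.
        if temp < self.target - 0.5:
            return "heat_on"
        if temp > self.target + 0.5:
            return "heat_off"
        return "hold"

    def act(self, action: str) -> None:
        self.log.append(action)  # stand-in for a real API/actuator call

    def step(self, reading: float) -> str:
        action = self.decide(self.perceive(reading))
        self.act(action)
        return action

agent = ThermostatAgent(target=21.0)
actions = [agent.step(t) for t in (19.0, 21.2, 22.5)]
# actions == ["heat_on", "hold", "heat_off"]
```

Swap the hard-coded rules for an LLM call and the sensor for user messages, and you have the skeleton of the more sophisticated agents discussed below.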

Examples include HubSpot's Breeze Agents, which are specialized bots that generate replies and prospect lists for small businesses, or ChatGPT Plugins that fetch real-time data and execute specific tasks like booking reservations.

The Key Differences

  1. Scope vs. Instance
    Agentic AI refers to the entire approach or paradigm, while AI Agents are the individual components or instances operating within that paradigm.
  2. System Design vs. Component Function
    Agentic AI focuses on designing entire workflows and coordination mechanisms, while AI Agents focus on executing specific perception-action loops within their bounded scope.
  3. Autonomy Levels
    Agentic AI emphasizes continuous improvement through iterative planning cycles and learning from complex task outcomes, whereas AI Agents may simply execute predefined workflows unless specifically designed with learning capabilities.

Why This Matters

Understanding the distinction between these concepts helps us have clearer conversations about AI development. When businesses talk about implementing "AI Agents," they might be referring to specific software entities that handle discrete tasks. When researchers discuss "Agentic AI," they're likely talking about broader architectures that enable higher levels of autonomous operation.

As AI evolves, we can expect Agentic AI systems to become more sophisticated in coordinating multiple agents to solve increasingly complex problems. Meanwhile, individual AI agents will continue to improve at their specialized tasks, eventually becoming the reliable building blocks of these more autonomous systems.

The relationship between Agentic AI and AI Agents is cooperative, meaning that each needs the other to realize the full potential of artificial intelligence. As we develop better agents, we enable more sophisticated agentic systems, and as we refine our understanding of agentic principles, we create better environments for agents to operate within.

Tools and Technologies for Building Agentic AI and AI Agents

If you're interested in exploring these technologies further, it's helpful to understand the tools commonly used to build each one.

Building Agentic AI Systems

Agentic AI systems typically require more complex frameworks and infrastructure:

  • LangChain and LlamaIndex: These frameworks provide the architecture for creating systems that can reason, plan, and orchestrate multiple components.
  • Vector Databases (like Pinecone, Weaviate, or Chroma): Essential for knowledge retrieval and maintaining context across complex multi-step tasks.
  • Orchestration Platforms (like NVIDIA Omniverse, Microsoft Semantic Kernel, n8n): These help coordinate multiple AI components and manage workflows.
  • Custom Planning Frameworks: Many organizations build proprietary frameworks that implement planning algorithms like ReAct (Reasoning and Acting) or Tree of Thoughts.
  • Reinforcement Learning from Human Feedback (RLHF): This training methodology helps systems learn complex decision-making processes.
  • Memory Systems: Technologies that enable persistent memory across interactions and learning from past experiences.
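To make the planning-loop idea concrete, here's a toy ReAct-style loop in Python. The "LLM" is a scripted stand-in that returns canned thought/action steps, and the lookup tool is a hard-coded dictionary; a real system would call a model and real tools, but the alternation of reasoning, action, and observation is the same:

```python
def scripted_llm(history: list) -> dict:
    """Stand-in for a model call: returns the next thought/action step."""
    steps = [
        {"thought": "I need the population of Berlin.",
         "action": "lookup", "input": "Berlin"},
        {"thought": "I have the answer.", "final": "about 3.7 million"},
    ]
    # Pick the next step based on how many actions we've already taken.
    return steps[len([h for h in history if h.startswith("Action")])]

def lookup(query: str) -> str:
    """Toy tool: a hard-coded fact table instead of a real search API."""
    return {"Berlin": "3.7 million"}.get(query, "unknown")

def react(question: str, max_steps: int = 5) -> str:
    """Alternate reasoning and tool use until the 'model' is done."""
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = scripted_llm(history)
        if "final" in step:
            return step["final"]
        observation = lookup(step["input"])
        history += [f"Action: {step['action']}({step['input']})",
                    f"Observation: {observation}"]
    return "gave up"

answer = react("How many people live in Berlin?")
# answer == "about 3.7 million"
```

Frameworks like LangChain package exactly this loop, plus tool registries, memory, and error handling, so you rarely write it by hand in production.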

Building AI Agents

Individual AI agents often leverage:

  • API Frameworks (like FastAPI, Flask): For creating interfaces that allow agents to interact with other systems.
  • LLM Integration Libraries (like OpenAI SDK, Anthropic SDK): For powering the reasoning and natural language abilities of agents.
  • Agent Toolkits (like Autogen, Langroid): Pre-built components that accelerate agent development.
  • Function Calling Interfaces: Structured ways for agents to take specific actions based on parsed user inputs.
  • Specialized Data Processing Libraries: For agents focused on specific domains like image analysis (OpenCV) or document processing (PyPDF).
  • Cloud Services (like AWS Lambda, Azure Functions): For deploying lightweight, scalable agent instances.
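A function calling interface, in particular, boils down to mapping a model's structured output onto real handlers. Here's a minimal sketch in Python; the tool names, the weather stub, and the JSON shape are illustrative inventions rather than any particular vendor's schema:

```python
import json

# Hypothetical tool registry; real agents map an LLM's structured
# "function call" output onto handlers like these.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub instead of a real weather API

def add(a: float, b: float) -> float:
    return a + b

TOOLS = {"get_weather": get_weather, "add": add}

def dispatch(call_json: str):
    """Parse a structured function call and route it to a tool."""
    call = json.loads(call_json)
    fn = TOOLS[call["name"]]          # look up the requested tool
    return fn(**call["arguments"])    # invoke it with the parsed arguments

result = dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}')
# result == 5
```

The structured JSON layer is what keeps the agent safe and predictable: the model can only request actions from the registry, never run arbitrary code.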

The tools for both categories continue to evolve rapidly, with new frameworks and platforms emerging regularly. Many developers start by building simpler AI agents before progressing to more complex agentic systems that coordinate multiple agents toward sophisticated goals.

Spotlight on n8n: Bridging Agentic AI and Real-World Actions

One particularly powerful tool worth highlighting is n8n, which serves as a crucial bridge between intelligent AI systems and real-world actions. n8n is an open-source workflow automation platform that plays several key roles in the AI agent ecosystem:

For Agentic AI Systems

In sophisticated agentic AI setups, n8n can function as an external toolset or execution engine. When an agentic AI system determines a multi-step plan like "gather data from a Google Sheet, transform it, send an email, and update a Notion page," it doesn't need to manage all these APIs directly. Instead:

  • The agentic AI can invoke predefined or dynamically generated workflows in n8n
  • n8n handles the authentication complexities and API interactions
  • This separation allows the AI to focus on high-level planning and reasoning while n8n manages task execution

Essentially, n8n becomes a "tool-use environment" for agentic AI, similar to how humans use various applications to accomplish tasks.
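As a sketch of that tool-use pattern, here's how an agent might hand a task off to n8n. n8n's Webhook trigger node does expose an HTTP endpoint per workflow, but the URL, the payload shape, and the `send_report` action below are all hypothetical, and the actual network call is left commented out:

```python
import json
from urllib import request

# Hypothetical webhook; n8n exposes one URL per Webhook trigger node.
N8N_WEBHOOK_URL = "https://n8n.example.com/webhook/weekly-report"

def build_task(action: str, params: dict) -> bytes:
    """Serialize the task the AI planner wants n8n to execute."""
    return json.dumps({"action": action, "params": params}).encode()

def trigger_workflow(url: str, body: bytes):
    """POST the task to the n8n Webhook node (network call, not run here)."""
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)  # returns the workflow's webhook response

body = build_task("send_report",
                  {"sheet": "Q2-data", "notify": "team@example.com"})
# trigger_workflow(N8N_WEBHOOK_URL, body)  # enable against a real n8n instance
```

The agent only has to produce the task description; credentials, retries, and the downstream API calls all live inside the n8n workflow.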

For Individual AI Agents

For more focused AI agents, n8n serves as:

  • A workflow engine: Agents can trigger specific automation sequences via n8n webhooks
  • An integration hub: Providing access to hundreds of third-party services without requiring custom API code
  • A low-code memory/action system: Storing and retrieving agent context in external databases or creating event-based triggers

Real-World Architecture Example

Consider a system designed to "Send a custom report every Monday with AI insights":

  1. An Agentic AI determines what data is needed and plans the required steps
  2. Individual AI Agents handle specific tasks like summarizing insights and generating text
  3. n8n connects everything by:
    • Triggering the pipeline on schedule
    • Pulling data from various sources
    • Communicating with the AI agents
    • Formatting and delivering the output
    • Logging activities to a database

In the most advanced implementations, AI agents can even generate or modify n8n workflows dynamically through the n8n API, approaching true agentic autonomy—where AI not only uses tools but configures them as needed.

Prediction for the future

I guess that as these technologies mature, we might see AI systems that can truly partner with humans in creative and problem-solving challenges, not just following commands but actively contributing insights and approaches we probably would not have considered. The distinction between Agentic AI and AI Agents may blur over time as individual agents become more capable and systems become more integrated, but understanding the foundation of these concepts helps us navigate the exciting developments to come.

What aspects of Agentic AI or AI Agents are you most excited about? How do you see these technologies changing your industry or daily life? The conversation is just beginning!

]]>
<![CDATA[I'm moving away from US-BigTech, and you should too!]]>My story begins years ago. I was once a happy user of early Facebook and Google, believing in their promises of connection and information access. But over time, I witnessed something troubling: Facebook's misuse of personal data. This was my first wake-up call, prompting me to step away.

]]>
https://aien.me/im-moving-away-from-us-bigtech-and-you-should-too/67dc708542e4f500018e94c6Sat, 22 Mar 2025 08:14:36 GMTMy story begins years ago. I was once a happy user of early Facebook and Google, believing in their promises of connection and information access. But over time, I witnessed something troubling: Facebook's misuse of personal data. This was my first wake-up call, prompting me to step away.

Seeking alternatives, I moved to Instagram, only to encounter the same issues - data misuse and the growing problem of algorithm-driven echo chambers. This became painfully clear when I followed flat-earth discussions out of curiosity. I watched in real-time as YouTube's algorithm reinforced these beliefs, trapping people in bubbles of misinformation rather than challenging their thinking.

My trust in big tech was already wavering when Google delivered the final blow. Without explanation, they suddenly removed my Google Analytics account, a critical tool for my work. This arbitrary decision highlighted how powerless we are against these corporations. They can make decisions that affect our livelihoods without offering any recourse or opportunity to defend ourselves.

Then came the Twitter transformation into X. While I had appreciated Twitter and even admired Elon Musk's innovative thinking, the acquisition made his motivations clear: control. What better way to influence public discourse than owning an entire social media platform? I left immediately and never returned, unwilling to have my data and attention controlled in this way.

This pattern is both frightening and real. These companies want control over our data, our attention, and ultimately, our choices. I refuse to support their political agendas or their data harvesting practices. While I stand firmly against all forms of violence and extremism, I also oppose anything that undermines democracy and human freedom, including the unchecked power of tech monopolies.

Why Consider Alternatives to US Tech Products

People increasingly want their voices heard by the companies they choose to support. However, a global movement will take time, and to succeed, it needs to be gradual, targeted, and measurable.

Here are my arguments for why I'd rather move away from US Big Tech:

  • Data Privacy: US tech companies have repeatedly shown they prioritize profit over privacy
  • Arbitrary Control: They can remove access to critical services without explanation
  • Echo Chambers: Their algorithms reinforce existing beliefs rather than encouraging critical thinking
  • Monopolistic Practices: Market dominance makes it difficult for innovative alternatives to emerge
  • Geopolitical Concerns: US tech serves as an extension of US political and economic interests globally

The plan, however, is to challenge yourself, your friends, and your family to make one change every week. The companies below have proven to be reputable and worth supporting. Businesses that supported current global instability or hateful movements were filtered out.

Browsers

Alternatives to Google Chrome, Microsoft Edge and Apple's Safari:

Email providers

Alternatives to Gmail, iCloud and Outlook:

Search Engines

Alternatives to Google and Bing:

Music Players

Alternatives to YouTube Music and Apple Music:

Social Media

Alternatives to Twitter, Reddit and Chat Apps:

]]>
<![CDATA[Fear in the Face of Success]]>https://aien.me/fear-in-the-face-of-success/67ae6e9d0dc76600015c3241Fri, 14 Feb 2025 07:51:02 GMT

I never thought I'd be writing this post. By all conventional metrics, I'm living what many would call a success story. I'm a software engineer earning a decent salary. I hold a Blue Card that allows me to work anywhere in Europe. I'm married, integrated into German society, and just weeks away from applying for German citizenship. Yet here I am, feeling an overwhelming sense of fear and uncertainty.

It was 2016 when I left my home country seeking stability. I arrived in Germany on a student visa, determined to build a better life. I threw myself into my studies, learned the language, worked as a working student for a decent salary, and found love along the way - marrying someone from my home country who had already become a German citizen. After graduation, I secured a job in tech and have been working steadily since 2018, paying into the pension system and social security, never once requiring government assistance.

I did, and am still doing, everything "right". Everything by the book. I followed every rule, checked every box, climbed every ladder. I'm not particularly political - not conservative, not progressive, just a normal person trying to live a decent life and build a future. I guess I'm the kind of immigrant story that gets held up as an example of successful integration.

But lately, I find myself gripped by an inexplicable fear. The rise of far-right movements, the growing anti-immigrant sentiment, the AfD's increasing popularity, and the horrible incident that happened in Munich - it all creates this suffocating atmosphere of uncertainty. The irony isn't lost on me: I left my homeland seeking stability, only to find that same feeling of instability following me here, despite all my achievements.

The hardest part? I can barely articulate what exactly I'm afraid of. My position is secure on paper. I have a residence permit, a well-paying job in a critical sector, and I'm on the verge of citizenship. Logically, I know these things protect me. But fear isn't logical. It seeps through the cracks of our rational defenses, finding its way into our quietest moments.

Perhaps what scares me most is the shifting ground beneath our feet. When I came to Germany, I believed that hard work, integration, and playing by the rules would guarantee security. Now I'm questioning whether any of these guarantees are as solid as I thought. It's not just about legal status or economic security - it's about belonging, about the fundamental right to feel at home in the place you've chosen to build your life.

I'm sharing this not because I have solutions, but because I suspect I'm not alone. There must be others out there - other skilled professionals, other "successful" immigrants - who behind closed doors feel this same creeping anxiety. Who wake up some mornings wondering if the life they've built stands on shakier ground than they thought.

This is not a story of despair, though. It's a story of complexity - of how success and fear can coexist, of how belonging isn't just about paperwork or professional achievements. It's about acknowledging that even those of us who seem to have "made it" carry our own fears and uncertainties.

To my fellow immigrants who might be feeling this way: you're not alone. To my German neighbors: this is also what integration looks like - caring so deeply about our adopted home that its political winds can shake us to our core.

I don't have a neat conclusion to offer. Just my truth, my fear, and my continued hope that by speaking these things aloud, we might find both understanding and strength in our shared humanity.

]]>
<![CDATA[Why (Open Source) DeepSeek's shock is important]]>https://aien.me/why-open-source-deepseeks-shock-is-important/6798a5023aa664000120a5bcMon, 27 Jan 2025 19:26:00 GMT

I, as an AI enthusiast, have been watching the AI industry evolve for a while now, and something fascinating happened yesterday. It's like watching a David vs. Goliath story unfold, but with a twist that could reshape the entire AI landscape. The protagonist? DeepSeek, a relatively new player that's causing quite a shock at the moment in the world of artificial intelligence. Or probably it's just hype, but nonetheless, there's something that one can learn out of it.

You know what's really interesting? In my opinion, it's not just about DeepSeek's performance (though that's impressive too). It's about the idea it represents: a shift in how we think about AI development and accessibility. While giants like OpenAI and Google are building their walled gardens with increasingly expensive user and API costs (Yeah, I'm talking about the $200 plan!), DeepSeek is showing us there's another way.

Here's the kicker: DeepSeek's models (V3 in this case) are not only performing remarkably well, but they're doing it at a fraction of the cost. I'm talking about API costs that make OpenAI's GPT-4 look like a luxury service. But more importantly, they're open-sourcing their model. Think about that for a moment - they're essentially giving away the blueprint for building advanced AI systems.


Now I'm aware that there are lots of open source models out there, some of which I use for my personal stuff, but none of them, at least all those I've tried so far, would achieve a performance like GPT-4 or Sonnet 3.5, let alone o1. The fact that there's a model with almost the same performance (one might argue it's even better) AND it's open source changes the course!

Both Approaches

Remember when the internet was truly open? When you could build whatever you wanted without worrying about platform restrictions (IaaS as an example) or API costs? The AI industry is at a similar crossroads right now. On one side, we have the big tech approach: proprietary models, expensive APIs, and controlled access. On the other, we have the open-source movement, and players like DeepSeek, that (might) believe in democratising AI technology.

The contrast couldn't be clearer. While companies like OpenAI charge premium rates for their API access, DeepSeek is showing that high-quality AI can be both accessible and affordable. It's not just about cost savings - it's about freedom to innovate, to experiment, and to build without constraints.

The Hidden Cost of Proprietary AI

Let me put this in perspective: OpenAI just secured $6.6 billion in funding, pushing their valuation to a staggering $157 billion. They're even planning a joint venture called "Stargate" with Oracle and SoftBank, committing up to $500 billion over the next four years for AI infrastructure. That's the kind of money I'm talking about in the proprietary AI world.

But here's where it gets interesting: DeepSeek, a Chinese AI startup backed by the $8 billion hedge fund High-Flyer, developed their R1 model with just about $5.6 million and 2,048 Nvidia H800 GPUs. And guess what? They're achieving performance comparable to ChatGPT. It's like watching someone build a Ferrari competitor in their garage - and succeeding.

Why Open Source Matters More Than Ever

DeepSeek's approach is important and effective not just because of the cost savings, but because of how they're doing it. They've developed something called multi-head latent attention (MLA) and a sparse mixture-of-experts architecture (DeepSeekMoE) that drastically reduces inference costs. While other companies are building walls, DeepSeek is opening doors.

Let me break down what makes their approach so special in my opinion:

  • They use reinforcement learning without supervised fine-tuning, meaning they don't need expensive human-labeled data
  • Their MoE architecture only activates relevant parts of its roughly 670 billion parameters for each task (imagine having a huge team but only calling in the experts you need for each project)
  • They offer scaled-down versions from 1.5 billion to 70 billion parameters that can run on consumer devices
  • Everything is open source under an MIT license
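The routing idea behind a sparse mixture-of-experts is easy to sketch. The toy Python below is my own illustration, not DeepSeek's actual code: a small router scores every expert for each token, but only the top-k experts actually run, which is why a model with hundreds of billions of parameters can be comparatively cheap at inference.

```python
import math
import random

def softmax(scores):
    """Numerically stable softmax over a list of router scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_tokens(token_scores, k=2):
    """For each token, pick the top-k experts by router probability.

    Returns a list of (expert_index, weight) pairs per token, so only
    k experts (not all of them) run a forward pass for that token.
    """
    routed = []
    for scores in token_scores:
        probs = softmax(scores)
        top_k = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
        total = sum(probs[i] for i in top_k)
        routed.append([(i, probs[i] / total) for i in top_k])  # renormalize weights
    return routed

# Toy setup: 4 tokens, 8 experts - each token activates only 2 of the 8.
random.seed(0)
scores = [[random.random() for _ in range(8)] for _ in range(4)]
assignments = route_tokens(scores, k=2)
for a in assignments:
    assert len(a) == 2                              # only k experts active per token
    assert abs(sum(w for _, w in a) - 1.0) < 1e-9   # weights sum to 1
```

The compute saving comes from the fact that the unchosen experts are never evaluated at all; the real architectures add load-balancing tricks on top of this basic idea.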

Now, let's talk numbers - because they're mind-blowing!

While ChatGPT reportedly needs 20,000 high-end GPUs, DeepSeek gets similar results with just 2,000 (to put that into perspective, I think Meta needed something like 31M GPU hours for models that size). Their API costs $0.14 per million tokens compared to OpenAI's $7.50 for o1 - that's a 98% difference! They even offer 50 free daily messages on their chat platform.

How I see the future

I believe we're at a critical juncture in AI development. The choices we make now about how AI technology is developed, shared, and controlled will have far-reaching consequences. DeepSeek's success isn't just a victory for one company - it's a proof of concept that open-source AI can compete with, and sometimes outperform, proprietary solutions.

The question isn't whether open-source AI will play a role in the future - it's how big that role will be. And companies like DeepSeek are showing us that the answer might be "bigger than we thought."

As someone who's been following AI development, I can't help but feel optimistic about what DeepSeek represents. The real revolution in AI might not come from the biggest companies with the most resources. It might come from the open-source community, powered by players like DeepSeek that are willing to share their advances with the world. And that's a future worth getting excited about.

Cheers

]]>
<![CDATA[Farewell to Social Media]]>https://aien.me/farewell-to-social-media/6794d208eba51f0001296269Sat, 25 Jan 2025 12:22:21 GMT

I used to think the problem was entirely my fault. I’d tell myself that maybe I didn’t have enough willpower, or I wasn’t managing my time properly, or perhaps I had simply become too attached to the online world. Because of these doubts, I tried again and again to distance myself from social networks, yet each time, I found some reason to return. But this time, it’s different. After a lot of trial and error, and various experiences, I’ve come to a deeper understanding of what the underlying problem is.

From a Silly Fear to a Deeper Insight

Do you know what kept dragging me back each time? A rather silly fear: the fear of missing out (a.k.a FoMO). I worried that something important might happen and I wouldn’t know about it, or my friends might need me and I wouldn’t be there. Sometimes, I was even afraid of missing big news or losing touch with my friends’ lives and various events.

Yet every time I left and then returned, something became clearer: everyone was perfectly fine! It’s almost odd to say it, but the world keeps spinning without me being online 24/7. My friends and family go about their lives without my constant check-ins. They continue living even if I don’t watch their stories. And if something truly urgent happens, I’ll find out one way or another. It seems that the fear of missing out was more of an illusion than reality.

Remembering the Good Old Days…

Then I asked myself: why all this fear? Why was it never like this before? Why didn’t I feel like this 15 years ago?

Now, looking back, I miss the early days of the internet. Remember the era of Yahoo Chat or Yahoo 360, AOL, MSN, etc.? Some may be too young to recall, but we used to rush home from school, excited to explore the online world and discover new things. Back then, it felt like we owned the internet.

Now, though, it’s like we’ve become guests in a luxurious mansion run by a few giant companies. They tell us, "Come on in, it’s all free," but here’s the catch: we must let them record everything we do, control our behaviour and desires, and kick us out whenever they please. Sounds surreal, right? It’s as if we’re in a reality show that we joined willingly!

A Much Bigger Problem

But it’s not just about monitoring and control. There’s a deeper issue: human beings were never prepared to handle so much global connectivity, or rather so much data! It’s not that we lack the capacity; rather, after so many years of evolution, our brains have suddenly been thrown into an environment where we have to process massive amounts of information every day and interact with countless people, and at the same time, be perfect!

Today, almost every post is either clickbait or an extreme attempt at grabbing attention. It’s as if everyone is shouting, "Look at me!" and they’ll do just about anything to be seen. Some even cast aside basic ethics for a handful of extra thumbs.

For instance, we’ve all heard of monopolies and their harmful effects. Let’s quickly define a couple of terms to make sure we’re on the same page:

Monopolization (Monopoly): A process in which a large company or organization, by using its economic power and operational scale, drives out smaller competitors and gains complete control of a market or industry.
Oligopoly: A market situation where a small number of large companies (usually two to five) control most of the market. Instead of genuine competition, they often form a strategic balance with each other and effectively split the market among themselves.

That’s exactly what’s happening online. The modern internet is practically controlled by five major companies: Meta, Microsoft, Google, Apple, and Amazon.

To understand this better, consider how the internet used to be about 20 years ago, like a moderately large town filled with small shops, libraries, and cafés. Many were run by individuals, and you could choose one or several of them as your personal hangouts in your spare time.

Sometimes these spaces were personal websites where people wrote and posted in their own style, or they were smaller, one-way social networks such as Yahoo 360 or AOL. Admittedly, there were various problems, like cultural differences, yet, in the end, everything revolved around individuals. Viral content didn’t exist, and competition had a meaningful purpose: people genuinely tried to showcase their expertise and spent time creating quality posts that were actually helpful!

Then one day, all of a sudden, the first hypermarket opened: Facebook. Little by little (yet surprisingly quickly), everything changed. People crowded into this one spot, and almost no one visited smaller, more personal corners of the web. Relationships took center stage, overshadowing all else.

Since then, many people who barely knew what the internet was jumped on board via these social networks, but without realizing what murky politics were lurking beneath them or how these platforms might limit their exploration of the web beyond.

In the end, everything was and is for these giants! Everything, all your data, all of you!

For Those Who Are Still Caught Up…

If you, like so many others, are caught in this vortex and find it tough to walk away, here’s something you should know: you’re definitely not alone! It’s about far more than just having enough willpower.

Modern social networks are specifically designed to tap into our brains. They know exactly which emotional buttons to push, or which trending photos and videos to show us, so we can’t help staying glued. That’s why walking away from them often requires more than mere determination; we need a deeper understanding of our own needs, wants, and, most importantly, our fears.

For me, quitting social media was one of the best choices I’ve ever made. I felt lost, with my phone controlling my life. I wasn’t really enjoying any hobbies, and I felt anxious most of the time. I started searching for answers and realized that my relationship with social media, and with the internet in general, had become unhealthy.

Once I left platforms that consumed my free time (mainly Instagram, I thankfully never used TikTok), I could finally focus on my studies and work, and my mental health improved significantly. I also had more time for family and friends. Plus, I felt liberated from the constant pressure to present a flawless image of myself online. Ultimately, it all came down to three key reasons for leaving social media:

Constant Distraction

Every time a notification appears, our brains are wired to respond. This triggers a cycle of endlessly checking social media instead of focusing on what truly needs doing. That perpetual distraction makes it harder to stay on track and lowers overall productivity.

Negative Effects on Mental Health

Research consistently shows that too much social media use can increase anxiety, depression, and loneliness. In addition, social media can distort reality, because people typically share only the best parts of their lives, leading others to feel inadequate or to develop unrealistic expectations.

Scientific References:

  1. D. Ostic et al., “Effects of Social Media Use on Psychological Well-Being: A Mediated model,” _Frontiers in Psychology_, vol. 12, Jun. 2021
  2. B. Keles, N. McCrae, and A. Grealish, “A systematic review: the influence of social media on depression, anxiety and psychological distress in adolescents,” _International Journal of Adolescence and Youth_, vol. 25, no. 1, pp. 79–93, Mar. 2019

Loss of Privacy

Social media platforms collect and use large amounts of personal data, often for targeted advertising or selling to third parties. Leaving social media can help you regain control over your personal information.

A New Hope

Here’s some good news: that "good" internet is still alive! You just need to step away from a few popular apps and spend some time exploring. The digital world is much bigger than Instagram, X (Twitter), Facebook, TikTok, and the like.

If you want to embark on this journey, here’s a suggestion: start by asking yourself, "Why do I actually use social media?" Then, more importantly, "What could fill that space in my real life?"

My decision to leave social media wasn’t sudden. It was a long process that began with a simple fear and grew into a deeper understanding of how we connect with the digital world.

I also discovered some clear benefits after quitting:

Better Focus and Productivity

Without the nonstop pull of notifications and feeds, I found it easier to concentrate on important tasks and get more done.

Better Mental Health

Leaving social media can reduce feelings of anxiety, depression, and isolation, and help improve body image and self-confidence.

Stronger Relationships

When you’re not under constant pressure to appear perfect online, you can focus on forming meaningful connections with the people around you.

How to Quit Social Media

The journey away from social media isn't easy, but the benefits are worth it: improved focus, better mental health, and more authentic relationships. Instead of endless scrolling, explore activities that genuinely enrich your life - reading, walking, or spending quality time with loved ones.

  1. Evaluate Your Current Usage
    Write down how much time you spend on each platform and what you usually do there. Identify which platforms are the biggest distractions or bring out negative emotions. Put those at the top of your list to cut back on or quit entirely.
  2. Set a Clear Goal
    Choose a date to step away from social media and create a plan to meet that deadline.
  3. Try a Digital Detox
    For a while, avoid non-essential technology to give your mind a break. Later, you can return with a clearer understanding of whether these tools help or harm you, and make a more informed decision on how, or if, they should fit into your life.
  4. Take Small Steps
    If quitting all at once seems too intimidating, gradually reduce the time you spend on social media each day. Small, steady changes often lead to bigger, more lasting results.
  5. Find Alternatives
    Instead of mindlessly scrolling through feeds, look for other activities that bring you genuine enjoyment, like reading, walking, or spending time with friends and family. While it can be tough to compete with the “dopamine hits” social platforms provide, new habits can become just as rewarding over time.

Remember, giving up social media isn’t necessarily easy, and it may take time to adjust. But by taking small steps and discovering new, healthy ways to spend your time, you can reclaim control over your life and make real, positive changes.


Recalling those early days of the internet, when freedom and discovery truly meant something, inspired me to leave this closed, repetitive environment. It might be a small step, but every big transformation begins with a single move.

From now on, my plan is to be more active on my own website and to look for smaller, more personal online communities to join.

Wishing you the best of luck!

]]>
<![CDATA[Looking Beyond HackerNews]]>https://aien.me/looking-beyond-hackernews/678e3992d334b600015dd334Sat, 18 Jan 2025 13:34:00 GMT

I enjoy HackerNews. It's been my daily companion for tech news and interesting discussions (I actually prefer using hckrnews.com for its better interface). But lately, I've been thinking about how much of our online lives are controlled by the tech giants.

The internet used to be this amazing place full of diverse voices and independent websites. Now it feels like we're all stuck in the same few social media platforms and news aggregators. That's why I started looking for other places to learn and discover new ideas.

This page isn't about leaving HackerNews behind. It's about adding more flavors to my daily reading. Think of it as creating a richer information diet! Here are some interesting alternatives I've found:

Lobste.rs

A community-focused tech news site with an invitation system to maintain quality discussions


Tildes

A platform focused on meaningful conversations and quality content over viral posts

Tildes is a non-profit community site with no advertising or investors. It respects its users and their privacy, and prioritizes high-quality content and discussions.

Bear Blog Discover

A collection of minimalist personal blogs focusing on genuine content


Paperlined

A curated collection of technical articles and resources

paperlined.org

Lemmy Technology

Open-source, federated alternative to Reddit focusing on technology discussions

The official technology community of Lemmy.ml, for all news related to the creation and use of technology, and for civil, meaningful discussion around it.

Engineering Blogs

Aggregator of various company and personal engineering blogs


Jimmy R

News aggregator with multiple sources including tech and science news

Viral news today
Top rated news from all over the Web

Tech URLs

Curated collection of technology news and articles

TechURLs – A neat technology news aggregator
Read tech news from the most popular tech websites in one place.

Interesting Engineering

News and articles about innovation, science, and engineering

Interesting Engineering | Engineering, Tech and Science News
Explore Interesting Engineering for cutting-edge articles, news, and insights on technology, innovation, and the future of engineering worldwide.

Research Buzz

News about search engines, databases, and information resources

ResearchBuzz
News and resources covering social media, search engines, databases, archives, and other such information collections. Since 1998.

Techmeme

Technology news aggregator focusing on the most important tech stories

Techmeme
The essential tech news of the moment. Technology’s news site of record. Not for dummies.

Robohub

News and information about robotics and AI

Robohub - Connecting the robotics community to the world

Bleeping Computer

News and tutorials focusing on technology and cybersecurity

BleepingComputer
BleepingComputer is a premier destination for cybersecurity news for over 20 years, delivering breaking stories on the latest hacks, malware threats, and how to protect your devices.

Daily Rotation

News aggregator covering various tech topics

DAILY ROTATION
DAILY ROTATION, Tech News, current news headlines from thousands of tech related sites, science news, web based RSS reader for tech headlines from thousands of sites

Raddle Tech

Alternative tech discussion forum with a focus on privacy and freedom

]]>
<![CDATA[Umami, Right After Google Analytics Suddenly Terminated My Account]]>https://aien.me/umami-right-after-google-analytics-suddenly-terminated-my-account/67805e1dd334b600015dd01bThu, 09 Jan 2025 23:56:57 GMT

Today, I got the most frustrating email of my career. Google Analytics, the service I've been using for over five years to track my website's performance, suddenly informed me that they had terminated my account. Just like that. No warning, no explanation, nothing.


You know that feeling when someone delivers bad news to you through a cold text message? That's exactly how it felt. The email was super short, just saying they "detected improper activity" in my account. What activity? When? How? These questions kept spinning in my head while I was trying to log in to my account, only to find out I couldn't access any of my data anymore.

The funny thing is, I hadn't made any changes to my website or analytics setup in months. Everything was working fine until that morning when I opened my inbox. As someone who relies on website data to make decisions, this was not just annoying, it was actually harmful to my work.

What's The Problem

The biggest issue wasn't just losing access to GA. It was the complete lack of information about what went wrong. The email I received was like those automatic replies you get when you message a big company: cold, generic, and unhelpful. It simply mentioned "improper activity" without explaining what this activity was or when it happened.

I spent hours searching through Google's community forums, only to find others in the same situation. Some people had their accounts terminated without any clear reason, and most of them never got proper answers. It's quite shocking when you think about it: a company that knows so much about our online behavior can't even tell us why they're kicking us out!

I tried contacting their support team, but it felt like talking to a wall. Real support? Of course, that's just for paid customers. All the same, every response referred me to their Terms of Service without pointing out the specific violation. As a small "business" owner, this experience made me realize how dangerous it is to depend completely on a service that can cut you off without any proper explanation.

To Find An Alternative

After spending a few hours being angry at Google, I knew I had to find another solution. My website couldn't stay without any analytics. But this time, I wanted something different. Something that wouldn't leave me hanging without explanation. Something that would give me more control over my data.

I started googling for "Google Analytics alternatives" and "privacy-focused analytics tools." There were quite a few options out there, Matomo, Plausible, Fathom, and others. But then I found Umami, and it caught my attention straight away.

What made me interested in Umami wasn't just that it was free and open-source. It was the simplicity of it all. No confusing setup process that makes you wonder if you're doing it right. And most importantly, no risk of sudden account termination because all the data stays on your own server.

The transition wasn't as hard as I thought it would be. Sure, I had to learn a few new things, but isn't that always the case when you try something new? Plus, the Umami docs were really helpful and simple, unlike the frustrating experience I had with Google's support forums.

Why Umami?

After using Umami, I can honestly say that sometimes bad situations lead to better outcomes. Here's what makes Umami much better for my needs:

First of all, it respects privacy, both mine and my visitors'. There are no complex cookie notices needed, and no worries about GDPR compliance. It just collects the basic data that actually matters for my website. When I first saw Umami's dashboard, I was surprised by how clean and simple it looked. No overwhelming charts or confusing metrics, just the important stuff.


The best part? I own all my data now. It's stored on my server, and nobody can take it away from me. No more worries about waking up to find my analytics account terminated. Plus, Umami is really (really!) lightweight; it doesn't slow down my website like Google Analytics did (something I never realized until I made the switch, and I wish I had a screenshot to prove it).

Setting up Umami was pretty straightforward. Just add a simple script to your website, and you're good to go. No need to verify your site ownership, no need to accept long terms of service, and definitely no risk of getting banned for "improper activity" that nobody explains to you.
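If it helps, the snippet is a single script tag along these lines. The domain and website ID below are placeholders; you'd swap in your own instance's URL and the ID Umami generates when you register your site:

```html
<!-- Placeholder values: point src at your own Umami instance
     and use the website ID it generated for your site. -->
<script defer src="https://analytics.example.com/script.js"
        data-website-id="00000000-0000-0000-0000-000000000000"></script>
```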

Hope it helps, if you needed to set it up for yourself, let me know, I'd be glad to help.

]]>
<![CDATA[One Simple Domain Change That Cut My API Gateway Costs in Half]]>https://aien.me/one-simple-domain-change-that-cut-my-api-gateway-costs-in-half/6782ea66d334b600015dd0bdWed, 18 Dec 2024 22:57:00 GMT

I was debugging a React app a few weeks back when something caught my eye in the network tab. A simple GET request to my API was actually making two requests: first an OPTIONS request, then the actual GET. My API Gateway logs showed the same pattern: two requests, two charges (yes, both billed).

That's when it hit me: I was basically paying double for every single request because of CORS preflight requests. Now if you're hosting your API on a subdomain like api.example.com, you might be bleeding money the same way.

What's the problem?

Every time the browser makes a cross-origin request, it first sends a so-called preflight request to check if it's allowed to make the actual request. Since API Gateway charges per request, you're getting billed for both: the preflight and the main request.

CORS (Cross-Origin Resource Sharing) is a system, consisting of transmitting HTTP headers, that determines whether browsers block frontend JavaScript code from accessing responses for cross-origin requests.

If you're unsure, a cross-origin request happens when your frontend (e.g. example.com) tries to talk to a backend that's on a different domain, subdomain, or port (e.g. api.example.com). Think of it as your browser's security guard; it wants to make sure the API you're calling is actually expecting requests from your website. It's a crucial security feature, but in our case, it's also causing an unexpected cost overhead.

Here's the thing that surprised me: even if your API is hosted on a subdomain like api.example.com, the browser still treats it as a cross-origin request from example.com. I always thought subdomains would be treated as same-origin. They're not. api.example.com gets treated as if it is a separate origin.
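A rough sketch of how the browser decides this (my own illustration of the origin comparison, using only Python's standard library): an origin is the exact (scheme, host, port) triple, so a subdomain fails the check while a path on the same host passes.

```python
from urllib.parse import urlsplit

def origin(url):
    """Reduce a URL to its origin triple: (scheme, host, port)."""
    parts = urlsplit(url)
    default = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, parts.port or default)

def same_origin(a, b):
    """Browsers compare origins exactly; any mismatch means CORS applies."""
    return origin(a) == origin(b)

# A subdomain is a different host, hence a different origin (preflight needed):
assert not same_origin("https://example.com", "https://api.example.com")
# A path prefix on the same host stays same-origin (no preflight):
assert same_origin("https://example.com", "https://example.com/api")
# Port differences also make origins distinct:
assert not same_origin("https://example.com", "https://example.com:8443")
```

This is exactly why moving the API from api.example.com to example.com/api makes the preflight disappear: only the path changes, and the path plays no part in the origin comparison.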

What was (a) solution?

After spotting this in my AWS bill, I moved my API from api.example.com to example.com/api. It was basically just a simple config change in my gateway: update the API Gateway custom domain settings, modify my route records, and update the API endpoint in my React app. That's it.

The results were, well, immediate. Not only did my request count drop by nearly half (around 40%), but I also noticed a slight improvement in response times. Makes sense: one less network round trip means faster responses for my users.

Now, don't get me wrong. I'm not saying subdomains are bad. They're actually great for keeping your infrastructure organized, especially when you have multiple services or environments. I've worked with teams that love giving each service its own subdomain. It keeps things clean and gives teams more control over their piece of the system.

But if you're bootstrapping a side project or running a small service where every penny counts, those extra preflight requests can add up. That's when you might want to reconsider your domain strategy.

And hey, if you absolutely need to stick with subdomains, there's a middle ground: Cloudflare's CDN or AWS CloudFront. By putting a caching layer on top of your gateway or API, you can handle those preflight requests more efficiently. It won't eliminate the extra costs completely, but it'll definitely be gentler on your wallet. However, this is just a guess and I personally haven't tested it.
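Another mitigation worth knowing about, independent of any CDN: the Access-Control-Max-Age response header tells the browser to cache a preflight result so it doesn't repeat the OPTIONS request on every call. A hypothetical sketch of the headers a preflight response could return, assuming your gateway lets you set response headers for OPTIONS requests:

```typescript
// Hypothetical sketch: CORS headers an OPTIONS (preflight) response could
// return so the browser caches the result instead of re-asking every time.
function preflightHeaders(allowedOrigin: string): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": allowedOrigin,
    "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    // Cache the preflight result for 2 hours (7200 s). Browsers cap this
    // value (Chromium at 7200 s), so larger numbers won't buy you more there.
    "Access-Control-Max-Age": "7200",
  };
}

console.log(preflightHeaders("https://example.com")["Access-Control-Max-Age"]); // "7200"
```

This only reduces repeat preflights from the same browser; restructuring to a single origin removes them entirely.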

So there you have it, a simple domain restructuring that can save you money and speed up your app. It's these small architectural decisions that often make the biggest impact.

If you like my post, I share more of my architectural experiences and cost-saving discoveries on my website. Make sure to subscribe to my newsletter where I break down practical tips and real-world solutions I find in my journey.

Cheers!

]]>
<![CDATA[I switched to Cloudflare Tunnel and ditched Ngrok]]>https://aien.me/i-switched-to-cloudflare-tunnel/65ed81ecb72ee400013a2bdeSun, 10 Mar 2024 11:22:15 GMT

I'd been using Ngrok for a while. It's a helpful tool that allows me to quickly spin up local projects and share them with clients without needing deployment. In my case, I built a CloudIDE, and to address the challenge of hosting my React application on a local Docker container, I used Ngrok to tunnel the project to a publicly accessible URL for testing.

However, the free tier of Ngrok has limitations. It restricts you to a single tunnel at a time and doesn't allow customizing the domain name. To overcome these limitations and cut the subscription costs, I looked for a self-hosted alternative that offered similar ease of use.

Unfortunately, I couldn't find anything that matched Ngrok's simplicity, where a single command-line program sets up the tunnel.

Then, I discovered Cloudflare Tunnels! While setting up Cloudflare initially might require a bit more effort compared to Ngrok, considering the features and flexibility it offers, I found it to be a worthwhile investment.

What is the Cloudflare Tunnel?

According to Cloudflare's website,

Cloudflare Tunnel provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address.

In simpler terms, instead of exposing your machine's (your localhost for example) IP, a lightweight program called cloudflared creates a secure, one-way connection directly to Cloudflare. This lets you easily share your in-progress website with colleagues or test it from anywhere with an internet connection. It's not just for websites either, Cloudflare Tunnel can handle standard HTTP servers and even allows you to tunnel SSH connections for secure remote access to your development environment. As an added bonus, your local development setup benefits from the security features that Cloudflare offers!

Installing Cloudflared

Cloudflare Tunnels offer two setup methods: through the dashboard or the command-line interface (CLI).

The first option involves using Cloudflare's Zero Trust Platform (ZTP). ZTP is, from my understanding, a security suite designed to manage access and connectivity across your network. It encompasses various functionalities, including tunneling capabilities. I will focus on setting up tunnels through the CLI, offering a more streamlined approach. This method assumes you already have a valid domain name registered and configured within Cloudflare.

Install the package on your machine

We'll first install the package on our local machine, where we want to tunnel our apps from:

sudo mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/cloudflared.list
sudo apt-get update && sudo apt-get install cloudflared

Authenticate the program

The following command will open a new browser window and let you log in to your Cloudflare account and choose a domain. Note that we want to host our local applications on a specific domain. For example, I want to tunnel all apps running on port 3000 to 3000.aien.me, so aien.me is the domain I'll choose. I'll assume yours is my.domain.

cloudflared tunnel login

Create a tunnel

The next step is to actually create a tunnel in Cloudflare. This lets Cloudflare generate a unique ID for you, along with the credentials you need to tunnel your apps. Note that no connection is established yet between your local system and Cloudflare!

⚠️
The important part here is that Cloudflare only allows you to create one tunnel in the free tier. But the difference with Ngrok is that, in Cloudflare, a single tunnel can actually route many applications!

Also note that you can give it any name you want. I prefer logical names; for example, since I'm running the tunnel on my home machine, I'd call it wsl.home or wsl.work. (The . is not mandatory.)

cloudflared tunnel create <TUNNEL NAME>

Now if everything goes fine, it'll print out a tunnel ID, which you'll need to reference in a configuration file.

Tunnel credentials written to xxx.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel wsl.home with id xxx
You can verify that your tunnel was created in the Zero Trust dashboard

Doing the configuration

Now that we have our tunnel created, we need to tell cloudflared which application on which port should be served under which hostname on our domain, and over which tunnel it should be routed.

For this, we'll use a YAML configuration file. If you take a look at the terminal where the tunnel got created, cloudflared already printed the path to your .cloudflared directory. By default it is ~/.cloudflared/. We'll create our config file using

touch ~/.cloudflared/config.yml

And then we have to put the following default content in it:

tunnel: <Tunnel UUID>
credentials-file: /home/<User name>/.cloudflared/<Tunnel UUID>.json
warp-routing:
    enabled: true

Since we want to expose more than one local service to the internet, we'll also add the ingress config in the same file:

tunnel: <Tunnel UUID>
credentials-file: /home/<User name>/.cloudflared/<Tunnel UUID>.json
warp-routing:
    enabled: true

ingress:
  - hostname: hostname1.my.domain
    service: http://localhost:3000
  - hostname: hostname2.my.domain
    service: http://localhost:8000
  - service: http_status:404

Tip: If you want to control how your hostnames look, take a look at the official documentation.

What is http_status:404?

Cloudflare evaluates ingress rules and hostnames from top to bottom. The file must end with a catch-all rule; http_status:404 simply returns a 404 for any request that doesn't match one of the hostnames above it.

Add DNS record per hostname

Now we need to assign a CNAME record for each hostname (service) that we've registered and want to tunnel:

cloudflared tunnel route dns <Tunnel Name> <Hostname>

as an example:

cloudflared tunnel route dns wsl.home hostname1.my.domain

Run the tunnel

The last step is to actually start the tunnel, which proxies the incoming traffic to our services.

cloudflared tunnel run <Tunnel Name>

Conclusion

While setting up a Cloudflare Tunnel might take a bit longer than Ngrok, the benefits outweigh the initial time investment, and it's fun. Cloudflare Tunnels offer greater flexibility, cost-effectiveness, and a sense of ownership from using your own domain.

Did this post help you set up your own tunnel and host your local projects? Share your thoughts in the comments below!

]]>
<![CDATA[My Take On Social Media and Internet]]>https://aien.me/my-take-on-social-media/65bff4591e8c1d00013aedcdMon, 05 Feb 2024 00:24:18 GMT

2024 has just begun, and January is already behind us. How fast things are moving! It's as if we can't even remember what happened yesterday because we somehow have to "always" worry about tomorrow.

This harsh pace is not just a feature of our busy lives but a reflection of the digital world we are immersed in. Social media and the internet, the supposed connectors of the "global village", have turned into endless, fast-moving streams of content, pushing us to constantly look ahead and scroll, rarely giving us a moment to reflect on the now, let alone the past.

What I believe is that at the heart of this "evolution" lies a deep shift in the core of connection and community. Remember the early days of the internet? They were marked by a sense of exploration and discovery. Google, BuzzFeed, AOL, Yahoo360, DeviantArt, other websites, and blogs were not just digital spaces but canvases for expression, where writers and creators spent hours, if not days, pouring their thoughts and passions into content that was meant to be read, studied, and discussed. These platforms were the digital equivalent of public squares and coffee shops, where people gathered not just to consume but to connect.

Don't get me wrong, most of these websites still exist; however, we now consume our content mostly from other places. Places meant for people in a hurry, made by people who dream of becoming millionaires overnight by doing and saying anything possible. People like this existed back then too, but there wasn't a single algorithm to rule them all, in the sense of bringing the content with the most views to your eyes, regardless of the "content" itself. Back then, "you" were responsible for what you saw and what you found on the internet. Today, it's the FAANG (or better called MAANG) companies that decide it for you, because once you told a friend you were thinking about buying a bicycle, and now your Instagram Explore is full of people who are hyper-healthy because, well, they ride bicycles!

In 2024, the digital landscape has changed dramatically. The dominance of a few companies has turned the internet into a monopolized space where diversity of thought and the slow, thoughtful consumption of content have been sidelined in favor of quick, digestible (in less than 10 seconds) content made by "bloggers" and designed to grab attention in the shortest possible time. Something our minds haven't evolved to do. This high-speed, vertical-scrolling environment prioritizes instant engagement over lasting connection, which reduces complex ideas to ultra-short clips and transforms users and readers into pure metrics.

This shift raised questions in my mind about the nature of social media itself. Can platforms that prioritize profit over people still be considered "social" in the true sense of the term? Think about it: they may offer an "illusion of connection", but the underlying reality is one of isolation, where the depth of our interactions is often as trivial as the content we scroll through. The term "social media" has become a misrepresentation of a space that is more about broadcasting than connecting, more about individual consumption than communal engagement.

In my opinion, Twitter (now X) was the last fortress we had in terms of staying "social", and it died in 2023. I just hope the same won't happen to Reddit!

It wasn't perfect, I know. I had serious concerns about how easy it was to depersonalise and demonise others. The "main character" of Twitter was someone who the collective believed had said something extremely stupid, and often they were right. But did that justify being piled on by thousands of strangers? Our minds aren't made for that.

As we now stand at the beginning of 2024, thinking about the rapid pace of our digital existence, it's crucial to confront these concerns sooner rather than later. The transformation of social media and the internet from spaces of connection into "engines of profit" is a phenomenon that deserves attention and action. The challenge now is not just to navigate this new digital landscape but to critically examine its impact on our ability to truly connect with one another. The question is:

What have we lost in this transition, and more importantly, what can we do to reclaim the essence of true social connectivity in the digital age?

Back to the Future

Now, this part is just my own take on the early 2000s. It doesn't necessarily reflect everyone's experience, but I believe the experience was similar in many respects.

In the early 2000s, the internet was a growing space of opportunity and connection. Unlike the monopolized digital landscape of 2024, it was a mosaic of individual websites and forums, each a unique corner of the digital universe where people could find communities that matched their interests, passions, and needs, of their own choosing. This era was characterized by a sense of digital advancement, where users navigated a less commercialized web in search of knowledge, companionship, and understanding.

Personal blogs, forums, and communities like Yahoo360, MySpace, etc. were the heartbeats of this early internet. Writers would spend days crafting thoughtful articles and blog posts, not for the sake of likes or shares, but to contribute to a broader conversation, or sometimes just because they were hackers in life! Readers engaged with this content not by merely scrolling past but by reading, reflecting, and often responding. This, in my opinion, created a cycle of meaningful exchanges where ideas could be debated and relationships were formed over shared interests and dialogues.

Moreover, the diversity of platforms meant that users weren't boxed into a one-size-fits-all (Instagram, TikTok etc.) experience. Whether through forums, personal blogs, or early social networking sites, the internet truly felt like a global village. This period marked my childhood with an authenticity in digital connections, where the slower pace of content consumption allowed for deeper engagement and a stronger sense of community.

The lack of algorithmic interference meant that discovery was often accidental and offhand, leading to a richer, more diverse web experience. People weren't just users in this digital landscape; they were explorers, contributors, and community members. It was far from perfect, yet its foundational philosophy was built on principles of openness, discovery, and genuine social connectivity.

Now I stand in 2024, looking back at the early 2000s, and it's clear that something fundamental has shifted. I know, I'm a grown-up now, with more responsibilities and concerns in my life, but still, I miss the days when I paid for something and it was actually mine after the payment!

A Monopoly of Moments

It's 2024, and the digital landscape presents an utter contrast to the diverse and exploratory web of the early 2000s. Today, a handful of platforms wield bizarre control over how we consume content, interact with others, and perceive the world around us. This monopolization has transformed the internet into a tightly controlled ecosystem of "apps" where engagement and, of course, profit dictate the dynamics of digital interaction.

Platforms like Instagram or TikTok are designed not for depth but for speed, capturing users' attention with an endless stream of content that is often shallow and momentary. Their emphasis on "viral trends" and "eye-catching visuals" keeps us scrolling and liking.

Furthermore, the consolidation of digital platforms under a few corporate giants has weakened innovation and diversity. New voices and ideas find it increasingly difficult to break through the noise, as the algorithms favor content that conforms to the established "norms" of engagement. Norms that define what is wrong and what's right, so you won't feel safe enough to say anything, because someone, somewhere will find it offensive! This not only limits the variety of content available but also restricts the potential for genuine creativity and expression in the digital realm.

The consequences of this monopolized digital landscape extend beyond individual experiences, shaping societal perceptions and interactions. The rapid consumption of content fosters a culture of immediacy and disposability, where the value of information and connection is measured in seconds rather than substance.

The Future is Now

Now, I could write a million lines of critique about the current situation. Everyone can. Yet it's essential not only to critique but also to imagine and work towards alternatives that support genuine connection and diversity. I go back to my previous question: what can we do to reclaim the essence of true social connectivity in the digital age?

Of course, communities must be prioritized over algorithms. This is the first thing that comes to my mind, and it would be the first step towards rebuilding our digital spaces: shifting the focus from algorithm-driven content (ADC) to community-centric platforms (CCP).

The next step, in my opinion, is to foster digital education and critical engagement, which allows us to navigate and shape the future effectively. Basically, I think users must become digitally literate, for example by understanding how algorithms influence what they see and why, and how their data is used. Everyone wants to become a millionaire, but what are the costs?

Support open source! I remember some years ago it was the actual hype. Today, open source often just means a "cheaper" version of a software product, for which you eventually have to pay.

Be less judgemental and emotional! Now watch out, this is a sensitive one. I believe it's also necessary to train emotional resilience, or flexibility, in the face of the content (or anything) we encounter online. One of the general challenges in today's internet landscape, or even our lives, is that we've become more sensitive and reactive to criticism or differing opinions. This often prevents constructive dialogue and obviously divides us further, rather than improving our understanding and growth. To counteract this, we must try to approach [online] interactions with openness and a willingness to consider perspectives beyond our own, without immediate recourse to offense or defense. Basically, developing a thicker skin, so to speak, does not mean becoming indifferent or dismissive, but rather learning to differentiate between personal attacks and constructive criticism (which by itself is indeed hard, and is part of being a grown-up), and responding to each accordingly. By becoming less reactionary and more reflective, we can create a digital environment that encourages healthy debate, diverse opinions, and the robust exchange of ideas, ultimately enriching our collective discourse and understanding.

I think what I'm trying to say, in the end, is that as we go through these currents, it's important not to let corporate algorithms and marketing strategies determine the course of our [online] interactions. We are far more than just a crowd or data points to be analyzed and monetized. It is time for us to stand up and reclaim the narrative, to weave genuine connections and cherish the diversity of voices that enrich our ecosystem. Don't let the essence of our communities be overshadowed by profit-driven agendas.

Please share your insights. How will you contribute to preserving the core of our engagements?

]]>
<![CDATA[Simplifying Complex Systems with Backend for Frontends (BFF)]]>https://aien.me/simplifying-complex-systems-with-backend-for-frontends-bff/65167d297f285f00012fd18eSat, 07 Oct 2023 11:05:19 GMT

In the realm of web development, orchestrating a smooth dialogue between the front end (what you see on the screen) and the back end (the magic happening behind the scenes) is crucial. Imagine you're at a bustling cafe, where the front end is represented by the baristas who take your order, and the back end is the kitchen where your coffee gets brewed. In a traditional setup, the baristas (front end) would shout your order across to the kitchen (back end), amidst all the cafe noise.

This common scenario got a different spin during one of my recent job interviews, where the discussion ventured into the Backend for Frontends (BFF) pattern. It piqued my curiosity, urging me to delve into understanding it from an architectural standpoint. The concept of BFF acts like a dedicated waiter, who takes your order from the barista (front end), and delivers it accurately to the kitchen (back end), ensuring that your coffee is made to your specifications and served hot. This dedicated channel helps in eliminating the noise and ensuring that the communication between the front end and back end is clear and efficient.

With the BFF pattern, we create a tailored communication channel that understands the unique needs of the front end, making sure it gets exactly what it needs from the back end to provide a seamless and enjoyable user experience.


As we journey through this post, we will explore the essence of the BFF pattern, its benefits, and how it can be a game-changer in bridging the conversation between the front and back ends of your projects.

The Traditional Approach

In traditional client-server web development, the communication between the front end and back end is often handled through a single backend service.


This setup tries to be a one-answer-fits-all solution, aiming to meet the different needs of various front-end applications like websites, mobile apps, or web apps.

Here’s a breakdown of the typical challenges faced in this traditional approach:

  1. One-Size-Fits-All Problem: Although this could seem like a solution and a "financial" approach to solving "time" and "speed" limitations, the backend service tries to cater to all types of front-end applications, which often leads to over-complicated code and a bloated system. It's like having a universal remote that has become too complex due to the multitude of functions it's trying to support.
  2. Tight Coupling: The front end and back end are closely tied together. When they are tightly coupled, a change in one can cause a cascade of issues in the other. It's akin to a domino effect; knock one down, and the rest follow.
  3. Scalability Issues: As the system grows, the single backend service can become a bottleneck, hindering the ability to scale the system to meet growing demands. It's like a traffic jam, where a single accident can cause a massive backlog.
  4. Difficulty in Managing Complexities: With an expanding scope of business services, managing the complexities becomes a challenging task. This leads to more bugs, slower development, and a higher cost of maintenance.

The traditional model, while straightforward, often struggles to keep up as the system evolves and the demands grow. Its limitations become apparent, especially when dealing with multiple front-end platforms, each with its unique requirements.

In the next section, I will introduce the Backend for Frontends (BFF) pattern, which emerges as a solution to overcome these challenges, ensuring each front-end application gets its own personalized service from the backend, paving the way for a more streamlined and efficient development process.

Diving into Backend for Frontends (BFF)

Create separate backend services to be consumed by specific frontend applications or interfaces. This pattern is useful when you want to avoid customizing a single backend for multiple interfaces - Microsoft Learn Platform

The term Backend for Frontends (BFF) was coined to describe a specific type of backend service that is designed to cater to the unique needs of different front-end applications. Unlike the traditional approach where one backend tries to meet all the needs, the BFF pattern encourages creating a separate backend service (not to be confused with a microservice) for each front-end application. This way, each backend is tailored to support the specific requirements of the front-end it serves.


Let's break down the core aspects of the BFF pattern:

  1. Specific Backend Services: Each front-end application (like a website, mobile app, or web app) gets its own backend service. This makes the backend services simpler and more focused.
  2. Tailored APIs: The APIs are crafted to match the exact needs of the front-end, ensuring that the front-end gets the data it needs in the format it prefers.
  3. Improved Performance: By reducing the amount of unnecessary data being sent between the front and back ends, the performance is significantly improved.
  4. Faster Iterations: With a more streamlined setup, it's easier and quicker to make changes and improvements to both the front and back ends.
  5. Better Scalability: The BFF pattern allows for better scalability, as each backend service can be scaled independently based on the demand.
  6. Enhanced User Experience (UX): With faster performance and more tailored data, the user experience is greatly enhanced.
  7. Simplified Error Handling: Error handling becomes more straightforward as the errors can be handled in a manner that is most suitable for the front-end application.

Let us put it this way: The BFF pattern is like having a translator who speaks the native language of each party in a conversation, ensuring clear communication and understanding. It addresses the challenges faced in the traditional approach, paving the way for a more efficient and enjoyable development process.
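To make the idea concrete, here's a minimal sketch of BFF response shaping. All record, field, and function names are invented for illustration; the point is simply that each front-end's dedicated backend returns only what that client needs, in the shape it prefers:

```typescript
// Hypothetical sketch of BFF response shaping. The record and field names
// are made up for illustration.
interface Product {
  id: string;
  name: string;
  description: string;
  priceCents: number;
  imageUrls: string[];
}

// Web BFF: full payload for a detail page, with a pre-formatted price.
function webProductView(p: Product) {
  return { ...p, price: `$${(p.priceCents / 100).toFixed(2)}` };
}

// Mobile BFF: trimmed payload for a list view on a slow connection.
function mobileProductView(p: Product) {
  return { id: p.id, name: p.name, thumbnail: p.imageUrls[0] };
}

const product: Product = {
  id: "p1",
  name: "Espresso Machine",
  description: "A very long description.",
  priceCents: 24999,
  imageUrls: ["/img/p1-large.jpg", "/img/p1-2.jpg"],
};

console.log(webProductView(product).price);        // "$249.99"
console.log(mobileProductView(product).thumbnail); // "/img/p1-large.jpg"
```

The shaping logic lives in the BFF, so neither the front-ends nor the core backend services need to know about each other's formats.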

How is BFF Different from Microservices

The Backend for Frontends (BFF) pattern and the Microservices architecture are both modern approaches aimed at making web development more efficient and scalable. However, they serve different purposes and operate at different layers of the architecture!

The BFF pattern is primarily concerned with creating a user-specific backend, acting as a mediator that facilitates the communication between the front-end and the existing backend services. It provides a dedicated backend for each front-end application, tailoring the data and operations to the specific needs of the user interface. This way, the BFF ensures that the front-end gets precisely what it needs, no more, no less. It’s like having a personal assistant who knows your preferences, handling your requests in a manner that suits you best.

On the other hand, Microservices is an architectural style that structures an application as a collection of loosely coupled, independently deployable services. Each service, often corresponding to a business capability, operates in a self-contained manner, encapsulating its own data and operations. Unlike BFF, Microservices do not specifically cater to the front-end; instead, they focus on decomposing the backend into smaller, manageable services that can evolve independently. It’s akin to having a team of specialists, each dedicated to handling a different aspect of the business.

While BFF and Microservices may seem similar at a glance, their fundamental difference lies in their focus and the layer at which they operate. BFF is all about enhancing the interaction between the front-end and back-end, ensuring a seamless user experience by providing tailored backend services for each front-end application. In contrast, Microservices aim to simplify backend complexity by breaking it down into smaller, more manageable services, promoting a decentralized approach to developing, deploying, and scaling backend functionality.

How Microservices and BFF work together

In scenarios where the project demands a robust backend with complex business logic, adopting a Microservices architecture might be the right choice. However, when the focus is on delivering a superior user experience with different front-end applications, the BFF pattern shines bright, offering a more tailored and efficient solution.

The BFF and Microservices are not mutually exclusive; in fact, they can complement each other in a well-designed system. A BFF can act as a liaison between the front-end and a Microservices-based backend, offering a harmonized solution that leverages the strengths of both architectural patterns.

How is BFF Different from API Gateway

BFF is, more or less, a design pattern, while microservices are an architecture. But if you look more closely at the diagram above, BFF looks a lot like another design pattern commonly used in microservice architectures: the API Gateway pattern!

The Backend for Frontends (BFF) and the API Gateway are both architectural patterns that aim to manage and simplify the interactions between client-facing applications and backend services. However, they serve somewhat different purposes and exhibit different characteristics in how they handle client-server interactions.

An API Gateway serves as the sole entry point to the system for all clients, whereas a BFF caters only to a specific client type. Assume, for instance, your system accommodates two common clients: a Single Page Application (SPA) and a mobile client (Android, iOS).

API Gateway vs BFF

Here are some key differences between BFF and API Gateway:

  1. Level of Customization: BFF offers a higher level of customization for each front-end application as it allows for a dedicated backend service. In contrast, the API Gateway provides a more generalized approach to managing client-server interactions.
  2. Complexity: BFF can add complexity if there are many front-end applications as each requires its own BFF. On the other hand, an API Gateway centralizes the handling of client-server interactions, potentially reducing the complexity.
  3. Focus: While BFF is focused on optimizing the communication between front-end and backend services, API Gateway is more concerned with providing a set of shared services to handle common concerns like routing and security.
  4. Deployment: Each BFF can be deployed independently, allowing for a development lifecycle that’s closely aligned with its corresponding front-end. The API Gateway, being a centralized component, may have a different deployment lifecycle.
  5. Scalability: BFF allows for independent scalability based on the needs of each front-end application, while the API Gateway may present scalability challenges due to its centralized nature.
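A toy sketch of the routing difference (all names are hypothetical): an API Gateway is a single entry point that routes by path for every client, while the BFF approach gives each client type its own dedicated endpoint:

```typescript
// Hypothetical sketch. An API Gateway routes all clients by path from one
// entry point; with BFFs, each client type talks to its own host.
function gatewayRoute(path: string): string {
  if (path.startsWith("/orders")) return "orders-service";
  if (path.startsWith("/users")) return "users-service";
  return "not-found";
}

const bffForClient: Record<string, string> = {
  spa: "https://web-bff.my.domain",
  mobile: "https://mobile-bff.my.domain",
};

console.log(gatewayRoute("/orders/42")); // "orders-service"
console.log(bffForClient["mobile"]);     // "https://mobile-bff.my.domain"
```

In practice the two are often combined: each BFF sits behind (or acts as) a gateway for its own client type.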

How is BFF Different from GraphQL

A legitimate question one might ask now is: how is the BFF pattern any different from GraphQL?

Backend for Frontends (BFF) and GraphQL are distinct architectural paradigms, each with its own unique approach towards managing the communication between front-end and back-end systems. Let's explore their differences to better understand when to use each.

GraphQL is a query language developed by Facebook, aiming to provide a more flexible and efficient means for front-end applications to communicate with back-end services. Unlike BFF, GraphQL does not entail creating separate back-end services for each front-end. Instead, it provides a single endpoint that all front-end applications can query to get exactly the data they need. This flexibility reduces the need for multiple requests to different endpoints, often leading to performance improvements especially on slow networks.

However, they are not mutually exclusive and can be combined for enhanced client-server interactions.

Let's consider a scenario of an e-commerce platform with web, mobile, and third-party front-ends, each having different data requirements. The conventional BFF pattern would involve creating separate backends for each front-end to cater to their specific needs. On the other hand, GraphQL would allow all front-ends to query a single endpoint for exactly the data they need.

[Image: BFF example in e-commerce]
💡
A quick note on GraphQL and API Gateways: a GraphQL server can look like an API Gateway, but the two shouldn't be mistaken for each other, because they serve different purposes.
GraphQL is a query language and runtime for APIs that enables clients to request exactly the data they need, while an API Gateway is a server that acts as an intermediary between API consumers and API providers, handling concerns like routing, rate limiting, and analytics.
So you can perfectly well have an API Gateway that uses GraphQL to transfer data between front-end and back-end.

Now, imagine integrating BFF with GraphQL in this setup. Each front-end could have its own BFF, but instead of traditional REST APIs, the BFFs would expose GraphQL APIs. This way, each front-end still gets a tailored backend, while also benefiting from the flexible querying capabilities of GraphQL.
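
A rough sketch of that hybrid, again in plain Python with invented service names: the mobile front-end still has its own BFF (aggregation and tailoring), but the BFF's single endpoint honors a GraphQL-style field selection instead of returning a fixed REST payload.

```python
# Hypothetical back-end services behind the mobile BFF.
def user_service(uid: int) -> dict:
    return {"id": uid, "name": "Ada", "email": "ada@example.com"}

def order_service(uid: int) -> dict:
    return {"user_id": uid, "open_orders": 2}

def mobile_bff_query(uid: int, fields: list[str]) -> dict:
    """Mobile-specific BFF endpoint: aggregates services (the BFF part),
    then lets the client pick fields (the GraphQL part)."""
    aggregated = {**user_service(uid), **order_service(uid)}
    return {f: aggregated[f] for f in fields if f in aggregated}
```

The tailoring decisions (which services to aggregate) stay in the BFF, while the per-screen data shape is left to the client's query.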

[Image: Such a constellation wouldn't differ much from a normal BFF]

In essence, combining BFF and GraphQL could provide a balanced approach, where the tailored environment of BFF meets the querying flexibility of GraphQL, orchestrating a harmonious client-server interaction that's scalable and efficient. This hybrid architecture can be especially beneficial in complex projects with multiple front-ends, each with varying data requirements.

Tools and Technologies for BFF Implementation

Implementing a Backend for Frontends architecture in a microservices environment (or even in a monolith) opens up a realm of possibilities in terms of tools and technologies. The beauty of microservices is that they allow for a polyglot architecture where different services, including BFFs, can be implemented using the technologies that suit them best. However, the key to a harmonious microservices architecture lies in well-defined interfaces, irrespective of the underlying technology.

So apart from the specific technology and tools, I believe a designer should focus on the following points:

  1. Defining Clear Interfaces: Before delving into tools and technologies, prioritize defining clear and consistent interfaces between the BFF and other services. This is crucial for ensuring smooth interactions and is a stepping stone towards a later in-depth exploration of "System Interface Design".
  2. Technology Selection: The choice of technology for implementing BFFs can be quite flexible. Common choices include Node.js for its non-blocking I/O and vast library ecosystem, or Spring Boot if your team has strong Java expertise.
  3. Microservices Frameworks: Microservices frameworks like Spring Cloud, Micronaut, or Express.js can simplify the development of BFF and other services, providing built-in solutions for concerns like service discovery, load balancing, and resiliency.
  4. Containerization and Orchestration: Containerization tools like Docker and orchestration platforms like Kubernetes are fundamental for deploying, scaling, and managing your microservices and BFFs.
  5. Monitoring and Observability Tools: Tools like Prometheus for monitoring and Elasticsearch, Logstash, and Kibana (ELK stack) for logging and observability are crucial for maintaining the health and performance of your BFF architecture.
  6. Communication Protocols: Depending on your system’s needs, protocols like HTTP/REST, gRPC, or GraphQL can be used for communication between services. Each has its own set of advantages and trade-offs.
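
The first point above, defining clear interfaces, can be made tangible with an explicit response contract. Here is a small Python sketch (the type and field names are invented for illustration): the BFF's published payload is a typed structure, so any change to it is a visible, reviewable interface change rather than silent drift.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class MobileProductView:
    """Published contract of the mobile BFF's product endpoint.
    Fields not listed here never leak to the client."""
    id: int
    name: str
    price: float

def to_mobile_view(raw: dict) -> MobileProductView:
    # Translate a raw service payload into the published contract,
    # normalizing types on the way (e.g. price arrives as a string).
    return MobileProductView(id=raw["id"], name=raw["name"],
                             price=float(raw["price"]))

view = to_mobile_view(
    {"id": 3, "name": "Mouse", "price": "24.90", "internal_sku": "X1"}
)
```

The same idea applies in any language, whether as TypeScript interfaces, Protobuf messages for gRPC, or an OpenAPI schema for REST.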

Conclusion

Exploring the Backend for Frontends architecture is like entering a room where the communication between the front-end and back-end becomes clearer. It's like having a translator for each front-end, ensuring they all understand the back-end in their own way. Emphasizing clear interfaces sets rules for effective communication among services, regardless of their technical language. Combining BFF with other technologies like GraphQL allows for a larger, smoother conversation. As digital conversations become louder and more crowded, architectures like BFF help maintain a crisp, meaningful, and enjoyable dialogue for all users involved.

I hope you enjoyed this exploration through the Backend for Frontends architecture. Your thoughts and experiences enrich this discussion, so feel free to share them in the comments below. Looking forward to engaging in a lively conversation with you!

]]>