Ikorason https://ikorason.dev Minimal blog built with Astro Sat, 21 Feb 2026 05:07:24 GMT https://validator.w3.org/feed/docs/rss2.html Ikorason Feed Generator en-US Copyright © 2026 Irfan <![CDATA[The Attachment Is Real. So Is What’s Beyond It.]]> https://ikorason.dev/the-attachment-is-real-so-is-whats-beyond-it https://ikorason.dev/the-attachment-is-real-so-is-whats-beyond-it Sat, 21 Feb 2026 00:00:00 GMT Late nights. A stubborn movie app. Stack Overflow tabs I’d lost count of. I typed every line, deleted most of it, typed it again. When it finally worked I just kept playing with it; refreshing, clicking around, just staring at it. If you’ve been there, you know. That little app even got me hired. Today I’d have it running before I finished my coffee.

Honestly? It’s disorienting, sometimes even depressing. The replacement talk is loud, the fun feels different. The internet is still flooded with it. Someone built an app in an hour. Someone else shipped a product without writing a single line. Constant, loud, just a lot of noise. It felt less like progress and more like everyone was moving on without me, and I hadn’t even packed my bags yet. But somewhere in all that noise, something shifted.

“You cannot swim for new horizons until you have courage to lose sight of the shore.” — William Faulkner[^1]

Instead of pulling back, I leaned in. I kept asking why; why are they convinced this replaces us? I started to learn how these tools actually work under the hood, what they are and what they aren’t. Things like context, planning, knowing how to guide the tool rather than just prompting and hoping. Still learning, honestly. And somewhere in that process something clicked.

“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke[^2]

It’s not magic. It never was. Look, I’m not claiming I understand how AI actually works at its core; that’s deep, that’s a whole career. But understanding how the tool works? How to work with it properly? That I can do. And once something clicks like that, you stop fearing it and start using it. Now I build things in hours that used to take me days. Not because I stopped being a developer, but because I finally understood what being one actually means.

So what replaced that feeling; the frustration, the late nights, typing every line and watching it slowly come to life? The joy of orchestration. Telling the tool what I’m thinking, exploring options I wouldn’t have thought of alone. Ideas flow faster. Tasks that used to drain me just get done. And I’m still learning; books, docs, new ideas the tool itself surfaces. The learning never stopped, it just shifted. And documentation; I used to have zero time for it, now it’s everything. Most of my time now? Markdown files. Lots and lots of markdown files. It was never about writing clean code. It was always about understanding how to write clean code. The tool does the typing. The understanding is still yours.

“A man who works with his hands is a laborer; a man who works with his hands and his brain is a craftsman; but a man who works with his hands and his brain and his heart is an artist.” — Saint Francis of Assisi[^3]

But let me be honest. The tool can make you overconfident. I’d knocked out a few backend pull requests and started feeling like I had it figured out. Picked up a task, thought it was simple; just add an endpoint. The tool gave me something that looked right. It wasn’t. After reviewing it together with the backend dev, turns out it needed product involvement. The lesson? The tool doesn’t know what you don’t know. That’s still on you.

Does it make me feel dumber? Honestly, I’ve seen the memes, the studies, the “AI is rotting your brain” takes. I get why people think that. But I just don’t agree. You’re not thinking less. You’re thinking differently. And if you’re doing it right, you’re thinking bigger.

The attachment was real. So is what’s beyond it. Software development is changing faster than most of us are comfortable with. We resist what disrupts us; that’s just human. But somewhere between the late nights and the prompts that build in minutes what used to take days, I quietly made peace with it.

“Your joy is your sorrow unmasked.” — Kahlil Gibran[^4]

I’m sure I’ll find something new to attach to, or maybe I already have.

[^1]: William Faulkner, The Mansion (1959).

[^2]: Arthur C. Clarke, Profiles of the Future (1973).

[^3]: Saint Francis of Assisi

[^4]: Kahlil Gibran, The Prophet (1923), “On Joy and Sorrow.”

]]>
<![CDATA[From Frontend to Low-Level Networking: My Journey Contributing to Open Source]]> https://ikorason.dev/from-frontend-to-low-level-networking-my-journey-contributing-to-open-source https://ikorason.dev/from-frontend-to-low-level-networking-my-journey-contributing-to-open-source Tue, 16 Dec 2025 00:00:00 GMT Life, Family, and Open Source

It’s been a while since my last post. As always, life happens—and in the best way possible, as our family recently grew with the arrival of our second child! With a new baby, the dynamic shifts again. While family is always my number one priority, this change pushed me to improve how I organize my work and learning. I’ve had to become more efficient in time management to ensure I keep growing as a developer without sacrificing precious time with them.

Despite the adjustments, I kept my promise to myself to start contributing to open source this year. I’ve now contributed to several projects, ranging from tools I use daily to personal learning experiments. The most notable contribution was to Rama.

Coming from a background of more than 8 years as a Frontend developer, I recently started expanding into Backend development. Rama was the perfect challenge for this transition; it forced me to dive deep into the ‘low-level stuff’ and gave me a solid grounding in core networking concepts.

Implementing a Stunnel-like Feature

One of my main contributions was implementing a stunnel-like feature directly into the Rama CLI. For those unfamiliar, Stunnel is (as the name implies) a secure tunnel. It’s essentially a way to turn any insecure protocol into a secure one by wrapping it in a protective tunnel.

To implement this in Rama, I needed to build two distinct components that work in tandem:

  1. The Exit Node: Acts as the server side of the tunnel. It listens for encrypted traffic, decrypts it, and forwards the data to your actual destination (like a web server or echo server).

  2. The Entry Node: Acts as the client side. It listens for local plain-text traffic, encrypts it, and forwards it to the Exit Node.

How It Works in Practice

One of the coolest things about Rama is that it comes with a built-in “Echo Server” and client request capabilities, which made testing this feature incredibly self-contained. I didn’t need to spin up a separate local Python server or reach for tools like netcat; I could do it all within the Rama ecosystem.

Here is the workflow I designed to test the tunnel:

Step 1: The Destination (Echo Server)

First, I need a target service. Rama has a built-in echo server command that listens for traffic and simply repeats what it receives. The command below starts it on its default port, 8080.

rama serve echo

Step 2: The Exit Node (TLS Termination)

Now, we start the “Exit” proxy. This node acts as the secure gatekeeper. It listens for TLS (encrypted) connections on port 8002. When it receives data, it decrypts it and forwards the plain text to our echo server at :8080.

rama serve stunnel exit \
    --bind 127.0.0.1:8002 \
    --forward 127.0.0.1:8080 \
    --cert server-cert.pem \
    --key server-key.pem

Step 3: The Entry Node (TLS Initiation)

Next, we start the “Entry” proxy on port 8003. This is where the magic happens for the client. It listens for normal, plain-text traffic. When a client connects, the entry node initiates a TLS handshake with the Exit node, verifies the Exit node’s certificate against the CA certificate, encrypts the traffic, and tunnels it securely onward.

rama serve stunnel entry \
    --bind 127.0.0.1:8003 \
    --connect 127.0.0.1:8002 \
    --cacert cacert.pem

Step 4: Verification

Finally, using Rama’s built-in client, I hit the entry point. The request travels through the encrypted tunnel (hence stunnel), reaches the echo server, and the response travels all the way back. The command below sends a client request to port 8003.

rama :8003

Note for Testing

You can skip the manual certificate setup if you just want to try this out quickly. If you omit the certificate arguments and instead pass --insecure to the entry node (replacing --cacert), Rama will use its built-in auto-generated self-signed certificates.

Warning: This method is for local testing only—never use it in production.

Fun fact: When I first started this, I had the whole flow completely backwards in my head. Because I always spun up the server first (to get the ports ready), I kept visualizing the flow from right to left (Server → Client). An important reminder to learn the fundamentals before writing code.

Under the Hood: Architecture & Rust

One of the reasons I was able to implement this feature smoothly was Rama’s architectural philosophy: “Services all the way down.”

Rama is built on a modular design where almost everything is a service. This means you can compose complex behaviors by stacking simple layers on top of each other. You can read more about this design in the Rama Book.

For my implementation, I didn’t have to write a monolithic script that mixes encryption logic with socket handling. Instead, I assembled three distinct building blocks.

Think of it like packing a box for shipping:

  1. The Item (TCP): This is the raw object I want to move (the connection).

  2. The Bubble Wrap (TLS): I wrap the item in a protective layer so it doesn’t get damaged or spied on.

  3. The Shipping Label (Forwarder): Finally, I slap a label on the outside that tells the system where to send it.

Here is the snippet for entry node:

let tcp_service = Forwarder::new(connect_authority).with_connector(
    TlsConnectorLayer::secure()
        .with_connector_data(tls_connector_data)
        .into_layer(TcpConnector::new()),
);

tcp_listener.serve_graceful(guard, tcp_service).await;

In the code, I’m just wrapping the layers: Label( Wrap( Item ) ). It turns a complex networking task into a simple packing list. Rama made all of this simple to implement.
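To make the layered-wrapping idea concrete outside of Rama, here is a dependency-free sketch. These closures are not Rama’s actual types; they just illustrate how each layer wraps an inner handler, mirroring the packing analogy.

```rust
// Each "layer" is a function that wraps an inner handler,
// mirroring the packing analogy: Label( Wrap( Item ) ).
fn tcp_item(data: &str) -> String {
    format!("tcp({data})") // the raw connection (the item)
}

fn tls_wrap(inner: impl Fn(&str) -> String) -> impl Fn(&str) -> String {
    move |data| format!("tls({})", inner(data)) // the bubble wrap
}

fn forward_label(inner: impl Fn(&str) -> String) -> impl Fn(&str) -> String {
    move |data| format!("forward({})", inner(data)) // the shipping label
}

fn main() {
    // Compose the layers outside-in, just like the packing list.
    let service = forward_label(tls_wrap(tcp_item));
    println!("{}", service("hello")); // forward(tls(tcp(hello)))
}
```

The point is that each layer only knows about the handler it wraps, so layers can be added, removed, or reordered without touching the others.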

The Challenge: Bridging the Gap Between Code and Concepts

While the implementation was written in Rust—a language I’ve been enjoying and learning for about a year—the real challenge wasn’t the syntax. I’m comfortable picking up new languages and navigating codebases, especially with the help of modern AI tools like Claude Code.

The hard part was the domain knowledge. Since I am still in the early stages of my backend and low-level networking journey, the concepts felt steep. But that was exactly the point. I wasn’t just there to ship code; I was there to learn the concepts in depth.

At one point, the maintainer even reminded me to just ‘have fun with it.’ And he was right—why not? Despite the complexity, it was genuinely fun to learn while building something cool.

To build a secure tunnel, you can’t just “import TLS.” You have to understand how trust is established. I found myself hitting pause on the coding to deep-dive into:

The Chain of Trust: Understanding why a self-signed certificate works differently than one signed by a Root CA.

System Trust Stores: Learning how operating systems store trusted root certificates and how to tell my program to look at a specific file (my cacert.pem) instead of the default system store.

The Handshake: Visualizing what actually happens on the wire when stunnel entry talks to stunnel exit.

Stunnel map visualization

The Power of Mentorship

Finally, I want to emphasize that this contribution wasn’t a solo journey. I want to give a huge shoutout to Glen DC, the creator of Rama.

The code review process was one of the highlights of this experience. It wasn’t just about spotting syntax errors; it was true mentorship. Glen took the time to review my implementation in depth, offering feedback that helped me refine not just the feature, but my approach to Rust programming in general. Having a maintainer who is willing to guide contributors makes all the difference in open source.

Wrapping Up

If you are on the fence about contributing to open source—especially if you are juggling a busy life or a growing family like I am—I highly encourage you to go for it. It’s not just about the commits on your GitHub profile; it’s about the fundamental knowledge you gain and the connections you build.

If you are interested in Rust, proxies, or networking, definitely check out Rama. It’s a powerful tool, and a great place to learn.

]]>
<![CDATA[Rust Threading Basics]]> https://ikorason.dev/rust-threading-basics https://ikorason.dev/rust-threading-basics Thu, 06 Mar 2025 00:00:00 GMT It’s been a while since I’ve posted anything here; I’ve been busy with work, school, family, and life in general. But learning has been great recently. I’ve been digging deeper into systems programming and backend programming at the same time, and it has opened my eyes to a lot of things I didn’t know about. This year I also set myself a goal to start contributing to open source, specifically to projects written in Rust. I’ve been learning Rust for a while now, and I think I’m ready to start; what better way to learn than to actually do it, right? But this post is not about my journey into open source. It’s about threading in Rust: a very short post explaining the basics.

So the other day I was practicing a problem, Compute Spiral Order Traversal of a Matrix, and I thought: if the given matrix is large, it would be better to compute the spiral order traversal in parallel. So I thought of using threads. I’d never used threads in Rust before, so it seemed like a good opportunity to learn.

What is threading?

In computer science, threading is a technique that allows multiple threads of execution to run simultaneously within a single process. Modern computers have multiple cores, and threading allows us to take advantage of them to run multiple tasks concurrently. This helps with heavy computational tasks, I/O-bound tasks, and anything that can be parallelized. I won’t get into the details here, but you can find a great explanation of threading here.
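As a minimal illustration of the definition above (a sketch, not part of the matrix example that follows), here are two threads of execution running inside one process, each doing independent work:

```rust
use std::thread;

fn main() {
    // Two threads run concurrently within the same process.
    let summer = thread::spawn(|| (1..=4).sum::<u32>());
    let multiplier = thread::spawn(|| (1..=4).product::<u32>());

    // join() waits for each thread and hands back its result.
    let sum = summer.join().unwrap();
    let product = multiplier.join().unwrap();
    println!("sum = {sum}, product = {product}"); // sum = 10, product = 24
}
```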

Real world use case

Let’s say you have a drawing application, and you select a tool to draw a circle. When you start drawing, the application computes the points of the circle and draws it on the screen. If your application is single-threaded, it will freeze until the circle is drawn, because it is busy computing the points and can’t do anything else. But with threading, you can start a new thread to compute the points while the main thread continues handling user input, updating the screen, and so on. This way the application won’t freeze and the user can keep interacting with it.

The problem

For simple illustrating purposes, let’s say we want to generate a large matrix in our application which will be used for some heavy computation. We want to generate this matrix in parallel using threads to take advantage of multiple cores in the CPU. While our matrix is being generated we can do some other work in the main thread, but let’s do this first without threading.

Consider the following code:

fn generate_large_matrix(rows: usize, cols: usize) -> Vec<Vec<usize>> {
    println!("Generating large matrix...");

    let mut matrix: Vec<Vec<usize>> = Vec::with_capacity(rows);
    let mut count = 1;

    for _ in 0..rows {
        let mut row = Vec::with_capacity(cols);
        for _ in 0..cols {
            row.push(count);
            count += 1;
        }
        matrix.push(row);
    }

    matrix
}

Here I just implemented a simple function that generates a matrix of size rows x cols, filled with numbers starting from 1. And yes, you could generate this more elegantly using iterators or other methods, but this is just for demonstration purposes.
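Since the post mentions that iterators could do this better, here is one hedged sketch of an iterator-based equivalent; it produces the same output without a mutable counter:

```rust
// Iterator-based equivalent of generate_large_matrix: cell (r, c)
// holds r * cols + c + 1, i.e. numbers counting up row by row from 1.
fn generate_large_matrix(rows: usize, cols: usize) -> Vec<Vec<usize>> {
    (0..rows)
        .map(|r| (0..cols).map(|c| r * cols + c + 1).collect())
        .collect()
}

fn main() {
    let matrix = generate_large_matrix(2, 3);
    println!("{matrix:?}"); // [[1, 2, 3], [4, 5, 6]]
}
```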

Now we can use this in our main function to generate a large matrix.

fn main() {
    generate_large_matrix(20_000, 20_000);
    do_some_other_work();
}

fn do_some_other_work() {
    println!("Doing some other work");
}

I also added a function do_some_other_work() which just prints Doing some other work.

Now if you cargo run this code, you will see that Generating large matrix... is printed first and then Doing some other work. This is because the main thread is blocked until generate_large_matrix() is done.

Blocking main thread

If the matrix is large, it will take some time to generate, and the user will see the application freeze until the matrix is generated. Now imagine your whole application is blocked because of this, this is where threading comes in.

Threading in Rust

First we need to spawn a new thread using the std::thread::spawn() function. This function takes a closure as an argument containing the code that will run in the new thread. spawn returns a JoinHandle so that we can keep track of the thread we spawned. We then call join on the JoinHandle, which blocks the current thread until the spawned thread is finished. If you have multiple threads running, you can call the is_finished method on a JoinHandle to check whether that thread is done, but for now we will just use join.

use std::thread::spawn;

fn main() {
    let handle = spawn(|| {
        generate_large_matrix(20_000, 20_000);
    });

    do_some_other_work();
    handle.join().unwrap();
}

If you do cargo run now, you can see that Doing some other work is printed before the matrix is generated. As you can see from the gif below, the main thread is not blocked and can do other work while the matrix is being generated in the new thread. There are two threads running: the main thread and the one we spawned.
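Since is_finished was mentioned earlier, here is a small sketch of polling a handle instead of blocking on join right away (the worker and timings here are placeholders, not the matrix example):

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // Spawn a worker that takes a little while to finish.
    let handle = thread::spawn(|| {
        thread::sleep(Duration::from_millis(50));
        42
    });

    // Poll is_finished() so the main thread can keep doing other work.
    while !handle.is_finished() {
        println!("matrix still generating, doing other work...");
        thread::sleep(Duration::from_millis(10));
    }

    // The worker is done, so join() returns immediately with its result.
    let result = handle.join().unwrap();
    println!("worker returned {result}");
}
```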

Non-blocking main thread

Conclusion

In this post I covered the basics of threading in Rust: what it is and how I learned it. I showed a simple example of generating a large matrix in another thread while the main thread does some other work. Here do_some_other_work() is a simple println! statement, but in a real application it could be another heavy computation or an I/O-bound task. In that case the threads may compete for CPU time, so more advanced threading strategies are needed; I will probably write a post about that once I’ve learned it. For now, I hope you understand the basics of threading in Rust.

]]>
<![CDATA[Image Resize in Azure Functions using Rust]]> https://ikorason.dev/image-resize-in-azure-functions-using-rust https://ikorason.dev/image-resize-in-azure-functions-using-rust Mon, 06 May 2024 00:00:00 GMT I’ve known about serverless functions for a while now, but I’ve never actually used them. I’ve always been curious about how they work and how they can be used in real-world scenarios.

I was browsing through this article on creating Azure Functions using custom handlers with languages like Go or Rust, and I was intrigued by the idea of using Rust with Azure Functions. I tried to find a tutorial combining Rust and Azure Functions but couldn’t find anything. Then I found this tutorial, a YouTube video by Mohamad Lawand, where he builds simple image processing with Azure Functions and Service Bus using .NET Core. So I was excited to build the same thing in Rust.

At first, I thought it would be difficult to use Rust with Azure services like Service Bus and Blob Storage, but after doing some research, I found that there is an unofficial Rust Azure SDK that can be used to interact with Azure services. So I decided to create an image-resizing function using Azure Functions and Rust.

The basic idea is simple: first, create an API endpoint that accepts a POST request, uploads the image to Azure Blob Storage, and sends a message to Azure Service Bus. Then an Azure Function listens to the Service Bus via the Service Bus Queue trigger, resizes the image, and saves it back to Blob Storage.

image resize plan

What are Azure Functions?

Azure Functions is a serverless compute service that lets you run event-triggered code without having to explicitly provision or manage infrastructure. You can use Azure Functions to build web APIs, respond to database changes, process IoT data, and more.

Pre-requisites

This tutorial assumes that you have a basic understanding of Rust programming language and Azure Functions. If you are new to Rust, I recommend you to read the official Rust Book.

Also make sure you have an Azure account and the Azure CLI installed on your machine; if not, you can create a free account on Azure and install the Azure CLI from the links above.

Create API endpoint to upload image

First things first, we need to create a simple API endpoint that uploads the image to Azure Blob Storage and sends a message to Azure Service Bus. We will use warp for the API endpoint. Let’s create a directory called image-resize, create an api directory inside it, and initialize a new Rust project using cargo:

mkdir image-resize && cd image-resize && mkdir api && cd api
cargo init

This will create a new Rust project (cargo names it after the api directory; the Cargo.toml below renames the package to image-resize). Now, let’s add the required dependencies to the Cargo.toml file:

[package]
name = "image-resize"
version = "0.1.0"
edition = "2021"

[dependencies]
warp = "0.3"
tokio = { version = "1.12", features = ["macros", "fs", "rt-multi-thread"] }
futures = { version = "0.3", default-features = false }
bytes = "1.0"
azure_core = "0.20.0"
azure_storage = "0.20.0"
azure_storage_blobs = "0.20.0"
azure_messaging_servicebus = "0.20.0"
serde = "1.0.200"
serde_json = "1.0"

Now let’s add the code to the main.rs file:

// api/src/main.rs
use warp::{
    http::StatusCode,
    multipart::FormData,
    Filter, Rejection, Reply,
};
use std::{convert::Infallible};

#[tokio::main]
async fn main() {
    let upload_route = warp::path("upload")
        .and(warp::post())
        .and(warp::multipart::form().max_length(5 * 1024 * 1024)) // Max image size: 5MB
        .and_then(upload_file);

    let routes = upload_route
        .recover(handle_rejection);

    println!("Server started at http://localhost:3030");
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}

async fn upload_file(form: FormData) -> Result<impl Reply, Rejection> {
    Ok(format!("Hello, world!"))
}

async fn handle_rejection(err: Rejection) -> std::result::Result<impl Reply, Infallible> {
    let (code, message) = if err.is_not_found() {
        (StatusCode::NOT_FOUND, "Not Found".to_string())
    } else if err.find::<warp::reject::PayloadTooLarge>().is_some() {
        (StatusCode::BAD_REQUEST, "Payload too large".to_string())
    } else {
        eprintln!("unhandled error: {:?}", err);
        (
            StatusCode::INTERNAL_SERVER_ERROR,
            "Internal Server Error".to_string(),
        )
    };

    Ok(warp::reply::with_status(message, code))
}

This will set up a basic API endpoint at http://localhost:3030/upload which will accept a POST request with a multipart/form-data containing the image file.

You can now try to run the API using the following command:

cargo run -r

You then should see a message Server started at http://localhost:3030 in the console. Now you can test the API by sending a POST request to http://localhost:3030/upload with an image file.

curl -X POST -F 'file=@/path/to/image.jpeg' http://localhost:3030/upload

It should return Hello, world! for now.

Now let’s add the code to read the image file from the FormData and store it in a Vec<u8> which we will use to upload the image to the Azure Blob Storage.

// api/src/main.rs
// highlight-start
use bytes::BufMut;
use futures::TryStreamExt;
// highlight-end
use warp::{
    http::StatusCode,
    // highlight-start
    multipart::{FormData, Part},
    // highlight-end
    Filter, Rejection, Reply,
};
use std::{convert::Infallible};

async fn upload_file(form: FormData) -> Result<impl Reply, Rejection> {
    // highlight-start
    let uploaded_files: Vec<_> = form
        .and_then(|mut part: Part| async move {
            let mut bytes: Vec<u8> = Vec::new();

            // read the part stream
            while let Some(content) = part.data().await {
                let content = content.unwrap();
                bytes.put(content);
            }

            // return the part name, filename and bytes as a tuple
            Ok((
                part.name().to_string(),
                part.filename().unwrap().to_string(),
                String::from_utf8_lossy(&*bytes).to_string(),
            ))
        })
        .try_collect()
        .await
        .map_err(|_| warp::reject::reject())?;

    Ok(format!("Uploaded files: {:?}", uploaded_files))
    // highlight-end
}

Next, let’s implement the code to upload the image to Azure Blob Storage. We will use the azure_storage and azure_storage_blobs crates for this purpose. Add the following code to the main.rs file:

// api/src/main.rs
// highlight-start
use azure_storage::StorageCredentials;
use azure_storage_blobs::prelude::ClientBuilder;
// highlight-end
use bytes::BufMut;
use futures::TryStreamExt;
use warp::{
    http::StatusCode,
    multipart::{FormData, Part},
    Filter, Rejection, Reply,
};
// highlight-start
use std::{convert::Infallible, env};
// highlight-end

async fn upload_file(form: FormData) -> Result<impl Reply, Rejection> {
    let uploaded_files: Vec<_> = form
        .and_then(|mut part: Part| async move {
            let mut bytes: Vec<u8> = Vec::new();

            // read the part stream
            while let Some(content) = part.data().await {
                let content = content.unwrap();
                bytes.put(content);
            }

            // highlight-start
            if !bytes.is_empty() {
                // Azure Blob Storage credentials
                let storage_account = env::var("AZURE_STORAGE_ACCOUNT").expect("Missing AZURE_STORAGE_ACCOUNT env var");
                let storage_access_key = env::var("AZURE_STORAGE_ACCESS_KEY").expect("Missing AZURE_STORAGE_ACCESS_KEY env var");
                let container_name = env::var("AZURE_STORAGE_CONTAINER").expect("Missing AZURE_STORAGE_CONTAINER env var");
                let blob_name = part.filename().unwrap().to_string();

                // create Azure Blob Storage client
                let storage_credentials = StorageCredentials::access_key(storage_account.clone(), storage_access_key);
                let blob_client = ClientBuilder::new(storage_account, storage_credentials).blob_client(&container_name, blob_name);

                // upload file to Azure Blob Storage
                match blob_client
                    .put_block_blob(bytes.clone())
                    .content_type("image/jpeg")
                    .await {
                        Ok(_) => println!("Blob uploaded successfully"),
                        Err(e) => println!("Error uploading blob: {:?}", e),
                    }

                println!("Uploaded file url: {}", blob_client.url().expect("Failed to get blob url"));
            }
            // highlight-end

            // return the part name, filename and bytes as a tuple
            Ok((
                part.name().to_string(),
                part.filename().unwrap().to_string(),
                String::from_utf8_lossy(&*bytes).to_string(),
            ))
        })
        .try_collect()
        .await
        .map_err(|_| warp::reject::reject())?;

    Ok(format!("Uploaded files: {:?}", uploaded_files))
}

Make sure to have the following environment variables set in your system:

export AZURE_STORAGE_ACCOUNT="your-storage-account-name"
export AZURE_STORAGE_ACCESS_KEY="your-storage-access-key"
export AZURE_STORAGE_CONTAINER="your-container-name"

Also make sure to create a blob container in the Azure Portal and set the container name in the environment variable AZURE_STORAGE_CONTAINER. You can also find the access key and storage account name in the Azure Portal.

Now let’s create a function to send a message to the Azure Service Bus. Add the following code to the main.rs file:

// api/src/main.rs
// highlight-start
use azure_messaging_servicebus::service_bus::QueueClient;
use serde::{Deserialize, Serialize};
// highlight-end
... existing imports ...

// highlight-start
#[derive(Serialize, Deserialize, Debug)]
struct Image {
    filename: String,
    image_container: String,
}
// highlight-end

async fn upload_file(form: FormData) -> Result<impl Reply, Rejection> {
    let uploaded_files: Vec<_> = form
        .and_then(|mut part: Part| async move {
            ... existing code ...

            if !bytes.is_empty() {
              ... existing code ...

              // highlight-start
              let image = Image {
                  filename: part.filename().unwrap().to_string(),
                  image_container: container_name,
              };

              send_message_to_queue(image).await;
              // highlight-end
            }

            // return the part name, filename and bytes as a tuple
            Ok((
                part.name().to_string(),
                part.filename().unwrap().to_string(),
                String::from_utf8_lossy(&*bytes).to_string(),
            ))
        })
        .try_collect()
        .await
        .map_err(|_| warp::reject::reject())?;

    Ok(format!("Uploaded files: {:?}", uploaded_files))
}

// highlight-start
async fn send_message_to_queue(image: Image) {
    let service_bus_namespace = env::var("AZURE_SERVICE_BUS_NAMESPACE").expect("Please set AZURE_SERVICE_BUS_NAMESPACE env variable first!");
    let queue_name = env::var("AZURE_QUEUE_NAME").expect("Please set AZURE_QUEUE_NAME env variable first!");
    let policy_name = env::var("AZURE_POLICY_NAME").expect("Please set AZURE_POLICY_NAME env variable first!");
    let policy_key = env::var("AZURE_POLICY_KEY").expect("Please set AZURE_POLICY_KEY env variable first!");

    let http_client = azure_core::new_http_client();

    let client = QueueClient::new(
        http_client,
        service_bus_namespace,
        queue_name,
        policy_name,
        policy_key
    ).expect("Failed to create client");

    let message_to_send = serde_json::to_string(&image).expect("Failed to serialize image");

    client
        .send_message(message_to_send.as_str())
        .await
        .expect("Failed to send message");

    println!("Message sent to Azure Service Bus queue successfully!");
    println!("Message: {}", message_to_send);
}
// highlight-end

Here we are sending a message to the Azure Service Bus queue with the image filename and the container name where the image is stored, so we can later use this information to get the correct image from the blob container and resize the image and save it back to the Blob Storage.

Make sure to create an Azure Service Bus namespace and a queue in your Azure Portal, and also make sure all the required environment variables are set in your system:

export AZURE_SERVICE_BUS_NAMESPACE="your-service-bus-namespace"
export AZURE_QUEUE_NAME="your-queue-name"
export AZURE_POLICY_NAME="your-policy-name"
export AZURE_POLICY_KEY="your-policy-key"

Now if you run the API and upload an image, if everything is ok you should see the following output:

Blob uploaded successfully
Uploaded file url: https://storage_account_name.blob.core.windows.net/container_name/filename.jpeg
Message sent to Azure Service Bus queue successfully!
Message: {"filename":"filename.jpeg","image_container":"container_name"}

Go ahead and check Azure Blob Storage and Azure Service Bus to confirm that the image was uploaded and the message was sent to the queue.

If everything worked, we can move on to the next step: creating an Azure Function that listens to the Service Bus queue and resizes the image.

Create Azure Function to resize image

Creating an Azure Function can be done through the Azure Portal or the Azure CLI; there is also an even easier option using the VS Code extension. In this tutorial I will use the command line, so make sure you have the Azure CLI and the Azure Functions Core Tools (the func command used below) installed on your machine.

Navigate to the image-resize directory and run the following commands to create a new Rust project for the function:

mkdir functions && cd functions
cargo init

This will create a new Rust project in the functions directory. Now let’s add the required dependencies to the Cargo.toml file:

[package]
name = "handler" # Azure Function binary name
version = "0.1.0"
edition = "2021"

[dependencies]
warp = "0.3"
tokio = { version = "1.12", features = ["macros", "fs", "rt-multi-thread"] }
futures = { version = "0.3", default-features = false }
serde = "1.0.200"
serde_json = "1.0"
azure_core = "0.20.0"
azure_storage = "0.20.0"
azure_storage_blobs = "0.20.0"
azure_messaging_servicebus = "0.20.0"
tracing = "0.1.40"
image = "0.25.1"

Now let’s create the host.json and local.settings.json files; both are required by Azure Functions. Note that defaultExecutablePath in host.json must match the binary name, handler in our case.

// host.json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  },
  "customHandler": {
    "description": {
      "defaultExecutablePath": "handler",
      "workingDirectory": "",
      "arguments": []
    }
  },
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}

// local.settings.json (make sure this file is in .gitignore)
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "custom",
    "servicebusnamespace_SERVICEBUS": "Endpoint=sb://servicebusnamespace.servicebus.windows.net/;SharedAccessKeyName=SharedAccessKeyName;SharedAccessKey=SharedAccessKey"
  }
}

Replace servicebusnamespace, SharedAccessKeyName, and SharedAccessKey in the connection string above with your own Service Bus namespace and shared access key.

Now let’s create a function using the func new command:

func new --template "Azure Service Bus Queue trigger" --name "process_image_resize"

This will create a new folder called process_image_resize containing a function.json file. Modify function.json as follows, replacing queue_name with your queue name and servicebusnamespace with your Service Bus namespace:

{
  "bindings": [
    {
      "name": "mySbMsg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "queue_name",
      "connection": "servicebusnamespace_SERVICEBUS"
    }
  ]
}

Now let’s add the following code to the src/main.rs file:

use azure_messaging_servicebus::service_bus::QueueClient;
use azure_storage::StorageCredentials;
use azure_storage_blobs::prelude::BlobServiceClient;
use serde::{Deserialize, Serialize};
use futures::StreamExt;
use tracing::trace;
use std::{env, io::Cursor};

#[derive(Serialize, Deserialize, Debug)]
struct ImageNode {
    filename: String,
    image_container: String,
}

#[tokio::main]
async fn main() -> azure_core::Result<()> {
    let service_bus_namespace = env::var("AZURE_SERVICE_BUS_NAMESPACE").expect("Please set AZURE_SERVICE_BUS_NAMESPACE env variable first!");
    let queue_name = env::var("AZURE_QUEUE_NAME").expect("Please set AZURE_QUEUE_NAME env variable first!");
    let policy_name = env::var("AZURE_POLICY_NAME").expect("Please set AZURE_POLICY_NAME env variable first!");
    let policy_key = env::var("AZURE_POLICY_KEY").expect("Please set AZURE_POLICY_KEY env variable first!");

    let http_client = azure_core::new_http_client();

    let client = QueueClient::new(
        http_client,
        service_bus_namespace,
        queue_name,
        policy_name,
        policy_key
    ).expect("Failed to create client");

    let received_message = client
        .receive_and_delete_message()
        .await
        .expect("Failed to receive message");

    if received_message.is_empty() {
        println!("No message received");
        return Ok(())
    }

    println!("Received message: {:?}", received_message);

    // grab the image from the message
    match serde_json::from_str::<ImageNode>(&received_message) {
        Ok(image) => {
            println!("Deserialized image: {:?}", image);

            // Azure Blob Storage credentials
            let storage_account = env::var("AZURE_STORAGE_ACCOUNT").expect("Missing AZURE_STORAGE_ACCOUNT env var");
            let storage_access_key = env::var("AZURE_STORAGE_ACCESS_KEY").expect("Missing AZURE_STORAGE_ACCESS_KEY env var");
            let container_name = image.image_container;

            let blob_name = &*image.filename;

            // create Azure Blob Storage client
            let storage_credentials = StorageCredentials::access_key(storage_account.clone(), storage_access_key);
            let service_client = BlobServiceClient::new(storage_account, storage_credentials);
            let blob_client = service_client
                .container_client(&container_name)
                .blob_client(blob_name);

            trace!("Requesting blob");

            let mut bytes: Vec<u8> = Vec::new();
            // stream a blob, 8KB at a time
            let mut stream = blob_client.get().chunk_size(0x2000u64).into_stream();
            while let Some(value) = stream.next().await {
                let data = value?.data.collect().await?;
                println!("received {:?} bytes", data.len());
                bytes.extend(&data);
            }

            // load the image from the bytes
            let img = image::load_from_memory(&bytes).expect("Failed to load image");
            // resize the image
            let resized_img = img.resize(100, 100, image::imageops::FilterType::Triangle);
            // write the resized image to the buffer
            let mut resized_bytes: Vec<u8> = Vec::new();
            resized_img.write_to(&mut Cursor::new(&mut resized_bytes), image::ImageFormat::Jpeg).expect("Failed to write image");

            // change the filename to include the word "resized"
            let new_blob_name = format!("resized_{}", blob_name);

            let blob_client = service_client
                .container_client(&container_name)
                .blob_client(&new_blob_name);

            blob_client.put_block_blob(resized_bytes)
                .content_type("image/jpeg")
                .await
                .expect("Failed to upload blob");

            println!("Resized image uploaded successfully");

        },
        Err(e) => {
            println!("Failed to deserialize image: {:?}", e);
            return Ok(())
        }
    };

    Ok(())
}

The above code receives a message from the Azure Service Bus queue, downloads the referenced image from Azure Blob Storage, resizes it, and saves it back to Blob Storage with the filename prefixed with resized_.

Now, with everything in place, let’s start our API server and upload an image:

# image-resize/api
cargo run -r

# upload an image
curl -X POST -F 'file=@/path/to/image.jpeg' http://localhost:3030/upload

Now let’s start the Azure Function by running the following command:

# image-resize/functions
cargo build -r && cp ./target/release/handler . && func start

If everything is successful, you should see the following output:

Received message: "{\"filename\":\"example.jpeg\",\"image_container\":\"image_container_name\"}"
Deserialized image: ImageNode { filename: "example.jpeg", image_container: "image_container_name" }
received 8192 bytes
received 8192 bytes
...
...
Resized image uploaded successfully

If you go back to the Azure Blob Storage, you should see the resized image with the filename prefixed with resized_.

Conclusion

This is just a simple example of how you can use Azure Functions with Rust to build a small image processing pipeline; there are many other possibilities and use cases for the combination. Overall it was a fun experience, and I learned a lot about both Azure Functions and Rust in the process.

Even though the Azure SDK for Rust is still unofficial, it is already solid and easy to use, and I hope Microsoft will officially support it in the future.

]]>
<![CDATA[Recursion: A Personal Journey Through the Stack]]> https://ikorason.dev/recursion-a-personal-journey-through-the-stack https://ikorason.dev/recursion-a-personal-journey-through-the-stack Sat, 16 Mar 2024 00:00:00 GMT Recursion

“To understand recursion, you must first understand recursion.” Yes, the meme is funny, I know. Recursion is a powerful tool, but it can be difficult to understand. I’ve been there, and I’m here to share my journey with you.

Think of recursion like looking into a mirror that reflects into another mirror, or a dream inside another dream (Inception?). It feels like an infinite loop, but it isn’t; it’s a paradox, and it’s a challenge to wrap your head around.

My initial understanding of recursion was quite limited. I knew that it involved a function calling itself, but I didn’t understand how it worked. I didn’t understand how the function could call itself without causing an infinite loop, and I didn’t understand how the function could return a value when it hadn’t finished running yet. I was stuck in a loop of confusion, and I needed to break out of it.

The important part to understand is that a recursive function must have a base case to stop the recursion. Without a base case, the function will keep calling itself forever, and the program will crash. The base case is the exit condition that stops the recursion from continuing indefinitely.

gru learn recursion meme

Even though I understood the concept of the base case, I still didn’t understand how the function could return a value when it hadn’t finished running yet. I didn’t understand how the computer kept track of all the function calls and how it knew when to stop. I was missing a crucial piece of the puzzle, and I needed to find it.

Stack

The stack is a fundamental data structure that operates on the principle of Last In, First Out (LIFO): the last item pushed onto the stack is the first one popped off.

The stack is essential to understanding how recursion works. When a function calls itself, the computer uses a stack to keep track of the function calls. Each time a function is called, the computer pushes information about that call onto the stack. When the function returns a value, the computer pops that information off the stack and uses it to continue running the program. This process continues until the stack is empty.

I will go over more in depth about the stack data structure in a future post, but for now, let’s focus on how the stack helps us understand recursion.

Visualizing Recursion with the Stack

I am a visual learner, and I found that drawing out the stack helped me grasp the concept more intuitively. Let’s consider a simple example: calculating the factorial of a number. The factorial of a number is the product of all positive integers less than or equal to it. For example, the factorial of 5 is 5 × 4 × 3 × 2 × 1 = 120.

Here’s a simple recursive function to calculate the factorial of a number:

int factorial(int n) {
  if (n == 0) { // base case
    return 1;
  }
  return n * factorial(n - 1);
}

Now let’s visualize how the stack works when we call factorial(5):

recursion post factorial stack 1

When we first call factorial(5), it will push factorial(5) onto the stack. Then, it will call factorial(4) and push that onto the stack. This process will continue until factorial(0) is called. At this point, the stack will look like this:

recursion post factorial stack 2

The last call to factorial(0) will return 1, and then the stack will start to unwind. Each call will return a value and pop itself off the stack. The stack will continue to unwind until it is empty.

recursion post factorial stack 3

This visualization helped me understand how the stack keeps track of the function calls and how the return values are combined into the final result. It was an “aha” moment for me, and it opened the door to a deeper understanding of recursion.

I hope you find this post helpful in your journey to understand recursion. We all have different ways to understand complex concepts, and I found that visualizing the stack helped me grasp the concept more intuitively. If you’re struggling with recursion, I encourage you to try drawing out the stack and see if it helps you too. Happy coding! 🚀

]]>