Queues for Kafka (KIP-932): The Bridge between Event Streaming and Queuing

Mon, 16 Mar 2026

For years, architects and developers have adopted Apache Kafka as the standard for event streaming and distributed logs, yet continued to rely on systems like RabbitMQ for traditional queuing.

This separation has never been ideological, but architectural. In Kafka’s classic consumer-group model there is in fact a fundamental constraint: the 1:1 mapping between partition and active consumer.

If a topic has three partitions, for example, up to three consumers can cooperatively consume messages; a fourth remains idle. This model guarantees partial ordering and efficient offset management, but it introduces a structural limit on concurrency and operational flexibility.

With KIP-932, introduced in preview in Apache Kafka 4.0 and officially released in Apache Kafka 4.2, this paradigm changes radically. It introduces Share Groups, a model that brings the concept of a queue natively into Kafka, decoupling message processing from storage and overcoming some of the historical limits of the distributed log as originally conceived in Kafka.


The Limits of the Conventional Consumption Model

To understand the value of Share Groups, it is necessary to analyze the criticalities of Kafka’s classic model based on Consumer Groups.

Maximum Level of Parallelism

In the model based on Kafka Consumer Groups, the maximum parallelism in message consumption is limited by the number of partitions. This can force preventive over-partitioning: companies create topics with hundreds of partitions merely to absorb load peaks (for example, user or order spikes during Black Friday), maintaining an oversized infrastructure for the rest of the year.
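The parallelism ceiling is easy to see with a toy partition assignment, a simplified stand-in for what the group coordinator's assignor does (class and method names here are illustrative, not Kafka's API):

```java
import java.util.*;

public class ConsumerGroupAssignment {
    // Toy round-robin assignment: each partition goes to exactly one consumer.
    static Map<String, List<Integer>> assign(List<String> consumers, int partitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        consumers.forEach(c -> assignment.put(c, new ArrayList<>()));
        for (int p = 0; p < partitions; p++) {
            assignment.get(consumers.get(p % consumers.size())).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 3 partitions, 4 consumers: "c4" ends up with no partitions, i.e. idle.
        System.out.println(assign(List.of("c1", "c2", "c3", "c4"), 3));
    }
}
```

However many consumers you add beyond the partition count, the extra members receive nothing, which is exactly the constraint Share Groups remove.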

Head of Line (HOL) Blocking

A single consumer within a Consumer Group is assigned an entire partition and messages are processed in sequential order.

If a single message requires a call to a slow external system, performs a computationally heavy task or fails repeatedly, the entire partition remains blocked. This phenomenon is known as Head of Line (HOL) blocking. The result is a pipeline that stops because of one problematic event.

The Cost of Rebalancing

Rebalancing is Kafka’s fault-tolerance mechanism. However, especially in earlier versions, it could become a highly invasive event: during partition reassignment, consumption was interrupted, increasing latency and generating instability during peak periods. Recent versions of Kafka introduced some optimizations but still have not eliminated the problem.


Kafka Share Groups: Record-Level Assignment

The innovation of KIP-932 lies in moving from the “one consumer per partition” logic to “multiple consumers cooperate on the same partition”. It is no longer the partition that is assigned exclusively, but individual records (or batches of records). This allows scaling the number of consumers beyond the number of partitions, eliminating the historical concurrency constraint.

How It Works: The Share-Partition Leader

In this new architecture, state management is no longer tied to a simple sequential offset. A new component, the Share-Partition Leader, is introduced, co-located with the physical partition leader. Its role is to manage the state of the so-called In-Flight Records, that is, messages currently being processed.

To keep performance high, Kafka uses a “sliding window” defined by two new markers:

  • SPSO (Share-Partition Start Offset): the offset of the first message not yet acknowledged.
  • SPEO (Share-Partition End Offset): the upper limit of messages available to be fetched by the Share Group.

This approach lets Kafka handle huge topics without needing to keep in memory the state of every single record in the topic’s entire history.
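The bounded-state property described above can be modeled with a minimal, hypothetical sketch (this is not the broker's actual implementation): the SPSO only slides forward past a contiguous prefix of acknowledged offsets, so per-record state is needed only for the window between SPSO and SPEO.

```java
import java.util.*;

// Simplified sketch of a share-partition sliding window: SPSO advances past
// the contiguous prefix of acknowledged offsets; everything before SPSO and
// after SPEO needs no per-record state.
public class ShareWindow {
    long spso = 0;                 // first offset not yet acknowledged
    long speo = 0;                 // upper bound of offsets made fetchable
    final Set<Long> acked = new HashSet<>();

    void fetchUpTo(long offset) { speo = Math.max(speo, offset); }

    void acknowledge(long offset) {
        if (offset < spso || offset >= speo) return;  // outside the window
        acked.add(offset);
        while (acked.remove(spso)) spso++;            // slide past acked prefix
    }

    long inFlightWindow() { return speo - spso; }
}
```

Acknowledging offsets 1 and 2 before 0 leaves the SPSO parked at 0; as soon as 0 is acknowledged, the SPSO jumps straight to 3 and the tracked window shrinks accordingly.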


Record Lifecycle and Resilience

With KIP-932, every record has an associated state evolving through a state machine:

  1. Available: the record is in the log and ready to be consumed.
  2. Acquired: the record has been sent to a consumer and “locked” for a defined duration (lock duration).
  3. Acknowledged: the consumer confirms successful processing.
  4. Archived: if a record fails repeatedly or the lock duration expires too many times, it is automatically archived.

This logic natively integrates management of Poisonous Messages, preventing a single faulty record from blocking the system indefinitely and improving the overall robustness of the application.
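The lifecycle above can be sketched as a small state machine. The delivery-count threshold here is a made-up constant standing in for broker-side configuration, and the class is an illustration rather than Kafka internals:

```java
// Illustrative sketch of the KIP-932 in-flight record state machine.
// MAX_DELIVERIES is a hypothetical threshold, not a real Kafka config value.
public class InFlightRecord {
    enum State { AVAILABLE, ACQUIRED, ACKNOWLEDGED, ARCHIVED }

    static final int MAX_DELIVERIES = 3;
    State state = State.AVAILABLE;
    int deliveryCount = 0;

    void acquire() {
        if (state != State.AVAILABLE) throw new IllegalStateException();
        deliveryCount++;
        state = State.ACQUIRED;
    }

    void acknowledge() {
        if (state != State.ACQUIRED) throw new IllegalStateException();
        state = State.ACKNOWLEDGED;
    }

    // Lock expiry or explicit release: the record becomes available again,
    // unless it has already failed too many times, in which case it is archived.
    void release() {
        if (state != State.ACQUIRED) throw new IllegalStateException();
        state = (deliveryCount >= MAX_DELIVERIES) ? State.ARCHIVED : State.AVAILABLE;
    }
}
```

A record that is acquired and released three times ends up ARCHIVED, which is precisely how a poisonous message is taken out of circulation without blocking its neighbors.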

Rebalancing Without Interruptions

Unlike classic Consumer Groups, rebalancing in Share Groups is much less invasive. Since records are not “owned” exclusively through partition assignment, adding or removing a consumer does not require a full stop of processing: the system simply continues distributing available records to active members.


When to Use Share Groups

Despite the obvious advantages in scalability and flexibility, adopting Share Groups requires careful evaluation of some fundamental architectural trade-offs. The first and most evident is the loss of partial ordering. With Share Groups, records may be processed out of sequence because of inherent concurrency among multiple consumers or retry mechanisms. If the application logic depends strictly on per-partition message sequencing, this model is not the correct choice.

Another significant limitation concerns network cost optimization: Follower Fetching is currently not supported. The state of locks (“Acquired”) resides exclusively in the memory of the Share-Partition Leader. Replicating this transient state in real-time on followers is a complex challenge that, for now, prevents use of Rack Aware Fetching. In multi-zone environments, this can lead to higher network costs compared to the traditional model.

Finally, one must consider the absence of Exactly Once Semantics (EOS). Although it is possible to read transactionally written records, the current protocol does not include the ability to acknowledge message delivery within an atomic transaction. If the application requires strict end-to-end transactional guarantees, the classical consumer group remains the reference standard.

There are, however, some practical scenarios where this technology can make a difference:

  • Long Running Tasks: dispatching complex tasks, like heavy data transformations on single events, without risking stalling other messages due to blocked partitions.
  • Cloud Cost Optimization: on platforms like Confluent Cloud, partitions have economic weight. With Share Groups, we can scale compute (consumers) independently from storage (partitions), handling message spikes without having to over-provision the entire Kafka infrastructure.

Conclusion

The introduction of Share Groups with KIP-932 marks the overcoming of the historical boundary between streaming and queuing in Apache Kafka. This evolution allows companies to finally decouple computing power from storage, eliminating critical bottlenecks such as Head of Line blocking and optimizing infrastructural costs tied to over-partitioning.

However, adopting this model requires a strategic analysis of trade-offs, especially regarding the loss of ordering and the absence of Exactly-Once semantics. This is where Bitrock’s expertise becomes decisive: we don’t limit ourselves to technical implementation, but guide companies in an end-to-end digital transformation. Thanks to our deep knowledge of the Kafka ecosystem, we help partners balance innovation and architectural solidity, ensuring our clients obtain a concrete and sustainable competitive advantage.

Do you want to discover how Share Groups can optimize your architecture?

Contact us at Bitrock for a dedicated technical consultancy.


Main Author: Simone Esposito, Software Architect & Team Lead @ Bitrock

Let’s Rock Product Design with Federico Bianchi

Thu, 12 Mar 2026

In the current technological environment, innovation is no longer assessed only in terms of computing power or coding, but rather in the ability of a digital product to seamlessly integrate into the user’s life. For a leading IT consulting company such as Bitrock, an end-to-end approach cannot be separated from excellent User Experience design.

Design is the bridge between technical complexity and business value.

In today’s rapidly evolving landscape, it is essential to understand the role of a designer. To address this question, we conducted an interview with Federico Bianchi, a Junior UX Designer at Bitrock. During our conversation, he provided valuable insights into emerging trends, the significance of microcopy, and the transformative impact of Design Systems. Through his perspective, the company’s commitment to nurturing talent that not only focuses on aesthetics but also emphasizes strategy and usability becomes evident.

How do you keep up to date with the latest news in the UX & UI world?

It is crucial to establish a clear distinction between the two areas, as they progress at different speeds. UX (User Experience) is founded on human mental models characterized by slow and non-linear evolution, which is why the discipline rarely sees completely new innovations. My work often consists of rediscovering or reiterating fundamental principles, such as those defined by the Nielsen Norman Group. I subscribe to newsletters and information channels that discuss classic topics, which may be overlooked in the fast pace of daily development, but which remain highly relevant. My role involves constantly reviewing methodological techniques.

However, I believe that these principles should not be applied rigidly. For a UX designer, it is essential to strike a continuous balance between methodological assumptions and practical requirements. Despite the existence of best practices and international standards, practical challenges such as technical constraints, budgetary limitations, and time constraints often make full compliance difficult in daily operations. This is where true expertise lies: in the ability to strike a balance between theoretical perfection and practical necessities, ensuring an optimal user experience while maintaining the efficiency of the production process.

The UI (User Interface) is linked to the world of graphics and aesthetic trends, where changes occur rapidly. I use Behance and Dribbble for visual inspiration, and Awwwards for advanced interactions. However, it is important to approach these resources with a critical eye, as many online solutions are overly generic for real contexts. The key is to balance creativity and practicality, ensuring that interfaces remain relevant and functional over time. My way of keeping up to date is built around this challenge: following current trends while staying grounded in usability principles that stand the test of time.

Do you consider any trends in the UX & UI design world to be overrated or underrated?

The Glass Effect (Glassmorphism) is a design trend that has been popularized by Apple’s recent aesthetic choices. This translucent style has been a subject of debate in the design community. Some critics argue that its application at the operating system level can compromise readability and accessibility due to the unpredictability of the background chosen by the user.

However, I believe it is underrated when used in controlled contexts. In a specific business application, where we have full control over the color range, button hierarchy, and font, the Glass Effect can be managed ad hoc to create an elegant and functional interface. Furthermore, with the advent of augmented and spatial reality headsets, this style will likely become the industry standard. It is essential to acclimate the user’s eye to these transparencies, as opaque black blocks obstructing the view are not feasible in AR glasses. This evolution in visual habits is a subject of criticism today but will soon become the norm.

How do you imagine the evolution of your role and what new skills will be crucial?

Artificial Intelligence is a major current topic of interest.

It is a common misconception that AI will replace designers. While it is indeed already possible to generate simple interfaces with just a few prompts, the reality is somewhat different. AI excels at performing tasks based on clear instructions, but struggles when the problem is not clearly defined. Customers often require digital transformation, but do not themselves know what the main problem to be solved is. This is where the human factor remains irreplaceable.

I see my role evolving to encompass the responsibilities of a behavioral analyst and psychologist. In a future dominated by adaptive interfaces, designers will no longer design individual buttons, but will establish the rules of behavior for the system. The crucial skills will be empathy, data analysis, and accessibility. We must ensure that every innovation, whether vocal, gestural, or visual, is usable by everyone.

In this evolution, microcopy (or UX writing) will play a pivotal role. We are not merely referring to ‘writing texts’, but rather designing those concise groups of words — form labels, error messages, menu items, calls to action — that guide the user step by step. Microcopy is an essential tool that streamlines navigation, provides user reassurance during uncertain interactions, and transforms impersonal interactions into personalized experiences.

While AI can facilitate layout generation, designers must develop specific skills to address friction points and eliminate the subtle ambiguities that can hinder navigation. Caring for microtexts means understanding that each word influences conversions and accessibility. Consequently, design will increasingly become a matter of strategy and conversation, rather than merely “coloring” pixels. Our task will be to provide technology with a voice and a clear direction.

How would you describe your ideal colleague?

Design is not a solitary act; it is an iterative process of continuous validation.

The ideal colleague is what I would term ‘Feedback Friendly’. The ability to give honest and constructive feedback without belittling the other person, but with the sole aim of improving the final product, is essential. In a growth phase, having a discussion with a senior profile is vital as it allows you to pick up on details that had previously escaped your notice.

I would also highlight curiosity as a key attribute. Working in technology demands continuous learning and growth. The ideal colleague is someone who shares relevant information, whether it’s news, a new tool, or a case study they have recently read. The capacity to collaborate effectively, present your work, and embrace diverse perspectives is what elevates a good project to an exceptional one.

What is the biggest change in the industry since you started working?

There’s been a radical shift from “page-based” design to “component-based” design through Design Systems. When I started, the approach was centered on individual screens, as that’s how it was taught. However, when working on real, complex projects, it becomes clear that this method is not scalable.

In the current landscape, there is a shift towards component design. When you need to create a feature, you don’t design the page, but rather check which components of the Design System you can reuse or which new elements you need to create so that they are optimized and maintainable over time. It’s a continuous challenge, because tools like Figma constantly introduce new variables and automation logic that require you to update and optimize components created just a year earlier. The design process has evolved into a fluid one, bringing it into closer alignment with software architecture than was previously the case.

Conclusion

Federico’s experience illustrates that contemporary design is now a core component of successful IT projects, not just an accessory. At Bitrock, a leading IT consulting firm that specializes in helping companies with their innovation projects, we understand that digital transformation requires a balance between engineering and strategic design.

From the creation of scalable Design Systems to the meticulous curation of Microcopy and accessibility, our UX team works to ensure that every technology solution not only performs well, but also generates real value for the end user. By choosing Bitrock, you will be collaborating with professionals who, like Federico, look beyond current trends to build robust, inclusive digital systems that are ready for tomorrow’s challenges.

Would you like to optimise the user experience of your digital products or create a Design System specifically tailored to your company? Request a consultation with Bitrock’s experts today.

MQTT & Waterstream: Crossing the Boundaries from IoT Toward Enterprise Stream Processing

Mon, 09 Mar 2026

In the landscape of digital and technological evolution, data transmission often occurs far from the idealism of protected data centers. Real-world network infrastructures are frequently marked by variable latencies, limited bandwidth, and structural instability.

In this context, the MQTT protocol emerges not just as an Industrial IoT standard, but as a pragmatic and strategic choice for managing large-scale asynchronous communications.

As highlighted by Franco Geraci, Head of Engineering at Bitrock, during our latest feature on the Bitrock Tech Radio podcast, MQTT stands out for its ability to operate where traditional protocols fail.

Often relegated to a specific niche, MQTT is in reality the engine behind complex systems that require energy efficiency and resilience. However, the real challenge for companies lies not just in data collection, but in its seamless integration with enterprise analytics systems.


Application Scenarios Beyond IoT

By definition, a network protocol is a structured set of rules that enables heterogeneous devices to communicate according to predetermined standards. MQTT implements a publish/subscribe messaging model, defined by a total decoupling between the data source (publisher) and the recipients (subscribers).

At the heart of the architecture lies the broker, a central server that acts as a message router. Clients publish and subscribe to specific topics through the broker, removing the need for direct knowledge among network nodes. This system allows for persistent sessions and storage of messages for offline clients, ensuring continuity of the information flow even across temporary disconnections.
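Topic-based routing is what makes this decoupling work. MQTT topic filters support a single-level wildcard (`+`) and a multi-level wildcard (`#`); a simplified matcher, which ignores spec edge cases such as `$`-prefixed topics and shared subscriptions, can sketch how a broker decides which subscribers receive a message:

```java
// Simplified MQTT topic-filter matching: '+' matches exactly one level,
// '#' matches the remainder of the topic. Spec edge cases ($-topics,
// shared subscriptions) are deliberately ignored in this sketch.
public class TopicFilter {
    static boolean matches(String filter, String topic) {
        String[] f = filter.split("/", -1);
        String[] t = topic.split("/", -1);
        for (int i = 0; i < f.length; i++) {
            if (f[i].equals("#")) return true;        // matches all remaining levels
            if (i >= t.length) return false;          // topic too short
            if (!f[i].equals("+") && !f[i].equals(t[i])) return false;
        }
        return f.length == t.length;                  // no leftover topic levels
    }
}
```

So a subscription to `sensors/+/temperature` receives `sensors/room1/temperature` but not `sensors/room1/humidity`, while `factory/#` receives everything under `factory`.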

Beyond traditional IoT sensor scenarios, MQTT excels in contexts where computational resources and network stability are constrained, such as:

  • Mobile and unstable connections: Ideal for communications over cellular networks prone to frequent interruptions.
  • Limited resources: Optimized for battery-powered devices with modest CPU capabilities.
  • High concurrency: Designed to handle millions of simultaneous clients sending small-sized messages.

Today, MQTT’s adoption stretches well beyond industrial sensor monitoring, finding critical applications in highly technology-driven sectors. Below are some characteristic scenarios:

Industry 4.0 and Predictive Maintenance

In manufacturing, MQTT enables the collection of telemetry from PLCs and line machinery, decoupling physical machines from cloud-based analytical systems. This standard facilitates the implementation of artificial intelligence algorithms for predictive maintenance, optimizing processes without the heaviness of proprietary protocols.

Automotive and Fleet Management

Numerous players in the automotive field use MQTT for managing car-sharing systems and vehicle telemetry. In such settings, the protocol’s reliability over mobile connections allows near-real-time status updates and precise control of vehicle parameters, optimizing operational costs and timing.

Healthcare and Telemedicine

In digital health, sensors monitoring vital parameters use MQTT to ensure alarms reach hospital back-end systems with guaranteed Quality of Service (QoS), even under suboptimal network conditions.


Limitations of the Protocol and Alternatives to MQTT

Despite its versatility, MQTT is not a universal solution: there are indeed contexts in which the adoption of other protocols is technically preferable. Here are a few examples:

  • Traditional web applications: where standard CRUD operations and direct browser integration are needed, HTTP/REST remains the benchmark.
  • Large payloads: for transferring massive files or streaming media, protocols such as gRPC or WebRTC offer superior performance.
  • Complex business logic: systems that require advanced routing, complex transactional semantics, or heavy data transformations tend to favor solutions built on Apache Kafka.

Waterstream

The technical friction point often manifests in the integration between the edge (the sensor/MQTT world) and the enterprise core (the analytical/Kafka world). Kafka represents the high-speed highway for persistent event logs, but it is not natively optimized to handle millions of unstable connections coming from field devices.

Waterstream was created precisely to bridge this gap, acting like an osmotic membrane. It is not just a translation bridge, but a solution that enables the smooth passage of data between the material world of devices and the analytical power of business systems.

By using Waterstream, companies can leverage Kafka’s persistence and scalability while maintaining MQTT’s lightness and resilience at the network edge. This approach eliminates data isolation and simplifies the architectural complexity, turning raw telemetry into strategic real-time insights.


Conclusions

The evolution of MQTT demonstrates that we are no longer dealing with a mere protocol for the Internet of Things, but with a fundamental architectural choice for resilient management of real-time information flows. The ability to operate successfully over unstable networks, coupled with minimal bandwidth usage, makes it the reference standard for mission-critical sectors like Industry 4.0, automotive, logistics, real-time messaging, and digital health.

However, the strategic value of data is fully realized only when the barriers between the network edge and the enterprise analytical core are dismantled. Solutions like Waterstream respond precisely to this need, acting as the technological glue that makes it possible to scale millions of connections without giving up on Apache Kafka’s processing power.

An integrated approach allows companies to overcome the historic limits of data isolation and scalability complexity, turning a technical necessity into a concrete competitive advantage. An infrastructure capable of harmonizing the sensory world of devices with the electronic brain of the data center is the essential requirement for any company that aims for true end-to-end technological evolution.

Would you like to delve deeper into how Waterstream can optimize the data flow between your devices and your Kafka infrastructure? Reach out to our experts for a dedicated technical session.


Main Author: Franco Geraci, Head of Engineering @ Bitrock

Insight from AI Playground

Tue, 03 Mar 2026

Working remotely has its undeniable advantages, but there is one thing that no Google Meet call can ever replace: the energy of a day spent in person with your colleagues, especially when the excuse is a glimpse into the technological future.


On January 23, Daniel Zotti, Davide Ghiotto, and I met up in Vicenza for AI Playground 2026, a free event dedicated to front-end developers, with the aim of demonstrating the potential of Large Language Models — and Google Gemini 3 in particular — and discussing some of the best ways to integrate them into our projects.


Being able to finally chat in person, laugh over coffee, and share our first impressions live, even before the keynote began, gave the day a whole new flavor.

The Masterclass: Getting to the Heart of Gemini 3

The morning kicked off with a three-hour marathon led by Fabio Biondi. It wasn’t the usual theoretical overview, but a real hands-on workshop. We explored the fundamentals of Gemini 3 and its APIs, but the real highlight was the tools:

  • AntiGravity: Google’s new AI-powered IDE that promises to radically change the way we write code.
  • Gemini CLI and AI Studio: tools that we are already using in our projects and that have fully demonstrated how Generative AI can make our workflows and development easier and smoother.

Seeing the power of Multimodal Prompts live—capable of processing videos and images with incredible accuracy thanks to the NanoBanana model—gave us a thousand ideas on how to evolve the interfaces we develop every day at Bitrock.

Networking, Sushi and Genkit

The lunch break was the perfect opportunity to discuss with my colleagues what we had learned in the first part of the day and, above all, to spend some time together face to face. Between one maki and another, we discussed how these technologies could be applied to our clients’ real projects. It is often during these “offline” moments that the best insights arise.


The afternoon continued with highly technical sessions. Giorgio Boa showed us the power of Firebase in combination with GenKit, Google’s open-source framework that simplifies the integration of complex AI features. We talked about how to transform a traditional app into an “intelligent” experience in record time.

Spec Driven Development: Staying in control

One of the topics that made us think the most was Spec Driven Development (SDD), presented by Matteo Ronchi. In a world where AI can generate code at breakneck speed, the real challenge for us developers is to maintain control and quality. We explored methodologies and frameworks to ensure that AI is a powerful assistant, but always guided by rigorous human specifications.


There was no shortage of insights on Andrea Saltarello’s RAG (Retrieval-Augmented Generation) for customizing model responses with specific data and the use of Function Calling and MCP to allow AI agents to interact with the browser and external tools (such as Chrome DevTools).

Conclusion

Between coffees during breaks, we had the pleasure of talking with developers from all over Italy. Hearing different opinions, sharing common problems, and discovering new solutions is the very essence of these events.

We returned home with our minds full of ideas and even more convinced that AI should never be seen as a substitute for humans, but rather as an enhancement, a tool that allows us front-end professionals to focus on what really matters: the user experience and business logic.

The Vicenza Playground was just the beginning. The future is multimodal, agentic, and incredibly fast, and we are ready to lead the way.


Main Author: Gianluca La Rosa, Front-end Developer @ Bitrock

Swift for Android: Exploring Cross-Platform Development Beyond the Apple Ecosystem

Fri, 27 Feb 2026

A practical exploration of building native Android apps with Swift

Can Swift Become an Alternative when Evaluating Cross-Platform Apps?

In today’s digital transformation landscape, companies find themselves at a constant crossroads: release speed or native quality? Historically, choosing native development meant doubling efforts (and costs) by maintaining separate teams for Swift (iOS) and Kotlin (Android). Conversely, traditional cross-platform frameworks have often introduced compromises in performance and User Interface fidelity.

But what if we could break down these barriers without giving up the power of Swift? Recently, the evolution of Swift-to-Java interoperability tools has opened a “third way”: the ability to run Swift code natively on Android. At Bitrock, as a leading partner in IT innovation, we constantly explore these emerging technologies to provide our clients with solutions that maximize code reuse without sacrificing technical excellence.

In this article, we will analyze a practical experiment: integrating Swift business logic within a modern Android architecture. Is it truly possible for Swift to become the definitive alternative for enterprise cross-platform applications?


The Cross-Platform Question

Mobile development teams face a persistent challenge: how do you deliver native-quality apps across iOS and Android without duplicating effort?

The traditional approaches all have trade-offs:

  • Pure native development delivers the best performance and UX, but requires maintaining two separate codebases
  • Cross-platform frameworks like Flutter and React Native enable code sharing but often compromise on native feel or performance
  • Kotlin Multiplatform shares business logic while keeping UI native, but requires Android-first expertise

Swift for Android introduces the option to extend iOS-first development to Android, valuable for teams with established Swift codebases and expertise.


What We Built

Our example project demonstrates Swift for Android in a production-like scenario:

Features implemented:

  • Real-time data fetching from NASA’s API
  • Search and filtering capabilities
  • Full detail views with HD image support
  • Material Design 3 UI with Jetpack Compose

Architecture: The application splits responsibilities between the Swift layer, which handles API communication, data transformation, filtering, and search, and the Android layer, which takes care of the UI presentation.

This separation mirrors how production apps might structure shared business logic with native UI.


How the Swift-Android Integration Works

The bridge between Swift and Android consists of automatic code generation. A tool called swift-java analyzes your Swift APIs and generates Java bindings that Android code can call directly.

From Swift:

public class NASAClient {
    public func getTodayApod() async throws -> ApodData {
        // Swift implementation
    }

    public func search(_ apods: [ApodData], query: String) -> [ApodData] {
        // Filtering logic
    }
}

To Kotlin:

val nasaClient = NASAClient(apiKey)

// Swift async becomes Kotlin coroutine
val apod = nasaClient.getTodayApod().await()

// Swift functions called directly
val results = nasaClient.search(allApods, "galaxy")

This bridge handles memory management, type conversion, and async operations automatically.


Key Technical Insights

1. Type Safety Across Languages

Swift’s type system translates cleanly to Java/Kotlin:

  • Swift optionals (String?) become Java Optional<String>
  • Swift structs map to Java classes with getters
  • Swift async/await bridges to CompletableFuture, which integrates seamlessly with Kotlin coroutines
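To make the async mapping concrete, here is a self-contained Java sketch of the surface a generated binding might expose; `NASAClientStub` and its method are stand-ins for illustration, not the actual swift-java output:

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical stand-in for a swift-java generated binding: a Swift
// 'async throws' function surfaces in Java as a CompletableFuture.
public class NASAClientStub {
    public CompletableFuture<String> getTodayApodTitle() {
        // A real binding would invoke the Swift runtime here.
        return CompletableFuture.supplyAsync(() -> "Pillars of Creation");
    }

    public static void main(String[] args) {
        // Java callers can compose or block on the future; from Kotlin,
        // the same future can be awaited inside a coroutine via .await().
        String title = new NASAClientStub().getTodayApodTitle().join();
        System.out.println(title);
    }
}
```

Because `CompletableFuture` is the common currency here, the Swift side stays idiomatic async/await while the Kotlin side stays idiomatic coroutines.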

2. Platform-Specific Optimization

Swift code can adapt to each platform using conditional compilation:

#if os(Android)
    // Use Android-optimized HTTP client
    AsyncHTTPClient
#else
    // Use iOS URLSession
    URLSession
#endif

The same business logic runs everywhere, but HTTP networking, file I/O, and other platform-specific operations can be optimized for each environment.


Challenges to Consider

Build Complexity

Setting up the Swift toolchain for Android requires understanding both ecosystems. The initial configuration has a learning curve, but subsequent builds are straightforward. Occasionally, workarounds such as deleting cached build folders are needed, which can slow down development.

Platform Limitations 

Not all Swift features translate to Java. Complex enum associated values, certain protocol features, and some advanced Swift constructs don’t have direct Java equivalents (yet). A list of supported features is provided by the swift-java team.

Ecosystem Maturity 

The Swift for Android SDK is still in early development and might not be ideal for important enterprise projects yet, but it’s a tool worth checking out to understand how the Swift team is evolving.


Comparison with Alternatives

vs. Kotlin Multiplatform 

Both enable sharing business logic with native UI. KMP is Android-first; Swift for Android is iOS-first. Choose based on your team’s primary expertise.

vs. Flutter/React Native 

These frameworks share entire applications, including the UI. Native implementations are occasionally needed for edge cases.


Our Perspective

After experimenting with Swift for Android, we found that it still has some rough edges to smooth out before it can be used in production-ready apps. We expect it to evolve into a valuable tool for teams that work primarily on iOS/Apple platforms, or that need to bring existing Swift logic to Android.

On the other hand, for teams starting fresh or working primarily in Android, Kotlin Multiplatform or native development likely makes more sense.


Conclusion

As Swift continues evolving beyond Apple’s ecosystem, we expect:

  • Improved tooling and developer experience
  • Broader Swift package compatibility with Android
  • Better integration with Android development workflows

The technology is being actively worked on, and Apple seems to be investing in expanding Swift onto even more platforms. Whether Swift for Android becomes mainstream or remains a specialized tool, it expands the options available to mobile development teams.

Are you ready to optimize your mobile strategy and slash development costs?

Contact the Bitrock experts today for a personalized consultation on your mobile architecture.


Main Author: Mattia Contin, iOS Developer @ Bitrock

]]>
Reducing IoT TCO: When Complexity Costs More Than the Cloud https://bitrock.it/blog/reducing-iot-tco-when-complexity-costs-more-than-the-cloud.html Mon, 23 Feb 2026 10:25:21 +0000 https://bitrock.it/?p=29468 Many companies watch their IoT platform costs escalate faster than their installed sensor base, trapped in a jungle of intermediary components and custom bridges. In this article, we will explore how to reduce Total Cost of Ownership (TCO) by eliminating unnecessary complexity between the edge and the cloud through a lean architecture based on Waterstream and Kafka.

Let’s start with a use case: a manufacturing company with thousands of devices in the field decides to stream all IoT data into Kafka to enable real-time analytics and data-driven applications. On paper, the architecture looks flawless: a cloud provider’s managed IoT service, a dedicated Kafka cluster, custom MQTT-to-Kafka bridges, serverless functions for data normalization, and intermediary storage for buffering.

After a few months in production, the Ops team is buried in monitoring dashboards, while the CFO questions how it is possible for cloud-related costs to continue growing faster than the number of sensors. As we will see, the problem is not the costs themselves, but the structural complexity that the company is forced to pay for every single month without being able to truly reduce it.


The Hidden Costs

When discussing TCO for IoT platforms, the conversation tends to focus on software licenses or the number of Kafka nodes required. In reality, the true costs lurk within the jungle of intermediary components: separate MQTT brokers (managed or self-hosted), Kafka clusters, integration bridges, connectors, functions, queues, and intermediary databases used to compensate for the limitations of upstream or downstream systems. Each component adds infrastructure costs, operational costs (monitoring, patching, incidents, on-call rotations), and the development and maintenance costs for that MQTT-to-Kafka bridge that no one wants to touch anymore.

TCO therefore becomes impossible to predict and nearly impossible to reduce without re-evaluating the entire architecture. A real-life case study featured on our blog shows a logistics customer that went from spending around $38,000 per month to $8,000 per month by migrating from a public cloud-based IoT service to an on-premises solution based on Waterstream. It is not the cloud that is too expensive: it is the unnecessary complexity that accumulates between the devices and the data platform.
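The arithmetic behind such a migration can be made concrete with a toy model. Note that the per-component figures below are invented purely for illustration (only the before/after totals echo the case study above); real assessments itemize far more cost categories:

```typescript
// Toy TCO model: each architectural component contributes infrastructure,
// operational, and maintenance costs per month. All figures are hypothetical.
type LineItem = { component: string; infra: number; ops: number; maintenance: number };

function monthlyTco(items: LineItem[]): number {
  return items.reduce((sum, i) => sum + i.infra + i.ops + i.maintenance, 0);
}

const before: LineItem[] = [
  { component: "managed IoT service", infra: 12000, ops: 3000, maintenance: 1000 },
  { component: "Kafka cluster", infra: 6000, ops: 2000, maintenance: 1000 },
  { component: "MQTT-to-Kafka bridges", infra: 2000, ops: 2000, maintenance: 3000 },
  { component: "functions + buffer storage", infra: 4000, ops: 1000, maintenance: 1000 },
];

const after: LineItem[] = [
  { component: "Kafka cluster", infra: 5000, ops: 1500, maintenance: 500 },
  { component: "Waterstream (stateless)", infra: 500, ops: 300, maintenance: 200 },
];

console.log(monthlyTco(before), monthlyTco(after)); // → 38000 8000
```

The point of the model is structural: removing a component removes three cost columns at once, which is why subtraction moves TCO far more than resizing any single cluster.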


Waterstream: When Subtraction is Worth More Than Addition

The idea behind Waterstream is as simple as it is disruptive: allow devices to speak MQTT directly to Kafka, without a separate broker and without custom integration layers. Devices connect via MQTT to Waterstream, which uses Kafka as its sole backend for persistence and message distribution, while applications continue to consume from Kafka as they do today. One container, MQTT devices, Kafka apps, and zero custom “glue code.”

The consequence is the disappearance of an entire architectural layer made of bridges, intermediate queues, and functions for data normalization or transformation. Waterstream acts as a stateless component, perfect for Kubernetes and cloud-native environments, with deployment and scaling that can be automated without reconfiguring complex clusters. This means a dedicated team is no longer needed to maintain the plumbing between the MQTT and Kafka worlds: that complexity simply no longer exists.


Where TCO is Truly Reduced

Reducing TCO does not mean spending on fewer nodes, but rather removing structural complexity. With Waterstream, this translates into three concrete impacts that every CFO and IT manager can measure. Eliminating the separate MQTT broker and custom bridges removes entire line items from the infrastructure and operational budget. Kafka becomes the single backend, Waterstream exposes it via MQTT, and every proprietary managed service replaced represents a reduction in vendor lock-in and recurring cloud provider fees. Every eliminated component is one less monitoring chart, one less alert, and one less root cause analysis.

Furthermore, Waterstream does not maintain state: messages and session information live in Kafka. This radically simplifies the work of those managing the platform in production. Deployment and scaling become standard in Kubernetes, version updates and rollouts are managed like any other microservice, and a single observability stack is sufficient for both Kafka and Waterstream. Less time spent maintaining infrastructure means more time building features that generate business value.


Development Focused on Use Cases, Not Protocols

Without ad-hoc integration layers, teams can think in terms of use cases instead of protocols. IoT data is available directly in Kafka, ready for real-time analytics, AI, and automation, without duplicated logic between the MQTT and Kafka worlds. Simpler data pipelines are easier to test, put into production, and evolve over time. This is how TCO is truly lowered: fewer lines of “invisible” code written to make systems communicate that should already be talking to each other, and more investment in functionalities with a direct impact on the business.


From IoT Complexity to an AI-Ready Platform

Consider the case of a manufacturing company managing dozens of plants with thousands of sensors per line. In the initial scenario, the architecture included a managed IoT service plus Kafka and custom bridges, leading to rising cloud costs and stalled AI projects because data was not reaching data scientists reliably. With the introduction of Waterstream and an architectural revision, devices continued to speak MQTT but toward Waterstream, Kafka became the central nervous system for all operational data flows, and Data & AI teams were able to access real-time streams without going through additional pipelines.

The benefits were immediate: a significant reduction in spending related to managed IoT services and intermediary components, and an onboarding time for new use cases (alerts, predictive maintenance, digital twins) that dropped from months to weeks. The platform became natively ready to integrate AI models and advanced use cases without further architectural patches. It is not just about saving money: it is about transforming a platform that drains budgets into a strategic asset that enables innovation.


The Role of Bitrock: From PoC to Production Run

To truly contain TCO, solid architectural choices, change governance, and a clear roadmap focused on business objectives (not just the tech stack) are required. 

This is exactly where Bitrock comes in, with the design and implementation of cloud-native and streaming platforms built for scalability, resilience, and cost control. 

The path Bitrock designs with clients always starts with an assessment of the current architecture and effective TCO (including hidden complexity costs), continues with the design of a target architecture aligned with business goals and the digital roadmap, and materializes in an incremental implementation (PoC, rollout, industrialization) with a focus on observability, security, and change management.

Contact our Professionals to present your use case and receive a dedicated consultation.

]]>
Sonarflow: Automating Code Quality Where It Actually Matters https://bitrock.it/blog/sonarflow-automating-code-quality-where-it-actually-matters.html Wed, 18 Feb 2026 16:06:20 +0000 https://bitrock.it/?p=29478 In today’s software development landscape, release velocity has become the ultimate competitive advantage. However, this acceleration cannot come at the expense of code quality. Static analysis tools like SonarQube have become essential pillars of modern workflows, yet a paradox is emerging: an abundance of data doesn’t necessarily translate into immediate software improvement.

The real bottleneck slowing down innovation today isn’t the ability to detect issues—it’s the friction in moving from analysis to action. When a “Quality Gate” blocks a pipeline, it often triggers a cumbersome, iterative process that pulls the developer out of their “flow” state. Sonarflow was born from this critical need: to transform code quality from a bureaucratic checkpoint into an integrated productivity accelerator.

In this article, I want to explore how the intelligent integration of static analysis, contextual feedback, and AI can break down operational silos.

Why Static Analysis Isn’t Enough

Most engineering teams have already adopted static analysis. SonarQube, ESLint, and similar tools run within CI/CD pipelines, surfacing issues early and enforcing rigorous security standards. On paper, this infrastructure should guarantee a constant improvement in both quality and speed.

Yet, the reality on the ground is often different. In complex projects, Pull Request (PR) reviews are still slow and fragmented. Developers are forced to constantly jump between their IDE, the CI dashboard, and SonarQube reports. This continuous context switching carries a massive hidden cost: fixes are perceived as interruptions and postponed to a generic “later,” fueling technical debt that quickly becomes unmanageable.

This is where Sonarflow enters the frame.

The Problem: Quality Feedback is Out of Context

Let’s analyze a typical workflow found in many organizations before process optimization:

  1. The developer writes code locally.
  2. Quality tools run on the CI (e.g., SonarQube) after the push.
  3. Issues are reported in an external dashboard or a generic PR comment.
  4. The developer must leave their workspace to analyze logs and understand the error.

This approach decouples problem detection from resolution. This separation is a primary enemy of productivity: every minute spent interpreting an external report is a minute taken away from innovation, generating high friction costs.

The Core Idea of Sonarflow: Bringing Issues into the Developer’s Flow

The philosophy behind Sonarflow is simple yet revolutionary: significant quality improvement only happens when feedback is timely, contextual, and actionable. Instead of forcing the developer to hunt for issues in a generic dashboard, Sonarflow inverts the paradigm by bringing critical information exactly where the code is written.

Through intelligent automation, Sonarflow executes some key steps:

  • Contextual Detection: It automatically identifies the Pull Request based on the local branch name.
  • Smart Filtering: It retrieves only the issues relevant to the specific changes made, eliminating the background noise of unrelated legacy bugs.
  • Assisted Resolution: It leverages the power of LLMs (Large Language Models) to suggest or apply immediate fixes.
  • Time Reduction: It shortens the feedback loop within a single development cycle.
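The "smart filtering" step above can be sketched as a pure function: given the issues the quality server reports and the files changed in the PR, keep only the overlap. The issue shape here is deliberately simplified and hypothetical, not Sonarflow's actual data model:

```typescript
// Minimal sketch of PR-scoped issue filtering. `component` stands in for the
// file path an issue was raised on; real payloads carry much more metadata.
type Issue = { key: string; component: string; severity: string; message: string };

function filterToChangedFiles(issues: Issue[], changedFiles: string[]): Issue[] {
  const changed = new Set(changedFiles);
  // Only issues raised on files touched by the PR survive; legacy noise is dropped.
  return issues.filter((i) => changed.has(i.component));
}

const issues: Issue[] = [
  { key: "a", component: "src/app.ts", severity: "MAJOR", message: "Unused variable" },
  { key: "b", component: "src/legacy.ts", severity: "MINOR", message: "Old bug" },
];

const relevant = filterToChangedFiles(issues, ["src/app.ts"]);
// Only the issue in the changed file remains.
```

This is what "eliminating the background noise of unrelated legacy bugs" means in practice: the developer's attention is directed only at issues their own diff introduced or touched.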

How Sonarflow Integrates into Modern DevOps

It’s important to emphasize that Sonarflow isn’t meant to replace SonarQube; it’s designed to enhance it, acting as a bridge between the analysis server and the development environment. In a mature DevOps ecosystem, the two tools work in synergy to close the quality loop:

  • The developer finishes a task and opens a Pull Request.
  • CI/CD triggers and pipelines start.
  • Quality is checked via QA tools (SonarQube).
  • If something is wrong, the pipeline fails, and the developer is notified.
  • Sonarflow integrates with SonarQube:
    • It retrieves the issues.
    • It analyzes them.
    • It automatically resolves issues (via LLMs), applying knowledge of your specific codebase.
  • The changes trigger a PR update, and the flow restarts.

Ultimately, when the code quality is acceptable, the workflow closes, and the PR can be merged with (much) higher confidence.

Configuration and Usage (For Developers)

Local setup is quick and straightforward. Sonarflow is currently available as an npm package; you can install it globally with the command npm i sonarflow -g

(Skip this if you prefer using npx or adding it as a project dependency). Then, in the project you wish to configure, simply run: sonarflow init

Here is a demo of the prompts that guide the user through the configuration:

After the initial setup, you can use the sonarflow fetch command to retrieve issues on the current branch from SonarQube and get a concise summary.

AI-Assisted Issue Summaries and Fix Suggestions

The final steps of the configuration involve your preferred editor/AI provider to create a rule for auto-fixing SonarQube issues (the default rule name is sonarflow-autofix.mdc).

This is a template rule available in three variants—Safe, Vibe, and YOLO—which differ in the “aggressiveness” of the agent’s autonomy. It is completely customizable: a simple Markdown file with instructions you can tailor to your needs. Once customized, you are ready to leverage LLM agents.

Here’s a demo of it:

To save you some time, the final answer is recapped here. As you can see from the last line (“all changes are committed as…”), the rule we configured for this project is in “YOLO” mode: every fix is automatically committed after linting and checks, but you can of course customize or remove this behavior.

In the quick demo you saw me using Cursor as my IDE of choice, but this works with any other editor (VS Code, Cline, Gemini CLI, Antigravity, Claude Code, Codex… you name it).

Sonarflow is both model and provider agnostic: you are in control of what “AI engine” you are going to use.

Business Impact: The Productivity Multiplier

Sonarflow isn’t just a technical tool; it’s an investment in team velocity. Moving from passive monitoring to active correction transforms the “cost of quality” into measurable value:

  • Workflow-centric
  • Immediate feedback
  • Focused issues specific to the change
  • AI assistance for minor fixes
  • Fully customizable

For Tech Leads and Engineering Managers, this translates to higher throughput and better predictability.

Conclusion

Code quality should not be viewed as a checkbox or a hurdle to release—it is a matter of flow. Tools that only generate static reports solve only half of the equation. The real leap for a modern company comes from deeply integrating quality into the daily decisions developers make.

Sonarflow represents this leap: it brings the context of the problems directly where they are needed, transforming bug detection into immediate resolution. Adopting these tools means giving your teams the freedom to move faster, with the certainty that the ground beneath them is solid.

At Bitrock, we are ready to guide you through this technological evolution. Whether it’s optimizing your Developer Experience or revolutionizing your DevOps processes through AI, our approach is always driven by results and technical excellence.

Want to turn code quality into your primary competitive advantage? Contact our Bitrock experts today.

To try Sonarflow and see code quality become a productivity engine for yourself, visit sonarflow.dev.


Main Author: Davide Ghiotto, Senior Front-end Engineer @ Bitrock

]]>
Build vs Buy in GenAI: Where to Invest for Sustainable Competitive Advantage https://bitrock.it/blog/buildvs-buy-in-genai-where-to-invest-for-sustainable-competitive-advantage.html Mon, 16 Feb 2026 09:33:50 +0000 https://bitrock.it/?p=29446 The adoption of Generative AI, and particularly Large Language Models (LLMs), is no longer an isolated experiment in research and development labs: it has become a strategic priority for companies that want to maintain a competitive advantage in the market. However, every CIO and CTO faces a crucial question: should we build proprietary AI solutions in-house or purchase already established platforms and tools on the market?

The answer is not binary, and the ‘Build vs Buy’ dilemma hides a complexity that goes far beyond the technological choice. It is a decision that impacts innovation speed, operational costs, risk governance and, ultimately, the company’s ability to scale AI from a proof-of-concept to a governable and sustainable business asset.

In this article, we analyze the strategic criteria to guide this choice, exploring where it truly makes sense to invest in custom development and where, instead, standardization becomes the key to avoiding technical debt and technology lock-in.


Differentiation VS Standardization

The fundamental principle for deciding between ‘build’ and ‘buy’ can be summarized in one rule: build where you differentiate, standardize where you need to be reliable, fast and controllable.

In the GenAI context, competitive advantage rarely lies in owning a proprietary LLM. Models are increasingly accessible, and their availability through APIs or open-source licenses is now democratized. What truly generates distinctive value is the ability to transform business data, internal processes and customer needs into intelligent decisions and personalized services. This is the area where investing in custom development makes sense: AI applications that leverage the company’s unique context, proprietary workflows and exclusive datasets.

On the other hand, there is a set of infrastructure and governance components that must be solid, reliable and compliant with security and compliance standards. These components do not directly generate revenue, but are absolutely critical to prevent AI from becoming an operational risk. Here, standardization and the adoption of established solutions is the winning strategy.


The AI Gateway: the Foundation of AI Scalability

A key concept for understanding the need for standardization is the control plane. Using an urban metaphor: it is possible to build the most advanced and innovative buildings in a city — AI applications — but without a system of traffic lights, traffic rules and an operations center, the city will eventually collapse under the weight of chaos. The control plane is precisely that operations center that governs AI request traffic, ensuring that infrastructure can grow without losing control.

In the GenAI world, the control plane is materialized, among other things, in an AI Gateway: an architectural layer positioned between applications and AI models/services, centralizing governance, security, observability and cost control. This infrastructure enables development teams to innovate rapidly without having to reinvent security, compliance and monitoring mechanisms every time.


Fragmentation Risks

One of the most frequent mistakes in enterprise AI implementations is the proliferation of integrations with model providers, accompanied by sparse and duplicated governance rules within each individual application. This approach, initially perceived as the fastest to achieve results, generates three fundamental problems over time:

  • Technology lock-in: When every application is tightly coupled to a specific model or provider, switching vendors or adopting new solutions becomes a costly and slow operation, even when the market offers more performant or economical alternatives.
  • Unpredictable costs: LLM models operate on token consumption-based pricing. Without centralized control, it becomes impossible to predict, limit and optimize costs. Often, the problem is discovered only when the monthly bill arrives or when performance suddenly degrades.
  • Risk and compliance: In the absence of a coherent audit trail and centralized policies, managing access, protecting sensitive data and accountability for AI decisions become difficult to govern, exposing the company to security risks and regulatory penalties.


Three Fundamental Capabilities to Avoid Lock-in and Govern ROI

To transform AI adoption from a series of isolated experiments into a governable business asset, it is necessary to implement some strategic capabilities:

1. Abstraction and routing: Unified access to AI models, independent of the provider, enables avoiding lock-in and adopting intelligent routing strategies. Routing allows directing requests to the most suitable model based on cost, latency and accuracy criteria. 

2. Cost control and guardrails: Implementing semantic caching mechanisms (an ‘intelligent memory’ that recognizes semantically similar questions and reuses previous answers), rate limiting (controlling the number of requests per user/app) and circuit breakers (automatic blocking when spending thresholds are exceeded or anomalous behaviors are detected) is essential to ensure the economic and operational sustainability of AI.

3. Resilience and observability: Resilience implies the ability to manage provider failures through automatic fallbacks to alternative models, ensuring service continuity. Observability, instead, means having complete visibility on performance, errors, token consumption and output quality in production. Without observability, diagnosing problems such as model hallucinations or performance degradation becomes impossible.
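The first capability, cost- and latency-aware routing, can be sketched as a pure scoring function over interchangeable model endpoints. Model names, prices, and the scoring weight below are hypothetical:

```typescript
// Sketch of intelligent routing: pick the cheapest model that meets an
// accuracy floor, penalizing latency. All catalog figures are invented.
type Model = { name: string; costPer1kTokens: number; p50LatencyMs: number; accuracy: number };

function route(models: Model[], minAccuracy: number, latencyWeight = 0.001): Model {
  const eligible = models.filter((m) => m.accuracy >= minAccuracy);
  if (eligible.length === 0) throw new Error("no eligible model");
  // Score = token cost plus a latency penalty; the lowest score wins.
  return eligible.reduce((best, m) =>
    m.costPer1kTokens + m.p50LatencyMs * latencyWeight <
    best.costPer1kTokens + best.p50LatencyMs * latencyWeight
      ? m
      : best
  );
}

const catalog: Model[] = [
  { name: "large-model", costPer1kTokens: 0.03, p50LatencyMs: 900, accuracy: 0.95 },
  { name: "small-model", costPer1kTokens: 0.002, p50LatencyMs: 200, accuracy: 0.85 },
];
```

With a relaxed accuracy floor the router sends traffic to the cheap model; raise the floor and it transparently upgrades to the larger one, with no change in the calling application.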
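The second capability can be illustrated with a minimal sketch combining a fixed-window rate limiter and a spend-based circuit breaker; class name and thresholds are invented for the example:

```typescript
// Sketch of gateway guardrails: a per-caller fixed-window rate limit plus a
// spend-based circuit breaker that blocks all traffic past a budget.
class Guardrails {
  private counts = new Map<string, number>();
  private spentUsd = 0;

  constructor(private maxPerWindow: number, private spendLimitUsd: number) {}

  allow(caller: string): boolean {
    if (this.spentUsd >= this.spendLimitUsd) return false; // circuit open
    const n = (this.counts.get(caller) ?? 0) + 1;
    this.counts.set(caller, n);
    return n <= this.maxPerWindow;
  }

  recordSpend(usd: number): void {
    this.spentUsd += usd;
  }

  resetWindow(): void {
    this.counts.clear();
  }
}

const g = new Guardrails(2, 100);
// g.allow("app-a") -> true, true, then false within the same window.
```

Centralizing these checks in the gateway, rather than in each application, is what makes the monthly bill predictable: no single team can exhaust the budget unnoticed.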
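The third capability, automatic fallback with an audit trail, can be sketched as a loop over providers that records what happened at each step; the providers here are stubs and the names are hypothetical:

```typescript
// Sketch of provider fallback: try providers in order, record a minimal
// observability trail, and surface the first successful answer.
type Provider = { name: string; call: (prompt: string) => string };

function callWithFallback(
  providers: Provider[],
  prompt: string,
  trail: string[] = []
): { answer: string; trail: string[] } {
  for (const p of providers) {
    try {
      const answer = p.call(prompt);
      trail.push(`${p.name}: ok`);
      return { answer, trail };
    } catch {
      trail.push(`${p.name}: failed`);
    }
  }
  throw new Error("all providers failed");
}

const flaky: Provider = { name: "primary", call: () => { throw new Error("timeout"); } };
const stable: Provider = { name: "fallback", call: (p) => `echo: ${p}` };

const result = callWithFallback([flaky, stable], "hi");
// result.trail -> ["primary: failed", "fallback: ok"]
```

The trail is the seed of observability: in production it would feed structured telemetry (for example via OpenTelemetry spans) rather than an in-memory array.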


Conclusions

The choice between ‘build’ and ‘buy’ in GenAI must be guided by a strategic analysis of the areas where the company can truly differentiate and those where standardization reduces risk and accelerates innovation. Building AI applications that leverage proprietary data and processes generates competitive advantage. Standardizing governance, security and observability infrastructure through an AI Gateway transforms AI from a series of pilot projects into a scalable, transparent and sustainable system.

Bitrock, as a leading IT consulting company specialized in enterprise innovation and digital evolution, positions itself as a strategic partner not only for the initial integration of AI solutions, but above all to ensure the operational maturity of large-scale LLM projects. Our specific expertise is focused on governance, security and economic sustainability of AI infrastructures. 

This includes the design of scalable AI architectures, the integration of standards such as OpenTelemetry and, above all, the implementation of the AI Gateway through Radicalbit, a product of the Fortitude Group portfolio.

We transform the operational uncertainty associated with Artificial Intelligence into strategic confidence, ensuring that investments made in AI are robust, optimized and ready for enterprise scalability in mission-critical workflows.

Discover how Bitrock can support you in the strategic adoption of GenAI and contact us for a dedicated consultation.

]]>
Behind the Scenes of Coding – Carmelo Calabrò https://bitrock.it/blog/behind-the-scenes-of-coding-carmelo-calabro.html Thu, 12 Feb 2026 14:31:55 +0000 https://bitrock.it/?p=29376 Following our first interviews, our column “Behind the scenes of coding” continues its journey to give voice to the driving forces behind innovation at Bitrock. While in the first article we explored the importance of simplicity and maintainability, today we delve into a realm where time is measured in milliseconds and data volumes challenge the limits of traditional architecture.

In this third instalment, we meet Carmelo Calabrò, Senior Software Engineer, to discuss predictive systems, extreme optimisation and the latest news from Kafka 4.

A story that highlights how Bitrock’s IT consulting is not just theory, but a constant push beyond technological limits to exceed our stakeholders’ expectations.

What is the biggest technical challenge you have faced and how did you overcome it?

The most complex technical challenge I have faced involves designing an architecture able to manage a high volume of data in near real-time and transform it into predictive information before the next batch arrives.

The context was the development of a proactive monitoring system whose goal was to generate malfunction alerts that could prevent customer reports. The most difficult constraint was time. The data arrived at a very high frequency and the entire cycle of cleaning, structuring and aggregation had to be completed in a matter of minutes. Overcoming this challenge required perfect synchronisation between every component of the system.

Which project have you been particularly proud of?

Definitely the development of a series of web applications for low-powered devices that nevertheless had to guarantee a high level of performance. It was a meticulous job of extreme optimization: the application had to respond in a few milliseconds while maintaining maximum stability on limited hardware.

Although the technical success was rewarding, the real satisfaction came from the end user. Seeing people use those applications on a daily basis and noticing their satisfaction with the fluidity we had managed to achieve was the real reward. It is in those moments that you understand the real value of your work.

Which language has surprised you the most?

I have recently been working a lot with Kafka, delving into its scalability, which is fundamental for both performance and cost control. Kafka provides an excellent balance between consistency and order, but what surprised me the most is a new feature introduced with Kafka 4: Share Groups.

Traditionally, each partition is assigned to a single consumer to maintain order. With the new Share Groups, however, multiple consumers in the same group can simultaneously process different messages from the same partition. This changes everything: the number of partitions is no longer a constraint on scalability, which now depends only on the number of consumers. This is a revolution for cases where message order is not critical.
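The difference Carmelo describes can be shown with a deliberately simplified simulation. This is a conceptual sketch of the two consumption models, not the actual Kafka protocol:

```typescript
// Classic consumer groups: each partition is owned by at most one consumer,
// so the number of active consumers is capped by the partition count.
function classicActiveConsumers(partitions: number, consumers: number): number {
  return Math.min(partitions, consumers); // the rest sit idle
}

// Share groups (conceptually): individual messages are handed to any free
// consumer, so parallelism is bounded by consumers, not partitions.
function shareGroupActiveConsumers(pendingMessages: number, consumers: number): number {
  return Math.min(pendingMessages, consumers);
}

// Three partitions, four consumers: one consumer is idle in the classic
// model, while a share group keeps all four busy given enough messages.
```

This is exactly the scenario from the interview: with three partitions and four consumers, the classic model strands the fourth consumer, whereas a share group lets scalability follow the consumer count.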


Conclusion

In this interview, we have seen how the technical challenge often turns into a mission: to make the invisible (data) a visible and useful tool for people. From real-time flow management to extreme scalability, the goal of our Bitrocker team remains the same: technical excellence at the service of business.

This concludes our “Behind the scenes of coding” column, in which we have sought to offer an authentic glimpse into how we approach digital evolution as partners ready to break down complexity in order to build solid solutions.

Are you ready to turn your technological challenges into concrete successes? Discover our integrated end-to-end and Agile methodology and all Bitrock services for the innovation of your business.

]]>
MCP Server and Telegram: Extending AI Agents with Custom Tools https://bitrock.it/blog/technology/mcp-server-and-telegram-extending-ai-agents-with-custom-tools.html Mon, 09 Feb 2026 14:58:48 +0000 https://bitrock.it/?p=29435 The era of Generative Artificial Intelligence has made AI Agents a ubiquitous tool. To fully exploit their potential, Agents must interact with the external world, performing specific actions such as sending emails, querying databases, or sending notifications. 

This is where the MCP (Model Context Protocol) Server comes into play, a framework that exposes these features in a standardized and accessible way.

This technical article explores the concept of the MCP Server, discussed in the dedicated episode of our Bitrock Tech Radio podcast and in our article on Platform Shifting and the new MCP and A2A protocols, presenting a practical implementation in TypeScript that creates a direct communication channel to a personal Telegram account.

The Model Context Protocol (MCP)

The Model Context Protocol (MCP) is a communication protocol and a set of specifications that define a standardized way to expose tools, resources, and prompts to LLMs.

Essentially, MCP acts as a universal intermediary:

  1. Standardization: Defines a common format for describing capabilities (tools).
  2. Accessibility: Allows any compatible AI agent to discover and invoke these capabilities remotely.
  3. Extensibility: Allows developers to quickly integrate new capabilities (custom APIs, external services, internal logic) into the AI agent ecosystem.

The MCP Server

An MCP Server is an application that implements the MCP protocol. Its main function is to register, describe, and execute the tools it exposes. When an AI agent—the MCP Client, such as Claude Desktop or an IDE such as VS Code—needs to perform a specific action, it queries the MCP Server to obtain a description of the available tools and then invokes the most appropriate one, passing the necessary parameters.

This mechanism is fundamental for LLM tool use, or function calling, allowing models to overcome the limitation of not being able to interact directly with the real world.


Use Case: Programmatic Notes with Telegram

Our hands-on example is the creation of an MCP Server in TypeScript, whose sole purpose is to expose a tool called send-note. This tool allows you to send messages (notes) directly to a specific Telegram channel or chat, providing a quick and programmatic notification mechanism for scripts, processes, or, of course, other AI agents.

The Role of the Telegram Bot

To send messages to Telegram programmatically, you cannot use a standard user account. You need to create a Telegram bot.

A bot is an application that operates through the Telegram API.

  • Creation: A bot can be easily created using the official @BotFather bot on Telegram. This process generates a unique API token.
  • Identification: To send a message to a specific user, the bot needs the Chat ID (the identifier of the conversation between the user and the bot).
  • Interaction: All operations are performed by sending HTTP requests to Telegram’s API server, using the API token for authentication.

The configuration, as described in the project’s README.md file, requires two environment variables essential for the server to function:

  • TELEGRAM_TOKEN: To authenticate the bot’s API requests.
  • TELEGRAM_PERSONAL_CHAT_ID: To specify the message recipient (the user).

Technical Implementation in TypeScript

The server was developed in TypeScript, a typed superset of JavaScript that brings greater robustness and maintainability to the code. Using the official MCP SDK greatly facilitates the implementation of the protocol.

index.ts file

The heart of the project lies in the index.ts file. Let’s analyze the key steps:

Initialization of the Server and Transport

  1. McpServer: The base class for implementing the MCP server is imported. The configuration object defines important metadata such as name and version, which the client agent will use to identify and describe the server.
  2. StdioServerTransport: The MCP protocol defines how information should be exchanged. In this case, we choose StdioServerTransport, which uses standard input/output streams (stdin/stdout) for communication. This is a common mode for servers that run as child processes or are integrated into local development environments such as an IDE (e.g., VS Code).

Definition of the send-note tool

The key operation is registering the tool, which we will call send-note, using the server.tool() method.

  1. Signature (Zod Schema): The MCP Server uses the Zod library (z in the example) to define a strict, typed schema for the inputs that the tool expects. In this case, a single message parameter is required, which must be a string with a minimum length of 1 and a maximum length of 4096 characters (Telegram’s standard limit for a single message). This typing is essential for AI agents to construct correct function calls.
  2. Metadata (Hint): The object containing title, readOnlyHint, destructiveHint, etc., provides semantic information to the AI agent. The setting openWorldHint: true indicates that the execution of this tool may have external effects on the world (in this case, sending a message), a crucial detail for the agent’s decision-making logic.

Tool Execution

The body of the async function defines the logic that is executed when the tool is invoked:

  1. URL construction: The TELEGRAM_API environment variable contains the base URL for the Telegram API (which includes the bot token). The request is directed to the /sendMessage method.
  2. Fetch call: A POST request is made with a JSON payload that includes the chat_id (the recipient, taken from the environment variables) and the text (the message content, passed as a parameter from the MCP Client).
  3. Error handling: The code checks the success of the response (if (!res.ok)). If it fails, it returns an object that includes isError: true and a readable error message (content), following the standard MCP response format. If successful, it returns the confirmation message.
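
The callback logic described in the three steps above could be sketched as follows (the helper names buildResult and sendNote are ours, and the real index.ts may structure this differently; the Telegram base URL and chat ID are assumed to come from the environment):

```typescript
// Shape of an MCP tool result: a list of content parts, plus an
// optional isError flag on failure.
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

// Builds an MCP-style result from the outcome of the Telegram API call.
function buildResult(ok: boolean, detail: string): ToolResult {
  return ok
    ? { content: [{ type: "text", text: `Note sent: ${detail}` }] }
    : { content: [{ type: "text", text: `Failed to send note: ${detail}` }], isError: true };
}

// Sketch of the callback body: POST to /sendMessage with chat_id and text,
// then translate the HTTP outcome into an MCP response.
async function sendNote(message: string, telegramApi: string, chatId: string): Promise<ToolResult> {
  const res = await fetch(`${telegramApi}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text: message }),
  });
  return buildResult(res.ok, res.ok ? message : `HTTP ${res.status}`);
}
```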

Environment Variable Integration

The env.ts file (implicit in the use of TELEGRAM_API and TELEGRAM_PERSONAL_CHAT_ID) is responsible for reading and validating sensitive credentials from the .env file. This approach decouples the server logic from specific configurations and ensures that keys are not accidentally committed to version control.
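
A minimal version of this validation could look like the following sketch (the helper name requireEnv is an assumption; the real env.ts may use a schema library instead):

```typescript
// Reads a required variable and fails fast at startup if it is missing,
// so a misconfigured server never reaches the point of handling tool calls.
function requireEnv(name: string, env: Record<string, string | undefined> = process.env): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// The article's TELEGRAM_API base URL embeds the bot token, e.g.:
// const TELEGRAM_API = `https://api.telegram.org/bot${requireEnv("TELEGRAM_TOKEN")}`;
// const CHAT_ID = requireEnv("TELEGRAM_PERSONAL_CHAT_ID");
```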


Architecture and Workflow

To understand how the MCP Server fits into the broader architecture, let’s consider the typical workflow:

  1. Startup: The MCP Server is started (e.g., with npm start) and listens on the StdioServerTransport channel.
  2. Discovery (Client): An AI Agent (MCP Client), such as an LLM model, connects to the server (or loads its tool description) and discovers the existence of the send-note tool, along with its description and parameters.
  3. Decision (AI Agent): The AI Agent receives a prompt from the user, for example: “Remind me to call Mom tomorrow morning at 9”. The agent recognizes that the appropriate action is to “send a note” and constructs the call to the send-note tool with the message parameter: “Remind me to call Mom tomorrow morning at 9”.
  4. Invocation (MCP): The Agent sends a formatted invocation message via stdin to the MCP Server.
  5. Execution (Server): The MCP Server receives the message, extracts the message parameter, and executes its associated callback, which is the HTTP request to the Telegram API.
  6. Confirmation (Server and Client): The Server receives the response from Telegram, encapsulates it in an MCP response (success/error), and sends it to the AI Agent via stdout. The AI Agent can then use this confirmation to inform the user.

Advantages and Conclusions

The implementation of an MCP Server for Telegram notifications demonstrates the power and technological advantages brought by this protocol, including:

  • Strong Typing (TypeScript): The combination of TypeScript and Zod ensures that input parameters are validated before being used, reducing runtime errors caused by invalid inputs from AI agents or clients.
  • Decoupling: The AI Agent does not need to know the Telegram API, Token, or Chat ID. It only needs to know the signature of the send-note tool. The MCP Server acts as an abstraction layer that handles complex logic and credential management.
  • Future scalability: Adding new features (e.g., send-image, create-reminder) only requires registering a new tool on the server, without having to modify the AI Agent’s logic.

The Potential of MCP

Although the Telegram example is simple, the MCP Server philosophy is applicable to much more complex scenarios:

  • Business automation: Expose tools for creating tickets in Jira, updating records in Salesforce, or sending queries to corporate databases.
  • Hybrid integration: Enable an AI Agent to interact with legacy (outdated) systems in a standardized way.
  • Resource management: Provide controlled access to data or files (the concept of resources in MCP), such as documents in Drive or spreadsheets.

The MCP Server is not just an integration pattern; it is an essential bridge that transforms language models from simple text engines into active agents capable of performing meaningful actions in the digital world.


Using the newly created MCP Server

Usage in VS Code

Create a file called mcp.json in the .vscode folder and edit it as follows:
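
For example (a sketch: the server name telegram-notes, the node command, the build output path, and the placeholder values are assumptions; adjust them to your project):

```json
{
  "servers": {
    "telegram-notes": {
      "type": "stdio",
      "command": "node",
      "args": ["build/index.js"],
      "env": {
        "TELEGRAM_TOKEN": "<your-bot-token>",
        "TELEGRAM_PERSONAL_CHAT_ID": "<your-chat-id>"
      }
    }
  }
}
```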

Usage in Claude Desktop

Edit the claude_desktop_config.json file as follows:
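
For example (again a sketch with assumed names and paths; Claude Desktop expects the entry under the mcpServers key):

```json
{
  "mcpServers": {
    "telegram-notes": {
      "command": "node",
      "args": ["build/index.js"],
      "env": {
        "TELEGRAM_TOKEN": "<your-bot-token>",
        "TELEGRAM_PERSONAL_CHAT_ID": "<your-chat-id>"
      }
    }
  }
}
```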


Useful Resources


Main Author: Daniel Zotti, Team Leader and Frontend Tech Leader @ Bitrock
