Solace Community - Latest posts https://community.solace.com Cannot move application to another application domain due to error message in Designer Hi Stefan,

Moving applications is independent of the number of application versions or their states. All application versions in all states are moved to the target application domain.

Would you be able to provide more details and screenshots?

Do you have editor permissions in the target application domain?

Joseph

https://community.solace.com/t/cannot-move-application-to-another-application-domain-due-to-error-message-in-designer/4797#post_3 Wed, 18 Mar 2026 16:55:29 +0000 community.solace.com-post-14352
Cannot move application to another application domain due to error message in Designer Hi all,

I’m using the ‘Move Application’ action from the application listing of an application domain to move an application to another application domain. I get the error ‘Unable to complete the move operation as the selected applications have associated application versions’. I have removed all but the ‘Released’ version; deleting that only remaining version would delete the application itself, so that can’t be the solution. I also tried setting the application to ‘Draft’, ‘Retired’, and ‘Deprecated’ status, but that makes no difference to the error.

What could be the problem?

Thanks for your insights.

Stefan

https://community.solace.com/t/cannot-move-application-to-another-application-domain-due-to-error-message-in-designer/4797#post_1 Wed, 18 Mar 2026 07:16:50 +0000 community.solace.com-post-14350
Where to set the maximum bind count for a client profile via the management web portal OK, perhaps it can be set via the C# API at queue creation time? I control access to our PubSub+ broker via a NuGet package that abstracts it away, so I can make sure that all queues are configured a certain way if I can do it via C# code.

What I definitely don’t want is to have to check in every few minutes just in case a queue has been created and whack it with an admin stick. I also don’t particularly like the idea of creating a “fake” queue to act as a template.

That then takes me back to the idea of a specific template (without a “fake” queue) that I posed in the screenshots in my opening post.

But the management portal UI element that does have a dropdown for template selection cannot be edited (which raises the question: what is the point of this screen in the management portal?), and the UI element that can be edited does not supply a dropdown; it’s just free text. The fact that one has a dropdown and the other does not makes me suspect that the free-text one won’t do what I hope it does.

It’s also not clear to me whether the template setting called “maximum consumer count” is or is not the same thing as max bind count. Semantically it “seems” the same, but then why wouldn’t it just be called the same thing? So maybe not.

https://community.solace.com/t/where-to-set-the-maximum-bind-count-for-a-client-proifle-via-the-management-web-portal/4791#post_5 Wed, 18 Mar 2026 01:20:39 +0000 community.solace.com-post-14349
Creating agents in Solace Agent Mesh Let’s talk about the different ways to create agents in Solace Agent Mesh!

What’s your favourite method to create agents in Solace Agent Mesh? Comment below! :backhand_index_pointing_down:

Agents are the core intelligence units in Solace Agent Mesh, serving as specialized processing components that perform specific tasks and provide domain-specific knowledge or capabilities. They are built around the ADK and leverage the A2A protocol for standardized communication across the mesh.

Each agent is equipped with tools that enable it to accomplish user requests, from querying databases and calling external APIs to performing data analysis and generating visualizations. What makes Agent Mesh powerful is its flexibility: you can create agents in multiple ways depending on your use case, technical expertise, and deployment requirements.

For more information on agents and their anatomy, check out this blog post: Anatomy of Agents in Solace Agent Mesh

Regardless of which process you take to create an agent in Solace Agent Mesh, all agents share the same fundamental configuration components. Understanding these key elements will help you design effective agents:

  • Agent identity: name, description, and agent card that defines the agent’s capabilities
  • LLM model configuration: which language model the agent uses
  • Tools: built-in tools, custom Python functions, or MCP integrations
  • Instructions: the system prompt that defines the agent’s behavior and personality
  • Lifecycle functions: optional initialization and cleanup functions for resource management
  • Services: session management and artifact storage configurations

TL;DR: Summary Table

The following table summarizes the different ways to create agents in Solace Agent Mesh:

Method | Distribution | Deployment | Best Use Case
1. CLI Standalone Agent | Project-only | Self-managed | Quick prototyping, project-specific needs
2. CLI Plugin (Create) | Python package | Self-managed | Reusable agents, team sharing
3. CLI Plugin (Add from Repository) | Pre-built | Self-managed | Leverage existing community agents
4. Enterprise Agent Builder | Kubernetes | Automated | Enterprise deployments, non-technical users
5. A2A Agent | Remote HTTPS | Hybrid | Third-party integration, legacy systems

1. CLI: Scaffold Standalone Agent

The Solace Agent Mesh CLI provides a quick way to scaffold a new agent directly in your project using the sam add agent command.

sam add agent my-agent

You can also launch a guided browser-based interface by adding the --gui flag:

sam add agent my-agent --gui

This interface is particularly useful for developers who prefer guided forms over manual YAML editing. The browser interface opens at http://localhost:5002/?config_mode=addAgent and guides you through the agent creation process with forms for configuring the agent’s name, description, instructions, tools, and other settings. The UI generates the necessary configuration files and Python scaffolding based on your inputs.

This process creates the following structure in your project:

  • A YAML configuration file in your project’s configs/agents/ directory
  • Configuration that references shared_config.yaml for common settings like broker connection and model configurations

The CLI handles the scaffolding, allowing you to focus on implementing your agent’s specific logic and tools. After creation, you can customize the generated files to add custom tools, modify the system prompt, or adjust the agent’s behavior.

Best for: Quick prototyping, project-specific agents, and simple use cases.

For more details on understanding the core Agent architecture see the Creating Agents documentation.


2. CLI Plugin: Create Your Own with Custom Tools

If you need to build agents with custom tools that can be shared across multiple projects or with the community, creating an agent as a plugin is the recommended approach. Plugins are packaged as standard Python packages that can be installed using pip or other package managers.

To create a new agent plugin, use the sam plugin create command:

sam plugin create my-hello-agent --type agent

This creates a complete Python package structure with:

  • config.yaml configuration file
  • src/ directory containing your Python modules (tools and lifecycle functions)
  • pyproject.toml for dependency management and versioning
  • Standard Python project files

After developing your agent, you build it into a distributable wheel file:

sam plugin build

The resulting wheel file can be shared with others, published to PyPI, or distributed through a Git repository. This approach promotes code reuse and follows software engineering best practices for versioning and dependency management.

Best for: Reusable agents across projects, team and community sharing, enterprise asset libraries where agents are treated as standardized organizational assets.

To understand the difference between standalone agents and plugins in depth, check out the Plugins documentation.


3. CLI Plugin: Add from Repository

Instead of building agents from scratch, you can leverage pre-built agents (with custom tools) from the community or official Solace Core Plugins. The sam plugin add command installs agents and makes them available in your project.

sam plugin add <component-name> --plugin <plugin-name>

The --plugin flag accepts multiple sources:

  • Official core plugins: from the Solace Agent Mesh Core Plugins repository by name
  • Local path: Point to a plugin directory or wheel file on your filesystem
  • Git repository URL: Install directly from GitHub or other Git hosting services using the git+https://github.com/{user}/{repo}#subdirectory={dir_to_pyproject.toml} notation

Example using a Git repository:

sam plugin add my_ip --plugin git+https://github.com/solacecommunity/solace-agent-mesh-plugins#subdirectory=find-my-ip

If you only pass the plugin name without a source, the CLI first looks in the .sam directory for locally cached plugins, then checks the official core plugins repository.
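The lookup order described above can be sketched in Python. This is a hedged illustration of the documented behavior, not the CLI's actual implementation; the cache path and the return-value format are assumptions made for the example:

```python
from pathlib import Path

def resolve_plugin_source(name: str, sam_cache: Path = Path(".sam")) -> str:
    """Sketch of the documented lookup order: a locally cached plugin in the
    .sam directory wins; otherwise fall back to the official core plugins repo."""
    cached = sam_cache / name
    if cached.exists():
        return f"local:{cached}"       # locally cached plugin found
    return f"core-plugins:{name}"      # resolve from the official repository

# With no local cache entry, resolution falls through to the core repo
print(resolve_plugin_source("find-my-ip", Path("/no-such-cache")))
```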

Best for: Leveraging community-built or official agents without creating from scratch, accelerating development by reusing existing capabilities, and standardizing on proven agent implementations.


4. Enterprise Agent Builder

Solace Agent Mesh Enterprise provides a web-based Agent Builder with advanced features for creating, configuring, and deploying agents. This tool is designed for enterprise environments where agents need to be deployed to Kubernetes clusters.

Business users can build their own agents without writing a line of code, while developers can code more sophisticated agents.

The Enterprise Agent Builder offers two creation paths:

AI-Assisted Creation

Describe what you want your agent to do in natural language, and the AI assistant suggests initial configuration values including:

  • Agent name and description
  • System instructions that define agent behavior
  • Recommended toolsets (Artifact Management, Data Analysis, Web)
  • Suggested connectors for databases and APIs

You can review and modify all AI-generated suggestions before finalizing the agent.

Manual Creation

Enter all configuration details yourself through structured forms that guide you through:

  • Agent details including name and description
  • Instructions that serve as the system prompt
  • Toolset selection from available built-in capabilities
  • Connector configuration for external data sources

Agents created through the Enterprise Agent Builder are deployed using the Agent Deployer in Kubernetes environments, providing automated lifecycle management and scalability.

Best for: Non-technical users who need to create agents without writing code, enterprise deployments with Kubernetes and Helm infrastructure, rapid visual configuration with optional AI assistance, and teams that require governance and standardized deployment processes.

For more information, see the Agent Builder documentation.


5. Connect External Agents via A2A Proxy

If you have existing agents that run on separate infrastructure or want to integrate third-party A2A-compliant agents, you can connect them to your mesh using a proxy. The proxy acts as a protocol bridge, translating between A2A over HTTPS (used by external agents) and A2A over Solace event mesh (used by agents in your deployment).

Enterprise UI Method

The Enterprise edition provides a guided wizard for connecting external agents. Navigate to the “Connect External Agent” feature and configure:

  • Agent URL and agent card location
  • Authentication settings (Bearer token, API key, OAuth 2.0)
  • Custom HTTP headers
  • Agent skills and capabilities

The wizard validates the configuration by fetching the agent card and guides you through deployment.

YAML Proxy Method

For manual configuration or advanced use cases, create a proxy configuration file:

apps:
  - name: my-a2a-proxy
    app_module: solace_agent_mesh.agent.proxies.a2a.app
    app_config:
      namespace: "myorg/production"
      proxied_agents:
        - name: "external-agent"
          url: "https://api.example.com/agent"
          authentication:
            type: "static_bearer"
            token: "${TOKEN}"

A single proxy can manage multiple external agents, each with its own URL, authentication, and timeout settings. The proxy handles agent discovery, authentication management, artifact flow, and task lifecycle management automatically.

Best for: Third-party agent integration, hybrid cloud architectures where agents run in different environments, legacy system integration, gradual migration scenarios, and maintaining service isolation while enabling collaboration.

For detailed proxy configuration options, see the Proxies documentation and Connect External Agents guide.

Note on connecting to external MCP tools

Every agent you create in Solace Agent Mesh has the capability to expose MCP tools. As mentioned in this community post MCP + Solace Agent Mesh: The Deep Dive, integrating MCP tools is as simple as adding a tool block with an MCP definition in the agent configuration.

Choosing the Right Approach

The best method for creating agents depends on your specific needs:

  • For rapid prototyping: Use CLI standalone agents or the Community Edition UI
  • For reusable components: Create agent plugins that can be shared and versioned
  • For leveraging existing work: Install agents from the core plugins repository or community
  • For enterprise deployments: Use the Enterprise Agent Builder with Kubernetes deployment
  • For integration scenarios: Connect external agents using proxies
  • For maximum control: Write YAML configurations manually

Each approach integrates seamlessly with the same Agent Mesh runtime, so you can mix and match methods based on your requirements. Start with the approach that matches your current needs, and evolve your agents as your use cases become more sophisticated.

For comprehensive guides and tutorials, visit the Solace Agent Mesh documentation.

If you made it this far, thanks for reading! Please comment on this post if you found this helpful and which method is your favourite.

Cheers,
Tamimi

https://community.solace.com/t/creating-agents-in-solace-agent-mesh/4794#post_1 Mon, 16 Mar 2026 20:18:43 +0000 community.solace.com-post-14344
Where to set the maximum bind count for a client profile via the management web portal Hi allmhhuran and allmhhuranagain! :wink:

I don’t fully trust AI either, but I think the AI answer is mostly correct in that there’s no global setting to achieve what you are looking for. The closest you can get is to set a queue to exclusive and set the max bind count to 1.
Now, if you want to simplify the administration of this, you can create a queue with that setting that you then use as template. E.g. create a queue with that setting and call it myIFlowTemplateQueue then refer to that queue in the queue template section when you create a new queue and it should copy those settings from the other queue. At least that’s my understanding.
Kind regards,
Christian

https://community.solace.com/t/where-to-set-the-maximum-bind-count-for-a-client-proifle-via-the-management-web-portal/4791#post_4 Mon, 16 Mar 2026 17:20:52 +0000 community.solace.com-post-14343
Copy Message Spool from One Broker to other? Thanks, I managed to do it with a consumer/publisher application.

https://community.solace.com/t/copy-message-spool-from-one-broker-to-other/4786#post_6 Sat, 14 Mar 2026 07:07:50 +0000 community.solace.com-post-14341
Why does my primary container restart often? I set up an HA triplet on Docker on AWS EC2 ECS-optimized instances, but noticed that HA is getting broken quite a few times.

In redundancy detail, it shows ‘cluster voters not ready’ as the Last Failure reason.

I used host networking and hostnames for all 3 nodes in the redundancy setup in the docker run command. Most of the time redundancy is Up, with config-sync also Up. This setup is not live yet; it is still in the testing phase.

The system logs say standby broker not reachable around the time of restart.

All the required ports are also allowed in AWS Security Groups.

Can someone help me with this?

https://community.solace.com/t/why-does-my-primary-container-restart-often/4793#post_1 Sat, 14 Mar 2026 07:07:08 +0000 community.solace.com-post-14339
How do I configure DR between single nodes? Hi,

I’m jumping in from the technical support team to help you troubleshoot.

Based on the “Oper Status” you’ve shared, there appears to be a problem with Replication config-sync. Here are some possible causes to investigate:

Possible Causes:

  • The replication message VPN bridge is down

    show bridge #CFGSYNC_REPLICATION_BRIDGE detail

    Check if the #CFGSYNC_REPLICATION_BRIDGE is up and operational.

  • The queue state is ‘unbound’
    Verify that #CFGSYNC_REPLICATION_QUEUE is properly bound.

  • The SMF service is not enabled
    Ensure the SMF (Solace Message Format) service is enabled on your broker.

  • Connect-via settings are misconfigured
    Review your connect-via address, port, and SSL settings.

  • TLS-PSK authentication configuration issue

    • Important: If you’re using the newer TLS-PSK authentication key, you must have mate connect-via configured for SSL using port 55443, even if your replication bridges use plaintext.

Can you also confirm which documentation you followed to configure DR? I assume you used the official guide: Steps for Replication Configuration

Please verify this and let us know what you find.

Thank you.

https://community.solace.com/t/how-do-i-configure-dr-between-single-nodes/4596#post_7 Fri, 13 Mar 2026 15:35:15 +0000 community.solace.com-post-14338
Solace Agent Mesh — What's New Since GA

Hello Solace Community,

Since our General Availability announcement in November, we’ve been accelerating our release cadence and shipping new capabilities regularly. Here’s a roundup of everything that’s landed since then.

New Connectors for Agent Builder

We’ve significantly expanded the data sources you can connect no-code agents to through Agent Builder:

  • Oracle Database — Connect agents to Oracle databases with full SQL integration. Docs

  • Microsoft SQL Server — Connect agents to MSSQL databases. Docs

  • Remote MCP — Connect agents to remote Model Context Protocol (MCP) servers hosted by SaaS vendors or internally, with OAuth authentication support. Docs

  • OpenAPI — Upload an OpenAPI 3.x spec to automatically generate REST API integrations with OAuth, API key, and bearer token support. No manual endpoint configuration required. Docs

  • Amazon Bedrock Knowledge Bases — Ground agent responses in company-specific documentation stored in S3 and other sources via RAG. Docs

Gateway Management

New UIs make it easier to configure how agents interact with the outside world:

  • Gateway Management — Unified interface for managing gateways in the Enterprise UI

  • Event Mesh Gateway — Trigger agentic workflows from incoming messages on configurable topics, with success and error routing. Docs

  • Slack Gateway — Easily add a Slack gateway to your Agent Mesh instance. Docs

Remote A2A Agent Registration

Register and manage remote A2A protocol agents through Agent Mesh Manager. Deploy A2A proxies, configure authentication (OAuth 2.0, API Key, Basic Auth), and make external agents discoverable alongside native agents in the catalog. Docs

Collaboration and Security

  • Project Sharing — Teams can collaborate on shared knowledge bases and prompts while maintaining individual conversation privacy and access controls. Docs

  • Dynamic Role Mapping — Map identity provider claims to SAM roles automatically, eliminating manual role assignment for enterprise SSO deployments. Helm samples

Community Edition Highlights (Experimental)

  • Workflows — Define prescriptive, multi-step agent workflows through YAML configuration. Orchestrate multiple agents with sequential processes where each step can reference the original request and previous results. Docs

  • Text-to-Speech and Speech-to-Text — Voice interaction capabilities for agents. Docs

  • Agent Mesh as MCP Server — Expose agentic workflows and agent capabilities as standardized MCP tools to external AI applications. Docs

Resources

We’d love to hear what you’re building with Agent Mesh. Questions, feedback, or feature requests — drop them in the replies below.

https://community.solace.com/t/solace-agent-mesh-whats-new-since-ga/4792#post_1 Thu, 12 Mar 2026 21:43:19 +0000 community.solace.com-post-14336
How do I configure DR between single nodes? Hi @marc,

Thank you for the response, I really appreciate it. No worries at all. I’m glad you’re helping get this in front of someone with more expertise on the DR configuration.

In the meantime, I’ll continue debugging the issue from my side to see if I can identify the root cause and a possible solution.

https://community.solace.com/t/how-do-i-configure-dr-between-single-nodes/4596#post_6 Thu, 12 Mar 2026 17:00:43 +0000 community.solace.com-post-14335
How do I configure DR between single nodes? Hi @Tanendra,

Glad to see you trying out the enterprise eval. Sorry you’re running into issues. To be honest I’m not an expert on our DR configuration. I’ll get this in front of someone who is.

https://community.solace.com/t/how-do-i-configure-dr-between-single-nodes/4596#post_5 Thu, 12 Mar 2026 16:39:46 +0000 community.solace.com-post-14334
How do I configure DR between single nodes? Hello @Marc,

I am trying to set up the same configuration and am encountering the same issue. I followed the same process used for the PubSub+ Enterprise Evaluation, but I am still facing this problem.

Could you please suggest a possible solution and what the underlying cause might be?

Additionally, I would like to confirm whether redundancy needs to be enabled in this setup for config-sync to work with replication.

https://community.solace.com/t/how-do-i-configure-dr-between-single-nodes/4596#post_4 Thu, 12 Mar 2026 12:46:54 +0000 community.solace.com-post-14333
Spring Cloud Stream Solace binder: consumer fails deserializing inbound headers after upgrade — how to disable/skip header deserialization? @MayurP We excluded the nativeHeaders header (which contained a MultiValueMapAdapter) from the message; it was causing the serialization issue. After that, the consumer reads messages correctly.

Thanks for the suggestion

https://community.solace.com/t/spring-cloud-stream-solace-binder-consumer-fails-deserializing-inbound-headers-after-upgrade-how-to-disable-skip-header-deserialization/4787#post_6 Thu, 12 Mar 2026 07:30:20 +0000 community.solace.com-post-14332
Where to set the maximum bind count for a client profile via the management web portal So this is still the OP. No idea what is going on with the new login process, but I seem to have had to re-register, and I can’t switch accounts after logging out to try my original account again (at least, I guess, unless I go hunting for Solace cookies, because presumably the Solace website is checking for cookies and then automatically using the user it finds, without any option not to do that. Maybe rethink this design).

I don’t really trust the AI response, or at least it doesn’t seem all that helpful. It seems to be telling me that I can’t do what I want to do, but that seems unlikely. It says I can use a global template, but that the global template is not setting what I want to set. So… what would it be setting?

https://community.solace.com/t/where-to-set-the-maximum-bind-count-for-a-client-proifle-via-the-management-web-portal/4791#post_3 Wed, 11 Mar 2026 23:55:50 +0000 community.solace.com-post-14331
Introducing the Solace Web Client – A New Community-Built Tool! Hi, thank you for the client, which we use for troubleshooting within Kubernetes. We already set up the presets and it’s working, but is there a possibility to deactivate the SSL Verify button by default? I tried to set “SslVerify”: false in the presets.json but it does not work. By the way, there is a typo in the GitHub description. You wrote: “To use presets,mount your presets.json into /app/presets/preset.json” but the file name is presets.json.

https://community.solace.com/t/introducing-the-solace-web-client-a-new-community-built-tool/4451#post_2 Wed, 11 Mar 2026 08:36:04 +0000 community.solace.com-post-14330
Where to set the maximum bind count for a client profile via the management web portal Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:

Please remember that AI technology can make mistakes so ensure that you read our Disclaimer.

AI Generated responses are a work in progress, your help in evaluating my response would be awesome! If my response makes sense, please mark it as a solution or give a :+1:. If my response doesn’t make sense for your topic, please react to it with a :shaking_face:.

The max-bind-count setting is configured per queue on the Solace Event Broker — there is no global, Message VPN-level, or client profile-level setting that automatically applies a maximum bind count of 1 to all queues. You must set it individually on each queue.

To enforce a maximum of 1 bind on a specific exclusive queue, you configure max-bind-count 1 within the queue’s configuration context. When that limit is reached, any subsequent bind attempt by another client will be rejected with an error (503 “Max clients exceeded for queue”).

Note the important distinction for exclusive queues: even with the default max-bind-count of 1,000, only one consumer is active at a time — additional consumers bind but remain inactive until the active consumer unbinds. If you want to prevent any additional consumers from binding at all (not just from being active), setting max-bind-count 1 is the right approach.
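For reference, max-bind-count is a per-queue setting that can also be applied over the SEMP v2 management API. The sketch below only builds the request URL and JSON body for such a call; the broker address, Message VPN, and queue name are placeholders, and actually applying it would require sending an authenticated HTTP PATCH:

```python
import json
from urllib.parse import quote

def max_bind_request(base_url: str, vpn: str, queue: str) -> tuple[str, str]:
    """Build a SEMP v2 PATCH target and body that makes a queue exclusive
    with at most one bound consumer (max-bind-count 1)."""
    url = (f"{base_url}/SEMP/v2/config/msgVpns/{quote(vpn, safe='')}"
           f"/queues/{quote(queue, safe='')}")  # queue names may contain '/'
    body = json.dumps({"accessType": "exclusive", "maxBindCount": 1})
    return url, body

url, body = max_bind_request("http://localhost:8080", "default", "q/orders")
print(url)
print(body)
```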

Regarding a global/template-based approach: Queue templates exist and can set initial attribute values (including bind count thresholds for event generation) for client-created queues, but the documentation does not indicate that max-bind-count itself can be set via a queue template, Message VPN configuration, or client profile to apply universally across all queues.

For complete details on configuring max-bind-count and queue access types, see:

https://community.solace.com/t/where-to-set-the-maximum-bind-count-for-a-client-proifle-via-the-management-web-portal/4791#post_2 Wed, 11 Mar 2026 08:17:21 +0000 community.solace.com-post-14329
Where to set the maximum bind count for a client profile via the management web portal On my VPN, I want all clients (or better yet, all clients using a particular client profile) to receive an error if a client tries to subscribe to the same queue more than once simultaneously.

In other words I want the queue access type to be Exclusive, but I don’t simply want the first bound IFlow to succeed with the others silently doing nothing. I explicitly want any subsequent attempts to connect simultaneously to the same queue to raise an error (exception) in the client code.

From what I understand, I can accomplish this by setting the “max-bind-count” to 1. I don’t want to play whack-a-mole, changing each queue after it is created. I want this to be set for any subscriber queue on the VPN.

But I can’t find “max-bind-count” (using that term, specifically), anywhere in the broker management portal.

Here are some settings that all “seem like they might be” that…

Under access control > client profiles > settings (show advanced settings) :

“Maximum Endpoints Created Per Client Username” … don’t think this is it.

“Maximum Outgoing Flows Per Client” - well, sort of this? But I want a client to have more than one flow if they subscribe to more than one topic. I just don’t want to allow multiple simultaneous connections all subscribed to one topic (i.e., multiple connections by the same client username consuming a single queue at the same time). So, maybe not this?

Under queues > Templates, I can create a new template and set “Maximum Consumer Count”. That seems like what I want. But then if I go over to client profiles, select my profile, and try to change this setting I can’t. It’s readonly.

OK, so then I went to the “other” management portal (I don’t really understand why there are two versions of this). And I found this screen, with this setting:

But that seems like it’s total flows per client, not flows per client per queue. So, not this?

In that same “dialog” window there’s also this:

But it’s just free text. It’s not a dropdown which shows me the template I created in the “other” management portal. So I’m not sure this is correct either.

TL;DR: It is very unclear to me how I am meant to set a global “don’t allow more than one client to read from the same queue simultaneously, and raise an error if they try” setting.

https://community.solace.com/t/where-to-set-the-maximum-bind-count-for-a-client-proifle-via-the-management-web-portal/4791#post_1 Wed, 11 Mar 2026 08:16:43 +0000 community.solace.com-post-14328
Copy Message Spool from One Broker to other? Yeah, I see. If that’s the case, your best solution is browsing and re-publishing to your target endpoint.

https://community.solace.com/t/copy-message-spool-from-one-broker-to-other/4786#post_5 Mon, 09 Mar 2026 14:30:25 +0000 community.solace.com-post-14327
Tip: How does Agent to Agent communication work in Solace Agent Mesh? Solace Agent Mesh is an event-driven framework that creates distributed ecosystems of collaborative AI agents. At the heart of this collaboration is inter-agent communication - a powerful mechanism that enables agents to discover, communicate with, and delegate tasks to each other while maintaining strict security and access control boundaries.

In this post, we’ll explore how inter-agent communication works in Solace Agent Mesh, the role of the A2A protocol, and how you can use allow_list and deny_list configurations to control agent collaboration patterns.

What is Inter-Agent Communication?

Inter-agent communication is the mechanism that allows agents within the mesh to collaborate by delegating tasks to specialized peer agents. Rather than building monolithic agents that try to do everything, you can create focused, specialized agents that work together to solve complex problems.

For example, imagine you have:

  • A Data Analyst Agent specialized in querying databases
  • A Report Generator Agent that creates formatted reports
  • An Email Agent that handles notifications

An orchestrator agent can receive a user request like “analyze last quarter’s sales and email me a report,” then delegate the data analysis to the Data Analyst Agent, pass those results to the Report Generator Agent, and finally use the Email Agent to send the completed report.

Solace Event Broker and Smart Topics

All inter-agent communication flows through the Solace Event Broker, which serves as the central messaging fabric. The broker uses a hierarchical topic structure to route A2A protocol messages between components with precision and efficiency.

Topic-Based Routing

The A2A protocol leverages Solace’s powerful topic-based pub/sub architecture. Each type of communication uses a specific topic pattern:

Purpose | Topic Pattern
Agent Discovery | {namespace}/a2a/v1/discovery/agentcards
Task Requests | {namespace}/a2a/v1/agent/request/{target_agent_name}
Status Updates | {namespace}/a2a/v1/gateway/status/{gateway_id}/{task_id}
Final Responses | {namespace}/a2a/v1/gateway/response/{gateway_id}/{task_id}
Peer Delegation | {namespace}/a2a/v1/agent/status/{delegating_agent_name}/{sub_task_id}

This hierarchical structure provides several benefits: a decoupled architecture, since agents don’t need to know each other’s network or geographical location; dynamic routing based on smart topic subscriptions; and the scalability of an event-driven architecture.
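As a concrete illustration, a task-request topic can be assembled from the pattern in the table; the namespace and agent name below are placeholder values:

```python
def task_request_topic(namespace: str, target_agent: str) -> str:
    """Build a task-request topic following the pattern shown in the table above."""
    return f"{namespace}/a2a/v1/agent/request/{target_agent}"

# e.g. delegating a task to a hypothetical "data-analyst" agent
print(task_request_topic("myorg/production", "data-analyst"))
# → myorg/production/a2a/v1/agent/request/data-analyst
```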

Configuring Inter-Agent Communication

The inter_agent_communication configuration section in your agent’s YAML file controls which peer agents an agent can discover and communicate with. This is a critical security and access control mechanism.

Configuration Structure

Here’s the basic structure:

inter_agent_communication:
  allow_list: ["*"]           # Patterns of agents to ALLOW
  deny_list: []                # Patterns of agents to DENY
  request_timeout_seconds: 30  # Timeout for peer requests

Configuration Parameters

  • allow_list: A list of agent name patterns (supports wildcards) that this agent is permitted to discover and delegate tasks to. Default is ["*"] (all agents allowed).
  • deny_list: A list of agent name patterns to explicitly block, even if they match patterns in the allow_list. Default is [] (empty list).
  • request_timeout_seconds: The timeout in seconds for peer agent requests. If a delegated agent doesn’t respond within this time, the requesting agent will be notified of the timeout.

Pattern Matching with Wildcards

Both allow_list and deny_list support wildcard pattern matching using the fnmatch syntax:

  • * matches zero or more characters
  • ? matches exactly one character
  • [abc] matches any character in the brackets
  • [!abc] matches any character NOT in the brackets

This allows for flexible and powerful filtering rules.
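Because the syntax follows Python’s fnmatch module, the four rules above can be checked directly. This sketch uses fnmatchcase to avoid OS-dependent case folding; the agent names are made up for illustration:

```python
from fnmatch import fnmatchcase

# * matches zero or more characters
print(fnmatchcase("DataQueryAgent", "Data*"))   # True
# ? matches exactly one character
print(fnmatchcase("AgentA", "Agent?"))          # True
print(fnmatchcase("AgentAB", "Agent?"))         # False
# [123] matches any single character in the brackets
print(fnmatchcase("node1", "node[123]"))        # True
# [!123] matches any single character NOT in the brackets
print(fnmatchcase("node4", "node[!123]"))       # True
```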

How Allow and Deny Lists Work Together

Understanding the precedence rules is crucial for configuring inter-agent communication correctly.

Evaluation Logic

The system evaluates agent access in two steps:

  1. Allow List Check: First, the system checks if the agent name matches ANY pattern in the allow_list. If there’s a match, the agent is initially allowed.
  2. Deny List Check: If the agent was allowed in step 1, the system then checks if the agent name matches ANY pattern in the deny_list. If there’s a match, the agent is denied.

Critical Rule: deny_list always takes precedence over allow_list!
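The two-step evaluation can be sketched as follows. Note that is_peer_allowed is a hypothetical helper written to mirror the logic described above, not the actual Solace Agent Mesh implementation:

```python
from fnmatch import fnmatchcase

def is_peer_allowed(agent_name: str, allow_list: list[str], deny_list: list[str]) -> bool:
    """Two-step check: allow_list first, then deny_list overrides."""
    # Step 1: the name must match at least one allow pattern
    if not any(fnmatchcase(agent_name, p) for p in allow_list):
        return False
    # Step 2: deny_list always takes precedence over allow_list
    return not any(fnmatchcase(agent_name, p) for p in deny_list)

print(is_peer_allowed("CriticalAgent", ["*"], ["CriticalAgent"]))  # False
print(is_peer_allowed("DataQuery", ["Data*"], ["DataAdmin"]))      # True
print(is_peer_allowed("ReportAgent", [], []))                      # False (empty allow_list)
```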

Practical Examples

Let’s explore various configuration scenarios to understand how this works in practice.

Example 1: Allow All Agents (Default Behavior)

inter_agent_communication:
  allow_list: ["*"]
  deny_list: []
  request_timeout_seconds: 30

Result: This agent can communicate with all discovered agents in the mesh.

Use Case: Ideal for orchestrator agents that need to coordinate across the entire mesh.


Example 2: Allow All Except Specific Agents

inter_agent_communication:
  allow_list: ["*"]
  deny_list: ["AdminAgent", "BillingAgent"]
  request_timeout_seconds: 30

Result: This agent can communicate with all agents EXCEPT “AdminAgent” and “BillingAgent”.

Use Case: General-purpose agents that should avoid administrative or sensitive systems.


Example 3: Allow Only Specific Agents

inter_agent_communication:
  allow_list: ["DataAgent", "AnalysisAgent", "ReportAgent"]
  deny_list: []
  request_timeout_seconds: 30

Result: This agent can ONLY communicate with “DataAgent”, “AnalysisAgent”, and “ReportAgent”.

Use Case: Specialized agents with tightly controlled collaboration patterns, such as workflow-specific agents.


Example 4: Wildcard Patterns with Exceptions

inter_agent_communication:
  allow_list: ["Data*"]
  deny_list: ["DataAdmin", "DataArchive"]
  request_timeout_seconds: 30

Result: This agent can communicate with all agents whose names start with “Data” (e.g., “DataQuery”, “DataTransform”, “DataValidator”) EXCEPT “DataAdmin” and “DataArchive”.

Use Case: Domain-specific collaboration where you want to allow access to a family of related agents while blocking sensitive subsets.


Example 5: Understanding Deny List Precedence

inter_agent_communication:
  allow_list: ["*"]
  deny_list: ["CriticalAgent"]
  request_timeout_seconds: 30

Question: Can this agent communicate with “CriticalAgent”?

Answer: No. Even though ["*"] in the allow_list matches all agents (including “CriticalAgent”), the deny_list takes precedence and blocks “CriticalAgent” specifically.

This demonstrates the key principle: deny_list always wins.


Example 6: Multiple Wildcard Patterns

inter_agent_communication:
  allow_list: ["Query*", "Analysis*", "Report*"]
  deny_list: ["*Admin", "*Test"]
  request_timeout_seconds: 60

Result: This agent can communicate with agents matching “Query*”, “Analysis*”, or “Report*” patterns, but NOT with any agents ending in “Admin” or “Test”.

Examples:

  • “QueryEngine” - Allowed (matches “Query*”, doesn’t match any deny pattern)
  • “AnalysisProcessor” - Allowed (matches “Analysis*”)
  • “QueryAdmin” - Denied (matches “Query*” BUT also matches “*Admin”)
  • “ReportTest” - Denied (matches “Report*” BUT also matches “*Test”)

Use Case: Complex enterprise environments with naming conventions that distinguish production agents from administrative or testing agents.
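The four outcomes listed above can be reproduced with fnmatch-style matching. This is a self-contained sketch of the evaluation, not the framework’s own code:

```python
from fnmatch import fnmatchcase

allow = ["Query*", "Analysis*", "Report*"]
deny = ["*Admin", "*Test"]

def allowed(name: str) -> bool:
    # must match some allow pattern AND no deny pattern
    return (any(fnmatchcase(name, p) for p in allow)
            and not any(fnmatchcase(name, p) for p in deny))

for name in ["QueryEngine", "AnalysisProcessor", "QueryAdmin", "ReportTest"]:
    print(name, allowed(name))
# QueryEngine True
# AnalysisProcessor True
# QueryAdmin False
# ReportTest False
```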


Example 7: Disable All Inter-Agent Communication

inter_agent_communication:
  allow_list: []
  deny_list: []
  request_timeout_seconds: 30

Result: This agent cannot communicate with ANY peer agents. An empty allow_list means no agents pass the first check.

Use Case: Isolated agents that only respond to gateway requests and never delegate to other agents, such as simple lookup agents or data retrieval agents.


Example 8: Environment-Based Patterns

inter_agent_communication:
  allow_list: ["prod-*"]
  deny_list: ["*-deprecated"]
  request_timeout_seconds: 30

Result: This agent can only communicate with agents whose names start with “prod-” (production agents), excluding any that end with “-deprecated”.

Use Case: Production agents that should only interact with other production-grade agents, avoiding experimental or deprecated services.

Agent Discovery and Inter-Agent Communication

Inter-agent communication works hand-in-hand with the agent discovery mechanism. Here’s how they interact:

Agent Discovery Process

  1. Agent Card Publishing: Each agent periodically publishes an “Agent Card” (following the A2A protocol schema describing its capabilities, skills, and identity) to the discovery topic: {namespace}/a2a/v1/discovery/agentcards
  2. Subscription and Filtering: Other agents subscribe to this discovery topic. When they receive an agent card, they apply their inter_agent_communication rules:
  • Check if the agent name matches the allow_list
  • Check if the agent name matches the deny_list
  • Only if allowed, register the agent in their local peer registry
  3. Dynamic Availability: If an allowed agent stops publishing its card (due to shutdown or failure), it will eventually be removed from the peer registry based on health check timeouts
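The filtering-plus-expiry behavior described in these steps can be sketched as a small registry. PeerRegistry and its methods are hypothetical names for illustration; the real registry in Solace Agent Mesh differs:

```python
import time

class PeerRegistry:
    """Minimal sketch: register allowed peers from agent cards, expire stale ones."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._peers: dict[str, float] = {}  # agent_name -> last_seen timestamp

    def on_agent_card(self, agent_name: str, is_allowed) -> None:
        # Apply the allow/deny decision before registering the peer
        if is_allowed(agent_name):
            self._peers[agent_name] = time.time()

    def active_peers(self) -> list[str]:
        # Peers whose last card arrived within the health-check TTL
        now = time.time()
        return [n for n, seen in self._peers.items() if now - seen <= self.ttl]

reg = PeerRegistry(ttl_seconds=0.2)
reg.on_agent_card("DataAgent", lambda n: True)
reg.on_agent_card("AdminAgent", lambda n: n != "AdminAgent")  # denied, never registered
print(reg.active_peers())  # ['DataAgent']
```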

Enabling/Disabling Discovery

You can control whether an agent participates in discovery:

agent_discovery:
  enabled: true
  health_check_ttl_seconds: 60
  health_check_interval_seconds: 30

inter_agent_communication:
  allow_list: ["*"]
  deny_list: []
  request_timeout_seconds: 30

  • enabled: true: The agent subscribes to discovery messages and can communicate with peers (subject to allow/deny rules)
  • enabled: false: The agent ignores discovery messages and won’t be able to delegate to peers

Best Practices for Inter-Agent Communication Configuration

1. Use the Principle of Least Privilege

Start with restrictive configurations and explicitly allow only the agents that need to communicate:

inter_agent_communication:
  allow_list: ["DataAgent", "AnalysisAgent"]
  deny_list: []

This is more secure than allowing everything and trying to block specific agents.

2. Leverage Naming Conventions

Establish clear naming conventions for your agents and use wildcards effectively:

# All agents follow pattern: {environment}-{domain}-{function}
inter_agent_communication:
  allow_list: ["prod-analytics-*", "prod-data-*"]
  deny_list: ["*-admin", "*-deprecated"]

3. Use Deny Lists for Security Boundaries

Even with wildcard allow patterns, use deny lists to establish security boundaries:

inter_agent_communication:
  allow_list: ["*"]
  deny_list: ["FinancialAgent", "HRAgent", "AdminAgent"]

4. Document Your Communication Patterns

Add comments to your configuration files explaining the communication rules:

inter_agent_communication:
# This orchestrator needs to coordinate across all data processing agents
# but should not access administrative or billing systems
  allow_list: ["*"]
  deny_list: ["Admin*", "Billing*", "HR*"]
  request_timeout_seconds: 60  # Longer timeout for complex data operations

5. Test Communication Patterns

After configuring inter-agent communication, verify that agents can communicate as expected by checking logs and monitoring agent discovery events.

Pro Tip: Leverage the hierarchical view offered by the Solace Agent Mesh WebUI Gateway!

Multi-Tier Agent Architecture Example

Let’s say we have an e-commerce analytics platform with the following agents:

  • OrchestratorAgent: Central coordinator
  • DataQueryAgent: Queries databases
  • DataTransformAgent: Transforms and cleans data
  • AnalyticsEngine: Performs statistical analysis
  • ChartingAgent: Creates charts and graphs
  • ReportGeneratorAgent: Assembles final reports
  • NotificationAgent: Sends email notifications
  • AdminAgent: System administration tasks

Configuration Strategy

OrchestratorAgent Configuration:

inter_agent_communication:
  # Can delegate to all agents except admin
  allow_list: ["*"]
  deny_list: ["AdminAgent"]
  request_timeout_seconds: 60

DataQueryAgent Configuration:

inter_agent_communication:
  # Only communicates with DataTransformAgent for data pipelines
  allow_list: ["DataTransformAgent"]
  deny_list: []
  request_timeout_seconds: 30

AnalyticsEngine Configuration:

inter_agent_communication:
  # Needs data and can trigger visualizations
  allow_list: ["Data*", "ChartingAgent"]
  deny_list: ["AdminAgent"]
  request_timeout_seconds: 90

ReportGeneratorAgent Configuration:

inter_agent_communication:
  # Aggregates results from analytics and visualization
  allow_list: ["AnalyticsEngine", "ChartingAgent", "NotificationAgent"]
  deny_list: []
  request_timeout_seconds: 45

AdminAgent Configuration:

inter_agent_communication:
  # Can communicate with all agents for monitoring
  allow_list: ["*"]
  deny_list: []
  request_timeout_seconds: 30

NotificationAgent Configuration:

inter_agent_communication:
  # Isolated agent - only receives delegation, never initiates
  allow_list: []
  deny_list: []
  request_timeout_seconds: 15

This architecture creates clear boundaries and controlled communication patterns, ensuring agents only interact with the peers they need while maintaining security and preventing unintended cross-communication.
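Under the configurations above, the resulting delegation matrix can be checked with a short fnmatch-based sketch. The configs dictionary and can_delegate helper are hypothetical constructs that restate the YAML snippets, not part of the framework:

```python
from fnmatch import fnmatchcase

# (allow_list, deny_list) per agent, restating the configs shown above
configs = {
    "OrchestratorAgent": (["*"], ["AdminAgent"]),
    "DataQueryAgent": (["DataTransformAgent"], []),
    "AnalyticsEngine": (["Data*", "ChartingAgent"], ["AdminAgent"]),
    "ReportGeneratorAgent": (["AnalyticsEngine", "ChartingAgent", "NotificationAgent"], []),
    "AdminAgent": (["*"], []),
    "NotificationAgent": ([], []),  # isolated: empty allow_list blocks everything
}

def can_delegate(src: str, dst: str) -> bool:
    allow, deny = configs[src]
    return (any(fnmatchcase(dst, p) for p in allow)
            and not any(fnmatchcase(dst, p) for p in deny))

print(can_delegate("OrchestratorAgent", "AdminAgent"))       # False (deny wins)
print(can_delegate("DataQueryAgent", "DataTransformAgent"))  # True
print(can_delegate("NotificationAgent", "DataQueryAgent"))   # False (isolated)
```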

Last Words of Wisdom

Remember the key principles:

  1. Deny lists take precedence - use them to establish security boundaries
  2. Start restrictive - explicitly allow what’s needed rather than blocking what’s not
  3. Use wildcards wisely - leverage naming conventions for maintainable configurations
  4. Document your patterns - make communication rules clear for your team
  5. Make use of the Solace Agent Mesh WebUI Agent Visualizer!

With these tools and best practices, you can build sophisticated multi-agent systems that are both powerful and secure.


For more information about Solace Agent Mesh, visit the official Agent Mesh GitHub repository

]]>
https://community.solace.com/t/tip-how-does-agent-to-agent-communication-work-in-solace-agent-mesh/4790#post_1 Sun, 08 Mar 2026 20:32:23 +0000 community.solace.com-post-14326
Spring Cloud Stream Solace binder: consumer fails deserializing inbound headers after upgrade — how to disable/skip header deserialization? When Java serializes an object, it writes the class’s serialVersionUID into the byte stream as a fingerprint. On deserialization, Java checks that the UID in the stream matches the UID of the loaded class.
If they differ, it throws InvalidClassException.

Root Cause in your case

stream classdesc serialVersionUID = -1297985047093119690 ← what was serialized
local class serialVersionUID = 1458406687712361249 ← what’s in your classpath now

The MultiValueMapAdapter class from spring-web was serialized with one version of Spring, but your application is trying to deserialize it with a different Spring version where the serialVersionUID changed.

As you mentioned, the issue happened after upgrading the Spring Boot version on the consumer side. The serialVersionUID of the class changed between the two Spring Boot versions. If you also upgrade the Spring Boot version in the publisher app, the issue should be fixed.

Alternatively, have you tried the headerExclusions property I mentioned in my previous reply?

]]>
https://community.solace.com/t/spring-cloud-stream-solace-binder-consumer-fails-deserializing-inbound-headers-after-upgrade-how-to-disable-skip-header-deserialization/4787#post_5 Fri, 06 Mar 2026 15:16:05 +0000 community.solace.com-post-14325
File transfer LongPathTool worked well for shortening complex paths before implementing transfers

]]>
https://community.solace.com/t/file-transfer/2089#post_3 Fri, 06 Mar 2026 15:13:41 +0000 community.solace.com-post-14323
Copy Message Spool from One Broker to other? I was just wondering if we can simply copy the message spool from the file system and place it in another filesystem with the same VPN and queue configs.

Goal is to copy messages without consuming them and without touching existing disk.

]]>
https://community.solace.com/t/copy-message-spool-from-one-broker-to-other/4786#post_4 Fri, 06 Mar 2026 11:10:45 +0000 community.solace.com-post-14322
Spring Cloud Stream Solace binder: consumer fails deserializing inbound headers after upgrade — how to disable/skip header deserialization? Yes, Publisher application is Solace Spring Cloud Stream application running with below dependencies:

<solace-spring-cloud.version>2.3.0</solace-spring-cloud.version>
<spring-cloud.version>2021.0.1</spring-cloud.version>
<spring-cloud-stream.version>3.2.4</spring-cloud-stream.version>

Consumer is running with below properties:

<solace-spring-cloud.version>4.11.0</solace-spring-cloud.version>
<spring-cloud.version>2025.0.1</spring-cloud.version>

Here is the full stack when consumer is consuming a message:

2026-03-05T19:51:34.441+05:30 ERROR 43320 — [consumer-input1] o.s.integration.handler.LoggingHandler : java.lang.IllegalArgumentException: Failed to deserialize object
at org.springframework.util.SerializationUtils.deserialize(SerializationUtils.java:86)
at com.solace.spring.cloud.stream.binder.util.XMLMessageMapper.mapHeadersToSpring(XMLMessageMapper.java:464)
at com.solace.spring.cloud.stream.binder.util.XMLMessageMapper.mapToSpringInternal(XMLMessageMapper.java:346)
at com.solace.spring.cloud.stream.binder.util.XMLMessageMapper.mapToSpring(XMLMessageMapper.java:285)
at com.solace.spring.cloud.stream.binder.util.XMLMessageMapper.mapToSpring(XMLMessageMapper.java:280)
at com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.createOneMessage(InboundXMLMessageListener.java:234)
at com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.lambda$processMessage$1(InboundXMLMessageListener.java:161)
at com.solace.spring.cloud.stream.binder.inbound.BasicInboundXMLMessageListener.handleMessage(BasicInboundXMLMessageListener.java:60)
at com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.processMessage(InboundXMLMessageListener.java:161)
at com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.receive(InboundXMLMessageListener.java:147)
at com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.run(InboundXMLMessageListener.java:96)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:317)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
at — Async.Stack.Trace — (captured by IntelliJ IDEA debugger)
at java.base/java.util.concurrent.FutureTask.(FutureTask.java:151)
at java.base/java.util.concurrent.AbstractExecutorService.newTaskFor(AbstractExecutorService.java:98)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:122)
at com.solace.spring.cloud.stream.binder.inbound.JCSMPInboundChannelAdapter.lambda$doStart$0(JCSMPInboundChannelAdapter.java:205)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at com.solace.spring.cloud.stream.binder.inbound.JCSMPInboundChannelAdapter.doStart(JCSMPInboundChannelAdapter.java:203)
at org.springframework.integration.endpoint.AbstractEndpoint.start(AbstractEndpoint.java:152)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:549)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:103)
at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:144)
at org.springframework.cloud.stream.binding.BindingService.doBindConsumer(BindingService.java:188)
at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:141)
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.createAndBindInputs(AbstractBindableProxyFactory.java:99)
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStartWithBindable(InputBindingLifecycle.java:59)
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:833)
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:60)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:405)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:394)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:586)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:364)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:310)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:1010)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:631)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:439)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:318)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1361)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1350)
at org.example.Main.main(Main.java:11)
Caused by: java.io.InvalidClassException: org.springframework.util.MultiValueMapAdapter; local class incompatible: stream classdesc serialVersionUID = -1297985047093119690, local class serialVersionUID = 1458406687712361249
at java.base/java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:598)
at java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2078)
at java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1927)
at java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2078)
at java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1927)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2252)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1762)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:540)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:498)
at org.springframework.util.SerializationUtils.deserialize(SerializationUtils.java:83)
… 52 more

2026-03-05T19:51:34.447+05:30 INFO 43320 — [consumer-input1] c.s.s.c.s.b.u.SolaceErrorMessageHandler : Processing message 1d888d1d-30c4-8700-d03b-d6a5379c6a60 <messaging-exception-message: null, original-message: null, source-jcsmp-message: 1>
2026-03-05T19:51:34.448+05:30 INFO 43320 — [consumer-input1] .s.b.u.ErrorQueueRepublishCorrelationKey : Republishing XMLMessage 1 to error queue scst/error/wk/exclusive/plain/RRCP_HKMA_DECOMPOSITION_LOCAL_TRADE_TEST - attempt 1 of 3

]]>
https://community.solace.com/t/spring-cloud-stream-solace-binder-consumer-fails-deserializing-inbound-headers-after-upgrade-how-to-disable-skip-header-deserialization/4787#post_4 Fri, 06 Mar 2026 06:22:17 +0000 community.solace.com-post-14321
Where can I have hands-on or learn to build production-grade Agentic AI systems with Solace Agent Mesh Please help me with this.

]]>
https://community.solace.com/t/where-can-i-have-hands-on-or-leran-to-build-production-grade-agentic-ai-systems-with-solace-agents-mesh/4788#post_1 Thu, 05 Mar 2026 17:07:24 +0000 community.solace.com-post-14320
Spring Cloud Stream Solace binder: consumer fails deserializing inbound headers after upgrade — how to disable/skip header deserialization? It would have been better if you could share the stack trace. The consumer-side deserialization may fail if the serialized class is not on the classpath. If your consumer application depends on the header, it needs to resolve the deserialization.

I believe the publisher application is a Solace Spring Cloud Stream application, as the Solace SCSt publisher serializes header values that are Java objects (non-primitives).

If you have control over the publisher app, the headers that are Java objects and being serialized could be excluded from the Solace message or converted to simple string headers.
Please have a look at the Solace producer properties, namely headerExclusions, headerTypeCompatibility, and nonserializableHeaderConvertToString.

Similarly, on the consumer side you could exclude the problematic header by setting the headerExclusions Solace consumer property, if the consumer doesn’t need to process it.

Hope this helps.

]]>
https://community.solace.com/t/spring-cloud-stream-solace-binder-consumer-fails-deserializing-inbound-headers-after-upgrade-how-to-disable-skip-header-deserialization/4787#post_3 Thu, 05 Mar 2026 16:09:02 +0000 community.solace.com-post-14319
Copy Message Spool from One Broker to other? Hey @snowleopard - the copy message command in the Solace broker works between endpoints on the same broker. If you want to copy messages from one broker to another, you would have to consume them and republish them on the new broker.

Note that if you want to upgrade from v9 to v10.25 you will have to submit a support ticket for this.

]]>
https://community.solace.com/t/copy-message-spool-from-one-broker-to-other/4786#post_3 Thu, 05 Mar 2026 16:02:37 +0000 community.solace.com-post-14318
Spring Cloud Stream Solace binder: consumer fails deserializing inbound headers after upgrade — how to disable/skip header deserialization? Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:

Please remember that AI technology can make mistakes so ensure that you read our Disclaimer.

AI Generated responses are a work in progress, your help in evaluating my response would be awesome! If my response makes sense, please mark it as a solution or give a :+1:. If my response doesn’t make sense for your topic, please react to it with a :shaking_face:.

I don’t have the information on that.

The provided facts cover related Spring Cloud Stream and Solace binder topics (such as payload deserialization, Spring library upgrade issues, and error handling), but none of them directly address the specific scenarios you’re asking about:

  1. Disabling Java deserialization of inbound headers (treating them as raw bytes/strings) in XMLMessageMapper.mapHeadersToSpring()
  2. Syncing serialization between producer and consumer to resolve InvalidClassException / serialVersionUID mismatches
  3. Excluding/blacklisting specific headers from inbound header mapping
  4. Strategies for migrating pipelines one at a time with mixed producer/consumer versions

For these specific concerns, I’d recommend searching the Solace documentation and the Solace Spring Cloud binder GitHub repository directly, particularly the XMLMessageMapper source and the binder’s consumer configuration properties, which may expose options for header mapping control.


]]>
https://community.solace.com/t/spring-cloud-stream-solace-binder-consumer-fails-deserializing-inbound-headers-after-upgrade-how-to-disable-skip-header-deserialization/4787#post_2 Thu, 05 Mar 2026 15:26:01 +0000 community.solace.com-post-14317
Spring Cloud Stream Solace binder: consumer fails deserializing inbound headers after upgrade — how to disable/skip header deserialization? Hi Solace Community,
After upgrading a Spring Cloud Stream consumer (Solace binder), the consumer fails before my handler runs with a header deserialization error like:

Failed to deserialize object / InvalidClassException (serialVersionUID mismatch)

The stack trace indicates it happens during inbound header mapping (e.g., XMLMessageMapper.mapHeadersToSpring() → SerializationUtils.deserialize()), likely because a producer is sending a Java-serialized object in message headers.

Is there a recommended way (consumer-side) to:

  • disable Java deserialization of inbound headers / treat them as raw bytes or strings, or

  • sync the serialization, or

  • exclude/blacklist certain headers from being mapped?

I have a lot of pipelines that need to be migrated to newer versions, and I wanted to migrate one pipeline at a time. Would it be possible to sync an older producer with a newer consumer, and what would be the way to do it?

]]>
https://community.solace.com/t/spring-cloud-stream-solace-binder-consumer-fails-deserializing-inbound-headers-after-upgrade-how-to-disable-skip-header-deserialization/4787#post_1 Thu, 05 Mar 2026 15:25:15 +0000 community.solace.com-post-14316
Copy Message Spool from One Broker to other? Is it possible to copy message spool from one broker running on v9 to v10.25+?

]]>
https://community.solace.com/t/copy-message-spool-from-one-broker-to-other/4786#post_1 Wed, 04 Mar 2026 17:16:51 +0000 community.solace.com-post-14314
ADB-000004-01-A product sheet Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:


Based on the documentation, the model number you’re asking about—ADB-000004-01-A—appears to be an Assured Delivery Blade (ADB) component rather than a complete Solace Event Broker Appliance model.

Here’s what I found:

The complete appliance event broker models available are:

  • Solace 3530 — for mid-sized businesses
  • Solace 3560 — for high-volume big data, cloud, and IoT requirements

The ADB-000004-01 is a specialized component (an Assured Delivery Blade) that can be installed in these appliances to enable external disk storage connectivity for guaranteed messaging. According to the documentation, the ADB-000004-01 features two 10 Gbps SFP+ optical module ports.

For detailed product specifications of the complete appliance event broker models, you can refer to:

If you need specifications for a complete appliance event broker system (such as the Solace 3530 or Solace 3560), those documents include links to detailed hardware specifications. If you’re looking for specifications specific to the ADB-000004-01-A component itself, I’d recommend searching the Solace documentation or contacting Solace support for component-level technical specifications.

]]>
https://community.solace.com/t/adb-000004-01-a-product-sheet/4785#post_2 Wed, 04 Mar 2026 03:16:31 +0000 community.solace.com-post-14313
ADB-000004-01-A product sheet Where can I get product specifications for ADB-000004-01-A

]]>
https://community.solace.com/t/adb-000004-01-a-product-sheet/4785#post_1 Wed, 04 Mar 2026 03:16:11 +0000 community.solace.com-post-14312
Feasibility of Routing SAP BTP AEM–Kafka Integration via SAP Cloud Connector We are currently working within the landscape to enable integration between SAP BTP AEM and Kafka using Kafka Connect. The customer has chosen to use Kafka Connect instead of native bridging.

We have shared relevant Solace documentation explaining how to establish direct connectivity between SAP BTP AEM and Kafka (i.e., SAP BTP AEM → Kafka Connect → Kafka).

SAP BTP AEM – Kafka Integration(Direct using Kafka connect) References :

https://solace.com/resources/home/col/resources/kafka-source-and-sink-how-to-video-final-video
https://solace.com/pubsub-plus-kafka-connectors/
https://github.com/SolaceProducts/pubsubplus-connector-kafka-source?tab=readme-ov-file#security-con…
https://docs.confluent.io/platform/current/connect/security.html#connect-reporter-acl-requirements

However, according to customer governance policies, all inbound connections from external systems to the customer’s on-premise landscape must pass through the SAP Cloud Connector. Therefore, we would like to check whether it is technically feasible to route the integration flow via the Cloud Connector instead (i.e., SAP BTP AEM → SAP Cloud Connector → Kafka Connect → Kafka). Could you please advise at your earliest convenience?

]]>
https://community.solace.com/t/feasibility-of-routing-sap-btp-aem-kafka-integration-via-sap-cloud-connector/4784#post_1 Tue, 03 Mar 2026 17:29:26 +0000 community.solace.com-post-14310
Deploy SAM_RAG official plugin to Solace Agentic Mesh Enterprise Edition Hello all, please I need your help.

I am trying to install the SAM-RAG plugin into the SAM Enterprise Edition deployed on Kubernetes, but the Python tools for the RAG do not seem to recognize my config. It also seems the tool’s custom code does not receive the app context.

Can anyone assist me with this?

Thank you in advance.

]]>
https://community.solace.com/t/deploy-sam-rag-official-plugin-to-solace-agentic-mesh-enterprise-edition/4783#post_1 Mon, 02 Mar 2026 10:47:00 +0000 community.solace.com-post-14308
Announcement: Solace Agent Mesh Enterprise Now Generally Available Hello @ian_fyfe thank you for the information.

Please do you know how i can deploy the SAM-RAG plugin to the Solace Agent Mesh Enterprise that is deployed to Kubernetes.

Thank you for your support.

]]>
https://community.solace.com/t/announcement-solace-agent-mesh-enterprise-now-generally-available/4675#post_8 Mon, 02 Mar 2026 10:39:52 +0000 community.solace.com-post-14307
Connecting RabbitMQ to Solace In a github project I created a simple proof of concept to bridge RabbitMQ queues with Solace queues.

The only way to do this, is to find a bridge that can communicate with RabbitMQ in AMQ version 0-9-1 and with Solace using AMQP version 1-0-0.

You can probably do it using Spring-Boot and using starter for each brokers, but I found that the Shovel RabbitMQ plugin was also doing it.

In my personal GitHub project, you can find the provisioning script to create the shovel.

As pre-requisite, you will need to install the following plugins:

# The `RUN` command comes from the Dockerfile (rabbitmq) in the above GitHub project

# Install Rabbit plugins here
RUN rabbitmq-plugins enable rabbitmq_management && \
	rabbitmq-plugins enable rabbitmq_amqp1_0 && \
	rabbitmq-plugins enable rabbitmq_shovel && \
	rabbitmq-plugins enable rabbitmq_shovel_management

With that I was able to bridge any queue from or to RabbitMQ:

  • FROM Solace TO RabbitMQ
  • FROM RabbitMQ TO Solace

Each flow needs a separate shovel configuration.
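
To make the setup concrete, here is a sketch of how such a shovel could be declared as a dynamic parameter. The URIs, credentials, and queue names are placeholders (the real ones live in the project's provisioning script), so treat this as illustrative rather than a drop-in config:

```shell
# Declare a dynamic shovel that consumes from a RabbitMQ queue over
# AMQP 0-9-1 and republishes into a Solace queue over AMQP 1.0.
# All hosts, credentials, and queue names below are placeholders.
rabbitmqctl set_parameter shovel rabbit-to-solace '{
  "src-protocol": "amqp091",
  "src-uri": "amqp://guest:guest@localhost:5672",
  "src-queue": "rabbit.source.queue",
  "dest-protocol": "amqp10",
  "dest-uri": "amqp://client:secret@solace-host:5672",
  "dest-address": "solace.target.queue"
}'
```

Swapping the src and dest sides gives the Solace-to-RabbitMQ direction, which is why each flow needs its own shovel.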

]]>
https://community.solace.com/t/connecting-rabbitmq-to-solace/84#post_6 Thu, 26 Feb 2026 14:23:28 +0000 community.solace.com-post-14304
Connectivity to Solace Event Broker from Oracle OIC Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:

Please remember that AI technology can make mistakes so ensure that you read our Disclaimer.

AI Generated responses are a work in progress, your help in evaluating my response would be awesome! If my response makes sense, please mark it as a solution or give a :+1:. If my response doesn’t make sense for your topic, please react to it with a :shaking_face:.

Based on the provided documentation, I can explain the integration options between Oracle OIC and SAP via a Solace Event Broker:

For retrieving SAP events through a Solace software event broker, there are a few key considerations:

  1. SAP Connection Type:

    • For S/4HANA systems, native events can only be published directly to SAP Advanced Event Mesh (AEM) brokers, not to standard Solace event brokers
    • To publish events from SAP to a standard Solace Event Broker, you would need to use one of these options:
      • ASAPIO connector
      • RAP events
      • SAP AIF (Application Interface Framework)
  2. Protocol Requirements:

    • SAP uses AMQP(s) protocol for event connectivity
    • For Oracle OIC to connect to Solace, you’ll need to use appropriate connection parameters including:
      • TCPS-based connection URL
      • Proper authentication credentials
      • Secure password alias configuration
  3. Connection Architecture:

    • SAP systems can have multiple channels configured for event publishing
    • Each channel can have multiple publish daemons (1-10 connections)
    • There is typically one additional connection for consuming events

Given these constraints, to implement this integration you would need to:

  1. Set up appropriate connectivity from SAP to Solace using one of the supported methods (ASAPIO, RAP, or AIF)
  2. Configure Oracle OIC to connect to the Solace Event Broker using proper connection parameters
  3. Ensure proper topic/queue configuration for event routing between systems

For detailed implementation steps, I recommend consulting the Solace documentation as the specific configuration will depend on your chosen integration method and requirements.
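
As an illustration of step 3, the queue that Oracle OIC would bind to (over AMQP or JMS) can be provisioned on the broker with SEMP v2. All names, URLs, credentials, and the topic below are assumptions for the sketch, not values from this thread:

```shell
# Create a queue for OIC to consume from, then add a topic subscription
# so SAP events published to matching topics are attracted to it.
# Broker URL, VPN, credentials, queue name, and topic are placeholders.
BROKER="https://my-broker:943"
VPN="default"

curl -X POST -u admin:admin "$BROKER/SEMP/v2/config/msgVpns/$VPN/queues" \
  -H "Content-Type: application/json" \
  -d '{"queueName":"oic-sap-events","accessType":"exclusive",
       "permission":"consume","ingressEnabled":true,"egressEnabled":true}'

curl -X POST -u admin:admin \
  "$BROKER/SEMP/v2/config/msgVpns/$VPN/queues/oic-sap-events/subscriptions" \
  -H "Content-Type: application/json" \
  -d '{"subscriptionTopic":"sap/events/>"}'
```

OIC could then consume from the queue using a generic AMQP 1.0 or JMS client, since there is no native Solace adapter.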

]]>
https://community.solace.com/t/connectivity-to-solace-event-broker-from-oracle-oic/4780#post_2 Thu, 26 Feb 2026 11:20:32 +0000 community.solace.com-post-14303
Connectivity to Solace Event Broker from Oracle OIC Oracle OIC does not have an adapter to connect to Solace. In such cases, how can we connect to the Solace Event Broker and retrieve the events published by SAP?

]]>
https://community.solace.com/t/connectivity-to-solace-event-broker-from-oracle-oic/4780#post_1 Thu, 26 Feb 2026 11:19:56 +0000 community.solace.com-post-14302
solace rust library Hello! Any news regarding the Rust library?

]]>
https://community.solace.com/t/solace-rust-library/1787#post_8 Wed, 25 Feb 2026 09:37:01 +0000 community.solace.com-post-14300
Spring Boot 4 support Hi @Tomasz ,

Good to see you back! I hope you’re doing well.

@amackenzie actually just responded over on GitHub as well. Support is coming! Spring Boot 4 Support · Issue #192 · SolaceProducts/solace-spring-boot · GitHub

]]>
https://community.solace.com/t/spring-boot-4-support/4776#post_3 Mon, 23 Feb 2026 15:59:01 +0000 community.solace.com-post-14297
Spring Boot 4 support Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:

Based on the provided documentation, I can analyze the situation regarding Spring Boot 4 support and GitHub pull request #191:

  1. Current Status:
  • The latest documented Spring Boot version supported by Solace components is 3.3.5
  • There is no explicit mention of Spring Boot 4 support or pull request #191 in the provided documentation
  • The most recent Spring Boot compatibility discussion mentions Spring Boot 3.2.0 working with:
    • spring-cloud-dependencies 2023.0.0
    • solace-spring-boot-bom 2.0.0
    • spring-boot-starter-parent 3.2.0
  2. Analysis:
  • Spring Boot 4 has not been released yet (as of the documentation dates)
  • There is no information about plans or roadmap items specifically mentioning Spring Boot 4 support
  • The documentation shows Solace actively maintains Spring Boot compatibility, with recent updates for version 3.x

Given this analysis, I cannot provide specific information about plans for Spring Boot 4 support or the status of pull request #191, as this information is not present in the provided documentation. The documentation only covers current and recently released versions of Spring Boot.

For the most up-to-date information about Spring Boot compatibility and future support, I recommend:

  1. Checking the official Solace documentation
  2. Following the Solace Spring Boot repository for updates
  3. Using the currently supported Spring Boot 3.x versions for your applications

]]>
https://community.solace.com/t/spring-boot-4-support/4776#post_2 Mon, 23 Feb 2026 13:31:37 +0000 community.solace.com-post-14296
Spring Boot 4 support Hello,

Are you planning to add support for Spring Boot 4 in the Solace Spring Boot starters?

I saw that this is blocking a lot of people, as the change in Spring Boot modules breaks usage of the current starter.

Additionally, a pull request is waiting on GitHub with no response from the reviewers.

Best regards

]]>
https://community.solace.com/t/spring-boot-4-support/4776#post_1 Mon, 23 Feb 2026 13:31:04 +0000 community.solace.com-post-14295
“Redelivery Delay Support Required” Error When Using Delayed Redelivery with Try-Me and SAP Integration Suite (CPI JMS Consumer) Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:

Based on the documentation provided, I can explain how to implement delayed redelivery with exponential backoff for JMS messages in Solace Cloud when using SAP Integration Suite (CPI) as the consumer.

The delayed redelivery feature with exponential backoff is supported in Solace Event Broker (formerly called PubSub+ Event Broker) and can be configured on queues and topic endpoints. Here’s what you need to know:

  1. Feature Overview:

    • Delayed redelivery helps applications recover from temporary processing failures
    • It uses a timer and is triggered by client-initiated transaction rollbacks
    • Supports exponential backoff through a configurable multiplier
    • Configuration is done entirely on the event broker side
  2. Key Configuration Parameters:

    • Initial interval: First delay between delivery attempts (default 1000ms)
    • Maximum interval: Maximum delay between attempts (default 64000ms)
    • Multiplier: Factor for exponential backoff (default 2.00, range 1.00-5.00)
  3. Important Considerations:

    • This is a Controlled Availability (CA) feature that requires special enablement
    • Configuration must be done when no clients are connected to the queue
    • The feature works with local transactions
    • Not supported in C API, JavaScript API, or Node.js API
  4. Behavior:

    • When a transaction is rolled back, message delivery is suspended
    • The delay increases exponentially between retry attempts
    • The consumer flow is blocked during the delay period
    • Delivery resumes after the delay timer expires

For the specific configuration steps and detailed documentation, please refer to the Solace documentation on delayed redelivery.

Note: Since this is a broker-side configuration, you cannot modify these settings through the API. The configuration must be done through the broker’s administration interface when the queue is not in use.
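
For reference, here is a hedged sketch of how those parameters might be set via SEMP v2 on a broker where the feature is enabled. The field names follow my reading of the SEMP v2 queue object and should be verified against the SEMP reference for your broker version; the URL, VPN, queue name, and credentials are placeholders:

```shell
# Enable delayed redelivery on an existing queue (disconnect consumers
# first, since the settings cannot be changed while clients are bound).
# Note: redeliveryDelayMultiplier is commonly expressed in hundredths,
# so 200 would mean a 2.00x backoff factor -- verify in the SEMP reference.
curl -X PATCH -u admin:admin \
  "https://my-broker:943/SEMP/v2/config/msgVpns/default/queues/my-queue" \
  -H "Content-Type: application/json" \
  -d '{
    "redeliveryEnabled": true,
    "redeliveryDelayEnabled": true,
    "redeliveryDelayInitialInterval": 1000,
    "redeliveryDelayMaxInterval": 64000,
    "redeliveryDelayMultiplier": 200
  }'
```

With the defaults above, the delay between attempts grows roughly as 1s, 2s, 4s, 8s, and so on, capping at 64s.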

]]>
https://community.solace.com/t/redelivery-delay-support-required-error-when-using-delayed-redelivery-with-try-me-and-sap-integration-suite-cpi-jms-consumer/4772#post_2 Fri, 20 Feb 2026 09:47:51 +0000 community.solace.com-post-14290
“Redelivery Delay Support Required” Error When Using Delayed Redelivery with Try-Me and SAP Integration Suite (CPI JMS Consumer) Hi Team,

I am working with SAP Integration Suite (Cloud Integration – CPI) as a JMS consumer and Solace PubSub+ Advanced Event Mesh (Cloud) as the broker.

I created a durable queue and configured the following settings.


Environment:

  • Broker: Solace PubSub+ (Advanced Event Mesh – Cloud)

  • Protocol: JMS

  • Queue Type: Durable, Exclusive

  • Consumer Type:

    • Try-Me (for testing)

    • SAP CPI IFlow using JMS adapter

  • DMQ: Disabled

  • Redelivery: Enabled

  • Delayed Redelivery: Enabled


Issue:

After enabling Delayed Redelivery on the queue, when I try to consume messages using:

  1. Try-Me

  2. SAP CPI IFlow (JMS Receiver Adapter)

I receive the following error:

“Redelivery delay support required”

Because of this error, the consumer is not able to process messages.


My Requirement:

  1. Messages should retry infinite times if processing fails.

  2. Messages should NOT move to DMQ (DMQ is disabled intentionally).

  3. If the consumer is OFF, messages should remain in the queue.

  4. When the consumer comes back online:

    • All pending messages should be processed

    • Failed messages should retry automatically

    • Retry should follow exponential backoff


Questions:

  1. Does enabling Delayed Redelivery require special support from the JMS client?

  2. Does SAP CPI JMS adapter support broker-side delayed redelivery?

  3. Is Try-Me tool compatible with delayed redelivery?

  4. If CPI does not support this feature, what is the recommended way to implement:

    • Infinite retry

    • No DMQ usage

    • Exponential backoff

  5. Should exponential retry be implemented at:

    • Broker level?

    • Or inside CPI (using Exception Subprocess + Timer)?


What I Tried:

  • Enabled redelivery

  • Disabled DMQ

  • Set max redelivery count

  • Enabled delayed redelivery

  • Tested via Try-Me

  • Tested via CPI IFlow JMS adapter

Still facing the same error.


Could someone please guide me on the correct approach to achieve this requirement with SAP Integration Suite and Solace AEM?

Thanks in advance.

]]>
https://community.solace.com/t/redelivery-delay-support-required-error-when-using-delayed-redelivery-with-try-me-and-sap-integration-suite-cpi-jms-consumer/4772#post_1 Fri, 20 Feb 2026 09:47:18 +0000 community.solace.com-post-14289
Reprocessing the messages without using DMQ Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:

Based on the provided documentation, I can explain why both DMQ message count and replay log increase when replaying messages from DMQ, and clarify the behavior of message replay.

Reasoning:

  1. When messages are replayed from a DMQ, they are treated as new messages being published to the target queue. The replay log captures all Guaranteed messages by default, so these replayed messages are also added to the replay log.

  2. If these replayed messages fail delivery again (e.g., due to the same issue that caused them to go to DMQ initially), they will be moved to the DMQ again because:

    • Messages in DMQ retain their original headers and properties
    • The DMQ eligibility and delivery failure conditions still apply
    • There is no automatic mechanism to prevent messages from going back to DMQ
  3. Message replay does not automatically move messages back to the original queue - it’s simply a mechanism to resend messages from a replay log to a specified endpoint. The replay feature:

    • Delivers messages from the replay log to specified endpoints
    • Maintains original message order and properties
    • Does not modify or remove messages from their source location

Answer:
The behavior you’re observing is expected. Message replay is not designed to automatically move messages from DMQ back to original queues - it’s a mechanism for replaying historical messages from the replay log. When replaying from DMQ:

  1. Messages are treated as new publications to the target queue
  2. These messages are captured in the replay log (increasing replay log count)
  3. If delivery fails again, messages return to DMQ (increasing DMQ count)

To properly handle DMQ messages, consider:

  • Addressing the root cause of message delivery failures before replay
  • Using a separate application to process DMQ messages
  • Implementing proper error handling and retry logic in the consuming application

]]>
https://community.solace.com/t/reprocessing-the-messages-without-using-dmq/4771#post_2 Fri, 20 Feb 2026 09:00:12 +0000 community.solace.com-post-14288
Reprocessing the messages without using DMQ Hi Team,

I am using SAP CPI with Solace Advanced Event Mesh.
I created a queue and enabled DMQ and Replay.

When I replay messages from DMQ, I see:

  • DMQ message count increases
  • Default replay log also increases

Why are both increasing?
Should replay move messages back to original queue automatically?

Thanks in advance.

]]>
https://community.solace.com/t/reprocessing-the-messages-without-using-dmq/4771#post_1 Fri, 20 Feb 2026 08:59:42 +0000 community.solace.com-post-14287
Solclientjs does NOT support message selectors even though documentation for other APIs shows selector support Hi there, I’m Solly AI, I generate responses to your questions using the power of the Solace Agent Mesh:

Based on the provided documentation, I can analyze the support for message selectors in the JavaScript API and alternatives for filtering messages. Let me break this down:

  1. Direct Answer:
    According to the documentation, message selectors are explicitly listed as “Not applicable” for the Solace Messaging API for JavaScript and Node.js. This means message selectors are not supported in the JavaScript API when using a software event broker.

  2. Alternative for Filtering Messages:
    The documentation suggests several alternatives for message filtering:

  • Use topic matching instead of selectors, which offers better performance
  • Include message properties within the topic hierarchy
  • Utilize topic subscriptions and topic subscription exceptions which support:
    • Wildcards
    • Prefix matching within levels
    • AND, OR, and NOT operations through multiple subscriptions and exceptions
  3. Reasoning Behind the Recommendation:
    The documentation strongly recommends using topic-based filtering because:
  • It provides better performance than selectors
  • Filtering occurs before messages are added to queues
  • It avoids potential performance impacts on the event broker message spool
  • It eliminates the need to retrieve and evaluate messages from disk
  4. Topic-Based Filtering Capabilities:
    You can implement filtering logic by:
  • Structuring your topic hierarchy to include the properties you want to filter on
  • Using topic wildcards for flexible matching
  • Combining multiple subscriptions and exceptions to create complex filtering patterns

]]>
https://community.solace.com/t/solclientjs-does-not-support-message-selectors-even-though-documentation-for-other-apis-shows-selector-support/4770#post_2 Fri, 20 Feb 2026 04:23:24 +0000 community.solace.com-post-14286
Solclientjs does NOT support message selectors even though documentation for other APIs shows selector support Problem Description

I am using solclientjs (Solace JavaScript SMF API) in a Node.js application.

I attempted to configure a message selector on a queue consumer, but the selector is ignored by the broker and does not appear in the Solace UI under:

Queue → Consumers → Selector

This indicates that the selector is not being applied at all.

Evidence That solclientjs Does Not Support Selectors

According to a Solace engineer’s response in the Solace Community (Feb 2023):

“The Solace Javascript API does not support Selectors.”
Solace Engineer: amackenzie

The engineer also referred to the API support matrix, confirming that selectors are not supported in the JavaScript API.

Selectors Are Supported in Other Solace APIs

Official Solace documentation for other APIs (Java, C, .NET) states:

“Selectors enable clients to specify which messages they are interested in … Selectors can be set for flows bound to a queue or topic endpoint.”

So selector support exists in other Solace APIs like:

  • JCSMP (Java)

  • C API

  • .NET API

…but not in solclientjs.

What I Need Clarification On

  1. Is message selector support officially not available in solclientjs as of today?

  2. Is there any planned support on the roadmap for the JavaScript API?

  3. Is there any recommended workaround for filtering messages on the broker side (instead of application-level filtering)?

]]>
https://community.solace.com/t/solclientjs-does-not-support-message-selectors-even-though-documentation-for-other-apis-shows-selector-support/4770#post_1 Fri, 20 Feb 2026 04:22:48 +0000 community.solace.com-post-14285
Implementing Exactly Once in Order (or FIFO) in Solace Hey @AnujDulta

I wanted to offer a different solution to your use case :slight_smile:

I would model your jobs, each with multiple operations within them, as separate consumers that have their own queues. So:

  • Job A with 3 separate operations will have its own exclusive queue: job-a-queue
  • Job B with its own operations will have its own exclusive queue: job-b-queue

Each queue should have its own DMQ set up, and you should have max retries and exponential backoffs configured on both the queue and the SAP AEM adapter in CI.

Each operation should be idempotent; this can be achieved by using a local idempotent process call step, and each operation should proceed only if the previous step has been processed successfully. You can also add conditions to ensure that each operation for a given message is executed only if there is no record of it having completed successfully before.

If there is an error in Job A, Operation 2, you should:

  • Handle the exceptions in operation 2 gracefully so that you know the root cause of the error
  • Based on the root cause, determine if this is a recoverable error or not.
  • If yes, then retry processing; if not, let the message move to the DMQ.

This way if there is an error in operation 2, the subsequent operation for that message will not happen, the message will be moved to a DMQ from where it can be debugged and replayed if required.

There is a pretty detailed SAP Community blog on this topic: https://community.sap.com/t5/technology-blog-posts-by-sap/enabling-in-order-processing-with-sap-integration-suite-advanced-event-mesh/ba-p/13703498

]]>
https://community.solace.com/t/implementing-exactly-once-in-order-or-fifo-in-solace/2125#post_4 Wed, 18 Feb 2026 12:44:57 +0000 community.solace.com-post-14282
Client user name Hi Nagamani,

The AI didn’t quite get it right. If you have basic authentication enabled with internal checking and you don’t give the client a password, they will not be able to log in. You can still create the client in the internal database and use it to authorize the client (assign the client profile and ACL profile).
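
To illustrate, a client username can be created in the internal database without a password while still carrying the authorization assignments; when authentication happens by another scheme (for example, client certificate), the entry still supplies the client profile and ACL profile. A SEMP v2 sketch, where the broker URL, VPN, and all names are placeholders:

```shell
# Create a client username with a client profile and ACL profile
# but no password; all names below are illustrative only.
curl -X POST -u admin:admin \
  "https://my-broker:943/SEMP/v2/config/msgVpns/default/clientUsernames" \
  -H "Content-Type: application/json" \
  -d '{
    "clientUsername": "app-client",
    "clientProfileName": "my-client-profile",
    "aclProfileName": "my-acl-profile",
    "enabled": true
  }'
```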

Best Regards,

Paul

]]>
https://community.solace.com/t/client-user-name/4768#post_4 Tue, 17 Feb 2026 17:03:12 +0000 community.solace.com-post-14281
Client user name Got it. If I select basic authentication as Internal and do not give a password, will basic authentication for a client still work?

]]>
https://community.solace.com/t/client-user-name/4768#post_3 Sat, 14 Feb 2026 04:48:56 +0000 community.solace.com-post-14280