Prismatic offers enterprise software vendors an iPaaS (integration platform as a service) purpose-built for designing, embedding, and supporting integrations with other software products and services in the context of each end customer’s business workflow.
Every B2B software vendor has a “wall of integrations” with logos of other vendors, services, and standards their software can talk to. Some of these integrations were announced with great fanfare and painstakingly engineered, with architects and developers devising bespoke workflow, secure connectivity, data exchange, and state maintenance protocols for the job. Others are as simple as an agent making a public API call. Either way, as long as any customers are using them, the integrations must be maintained for dependability.
If you were a vendor developing an enterprise application, maybe you wouldn’t want to develop and continue to support your own custom integrations, any more than you’d want to design a new authorization service, or secrets vault, or API gateway, or pod autoscaler, unless the special way you built one of those was especially differentiated and valuable to your customers.
That’s where Prismatic comes in, giving product managers a low-code way to design and refine required cross-vendor business workflows, while offering engineers an SDK approach to scripting their own product integrations or webhooks to other vendors, without having to think about all the necessary scaffolding. AI agents can also use this integration-building logic as a token-saving tool for generating integrations, or for monitoring how they scale.
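The SDK approach described above can be pictured with a purely illustrative Python sketch – all names here are hypothetical, and this is not Prismatic’s actual SDK (which is TypeScript-based). The point is the division of labor: the engineer writes only the transform and handler logic, while the platform supplies connectivity, secrets, retries, and state.

```python
import json

def transform_invoice(payload: dict) -> dict:
    """Map our product's invoice shape onto the partner's expected schema."""
    return {
        "external_id": payload["invoice_id"],
        "amount_cents": int(round(payload["total"] * 100)),
        "currency": payload.get("currency", "USD"),
    }

def handle_webhook(raw_body: bytes, send) -> dict:
    """Parse an incoming webhook body, transform it, and hand it to `send`.

    In a real integration platform, `send` would be an HTTP client already
    wired up with the partner's endpoint and credentials; retries, secrets
    handling, and execution state would come from the surrounding scaffolding.
    """
    payload = json.loads(raw_body)
    mapped = transform_invoice(payload)
    send(mapped)
    return mapped
```

Because the handler takes `send` as a parameter, the mapping logic is trivially unit-testable without any network access – the kind of scaffolding-versus-logic split an integration platform encourages.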
Copyright ©2026 Intellyx B.V. Intellyx is the change agent analyst firm focused on customer-driven, technology-empowered enterprise transformation. Our thought leadership distills insights across the rapidly evolving enterprise IT landscape, and our advisory helps you and your customers see through the hype and get beyond the fear of technology disruption to take action and realize value through change. At the time of writing, Prismatic is not an Intellyx customer. No AI was used to write this article. To be considered for a Brain Candy article, email us at [email protected].
]]>
Apiiro maps and scans an extended application estate for vulnerabilities and misconfigurations, overlaying them onto a dynamic graph that allows executives, engineers — and their agents — to search, visualize and pinpoint change events from coding to runtime that could introduce risk from a software architecture perspective.
Last time we talked to Apiiro at KubeCon NA 2023, they were looking for cloud configuration flaws and API addressability and security issues within the software supply chain as part of the GitOps lifecycle. The utility of that automated activity hasn’t disappeared – but the rate of architectural change has increased drastically (as much as 4X over the last 12 months) due to the widespread infusion of code generated and delivered by coding agents.
There are already plenty of preflight checks and production monitoring tools in the SOC addressing different layers of a hybrid cloud topology: SAST/DAST, WAF monitoring, API governance, pentests, PII checks, container config optimization tools, and on and on.
All of these tools and sources become telemetry for Apiiro’s “Guardian Agent,” which assists app dev teams that need to focus on reviewing human- and agent-coded pull requests and safely delivering functionality to market, without the overwhelming cognitive load of closing off risky loopholes throughout a rapidly shifting stack.
Copyright ©2026 Intellyx B.V. Intellyx is the change agent analyst firm focused on customer-driven, technology-empowered enterprise transformation. Our thought leadership distills insights across the rapidly evolving enterprise IT landscape, and our advisory helps you and your customers see through the hype and get beyond the fear of technology disruption to take action and realize value through change. At the time of writing, Apiiro is not an Intellyx customer. No AI was used to write this article. To be considered for a Brain Candy article, email us at [email protected].
]]>When we last covered DH2i in February 2024, the company was taking its high-availability (HA) clustering solution for Microsoft SQL Server to Kubernetes on Linux.
Today, with another two years of Kubernetes maturity under its belt, DH2i is redoubling its support for the platform, in large part because its SQL Server customers want to run their HA clusters on Linux.
This pattern derives in large part from Microsoft’s strategy to support SQL Server as a modern database offering while simultaneously emphasizing Linux as its preferred server environment over Windows.
Given Microsoft’s continuing support for the database, SQL Server customers largely remain committed to the platform and find DH2i’s HA clustering technology to be essential for running mission-critical workloads on the database.
Now that DH2i is updating its SQL Server Kubernetes operator to provide improved scale-up and scale-down capabilities, automated rolling updates, flexible service templates, and StatefulSet-based management, SQL Server customers can take heart that they can continue to run mission-critical workloads on Linux for years to come.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. Microsoft is a former Intellyx customer. None of the other vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
]]>
Everyone agrees that artificial intelligence (AI) depends upon data. But as they say, the devil is in the details.
In the first article in this series, I differentiated among the various types of data that AI requires: model training data, information people put into prompts, and data that feeds into queries via retrieval augmented generation (RAG).
Agentic AI complicates this issue, as AI agents may look for and fetch data on the fly. As my colleague Eric Newcomer explained in the second article in this series, such agentic behavior may be non-deterministic: agents often behave differently from one occasion to the next.
Given that AI agents will take whatever actions they can to meet the requirements set out for them, their unpredictability makes them both extraordinarily powerful and potentially dangerous.
AI agents simply aren’t practical unless the organization in question has its data house in order. In fact, agentic AI’s inherent risk has stopped many organizations’ AI initiatives dead in their tracks.
What, then, do you need to do to ensure your data is up to the challenge? Your data must be the right data in the right place at the right time to support your AI initiatives.
Click here to read the entire article.
Image courtesy of Boomi.
]]>
SPHERE provides an identity hygiene control plane that discovers permissions and provides identity analytics spanning several different entitlement, access and authorization policy sources within an extended enterprise IT estate – from data center access to cloud application accounts and AI agent machine identities.
Since our last coverage of them as an emerging identity service vendor in 2021, SPHERE has developed a unique high-level identity risk monitoring and remediation capability that sniffs for issues atop a spaghetti stack of data streaming from platforms for IAM, IGA, PAM, CSPM, DSPM and too many other acronyms.
While we’re not in the business of bucketing vendors into acronyms, the variety of technologies in this space does point to the need for an integrated view that correlates identities and entitlements across multiple vendor platforms. For instance, a routine compliance audit may reveal unstructured PII records in a data lake that an AI agent may have accessed, with no corresponding SecOps owner or development team members assigned in any identity provider – while a spreadsheet sitting in a private shared folder contains a link and an admin password.
This is where the control plane’s intelligent automation kicks in, looking across entitlements to propose the most likely owner, then suggesting entitlement workflows to return access and permissions to a less risky / more compliant state without impacting dependencies. Analytics for measuring identity hygiene provide proof of risk reduction and of a least-privilege access posture at the enterprise, team, and individual employee level.
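To make the owner-proposal idea concrete, here is a minimal sketch of the kind of cross-source correlation involved – a toy scoring heuristic of our own invention, not SPHERE’s actual algorithm. It weights admin-level grants more heavily and rewards identities corroborated by multiple independent entitlement sources.

```python
from collections import Counter

def propose_owner(entitlements):
    """Propose the most likely owner of an orphaned resource.

    entitlements: list of dicts like
        {"source": "IAM", "identity": "alice", "access": "read"}
    drawn from multiple platforms (IAM, IGA, PAM, and so on).
    """
    score = Counter()
    seen_sources = {}
    for e in entitlements:
        # Admin-level access is a stronger ownership signal than read access.
        weight = 3 if e["access"] == "admin" else 1
        score[e["identity"]] += weight
        seen_sources.setdefault(e["identity"], set()).add(e["source"])
    # Bonus for identities corroborated across independent sources.
    for identity, sources in seen_sources.items():
        score[identity] += 2 * (len(sources) - 1)
    return score.most_common(1)[0][0] if score else None
```

A real product would fold in far more signal (org charts, activity logs, team membership), but the shape of the problem – correlate, score, propose – is the same.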
Copyright ©2026 Intellyx B.V. Intellyx is the change agent analyst firm focused on customer-driven, technology-empowered enterprise transformation. Our thought leadership distills insights across the rapidly evolving enterprise IT landscape, and our advisory helps you and your customers see through the hype and get beyond the fear of technology disruption to take action and realize value through change. At the time of writing, SPHERE is not an Intellyx customer. No AI was used to write this article. To be considered for a Brain Candy article, email us at [email protected].
]]>
Mainframe teams around the world are asking whether they’re ready for AI – but they’re asking the wrong question.
The right question is: why should we implement AI at all? In other words, what are the business drivers for AI in the enterprise – those business pain points that AI is best suited to address?
Those pain points will determine how ready any organization is to deploy AI, in the cloud and on-premises, on distributed systems as well as the mainframe.
Only by placing the question of AI readiness on the mainframe into this broader business and IT context can mainframe teams assemble AI readiness checklists that will support their organization’s AI priorities while managing the associated risks.
]]>
While the need for robust governance grows, the technology sector suffers from a serious skills gap. Gartner predicts that by 2027, 75 percent of all hiring processes will include specific certifications and tests for workplace AI skills. Companies are coming to understand that their ambitious AI initiatives will fail if they lack the human talent to steer them.
The consequences of this talent shortage are already visible. Industry analysts such as Jason Bloomberg of Intellyx observe that historical struggles with data governance have produced fragmented processes that now hinder effective AI adoption. This challenge prompted the March 9, 2026 launch of Trust3 AI – a repositioned initiative from data security company Privacera for unified agentic governance. Industry data shows that only about a third of enterprises report any meaningful data governance implementation. As a result, many AI projects remain stuck in “pilot purgatory,” unable to scale due to unresolved security and compliance weaknesses.
Click here to read the entire article.
Image courtesy of Ad Hoc News.
]]>
Since our last coverage of Selector in 2022 and 2023, this observability platform has trained its contextual AI alerting and recommendation engines with hard-won learnings from correlating and transforming event data across the full stacks of some of the world’s largest heterogeneous networks.
We will need to reckon with a cloud-native future where every network layer (1–7) has an array of status dashboards, telemetry sources, and monitoring tools, both open source and proprietary. This plays into Selector’s domain-agnostic strength: quickly transforming relevant real-time event data from multiple sources into a horizontal data lake atop which its AI engines build context awareness, correlating the events with intra-service relationships and observable outcomes in production.
Engineers and executives alike can investigate trends and reproduce the most critical incidents within a unified dashboard, or by expressing the business intent they are seeking to improve to their natural language ChatOps agent within Slack, ServiceNow or other ITSM and collaboration tools. They get back a digital twin of the event scenario, with root cause data as well as guided remediation processes which can then be used to automate future resolutions.
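The event correlation at the heart of this can be illustrated with a deliberately simplified sketch – not Selector’s implementation, just the basic pattern of grouping raw events from heterogeneous sources into candidate incidents by shared service and a rolling time window.

```python
def correlate(events, window_seconds=300):
    """Group events into candidate incidents.

    events: list of {"ts": epoch_seconds, "service": str, "msg": str}.
    Events for the same service within `window_seconds` of the previous
    event join the same group; a longer gap starts a new group.
    Returns a list of groups, each a list of events.
    """
    groups = []
    open_group = {}  # service -> its currently open group
    for e in sorted(events, key=lambda e: e["ts"]):
        current = open_group.get(e["service"])
        if current and e["ts"] - current[-1]["ts"] <= window_seconds:
            current.append(e)
        else:
            group = [e]
            open_group[e["service"]] = group
            groups.append(group)
    return groups
```

A production engine would add topology awareness (correlating *across* related services, not just within one) and statistical baselining, but even this toy version shows why normalizing all telemetry into one lake first makes the correlation step tractable.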
Copyright ©2026 Intellyx B.V. Intellyx is the change agent analyst firm focused on customer-driven, technology-empowered enterprise transformation. Our thought leadership distills insights across the rapidly evolving enterprise IT landscape, and our advisory helps you and your customers see through the hype and get beyond the fear of technology disruption to take action and realize value through change. At the time of writing, Selector is not an Intellyx customer. No AI was used to write this article. To be considered for a Brain Candy article, email us at [email protected].
]]>Given all the attention these days to AI security, it’s perhaps not surprising that the large analyst firms have divided the market into several segments: AI governance, AI runtime inspection and enforcement, AI security testing and red-teaming, AI infrastructure/model supply chain security, shadow AI protection, and of course, agentic AI security – to name a few.
Rather than jumping into one or more of these segments, DeepKeep offers a comprehensive, full-lifecycle AI security platform that includes AI red teaming (penetration testing), AI runtime protection, shadow AI protection, and agentic AI security – all within a unified platform.
DeepKeep takes business context into account, implementing AI guardrails at the semantic layer, both for runtime protection and for discovering employee AI activity and enforcing corporate AI policies around it.
The platform works with LLMs as well as image data, automatically evaluating AI-related security risks and measuring trustworthiness across the AI lifecycle.
DeepKeep runs as a SaaS offering or in customers’ private clouds, on-premises data centers, or fully airgapped environments. While the platform is a unified offering, the company also sells individual modules for companies that require only part of the solution.
Customers can deploy DeepKeep in two configurations: either as a proxy that secures prompts transparently to the user, or via APIs where AI process orchestrators send prompts to DeepKeep for scanning and policy enforcement.
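The API-based configuration amounts to a guard step in the orchestrator’s call path: submit each prompt for scanning, and only forward it to the model if policy allows. Here is a minimal sketch of that pattern with a toy stand-in scanner – this is not DeepKeep’s API, and the blocked-pattern list is purely illustrative.

```python
# Toy stand-in for a prompt-scanning service (illustrative only).
BLOCKED_PATTERNS = ("ignore previous instructions", "reveal your system prompt")

def scan_prompt(prompt: str) -> dict:
    """Return a policy verdict; in practice this would be a remote API call."""
    lowered = prompt.lower()
    hits = [p for p in BLOCKED_PATTERNS if p in lowered]
    return {"allowed": not hits, "findings": hits}

def guarded_call(prompt: str, llm) -> str:
    """Scan a prompt before forwarding it to the model.

    `llm` is any callable str -> str standing in for the real model API.
    """
    verdict = scan_prompt(prompt)
    if not verdict["allowed"]:
        return "blocked by policy: " + ", ".join(verdict["findings"])
    return llm(prompt)
```

The proxy configuration moves the same check out of the orchestrator’s code entirely, making it transparent to callers at the cost of sitting inline on the request path.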
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
]]>
“Threat-Memory improves critical on-device protection capabilities,” said Eric Newcomer, principal analyst at Intellyx. “A device-based threat history enables local defense against repeated attacks and multi-step attacks. Together with AI, local history data creates actionable intelligence to stop fraud, prevent ATOs, and defeat social engineering.”
Since we last covered RackN in September 2023, the world has changed, as AI has come to dominate the technology landscape.
RackN has always delivered a platform for full-lifecycle automation of bare metal provisioning. Now with the rise of massive AI infrastructure deployments, the company’s platform has become a must-have.
RackN provides rapid, large scale bare metal automation for a variety of infrastructure requirements, including multi-vendor and air-gapped environments.
RackN offers zero-touch onboarding of bare metal servers from the moment they’re plugged in, configuring firmware, network hardware, and everything else necessary to prepare servers for whatever clustering, virtualization, or container platforms customers wish to install.
RackN’s standard workflows also handle patch and change management, cluster configuration, reracking, and even decommissioning – preparing end-of-life servers for reuse or resale.
RackN’s automated provisioning is essential for large AI deployments. Such deployments often require a mix of different vendors simply because no one vendor has sufficient inventory.
In addition, AI requirements move so quickly that infrastructure teams must reconfigure clusters on a regular basis, and tech refresh cycles are short and aggressive.
Without RackN, managing the infrastructure in such dynamic environments is prohibitively time-consuming and expensive, and subject to unacceptable downtime.
RackN frees infrastructure engineers from the tedious work of manual hardware provisioning, enabling them to spend time on more value-added activities while improving the resilience of the overall deployment.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. RackN is a former Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
]]>
“Enterprises have struggled to implement adequate data governance practices for years,” according to Jason Bloomberg, Managing Director of analyst firm Intellyx. “The result is fragmented data governance processes, insufficient accountability, and a lack of visibility that now impact their ability to deploy effective AI initiatives. With Trust3 AI, enterprises can now implement proactive, continuous data and AI governance that streamlines data governance with a single Trust layer across the entire data estate.”
]]>At #ZohoDay 2026 with Mani Vembu, CEO, Zoho. Intellyx analyst Jason Bloomberg interviews Mani on Zoho’s 30th anniversary about their new AppOS approach spanning developer and low-code application development, the future of the #SaaS space and #AI-based platforms, #abstraction layers and more.
Jason Bloomberg
Hi, this is Jason Bloomberg with Intellyx. Please introduce yourself.
Mani Vembu (00:13)
I’m Mani Vembu. I’m the CEO of Zoho Division.
Bloomberg (00:17)
Very good. So today we’re talking about AppOS, which is a new Zoho announcement. So give us a bit of background on AppOS. What is it and how did you come up with it?
Vembu (00:27)
So see, AppOS is, we call it the foundational platform for building the business apps. So we have 30 years of experience building, say, apps across multiple domains, like CRM, ITSM, HR, financial applications, ERP, and all those applications. So we believe that there is a good layer of abstraction that can be done across how the data is stored, how the workflows are executed, how the processes are being built, and in terms of reporting everything. And so if an abstracted layer is being offered for businesses to build applications, it could be accelerate the application development. And also with the new generation of AI tools, it also can minimize the whole development cycle. So that’s why we want to build AppOS, which is very specific for building business applications.
Bloomberg (01:17)
What is different about AppOS from what Zoho was doing before? You already had a platform for business applications with quite a number of applications and a common data model and common underlying infrastructure. There’s already a platform approach to what Zoho was doing. What’s new and different about AppOS?
Vembu (01:36)
See, so far, what we have done is we have a cloud infrastructure layer where each application will start at a little lower level in terms of rates. We abstracted certain levels of infrastructure across our apps. In the apps, we opened up some platform functionalities. For example, in the apps, we allow people to add their own custom modules, their own widgets, their own rates, say bringing their own custom processes. Those are all the customizations that we allow on top of our apps. That means, say, you can write your own code, which is custom function contextual. It is all I would say the contextual opening of platform functionalities. But what we are now doing is, say, from these apps, we took what is common across all these apps and tried to offer it as AppOS so that new apps can be built on top of it. Our own new apps, new apps we want to build on top of it, and the same we want to open for our customers and partners and other developers. That means we are standardizing on what we are building and what customers can build on. Rather than, say, we built on one platform which has much more flexibility for our developers, and then we only open contextual platform functionality for customers. Now we are going to open, say, the underlying layer, which is much more abstracted. That way, the customers don’t have to do the same work that we have done already.
Bloomberg (03:01)
So Zoho has offered Zoho Creator, a low-code tool for a number of years, Zoho Catalyst, more for focusing on enterprise integration. So you already supported the ability for customers to build custom apps. So I’m still a little unclear as to what’s changed with regard to building custom apps or custom integrations that you couldn’t do in Creator or Catalyst already.
Vembu (03:24)
So definitely you can do, right? Say, for example, you can look at, say, Creator as the highest end of abstraction. That means we abstracted almost until the UI layer and customers just have to say that. That means if you abstract at that level, then the flexibility drops. Then if you take the Catalyst on the other end, we opened up the infrastructure for developers. That means we opened up the DB, we opened up the writing custom functions, so we opened up memory, we opened up queue, so all the components of software stack. That means it is serverless. The value there is serverless. Customers don’t have to buy servers. That means, say at one side, it is a little abstraction, but a lot of flexibility to build the apps. On the other side in Creator, it is highly abstracted, not so flexible. That means it is meant for certain types of apps here, certain types of apps here. So AppOS fits exactly in the middle. That means we abstract it to a degree where developers don’t feel the lack of flexibility. We also give the flexibility of building their own custom UIs, custom clients, and all the other stuff. Because every app don’t have to follow the same a UI. That means they could actually build their own custom clients, say, the experience can be very different and building agents can be very different. That way, if you look at between the Catalyst and Creator, if you have an X axis, Y axis, where we talk about flexibility and say, complexity. Maybe AppOS sits on highly complex but more flexible. Creator fix on very easy, but a little less flexible. AppOS will be right in the middle where it offers the flexibility at the same time, it also abstracts whatever should be abstracted. So that’s how I would put it. And then even whatever you are now building, so what we are doing now is any app you build on Creator, that it will also actually deployed in AppOS only. So that means it will automatically use all the AppOS components and it deploys in the AppOS. 
That means it sits with all the other apps. And Catalyst will become the hosting environment for AppOS. As I showed in the demo, when you build an app, the app is only the front-end, the application side. So the DB is actually in the AppOS. So that means when you build the app, you need a hosting environment. So Catalyst will become the hosting environment for the app.
Bloomberg (05:59)
So for a customer who’s already been building custom apps or customizations in Zoho, how difficult is it to move to AppOS? I mean, are they going to want to move from the existing platform to AppOS? And if so, what is involved in making that move?
Vembu (06:16)
This will be very seamless. That means for customers, they may not even know that because AppOS is going to actually be part of every app. That means every app moves into AppOS. That means it is going to only give much more flexibility for the app. So that means they could do more customization and all the tools remain the same. So that means whatever we have given the low-code tools, the experience of low-code tools are not going to change. That means our process builder called Blueprint will remain the same. Our Module Builder will remain the same. Our Workflow Builder will remain the same. So all those low-code tools will automatically work with AppOS. So that means there is no migration for the customer. We are only migrating our own apps and our own, right? Say, to use the same two components rather than… Because if you look at the current ecosystem where since we opened up at the infrastructure level, not abstract at certain levels. So some of these features will be implemented differently across different apps. Now we are unifying them. So making sure that AppOS has a consistent experience across these functionalities. And that means, say, for the customer, it will be very easy. And for the end customer, they may not even know that these apps are getting migrated because the data migration, app migration, the customizability, everything will be seamless and unified.
Bloomberg (07:31)
Of course, no discussion today would be complete without talking about AI, of course. But I wanted to drill down a bit and to assume specifics here. One is building AI agents and the other is vibe coding. Both of these are very, very hyped up trends today. But let’s start with agents. What’s the AppOS story for AI agents? Can customers build agents? Are they leveraging agents within the Zoho platform or both? Then what do they do with the agents?
Vembu (08:01)
For AI agents, we already launched the Agent Studio, where customers can start to build agents. We also give ourselves a lot of native agents, but customers also can start to build agents. Already, our customers started to use the Agent Builder to build, Agent Studio to build the agents. And within AppOS, for the developers, this will also be available as part of the stack. That means if you are building an app and your app has to have some native agents, then you can actually build those native agents as part of the apps itself. And then every application, because our Agent Studio already works with all the apps. So any custom app or anything, any app you build on it, so it will be like Agent Studio native. That means Agent Studio understands it because it’s all part of the same stack. So that’s the story with building agents and deploying agents. Then on the vibe coding part, so as you can see, we are opening this at two levels. One is through MCP for the AI codegen IDEs because that’s what many developers are already using. So we want to be supporting the ecosystem there. And we will also give our own ID. So completely, you can prompt-based development will give, where you could actually use that to generate. So we give the choice to the customers. If they are already using an IDE, then we will plug in AppOS into that. If they are not using it, they can use ours.
Bloomberg (09:27)
Okay, so you mentioned prompt-based development, which is at the core of what we mean by vibe coding these days. When you think about prompt-based development, on the one hand, you have somebody verbally describing what they want an application to do, and then the platform builds it. But you may also feed in a pre-written requirements document, and then that is more specific in terms of what the application, what the platform can build. But in the demo you showed this morning, you showed some Cursor as well. It’s a third-party tool for… It’s basically a command-line tool for vibe coding. I didn’t quite understand from your demo how Cursor fit into the Zoho story and why there wasn’t just a Zoho tool that did the same thing. Why did you have Cursor in the demo?
Vembu (10:14)
Yeah, see, already we have the tool. In the UI itself, the tool is there. We already have a demo also for that, the same application built-in thing, our own vibe coding. And prompt-driven development. But we just want to show, say, how it is developer friendly, where if the developers are already using an IDE, so they can still use back-end AppOS as their target and then generate code for AppOS. So through MCP, we connect there and then allow customers, developers to generate code. And then the whole idea is the way we are looking at this, we want, because there is business and IT in any organization, where business drives the requirements and IT drives the execution. So how we want to change this with, say, in the AI world is, say, we want business to still own the requirements, but business have to make sure the requirements are given as document. Then our tool will take this document, generate it as a technical spec. Say, for example, you already have certain modules in CRM and this new process requires some more new modules, and with some relationships, it will automatically put those. We will do that agent orchestration there and then we will put those. And then it will show to the IT, the CRM administrator, whether this is what they want to do based on the requirement. And if they are okay with it, then we will generate these as tasks and then ask them any ID. It could be third-party ID or it could be our own ID to generate this customization as part of the platform. So then that means from requirements to the production. So that means pre-production, where you can do this and host it in a sandbox and then have your business do the UAT. During the UAT, you will find out some changes. You can do the change management through the same cycle, and then finally it goes to production. That means, right? 
We want to take the AI through the whole SDLC process where there is a human in the loop, but the human is not doing the heavy lifting, whereas the AI is doing the heavy lifting and human is only verifying and reviewing and making sure that the changes are done right. So this is the process we want to achieve with the human and the AI working together.
Bloomberg (12:33)
It makes sense. So I have one final question, and this is the bigger picture question in terms of the economics of SaaS-based business applications and how AI is changing the overall economics. This came up this morning as well, that it’s applying downward pricing pressure, it’s constraining margins. What we’re seeing as analysts is also we’re seeing a whole range of different startups, just brand new companies. They’ve only been around for a few months saying that they can use the agentic AI to create a bunch of business applications that are all pre-integrated. It sounds like, wow, all this work that Zoho put in to build Zoho One over the years, and now along comes a two-month-old startup, and they can do it with AI and build something equivalent. I don’t believe it. I don’t believe you can build a Zoho One in two months, but that’s what they’re saying. Clearly, that is going to apply pricing pressure on Zoho and other business competitive constraints. How does Zoho, especially talking about AppOS, how is Zoho leveraging AppOS to combat the dozens of AI-based startups who are trying to tell similar stories to what Zoho’s story has been for a number of years now?
Vembu (13:50)
Yeah, so the challenge is, say, first you need to understand the domain or the vertical value are in. So that means to even give the prompt, you have to know what is the prompt, what is the requirements. So that means that’s going to actually take some time. And even if we assume that there is a domain expertise who is doing that. When that means the domain expertise is only in that particular domain. So if a bunch of domain experts do it for their own domain, it’s all going to be again a silo. That means so built on the cloud infrastructure, everything is safe from scratch. So It all comes with its own challenges of integrating and all those. That’s why we are now fixing the foundational layer, where we want to give a solid foundation for business apps. Then we also want to increase these developers who are trying to build apps to get on to this foundation. They already get the foundation clear. The foundation has the IAM, the messaging, all the components, the workflows, the processes, everything, the governance, the permission layer, everything is for the business. We already have millions of customers and then so many users. Then it instantly gives a market for these developers. That’s how we see the ecosystem play. When the barrier to entry is low. There is also a challenge of, let’s say, each one creating their own islands. It’s going to… Because the compliance is a challenge now, and every vendor cannot invest in compliance. That’s where this a consolidated framework and platforms will play a bigger role. That’s what we are seeing as from moving from apps to a platform will have a tremendous value in the future.
Bloomberg (15:35)
Very good. Well, that’s all my questions. I think we’re out of time. Thank you very much. Really appreciate it.
Vembu (15:39)
Thank you, Jason. Thanks for your time. Thank you.
©2026 Intellyx B.V. Intellyx is editorially responsible for this content. At the time of production Zoho Corporation is an Intellyx subscriber.
We’ve covered BMC Helix many times over the years, both when it was part of BMC Software as well as after the companies separated. In June 2025, I wrote how the company was rolling out several AI agents in support of its AIOps capabilities.
In addition to these agents, BMC Helix has a different, perhaps less flashy application of AI that it is uniquely qualified to offer: Remedy migration.
Remedy was a leading IT service management (ITSM) tool in the 1990s. BMC Software acquired Remedy in 2002, when it became the foundation of BMC’s ITSM offering.
One of the reasons Remedy was a market leader in its day was its ease of configuration. As a result, Remedy customers would often make many modifications to the base platform to meet their particular needs.
Now that Remedy is legacy technology, organizations are struggling to replace it, in large part because of those customizations.
To help address this challenge, BMC Helix leverages AI as part of its Remedy migration service.
BMC Helix identifies and documents all customizations – both configurations and integrations – and then designs and implements the migration to BMC Helix ITSM.
This practical application of AI lowers the risk of such migrations while preserving valuable business logic and accelerating the migration process.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. BMC Helix and BMC Software are Intellyx customers. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
Telus Digital, a division of Canadian telecommunications giant Telus, offers a broad range of customer experience management, AI transformation, and digital solutions.
Telus Digital is particularly focused on leveraging AI to improve the business value contact centers can provide by extracting and leveraging the insights within contact center interactions.
Among the Telus products that stand out are its real-time coaching and agent assistance tools. These tools listen in on customer calls and provide the human agent with real-time pointers and background information based upon the content and flow of the call.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
CipherData offers an AI-native detection and response platform that provides SOC analysts with context-aware risk insights as well as full-lifecycle, end-to-end response coverage.
This lifecycle, what CipherData calls the ‘security flywheel,’ includes detection, discovery, context, response, and improvement – closing the loop to feed back insights to improve the platform’s response.
CipherData supports dynamic queries, where either the analyst provides additional context within each query manually or asks an AI agent to make queries automatically.
CipherData reduces false positives, increases SOC analyst productivity, and improves organizations’ mean time to respond (MTTR), thus lowering the cost of SOC operations overall.
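MTTR is a standard SOC metric: the average elapsed time between detecting an incident and responding to it. As a minimal sketch of how that figure is computed (this is the generic metric, not CipherData's implementation), given pairs of detection and response timestamps:

```python
from datetime import datetime, timedelta

def mean_time_to_respond(incidents):
    """Average of (responded_at - detected_at) across (detected, responded) pairs."""
    deltas = [responded - detected for detected, responded in incidents]
    return sum(deltas, timedelta()) / len(deltas)

incidents = [
    (datetime(2025, 1, 1, 9, 0), datetime(2025, 1, 1, 9, 30)),   # 30 minutes
    (datetime(2025, 1, 2, 14, 0), datetime(2025, 1, 2, 15, 0)),  # 60 minutes
]
print(mean_time_to_respond(incidents))  # 0:45:00
```

Anything that shaves minutes off each response, such as fewer false positives for analysts to chase, pulls this average down directly.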
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
Spear phishing is a targeted, personalized phishing attack that uses believable messages to trick victims into clicking a malicious link or downloading a malware-infested file.
As AI-empowered phishing tools continue to evolve and drop in price, hackers are mounting increasingly sophisticated spear phishing attacks – over email as with traditional phishing, but also over various messaging channels as well as via voice cloning (aka ‘vishing’) – AI-generated voice that mimics a real person.
Brightside offers an anti-spear phishing tool that uses AI to mount realistic training scenarios that mimic sophisticated spear phishing attacks. The training works across messaging types including simulated voice interactions.
Brightside offers gamified learning and is compliant with the NIST difficulty model for employee phishing training.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
Arqit helps enterprises migrate their encryption technology to post-quantum computing (PQC) by leveraging intelligence about their existing encryption deployments.
Arqit discovers all encrypted traffic in an organization, identifies weak or otherwise vulnerable cryptography, prioritizes the associated risks, and then builds migration roadmaps to facilitate the transition to PQC.
Arqit deploys quantum-safe symmetric keys that enable devices to create, rotate and manage their encryption keys following zero-trust principles, an alternative to quantum key distribution.
Arqit’s migration plans are sensitive to legacy technology challenges. For example, when it’s impossible to upgrade operational technology, Arqit will provide a gateway that establishes PQC encryption between legacy endpoints.
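The discover-identify-prioritize flow described above can be sketched in a few lines. This is purely illustrative triage logic, not Arqit's actual algorithms: the algorithm names, risk tiers, and `triage` function are assumptions for the example. The general idea is that already-broken classical algorithms demand immediate replacement, while currently-secure but quantum-vulnerable public-key algorithms go on the PQC migration roadmap.

```python
# Hypothetical triage of a discovered-crypto inventory.
# Tier 1: algorithms widely considered weak today.
WEAK_TODAY = {"RSA-1024", "3DES", "RC4", "SHA-1"}
# Tier 2: fine classically, but vulnerable to a future quantum computer.
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}

def triage(inventory):
    """Map each (endpoint, algorithm) record to a migration action."""
    findings = []
    for item in inventory:
        algo = item["algorithm"]
        if algo in WEAK_TODAY:
            findings.append((item["endpoint"], algo, "replace now"))
        elif algo in QUANTUM_VULNERABLE:
            findings.append((item["endpoint"], algo, "plan PQC migration"))
        # Symmetric ciphers like AES-256 are generally considered
        # quantum-resistant at sufficient key sizes, so no action here.
    return findings

inventory = [
    {"endpoint": "vpn-gw-1", "algorithm": "3DES"},
    {"endpoint": "api-lb",   "algorithm": "RSA-2048"},
    {"endpoint": "web-1",    "algorithm": "AES-256"},
]
for endpoint, algo, action in triage(inventory):
    print(endpoint, algo, action)
```

The same classification logic also explains Arqit's emphasis on quantum-safe symmetric keys: symmetric cryptography at adequate key lengths lands in the "no action" bucket, which is what makes it a viable foundation during a long migration.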
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
Flytxt has offered its telco-focused knowledge base and federated learning architecture for many years, helping communications service providers (CSPs) gain essential intelligence about their market dynamics.
Today, Flytxt is leveraging its AI expertise to bring to market an agentic AI platform that helps CSPs optimize their marketplace offerings.
The Flytxt platform enables CSPs to craft their subscription-based products, bring them to market, and support subscribers via ongoing customer relationships.
The core of this offering is optimization: Flytxt helps CSPs optimize their customized subscription offerings across various subscriber personas.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].
Vericta from Lawwwing leverages a deep detection approach that can recognize AI-generated or modified images and videos with over 98% accuracy, with far fewer false positives than competing technologies.
Use cases include forensic evidence authentication, catfishing scam prevention, and product image verification.
Vericta is model-agnostic and works with all common AI tools. It also provides a deepfake detection API.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at [email protected].