AWS and Cerebras Team Up to Accelerate AI Inference in the Cloud

Amazon Web Services and AI chipmaker Cerebras Systems have announced a collaboration aimed at dramatically accelerating how AI models generate responses in the cloud. The partnership combines AWS’s Trainium processors with Cerebras’ wafer-scale AI systems in a new architecture designed specifically for inference, the stage when trained AI models generate outputs.

The technology will be deployed in AWS data centers and delivered through Amazon Bedrock, the company’s managed platform for generative AI. By pairing two specialized processors and linking them with AWS’s Elastic Fabric Adapter networking, the companies say they can significantly increase the speed and capacity of AI inference workloads.

“Inference is where AI delivers real value to customers, but speed remains a critical bottleneck for demanding workloads like real-time coding assistance and interactive applications,” said David Brown, VP of compute and machine learning services at AWS. He added that splitting inference tasks across different processors allows each system “to do what it’s best at,” producing performance that could exceed existing approaches.

The collaboration also carries strategic implications for AWS’s custom silicon strategy. Trainium processors were initially positioned as alternatives to GPUs for training large models. By applying them to inference workloads in combination with specialized chips from partners like Cerebras, AWS may broaden the role of its in-house silicon across the AI stack.

Disaggregated Inference

The collaboration centers on a technique known as disaggregated inference. Traditionally, AI accelerators perform every step of inference on the same chip. AWS and Cerebras are instead separating the process into two distinct stages, prompt processing and response generation, and assigning them to different hardware.

The first stage, often called prefill, analyzes the user’s prompt and prepares the model’s internal data structures. This step is computationally intensive but does not require large amounts of memory bandwidth. AWS’s Trainium processors, designed as custom AI chips for cloud workloads, are well suited to handle this portion of the pipeline.

The second stage, known as decode, generates the output tokens that form the model’s response. Because each token is produced sequentially and requires repeated access to model data stored in memory, decode places heavy demands on bandwidth. Cerebras’ CS-3 system, powered by its wafer-scale WSE-3 processor, is optimized for precisely this type of workload.
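
The division of labor can be illustrated with a toy sketch. This is purely illustrative: all names are hypothetical, and real systems hand off a key-value (KV) attention cache between accelerators over a fabric such as EFA.

```python
# Toy sketch of disaggregated inference: a compute-heavy prefill stage on one
# device and a bandwidth-heavy decode stage on another. Illustrative only.

def prefill(prompt_tokens):
    """Process the whole prompt at once; returns the model's KV cache."""
    return [(i * 31 + t) % 997 for i, t in enumerate(prompt_tokens)]

def decode(kv_cache, max_new_tokens):
    """Generate tokens one at a time; each step re-reads the growing cache."""
    output = []
    for _ in range(max_new_tokens):
        nxt = sum(kv_cache) % 997   # touches every cached entry: bandwidth-bound
        output.append(nxt)
        kv_cache.append(nxt)        # cache grows with every generated token
    return output

# In the architecture described above, "Trainium" would run prefill and the
# "CS-3" would run decode; the KV cache is what moves between them.
cache = prefill([101, 7, 42, 9])
print(decode(cache, max_new_tokens=5))
```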

Cerebras’ chip architecture differs sharply from conventional processors. Rather than dividing a silicon wafer into many small chips, the company builds a single processor across the entire wafer. The resulting device includes roughly 900,000 cores and massive on-chip memory bandwidth. That design allows it to move data between logic and memory circuits far more quickly than many conventional AI accelerators.

Andrew Feldman, CEO of Cerebras, said the partnership expands access to the company’s technology. “Partnering with AWS to build a disaggregated inference solution will bring the fastest inference to a global customer base,” he said. “Every enterprise around the world will be able to benefit from blisteringly fast inference within their existing AWS environment.”

Optimizing Inference

The systems will run foundation models offered through Amazon Bedrock, including open-source LLMs as well as Amazon’s Nova models. Cerebras hardware will be installed directly in AWS data centers so customers can access the capability through the same cloud interfaces used for other AI services.

The initiative is part of a larger shift in the AI industry toward optimizing inference performance rather than focusing solely on training large models. As generative AI applications spread into enterprise workflows, from software development to automated customer support, response speed is central to user experience.

Some workloads, like AI coding agents, generate far more output tokens than typical chat interactions, creating additional pressure on infrastructure. In these cases, faster decode performance can significantly reduce latency and improve productivity for developers using AI tools.

The companies expect the combined Trainium and CS-3 infrastructure to become available through Amazon Bedrock in the coming months.

Zscaler Expands Data Sovereignty Controls as Compliance Pressures Mount

Zscaler has expanded the global data sovereignty capabilities of its Zero Trust Exchange platform, introducing new regional controls and compliance features geared to help enterprises manage sensitive data within national boundaries.

The update reflects growing demand among multinational organizations for security platforms that can satisfy strict regulatory requirements while maintaining the performance needed for cloud services. Governments and industry regulators in Europe, North America and Asia are increasingly requiring companies to keep certain categories of data within local jurisdictions.

Decentralized Architecture

Zscaler’s approach relies on a decentralized architecture that separates three operational layers: the control plane, the data plane and the logging plane. Each layer performs a distinct function, enabling companies to maintain local authority over how data is processed and recorded.

“Effective data sovereignty requires customers to have verified authority over their data residency, telemetry and control data plane data,” said Misha Kuperman, chief reliability officer at Zscaler. “By separating control, data, and logging planes with a decentralized architecture, Zscaler enables customers to align with strict local sovereignty requirements while maintaining the resilience and availability needed for global business continuity.”

Analyzing Encrypted Network Traffic Locally

One of the most significant additions is the ability to perform encrypted traffic inspection within the same region where the data originates. Because the system decrypts and analyzes encrypted network traffic locally, it can identify potential malware without transferring the underlying files outside the jurisdiction.

For enterprises operating under regulatory requirements, Zscaler is also offering a deployment option called Private Service Edge. These single-tenant appliances are hosted by the customer but managed by Zscaler, giving organizations greater control over the infrastructure while retaining the company’s cloud-based security services.

A key feature for companies worried about data security: Independent third-party assessments confirm that the platform processes encrypted traffic without writing sensitive information to disk, a design intended to reduce the risk of data exposure.

Addressing Overlapping Regulatory Frameworks

The company has also introduced new compliance features aimed at simplifying the process of meeting overlapping regulatory frameworks. Its Collect Once, Certify All model allows a single security control set to align with multiple regulatory standards, including Europe’s General Data Protection Regulation (GDPR), the NIS2 cybersecurity directive and the U.S. Department of Defense’s Impact Level 5 requirements.

Additionally, customers retain exclusive control over encryption keys through integration with hardware security modules. This ensures that only authorized parties can decrypt protected traffic, providing an extra layer of assurance for organizations that must demonstrate strict custody over their data.

Zscaler operates its own globally distributed security cloud rather than relying solely on third-party hyperscaler infrastructure. The company claims this approach allows the platform to maintain service continuity even if a single data center becomes unavailable.

AI Boom Drives Global Server Market to Record $444 Billion in 2025

The global server market reached unprecedented heights in 2025 as companies invested heavily to build AI infrastructure, according to new data from research firm IDC.

The industry closed the year with a remarkable $444.1 billion in revenue, an 80.4% increase compared with 2024. The final quarter alone generated $125.3 billion, the largest quarterly total ever recorded for server vendors.
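
Taken together, those figures imply a 2024 base of roughly $246 billion:

\[
R_{2024} = \frac{\$444.1\text{B}}{1 + 0.804} \approx \$246.2\text{B}
\]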

Driving this dramatic expansion were hyperscalers and large service operators pouring capital into next-gen systems capable of training and running increasingly complex AI models, pushing demand for high-performance servers to new levels.

“The race for AI adoption is setting the market pace with companies starving for infrastructure looking not only for GPUs but also consuming more CPUs among other components in order to feed their needs,” said Juan Seminara, research director for Worldwide Enterprise Infrastructure Trackers at IDC. He added that growing demand could lead to higher prices even if shipment volumes moderate.

Accelerated Servers and the Move Beyond x86

One of the clearest indicators of the AI boom is the rapid rise of accelerated servers, systems equipped with graphics processors and other specialized accelerator chips.

According to IDC, revenue from servers equipped with embedded GPUs jumped 59.1% year over year in the fourth quarter and represented more than half of total server market revenue during the period. These systems are widely used to train machine learning models and run large AI applications.

At the same time, the types of processors used in servers are shifting. Revenue from traditional x86-based systems increased a robust 16.9% year over year in the fourth quarter to reach $69.8 billion. But non-x86 platforms, many designed specifically for high-performance computing, surged 146.4% to $55.5 billion.
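
As a cross-check, those two segments sum exactly to the quarter’s record total reported above:

\[
\$69.8\text{B} + \$55.5\text{B} = \$125.3\text{B}
\]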

Hyperscalers vs. Enterprise

Hyperscalers often purchase servers directly from original design manufacturers (ODMs), which build systems specifically for large cloud deployments. IDC said ODMs accounted for more than half of total server market revenue during the quarter.

Traditional enterprise buyers, by contrast, have taken a more cautious approach to spending. Many organizations remain careful with capital investments amid uncertain economic conditions, even as AI initiatives push them toward greater infrastructure capacity.

Global Trends

The continued growth of cloud computing and AI services has kept demand for servers strong across most regions.

The US led global growth in the fourth quarter, with server revenue increasing 72.4% year over year. IDC attributed much of this expansion to an 80.1% jump in accelerated server deployments.

Canada followed closely behind with 70.7% growth. Europe, the Middle East and Africa recorded a 43.6% increase, while the Asia-Pacific region excluding Japan and China grew 27.9%.

China and Latin America posted more moderate gains of 17.7% and 12.8%, respectively. Japan was the only major region to decline, with revenue dropping 4.7% compared with a particularly strong investment cycle the previous year.

Vendor Horse Race

Among vendors, Dell Technologies held the lead, with a 10% share of global server revenue. Supermicro followed closely with 9.3% share, benefiting from strong demand for AI-focused systems.

IEIT Systems and Lenovo were statistically tied for third place with about 4% market share each, while Hewlett Packard Enterprise ranked fifth with 3.1%.

Despite the market’s stunning growth, IDC cautions that supply constraints and rising component costs could create challenges in the near future. Prices for key hardware components, including GPUs, memory and solid-state storage, have become increasingly volatile as demand outpaces manufacturing capacity.

If those pressures continue, companies may face higher infrastructure costs in the coming year. Even so, the momentum behind AI suggests that server demand will remain strong.

The Network Migration That Didn’t Break Anything

Source: Nokia

Let’s be honest: network migrations have a reputation—and not a great one.

Even in well-prepared organizations, replacing live infrastructure is rarely stress-free. Brownfield environments are layered with dependencies, historical configs, and workarounds. 

That’s why Nokia’s global data center migration story is worth attention—not just for the technology involved, but for how they managed to move from legacy complexity to a fully modernized fabric without breaking things along the way.

This wasn’t a pristine greenfield rebuild. It was a live migration, touching critical systems like manufacturing operations that run 24×7. And Nokia didn’t just get through it—they built a repeatable model now powering a full global rollout.

So how’d they do it?

A Controlled, Parallel Path

First, Nokia chose not to “rip and replace.” Instead, they designed a parallel fabric—deploying their new infrastructure side-by-side with the legacy network, building out the new SR Linux-based topology while old services continued running.

This gave the team breathing room. They could validate the new design in real-world conditions without putting production traffic at risk. It also let them move applications incrementally, avoiding large cutovers that would have required extended downtime or heroic efforts to troubleshoot.

By isolating the new fabric during initial deployment, they ensured that no live services were disrupted. And as the new environment matured, they used Nokia Event-Driven Automation (EDA) and a digital twin to simulate every migration step before taking action.

Real-World Cutovers, Real-Time Confidence

One of the most powerful tools in this process was EDA’s ability to model changes ahead of time. The team could replicate their production state, simulate the effects of a configuration update or service move, and verify outcomes before anything went live.

This wasn’t a single feature. It was part of a new operating model: validate everything, execute with automation, and monitor post-change behavior through telemetry.

And it paid off.

In the deployments for their European data centers, Nokia IT migrated dozens of critical workloads without unplanned downtime. That includes a manufacturing application that used to crash every month in the old network due to brief connectivity blips. Post-migration? Zero complaints. Zero resets.

It wasn’t magic. It was planning, process, and platforms working together.

Scaling a Repeatable Model

What’s impressive is how quickly the initial deployments turned into a playbook.

The Nokia team applied these best practices to their US data center migration and are preparing to migrate data centers across other regions. And they weren’t starting from scratch each time. They had a repeatable operating model built around:

  • A validated architecture
  • A CI/CD pipeline for config changes
  • A digital twin for simulation
  • And a well-understood migration sequence

Every migrated site followed the same structure: build the new fabric in parallel, run validation in EDA, move services incrementally, and retire the legacy infrastructure only after success was confirmed.
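
In code terms, that sequence might look like the sketch below. Every class and method name here is a hypothetical stand-in, not a Nokia EDA API; it only illustrates the validate-apply-verify-retire pattern.

```python
# Hypothetical sketch of the migration loop: simulate each change on a
# digital twin, apply it via the pipeline, verify telemetry, and only then
# retire the legacy path. All names are illustrative stand-ins.

class DigitalTwin:
    def simulate(self, plan):
        # A real twin replays the change against a copy of production state.
        return all(step.get("valid", True) for step in plan)

class Fabric:
    def __init__(self): self.applied = []
    def apply(self, plan): self.applied.extend(plan)
    def telemetry_healthy(self, service): return True  # stand-in check

def migrate_service(service, plan, twin, new_fabric, legacy_services):
    if not twin.simulate(plan):                 # validate before touching prod
        raise RuntimeError(f"{service}: simulation failed, aborting")
    new_fabric.apply(plan)                      # push via the CI/CD pipeline
    if new_fabric.telemetry_healthy(service):   # post-change verification
        legacy_services.discard(service)        # retire only after success
    else:
        raise RuntimeError(f"{service}: telemetry degraded, roll back")

legacy = {"erp", "mes"}
migrate_service("mes", [{"valid": True}], DigitalTwin(), Fabric(), legacy)
print(legacy)  # {'erp'} -- the migrated service has left the legacy fabric
```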

This repeatability is what separates successful transformations from one-off projects. Nokia didn’t just survive the first migration—they learned from it, operationalized it, and scaled it.

When Operations Become a Strength

What stands out most in this story is not the gear, or even the automation—it’s the operational maturity Nokia achieved in the process.

They went from an environment where engineers were hesitant to make changes—to one where deployments happen routinely, safely, and with confidence. In fact, they now make changes more frequently than before, because automation and validation have removed the risk.

One team member described it this way: “We push the pipeline, press the button—done. No outage.”

That kind of change in velocity simply wasn’t possible before. And it shows how infrastructure modernization can unlock more than just uptime—it can unleash the team.

Why This Migration Matters

Nokia’s data center transformation isn’t unique in its challenges. But it is unique in how it addressed them.

  • They didn’t wait for a clean slate—they built change into the fabric of their operations.
  • And they didn’t shy away from risk—they managed it, simulated it, and neutralized it with process and platform alignment.

This is what brownfield success looks like.

Not flashy, but effective and precise.

And in a world where IT is increasingly expected to deliver speed without sacrificing stability, Nokia’s story is a reminder that yes, you can modernize without breaking things—if you do it with the right strategy.

To dive deeper into the architecture, tools, and migration practices behind the Nokia IT team’s success, read the full report.

This is the third blog in a three-part series. Read the full series here.

Poll Shows Mixed U.S. Sentiment Toward Data Centers’ Role in Communities

Research released by the Pew Research Center finds that while most Americans recognize the growing presence of data centers, many are uneasy about their environmental footprint and impact on local communities. This research echoes reporting in Techstrong.it, which indicates that Americans are struggling with today’s rapid data center buildout.

The survey, conducted in late January among 8,512 U.S. adults, marks Pew’s first effort to measure public opinion on data centers. It arrives as tech companies accelerate construction of these facilities to meet the voracious land and resource demands of AI and cloud services.

According to the survey, awareness of data centers is already widespread. Seventy-five percent of Americans say they have heard at least something about the facilities. Among them, 25% report hearing “a lot,” while 50% say they have heard “a little.” The remaining 25% say they have heard nothing at all about data centers.

Environmental and Quality of Life Issues

When asked about environmental impact, Americans express far more concern than optimism. Thirty-nine percent say data centers are mostly bad for the environment, compared with just 4% who say they are mostly good. The remainder say the effect is neutral, uncertain, or they are unfamiliar with the topic.

Views about electricity costs follow a similar pattern. Thirty-eight percent of respondents say data centers negatively affect home energy costs, while only 6% say the facilities have a positive effect. Many respondents believe their influence is either neutral or unclear, but negative views clearly outweigh positive ones.

Public sentiment is skeptical when it comes to local quality of life. Thirty percent of Americans say data centers are mostly harmful for people living nearby. By contrast, only 6% believe they improve the quality of life in surrounding communities.

Economic considerations tell a somewhat different story. Americans are more likely to say data centers benefit local employment and municipal finances than to say they damage them. Twenty-five percent say data centers are mostly good for local jobs, while 15% say they are mostly bad. On local tax revenue, 23% say the facilities bring positive effects, compared with 12% who view them negatively.

Even in these economic areas, however, strong enthusiasm is limited. In each category, sizable portions of the public say they are unsure about the impact or believe it is neither positive nor negative.

Attitudes Influenced by Political Affiliation

Political affiliation plays a defining role in shaping views toward data centers. Among Democrats and Democratic-leaning independents, 50% say data centers are mostly harmful for the environment. Among Republicans and Republican-leaning independents, that figure drops to 31%.

The same divide appears in perceptions of energy costs and quality of life. Forty-four percent of Democrats say data centers increase home energy costs, compared with 33% of Republicans. When asked about local quality of life, 37% of Democrats say the facilities are mostly harmful, compared with 24% of Republicans.

Within the Democratic coalition, ideological differences are also visible. Sixty-six percent of liberal Democrats say data centers harm the environment, compared with 38% of moderate or conservative Democrats. On energy costs, 57% of liberal Democrats say the facilities have negative effects, while 35% of moderate or conservative Democrats share that view.

Age also influences opinion. Younger Americans are more likely to express environmental concerns. Fifty-four percent of adults under 30 say data centers are harmful to the environment. Among adults ages 30 to 49, the figure falls to 44%. It declines further to 35% among adults ages 50 to 64 and to 26% among those 65 and older.

As the digital economy grows, communities across the country are facing new decisions about whether, and how, to host these power-hungry facilities. The Pew survey suggests Americans are still weighing the tradeoffs between economic opportunity and environmental impact.

IBM Unveils Architecture to Combine Quantum and Classical Supercomputing

IBM has introduced a new reference architecture created to bring quantum processors and classical supercomputers into a unified computing platform. The framework outlines how hybrid systems could work together to tackle scientific problems that remain out of reach for either technology alone.

The announcement reflects broad agreement within the quantum sector: near-term breakthroughs are likely to come not from standalone quantum machines, but from systems that combine quantum hardware with established HPC infrastructure.

IBM refers to the model as quantum-centric supercomputing, a design that coordinates quantum processing units (QPUs) with clusters of classical CPUs and GPUs. The architecture also incorporates high-speed networking, shared storage, and orchestration tools that enable the different systems to operate within a single workflow.

Harnessing Two Systems

Distributing tasks between the two computing systems means harnessing two very different worlds. Quantum processors perform calculations that benefit from quantum mechanical effects, while classical systems manage supporting workloads such as parameter optimization, error mitigation, and data preparation. The systems exchange information repeatedly until a solution is reached.
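
That iterative exchange can be sketched as a small variational loop, shown below using Qiskit’s local reference primitives and a SciPy optimizer. This is a minimal illustration of the pattern, not IBM’s quantum-centric supercomputing stack; the circuit and observable are arbitrary examples.

```python
# Minimal sketch of a quantum-classical loop: the "QPU" (here a local
# statevector simulator) evaluates an energy, while a classical optimizer
# proposes new parameters each iteration until convergence.
import numpy as np
from scipy.optimize import minimize
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.quantum_info import SparsePauliOp
from qiskit.primitives import StatevectorEstimator

theta = Parameter("theta")
ansatz = QuantumCircuit(2)
ansatz.ry(theta, 0)                     # parameterized rotation
ansatz.cx(0, 1)                         # entangling gate

observable = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5)])
estimator = StatevectorEstimator()

def energy(params):
    # "Quantum" step: evaluate the expectation value for current parameters.
    result = estimator.run([(ansatz, observable, params)]).result()
    return float(result[0].data.evs)

# "Classical" step: SciPy drives the parameter search; the two sides
# exchange information every iteration.
opt = minimize(energy, x0=np.array([0.1]), method="COBYLA")
print(opt.x, opt.fun)
```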

Researchers say this approach adapts to the practical realities of current quantum technology. Fully fault-tolerant quantum computers, machines capable of operating reliably at scale, remain under development. In the meantime, hybrid systems are widely accepted as a path toward applying quantum techniques to real scientific questions.

IBM’s new architecture organizes this environment into multiple layers, including hardware infrastructure, system orchestration, middleware, and application software. These layers define how quantum processors interact with traditional computing resources and how developers build hybrid applications that span these contrasting systems.

One challenge the IBM architecture seeks to address is the fragmented nature of existing workflows. Quantum systems and supercomputers typically operate as separate platforms, forcing researchers to manually coordinate scheduling, move datasets between machines, and manage job execution across different environments. IBM’s proposal aims to streamline that process by integrating orchestration tools and software frameworks.

Among the tools incorporated into the architecture is Qiskit, IBM’s open source software development kit for quantum programming. By embedding quantum capabilities into familiar development environments, the company hopes to make hybrid computing more accessible to scientists and engineers.

The framework also introduces mechanisms for managing resources across classical and quantum systems. For example, the architecture proposes interfaces that allow classical workload schedulers to interact with quantum hardware and allocate resources as needed.

A Roadmap Built on Hybrid Architecture 

IBM’s roadmap envisions several stages of integration between quantum and classical computing. The earliest stage treats quantum processors as specialized accelerators attached to HPC systems, much like GPUs were first introduced into supercomputing environments. Later stages involve tighter coupling between systems, with lower latency connections and more advanced hybrid algorithms.

In its final stage, the roadmap anticipates fully co-designed platforms with quantum and classical components engineered together from the start. Such systems would allow complex hybrid workflows to operate seamlessly across both types of hardware.

While the architecture is largely conceptual, IBM says elements of the hybrid model are already being tested in research settings. Scientists have used quantum-classical workflows to simulate molecular structures and analyze biological compounds.

Still, significant technical hurdles remain. Quantum processors must become more reliable and scalable, and hybrid systems must overcome performance bottlenecks created by network latency and coordination between the two computing environments.

Despite those challenges, industry analysts say hybrid architectures are likely to define the next phase of quantum computing. Rather than replacing classical systems, quantum processors are expected to operate alongside them.

Boomi Extends Data Management Reach for the AI Era

Boomi this week added new data management and governance capabilities to its integration platform-as-a-service (iPaaS) environment, including a set of patterns that provide workflow guidance and recommendations for artificial intelligence (AI) agents.

Additionally, the company has added a Boomi Meta Hub that provides a central system of record for business logic across an application ecosystem, along with governance capabilities that are now being extended to the Cortex Agent developed by Snowflake.

Boomi has also added Agent Session Logs to enable organizations to audit workflows and extended its governance capabilities to SAP data.

Finally, Boomi also unfurled a European Platform Instance that enables organizations to deploy its iPaaS within a sovereign cloud computing environment hosted within the borders of the European Union (EU).

Michael Bachman, head of research and emerging technology for Boomi, said each of these additions and extensions to the Boomi iPaaS environment is intended to enable organizations to streamline the management and governance of data using a framework that is already widely used to integrate applications and workflows.

Those capabilities are more critical than ever in an era where massive amounts of data will soon be accessed by thousands of AI agents in near real time, he added.

In effect, Boomi is now pressing the case for managing and governing all that data within the context of an iPaaS rather than requiring organizations to acquire and deploy an additional framework from a third-party vendor. The Boomi iPaaS now provides a central point of governance, said Bachman.

That approach provides the added benefit of helping to control costs by making sure AI agents only access relevant data versus trying to access any and all data that might be available, noted Bachman. AI agents have already shown they have a voracious appetite for data regardless of the task at hand, he added.

Most organizations in the months ahead will undoubtedly be revisiting how data is managed and governed in the age of AI. In fact, the Futurum Group projects the global data intelligence, analytics, and infrastructure (DIAI) market will grow at a 17% compound annual growth rate through 2028 off a base of $541.1 billion in 2026 and will exceed $1.2 trillion by 2031. As those investments are made, the role of internal IT teams will evolve as the need to make sure the right data securely arrives at the right place and time becomes a more pressing requirement in the age of AI, noted Bachman.

Just as importantly, the controls need to be in place to ensure that AI agents are accessing the freshest data available versus older versions that they have somehow managed to access, he added.

In the meantime, IT teams should assume that master data management (MDM) issues that have often been ignored in the past are going to manifest themselves at scale in the AI era. The challenge, and the opportunity, is to address those issues now, before potentially thousands of AI agents expose them in a way that everyone in the organization will notice.

Broadcom Launches Optical DSP Built for AI Data Centers

Broadcom has debuted a new optical networking chip built to help data centers manage the expanding bandwidth demands created by AI workloads. The company says the device, the Taurus BCM83640, supports a new generation of high-capacity optical modules used to link servers inside AI computing clusters.

The Taurus is an optical digital signal processor (DSP) manufactured on a three-nanometer process. It uses 400-gigabit-per-second signaling per lane, a big step beyond earlier 200-gigabit designs that are widely used in data centers. By increasing the data transmitted through each optical lane, the chip enables hyperscalers to build faster interconnects without increasing the number of fiber connections.

Optical DSPs are an essential component of pluggable transceivers, the modules that convert electrical signals from switches into light pulses traveling across fiber cables. These connections form the backbone of data center networks that support AI training and inference workloads, which move vast volumes of data between processors.

Expanding Connectivity for AI

Broadcom says the Taurus processor is built primarily for 1.6-terabit optical modules. These modules are expected to become a core building block for next-gen switching platforms that support massive AI clusters.

“Taurus, the industry’s first 1.6T DSP based on 400G/lane I/O, doubles the throughput per lane to enable the next generation of 3.2T optical modules,” said Vijay Janapaty, VP and general manager of Broadcom’s Physical Layer Products Division. He added that the design also supports the company’s efforts to lower power consumption while boosting connectivity for AI and cloud networks.

In addition to supporting 1.6-terabit modules, the architecture is intended to serve as a stepping stone toward even faster optical systems. The company says the same signaling approach could enable future modules capable of 3.2 terabits per second, which would align with emerging switch platforms expected to deliver more than 200 terabits of total switching capacity.
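
The lane arithmetic implied by those figures: at 400 Gb/s per lane, a 1.6 Tb/s module needs half the lanes of a 200 Gb/s design, and a 3.2 Tb/s module doubles the lane count rather than the lane speed:

\[
\frac{1.6\ \text{Tb/s}}{400\ \text{Gb/s}} = 4\ \text{lanes},
\qquad
\frac{3.2\ \text{Tb/s}}{400\ \text{Gb/s}} = 8\ \text{lanes}
\]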

Broadcom said it has begun providing samples of the Taurus BCM83640 to early customers and development partners.

The Market for Optical Interconnect Technology

There is growing competition among semiconductor vendors supplying optical interconnect technology. The market for high-speed transceivers is expanding rapidly as hyperscalers build large clusters of GPUs and accelerators for training AI models.

Vladimir Kozlov, CEO and founder of research firm LightCounting, said shipments of 1.6-terabit and 3.2-terabit optical transceivers could exceed 100 million units over the next five years. Roughly half of those modules may rely on 400-gigabit optical lanes as operators push for greater network throughput inside AI data centers, he said.

Some suppliers are pursuing coherent optical technologies designed for longer-distance connections and advanced signal processing. Others, including Broadcom, continue to refine intensity-modulation and direct-detection approaches that have dominated shorter data center links due to their lower cost and power requirements.

The competition has intensified as vendors race to establish platforms that could define multiple generations of networking hardware. In other words, decisions made by transceiver manufacturers today, like which DSP architecture to adopt, could influence product roadmaps for several years.

Survey: Larger Enterprises Are Starting to Use AI Agents to Perform IT and Security Tasks

A survey of 508 IT and security leaders employed by U.S. organizations with more than 1,000 employees, published this week, finds nearly all (95%) are relying on artificial intelligence (AI) agents to perform at least one IT or security task autonomously.

Conducted by ConductorOne, a provider of an identity management platform, the survey also finds that 96% of respondents said they planned to operationalize AI agents.

Additionally, 91% of respondents report that increased reliance on AI has led to increasing investments in identity access management (IAM) platforms, with 87% of respondents rating non-human identity risk as being either moderately to extremely urgent.

In total, 45% of respondents said they already use IAM tools to govern non-human identities, with another 45% planning to follow suit within the coming year. Just under half (47%) also report they are already managing more non-human identities than human users, but only 22% claim to have full visibility into those identities.

ConductorOne CISO Kevin Paige said the survey makes it clear that the rise of AI is likely to further exacerbate existing identity management issues that have long persisted in many organizations.

In fact, the survey finds 80% of respondents experienced at least one identity-related breach in the past year, with phishing and social engineering (52%) and malware or ransomware (46%) being the leading attack vectors. Top challenges include lack of visibility (47%), excessive privileges (40%), limited auditability (37%), long-lived credentials (33%) and tools that are not designed for agents (27%).

It’s not clear how proactively cybersecurity teams are going to be able to secure AI agents as the pace of adoption continues to accelerate. Many end users, for example, are downloading AI agents such as OpenClaw with little to no regard for the cybersecurity implications. The one thing that is certain is that adversaries will double down on stealing credentials that provide them access to AI agents that can then be used to not just exfiltrate data but also compromise entire workflows.

Regardless of those risks, most organizations are committed to adopting AI agents come hell or high water, noted Paige. Cybersecurity and IT teams, at the very least, should make sure that fundamental best cybersecurity practices are followed, including making sure that humans remain in the loop of any workflow, to minimize risk as much as possible, he added. Otherwise, it’s only a matter of time before a catastrophic incident reminds everyone why cybersecurity is still crucial in the age of AI, said Paige.

In the meantime, it’s important for IT and cybersecurity teams to have hands-on experience with AI agents, he added. It’s simply not going to be possible to secure them without knowing how AI agents access data to automate tasks, noted Paige. Cybersecurity teams will also need to understand the architecture used to construct these agents.

Hopefully there will come a time soon when AI agents are simply just another type of identity that IT and cybersecurity teams have learned how to govern and secure. Until then, however, those same teams would be well advised to prepare for the worst while continuing to hope for the best outcomes possible.

Intel Unveils Chip That Processes Encrypted Data at Accelerated Speed

Intel has introduced a specialized processor designed to dramatically accelerate one of the most challenging tasks in cybersecurity: computing directly on encrypted information. The prototype chip, called Heracles, is engineered to perform fully homomorphic encryption (FHE) calculations thousands of times faster than conventional server processors.

FHE allows computers to process data without ever decrypting it, meaning sensitive information can remain protected even while it is being analyzed. In theory, this capability could enable privacy-preserving AI, or medical or financial analytics performed without exposing confidential data.
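
The underlying idea can be shown with a toy example: unpadded textbook RSA is multiplicatively homomorphic, meaning a product can be computed on ciphertexts alone, and FHE generalizes this to arbitrary computation. The sketch below is insecure and purely illustrative; the key values are the standard textbook toy parameters.

```python
# Toy demonstration of computing on encrypted data using unpadded RSA,
# which is multiplicatively homomorphic: Enc(a) * Enc(b) = Enc(a*b) mod n.
# This is NOT FHE (and unpadded RSA is insecure); it only illustrates the
# core idea of operating on ciphertexts without ever decrypting them.

n, e, d = 3233, 17, 2753          # textbook toy key (p=61, q=53)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 12
product_ct = (enc(a) * enc(b)) % n   # multiply ciphertexts only
assert dec(product_ct) == a * b      # decrypts to 84, computed while encrypted
print(dec(product_ct))
```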

But in the real world, the technology has faced a major obstacle. Standard processors can handle encrypted workloads, but the computations are extraordinarily slow. Mathematical operations required for FHE often take orders of magnitude longer than similar operations on unencrypted data.

Intel’s Heracles chip is intended to close that performance gap. Demonstrated at the IEEE International Solid-State Circuits Conference in San Francisco, the processor is purpose-built for the unusual mathematical operations that encrypted computing requires. Intel claims it can accelerate key FHE calculations by more than 1,000 times, and in some cases more than 5,000 times, compared with high-end server CPUs.

Parallel Compute Engines

The architecture of the Heracles chip is far different from that of a typical CPU. Instead of running operating systems or standard software, Heracles functions as an accelerator dedicated to encrypted workloads. It incorporates dozens of parallel compute engines arranged in a grid, enabling large volumes of encrypted data to be processed simultaneously.

The processor also relies heavily on memory bandwidth. FHE dramatically increases the size of data once it is encrypted, sometimes expanding information many times beyond its original form. To manage that data flow, the chip integrates high-bandwidth memory stacks capable of feeding large datasets to its compute units at high speed.

Intel says this combination of parallel processing and specialized arithmetic units enables the chip to handle the complex operations that dominate FHE workloads. These include large-integer calculations and other mathematical routines that are inefficient on traditional processors.

Milliseconds vs. Microseconds

Intel researchers simulated a privacy-protected query to a database. In the scenario, a voter sends an encrypted request to verify that a ballot was properly recorded in an encrypted election database. Because the data remains encrypted throughout the process, neither the database nor the server ever exposes the underlying information.

On a conventional server processor, the simulated verification took milliseconds. The Heracles chip completed the same operation in microseconds.

That time difference becomes highly significant at scale. Processing tens of millions of queries could shrink from weeks of compute time to less than an hour.
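
A back-of-the-envelope check, assuming an illustrative 30 ms per query on a conventional CPU and the roughly 5,000× speedup Intel cites for some operations (assumed values, not published benchmarks), is consistent with that claim:

\[
5\times10^{7} \times 30\ \text{ms} = 1.5\times10^{6}\ \text{s} \approx 17\ \text{days},
\qquad
\frac{1.5\times10^{6}\ \text{s}}{5000} = 300\ \text{s} = 5\ \text{minutes}
\]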

Still in Prototype

The research and design work began roughly five years ago as part of an effort supported by the U.S. Defense Advanced Research Projects Agency (DARPA). The program aimed to explore hardware designs capable of making fully homomorphic encryption practical for real-world computing systems.

Along with Intel, several startups are developing competing accelerators, and researchers are exploring alternative approaches, including photonic processors. Some companies are focusing on software platforms that enable encrypted queries without requiring specialized hardware.

The emergence of dedicated silicon could be a turning point. If encrypted computation becomes fast enough, it could change how organizations handle sensitive information in cloud environments. Use cases could range from privacy-preserving AI models to secure medical research using encrypted patient records.

Intel has not announced commercial plans for the chip, but researchers say the prototype demonstrates that large-scale encrypted computing may finally be within reach. The company is continuing to refine both hardware and software designs as it develops future versions of Heracles.
