Techstrong.ai https://techstrong.ai/ Tue, 17 Mar 2026 14:27:01 +0000 Is Meta the Bronx Zoo Yankees of AI? https://techstrong.ai/features/is-meta-the-bronx-zoo-yankees-of-ai/ Tue, 17 Mar 2026 14:27:01 +0000 https://techstrong.ai/?p=43797 Meta is spending like the late-70s Yankees, buying stars and headlines in the AI race. But as George Steinbrenner learned the hard way, payroll alone doesn’t build a championship team. With delays, internal friction and rivals pulling ahead, the real question is whether Meta can turn a collection of superstars into a coherent machine before the pennant race is already over.

The post Is Meta the Bronx Zoo Yankees of AI? appeared first on Techstrong.ai.

There was a time when the New York Yankees weren’t baseball’s Evil Empire.

They were baseball’s reality show.

Owner George Steinbrenner kept signing the biggest names, firing managers, stirring drama and assuming that if you stacked enough talent together, championships would naturally follow. The media called it the Bronx Zoo. Loud. Expensive. Dysfunctional. Sometimes brilliant, often chaotic.

Today, Meta under Mark Zuckerberg risks becoming the Bronx Zoo Yankees of the AI era.

Not because it lacks talent. Because it has assembled a galaxy of stars without yet proving it can turn them into a team.

Spending Like a Dynasty… Playing Like a Contender

Meta is betting the company on AI. Billions for compute. Billions for talent. Billions for infrastructure. A publicly stated goal of building “superintelligence,” which is Silicon Valley for “we plan to define the future.”

Yet a recent New York Times report revealed that Meta’s next foundation model, code-named Avocado, has been delayed after internal testing showed it trailing leading models from rivals in reasoning, coding and writing. The release slipped from March to at least May. Even more eyebrow-raising, Meta executives reportedly discussed licensing Google’s Gemini to power some products while they regroup.

If you’re spending over $100 billion a year on AI infrastructure, borrowing a rival’s engine isn’t exactly the victory parade route.

Buying Stars is Not Building a Team

Steinbrenner’s early Yankees kept assuming payroll equaled performance. It took years for the organization to learn that chemistry, development and leadership matter more than headline signings.

Meta’s AI strategy has a similar feel.

The company invested roughly $14 billion in Scale AI and elevated its CEO Alexandr Wang into a central role. It created an elite internal lab. It poached top researchers across the industry. Compensation packages reportedly reached levels that make professional sports contracts look modest.

And yet reports describe internal clashes over priorities, turnover among researchers and ongoing debate over whether the flagship model should be open or closed.

That’s not dynasty energy. That’s a clubhouse still arguing about who bats cleanup.

The Pattern of Almost

Meta’s Llama models earned real credibility, especially with developers. But they haven’t consistently set the frontier pace. Some planned releases slipped. One massive model reportedly never shipped. Benchmarks have been questioned. Timelines keep moving.

Meanwhile, competitors iterate relentlessly.

In baseball terms, Meta keeps winning the offseason headlines while someone else wins the division.

Momentum is the Real Currency

AI isn’t a mature industry where you can rebuild slowly. This is a land-grab for the next computing platform.

Talent follows perceived leaders. Developers build where they see long-term gravity. Enterprises place bets they don’t want to rip out later. Once momentum tilts, it compounds fast.

Meta still has enormous advantages. Distribution across billions of users. Unmatched infrastructure investment. Deep research DNA. The ability to absorb losses that would sink smaller companies.

But perception is shifting. Instead of the disruptor, Meta increasingly looks like the wealthy incumbent trying to buy back relevance.

The Steinbrenner Lesson

Here’s the part people forget.

Steinbrenner eventually figured it out.

The Yankees dynasty wasn’t built on chaos. It was built on structure, leadership and a pipeline of talent that fit together. Money helped, but it wasn’t the strategy. It was the amplifier.

Zuckerberg could absolutely pull off the same pivot. He has reinvented Meta before. Mobile. Social video. Ads. Platforms. He plays long games.

But this isn’t another feature war. This is the foundation of the next era of computing.

The Real Question

Is Meta doomed? No.

Is it behind? Increasingly, yes.

Is throwing more money at the problem guaranteed to fix it? History says no.

Right now, Meta looks less like a dynasty in waiting and more like those late-70s Yankees: intimidating on paper, exhausting in practice and still searching for the formula that turns talent into championships.

The Bronx Zoo always draws a crowd.

But banners hang forever.

If Meta finds alignment, discipline and a coherent product vision, it could still dominate the AI decade.

If not, it may become the most expensive “almost” in tech history.

Perplexity Personal Computer, a Digital Proxy That Works on a User’s Behalf https://techstrong.ai/features/perplexity-personal-computer-ai-digital-proxy/ Tue, 17 Mar 2026 11:17:32 +0000 https://techstrong.ai/?p=43790 Overview of Perplexity’s expanding AI platform—Perplexity Computer, Personal Computer, Comet Enterprise, and four developer APIs (Search, Agent, Embeddings, Sandbox)—positioned for enterprise research, agentic workflows, secure integrations, and paid premium data sources.

The post Perplexity Personal Computer, a Digital Proxy That Works on a User’s Behalf appeared first on Techstrong.ai.

Perplexity claims to be the AI search engine that understands what you’re actually looking for. The company has also developed the Comet browser and offers it as a desktop app, an iPhone app or an Android app.

Ask Perplexity itself what it is, and it tells us, “In this context, Perplexity is the name of the AI-powered answer engine you’re chatting with right now. It’s a service that combines live web search with advanced language models to give you concise, sourced answers in a conversational way.”

Perplexity for Developers

For software application development engineers, Perplexity offers an API platform along with API models, documentation and FAQs. Wrapped in a variety of product names, Perplexity Computer is said to be built on a simple idea: When users have highly accurate AI search, an orchestration harness of 20 frontier models and agentic internet access, we reach a point where we can say that “AI is the computer” today.

The programming team behind this new offering says that Perplexity Computer can understand a goal, move across different user tools (more likely meaning client applications in this sense, though possibly extending to developer tools as well) and keep work going after the user steps away.

“We’re expanding that functionality across Perplexity: A Personal Computer that can merge your local files with Perplexity Computer and work 24/7, Perplexity Computer for Enterprise, new APIs for developers, and deeper capabilities for financial research,” notes the company, in a product blog.

So Perplexity is Perplexity, which is an AI-powered answer engine with developer extensions… and Perplexity Personal Computer is a software-based virtual computer; we did say the product and service nomenclature was slightly variegated.

Personal Computer, a Digital Proxy

Personal Computer runs on a dedicated Mac mini that can run 24/7, connected to local apps and Perplexity’s secure servers. Personal Computer is a digital proxy that works constantly on a user’s behalf, allowing users to orchestrate all of their tools, tasks and files from any device, anywhere.

With a firm eye on security, Personal Computer works in a secure environment with clear safeguards. Sensitive actions require approval and every session includes a full audit trail. A kill switch gives users immediate control.

“In a study of over 16,000 queries, measured against institutional benchmarks from McKinsey, Harvard, MIT, BCG, and others, we determined Perplexity Computer saved our internal teams $1.6M in labor costs and performed 3.25 years of work in only four weeks. And now we’re extending those same capabilities to other teams,” says the company.

Perplexity Personal Computer for Enterprise (are you keeping up?) connects to the tools a company already runs on. Through app connectors, it can query Snowflake, Salesforce, HubSpot, and hundreds of other platforms directly. That means a financial analyst can ask for revenue by vertical from Snowflake, while a sales team can pull CRM data and competitive context at the same time.

Computer writes the queries, runs them, and returns structured results.

Developers can also teach Computer their preferred workflows with customized skills, and it fits naturally inside the workflows teams already use. In Slack, through a DM or a shared channel, teams can collaborate with Computer to handle coding with Codex and Claude, create dashboards, financial models and decks without waiting on a data scientist or analytics team, and run scheduled workflows asynchronously.

Comet Enterprise

Back to the browser for a moment, the company says that Comet understands context across tabs and automates much of the repetitive work users carry out. Comet Enterprise brings that same experience into a managed environment as an AI-native browser with controls. Admins can decide where and how the assistant operates, with permissions that apply across the browser or only on specific domains.

They can allow Comet to answer questions without taking actions, let it work more actively in lower-risk environments and review action logs for every session. Comet Enterprise also supports centralized deployment through existing MDM infrastructure, making it possible to install Comet across employee devices, apply browser policies, block domains and extensions, and monitor activity through exportable telemetry.

API Platform

Perplexity’s platform is now expanding with four APIs: Search, Agent, Embeddings and Sandbox. These are the same building blocks that power Computer, now available as APIs: cited outputs, multi-model routing, and the ability to move from retrieval to action in a secure environment. For developers building products where accuracy matters, those are core capabilities.

“Search gives developers a grounded way to retrieve information. Agent makes it possible to delegate multi-step tasks. Sandbox provides a controlled environment for execution. Embeddings support retrieval and ranking systems that depend on stronger relevance. Together, they create a broader platform for products that need trustworthy answers and useful actions in the same system,” says the company.
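As a purely illustrative sketch, a client for a search-style API of this kind might assemble a request as follows. The endpoint path and all field names (`query`, `max_results`, `return_citations`) are assumptions made for illustration, not Perplexity’s documented contract; consult the official API documentation for the real interface.

```python
import json

# Illustrative sketch only: the endpoint path and every field name below
# are assumptions, not Perplexity's documented API contract.
API_URL = "https://api.perplexity.ai/search"  # hypothetical endpoint


def build_search_request(query: str, max_results: int = 5) -> dict:
    """Assemble a minimal grounded-search request body.

    `return_citations` reflects the platform's stated emphasis on cited
    outputs; the field name itself is a guess.
    """
    return {
        "query": query,
        "max_results": max_results,
        "return_citations": True,
    }


# An HTTP client (e.g. requests.post(API_URL, json=payload, headers=...))
# would send this JSON body along with a bearer token for authentication.
payload = build_search_request("revenue by vertical, last four quarters", max_results=3)
print(json.dumps(payload, indent=2))
```

The Agent, Embeddings and Sandbox APIs would presumably follow a similar request/response pattern, each scoped to its own task.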

Premium Sources bring professional data providers directly into Perplexity. Starting today, Computer can access select Statista, CB Insights and PitchBook data as part of the research workflow.

These are paywalled sources widely used for market research, company analysis, and investment decisions. By bringing these sources into Perplexity, we reduce the cost and complexity of accessing them. Premium Sources are cited in research queries and can link to the right source automatically.

Hybrid Enterprises are AI Enterprises – Hybrid AI Enterprises, Hybrid AI Attacks  https://techstrong.ai/features/hybrid-enterprises-are-ai-enterprises-hybrid-ai-enterprises-hybrid-ai-attacks/ Tue, 17 Mar 2026 10:26:20 +0000 https://techstrong.ai/?p=43784 Urgent analysis arguing AI-powered attacks are now scalable and stealthy, requiring resilience anchored in network-plus-identity visibility, AI-driven behavioral clarity, and machine-speed response across hybrid enterprises.

The post Hybrid Enterprises are AI Enterprises – Hybrid AI Enterprises, Hybrid AI Attacks  appeared first on Techstrong.ai.

We’ve crossed a line — and this time, it’s unmistakable. 

For years, security leaders have warned that attackers would eventually use AI to scale faster than human defenders. Recent real-world research now shows AI-powered hacking systems operating at a level that rivals — and in some cases surpasses — professional human attackers. 

That’s not a future problem. That’s today’s problem. It reinforces a simple truth that’s reshaping security strategy: Hybrid enterprises are AI enterprises — and attackers are becoming AI-native just as fast. 

AI Didn’t Just Change the Enterprise. It Changed the Economics of Attacks. 

What’s most striking about recent AI hacking research isn’t that AI is “smarter” than humans. It’s that AI is relentless. It doesn’t get tired, wait its turn or stop after one failed attempt. 

It continuously scans, tests, adapts, and retries — at machine speed and at a fraction of the cost of human effort. At the same time, enterprises themselves have become autonomous systems. Across hybrid environments, workloads schedule themselves, identities authenticate other identities, AI agents recommend actions, deploy code, route traffic, and interact without human oversight. This is the defining shift of the modern enterprise: Systems now act on the network, not just exist on it. Attackers understand this. And AI allows them to exploit it at scale. 

Why Prevention Alone is Breaking Down 

Endpoint controls, identity controls, cloud controls — they all still matter. But none of them were designed for attackers who don’t need to break anything. Modern attackers don’t smash doors. They log in, steal or abuse valid credentials, exploit federated trust and move across hybrid networks using legitimate tools and permissions.  

AI accelerates this approach. It automates reconnaissance. It prioritizes attack paths. It adapts faster than humans can tune rules or correlate alerts. The result isn’t a series of isolated incidents. It’s a single, coordinated campaign that moves fluidly across identity, network, cloud, SaaS, and infrastructure — all within what is effectively one massive hybrid attack surface. That’s why endpoint-centric security alone can no longer prove resilience. 

AI Attacks Don’t Look Like Attacks — They Look Like Operations 

One of the most dangerous aspects of AI-powered attacks is that they don’t look malicious. They look normal, from their logins and API calls to their SaaS activity and network traffic. 

AI doesn’t need to hide by being invisible. It hides by being indistinguishable. At the same time, AI dramatically increases noise — more testing, more probing, more edge-case behavior. That noise overwhelms human-driven workflows and fragmented tool stacks. Security teams aren’t failing because they lack tools. They’re failing because signals are siloed across domains, alerts lack context and credibility, and humans are being forced to chase volume instead of intent. 

In AI-driven environments, noise becomes the attacker’s greatest ally. 

Why Network + Identity is Where Resilience Must Anchor 

Every AI-driven attack — no matter how automated — must still do two things: 

  1. Authenticate as an identity (human, machine, service, or AI agent) 
  2. Communicate across the network to move, escalate, or exfiltrate 

That’s why network and identity remain the most durable sources of truth in the AI era. This is because endpoints can be bypassed, logs can be delayed (even removed) and cloud controls can be misconfigured. 

But attackers can’t operate without identities — and they can’t move without touching the network. When security teams understand how identities behave on the network, resilience becomes operational. It makes AI-driven reconnaissance visible, turns silent lateral movement into something observable (detecting identity abuse before it causes impact), and strips attacks of their biggest advantage: time.  

This is where resilience is actually built. 

What Resilience Means in an AI Enterprise 

Resilience is no longer about assuming you can stop every attack. It’s about proving — continuously — that you can: 

  • See all activity across the hybrid enterprise 
  • Understand which identities and behaviors matter 
  • Detect abuse that looks legitimate 
  • Act before business impact occurs 

In an AI enterprise, resilience must operate at machine speed — but with human control. AI should reduce noise, connect behavior across domains, and surface what matters. Humans should decide when and how to act. That balance is what turns AI from a risk into an advantage. 

The Bottom Line 

AI is now good enough to compete with — and sometimes outperform — humans in cyberattacks, and that genie isn’t going back in the bottle. But the same is true for defense: hybrid enterprises are now AI enterprises, AI attacks are real, scalable, and accelerating, and resilience can no longer be built one silo at a time.  

It must be built where attackers can’t hide — across network and identity — with AI that delivers clarity instead of noise, speed instead of latency, and confidence instead of guesswork. This is the moment to rethink what resilience really means. And it’s why the future of cybersecurity belongs to platforms that understand behavior, not just alerts, in the AI era. 

Hybrid is Forever: Building Resilience in the Permanent “Both/And” Enterprise  https://techstrong.ai/features/hybrid-is-forever-building-resilience-in-the-permanent-both-and-enterprise/ Tue, 17 Mar 2026 10:26:00 +0000 https://techstrong.ai/?p=43781 Argument that the modern hybrid enterprise—spanning cloud and on‑prem, humans and AI, SaaS and IoT—requires identity‑centric, AI‑driven visibility and adaptive controls to achieve hybrid resilience across coverage, clarity, and control.

The post Hybrid is Forever: Building Resilience in the Permanent “Both/And” Enterprise  appeared first on Techstrong.ai.

The enterprise isn’t what it used to be. It’s not a network you can draw, a perimeter you can define, or a stack you can neatly diagram. Today’s enterprise is everywhere. It’s a living, breathing organism made up of data centers and clouds, humans and machines, code and AI — all connected, all in motion. This is the modern hybrid enterprise. 

Every organization now runs across seven dimensions of hybrid reality. Together, they’ve made the enterprise faster, smarter, and more connected. They’ve also made it infinitely harder to defend. 

Hybrid network infrastructure now lives across cloud and on-premises data centers. Hybrid workforces split time between office and remote. Hybrid workstreams are shared between humans and AI. 

Hybrid identities span Active Directory and Microsoft Entra ID. Hybrid access introduces complex Zero Trust Network Architecture challenges. Hybrid applications are scattered across SaaS, IaaS, and legacy on-prem stacks. And hybrid devices range from managed endpoints to unmanaged IoT and OT systems. 

Attackers know it. They’re moving fluidly between cloud APIs, identity systems, SaaS connectors, and unmanaged IoT and OT devices — exploiting the seams that defenders can’t see. The problem isn’t that defenders don’t have enough tools. It’s that the tools they do have were built for a world that no longer exists. 

Old ways of defense — reactive, domain-specific, alert-driven — are failing because the modern hybrid enterprise isn’t one environment. There are many, and they change by the minute. And if we don’t start thinking upstream — about how all these hybrid dimensions interact, overlap, and expose each other — we’ll keep fighting downstream fires instead of stopping the flow. 

This is the reality security leaders, builders, and analysts must face: hybrid isn’t a challenge to manage. It’s the new battlefield to master. The only question left is whether your defenses can evolve as fast as your enterprise and faster than attackers. 

Hybrid Network Infrastructure: Data Centers, Clouds and Everything in Between 

Nearly nine in ten organizations now operate with a multi-cloud or hybrid cloud strategy. And yet, roughly half of critical enterprise applications will never leave private infrastructure. Regulatory, latency, and cost considerations keep on-prem alive. 

For defenders, this means security controls must stretch across clouds and data centers, misconfigurations and shared credentials and inconsistent logging create dangerous blind spots, and traditional segmentation no longer maps to API-driven cloud connectivity. 

How Attackers Exploit It 

Attackers chain misconfigurations—pivoting from an exposed S3 bucket to a neglected on-prem system or using stolen API keys to traverse hybrid environments. The result? Breaches that cross boundaries faster than most detection tools can correlate them. Hybrid resilience means observability everywhere data can move, and data moves on networks.  

Hybrid Workforces: People are the Perimeter 

Hybrid work has plateaued at a new normal. Office occupancy across major metros hovers in the mid-50s, stabilizing rather than returning to five days a week. Meanwhile, the average remote-capable employee splits time between home, office, and travel. 

In response, defenders must navigate multiple access patterns as users connect from different devices, networks, and geographies on a daily basis. Additionally, defenders are now operating in a world where context has collapsed because the traditional notion of “inside” versus “outside” the network no longer applies. Managing persistent identity risks must also become a top priority, as phishing, MFA fatigue, and session hijacking remain top breach paths. 

How Attackers Exploit It 

Adversaries love hybrid work because every remote connection blurs the trust boundary. A single successful phish or MFA bypass from a home network can become a golden ticket to corporate systems, Active Directory, or SaaS apps. Once in, lateral movement often goes undetected in a sea of legitimate hybrid access noise. 

Hybrid Workstreams: Shared Between Humans and AI 

In every modern enterprise, humans and artificial intelligence (AI) now share the workload. From AI copilots and chatbots to autonomous scripts and decision engines, digital teammates work alongside people every day — analyzing data, generating content, writing code, and making recommendations that drive the business forward. 

This hybrid workstream of human and AI collaboration increases productivity but also reshapes the attack surface. Every prompt, model connection, and automated decision introduces a new layer of trust — and new opportunities for attackers to exploit. 

What This Means for Defenders 

Defenders must now secure not just user actions, but AI-driven actions — those executed automatically, at machine speed, and often without human oversight. That means monitoring how AI systems access, process, and share sensitive data; validating the integrity of AI outputs and the authenticity of their inputs; and understanding how human–AI interaction chains can be hijacked or poisoned to produce harmful or misleading results. 

In this new era, defending the enterprise requires visibility into the behavior of algorithms just as much as the behavior of people. 

How Attackers Exploit It 

Attackers target the human–AI intersection. They manipulate training data, inject malicious prompts, or hijack trusted automation pipelines to exfiltrate data or execute unauthorized actions under the guise of “the system.” They exploit blind trust in AI-generated outputs and use social engineering to coerce both humans and models into unsafe behavior. Hybrid workstreams blur accountability and accelerate risk — and adversaries know it. 

The path forward is AI-aware defense: treating machine behaviors with the same scrutiny, governance, and continuous validation applied to human users. Because in the modern enterprise, every AI that acts on your behalf becomes part of your attack surface. 

Hybrid Identities: The New Attack Surface 

Most enterprises now operate in hybrid identity mode—on-prem Active Directory synced with cloud identity providers like Microsoft Entra ID. It’s practical, but it’s also perilous. And it’s no longer just about people. Hybrid identity today includes both human identities (employees, partners, contractors) and non-human identities (service accounts, APIs, workloads, and bots) that far outnumber humans in many organizations. 

For defenders, this means that every synced directory, service principal, federated token, and legacy authentication protocol expands the attack surface. It indicates that identity sprawl across both human and machine accounts erodes confidence in who or what should have access. And, compromised credentials continue to be the top vector in nearly every breach report. 

How Attackers Exploit It 

Attackers no longer brute-force perimeters—they weaponize trust. Techniques like Golden SAML, OAuth token theft, and Azure app registration abuse allow adversaries to impersonate legitimate users or hijack non-human service accounts in both on-prem and cloud environments. 

The fix isn’t another password policy—it’s continuous, identity-centric verification across all environments, for every human and non-human identity that touches the hybrid enterprise. 

Hybrid Access: Legacy VPNs Meet Zero-Trust 

While most organizations are shifting toward Zero-Trust Network Access (ZTNA) and Secure Access Service Edge (SASE), the reality is that VPNs aren’t going away overnight. Unfortunately, they remain one of the most exploited access technologies in use today. 

For defenders, legacy VPNs create single points of failure and implicit trust; network visibility often stops at the tunnel endpoint; and security policies remain static in a world where user context changes by the minute. 

How Attackers Exploit It 

Attackers target exposed VPN gateways or use stolen credentials to authenticate legitimately. Once inside, they can move laterally into Active Directory, servers, or cloud consoles—often with full network privileges. Hybrid organizations must assume every connection could be hostile and adopt continuous authorization—verifying device posture, user behavior, and risk signals before and during access sessions.  

Hybrid Applications: The SaaS Explosion 

The average organization now uses over 100 SaaS applications—and that’s just the sanctioned ones. Add shadow IT and third-party integrations, and you’ve got an exponential growth curve of risk. 

This means every application becomes another authentication surface and a potential privilege chain; many SaaS platforms rely on OAuth tokens that don’t offer the same visibility as passwords; and security teams often lack ownership of SaaS configurations, which typically sit within business units instead. 

How Attackers Exploit It 

Attackers target the seams between apps—phishing OAuth consent screens, abusing third-party connectors, or using compromised tokens to leapfrog from one SaaS platform to another. Data exfiltration happens quietly through APIs, not endpoints. 

Hybrid resilience demands SaaS posture management and behavioral detection that understands context across cloud and identity. 

Hybrid Devices & Edges: The Unseen Frontier 

By the end of this year, there will be over 20 billion connected IoT devices worldwide. Most are unmanaged, unpatched, and invisible to corporate IT—yet connected to networks that touch critical systems. 

As a result, device inventories are often incomplete or outdated; patch management and segmentation frequently stop at the office wall; and the convergence of OT and IoT with IT environments creates new paths for lateral exposure. 

How Attackers Exploit It 

Adversaries exploit weak IoT firmware or default credentials to gain a foothold, then pivot into enterprise systems. In manufacturing and energy sectors, that jump often leads to ransomware or service disruption in operational environments. Hybrid resilience here means unified network detection and response (NDR) that can see unmanaged devices as part of the same attack surface—not a separate one. 

Hybrid Threats: One Attack, Many Pathways 

The modern attacker doesn’t specialize in one domain—they blend them all. A typical hybrid attack chain might look like this: 

  1. Phishing a hybrid worker → 
  2. Credential theft → 
  3. VPN access to internal systems → 
  4. Privilege escalation via hybrid identity sync → 
  5. Lateral movement east-west and north-south → 
  6. Data exfiltration from a SaaS app or cloud store. 

Every step exploits a seam between environments. That’s why detection must focus not on individual alerts, but the behaviors that connect them. 

From Hybrid Risk to Hybrid Resilience 

Hybrid isn’t just how we work—it’s how attackers operate. They don’t see boundaries between cloud and data center, user and machine, SaaS and OT. They see opportunity in every connection point. 

Building hybrid resilience means gaining Coverage, Clarity, and Control across all those intersections. That’s how you stay ahead of attackers who exploit them. 

Coverage: Unified Observability Across the Hybrid Attack Surface 

You can’t defend what you can’t see. True resilience starts with Coverage—complete, unified observability across every domain where hybrid risk lives. 

That means continuous visibility into indicators of attack (IOAs) and exposure across data centers and private infrastructure; multi-cloud environments; SaaS and identity systems; campuses and remote locations; and cyber-physical and OT networks. 

Coverage answers the fundamental questions every CISO and SOC leader asks daily: 

  • Who and what is on our network? 
  • How is who or what behaving? 
  • Where are we exposed—or already under attack? 

When every workload, identity, and connection is visible through a single lens, you gain the confidence to act decisively, not reactively. 

Clarity: Turning Noise into Identity-Centric Insight 

Visibility alone isn’t enough. In hybrid environments, alerts come in faster than any team can triage. What defenders need is Clarity—the ability to cut through noise with AI-driven correlation that understands identity at the core. 

Vectra AI’s approach brings correlated, identity-centric signal by automatically stitching together indicators of attack across cloud, network, and identity domains; triaging and prioritizing both human and non-human entities based on real risk and intent; and contextualizing behaviors to reveal what an attacker is doing and where they’re going next.  

Clarity answers the questions that matter most to analysts and executives alike: 

  • Why is this entity prioritized? 
  • What is the attacker doing? 
  • Where is the attacker going? 

This is where AI earns its keep—not in generating more alerts, but in turning fragmented telemetry into cohesive stories of attacker movement. 

Control: Adaptive Resilience, Pre- and Post-Compromise 

Hybrid organizations can’t afford static defenses. Attackers adapt, and so must defenders. That’s why Control is the final pillar of hybrid resilience—the ability to continuously harden, hunt, and respond before, during, and after an attack. 

As a result, such control enables adaptive attack exposure management to proactively reduce risk across hybrid networks. It also automates investigation and response to contain threats at machine speed, while allowing for post-compromise containment and remediation to minimize dwell time and blast radius. 

It answers the operational imperatives that keep security leaders up at night: 

  • When should we mitigate and contain the attack? 
  • Where is further remediation needed? 
  • What is our true hybrid network posture? 

With Coverage feeding Clarity, and Clarity enabling Control, security teams gain hybrid resilience not as a static state—but as a living, adaptive capability that evolves with the organization. 

The Boardroom Imperative 

Boards and executive teams should view “hybrid forever” not as an operational nuisance but as a strategic opportunity. Resilient hybrid organizations can scale faster by deploying workloads wherever they make the most sense, protect brand equity by limiting blast radius when incidents occur, and enable flexible work without compromising security posture. 

Cyber resilience isn’t about choosing between cloud or on-prem, office or remote—it’s about mastering both. The organizations that thrive will be those that see hybrid not as a security liability, but as a competitive differentiator—because they’ve built the visibility, verification, and velocity to defend it. 

Hybrid is how business gets done. It’s not the future—it’s the forever. The question isn’t whether your organization is hybrid. It’s how resilient it will be when attackers exploit that hybridity. So build like it’s permanent. Because it is. 

The post Hybrid is Forever: Building Resilience in the Permanent “Both/And” Enterprise  appeared first on Techstrong.ai.

]]>
Beyond Chatbot: NVIDIA Unveils OpenShell, NemoClaw to Standardize Autonomous AI Agents https://techstrong.ai/agentic-ai/beyond-chatbot-nvidia-unveils-openshell-and-nemoclaw-to-standardize-autonomous-ai-agents/ Tue, 17 Mar 2026 00:43:35 +0000 https://techstrong.ai/?p=43727 SAN JOSE, Calif. -- NVIDIA Corp. CEO Jensen Huang declared the birth of a “new renaissance in software” Monday when the company released a suite of open-source tools designed to shift artificial intelligence (AI) from a passive chatbot to an autonomous workforce. The launch of NVIDIA Agent Toolkit alongside a coalition of partners including Salesforce [...]

The post Beyond Chatbot: NVIDIA Unveils OpenShell, NemoClaw to Standardize Autonomous AI Agents appeared first on Techstrong.ai.

]]>
SAN JOSE, Calif. — NVIDIA Corp. CEO Jensen Huang declared the birth of a “new renaissance in software” Monday when the company released a suite of open-source tools designed to shift artificial intelligence (AI) from a passive chatbot to an autonomous workforce.

The launch of NVIDIA Agent Toolkit alongside a coalition of partners including Salesforce Inc., Adobe Inc., and Microsoft Corp. positions NVIDIA to standardize agentic AI, providing secure infrastructure and open-source models necessary for AI agents to independently execute complex corporate workflows at scale.

“Claude Code and OpenClaw have sparked the agent inflection point,” Huang said. “Employees will be supercharged by teams of specialized and custom-built agents they deploy and manage. The IT industry is on the brink of its next great expansion.”

At the heart of the expansion is NVIDIA OpenShell, an open-source runtime that acts as a secure sandbox for the agents. It enforces policy-based security and privacy guardrails, addressing the primary concern of enterprise leaders: how to give autonomous agents enough access to be productive without compromising sensitive corporate networks.

The toolkit introduces a hybrid architecture called NVIDIA AI-Q. This blueprint allows developers to use massive frontier models for high-level orchestration while delegating specific research and reasoning tasks to smaller, open-source NVIDIA Nemotron models. According to NVIDIA, this approach can slash query costs by more than 50% while maintaining “world-class” accuracy.
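
The cost arithmetic behind that claim is easy to sketch. The toy model below is purely illustrative (the per-token prices and delegation ratio are assumptions, not NVIDIA's published figures): when a large share of tokens is delegated to a model that is an order of magnitude cheaper, the blended cost can fall by more than half.

```python
# Toy cost model for a hybrid "frontier orchestrator + small model" setup.
# Prices and the delegation ratio are illustrative assumptions only.

def blended_cost_per_1k(frontier_price: float, small_price: float,
                        delegated_fraction: float) -> float:
    """Average $ per 1K tokens when a fraction of tokens is delegated
    from the frontier model to a smaller, cheaper one."""
    return (1 - delegated_fraction) * frontier_price + delegated_fraction * small_price

frontier = 0.010  # assumed $ per 1K tokens, frontier model
small = 0.001     # assumed $ per 1K tokens, compact model

baseline = blended_cost_per_1k(frontier, small, 0.0)  # everything on frontier
hybrid = blended_cost_per_1k(frontier, small, 0.6)    # 60% of tokens delegated

print(f"savings: {1 - hybrid / baseline:.0%}")  # 54% under these assumptions
```

Under these assumed numbers, delegating 60% of tokens to a 10x-cheaper model already clears the 50% savings mark; the real ratio depends on the workload mix.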

To streamline adoption, NVIDIA also announced the NemoClaw stack. This allows developers to install the entire agent environment, including Nemotron models and OpenShell runtime, with a single command, effectively creating a turnkey operating system for personal and enterprise AI.

The announcement comes with the support of dozens of industry leaders, who are integrating NVIDIA’s agentic framework.

Salesforce and ServiceNow Inc. are building autonomous workforces of AI specialists to handle sales, service, and marketing tasks. Adobe Inc. and Atlassian Inc. are using the toolkit to power long-running creativity and productivity agents within their platforms.

Meanwhile, security giants Cisco Systems Inc., CrowdStrike Inc., and Microsoft Security are collaborating to ensure OpenShell is compatible with existing cyber-defense tools. Siemens, Synopsys, and Cadence Design Systems Inc. are deploying so-called SuperAgents to autonomously design and verify complex semiconductor layouts.

The open-source nature of the release is a strategic play to capture the developer market. Peter Steinberger, creator of the viral OpenClaw project, noted that the collaboration creates the “missing infrastructure layer” needed to make AI assistants trustworthy.

By providing the models (Nemotron), the skills (cuOpt), and the safety cage (OpenShell), NVIDIA is positioning itself as the foundational layer for a future where software doesn’t just wait for clicks but proactively solves problems.

The post Beyond Chatbot: NVIDIA Unveils OpenShell, NemoClaw to Standardize Autonomous AI Agents appeared first on Techstrong.ai.

]]>
Dell Extends Scope and Reach of AI Infrastructure Portfolio https://techstrong.ai/features/dell-extends-scope-and-reach-of-ai-infrastructure-portfolio/ Mon, 16 Mar 2026 21:27:29 +0000 https://techstrong.ai/?p=43693 Dell Technologies today made a raft of additions to its infrastructure portfolio for building and deploying artificial intelligence (AI) applications and agents, including a data orchestration engine that automatically discovers, labels, enriches and transforms data into formats that AI applications can more easily access and consume at scale. Announced at the NVIDIA GTC 2026 conference, [...]

The post Dell Extends Scope and Reach of AI Infrastructure Portfolio appeared first on Techstrong.ai.

]]>
Dell Technologies today made a raft of additions to its infrastructure portfolio for building and deploying artificial intelligence (AI) applications and agents, including a data orchestration engine that automatically discovers, labels, enriches and transforms data into formats that AI applications can more easily access and consume at scale.

Announced at the NVIDIA GTC 2026 conference, Dell also revealed it has added support for the Blackwell series of graphics processing units (GPUs) developed by NVIDIA, along with support for NVIDIA CUDA-Q, an extension to NVQLink for networking quantum and classical computing systems.

The latest series of servers are specifically designed for traditional enterprise and edge computing use cases involving AI applications, says Varun Chhabra, senior vice president of infrastructure (ISG) and telecom marketing at Dell.

Dell is also adding support for the latest Dell NVIDIA AI-Q blueprint for integrating data and the NVIDIA STX, a modular reference design for NVIDIA Vera Rubin NVL72, NVIDIA BlueField-4 DPUs, and NVIDIA Spectrum-X Ethernet networks across its server and network switch portfolio.

Finally, Dell has added an AI Assistant within the Dell Data Analytics Engine that brings a conversational natural language interface directly into analytics applications based on SQL.

Dell has been making the case for its Dell AI Factory portfolio of platforms since initially launching it two years ago. Since then, Dell claims, more than 4,000 organizations have acquired a Dell AI Factory system.

The data orchestration platform extends the scope and reach of that no-code, low-code platform using a Dataloop platform that Dell acquired late last year. Additionally, Dell is launching a Data Orchestration Engine Marketplace through which organizations can access a set of workflows that were created using the NVIDIA NIM microservices framework along with NVIDIA AI Blueprints and more than 200 AI models, applications and templates.

The overall goal is to make it simpler to expose the highest quality data possible to AI models at the right time, says Chhabra. “Data availability and quality is the most common challenge,” he says.

It’s not clear to what degree organizations are deploying AI applications in the cloud or in an on-premises IT environment. However, a recent Futurum Group survey finds cloud deployments account for 35% of implementations, compared to on-premise and private cloud deployments that have a 24% share and hybrid IT environments that have a 21% share. In fact, it’s not uncommon for organizations to build an AI application in the cloud that is later deployed closer to the point where data is being created, analyzed and consumed.

Additionally, many organizations also have to address compliance and security concerns that might require an AI model to be deployed in an on-premises IT environment.

Regardless of where any AI model is ultimately deployed, one thing is certain: in time, IT teams will be expected not only to ensure those models are consuming the right data, but also to replace models as often as needed as the pace of AI innovation continues to accelerate.

The post Dell Extends Scope and Reach of AI Infrastructure Portfolio appeared first on Techstrong.ai.

]]>
Smoking Hot Tokenomics: Akamai has the ‘Edge’ on Distributed Inference  https://techstrong.ai/features/smoking-hot-tokenomics-akamai-has-the-edge-on-distributed-inference/ Mon, 16 Mar 2026 20:30:43 +0000 https://techstrong.ai/?p=43527 Akamai is redefining the "backbone of the internet" by moving beyond CDNs into distributed AI inference. By operationalizing the Nvidia AI Grid with thousands of Blackwell GPUs, Akamai enables low-latency, real-time AI at over 4,400 edge locations. This massive network optimizes "tokenomics" through intelligent orchestration, reducing costs while powering agentic and physical AI. From liquid-cooled chips to Gecko nodes, discover how Akamai is scaling intelligence to the point of contact.

The post Smoking Hot Tokenomics: Akamai has the ‘Edge’ on Distributed Inference  appeared first on Techstrong.ai.

]]>
Akamai Technologies used to be known as the Content Delivery Network (CDN) company; some called it the “backbone of the internet” with its media streaming capabilities and its web stability technologies.

Today we know Akamai Technologies for delivery and stability, but on a somewhat different basis; the company is now recognized for its distributed cloud, cybersecurity and content delivery prowess as it makes use of its massive edge network to provide low-latency AI inference, zero-trust security and high-performance media delivery.

Akamai Clouds, Defined

The Akamai Connected Cloud employs thousands of Nvidia Blackwell GPUs for distributed AI inference at the edge. What kind of edge? We’re talking about AI inference nodes, edge cache servers, plain compute instances and so-called gecko nodes, i.e., compact, localized servers designed to run Virtual Machines and WASM (WebAssembly) for efficiency and security. 

Looking deeper here, Akamai Inference Cloud is a global-scale implementation of Nvidia AI Grid, a distributed network of interconnected AI infrastructure designed to transform isolated datacenters into a unified, grid-aware intelligence platform. Akamai Inference Cloud works by intelligently routing AI workloads across its edge, regional and core footprint to balance latency, cost and performance. In other words, it’s AI in the right place at the right time, at the right volume, and at the right cost.

This contextualization is necessary if we are to understand why the company thinks it has passed a milestone in the evolution of AI this week by unveiling the first global-scale implementation of the Nvidia AI Grid reference design. The reference design is a “blueprint for standardized interconnected AI infrastructure” from Nvidia that integrates its Blackwell chips, liquid cooling and networking.

Liquid-Cooled Chips?

Liquid-cooled chips, really? Yes. Rather like a common household radiator, the system circulates non-potable reclaimed water that absorbs heat from the GPUs, travels to a heat exchanger, is cooled back down (by outside air or a separate water loop) and returns to the GPUs.

By integrating Nvidia AI infrastructure into Akamai’s infrastructure and using intelligent workload orchestration across its network, Akamai intends to move the industry beyond isolated AI factories toward a unified, distributed grid for AI inference.

The move marks a step in the evolution of Akamai Inference Cloud, introduced late last year. As the first to operationalize its AI Grid in this way, Akamai is rolling out thousands of snappily named Nvidia RTX PRO 6000 Blackwell Server Edition GPUs, providing a platform that enables enterprises to run agentic and physical AI (a term often used to refer to humanoid robots and autonomous vehicles) with the responsiveness of local compute and the scale of the global web.

“AI factories have been purpose-built for training and frontier model workloads – and centralized infrastructure will continue to deliver the best tokenomics for those use cases,” said Adam Karon, chief operating officer and general manager, cloud technology group, Akamai. “But real-time video, physical AI and highly concurrent personalized experiences demand inference at the point of contact, not a round trip to a centralized cluster. Our AI Grid intelligent orchestration gives AI factories a way to scale inference outward – leveraging the same distributed architecture that revolutionized content delivery to route AI workloads across 4,400 locations, at the right cost, at the right time.”

What is Tokenomics?

NOTE: Tokenomics can be defined as the economic model and framework used to manage the supply, demand and cost of the tokens that AI models process. It tracks inference efficiency, often in terms of tokens-per-watt (or its reciprocal, watts-per-token), and it employs techniques such as prompt caching, which stores frequently used AI instructions to reduce token costs and help ensure sustainable profitability.
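
The metrics in that definition reduce to simple arithmetic. Here is a minimal sketch (all figures are illustrative assumptions, not Akamai or Nvidia numbers) of tokens-per-watt and the effect of prompt caching on how many tokens must actually be recomputed:

```python
# Toy tokenomics arithmetic. All figures are illustrative assumptions.

def tokens_per_watt(tokens_per_second: float, watts: float) -> float:
    """Inference efficiency; its reciprocal is watts-per-token."""
    return tokens_per_second / watts

def tokens_to_recompute(total_tokens: int, cache_hit_share: float) -> float:
    """Prompt caching stores frequently used instructions, so the cached
    share of prompt tokens need not be processed again."""
    return total_tokens * (1 - cache_hit_share)

eff = tokens_per_watt(tokens_per_second=480.0, watts=1200.0)
print(f"{eff:.2f} tokens/W, i.e. {1 / eff:.2f} W/token")  # 0.40 tokens/W, 2.50 W/token
print(tokens_to_recompute(10_000, cache_hit_share=0.35))  # 6500.0
```

In this hypothetical, a 35% prompt-cache hit rate means only 6,500 of every 10,000 tokens consume fresh compute, which is the lever behind the cost-per-token improvements described below.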

At the heart of the AI Grid is an intelligent orchestrator that acts as a real-time broker for AI requests. Applying Akamai’s expertise in application performance optimization to AI, this workload-aware control plane optimizes “tokenomics” by improving cost per token, time-to-first-token and throughput.

A major differentiator for Akamai is the ability for customers to access fine-tuned or sparsified models through its enormous global edge footprint, which offers a massive cost and performance advantage for the long tail of AI workloads. 

For example, enterprises can reduce inference costs by matching workloads to the right compute tier automatically. The orchestrator applies techniques like semantic caching and intelligent routing to direct requests to right-sized resources, reserving premium GPU cycles for the workloads that demand them. Underpinning this is Akamai Cloud, built on open-source infrastructure with generous egress allowances to support data-intensive AI operations at scale.

Gaming studios can deliver AI-driven non-player character (NPC) interactions that maintain player immersion in milliseconds. Financial institutions can execute personalized fraud detection and marketing recommendations in the moment between login and first screen. Broadcasters can transcode and dub content in real time for global audiences. 

Globally Distributed Edge Network

These outcomes are powered by Akamai’s globally distributed edge network with over 4,400 locations with integrated caching, serverless edge compute and high-performance connectivity that processes requests at the point of user contact, bypassing the round-trip lag of origin-dependent clouds.

Large language models, continuous post-training and multi-modal inference workloads require sustained, high-density compute that only dedicated infrastructure can deliver. Akamai’s multi-thousand GPU clusters, powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, provide the concentrated horsepower for the heaviest AI workloads, complementing the distributed edge with centralized scale.

The company says it can manage complex SLAs across edge and core locations:

  • The Edge (4,400+ locations): Delivers rapid response times for physical AI and autonomous agents, leveraging semantic caching and serverless capabilities like Akamai Functions (WebAssembly-based compute) and EdgeWorkers to deliver model affinity and stable performance at the point of user contact.
  • Akamai Cloud IaaS and dedicated GPU clusters: Core public cloud infrastructure enables portability and cost savings for massive-scale workloads, while pods powered by Nvidia RTX PRO 6000 Blackwell GPUs enable heavy-duty post-training and multi-modal inference.

“New AI-native applications demand predictable latency and better cost efficiency at planetary scale,” said Chris Penrose, global VP of business development for telco at Nvidia. “By operationalizing the NVIDIA AI Grid, Akamai is building the connective tissue for generative, agentic and physical AI, moving intelligence directly to the data to unlock the next wave of real-time applications.”

The Next Wave of Real-Time AI

Akamai is already seeing strong, early adoption for Akamai Inference Cloud across compute-intensive, latency-sensitive industries. The first wave of AI infrastructure was defined by massive GPU clusters in a handful of centralized locations, optimized for training. But as inference becomes the dominant workload and businesses across every industry focus on building AI agents, that centralized model faces the same scaling constraints that earlier generations of internet infrastructure encountered with media delivery, online gaming, financial transactions and complex microservices applications.

Akamai is solving each of those challenges through the same core approach, i.e., distributed networking, intelligent orchestration and purpose-built systems that bring content and context together as close as possible to the digital touchpoint. 

The post Smoking Hot Tokenomics: Akamai has the ‘Edge’ on Distributed Inference  appeared first on Techstrong.ai.

]]>
Nutanix Extends Reach of AI Platform to Securely Run AI Agents https://techstrong.ai/features/nutanix-extends-reach-of-ai-platform-to-securely-run-ai-agents/ Mon, 16 Mar 2026 20:30:34 +0000 https://techstrong.ai/?p=43672 Nutanix today added an ability to securely run artificial intelligence (AI) agents to its existing platform for deploying inference workloads in on-premises or self-managed cloud computing environments. Announced at the NVIDIA GTC 2026 conference, Nutanix also revealed it is working with NVIDIA to integrate with the NVIDIA Agent Toolkit, which provides access to an open [...]

The post Nutanix Extends Reach of AI Platform to Securely Run AI Agents appeared first on Techstrong.ai.

]]>
Nutanix today added an ability to securely run artificial intelligence (AI) agents to its existing platform for deploying inference workloads in on-premises or self-managed cloud computing environments.

Announced at the NVIDIA GTC 2026 conference, Nutanix also revealed it is working with NVIDIA to integrate with the NVIDIA Agent Toolkit, which provides access to an open source runtime, dubbed NVIDIA OpenShell, for securely deploying AI agents.

The Nutanix Agentic AI offering is an extension of the Nutanix Enterprise AI platform that makes it possible to securely orchestrate AI agents that are running on either an isolated virtual machine or container deployed on a Kubernetes cluster.

At the core of Nutanix Enterprise AI is an instance of NVIDIA AI Enterprise that enables organizations to build and orchestrate AI agents using a rich catalog of pre-built open source AI developer tools, including notebooks, vector databases, machine learning operations (MLOps) workflow engines and agentic frameworks. Nutanix has extended NVIDIA AI Enterprise into a full-stack platform that is integrated with its AHV hypervisor, Flow Virtual Networking software and the Nutanix Kubernetes Platform.

The Nutanix AHV hypervisor has been enhanced to automatically optimize allocation of physical resources to virtual machines on GPU-dense servers, while Nutanix Flow Virtual Networking can now offload the network dataplane processing to NVIDIA BlueField processors. The latest Nutanix Enterprise AI also now includes an AI Gateway service for unified policy control over cloud-hosted and private LLMs and support for the NVIDIA Nemotron family of open-source AI models, datasets, and training tools. Finally, NVIDIA has added support for Model Context Protocol (MCP) servers to the platform as well.

The overall goal is to provide IT teams with an integrated infrastructure that addresses everything from the underlying networking and server infrastructure to the tools needed to orchestrate workloads, says Debojyoti Dutta, chief AI officer for Nutanix.

Additionally, the Nutanix Enterprise AI platform provides IT teams with more control over costs, he adds. Rather than deploying every inference workload in the cloud, IT teams can reduce the total cost of AI by running agentic AI applications in an on-premises environment that provides access to data using a fixed amount of infrastructure resources. That approach is especially useful for long-running AI agents, notes Dutta. “The cost of running long-running AI agents can quickly rocket,” he says.
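
Dutta's point about runaway costs can be made concrete with a toy break-even model (all prices here are hypothetical assumptions, not Nutanix figures): cloud per-token billing grows linearly with a long-running agent's token volume, while fixed on-premises capacity amortizes a flat monthly cost, so beyond some volume the fixed tier wins.

```python
# Toy break-even model: cloud per-token billing vs. fixed on-prem capacity
# for long-running agents. All prices are hypothetical assumptions.

def cloud_cost(tokens: float, price_per_million: float) -> float:
    """Monthly cloud bill under per-token pricing."""
    return tokens / 1_000_000 * price_per_million

def breakeven_tokens(fixed_monthly_cost: float, price_per_million: float) -> float:
    """Monthly token volume above which fixed on-prem capacity is cheaper."""
    return fixed_monthly_cost / price_per_million * 1_000_000

print(cloud_cost(5_000_000_000, price_per_million=2.0))  # 10000.0 ($/month)
print(breakeven_tokens(8_000.0, price_per_million=2.0))  # 4000000000.0 tokens/month
```

Under these assumed prices, an agent consuming 5 billion tokens a month costs $10,000 in the cloud, while anything past 4 billion tokens a month would be cheaper on an $8,000 fixed on-prem footprint.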

It’s not clear how many AI inference workloads are being deployed on some type of on-premises IT environment versus the cloud. A recent Futurum Group survey finds cloud deployments account for 35% of implementations, compared to on-premise and private cloud deployments that have a 24% share and hybrid IT environments that have a 21% share.

Regardless of how AI agents are deployed, one thing is certain: there will soon be thousands of them accessing data across the enterprise. The challenge, and the opportunity, is determining how to deploy them all securely without breaking the IT budget.

The post Nutanix Extends Reach of AI Platform to Securely Run AI Agents appeared first on Techstrong.ai.

]]>
NVIDIA Ignites Agentic AI Era with Vera Rubin Platform at GTC https://techstrong.ai/features/nvidia-ai-news/ Mon, 16 Mar 2026 20:30:01 +0000 https://techstrong.ai/?p=43564 SAN JOSE, Calif. – In a two-hour mega-event NVIDIA Corp. CEO Jensen Huang dubbed the "Super Bowl of AI," NVIDIA Corp. kicked off its annual GTC conference on Monday with a sweeping refresh of its hardware and software ecosystem. The center of the storm is the new Vera Rubin platform, a massive infrastructure project designed [...]

The post NVIDIA Ignites Agentic AI Era with Vera Rubin Platform at GTC appeared first on Techstrong.ai.

]]>
SAN JOSE, Calif. – In a two-hour mega-event NVIDIA Corp. CEO Jensen Huang dubbed the “Super Bowl of AI,” NVIDIA Corp. kicked off its annual GTC conference on Monday with a sweeping refresh of its hardware and software ecosystem.

The center of the storm is the new Vera Rubin platform, a massive infrastructure project designed to transition the industry from simple generative models to agentic artificial intelligence (AI).

The Vera Rubin platform, a unified supercomputing architecture, integrates seven new chips in full production, including the Vera CPU and Rubin GPU. Combined with high-speed networking components like the NVLink 6 Switch and the newly integrated Groq 3 LPU, the platform is engineered to function as a singular, massive AI factory.

“Vera Rubin is a generational leap,” Huang said during a keynote speech at the cavernous SAP Center here. “The agentic AI inflection point has arrived, kicking off the greatest infrastructure buildout in history.”

NVIDIA projects more than $1 trillion in potential Blackwell/Rubin revenue through 2027, twice the $500 billion it estimated in October, according to Wedbush Securities.

“We are now a computing platform that runs all of AI,” Huang said.

Vera CPU stood out as the crown jewel at the digital three-ring circus. As the world’s first processor purpose-built for reinforcement learning and agentic workflows, it delivers twice the efficiency of traditional rack-scale CPUs. Its high single-thread performance is specifically tuned for the reasoning phase of AI, where models must validate code, interact with data, and manage tools, according to NVIDIA.

Industry giants lined up in support. Hyperscalers Meta Platforms Inc., Alibaba, and Oracle Cloud, alongside hardware leaders Dell Inc. and Lenovo Group have committed to deploying Vera architecture. The broad support positions NVIDIA to standardize the infrastructure used by everyone from startups to global enterprises.

“This announcement clearly signals that the largest enterprises are adopting agents,” Alex Laurie, chief technology officer at Ping Identity, said in an email.

“NVIDIA is making a big claim at GTC 2026: agentic AI doesn’t just have a model problem, it has an infrastructure problem, and NVIDIA intends to be the one to solve it,” said Mitch Ashley, vice president and practice lead, Software Lifecycle Engineering, at The Futurum Group. “That’s my read of their emerging ‘agent stack,’ not their official language.”

Jack Gold, a long-time tech analyst, observed NVIDIA is “trying hard to reposition itself as the inference AI company, after it spent so much time being the premier training company over the past few years, and especially now that there are so many newcomers pushing into inference.”

“We estimate 80% to 85% of AI workloads will be inference in the next one to two years, so NVIDIA must be a major player there,” Gold said. “A complementary issue to this is that inference won’t spend the gigabucks being spent on training, so NVIDIA is pushing the message that Vera Rubin and its AI factories are lowering the overall cost of tokens, even as the cost of these systems goes up. Inference is a cost-sensitive compute structure, just like cloud hosting is.”

Beyond the digital realm, NVIDIA is pushing deep into physical AI. The company unveiled Physical AI Data Factory Blueprint, an open architecture aimed at automating how robotics data is generated and evaluated. In partnering with robotics pioneers Boston Dynamics (via Agility), ABB, and KUKA, NVIDIA is providing the brains for the next generation of humanoid robots and industrial automation through its Isaac and Cosmos simulation frameworks.

NVIDIA’s ambitions also extended beyond Earth’s atmosphere. The company introduced the Vera Rubin Space Module, bringing data-center-class compute to orbital data centers. The Rubin GPU offers up to 25 times more AI compute for space-based inferencing than the previous H100 model, allowing for autonomous space operations and advanced geospatial intelligence in power-constrained environments.

A parade of third-party announcements was also shared, a testament to NVIDIA’s stature as AI’s central hub.

Adobe Inc. and NVIDIA forged a strategic partnership to overhaul the digital content pipeline, integrating NVIDIA’s accelerated computing and open-source models directly into Adobe’s creative and marketing suites. The collaboration aims to supercharge the development of next-generation Adobe Firefly foundational models and introduce agentic workflows, which use AI to automate complex production tasks.

The alliance merges Adobe’s industry-standard creative tools with NVIDIA’s research and library infrastructure to address skyrocketing demand for personalized digital assets.

AI cloud provider Nebius entered a strategic collaboration with NVIDIA to launch an end-to-end platform specifically engineered for physical AI. Unlike traditional cloud workloads, physical AI requires a seamless loop between virtual simulation and real-world hardware.

The new platform is designed to support the entire robotics lifecycle, from creating high-fidelity digital twins and leveraging massive compute for neural network development to scaling fleet operations in the real world.

“Physical AI is going to be one of the defining technology shifts of this decade,” said Evan Helda, head of physical AI at Nebius. He noted that current development teams are often limited by legacy infrastructure. “Working with NVIDIA, we are building the execution layer for the entire physical AI ecosystem,” he said.

Simultaneously, Lenovo unveiled the next phase of its Hybrid AI Advantage with NVIDIA. The suite of solutions is focused on performance metrics that matter to the bottom line — specifically reducing Time-to-First-Token and accelerating the deployment of generative AI across diverse environments.

The expansion marks a significant scale-up from Lenovo’s previous inferencing tools, now spanning personal devices for AI-enhanced edge computing, enterprise data centers for on-premise private clouds, and gigawatt-scale cloud for massive deployments for global industrial automation.

By integrating NVIDIA’s acceleration libraries, Lenovo aims to enable real-time decision-making and intelligent automation at a global scale, moving AI from a pilot project phase into a core operational driver for heavy industry and global enterprises.

GMI Cloud, a full-stack AI infrastructure provider, launched a global initiative to architect and deploy sovereign AI factories. The company is positioning itself as a primary backbone for these national-scale projects by integrating a significant capacity of the newly announced NVIDIA Vera Rubin NVL72 platform. By bringing this high-performance hardware online, GMI Cloud intends to establish a new gold standard for government-led AI deployments.

The initiative is already underway, marking a strategic shift toward AI sovereignty as countries seek to maintain control over their data and domestic technological capabilities.

The post NVIDIA Ignites Agentic AI Era with Vera Rubin Platform at GTC appeared first on Techstrong.ai.

]]>
Nippon Life Sues OpenAI, Alleging ChatGPT Engaged in Unauthorized Practice of Law https://techstrong.ai/features/nippon-life-sues-openai-alleging-chatgpt-engaged-in-unauthorized-practice-of-law/ Mon, 16 Mar 2026 19:45:17 +0000 https://techstrong.ai/?p=42919 On March 5, a federal lawsuit was filed against OpenAI alleging that the company engaged in the unauthorized practice of law through its large language model ChatGPT. The plaintiff, Nippon Life Insurance Company of America, claims that OpenAI is responsible for legal actions taken by a former disability claimant who allegedly relied on advice generated [...]

The post Nippon Life Sues OpenAI, Alleging ChatGPT Engaged in Unauthorized Practice of Law appeared first on Techstrong.ai.

]]>
On March 5, a federal lawsuit was filed against OpenAI alleging that the company engaged in the unauthorized practice of law through its large language model ChatGPT.

The plaintiff, Nippon Life Insurance Company of America, claims that OpenAI is responsible for legal actions taken by a former disability claimant who allegedly relied on advice generated by ChatGPT. According to the complaint, the claimant used guidance from the chatbot to attempt to reopen a previously settled disability claim.

Nippon alleges that the legal filings made by the claimant were meritless, were created with the assistance of ChatGPT and forced the insurer to incur unnecessary legal costs. The company is seeking $300,000 in compensatory damages and $10 million in punitive damages. It is also requesting a court order declaring that OpenAI violated Illinois’ unauthorized practice of law statute, according to the complaint.

The case was filed in the U.S. District Court for the Northern District of Illinois in Chicago.

The claimant herself is not named as a defendant in the lawsuit. Nippon states that the woman dismissed her attorney and proceeded to pursue the claim after consulting ChatGPT.

The insurer argues that ChatGPT is “not an attorney” and “has not been admitted to practice law in the State of Illinois or in any other jurisdiction within the United States.” Nippon also alleges that OpenAI revised its policies in October to bar users from using the platform for legal advice, claiming that the policy previously contained no such prohibition.

Unauthorized practice of law is illegal in all 50 U.S. states. The lawsuit raises questions about whether AI-generated guidance could cross the line from providing general information about laws into offering personalized legal advice.

People frequently use chatbots to gather information much as they use search engines such as Google. However, search engines primarily direct users to existing sources, while generative AI systems can produce tailored responses based on specific facts provided by a user. If a user provides an AI system with detailed information about a personal legal situation and receives direct guidance or drafted legal filings in return, the complaint suggests, that could potentially constitute unauthorized practice of law.

The case also highlights how generative AI is increasingly being used for tasks traditionally associated with legal work, including drafting documents, summarizing material and referencing large bodies of legal information. At the same time, courts are still grappling with how AI should be used within legal proceedings. Recently, two Florida judicial circuits issued rules requiring lawyers to disclose when artificial intelligence tools are used in court filings.

OpenAI rejected the allegations, saying in a statement to The New York Post on Friday that "this complaint lacks any merit whatsoever."

The case is Nippon Life Insurance Company of America v. OpenAI Foundation and OpenAI Group PBC, U.S. District Court, Northern District of Illinois, No. 1:26-cv-02448.
