Yesterday's NVIDIA GTC 2026 keynote had new chips, new racks, and talk of putting Vera Rubin modules into orbit. But none of that was the real headline. The biggest announcement was that NVIDIA is going all-in on claws — autonomous AI agents that write code, browse the web, call APIs, and chain actions for hours without human input.
Jensen Huang walked on stage and said it plainly: "OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for."
And then he unveiled NemoClaw.
OpenClaw: From Weekend Hack to 250,000 Stars
To understand why NemoClaw matters, you need to understand OpenClaw first.
OpenClaw started as a weekend project called "Clawdbot" by developer Peter Steinberger in November 2025. The concept was simple: give an LLM full terminal access, persistent memory, and a bunch of tools, then let it operate autonomously. File organization, online research, code writing, system administration — all handled by a single agent running on minimal hardware.
It went viral in late January 2026. Within two weeks it hit 100,000 GitHub stars. By March 3, it had 250,829 stars — surpassing React's decade-long record in roughly 60 days. It became the fastest-growing open source project in history, with over 48,000 forks and more than 1,000 contributors.
The growth wasn't accidental. OpenClaw hit at the exact moment when AI model capabilities caught up to the agent dream. Zero-friction onboarding, MIT license, model-agnostic design, and (curiously) a lobster mascot that went viral in China all contributed.
But the speed came with a cost.
The Security Crisis Nobody Could Ignore
OpenClaw's "let it rip" approach made it incredibly productive — and incredibly dangerous.
A critical vulnerability (CVE-2026-25253) allowed one-click remote code execution through WebSocket token theft on all versions before 2026.1.29. Security researchers found 42,900 public-facing instances across 82 countries, with 15,200 confirmed vulnerable to remote code execution. Over 820 malicious skills were discovered on ClawHub out of 10,700 total.
Microsoft, Cisco, Kaspersky, and Trend Micro all published security advisories. Harrison Chase, the CEO of LangChain, told the VentureBeat podcast that he wouldn't let his own staff install OpenClaw on company laptops. His exact framing: "I guarantee that every enterprise developer out there wants to put a safe version of OpenClaw onto their computer. The bottleneck was never interest — it was the absence of a credible security and governance layer."
Every IT team wanted it. Almost none of them could safely deploy it.
That's the gap NemoClaw fills.
NemoClaw: NVIDIA's Enterprise Wrapper for OpenClaw
NemoClaw isn't a competitor to OpenClaw. It's an enterprise-grade wrapper around it — Apache 2.0 licensed and installable with a single command.
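The exact command wasn't shown on screen at the keynote; a typical curl-to-shell quick-start would look something like this (the URL and flags here are hypothetical):

```
# Hypothetical one-line installer; the real URL and options may differ
curl -fsSL https://nemoclaw.nvidia.com/install.sh | sh
```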
An interactive onboarding wizard detects your environment and configures everything. Under the hood, it creates sandboxed Docker containers for each agent, routes inference to local models by default, and enforces security policies before any data leaves your infrastructure.
The architecture breaks down into three core components.
1. Nemotron Models — Local-First Inference
The first piece is NVIDIA's own open-weight model family. The flagship, Nemotron 3 Super, was released on March 11:
- 120 billion total parameters, 12 billion active (Mixture of Experts)
- Hybrid architecture: Mamba-2 layers for linear-time sequence processing, Transformer attention layers interleaved at key depths for associative recall
- 1 million token native context window
- Trained on 25 trillion tokens with 7 million supervised fine-tuning samples
- Native NVFP4 pretraining optimized for Blackwell — 4x memory and compute efficiency vs FP8 on H100
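Those numbers are what make local deployment plausible. A back-of-the-envelope estimate of the weight footprint at NVFP4 (4 bits per parameter), ignoring KV cache, activations, and quantization scale metadata:

```python
# Rough weight-memory estimate for a 120B-parameter model at 4-bit precision.
# This ignores KV cache, activations, and per-block scale factors, so treat
# it as a lower bound, not a deployment spec.

total_params = 120e9          # Nemotron 3 Super: 120B total parameters
active_params = 12e9          # 12B active per token (Mixture of Experts)
bytes_per_param_fp4 = 0.5     # NVFP4: 4 bits = 0.5 bytes
bytes_per_param_fp8 = 1.0     # FP8 baseline: 8 bits = 1 byte

weights_fp4_gb = total_params * bytes_per_param_fp4 / 1e9
weights_fp8_gb = total_params * bytes_per_param_fp8 / 1e9
active_fp4_gb = active_params * bytes_per_param_fp4 / 1e9

print(f"FP4 weights: ~{weights_fp4_gb:.0f} GB")          # ~60 GB: fits a 128 GB DGX Spark
print(f"FP8 weights: ~{weights_fp8_gb:.0f} GB")          # ~120 GB
print(f"Active weights per token: ~{active_fp4_gb:.0f} GB")  # ~6 GB touched per token
```

The MoE design is doing real work here: only ~6 GB of weights are touched per token, even though all ~60 GB must be resident.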
On PinchBench, the benchmark that measures how well models perform with OpenClaw, Nemotron 3 Super scores 85.6% — top of the open-weight leaderboard, beating Kimi K2.5, GLM-5, and the Qwen models. On SWE-Bench Verified it hits 60.47% versus GPT-OSS's 41.90%.
The practical upshot: you can run a competitive agent model locally. Zero token costs. No data leaving your infrastructure.
NVIDIA also announced that Nemotron 3 Ultra has finished pre-training and will likely be post-trained specifically for agentic workloads over the coming months.
2. OpenShell — A Security Runtime for Agents
The second component is where NemoClaw gets serious about safety. OpenShell is an open-source, out-of-process security runtime with a deny-by-default approach. Think Docker, but with YAML-based policy controls designed specifically for autonomous agents.
OpenShell has four defense layers:
| Layer | Function | Updateable at Runtime? |
|---|---|---|
| Network | Blocks unauthorized outbound connections | Yes |
| Filesystem | Restricts access to /sandbox and /tmp only | No (locked at creation) |
| Process | Prevents privilege escalation and dangerous syscalls | No (locked at creation) |
| Inference | Routes model calls to controlled backends | Yes |
Once a policy is defined, anything outside it is automatically blocked. Network and inference policies can be hot-swapped without restarting the agent. When agents hit constraints, they can propose policy updates for developer approval rather than silently failing.
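A deny-by-default policy under this model might look something like the sketch below. The schema is a guess for illustration; OpenShell's actual YAML format wasn't shown at the keynote.

```yaml
# Hypothetical OpenShell policy sketch; field names are illustrative, not official.
agent: client-onboarding
defaults: deny                    # anything not listed below is blocked

network:                          # hot-swappable at runtime
  allow:
    - host: api.internal.example.com
      ports: [443]

filesystem:                       # locked at container creation
  allow:
    - /sandbox
    - /tmp

process:                          # locked at container creation
  deny_syscalls: [ptrace, mount]
  no_privilege_escalation: true

inference:                        # hot-swappable at runtime
  default_backend: local-nemotron
  frontier_allowed: false
```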
The Privacy Router is particularly clever. It keeps sensitive context on-device using local Nemotron models by default, and only routes to frontier models like Claude or GPT when the policy explicitly permits it. Routing decisions are based on cost and privacy policies — not agent preferences.
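In code terms, the router's decision function is small: the policy, not the agent, picks the backend. A minimal sketch of that logic (the function name and policy fields are assumptions, not NemoClaw's actual API):

```python
# Minimal sketch of a policy-driven privacy router.
# Field names and the sensitivity flag are illustrative assumptions.

def route_request(prompt_is_sensitive: bool, policy: dict) -> str:
    """Pick an inference backend based on policy, never on agent preference."""
    if prompt_is_sensitive:
        # Sensitive context always stays on-device with the local model.
        return policy["local_backend"]
    if policy.get("frontier_allowed", False):
        # Non-sensitive work may leave the box only if the policy permits it
        # and the estimated cost fits the budget.
        if policy.get("cost_per_call", 0.0) <= policy.get("cost_budget", 0.0):
            return policy["frontier_backend"]
    return policy["local_backend"]

policy = {
    "local_backend": "nemotron-3-super",
    "frontier_backend": "frontier-model",
    "frontier_allowed": True,
    "cost_per_call": 0.01,
    "cost_budget": 0.05,
}

print(route_request(True, policy))   # sensitive context -> local model
print(route_request(False, policy))  # permitted and in budget -> frontier model
```

Note that the fall-through case is always the local backend: if the policy says nothing, nothing leaves the machine.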
OpenShell works with more than just OpenClaw. It's compatible with Claude Code, OpenAI's Codex, Cursor, and OpenCode.
3. NVIDIA Agent Toolkit — The Production Layer
The third piece is NVIDIA's full-stack platform for building production-grade agentic workflows. This is what enterprise partners like Box, Adobe, Salesforce, and ServiceNow are building on.
Box's integration is a good example of what this looks like in practice. They're using the Agent Toolkit to enable claws that use the Box file system as their primary working environment, with pre-built skills for:
- Invoice extraction
- Contract lifecycle management
- RFP sourcing
- Go-to-market workflows
The architecture supports hierarchical agent management — a parent claw like a "Client Onboarding Agent" can spin up specialized sub-agents for discrete tasks, all governed by the same OpenShell policy engine. Agent permissions mirror employee permissions. If a human user can't access a folder, neither can their agent.
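The mirroring rule is easy to state precisely: an agent's effective access is the intersection of what its scope requests and what its owning employee already holds. A sketch of that check (hypothetical helper, not Box's actual API):

```python
# Sketch of agent permissions mirroring employee permissions.
# An agent can never access a resource its owning user cannot.

def agent_can_access(resource: str, user_grants: set, agent_scope: set) -> bool:
    """Effective agent access = intersection of user grants and agent scope."""
    return resource in (user_grants & agent_scope)

user_grants = {"/finance/invoices", "/sales/rfps"}
agent_scope = {"/finance/invoices", "/hr/records"}  # agent asks for more than the user has

print(agent_can_access("/finance/invoices", user_grants, agent_scope))  # True: both hold it
print(agent_can_access("/hr/records", user_grants, agent_scope))        # False: user lacks it
print(agent_can_access("/sales/rfps", user_grants, agent_scope))        # False: agent never asked
```

The intersection also caps sub-agents automatically: a child claw spawned with a narrower scope can only lose permissions, never gain them.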
The Hardware Play: Groq 3 LPU and the DGX Lineup
This isn't just a software story. NVIDIA is clearly using NemoClaw to sell hardware.
Groq 3 LPU
The surprise hardware announcement was the Groq 3 LPU (Language Processing Unit) — NVIDIA's first non-GPU rack, built from IP acquired in their $20 billion licensing deal with Groq Inc. in December 2025.
The Groq 3 LPU is a dedicated inference chip — not for training, purely for running models fast:
- 40 petabytes per second of memory bandwidth
- Up to 1,500 tokens per second for agentic workloads (vs ~100 tokens/sec needed for human reading)
- 256 LPUs per rack with 128 GB of solid-state RAM
- Functions as a coprocessor paired with Vera Rubin NVL72 GPU systems
- 35x higher throughput per megawatt versus current GPU inference
The logic is straightforward: when agents talk to other agents (not to humans), they need far faster inference than human reading speed demands. The Groq 3 LPU is purpose-built for that. Available from cloud providers and OEMs in the second half of 2026.
Desktop and Workstation Hardware
For running NemoClaw locally, NVIDIA is targeting multiple tiers:
| Hardware | Key Spec | Price |
|---|---|---|
| DGX Spark | 128 GB unified memory, 1 PFLOPS FP4 | $3,999–$4,699 |
| DGX Spark (4-node cluster) | 512 GB shared memory pool | ~$16,000–$18,800 |
| DGX Station (GB300) | 748 GB coherent memory, 20 PFLOPS | TBD |
| RTX PRO 6000 | 96 GB GDDR7, 5th-gen Tensor Cores | TBD |
The DGX Spark is particularly interesting — a desktop form factor "personal AI supercomputer" with enough memory to run the full Nemotron 3 Super locally. The new four-node clustering feature lets you link four Sparks together for a 512 GB shared memory pool.
What This Means for the Agentic AI Landscape
NemoClaw represents a clear shift in how the industry thinks about AI agents.
The open vs. closed debate is tilting open. Jensen explicitly argued that it's "not in our interest to just have OpenAI running OpenClaw for everyone, or Anthropic or Google or any of the hyperscalers." The pitch is that enterprises should have their own customized version with local inference and local security policies.
Security is no longer optional. The 42,900 exposed OpenClaw instances proved that powerful agents without guardrails are a liability. OpenShell's deny-by-default approach and YAML policy engine are the kind of infrastructure that was missing from day one.
The hardware moat matters. Running a 120B-parameter model locally requires serious compute. NVIDIA happens to sell that compute. NemoClaw gives enterprises a reason to buy DGX hardware that goes beyond training workloads — now they need it for always-on agent inference too.
Agent permissions will mirror employee permissions. Box's approach of mapping agent access controls to existing employee permission structures is likely the template every enterprise will follow. This is how IT teams will get comfortable deploying agents in production.
The Bigger Picture
OpenClaw proved that autonomous agents are not a theoretical concept — they're a product category. In 60 days it went from zero to 250,000 stars because it gave people something they desperately wanted: an AI that could actually do things on their behalf, not just answer questions.
NemoClaw is NVIDIA's bet that the next phase isn't about making agents more capable. It's about making them safe enough to deploy at scale. Local models that keep data on-prem. Security runtimes that enforce policies. Hardware that can actually run these workloads 24/7.
Harrison Chase summed it up well: "Production systems require explicit safety mechanisms that don't compromise functional capability."
NVIDIA just shipped those mechanisms. The enterprise claw era starts now.