Categories: News

WEKA Breaks The AI Memory Barrier With Augmented Memory Grid on NeuralMesh

Breakthrough Memory Extension Technology, Validated on Oracle Cloud Infrastructure, Democratizes Inference, Delivering 1000x More Memory and 20x Faster Time to First Token for NeuralMesh Customers

ST. LOUIS and CAMPBELL, Calif., Nov. 19, 2025 /PRNewswire/ — From SC25: WEKA, the AI storage company, today announced the commercial availability of Augmented Memory Grid™ on NeuralMesh™, a revolutionary memory extension technology that solves the fundamental bottleneck throttling AI innovation: GPU memory. Validated on Oracle Cloud Infrastructure (OCI) and other leading AI cloud platforms, Augmented Memory Grid extends GPU memory capacity by 1000x, from gigabytes to petabytes, while reducing time-to-first-token by up to 20x. This breakthrough enables AI builders to streamline long-context reasoning and agentic AI workflows, dramatically improving the efficiency of inference workloads that have previously been challenging to scale.

From Innovation to Production: Solving the AI Memory Wall
Since its introduction at NVIDIA GTC 2025, Augmented Memory Grid has been hardened, tested, and validated in leading production AI cloud environments, starting with OCI. The results have confirmed what early testing indicated: as AI systems evolve toward longer, more complex interactions—from coding copilots to research assistants and reasoning agents—memory has become the critical bottleneck limiting inference performance and economics.

“We’re bringing to market a proven solution validated with Oracle Cloud Infrastructure and other leading AI infrastructure platforms,” said Liran Zvibel, co-founder and CEO at WEKA. “Scaling agentic AI isn’t just about raw compute—it’s about solving the memory wall with intelligent data pathways. Augmented Memory Grid enables customers to run more tokens per GPU, support more concurrent users, and unlock entirely new service models for long-context workloads. OCI’s bare metal infrastructure with high-performance RDMA networking and GPUDirect Storage capabilities makes it a unique platform for accelerating inference at scale.”

Today’s inference systems face a fundamental constraint: GPU high-bandwidth memory (HBM) is extraordinarily fast but limited in capacity, while system DRAM offers more space but far less bandwidth. Once both tiers fill, key-value cache (KV cache) entries are evicted and GPUs are forced to recompute tokens they’ve already processed—wasting cycles, power, and time.
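
To make the recompute penalty concrete, here is a minimal, illustrative Python sketch (not WEKA code) of a two-tier HBM/DRAM cache: once a sequence's KV blocks fall off both tiers, the only recovery path is re-running the prefill.

```python
# Illustrative sketch (not WEKA code): a two-tier KV cache where eviction
# from the last tier forces a full prefill recompute on the next request.
from collections import OrderedDict

class TieredKVCache:
    """Toy model of HBM + DRAM tiers holding per-sequence KV blocks."""

    def __init__(self, hbm_blocks: int, dram_blocks: int):
        self.tiers = [OrderedDict() for _ in range(2)]   # 0 = HBM, 1 = DRAM
        self.capacity = [hbm_blocks, dram_blocks]

    def get(self, seq_id: str):
        for i, tier in enumerate(self.tiers):
            if seq_id in tier:
                tier.move_to_end(seq_id)                 # LRU touch
                return tier[seq_id], ("HBM" if i == 0 else "DRAM")
        return None, "MISS"                              # evicted: must re-run prefill

    def put(self, seq_id: str, kv_blocks):
        self.tiers[0][seq_id] = kv_blocks
        for i in range(len(self.tiers)):                 # cascade LRU evictions downward
            while len(self.tiers[i]) > self.capacity[i]:
                victim, blocks = self.tiers[i].popitem(last=False)
                if i + 1 < len(self.tiers):
                    self.tiers[i + 1][victim] = blocks
                # else: dropped entirely -> next request recomputes the prefill

cache = TieredKVCache(hbm_blocks=2, dram_blocks=2)
for s in ["a", "b", "c", "d", "e"]:
    cache.put(s, kv_blocks=f"kv[{s}]")
print(cache.get("a"))  # (None, 'MISS'): sequence "a" fell off both tiers
```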

WEKA’s Augmented Memory Grid breaks through the GPU memory wall by creating a high-speed bridge between GPU memory (typically HBM) and flash-based storage. It continuously streams key-value cache data between GPU memory and WEKA’s token warehouse, using RDMA and NVIDIA Magnum IO GPUDirect Storage to achieve near-memory speeds. This allows large language models and agentic AI systems to access far more context without recomputing KV cache entries for tokens they have already processed, dramatically improving efficiency and scalability.
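
The control flow described above amounts to a lookup-before-recompute pattern. The sketch below is a hedged illustration under assumed names: prefix_key, fetch_kv_over_rdma, run_prefill, and warehouse_index are hypothetical, not WEKA's or NVIDIA's actual APIs.

```python
# Hedged sketch of the flow the release describes: before recomputing a
# prefill, look the prompt's KV blocks up in a flash-backed "token warehouse".
import hashlib

def prefix_key(prompt_tokens: list[int]) -> str:
    """Content-address the prompt prefix so identical contexts share KV blocks."""
    return hashlib.sha256(str(prompt_tokens).encode("utf-8")).hexdigest()

def load_context(prompt_tokens, warehouse_index, fetch_kv_over_rdma, run_prefill):
    key = prefix_key(prompt_tokens)
    if key in warehouse_index:
        # Cache hit: stream KV blocks flash -> GPU (RDMA/GPUDirect-style path),
        # skipping the expensive prefill entirely.
        return fetch_kv_over_rdma(warehouse_index[key])
    kv_blocks = run_prefill(prompt_tokens)   # cache miss: pay the compute once
    warehouse_index[key] = kv_blocks         # persist for future sessions
    return kv_blocks

# Toy usage: a second call with the same prompt skips run_prefill.
index = {}
kv = load_context([1, 2, 3], index, fetch_kv_over_rdma=lambda b: b,
                  run_prefill=lambda toks: f"kv({len(toks)} tokens)")
```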

OCI-Tested Performance and Ecosystem Integration
Independent testing, including validation on OCI, has confirmed:

  • 1000x more KV cache capacity while maintaining near-memory performance.
  • 20x faster time to first token when processing 128,000 tokens compared to recomputing the prefill phase (illustrated below).
  • 7.5M read IOPS and 1.0M write IOPS in an eight-node cluster.
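
For intuition on the 20x time-to-first-token figure, the arithmetic below works under stated assumptions. Only the 128,000-token context comes from the release; the prefill rate, per-token KV size, and fetch bandwidth are illustrative values chosen to show how fetching cached KV state can beat recomputing it by roughly 20x.

```python
# Back-of-the-envelope TTFT comparison. The 128,000-token figure is from the
# release; every other number below is an illustrative assumption.
prompt_tokens    = 128_000
prefill_rate     = 5_000          # tokens/s of prefill compute (assumed)
kv_bytes_per_tok = 160 * 1024     # KV-cache bytes per token (assumed, model-dependent)
fetch_bw         = 16e9           # bytes/s streamed from the token warehouse (assumed)

ttft_recompute = prompt_tokens / prefill_rate                   # 25.6 s
ttft_fetch     = prompt_tokens * kv_bytes_per_tok / fetch_bw    # ~1.31 s
print(f"recompute: {ttft_recompute:.1f}s  fetch: {ttft_fetch:.2f}s  "
      f"speedup: {ttft_recompute / ttft_fetch:.0f}x")           # ~20x
```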

For AI cloud providers, model providers, and enterprise AI builders, these performance gains fundamentally change inference economics. By eliminating redundant prefill operations and sustaining high cache hit rates, organizations can maximize tenant density, reduce idle GPU cycles, and dramatically improve ROI per kilowatt-hour. Model providers can now profitably serve long-context models, slashing input token costs and enabling entirely new business models around persistent, stateful AI sessions.

The move to commercial availability reflects deep collaboration with leading AI infrastructure partners, including NVIDIA and Oracle. The solution integrates tightly with NVIDIA GPUDirect Storage, NVIDIA Dynamo, and the NVIDIA Inference Transfer Library (NIXL), for which WEKA has open-sourced a dedicated plugin. OCI’s bare-metal GPU compute with RDMA networking and NVIDIA GPUDirect Storage capabilities provides the high-performance foundation WEKA needs to deliver Augmented Memory Grid without performance compromises in cloud-based AI deployments.
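
The release does not detail the plugin's interface, but the general shape of a pluggable transfer backend can be sketched as follows. This is a hypothetical mock for illustration only; it is not the NIXL API, and TransferBackend, FlashWarehouseBackend, and register_backend are invented names.

```python
# Hypothetical mock of the plugin pattern (NOT the NIXL API): the inference
# engine talks to one transfer interface, and a storage backend (here, a
# flash-backed one) registers itself behind it.
from abc import ABC, abstractmethod

class TransferBackend(ABC):
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, payload: bytes) -> None: ...

class FlashWarehouseBackend(TransferBackend):
    """Stand-in for a GPUDirect/RDMA-capable flash tier."""
    def __init__(self):
        self._store: dict[str, bytes] = {}
    def read(self, key: str) -> bytes:
        return self._store[key]
    def write(self, key: str, payload: bytes) -> None:
        self._store[key] = payload

BACKENDS: dict[str, TransferBackend] = {}

def register_backend(name: str, backend: TransferBackend) -> None:
    BACKENDS[name] = backend      # engine selects a backend by name at runtime

register_backend("flash-warehouse", FlashWarehouseBackend())
BACKENDS["flash-warehouse"].write("seq-42/kv", b"\x00" * 16)
assert BACKENDS["flash-warehouse"].read("seq-42/kv") == b"\x00" * 16
```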

“The economics of large-scale inference are a major consideration for enterprises,” said Nathan Thomas, vice president, multicloud, Oracle Cloud Infrastructure. “WEKA’s Augmented Memory Grid directly confronts this challenge. The 20x improvement in time-to-first-token we observed in joint testing on OCI isn’t just a performance metric; it fundamentally reshapes the cost structure of running AI workloads. For our customers, this makes deploying the next generation of AI easier and cheaper.”

Commercial Availability
Augmented Memory Grid is now available as a feature of NeuralMesh deployments and on the Oracle Cloud Marketplace, with support for additional cloud platforms coming soon.

Organizations interested in deploying Augmented Memory Grid should visit WEKA’s Augmented Memory Grid page to learn more about the solution and the qualification criteria.

About WEKA
WEKA is transforming how organizations build, run, and scale AI workflows with NeuralMesh™, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes slower and more fragile as workloads expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, dynamically adapting to AI environments to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50, NeuralMesh helps leading enterprises, AI cloud providers, and AI builders optimize GPUs, scale AI faster, and reduce innovation costs. Learn more at www.weka.io or connect with us on LinkedIn and X.

WEKA and the W logo are registered trademarks of WekaIO, Inc. Other trade names herein may be trademarks of their respective owners.

Photo – https://mma.prnewswire.com/media/2825138/PR_WEKA_Augmented_Memory_Grid.jpg
Logo – https://mma.prnewswire.com/media/1796062/WEKA_v1_Logo_new.jpg

View original content: https://www.prnewswire.com/in/news-releases/weka-breaks-the-ai-memory-barrier-with-augmented-memory-grid-on-neuralmesh-302618251.html
