AI Inferencing Didn’t Break Your Architecture – It Revealed What Comes Next

Webinar title ‘AI Inferencing: Don’t Break Your Architecture’ with photos of speakers Tom O’Neill of Silk and Eduardo Kassner of Microsoft.
Feb 2, 2026

As real‑time AI inferencing becomes foundational to enterprise applications, infrastructure teams are encountering new patterns: sudden latency variability, unpredictable resource contention, shifting cost dynamics, and pressure on mission‑critical workloads. These aren’t signs of failure — they’re signs of change. Existing cloud architectures, built for steady transactional loads, are now being asked to support burst‑heavy, data‑intensive AI behaviors at an unprecedented scale.

In this session, Eduardo Kassner, Chief Data & AI Officer at Microsoft, and Tom O’Neill, VP of Product at Silk, examine how AI inferencing reshapes system behavior and why the solution isn’t simply adding replicas, adopting new storage systems, or rewriting applications. Instead, leading enterprises are introducing cloud‑native acceleration layers — such as virtual SAN architectures — to deliver consistent performance, isolate AI workflows, and scale responsibly in shared cloud environments.

If AI is stressing the boundaries of your current cloud design, this session will help you understand the architectural shift underway — and how organizations are adapting without disruptive re‑architecture or added operational risk.

Meet the Speakers

Eduardo Kassner

Chief Data & AI Officer, High-Tech Sector, Microsoft

Tom O'Neill

VP of Product, Silk

Additional Resources