Webinar Transcript

Inside the Lab: How I Achieved 20 GB/s of Throughput on a Single Cloud VM

Topic: Real-World Database Performance Testing with Silk Cloud Storage

Speakers:

Tom O’Neill, Vice President of Product, Silk

Tanel Poder, Database Performance Consultant

Summary

In this technical Inside the Lab session, Silk’s VP of Product Tom O’Neill talks with performance expert Tanel Poder about his latest benchmark findings using Silk Cloud Storage.

Tanel used Vdbench to test Silk’s throughput and endurance under real-world mixed workloads, achieving an unprecedented 20 GB/s of sustained throughput on a single virtual machine. The pair discuss how Silk’s software-defined architecture enables these results, why it outperforms native cloud storage, and what it means for modern databases and AI workloads.

They also look ahead at how next-generation cloud networking — scaling from 400 Gbps to 1.6 Tbps — will extend Silk’s performance even further.

Key Takeaways

Silk achieved 20 GB/s reads and 10 GB/s sustained writes on a single cloud VM.

Benchmarks hit 1.3 million 8KB random reads per second, sustained for hours.

Vdbench validated mixed read/write workloads across sequential and random patterns.

Silk’s software-defined architecture scales performance linearly without rearchitecting.

Well suited to PostgreSQL, AlloyDB, Oracle, and SQL Server, including AI and vector workloads.

Future-proof design keeps pace with emerging network standards (400 Gbps → 1.6 Tbps).

Transcript
[00:00–01:00] Introduction

Tom O’Neill (Silk):
Hello everyone! I’m Tom O’Neill, VP of Product at Silk. Today, I’m joined by Tanel Poder, one of the most respected voices in database performance testing.

We asked Tanel to push Silk Cloud Storage to its limits, and he did exactly that, using Vdbench to measure just how far Silk could go on modern cloud infrastructure.

Tanel, let’s start at the beginning: what were you hoping to prove with this round of testing?

[01:00–02:00] Objectives of the Test

Tanel Poder:
The goal was to validate Silk’s performance claims in a real-world, sustained workload scenario — not just a short spike.

I’d tested Silk about three years ago, when network throughput was much lower. This time, I wanted to measure how far the system could scale and how long it could sustain those speeds. Using Vdbench, I simulated realistic read/write workloads, from large sequential operations to small random I/O, all running simultaneously.

The idea was simple: drive Silk the way an enterprise database behaves in production.
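
For readers who want to set up a similar mix, here is a minimal sketch of how such a Vdbench run could be described from Python. The device path, transfer sizes, thread count, and run length are illustrative assumptions, not the settings used in these tests.

```python
# Minimal sketch (assumed settings): a Vdbench parameter file mixing large
# sequential reads with small random 8 KB reads and writes on one raw device.
import pathlib
import subprocess

params = pathlib.Path("mixed_workload.vdb")
params.write_text(
    "sd=sd1,lun=/dev/nvme0n1,openflags=o_direct\n"                    # storage definition
    "wd=wd1,sd=sd*,xfersize=1m,rdpct=100,seekpct=0\n"                  # large sequential reads
    "wd=wd2,sd=sd*,xfersize=8k,rdpct=100,seekpct=100\n"                # small random reads
    "wd=wd3,sd=sd*,xfersize=8k,rdpct=0,seekpct=100\n"                  # small random writes
    "rd=rd1,wd=wd*,iorate=max,threads=64,elapsed=3600,interval=10\n"   # one-hour mixed run
)

# Assumes the vdbench launcher script is on PATH; point at your install otherwise.
subprocess.run(["vdbench", "-f", str(params)], check=True)
```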

[02:00–03:00] Headline Results

Tom:
And what kind of results did you achieve?

Tanel:
They were remarkable — especially for the cloud:

One Silk cluster connected to a single VM reached over 20 GB/s of large reads.

Random small reads hit 1.3 million 8KB IOPS — that’s around 10 GB/s of data movement.

Sustained 10 GB/s writes for hours with zero throttling.

These weren’t peaks — they were stable, repeatable results across long test runs.

[03:00–04:00] Why PostgreSQL and AlloyDB

Tom:
You’ve tested a lot of database systems. Why focus on PostgreSQL and AlloyDB this time?

Tanel:
Postgres has become the open-source standard for enterprise workloads — and Google AlloyDB builds on that with improved caching and parallelization.

Both are now widely used for AI, analytics, and transactional systems, so they’re perfect for testing Silk’s real-world performance under heavy mixed workloads.

[04:00–05:00] What Makes Silk Different

Tom:
You’ve run these kinds of tests across platforms for decades. What’s the key differentiator here?

Tanel:
Silk’s architecture is what sets it apart. It’s software-defined, built to leverage each VM’s own high-speed cloud network — not a shared, throttled storage layer.

It’s distributed and elastic, meaning you can scale performance simply by adding nodes. The system automatically balances I/O without downtime or configuration changes.

That’s why Silk consistently outperforms native cloud block storage — it’s using all the available bandwidth intelligently, with no architectural bottlenecks.

[05:00–06:00] Real-World Workloads Enabled

Tom:
What kind of workloads benefit most from this level of sustained performance?

Tanel:
Practically all of them — but especially:

Data warehouses, which rely on huge sequential reads and temp space writes.

OLTP systems, which generate millions of small random reads and writes every second.

At 1.3 million random reads per second, that’s roughly 10 GB/s of continuous I/O. You need serious infrastructure to sustain that, and Silk makes it not only possible but efficient.
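
As a quick editorial sanity check (not part of the session), the quoted IOPS and block size do work out to roughly that figure:

```python
# 1.3 million random reads per second at an 8 KiB block size is about 10.6 GB/s.
iops = 1_300_000
block_size_bytes = 8 * 1024                   # 8 KiB blocks
throughput_gb_per_s = iops * block_size_bytes / 1e9
print(f"{throughput_gb_per_s:.1f} GB/s")      # -> 10.6 GB/s
```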

[06:00–07:00] AI and Vector Search Implications

Tanel:
These results also have massive implications for AI and vector search.

Modern databases now integrate vector extensions — meaning they can run semantic similarity searches directly within SQL. But that also means far more data needs to be scanned per query.

So when your 2 TB CRM database suddenly becomes 6 TB because of embeddings, Silk ensures it still performs — giving AI workloads the throughput and predictability they need.
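
As a rough editorial illustration of that kind of growth, a 1536-dimension float4 embedding adds about 6 KB per row; the row count and dimension below are assumptions chosen to mirror the example, not figures from the session:

```python
# Illustrative only: how per-row embeddings inflate a table.
rows = 650_000_000            # assumed number of embedded rows
dims = 1536                   # assumed embedding dimension
bytes_per_component = 4       # float4 values, ignoring small per-row overhead
extra_tb = rows * dims * bytes_per_component / 1e12
print(f"Extra storage from embeddings: ~{extra_tb:.1f} TB")   # ~4 TB on top of the base data
```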

[07:00–09:00] From Raw Throughput to AI Use Cases

Tom:
And you’re working on a follow-up test that focuses specifically on AI workloads, right?

Tanel:
Exactly. My first post was about raw I/O benchmarks. The next will focus on AI-driven use cases — how Silk supports vector extensions and semantic search workloads inside traditional relational systems.

You don’t need a new “AI database.”
You can keep your current Postgres or Oracle setup, simply add vector indexes, and let Silk handle the high-throughput layer underneath.
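
As an illustrative sketch of that “add vector indexes to what you already run” idea, here is what it could look like with the pgvector extension in Postgres. The table name, columns, embedding dimension, and connection details are assumptions, not anything from the session.

```python
# Hypothetical sketch: enable pgvector on an existing table and run a
# semantic similarity query directly in SQL.
import psycopg2

conn = psycopg2.connect("dbname=crm user=app host=localhost")
cur = conn.cursor()

# One-time setup: extension, embedding column, and an HNSW index.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("ALTER TABLE documents ADD COLUMN IF NOT EXISTS embedding vector(1536)")
cur.execute("CREATE INDEX IF NOT EXISTS documents_embedding_idx "
            "ON documents USING hnsw (embedding vector_cosine_ops)")
conn.commit()

# Nearest neighbours by cosine distance; the query embedding comes from your model.
query_embedding = [0.0] * 1536                                   # placeholder values
vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
cur.execute(
    "SELECT id, title FROM documents ORDER BY embedding <=> %s::vector LIMIT 10",
    (vec_literal,),
)
print(cur.fetchall())
```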

[09:00–10:00] Network Evolution and Scaling

Tom:
You’ve been benchmarking systems for a long time. How does this compare to what you’ve seen historically?

Tanel:
It’s incredible. When I first tested Silk, network cards topped out at 100 Gbps. Today, 400 Gbps NICs are common, and that’s how we hit 20 GB/s throughput.

Next-gen NICs are already standardized at 1.6 Tbps, which means future Silk deployments could easily reach 80 GB/s on a single VM.

Silk’s software-defined, distributed architecture means it’ll scale naturally as the hardware improves — no redesign required.
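
For reference, the 80 GB/s projection is simple proportional scaling from the measured result, assuming throughput grows in line with NIC line rate (an editorial sketch, not a vendor figure):

```python
# Assumes achieved throughput scales linearly with NIC line rate.
measured_gb_per_s = 20        # result discussed in this session on a 400 Gbps NIC
current_nic_gbps = 400
next_gen_nic_gbps = 1600      # 1.6 Tbps next-generation NICs

projected = measured_gb_per_s * next_gen_nic_gbps / current_nic_gbps
print(f"Projected single-VM throughput: ~{projected:.0f} GB/s")   # ~80 GB/s
```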

[10:00–11:00] Why It Matters

Tanel:
This isn’t just about breaking performance records — it’s about enabling predictable, future-proof performance for mission-critical databases.

Silk abstracts complexity but delivers raw power. It’s built to handle increasing I/O density and scaling cloud networks.

If your system reads and writes data — whether it’s Oracle, Postgres, AlloyDB, or SQL Server — Silk will make it faster, safer, and more stable.

[11:00–12:00] Final Thoughts and Recommendations

Tom:
For anyone watching who’s battling performance or cost challenges in the cloud, what’s your advice?

Tanel:
Start by testing Silk. You don’t need to refactor your database or rewrite applications — Silk works under the hood.

Using tools like Vdbench, you can validate improvements immediately. If you’re hitting I/O or latency ceilings, Silk gives you room to grow without replatforming.

Tom:
That’s fantastic advice. Thanks, Tanel, for joining and for running these tests.

You’ve shown that Silk isn’t just keeping up with cloud innovation — it’s leading it.

And thanks to everyone watching! If you’d like to learn more, see the full results, or run your own benchmark, visit silk.us.

See you next time in Inside the Lab.