P99 CONF 2024 is now less than 30 days away. We hope you’ll be joining the community of over 20K performance-obsessed engineers for two deeply technical and highly engaging days. This year’s agenda spans Rust, Zig, Go, C++, compute/infrastructure, Linux, Kubernetes, and databases. Speakers include Michael Stonebraker, Andy Pavlo, Bryan Cantrill, Liz Rice, Gunnar Morling, Tanel Poder, and Avi Kivity. We’ll also have lively virtual lounges, book bundle giveaways, and more. As always, it’s free, virtual, and open source oriented.
Whether you can’t wait for P99 CONF 2024 or you’re still debating whether to attend, now’s a great time to binge-watch the 150+ tech talks available in our on-demand library. If you’re not sure where to start, here’s a rundown of the most popular sessions to date.
Rust, Wright’s Law, and the Future of Low-Latency Systems
Bryan Cantrill, CTO of Oxide Computer Company
This decade will see two important changes with profound ramifications for low-latency systems: the rise of Rust-based systems, and the ceding of Moore’s Law to Wright’s Law. In this talk, Bryan discusses these two trends, and (especially) their confluence – and explains why he believes that the future of low-latency systems will include Rust programs in some surprising places.
Bun, Tokio, Turso Creators on Rust vs Zig
Glauber Costa (Turso co-founder), Jarred Sumner (creator of Bun.js), and Carl Lerche (creator of Tokio, major Rust contributor, and Principal Engineer at AWS)
See what transpired when Glauber Costa, Jarred Sumner, and Carl Lerche got together for an impromptu “coding for speed” panel at P99 CONF. The lively discussion covered topics such as:
- Glauber’s Rust vs Zig deliberations for the code added to Turso’s fork of SQLite
- Carl’s take on why Rust is so well-suited for high-performance systems and applications
- Jarred’s experiences implementing Bun in Zig, focusing on performance and productivity
- Why Jarred didn’t end up writing Bun in Rust
- The time, place, and tradeoffs of dropping down into unsafe Rust (a short sketch follows this list)
- The massive Rust learning curve (beyond Jarred’s atypical 2 weeks!)
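On the unsafe Rust point above, here’s a minimal, hypothetical sketch (not code from the panel) of what “dropping down” typically looks like: skipping a bounds check in a hot loop can shave cycles, but the compiler no longer proves the access is valid; you do.

```rust
// Safe version: every access is bounds-checked by the indexing operation.
fn sum_every_other(values: &[u64]) -> u64 {
    let mut total = 0;
    let mut i = 0;
    while i < values.len() {
        total += values[i]; // would panic if `i` were ever out of range
        i += 2;
    }
    total
}

// Same loop, but dropping down into unsafe to skip the bounds check.
fn sum_every_other_unchecked(values: &[u64]) -> u64 {
    let mut total = 0;
    let mut i = 0;
    while i < values.len() {
        // SAFETY: the loop condition guarantees `i < values.len()`.
        total += unsafe { *values.get_unchecked(i) };
        i += 2;
    }
    total
}

fn main() {
    let data: Vec<u64> = (0..1_000).collect();
    assert_eq!(sum_every_other(&data), sum_every_other_unchecked(&data));
    println!("both versions agree: {}", sum_every_other(&data));
}
```

Whether that kind of tradeoff is ever worth the extra audit burden is exactly the “time, place, and tradeoffs” question the panel kicks around; profiling, not intuition, should make the call.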
Whoops! I Rewrote It in Rust
Brian Martin, Software Engineer at Twitter
Pelikan is Twitter’s open source, modular framework for in-memory caching; it lets Twitter replace its Memcached and Redis forks with a single codebase and achieve better performance. Twitter operates hundreds of cache clusters storing hundreds of terabytes of small objects in memory. In-memory caching is critical and demands performance, reliability, and efficiency. In this talk, Brian shares his adventures in working on Pelikan and rewriting it from C to Rust.
Extreme HTTP Performance Tuning: 1.2M API req/s on a 4 vCPU EC2 Instance
Marc Richards, Performance Engineer at AWS
In this talk Marc walks you through the performance tuning steps that he took to serve 1.2M JSON requests per second from a 4 vCPU c5 instance, using a simple API server written in C. At the start of the journey, the server was capable of a very respectable 224k req/s with the default configuration. Along the way, he made extensive use of tools like FlameGraph and bpftrace to measure, analyze, and optimize the entire stack, from the application framework, to the network driver, all the way down to the kernel. Marc began this wild adventure without any prior low-level performance optimization experience, but once he started going down the performance tuning rabbit hole, there was no turning back. Fueled by his curiosity, willingness to learn, and relentless persistence, he was able to boost performance by over 400% and reduce p99 latency by almost 80%.
Misery Metrics & Consequences
Gil Tene, CTO of Azul Systems
In a lab, we don’t measure misery. When we measure performance, say in benchmarking, it’s tempting to focus on what’s going well: the lowest latencies, the highest throughputs. But to truly understand how things are going to work in production, we have to measure our systems operating at their worst. We also have to narrow down what operationally works and what is a waste of time. Is P99 even a useful metric? Or do we need to track event anomalies per 100,000? Learn from Gil Tene, CTO of Azul Systems, how to stop worrying and learn to love misery.
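To make Gil’s question concrete, here’s a small, hypothetical Rust sketch (not from the talk). It computes a nearest-rank p99 over a batch of latency samples and, as an alternative, counts how many requests per 100,000 exceed a latency budget. With 50 badly misbehaving requests buried among 100,000 fast ones, the p99 still looks great while the misery count does not.

```rust
// Nearest-rank p99 over a non-empty batch of latency samples (microseconds).
fn p99_micros(mut samples: Vec<u64>) -> u64 {
    samples.sort_unstable();
    let rank = ((samples.len() as f64) * 0.99).ceil() as usize; // 1-based rank
    samples[rank.saturating_sub(1).min(samples.len() - 1)]
}

// The "misery" view: how many requests per 100,000 blew the latency budget?
fn anomalies_per_100k(samples: &[u64], budget_micros: u64) -> u64 {
    let over = samples.iter().filter(|&&s| s > budget_micros).count() as u64;
    over * 100_000 / samples.len() as u64
}

fn main() {
    // Hypothetical data: 100,000 requests between 500 and 699 microseconds...
    let mut samples: Vec<u64> = (0..100_000).map(|i| 500 + (i % 200)).collect();
    // ...plus 50 truly miserable 250 ms outliers.
    samples.extend(std::iter::repeat(250_000).take(50));

    println!("p99 = {} us", p99_micros(samples.clone()));
    println!(
        "requests over a 10 ms budget, per 100k = {}",
        anomalies_per_100k(&samples, 10_000)
    );
}
```

The p99 here comes out under a millisecond even though 50 requests took a quarter of a second; which of those two numbers belongs on your dashboard is the kind of question the talk digs into.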
Using eBPF for High-Performance Networking in Cilium
Liz Rice, Chief Open Source Officer at Isovalent (now part of Cisco)
The Cilium project is a popular networking solution for Kubernetes, based on eBPF. This talk uses eBPF code and demos to explore the basics of how Cilium makes network connections and manipulates packets so that they can avoid traversing the kernel’s built-in networking stack. You’ll see how eBPF enables high-performance networking as well as deep network observability and security.
Why User-Mode Threads Are Good for Performance
Ron Pressler, Project Loom Technical Lead, Java Platform Group, Oracle
Java recently added virtual threads, an implementation of user-mode threads, to help write high-throughput servers. In this talk we’ll see why we decided to do it, understand exactly how user-mode threads improve server performance, and learn why context switching has little to do with it.
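As a hedged aside on the “how” (a common way to frame the argument, not necessarily the framing Ron uses): by Little’s law, a server’s throughput is roughly the number of requests in flight divided by per-request latency. When latency is dominated by waiting on downstream calls, the remaining lever is concurrency, and cheap user-mode threads are what make tens of thousands of in-flight requests affordable. The small Rust sketch below (hypothetical numbers) just runs that arithmetic.

```rust
fn main() {
    // Little's law: throughput (req/s) ~= requests in flight / per-request latency (s).
    // Hypothetical figure: each request spends ~50 ms, mostly waiting on downstream calls.
    let latency_s = 0.050;

    for in_flight in [200u32, 2_000, 20_000] {
        let throughput = f64::from(in_flight) / latency_s;
        println!("{in_flight:>6} requests in flight -> ~{throughput:>9.0} req/s");
    }
}
```

The per-request latency never changes; only the number of requests the server can afford to keep in flight does, which is why cheap context switching on its own is not the point.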
Corporate Open Source Anti-Patterns: A Decade Later
Bryan Cantrill, CTO of Oxide Computer Company
A little over a decade ago, I gave a talk on corporate open source anti-patterns, vowing that I would return in ten years to give an update. Much has changed in the last decade: open source is pervasive in infrastructure software, with many companies (like our hosts!) having significant open source components from their inception. But just as open source has changed, the corporate anti-patterns around open source have changed too: where the challenges of the previous decade were all around how to open source existing products (and how to engage with existing communities), the challenges now seem to revolve around how to thrive as a business without betraying the community that made it one in the first place. Open source remains one of humanity’s most important collective achievements and one that all companies should seek to engage with at some level; in this talk, we will describe the changes that open source has seen in the last decade, and provide corporations with updated guidance on what not to do!
Crimson: Ceph for the Age of NVMe and Persistent Memory
Orit Wasserman, Architect at Red Hat
Ceph is a mature open source software-defined storage solution that was created over a decade ago. During that time, new and faster storage technologies have emerged, including NVMe and persistent memory. The Crimson project’s aim is to create a better Ceph OSD that is better suited to those faster devices. The Crimson OSD is built on the Seastar C++ framework and can leverage these devices by minimizing latency, CPU overhead, and cross-core communication. This talk discusses the project’s design, current status, and future plans.