Hosting eBPF and container security expert Liz Rice at P99 CONF 2022 was a treat from start to finish – from when her preview video went viral to when she spent an extra half hour chatting with us from her music/work studio amongst some of the tech industry’s most clever posters. And we’re thrilled to share that she’s agreed to return for P99 CONF 2023!
In her P99 CONF 2022 keynote, Liz walked attendees through how Cilium (a CNCF project) improves throughput, frees up CPU, and makes Kubernetes networking more efficient by using eBPF to bypass parts of the network stack.
Using XDP (eXpress Data Path), Cilium can run eBPF programs at the earliest point in the receive path, in the network driver or even offloaded to the network interface card itself, letting you act on a packet the moment it arrives. For example, as Rice demonstrates, you could use eBPF as a very fast and efficient way to identify and discard “packets of death.” Notably, such a mitigation can be loaded dynamically, without installing a kernel patch or rebooting machines. And that’s just one example of how eBPF can dynamically change a system’s networking behavior.
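To make that concrete, here is a minimal sketch of what such an XDP drop filter can look like, assuming a clang/libbpf toolchain. The “signature” of the bad packet (a hypothetical UDP destination port) and the program name are placeholders for illustration, not Cilium’s actual mitigation code:

```c
/* A minimal sketch of the XDP_DROP pattern. The "packet of death" here is
 * identified by a hypothetical UDP destination port; a real mitigation would
 * match whatever signature the vulnerability requires. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define BAD_PORT 9999   /* hypothetical signature of the bad packet */

SEC("xdp")
int drop_packet_of_death(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Every access must be bounds-checked or the verifier rejects the program */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end || iph->protocol != IPPROTO_UDP)
        return XDP_PASS;

    /* Assumes no IP options, to keep the sketch short */
    struct udphdr *udph = (void *)(iph + 1);
    if ((void *)(udph + 1) > data_end)
        return XDP_PASS;

    /* Discard the malicious packet before the kernel stack ever sees it */
    if (udph->dest == bpf_htons(BAD_PORT))
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

Compiled with clang’s BPF target, a program like this can be attached to (and later detached from) a live interface with tools such as ip or bpftool, which is exactly why no kernel patch or reboot is needed.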
eBPF can also be used to manipulate packets, for example rewriting the source and destination addresses for load balancing. As a packet arrives, an eBPF XDP program can decide where to send it, whether to a process on that host or to a different machine entirely, without the packet ever being processed by the kernel’s networking stack. This enables impressive performance gains. (Exhibit A: read how Seznam.cz achieved more than twice the throughput and saved an “unbelievable amount of CPU usage” by running an XDP-based load balancer instead of an IPVS-based one.)
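As a rough illustration of that idea (not Seznam.cz’s or Cilium’s implementation), the sketch below picks a backend for each incoming packet, rewrites the destination MAC address, and bounces the packet straight back out of the same NIC with XDP_TX, all before the kernel’s networking stack sees it. The backend MAC table and random selection are hypothetical stand-ins for real connection tracking:

```c
/* A rough XDP load-balancing sketch, under heavy simplifying assumptions:
 * backends sit on the same L2 segment and accept the original destination IP
 * (direct-server-return style), so only the destination MAC is rewritten.
 * A real balancer keeps per-connection state in BPF maps. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical MAC addresses of two backend servers */
static const unsigned char backend_macs[2][ETH_ALEN] = {
    { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
    { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 },
};

SEC("xdp")
int xdp_lb(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;

    /* Pick a backend; real code would hash the flow for connection affinity */
    __u32 idx = bpf_get_prandom_u32() % 2;
    __builtin_memcpy(eth->h_dest, backend_macs[idx], ETH_ALEN);

    /* Send the packet back out of the same NIC towards the chosen backend,
     * without it ever entering the kernel's networking stack */
    return XDP_TX;
}

char _license[] SEC("license") = "GPL";
```

Production XDP load balancers typically encapsulate packets or rewrite IP addresses (with checksum updates) instead, but the control flow is the same: inspect, decide, rewrite, XDP_TX.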
Looking beyond XDP, eBPF programs can be attached at a variety of points in the network stack, which is especially helpful when working with Kubernetes’ complex networking path. As Rice’s demos, flamegraphs and benchmarks show, this yields yet more opportunities for throughput and CPU gains. Watch the video and see the performance impact for yourself.
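For a flavor of what attaching further up the stack looks like, here is a small sketch of an eBPF program for the TC (traffic control) hook rather than XDP: it simply counts bytes passing the hook into a per-CPU map and lets every packet continue on its way. The map and program names are made up for illustration and this is not Cilium code:

```c
/* Minimal TC-hook observability sketch: count bytes, touch nothing else. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

/* One per-CPU counter slot; per-CPU avoids cross-core contention */
struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} byte_count SEC(".maps");

SEC("tc")
int count_bytes(struct __sk_buff *skb)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&byte_count, &key);
    if (val)
        *val += skb->len;

    return TC_ACT_OK;   /* don't interfere with the packet */
}

char _license[] SEC("license") = "GPL";
```

Once compiled with clang’s BPF target, a program like this can be attached with standard tooling such as tc or bpftool, and userspace can read the counters back out of the map while traffic flows.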
Following her keynote, Liz joined us in the Speaker’s Lounge to continue the lively conversation around all things eBPF, as well as the emergence and evolution of Kubernetes.
A Tour of the Talk
0:03: About eBPF and Cilium
2:21: Shifting from containers into Kubernetes
3:57: How consistently is the eBPF ISA implemented in practice?
6:28: Ways that teams are using eBPF for real-world projects
9:05: How eBPF enables a whole new set of tools
10:17: Security risks of eBPF
11:37: Using eBPF within a managed Kubernetes service like AKS
12:52: Awesome open source contributors to the Cilium project
14:18: Herding cats at the CNCF
15:44: The role of “Chief Open Source Officer”
17:15: Can eBPF manipulate USB packets?
19:09: Early technical indicators that Kubernetes would win the orchestration wars
20:43: Overcoming the performance penalty of containers
21:49: How Liz’s focus on security impacted her work at Cilium
Teasers
Here’s a taste of the many memorable moments:
At the first KubeCon I attended, I thought, “Oh, yeah, Kubernetes is going to win [the orchestration wars]” – because it wasn’t just one organization pushing it. There were so many people from so many different companies who were really engaged with Kubernetes, believed in it, and were building it. That’s what convinced me that I should be placing my bets on Kubernetes.
eBPF programming rapidly becomes kernel programming. At some point, particularly if we’re hooking deeper into the kernel (and you can hook in anywhere), you start manipulating kernel data structures, and you have to know what the impact of that is going to be. With that in mind, there’s a huge amount of excitement around eBPF. That being said, I don’t think the majority of users are actually going to write a line of eBPF code. They’re going to use projects and products developed by other organizations.
When we first saw containers, people were just using them for the portability benefits: I’m going to spin up a VM for every container because I’m currently spinning up a VM for every workload anyway. So they were using containers as a sort of distribution mechanism, rather than a performance mechanism. But then people realized that these containers actually start up in the blink of an eye, and that’s something we should be taking advantage of.