The Art of Macro Benchmarking: Evaluating Cloud Native Services Efficiency

Benchmarking is hard, especially at the macro level, where multiple code components are integrated into one or more microservices. It is challenging to reproduce the production conditions that matter, isolate the test from external effects, and reliably automate all the steps. The result is unreliable benchmark numbers, long benchmark runs and expensive stress tests; in short, wasted engineering time and business money.

Fortunately, with good tooling and some education, we can turn macro benchmarks into remarkably effective instruments that catch performance regressions and show us where our workloads should be optimized further!

In this talk, Bartek, author of the “Efficient Go” book with O’Reilly and a Prometheus maintainer, will walk you through the basics of macro benchmarks and load tests. You will learn about open-source frameworks, observability tools and patterns you can use to benchmark your applications in the cloud, on Kubernetes and beyond. Gain significant efficiency improvements by doing macro benchmarking correctly and with ease!

21 minutes
Watch this session from the P99 CONF livestream, plus get instant access to all 50+ P99 CONF sessions and slide decks.

Bartłomiej Płotka, Senior Software Engineer at Google

Bartek Płotka is a Senior Software Engineer at Google: an SWE by heart with an SRE background, working on observability for OSS and Google Cloud users. Previously a Principal Software Engineer at Red Hat, he is the author of the “Efficient Go” book with O’Reilly. As a co-founder of the CNCF Thanos project and a core maintainer of various open-source projects, including Prometheus, he enjoys building OSS communities and maintainable, reliable distributed systems, ideally in Go. He is also a CNCF TAG Observability Technical Lead.

P99 CONF OCT. 23 + 24, 2024

Register for Your Free Ticket