Yoshi Yamaguchi – Continuous Pprof

Profiling is another important aspect of observability for optimizing your application, and leveraging it gives you more power to build performant applications. This talk introduces how you can improve performance with a profiler and how you can extend that practice for longer-term improvement.

More About What Was Presented

Profiles are a fundamental observability signal when it comes to performance tuning of an application, including its runtime behavior. google/pprof has been available as open source for a while, and it provides deep insights into profiles with ease of use. In this talk, I focus on the power of profilers in general for performance tuning and on how you can utilize them continuously. pprof supports Java, Go, and C++ directly, while other languages such as Python, Ruby, and PHP are supported through third-party bindings.

For the explanations in this talk, I use Go as the example. In this cloud-native era, Go is a first-class citizen among languages, not only for infrastructure and middleware but also for applications. In that context, being prepared for better observability of Go applications is a powerful skill set for the future capability of a system. Go provides tools out of the box that enable Go users to test and measure their applications, such as unit tests, benchmarks, tracing, and profiling. In particular, the standard Go tool set includes pprof for profiling.
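As a concrete illustration of those out-of-the-box tools (not code from the talk itself), here is a minimal sketch of a testing.B benchmark; the fib function and package name are placeholders chosen only to give the profiler something to measure.

    package fib_test

    import "testing"

    // fib is a deliberately naive implementation, used here only as a
    // workload for the benchmark and the profiler.
    func fib(n int) int {
        if n < 2 {
            return n
        }
        return fib(n-1) + fib(n-2)
    }

    // BenchmarkFib is picked up by `go test -bench`; with -cpuprofile it
    // also emits a profile that `go tool pprof` can analyze.
    func BenchmarkFib(b *testing.B) {
        for i := 0; i < b.N; i++ {
            fib(20)
        }
    }

Running `go test -bench=. -cpuprofile=cpu.out` produces a CPU profile alongside the benchmark results, and `go tool pprof cpu.out` then opens it for analysis (for example with the top or web commands).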

The outline of the talk is as follows:

  • The process of performance tuning in general
  • Go’s standard tools for performance tuning (testing.B, runtime/pprof)
  • How to run pprof and how to analyze its output
  • Continuous pprof (a minimal sketch of serving profiles from a running service follows this list)
  • Further possibilities (trace, distributed tracing, metrics, etc.)
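To ground the "how to run pprof" and "continuous pprof" items above, here is a minimal sketch of exposing runtime profiles from a running Go service via net/http/pprof; the port and the 30-second window are illustrative choices, not details from the talk. Continuous profiling setups build on the same idea: profiles in the pprof format are collected from live services on an ongoing basis rather than only during a one-off benchmark.

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
    )

    func main() {
        // Expose the pprof endpoints on a dedicated port so profiles can be
        // pulled from the running service at any time, for example:
        //   go tool pprof "http://localhost:6060/debug/pprof/profile?seconds=30"
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }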