I've been working in the computer field for many years. Over the last 15 years I've cultivated a strong passion for Java development and the under-the-hood details of OpenJDK, more recently joined by C and (x86) assembly.
A big fan of the DDD (Domain-Driven Design) world, I've developed several high-performance Event-Sourcing solutions in the medical and IoT fields.
I'm an active member of various online communities on performance (e.g. Mechanical Sympathy: https://groups.google.com/forum/#!forum/mechanical-sympathy), Principal (Software) Performance Engineer and Performance Lead at Red Hat on Quarkus, and a Red Hat Top Inventor (2019).
I've collaborated on various projects related to high-performance computing, both as a committer and as a contributor, e.g. Quarkus, Vert.x, and Netty (committer), JCTools (author), Apache ActiveMQ Artemis (PMC member, messaging broker), HdrHistogram, JGroups-raft, ...
How many times have you implemented a clever performance improvement, and perhaps even shipped it to production, because it seemed the right thing™ to do, without ever measuring the actual consequences of your change? And even when you do measure, are you using the right tools and interpreting the results correctly?

During this deep-dive session we will use examples taken from real-world situations to demonstrate how to develop meaningful benchmarks, how to avoid the most common (and often subtle) pitfalls, and how to correctly interpret the results and act on them. In particular, we will illustrate how to use JMH for these purposes, explaining why it is the only reliable tool for benchmarking Java applications and showing what can go horribly wrong if you decide to measure the actual performance of a Java program without it. At the end of this session you will be able to create your own JMH-based benchmarks and, more importantly, to use their results effectively to improve the overall performance of your software.
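As a small taste of the session's subject, here is a minimal JMH benchmark sketch; the class name, fields, and the string-concatenation workload are illustrative assumptions, not examples taken from the talk:

    package org.sample;

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;
    import org.openjdk.jmh.infra.Blackhole;

    @State(Scope.Thread)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5, time = 1)       // let the JIT compiler reach a steady state first
    @Measurement(iterations = 5, time = 1)  // measured iterations, 1 second each
    @Fork(2)                                // fresh JVMs to reduce run-to-run bias
    public class StringConcatBenchmark {

        // non-final fields: final constants could be folded away by the JIT,
        // one of the subtle pitfalls the session warns about
        private String a = "hello";
        private String b = "world";

        @Benchmark
        public String concatReturn() {
            // returning the result hands it to JMH, which consumes it
            // and prevents dead-code elimination of the whole computation
            return a + b;
        }

        @Benchmark
        public void concatConsume(Blackhole bh) {
            // explicitly consuming via a Blackhole is equivalent
            bh.consume(a + b);
        }
    }

With the standard JMH Maven archetype, a benchmark like this is packaged into a self-contained benchmarks.jar and executed with java -jar target/benchmarks.jar, so it runs in a controlled JVM rather than inside your build tool.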