Speaker Details

José Paumard
José works as a Java Developer Advocate at Oracle. He holds a PhD in applied maths and computer science, was an assistant professor at the University Sorbonne Paris Nord for 25 years, and is a Java Champion Alumnus and JavaOne Rockstar. He is a member of the Paris Java User Group, has been a co-organizer of the Devoxx France conference, and is a disorganizer of JChateau, an unconference held in the châteaux of the Loire Valley. He works on dev.java, the Java documentation and community website, publishes the JEP Café, a monthly video cast on YouTube, and maintains a French-language YouTube channel with more than 80 hours of Java courses. He is also a Pluralsight author in the Java space.
Data modeling is one area of Java that has not changed much since generics and enumerations were introduced 20 years ago. Things are now changing on two fronts. On the one hand, Valhalla is bringing Primitive Classes and Value Classes, giving the JVM new ways to lay out your objects in memory for better performance. On the other hand, the Amber project puts data back at the center of your applications: Records bring better modeling, and along with Pattern Matching and Sealed Types, they also bring new ways to organize your business processes. In the future, Valhalla will unify object types and primitive types, with better handling of null values. Amber will continue to develop pattern matching, with deconstructors for regular classes and named patterns. These new approaches will give you new ways to better organize your applications and to create independent modules. The new data types Valhalla brings will allow you to build your object model differently, for better performance in your in-memory computations. Expect to see a lot of code in this deep dive session, with new patterns and performance considerations.
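As a small taste of the Amber features the session covers, here is a minimal sketch combining records, a sealed interface, and record patterns in a switch (the `Shape` hierarchy is an illustrative example, not taken from the session; it requires Java 21):

```java
// Sealed hierarchy: the compiler knows Circle and Square are the only subtypes.
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

public class ShapeDemo {
    static double area(Shape shape) {
        // Record patterns deconstruct the record directly in the switch.
        // No default branch needed: the sealed hierarchy makes it exhaustive.
        return switch (shape) {
            case Circle(double radius) -> Math.PI * radius * radius;
            case Square(double side) -> side * side;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1.0)));
        System.out.println(area(new Square(2.0)));
    }
}
```

Because the interface is sealed, adding a new `Shape` subtype turns every non-exhaustive switch into a compile-time error, which is one way these features help organize business processes.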
The first version of the Vector API was published as an incubator feature in JDK 16. We now have the 6th incubator version in JDK 21, which is stable enough to take a look at and see how we can use it. The Vector API can tremendously speed up your in-memory computations by using the SIMD (Single Instruction Multiple Data) capabilities of the cores of your CPU. The SIMD architecture is not a new concept, as it was already used in parallel computers in the 80s. This session explains the differences between parallel streams and parallel computing, and how SIMD computations work internally, using simple examples. It then shows the code patterns the Vector API provides, along with their performance, and how you can use them to improve your in-memory data processing computations. More advanced techniques are also presented, to go beyond the basic examples.
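To illustrate the kind of code pattern the session discusses, here is a sketch of the canonical Vector API loop: a vectorized main loop over full SIMD lanes, followed by a scalar tail for the leftover elements. The class and method names are illustrative; in JDK 21 this compiles and runs only with `--add-modules jdk.incubator.vector`:

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorAdd {
    // The preferred species matches the widest SIMD registers of the CPU.
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Element-wise c[i] = a[i] + b[i], processing SPECIES.length() floats per step.
    static void add(float[] a, float[] b, float[] c) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(c, i);
        }
        // Scalar tail loop for the elements that do not fill a full vector.
        for (; i < a.length; i++) {
            c[i] = a[i] + b[i];
        }
    }
}
```

Note the contrast with parallel streams: here a single thread processes several array elements per CPU instruction, rather than splitting the work across threads.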