Speaker Details

Alina Yurenko
Oracle Labs

Alina is a developer advocate for GraalVM at Oracle Labs, a research and development organization at Oracle. She loves both programming and natural languages, as well as compilers and open source.

Spring Boot in Native Image is all the rage now: faster startup, stable performance, and optimized resource usage. What's not to like? While it is super easy to integrate into a brand-new, greenfield application, there are a few things you need to know to migrate older, existing apps.

While the Spring Ahead-of-Time compilation process can infer a lot about an application, it is not always enough. There are general patterns to follow, and common pitfalls to avoid.

In this live demo, you will learn practical recipes for migrating an existing JVM-based Spring Boot application to Native Image, tips for improving your workflows, and how to measure the improvements Spring Native brings.
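
A recurring migration step is telling Spring AOT about behavior it cannot infer statically, such as reflection on a class that is only reached at runtime or resources loaded from the classpath. Here is a minimal sketch of a RuntimeHintsRegistrar; the com.example.LegacyDto class and the resource pattern are hypothetical placeholders:

```java
import org.springframework.aot.hint.MemberCategory;
import org.springframework.aot.hint.RuntimeHints;
import org.springframework.aot.hint.RuntimeHintsRegistrar;
import org.springframework.aot.hint.TypeReference;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportRuntimeHints;

// Registers hints for dynamic behavior that static analysis cannot see.
class LegacyHints implements RuntimeHintsRegistrar {

    @Override
    public void registerHints(RuntimeHints hints, ClassLoader classLoader) {
        // Hypothetical class that is only instantiated via reflection at runtime
        hints.reflection().registerType(TypeReference.of("com.example.LegacyDto"),
                MemberCategory.INVOKE_DECLARED_CONSTRUCTORS,
                MemberCategory.INVOKE_DECLARED_METHODS);
        // Hypothetical classpath resources read at runtime
        hints.resources().registerPattern("templates/*.xml");
    }
}

// Importing the registrar makes the hints part of the AOT build.
@Configuration
@ImportRuntimeHints(LegacyHints.class)
class NativeHintsConfiguration {
}
```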

GraalVM has been around for a while, and more and more developers and teams use it to run Java applications faster, more efficiently, and more securely. According to the latest State of Spring survey, 37% of Spring users either already run applications natively compiled with GraalVM in production or are currently evaluating it, and another 31% have plans to do so.

For those who haven't moved to GraalVM yet, though, the questions and concerns are usually similar: how hard is it to migrate? Can I keep using my libraries and tools? Can I monitor such native applications? What about using the latest Java features?

In this no-slides, all-code session, we'll cover all the practical aspects of building and running applications with GraalVM: tooling, monitoring, using popular libraries, and the common questions that we get from users.
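
As a small taste of that workflow, the sketch below (class name and commands are illustrative) shows the classic dynamic-class-loading case that Native Image's static analysis cannot see, together with the GraalVM tracing agent that records the required configuration:

```java
// Loading a class whose name is only known at run time is invisible to
// Native Image's closed-world analysis and needs reflection configuration.
public class ReflectiveGreeter {
    public static void main(String[] args) throws Exception {
        Class<?> clazz = Class.forName(args[0]); // e.g. "java.lang.StringBuilder"
        Object instance = clazz.getDeclaredConstructor().newInstance();
        System.out.println("Created via reflection: " + instance.getClass().getName());
    }
}

// Run once on the JVM with the tracing agent to record the configuration
// (the output directory must end up on the class path of the native build):
//   java -agentlib:native-image-agent=config-output-dir=META-INF/native-image \
//        ReflectiveGreeter java.lang.StringBuilder
// Then build and run the native executable:
//   native-image ReflectiveGreeter
//   ./reflectivegreeter java.lang.StringBuilder
```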

Large Language Models (LLMs) have become essential in many applications, but integrating them effectively into Java environments can still be challenging. This session will explore practical approaches to implementing local LLM inference using modern Java.

We'll demonstrate how to leverage the latest Java features to implement local inference for a variety of open-source LLMs, starting with Meta's Llama 2 and 3. Importantly, we'll show how the same approach can easily be extended to run other popular open-source models on standard CPUs without the need for specialized hardware.

Key topics we'll cover:

- Implementing efficient LLM inference engines in modern Java for local execution

- Utilizing Java 21+ features for optimized CPU-based performance

- Creating a flexible framework adaptable to multiple LLM architectures

- Maximizing standard CPU utilization for inference without GPU dependencies

- Integrating with LangChain4j for streamlined local inference execution

- Optimizing performance with the Java Vector API for accelerated matrix operations, and leveraging GraalVM to reduce latency and memory consumption (see the sketch below)
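
To make the last point concrete, here is a minimal sketch (not code from the session) of the kind of Vector API kernel that dominates CPU-based inference: the dot product at the core of matrix-vector multiplication. It needs Java 21+ with --add-modules jdk.incubator.vector; SPECIES_PREFERRED picks the widest vector shape the CPU supports, and a scalar loop handles the leftover tail:

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class DotProduct {

    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Dot product of two equal-length arrays using SIMD lanes.
    static float dot(float[] a, float[] b) {
        FloatVector acc = FloatVector.zero(SPECIES);
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            acc = va.fma(vb, acc); // fused multiply-add, one lane per element
        }
        float sum = acc.reduceLanes(VectorOperators.ADD);
        for (; i < a.length; i++) { // scalar tail for remaining elements
            sum += a[i] * b[i];
        }
        return sum;
    }
}
```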

Join us to learn how to implement and optimize local LLM inference for open-source models in your Java projects, and how to create fast and efficient AI applications using the latest Java technologies.
