Speaker Details

Clement Escoffier
Red Hat

Clement Escoffier (@clementplop) is a Distinguished Engineer at Red Hat and a Java Champion. Before joining Red Hat, Clement had several professional lives, from academic positions to management. He has contributed to many projects and products, touching a wide range of domains and technologies such as OSGi, mobile application development, continuous delivery, and DevOps. He has always been interested in software engineering, distributed systems, and event-driven architecture, and has recently focused on reactive systems, cloud-native applications, and Kubernetes. Clement has contributed to many open-source projects, including Apache Felix, Eclipse Vert.x, SmallRye, Mutiny, and Quarkus, and he authored the book “Reactive Systems in Java”.

Generative AI has taken the world by storm over the last year, and it seems like every executive leader out there is telling us “regular” Java application developers to “add AI” to our applications. Does that mean we need to drop everything we’ve built and become data scientists instead?

Fortunately, thanks to some new projects out there, we can infuse AI models built by actual AI experts into our applications in a fairly straightforward way. We promise it’s not as complicated as you might think! With the ease of use and superb developer experience of Quarkus, combined with the AI integration capabilities that the LangChain4j libraries offer, it becomes trivial to start working with AI and make your stakeholders happy 🙂

In this lab, you’ll explore a variety of AI capabilities. We’ll start from the Quarkus Dev UI, where you can try out AI models even before writing any code. Then we’ll get our hands dirty writing some code and exploring LangChain4j features such as prompting, chaining, and preserving state; agents and function calling; enriching your AI model’s knowledge with your own documents using retrieval-augmented generation (RAG); and running (and training) models locally using tools like Ollama and/or Podman AI Lab. In addition, you’ll add observability and fault tolerance to the AI integration and compile the app to a native binary. You might even try newer features, such as generating images or audio!
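
To give a flavour of the programming model used in the lab, here is a minimal sketch (not part of the official lab material) of an “AI service” built with the Quarkus LangChain4j extension: a plain Java interface whose prompts are declared through annotations, with the implementation generated for you. The service name, prompt text, and template variable below are illustrative assumptions.

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Hypothetical AI service: Quarkus generates the implementation and wires in the configured model.
@RegisterAiService
public interface AssistantService {

    @SystemMessage("You are a helpful assistant for a Java developer conference.")
    @UserMessage("Answer the following attendee question in at most three sentences: {question}")
    String answer(String question);
}

Chat memory, tools (function calling), and RAG, all covered in the lab, build on this same interface-based model.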

Come to this session to learn how to build AI-infused applications in Java from the actual Quarkus experts and engineers working on the Quarkus LangChain4j extensions. This is also an opportunity to provide feedback to the maintainers of these projects and contribute back to the community.

Join us for a guided tour through the possibilities of the LangChain4j framework! Chat with virtually any LLM provider (OpenAI, Gemini, HuggingFace, Azure, AWS, ...)? Generate AI images straight from your Java application with DALL-E and Gemini? Have LLMs return POJOs? Interact with local models on your machine? LangChain4j makes it a piece of cake! We will explain the fundamental building blocks of LLM-powered applications, show you how to chain them together into AI Services, and demonstrate how to interact with your knowledge base using advanced RAG.
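
As an illustration of the “LLMs returning POJOs” and AI Services ideas, here is a small, self-contained sketch using plain LangChain4j with an OpenAI chat model; the record, interface, prompt, and model name below are hypothetical examples chosen for this sketch.

import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

public class PojoExtractionExample {

    // Hypothetical POJO that the model is asked to populate from free text.
    record Person(String name, int age) {}

    // AI Service: LangChain4j creates a proxy that prompts the model and maps the reply to the return type.
    interface PersonExtractor {
        @UserMessage("Extract the person described in the following text: {{it}}")
        Person extract(String text);
    }

    public static void main(String[] args) {
        var model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini") // example model, pick any supported one
                .build();

        PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);
        System.out.println(extractor.extract("Alice is a 29-year-old engineer from Lyon."));
    }
}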

Then, we take a deeper dive into the Quarkus LangChain4j integration. We'll show how little code is needed when using Quarkus and how live reload makes experimenting with prompts a breeze, and finally we'll look at its support for compiling your application to a GraalVM native image, aiming to get your AI-powered app deployment-ready in no time. By the end of this session, you will have all the technical knowledge to get your hands dirty, along with plenty of inspiration for designing the apps of the future.
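
To illustrate the “how little code is needed” point, the sketch below (again an assumption, reusing the hypothetical AssistantService from the earlier sketch) shows that exposing an AI service over HTTP in Quarkus is just ordinary CDI injection plus a JAX-RS resource; provider configuration such as the API key typically lives in application.properties (for example, quarkus.langchain4j.openai.api-key).

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.QueryParam;

@Path("/assistant")
public class AssistantResource {

    // The AI service interface is a regular CDI bean and can be injected anywhere.
    @Inject
    AssistantService assistant;

    @GET
    public String ask(@QueryParam("q") String question) {
        return assistant.answer(question);
    }
}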
