Speaker Details

Julien Dubois
Microsoft

Julien Dubois manages the Java Developer Relations team at Microsoft.

He is known as the creator and lead developer of the JHipster project, and as a Java Champion. For the past 25 years, Julien has worked mainly with Java and Spring technologies, leading technical teams for many different customers across all industries. As he loves sharing his passion, Julien wrote a book on the Spring Framework, has spoken at more than 200 international conferences, and has created several popular open source projects.

LangChain4j was presented for the first time at Devoxx.be 2023, and it has since attracted a lot of interest from Java developers.

Come to this panel discussion, where members of the LangChain4j community discuss the present and future of the project.

You can submit your questions to the panelists at any time using this form: https://forms.gle/VDR5ghpY2sfrCbKs7

See you there!


Our goal is to provide you with tools to start experimenting with the RAG (retrieval-augmented generation) pattern right away, so you leave this talk with infrastructure and code ready to use for your next AI project.

Using the "Easy RAG" project from LangChain4j, we will:

- explain the ideas and concepts behind the RAG pattern

- configure a vector database and an LLM to create a realistic RAG infrastructure inside Docker

- code a simple RAG application in Java

Everything will run locally, and we'll use Phi-3, a small and efficient model, so you can experiment with the RAG pattern on your laptop.
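To give a flavor of the concepts behind the pattern, here is a minimal, self-contained sketch of the retrieval step at the heart of RAG: rank stored text chunks by cosine similarity to the query embedding and feed the best match to the LLM. The class name, the toy hand-made vectors, and the store layout are all illustrative assumptions; in the talk itself, LangChain4j and a real vector database handle this.

```java
import java.util.*;

// Sketch of the retrieval step of RAG: embed the query, rank stored
// chunks by cosine similarity, and prepend the top matches to the prompt.
// Embeddings here are tiny hand-made vectors; a real setup (e.g.
// LangChain4j + a vector database) produces them with an embedding model.
public class RagRetrievalSketch {

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the stored chunks ranked by similarity to the query vector.
    static List<String> retrieve(double[] query, Map<String, double[]> store, int topK) {
        return store.entrySet().stream()
                .sorted((e1, e2) -> Double.compare(
                        cosine(query, e2.getValue()), cosine(query, e1.getValue())))
                .limit(topK)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        Map<String, double[]> store = new LinkedHashMap<>();
        store.put("Phi-3 is a small language model.", new double[]{0.9, 0.1, 0.0});
        store.put("Docker packages applications in containers.", new double[]{0.1, 0.9, 0.0});
        store.put("Java is a programming language.", new double[]{0.2, 0.1, 0.9});

        double[] query = {0.8, 0.2, 0.1}; // pretend embedding of "what is Phi-3?"
        List<String> context = retrieve(query, store, 1);
        System.out.println(context.get(0)); // the chunk injected into the LLM prompt
    }
}
```

The vector database in the real infrastructure does exactly this ranking, only at scale and with approximate nearest-neighbor indexes instead of a linear scan.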


AI technologies, and particularly large language models (LLMs), have been popping up like mushrooms lately. But how can you use them in your applications?


In this workshop, we will build a chatbot to interact with GPT-4 and implement the Retrieval Augmented Generation (RAG) pattern. Using a vector database, the model will be able to answer questions asked in natural language and generate complete responses, sourced from your own documents. To do this, we will create a Quarkus service based on the open source LangChain4j and ChatBootAI frameworks to test our chatbot. Finally, we will deploy everything to the cloud.


After a short introduction to language models (how they work and their limitations) and to prompt engineering, you will:

- Create a knowledge base: local HuggingFace LLMs, embeddings, a vector database, and semantic search

- Use LangChain4j to implement the RAG (Retrieval Augmented Generation) pattern

- Create a Quarkus API to interact with the LLM: OpenAI / Azure OpenAI

- Use ChatBootAI to interact with the Quarkus API

- Improve performance thanks to prompt engineering

- Containerize the application

- Deploy the containerized application to the cloud

- Tweak your RAG integration

- Optimize for quality, cost or size


At the end of the workshop, you will have a clearer understanding of large language models and how they work, as well as ideas for using them in your applications. You will also know how to create a functional knowledge base and chatbot, and how to deploy them in the cloud.
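As a small taste of the knowledge-base step, here is a self-contained sketch of document splitting: long documents are cut into overlapping chunks before being embedded and stored in the vector database, so that a sentence cut at one boundary still appears whole in a neighboring chunk. The class name and the chunk sizes are illustrative assumptions; LangChain4j ships ready-made document splitters for the actual workshop setup.

```java
import java.util.*;

// Sketch of the document-splitting step used when building a knowledge base:
// cut text into chunks of at most maxChars characters, overlapping by
// `overlap` characters so context is not lost at chunk boundaries.
public class ChunkingSketch {

    static List<String> split(String text, int maxChars, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = maxChars - overlap; // how far each new chunk advances
        for (int start = 0; start < text.length(); start += step) {
            int end = Math.min(start + maxChars, text.length());
            chunks.add(text.substring(start, end));
            if (end == text.length()) break; // last chunk reached
        }
        return chunks;
    }

    public static void main(String[] args) {
        String doc = "Retrieval Augmented Generation grounds LLM answers in your own documents.";
        for (String chunk : split(doc, 30, 10)) {
            System.out.println("[" + chunk + "]");
        }
    }
}
```

Each chunk is then embedded and stored; smaller chunks give more precise retrieval, while larger ones preserve more context, which is one of the quality/cost trade-offs explored at the end of the workshop.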
