Georgios works for Red Hat as a Principal Software Engineer and is currently the most active contributor to Quarkus, where he works across many areas, including LangChain4j, RESTEasy Reactive, Spring compatibility, Kubernetes support, testing, and Kotlin.
He is also an enthusiastic promoter of Quarkus who never misses a chance to spread the Quarkus love!
One of the sometimes overlooked strengths of GitHub is how much power it gives us developers to create and augment our development workflows.
When combined with the capabilities that Large Language Models now offer, those workflows can become truly intelligent and provide a much-needed productivity boost!
A non-exhaustive list of use cases that can now be implemented:
- Extract the summary from long GitHub conversations
- Automate label handling
- Provide automatic help for common use cases
- Automate reviewer assignment
- Triage issues
- Convert screenshots into code
- Suggest better commit messages
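To make the label-handling use case concrete, here is a minimal sketch in plain Java of what a bot might do even before an LLM gets involved: map keywords in an issue title to labels. The keyword rules and label names are invented for illustration; a real bot would react to GitHub events through the Quarkus GitHub App framework rather than call a static helper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Map;

// Hypothetical rule-based label suggester: scans an issue title for
// keywords and proposes labels. An LLM-backed bot would replace these
// hard-coded rules with a prompt, but the surrounding flow is the same.
public class LabelSuggester {

    private static final Map<String, String> KEYWORD_TO_LABEL = Map.of(
            "kotlin", "area/kotlin",
            "native", "area/native-image",
            "kubernetes", "area/kubernetes",
            "panic", "kind/bug",
            "exception", "kind/bug");

    public static List<String> suggestLabels(String issueTitle) {
        String lower = issueTitle.toLowerCase(Locale.ROOT);
        List<String> labels = new ArrayList<>();
        for (Map.Entry<String, String> rule : KEYWORD_TO_LABEL.entrySet()) {
            // Add each matching label once, even if several keywords map to it.
            if (lower.contains(rule.getKey()) && !labels.contains(rule.getValue())) {
                labels.add(rule.getValue());
            }
        }
        return labels;
    }
}
```

The same shape extends naturally to the other use cases: an event comes in, some logic (rules or a model) produces a suggestion, and the bot applies it via the GitHub API.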
If you would like to know how to implement these, this session is for you. It will explain, and demonstrate with live coding, how to develop AI-infused GitHub bots using the unrivaled combination of the Quarkus GitHub App Framework and Quarkus LangChain4j.
It was exactly a year ago, at Devoxx.be 2023, that the LangChain4j project was presented to the world for the very first time. Members of the Quarkus team at Devoxx saw the presentation and were blown away by the potential of the new library. Right after the talk they reached out to the LangChain4j folks, and that very same afternoon, in the busy corridors of Kinepolis, they started coding the first Quarkus/LangChain4j extension. The next morning they showed the alpha version of that work to the LangChain4j team, and the rest is history.
Quarkus provides an ideal runtime for coding intelligent applications in Java. It builds on top of LangChain4j, offering seamless integration with CDI, build-time optimizations, performance and production enhancements, effortless creation of native images using GraalVM, and an unmatched, joyful developer experience! Quarkus DevMode makes iterative development and prompt engineering a breeze, Quarkus DevServices can spin up containers for you behind the scenes, and the Quarkus DevUI offers insights into your intelligent applications.
Come to this talk to find out how Quarkus embraces the AI/LLM universe through LangChain4j, and get ideas for crafting your own intelligent applications, starting from simple examples accessing local or remote LLMs, to implementing AI Services and advanced RAG scenarios!
Generative AI has taken the world by storm over the last year, and it seems like every executive leader out there is telling us “regular” Java application developers to “add AI” to our applications. Does that mean we need to drop everything we’ve built and become data scientists now?
Fortunately, we can infuse AI models built by actual AI experts into our applications in a fairly straightforward way, thanks to some new projects out there. We promise it’s not as complicated as you might think! Thanks to the ease of use and superb developer experience of Quarkus and the nice AI integration capabilities that the LangChain4j libraries offer, it becomes trivial to start working with AI and make your stakeholders happy 🙂
In this lab, you’ll explore a variety of AI capabilities. We’ll start from the Quarkus DevUI, where you can try out AI models even before writing any code. Then we’ll get our hands dirty with writing some code and exploring LangChain4j features such as prompting, chaining, and preserving state; agents and function calling; enriching your AI model’s knowledge with your own documents using retrieval-augmented generation (RAG); and discovering ways to run (and train) models locally using tools like Ollama and/or Podman AI Lab. In addition, you’ll add observability and fault tolerance to the AI integration and compile the app to a native binary. You might even try new features, such as generating images or audio!
Come to this session to learn how to build AI-infused applications in Java from the actual Quarkus experts and engineers working on the Quarkus LangChain4j extensions. This is also an opportunity to provide feedback to the maintainers of these projects and contribute back to the community.
Come to our BOF to discuss the present and future of the project with members of the LangChain4j community!
Join us for a guided tour through the possibilities of the LangChain4j framework! Chat with virtually any LLM provider (OpenAI, Gemini, HuggingFace, Azure, AWS, ...)? Generate AI images straight from your Java application with DALL-E and Gemini? Have LLMs return POJOs? Interact with local models on your machine? LangChain4j makes it a piece of cake! We will explain the fundamental building blocks of LLM-powered applications, show you how to chain them together into AI Services, and how to interact with your knowledge base using advanced RAG.
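To give a flavor of what the RAG building block does under the hood, here is a deliberately naive, self-contained Java sketch of the retrieval step: score documents against a question and keep the best matches for the prompt. Real LangChain4j RAG uses embeddings and an embedding store; plain word overlap stands in here so the example needs no model or external service, and the class and method names are invented for illustration.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;

// Toy retriever: ranks documents by how many words they share with the
// question, then stitches the top matches into a context-grounded prompt.
public class NaiveRetriever {

    static Set<String> words(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase(Locale.ROOT).split("\\W+")));
    }

    static long overlap(String question, String document) {
        Set<String> q = words(question);
        return words(document).stream().filter(q::contains).count();
    }

    // Keep the topK highest-scoring documents for the question.
    public static List<String> retrieve(String question, List<String> documents, int topK) {
        return documents.stream()
                .sorted(Comparator.comparingLong((String d) -> overlap(question, d)).reversed())
                .limit(topK)
                .toList();
    }

    // Augment the question with the retrieved context before sending it to an LLM.
    public static String buildPrompt(String question, List<String> documents) {
        return "Answer using only this context:\n"
                + String.join("\n", retrieve(question, documents, 2))
                + "\nQuestion: " + question;
    }
}
```

Swapping the word-overlap score for embedding similarity, and the in-memory list for an embedding store, is essentially what the framework's retrieval augmentors do for you.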
Then, we take a deeper dive into the Quarkus LangChain4j integration. We'll show how little code is needed when using Quarkus, how live reload makes experimenting with prompts a breeze, and finally we'll look at its GraalVM native image support, aiming to get your AI-powered app deployment-ready in no time. By the end of this session, you will have all the technical knowledge to get your hands dirty, along with plenty of inspiration for designing the apps of the future.