Speaker

Wietse Venema
Google

Wietse Venema is an engineer at Google Cloud. He wrote the O’Reilly book on Cloud Run.

Running open large language models in production with Ollama and serverless GPUs
Conference (BEGINNER level)
Room 10

Many companies are interested in running open large language models such as Gemma and Llama because doing so gives them full control over deployment options, the timing of model upgrades, and the private data that goes into the model. Ollama is a popular open-source LLM inference server that works well on localhost and in a container. In this talk, you'll learn how to deploy an application that uses an open model with Ollama on Cloud Run with scale-to-zero, serverless GPUs.
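A deployment like the one the talk describes can be sketched with the gcloud CLI. The service name, image path, region, and resource sizes below are illustrative assumptions, not values from the session, and the GPU flags may require a recent gcloud release:

```shell
# Sketch: deploy an Ollama container to Cloud Run with an attached NVIDIA L4 GPU.
# Service name, image, region, and sizes are assumptions for illustration.
gcloud beta run deploy ollama-gemma \
  --image us-central1-docker.pkg.dev/PROJECT_ID/repo/ollama-gemma \
  --region us-central1 \
  --gpu 1 \
  --gpu-type nvidia-l4 \
  --cpu 8 \
  --memory 32Gi \
  --no-cpu-throttling \
  --min-instances 0   # scale to zero when the service is idle
```

With `--min-instances 0` (the default), Cloud Run stops billing for the GPU when no requests arrive, which is the "scale to zero" property the abstract highlights.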

