Neural Networks Can Teach Themselves: Contrastive Learning and the Future of Computer Vision
Conference (INTERMEDIATE level)
Room 7

Neural networks have revolutionized computer vision. They can find photos of "you with your dog on a beach" or detect cars and pedestrians in a real-time video feed. But to do so, they have traditionally required massive datasets of hand-labeled images. With the development of contrastive learning methods, neural networks can instead be trained in a semi-supervised or even self-supervised fashion, removing the hand-labeling bottleneck.

This talk covers the theory of contrastive learning, specific approaches such as SimCLR, SimSiam, and Masked Autoencoders (MAE), and why developments in contrastive learning matter to applied ML developers. Following the theoretical discussion, the talk demonstrates the power of contrastive learning in practice using KerasCV. We will first show how to tackle the STL-10 benchmark and then perform ImageNet classification using only 10% of the labeled training data.
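To give a flavor of the core idea, the sketch below implements an NT-Xent-style contrastive loss (the objective used by SimCLR) in plain NumPy. This is an illustrative sketch, not KerasCV's implementation: embeddings of two augmented views of the same image are positive pairs, everything else in the batch is a negative, and the loss pulls positives together while pushing negatives apart.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss sketch.

    z1, z2: (batch, dim) embeddings of two augmented views of the same
    batch of images; row i of z1 and row i of z2 form a positive pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, dim)
    n = z1.shape[0]

    sim = z @ z.T / temperature                   # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity

    # Each sample's positive sits n rows away: i <-> i + n.
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Softmax cross-entropy over each row, with the positive as target.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), positives] - logsumexp)
    return loss.mean()
```

Since the label is just "which other row came from the same image," no hand annotation is needed, which is exactly the bottleneck the talk argues contrastive learning removes.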

Luke Wood

Luke Wood is a Machine Learning Specialist and a Software Engineering Generalist. Currently, Luke focuses on making KerasCV a powerful and expressive library for solving common computer vision tasks and on publishing high-quality research at top machine learning conferences. Luke works full time at Google on the Keras team and is pursuing his doctorate in machine learning at UC San Diego under Peter Gerstoft.