Speaker Details

Dunith Danushka
Redpanda Data

Dunith Danushka is a Developer Advocate, a data professional, and a content creator.

Dunith began his career during the early days of Big Data, spending a significant portion of it at WSO2, an enterprise integration and API management middleware company based in Sri Lanka. He contributed significantly to the WSO2 Data Analytics Server, writing code, designing its stream processing engine, Siddhi, and mentoring and leading engineers. After five years in product engineering, Dunith moved to the Solutions Architecture team, assisting customers and prospects with pre-sales and post-sales activities for big data, integration, and API management projects.

During his time at WSO2, Dunith mastered technical content production, writing for developers, architects, and business leaders. Building on those skills, he joined StarTree, the company behind Apache Pinot, as a Developer Advocate. At StarTree, Dunith created a substantial body of content on real-time analytics and stream processing, with a particular focus on the Apache Kafka ecosystem and Apache Pinot.

Currently, he resides in the UK, working for Redpanda as a Senior Developer Advocate.

Dunith devotes most of his time to helping people develop software systems to handle large volumes of data, teaching them how to use data for decision-making, and simplifying complex data concepts for everyone. His work has established him as a thought leader in the streaming data space.

Beyond his passion for frequent blogging and sketchnoting, he is a regular speaker at data conferences and, above all, an everyday learner.

Talk Abstract

When I was a child, I had an uncle who was blind. Every day after returning from school, I would describe the sunset to him.

What if, with the help of Generative AI and computer vision technologies, I could make this a reality for many people like him today? In this talk, I'll share a hobby project I've developed to narrate the world in real-time.

The "Be My Eyes" project leverages AI to extend the scope of experience for blind or visually impaired people, utilizing OpenAI's advanced object detection and computer vision models.

The magic unfolds through a simple yet powerful setup: a video camera continuously records the user's surroundings, and the real-time footage is fed into an AI model trained to analyze and interpret its visual content.
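The abstract doesn't include implementation details, but the capture-and-describe step could look something like the following minimal sketch. It assumes OpenCV for camera access and the OpenAI Python SDK's vision-capable chat endpoint; the model name (`gpt-4o`) and the prompt wording are illustrative assumptions, not details from the talk:

```python
import base64

import cv2  # OpenCV, assumed here for camera capture
from openai import OpenAI  # OpenAI Python SDK, assumed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_current_frame(camera_index: int = 0) -> str:
    """Grab one frame from the camera and ask a vision model to narrate it."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")

    # Encode the frame as JPEG and embed it as a base64 data URL
    _, jpeg = cv2.imencode(".jpg", frame)
    b64 = base64.b64encode(jpeg.tobytes()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the talk only mentions OpenAI's models
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this scene briefly for a blind listener."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

Calling this in a loop would approximate the "continuous narration" described above, trading off API latency and cost against how often the scene is re-described.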

In a remarkable feat of technological integration, "Be My Eyes" converts the AI model's textual narration of the scene into an audio description via a text-to-speech model.
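Again as a sketch rather than the project's actual code: the final step could use OpenAI's text-to-speech endpoint to turn the description into audio. The model (`tts-1`) and voice (`alloy`) names below are assumptions:

```python
from openai import OpenAI

client = OpenAI()


def narrate(text: str, out_path: str = "narration.mp3") -> str:
    """Convert the model's textual description of the scene into spoken audio."""
    speech = client.audio.speech.create(
        model="tts-1",  # assumed TTS model
        voice="alloy",  # assumed voice
        input=text,
    )
    # Write the returned audio bytes to disk for playback
    with open(out_path, "wb") as f:
        f.write(speech.content)
    return out_path


# Usage, chaining the two sketches: narrate(describe_current_frame())
```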
