Explainable AI Analysis Visualization: Applications from Brain-Computer Interfaces to ChatGPT
Conference (INTERMEDIATE level)
Room 3

The future of human-machine teaming will combine Artificial Intelligence with the human mind in ways that were once science fiction. The technology is here now, but researchers are challenged to understand the incredible complexity of both the human brain and Deep Learning models.
Both Brain-Computer Interfaces and extremely large deep learning models are rapidly accelerating fields. With biological neural networks, the physical technology to interface has become a reality. With artificial neural networks, such as ChatGPT and DALL-E, the software technology to employ them has become a reality. Unfortunately, the capacity to use these technologies incorrectly or even maliciously is also a reality. Governments, corporations, and research facilities are racing to understand both large models and the human mind so that each can be harnessed for good.
Because Deep Learning models were inspired by the human brain, the two problem spaces overlap significantly. A primary concern is interpreting and explaining the hyper-dimensional feature space that comes with both domains. A critical method for explaining this space is visualization.
This discussion will describe what makes a brain-computer interface possible and how it can be utilized. The open source JavaFX Explainable AI tool Trinity will demonstrate organizing and visualizing neural data with dimensions well beyond 3D. Similar analysis will be demonstrated with deep learning embeddings such as those from ChatGPT.
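To make the idea of visualizing "dimensions well beyond 3D" concrete, here is a minimal sketch, assuming Python with NumPy, of projecting high-dimensional embedding vectors down to three coordinates with PCA so they can be plotted. This is only an illustration of the general technique; it is not Trinity's implementation, and the 768-feature mock embeddings are an assumed stand-in for real model outputs.

```python
# Illustrative sketch (not Trinity's actual code): reduce high-dimensional
# embeddings to 3 dimensions with PCA so they can be rendered in a 3D view.
import numpy as np

def project_to_3d(embeddings: np.ndarray) -> np.ndarray:
    """Reduce (n_samples, n_features) embeddings to (n_samples, 3) via PCA."""
    # Center the data, then take the top 3 principal axes from the SVD.
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T

# Example: 100 mock "embedding" vectors with 768 features (a common
# transformer hidden size), reduced to 3 plot-ready coordinates.
rng = np.random.default_rng(0)
mock_embeddings = rng.normal(size=(100, 768))
coords3d = project_to_3d(mock_embeddings)
print(coords3d.shape)  # (100, 3)
```

The resulting three columns can be fed to any 3D scatter renderer; tools like Trinity layer interaction and labeling on top of this kind of projection.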
Sean Phillips
The Johns Hopkins University Applied Physics Laboratory
AI/ML Researcher and Software Engineer.
Explainable AI (XAI) and Cislunar defense expert.
Author of the open source XAI 3D visualization tool Trinity.
Author of the Deep Space Trajectory Explorer.
Java Champion.
JavaFX Specialist.