Latentris: Interactive 3D Visualization of Neural Network Representation Learning Over Training Time
Document Type
Presentation
Date of Original Version
3-27-2026
Abstract
Neural networks learn complex internal representations during training, but these representations are often difficult to inspect or interpret directly. Standard performance metrics such as accuracy or loss indicate how well a model performs, but they reveal little about how a network organizes data internally or how its representations evolve during training. This project introduces a framework for capturing and visualizing neural network embeddings throughout training to better understand representation learning dynamics. The system records intermediate layer activations at selected stages of training and reduces these high-dimensional embeddings to low-dimensional spaces using common dimensionality-reduction techniques. The resulting projections are displayed in an interactive 3D viewer that lets users explore embedding spaces across training epochs and across model layers. By examining how samples cluster and separate over time, the framework enables analysis of how neural networks gradually organize semantic information during training and how different model architectures influence this organization.
In addition to visual exploration, the project evaluates simple quantitative metrics derived from embedding geometry, such as distances between samples and their class centroids and separation between class clusters. These metrics complement the visualizations and help characterize how well a model's representation space organizes different categories. Together, visual and quantitative analyses can reveal patterns that may help diagnose model behavior, understand training dynamics, or compare architectural differences. The work relates to interpretability research that seeks to better understand the internal mechanisms of deep learning models.
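Metrics of the kind described above could be computed as in the following minimal NumPy sketch. The function names (`centroid_distances`, `class_separation`) and the particular separation ratio used here are illustrative assumptions, not the project's actual definitions:

```python
import numpy as np

def centroid_distances(embeddings, labels):
    """Euclidean distance from each sample to its class centroid."""
    dists = np.empty(len(embeddings))
    for c in np.unique(labels):
        mask = labels == c
        centroid = embeddings[mask].mean(axis=0)
        dists[mask] = np.linalg.norm(embeddings[mask] - centroid, axis=1)
    return dists

def class_separation(embeddings, labels):
    """One possible separation score: the smallest inter-centroid
    distance divided by the mean intra-class distance. Larger values
    indicate better-separated class clusters."""
    classes = np.unique(labels)
    centroids = np.stack(
        [embeddings[labels == c].mean(axis=0) for c in classes]
    )
    # Pairwise centroid distances; keep the upper triangle only.
    diffs = centroids[:, None, :] - centroids[None, :, :]
    pairwise = np.linalg.norm(diffs, axis=-1)
    inter = pairwise[np.triu_indices(len(classes), k=1)].min()
    intra = centroid_distances(embeddings, labels).mean()
    return inter / intra
```

Applied to embeddings captured at successive epochs, such a score would trace how class clusters tighten and separate over training.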
While visualization of learned representations is a common exploratory approach in machine learning research, many existing methods focus only on final embeddings or static projections. This project instead emphasizes tracking representation changes throughout the training process, enabling dynamic analysis of how neural networks construct their internal feature spaces. Ultimately, the goal of this project is to provide both a practical tool and a methodological framework for exploring neural network representations. By making embedding dynamics easier to inspect and analyze during training, the project aims to contribute to ongoing efforts in interpretable machine learning and provide insight into how deep learning models learn structured representations of complex data.
Recommended Citation
Puls, Ethen and Barnett, Alina J., "Latentris: Interactive 3D Visualization of Neural Network Representation Learning Over Training Time" (2026). Oral Presentations. Paper 32.
https://digitalcommons.uri.edu/gradcon2026-presentations/32