Date of Award
1-1-2022
Degree Type
Thesis
Degree Name
Master of Science in Computer Science
Department
Computer Science and Statistics
First Advisor
Marco Alvarez
Abstract
Understanding neural networks has been challenging since their inception: they are non-linear and learn internal representations that the user does not define, making their inner workings opaque. Methods for interpreting networks have been studied for many years. This study uses centered kernel alignment (CKA) to measure the similarity between the representations produced by different layers of a network. CKA is invariant to orthogonal transformations and to isotropic scaling, and it empirically outperforms other representation-similarity measures.
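For reference, the following is a minimal sketch of the linear form of CKA in NumPy. The thesis may instead use the kernel (HSIC-based) formulation; the function name and matrix shapes here are illustrative assumptions, not the thesis's code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_examples, d1) activations from one layer
    Y: (n_examples, d2) activations from another layer
    """
    # Center each feature so the implied Gram matrices are centered.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Linear-kernel CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic_xy = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, ord='fro')
    norm_y = np.linalg.norm(Y.T @ Y, ord='fro')
    return hsic_xy / (norm_x * norm_y)
```

Because the formula depends on X and Y only through inner products, applying an orthogonal transformation or an isotropic rescaling to either matrix leaves the score unchanged, which is the invariance property noted above.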
These similarity results reveal how architectural choices affect learning and the representations a network forms. The effects of changing the depth and width of MLPMixer, a neural network architecture, are visualized through representation similarity. VGGNet, another architecture, is used to evaluate a novel similarity-based pruning technique that achieves higher accuracy than magnitude-based pruning at a given percentage of neurons pruned. The relationship between network accuracy and similarity is presented and discussed.
This study furthers the understanding of specific architectures and of general principles, and it proposes and evaluates a new pruning technique based on this analysis. The proposed method outperforms standard iterative magnitude pruning in both accuracy and the fraction of the network that can be pruned. We also establish a clear and meaningful relationship between network similarity, performance, capacity, and training objective.
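The abstract does not spell out the pruning criterion, so the sketch below is only one plausible similarity-guided scheme: units or layers whose representations are most similar to their neighbors (as measured by CKA) are treated as redundant and pruned first. All names here are hypothetical.

```python
import numpy as np

def similarity_pruning_scores(activations, cka_fn):
    """Hypothetical scoring step for similarity-guided pruning.

    activations: list of (n_examples, d_i) matrices, one per layer.
    cka_fn: a similarity function such as linear_cka above.
    Returns one redundancy score per layer: higher means the layer's
    representation closely matches its neighbors, making it a better
    pruning candidate under this illustrative criterion.
    """
    scores = []
    for i, acts in enumerate(activations):
        neighbor_sims = []
        if i > 0:
            neighbor_sims.append(cka_fn(acts, activations[i - 1]))
        if i < len(activations) - 1:
            neighbor_sims.append(cka_fn(acts, activations[i + 1]))
        scores.append(float(np.mean(neighbor_sims)))
    return scores

# Illustrative usage: prune the most redundant layers first.
# order = np.argsort(similarity_pruning_scores(acts, linear_cka))[::-1]
```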
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Lachance, Alex, "Using Centered Kernel Alignment for Understanding and Pruning Neural Networks" (2022). Open Access Master's Theses. Paper 2283.
https://digitalcommons.uri.edu/theses/2283