Student Work

Dynamical Systems Approaches for Deep Learning


This study explores iterative neural networks (INNs), which reimagine neural network architectures as iterated functions, and the recently introduced Sequential2D framework, which expresses INN update functions as left matrix multiplications for improved computational efficiency. We investigate how sparse and low-rank matrix approximations affect model performance, focusing on sparsity levels and weight distribution using the MNIST Random Anomaly Task. Our results highlight the balance between the advantages of parallelization and the need for an equitable distribution of weights across the model. Comparing sparse, low-rank, and dense matrices shows that low-rank representations can increase computational speed while also improving model accuracy. Overall, this research advances our understanding of INNs and Sequential2D, underscoring the importance of matrix representation in tuning neural network architectures for performance and efficiency.
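To make the framing concrete, here is a minimal NumPy sketch of the core idea the abstract describes: the network state is a single vector, one update step is a left matrix multiplication followed by a nonlinearity, and the update is iterated; a truncated SVD gives a low-rank stand-in for the dense operator. All names, shapes, and the ReLU choice are illustrative assumptions, not the report's actual Sequential2D implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 512    # total state size (input + hidden + output blocks); assumed
T = 5      # number of iterations of the update map; assumed
rank = 32  # target rank for the low-rank approximation; assumed

# Dense operator for the iterated map x <- relu(W @ x)
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))

def relu(v):
    return np.maximum(v, 0.0)

def iterate(apply_W, x, steps):
    """Apply the update x <- relu(W x) 'steps' times."""
    for _ in range(steps):
        x = relu(apply_W(x))
    return x

# Low-rank approximation W ~= U @ V via truncated SVD: storing U (n x r)
# and V (r x n) cuts storage/FLOPs per step from n^2 to 2*n*r.
U_full, s, Vt = np.linalg.svd(W)
U = U_full[:, :rank] * s[:rank]  # fold singular values into U
V = Vt[:rank, :]

x0 = rng.normal(size=n)
dense_out = iterate(lambda x: W @ x, x0.copy(), T)
lowrank_out = iterate(lambda x: U @ (V @ x), x0.copy(), T)

print("relative difference after", T, "iterations:",
      np.linalg.norm(dense_out - lowrank_out) / np.linalg.norm(dense_out))
```

The design point the sketch illustrates: applying `U @ (V @ x)` never materializes the dense matrix, which is where the computational-speed gains discussed in the report come from.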

  • This report represents the work of one or more WPI undergraduate students submitted to the faculty as evidence of completion of a degree requirement. WPI routinely publishes these reports on its website without editorial or peer review.
Identifier
  • E-project-042224-161749
  • 121330
Year
  • 2024
Date created
  • 2024-04-22
Source
  • E-project-042224-161749

Permanent link to this page: https://digital.wpi.edu/show/pg15bj94r