Understanding Internal Feature Development in Deep Convolutional Neural Networks for Time Series
Taken as a black box, a neural network applies a complex nonlinear function to its input to produce an output. This function is determined by the network's internal weights. During training, these weights are adjusted based on a sample of input-output pairs so that the difference between the network's actual outputs and the desired outputs is minimized. Deep neural networks have been employed very successfully as prediction models in many applications. A major remaining challenge is to understand how the internal adaptations that result from learning enable such models to make accurate predictions. In this research, we propose several methods for interpreting the learned internal representations in neural networks in the context of time-series signal classification. Our two main interpretation goals are: (1) Feature Interpretation, to understand what features of the inputs are learned by the network's internal units; and (2) Feature Development, to visualize how the ability of individual network layers to differentiate among classes varies with the depth of the layer within the network. To evaluate the proposed methods, we develop neural networks for sleep stage classification. The networks take as inputs physiological signals recorded from human subjects during sleep and map these multidimensional time series to sequences of symbols corresponding to different sleep stages. Our results demonstrate that our techniques achieve both aims, Feature Interpretation and Feature Development. We show that the networks' internal units can learn features that closely resemble those used by human sleep experts in the traditional sleep stage scoring process, such as sleep spindles, K-complexes, and slow waves. Furthermore, our results trace the development of these features with layer depth, showing that the network assembles them gradually by extracting simple building blocks in shallow layers and combining them in deeper layers to form more complex features. Additionally, we observe that the ability of the network layers to differentiate among sleep stages increases with layer depth.
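As a rough illustration of the setting described in the abstract (not the thesis's actual architecture), the sketch below shows a small 1D convolutional network that maps multichannel physiological signals to sleep-stage labels, one training step that adjusts the weights to reduce a classification loss, and a forward hook for inspecting an internal layer's activations in the spirit of Feature Interpretation. The layer sizes, channel count, epoch length, and hook index are assumptions made for illustration only.

```python
# Minimal sketch, assuming 3-channel 30-second epochs of 3000 samples and 5 sleep stages.
import torch
import torch.nn as nn

class SleepStageCNN(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = 5):
        super().__init__()
        # Shallow layers extract simple local waveform building blocks;
        # deeper layers combine them into more complex, stage-specific features.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):  # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = SleepStageCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch: the weights are adjusted so that the
# mismatch between predicted and desired stage labels (the loss) decreases.
x = torch.randn(8, 3, 3000)        # 8 epochs of 3-channel signal, 3000 samples each
y = torch.randint(0, 5, (8,))      # desired sleep-stage labels
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Feature Interpretation (sketch): record an internal layer's activations with a
# forward hook, so one can inspect which input segments drive individual units.
activations = {}
model.features[3].register_forward_hook(           # second conv layer (assumed index)
    lambda mod, inp, out: activations.update(layer2=out.detach()))
_ = model(x)
print(activations["layer2"].shape)                 # (8, 32, 750)
```

A study of which signal windows maximize particular unit activations, layer by layer, is one plausible way to relate internal units to waveforms such as spindles or K-complexes; the thesis's actual interpretation methods are not detailed in this abstract.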
- Identifier: etd-27121
- Year: 2021
- Date created: 2021-08-11
- Last modified: 2023-12-05
Permanent link to this page: https://digital.wpi.edu/show/mp48sh024