
Sparse Learning in Brain Networks and Brain-like Artificial Intelligence


Sparsity is one of the concepts that can improve the effectiveness of machine learning methods. A sparse machine learning model is one in which only a relatively small number of parameters or predictors play an important role. The main benefits of sparsity in machine learning include simplifying the model to improve interpretability, and reducing runtime and generalization error. The work in this dissertation studies methods that exploit sparsity to recover underlying information in real-world applications. In particular, it spans two domains: sparse graph discovery for brain networks, and spike train classification on resource-constrained devices.

1. Brain Network Discovery. Discovering the network structure of the brain is useful in many ways, such as aiding the diagnosis of brain diseases and understanding cognitive behaviors. The goal of brain network discovery is to identify both brain regions (nodes) and the functional connections (edges) between these regions from fMRI scans of the human brain. In this research, we focus on extending current sparse Gaussian graphical models so that nodes and edges are discovered simultaneously within a single, mathematically consistent model. Sparse graphical models simplify the network representation by retaining only meaningful edges of the brain network, and node discovery and edge detection facilitate each other in a synergistic fashion, improving the discovered network. We further reformulate the problem of multi-state brain network discovery and propose two approaches to solve it.

2. Spike Train Classification. Spike train classification is an important problem in many areas such as healthcare and mobile sensing, where each spike train is a high-dimensional time series of binary values. Conventional research on spike train classification mainly focuses on developing Spiking Neural Networks (SNNs) under resource-sufficient settings (e.g., on GPU servers).
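The core idea in the brain-network part above — that in a Gaussian graphical model the edges are exactly the nonzero off-diagonal entries of the precision (inverse covariance) matrix — can be illustrated with a small simulation. This is a hedged sketch, not the dissertation's method: the 4-node graph, sample size, and hard threshold are made up, and a sparse graphical model such as the graphical lasso would use an L1 penalty to zero out small entries rather than thresholding after the fact.

```python
import numpy as np

# Illustrative ground-truth precision matrix (inverse covariance).
# A nonzero off-diagonal entry Theta[i, j] means variables i and j
# share an edge: they are conditionally dependent given the rest.
Theta = np.array([
    [2.0, 0.6, 0.0, 0.0],
    [0.6, 2.0, 0.6, 0.0],
    [0.0, 0.6, 2.0, 0.0],
    [0.0, 0.0, 0.0, 2.0],
])  # a chain 0-1-2, with node 3 isolated
Sigma = np.linalg.inv(Theta)

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(4), Sigma, size=50_000)
S = np.cov(X, rowvar=False)        # empirical covariance
Theta_hat = np.linalg.inv(S)       # unregularized precision estimate

# Recover edges by thresholding small entries; a graphical lasso
# would instead add an L1 penalty so they become exactly zero.
edges = {(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(Theta_hat[i, j]) > 0.3}
print(sorted(edges))
```

Note that the covariance itself is dense here (node 0 and node 2 are marginally correlated through node 1); only the precision matrix reveals the sparse conditional-independence structure, which is why the graphical-model view is the natural one for edge discovery.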
However, in many real-world applications, SNN models must be deployed on resource-constrained platforms (e.g., mobile devices) to analyze high-dimensional spike train data. We propose two extensions of SNNs that improve accuracy and reduce computational cost on spike-train-related tasks. The first is based on re-parameterizing the parameters of an SNN and applying sparsity regularization during optimization. The second quickly identifies sparse, informative signals within long and noisy spike trains.
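The sparsity-regularization idea in the second part — driving many parameters to exactly zero during optimization — can be sketched on a generic L1-penalized least-squares problem solved by proximal gradient descent (ISTA). This is an illustration only, not the dissertation's SNN re-parameterization; the synthetic data, penalty strength, and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrinks values toward zero
    # and sets anything with magnitude below t exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam, iters=500):
    """Proximal gradient for min_w 0.5*||Xw - y||^2 + lam*||w||_1."""
    # Step size from the Lipschitz constant of the smooth part.
    lr = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)                     # squared-loss gradient
        w = soft_threshold(w - lr * grad, lr * lam)  # L1 proximal step
    return w

# Synthetic data where only 3 of 20 predictors matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(200)

w_hat = ista_lasso(X, y, lam=5.0)
print("nonzero coefficients:", np.flatnonzero(w_hat))
```

The point of the proximal step, as opposed to plain gradient descent on a penalized loss, is that irrelevant weights become exactly zero rather than merely small — the same property that makes a sparse model cheaper to store and evaluate on a resource-constrained device.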

Identifier
  • etd-63106
Year
  • 2022
Date created
  • 2022-04-21


Permanent link to this page: https://digital.wpi.edu/show/x059cb514