July 2014, Ali’s “Patchwise Joint Sparse Tracking with Occlusion Detection” has been accepted for publication in TIP 2014

SSP is pleased to announce that our paper, titled ‘Patchwise Joint Sparse Tracking with Occlusion Detection’, has been accepted for publication in the IEEE Transactions on Image Processing. Please see here for more details.

SSP @ ICIP 2014


May 2014, SSP will be at ICIP 2014

SSP will be presenting two of its recent works on visual tracking and dictionary learning at the IEEE International Conference on Image Processing (ICIP 2014) in Paris, France. Please see here for more details.

May 2014, Ali’s “When Pixels Team up: Spatially-Weighted Sparse Coding for Hyperspectral Image Classification” has been accepted for publication in GRSL 2014

SSP is pleased to announce that our paper, titled ‘When Pixels Team up: Spatially-Weighted Sparse Coding for Hyperspectral Image Classification’, has been accepted for publication in the IEEE Geoscience and Remote Sensing Letters. Please see here for more details.

April 2014, Ali’s “Spatial-Aware Dictionary Learning for Hyperspectral Image Classification” has been accepted for publication in TGRS 2014

SSP is pleased to announce that our paper, titled ‘Spatial-Aware Dictionary Learning for Hyperspectral Image Classification’, has been accepted for publication in the IEEE Transactions on Geoscience and Remote Sensing. Please see here for more details.


May 2013, Mohammad’s “Semi-Spatio Temporal fMRI Brain Decoding” accepted at PRNI 2013

SSP is pleased to announce that our paper, titled ‘Semi-Spatio Temporal fMRI Brain Decoding’, has been accepted for a poster presentation at PRNI 2013. Please see here for more details.

y = Ax in two settings

18 July 2011, y = Ax

The title equation, in which x is assumed to be sparse, arises in two different settings with quite different meanings, yet with similar solutions:

The first is in signal processing, where y is the vector of measurements obtained from an object of interest x using a linear operator denoted here by A. In this setting one seeks to recover the original signal x from a small number of measurements. To achieve this goal we need to constrain both x and A: x is usually assumed to be sparse, either in itself or in some known domain, and A is usually selected so that it is incoherent with the sparsifying basis of x. The solution is then to find the x that minimizes the cardinality of its support subject to y = Ax.
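As a concrete sketch of the recovery step (not from the original post), the support-minimization problem is often attacked greedily, for instance with Orthogonal Matching Pursuit; the Gaussian matrix, dimensions, and sparsity level below are illustrative assumptions:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily build a k-atom support,
    # then least-squares fit the coefficients on that support.
    n = A.shape[1]
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs        # orthogonal to chosen atoms
    x = np.zeros(n)
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
m, n, k = 20, 50, 2                       # illustrative sizes
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(n)
x_true[[7, 33]] = [1.5, -2.0]             # a 2-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))     # recovery error
```

With far fewer measurements than unknowns (20 vs. 50), the sparsity of x is what makes recovery possible at all.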

The second setting is in machine learning, where y denotes the original signal and we seek to represent it as a combination of a few sources. Some ideas are important here, and I will try to point them out. First and foremost, the general object (perhaps a natural scene) represented by the signal y may be the result of many different causes in nature, so we need a highly redundant dictionary A to represent every y sparsely and linearly. Perhaps y represents a face, in which case we need at least as many atoms as there are people. Secondly, any single instance of y captured by sensors is usually the result of only a few active causes and may thus be represented sparsely using A. This is the main idea behind finding sparse representations, or features, for our signals y. The solution, though, is the same as in the signal processing setting.
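A minimal sketch of learning such a redundant dictionary (not from the post; the alternation below is MOD-style with a crude top-k coding step, and all sizes are illustrative assumptions):

```python
import numpy as np

def code_topk(D, Y, k):
    # Crude sparse coding: pick the k atoms most correlated with each
    # signal, then least-squares fit coefficients on that support.
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        support = np.argsort(np.abs(D.T @ Y[:, j]))[-k:]
        X[support, j], *_ = np.linalg.lstsq(D[:, support], Y[:, j], rcond=None)
    return X

rng = np.random.default_rng(1)
m, n_atoms, n_sig, k = 16, 32, 300, 3     # overcomplete: 32 atoms in R^16

# synthetic training set: sparse combinations of a hidden dictionary
D_true = rng.standard_normal((m, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((n_atoms, n_sig))
for j in range(n_sig):
    idx = rng.choice(n_atoms, size=k, replace=False)
    X_true[idx, j] = rng.standard_normal(k)
Y = D_true @ X_true

# alternate: sparse-code Y, then update D by least squares (MOD)
D = rng.standard_normal((m, n_atoms))
D /= np.linalg.norm(D, axis=0)
errors = []
for _ in range(15):
    X = code_topk(D, Y, k)
    D = Y @ np.linalg.pinv(X)                       # dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12          # re-normalize atoms
    errors.append(np.linalg.norm(Y - D @ code_topk(D, Y, k)))
print(errors[0], errors[-1])                        # fit improves over iterations
```

Methods used in practice (MOD, K-SVD, online dictionary learning) refine both steps, but the alternation between sparse coding and dictionary update is the common skeleton.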

In signal processing, where A is a measurement matrix, we are interested in matrices that capture the information in x using very few measurements and convey that information in y. In machine learning, we are interested in learning dictionaries A that yield very sparse yet information-preserving features x. Now, invoking the symmetry of mutual information, the fact that the amount of information y conveys about x equals the amount of information x conveys about y, one is led to believe that the same A should be used in both cases. The state-of-the-art solutions, though, are quite different. The signal processing community searches for matrices (for a fixed number of columns) that are maximally incoherent yet have the minimum number of rows (two obviously contradictory constraints). The machine learning community looks for maximally redundant dictionaries with as few columns as possible for a fixed number of rows (again, two contradictory constraints).
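The incoherence requirement above can be quantified by the mutual coherence of A, the largest absolute inner product between distinct normalized columns; a quick illustrative computation (sizes are assumptions, not from the post):

```python
import numpy as np

def mutual_coherence(A):
    # largest |inner product| between distinct unit-normalized columns
    G = A / np.linalg.norm(A, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)   # ignore each column's product with itself
    return float(gram.max())

rng = np.random.default_rng(2)
A_meas = rng.standard_normal((20, 50))   # random Gaussian measurement matrix
mu = mutual_coherence(A_meas)
print(mu)
```

Random Gaussian matrices are popular in compressed sensing precisely because their coherence stays moderate even with few rows; adding rows lowers it further, which is the tension the paragraph above describes.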



Sparse Signal Processing

May 2011, Sparse Signal Processing Group Kickoff

The Sparse Signal Processing Group (SSPG) at DML is born.

The Sparse Signal Processing Group (SSPG) at DML is focused on the application of sparse signal representations and compressed sensing theory to challenging signal processing tasks such as image categorization, remote sensing, and sampling of complex networks.

Sparse Signal Processing Group