Ax=b in two settings

18 July 2011

The title equation, written here as y = Ax where x is assumed to be sparse, arises in two different settings, with quite different meanings yet with similar solutions:

The first is in signal processing, where y is the result of measurements obtained from an object of interest x using a linear operator, denoted here by A. In this setting one is looking to recover the original signal x from a small number of measurements. To make this possible we need to constrain both x and A: x is usually assumed to be sparse, either in itself or in some known domain, and A is usually chosen to be incoherent with the sparsifying basis of x. The solution is then to find the x that minimizes the cardinality of its support, subject to y = Ax.
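That combinatorial problem, min ||x||_0 subject to y = Ax, is NP-hard in general, so in practice it is attacked with a convex relaxation (l1 minimization, i.e. basis pursuit) or a greedy method. Here is a minimal sketch of one such greedy method, Orthogonal Matching Pursuit, in NumPy; the matrix sizes and the sparsity level k are illustrative assumptions, not anything fixed by the discussion above:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily approximate
    argmin ||x||_0 subject to y = A x, using at most k atoms."""
    n = A.shape[1]
    residual = y.astype(float)
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column of A most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # refit all selected coefficients jointly by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 30, 100, 4                                  # illustrative sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)          # incoherent w.h.p.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
print(np.max(np.abs(omp(A, y, k) - x_true)))          # ~0 when recovery succeeds
```

With a random Gaussian A and m comfortably larger than k log n, this kind of greedy recovery is typically exact.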

The second setting is in machine learning, where y denotes the original signal and we are looking to represent it as a combination of a few sources. Some ideas are important here and I will try to point them out. First and foremost, the general object (perhaps a natural scene) represented by the signal y may be the result of many different causes in nature, so we need a largely redundant dictionary A to represent every such y sparsely and linearly. Perhaps y represents a face, in which case we need at least as many atoms as there are people. Secondly, any one instance of y captured by the sensors is usually the result of only a few active causes, and may thus be represented sparsely using A. This is the main idea behind finding sparse representations, or features, for our signals y. The solution, though, is the same as in the signal processing setting.
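On the learning side, the same sparse-coding step just sits inside an outer loop that also updates A. Below is a rough sketch in the style of the Method of Optimal Directions (MOD), reusing the omp routine from the previous snippet; the number of atoms, the sparsity level and the iteration count are again illustrative assumptions:

```python
import numpy as np

def learn_dictionary(Y, n_atoms, k, n_iter=20, seed=0):
    """MOD-style alternation: sparse-code the signals with the current
    dictionary, then refit the dictionary by least squares (a sketch)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], n_atoms))
    A /= np.linalg.norm(A, axis=0)                    # unit-norm atoms
    for _ in range(n_iter):
        # sparse coding: a k-sparse code for each signal (column of Y)
        X = np.column_stack([omp(A, Y[:, i], k) for i in range(Y.shape[1])])
        # dictionary update: A = Y X^+ minimizes ||Y - A X||_F for fixed X
        A = Y @ np.linalg.pinv(X)
        A /= np.linalg.norm(A, axis=0) + 1e-12        # guard unused atoms
    return A, X
```

K-SVD replaces the least-squares update with per-atom SVD updates, but the alternating structure is the same.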

In signal processing, where A is a measurement matrix, we are interested in using matrices that capture the information in x using very few measurements and convey that information in y. In machine learning we are interested in learning dictionaries A that result in very sparse yet information-preserving features x. Now, mutual information is symmetric: the amount of information y conveys about x equals the amount x conveys about y, which might lead one to believe that the same A should be used in both cases. The state-of-the-art solutions, though, are quite different. The signal processing community searches for matrices (with a fixed number of columns, the dimension of x) that are maximally incoherent yet have as few rows, i.e. measurements, as possible (two obviously contradictory constraints). The machine learning community looks for maximally redundant dictionaries (with a fixed number of rows, the dimension of y) that nevertheless have as few columns, i.e. atoms, as possible (again, two contradictory constraints).
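The usual proxy for "maximally incoherent" is the mutual coherence μ(A): the largest absolute inner product between two distinct unit-norm columns of A. The Welch bound gives the smallest μ any m×n matrix can achieve, so it makes a natural yardstick; a quick check with illustrative sizes:

```python
import numpy as np

def mutual_coherence(A):
    """mu(A) = max over i != j of |<a_i, a_j>|, columns normalized."""
    G = A / np.linalg.norm(A, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(0)
m, n = 30, 100
print(mutual_coherence(rng.standard_normal((m, n))))  # random Gaussian, typically ~0.5-0.7
print(np.sqrt((n - m) / (m * (n - 1))))               # Welch bound, ~0.15 here
```

The gap between the two numbers is the room the signal processing community is trying to close, at the smallest m it can get away with.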
