Our research in signal processing has focused mainly on the following areas:
Signal processing for space-time wireless communications has been described separately.
Discrete signal processing
Suppose a source with unknown distribution is observed after passing through a discrete memoryless channel with a known transition matrix. The denoising problem is to generate an estimate of the underlying noise-free sequence from the noisy observations. Weissman et al. (2004) recently proposed a discrete universal denoising algorithm (DUDE) and an asymptotic analysis for this problem. The central idea of this elegant approach is to compute, for each position of the noisy sequence, an empirical distribution of its occurrences within a context, i.e., a bi-directional window. We made a connection between this approach and string matching techniques commonly used in computer science. Using this connection, we are able to design more efficient denoising algorithms, even when the context sizes are larger than the regularity condition needed in DUDE. We also studied loss estimation methods that allow us to choose appropriate context sizes for improving the denoising of particular observations. This is a topic of ongoing research.
- S. Chen, S. N. Diggavi, S. Dusad, and S. Muthukrishnan, "Efficient String Matching Algorithms for Combinatorial Universal Denoising," in Proc. IEEE Data Compression Conference (DCC), Snowbird, Utah, 2005, pp. 153-162.
  Download: strmtch.pdf
  Abstract: Inspired by the combinatorial denoising method DUDE, we present efficient algorithms for implementing this idea for arbitrary contexts or for using it within subsequences. We also propose effective, efficient denoising error estimators so we can find the best denoising of an input sequence over different context lengths. Our methods are simple, drawing from string matching methods and radix sorting. We also present experimental results of our proposed algorithms.
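To illustrate the context-counting idea above, here is a minimal sketch of a context-based denoiser. It is not the full DUDE rule (which also uses the channel transition matrix and a loss function); it simply replaces each symbol by the most frequent symbol observed within the same length-k bi-directional context elsewhere in the sequence:

```python
from collections import Counter, defaultdict

def context_denoise(z, k):
    """Simplified context-based denoiser (illustration only, not full DUDE):
    replace each interior symbol by the most frequent symbol observed
    within its length-k bi-directional context anywhere in the sequence."""
    n = len(z)
    counts = defaultdict(Counter)
    # First pass: gather empirical symbol counts per context.
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        counts[ctx][z[i]] += 1
    # Second pass: denoise by majority vote within each context.
    out = list(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        out[i] = counts[ctx].most_common(1)[0][0]
    return out

# A single flipped bit inside a run of zeros is corrected,
# because its context (0, 0) overwhelmingly votes for 0.
print(context_denoise([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], 1))
```

The two-pass structure mirrors DUDE; the string-matching techniques in the paper above make the context-gathering step efficient even for large contexts.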
Geometry signal processing
Many problems in computer graphics require faithful reconstruction of a scene that has been constructed from given primitive elements. Information about a scalar distance field (i.e., the distance from a given observation point to the closest point on the scene) can be used to reconstruct the scene. We have developed efficient algorithms to compute such distance fields using the $l_\infty$ (max-norm) distance metric, which allows us to reliably localize the surface of interest. This also leads to the question of how to place the sampling points in 3-dimensional space so as to minimize the reconstruction distortion.
- G. Varadhan, S. Krishnan, S. N. Diggavi, Y. Kim, and D. Manocha, "Efficient Max-norm Distance Computation and Reliable Voxelization," in Proc. Eurographics Symposium on Geometry Processing, Aachen, Germany, 2003, pp. 116-126.
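For concreteness, here is a brute-force sketch of a max-norm distance field over point primitives (the function names are illustrative; the paper above develops far more efficient algorithms for general primitives):

```python
def linf_distance(p, q):
    """Chebyshev (l_infinity / max-norm) distance between two 3-D points."""
    return max(abs(a - b) for a, b in zip(p, q))

def distance_field(sample_points, scene_points):
    """Brute-force max-norm distance field: for each sample point,
    the distance to the closest scene primitive (here, just points).
    Illustrative only; efficient voxelization avoids this O(n*m) scan."""
    return [min(linf_distance(g, s) for s in scene_points)
            for g in sample_points]

print(linf_distance((0, 0, 0), (1, 2, 3)))        # max(1, 2, 3)
print(distance_field([(0, 0, 0), (5, 5, 5)],
                     [(1, 1, 1), (4, 4, 6)]))
```

Under the max norm, the set of points within distance d of a sample point is an axis-aligned cube, which is what makes reliable voxel-by-voxel surface localization possible.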
Wireless video coding
Signal processing for multi-terminal source coding has been described separately.
In source transmission schemes over noisy channels, considerable performance gains can be obtained by exploiting the characteristics of the payload (such as voice, images, or video). The source-channel separation theorem (Cover and Thomas, Elements of Information Theory, 1991) implies that the problems of source compression and channel transmission can be separated without loss of optimality. However, for finite complexity and delay, significant gains can be obtained through joint source-channel coding. We proposed a joint source-channel coding scheme in which a scalable source coder was combined with Rate-Compatible Punctured Convolutional (RCPC) codes (Hagenauer, 1988). The flexibility offered by this scheme is ideally suited to the uncertain wireless environment, and results using images over realistic wireless channels demonstrated this property.
- N. Chaddha and S. N. Diggavi, "A frame-work for joint source-channel coding of images over time-varying wireless channels," in Proc. IEEE International Conference on Image Processing (ICIP), 1996, pp. 89-92.
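The core idea of combining a scalable coder with RCPC codes is unequal error protection: more important layers get lower-rate (stronger) channel codes. A toy sketch of such a rate assignment (the function name and rate set are hypothetical; the actual scheme optimizes the allocation for end-to-end distortion):

```python
def assign_rcpc_rates(layer_importance, available_rates):
    """Toy unequal error protection: assign the most important scalable
    layers the lowest-rate (most strongly protected) RCPC codes.
    Illustrative sketch; a real allocator minimizes expected distortion
    subject to a total-rate constraint."""
    # Rank layers from most to least important.
    order = sorted(range(len(layer_importance)),
                   key=lambda i: -layer_importance[i])
    rates = sorted(available_rates)  # lowest rate = strongest protection
    assignment = [None] * len(layer_importance)
    for rank, idx in enumerate(order):
        # If layers outnumber rates, reuse the weakest code.
        assignment[idx] = rates[min(rank, len(rates) - 1)]
    return assignment

# Base layer (importance 3) gets rate 1/3; enhancement layers get less.
print(assign_rcpc_rates([3, 2, 1], [2/3, 1/3, 1/2]))
```

Rate compatibility is what makes this practical: all code rates are obtained by puncturing one mother code, so the protection level can be changed on the fly as the channel varies.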
Perceptron learning algorithms
Multi-layer perceptrons (neural networks) are frequently used in signal processing for a variety of learning tasks, with well-known applications in image processing, speech recognition, image recognition, and numerous other areas. Most of the learning algorithms used in practice are nonlinear adaptive algorithms. We presented a stochastic analysis of the steady-state and transient convergence properties of one such adaptive algorithm, the Rosenblatt algorithm, applied to single-layer perceptrons. We examined the steady-state and transient convergence properties of a single-layer perceptron in the fast-learning regime (large step-size/input-power product). It was shown that the convergence points of the algorithm depend on the step size and the input signal power (variance), and that the algorithm is stable for essentially all positive step sizes. We also developed nonlinear recursion models for the transient behavior of the algorithm.
- S. N. Diggavi, J. J. Shynk, and N. J. Bershad, "Convergence Models for Rosenblatt's Perceptron Learning Algorithm," IEEE Transactions on Signal Processing, vol. 43, iss. 7, pp. 1696-1702, 1995.
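For reference, Rosenblatt's rule updates the weight vector only on misclassified samples, scaled by the step size mu (the step size whose interplay with input power the analysis above characterizes). A minimal sketch:

```python
def sign(v):
    """Hard-limiting perceptron output."""
    return 1 if v >= 0 else -1

def rosenblatt_train(samples, labels, mu, epochs):
    """Rosenblatt's perceptron learning rule: on a misclassified sample x
    with desired output d, update w <- w + mu * d * x (equivalent, up to a
    factor of 2, to the w <- w + mu * (d - y) * x form). The convergence
    analysis in the paper studies this recursion stochastically."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, d in zip(samples, labels):
            y = sign(sum(wi * xi for wi, xi in zip(w, x)))
            if y != d:
                w = [wi + mu * d * xi for wi, xi in zip(w, x)]
    return w

# Two linearly separable samples (first coordinate is a bias input).
w = rosenblatt_train([(1, 2), (1, -2)], [1, -1], mu=0.1, epochs=5)
print(w)
```

Note that scaling mu scales the learned weights but not the decision boundary on separable data; it is the step-size/input-power product that governs the transient behavior analyzed above.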
Array signal processing
In several hostile environments, it is important to be able to track incoming aircraft and missiles with high accuracy in order to take appropriate defensive measures. Many modern interferometry and direction-of-arrival estimation algorithms have been developed over the past few decades. However, some deployed equipment uses older lens-based systems, which discard the phase information on which these modern methods rely. We developed a high-resolution direction-of-arrival (DOA) estimation algorithm for the enhancement of such a lens-based array system. Using this algorithm, the lens-based systems could potentially still be used, adapted to yield high-resolution estimates of the arrival directions even though the phase information is lost.