Graduate seminar in communication theory and systems

 

The class meets Wed 3-4pm in Calit2 (Atkinson Hall) 4004


We discuss current research topics through weekly invited seminars.

WED April 1. Yuri Polyanskiy. MIT

ROOM 4309 Jacobs

Title: Finite blocklength information theory of wireless channels.


Abstract:

Finite blocklength information theory attempts to elucidate the fundamental performance-delay tradeoffs. In this talk I will give an overview of several recent results for single- and multiple-antenna channels subject to fading. The multiplicative component of the noise in these channels poses a number of new challenges for extending existing tools.


I will focus on three results:

1) Analysis based on outage capacity imposes contradictory assumptions: the blocklength should be small enough for the fading to be almost constant, yet large enough for the Shannon coding theorem to be effective. In practice, however, this analysis "just works". I will explain why (the channel dispersion is $V=0$).


2) A classical result shows that the optimal energy efficiency (-1.59 dB) is achievable in the absence of CSI at the receiver by flash signaling. Should we then throw the CSI away? (No!)


3) Space-time codes provide diversity, so they are only useful for modulation (diversity) and useless for increasing rate (multiplexing). Right? (Wrong.)
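For context, the central quantitative object in this line of work is the normal approximation of Polyanskiy, Poor, and Verdú (stated here as standard background, not as a result of the talk):

```latex
\log M^*(n,\epsilon) \;=\; nC \;-\; \sqrt{nV}\, Q^{-1}(\epsilon) \;+\; O(\log n)
```

where $M^*(n,\epsilon)$ is the maximal codebook size at blocklength $n$ and error probability $\epsilon$, $C$ is the channel capacity, $V$ is the channel dispersion, and $Q^{-1}$ is the inverse Gaussian tail function. When $V=0$, as in item 1), the $\sqrt{n}$ penalty term vanishes and capacity is approached unusually quickly.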



WED April 8. Ivana Maric. Ericsson.  

Title: Noisy Network Coding for 5G?

 

Abstract:

We consider communications in wireless networks with multiple relays. These networks will find applications in 5G systems to enable mesh networking, e.g., in wireless backhaul at high frequencies and massive machine-type communications. For such networks, quantize-map-and-forward (or noisy network coding (NNC)) can achieve the cut-set upper bound within a constant gap. In 5G multihop applications, however, this gap may not be negligible.

 

Hou and Kramer proposed an improvement to NNC by allowing relays to choose whether or not to decode the data. Specifically, relays in favorable positions decode-and-forward, and the rest quantize via short-message NNC. However, forwarding of quantized signals by NNC relays introduces interference in the rest of the network.

 

We first present a novel scheme that overcomes this problem by deploying rate-splitting at the NNC relays, thereby facilitating interference cancellation at the remaining relays. Second, we show that optimal quantization at the relays yields improved capacity scaling, whereby the gap from capacity grows only logarithmically. Finally, we discuss a low-complexity modification to NNC that can lead to practical solutions for 5G mesh networking.

Based on joint work with Song-Nam Hong, Dennis Hui and Giuseppe Caire.

 


WED April 15. Silvio Savarese.  Stanford.

Title: Sensing and Sensibility


Abstract:

We are moving into a world where sensors will be ubiquitous. In recent years, we have seen an explosion of new artificial visual sensors that can integrate luminance values with other sensing modalities: infrared, thermal, gravity, and depth, to cite a few. Sensing is not the hard problem here, however. Sensibility, or intelligent understanding of the sensing data, is the challenge. When we look at an environment such as a coffee shop, we don't just recognize the objects in isolation, but rather perceive a rich scenery of the 3D space, its objects, and all the relations among them. This allows us to effortlessly navigate through the environment, or to interact with and manipulate objects in the scene with amazing precision.

A major line of work from my group in recent years has been to design visual models that can process different sensing modalities and enable sensibility. I will start by giving an overview of our research on detecting objects and determining their geometric and physical properties, such as 3D location, pose, or shape, from images, videos, or RGB-Depth imagery. Then, I will demonstrate that these detection methods play a critical role in modeling the interplay between objects and space, which, in turn, enables simultaneous inference of the semantic and spatial properties of a scene. I will conclude this talk by demonstrating that our models for sense and sensibility are potentially transformative in application areas related to autonomous or assisted navigation, smart environments, augmented reality, and large-scale information management.



WED April 22. Fabio Pasqualetti. UC Riverside

Title: Controllability Metrics, Limitations and Algorithms for Complex Networks


Abstract:

Complex networks emerge in diverse natural and technological systems. The ability to manipulate and reconfigure complex networks through local interventions is fundamental to guaranteeing reliable and efficient network functionality. Typically, it is neither feasible nor desirable to control each network component; hence the importance of identifying optimal control components and strategies, and of characterizing to what extent a few control components can reprogram complex networks.


In this talk I will present different metrics for network controllability and highlight fundamental limitations. In particular, based on the network topology and weights, I will show that most complex networks are difficult to control by a few control nodes, as the control energy grows exponentially with the network cardinality. Conversely, I will characterize certain “anisotropic” networks whose controllability degree is independent of the network cardinality. Finally, I will describe a scalable control algorithm with performance guarantees, and I will discuss possible research directions.
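The energy limitation mentioned above can be illustrated with a small numerical sketch (illustrative, not from the talk): for discrete-time dynamics $x(t+1) = Ax(t) + Bu(t)$, the minimum input energy needed to reach a state $x_f$ is $x_f^T W_T^{-1} x_f$, where $W_T$ is the finite-horizon controllability Gramian. On a line network controlled from one end node, the smallest eigenvalue of $W_T$ collapses as the network grows, so the worst-case control energy explodes:

```python
import numpy as np

def line_network(n, rho=0.45):
    """Adjacency matrix of a weighted line graph, scaled to be stable."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = rho
    return A

def controllability_gramian(A, B, T):
    """Finite-horizon Gramian W_T = sum_{t=0}^{T-1} A^t B B^T (A^T)^t."""
    n = A.shape[0]
    W = np.zeros((n, n))
    M = np.eye(n)
    for _ in range(T):
        W += M @ B @ B.T @ M.T
        M = A @ M
    return W

def min_gramian_eig(n):
    """Smallest Gramian eigenvalue for a line network driven at one end."""
    A = line_network(n)
    B = np.zeros((n, 1))
    B[0, 0] = 1.0                      # single control node at one end
    W = controllability_gramian(A, B, 3 * n)
    return float(np.linalg.eigvalsh(W).min())
```

Comparing n = 5 against n = 10 shows the smallest Gramian eigenvalue dropping by several orders of magnitude, which is the exponential energy growth described in the abstract.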


WED April 29. Venkat Chandrasekaran. Caltech

Title: Relative Entropy Relaxations for Signomial Optimization


Abstract:

Due to its favorable analytical properties, the relative entropy function plays a prominent role in a variety of contexts in information theory and in statistics. In this talk, I'll discuss some of the beneficial computational properties of this function by describing a class of relative-entropy-based convex relaxations for obtaining bounds on signomial programs (SPs), which arise commonly in many problem domains. SPs are non-convex in general, and families of NP-hard problems can be reduced to SPs. The central idea underlying our approach is a connection between the relative entropy function and efficient proofs of nonnegativity via the arithmetic-geometric-mean inequality.
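The connection alluded to at the end can be made concrete with a standard fact (background, not a summary of the talk's results): the weighted arithmetic-geometric-mean inequality says that for weights $\lambda_i \ge 0$ with $\sum_i \lambda_i = 1$ and any $y_i > 0$,

```latex
\sum_i \lambda_i y_i \;\ge\; \prod_i y_i^{\lambda_i}.
```

Substituting $y_i = e^{a_i^\top x}$ certifies that the signomial $\sum_i \lambda_i e^{a_i^\top x} - e^{(\sum_i \lambda_i a_i)^\top x}$ is nonnegative for every $x$. Searching over such certificates is where the relative entropy $D(\nu \| \lambda) = \sum_i \nu_i \log(\nu_i/\lambda_i)$, which is jointly convex in its arguments, enters the relaxation.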



WED May 6. Dror Baron. NCSU

Title:  Recovery from Linear Measurements via Denoising and Approximate Message Passing


Abstract:

Approximate message passing (AMP) decouples linear inverse problems into iterative additive white Gaussian noise (AWGN) scalar channel denoising problems. The first part of the talk provides a tutorial-style overview of AMP, including advantages, a convenient design methodology, and possible pitfalls. The second part describes how we have designed denoising algorithms that recover signals from AWGN, and applied these denoisers within AMP to reconstruct signals in linear inverse problems. Examples include denoising parametric signals, universal denoising for stationary ergodic signals whose input statistics are unknown, and image denoising. Our favorable numerical results indicate that AMP is a promising tool for solving linear inverse problems.
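The decoupling described above can be sketched in a few lines (an illustrative instance using a soft-thresholding denoiser, not necessarily the denoisers from the talk): AMP alternates between denoising the pseudo-data $x + A^\top z$, which behaves like the true signal plus AWGN, and updating a residual that carries an Onsager correction term.

```python
import numpy as np

def soft_threshold(u, theta):
    """Scalar soft-thresholding denoiser, suited to sparse signals."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp(y, A, iters=30, tau=2.0):
    """AMP for y = A x + noise, using soft thresholding as the denoiser.

    Each iteration denoises the pseudo-data x + A^T z, which the Onsager
    term in the residual update keeps approximately AWGN-corrupted."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)        # effective noise level
        x = soft_threshold(x + A.T @ z, tau * sigma)  # AWGN denoising step
        # Onsager correction: (#active coordinates / m) times the old residual.
        z = y - A @ x + (np.count_nonzero(x) / m) * z
    return x
```

In typical sparse-recovery regimes (e.g., a few percent sparsity at half undersampling), this recursion reduces the reconstruction error substantially within a few tens of iterations; the threshold policy tau * sigma is one simple choice among many.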



WED May 13. Prakash Ishwar. Boston University

Title: A Topic Modeling Approach to Ranking from Pairwise Preferences

Abstract:

The recent explosion of web analytics tools has enabled us to collect an immense amount of partial preferences for large sets of items, e.g., products from Amazon, movies from Netflix, or restaurants from Yelp, from a large and diverse population of users through transactions, clicks, etc. Modeling, learning, and ultimately predicting the preference behavior of users from pairwise comparisons has been extensively studied since the 1927 work of Thurstone. Yet, almost all models to date have been founded on a clustering perspective in which users are grouped by their preference behavior.

We take a fundamentally different decomposition perspective and propose a new class of generative models for pairwise comparisons in which user preference behavior can be decomposed into contributions from multiple shared latent “causes” (global rankings) that are prevalent in the population. We show how the estimation of shared latent rankings in the new generative model can be formally reduced to the estimation of topics in a statistically equivalent topic modeling problem, in which causes correspond to topics and item-pairs to words. We show that an inevitable consequence of having a relatively small number of shared latent causes in a world with a large number of item-pairs is the presence of “novel” item-pairs for each latent cause. We then leverage recent advances in the topic modeling literature and develop an algorithm based on extreme-point identification of convex polytopes to learn the shared latent rankings. Our algorithm is provably consistent and comes with polynomial sample and computational complexity guarantees. We demonstrate that our new model is empirically competitive with current state-of-the-art approaches in predicting preferences on semi-synthetic and real-world datasets.

This talk is based on joint work with Weicong Ding and Venkatesh Saligrama at Boston University.



WED May 20. NO SEMINAR


WED May 27. Massimo Vergassola. UCSD

Title: Finding the needle in a biological haystack


Abstract:

Early T-cell activation is selected by evolution to rapidly discriminate a few foreign peptides among a vast excess of self-peptides, and it is unclear in quantitative terms how this is possible. It will be discussed how a generic proofreading cascade supplemented by a single negative feedback accounts for early T-cell activation, including antagonistic effects. Modulation of the negative feedback mediated by the SHP-1 phosphatase explains previous counterintuitive observations, and new experiments validate the predictions. Absolute limits on the tradeoffs between decision speed and accuracy are then explored. In addition to the immune system, rapidly developing embryos and the cellular response to stress provide examples where fast and accurate actions are required. Statistical theory under the rubric of 'exploit-explore' supplies rigorous performance bounds and algorithms that realize them. It will be shown that common protein phosphorylation networks can implement optimal decision-theory algorithms, and it will be speculated that the ubiquitous chemical modifications to receptors during signaling actually perform such analogue computations.



WED June 3. Yonatan Kaspi. UCSD

Title: Searching with Measurement Dependent Noise


Abstract:

We consider a search problem in which a target is arbitrarily placed on the unit interval. To acquire the target, any region of the interval can be probed for its presence, but the associated measurement noise increases with the size of the probed region. We are interested in the expected search time required to find the target to within some given resolution and error probability. When the measurement noise is constant (independent of the probed region), this problem is known to be equivalent to standard channel coding with feedback. We characterize the optimal tradeoff between time and resolution (i.e., the maximal rate), and show that in contrast to the case of constant measurement noise, measurement-dependent noise incurs a multiplicative gap between adaptive and non-adaptive search. Moreover, our adaptive scheme attains the optimal rate-reliability tradeoff.
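As a toy illustration of the adaptive, bisection-style strategy for this kind of search (a sketch with an assumed noise model, not the scheme from the talk): maintain a posterior over the target's location, probe the region holding half the posterior mass, and update by Bayes' rule. Measurement-dependent noise enters by letting the flip probability grow with the probed size.

```python
import numpy as np

def adaptive_search(target, n_bins=64, flip=lambda q: 0.0,
                    conf=0.95, max_queries=200, rng=None):
    """Bayesian bisection on [0, 1) under measurement-dependent noise.

    Probing [0, q) returns a noisy indicator of whether the target lies
    inside; flip(q) is the flip probability, which in the measurement-
    dependent model grows with the probed size q."""
    if rng is None:
        rng = np.random.default_rng(0)
    post = np.full(n_bins, 1.0 / n_bins)     # posterior over target bins
    for _ in range(max_queries):
        if post.max() >= conf:
            break
        # Median split: probe the leftmost bins holding ~half the mass.
        k = int(np.searchsorted(np.cumsum(post), 0.5)) + 1
        q = k / n_bins
        p = flip(q)
        obs = (target < q) ^ (rng.random() < p)  # noisy "inside?" answer
        inside = np.arange(n_bins) < k
        post = post * np.where(inside == obs, 1.0 - p, p)  # Bayes update
        post /= post.sum()
    best = int(np.argmax(post))
    return (best + 0.5) / n_bins             # center of the likeliest bin
```

With, say, flip=lambda q: 0.05 + 0.2 * q (an illustrative choice), the noise is measurement-dependent: shrinking the probed region, which only an adaptive strategy can do, buys cleaner measurements.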


An extension of this problem to a multi-target setting is also considered. We highlight the equivalence of this extension to coding for a certain multiple access channel, and characterize the optimal rate as a function of the number of targets. Finally, we show that as the number of targets increases, the performance gap between adaptive and non-adaptive search becomes negligible.


This talk is based on joint work with Ofer Shayevitz and Tara Javidi.