November 22, 2019 – 11:15 AM
TSRB Auditorium

Max Raginsky

University of Illinois at Urbana-Champaign

Abstract

A number of problems in machine learning involve sequence-to-sequence transformations, i.e., nonlinear operators that map an input sequence to an output sequence. Traditionally, such input-output maps have been modeled using discrete-time recurrent neural nets. However, there has been a recent shift in sequence-to-sequence modeling from recurrent network architectures to autoregressive convolutional network architectures. These temporal convolutional nets (TCNs) allow for easily parallelizable training and operation, while still achieving competitive performance. In this talk, based on joint work with Joshua Hanson, I will show that TCNs are universal approximators for a large class of causal and time-invariant input-output maps that have approximately finite memory. Specifically, I will present quantitative approximation rates for deep TCNs with rectified linear unit (ReLU) activation functions in terms of the width and depth of the network and the modulus of continuity of the original input-output map. Next, I will show how to apply these results to input-output maps with incrementally stable nonlinear state-space realizations. As an example, I will discuss a class of nonlinear systems of Lur’e type that satisfy a variant of the discrete-time circle criterion.
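For readers unfamiliar with the architecture, the following is a minimal NumPy sketch of a temporal convolutional stack: causal dilated convolutions followed by ReLU activations, with the dilation doubling at each layer. It is purely illustrative of the TCN idea (causality and ReLU nonlinearities), not the specific networks or approximation constructions analyzed in the talk; all names, shapes, and parameters here are assumptions chosen for exposition.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def causal_conv1d(x, w, b, dilation=1):
    """Causal 1-D convolution: output at time t uses only x[t], x[t-d], x[t-2d], ...
    x: (T, c_in) input sequence, w: (k, c_in, c_out) kernel, b: (c_out,) bias."""
    T, c_in = x.shape
    k, _, c_out = w.shape
    # Left-pad so the receptive field never looks into the future (causality).
    pad = dilation * (k - 1)
    x_pad = np.vstack([np.zeros((pad, c_in)), x])
    y = np.empty((T, c_out))
    for t in range(T):
        # Gather the k dilated taps ending at the current time step.
        taps = x_pad[t : t + pad + 1 : dilation]   # shape (k, c_in)
        y[t] = np.einsum("ki,kio->o", taps, w) + b
    return y

def tiny_tcn(x, layers):
    """Stack of causal conv layers with ReLU activations and doubling dilation."""
    h = x
    for depth, (w, b) in enumerate(layers):
        h = relu(causal_conv1d(h, w, b, dilation=2 ** depth))
    return h

# Illustrative usage: a scalar input sequence mapped through a 3-layer TCN of width 4.
rng = np.random.default_rng(0)
T, width = 32, 4
x = rng.standard_normal((T, 1))
layers = [(rng.standard_normal((2, 1 if d == 0 else width, width)) * 0.5,
           np.zeros(width)) for d in range(3)]
y = tiny_tcn(x, layers)
print(y.shape)  # (32, 4): one output vector per time step, computed causally
```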

Biography

Maxim Raginsky received the B.S. and M.S. degrees in 2000 and the Ph.D. degree in 2002 from Northwestern University, all in Electrical Engineering. He has held research positions with Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently an Associate Professor with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. He also holds a courtesy appointment with the Department of Computer Science.