Amazon Computer Scientist Stefano Soatto Disentangles Mysteries of Representation Learning
Stefano Soatto, Director of the UCLA Vision Lab, Director of Applied Science at Amazon AI (Amazon Web Services), and one of the leaders in the march toward pure machine intelligence, spoke to a rapt audience at NYU Tandon on how machine-learning systems can be made to distill data into representations suited to specific tasks.
Soatto, the third speaker in a new seminar series, Modern Artificial Intelligence, organized by Professor Anna Choromanska and hosted by NYU Tandon’s Department of Electrical and Computer Engineering, took the stage at Pfizer Auditorium on April 5 to explain fundamental ideas in representation learning, a critical function of artificial intelligence. Representations parse undifferentiated information — from images, for example — to make it useful for machine-learning tasks like computer vision for autonomous driving and medical image diagnostics. Generally speaking, representations are functions of past data useful for accomplishing future decision or control tasks.
According to Soatto, an expert in computer vision, the key to optimizing the potential of these processes lies in how this data is “packaged.” In the best of all worlds, representations should have four qualities: they should be sufficient, or as informative as the data itself; invariant, or unaffected by nuisance factors, or “noise”; minimal, or as simple as possible; and disentangled, or easy to work with. In other words, for a representation to be optimized to the task at hand, it should find the right tradeoff between accuracy and complexity.
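The accuracy–complexity tradeoff can be made concrete with an information-bottleneck-style objective consistent with Soatto’s published work (an illustrative formulation, not a formula from the talk itself): given data x, task y, and representation z, minimize

```latex
% Illustrative information-bottleneck Lagrangian:
% H(y \mid z) -- how poorly the representation z predicts the task y (accuracy term)
% I(x; z)    -- how much of the input x the representation retains (complexity term)
% \beta      -- trades accuracy against complexity
\mathcal{L}(z) = H(y \mid z) + \beta \, I(x; z)
```

Driving the second term down while holding the first fixed is what squeezes out nuisance “noise” while keeping the representation informative about the task.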
To make what we see comprehensible, our own brains constantly perform highly complex representation processes, noted Soatto. “In visual perception, for example, half of your brain is devoted to visual information. So most of your brain is trying to make sense of what comes from the optical nerve.”
Although Soatto noted that current techniques in deep learning aren’t great at enforcing the four key properties of representations, his research with collaborators — including Choromanska, NYU Tandon assistant professor of electrical and computer engineering — aims to remedy that. It includes a new algorithm known as entropy-based stochastic gradient descent (Entropy-SGD) that improves representations even when the “noise” is non-isotropic — that is, when its strength varies with direction — as is the case in many real-world problems.
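The core idea of Entropy-SGD is to replace the raw loss gradient with the gradient of a smoothed “local entropy,” estimated by an inner loop of stochastic gradient Langevin dynamics (SGLD), which biases the optimizer toward wide, flat valleys. The following is a minimal toy sketch of that structure; the parameter values and the 1-D test problem are illustrative assumptions, not the authors’ code.

```python
import numpy as np

def entropy_sgd(grad_f, w0, outer_steps=200, sgld_steps=20,
                eta=1.0, eta_sgld=0.1, gamma=0.1, thermal=1e-4,
                alpha=0.75, seed=0):
    """Toy sketch of Entropy-SGD.

    grad_f : callable returning the gradient of the loss at a point
    w0     : initial parameter vector (numpy array)

    The inner SGLD loop samples around the current iterate w; the
    running average mu of those samples gives an estimate of the
    local-entropy gradient, gamma * (w - mu).
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(outer_steps):
        wp = w.copy()   # SGLD particle exploring around the current iterate
        mu = w.copy()   # exponential average of the SGLD iterates
        for _ in range(sgld_steps):
            # Langevin step on the loss plus a quadratic tether to w
            wp -= eta_sgld * (grad_f(wp) + gamma * (wp - w))
            wp += np.sqrt(eta_sgld) * thermal * rng.standard_normal(w.shape)
            mu = alpha * mu + (1.0 - alpha) * wp
        # Outer update along the estimated local-entropy gradient
        w -= eta * gamma * (w - mu)
    return w
```

On a simple quadratic loss such as f(w) = ½‖w − 3‖², the sketch converges to the minimizer; on non-convex losses, the smoothing is what steers the iterates toward flatter regions.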
Soatto said that while research in this area is still nascent, several teams around the world are at work expanding on his and Choromanska’s experiments. He concluded by emphasizing that a representation is highly specific to its task and provides no useful information if the task is not defined.
Next Seminar: May 4
The final seminar in the series will feature Vladimir Vapnik, co-creator of the support vector machine (SVM) algorithm. See event details.