Chris Larson

I am a lead scientist / software engineer and an adjunct professor at Georgetown University, where I teach courses in machine learning and natural language processing. Previously, I built the machine learning stack that powers Eno, Capital One's virtual assistant.

I am a generalist at heart. Currently I am interested in problems rooted in machine learning, statistics, and optimization, where I have a breadth of experience spanning theoretical algorithm and system design and building production software in Python and C++. I did my PhD in Mechanical Engineering at Cornell University, with a focus in Theoretical and Applied Mechanics and Computer Science. My doctoral research focused on the application of machine learning, in particular deep neural networks, to the problem of tactile sensing in robotics. My work has been published in places like Science Magazine, Soft Robotics, and Advanced Materials. I've also worked on problems in stretchable electronics, solid-state physics, and solid mechanics at Cornell, NASA, and Corning.

Email  /  Google Scholar  /  Github  /  LinkedIn

Adversarial Bootstrapping for Dialogue Model Training.
Oluwatobi Olabiyi, Erik Mueller, Chris Larson, Tarek Lahlou
AAAI Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP–DIAL).
Feb 8, 2020, New York, USA.  
arxiv / bibtex

This paper proposes bootstrapping a dialogue response generator with an adversarially trained discriminator to address exposure bias, improving response relevance and coherence. The method involves training a neural generator in both autoregressive and traditional teacher-forcing modes, with the maximum likelihood loss of the autoregressive outputs weighted by the score from a metric-based discriminator model.

Telephonetic: Making Neural Language Models Robust to ASR and Semantic Noise.
Chris Larson, Tarek Lahlou, Diana Mingles, Zachary Kulis, Erik Mueller
arXiv:1906.05678 [eess.AS], 2019  
arxiv / bibtex

(i) Language models can be made robust to ASR noise through phonetic and semantic perturbations to training data. (ii) We achieve state-of-the-art perplexity of 37.87 on the Penn Treebank corpus (among models trained only on that data source) using a character-based language model and a training procedure that eliminates correlation in sequential inputs at the minibatch level.

A Deformable Interface for Human Touch Recognition using Stretchable Carbon Nanotube Dielectric Elastomer Sensors and Deep Neural Networks.
Chris Larson, Joseph Spjut, Ross Knepper, Rob Sheppard
Soft Robotics, 2019  
pdf / arxiv / project page / bibtex

Neural networks can learn latent representations of deformation in elastic bodies, enabling deformable media to be used as a communication medium.

Untethered Stretchable Displays for Tactile Interaction.
Bryan Peele, Shuo Li, Chris Larson, Jason Cortell, Ed Habtour, Rob Sheppard
Soft Robotics, 2019  

We made a balloon version of the children's toy Simon that uses a vanishing touch interface.

Highly stretchable electroluminescent skin for optical signaling and tactile sensing.
Chris Larson, B. Peele, S. Li, S. Robinson, M. Totaro, L. Beccai, B. Mazzolai, R. Sheppard
Science Magazine, 2016  
pdf / supplement / interview / bibtex

Ionic hydrogel electrodes are used to create a hyperelastic light display that can stretch to 5X its original length and expand its surface area by a factor of 6.5, eclipsing the previous state of the art by a factor of 4.

Quantitative measurement of Q3 species in silicate and borosilicate glasses using Raman spectroscopy.
B.G. Parkinson, D. Holland, M.E. Smith, Chris Larson, J. Doerr, M. Affatigato, S.A. Feller, A.P. Howes, C.R. Scales
Journal of Non-Crystalline Solids, 2008  

A 29Si MAS NMR study of silicate glasses with high lithium content.
Chris Larson, J. Doerr, M. Affatigato, S.A. Feller, D. Holland, M.E. Smith
Journal of Physics: Condensed Matter, 2006  

Blog Posts

The Ellipsoid method

The ellipsoid method is an approach proposed by Shor to solve convex optimization problems. It was further developed by Yudin and Nemirovskii, and Leonid Khachiyan later used it in his derivation of the first polynomial-time algorithm for linear programming. In this post I summarize an approach based on Khachiyan's algorithm. I also wrote a small C++ library that implements the algorithm.
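The central-cut update at the core of the method fits in a few lines. Here is a minimal sketch in Python (rather than the C++ of the library); the quadratic test problem, initial radius, and tolerance are illustrative, not part of the post:

```python
import numpy as np

def ellipsoid_minimize(f, grad, x0, R, n_iters=200):
    """Minimize a convex function with the central-cut ellipsoid method.

    x0 is the center of an initial ball of radius R that must contain
    the minimizer; grad returns a (sub)gradient at a point.
    """
    n = len(x0)
    x = np.array(x0, dtype=float)
    P = (R ** 2) * np.eye(n)              # shape matrix of the ellipsoid
    best_x, best_f = x.copy(), f(x)
    for _ in range(n_iters):
        g = grad(x)                       # cut: minimizer lies where g.(y-x) <= 0
        denom = np.sqrt(g @ P @ g)
        if denom < 1e-12:                 # ellipsoid has effectively collapsed
            break
        gn = g / denom                    # normalized cut direction
        Pg = P @ gn
        x = x - Pg / (n + 1)              # move center into the kept half-space
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Illustrative problem: minimize (x0 - 1)^2 + (x1 + 2)^2 over a ball of radius 10.
f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2
grad = lambda v: np.array([2 * (v[0] - 1), 2 * (v[1] + 2)])
x_star, f_star = ellipsoid_minimize(f, grad, x0=[0.0, 0.0], R=10.0)
```

Each iteration shrinks the ellipsoid's volume by a fixed factor that depends only on the dimension, which is what yields the polynomial-time guarantee for linear programming.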

OrbTouch -> using deformation as a medium for human-computer interaction

This mini-post explores the use of deformation as a medium for human-computer interaction. The question is the following: can we use the shape of an object, and how it changes in time, to encode information? Here I present a deep learning approach to this problem and show that we can use a balloon to control a computer.
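The framing can be sketched as a classification problem over windows of deformation readings. This is only a toy illustration of that framing, not the post's model: the sensor count, window length, gesture set, and the nearest-centroid classifier (standing in for the deep network) are all placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a stream of sensor readings from the balloon's surface
# is cut into fixed-length windows, and each window is mapped to a gesture.
N_SENSORS, WINDOW = 8, 20
GESTURES = ["poke", "squeeze", "twist"]

def featurize(window):
    """Flatten a (WINDOW, N_SENSORS) deformation snippet into a feature vector."""
    return window.reshape(-1)

def fit_centroids(windows, labels):
    """Nearest-centroid classifier: one mean template per gesture class."""
    X = np.stack([featurize(w) for w in windows])
    return {g: X[labels == i].mean(axis=0) for i, g in enumerate(GESTURES)}

def classify(window, centroids):
    x = featurize(window)
    return min(centroids, key=lambda g: np.linalg.norm(x - centroids[g]))

# Synthetic data: each gesture deforms a different patch of sensors.
def synth(gesture_id, n=30):
    base = rng.normal(0.0, 0.1, size=(n, WINDOW, N_SENSORS))
    base[:, :, gesture_id * 2:gesture_id * 2 + 2] += 1.0  # pressed region
    return base

windows = np.concatenate([synth(i) for i in range(3)])
labels = np.repeat(np.arange(3), 30)
centroids = fit_centroids(windows, labels)
pred = classify(synth(1, n=1)[0], centroids)  # a fresh "squeeze" window
```

The point of the sketch is just the interface: shape over time in, discrete command out; the actual post replaces the hand-built features and centroids with a learned network.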

Deep reinforcement learning for checkers -- pretraining a policy

This post discusses an approach to approximating a checkers-playing policy using a neural network trained on a database of human play. It is the first post in a series covering the engine, which uses a combination of supervised learning, reinforcement learning, and game tree search to play checkers. This set of approaches is based on AlphaGo (see this paper). Unlike Go, checkers has a known optimal policy that was found using exhaustive tree search and end-game databases (see this paper). Although Chinook has already solved checkers, the game is still very complex, with over 5 x 10^20 states, making it an interesting application of deep reinforcement learning. To give you an idea, it took Jonathan Schaeffer and his team at the University of Alberta 18 years (1989-2007) to search the game tree end-to-end. Here I will discuss how we can use a database of expert human moves to pretrain a policy, which will be the building block of the engine. I'll show that a pretrained policy can beat intermediate/advanced human players as well as some of the popular online checkers engines. In later posts we will take this policy and improve it through self-play.
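The supervised pretraining step reduces to minimizing cross-entropy between the policy's move distribution and expert moves. A minimal sketch, with a linear softmax policy standing in for the neural network and synthetic data standing in for the human-game database (the board encoding and move vocabulary sizes here are placeholders, not the post's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding: 32 playable squares per board, scored against
# a fixed move vocabulary.
N_SQUARES, N_MOVES = 32, 128

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_policy(boards, moves, epochs=200, lr=1.0):
    """Pretrain a linear softmax policy on (board, expert-move) pairs by
    full-batch gradient descent on the cross-entropy loss."""
    W = np.zeros((N_SQUARES, N_MOVES))
    n = len(boards)
    for _ in range(epochs):
        probs = softmax(boards @ W)
        probs[np.arange(n), moves] -= 1.0   # dLoss/dlogits = probs - one_hot
        W -= lr * boards.T @ probs / n
    return W

# Synthetic stand-in for a database of expert human games.
boards = rng.normal(size=(500, N_SQUARES))
true_map = rng.normal(size=(N_SQUARES, N_MOVES))
moves = (boards @ true_map).argmax(axis=1)  # pretend "expert" labels

W = train_policy(boards, moves)
acc = ((boards @ W).argmax(axis=1) == moves).mean()  # well above 1/128 chance
```

The real engine swaps the linear map for a deep network and, as later posts cover, uses this pretrained policy as the starting point for improvement by self-play.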