Machine Learning: Applications and Theory Conference

Convened on June 12, 2023

The David Rubenstein Forum | 1201 E. 60th Street, Chicago, IL 60637

Conference Organizers

Stephane Bonhomme | The Ann L. and Lawrence B. Buttenwieser Professor in Economics and the College

Azeem M. Shaikh | Ralph and Mary Otis Isham Professor in Economics and the College

 

About the Event

Machine learning is transforming the social sciences. This one-day workshop featured researchers working to deepen our understanding of modern methods as well as researchers applying machine learning in novel ways.

Speakers

Stephen Hansen, University College London

John Lafferty, Yale University

Annie Liang, Northwestern University

Panos Toulis, University of Chicago Booth School of Business

James Evans, University of Chicago

Jens Ludwig, University of Chicago

James Robins, Harvard University

José L. Montiel Olea, Cornell University


Agenda

8:00 – 9:00 a.m.

Breakfast

9:00 – 9:40 a.m.

On Minimaxity and Admissibility of Double Machine Learning (DML) Estimators under Minimal Assumptions
• James M. Robins, Harvard University

Abstract

For many functionals, DML estimators are the state of the art, combining the good predictive performance of black-box machine learning algorithms, the decreased bias of doubly robust estimators, and the analytic tractability and bias reduction of sample splitting with cross-fitting. Recently, Balakrishnan, Wasserman and Kennedy (BWK) introduced a novel assumption-lean model that formalizes the problem of functional estimation when no complexity-reducing assumptions (such as smoothness or sparsity) are imposed on the nuisance functions occurring in the functional's first-order influence function (IF1). Then, for the integrated squared density and the expected conditional variance functionals, they showed that first-order estimators based on IF1, a class that includes DML estimators, are rate minimax under squared error loss.

However, earlier Liu, Mukherjee, and Robins (2020) had shown that, for these functionals, second-order estimators (i.e., estimators that add a debiasing second-order U-statistic IF22 to a first-order estimator) can have smaller risk (mean squared error) than the corresponding first-order estimator. In this talk, I resolve this apparent paradox by showing that, although minimax, DML estimators are (asymptotically) inadmissible under the BWK model: the risk of a first-order estimator is never less than that of the corresponding second-order estimator and, under many laws, may be much greater.
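The generic first-order DML recipe that these estimators build on (residualize outcome and treatment on flexible nuisance fits, with sample splitting and cross-fitting) can be sketched in a few lines. This is a minimal illustration for a partially linear model, not the second-order IF22 correction discussed in the talk; the polynomial learner below is an illustrative stand-in for an arbitrary black-box algorithm.

```python
import numpy as np

def fit_nuisance(x, t, deg=3):
    # Stand-in "ML" learner: a cubic polynomial fit (illustrative choice).
    return np.poly1d(np.polyfit(x, t, deg))

def dml_theta(y, d, x, n_splits=2, seed=0):
    """Cross-fitted DML estimate of theta in Y = theta*D + g(X) + eps."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_splits)
    num = den = 0.0
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        l_hat = fit_nuisance(x[train], y[train])  # approximates E[Y|X]
        m_hat = fit_nuisance(x[train], d[train])  # approximates E[D|X]
        v = d[test] - m_hat(x[test])              # treatment residual
        u = y[test] - l_hat(x[test])              # outcome residual
        num += v @ u                              # cross-fitted moments
        den += v @ v
    return num / den

# Simulated check with true theta = 2.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 5000)
d = np.sin(x) + rng.normal(0, 1, 5000)
y = 2.0 * d + x ** 2 + rng.normal(0, 1, 5000)
print(round(dml_theta(y, d, x), 2))  # should be close to 2.0
```

Cross-fitting matters here: each fold's residuals are computed with nuisance functions estimated on the other fold, which is what restores the tractable asymptotics the abstract refers to.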

9:45 – 10:25 a.m.

Experiment Design in Large Inter-firm Networks via Zeroth-order Optimization
• Panos Toulis, University of Chicago Booth School of Business

Abstract

This talk presents the early phases (Wave I) of an ongoing large field experiment in a South American country. The experiment randomizes tax audit notices (treatment) to firms embedded in multiple large inter-firm networks determined by monthly sales/purchase reports. Of particular interest are spillovers, that is, the responses of firms that are not treated but are connected to firms that are. First, I will discuss why currently popular approaches to experimenting on networks are limited by the realities of inter-firm networks, such as high interconnectivity and heavy-tailed degree distributions. I will then describe an experimental design that leverages subtle substructures in the network and is specifically designed to allow the application of Fisherian-style permutation tests for causal spillover effects. Lastly, I will discuss strategies to optimize this design space through Kiefer-Wolfowitz (KW)-style procedures. These procedures are a form of zeroth-order optimization, which is becoming increasingly popular in machine learning.
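The KW idea mentioned above can be sketched compactly: estimate gradients by finite differences of (possibly noisy) function evaluations and take shrinking steps. Everything in this toy sketch (the step-size constants, the noisy quadratic objective) is an illustrative assumption; the actual design-optimization problem in the talk is far richer.

```python
import numpy as np

def kiefer_wolfowitz(f, theta0, n_iter=2000, a=0.5, c=0.5):
    """Minimize f using only (possibly noisy) function evaluations:
    a Kiefer-Wolfowitz finite-difference stochastic approximation."""
    theta = np.asarray(theta0, dtype=float)
    for n in range(1, n_iter + 1):
        a_n = a / n          # step sizes: sum diverges, squares are summable
        c_n = c / n ** 0.25  # shrinking finite-difference width
        grad = np.empty_like(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = c_n
            grad[i] = (f(theta + e) - f(theta - e)) / (2 * c_n)
        theta = theta - a_n * grad
    return theta

# Noisy quadratic with minimum at (1, -2); f is observed with noise.
rng = np.random.default_rng(1)
def noisy(theta):
    return (theta[0] - 1) ** 2 + (theta[1] + 2) ** 2 + 0.01 * rng.normal()

theta_hat = kiefer_wolfowitz(noisy, [0.0, 0.0])
print(np.round(theta_hat, 2))  # should be near [1, -2]
```

The appeal for experiment design is exactly what makes this "zeroth-order": the objective need only be evaluated (e.g., by simulating a candidate design), never differentiated.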

10:25 – 10:55 a.m.

Break

10:55 – 11:35 a.m.

Machine Learning Frameworks to Support Abstract Reasoning
• John Lafferty, Yale University

Abstract

Reasoning in terms of relations, analogies, and abstraction is a hallmark of human intelligence. This ability is largely separate from function approximation for sensory tasks such as image and audio processing. How can abstract symbols emerge from distributed, neural representations? One general approach relies on a type of inductive bias for learning called the “relational bottleneck,” which is motivated by principles of cognitive neuroscience. We present a framework that casts this inductive bias in terms of an extension of transformers, in which specific types of attention mechanisms enforce the relational bottleneck and transform distributed symbols to implement a form of relational reasoning and abstraction. Robust estimation theory sheds light on how distributed abstract symbols can tolerate corruption and missing values.
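As a rough illustration of the idea (not the authors' architecture), the bottleneck can be enforced by letting attention weights depend on relations between encoded objects while the values attended to are learned, content-free symbols, so downstream computation sees only relational information. All names and dimensions below are hypothetical.

```python
import numpy as np

def relational_bottleneck(objects, d_key=8, seed=0):
    """Toy attention head whose output depends on *relations*
    (inner products between encoded objects) rather than on raw
    object features: scores come from the objects, but the values
    attended to are learned, content-free symbols."""
    rng = np.random.default_rng(seed)
    n, d = objects.shape
    Wq = rng.normal(size=(d, d_key)) / np.sqrt(d)  # query encoder
    Wk = rng.normal(size=(d, d_key)) / np.sqrt(d)  # key encoder
    scores = (objects @ Wq) @ (objects @ Wk).T / np.sqrt(d_key)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax
    symbols = rng.normal(size=(n, d_key))          # abstract symbols
    return attn @ symbols                          # relational readout

x = np.random.default_rng(1).normal(size=(5, 16))  # five "objects"
print(relational_bottleneck(x).shape)  # (5, 8)
```

Because the output mixes only the symbol vectors, any information that survives is about *which objects relate to which*, which is the bottleneck the abstract describes.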

11:40 a.m. – 12:20 p.m.

The Transfer Performance of Economic Models (joint with Isaiah Andrews, Drew Fudenberg, Lihua Lei, and Chaofeng Wu)
• Annie Liang, Northwestern University

Abstract

Economists often estimate models using data from a particular setting, e.g., estimating risk preferences in a specific subject pool. Whether a model's predictions extrapolate well across settings depends on whether the estimated model has captured generalizable structure. We provide a tractable formulation for this “out-of-domain” prediction problem, and define the transfer error of a model to be its performance on data from a new domain. We derive finite-sample forecast intervals that are guaranteed to cover realized transfer errors with a user-selected probability when domains are i.i.d., and use these intervals to compare the transferability of economic models and black-box algorithms for predicting certainty equivalents. We find that in this application, black-box algorithms outperform the economic models when estimated and tested on different data from the same domain, but models motivated by economic theory generalize across domains better than the black-box algorithms do.
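The notion of transfer error can be made concrete with a small sketch: fit a model on data from one domain and score it on data from another. Everything here (the linear learner, the simulated domains) is an illustrative assumption; the paper's forecast intervals for transfer error are not implemented.

```python
import numpy as np

def transfer_errors(domains, fit, loss):
    """Fit on each domain, then score on every *other* domain."""
    errs = {}
    for i, (Xi, yi) in enumerate(domains):
        model = fit(Xi, yi)
        for j, (Xj, yj) in enumerate(domains):
            if j != i:
                errs[(i, j)] = loss(yj, model(Xj))
    return errs

# Toy setup: domains 0 and 1 share a data-generating slope; domain 2 differs.
rng = np.random.default_rng(0)
def make_domain(slope):
    x = rng.uniform(0, 1, 200)
    return x, slope * x + rng.normal(0, 0.1, 200)

domains = [make_domain(1.0), make_domain(1.0), make_domain(2.0)]
fit = lambda x, y: np.poly1d(np.polyfit(x, y, 1))
mse = lambda y, pred: float(np.mean((y - pred) ** 2))
errs = transfer_errors(domains, fit, mse)
print(errs[(0, 1)] < errs[(0, 2)])  # True: transfer degrades across dissimilar domains
```

A model that has captured generalizable structure is one whose transfer error to genuinely new domains stays close to its within-domain error.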

12:20 – 1:30 p.m.

Break/Lunch

1:30 – 2:10 p.m.

Remote Work across Jobs, Companies, and Space
• Stephen Hansen, University College London

Abstract

The pandemic catalyzed an enduring shift to remote work. To measure and characterize this shift, we examine more than 250 million job vacancy postings across five English-speaking countries. Our measurements rely on a state-of-the-art language-processing framework that we fit, test, and refine using 30,000 human classifications. We achieve 99% accuracy in flagging job postings that advertise hybrid or fully remote work, greatly outperforming dictionary methods and also outperforming other machine-learning methods. From 2019 to early 2023, the share of postings that say new employees can work remotely one or more days per week rose more than three-fold in the U.S. and by a factor of five or more in Australia, Canada, New Zealand, and the U.K. These developments are highly non-uniform across and within cities, industries, occupations, and companies. Even when zooming in on employers in the same industry competing for talent in the same occupations, we find large differences in the share of job postings that explicitly offer remote work.

2:15 – 2:55 p.m.

On the Testability of the Anchor Words Assumption in Topic Models (joint with Simon Freyaldenhoven, Jesse Goodman, Dingyi Li, and Shikun Ke)
• José L. Montiel Olea, Cornell University

Abstract

Topic models are a simple and popular tool for the statistical analysis of textual data. Their identification and estimation are typically enabled by assuming the existence of “anchor words”; that is, words that are exclusive to specific topics. In this paper we show that the existence of anchor words is statistically testable: there exists a test with correct size that has nontrivial power. This means that, in general, the anchor words assumption cannot be viewed simply as a convenient normalization. At the core of our result lies a simple characterization of when a column-stochastic matrix with known nonnegative rank admits a “separable” factorization. We use a simulation study to analyze the power of a bootstrapped version of our suggested procedure and to discuss its computational limitations.
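For intuition, the anchor-word (separability) property itself is easy to state in code: every topic must own at least one word used exclusively by that topic. The sketch below checks the property for a known word-by-topic matrix; the paper's contribution, a statistical test of this property from data with correct size, is of course not reproduced here.

```python
import numpy as np

def has_anchor_words(A, tol=1e-10):
    """Check the anchor-word property of a word-by-topic matrix A:
    each topic must have a word whose mass falls (almost) entirely
    on that topic."""
    n_words, n_topics = A.shape
    anchored_topics = set()
    for w in range(n_words):
        support = np.flatnonzero(A[w] > tol)  # topics using word w
        if len(support) == 1:                 # word exclusive to one topic
            anchored_topics.add(int(support[0]))
    return len(anchored_topics) == n_topics

# Separable example: words 0 and 1 anchor topics 0 and 1.
A_sep = np.array([[0.4, 0.0],
                  [0.0, 0.5],
                  [0.6, 0.5]])
# Non-separable: every word loads on both topics.
A_mix = np.array([[0.4, 0.1],
                  [0.1, 0.5],
                  [0.5, 0.4]])
print(has_anchor_words(A_sep), has_anchor_words(A_mix))  # True False
```

The statistical difficulty the paper addresses is that the matrix is estimated with error, so exact zeros are never observed and a naive check like this one would be useless on data.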

 

2:55 – 3:25 p.m.

Break

3:25 – 4:05 p.m.

Machine Learning as a Tool for Hypothesis Generation (joint w/ Sendhil Mullainathan)
• Jens Ludwig, University of Chicago

Abstract

While hypothesis testing is a highly formalized activity, hypothesis generation remains largely informal. We propose a systematic procedure to generate novel hypotheses about human behavior, which uses the capacity of machine learning algorithms to notice patterns people might not. We illustrate the procedure with a concrete application: judge decisions about whom to jail. We begin with a striking fact: the defendant's face alone matters greatly for the judge's jailing decision. In fact, an algorithm given only the pixels in the defendant's mugshot accounts for up to half of the predictable variation. We develop a procedure that allows human subjects to interact with this black-box algorithm to produce hypotheses about what in the face influences judge decisions. The procedure generates hypotheses that are both interpretable and novel: they are not explained by demographics (e.g., race) or existing psychology research, nor are they already known (even if tacitly) to people or even experts. Though these results are specific, our procedure is general. It provides a way to produce novel, interpretable hypotheses from any high-dimensional dataset (e.g., cell phones, satellites, online behavior, news headlines, corporate filings, and high-frequency time series). A central tenet of our paper is that hypothesis generation is in and of itself a valuable activity, and we hope this encourages future work in this largely “pre-scientific” stage of science.

 

4:10 – 4:50 p.m.

Accelerating Science with Human-Aware Artificial Intelligence
• James Evans, University of Chicago

Abstract

Artificial intelligence (AI) models trained on published scientific findings have been used to invent valuable materials and targeted therapies, but they typically ignore the human scientists who continually alter the landscape of discovery. Here we show that incorporating the distribution of human expertise by training unsupervised models on simulated inferences cognitively available to experts dramatically improves (up to 400%) AI prediction of future discoveries beyond those focused on research content alone, especially when relevant literature is sparse. These models succeed by predicting human predictions and the scientists who will make them. By tuning human-aware AI to avoid the crowd, we can generate scientifically promising “alien” hypotheses unlikely to be imagined or pursued without intervention until the distant future, which hold promise to punctuate scientific advance beyond questions currently pursued. I also explore the creation of other kinds of data-driven, machine-learned “digital doubles” that facilitate cycles of semi-automated virtual and staged experiments tuned to reveal social and scientific insights and generate social and material technologies. Accelerating human discovery or probing its blind spots, I show how human-aware and complementary AI enables us to move toward and beyond the contemporary scientific frontier.

 

5:00 – 6:00 p.m.

Cocktail Reception (David Rubenstein Forum)

Details

Register

The deadline to register has passed.

Date

Date: June 12, 2023

Time: 8:00 a.m. – 5:30 p.m. CDT

Location

The David Rubenstein Forum, 1201 E. 60th Street, Chicago, IL 60637