Seminars for 2018-2019

Unless otherwise specified, seminars are held on Thursdays from 11am to noon in the Stevanovich Center Library, 5727 S. University Ave.

We won’t hold seminars between May and October 2019. Please consider attending one of our conferences!

Monday, October 8, 2018 – Eckhart 133, 4:30-5:30pm. This seminar is organized jointly by the Stevanovich Center and the UC Department of Statistics.

Arnoldo Frigessi

Professor, Department of Biostatistics, Institute of Basic Medical Sciences, University of Oslo, Norway
Towards personalized computer simulation of breast cancer treatment: a multi-scale pharmacokinetic and pharmacodynamic model informed by multi-type patient data
Mathematical modeling and simulation have emerged as a potentially powerful, time- and cost-effective approach to personalized cancer treatment. In order to predict the effect of a therapeutic regimen for an individual patient, it is necessary to initialize and parametrize the model so as to mirror exactly this patient’s tumor. I will present a comprehensive approach to model and simulate a breast tumor treated by two different chemotherapies, both in combination and separately. In the multi-scale model we represent individual tumor and normal cells, with their cell cycle and other intracellular processes (depending on key molecular characteristics); the formation of blood vessels and their disruption; and extracellular processes, such as the diffusion of oxygen, drugs, and important molecules (including VEGF, which modulates vascular dynamics). The model is informed by data estimated from routinely acquired measurements of the patient’s tumor, including histopathology, imaging, and molecular profiling. We implemented a computer system which simulates a cross-section of the tumor under a 12-week therapy regimen. We show how the model is able to reproduce patients from a clinical trial, both responders and non-responders. We show by scenario simulation that other drug regimens might have led to a different outcome. Approximate Bayesian Computation (ABC) is used to estimate some of the parameters. Preprint: https://www.biorxiv.org/content/early/2018/07/19/371369
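Rejection ABC, the estimation method named at the end of the abstract, can be sketched in a few lines. The toy Gaussian simulator below merely stands in for the tumor model; all names, priors, and numbers are hypothetical illustrations, not choices from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy simulator standing in for the forward model: data depend on theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(x):
    """Low-dimensional summary statistics of a data set."""
    return np.array([x.mean(), x.std()])

# "Observed" data generated with a known theta, so recovery can be checked.
observed = simulate(2.0)
s_obs = summary(observed)

def abc_rejection(n_draws=20000, eps=0.1):
    """Keep prior draws whose simulated summaries land within eps of the data."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)            # draw from the prior
        s_sim = summary(simulate(theta))          # simulate and summarize
        if np.linalg.norm(s_sim - s_obs) < eps:   # accept if close to observed
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection()
print(len(post), post.mean())  # accepted draws approximate the posterior; mean near 2
```

The accepted draws form an approximate posterior sample; shrinking eps trades acceptance rate for accuracy.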

Thursday, October 18, 2018 – 11am-noon – Stevanovich Center Library

Eric Renault

Professor of Commerce, Organizations and Entrepreneurship, Professor of Economics, Brown University

Wald Tests When Restrictions Are Locally Singular
This paper provides an exhaustive characterization of the asymptotic null distribution of Wald-type statistics for testing restrictions given by polynomial functions – which may involve asymptotic singularities – when the limiting distribution of the parameter estimators is absolutely continuous (e.g., Gaussian). In addition to the well-known finite-sample non-invariance, there is also an asymptotic non-invariance (non-pivotality): with standard critical values, the test may either under-reject or over-reject, and may even diverge under the null hypothesis. The asymptotic distributions of the test statistic can vary under the null hypothesis and depend on the true unknown parameter value. All these situations are possible in testing restrictions which arise in the statistical and econometric literature, e.g., for examining the specification of ARMA models, causality at different horizons, indirect effects, zero determinant hypotheses on matrices of coefficients, and many other situations in which singularity in the restrictions cannot be excluded. We provide the limit distribution and general bounds for the single-restriction case. For multiple restrictions, we give a necessary and sufficient condition for the existence of a limiting distribution and the form of the limit distribution whenever it exists.
Authors: Jean-Marie Dufour, McGill University; Eric Renault, Brown University; Victoria Zinde-Walsh, McGill University
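For reference, the object under study is the standard Wald statistic for a restriction $g(\theta) = 0$ (textbook form, not notation from the paper):

```latex
W_n \;=\; n\, g(\hat\theta_n)'\,
\Big[\, G(\hat\theta_n)\, \hat\Sigma_n\, G(\hat\theta_n)' \,\Big]^{-1}
g(\hat\theta_n),
\qquad
G(\theta) \;=\; \frac{\partial g(\theta)}{\partial \theta'} .
```

When $G(\theta_0)$ has full row rank, $W_n$ converges in distribution to $\chi^2_q$, with $q$ the number of restrictions; the paper characterizes what happens when $G(\theta_0)$ may be singular, in which case this $\chi^2_q$ approximation can fail.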

Wednesday, October 24, 2018 – 2:30pm-3:30pm – Stevanovich Center Library

Eric Ghysels

Edward Bernstein Distinguished Professor of Economics and Adjunct Professor of Finance, Kenan-Flagler Business School, University of North Carolina at Chapel Hill

Artificial Intelligence Alter Egos
Robo-advising is a fast-growing application of financial technology (FinTech) solutions to asset and wealth management. We assess the benefits of robo-advising using a unique data set covering brokerage accounts for a large cross-section of investors over a long time span: more than 10 years and more than 20,000 individuals, together with their investor-specific characteristics. We study robo-investors that shadow the individuals in our data set. They track each individual’s portfolio over the past two years and make allocation decisions based either on a Markowitz mean-variance scheme or on a 1/N rule. We call these robo-investors Artificial Intelligence Alter Egos, and study their portfolio return behavior vis-à-vis each of the individuals they shadow. We study the investor characteristics that determine who stands to gain from robo-advising. In addition, we introduce the notion of robust robots, which take into account the potential for fragile beliefs.
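The two allocation rules the robo-investors choose between can be sketched directly. The code below is a generic textbook version; the toy return history, the ridge term, and the normalization are illustrative additions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy return history for 4 assets (stand-in for an individual's trailing window).
returns = rng.normal(0.01, 0.05, size=(104, 4))  # ~2 years of weekly returns

def one_over_n(returns):
    """1/N rule: equal weight on every asset held."""
    n = returns.shape[1]
    return np.full(n, 1.0 / n)

def markowitz(returns, ridge=1e-6):
    """Unconstrained mean-variance weights w ∝ Σ⁻¹μ, normalized to sum to 1.
    The small ridge keeps the covariance matrix invertible."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False) + ridge * np.eye(returns.shape[1])
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

w_eq = one_over_n(returns)
w_mv = markowitz(returns)
print(w_eq, w_mv)
```

The 1/N rule ignores the data entirely, while the mean-variance rule leans on estimated means and covariances; comparing the two on each investor's own history is the shadowing exercise the abstract describes.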

Thursday, November 8, 2018 – 11am to noon – Stevanovich Center Library

Paolo Zaffaroni

Professor in Financial Econometrics, Business School, Imperial College London, UK

Beyond the Bound: Pricing Assets with Misspecified Stochastic Discount Factors
We show how, given a misspecified stochastic discount factor (SDF), one can construct an admissible SDF, namely an SDF that prices assets correctly. We characterize misspecification using the extended Arbitrage Pricing Theory (APT) developed in Raponi, Uppal and Zaffaroni (2017), which allows not just for small but also large pricing errors that are pervasive (related to factors). We show how the pricing errors implied by the extended APT can be exploited to develop a theory that provides the correction required to a given SDF in order to obtain an admissible SDF that is robust to model misspecification. We show that the corrected SDF is on the mean-variance efficient frontier, and thus satisfies the Hansen and Jagannathan (1991) bound exactly. For the case where the number of assets, N, is asymptotically large, we obtain results that are even stronger, in contrast to the existing literature that requires N to be small. For large N, we show that the component of the SDF corresponding to the missing pervasive factors recovers exactly the contribution of such missing factors to the admissible SDF, without requiring one to identify which or how many factors are missing. Estimation of our admissible SDF does not suffer from the curse of dimensionality that typically arises when N is large, because of the structure imposed by the extended APT. Joint paper with Raman Uppal and Irina Zviadadze.
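For reference, the Hansen and Jagannathan (1991) bound the abstract invokes states that, for any admissible SDF $m$ and any excess return $R^{e}$,

```latex
\frac{\sigma(m)}{\mathrm{E}[m]} \;\ge\; \frac{\bigl|\mathrm{E}[R^{e}]\bigr|}{\sigma(R^{e})},
```

i.e., the volatility of the SDF (scaled by its mean) must weakly exceed the maximal Sharpe ratio attainable from the assets; an SDF on the mean-variance efficient frontier attains the bound with equality, which is the sense in which the corrected SDF satisfies it exactly.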

Thursday, November 15, 2018 – 1:20pm to 2:20pm – Booth School of Business (Harper Center, room HC3B) – This seminar is jointly organized by the Booth School of Business and the Stevanovich Center.

Yoosoon Chang

Professor of Economics, Indiana University, Bloomington

Origins of Monetary Policy Shifts: A New Approach to Regime Switching in DSGE Models

We examine monetary policy shifts by taking a new approach to regime switching in a small-scale monetary DSGE model with threshold-type switching in the monetary policy rule. The policy response to inflation is allowed to switch endogenously between two regimes, hawkish and dovish, depending on whether a latent regime factor crosses a threshold level. Endogeneity stems from the historical impacts of the structural shocks driving the economy on the regime factor. We quantify the endogenous feedback from each structural shock to the regime factor to understand the sources of the observed policy shifts. This channel sheds new light on the interaction between policy changes and measured economic behavior. We develop a computationally efficient filtering algorithm for state-space models with time-varying transition probabilities that handles classical regression models as a special case. We apply this filter to estimate our DSGE model using U.S. data and find strong evidence of endogeneity in the monetary policy shifts.
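The regime-filtering problem can be illustrated with a textbook two-regime Hamilton-style filter. The sketch below uses a fixed transition matrix and Gaussian observations, whereas the paper's filter allows time-varying, endogenous transition probabilities; all parameter values are illustrative:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def hamilton_filter(y, mu, sigma, P, p0):
    """
    Two-regime filter: y_t ~ N(mu[s_t], sigma[s_t]), with s_t Markov and
    P[i, j] = Pr(s_t = j | s_{t-1} = i).
    Returns filtered regime probabilities Pr(s_t | y_1, ..., y_t).
    """
    T = len(y)
    probs = np.zeros((T, 2))
    pred = p0                                 # Pr(s_1) before seeing y_1
    for t in range(T):
        lik = np.array([normal_pdf(y[t], mu[j], sigma[j]) for j in range(2)])
        post = pred * lik
        probs[t] = post / post.sum()          # update with today's observation
        pred = probs[t] @ P                   # predict tomorrow's regime
    return probs

# Simulated data: first half "dovish" (low mean), second half "hawkish" (high mean).
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
probs = hamilton_filter(y, mu=[0.0, 2.0], sigma=[0.5, 0.5], P=P,
                        p0=np.array([0.5, 0.5]))
print(probs[-1])  # filtered probability of the second regime is high at the end
```

Each step multiplies the predicted regime probabilities by the observation likelihoods, renormalizes, and propagates through the transition matrix; allowing P to depend on past structural shocks is what makes the switching endogenous in the paper.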

Paper available here (updated Feb 17, 2021)


Thursday, November 29, 2018 – 10am to 11am – Stevanovich Center Library

Michael Barnett, 2018 Stevanovich Fellow

PhD student in the joint program in Financial Economics, University of Chicago

A Run on Oil: Climate Policy, Stranded Assets, and Asset Prices
I study the dynamic implications of uncertain climate policy on macroeconomic outcomes and asset prices. Focusing particularly on the oil sector, I find that accounting for uncertain climate policy in an otherwise standard climate-economic model with oil extraction generates a run on oil, meaning oil firms accelerate extraction as climate change increases and oil reserves decrease due to the risk of future climate policy actions stranding oil reserves. Furthermore, the risk of uncertain climate policy and the run on oil it causes leads to a downward shift and dynamic decrease in the oil spot price and value of oil firms compared to the setting without uncertain policy. Ignoring the impact of uncertain climate policy would therefore lead to an overvaluation of the oil sector or “carbon bubble.” Empirical evidence suggests observable market outcomes are consistent with the model predictions about the effects of uncertain climate policy.

Thursday, November 29, 2018 – 11 am to noon – Stevanovich Center Library

Dachuan Chen, 2018 Stevanovich Fellow

PhD student in Business Administration, University of Illinois at Chicago

The Five Trolls under the Bridge: Principal Component Analysis with Asynchronous and Noisy High Frequency Data
We develop a principal component analysis (PCA) for high frequency data. As in Northern fairy tales, there are trolls waiting for the explorer. The first three trolls are market microstructure noise, asynchronous sampling times, and edge effects in estimators. To get around these, a robust estimator of the spot covariance matrix is developed based on the Smoothed TSRV (Mykland et al. (2017)). The fourth troll is how to pass from the estimated time-varying covariance matrix to PCA. Under finite dimensionality, we develop this methodology through the estimation of realized spectral functions. Rates of convergence and central limit theory, as well as an estimator of standard error, are established. The fifth troll is high dimension on top of high frequency, where we also develop PCA. With the help of a new identity concerning the spot principal orthogonal complement, we establish high-dimensional rates of convergence while relaxing several strong assumptions of classical PCA. As an application, we show that our first principal component (PC) potentially outperforms the S&P 100 market index, while three of the next four PCs are cointegrated with two of the Fama-French non-market factors.
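In the classical low-frequency setting, the PCA step the abstract builds on reduces to an eigendecomposition of an estimated covariance matrix. A minimal sketch on simulated one-factor data (illustrative only, with none of the noise, asynchronicity, or edge-effect corrections the paper develops):

```python
import numpy as np

rng = np.random.default_rng(2)

# One common factor driving 5 "assets", plus idiosyncratic noise.
factor = rng.normal(0, 1, size=1000)
loadings = np.array([1.0, 0.9, 0.8, 0.7, 0.6])
X = np.outer(factor, loadings) + rng.normal(0, 0.2, size=(1000, 5))

cov = np.cov(X, rowvar=False)                 # estimated covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # re-sort to descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()           # variance share per component
pc1 = X @ eigvecs[:, 0]                       # first principal component
print(explained[0])  # the first PC captures most of the variance here
```

With high-frequency data, the covariance input itself must first be estimated robustly (the first three trolls), which is where the Smoothed TSRV enters.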

Thursday, January 17, 2019 – 11am-noon – Stevanovich Center Library

Luca Benzoni

Senior Financial Economist and Research Advisor, Federal Reserve Bank of Chicago

Core and Crust: Consumer Prices and the Term Structure of Interest Rates
We propose a no-arbitrage model of the nominal and real term structures that accommodates the different persistence and volatility of distinct inflation components. Core, food, and energy inflation combine into a single total inflation measure that ties nominal and real risk-free bond prices together. The model is successful at extracting market participants’ expectations of future inflation from nominal yields and inflation data. Estimation uncovers a factor structure that is common to core inflation and interest rates and downplays the pass-through effect of short-lived food and energy shocks on inflation and interest rates. Model forecasts systematically outperform survey forecasts and other benchmarks.
Joint work with Andrea Ajello and Olena Chyruk
Download the Research Paper

Thursday, February 7, 2019 – 11am-noon – Stevanovich Center Library

Giovanni Motta

Assistant Professor, Financial and Business Analytics, Columbia University Data Science Institute

Semi-parametric factor models for non-stationary time series

Our previous approach to fitting dynamic non-stationary factor models to multivariate time series is based on the principal components of the time-varying spectral-density matrix. This approach allows the spectral matrix to be smoothly time-varying, which imposes very little structure on the moments of the underlying process. However, the estimation delivers time-varying filters that are high-dimensional and two-sided. Moreover, the estimation of the spectral matrix strongly depends on the chosen bandwidths for smoothing over frequency and time. As an alternative, we introduce a novel semi-parametric approach in which only part of the model is allowed to be time-varying. More precisely, the small-dimensional latent factors admit a dynamic representation with time-varying parameters while the high-dimensional loadings are time-invariant.

In particular, we consider two specifications for the latent factors. In the first model, the latent factors are locally stationary AR processes. The time-varying parameters are approximated by local polynomials and estimated by maximizing the likelihood locally. In the second model, the volatility of the common latent factors is decomposed into the product of two distinct components. The first component reflects short-run volatility dynamics that we model as factor GARCH processes. The second component captures long-run risks, modeled as an ‘evolutionary’ (or slowly evolving) function of time.

We provide asymptotic theory, simulation results and applications to real data.


Thursday, February 21, 2019 – 11am-noon – Stevanovich Center Library

Andrew Patton

Zelter Family Professor of Economics and Professor of Finance, Duke University

Risk Price Variation: The Missing Half of the Cross-Section of Expected Returns

The Law of One Price is a bedrock of asset pricing theory and empirics. Yet real-world frictions can violate the Law by generating unequal compensation across assets for the same risk exposures. We develop new methods for cross-sectional asset pricing with unobserved heterogeneity in compensation for risk. We extend k-means clustering to group assets by risk prices and introduce a formal test for whether differences in risk premia across market segments are too large to occur by chance. Using portfolios of US stocks, international stocks, and assets from multiple classes, we find significant evidence of cross-sectional variation in risk prices for all 135 combinations of test assets, factor models, and time periods. Variation in risk prices is as important as variation in risk exposures for explaining the cross-section of expected returns.
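A plain k-means step on hypothetical estimated risk prices illustrates the grouping idea; this is textbook k-means in numpy, not the authors' extended estimator or their formal test, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical estimated risk prices for 60 assets in 2 factor dimensions:
# two "market segments" paying different compensation for the same exposures.
prices = np.vstack([
    rng.normal([1.0, 0.5], 0.1, size=(30, 2)),
    rng.normal([0.2, 1.5], 0.1, size=(30, 2)),
])

def kmeans(X, k, iters=50):
    """Lloyd's algorithm: alternate nearest-center assignment and center update."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(prices, k=2)
print(centers)  # recovered segment-level risk prices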

Link to the paper


Thursday, March 7, 2019 – 11am-noon – Stevanovich Center Library

Robert Korajczyk

Harry G. Guthmann Professor of Finance and Co-Director of the Financial Institutions and Markets Research Center, Kellogg School of Management at Northwestern University

Arbitrage Portfolios

We propose a new methodology to estimate arbitrage portfolios by utilizing information contained in firm characteristics for both abnormal returns and factor loadings. The methodology gives maximal weight to risk-based interpretations of characteristic predictive power before any attribution to abnormal returns. We apply the methodology in simulated factor economies and on a large panel of U.S. stock returns from 1965 to 2014. The methodology works well in simulation and in out-of-sample portfolios of U.S. stocks. Empirically, we find the arbitrage portfolio has (statistically and economically) significant alphas relative to several popular asset pricing models and annualized Sharpe ratios ranging from 0.67 to 1.12. Data-mining-driven alphas imply that performance of the strategy should decline after the discovery of pricing anomalies. However, we find that the abnormal returns on the arbitrage portfolio do not decrease significantly over time.

Arbitrage Portfolios – Kim, Korajczyk, Neuhierl (2019)


Thursday, April 4, 2019 – 11am-noon – Stevanovich Center Library

Anders Bredahl Kock

Associate Professor in Economics, University of Oxford, Aarhus University, CREATES and St. Hilda’s College

Power in High Dimensional Testing Problems

Fan et al. (2015) recently introduced a remarkable method for increasing the asymptotic power of tests in high-dimensional testing problems. If applicable to a given test, their power enhancement principle leads to an improved test that has the same asymptotic size, uniformly non-inferior asymptotic power, and is consistent against a strictly broader range of alternatives than the initially given test. We study under which conditions this method can be applied and show the following: in asymptotic regimes where the dimensionality of the parameter space is fixed as sample size increases, there often exist tests that cannot be further improved with the power enhancement principle. However, when the dimensionality of the parameter space increases sufficiently slowly with sample size and a marginal local asymptotic normality (LAN) condition is satisfied, every test with asymptotic size smaller than one can be improved with the power enhancement principle. While the marginal LAN condition alone does not allow one to extend the latter statement to all rates at which the dimensionality increases with sample size, we give sufficient conditions under which this is the case.
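The power enhancement construction adds a screening component J0 to a baseline statistic J1, where J0 is zero with high probability under the null but diverges under sparse strong alternatives. A minimal numerical sketch on t-statistics (the threshold is of the kind used in that literature; all numbers are illustrative):

```python
import numpy as np

def power_enhanced_stat(t_stats):
    """
    Sketch of the power enhancement idea: J = J0 + J1, with J1 a standard
    quadratic statistic and J0 a screening term that keeps only t-statistics
    above a high threshold, so J0 is typically 0 under the null.
    """
    p = len(t_stats)
    delta = np.sqrt(2 * np.log(p) * np.log(np.log(p)))  # high threshold
    J1 = (t_stats ** 2).sum()                 # baseline quadratic statistic
    screened = t_stats[np.abs(t_stats) > delta]
    J0 = np.sqrt(p) * (screened ** 2).sum()   # enhancement component
    return J0 + J1, J0

rng = np.random.default_rng(6)
t_null = rng.normal(0, 1, 500)    # all nulls true: screening keeps ~nothing
t_alt = t_null.copy()
t_alt[:3] += 8.0                  # three strong, sparse violations
print(power_enhanced_stat(t_null)[1], power_enhanced_stat(t_alt)[1])
```

Because J0 vanishes with high probability under the null, adding it leaves asymptotic size unchanged while making the test consistent against sparse alternatives that the quadratic statistic J1 alone would miss.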


Thursday, April 18, 2019 – 11am-noon – Stevanovich Center Library

Enrique Sentana

Professor of Economics, Center for Monetary and Financial Studies – Research Fellow, Center for Economic Policy Research – Senior Research Associate, London School of Economics, Financial Markets Group

Empirical Evaluation of Over-specified Asset Pricing Models

Asset pricing models with potentially too many risk factors are increasingly common in empirical work. Unfortunately, they can yield misleading statistical inferences. Unlike other studies focusing on the properties of standard estimators and tests, we estimate the sets of SDFs and risk prices compatible with the asset pricing restrictions of a given model. We also propose tests to detect problematic situations with economically meaningless SDFs uncorrelated with the test assets. We confirm the empirical relevance of our proposed estimators and tests with Yogo’s (2006) linearized version of the consumption CAPM, and provide Monte Carlo evidence on their reliability in finite samples.

Jointly with Elena Manresa and Francisco Peñaranda