For many functionals, DML (double/debiased machine learning) estimators are the state of the art, combining the strong predictive performance of black-box machine learning algorithms; the decreased bias of doubly robust estimators; and the analytic tractability and bias reduction of sample splitting with cross-fitting. Recently, Balakrishnan, Wasserman, and Kennedy (BWK) introduced a novel assumption-lean model that formalizes the problem of functional estimation when no complexity-reducing assumptions (such as smoothness or sparsity) are imposed on the nuisance functions occurring in the functional's first-order influence function (IF1). Then, for the integrated squared density and the expected conditional variance functionals, they showed that first-order estimators based on IF1, which include DML estimators, are rate-minimax under squared error loss.
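As a concrete illustration (my own sketch, not a formula stated in the abstract), consider the expected conditional variance functional, $\psi(P) = E[\mathrm{Var}(Y \mid X)]$, whose nuisance function is the outcome regression $b(x) = E[Y \mid X = x]$. Its first-order influence function and the resulting cross-fitted one-step (DML) estimator take the familiar form:

```latex
\mathrm{IF}_1(O;\, b, \psi) \;=\; \{Y - b(X)\}^2 \;-\; \psi,
\qquad
\hat\psi_1 \;=\; \frac{1}{n}\sum_{i=1}^{n} \bigl\{Y_i - \hat b_{-k(i)}(X_i)\bigr\}^2,
```

where $\hat b_{-k(i)}$ denotes the regression estimate fit on the sample folds not containing observation $i$, so that cross-fitting decouples the nuisance estimate from the data it is evaluated on.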
However, Liu, Mukherjee, and Robins (2020) had earlier shown that, for these same functionals, second-order estimators (i.e., estimators that add a debiasing second-order U-statistic, IF22, to a first-order estimator) can have smaller risk (mean squared error) than the corresponding first-order estimator. In this talk, I resolve this apparent paradox by showing that, although minimax, DML estimators are (asymptotically) inadmissible under the BWK model: the risk of any first-order estimator is never less than that of the corresponding second-order estimator and, under many laws, may be much greater.
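For orientation, a standard form of the second-order correction for the expected conditional variance (a sketch in the spirit of the higher-order influence function literature, not a formula quoted from this abstract) is a quadratic U-statistic in the regression residuals:

```latex
\hat\psi_2 \;=\; \hat\psi_1 \;+\; \widehat{\mathrm{IF}}_{22},
\qquad
\widehat{\mathrm{IF}}_{22} \;=\;
-\frac{1}{n(n-1)} \sum_{i \neq j}
\{Y_i - \hat b(X_i)\}\, K(X_i, X_j)\, \{Y_j - \hat b(X_j)\},
```

where $K$ is the kernel of an orthogonal projection onto a finite-dimensional approximating subspace. Since the plug-in bias of $\hat\psi_1$ is $E[\{b(X) - \hat b(X)\}^2] \ge 0$, the U-statistic estimates the projected part of this bias and subtracts it, which is why the second-order estimator can strictly dominate the first-order one under laws where the nuisance error is large.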