Last edited 20 April 2004. Host home page: StatLab Heidelberg
Announcement of a

Weekend Seminar

on

Fitting Linear Models with Many Regressors

Speaker: Prof. Rudy Beran,
Department of Statistics, University of California, Berkeley

23/24 May 1998 in Oberflockenbach/Odenwald near Weinheim, at the Seminarhaus of Universität Heidelberg.

Prof. R. Beran, currently a Humboldt Prize laureate at Universität Heidelberg and known for numerous papers on robustness and resampling, among other topics, will lecture on a new approach to regression with high parameter dimension. In addition to a lecture series, there will be computer demonstrations, and a discussion round is planned. The seminar is aimed primarily at younger graduate statisticians (doctoral students, postdocs) with mathematical training.

Organizer: Prof. D. W. Müller, Chair of Mathematical Statistics, Faculty of Mathematics, Universität Heidelberg.

Venue and costs: The Seminarhaus of Universität Heidelberg has double and single rooms. The number of participants is limited to about 20. In case of overbooking, participants will be selected with preference given to the group described above. The Seminarhaus charges DM 90 for participation, accommodation, and meals. Participants must cover these costs and their own travel expenses. The seminar begins at midday on Saturday, 23 May, and ends on the evening of Sunday, 24 May.

Those interested are asked to register by 11 March 1998 using the enclosed form or by e-mail. Participants will be informed by April 1998 about the details of travel (by car or by public transport). Up-to-date information can be found at <../termine/ManyRegressors.html>. The seminar language is English.

Please direct inquiries to

Dr. W. Polonik
Institut für Angewandte Mathematik
Im Neuenheimer Feld 294
69120 Heidelberg
GERMANY
Tel. 06221-54 5716
e-mail: polonik@statlab.uni-heidelberg.de


Outline

Interest in signal processing with wavelets has prompted statisticians to advance the ideas of C. Stein, C. Mallows, and M. Pinsker on estimation when the parameter space is of high or infinite dimension. This seminar treats consequences of this recent work for fitting linear models with many regressors.

It has been known since the 1950s that maximum likelihood or related estimators may "overfit" a high-dimensional parametric model, paying too much attention to bias and not enough to variance. Stein (1956) proved that the best unbiased estimator of the mean is inadmissible when the data consists of a discrete-time signal plus Gaussian white noise. In his rarely read 1966 paper, Stein demonstrated the merits of the following idea: express the data in terms of a suitable orthonormal basis; shrink the coefficients of the data towards zero so as to reduce estimated risk within a certain class of procedures; then use the reduced coefficients to construct an estimator of the mean. This approach might justly be called a signal processing method.
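The effect of risk-reducing shrinkage can be seen in a few lines. The sketch below (in Python, with an invented mean vector and seed; for brevity it uses the familiar James-Stein rule in the identity basis rather than Stein's 1966 proposal) shrinks the observed coefficients toward zero by a data-driven factor and compares the loss with that of the unbiased estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 50                                     # a mean vector of high dimension
theta = np.linspace(-1.0, 1.0, p)          # invented true mean
x = theta + rng.normal(size=p)             # discrete-time signal plus Gaussian white noise

# Shrink the coefficients toward zero by a data-driven factor
# chosen to reduce estimated risk: the James-Stein rule.
factor = max(0.0, 1.0 - (p - 2) / float(x @ x))
theta_js = factor * x

mse_mle = np.mean((x - theta) ** 2)        # loss of the best unbiased estimator
mse_js = np.mean((theta_js - theta) ** 2)  # loss after shrinkage
```

On this draw the shrinkage estimator has markedly smaller loss; Stein's inadmissibility result says the improvement holds in expectation whenever p exceeds 2.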

For fitting a linear model, Mallows (1973) discussed using the submodel fit that minimizes estimated risk (the CL criterion). Estimated risks may similarly be used to select the order of nested principal component regression or the tuning parameter in ridge regression. Each of these fitting techniques is equivalent to shrinking the least squares regression coefficients in a suitable canonical form of the linear model.
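A minimal sketch of submodel selection by estimated risk, assuming a known noise level and nested submodels formed from the leading regressors (the design, coefficients, and seed are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.normal(size=(n, p))                # invented design with many regressors
beta = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
sigma = 1.0                                # noise level assumed known here
y = X @ beta + rng.normal(scale=sigma, size=n)

def cl_risk(k):
    """Mallows' estimated risk C_L(k) = RSS(k)/sigma^2 - n + 2k
    for the nested submodel using the first k regressors."""
    Xk = X[:, :k]
    coef, *_ = np.linalg.lstsq(Xk, y, rcond=None)
    resid = y - Xk @ coef
    return resid @ resid / sigma**2 - n + 2 * k

best_k = min(range(1, p + 1), key=cl_risk)
```

Minimizing the estimated risk over k trades off residual fit against the 2k penalty for model size; on this draw the selected submodel retains at least the three regressors with nonzero coefficients.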

Pinsker (1980) showed that shrinkage estimators for the mean of a Gaussian process can achieve asymptotic minimaxity over ellipsoids in the parameter space among all estimators. His result distinguishes usefully among competing model-selection or shrinkage procedures.
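In the notation of later textbook treatments (which is illustrative here, not taken from the seminar materials), Pinsker's result concerns the Gaussian sequence model and an ellipsoid in the parameter space:

```latex
\[
  y_j = \theta_j + \varepsilon\,\xi_j, \qquad \xi_j \sim N(0,1) \ \text{i.i.d.}, \qquad
  \Theta(a,Q) = \Bigl\{\theta : \sum_j a_j^2 \theta_j^2 \le Q \Bigr\},
\]
\[
  \hat\theta_j = (1 - \lambda a_j)_+ \, y_j ,
\]
```

where $\lambda > 0$ is chosen so that the least favorable prior saturates the ellipsoid constraint. As $\varepsilon \to 0$, this linear shrinkage rule attains the minimax risk over $\Theta(a,Q)$ among all estimators, not merely among linear ones.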

Beran and Dümbgen (1997) studied shrinkage estimators that monotonically taper the coefficients of the data toward zero. Subject to the monotonicity constraint, the shrinkage factors are chosen to minimize estimated risk. When applied to a suitable canonical form of the linear model with many regressors, the monotone shrinkage procedure yields a better fit (smaller quadratic risk) than either principal component regression or ridge regression. Numerical implementation of monotone shrinkage relies on the pool-adjacent-violators (PAV) algorithm for isotonic regression. Fitted linear models obtained by monotone principal component shrinkage have an attractive asymptotic minimax property of Pinsker type and yield asymptotic confidence sets for the true mean vector.
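The monotone shrinkage step can be sketched as follows. This Python example assumes (as one reconstruction of the estimated-risk criterion, not the paper's own code) that the monotone factors are the weighted isotonic regression of the coordinatewise factors (z_j^2 - 1)/z_j^2 with weights z_j^2, clipped to [0, 1]; the mean sequence and seed are artificial:

```python
import numpy as np

def pav_nonincreasing(g, w):
    """Weighted isotonic regression of g under a nonincreasing constraint,
    via the pool-adjacent-violators (PAV) algorithm."""
    blocks = []                            # each block: [weighted sum, weight, count]
    for gi, wi in zip(g, w):
        blocks.append([gi * wi, wi, 1])
        # a later block mean above its predecessor's violates monotonicity: pool
        while len(blocks) > 1 and blocks[-1][0] * blocks[-2][1] > blocks[-2][0] * blocks[-1][1]:
            s, ww, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += ww
            blocks[-1][2] += c
    fit = []
    for s, ww, c in blocks:
        fit.extend([s / ww] * c)
    return np.array(fit)

rng = np.random.default_rng(2)
p = 30
xi = 2.0 * np.exp(-np.arange(p) / 5.0)     # invented decaying mean: early coordinates carry signal
z = xi + rng.normal(size=p)                # canonical coefficients with unit noise

# Coordinatewise minimizers of estimated risk, then the monotone projection.
g = (z**2 - 1.0) / z**2
f = np.clip(pav_nonincreasing(g, z**2), 0.0, 1.0)
xi_hat = f * z

mse_raw = np.mean((z - xi) ** 2)           # loss of the raw coefficients
mse_mono = np.mean((xi_hat - xi) ** 2)     # loss after monotone shrinkage
```

The PAV pass runs in linear time and guarantees the fitted factors are nonincreasing; on this draw the tail coordinates, which carry mostly noise, are shrunk nearly to zero and the loss drops well below that of the raw coefficients.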

The new methods for fitting linear models will be illustrated on real and artificial data.


Main Topics


Related Readings

  1. Monographs supplying statistical background

    Rao, C.R. and Toutenberg, H. (1995). Linear Models. Least Squares and Alternatives. Springer, New York.

    Robertson, T., Wright, F.T. and Dykstra, R.L. (1988). Order Restricted Statistical Inference. Wiley, New York.

  2. Classical articles

    Mallows, C. (1973). Some comments on Cp. Technometrics 15, 661-675.

    Pinsker, M.S. (1980). Optimal filtration of square-integrable signals in Gaussian noise. Problems Inform. Transmission 16, 120-133.

    Stein, C. (1966). An approach to the recovery of inter-block information in balanced incomplete block designs. Research Papers in Statistics. Festschrift for Jerzy Neyman (F.N. David, ed.), 351-366. Wiley, London.

    Stein, C. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist. 9, 1135-1151.

  3. Recent research papers (unpublished material is available from the Statlab web site or at the seminar)

    Beran, R. (1994). Stein confidence sets and the bootstrap. Statistica Sinica 5, 109-127.

    Beran, R. (1998). Maximum risk in principal component regression. Unpublished preprint.

    Beran, R. (1998). Superefficient estimation of multivariate trend. Unpublished preprint.

    Beran, R. and Dümbgen, L. (1997). Modulation estimators and confidence sets. Unpublished preprint.

    Buja, A., Hastie, T. and Tibshirani, R. (1989). Linear smoothers and additive models (with discussion). Ann. Statist. 17, 453-510.

    Donoho, D.L. and Johnstone, I.M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81, 425-455.

    Kneip, A. (1994). Ordered linear smoothers. Ann. Statist. 22, 835-866.

    Li, K.-C. (1987). Asymptotic optimality for Cp, CL, cross-validation and generalized cross-validation: Discrete index set. Ann. Statist. 15, 958-976.

    Li, K.-C. (1989). Honest confidence sets for nonparametric regression. Ann. Statist. 17, 1001-1008.

    Nussbaum, M. (1996). The Pinsker bound: a review. Unpublished preprint.