Iterative stochastic approximation methods are widely used to solve M-estimation problems, particularly in the context of predictive learning. In situations that will become increasingly common in the Big Data era, the available datasets are so massive that computing statistics over the full sample is barely feasible, if feasible at all. A natural and popular approach to gradient descent in this context consists in substituting for the “full data” statistics their counterparts computed from randomly drawn subsamples of manageable size. The main purpose of this paper is to investigate the impact of survey sampling with unequal inclusion probabilities on stochastic gradient descent-based M-estimation methods. Precisely, we prove that, in the presence of some a priori information, one may significantly improve statistical accuracy, as measured by the limit variance, by choosing appropriate first-order inclusion probabilities. These results are established through asymptotic theorems and supported by illustrative numerical experiments.
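To fix ideas, the sketch below illustrates the mechanism the paper studies: an SGD iteration in which the full-data gradient is replaced by its Horvitz–Thompson counterpart computed from a Poisson survey sample with unequal first-order inclusion probabilities. This is a minimal illustration, not the authors' implementation; the function names (ht_sgd_step, grad_i) and the particular choice of inclusion probabilities below are illustrative assumptions.

    import numpy as np

    def ht_sgd_step(theta, grad_i, pi, lr, rng):
        # One SGD step in which the full-data gradient (an average of n
        # per-observation gradients) is replaced by its Horvitz-Thompson
        # estimate based on a Poisson survey sample.
        n = len(pi)
        # Poisson sampling: observation i enters the sample independently
        # with first-order inclusion probability pi[i].
        sampled = np.flatnonzero(rng.random(n) < pi)
        # Each sampled gradient is inflated by 1 / pi[i]; in expectation
        # over the sampling scheme this recovers the full-data gradient.
        g = sum(grad_i(theta, i) / pi[i] for i in sampled)
        return theta - lr * g / n

    # Toy least-squares use, with inclusion probabilities driven by a
    # crude a priori proxy for each observation's gradient magnitude and
    # an expected sample size of 50 out of n = 1000.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = X @ np.ones(5) + 0.1 * rng.normal(size=1000)

    def grad_i(theta, i):
        # Gradient of the i-th squared-error loss term.
        return (X[i] @ theta - y[i]) * X[i]

    w = np.abs(y) + 1e-3
    pi = np.minimum(1.0, 50.0 * w / w.sum())

    theta = np.zeros(5)
    for t in range(2000):
        theta = ht_sgd_step(theta, grad_i, pi, lr=1.0 / (t + 100.0), rng=rng)

The 1/pi[i] weights make each stochastic gradient unbiased for the full-data gradient whatever the chosen probabilities; the point of the paper is that a judicious unequal choice of the pi[i], exploiting a priori information, can reduce the limit variance of the resulting estimator relative to uniform sampling at the same expected computational cost.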
DOI: 10.1051/ps/2018021
Keywords: Asymptotic analysis, central limit theorem, Horvitz–Thompson estimator, M-estimation, Poisson sampling, stochastic gradient descent, survey scheme
Clémençon, Stephan; Bertail, Patrice; Chautru, Emilie; Papa, Guillaume. Optimal survey schemes for stochastic gradient descent with applications to M-estimation. ESAIM: Probability and Statistics, Volume 23 (2019), pp. 310-337. doi: 10.1051/ps/2018021. http://archive.numdam.org/articles/10.1051/ps/2018021/