Estimation of the Derivatives of a Function in a Convolution Regression Model with Random Design

Advances in Statistics, Volume 2015 (2015), Article ID 695904, 11 pages
http://dx.doi.org/10.1155/2015/695904

Research Article

Christophe Chesneau (1) and Maher Kachour (2)

(1) Laboratoire de Mathématiques Nicolas Oresme, Université de Caen, BP 5186, 14032 Caen Cedex, France
(2) École Supérieure de Commerce IDRAC, 47 rue Sergent Michel Berthet, CP 607, 69258 Lyon Cedex 09, France

Received 8 August 2014; Revised 25 February 2015; Accepted 5 March 2015

Academic Editor: Jos De Brabanter

Copyright © 2015 Christophe Chesneau and Maher Kachour. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A convolution regression model with random design is considered. We investigate the estimation of the derivatives of an unknown function, an element of the convolution product. We introduce new estimators based on wavelet methods and provide theoretical guarantees on their good performance.

1. Introduction

We consider the convolution regression model with random design described as follows. Let be i.i.d. random variables defined on a probability space , where , is an unknown function, is a known function, are i.i.d. random variables with common density , and are i.i.d. random variables such that and . Throughout this paper, we assume that , , and are compactly supported with , , , , , , , , is times differentiable with , is integrable and ordinary smooth (the precise definition is given by (K2) in Section 3.1), and and are independent for any . We aim to estimate the unknown function and its th derivative, denoted by , from the sample . The motivation of this problem is the deconvolution of a signal from perturbed by noise and randomly observed. The function can represent a driving force that was applied to a physical system.
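The displayed equations of the model were lost in the extraction. As a hedged reconstruction from the surrounding prose, and with assumed symbol names (f the unknown function, g the known function, X_i the design points, ξ_i the noise), model (1) for a convolution regression with random design is conventionally written as follows; this is a sketch of the standard form, not a verbatim copy of the paper's display.

```latex
% Hedged reconstruction of model (1); the names f, g, X_i, xi_i, h, n
% are assumptions.
\[
  Y_i = (f \star g)(X_i) + \xi_i, \qquad i = 1, \ldots, n,
  \qquad\text{where}\quad
  (f \star g)(x) = \int f(t)\, g(x - t)\, \mathrm{d}t,
\]
% with X_1, ..., X_n i.i.d. of common density h, and xi_1, ..., xi_n
% i.i.d., centered, and independent of the design.
```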
Such situations naturally appear in various applied areas, such as astronomy, optics, seismology, and biology. Model (1) can also be viewed as a natural extension of some -periodic convolution regression models, such as those considered by, for example, Cavalier and Tsybakov [1], Pensky and Sapatinas [2], and Loubes and Marteau [3]. In the form (1), it has been considered in Bissantz and Birke [4] and Birke et al. [5] with a deterministic design and in Hildebrandt et al. [6] with a random design. These works focus on kernel methods and establish their asymptotic normality. The estimation of , more general than , is of interest for examining possible bumps and for studying the convexity-concavity properties of (see, for instance, Prakasa Rao [7], for standard statistical models).

In this paper, we introduce new estimators for based on wavelet methods. Through the use of a multiresolution analysis, these methods enjoy local adaptivity against discontinuities and provide efficient estimators for a wide variety of unknown functions . Basics on wavelet estimation can be found in, for example, Antoniadis [8], Härdle et al. [9], and Vidakovic [10]. Results on the wavelet estimation of in other regression frameworks can be found in, for example, Cai [11], Petsa and Sapatinas [12], and Chesneau [13].

The first part of the study is devoted to the case where , the common density of , is known. We develop a linear wavelet estimator and an adaptive nonlinear wavelet estimator. The second one uses the double hard thresholding technique introduced by Delyon and Juditsky [14]. It does not depend on the smoothness of in its construction; it is adaptive. We exhibit their rates of convergence via the mean integrated squared error (MISE) under the assumption that belongs to Besov balls. The obtained rates of convergence coincide with existing results for the estimation of in the -periodic convolution regression models (see, for instance, Chesneau [15]). The second part is devoted to the case where is unknown. We construct a new linear wavelet estimator using a plug-in approach for the estimation of . Its construction follows the idea of the "NES linear wavelet estimator" introduced by Pensky and Vidakovic [16] in another regression context. We then investigate its MISE properties when belongs to Besov balls, which naturally depend on the MISE of the considered estimator for . Furthermore, let us mention that all our results are proved with only moments of order on , which provides another theoretical contribution to the subject.

The remaining part of this paper is organized as follows. In Section 2 we describe some basics on wavelets and Besov balls and present our wavelet estimation methodology. Section 3 is devoted to our estimators and their performance. The proofs are carried out in Section 4.

2. Preliminaries

This section is devoted to the presentation of the considered wavelet basis, the Besov balls, and our wavelet estimation methodology.

2.1. Wavelet Basis

Let us briefly present the wavelet basis on the interval , , introduced by Cohen et al. [17]. Let and be the initial wavelet functions of the Daubechies wavelets family db2N with (see, e.g., Daubechies [18]). These functions have the distinction of being compactly supported and of belonging to the class for .
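The display that opens the next paragraph ("For any and , we set") was lost in extraction. In the standard notation of wavelet bases on the interval, it presumably defines the dilated and translated families; the names phi and psi for the father and mother wavelets, and j and k for the scale and translation indices, are assumed here.

```latex
% Standard dilations/translations; phi, psi, j, k are assumed names.
\[
  \phi_{j,k}(x) = 2^{j/2}\, \phi\!\left(2^{j}x - k\right), \qquad
  \psi_{j,k}(x) = 2^{j/2}\, \psi\!\left(2^{j}x - k\right).
\]
```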
For any and , we set With appropriate treatments at the boundaries, there exist an integer and a set of consecutive integers of cardinality proportional to (both depending on , , and ) such that, for any integer , forms an orthonormal basis of the space of square-integrable functions on ; that is, For the case and , is the smallest integer satisfying and . For any integer and , we have the following wavelet expansion: where An interesting feature of the wavelet basis is that it provides a sparse representation of ; only a few wavelet coefficients, characterized by a high magnitude, reveal the main details of . See, for example, Cohen et al. [17] and Mallat [19].

2.2. Besov Balls

We say that a function belongs to the Besov ball with , , , and if there exists a constant such that and (6) satisfy with the usual modifications if or . The interest of Besov balls is that they contain various kinds of homogeneous and inhomogeneous functions . See, for example, Meyer [20], Donoho et al. [21], and Härdle et al. [9].

2.3. Wavelet Estimation

Let be the unknown function in (1) and the considered wavelet basis taken with (to ensure that and belong to the class ). Suppose that exists with . The first step in the wavelet estimation consists in expanding on as where and The second step is the estimation of and using . The idea of the third step is to exploit the sparse representation of by selecting the most interesting wavelet coefficient estimators. This selection can be of different natures (truncation, thresholding, ...). Finally, we reconstruct these wavelet coefficient estimators on , providing an estimator for . In this study, we evaluate the performance of by studying the asymptotic properties of its MISE under the assumption that . More precisely, we aim to determine the sharpest rate of convergence such that where denotes a constant independent of .

3. Rates of Convergence

In this section, we list the assumptions on the model, present our wavelet estimators, and determine their rates of convergence under the MISE over Besov balls.

3.1. Assumptions

Let us recall that and are the functions in (1) and is the density of . We formulate the following assumptions.

(K1) We have for any , , and there exists a known constant such that .

(K2) First of all, let us define the Fourier transform of an integrable function by The notation will be used for the complex conjugate. We have and there exist two constants, and , such that

(K3) There exists a constant such that

The assumptions (K1) and (K3) are standard in a nonparametric regression framework (see, for instance, Tsybakov [22]). Note that we do not need for the estimation of . The assumption (K2) is the so-called "ordinary smooth case" on . It is common in the deconvolution estimation of densities (see, e.g., Fan and Koo [23] and Pensky and Vidakovic [24]). An example of a compactly supported function satisfying (K2) is . Then , , and (K2) is satisfied with and .

3.2. When Is Known

3.2.1. Linear Wavelet Estimator

We define the linear wavelet estimator by where and is an integer chosen a posteriori. Proposition 1 presents an elementary property of .

Proposition 1. Let be (15) and let be (9). Suppose that (K1) holds. Then one has
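Since the defining displays (14)-(15) are lost, here is a minimal numerical sketch of the generic linear wavelet regression step that Proposition 1's unbiasedness argument rests on: empirical projection coefficients weighted by the known design density, followed by reconstruction at a fixed level. It deliberately uses the Haar basis for simplicity and omits the Fourier-domain deconvolution by g that the paper's estimator performs; all names (phi_jk, h, j0) are assumptions for illustration, not the paper's notation.

```python
import numpy as np

def phi_jk(x, j, k):
    """Haar father wavelet, dilated and translated: 2^{j/2} phi(2^j x - k)."""
    y = 2.0**j * x - k
    return 2.0**(j / 2.0) * ((y >= 0.0) & (y < 1.0)).astype(float)

def linear_wavelet_fit(x_grid, X, Y, h, j0):
    """Estimate the regression curve on x_grid at resolution level j0,
    assuming the design density h is known (no deconvolution step)."""
    fit = np.zeros_like(x_grid)
    for k in range(2**j0):
        # Empirical coefficient: (1/n) sum Y_i phi_{j0,k}(X_i) / h(X_i),
        # an unbiased estimate of <r, phi_{j0,k}> for regression curve r.
        c_hat = np.mean(Y * phi_jk(X, j0, k) / h(X))
        fit += c_hat * phi_jk(x_grid, j0, k)
    return fit

# Toy usage with a uniform design on [0, 1], so h is identically 1.
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(size=n)
Y = np.sin(2.0 * np.pi * X) + 0.1 * rng.standard_normal(n)
curve = linear_wavelet_fit(np.linspace(0.0, 1.0, 256), X, Y,
                           h=lambda x: np.ones_like(x), j0=4)
```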
Theorem 2 below investigates the performance of in terms of rates of convergence under the MISE over Besov balls.

Theorem 2. Suppose that (K1)-(K3) are satisfied and that with , , , , and . Let be defined by (14) with such that ( denotes the integer part of ). Then there exists a constant such that

Note that the rate of convergence corresponds to the one obtained in the estimation of in the -periodic white noise convolution model with an adapted linear wavelet estimator (see, e.g., Chesneau [15]). The considered estimator depends on (the smoothness parameter of ); it is not adaptive. This aspect, as well as the rate of convergence , can be improved with thresholding methods. The next paragraph is devoted to one of them: the hard thresholding method.

3.2.2. Hard Thresholding Wavelet Estimator

Suppose that (K2) is satisfied. We define the hard thresholding wavelet estimator by , where is defined by (15), is the indicator function, is a large enough constant, is the integer satisfying refers to (12), The construction of uses the double hard thresholding technique introduced by Delyon and Juditsky [14] and recently improved by Chaubey et al. [25]. The main interest of the thresholding using is to make adaptive; the construction (and performance) of does not depend on the knowledge of the smoothness of . The role of the thresholding using in (20) is to relax some usual restrictions on the model. To be more specific, it enables us to suppose only that admits finite moments of order (with known or a known upper bound of ), relaxing the standard assumption , for any . Further details on the constructions of hard thresholding wavelet estimators can be found in, for example, Donoho and Johnstone [26, 27], Donoho et al. [21, 28], Delyon and Juditsky [14], and Härdle et al. [9].

Theorem 3 below investigates the performance of in terms of rates of convergence under the MISE over Besov balls.

Theorem 3. Suppose that (K1)-(K3) are satisfied and that with , , , or , and . Let be defined by (19). Then there exists a constant such that

The proof of Theorem 3 is an application of a general result established by [25, Theorem 6.1]. Let us mention that corresponds to the rate of convergence obtained in the estimation of in the -periodic white noise convolution model with an adapted hard thresholding wavelet estimator (see, e.g., Chesneau [15]). In the case and , this rate of convergence becomes the optimal one in the minimax sense for the standard density-regression estimation problems (see Härdle et al. [9]). In comparison to Theorem 2, note that (i) for the case corresponding to the homogeneous zone of Besov balls, it is equal to the rate of convergence attained by up to a logarithmic term, and (ii) for the case corresponding to the inhomogeneous zone of Besov balls, it is significantly better in terms of power.
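With the displays (19)-(20) lost, here is a minimal sketch of the keep-or-kill rule that hard thresholding applies to empirical detail coefficients, using the standard threshold shape proportional to sqrt(log(n)/n). The constant kappa and the threshold form are assumptions in the spirit of the construction; the inner, second thresholding of the Delyon-Juditsky double-thresholding scheme is not reproduced.

```python
import numpy as np

def hard_threshold(beta_hat, n, kappa=1.0):
    """Zero out empirical wavelet coefficients below the threshold
    lambda_n = kappa * sqrt(log(n) / n); keep the others untouched."""
    lam = kappa * np.sqrt(np.log(n) / n)
    return np.where(np.abs(beta_hat) >= lam, beta_hat, 0.0)

# Example: small coefficients are killed, large ones survive.
coeffs = np.array([0.50, 0.02, -0.30, 0.01])
print(hard_threshold(coeffs, n=2000))  # -> [ 0.5  0.  -0.3  0. ]
```

This keep-or-kill selection is what makes the estimator adaptive: no smoothness parameter of the target function enters the rule, only the sample size.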
3.3. When Is Unknown

In the case where is unknown, we propose a plug-in technique which consists in estimating in the construction of (14). This yields the linear wavelet estimator defined by where , is an integer chosen a posteriori, refers to (K3), and is an estimator of constructed from the random variables . There are numerous possibilities for the choice of . For instance, can be a kernel density estimator or a wavelet density estimator (see, e.g., Donoho et al. [21], Härdle et al. [9], and Juditsky and Lambert-Lacroix [29]). The estimator is derived from the "NES linear wavelet estimator" introduced by Pensky and Vidakovic [16] and recently revisited in a simpler form by Chesneau [13]. Theorem 4 below determines an upper bound on the MISE of .

Theorem 4. Suppose that (K1)-(K3) are satisfied, , and that with , , , , and . Let be defined by (24) with such that . Then there exists a constant such that with .

The proof follows the idea of [13, Theorem 3] and uses technical operations on Fourier transforms. From Theorem 4, (i) if we choose and (17), we obtain Theorem 2; (ii) if and satisfy that there exist and a constant such that then the optimal integer is such that and we obtain the following rate of convergence for : Naturally, the estimation of has a negative impact on the performance of . In particular, if , then the standard density linear wavelet estimator attains the rate of convergence with and (and it is optimal in the minimax sense for ; see Härdle et al. [9]). With this choice, the rate of convergence for becomes . Let us mention that is not adaptive since it depends on . However, remains an acceptable first approach for the estimation of with unknown .

Conclusion and Perspectives

This study considers the estimation of from (1). Depending on whether is known or not, we propose wavelet methods and prove that they attain fast rates of convergence under the MISE over Besov balls. Among the perspectives of this work, we retain the following. (i) The relaxation of the assumption (K2), perhaps by considering (K2′): there exist four constants, , , , and , such that This condition was first introduced by Delaigle and Meister [30] in a context of deconvolution estimation of functions. It implies (K2) and has the advantage of covering some functions having zeros in the Fourier transform domain, such as numerous kinds of compactly supported functions. (ii) The construction of an adaptive version of through the use of a thresholding method. (iii) The extension of our results to the risk with . All these aspects need further investigation, which we leave for future work.

4. Proofs

In this section, denotes any constant that does not depend on , , or . Its value may change from one term to another and may depend on or .

Proof of Proposition 1. By the independence between and , , , and , we have It follows from (K1) and integration by parts that . Using the Fubini theorem, , (30), and the Parseval identity, we obtain Proposition 1 is proved.

Proof of Theorem 2. We expand the function on as (8) at the level . Since forms an orthonormal basis of , we get Using Proposition 1, the fact that are i.i.d., the inequalities for any complex random variable and , , and (K1) and (K3), we have The Parseval identity yields Using (K2), , and a change of variables, we obtain (Let us mention that is finite thanks to .) Combining (33), (34), and (35), we have For the integer satisfying (17), it holds that Let us now bound the last term in (32). Since (see [9, Corollary 9.2]), we obtain Owing to (32), (37), and (38), we have Theorem 2 is proved.

Proof of Theorem 3. For , any integer , and , (a1) using arguments similar to those in Proposition 1, we obtain (a2) using (33), (34), and (35) with instead of , we have with . Thanks to (a1) and (a2), we can apply [25, Theorem 6.1] (see the Appendix) with , , , , and with , , either and or and ; we prove the existence of a constant such that Theorem 3 is proved.

Proof of Theorem 4. We expand the function on as (8) at the level . Since forms an orthonormal basis of , we get Using (see [9, Corollary 9.2]), we have Let be (15) with and . The elementary inequality , , yields where

Upper Bound for . Proceeding as in (37), we get

Upper Bound for . The triangle inequality gives Owing to the triangle inequality, the indicator function, (K3), , and the Markov inequality, we have Therefore where Let us now consider .
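The equality invoked in the first sentence below was lost in extraction. For a complex random variable and conditioning on the design, the standard conditional second-moment identity presumably intended here reads as follows; the symbol Z and the conditioning on X_1, ..., X_n are assumed names.

```latex
% Conditional second-moment identity; Z and the conditioning variables
% are assumed names.
\[
  \mathbb{E}\!\left[ |Z|^{2} \,\middle|\, X_1, \ldots, X_n \right]
  = \bigl| \mathbb{E}[Z \mid X_1, \ldots, X_n] \bigr|^{2}
  + \mathbb{V}\!\left( Z \mid X_1, \ldots, X_n \right).
\]
```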
For any complex random variable , we have the equality where denotes the expectation of conditionally on and the variance of conditionally on . Therefore where Let us now observe that, owing to the independence of , the random variables , conditionally on , are independent. Using this property with the inequalities for any complex random variable and , , the independence between and , and (K1) and (K3), we get Owing to (K2), , and a change of variables, we obtain Therefore, using and , we obtain Now, by the Hölder inequality for conditional expectations and arguments similar to (33), (34), and (35), we get Hence It follows from (54), (58), and (60) that Putting (46), (48), and (61) together, we get Combining (44), (45), and (62), we obtain the desired result; that is, Theorem 4 is proved.

Appendix

Let us now present in detail the general result of [25, Theorem 6.1] used in the proof of Theorem 3. We consider the wavelet basis presented in Section 2 and a general form of the hard thresholding wavelet estimator, denoted by , for estimating an unknown function from independent random variables : where , and is the integer satisfying Here, we suppose that there exist (i) functions with for any , and (ii) two sequences of real numbers and satisfying and , such that, for , any integer , and any , there exist two constants, and , such that, for any integer and any , Let be (A.1) under and . Suppose that with , and or and . Then there exists a constant such that

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors are thankful to the reviewers for their comments, which have helped in improving the presented work.

References

[1] L. Cavalier and A. Tsybakov, "Sharp adaptation for inverse problems with random noise," Probability Theory and Related Fields, vol. 123, no. 3, pp. 323-354, 2002.
[2] M. Pensky and T. Sapatinas, "On convergence rates equivalency and sampling strategies in functional deconvolution models," The Annals of Statistics, vol. 38, no. 3, pp. 1793-1844, 2010.
[3] J.-M. Loubes and C. Marteau, "Adaptive estimation for an inverse regression model with unknown operator," Statistics & Risk Modeling, vol. 29, no. 3, pp. 215-242, 2012.
[4] N. Bissantz and M. Birke, "Asymptotic normality and confidence intervals for inverse regression models with convolution-type operators," Journal of Multivariate Analysis, vol. 100, no. 10, pp. 2364-2375, 2009.
[5] M. Birke, N. Bissantz, and H. Holzmann, "Confidence bands for inverse regression models," Inverse Problems, vol. 26, no. 11, Article ID 115020, 2010.
[6] T. Hildebrandt, N. Bissantz, and H. Dette, "Additive inverse regression models with convolution-type operators," Electronic Journal of Statistics, vol. 8, no. 1, pp. 1-40, 2014.
[7] B. L. S. Prakasa Rao, Nonparametric Functional Estimation, Academic Press, Orlando, Fla, USA, 1983.
[8] A. Antoniadis, "Wavelets in statistics: a review (with discussion)," Journal of the Italian Statistical Society Series B, vol. 6, pp. 97-144, 1997.
[9] W. Härdle, G. Kerkyacharian, D. Picard, and A. Tsybakov, Wavelets, Approximation, and Statistical Applications, vol. 129 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1998.
[10] B. Vidakovic, Statistical Modeling by Wavelets, John Wiley & Sons, New York, NY, USA, 1999.
[11] T. T. Cai, "On adaptive wavelet estimation of a derivative and other related linear inverse problems," Journal of Statistical Planning and Inference, vol. 108, no. 1-2, pp. 329-349, 2002.
[12] A. Petsa and T. Sapatinas, "On the estimation of the function and its derivatives in nonparametric regression: a Bayesian testimation approach," Sankhya A, vol. 73, no. 2, pp. 231-244, 2011.
[13] C. Chesneau, "A note on wavelet estimation of the derivatives of a regression function in a random design setting," International Journal of Mathematics and Mathematical Sciences, vol. 2014, Article ID 195765, 8 pages, 2014.
[14] B. Delyon and A. Juditsky, "On minimax wavelet estimators," Applied and Computational Harmonic Analysis, vol. 3, no. 3, pp. 215-228, 1996.
[15] C. Chesneau, "Wavelet estimation of the derivatives of an unknown function from a convolution model," Current Development in Theory and Applications of Wavelets, vol. 4, no. 2, pp. 131-151, 2010.
[16] M. Pensky and B. Vidakovic, "On non-equally spaced wavelet regression," Annals of the Institute of Statistical Mathematics, vol. 53, no. 4, pp. 681-690, 2001.
[17] A. Cohen, I. Daubechies, and P. Vial, "Wavelets on the interval and fast wavelet transforms," Applied and Computational Harmonic Analysis, vol. 1, no. 1, pp. 54-81, 1993.
[18] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, Pa, USA, 1992.
[19] S. Mallat, A Wavelet Tour of Signal Processing, Elsevier/Academic Press, Amsterdam, The Netherlands, 3rd edition, 2009.
[20] Y. Meyer, Wavelets and Operators, Cambridge University Press, Cambridge, UK, 1992.
[21] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, "Density estimation by wavelet thresholding," The Annals of Statistics, vol. 24, no. 2, pp. 508-539, 1996.
[22] A. B. Tsybakov, Introduction à l'Estimation Non Paramétrique, Springer, Berlin, Germany, 2004.
[23] J. Fan and J.-Y. Koo, "Wavelet deconvolution," IEEE Transactions on Information Theory, vol. 48, no. 3, pp. 734-747, 2002.
[24] M. Pensky and B. Vidakovic, "Adaptive wavelet estimator for nonparametric density deconvolution," The Annals of Statistics, vol. 27, no. 6, pp. 2033-2053, 1999.
[25] Y. P. Chaubey, C. Chesneau, and H. Doosti, "Adaptive wavelet estimation of a density from mixtures under multiplicative censoring," Statistics: A Journal of Theoretical and Applied Statistics, 2014.
[26] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage," Biometrika, vol. 81, no. 3, pp. 425-455, 1994.
[27] D. L. Donoho and I. M. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," Journal of the American Statistical Association, vol. 90, no. 432, pp. 1200-1224, 1995.
[28] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, "Wavelet shrinkage: asymptopia?" Journal of the Royal Statistical Society Series B: Methodological, vol. 57, no. 2, pp. 301-369, 1995.
[29] A. Juditsky and S. Lambert-Lacroix, "On minimax density estimation on R," Bernoulli, vol. 10, no. 2, pp. 187-220, 2004.
[30] A. Delaigle and A. Meister, "Nonparametric function estimation under Fourier-oscillating noise," Statistica Sinica, vol. 21, no. 3, pp. 1065-1092, 2011.


