<HashMap><database>biostudies-literature</database><scores/><additional><submitter>Shi X</submitter><funding>National Bureau of Statistics of China</funding><funding>National Natural Science Foundation of China</funding><funding>NCI NIH HHS</funding><funding>National Institutes of Health</funding><pagination>235-251</pagination><full_dataset_link>https://www.ebi.ac.uk/biostudies/studies/S-EPMC6181148</full_dataset_link><repository>biostudies-literature</repository><omics_type>Unknown</omics_type><volume>124</volume><pubmed_abstract>Penalization is a popular tool for multi- and high-dimensional data. Most existing computational algorithms have been developed for convex loss functions. Nonconvex loss functions can sometimes generate more robust results and have important applications. Motivated by the BLasso algorithm, this study develops the Forward and Backward Stagewise (Fabs) algorithm for nonconvex loss functions with the adaptive Lasso (aLasso) penalty. It is shown that each point along the Fabs paths is a &lt;i>δ&lt;/i>-approximate solution to the aLasso problem and that the Fabs paths converge to the stationary points of the aLasso problem as &lt;i>δ&lt;/i> goes to zero, given that the loss function has second-order derivatives bounded from above. This study exemplifies Fabs with an application to penalized smooth partial rank (SPR) estimation, for which there is still a lack of effective algorithms. Extensive numerical studies are conducted to demonstrate the benefit of penalized SPR estimation using Fabs, especially under high-dimensional settings.
An application to the smoothed 0-1 loss in binary classification is introduced to demonstrate the algorithm's capability to work with other differentiable nonconvex loss functions.</pubmed_abstract><journal>Computational statistics &amp; data analysis</journal><pubmed_title>A Forward and Backward Stagewise Algorithm for Nonconvex Loss Functions with Adaptive Lasso.</pubmed_title><pmcid>PMC6181148</pmcid><funding_grant_id>2016LD01</funding_grant_id><funding_grant_id>R01 CA204120</funding_grant_id><funding_grant_id>71501089</funding_grant_id><pubmed_authors>Ma S</pubmed_authors><pubmed_authors>Huang Y</pubmed_authors><pubmed_authors>Huang J</pubmed_authors><pubmed_authors>Shi X</pubmed_authors></additional><is_claimable>false</is_claimable><name>A Forward and Backward Stagewise Algorithm for Nonconvex Loss Functions with Adaptive Lasso.</name><description>Penalization is a popular tool for multi- and high-dimensional data. Most existing computational algorithms have been developed for convex loss functions. Nonconvex loss functions can sometimes generate more robust results and have important applications. Motivated by the BLasso algorithm, this study develops the Forward and Backward Stagewise (Fabs) algorithm for nonconvex loss functions with the adaptive Lasso (aLasso) penalty. It is shown that each point along the Fabs paths is a &lt;i>δ&lt;/i>-approximate solution to the aLasso problem and that the Fabs paths converge to the stationary points of the aLasso problem as &lt;i>δ&lt;/i> goes to zero, given that the loss function has second-order derivatives bounded from above. This study exemplifies Fabs with an application to penalized smooth partial rank (SPR) estimation, for which there is still a lack of effective algorithms. Extensive numerical studies are conducted to demonstrate the benefit of penalized SPR estimation using Fabs, especially under high-dimensional settings.
An application to the smoothed 0-1 loss in binary classification is introduced to demonstrate the algorithm's capability to work with other differentiable nonconvex loss functions.</description><dates><release>2018-01-01T00:00:00Z</release><publication>2018 Aug</publication><modification>2024-11-09T18:42:36.267Z</modification><creation>2019-08-07T07:02:48Z</creation></dates><accession>S-EPMC6181148</accession><cross_references><pubmed>30319163</pubmed><doi>10.1016/j.csda.2018.03.006</doi></cross_references></HashMap>