=====Error=====
Working from a [[null hypothesis]], two broad categories of error are recognized:
* [[Type I and type II errors#Type I error|Type I errors]], where the null hypothesis is falsely rejected, giving a "false positive".
* [[Type I and type II errors#Type II error|Type II errors]], where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a "false negative".

[[Standard deviation]] refers to the extent to which individual observations in a sample differ from a central value, such as the sample or population mean, while [[Standard error (statistics)#Standard error of the mean|standard error]] refers to an estimate of the difference between the sample mean and the population mean.

A [[Errors and residuals in statistics#Introduction|statistical error]] is the amount by which an observation differs from its [[expected value]]. A [[Errors and residuals in statistics#Introduction|residual]] is the amount by which an observation differs from the value that an estimator of the expected value assumes on a given sample (also called a prediction).

[[Mean squared error]] is used for obtaining [[efficient estimators]], a widely used class of estimators. [[Root mean square error]] is simply the square root of the mean squared error.

[[File:Linear least squares(2).svg|thumb|right|A least squares fit: in red the points to be fitted, in blue the fitted line.]]

Many statistical methods seek to minimize the [[residual sum of squares]], and these are called "[[least squares|methods of least squares]]", in contrast to [[least absolute deviations]]. The latter gives equal weight to small and large errors, while the former gives more weight to large errors. The residual sum of squares is also [[Differentiable function|differentiable]], which provides a handy property for doing [[regression analysis|regression]]. Least squares applied to [[linear regression]] is called the [[ordinary least squares]] method, and least squares applied to [[nonlinear regression]] is called [[non-linear least squares]]. Also, in a linear regression model, the non-deterministic part of the model is called the error term, disturbance, or more simply noise. Both linear regression and non-linear regression are addressed in [[polynomial least squares]], which also describes the variance in a prediction of the dependent variable (''y'' axis) as a function of the independent variable (''x'' axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve.

Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as [[Random error|random]] (noise) or [[Systematic error|systematic]] ([[bias]]), but other types of errors (e.g., blunders, such as when an analyst reports incorrect units) can also be important. The presence of [[missing data]] or [[censoring (statistics)|censoring]] may result in [[bias (statistics)|biased estimates]], and specific techniques have been developed to address these problems.<ref>Rubin, Donald B.; Little, Roderick J. A. (2002). ''Statistical Analysis with Missing Data''. New York: Wiley.</ref>
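
The two categories of error above can be seen directly by simulation. The following is a minimal sketch, assuming NumPy and SciPy are available; the sample size, effect size, significance level, and trial count are illustrative choices, not fixed conventions.

<syntaxhighlight lang="python">
# Monte Carlo estimate of Type I and Type II error rates for a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha, trials = 30, 0.05, 10_000

# Type I error: both samples come from the same population, so any
# rejection of the null hypothesis is a false positive.
false_pos = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(trials)
)

# Type II error: the populations genuinely differ (by 0.5 here), so any
# failure to reject the null hypothesis is a false negative.
false_neg = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue >= alpha
    for _ in range(trials)
)

print(f"Type I rate:  {false_pos / trials:.3f}")   # close to alpha = 0.05
print(f"Type II rate: {false_neg / trials:.3f}")   # depends on effect size and n
</syntaxhighlight>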
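
The distinction between standard deviation and standard error can likewise be made concrete. A short sketch, assuming only NumPy; the data are simulated from an invented population (mean 100, standard deviation 15):

<syntaxhighlight lang="python">
# Standard deviation describes the spread of individual observations;
# the standard error of the mean estimates how far the sample mean is
# likely to sit from the population mean.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=100, scale=15, size=50)

sd = sample.std(ddof=1)            # sample standard deviation
se = sd / np.sqrt(len(sample))     # standard error of the mean: sd / sqrt(n)

print(f"sample standard deviation: {sd:.2f}")   # near 15 regardless of n
print(f"standard error of mean:    {se:.2f}")   # shrinks as n grows
</syntaxhighlight>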
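
The error/residual distinction, together with mean squared error and root mean square error, can be illustrated in a few lines. A sketch assuming NumPy; the true mean of 10.0 is an invented parameter, knowable here only because the data are simulated:

<syntaxhighlight lang="python">
# An error is measured against the true expected value (unobservable in
# practice); a residual is measured against its estimate, the sample mean.
import numpy as np

rng = np.random.default_rng(2)
true_mean = 10.0
obs = true_mean + rng.normal(0, 2, size=25)

errors = obs - true_mean        # requires knowing the true expected value
residuals = obs - obs.mean()    # computable from the sample alone

print(f"sum of errors:    {errors.sum():.3f}")     # generally non-zero
print(f"sum of residuals: {residuals.sum():.3f}")  # zero by construction

# Mean squared error of the observations about the estimate, and its root:
mse = np.mean(residuals ** 2)
rmse = np.sqrt(mse)
print(f"MSE: {mse:.3f}, RMSE: {rmse:.3f}")
</syntaxhighlight>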
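
The contrast between least squares and least absolute deviations is easiest to see on data with one large error. A sketch assuming NumPy and SciPy; the line parameters and the outlier are fabricated to make the contrast visible:

<syntaxhighlight lang="python">
# Fitting the same line by minimizing the residual sum of squares and by
# minimizing the sum of absolute deviations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
y[-1] += 30.0   # a single large error

def rss(params):     # residual sum of squares (smooth and differentiable)
    a, b = params
    return np.sum((y - (a * x + b)) ** 2)

def lad(params):     # sum of absolute deviations (not differentiable at 0)
    a, b = params
    return np.sum(np.abs(y - (a * x + b)))

ls_fit = minimize(rss, x0=[1.0, 0.0]).x
lad_fit = minimize(lad, x0=[1.0, 0.0], method="Nelder-Mead").x

# Least squares weights the outlier heavily; least absolute deviations
# gives it the same weight as any other point, so it stays near (2, 1).
print(f"least squares slope/intercept:       {ls_fit.round(2)}")
print(f"least absolute dev. slope/intercept: {lad_fit.round(2)}")
</syntaxhighlight>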
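
Ordinary least squares and non-linear least squares differ in how the fit is computed: the linear case has a closed-form solution, while the non-linear case is solved iteratively. A sketch assuming NumPy/SciPy; the linear and exponential models and their noise levels are illustrative choices:

<syntaxhighlight lang="python">
# Ordinary least squares (model linear in its parameters) next to
# non-linear least squares (parameters enter non-linearly).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
x = np.linspace(0, 5, 40)

# OLS: solvable in closed form; np.polyfit minimizes the squared residuals.
y_lin = 3.0 * x + 2.0 + rng.normal(0, 1, x.size)
slope, intercept = np.polyfit(x, y_lin, deg=1)

# Non-linear least squares: curve_fit iteratively minimizes the residual
# sum of squares for a model that is non-linear in a and k.
def expo(x, a, k):
    return a * np.exp(k * x)

y_exp = expo(x, 1.5, 0.6) + rng.normal(0, 0.5, x.size)
(a, k), _ = curve_fit(expo, x, y_exp, p0=[1.0, 0.5])

print(f"OLS fit: slope={slope:.2f}, intercept={intercept:.2f}")
print(f"NLS fit: a={a:.2f}, k={k:.2f}")
</syntaxhighlight>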
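
Finally, the bias that missing data can introduce is simple to demonstrate. A sketch assuming NumPy; the missingness rule (larger values more likely to go unrecorded) is an invented example of non-random missingness:

<syntaxhighlight lang="python">
# When missingness depends on the value itself, the complete cases
# under-represent part of the distribution and the naive estimate is biased.
import numpy as np

rng = np.random.default_rng(5)
values = rng.normal(50, 10, size=100_000)

# Probability of going missing rises logistically with the value.
p_missing = 1 / (1 + np.exp(-(values - 50) / 10))
observed = values[rng.random(values.size) > p_missing]

print(f"true mean:          {values.mean():.2f}")    # about 50
print(f"complete-case mean: {observed.mean():.2f}")  # biased downward
</syntaxhighlight>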