
Statistical Methods for the Physical Sciences

Key Points

Looking at some univariate data: summary statistics and histograms
  • Univariate data can be plotted using histograms, e.g. with matplotlib.pyplot.hist. Histograms can also be calculated (without plotting) using numpy.histogram.

  • Pre-binned data can be plotted using matplotlib.pyplot.hist using weights matching the binned frequencies/densities and bin centres used as dummy values to be binned.

  • Statistical errors are due to random measurement errors or randomness of the population being sampled, while systematic errors are non-random and linked to faults in the experiment or limitations/biases in sample collection.

  • Precise measurements have low relative statistical error, while accurate measurements have low relative systematic error.

  • Data distributions can be quantified using sample statistics such as the mean and median, and the variance or standard deviation (quantifying the width of the distribution), e.g. with the numpy functions mean, median, var and std. Remember to check the degrees of freedom assumed for the variance and standard deviation functions! (See the example sketch after this list.)

  • Quantities calculated from data such as the mean, median and variance are statistics. Hypotheses about the data can be tested by comparing a suitable test statistic with its expected distribution, given the hypothesis and appropriate assumptions.
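
A minimal sketch of these points in Python (the sample values are invented for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sample of 200 measurements
rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=200)

# Plot a histogram (density=True normalises the counts to a probability density)
plt.hist(data, bins=20, density=True)
plt.xlabel('measured value')
plt.ylabel('probability density')
plt.show()

# Calculate the histogram without plotting
counts, edges = np.histogram(data, bins=20)

# Summary statistics: ddof=1 uses n-1 degrees of freedom (Bessel's correction)
print(np.mean(data), np.median(data), np.var(data, ddof=1), np.std(data, ddof=1))
```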

Introducing probability distributions
  • Probability distributions show how random variables are distributed. Two common distributions are the uniform and normal distributions.

  • Uniform and normal distributions and many associated functions can be accessed using scipy.stats.uniform and scipy.stats.norm respectively.

  • The probability density function (pdf) shows the distribution of relative likelihood or frequency of different values of a random variable and can be accessed with the scipy statistical distribution’s pdf method.

  • The cumulative distribution function (cdf) is the integral of the pdf and shows the cumulative probability for a variable to be equal to or less than a given value. It can be accessed with the scipy statistical distribution’s cdf method.

  • Quantiles such as percentiles and quartiles give the values of the random variable which correspond to fixed probability intervals (e.g. of 1 per cent and 25 per cent respectively). They can be calculated for a distribution in scipy using the ppf or interval methods (see the sketch after this list).

  • The percent point function (ppf) (ppf method) is the inverse function of the cdf and shows the value of the random variable corresponding to a given quantile in its distribution.

  • Probability distributions are defined by common types of parameter such as the location and scale parameters. Some distributions also include shape parameters.
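
A short sketch of the scipy.stats distribution methods described above (the parameter values are arbitrary):

```python
import numpy as np
import scipy.stats as sps

# Normal distribution with (arbitrary) location 10 and scale 2
norm_dist = sps.norm(loc=10, scale=2)

x = np.linspace(4, 16, 100)
pdf_vals = norm_dist.pdf(x)              # probability density function
cdf_vals = norm_dist.cdf(x)              # cumulative distribution function
median = norm_dist.ppf(0.5)              # ppf is the inverse of the cdf
quartiles = norm_dist.ppf([0.25, 0.75])  # 25th and 75th percentiles
central_68 = norm_dist.interval(0.68)    # interval containing the central 68% of probability

# Uniform distribution on [loc, loc+scale] = [2, 5]
uni_dist = sps.uniform(loc=2, scale=3)
print(uni_dist.cdf(3.5))                 # P(X <= 3.5) = 0.5
```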

Random variables
  • Random variables are drawn from probability distributions. The expectation value (arithmetic mean for an infinite number of sampled variates) is equal to the mean of the distribution function (pdf).

  • The variance of a random variable is equal to the expectation of the squared variable minus the squared expectation of the variable.

  • Sums of scaled random variables have expectation values equal to the sum of scaled expectations of the individual variables, and variances equal to the sum of scaled individual variances.

  • The means and variances of summed random variables lead to the calculation of the standard error, i.e. the standard deviation of the sample mean (see the sketch after this list).

  • scipy.stats distributions have methods to calculate the mean (.mean), variance (.var) and other properties of the distribution.

  • scipy.stats distributions have a method (.rvs) to generate arrays of random variates drawn from that distribution.
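
For example (a sketch using an arbitrary normal distribution):

```python
import numpy as np
import scipy.stats as sps

dist = sps.norm(loc=4, scale=1.5)
print(dist.mean(), dist.var())   # distribution mean and variance

# Draw 1000 random variates from the distribution
x = dist.rvs(size=1000, random_state=38602)

# Standard error on the mean of n variates: sigma/sqrt(n)
n = len(x)
std_err = dist.std() / np.sqrt(n)
print(np.mean(x), std_err)
```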

The Central Limit Theorem
  • Sums of samples of random variates from non-normal distributions with finite mean and variance become asymptotically normally distributed as the sample size increases.

  • The theorem holds for sums of differently distributed variates, but the speed at which a normal distribution is approached depends on the shape of the variate’s distribution, with symmetric distributions approaching the normal limit faster than asymmetric distributions.

  • Means of large numbers (e.g. 100 or more) of non-normally distributed measurements are distributed close to normal, with distribution mean equal to the population mean that the measurements are drawn from and standard deviation given by the standard error on the mean.

  • Distributions of means (or other types of sum) of non-normal random data are closer to normal in their centres than in the tails of the distribution, so the normal assumption is most reliable for smaller deviations of sample mean from the population mean.
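
A quick numerical illustration of the central limit theorem, using means of uniformly distributed variates (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Means of n=100 uniform variates, repeated for many trials
n, ntrials = 100, 10000
means = rng.uniform(0, 1, size=(ntrials, n)).mean(axis=1)

# The CLT predicts a distribution close to normal, with mean equal to the
# population mean (0.5) and standard deviation equal to the standard error
# on the mean, sqrt(1/12)/sqrt(n)
print(means.mean(), means.std(ddof=1), np.sqrt(1/12) / np.sqrt(n))
```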

Significance tests: the z-test - comparing with a population of known mean and variance
  • Significance testing is used to determine whether a given (null) hypothesis is rejected by the data, by calculating a test statistic and comparing it with the distribution expected for it, under the assumption that the null hypothesis is true.

  • A null hypothesis is formulated from a physical model (with parameters that are fixed and independent of the experiment) and a statistical model (which governs the probability distribution of the test statistic). Additional assumptions may be required to derive the distribution of the test statistic.

  • A null hypothesis is rejected if the measured p-value of the test statistic is equal to or less than a pre-defined significance level.

  • Rejection of the null hypothesis could indicate rejection of either the physical model or the statistical model (or both), with further experiments or tests required to determine which.

  • For comparing measurements with an expected (population mean) value, a z-statistic can be calculated to compare the sample mean with the expected value, normalising by the standard error on the sample mean, which requires knowledge of the variance of the population that the measurements are sampled from.

  • The z-statistic should be distributed as a standard normal provided that the sample mean is normally distributed, which may arise for large samples from the central limit theorem, or for any sample size if the measurements are drawn from normal distributions.
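
A minimal sketch of a two-tailed z-test (the measurements, expected mean and population standard deviation are invented for illustration):

```python
import numpy as np
import scipy.stats as sps

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.3, 9.7])  # hypothetical measurements
mu_expected = 10.0   # population mean under the null hypothesis
sigma = 0.3          # known population standard deviation

# z-statistic: sample mean minus expected mean, in units of the standard error
z = (np.mean(x) - mu_expected) / (sigma / np.sqrt(len(x)))

# Two-tailed p-value from the standard normal survival function
p = 2 * sps.norm.sf(np.abs(z))
print(z, p)
```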

Significance tests: the t-test - comparing means when population variance is unknown
  • A t-statistic can be defined from the sample mean and its standard error; it follows a t-distribution if the sample mean is normally distributed and the sample variance is distributed as a scaled chi-squared distribution.

  • The one-sample t-test can be used to compare a sample mean with a population mean when the population variance is unknown, as is often the case with experimental statistical errors.

  • The two-sample t-test can be used to compare two sample means, to see if they could be drawn from distributions with the same population mean and either the same or different variances (e.g. to compare measurements of the same quantity obtained with different experiments).

  • Caution must be applied when interpreting t-test significances of more than 2 to 3 sigma unless the sample is large or the measurements themselves are known to be normally distributed.
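
A sketch of one- and two-sample t-tests with scipy.stats (the samples are made up):

```python
import numpy as np
import scipy.stats as sps

# One-sample t-test: compare a sample mean with an expected population mean
x = np.array([5.2, 4.9, 5.4, 5.1, 4.8, 5.3])
t_stat, p_val = sps.ttest_1samp(x, popmean=5.0)

# Two-sample t-test: compare two sample means
# (equal_var=False gives Welch's test, allowing different population variances)
y = np.array([5.5, 5.6, 5.2, 5.8, 5.4, 5.7])
t_stat2, p_val2 = sps.ttest_ind(x, y, equal_var=False)
print(p_val, p_val2)
```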

Discrete random variables and their probability distributions
  • Discrete probability distributions map a sample space of discrete outcomes (categorical or numerical) on to their probabilities.

  • By mapping the discrete outcomes on to an ordered sequence of integers, a functional form for the probability distribution (the pmf, or probability mass function) can be defined.

  • Bernoulli trials correspond to a single binary outcome (success/fail) while the number of successes in repeated Bernoulli trials is given by the binomial distribution.

  • The Poisson distribution can be derived as a limiting case of the binomial distribution and corresponds to the probability of obtaining a certain number of counts in a fixed interval, from a random process with a constant rate.

  • Counts in fixed histogram bins follow Poisson statistics.

  • In the limit of large numbers of successes/counts, the binomial and Poisson distributions approach the normal distribution.
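
For example, binomial and Poisson pmfs can be calculated with scipy.stats (the parameter values are arbitrary):

```python
import numpy as np
import scipy.stats as sps

x = np.arange(0, 11)

# Binomial: probability of x successes in n=10 trials with success probability 0.3
binom_pmf = sps.binom.pmf(x, n=10, p=0.3)

# Poisson: probability of x counts in an interval with mean rate lambda=3
pois_pmf = sps.poisson.pmf(x, mu=3)

# The pmf summed over all possible outcomes equals unity
# (the Poisson sum here is slightly below 1 because x is truncated at 10)
print(binom_pmf.sum(), pois_pmf.sum())
```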

Probability calculus and conditional probability
  • A sample space contains all possible mutually exclusive outcomes of an experiment or trial.

  • Events consist of sets of outcomes which may overlap, leading to conditional dependence of the occurrence of one event on another. The conditional dependence of events can be described graphically using Venn diagrams.

  • Two events are independent if their probability does not depend on the occurrence (or not) of the other event. Events are mutually exclusive if the probability of one event is zero given that the other event occurs.

  • The probability of an event A occurring, given that B occurs, is in general not equal to the probability of B occurring, given that A occurs.

  • Calculations with conditional probabilities can be made using the probability calculus, including the addition rule, multiplication rule and extensions such as the law of total probability.

  • Multivariate probability distributions can be understood using the mathematics of conditional probability.
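
A toy numerical check of the multiplication rule, the law of total probability and swapping the conditionals (all probabilities are invented for illustration):

```python
# Made-up probabilities for events A and B
p_B = 0.4
p_A_given_B = 0.75
p_A_given_notB = 0.2

# Multiplication rule: P(A and B) = P(A|B) P(B)
p_A_and_B = p_A_given_B * p_B

# Law of total probability: P(A) = P(A|B)P(B) + P(A|not B)P(not B)
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

# Swapping the conditional: P(B|A) = P(A and B) / P(A)
p_B_given_A = p_A_and_B / p_A
print(p_A, p_B_given_A)
```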

Reading, working with and plotting multivariate data
  • The Pandas module is an efficient way to work with complex multivariate data, by reading in and writing the data to a dataframe, which is easier to work with than a numpy structured array.

  • Pandas functionality can be used to clean dataframes of bad or missing data, while scipy and numpy functions can be applied to columns of the dataframe, in order to modify or transform the data.

  • Scatter plot matrices and 3-D plots offer powerful ways to plot and explore multi-dimensional data.
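
A minimal pandas sketch, assuming a hypothetical CSV file and column names:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Read a (hypothetical) CSV file of multivariate data into a dataframe
df = pd.read_csv('measurements.csv')

# Clean the dataframe: drop rows with missing values
df = df.dropna()

# Apply a numpy function to a (hypothetical) column to transform the data
df['log_flux'] = np.log10(df['flux'])

# Scatter plot matrix to explore relationships between the variables
pd.plotting.scatter_matrix(df, figsize=(8, 8))
plt.show()
```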

Correlation tests and least-squares fitting
  • The sample covariance between two variables is an unbiased estimator for population covariance and shows the part of variance that is produced by linearly related variations in both variables.

  • Normalising the sample covariance by the sample standard deviations of both variables yields Pearson’s correlation coefficient, r (see the sketch after this list).

  • Spearman’s rho correlation coefficient is based on the correlation in the ranking of variables, not their absolute values, so is more robust to outliers than Pearson’s coefficient.

  • By assuming that the data are independent (and thus uncorrelated) and identically distributed, significance tests can be carried out on the hypothesis of no correlation, provided the sample is large (\(n>500\)) and/or is normally distributed.

  • By minimising the squared differences between the data and a linear model, linear regression can be used to obtain the model parameters.

  • Bootstrapping uses resampling (with replacement) of the data to estimate the standard deviation of any model parameters or other quantities calculated from the data.
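
A sketch combining these steps on simulated linearly related data (all values are invented):

```python
import numpy as np
import scipy.stats as sps

rng = np.random.default_rng(42)

# Simulated linearly related data with scatter
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 2, 100)

# Correlation tests
r, p_r = sps.pearsonr(x, y)
rho, p_rho = sps.spearmanr(x, y)

# Least-squares linear regression
fit = sps.linregress(x, y)
print(fit.slope, fit.intercept)

# Bootstrap: resample the data (with replacement) to estimate the
# standard deviation of the fitted slope
slopes = []
for _ in range(1000):
    idx = rng.integers(0, len(x), len(x))
    slopes.append(sps.linregress(x[idx], y[idx]).slope)
print(np.std(slopes, ddof=1))
```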

Bayes' Theorem
  • For conditional probabilities, Bayes’ theorem tells us how to swap the conditionals around.

  • In statistical terminology, the probability distribution of the hypothesis given the data is the posterior and is given by the likelihood multiplied by the prior probability, divided by the evidence.

  • The likelihood is the probability of obtaining the fixed (observed) data as a function of the distribution parameters, in contrast to the pdf, which gives the distribution of the data for fixed parameters.

  • The prior probability represents our prior belief that the hypothesis is correct, before collecting the data.

  • The evidence is the total probability of obtaining the data, marginalising over viable hypotheses. It is usually the most difficult quantity to calculate unless simplifying assumptions are made.
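
A toy example of Bayes' theorem for two competing hypotheses (the prior and likelihood values are invented):

```python
# Two hypotheses with equal prior probabilities
prior = {'H1': 0.5, 'H2': 0.5}

# Likelihood: probability of obtaining the observed data under each hypothesis
likelihood = {'H1': 0.08, 'H2': 0.02}

# Evidence: total probability of the data, marginalised over the hypotheses
evidence = sum(likelihood[h] * prior[h] for h in prior)

# Posterior = likelihood * prior / evidence
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}
print(posterior)   # {'H1': 0.8, 'H2': 0.2}
```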

Maximum likelihood estimation
Fitting models to data

Glossary

accuracy
The relative amount of non-random deviation from the ‘true’ value of a quantity being measured. Measurements of the same quantity but with smaller systematic error are more accurate. See also: precision
argument
A value given to a python function or program when it runs. In python programming, the term is often used interchangeably (and inconsistently) with ‘parameter’, but here we restrict the use of the term ‘parameter’ to probability distributions only.
Bayesian
TBD
Bessel’s correction
The use of the factor \(\frac{1}{n-1}\) (where \(n\) is the sample size), rather than \(\frac{1}{n}\), when averaging the squared deviations from the sample mean, so that the sample variance becomes an unbiased estimator of the population variance. The value \(n-1\) is called the number of degrees of freedom. The correction compensates for the fact that the sample variance is calculated with respect to the sample mean and not the population mean, so the part of the population variance that produces the variance of the sample mean is already removed from it.
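For example, numpy's variance and standard deviation functions apply Bessel's correction only if the ddof argument (delta degrees of freedom) is set to 1:

```python
import numpy as np

x = np.array([2.1, 2.5, 1.9, 2.4, 2.2])

# Default ddof=0 divides by n; ddof=1 divides by n-1 (Bessel's correction),
# giving an unbiased estimator of the population variance
print(np.var(x), np.var(x, ddof=1))
```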
bias
The bias of an estimator is the difference between the expected value of the estimator and the true value of the quantity being estimated.
bivariate
Involving two variates, e.g. bivariate data is a type of data consisting of observations/measurements of two different variables; bivariate analysis studies the relationship between two paired measured variables.
categorical data
A type of data which takes on non-numerical values (e.g. subatomic particle types).
Cauchy-Schwarz inequality
TBD
central limit theorem
The theorem states that under general conditions of finite mean and variance, sums of variates drawn from non-normal distributions will tend towards being normally distributed, asymptotically with sample size \(n\).
cdf
A cumulative distribution function (cdf) gives the cumulative probability that a random variable following a given probability distribution may be less than or equal to a given value, i.e. the cdf gives \(P(X \leq x)\). The cdf is therefore limited to have values over the interval \([0,1]\). For a continuous random variable, the derivative function of a cdf is the pdf. For a discrete random variable, the cdf is the cumulative sum of the pmf.
chi-squared fitting
See weighted least squares
chi-squared test
TBD
conditional probability
If the probability of an event \(A\) depends on the occurrence of another event \(B\), \(A\) is said to be conditional on \(B\). The probability of \(A\) happening if \(B\) also happens is denoted \(P(A\vert B)\), i.e. the probability of ‘\(A\) conditional on \(B\)’ or of ‘\(A\) given \(B\)’. See also: independence.
confidence interval
TBD
confidence level
Often used as an alternative form when stating the significance level (\(\alpha\)), it is expressed as 1 minus the significance when quoted as a percentage. E.g. ‘the hypothesis is ruled out at the 95% confidence level’ (for \(\alpha=0.05\)).
continuous
Relating to a continuous random variable, i.e. may take on a continuous and infinite number of possible values within a range specified by the corresponding continuous probability distribution.
correlation coefficient
TBD
covariance
TBD
discrete
Relating to a discrete random variable, i.e. may only take on discrete values (e.g. integers) within a range specified by the corresponding discrete probability distribution.
distributions - Bernoulli
The result of single trial with two outcomes (usually described as ‘success’ and ‘failure’, with probability of success \(\theta\)) follows a Bernoulli distribution. If success is defined as \(X=1\) then a variate distributed as \(X\sim \mathrm{Bern}(\theta)\) has pmf \(p(x\vert \theta) = \theta^{x}(1-\theta)^{1-x} \quad \mbox{for }x=0,1\), with \(E[X]=\theta\) and \(V[X]=\theta(1-\theta)\). See also: binomial distribution.
distributions - binomial
The distribution of number of ‘successes’ \(x\) produced by \(n\) repeated Bernoulli trials with probability of success \(\theta\). \(X\sim \mathrm{Binom}(n,\theta)\) has pmf \(p(x\vert n,\theta) = \frac{n!}{(n-x)!x!} \theta^{x}(1-\theta)^{n-x} \quad \mbox{for }x=0,1,2,...,n.\), with \(E[X]=n\theta\) and \(V[X]=n\theta(1-\theta)\).
distributions - chi-squared
TBD
distributions - normal
A normally distributed variate \(X\sim N(\mu,\sigma)\) has pdf \(p(x\vert \mu,\sigma)=\frac{1}{\sigma \sqrt{2\pi}} e^{-(x-\mu)^{2}/(2\sigma^{2})}\), and location parameter \(\mu\), scale parameter \(\sigma\). Mean \(E[X]=\mu\) and variance \(V[X]=\sigma^{2}.\) The limiting distribution of sums of random variables (see: central limit theorem). The standard normal distribution has \(\mu=0\), \(\sigma^{2}=1\).
distributions - Poisson
The Poisson distribution gives the probability distribution of counts measured in a fixed interval or bin, assuming that the counts are independent and follow a constant mean rate of counts per interval \(\lambda\). For variates \(X\sim \mathrm{Pois}(\lambda)\), the pmf \(p(x \vert \lambda) = \frac{\lambda^{x}e^{-\lambda}}{x!}\), with \(E[X] = \lambda\) and \(V[X] = \lambda\). The Poisson distribution can be derived as a limiting case of the binomial distribution, for an infinite number of trials.
distributions - t
The distribution followed by the \(t\)-statistic, corresponding to the distribution of variates equal to \(T=X/Y\) where \(X\) is drawn from a standard normal distribution and \(Y\) is the square root of a variate drawn from a scaled chi-squared distribution, for a given number of degrees of freedom \(\nu\).
distributions - uniform
\(X\sim U(a,b)\) has pdf \(p(x\vert a,b)=\mathrm{constant}\) on interval \([a,b]\) (and zero elsewhere), and location parameter \(a\), scale parameter \(\lvert b-a \rvert\). Mean \(E[X] = (b+a)/2\) and variance \(V[X] = (b-a)^{2}/12\). Uniform random variates can be used to generate random variates from any other probability distribution via the ppf of that distribution.
estimator
A method for calculating from data an estimate of a given quantity. For example, the sample mean and variance are estimators of the population mean and variance. See also: bias, MLE.
event
In probability theory, an event is an outcome or set of outcomes of a trial to which a probability can be assigned. E.g. an experimental measurement or sample of measurements, a sample of observational data, a dice roll, a ‘hand’ in a card game, a sequence of computer-generated random numbers or the quantity calculated from them.
evidence
TBD
expectation
The expectation value of a quantity, which may be a random variable or a function of a random variable, is the integral (over the variable) of the quantity weighted by the pdf of the variable. In frequentist terms, expectation gives the mean of the random variates (or function of them) in the case of an infinite number of measurements.
false negative
TBD
false positive
TBD
frequentism
Interpretation of probability which defines the probability of an event as the limit of its frequency in many independent trials.
goodness of fit
TBD
histogram
A method of plotting the distribution of data by binning (assigning data values to discrete bins) and plotting either the number of values or counts per bin (sometimes denoted frequency), the counts normalised by the bin width to give the count density, or the count density further normalised by the total number of counts to give a probability density (sometimes just denoted density).
hypothesis
In statistical terms a hypothesis is a scientific question which is formulated in a way which can be tested using statistical methods. A null hypothesis is a specific kind of baseline hypothesis which is assumed true in order to formulate a statistical test to see whether it is rejected by the data.
hypothesis test
A statistical test, the result of which either rejects a (null) hypothesis to a given significance level (this is also called a significance test) or gives a probability that an alternative hypothesis is preferred over the null hypothesis, to explain the data.
independence
Two events are independent if the outcome of one does not affect the probability of the outcome of the other. Formally, if the events \(A\) and \(B\) are independent, \(P(A\vert B)=P(A)\) and \(P(A \mbox{ and } B)=P(A)P(B)\). See also conditional probability.
interquartile range
The IQR is a form of confidence interval corresponding to the range of data values from the 25th to the 75th percentile.
likelihood
TBD
likelihood function
TBD
mean
The mean \(\bar{x}\) for a quantity \(x_{i}\) measured from a sample of data is a statistic calculated as the average of the quantity, i.e. \(\frac{1}{n} \sum\limits_{i=1}^{n} x_{i}\). For a random variable \(X\) defined by a probability distribution with pdf \(p(x)\), the mean \(\mu\) is the expectation value of the variable, \(\mu=E[X]=\int^{+\infty}_{-\infty} xp(x)\mathrm{d}x\).
median
The median for a quantity measured from a sample of data, is a statistic calculated as the central value of the ordered values of the quantity. For a random variable defined by a probability distribution, the median corresponds to the value of the 50th percentile of the variable (i.e. with half the total probability below and above the median value).
member
A python variable contained within an object.
method
A python function which is tied to a particular python object. Each of an object’s methods typically implements one of the things it can do, or one of the questions it can answer.
MLE
TBD
mode
The mode is the most frequent value in a sample of data. For a random variable defined by a probability distribution, the mode is the value of the variable corresponding to the peak of the pdf.
multivariate
Involving three or more variates, e.g. multivariate data is a type of data consisting of observations/measurements of three or more variables; multivariate analysis studies the relationships between three or more variables, to see which are related and how.
mutual exclusivity
Two events are mutually exclusive if they cannot both occur, i.e. one event can only occur if the other does not. Formally, events \(A\) and \(B\) are mutually exclusive if \(P(A \mbox{ and } B)=0\), which requires \(P(A\vert B)=0\). For mutually exclusive events, it follows that \(P(A \mbox{ or } B)=P(A)+P(B)\).
object
A collection of conceptually related python variables (members) and functions using those variables (methods).
ordinal data
A type of categorical data which can be given a relative ordering or ranking but where the differences between ranks are not known or explicitly specified by the categories (e.g. stellar spectral types).
parameter
Probability distributions are defined by parameters which are specific to the distribution, but can be classified according to their effects on the distribution. A location parameter determines the location of the distribution on the variable (\(x\)) axis, with changes shifting the distribution on that axis. A scale parameter determines the width of the distribution and stretches or shrinks it along the \(x\)-axis. Shape parameters do something other than shifting/shrinking/stretching the distribution, changing the distribution shape in some way. Some distributions use a rate parameter, which is the reciprocal of the scale parameter.
pdf
A probability density function (pdf) gives the probability density (i.e. per unit variable) of a continuous probability distribution, i.e. the values of the pdf give the relative probability or frequency of occurrence of values of a random variable. The pdf should be normalised so that the definite integral over all possible values is unity. The integral function of the pdf is the cdf.
percentile
Value of an ordered variable (which may be data) below which a given percentage of the values fall (the exclusive definition; the inclusive definition corresponds to ‘at or below which’). E.g. 25% of values lie below the data value corresponding to the 25th percentile. For a random variable, the percentile corresponds to the value of the variable below which a given percentage of the probability is contained (i.e. it is the value of the variable corresponding to the inverse of the cdf - or ppf for the percentage probability expressed as a decimal fraction). See also: quantile.
pmf
The probability mass function (pmf) is the discrete equivalent of the pdf, corresponding to the probability of drawing a given integer value from a discrete probability distribution. The sum of pmf values for all possible outcomes from a discrete probability distribution should equal unity.
population
The notional population of random variates or objects from which a sample is drawn. A population may have some real equivalent (e.g. an actual population of objects which is being sampled). In the frequentist approach to statistics it can also represent the notional infinite set of trials from which a random variable is drawn.
posterior
TBD
ppf
A percent point function (ppf) gives the value of a variable as a function of the cumulative probability that it corresponds to, i.e. it is the inverse of the cdf.
precision
The relative amount of random deviation in a quantity being measured. Measurements of the same quantity but with smaller statistical error are more precise. See also: accuracy.
prior
TBD
probability distribution
Distribution giving the relative frequencies of occurrence of a random variable (or variables, for bivariate and multivariate distributions).
\(p\)-value
A statistical test probability calculated for a given test statistic and assumptions about how the test statistic is distributed (e.g. depending on the null hypothesis and any other assumptions required for the test).
quantile
Values which divide a probability distribution (or ordered data set) into equal steps in cumulative probability (or cumulative frequency, for data). Common forms of quantile include percentiles, quartiles (corresponding to steps of 25%: the 1st, 2nd - the median - and 3rd quartiles) and deciles (steps of 10%).
random variable
A variable which may take on random values (variates) with a range and frequency specified by a probability distribution.
random variate
Also known simply as a ‘variate’, a random variate is an observed outcome of a random variable, i.e. drawn from a probability distribution of that variable.
realisation
An observed outcome of a random process, e.g. it may be a set of random variates, or the result of an algorithm applied to a set of random variates.
rug plot
A method of plotting univariate data as a set of (usually vertical) marks representing each data value, along an axis (usually the \(x\)-axis). It is usually combined with a histogram to also show the frequency or probability distribution of the plotted variable.
sample
A set of measurements, drawn from an underlying population, either real (e.g. the height distribution of Dutch adults) or notional (the distribution of possible measurements from an experiment with some random measurement error). A sample may also refer to a set of random variates drawn from a probability distribution.
sample space
The set of all possible outcomes of an experiment or trial.
seed
(pseudo-)Random number generators must be ‘seeded’ using a number, usually an integer, which is typically provided automatically by a system call but may also be specified by the user. Starting from a given seed, a random number generator will return a fixed sequence of pseudo-random variates, as long as the generating function is called repeatedly without resetting the seed (in Python the seed can be set explicitly, e.g. with numpy.random.seed or by passing it to numpy.random.default_rng; otherwise it is set automatically when the generator is first initialised).
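For example:

```python
import numpy as np

# Resetting the seed reproduces the same sequence of pseudo-random variates
np.random.seed(38602)
print(np.random.normal(size=3))   # the same values whenever the seed is first reset to 38602

# The newer Generator interface takes the seed when the generator is created
rng = np.random.default_rng(38602)
print(rng.normal(size=3))
```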
significance level
The significance level \(\alpha\) is a pre-specified level of probability required from a significance test in order for a hypothesis to be rejected, i.e. the hypothesis is rejected if the \(p\)-value is less than or equal to \(\alpha\).
significance test
See hypothesis test
standard deviation
The standard deviation of a sample of data or a random variable with a probability distribution, is equal to the square-root of variance for that quantity. In the context of time-variable quantities it is also often called the root-mean-squared deviation (or just rms).
standard error
The standard error is the expected standard deviation on the sample mean (with respect to the ‘true’ population mean). For \(n\) measurements drawn from a population with variance \(\sigma^{2}\), the standard error is \(\sigma_{\bar{x}} = \sigma/\sqrt{n}\).
stationary process
A random process is said to be stationary if it is produced by random variates drawn from a probability distribution which is constant and does not change over time.
statistic
A single number calculated by applying a statistical algorithm or function to the values of the items in a sample. The sample mean, sample median and sample variance are all examples of statistics.
statistical error
A random error (deviation from the ‘true’ value(s)) in quantities obtained or derived from data, possibly resulting from random measurement error in the apparatus (or measurer) or due to intrinsic randomness in the quantity being measured (e.g. photon counts) or the sample obtained (i.e. the sample is a random subset of an underlying population, e.g. of stars in a cluster). See also: systematic error, precision.
statistical test
A test of whether a given test statistic is consistent with its distribution under a specified hypothesis (and associated assumptions).
systematic error
An error that is not random but is a systematic shift away from the ‘true’ value of the measured quantity obtained from data (or a quantity derived from it). E.g. a systematic error may be produced by a fault in the experimental setup or apparatus, or a flaw in the design of a survey so it is biased towards members of the population being sampled with specific properties in a way that cannot be corrected for. See also: statistical error, accuracy.
survival function
A function equal to 1 minus the cdf, i.e. it corresponds to the probability \(P(X\gt x)\) and is therefore useful for assessing \(p\)-values of test statistics.
test statistic
A statistic calculated from data for comparison with a known probability distribution which the test statistic is expected to follow if certain assumptions (including a given hypothesis about the data) are satisfied.
trial
An ‘experiment’ which results in a sample of (random) data. It may also refer to the process of generating a sample of random variates or quantities calculated from random variates, e.g. in a numerical experiment or simulation of a random process.
t-statistic
A test statistic which is defined for a sample with mean \(\bar{x}\) and standard deviation \(s_{x}\) with respect to a population of known mean \(\mu\) as: \(t = (\bar{x}-\mu)/(s_{x}/\sqrt{n})\). \(t\) is drawn from a t-distribution if the sample mean is normally distributed (e.g. via the central limit theorem or if the sample is drawn from a population which is itself normally distributed).
t-test
Any test where the test statistic follows a \(t\)-distribution under the null hypothesis, such as tests using the \(t\)-statistic.
univariate
Involving a single variate, e.g. univariate data is a type of data consisting only of observations/measurements of a single quantity or characteristic; univariate analysis studies statistical properties of a single quantity such as its statistical moments and/or probability distribution.
variance
The variance \(s_{x}^{2}\) for a quantity \(x_{i}\) measured from a sample of data, is a statistic calculated as the average of the squared deviations of the data values from the sample mean (corrected by Bessel’s correction), i.e. \(\frac{1}{n-1} \sum\limits_{i=1}^{n} (x_{i}-\bar{x})^{2}\). For a random variable \(X\) defined by a probability distribution with pdf \(p(x)\), the variance \(V[X]\) is the expectation value of the squared difference of the variable from its mean \(\mu\), \(V[X] = E[(X-\mu)^{2}] = \int^{+\infty}_{-\infty} (x-\mu)^{2}p(x)\mathrm{d}x\), which is equivalent to the expectation of squares minus the square of expectations of the variable, \(E[X^{2}]-E[X]^{2}\).
weighted least squares
TBD
z-statistic
A test statistic which is defined for a sample mean \(\bar{x}\) with respect to a population of known mean \(\mu\) and variance \(\sigma^{2}\) as: \(Z = (\bar{x}-\mu)/(\sigma/\sqrt{n})\). \(Z\) is drawn from a standard normal distribution if the sample mean is normally distributed (e.g. via the central limit theorem or if the sample is drawn from a population which is itself normally distributed).
z-test
Any test where the test statistic is normally distributed under the null hypothesis, such as tests using the \(z\)-statistic (although a \(z\)-test does not have to use the \(z\)-statistic).