Compute a confidence interval from sample data

Question:

I have sample data which I would like to compute a confidence interval for, assuming a normal distribution.

I have found and installed the numpy and scipy packages and have gotten numpy to return a mean and standard deviation (numpy.mean(data) with data being a list). Any advice on getting a sample confidence interval would be much appreciated.

Asked By: Bmayer0122


Answers:

Start by looking up the z-value for your desired confidence level in a look-up table. The confidence interval is then mean +/- z*sigma, where sigma is the estimated standard deviation of the sample mean, given by sigma = s / sqrt(n), with s the standard deviation computed from your sample data and n your sample size.
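
For example, a minimal sketch of this approach (the data here is made up; scipy is used only to obtain the z-value, and for small samples the t-based answers below are more appropriate):

import numpy as np
from scipy.stats import norm

data = [2.1, 2.5, 1.9, 2.8, 2.4]        # made-up example sample
confidence = 0.95
z = norm.ppf((1 + confidence) / 2.)     # two-sided z-value, ~1.96 for 95%
s = np.std(data, ddof=1)                # sample standard deviation
sigma = s / np.sqrt(len(data))          # estimated std. dev. of the sample mean
m = np.mean(data)
print(m - z * sigma, m + z * sigma)     # lower and upper bounds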

Answered By: bogatron

You can calculate like this:

import numpy as np
import scipy.stats

def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1)
    return m, m-h, m+h

Answered By: shasan

Here’s a shortened version of shasan’s code, calculating the 95% confidence interval of the mean of array a:

import numpy as np, scipy.stats as st

st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))

But using StatsModels’ tconfint_mean is arguably even nicer:

import statsmodels.stats.api as sms

sms.DescrStatsW(a).tconfint_mean()

The underlying assumptions for both are that the sample (array a) was drawn independently from a normal distribution with unknown standard deviation (see MathWorld or Wikipedia).

For large sample size n, the sample mean is normally distributed, and one can calculate its confidence interval using st.norm.interval() (as suggested in Jaime’s comment). But the above solutions are correct also for small n, where st.norm.interval() gives confidence intervals that are too narrow (i.e., “fake confidence”). See my answer to a similar question for more details (and one of Russ’s comments here).

Here’s an example where the correct options give (essentially) identical confidence intervals:

In [9]: a = range(10,14)

In [10]: mean_confidence_interval(a)
Out[10]: (11.5, 9.4457397432391215, 13.554260256760879)

In [11]: st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))
Out[11]: (9.4457397432391215, 13.554260256760879)

In [12]: sms.DescrStatsW(a).tconfint_mean()
Out[12]: (9.4457397432391197, 13.55426025676088)

And finally, the incorrect result using st.norm.interval():

In [13]: st.norm.interval(0.95, loc=np.mean(a), scale=st.sem(a))
Out[13]: (10.23484868811834, 12.76515131188166)
Answered By: Ulrich Stern

Starting with Python 3.8, the standard library provides the NormalDist object as part of the statistics module:

from statistics import NormalDist

def confidence_interval(data, confidence=0.95):
    dist = NormalDist.from_samples(data)
    z = NormalDist().inv_cdf((1 + confidence) / 2.)
    h = dist.stdev * z / ((len(data) - 1) ** .5)
    return dist.mean - h, dist.mean + h

This:

  • Creates a NormalDist object from the data sample (NormalDist.from_samples(data)), which gives us access to the sample’s mean and standard deviation via NormalDist.mean and NormalDist.stdev.

  • Computes the Z-score based on the standard normal distribution (represented by NormalDist()) for the given confidence, using the inverse of the cumulative distribution function (inv_cdf).

  • Produces the confidence interval based on the sample’s standard deviation and mean.


This assumes the sample size is big enough (let’s say more than ~100 points) to justify using the standard normal distribution rather than the Student’s t distribution to compute the z value.
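
If the sample is small, one possible variant (a sketch, not part of the original answer; it assumes scipy is available and uses the standard s / sqrt(n) standard error) is to swap the z-score for a Student’s t critical value:

from statistics import NormalDist
from scipy.stats import t  # scipy assumed available, used only for the t critical value

def t_confidence_interval(data, confidence=0.95):
    dist = NormalDist.from_samples(data)          # sample mean and (n-1) std. dev.
    n = len(data)
    t_val = t.ppf((1 + confidence) / 2., n - 1)   # t critical value with n-1 degrees of freedom
    h = dist.stdev * t_val / (n ** .5)            # half-width: t * s / sqrt(n)
    return dist.mean - h, dist.mean + h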

Answered By: Xavier Guihot

Regarding Ulrich’s answer: it uses the t-value, which applies when the true variance is unknown and the only information you have is the sample data.

bogatron’s answer, by contrast, involves z-tables. Z-tables are used when the variance is already known and provided in addition to the sample data. In that case, sigma is not the estimated standard deviation of the sample mean; it is already known.

Let’s say variance is known and we want 95% confidence:

from scipy.stats import norm

# sample_mean, sigma (the known standard deviation), and n (the sample size) are assumed to be defined
confidence = 0.95
q = confidence + (1 - confidence) / 2   # two-sided quantile for the z critical value
c_lb = sample_mean - norm.ppf(q) * (sigma / (n ** 0.5))   # lower bound
c_ub = sample_mean + norm.ppf(q) * (sigma / (n ** 0.5))   # upper bound
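
Equivalently (a small sketch with made-up numbers; sigma is taken as known rather than estimated), scipy’s norm.interval gives both bounds in one call:

from scipy.stats import norm

sample_mean, sigma, n = 11.5, 1.29, 4   # hypothetical values, sigma assumed known
c_lb, c_ub = norm.interval(0.95, loc=sample_mean, scale=sigma / n ** 0.5)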

With only sample data and an unknown variance (meaning that the variance has to be calculated solely from the sample data), Ulrich’s answer works perfectly. However, you may want to specify the confidence level yourself. If your data is a and you want a 95% confidence interval (alpha=0.05):

import statsmodels.stats.api as sms
conf = sms.DescrStatsW(a).tconfint_mean(alpha=0.05)
conf
Answered By: Candace Agonafir

Based on the original but with some concrete examples:

import numpy as np

def mean_confidence_interval(data, confidence: float = 0.95) -> tuple[float, np.ndarray]:
    """
    Returns a tuple of the mean and the confidence interval half-width h (i.e. the interval is mean +/- h) for the given data.
    Data is a np.arrayable iterable.

    ref:
        - https://stackoverflow.com/a/15034143/1601580
        - https://github.com/WangYueFt/rfs/blob/f8c837ba93c62dd0ac68a2f4019c619aa86b8421/eval/meta_eval.py#L19
    """
    import scipy.stats
    import numpy as np

    a: np.ndarray = 1.0 * np.array(data)
    n: int = len(a)
    if n == 1:
        import logging
        logging.warning('The first dimension of your data is 1; perhaps you meant to transpose your data or remove the '
                        'singleton dimension?')
    m, se = a.mean(), scipy.stats.sem(a)
    tp = scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    h = se * tp
    return m, h

def ci_test_float():
    import numpy as np
    # - one WRONG data set of size 1 by N
    data = np.random.randn(1, 30)  # wrong shape: len(data) gives n=1, so each column is treated as a data set of size 1
    m, ci = mean_confidence_interval(data)
    print("-- you should get a mean and a list of nan CIs (since the data is in the wrong format, it thinks it's 30 "
          "data sets of length 1)")
    print(m, ci)

    # right data as N by 1
    data = np.random.randn(30, 1)
    m, ci = mean_confidence_interval(data)
    print('-- gives a mean and a list of length 1 for a single CI (since it thinks you have a single data set)')
    print(m, ci)

    # multiple data sets (7) of size N (=30)
    data = np.random.randn(30, 7)
    print("-- gives 7 CIs for the 7 data sets of length 30. 30 is the number you'd want to be large if you were "
          "using z(p) due to the CLT.")
    m, ci = mean_confidence_interval(data)
    print(m, ci)

ci_test_float()

output:

-- you should get a mean and a list of nan CIs (since the data is in the wrong format, it thinks it's 30 data sets of length 1)
0.1431623130952463 [nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan]
-- gives a mean and a list of length 1 for a single CI (since it thinks you have a single data set)
0.04947206018132864 [0.40627264]
-- gives 7 CIs for the 7 data sets of length 30. 30 is the number you'd want to be large if you were using z(p) due to the CLT.
-0.03585104402718902 [0.31867309 0.35619134 0.34860011 0.3812853  0.44334033 0.35841138
 0.40739732]

I think the Num_samples by Num_datasets orientation is right, but if it’s not, let me know in the comment section.


What type of data does it work for?

I think it can be used for any data, because of the following:

I believe it is fine since the mean and std are calculated for general numeric data, and the z_p/t_p value only depends on the confidence level and the sample size, so it is independent of assumptions on the distribution of the data.

So it can be used for regression & classification, I believe.


As a bonus, a torch implementation that uses (nearly) only torch:

import scipy.stats
import torch
from torch import Tensor


def torch_compute_confidence_interval(data: Tensor,
                                      confidence: float = 0.95
                                      ) -> tuple[Tensor, Tensor]:
    """
    Computes the confidence interval for a given survey of a data set.
    """
    n: int = len(data)
    mean: Tensor = data.mean()
    # se: Tensor = scipy.stats.sem(data)  # compute standard error
    # se, mean: Tensor = torch.std_mean(data, unbiased=True)  # compute standard error
    se: Tensor = data.std(unbiased=True) / (n ** 0.5)  # standard error of the mean
    t_p: float = float(scipy.stats.t.ppf((1 + confidence) / 2., n - 1))  # t critical value, n-1 dof
    ci = t_p * se
    return mean, ci
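
A quick usage sketch (made-up data, not part of the original answer):

import torch

data = torch.randn(30)   # hypothetical sample of 30 values
mean, ci = torch_compute_confidence_interval(data, confidence=0.95)
print(f'{mean.item():.4f} +/- {ci.item():.4f}')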

Some comments on CI (or see https://stats.stackexchange.com/questions/554332/confidence-interval-given-the-population-mean-and-standard-deviation?noredirect=1&lq=1):

"""
Review for confidence intervals. Confidence intervals say that the true mean is inside the estimated confidence interval
(the r.v. the user generates). In particular it says:
    Pr[mu^* in [mu_n +- t.val(p) * std_n / sqrt(n) ] ] >= p
e.g. p = 0.95
This does not say that for a specific CI you compute the true mean is in that interval with prob 0.95. Instead it means
that if you surveyed/sampled 100 data sets D_n = {x_i}^n_{i=1} of size n (where n is ideally >=30) then for 95 of those
you'd expect to have the truee mean inside the CI compute for that current data set. Note you can never check for which
ones mu^* is in the CI since mu^* is unknown. If you knew mu^* you wouldn't need to estimate it. This analysis assumes
that the the estimator/value your estimating is the true mean using the sample mean (estimator). Since it usually uses
the t.val or z.val (second for the standardozed r.v. of a normal) then it means the approximation that mu_n ~ gaussian
must hold. This is most likely true if n >= 0. Note this is similar to statistical learning theory where we use
the MLE/ERM estimator to choose a function with delta, gamma etc reasoning. Note that if you do algebra you can also
say that the sample mean is in that interval but wrt mu^* but that is borning, no one cares since you do not know mu^*
so it's not helpful.

An example use could be for computing the CI of the loss (e.g. 0-1, CE loss, etc). The mu^* you want is the expected
risk. So x_i = loss(f(x_i), y_i) and you are computing the CI for what is the true expected risk for that specific loss
function you choose. So mu_n = emperical mean of the loss and std_n = (unbiased) estimate of the std and then you can
simply plug in the values.

Assumptions for p-CI:
    - we are making a statement that mu^* is in mu+-pCI = mu+-t_p * sig_n / sqrt n, sig_n ~ Var[x] is inside the CI
    p% of the time.
    - we are estimating mu^, a mean
    - since the quantity of interest is mu^, then the z_p value (or p-value, depending which one is the unknown), is
    computed using the normal distribution.
    - p(mu) ~ N(mu; mu_n, sig_n/ sqrt n), vial CTL which holds for sample means. Ideally n >= 30.
    - x ~ p^*(x) are iid.

Std_n vs t_p*std_n/ sqrt(n)
    - std_n = var(x) is more pessimistic but holds always. Never shrinks as n->infity
    - but if n is small then pCI might be too small and your "lying to yourself". So if you have very small data
    perhaps doing std_n for the CI is better. That holds with prob 99.9%. Hopefuly std is not too large for your
    experiments to be invalidated.

ref:
    - https://stats.stackexchange.com/questions/554332/confidence-interval-given-the-population-mean-and-standard-deviation?noredirect=1&lq=1
    - https://stackoverflow.com/questions/70356922/what-is-the-proper-way-to-compute-95-confidence-intervals-with-pytorch-for-clas
    - https://www.youtube.com/watch?v=MzvRQFYUEFU&list=PLUl4u3cNGP60hI9ATjSFgLZpbNJ7myAg6&index=205
"""
Answered By: Charlie Parker