
Standard Deviation Calculator

Computes both the sample SD (n − 1 divisor, used in most statistics work) and the population SD (n divisor) for the same dataset.

Tip: paste from a spreadsheet column or type values directly. Press Cmd/Ctrl+Enter to compute immediately.

Sample SD (s) = 2.1381

  Count (n): 8      Sum: 40.0000     Mean: 5.0000     Median: 4.5000
  Min: 2.0000       Max: 9.0000      Range: 7.0000    Σ(x − x̄)²: 32.0000

  Sample (n − 1):     Variance s² = 4.5714     Std. Dev. s = 2.1381
  Population (n):     Variance σ² = 4.0000     Std. Dev. σ = 2.0000


Worked Examples

Test scores

8 students: 2, 4, 4, 4, 5, 5, 7, 9

A teacher records eight test scores. Treating these as a sample, find the standard deviation.

  1. Mean: (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5.
  2. Squared deviations from mean: 9, 1, 1, 1, 0, 0, 4, 16 → sum = 32.
  3. Sample variance: 32 / (n − 1) = 32 / 7 ≈ 4.5714.
  4. Sample SD: √4.5714 ≈ 2.1381.
  5. Population SD (for comparison): √(32 / 8) = √4 = 2.

Sample SD (≈2.138) is slightly larger than population SD (2.000) because Bessel's correction (n − 1) accounts for the sample mean being an estimate.
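The steps above can be checked with Python's standard library `statistics` module, which exposes both divisors directly:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean = sum(scores) / len(scores)            # 40 / 8 = 5.0
ss = sum((x - mean) ** 2 for x in scores)   # sum of squared deviations = 32.0

sample_sd = statistics.stdev(scores)        # divides by n - 1: sqrt(32/7) ~ 2.1381
population_sd = statistics.pstdev(scores)   # divides by n:     sqrt(32/8) = 2.0
```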

Heights (cm)

10 measurements: 165, 170, 172, 168, 175, 169, 173, 171, 167, 174

Adult height measurements from a small sample. Compute the descriptive statistics.

  1. Sum = 1704; n = 10; mean = 170.4 cm.
  2. Compute each (xᵢ − 170.4)² and sum: 92.4.
  3. Sample variance: 92.4 / 9 ≈ 10.2667.
  4. Sample SD: √10.2667 ≈ 3.2042 cm.
  5. Population SD: √(92.4 / 10) ≈ 3.0397 cm.

Heights are typically reported with sample SD, since any specific group of 10 is treated as a sample of the broader population.
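The same recipe written out with no libraries, mirroring steps 1-5 (values agree up to float rounding):

```python
import math

heights = [165, 170, 172, 168, 175, 169, 173, 171, 167, 174]
n = len(heights)

mean = sum(heights) / n                     # 1704 / 10 = 170.4
ss = sum((x - mean) ** 2 for x in heights)  # approximately 92.4

sample_sd = math.sqrt(ss / (n - 1))         # approximately 3.2042 cm
population_sd = math.sqrt(ss / n)           # approximately 3.0397 cm
```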

All same values

Constant data: 7, 7, 7, 7, 7

When every value is the same, there is no variation and the standard deviation is 0.

  1. Mean = 7.
  2. Each deviation = 0, so each squared deviation = 0.
  3. Sum of squared deviations = 0.
  4. Sample variance = 0 / 4 = 0; sample SD = 0.
  5. Population SD = 0 too.

SD = 0 is a meaningful answer: it tells you there is no spread in the data, not that the calculation failed.
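A quick sanity check with the stdlib confirms this edge case:

```python
import statistics

constant = [7, 7, 7, 7, 7]

sample_sd = statistics.stdev(constant)       # 0.0, no spread at all
population_sd = statistics.pstdev(constant)  # 0.0 as well
```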

Sample Standard Deviation

Use when your data is a sample drawn from a larger population (the typical case in research and statistics classes). The (n − 1) divisor — Bessel's correction — makes the *sample variance* s² an unbiased estimator of σ². Note that taking the square root introduces a small downward bias, so s is technically a biased estimator of σ; the (n − 1) form is still the standard convention in nearly all statistical software.

s = √( Σ(xᵢ − x̄)² / (n − 1) )

Population Standard Deviation

Use when your dataset IS the entire population (every member is included), not just a sample. The n divisor — no correction — is correct because there is no inference to a wider population. The two formulas differ only by which divisor sits under the squared deviations.

σ = √( Σ(xᵢ − μ)² / n )
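Both formulas sketched as plain functions, so the only difference, the divisor, is explicit:

```python
import math

def sample_sd(data):
    # s = sqrt( sum((x_i - mean)^2) / (n - 1) )
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

def population_sd(data):
    # sigma = sqrt( sum((x_i - mean)^2) / n )
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / n)
```

On the test-scores data, `sample_sd([2, 4, 4, 4, 5, 5, 7, 9])` gives about 2.1381 while `population_sd` of the same list gives exactly 2.0.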

How It Works

Standard deviation measures how spread out a dataset is around its mean. The calculator computes both the sample standard deviation s (with the n − 1 Bessel divisor) and the population standard deviation σ (with the n divisor) for the same data so you can pick the right one.

The procedure is the same for both: compute the mean x̄, find each value's deviation from the mean (xᵢ − x̄), square those deviations, sum them up, divide by either n (population) or n − 1 (sample), and take the square root. The squaring step matters — it makes positive and negative deviations both contribute to the spread instead of cancelling, and it weights large deviations more heavily than small ones.

The calculator also reports the variance (the SD without the square root), the median, the min/max/range, and the sum of squared deviations Σ(xᵢ − x̄)² so you can verify the calculation step by step.
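A sketch of that pipeline in Python (the calculator itself uses BigNumber arithmetic; plain floats are shown here for clarity):

```python
import math
import statistics

def describe(data):
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations
    return {
        "n": n, "sum": sum(data), "mean": mean,
        "median": statistics.median(data),
        "min": min(data), "max": max(data),
        "range": max(data) - min(data),
        "ss": ss,
        "sample_var": ss / (n - 1), "sample_sd": math.sqrt(ss / (n - 1)),
        "pop_var": ss / n, "pop_sd": math.sqrt(ss / n),
    }

result = describe([2, 4, 4, 4, 5, 5, 7, 9])
```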

Example Problem

A teacher records the test scores of eight students: 2, 4, 4, 4, 5, 5, 7, 9. Treat these scores as a sample drawn from the broader student population and compute the sample standard deviation.

  1. Find the mean: x̄ = (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5.
  2. Compute each deviation from the mean (xᵢ − x̄): −3, −1, −1, −1, 0, 0, 2, 4.
  3. Square each deviation: 9, 1, 1, 1, 0, 0, 4, 16.
  4. Sum the squared deviations: 9 + 1 + 1 + 1 + 0 + 0 + 4 + 16 = 32.
  5. Divide by n − 1 = 7 (sample variance): 32 / 7 ≈ 4.5714.
  6. Take the square root: s = √4.5714 ≈ 2.1381.
  7. (For comparison, the population formula divides by n = 8 instead: σ² = 32 / 8 = 4, σ = 2.)

The sample SD is slightly larger than the population SD because Bessel's correction (n − 1 instead of n) accounts for the fact that the sample mean is an estimate of the true mean — there's one less degree of freedom available to estimate variability.

Key Concepts

The single most common mistake in basic statistics is using the wrong divisor. Use n − 1 (sample SD) when your data is a sample drawn from a larger population — this is the default in nearly all statistical software, including the spreadsheet function STDEV.S and R's sd(); NumPy's `np.std` defaults to the population formula, so pass `ddof=1` to get the sample SD. Use n (population SD) only when your dataset is genuinely the entire population (every voter, every product, every student).

Variance and standard deviation differ by a square root: variance has the units of x², SD has the original units. SD is more interpretable because it shares units with the data — 'mean test score = 75 with SD 10' is concrete, while 'variance = 100' is abstract. Standard deviation is the foundation of z-scores, confidence intervals, and hypothesis tests; nearly every downstream stats calculation traces back to this one quantity.
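Assuming NumPy is available, the divisor is controlled by `ddof` (delta degrees of freedom):

```python
import numpy as np

data = [2, 4, 4, 4, 5, 5, 7, 9]

pop_sd = np.std(data)            # ddof=0 by default: population SD, exactly 2.0 here
samp_sd = np.std(data, ddof=1)   # Bessel's correction: sample SD, about 2.1381
```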

Applications

  • Test scores and grading — describing the spread of student performance around a class average
  • Investment risk — the SD of returns is a common measure of asset volatility
  • Quality control — monitoring whether process measurements stay within ±k SDs of a target value
  • Scientific experiments — reporting a mean ± SD pair for measured quantities in lab results
  • Comparing distributions — two datasets with the same mean can have very different SDs, and that difference matters
  • Computing z-scores and confidence intervals — both require an SD as input, so this calculator is upstream of most other z-tools on this site

Common Mistakes

  • Using n instead of n − 1 (or vice versa) — sample SD uses n − 1 (Bessel's correction); population SD uses n. Most statistical software defaults to sample.
  • Forgetting to take the square root — that gives variance, not standard deviation. Variance has squared units (e.g. m²), SD has original units (e.g. m).
  • Computing the SD around a different center (e.g. a hypothesized value μ₀) instead of the actual sample mean — the SD formula always uses x̄, the mean of the data at hand.
  • Treating SD as a one-sided measure — SD is symmetric: it captures spread on both sides of the mean equally.
  • Confusing SD with standard error of the mean (SEM = SD / √n) — SD describes the data's spread; SEM describes how precisely the sample mean estimates the true mean.
  • Reporting SD without the mean — '±SD' is meaningful only relative to a center value; always report mean ± SD together.
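The SD-versus-SEM distinction from the list above comes down to one division:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

sd = statistics.stdev(data)        # spread of the data, about 2.1381
sem = sd / math.sqrt(len(data))    # precision of the mean, about 0.7559
```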

Frequently Asked Questions

What's the difference between sample SD and population SD?

The formulas differ only by their divisor: sample SD uses n − 1, population SD uses n. Use sample SD when your dataset is a sample drawn from a larger population (the usual case in inferential statistics). Use population SD only when your dataset IS the entire population — every possible observation. Sample SD is slightly larger than population SD on the same data; the n − 1 adjustment compensates for the fact that the sample mean is itself an estimate.

Which one should I use?

Use sample SD (n − 1) unless you specifically know your data is the entire population. Excel's STDEV.S, Google Sheets' STDEV, R's sd(), and Python's pandas .std() all default to sample SD. STDEV.P / np.std (with ddof=0) compute population SD. When in doubt, pick sample SD — it's the standard choice in research, education, and inferential statistics.

What does standard deviation actually measure?

Standard deviation summarizes how spread out a dataset is around its mean. Technically it's the *root mean square* deviation — square each deviation from the mean, average those squared deviations, then take the square root. That's close in spirit to a typical distance from the mean, though the literal average absolute distance is a related but distinct measure called the mean absolute deviation. A small SD means the data clusters tightly around the mean; a large SD means it's spread out. For roughly normally distributed data, about 68% of values fall within ±1 SD of the mean, 95% within ±2 SDs, and 99.7% within ±3 SDs (the 68-95-99.7 rule).
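The 68-95-99.7 percentages are values of the standard normal CDF; the stdlib error function reproduces them, since P(|Z| ≤ k) = erf(k/√2):

```python
import math

# fraction of a normal distribution within k SDs of the mean
within = {k: math.erf(k / math.sqrt(2)) for k in (1, 2, 3)}
# within[1] ~ 0.6827, within[2] ~ 0.9545, within[3] ~ 0.9973
```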

What's the difference between SD and variance?

Variance is the average squared deviation from the mean; SD is the square root of variance. They carry the same information but in different units: variance has the original units squared (e.g. cm²), while SD shares the original units (e.g. cm). SD is usually preferred for reporting because it's interpretable on the data's natural scale.

Can the standard deviation be zero or negative?

SD is always ≥ 0 by construction (it's a square root of a non-negative quantity). SD = 0 happens only when every data point is identical — there's no variation to measure. SD can never be negative. If you compute a negative result, something has gone wrong in the calculation.

Why square the deviations?

Squaring serves two purposes. First, it forces every deviation to contribute positively to the sum — without squaring, positive and negative deviations would cancel out and Σ(xᵢ − x̄) would always equal 0 by definition. Second, squaring penalizes larger deviations more than smaller ones and gives the result clean mathematical properties (for example, variances of independent variables add). The square root at the end returns the result to the data's original units.
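The cancellation is easy to see numerically:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)

raw = sum(x - mean for x in data)             # 0.0, signed deviations cancel
squared = sum((x - mean) ** 2 for x in data)  # 32.0, squaring preserves the spread
```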

How is SD related to z-scores?

A z-score measures how many standard deviations a value sits from the mean: z = (x − μ) / σ. So SD is the unit of measurement for z-scores. A z-score of 2 means '2 SDs above the mean'; a z-score of −1.5 means '1.5 SDs below the mean'. Every z-table, hypothesis test, and confidence interval starts from this relationship.
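Using the test-scores example's population values (μ = 5, σ = 2):

```python
mu, sigma = 5.0, 2.0         # from the worked test-scores example

z_high = (9 - mu) / sigma    # 2.0: the top score sits 2 SDs above the mean
z_low = (2 - mu) / sigma     # -1.5: the lowest score sits 1.5 SDs below
```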

What if I have a very large dataset?

The two-pass approach the calculator uses (compute mean, then compute squared deviations from that mean) is numerically stable and handles thousands of values without precision loss. For datasets where you can't load everything into memory, statistical software uses streaming methods like Welford's online algorithm — but for any dataset that fits in a textarea, the standard approach is fine.
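A sketch of the Welford's algorithm mentioned above: one pass, constant memory, numerically stable.

```python
import math

def welford_sample_sd(stream):
    """Streaming sample SD: never stores the data, only three running values."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)  # running sum of squared deviations
    return math.sqrt(m2 / (n - 1))
```

It agrees with the two-pass result on the test-scores data (about 2.1381), while accepting any iterable, including one too large to hold in memory.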

Reference: The calculator implements the standard textbook formulas: sample variance s² = Σ(xᵢ − x̄)² / (n − 1) and population variance σ² = Σ(xᵢ − μ)² / n. Standard deviations are square roots of the corresponding variances. All intermediate calculations use BigNumber arithmetic to avoid floating-point precision issues with long datasets. The result also reports the count, sum, mean, median, min, max, range, and the raw sum of squared deviations Σ(xᵢ − x̄)² so the calculation is fully auditable.

Related Calculators

  • Z-Score Calculator — Convert any value to a z-score: z = (x − μ) / σ. Uses the SD computed here.
  • Confidence Interval — Build a CI around the mean using the SD as the spread input.
  • Sample Size — Determine n needed for a target margin of error using a planning σ.
  • One-Sample Z-Test — Test a sample mean against a hypothesized μ₀ when σ is known.
  • P-Value Calculator — Convert any z-score to a p-value directly.
