Proportion Z-Test Calculator
Based on the standard normal distribution (μ = 0, σ = 1). Tests H₀: p = p₀ (one-proportion) or H₀: p₁ = p₂ (two-proportion).
One-Proportion · Two-Tailed
A polling firm samples 1,000 voters and finds 520 in favor. Test whether true support differs from 50% at α = 0.05.
The sample share of 52% looks suggestive, but with n = 1,000 the margin of error is roughly ±3 points and 50% is comfortably inside that band. Always pair the p-value with a confidence interval around p̂.
Two-Proportion · Two-Tailed
Variant A converts 80 of 200 visitors; variant B converts 60 of 200. Test whether the conversion rates differ at α = 0.05.
The 10-point lift is statistically significant at α = 0.05, but check whether the absolute difference and the resulting confidence interval are practically meaningful for your business case before shipping the variant.
One-Proportion · Right-Tailed
A factory inspects 500 units and finds 60 defective. Test whether the true defect rate exceeds the 10% spec at α = 0.05.
Borderline result — at the more lenient α = 0.10 the test would reject. With binary inspection data, also consider the cost of a false alarm (over-reacting to noise) vs. a missed signal (a real spec drift) before treating α = 0.05 as the only threshold.
Tests whether a single sample proportion p̂ = x/n differs from a hypothesized value p₀. The standard error uses p₀ rather than p̂ because under H₀ the true proportion is p₀ — this is what makes it a hypothesis test rather than a confidence interval.
z = (p̂ − p₀) / √(p₀(1 − p₀)/n)
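The one-proportion statistic can be sketched in a few lines of Python using only the standard library; the helper names `norm_cdf` and `one_prop_z` are ours, chosen for illustration, not part of the calculator:

```python
from math import sqrt, erf

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

def one_prop_z(x, n, p0, tail="two"):
    p_hat = x / n
    se = sqrt(p0 * (1 - p0) / n)   # SE uses p0, not p_hat, because H0 fixes p = p0
    z = (p_hat - p0) / se
    if tail == "two":
        p = 2 * (1 - norm_cdf(abs(z)))
    elif tail == "right":
        p = 1 - norm_cdf(z)
    else:  # left-tailed
        p = norm_cdf(z)
    return z, p

# Polling example: 520 of 1,000 in favor, H0: p = 0.50, two-tailed
z, p = one_prop_z(520, 1000, 0.5)
print(round(z, 3), round(p, 3))  # z ≈ 1.265, p ≈ 0.206 → fail to reject
```

Running the factory example through `one_prop_z(60, 500, 0.10, tail="right")` reproduces the borderline p ≈ 0.068 discussed above.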
Tests whether two independent sample proportions differ. Uses the pooled estimate p̂_pooled = (x₁ + x₂)/(n₁ + n₂) under H₀: p₁ = p₂ — pooling produces a more powerful test than the unpooled (Wald) form when the null is true.
z = (p̂₁ − p̂₂) / √(p̂_pooled (1 − p̂_pooled)(1/n₁ + 1/n₂))
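The pooled two-proportion form translates directly to code. A minimal sketch, again standard library only, with the A/B example from above (the helper name `two_prop_z` is ours):

```python
from math import sqrt, erf

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)   # common proportion estimate under H0: p1 = p2
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - norm_cdf(abs(z)))   # two-tailed p-value
    return z, p

# Variant A: 80/200, Variant B: 60/200
z, p = two_prop_z(80, 200, 60, 200)
print(round(z, 3), round(p, 3))  # z ≈ 2.097, p ≈ 0.036 → reject at α = 0.05
```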
A proportion z-test asks whether observed success rates differ from a benchmark or from each other by more than chance would predict. The one-proportion test compares a single sample's success rate p̂ = x/n to a hypothesized value p₀ — useful for checking whether a poll deviates from a known baseline or whether a defect rate exceeds spec. The two-proportion test compares two independent samples' success rates p̂₁ and p̂₂ — useful for A/B tests, comparing conversion rates across segments, or comparing response rates between treatment and control. Both forms compute a z-statistic, convert it to a p-value via the standard normal distribution, and compare that against your chosen significance level α to reach a reject or fail-to-reject decision on H₀. Both rely on the normal approximation, which works well when np ≥ 10 and n(1 − p) ≥ 10 in each group.
A polling firm samples 1,000 voters and finds 520 in favor of a candidate. Test whether the true level of support differs from 50% at α = 0.05 (two-tailed).
A 52% sample share looks suggestive, but with n = 1,000 the margin of error around p̂ is about ±3.1 points — 50% is comfortably inside that band. Reporting the sample proportion alongside a 95% confidence interval gives more context than the p-value alone.
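The ±3.1-point figure is just the 95% Wald margin of error around p̂, which can be checked in a couple of lines (standard library only):

```python
from math import sqrt

p_hat, n = 520 / 1000, 1000
z_crit = 1.959964          # 97.5th percentile of the standard normal
# Wald margin of error uses p_hat (we are estimating p, not testing H0)
me = z_crit * sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - me, p_hat + me
print(round(me, 4), round(lo, 3), round(hi, 3))  # 0.031 0.489 0.551
```

Since 0.50 lies inside [0.489, 0.551], the interval tells the same story as the non-significant p-value.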
The proportion z-test rests on the central limit theorem: for large n, the sampling distribution of p̂ is approximately normal with mean p and variance p(1 − p)/n. The one-proportion test substitutes the hypothesized p₀ into the variance formula because under H₀ the true proportion is p₀; the two-proportion test pools the two samples to estimate the common proportion under H₀: p₁ = p₂. The normal approximation requires reasonably large samples — a common rule of thumb is np ≥ 10 and n(1 − p) ≥ 10 in each group. For very small counts or when p is near 0 or 1, switch to an exact test (binomial test for one sample, Fisher's exact for two). For confidence intervals around p̂ rather than hypothesis tests, use the unpooled Wald form sqrt(p̂(1 − p̂)/n) or the more accurate Wilson interval — the test-statistic SE and the CI SE are not the same.
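The difference between the Wald and Wilson intervals is easiest to see at small n, where the Wald form can even spill outside [0, 1]. A sketch under the usual 95% critical value (helper names are ours):

```python
from math import sqrt

def wald_ci(x, n, z=1.959964):
    # Unpooled Wald interval: p_hat ± z * sqrt(p_hat(1 - p_hat)/n)
    p = x / n
    me = z * sqrt(p * (1 - p) / n)
    return p - me, p + me

def wilson_ci(x, n, z=1.959964):
    # Wilson score interval: shrinks toward 1/2 and stays inside [0, 1]
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 9 successes in 10 trials: Wald exceeds 1.0, Wilson does not
print([round(b, 3) for b in wald_ci(9, 10)])    # [0.714, 1.086]
print([round(b, 3) for b in wilson_ci(9, 10)])  # [0.596, 0.982]
```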
A proportion z-test compares observed success rates against a hypothesized value (one-proportion) or against each other (two-proportion). It uses the central-limit-theorem normal approximation to the binomial: for large enough samples, the distribution of p̂ = x/n is approximately normal, so a z-statistic and standard normal p-value can be computed.
Use one-proportion when you're comparing a single sample's success rate p̂ to a fixed hypothesized value p₀ — for example, testing whether a defect rate exceeds 5% or whether a poll deviates from 50%. Use two-proportion when comparing two independent groups' success rates against each other — for example, A/B testing or comparing two clinical arms.
Under H₀ the true proportion is p₀, so the variance of p̂ is p₀(1 − p₀)/n. Substituting p₀ keeps the test statistic's distribution under H₀ centered at zero with unit variance. Confidence intervals around p̂ instead use the Wald form sqrt(p̂(1 − p̂)/n) — those serve a different purpose (estimating the unknown p) and don't assume H₀.
Under H₀: p₁ = p₂ both groups share a common proportion, so the most efficient estimate is the pooled p̂_pooled = (x₁ + x₂)/(n₁ + n₂). Pooling produces a more powerful test than the unpooled (Wald) form when H₀ is true. Confidence intervals for the difference instead use the unpooled form, since CIs do not assume the proportions are equal.
A common rule of thumb is np ≥ 10 and n(1 − p) ≥ 10 in each group, where p is the relevant hypothesized or pooled proportion. For smaller samples or proportions near 0 or 1, use an exact test (binomial test for one sample, Fisher's exact for two) — the normal approximation gets unreliable in those regimes.
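This rule of thumb is a one-line check in code; the function name is ours:

```python
def normal_approx_ok(n, p):
    # Require at least 10 expected successes and 10 expected failures
    return n * p >= 10 and n * (1 - p) >= 10

print(normal_approx_ok(1000, 0.5))  # True: 500 expected successes, 500 failures
print(normal_approx_ok(30, 0.05))   # False: only 1.5 expected successes
```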
No. The two-proportion z-test assumes the two samples are independent. For paired binary data — same subjects measured before and after, or matched-pair experimental designs — use McNemar's test on the discordant pairs instead. Applying the independent two-proportion test to paired data wastes power and can be misleading.
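For paired data, the z form of McNemar's test uses only the two discordant counts. A minimal sketch without the continuity correction that is often applied in practice; the counts below are hypothetical:

```python
from math import sqrt, erf

def mcnemar_z(b, c):
    # b, c = counts of the two kinds of discordant pairs
    # (e.g. success-then-failure vs. failure-then-success)
    z = (b - c) / sqrt(b + c)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed normal p-value
    return z, p

# Hypothetical paired outcomes: 15 pairs flipped one way, 5 the other
z, p = mcnemar_z(15, 5)
print(round(z, 3), round(p, 3))  # z ≈ 2.236, p ≈ 0.025
```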
The p-value is the probability of observing a test statistic at least as extreme as yours, assuming H₀ is true. Smaller p-values indicate stronger evidence against H₀: p = p₀ (one-proportion) or p₁ = p₂ (two-proportion). A p-value below your chosen α leads to rejecting H₀, but the p-value does not measure the size or practical importance of the effect.
Choose left-tailed when H₁ predicts the proportion is below the benchmark or below the comparison group, right-tailed when above, and two-tailed when no direction is specified. Decide before you look at the data — picking the tail post hoc inflates the false-positive rate.
Reference: The one-proportion z-test computes z = (p̂ − p₀) / √(p₀(1 − p₀)/n) and converts to a p-value using the standard normal cumulative distribution function via the Abramowitz and Stegun rational approximation. The two-proportion test uses the pooled standard error √(p̂_pooled(1 − p̂_pooled)(1/n₁ + 1/n₂)) under H₀: p₁ = p₂. Critical values are produced from the inverse normal CDF (Acklam's rational approximation) at the chosen significance level α. Both tests rely on the normal approximation to the binomial, which holds when np ≥ 10 and n(1 − p) ≥ 10 in each group.
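The reference cites Acklam's rational approximation for the inverse normal CDF; for illustration, the same critical values can be recovered by simple bisection on the CDF — slower, but transparent (helper names are ours):

```python
from math import sqrt, erf

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def norm_ppf(q, lo=-10.0, hi=10.0):
    # Invert the CDF by bisection; 100 halvings of [-10, 10] is far
    # more precision than critical values need
    for _ in range(100):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha = 0.05
print(round(norm_ppf(1 - alpha / 2), 3))  # two-tailed critical value ≈ 1.96
print(round(norm_ppf(1 - alpha), 3))      # one-tailed critical value ≈ 1.645
```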