Percent Error Calculator

Calculate percent error between an experimental and theoretical value. Shows the formula and magnitude of error. Used in science and engineering.


Tips & Notes

  • Always divide by the TRUE value, not the measured value. Using the measured value in the denominator gives a different (and incorrect) result.
  • The absolute value bars | | mean percent error is always reported as positive (unsigned form).
  • For signed error: positive means overestimated, negative means underestimated — tells you direction of error.
  • Percent error near 0% = high accuracy. High percent error suggests instrument or method problems.
  • Systematic errors (always too high or too low) show up as consistently signed errors across multiple trials.

Common Mistakes

  • Dividing by the measured value instead of the true value — gives percent error relative to the wrong reference.
  • Forgetting the absolute value bars and reporting a negative percent error — the standard unsigned form is always non-negative.
  • Confusing percent error with percent difference — use percent error only when a true accepted value exists.
  • Using percent error when no standard reference exists — percent difference is appropriate in that case.
  • Reporting too many decimal places. If measurements have 3 significant figures, percent error should too.
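The first mistake above — dividing by the measured value instead of the true value — can be demonstrated numerically. This sketch reuses the 153 cm / 150 cm table example from the FAQ below; the variable names are illustrative:

```python
measured, true = 153.0, 150.0

correct = abs(measured - true) / abs(true) * 100      # divide by the TRUE value
mistake = abs(measured - true) / abs(measured) * 100  # wrong: divide by measured

print(round(correct, 2))  # 2.0
print(round(mistake, 2))  # 1.96
```

The two results differ because the same 3 cm discrepancy is being expressed relative to two different reference values.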

Percent Error Calculator Overview

Percent error quantifies the accuracy of a measured or calculated value by comparing it to a known true (accepted) value. It expresses the discrepancy as a percentage of the true value, giving a scale-independent measure of how far off a measurement is. A percent error of 2% means the measurement differs from the true value by 2% of the true value — the same formula works whether you are measuring nanometers or kilometers. Percent error is the standard metric in science laboratories, manufacturing quality control, and instrument calibration.

The percent error formula:

% Error = |Measured − True| / |True| × 100%
EX: True value of gravity = 9.80 m/s², measured = 9.45 m/s² → % Error = |9.45−9.80|/9.80 × 100 = 0.35/9.80 × 100 = 3.57%
EX: Predicted sales = 500 units, actual = 480 units → % Error = |500−480|/480 × 100 = 20/480 × 100 = 4.17%
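The unsigned formula can be sketched in a few lines of Python (the function name `percent_error` is illustrative, not from any particular library):

```python
def percent_error(measured: float, true: float) -> float:
    """Unsigned percent error: |measured - true| / |true| * 100."""
    if true == 0:
        raise ValueError("Percent error is undefined when the true value is 0.")
    return abs(measured - true) / abs(true) * 100

# Gravity example: true = 9.80 m/s^2, measured = 9.45 m/s^2
print(round(percent_error(9.45, 9.80), 2))  # 3.57
# Sales example: actual (true) = 480 units, predicted (measured) = 500 units
print(round(percent_error(500, 480), 2))    # 4.17
```

The zero check matters: when the true value is 0, the formula divides by zero and percent error is undefined.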
The absolute value bars ensure percent error is always non-negative — the formula reports magnitude of error, not direction. Signed percent error — when direction matters:
EX: Signed: (Measured−True)/True × 100. If measured=9.45 and true=9.80: (9.45−9.80)/9.80 × 100 = −3.57% (underestimate)
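The signed variant drops the absolute value in the numerator so the sign survives (again, the function name is illustrative):

```python
def signed_percent_error(measured: float, true: float) -> float:
    """Signed percent error: positive = overestimate, negative = underestimate."""
    if true == 0:
        raise ValueError("Undefined for a true value of 0.")
    return (measured - true) / true * 100

print(round(signed_percent_error(9.45, 9.80), 2))  # -3.57 (underestimate)
```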
Positive signed error = overestimate. Negative = underestimate. Useful for identifying systematic bias in measurements.

Percent error vs. percent difference:

  • Percent error: compares to a known TRUE value. Use when one value is the accepted standard.
  • Percent difference: compares two experimental values when neither is definitively "true." Formula: |A−B| / ((A+B)/2) × 100%
EX: Two labs measure 9.2 and 9.6 m/s². Neither is "true" → percent difference = |9.2−9.6|/((9.2+9.6)/2) × 100 = 4.3%
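The two-lab example can be checked with a small sketch of the percent-difference formula (function name illustrative):

```python
def percent_difference(a: float, b: float) -> float:
    """Percent difference: |a - b| relative to the mean of a and b."""
    mean = (a + b) / 2
    if mean == 0:
        raise ValueError("Undefined when the mean of the two values is 0.")
    return abs(a - b) / abs(mean) * 100

# Two labs measure g as 9.2 and 9.6 m/s^2; neither is the accepted value
print(round(percent_difference(9.2, 9.6), 1))  # 4.3
```

Note the function is symmetric: `percent_difference(9.2, 9.6)` and `percent_difference(9.6, 9.2)` give the same result, which is exactly why it suits comparisons where neither value is the reference.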
Percent error measures how far a measurement or estimate deviates from the accepted true value, expressed as a fraction of that true value. The absolute value in the numerator ensures that percent error is always non-negative — the direction (high or low) is conveyed separately by the sign of the signed error. Acceptable thresholds depend entirely on the context: 5% error is excellent in many physics experiments but disqualifying in pharmaceutical dosing or precision manufacturing.

Systematic error consistently shifts measurements in one direction — a miscalibrated scale that always reads 0.5 g too high produces a consistent positive error across every measurement. Random error produces varying positive and negative deviations that average toward zero over many trials. The two affect measurements in fundamentally different ways: averaging many trials reduces random error but does not reduce systematic error at all. Identifying which type of error dominates is the first step in improving measurement accuracy.
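The claim that averaging reduces random error but leaves systematic error untouched can be illustrated with a small simulation. The offset (0.50) and noise level (0.2) below are made-up values for the demo, not measured data:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

true_value = 9.80         # accepted value
systematic_offset = 0.50  # miscalibrated instrument always reads 0.50 high
# Each trial: true value + constant bias + zero-mean random noise
trials = [true_value + systematic_offset + random.gauss(0, 0.2)
          for _ in range(10_000)]

mean_measurement = sum(trials) / len(trials)
signed_error_pct = (mean_measurement - true_value) / true_value * 100

# Averaging cancels the random noise, but the 0.50 systematic offset remains:
print(round(mean_measurement, 2))  # close to 10.30, not 9.80
print(round(signed_error_pct, 1))  # close to +5.1%
```

However many trials are averaged, the mean converges to the biased value (true + offset), not to the true value — which is why calibration, not repetition, is the remedy for systematic error.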

Frequently Asked Questions

How do you calculate percent error?

Percent error = |measured − true| / |true| × 100%. Example: you measure a table as 153 cm, but the true length is 150 cm. |153 − 150| / 150 × 100% = 3/150 × 100% = 2%. This tells you the measurement deviated 2% from the true value. Percent error is always relative to the true value — a 3 cm error means more for a 5 cm object (60% error) than for a 500 cm object (0.6% error).

What is the difference between percent error and percent difference?

Percent error uses the true or accepted value as the denominator — it tells you how far off you are relative to reality. Percent difference uses the average of two values as the denominator — used when neither value is considered the reference truth. Formula: percent difference = |A − B| / ((A+B)/2) × 100%. Example: comparing two experimental methods with results 45 and 50: percent difference = |45−50| / 47.5 × 100% = 10.5%. Use percent error for accuracy; percent difference for comparing two equivalent measurements.

What is the difference between systematic and random error?

Systematic error shifts all measurements in the same direction (always too high or always too low) — caused by faulty calibration, consistent technique errors, or instrument offset. Signed percent error reveals this bias. Random error causes scatter around the true value in both directions — caused by reading precision limits, environmental fluctuations, or human variation. Percent error alone does not distinguish these: a 0% average signed error could mean perfect measurement OR equal and opposite random errors that cancel out.

What counts as an acceptable percent error?

Acceptable percent error depends entirely on the field and purpose. Chemistry labs: ±5% is often acceptable for student experiments, ±1% for analytical work. Physics experiments: ±2–10% depending on complexity. Manufacturing quality control: ±0.1–1% for precision parts. Medical diagnostics: often ±5% for blood tests. Astronomy: error of 10% for distant star measurements is excellent given the challenges. Always compare to the standard or requirement for your specific application — there is no universal acceptable limit.

How many significant figures should percent error have?

Significant figures in measurements limit meaningful decimal places in percent error. Example: mass measured as 25.3 g (3 significant figures), true mass 25.0 g. Percent error = 0.3/25.0 × 100% = 1.2%. Reporting as 1.20000% implies false precision — your measurement only has 3 significant figures. Report percent error to the same number of significant figures as your least precise measurement. Excess decimal places in error reporting communicate more precision than actually exists.
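Rounding a computed percent error to a fixed number of significant figures (rather than decimal places) can be done with a small helper; `round_sig` is a hypothetical name, not a built-in:

```python
import math

def round_sig(x: float, sig: int = 3) -> float:
    """Round x to `sig` significant figures (helper name is illustrative)."""
    if x == 0:
        return 0.0
    # Position of the leading digit determines how many decimals to keep
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# Mass example from above: 25.3 g measured vs 25.0 g true
error_pct = abs(25.3 - 25.0) / 25.0 * 100
print(round_sig(error_pct, 2))  # 1.2
```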

What does a negative percent error mean?

A negative result before taking the absolute value has meaning — it indicates whether the measurement was above or below the true value. (measured − true)/true × 100%: positive = overestimate, negative = underestimate. Standard percent error uses absolute value because magnitude matters more than direction for most accuracy assessments. However, in bias analysis, keeping the sign reveals systematic tendencies: consistently negative errors indicate a measuring instrument that consistently reads low.