Percent Error Calculator
Calculate the percent error between an experimental (measured) value and a theoretical (true) value. Shows the formula and the magnitude of the error. Used in science and engineering.
Tips & Notes
- ✓Always divide by the TRUE value, not the measured value. Putting the measured value in the denominator gives a different (incorrect) result.
- ✓The absolute value bars | | mean percent error is always reported as positive (unsigned form).
- ✓For signed error: positive means overestimated, negative means underestimated — tells you direction of error.
- ✓Percent error near 0% = high accuracy. High percent error suggests instrument or method problems.
- ✓Systematic errors (always too high or too low) show up as consistently signed errors across multiple trials.
Common Mistakes
- ✗Dividing by the measured value instead of the true value — gives percent error relative to the wrong reference.
- ✗Dropping the absolute value bars and reporting a negative percent error — the unsigned form is always non-negative.
- ✗Confusing percent error with percent difference — use percent error only when a true accepted value exists.
- ✗Using percent error when no standard reference exists — percent difference is appropriate in that case.
- ✗Reporting too many decimal places. If measurements have 3 significant figures, percent error should too.
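To follow the significant-figures advice above, a small helper can round a computed percent error to a chosen number of significant figures. This is a sketch; `round_sig` is a hypothetical name, not part of the calculator.

```python
from math import floor, log10

def round_sig(x: float, sig: int = 3) -> float:
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the magnitude of x.
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(3.571428, 3))  # -> 3.57
```

For example, a percent error of 3.571428% computed from 3-significant-figure measurements would be reported as 3.57%.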
Percent Error Calculator Overview
Percent error quantifies the accuracy of a measured or calculated value by comparing it to a known true (accepted) value. It expresses the discrepancy as a percentage of the true value, giving a scale-independent measure of how far off a measurement is. A percent error of 2% means the measurement differs from the true value by 2% of the true value — the same formula works whether you are measuring nanometers or kilometers. Percent error is the standard metric in science laboratories, manufacturing quality control, and instrument calibration.
The percent error formula:
% Error = |Measured − True| / |True| × 100%
EX: True value of gravity = 9.80 m/s², measured = 9.45 m/s² → % Error = |9.45−9.80|/9.80 × 100 = 0.35/9.80 × 100 = 3.57%
EX: Predicted sales = 500 units, actual = 480 units → % Error = |500−480|/480 × 100 = 20/480 × 100 = 4.17%
The absolute value bars ensure percent error is always non-negative — the formula reports the magnitude of the error, not its direction.
Signed percent error — when direction matters:
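As a sketch in Python (the function name `percent_error` is illustrative, not part of the calculator), the unsigned formula above is:

```python
def percent_error(measured: float, true: float) -> float:
    """Unsigned percent error: |measured - true| / |true| * 100."""
    if true == 0:
        raise ValueError("true value must be nonzero")
    return abs(measured - true) / abs(true) * 100.0

# The two worked examples from the text:
print(round(percent_error(9.45, 9.80), 2))  # gravity example -> 3.57
print(round(percent_error(500, 480), 2))    # sales example -> 4.17
```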
EX: Signed: (Measured−True)/True × 100. If measured = 9.45 and true = 9.80: (9.45−9.80)/9.80 × 100 = −3.57% (underestimate)
Positive signed error = overestimate. Negative = underestimate. Useful for identifying systematic bias in measurements.
Percent error vs. percent difference:
- Percent error: compares to a known TRUE value. Use when one value is the accepted standard.
- Percent difference: compares two experimental values when neither is definitively "true." Formula: |A−B| / ((A+B)/2) × 100%
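The signed form drops the absolute value so the sign carries the direction of the error. A minimal sketch (`signed_percent_error` is an illustrative name):

```python
def signed_percent_error(measured: float, true: float) -> float:
    """Signed percent error: positive = overestimate, negative = underestimate."""
    if true == 0:
        raise ValueError("true value must be nonzero")
    return (measured - true) / true * 100.0

print(round(signed_percent_error(9.45, 9.80), 2))  # -3.57 (underestimate)
```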
EX: Two labs measure 9.2 and 9.6 m/s². Neither is "true" → percent difference = |9.2−9.6|/((9.2+9.6)/2) × 100 = 4.3%
Percent error measures how far a measurement or estimate deviates from the accepted true value, expressed as a fraction of that true value. The absolute value in the numerator ensures that percent error is always non-negative — the direction (high or low) is conveyed separately by the sign of the signed error. Acceptable thresholds depend entirely on the context: 5% error is excellent in many physics experiments but disqualifying in pharmaceutical dosing or precision manufacturing.
Systematic error consistently shifts measurements in one direction — a miscalibrated scale that always reads 0.5 g too high produces a consistent positive error across every measurement. Random error produces varying positive and negative deviations that average toward zero over many trials. The two affect measurements in fundamentally different ways: averaging many trials reduces random error but does not reduce systematic error at all. Identifying which type of error dominates is the first step in improving measurement accuracy.
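The percent-difference formula, which divides by the mean of the two values rather than by a true value, can be sketched as follows (`percent_difference` is an illustrative name):

```python
def percent_difference(a: float, b: float) -> float:
    """Percent difference: |a - b| relative to the mean of a and b, times 100."""
    mean = (a + b) / 2.0
    if mean == 0:
        raise ValueError("mean of the two values must be nonzero")
    return abs(a - b) / abs(mean) * 100.0

print(round(percent_difference(9.2, 9.6), 1))  # two-lab example -> 4.3
```

Note the symmetry: swapping the two arguments gives the same result, which is exactly why this form suits two measurements of equal standing.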