No measurement is perfect
Think about the last time you measured something. Maybe you used a ruler to measure a table. You probably got something like 120 cm. But was it exactly 120.000000 cm? Of course not. Your eyes rounded. The ruler has a manufacturing tolerance. The table edge is not perfectly straight.
This is not a flaw in your technique. Every measurement in physics has uncertainty. The tools are imperfect, the environment fluctuates, and human judgement introduces variability. The difference between good science and bad science is not eliminating error — it is understanding how much error you have and what it means.
Precision vs accuracy — they are not the same thing
These two words sound interchangeable. They are not. And confusing them is one of the most common mistakes in physics.
Think of it like throwing darts at a target:
Precision means your measurements are consistent — you get the same result every time. Accuracy means your measurements are close to the true value. You can be precise without being accurate (consistently wrong), and you can be accurate without being precise (right on average but scattered).
The best measurements are both. But if you had to choose one, precision is usually more useful, because a consistent bias can be corrected once you identify it. Random scatter can only be reduced by averaging more data, never corrected outright.
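If the dartboard picture feels abstract, here is a minimal sketch in Python (the throw values are invented for illustration) that puts numbers on the two ideas: the mean of a set of throws measures accuracy, and the spread (standard deviation) measures precision.

```python
import statistics

# Two sets of simulated dart throws, recorded as offset from the
# bullseye in cm (values invented for illustration).
precise_but_biased = [4.9, 5.1, 5.0, 4.8, 5.2]        # tight cluster, far from 0
accurate_but_scattered = [-3.0, 2.5, 0.5, -1.5, 1.5]  # centred on 0, spread out

for name, throws in [("precise but biased", precise_but_biased),
                     ("accurate but scattered", accurate_but_scattered)]:
    mean = statistics.mean(throws)     # how far the average sits from the true value: accuracy
    spread = statistics.stdev(throws)  # how consistent the throws are: precision
    print(f"{name}: mean offset = {mean:+.2f} cm, spread = {spread:.2f} cm")
```

The first thrower is 5 cm off every time; shift their aim by 5 cm and they become both precise and accurate. That is exactly why a consistent bias is correctable.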
Two types of error
Every measurement error falls into one of two categories. Understanding which type you are dealing with determines how you fix it.
Systematic error: always shifts your result in the same direction. Caused by flawed instruments, calibration issues, or measurement bias. A thermometer that always reads 2° too high produces systematic error. You can fix it once you identify it.
Random error: scatters your results in both directions. Caused by environmental fluctuations, human reaction time, or electrical noise. You cannot eliminate it, but you can reduce its effect by taking more measurements and averaging them.
Systematic errors affect accuracy (they shift your results away from the true value). Random errors affect precision (they scatter your results around). Fixing a systematic error requires finding the cause. Reducing random error just requires more data.
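You can watch this distinction play out in a short simulation. Here is a sketch (the true value of 100, the +2 offset, and the noise level are all invented for illustration) showing that averaging more readings shrinks the random scatter but leaves the systematic shift untouched.

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0       # the quantity we are trying to measure (invented)
SYSTEMATIC_OFFSET = 2.0  # e.g. a thermometer that always reads 2 too high
NOISE = 1.0              # standard deviation of the random error (invented)

def measure():
    # One reading: true value + fixed bias + random fluctuation
    return TRUE_VALUE + SYSTEMATIC_OFFSET + random.gauss(0, NOISE)

for n in (5, 50, 5000):
    readings = [measure() for _ in range(n)]
    print(f"n = {n:5d}: mean = {statistics.mean(readings):.3f}")

# The mean converges towards 102, not 100: more data defeats the
# random scatter, but only recalibration removes the +2 offset.
```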
How wrong is wrong? Percent error
Saying "my measurement was wrong" is not useful in physics. The question is: how wrong? That is what percent error tells you. It compares your measured value to the true (accepted) value and gives you a number.
Imagine you measured the boiling point of water as 94°C. The accepted value is 100°C. Your percent error is |94 − 100| ÷ 100 × 100% = 6%. That tells you exactly how far off you were, in a way that is comparable across different experiments.
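Written as a small function, with the boiling-point example from above as a check (the function name is just illustrative):

```python
def percent_error(measured: float, accepted: float) -> float:
    """Percent error: |measured - accepted| / |accepted| * 100."""
    return abs(measured - accepted) / abs(accepted) * 100

print(percent_error(94, 100))  # 6.0, matching the worked example
```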
Why this matters
Measurement uncertainty is not a footnote in physics — it is the difference between a conclusion you can trust and one you cannot. Clinical drug trials, bridge engineering, satellite navigation — all of these depend on knowing how wrong a measurement might be.
The percent error formula gives you a single number that quantifies your mistake. The precision vs accuracy distinction tells you what kind of mistake it is. And understanding systematic vs random errors tells you how to fix it.
Want to see it happen in real time? The Physiworld lesson lets you add measurements one by one and watch how the mean stabilises, the uncertainty shrinks, and the data tells its own story.
Every measurement has uncertainty. Precision means consistent results. Accuracy means correct results. Systematic errors shift your data — random errors scatter it. Percent error quantifies how far off you are. Understanding these concepts separates real science from guessing.