Imagine you're a doctor trying to figure out why a patient has a fever.
You could check their temperature (a hard number), listen to them describe their symptoms (their own words), or study their medical history (past records).
But even with all that information, pinpointing the *exact* cause isn't simple — maybe it's an infection, maybe it's stress, maybe it's both.
Political scientists face this same puzzle every day, just at the scale of entire countries.
When political scientists study countries, they gather two kinds of evidence: quantitative data (think election turnout percentages, GDP charts, graphs of protest frequency) and qualitative data (speeches, political cartoons, interviews, founding documents).
They use both to draw comparisons — why does one country's democracy thrive while another's crumbles?
Here's the tricky part: just because two things move together (say, rising poverty and rising political instability) doesn't mean one *causes* the other.
That's the crucial difference between correlation and causation: two trends might move together because a third factor — say, armed conflict — drives both, or because the causal arrow runs in the opposite direction, with instability deepening poverty rather than the reverse. In comparative politics, where dozens of variables swirl together, establishing true causation is maddeningly difficult.
There's one more distinction that separates sharp political analysis from opinion: the line between empirical and normative statements.
An empirical claim says "voter turnout in Nigeria was 35%." A normative claim says "voter turnout *should* be higher." Political scientists build their arguments primarily on empirical evidence — observable, measurable facts — not on what they wish were true.
Learning to spot that difference is the first real skill of thinking like a political scientist.