Introduction

A well-designed experiment allows you to establish causation, not just correlation. Experimental errors, however, can lead to incorrect conclusions, so understanding how to recognize and control them is essential for valid business experimentation.

Types of Experimental Errors

Systematic Errors (Bias)

Consistent errors that skew results in one direction.

  • Selection bias: Groups differ systematically before treatment
  • Measurement bias: The measuring instrument consistently over- or under-reports the true value
  • Observer bias: Experimenter influences results
  • Attrition bias: Different dropout rates between groups

Random Errors

Unpredictable fluctuations that increase variance but don't skew results systematically; the short simulation after this list contrasts the two error types.

  • Natural variation in subjects
  • Measurement noise
  • Environmental fluctuations
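
To make the distinction concrete, here is a minimal simulation, with the true value, noise level, and size of the bias all invented for illustration: random noise averages out as the sample grows, while a systematic offset persists regardless of sample size.

```python
# Contrast the two error types with simulated measurements: random noise
# averages out as n grows, while a fixed systematic offset (bias) does not.
# The true value, noise level, and +3 bias are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0

for n in (10, 1_000, 100_000):
    noisy = true_value + rng.normal(0, 5, size=n)       # random error only
    biased = true_value + 3 + rng.normal(0, 5, size=n)  # adds systematic bias
    print(f"n={n:>6}: mean with noise={noisy.mean():.2f}, "
          f"mean with bias={biased.mean():.2f}")
```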

Validity Concepts

Type                 | Question                             | Threats
---------------------+--------------------------------------+-----------------------------------------------
Internal Validity    | Did treatment cause the effect?      | Confounders, selection bias, history
External Validity    | Can results generalize?              | Sample not representative, artificial setting
Construct Validity   | Are we measuring what we think?      | Poor operationalization
Statistical Validity | Are conclusions statistically sound? | Low power, multiple testing

Common Threats to Internal Validity

  • History: External events during experiment
  • Maturation: Natural changes over time
  • Testing: Taking a test affects subsequent scores
  • Instrumentation: Measurement changes during study
  • Selection: Pre-existing group differences
  • Mortality (attrition): Differential dropout between groups

Control Techniques

Randomization

Random assignment to treatment and control groups ensures the groups are comparable on average (see the sketch after this list).

  • Controls for both known and unknown confounders
  • Foundation of causal inference
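
A minimal sketch of simple random assignment, assuming a list of hypothetical subject IDs; the shuffle gives every subject the same chance of landing in either group, which is what balances confounders on average.

```python
# A simple random split into treatment and control, assuming hypothetical
# subject IDs. Shuffling gives every subject the same chance of landing
# in either group, which balances confounders on average.
import random

subjects = [f"user_{i}" for i in range(100)]  # hypothetical IDs
random.seed(7)                                # seed only for reproducibility
random.shuffle(subjects)

half = len(subjects) // 2
treatment, control = subjects[:half], subjects[half:]
print(len(treatment), len(control))  # 50 50
```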

Control Groups

A group that doesn't receive the treatment, providing a baseline for comparison (see the comparison sketch after this list).

  • Allows you to isolate effect of treatment
  • Controls for history and maturation threats
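
One way to see the baseline at work is a two-sample t-test on simulated outcomes; the baseline level, lift, and noise below are invented numbers, not estimates from real data.

```python
# Two-sample t-test comparing simulated treatment outcomes against the
# control baseline. Baseline level, lift, and noise are invented numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control_outcomes = rng.normal(10.0, 2.0, size=200)    # baseline behavior
treatment_outcomes = rng.normal(10.8, 2.0, size=200)  # baseline plus a lift

lift = treatment_outcomes.mean() - control_outcomes.mean()
t_stat, p_value = stats.ttest_ind(treatment_outcomes, control_outcomes)
print(f"estimated lift = {lift:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

Without the control group, the baseline of 10.0 would be unknown and the lift could not be separated from history or maturation effects.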

Blocking

Group similar subjects together, then randomize within blocks.

  • Reduces variance from known confounders
  • Example: Block by age group, then randomize within each age group (sketched below)
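
A minimal sketch of that age-group example, with made-up field names and bands: subjects are grouped by the known confounder first, then shuffled within each block so both groups end up with a similar age mix.

```python
# Blocked randomization: group subjects by a known confounder (age band),
# then randomize within each block so both groups get a similar age mix.
# The field names and bands are illustrative assumptions.
import random
from collections import defaultdict

subjects = [{"id": i, "age_band": band}
            for i, band in enumerate(["18-34", "35-54", "55+"] * 20)]

blocks = defaultdict(list)
for s in subjects:
    blocks[s["age_band"]].append(s)  # form one block per age band

random.seed(7)
treatment, control = [], []
for members in blocks.values():
    random.shuffle(members)          # randomize within the block
    half = len(members) // 2
    treatment += members[:half]
    control += members[half:]
```

With equal-sized blocks, this also keeps the overall group sizes balanced.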

Blinding

  • Single-blind: Subjects don't know their group
  • Double-blind: Neither subjects nor experimenters know
  • Prevents expectation effects and observer bias

Design Principles

ANOVA Design Principles

  1. Replication: Multiple observations per condition
  2. Randomization: Random assignment to conditions
  3. Local control (blocking): Group similar units together
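
As a small illustration of the replication principle, here is a one-way ANOVA on simulated data with 30 observations per condition; the group means and noise level are invented for the example.

```python
# One-way ANOVA on simulated data: 30 replicated observations per
# condition. Group means and noise level are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, size=30)  # replication: 30 obs per condition
group_b = rng.normal(11.0, 2.0, size=30)
group_c = rng.normal(10.2, 2.0, size=30)

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```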

Best Practices

  • Pre-register your hypothesis and analysis plan
  • Calculate the required sample size before starting (see the power sketch after this list)
  • Use appropriate control conditions
  • Minimize time between treatment and measurement
  • Document everything—procedures, deviations, observations
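
A sketch of that sample-size calculation using statsmodels' power module; the effect size, significance level, and power target below are conventional placeholders, not recommendations for any particular experiment.

```python
# A priori sample-size calculation with statsmodels' power module.
# Effect size (Cohen's d), alpha, and power are placeholder conventions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.3,  # assumed standardized effect (Cohen's d)
    alpha=0.05,       # significance level
    power=0.8,        # desired probability of detecting the effect
)
print(f"required sample size per group: {n_per_group:.0f}")  # ~175
```

solve_power treats any omitted quantity as the unknown, so the same call can solve for power or detectable effect size given a fixed sample.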

Conclusion

Key Takeaways

  • Systematic errors (bias) skew results; random errors add noise
  • Internal validity: Did treatment cause effect?
  • External validity: Can results generalize?
  • Randomization controls for known and unknown confounders
  • Control groups provide baseline for comparison
  • Blocking reduces variance from known factors
  • Blinding prevents expectation and observer bias