• Fermion@feddit.nl
    4 months ago

    I’m not saying normalization is a bad strategy, just that, like any other processing technique, it comes with limitations and requires extra attention to avoid drawing incorrect conclusions when interpreting the results.

    Because relative to the population density, there were 100 times as many sightings. Or what am I missing?

    If you were to attempt to trap and tag bigfoots in both areas, would you end up with 100 times as many angry people in gorilla suits in the small town? No. You would end up with one in each area. So while the tiny town does technically have 100x the density per capita, each region has only one observable suit wearer.
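    The per-capita arithmetic behind that 100x works out like this (the populations are invented purely for illustration):

```python
# Illustrative only: one observed suit wearer in each region.
town_pop, city_pop = 100, 10_000
town_sightings, city_sightings = 1, 1

town_rate = town_sightings / town_pop    # 0.01 sightings per capita
city_rate = city_sightings / city_pop    # 0.0001 sightings per capita

# ~100x the per-capita rate in the tiny town...
print(town_rate / city_rate)
# ...but exactly one observable suit wearer in each region.
print(town_sightings, city_sightings)
```

    The per-capita ratio explodes even though the raw counts are identical, which is the whole objection.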

    Assuming the distribution of gorilla suit wearers is uniform per capita, you would expect approximately 99 tiny towns with no bigfoot sightings for every 1 town with a sighting. So if you sampled random small towns, because the map says bigfoots live near small towns, you would actually see fewer hairy beasts than your peer who decided to sample areas with higher population density.

    If we could have fractional observations, then all this would be a lot more straightforward, but the discrete nature of the subject matter makes the data inherently noisy. Interpreting data involving discrete events is a whole art and usually involves a lot of filtering.
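    A quick simulation makes the sampling point concrete. All the numbers here are made up for the sketch: 100 small towns of 100 people and one city of 10,000, with the same underlying per-person chance of hosting a suit wearer.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Assumed per-person probability, chosen so the city expects ~1 suit wearer.
p = 1 / 10_000

# Sightings in each of 100 small towns of 100 people, and in one big city.
towns = [sum(random.random() < p for _ in range(100)) for _ in range(100)]
city = sum(random.random() < p for _ in range(10_000))

towns_with_sightings = sum(1 for t in towns if t > 0)
print(f"small towns with at least one sighting: {towns_with_sightings} / 100")
print(f"city sightings: {city}")
# Most small towns record zero sightings; the rare town that records one
# shows a per-capita rate 100x the city's, purely from discreteness.
```

    Run it a few times with different seeds and you see the pattern: the small-town per-capita rates are either zero or enormous, never anything in between, which is exactly the kind of noise that normalization alone can't filter out.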