Computer vision: Taylor Swift saliency mapping

In cognitive neuroscience, we’re interested in what guides human attention. We distinguish between influences from high-level cognition (e.g. our current goals) and from low-level visual features. There are highly sophisticated models of how visual features such as intensity, colour, and movement guide attention, and computerised implementations of these models let computers mimic human eye movements. It turns out that Taylor Swift’s amazing videos are an excellent example!
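
To give a rough sense of what such a model computes, here is a minimal sketch in Python (assuming OpenCV and NumPy are installed) of an intensity- and colour-contrast saliency map in the spirit of classic feature-based models. It is an illustration only, not the specific model used here: it leaves out the motion channel, the multi-scale pyramids, and the normalisation machinery of real implementations, and the file name is a placeholder.

```python
import cv2
import numpy as np

def centre_surround(channel, sigma_centre=2, sigma_surround=8):
    """Contrast between a fine (centre) and coarse (surround) Gaussian blur,
    a crude stand-in for centre-surround receptive fields."""
    centre = cv2.GaussianBlur(channel, (0, 0), sigmaX=sigma_centre)
    surround = cv2.GaussianBlur(channel, (0, 0), sigmaX=sigma_surround)
    return np.abs(centre - surround)

def simple_saliency(frame_bgr):
    """Combine intensity and colour-opponency contrast into one saliency map.
    (A motion channel would need consecutive frames and is omitted here.)"""
    frame = frame_bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(frame)
    intensity = (r + g + b) / 3.0
    red_green = r - g                  # red/green colour opponency
    blue_yellow = b - (r + g) / 2.0    # blue/yellow colour opponency
    feature_maps = [centre_surround(c) for c in (intensity, red_green, blue_yellow)]
    saliency = sum(feature_maps)
    # Normalise to [0, 1] so the map can be saved or displayed as an image.
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

if __name__ == "__main__":
    # "frame.png" is a hypothetical still taken from any video.
    frame = cv2.imread("frame.png")
    cv2.imwrite("saliency.png", (simple_saliency(frame) * 255).astype(np.uint8))
```

Run on a still from a music video, this produces a greyscale map in which brighter pixels mark the regions a purely feature-driven observer would be drawn to.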
