Computer vision: Taylor Swift saliency mapping

In cognitive neuroscience, we’re interested in what guides human attention. We distinguish between influences from high-level cognition (e.g. current goals) and low-level visual features. There are highly sophisticated models of how visual features such as intensity, colour, and movement guide human attention. Computerised implementations of these models allow computers to mimic human eye movements. It turns out Taylor Swift’s amazing videos are an excellent example!
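If you’d like to play with this yourself, below is a minimal sketch of how a bottom-up saliency map can be computed for a single video frame, using OpenCV’s spectral-residual saliency model (from the opencv-contrib-python package). This is just one simple low-level model, not necessarily the one used for the videos here, and the file names are placeholders.

```python
import cv2

# Load a single frame grabbed from a music video
# ("frame.png" is a placeholder for any image file).
frame = cv2.imread("frame.png")

# Build OpenCV's spectral-residual static saliency model,
# a simple bottom-up model of low-level visual conspicuity.
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

# computeSaliency returns a success flag and a float map in [0, 1];
# brighter regions are those the model predicts will draw the eye.
success, saliency_map = saliency.computeSaliency(frame)

if success:
    # Scale to 8-bit grey values so the map can be saved and inspected.
    cv2.imwrite("saliency_map.png", (saliency_map * 255).astype("uint8"))
```

Running a model like this frame by frame over a video gives a rough prediction of where human eyes would land, which is exactly the kind of output that can be compared against real gaze data.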


When do you “see” what you see?

That is one confusing title! The point is this: When light reaches your eyes, you’re not immediately aware of it. It takes some time for your visual system to process the light, and to translate it into something the rest of your brain can work with. Only when that’s done do you consciously ‘see’. In a new paper, we show that the process of becoming aware of what you see is affected by how large an object is. To give an oversimplified example: If light bounces off a puppy, into your eyes, it takes a fraction of a second for you to become aware of the puppy. And it takes a fraction of a second longer if it’s a fat puppy.
