Title: Computational models of vision: From early vision to deep convolutional neural networks
Early visual processing in human observers has been studied extensively over the past decades. From these studies, a relatively standard model of the first steps in human visual processing has emerged. In the first part of my talk I will present an image-based early vision model implementing our knowledge about early visual processing, including oriented spatial frequency channels, divisive normalization and optimal decoding. The model explains the classical psychophysical data reasonably well, matching the performance of older, non-image-based models on contrast detection, contrast discrimination and oblique masking data. Leveraging the advantages of an image-based model, I show how well our model detects Gabors masked by patches of natural scenes, and how it can be used for image distortion assessment. In the second part I present a series of experiments comparing object recognition by convolutional neural networks (CNNs) to human object recognition using the exact same stimuli. Whilst we clearly find certain similarities, we also find strikingly non-human behaviour in CNNs. I will discuss possible reasons for our findings in the light of our knowledge of early visual processing in human observers.
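To make the divisive-normalization stage concrete, here is a minimal sketch of the standard textbook form, in which each channel's response is divided by a pooled sum of the activity across channels. The exponent `p`, the semi-saturation constant `sigma`, and the uniform pooling over all channels are illustrative assumptions, not the parameters of the model presented in the talk.

```python
import numpy as np

def divisive_normalization(responses, sigma=0.1, p=2.0):
    """Standard divisive normalization (illustrative parameters).

    Each channel's rectified, exponentiated response is divided by a
    semi-saturation constant plus the pooled activity of all channels.
    """
    driven = np.abs(responses) ** p
    pooled = np.sum(driven)
    return driven / (sigma ** p + pooled)

# Example: three channel responses; normalization preserves their ordering
# while compressing the overall range.
acts = np.array([0.2, 0.8, 0.1])
norm = divisive_normalization(acts)
```

The key property this captures is contrast gain control: a strongly driven channel suppresses its neighbours, which is one ingredient such models use to account for masking data.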