Spotlight presentations at CVSS are brief summaries of the context and main results of your current work or paper, with perhaps a mention of the novelty of the approach taken. Spotlights should enable people to judge whether they wish to visit your poster and/or read your paper. They should therefore be simple, visual, and accessible to non-experts; speakers should not aim to communicate detailed technical aspects of the work. The slides should be prepared in a large font (at least 16 pt), ideally with some appealing visual elements. Each speaker is limited to 3-4 slides. The first slide should include the speaker's name and the title.
Each Spotlight is allotted 3 minutes of time.
The slides must be submitted by July 24 to email@example.com and cannot be changed after that date.
The Spotlight presentations are scheduled on July 28 at 8 pm.
- Alvarez, Ivan (Chiasmal misrouting and visual space representations: Modelling population receptive fields in human albinism)
- Bernard, Florian (A solution for multi-alignment by transformation synchronisation)
- Bozkurt, Yagmur (Automatic measurement of retinal vessel tortuosity)
- Choi, Hannah (Shape discrimination under partial occlusion: a dynamical model of V4-prefrontal cortex network)
- Clarke, Alasdair (Object salience)
- Dalal, Nisha (Cortical and subcortical activations in response to visual spatial-temporal uncertainties)
- Ecke, Gerrit (Vision based flying robots biomimetic spatial mapping and dynamic dis-occlusion detection)
- Gatys, Leon (Texture synthesis with convolutional neural networks)
- Güney, Fatma (Displets: resolving stereo ambiguities using object knowledge)
- Gupta, Ankush (Deep convolutional neural networks for text spotting)
- Han, Biao (Could excitatory feedback generate an inhibitory effect in lower area?)
- Jeagle, Andrew (Filter characteristics of a deep neural network for shape localization)
- Jozwik, Kamila Maria (Categories or features? – Explaining object similarity in perception and brain representations using nonnegative least squares)
- Kalogeropoulou, Zampeta (How attention to motion direction shapes visual sensitivity)
- Medathati, Kartheek (Towards synergistic models for motion estimation: Unifying studies in biological and computer vision)
- Kasneci, Enkelejda (Automated comparison of visual scanpaths)
- Kortylewski, Adam (A stochastic grammar for shoeprint recognition)
- Kumar, Satwant (Neural encoding of shapes in the middle superior temporal sulcus (mid-STS) body patch neurons)
- Lassner, Christoph (Predicting more structure in human pose estimation)
- Liu, Jian (Encoding of natural images by retinal ganglion cells)
- Nestmeyer, Thomas (Learning intrinsic image parameters with the help of a proxy task)
- Ochs, Matthias (Making phase correlation robust)
- Paiton, Dylan (Causal neural networks for learning sparse representations of video)
- Rajaei, Karim (What is the role of recurrent processing in the object recognition?)
- Rezapour Lakani, Safoura (3D compositional part segmentation through grasping)
- Rogelj, Nina (Goniospectrophotometric space curve of optically complex samples)
- Schütt, Heiko (Dynamic models for eye movements including early vision knowledge)
- Stabinger, Sebastian (Learning abstract classes using deep learning)
- Wulff, Jonas (Efficient sparse-to-dense optical flow estimation using a learned basis and layers)