Eleonora Vig

Computer Vision Scientist

I am a research scientist at the German Aerospace Center (DLR), in Munich, Germany. My research interests include visual attention models and saliency prediction, multi-object detection and tracking, human action recognition, and human crowd analysis in aerial images.

About me

I studied Computer Science at the Babeș-Bolyai University, Romania, where I obtained an M.Sc. degree in Intelligent Systems. In 2011, I completed my doctoral studies at the University of Lübeck, Germany, with a dissertation on saliency and eye movement prediction and guidance. From 2011 to 2013, I was a postdoctoral research fellow in the Computer and Biological Vision Lab of David Cox at the Center for Brain Science, Harvard University, Cambridge, USA. During my postdoc, I was partly funded by a German Academic Exchange Service (DAAD) grant. From October 2013 to February 2016, I worked as a research scientist at Xerox Research Centre Europe (XRCE), in Grenoble, France.


Selected publications

My Google Scholar profile.

C. de Souza, A. Gaidon, E. Vig, A. M. Lopez. Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition. ECCV, 2016.

S. Jetley, N. Murray, E. Vig. End-To-End Saliency Mapping via Probability Distribution Prediction. CVPR, 2016. (spotlight presentation)
[pdf] [bibtex]

A. Gaidon, Q. Wang, Y. Cabon, E. Vig. Virtual Worlds as Proxy for Multi-Object Tracking Analysis. CVPR, 2016.
[pdf] [bibtex] [dataset] [poster]

A. Gaidon, E. Vig. Online Domain Adaptation for Multi-Object Tracking. BMVC, 2015. (oral)
[pdf] [bibtex]

E. Vig, M. Dorr, D. Cox. Large-Scale Optimization of Hierarchical Features for Saliency Prediction in Natural Images. CVPR, 2014.
[pdf] [bibtex] [code] [poster]

M. Milford, W. Scheirer, E. Vig, A. Glover, O. Baumann, J. Mattingley, D. Cox. Condition-Invariant, Top-Down Visual Place Recognition. ICRA, 2014.
[pdf] [bibtex]

M. Milford, E. Vig, W. Scheirer, D. Cox. Vision‐based Simultaneous Localization and Mapping in Changing Outdoor Environments. Journal of Field Robotics 31 (5), 780-802, 2014.
[pdf] [bibtex]

E. Vig, M. Dorr, D. Cox. Space-variant Descriptor Sampling for Action Recognition based on Saliency and Eye Movements. ECCV, 2012.
[pdf] [bibtex] [dataset] [poster]

E. Vig, M. Dorr, T. Martinetz, E. Barth. Intrinsic Dimensionality Predicts the Saliency of Natural Dynamic Scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) 34 (6), 1080-1091, 2012.
[pdf] [bibtex]

E. Vig, M. Dorr, E. Barth. Efficient Visual Coding and the Predictability of Eye Movements on Natural Movies. Spatial Vision 22 (5), 397-408, 2009.
[pdf] [bibtex]

Software and datasets

Virtual KITTI
A photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks, such as object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation. See the accompanying CVPR'2016 paper.
eDN saliency
Reference code for computing Ensembles of Deep Networks (eDN) saliency maps based on the CVPR'2014 paper "Large-Scale Optimization of Hierarchical Features for Saliency Prediction in Natural Images".
Eye movement dataset for the Hollywood2 benchmark
A large dataset of eye movements collected from five subjects who performed the action recognition task of the Hollywood2 Action Recognition challenge. See the accompanying ECCV'2012 paper.

Theme adapted from orderedlist/minimal.