Eleonóra Víg

Computer Vision Scientist
eleonora.vig@dlr.de

I am a research scientist at the German Aerospace Center (DLR), in Munich, Germany. My research interests include saliency prediction, multi-object detection and tracking, and human action recognition, using deep learning techniques and the simulation of virtual worlds.

About me

I received my PhD in Computer Vision from the University of Lübeck, Germany, in 2011 for work on human gaze prediction and guidance in videos. In the following two years, I was a postdoctoral research fellow in the Computer and Biological Vision Lab of David Cox at the Center for Brain Science, Harvard University. During my postdoc, I was partly funded by a German Academic Exchange Service (DAAD) grant. From October 2013 to February 2016, I worked as a research scientist in the Computer Vision group at Xerox Research Centre Europe (XRCE) in Grenoble, France. Since March 2016, I have been with the Remote Sensing Technology Institute of the German Aerospace Center (DLR), working on human crowd analysis in aerial images and videos.

Selected publications

My Google Scholar profile.

C. de Souza, A. Gaidon, E. Vig, A. M. Lopez. Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition. ECCV, 2016.
[pdf]

S. Jetley, N. Murray, E. Vig. End-To-End Saliency Mapping via Probability Distribution Prediction. CVPR, 2016. (spotlight presentation)
[pdf] [bibtex]

A. Gaidon, Q. Wang, Y. Cabon, E. Vig. Virtual Worlds as Proxy for Multi-Object Tracking Analysis. CVPR, 2016.
[pdf] [bibtex] [dataset] [poster]

A. Gaidon, E. Vig. Online Domain Adaptation for Multi-Object Tracking. BMVC, 2015. (oral)
[pdf] [bibtex]

E. Vig, M. Dorr, D. Cox. Large-Scale Optimization of Hierarchical Features for Saliency Prediction in Natural Images. CVPR, 2014.
[pdf] [bibtex] [code] [poster]

M. Milford, W. Scheirer, E. Vig, A. Glover, O. Baumann, J. Mattingley, D. Cox. Condition-Invariant, Top-Down Visual Place Recognition. ICRA, 2014.
[pdf] [bibtex]

M. Milford, E. Vig, W. Scheirer, D. Cox. Vision-based Simultaneous Localization and Mapping in Changing Outdoor Environments. Journal of Field Robotics 31 (5), 780-802, 2014.
[pdf] [bibtex]

E. Vig, M. Dorr, D. Cox. Space-variant Descriptor Sampling for Action Recognition based on Saliency and Eye Movements. ECCV, 2012.
[pdf] [bibtex] [dataset] [poster]

E. Vig, M. Dorr, T. Martinetz, E. Barth. Intrinsic Dimensionality Predicts the Saliency of Natural Dynamic Scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) 34 (6), 1080-1091, 2012.
[pdf] [bibtex]

E. Vig, M. Dorr, E. Barth. Efficient Visual Coding and the Predictability of Eye Movements on Natural Movies. Spatial Vision 22 (5), 397-408, 2009.
[pdf] [bibtex]

Software and datasets

Virtual KITTI
A photo-realistic synthetic video dataset for learning and evaluating computer vision models on several video understanding tasks, such as object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation. Link to the accompanying CVPR'2016 paper.
eDN saliency
Reference code for computing Ensembles of Deep Networks (eDN) saliency maps based on the CVPR'2014 paper "Large-Scale Optimization of Hierarchical Features for Saliency Prediction in Natural Images".
Eye movement dataset for the Hollywood2 benchmark
A large dataset of eye movements we collected from five subjects who performed the action recognition task for the Hollywood2 Action Recognition challenge. The accompanying ECCV'2012 paper can be found here.
