As gaze behavior analysis becomes more common, with participants' eye movements tracked while they watch video stimuli, the amount of corresponding gaze data grows as well. Unfortunately, in most cases, these data are collected in separate files in custom-made or proprietary data formats. Thus, these data are hard to access even for experts and effectively inaccessible for non-experts, and expensive or custom-made software is necessary for their analysis. By promoting the use of existing multimedia container formats for distributing and archiving eye tracking and gaze data bundled with the stimuli media, we define an exchange format that can be interpreted by standard multimedia players as well as streamed via the Internet. We converted several gaze data sets into our format, demonstrating the feasibility of our approach and making it possible to visualize these data with standard multimedia players. We also introduce two plugins for one of these players that allow for further visual analytics. We discuss the benefits of gaze data in multimedia containers and explain possible visual analytics approaches based on our implementations, the converted data sets, and first user interviews.
Converted Eye Tracking Data Sets with Instantaneous Visualizations
The following data sets are provided for research purposes. When using these data sets in the proposed multimedia container format, please cite [S0], [S1], [S2], or [S3] as well as the original data set [A], [B], [C], [K], or [R].
- Real World Visual Processing [S3] - 1 video
- Açik et al. [A] dataset - 216 videos
- Sundberg et al. [B] dataset - 7 videos
- Coutrot & Guyader [C] dataset - 60 videos
- Kurzhals et al. [K] dataset - 11 videos
- Riche et al. [R] dataset - 24 videos
Source Code
VLC 3.0.0 patch
Modified version of subsusf.c, necessary for the playback of USF files.
USF to ASS translation
XSL file usf2ass.xsl for translating USF subtitles to ASS.
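For illustration, a gaze sample stored as a USF cue might look as follows. The element names follow the general USF subtitle schema and the coordinate payload is a hypothetical example, not taken from usf2ass.xsl itself:

```xml
<!-- Hypothetical USF cue: one gaze sample displayed for 40 ms -->
<subtitles>
  <subtitle start="00:00:01.000" stop="00:00:01.040">
    <text>x=320 y=240</text>
  </subtitle>
</subtitles>
```

A stylesheet such as usf2ass.xsl could turn such a cue into a positioned ASS Dialogue event, e.g. `Dialogue: 0,0:00:01.00,0:00:01.04,Default,,0,0,0,,{\pos(320,240)}*`. The stylesheet can be applied with any XSLT 1.0 processor, for example `xsltproc usf2ass.xsl input.usf > output.ass`.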
VLC Visual Analytics Plugins
To install the extensions under Linux, copy SimSub.lua and/or MergeSub.lua into ~/.local/share/vlc/lua/extensions/ for the current user, or into /usr/lib/vlc/lua/extensions/ for all users.
- SimSub: visualization of different eye tracking data sets in multiple windows
- MergeSub: visualization of different eye tracking data sets in a single window
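The per-user install step above can be sketched as a short shell snippet; the copy is guarded so it only runs when the Lua files are actually in the current directory (their download location may vary):

```shell
# Per-user VLC Lua extensions directory under Linux, as given above.
EXT_DIR="$HOME/.local/share/vlc/lua/extensions"
mkdir -p "$EXT_DIR"

# Copy the plugins if they are present in the current directory.
for f in SimSub.lua MergeSub.lua; do
  if [ -f "$f" ]; then
    cp "$f" "$EXT_DIR/"
  fi
done

echo "VLC Lua extensions directory: $EXT_DIR"
```

For a system-wide install, use /usr/lib/vlc/lua/extensions/ instead (requires root). After restarting VLC, the extensions typically appear in the View menu.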
References
[S0] J. Schöning, C. Gundler, G. Heidemann, P. König & U. Krumnack. Visual Analytics of Gaze Data with Standard Multimedia Players. Journal of Eye Movement Research, 10(5) : 1-14, 2017. http://dx.doi.org/10.16910/jemr.10.5.4
[S1] J. Schöning, P. Faion, G. Heidemann & U. Krumnack. Eye Tracking Data in Multimedia Containers for Instantaneous Visualizations. In IEEE VIS Workshop on Eye Tracking and Visualization (ETVIS), pages: 74-78, 2016. IEEE. http://dx.doi.org/10.1109/ETVIS.2016.7851171
[S3] J. Schöning, A.L. Gert, A. Açik, T.C. Kietzmann, G. Heidemann & P. König. Exploratory Multimodal Data Analysis with Standard Multimedia Player --- Multimedia Containers: a Feasible Solution to make Multimodal Research Data Accessible to the Broad Audience. In Proceedings of the 12th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), pages: 272-279, ISBN: 978-989-758-225-7, 2017. SCITEPRESS. http://dx.doi.org/10.5220/0006260202720279
[A] A. Açik, A. Bartel & P. König. Real and implied motion at the center of gaze. Journal of Vision, 14(1) : 1-19, 2014. Association for Research in Vision and Ophthalmology (ARVO). http://dx.doi.org/10.1167/14.1.2
[B] P. Sundberg, T. Brox, M. Maire, P. Arbelaez & J. Malik. Occlusion boundary detection and figure/ground assignment from optical flow. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2233-2240, 2011. IEEE. http://dx.doi.org/10.1109/cvpr.2011.5995364
[C] A. Coutrot & N. Guyader. How saliency, faces, and sound influence gaze in dynamic social scenes. Journal of Vision, 14(8) : 5-5, 2014. Association for Research in Vision and Ophthalmology (ARVO). http://dx.doi.org/10.1167/14.8.5
[K] K. Kurzhals, C.F. Bopp, J. Bässler, F. Ebinger & D. Weiskopf. Benchmark data for evaluating visualization and analysis techniques for eye tracking for video stimuli. In ACM Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), pages: 54-60, 2014. ACM Press. http://dx.doi.org/10.1145/2669557.2669558
[R] N. Riche, M. Mancas, D. Culibrk, V. Crnojevic, B. Gosselin & T. Dutoit. Dynamic Saliency Models and Human Attention: A Comparative Study on Videos. Lecture Notes in Computer Science, pages: 586-598, ISBN: 978-3-642-37431-9, 2013. Springer Science + Business Media. http://dx.doi.org/10.1007/978-3-642-37431-9_45
(2017-10-23)