A large variety of factors influences individuals' eating behavior in groups, such as individual motivation, general social situations, and the perception of specific peers. To the best of our knowledge, no existing visual analytics system targets the kind of data that arises in the analysis of collective human behavior, which combines time-series data, event sequences, dynamic networks, and conversational text. Beyond this variety of measured variables, the final behavior has multiple, partly contrasting dependencies, which requires putting domain experts in the loop to steer the data analysis interactively.
Such a visual analytics system further supports novel analysis methods based on new granularities of behavioral assessment. Previously, studies on collective eating behavior were limited by the many influencing factors and the vast manual effort of coding videos of real-life social situations. Modern tracking technology allows for fine-grained measurement of different behavior indicators (e.g., eye movement, body posture, chewing movement), which enables the analysis of the intertwined effect cycles between social and eating behavior. To generate and confirm psychological hypotheses from such large and complex datasets, we want to collaboratively develop a visual analytics system that supports visual data exploration, predictive modeling, and human-in-the-loop steering of the data analysis. Each system component will be developed in close contact with the collaboration partners and, after its completion, evaluated by psychologists regarding its effectiveness and usability.
Finally, the data analysis itself is inherently interdisciplinary, feeding domain experts' insights and generalization abilities back into the algorithmic analysis. With this approach, the human bias in judging behavioral cues and the algorithmic bias caused by limited training data can balance each other and lead to better overall performance. The final system should enable domain experts to extract meaningful patterns and theories from the provided datasets.
The raw dataset consists of video data capturing a scene of collective human behavior from several camera angles. Extracting different event types (e.g., eating, eye contact, conversation) from this raw footage constitutes the initial preprocessing stage.
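As an illustrative sketch (not the project's actual pipeline), event intervals could be derived from a per-frame detector score by thresholding and run-length grouping; the signal, label, and threshold values here are hypothetical:

```python
def extract_events(signal, threshold, label, min_len=3):
    """Turn a per-frame scalar signal (e.g., a chewing-intensity score)
    into labeled (label, start_frame, end_frame) event intervals by
    thresholding and grouping consecutive above-threshold frames.
    Runs shorter than min_len frames are treated as noise and dropped.
    """
    events, start = [], None
    for i, value in enumerate(signal):
        if value >= threshold and start is None:
            start = i                      # a new event begins
        elif value < threshold and start is not None:
            if i - start >= min_len:       # keep only sufficiently long runs
                events.append((label, start, i))
            start = None
    if start is not None and len(signal) - start >= min_len:
        events.append((label, start, len(signal)))  # event runs to the end
    return events


# Hypothetical per-frame scores: one long "chewing" run and one noise spike.
scores = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.2, 0.9]
print(extract_events(scores, 0.5, "chewing"))  # → [('chewing', 2, 6)]
```

In a real pipeline the scores would come from vision models rather than hand-written lists, and per-event confidence would likely be kept for the quality-assessment task below.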
In a further step, the extracted event data can be aggregated into semantically meaningful situation characteristics. For example, head movements might be aggregated into a field of vision, while audio signatures, direction, and valence might be aggregated into a network between participants.
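To make the network aggregation concrete, here is a minimal sketch, assuming hypothetical directed speech events of the form (speaker, addressee, duration in seconds); edge weights accumulate total speaking time between each ordered pair of participants:

```python
from collections import defaultdict


def conversation_network(speech_events):
    """Aggregate directed speech events (speaker, addressee, duration_s)
    into an edge-weighted interaction network.

    Returns a dict mapping (speaker, addressee) to the total number of
    seconds the speaker addressed that addressee.
    """
    edges = defaultdict(float)
    for speaker, addressee, duration in speech_events:
        edges[(speaker, addressee)] += duration
    return dict(edges)


# Hypothetical events: A addresses B twice, B answers once.
events = [("A", "B", 2.0), ("A", "B", 1.5), ("B", "A", 0.5)]
print(conversation_network(events))  # → {('A', 'B'): 3.5, ('B', 'A'): 0.5}
```

A real system would attach further attributes (e.g., valence per edge) and recompute the network per time window to obtain the dynamic networks mentioned above.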
The raw video information could be compared with the aggregated situation data in an animated glyph. A first design draft of such a glyph, showing movement, vision, and conversation, is shown in the header figure.
- 3D human pose reconstruction from a multi-camera setup
- Determining extrinsic and intrinsic camera parameters
- Extraction of event sequences from video data
- Quality assessment of extracted event data
- Development of a Visual Analytics framework for user-supported analysis of collective eating scenarios
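As a minimal sketch of the 3D reconstruction subtask: once the camera parameters are known (the calibration task above), a joint detected in two views can be triangulated with the standard linear (DLT) method. The projection matrices and image points below are illustrative, not from the project's actual camera setup:

```python
import numpy as np


def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices (intrinsics * extrinsics).
    x1, x2: (u, v) image coordinates of the same point in each view.
    Returns the 3D point as a length-3 array.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, and likewise for v.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null-space direction of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

For a full skeleton, this would be applied per joint and per frame to the 2D detections (e.g., from OpenPose, see the references), typically with more than two views and an outlier-robust variant.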
Note: Not all tasks need to be addressed in the project!
- Basic knowledge of visual analytics
- Knowledge of computer vision
Scope / Duration / Start
- Scope: Bachelor / Master
- Project/Thesis Duration: 3 months / 3 months (Bachelor), 3 months / 6 months (Master)
- Start: Consider the project registration deadlines provided by the Department of Computer and Information Science (BA | MA)
- Bachelor / Master Project Guide
- Z. Cao et al., "OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
- P. J. Polack et al., "Chronodes: Interactive Multifocus Exploration of Event Sequences," ACM Transactions on Interactive Intelligent Systems (TiiS), 8(1), 2018.
- J. Zhao et al., "Exploratory Analysis of Time-Series with ChronoLenses," IEEE Transactions on Visualization and Computer Graphics, 17(12), 2011.