Collective Appetite

Overview

A wide variety of factors influence the eating behaviour of individuals in groups, such as individual motivation, the general social situation, and the perception of specific peers. To the best of our knowledge, there is no visual analytics system targeting data comparable to that of collective human behaviour analysis, which combines time-series data, event sequences, dynamic networks, and conversational text data. In addition to the variety of measurement variables, the observed behaviour has multiple contrasting dependencies, which requires putting domain experts in the loop to steer the data analysis interactively.

Such a visual analytics system further supports novel analysis methods based on new granularities of behavioural assessment. Previously, studies on collective eating behaviour were limited by the large number of influencing factors and the considerable effort of manually annotating videos of real-life social situations. Modern tracking technology allows for fine-grained measurement of different behaviour indicators (e.g. eye movement, body posture, chewing movement), which in turn enables the analysis of the intertwined effect cycles between social behaviour and eating behaviour. To generate and confirm psychological hypotheses from such large and complex datasets, we want to collaboratively develop a visual analytics system that supports visual data exploration, predictive modelling, and human-in-the-loop steering of the data analysis. Each system component will be developed in close contact with the collaboration partners and, after its completion, will be evaluated by psychologists regarding its effectiveness and usability.
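
As an illustration of the data representation described above, the following minimal sketch loads multi-channel behavioural signals as a multivariate time series and derives a simple event sequence from one channel. The file name, column names, participant label, and threshold are hypothetical placeholders, not a fixed data format; the sketch assumes pandas.

    import pandas as pd

    # Hypothetical sensor export: one row per timestamped measurement,
    # with a "participant" label and numeric channels such as gaze_x,
    # head_pitch, and chewing_intensity.
    signals = pd.read_csv("session_01.csv",
                          parse_dates=["timestamp"], index_col="timestamp")

    # Align channels with different sampling rates on a common 100 ms grid.
    p1 = (signals[signals["participant"] == "P1"]
          .resample("100ms").mean(numeric_only=True))

    # Derive an event sequence: contiguous runs above a threshold become
    # discrete "chewing" events with start and end times.
    is_chewing = p1["chewing_intensity"] > 0.5
    run_id = (is_chewing != is_chewing.shift()).cumsum()
    chewing_events = (p1[is_chewing]
                      .groupby(run_id[is_chewing])
                      .apply(lambda run: pd.Series(
                          {"start": run.index[0], "end": run.index[-1]})))
    print(chewing_events)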

Finally, the data analysis itself is inherently interdisciplinary: the insights and generalization abilities of domain experts are fed back into the algorithmic data analysis. With this approach, the human bias in judging behavioural cues and the algorithmic bias caused by limited training data can balance each other, leading to better overall performance. The final system should enable domain experts to extract meaningful patterns and theories from the provided datasets.

Problem Statement

The collected dataset comprises different types of data on different aggregation levels. The data extracted from sensor measurements or from tracking within video data (e.g. eye movement) can be represented as time series. Several tools for the visual analysis of such collections of multivariate time series have been proposed. Besides the raw time-series signatures, the annotated behavioural cues can be visualized as single event sequences. There are also existing visual analytics systems for conversation analysis that could be used to present multi-party interaction sequences. Finally, each time slice of measurements can be aggregated into semantically meaningful situation characteristics. For example, the head movements might be aggregated into a field of vision, and the audio signatures, their direction, and their valence might be aggregated into a network between participants. The raw video information could be compared with the aggregated situation data in an animated glyph. A first design draft of such a glyph, showing movement, vision, and conversation, is shown in the header figure.
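
To make the aggregation step concrete, the following sketch collapses one time slice of audio measurements into a directed, weighted network between participants. The record structure (speaker, addressee, duration, valence) and the example values are assumptions for illustration; the sketch uses networkx.

    import networkx as nx

    # Hypothetical per-slice utterance records:
    # (speaker, addressee, duration in seconds, valence in [-1, 1]).
    utterances = [
        ("P1", "P2", 4.2, 0.6),
        ("P2", "P1", 2.8, 0.4),
        ("P1", "P3", 1.5, -0.2),
    ]

    G = nx.DiGraph()
    for speaker, addressee, duration, valence in utterances:
        if G.has_edge(speaker, addressee):
            # Accumulate speaking time; average valence weighted by duration.
            e = G[speaker][addressee]
            total = e["duration"] + duration
            e["valence"] = (e["valence"] * e["duration"]
                            + valence * duration) / total
            e["duration"] = total
        else:
            G.add_edge(speaker, addressee, duration=duration, valence=valence)

    # The edges now summarise who talked to whom, for how long, and in
    # what tone within this time slice; this is the kind of network the
    # glyph could display.
    print(list(G.edges(data=True)))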

Tasks

  • Analyze audio and video data in Python (a minimal sketch follows after this list)
  • Build a basic visual analytics framework for user-supported analysis of collective eating scenarios
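
As a possible starting point for the first task, the following sketch extracts two deliberately simple stand-in features, assuming one audio track and one video file per recorded session (both file names are placeholders): short-time RMS energy from librosa as a crude voice-activity proxy, and mean inter-frame difference from OpenCV as a rough movement proxy.

    import cv2
    import librosa
    import numpy as np

    # Audio: short-time RMS energy as a crude voice-activity indicator.
    y, sr = librosa.load("session_01.wav", sr=None)
    rms = librosa.feature.rms(y=y)[0]
    active = rms > np.median(rms)  # frames louder than the median

    # Video: mean absolute inter-frame difference as a rough movement signal.
    cap = cv2.VideoCapture("session_01.mp4")
    prev, movement = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            movement.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()

    print(f"{active.mean():.0%} of audio frames above median energy, "
          f"{len(movement)} movement samples extracted")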

Requirements

  • Basic knowledge of visual analytics
  • Advanced programming skills in JavaScript, Python, and web development
  • Good conceptual skills (software architectures)

Scope/Duration/Start

  • Scope: Bachelor
  • 3-month project
  • 3-month thesis

Contact

References

  • Polack Jr, Peter J., et al. "Chronodes: Interactive multifocus exploration of event sequences." ACM Transactions on Interactive Intelligent Systems (TiiS) 8.1 (2018): 2.
  • Zhao, Jian, et al. "Exploratory analysis of time-series with ChronoLenses." IEEE Transactions on Visualization and Computer Graphics 17.12 (2011): 2422-2431.