Visual Interestingness - All images are equal but some images are more equal than others

Description

Interestingness -- the power of attracting or holding one's attention (thefreedictionary.com). Whether an object or event is considered interesting is determined not only by its features, but also by prior knowledge, the current task, and even emotional or motivational valence.

The project sketched in this grant application investigates (i) what humans consider interesting in images; and (ii) what examples a machine learning algorithm considers interesting, i.e., which examples are most useful for learning the task at hand. What do human beings find interesting? Our daily life is highly influenced by what we consume and see. On the one hand, given our personal interests, we choose which news, movies or other events we pay attention to. On the other hand, most people are also very open to external visual stimuli which might influence their behaviour.

In order to learn more about human visual perception and how it shapes our judgement of events as interesting, but also for commercial purposes, it is of great importance to understand what triggers human attention and interest. The obvious commercial use case is advertisement, but we believe it can go far beyond that. For example, models of what people consider "interesting" might be used to automatically analyse video streams in video surveillance applications and alert users. Or they might assist people during their work by automatically highlighting "interesting" facts which might otherwise have been overlooked (e.g., in medical applications). In general, we believe that a better understanding of "interestingness" and related concepts can pave the way to helping humans in various circumstances.

What do machines find interesting? Over the last years, enormous progress has been made in various domains, including computer vision, speech recognition and text translation. Most of these applications rely on very large datasets. For many tasks, however, such datasets do not exist or can hardly be created at all. We focus on the minimal quantity and optimum quality (= "interesting") of training examples needed for successful learning, transferring knowledge from related tasks to new domains. Furthermore, adaptation to a particular scene or use case, and adapting the model over time, is essential for robust real-world application, as not all data might be available at training time. A car detector trained 100 years ago (with examples of cars at that time) would have quite a hard time recognizing a modern Tesla nowadays. At first glance the two questions might look very different, but the goal in both cases is to learn a proper feature representation. We will make use of recent advances in deep learning and causality to approach the problem.
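The idea that some training examples are more useful than others is commonly formalised in active learning. The project description does not specify a selection criterion, so the following is only an illustrative sketch of one standard heuristic, uncertainty sampling: rank unlabeled examples by the entropy of the model's predicted class distribution and label the most uncertain ones first. All names and the example pool below are hypothetical.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_by_uncertainty(pool):
    """Rank unlabeled examples so the most uncertain (most "interesting") come first.

    `pool` maps an example id to the model's predicted class probabilities.
    """
    return sorted(pool, key=lambda ex: prediction_entropy(pool[ex]), reverse=True)

# Hypothetical pool of unlabeled images with a model's class probabilities.
pool = {
    "img_a": [0.98, 0.01, 0.01],  # confident prediction: labeling adds little
    "img_b": [0.34, 0.33, 0.33],  # near-uniform: most informative to label
    "img_c": [0.70, 0.20, 0.10],
}

print(rank_by_uncertainty(pool))  # -> ['img_b', 'img_c', 'img_a']
```

Entropy is only one possible notion of informativeness; margin-based or expected-model-change criteria follow the same ranking pattern with a different scoring function.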

By exploring datasets we aim to train an invariant representation which relates to interestingness and from which other concepts can be derived. This common framework will be used to investigate both concepts; results and insights will be shared and will influence each other, paving the way for many real-world applications.