Code underlying the publication: "Are current long-term video understanding datasets long-term?"

Dataset

Description

Many real-world applications, from sports analysis to surveillance, benefit from automatic long-term action recognition. In the current deep learning paradigm for automatic action recognition, it is imperative that models are trained and tested on datasets and tasks that evaluate whether such models actually learn and reason over long-term information. In this work, we propose a method to assess how suitable a video dataset is for evaluating models for long-term action recognition. To this end, we define a long-term action recognition task as one that excludes all videos that can be correctly recognized using solely short-term information. We test this definition on existing long-term classification tasks on three popular real-world datasets, namely Breakfast, CrossTask and LVU, to determine whether these datasets truly evaluate long-term recognition. Our method involves conducting user studies, in which we ask humans to annotate videos from these datasets. Our study reveals that these datasets can be effectively solved using shortcuts based on short-term information.

In this repository, we provide the code and data. The code includes the HTML files for the user studies and the data analysis. The data includes the input to the user studies (e.g., video URLs) and the responses collected on Amazon Mechanical Turk.
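To illustrate the kind of analysis the repository performs on the Amazon Mechanical Turk responses, the sketch below computes, per dataset, the fraction of videos that annotators recognized correctly from short-term information alone. The CSV schema (`dataset`, `video_url`, `worker_answer`, `true_label`) and the function name are hypothetical; the released files may use a different layout.

```python
import csv
from collections import defaultdict

def shortcut_accuracy(csv_path):
    """Estimate, per dataset, the fraction of videos that annotators
    recognized correctly using only short-term information.

    Assumes a CSV with columns: dataset, video_url, worker_answer,
    true_label (hypothetical schema; the released data may differ).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            total[row["dataset"]] += 1
            if row["worker_answer"] == row["true_label"]:
                correct[row["dataset"]] += 1
    # A high value suggests the dataset is solvable via short-term shortcuts.
    return {d: correct[d] / total[d] for d in total}
```

A high per-dataset fraction would indicate that short-term shortcuts suffice, which is the phenomenon the user studies are designed to detect.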
Date made available: 24 May 2024
Publisher: TU Delft - 4TU.ResearchData
Date of data production: 2024 -
  • Are current long-term video understanding datasets long-term?

    Strafforello, O., Schutte, K. & van Gemert, J., 2023, Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Ceballos, C. (ed.). Piscataway: IEEE, p. 2959-2968, 10 p.

    Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

    Open Access