DesignMinds: Enhancing Video-Based Design Ideation with Vision-Language Model and Context-Injected Large Language Model

Research output: Working paper/Preprint

Abstract

Ideation is a critical component of video-based design (VBD), where videos serve as the primary medium for design exploration and inspiration. The emergence of generative AI offers considerable potential to enhance this process by streamlining video analysis and facilitating idea generation. In this paper, we present DesignMinds, a prototype that integrates a state-of-the-art Vision-Language Model (VLM) with a context-enhanced Large Language Model (LLM) to support ideation in VBD. To evaluate DesignMinds, we conducted a between-subjects study with 35 design practitioners, comparing its performance to a baseline condition. Our results demonstrate that DesignMinds significantly enhances the flexibility and originality of ideation, while also increasing task engagement. Importantly, the introduction of this technology did not negatively impact user experience, technology acceptance, or usability.
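To make the two-stage pipeline named in the abstract concrete, the sketch below shows one plausible way to wire a VLM's video analysis into an LLM ideation prompt. This is a minimal illustration, not the paper's implementation: the model name ("gpt-4o"), the OpenAI SDK, the prompts, and the helper names describe_frame and ideate are all assumptions; the prototype's actual frame sampling, models, and prompt design are not specified here.

```python
# Minimal sketch of a VLM -> context-injected LLM pipeline.
# All model names, prompts, and function names are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def describe_frame(jpeg_bytes: bytes) -> str:
    """Ask a vision-language model to describe one sampled video frame."""
    b64 = base64.b64encode(jpeg_bytes).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in VLM; the paper's model choice is not given here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the design-relevant activity in this video frame."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content


def ideate(frame_descriptions: list[str], design_brief: str) -> str:
    """Inject the VLM's video analysis as context into an ideation prompt."""
    context = "\n".join(f"- {d}" for d in frame_descriptions)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a design-ideation assistant for video-based design."},
            {"role": "user",
             "content": f"Video analysis:\n{context}\n\nDesign brief: {design_brief}\n"
                        "Propose diverse, original design ideas grounded in the video."},
        ],
    )
    return resp.choices[0].message.content
```

In this sketch, per-frame descriptions stand in for whatever video analysis the VLM stage produces; the second call shows the "context injection" step, where that analysis is prepended to the ideation request rather than asking the LLM to reason about raw video.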
Original language: English
Publisher: Cornell University Library - arXiv.org
Number of pages: 23
Publication status: Published - 2024

Keywords

  • design ideation
  • generative AI
  • video-based design
  • large language model
  • vision language model
  • eye-tracking
  • designer-AI collaboration
