
Beyond Features for Recognition: Human-Readable Measures to Understand Users’ Whole-Body Gesture Performance

Vatavu, Radu-Daniel

International Journal of Human–Computer Interaction, 02 September 2017, Vol.33(9), p.713-730 [Peer-reviewed journal]

ISSN: 1044-7318 ; E-ISSN: 1532-7590 ; DOI: 10.1080/10447318.2017.1278897

Full text not available

  • Title:
    Beyond Features for Recognition: Human-Readable Measures to Understand Users’ Whole-Body Gesture Performance
  • Author: Vatavu, Radu-Daniel
  • Subject: Article
  • Is part of: International Journal of Human–Computer Interaction, 02 September 2017, Vol.33(9), p.713-730
  • Description: Understanding users’ whole-body gesture performance quantitatively requires numerical gesture descriptors or features. However, the vast majority of gesture features proposed in the literature were designed specifically for machines to recognize gestures accurately, which makes those features exclusively machine-readable. The complexity of such features makes it difficult for user interface designers, non-experts in machine learning, to understand and use them effectively (see, for instance, the Hu moment statistics or the Histogram of Gradients features), which considerably reduces designers’ available options for describing users’ whole-body gesture performance with legible and easily interpretable numerical measures. To address this problem, we introduce in this work a set of 17 measures that user interface practitioners can readily employ to characterize users’ whole-body gesture performance with human-readable concepts, such as area, volume, or quantity. Our measures describe (1) spatial characteristics of body movement, (2) kinematic performance, and (3) body posture appearance for whole-body gestures. We evaluate our measures on a public dataset composed of 5,654 gestures collected from 30 participants, for which we report several gesture findings, e.g., participants performed body gestures in an average volume of space of 1.0 m³, with an average amount of hands movement of 14.6 m, and a maximum body posture diffusion of 5.8 m. We show the relationship between our gesture measures and recognition rates delivered by a template-based Nearest-Neighbor whole-body gesture classifier implementing the Dynamic Time Warping dissimilarity function. We also release BOGArT, the Body Gesture Analysis Toolkit, which automatically computes our measures. This work will empower researchers and practitioners with new numerical tools to reach a better understanding of how users perform whole-body gestures and thus to use this knowledge to inform improved designs of whole-body gesture user interfaces. (Illustrative sketches of two such measures and of the classifier follow this record.)
  • Publisher: Taylor & Francis
  • Language: English
  • Identifier: ISSN: 1044-7318 ; E-ISSN: 1532-7590 ; DOI: 10.1080/10447318.2017.1278897
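
Two of the human-readable measures named in the abstract, the volume of space occupied by a gesture and the amount of hand movement, can be illustrated with a minimal sketch. This is not the paper's BOGArT toolkit; the bounding-box definition of volume, the input layout, and the function names are assumptions for illustration only.

    import numpy as np

    def gesture_volume(frames: np.ndarray) -> float:
        """Volume (m^3) of the axis-aligned bounding box enclosing all
        tracked joint positions across the gesture. `frames` has shape
        (T, J, 3): T time steps, J joints, xyz coordinates in meters.
        (Assumed definition; the paper's measure may differ.)"""
        points = frames.reshape(-1, 3)
        extents = points.max(axis=0) - points.min(axis=0)
        return float(np.prod(extents))

    def hand_path_length(frames: np.ndarray, hand_joint: int) -> float:
        """Total distance (m) traveled by one hand joint: the sum of
        Euclidean displacements between consecutive frames."""
        hand = frames[:, hand_joint, :]                    # (T, 3) trajectory
        steps = np.linalg.norm(np.diff(hand, axis=0), axis=1)
        return float(steps.sum())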
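The abstract also relates the measures to recognition rates from a template-based Nearest-Neighbor classifier using Dynamic Time Warping (DTW) dissimilarity. The sketch below shows the general technique, not the authors' implementation; the per-frame Euclidean distance and the template format are assumptions.

    import numpy as np

    def dtw(a: np.ndarray, b: np.ndarray) -> float:
        """DTW dissimilarity between two gestures, each of shape (T, D)
        (flattened joint coordinates per frame)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])    # frame distance
                cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                     cost[i, j - 1],       # deletion
                                     cost[i - 1, j - 1])   # match
        return float(cost[n, m])

    def classify(gesture: np.ndarray, templates) -> str:
        """Nearest-Neighbor rule: return the label of the template with
        the smallest DTW distance. `templates` is a list of
        (label, (T, D) array) pairs."""
        return min(templates, key=lambda t: dtw(gesture, t[1]))[0]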
