Research on Multimodal Knowledge Points Segmentation Tool for Classroom Teaching Videos
Jing Wang, Jiarong Yi, Gang Zhao, Yinan Zhang, Chao Yu, Fengying Dai
Published 2024 in 2024 International Conference on Intelligent Education and Intelligent Research (IEIR)
ABSTRACT
Knowledge points are the fundamental units of teaching information or concepts and form the foundation of teaching content. Segmenting knowledge points from long classroom teaching videos with artificial intelligence can help educators understand how course content is presented and which teaching methods are employed. However, owing to the complexity of teaching scenarios and the multimodal nature of knowledge-point presentations, few studies have analyzed the correlation between knowledge points and multimodal features or segmented video clips from the perspective of knowledge points. In light of the above, this paper proposes a novel multimodal knowledge-point segmentation tool for classroom teaching videos. The tool offers three main functions designed to achieve intelligent segmentation and extraction of knowledge points: extraction of a candidate knowledge-point sequence by fusing auditory and textual features, extraction of a candidate knowledge-point sequence based on visual features, and segmentation of knowledge points by integrating multimodal information. It provides teachers with a convenient platform and method for segmenting knowledge points, enabling them to conduct a range of teaching analyses.
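The third function named in the abstract, fusing candidate boundaries from different modality streams into final knowledge-point segments, can be illustrated with a minimal sketch. The function name, the midpoint-merging rule, and the `tolerance` parameter are illustrative assumptions, not the authors' actual method:

```python
def fuse_boundaries(audio_text_cands, visual_cands, tolerance=2.0):
    """Merge candidate boundary timestamps (in seconds) from two modalities.

    Hypothetical fusion rule: a visual candidate within `tolerance` seconds
    of an audio/text candidate is treated as corroborating the same boundary,
    and the fused boundary is their midpoint. Uncorroborated candidates are
    kept only when they come from the audio/text stream (assumed here to be
    the more reliable modality).
    """
    fused = []
    used = set()  # indices of visual candidates already matched
    visual_sorted = sorted(visual_cands)
    for t in sorted(audio_text_cands):
        match = None
        for i, v in enumerate(visual_sorted):
            if i not in used and abs(v - t) <= tolerance:
                match = (i, v)
                break
        if match is not None:
            used.add(match[0])
            fused.append((t + match[1]) / 2.0)  # corroborated: take midpoint
        else:
            fused.append(t)  # keep audio/text-only candidate as-is
    return fused
```

For example, an audio/text candidate at 10.0 s and a visual candidate at 11.0 s would fuse to a single boundary at 10.5 s, while an isolated visual candidate far from any audio/text candidate would be discarded.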
PUBLICATION RECORD
- Publication year: 2024
- Venue: 2024 International Conference on Intelligent Education and Intelligent Research (IEIR)
- Publication date: 2024-11-06
- Source metadata: Semantic Scholar