Conversations From Make-Believe: An Attentive Encoder–Decoder Chatbot Trained on Scripted Dialogue

Published 2025 in International Conference on Advances in Computing and Artificial Intelligence

ABSTRACT

This paper builds a fully neural, open-domain chatbot that learns to respond like a conversational partner rather than a search engine, using an encoder-decoder with a bidirectional recurrent encoder and a token-level attention mechanism to track context across turns. A large corpus of fictional, face-to-face exchanges is cleaned into paired utterances, tokenized into subword units, and used to train the model end-to-end, yielding fluent, on-topic replies without hand-crafted rules or retrieval templates. Training and validation optimize sequence likelihood, while quality is assessed with both automatic indicators (e.g., perplexity and n-gram overlap) and qualitative probes that test specificity, coherence, and avoidance of generic "safe" answers. A lightweight desktop interface demonstrates interactive behavior by surfacing multiple candidate responses from beam search and selecting among them for variety and fit. The study discusses common failure modes in open-domain chat (repetition, blandness, drift) and outlines practical remedies, including data curation, decoding constraints, and post-training reward signals, to further align responses with human conversational expectations.
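The perplexity metric cited among the automatic indicators can be computed directly from per-token log-likelihoods on held-out text. A minimal sketch (the probabilities below are illustrative, not values from the paper):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token;
    lower means the model found the held-out tokens less surprising."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A decoder that assigns probability 0.25 to every reference token
# has perplexity 4 (up to floating point).
scores = [math.log(0.25)] * 6
print(perplexity(scores))
```

The same quantity falls out of the sequence-likelihood training objective, which is why it doubles as a validation signal.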
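The interface's step of selecting among beam-search candidates "for variety and fit" can be approximated as re-ranking. The weights, generic-reply list, and scoring form below are assumptions for illustration, not the paper's actual procedure:

```python
# Hypothetical list of "safe" stock replies to penalize.
GENERIC = {"i don't know.", "okay.", "yes."}

def select_response(candidates, last_user_turn):
    """Re-rank (text, total_logprob) beam candidates: favor the
    length-normalized model score ("fit"), penalize stock replies
    and verbatim echoing of the user's turn ("variety")."""
    context = set(last_user_turn.lower().split())

    def score(cand):
        text, logprob = cand
        tokens = text.lower().split()
        fit = logprob / max(len(tokens), 1)      # length-normalized log-prob
        bland = 2.0 if text.lower() in GENERIC else 0.0
        echo = 0.1 * len(set(tokens) & context)  # overlap with the user's turn
        return fit - bland - echo

    return max(candidates, key=score)

beams = [("I don't know.", -2.0),
         ("They went to the harbor at dawn.", -7.0)]
print(select_response(beams, "Where did they go?"))
```

Length normalization counteracts beam search's bias toward short outputs, and the blandness penalty directly targets the generic-answer failure mode the abstract describes.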
PUBLICATION RECORD
- Publication year: 2025
- Venue: International Conference on Advances in Computing and Artificial Intelligence
- Publication date: 2025-12-26
- Fields of study: not labeled
- Source metadata: Semantic Scholar