Semi-Supervised Seq2seq Joint-Stochastic-Approximation Autoencoders With Applications to Semantic Parsing
Published 2020 in IEEE Signal Processing Letters
ABSTRACT
Developing Semi-Supervised Seq2Seq (<inline-formula><tex-math notation="LaTeX">$S^4$</tex-math></inline-formula>) learning for sequence transduction tasks in natural language processing (NLP), e.g. semantic parsing, is challenging, since both the input and the output sequences are discrete. This discrete nature poses difficulties for methods that require gradients from either the input space or the output space. Recently, a new learning method called joint stochastic approximation was developed for unsupervised learning of fixed-dimensional autoencoders; it theoretically avoids gradient propagation through discrete latent variables, a problem that Variational Auto-Encoders (VAEs) suffer from. In this letter, we propose seq2seq Joint-stochastic-approximation Auto-Encoders (JAEs) and apply them to <inline-formula><tex-math notation="LaTeX">$S^4$</tex-math></inline-formula> learning for NLP sequence transduction tasks. Further, we propose bi-directional JAEs (called bi-JAEs) to leverage not only unpaired input sequences (the most commonly studied setting) but also unpaired output sequences. Experiments on two benchmark datasets for semantic parsing show that JAEs consistently outperform VAEs in <inline-formula><tex-math notation="LaTeX">$S^4$</tex-math></inline-formula> learning and that bi-JAEs yield further improvements.