Research Repository

Jialin Yu's Outputs (4)

Language as a latent sequence: Deep latent variable models for semi-supervised paraphrase generation (2023)
Journal Article
Yu, J., Cristea, A. I., Harit, A., Sun, Z., Aduragba, O. T., Shi, L., & Al Moubayed, N. (2023). Language as a latent sequence: Deep latent variable models for semi-supervised paraphrase generation. AI Open, 4, 19-32. https://doi.org/10.1016/j.aiopen.2023.05.001

This paper explores deep latent variable models for semi-supervised paraphrase generation, where the missing target pair for unlabelled data is modelled as a latent paraphrase sequence. We present a novel unsupervised model named variational sequence...

Efficient Uncertainty Quantification for Multilabel Text Classification (2022)
Presentation / Conference Contribution
Yu, J., Cristea, A. I., Harit, A., Sun, Z., Aduragba, O. T., Shi, L., & Al Moubayed, N. (2022, July). Efficient Uncertainty Quantification for Multilabel Text Classification. Presented at 2022 International Joint Conference on Neural Networks (IJCNN), Padova, Italy

Despite rapid advances in modern artificial intelligence (AI), there is growing concern regarding its capacity to be explainable, transparent, and accountable. One crucial step towards such AI systems involves reliable and efficient uncertainty quantification...

INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations (2022)
Presentation / Conference Contribution
Yu, J., Cristea, A. I., Harit, A., Sun, Z., Aduragba, O. T., Shi, L., & Al Moubayed, N. (2022, July). INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations. Presented at 2022 International Joint Conference on Neural Networks (IJCNN), Padova, Italy

XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, which addresses explainability and transparency. However, from an HCI perspective, the current approaches focus only on delivering a s...