Learning Multimodal VAEs Through Mutual Supervision
(2022)
Presentation / Conference Contribution
Joy, T., Shi, Y., Torr, P. H. S., Rainforth, T., Schmon, S. M., & Siddharth, N. (2022, April). Learning Multimodal VAEs Through Mutual Supervision. Presented at ICLR 2022: The Tenth International Conference on Learning Representations, Virtual.
Multimodal VAEs seek to model the joint distribution over heterogeneous data (e.g. vision, language), whilst also capturing a shared representation across such modalities. Prior work has typically combined information from the modalities by reconciling...
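The abstract's reference to prior work "reconciling" per-modality representations points, in the multimodal-VAE literature, to explicit combinations of the modality-specific posteriors, most commonly a product of Gaussian experts. The sketch below is an illustrative aside only, not the method proposed in this paper; it assumes PyTorch, and all names and shapes are made up for the example.

```python
# Illustrative only: NOT this paper's method. A common way prior multimodal
# VAEs reconcile per-modality posteriors is a product of Gaussian experts;
# this sketch assumes PyTorch and invented shapes.
import torch

def product_of_experts(mus, logvars):
    """Fuse per-modality Gaussian posteriors N(mu_i, sigma_i^2), plus a
    standard-normal prior expert, by precision-weighted averaging."""
    mus = torch.stack(list(mus) + [torch.zeros_like(mus[0])], dim=0)
    logvars = torch.stack(list(logvars) + [torch.zeros_like(logvars[0])], dim=0)
    precision = torch.exp(-logvars)                      # 1 / sigma_i^2
    joint_var = 1.0 / precision.sum(dim=0)               # combined variance
    joint_mu = joint_var * (precision * mus).sum(dim=0)  # precision-weighted mean
    return joint_mu, torch.log(joint_var)

# Two modalities (say, image and text), batch of 4, latent dimension 8.
mu_img, lv_img = torch.randn(4, 8), torch.randn(4, 8)
mu_txt, lv_txt = torch.randn(4, 8), torch.randn(4, 8)
joint_mu, joint_logvar = product_of_experts([mu_img, mu_txt], [lv_img, lv_txt])
print(joint_mu.shape, joint_logvar.shape)  # torch.Size([4, 8]) for both
```

As the title indicates, the paper instead couples the modalities through mutual supervision rather than through explicit combinations of this kind.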