
[4] LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding
·
Paper Review/Key Information Extraction
Paper link: https://arxiv.org/pdf/2104.08836.pdf
GitHub: https://github.com/microsoft/unilm (microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities)