My research interests broadly include Natural Language Processing, Knowledge Discovery, and Multimodal Models. Currently, I am also committed to the frontier exploration and practical application of Large Language Models (LLMs) and Artificial Intelligence Generated Content (AIGC).
(* denotes equal contribution)
ERNIE-ViLG 2.0: Improving Text-to-Image Diffusion Model with Knowledge-Enhanced Mixture-of-Denoising-Experts
Zhida Feng*, Zhenyu Zhang*, Xintong Yu*, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Li Chen, Hao Tian, Hua Wu, Haifeng Wang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (CVPR 2023)
ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding
Qiming Peng*, Yinxu Pan*, Wenjin Wang*, Bin Luo, Zhenyu Zhang, Zhengjie Huang, Teng Hu, Weichong Yin, Yongfeng Chen, Yin Zhang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Findings of the Association for Computational Linguistics: EMNLP 2022 (Findings of EMNLP 2022)