MPhil Student @ HKUST(GZ)
Bachelor of Engineering @ Jinan University
I am a second-year MPhil student in Data Science at the Hong Kong University of Science and Technology (Guangzhou), supervised by Prof. Yuxuan Liang.
[Highlight] I will start a joint Ph.D. program between CUHK and Knowin AI in Fall 2026, under the supervision of Prof. James Cheng. If you are interested in my research directions, please feel free to contact me!
My research interests include Time Series Forecasting, Spatio-Temporal Data Mining, Multimodal Learning, Multi-Agent Systems, and Embodied AI.
Weilin Ruan, Wenzhuo Wang, Siru Zhong, Wei Chen, Li Liu, Yuxuan Liang
TITS. 2025
This paper proposes a novel spatio-temporal unitized model for traffic flow forecasting that effectively captures complex dependencies across both spatial and temporal dimensions, achieving state-of-the-art performance on multiple benchmark datasets.
Yongzheng Liu, Siru Zhong, Gefeng Luo, Weilin Ruan, Yuxuan Liang
ACM MM. 2025
We propose MMLoad, a novel diffusion-based multimodal framework for multi-scenario building load forecasting with three key innovations: a Multimodal Data Enhancement Pipeline, a Cross-modal Relation Encoder, and a Scenario-Conditioned Diffusion Generator with uncertainty quantification. Together, these components establish a new paradigm for multimodal learning in smart energy systems.
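For readers curious about how a scenario-conditioned diffusion generator works in broad strokes, here is a minimal sketch of a conditional denoiser and one DDPM-style training step. All class names, tensor shapes, and the noise schedule are my own illustrative assumptions and are not taken from the MMLoad implementation.

```python
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    """Toy scenario-conditioned denoiser: predicts the noise added to a load
    sequence, conditioned on a scenario embedding. Names and shapes are
    illustrative assumptions, not MMLoad's actual architecture."""
    def __init__(self, seq_len: int = 96, cond_dim: int = 16, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(seq_len + cond_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, seq_len),
        )

    def forward(self, x_t, cond, t):
        # x_t: noisy load (B, seq_len); cond: scenario embedding (B, cond_dim); t: (B, 1) timestep
        return self.net(torch.cat([x_t, cond, t], dim=-1))

def training_step(model, x0, cond, T: int = 1000):
    """One simplified DDPM-style step: corrupt x0 at a random timestep and
    train the denoiser to recover the injected noise."""
    t = torch.randint(1, T + 1, (x0.size(0), 1)).float() / T
    alpha_bar = torch.cos(t * torch.pi / 2) ** 2          # simple cosine noise schedule
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    pred = model(x_t, cond, t)
    return ((pred - noise) ** 2).mean()
```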
Weilin Ruan, Wei Chen, Xilin Dang, Jianxiang Zhou, Weichuang Li, Xu Liu, Yuxuan Liang
ECML. 2025
This paper presents ST-LoRA, a novel low-rank adaptation framework that serves as an off-the-shelf plugin for existing spatio-temporal prediction models, alleviating node heterogeneity through node-level adjustments while adding only minimal parameters and training time.
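To give a rough intuition for node-level low-rank adaptation, the sketch below attaches per-node rank-r adapters to the hidden states produced by a frozen backbone. The class name, tensor shapes, and residual fusion are assumptions for illustration only and do not reproduce the actual ST-LoRA code.

```python
import torch
import torch.nn as nn

class NodeLowRankAdapter(nn.Module):
    """Illustrative node-level low-rank adapter: each node gets its own rank-r
    adjustment of the backbone's hidden representation (shapes are assumptions)."""
    def __init__(self, num_nodes: int, hidden_dim: int, rank: int = 4):
        super().__init__()
        # Per-node low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(num_nodes, hidden_dim, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_nodes, rank, hidden_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_nodes, hidden_dim) hidden states from a frozen backbone.
        delta = torch.einsum("bnd,ndr,nrh->bnh", h, self.A, self.B)
        return h + delta  # residual, node-specific adjustment

# Usage sketch: wrap a frozen spatio-temporal backbone's representations.
# backbone = SomePretrainedSTModel()          # hypothetical existing model
# adapter = NodeLowRankAdapter(num_nodes=207, hidden_dim=64)
# out = adapter(backbone.encode(x))           # only adapter parameters are trained
```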
Xingchen Zou, Weilin Ruan, Siru Zhong, Yuehong Hu, Yuxuan Liang
KDD. 2025
We present DeepUHI, a heat equation-based framework that models urban heat island effects through thermodynamic cycles and thermal flows, integrating multimodal environmental data to achieve precise street-level temperature forecasting; the framework is now deployed as a real-time heat warning system in Seoul.
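As a rough illustration of the physical prior behind a heat equation-based model, the snippet below performs one explicit finite-difference step of the 2D heat equation on a temperature grid. It is a generic numerical sketch under assumed periodic boundaries and a toy diffusivity, not DeepUHI's actual formulation.

```python
import torch

def heat_diffusion_step(temp: torch.Tensor, alpha: float = 0.1, dt: float = 1.0) -> torch.Tensor:
    """One explicit finite-difference step of dT/dt = alpha * laplacian(T).
    temp: (H, W) grid of street-level temperatures; periodic boundaries via roll."""
    lap = (
        torch.roll(temp, 1, dims=0) + torch.roll(temp, -1, dims=0)
        + torch.roll(temp, 1, dims=1) + torch.roll(temp, -1, dims=1)
        - 4.0 * temp
    )
    return temp + dt * alpha * lap

# Usage sketch: evolve a toy 32x32 temperature field for a few steps.
grid = torch.rand(32, 32) * 5 + 25   # toy temperatures around 25-30 degrees C
for _ in range(10):
    grid = heat_diffusion_step(grid)
```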
Siru Zhong, Weilin Ruan, Ming Jin, Huan Li, Qingsong Wen, Yuxuan Liang
ICML. 2025
This paper proposes Time-VLM, a novel multimodal framework that leverages pre-trained Vision-Language Models (VLMs) to bridge temporal, visual, and textual modalities for enhanced time series forecasting.
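To convey the general idea of bridging time series with a vision-language model, the sketch below renders a window of a series as an image, pairs it with a hypothetical textual description, and embeds both with a frozen CLIP model from Hugging Face. The simple averaged fusion and the omitted forecasting head are placeholders of my own, not Time-VLM's actual design.

```python
import io
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def series_to_image(x: np.ndarray) -> Image.Image:
    """Render a 1D series as a small line-plot image (a simple visual encoding)."""
    fig, ax = plt.subplots(figsize=(2, 2))
    ax.plot(x)
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

window = np.sin(np.linspace(0, 6.28, 96))                  # toy input window
prompt = "hourly electricity load, one day, single peak"   # hypothetical description
inputs = processor(text=[prompt], images=series_to_image(window), return_tensors="pt")
img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
fused = (img_emb + txt_emb) / 2  # placeholder fusion; a real model would learn this
```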