OpenLens AI: Fully Autonomous Multimodal Agent for Health Informatics Research
A fully autonomous multimodal agent for medical, machine-learning, and statistics research, applicable to any data-driven project and optimized for combined medical and AI research.
Department of Automation, Tsinghua University
I am Yuxiao Cheng, a Ph.D. student in the Department of Automation, Tsinghua University, advised by Prof. Jinli Suo.
My research focuses on AI for healthcare, specifically causal modeling, AI agents, and multi-modal learning for digital health.
I received my B.E. from the Department of Automation at Tsinghua University in 2022 and am currently pursuing my Ph.D. in the same department, advised by Prof. Jinli Suo. I was awarded the National Scholarship (国家奖学金) in 2024.
I have published 9 first/co-first author papers in top-tier venues including The Lancet Digital Health, Nature Biomedical Engineering, Nature Communications, PNAS, ICLR, and AAAI.
(* indicates equal contribution)
A fully autonomous multimodal agent for medical, machine-learning, and statistics research, applicable to any data-driven project and optimized for combined medical and AI research.
A causal deep learning approach that combines neural networks with causal discovery to build a reliable, generalizable model for predicting a patient's risk of developing cardiac surgery-associated acute kidney injury (CSA-AKI).
A generative model that leverages paired healthy–diseased X-rays for interpretable pathology localization.
A supervised deep-learning denoising method that enables single-molecule FRET with up to 10-fold reduction in photon requirement per frame.
An implicit neural representation that encodes high-dimensional biomedical data volumes as a compact neural function, reducing the required transmission bandwidth by 2-3 orders of magnitude while maintaining high data fidelity.
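For readers unfamiliar with implicit neural representations, the sketch below illustrates the general idea only (a generic coordinate-MLP fit in PyTorch, not the specific method from the paper); the toy volume, network sizes, and training loop are illustrative assumptions. Storing or transmitting the small network's weights instead of the raw voxels is where the bandwidth reduction comes from.

    import torch
    import torch.nn as nn

    # Toy stand-in for a biomedical volume: a 32^3 grid of intensities (hypothetical data).
    volume = torch.rand(32, 32, 32)

    # Normalized (x, y, z) coordinates for every voxel, flattened to shape (N, 3).
    axes = [torch.linspace(-1.0, 1.0, s) for s in volume.shape]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)
    coords = grid.reshape(-1, 3)
    targets = volume.reshape(-1, 1)

    # Small coordinate MLP: after fitting, its weights serve as the compact representation.
    model = nn.Sequential(
        nn.Linear(3, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Fit the network to reproduce voxel intensities from their coordinates.
    for step in range(2000):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(coords), targets)
        loss.backward()
        optimizer.step()

    # Decoding: query the fitted function at any coordinate set to reconstruct the volume.
    reconstruction = model(coords).reshape(volume.shape)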
Reviewer for ICLR 2025, Pattern Recognition, and IEEE Internet of Things Journal.