About Me

I'm Shangding Gu

I am currently a postdoctoral researcher at UC Berkeley, USA, and a guest researcher at the Technical University of Munich (TUM). I am fortunate to work with Prof. Costas Spanos and Prof. Ming Jin. I had a great time visiting Prof. Jan Peters' lab from September 2022 to December 2022, followed by a research internship at Microsoft from April 2023 to August 2023. In 2024, I earned my Ph.D. in Computer Science from TUM under the supervision of Prof. Alois Knoll.

My current research focuses on reinforcement learning, planning, and AI safety, with applications in foundation models (e.g., large language models and multi-modal models), robotics, and semiconductor manufacturing. My goal is to design safe, reliable, and efficient systems that address pressing real-world challenges and drive impactful applications across diverse domains. My work has appeared in top-tier journals and conferences, including Artificial Intelligence, IEEE Transactions on Pattern Analysis and Machine Intelligence, NeurIPS, and other prestigious venues. See the Safe RL YouTube Channel.

I support slow science. I am a student of mind, nature, and cosmos. Outside of research, I enjoy playing the guitar, reading, running, swimming, and playing badminton with friends.

I am seeking highly self-motivated and talented students at the Bachelor's, Master's, or PhD level. If you are interested in my research topics or in collaborating, please feel free to reach out via email, indicating your background and skills.
Email: shangding.gu[at]berkeley.edu
Location: Berkeley, USA

Research Interests

Safe/Robust Reinforcement Learning; Reinforcement Learning Theory; AI Safety.
Foundation Models; Motion Planning; Autonomous Driving; Robotics.

Recent News

11.2024: Invited lecture on safe reinforcement learning and its applications in the Virginia Tech ML course
11.2024: Our paper on a high-throughput parallel reinforcement learning framework was accepted by IEEE Transactions on Parallel and Distributed Systems
10.2024: Our paper on safe multi-agent reinforcement learning for autonomous driving was accepted by IEEE Transactions on Artificial Intelligence
09.2024: Our paper on efficient safe reinforcement learning was accepted by NeurIPS 2024
09.2024: We presented a tutorial on safe reinforcement learning for smart grid control and operations at IEEE SmartGridComm 2024, see Slides
09.2024: Our paper on safe reinforcement learning was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (IF: 20.8)
08.2024: We presented a tutorial on safe reinforcement learning: bridging theory and practice at IJCAI 2024, see Slides
08.2024: Invited lecture on safe robot learning in the WHUT summer course
07.2024: Our paper on robust safe robot learning was accepted by IEEE Transactions on Automation Science and Engineering (IF: 5.9)
05.2024: Join us at the SmartGridComm 2024 Workshop on Safe RL for Smart Grid Control and Operations (Call for Contributions)
04.2024: Our paper on safe learning for real-world robot control was accepted by IEEE Transactions on Industrial Informatics (IF: 12.3)
12.2023: Our paper on balancing safety and reward for safe RL was accepted by AAAI 2024 (Oral Paper)
10.2023: Our paper on a safe human-robot learning framework was accepted by Frontiers in Neurorobotics
08.2023: Our paper on RL for autonomous driving in parking lots was accepted by IEEE Transactions on Cybernetics (IF: 11.8)
06.2023: Our paper on offline RL with uncertain action constraints was accepted by IEEE Transactions on Cognitive and Developmental Systems
03.2023: Our paper on safe multi-robot learning was accepted by the journal Artificial Intelligence (IF: 14.4)
12.2022: We launched a long-term safe reinforcement learning online seminar. Each month, we invite at least one speaker to share cutting-edge research with RL researchers and students (each speaker has about one hour). We believe this seminar promotes research on safe reinforcement learning. For details, please see the Seminar Homepage
11.2022: Gave an invited talk on safe RL at the RL China community
10.2022: Gave an invited talk on safe RL at Prof. Jan Peters' lab
09.2022: We launched the 1st Safe RL Workshop @ IEEE MFI 2022

Recent Works

Gu, S.*, Shi, L.*, Wen, M., Jin, M., Mazumdar, E., Chi, Y., Wierman, A., & Spanos, C. (2024). Robust Gymnasium: A Unified Modular Benchmark for Robust Reinforcement Learning.

[Paper], [Code]

Zheng, Z., & Gu, S. (2024). Safe Multi-Agent Reinforcement Learning with Bilevel Optimization in Autonomous Driving. arXiv preprint arXiv:2405.18209.

[Paper], [Code]

Gu, S., Sel, B., Ding, Y., Wang, L., Lin, Q., Knoll, A., & Jin, M. (2024). Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning. arXiv preprint arXiv:2405.16390.

[Paper]

Gu, S., Knoll, A., & Jin, M. (2024). TeaMs-RL: Teaching LLMs to Teach Themselves Better Instructions via Reinforcement Learning. arXiv preprint arXiv:2403.08694.

[Paper], [Code]

Supervised Students

Kathleen Baur (Now at Cornell University)
Mhamed Jaafar (Now at Brainlab)
Zheng Zhi (Now at Agile Robots AG)
Jiarui Zou (Now at TUM)
Manxi Sun (Now at TUM)
Donghao Song (Collaborated with Derui Zhu)

My Services

01
Reviewer

I serve as a reviewer for several journals and conferences, including JMLR, IEEE TASE, IEEE TVT, IEEE TNNLS, IEEE TITS, IEEE TAI, ICML, NeurIPS, ICLR, AAAI, ICRA, IROS, and AISTATS.

02
Volunteer

I served as head of the supporting education department of the student union and participated in its teaching activities for nearly two years.