About Me

Safe robot learning demos

I'm Shangding Gu

I am currently a postdoctoral researcher at UC Berkeley, USA, and a guest researcher at the Technical University of Munich. I am fortunate to work with Prof. Costas Spanos and Prof. Ming Jin. I had a great time visiting Prof. Jan Peters' lab from September 2022 to December 2022, after which I completed a research internship at Microsoft from April 2023 to August 2023.

My research focuses on developing artificial intelligence methods and models, with a special interest in safe reinforcement learning, planning, foundation models, and robotics. My goal is to enable robots to learn, reason, and plan, and to work in support of people. See the Safe RL YouTube Channel. I support slow science. I am a student of mind, nature, and cosmos.

If you are interested in my research topics, please feel free to contact me, indicating your background and skills. I am seeking highly self-motivated and talented students at the Bachelor's, Master's, or PhD level; if you are interested in collaborating, feel free to reach out via email. Outside of research, I enjoy playing the guitar, reading, running, swimming, and playing badminton with friends.
Email : shangding.gu[at]berkeley.edu
Location : Berkeley, USA

Research Interests

Safe/Robust Reinforcement Learning; Reinforcement Learning Theory; AI Safety.
Foundation Models; Motion Planning; Autonomous Driving; Robotics (e.g., arm robotics and marine robotics).

Recent News

10.2024: Our paper on safe multi-agent reinforcement learning for autonomous driving got accepted by IEEE Transactions on Artificial Intelligence
09.2024: Our paper on efficient safe reinforcement learning got accepted by NeurIPS 2024
09.2024: We presented a tutorial on safe reinforcement learning for smart grid control and operations at IEEE SmartGridComm 2024, see Slides
09.2024: Our paper on safe reinforcement learning got accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (IF: 20.8)
08.2024: We presented a tutorial on safe reinforcement learning: bridging theory and practice at IJCAI 2024, see Slides
07.2024: Our paper on robust safe robot learning got accepted by IEEE Transactions on Automation Science and Engineering (IF: 5.9)
04.2024: Our paper on safe learning for real-world robot control got accepted by IEEE Transactions on Industrial Informatics (IF: 12.3)
12.2023: Our paper on safety and reward balance for safe RL got accepted by AAAI 2024 (Oral Paper)
10.2023: Our paper on a safe human-robot learning framework got accepted by Frontiers in Neurorobotics
08.2023: Our paper on RL for autonomous driving in parking lots got accepted by IEEE Transactions on Cybernetics (IF: 11.8)
06.2023: Our paper on offline RL with uncertain action constraints got accepted by IEEE Transactions on Cognitive and Developmental Systems
03.2023: Our paper on safe multi-robot learning got accepted by the journal of Artificial Intelligence (IF: 14.4)
12.2022: We launched a long-term online seminar on safe reinforcement learning. Each month, we invite at least one speaker to share cutting-edge research with RL researchers and students (each talk is about one hour). We believe this seminar helps promote research on safe reinforcement learning. For details, please see the Seminar Homepage
11.2022: Gave an invited talk on safe RL at the RL China community
10.2022: Gave an invited talk on safe RL at Prof. Jan Peters' lab
09.2022: We launched the 1st Safe RL Workshop @ IEEE MFI 2022

Recent Works

Gu, S.*, Shi, L.*, Ding, Y., Knoll, A., Spanos, C., Wierman, A., & Jin, M. (2024). Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation. arXiv preprint arXiv:2405.20860.

[arXiv]

Zheng, Z., & Gu, S. (2024). Safe Multi-Agent Reinforcement Learning with Bilevel Optimization in Autonomous Driving. arXiv preprint arXiv:2405.18209.

[arXiv], [Code]

Gu, S., Sel, B., Ding, Y., Wang, L., Lin, Q., Knoll, A., & Jin, M. (2024). Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning. arXiv preprint arXiv:2405.16390.

[arXiv]

Gu, S., Knoll, A., & Jin, M. (2024). TeaMs-RL: Teaching LLMs to Teach Themselves Better Instructions via Reinforcement Learning. arXiv preprint arXiv:2403.08694.

[arXiv], [Code]

Supervised Students

Kathleen Baur (Now at Cornell University)
Mhamed Jaafar (Now at Brainlab)
Zheng Zhi (Now at Agile Robots AG)
Jiarui Zou (Now at TUM)
Manxi Sun (Now at TUM)
Donghao Song (Collaborated with Derui Zhu)

My Services

01
Reviewer

I serve as a reviewer for several journals and conferences, including JMLR, IEEE TASE, IEEE TVT, IEEE TNNLS, IEEE TITS, IEEE TAI, ICML, NeurIPS, ICLR, AAAI, and ICRA.

02
Volunteer

I served as head of the student union's supporting-education department and participated in its teaching activities for nearly two years.