Professor
Kailun Yang: Ph.D., Professor, doctoral and master's supervisor, recipient of the National Science Fund for Excellent Young Scientists (Overseas), Yuelu Scholar of Hunan University, and Huawei Young Scholar. In June 2014, he received dual bachelor's degrees: Measurement & Control Technology and Instrumentation from Beijing Institute of Technology and Economics from Peking University. In June 2019, he received his Ph.D. in Measuring and Testing Technology and Instruments from Zhejiang University. From September 2017 to September 2018, he was a visiting Ph.D. student in the Robotics and Electronic Safety (RobeSafe) research group at the University of Alcalá (UAH), Spain. From November 2019 to January 2023, he conducted postdoctoral research in the Computer Vision for Human-Computer Interaction (CV:HCI) Lab at Karlsruhe Institute of Technology (KIT), Germany. He joined Hunan University in February 2023.
His research centers on multimodal, high-dimensional, omnidirectional computational optics and computational vision, supporting applications such as autonomous driving, assistance for blind people, and quadruped robots. He has published 100+ papers in journals including TPAMI, IJCV, TIP, T-RO, TMC, TNNLS, T-ITS, TMM, TCSVT, T-ASE, T-IV, TIM, TCI, and TAI, and at top venues in computer vision, machine learning, artificial intelligence, robotics, multimedia, and intelligent transportation, including CVPR, NeurIPS, CoRL, ICCV, ECCV, ICLR, AAAI, IJCAI, ICRA, IROS, MM, IV, and ITSC. He is listed among Stanford's World's Top 2% Scientists and the top 0.05% of scholars worldwide by ScholarGPS. He holds, solely or jointly, 50+ patents, 4 of which have been transferred to industry, and won the national grand championship of the "Chuang Qingchun" Innovation and Entrepreneurship Competition hosted by the Central Committee of the Communist Youth League. CMX was selected as an IEEE T-ITS Top-10 Popular Article; MateRobot was a Finalist for the Best Paper Award on Human-Robot Interaction at the robotics flagship conference IEEE ICRA 2024; ACNet was among the Most Cited Papers of IEEE ICIP 2019. He received the Best Paper Award at the intelligent vehicles flagship conference IEEE IV 2021, the Best Presentation Award at ICFIP 2018, the Accessibility Challenge Judges' Award at the accessibility conference W4A 2023, and two Applied Optics Editors' Picks (2018, 2019). He serves as an Associate Editor of IEEE Transactions on Intelligent Transportation Systems, IEEE Robotics and Automation Letters, and Robot Learning, and as an Associate Editor for the robotics conferences ICRA and IROS. He has reviewed for 100+ journals and conferences, including TPAMI, IJCV, CVPR, NeurIPS, ICML, and ICLR, and was recognized as an Outstanding Reviewer by IEEE TIM (2024), ECCV 2022, and ACCV 2022. His graduated advisees have gone on to further study or employment at KIT, Huawei, ByteDance, NIO, and other organizations.
Artificial intelligence, the engine of a new round of technological revolution, is quietly reshaping the global landscape of science, technology, and industry, and even redefining our daily lives. From intelligent driving and AI guidance for blind people to mixed reality, smart factories, and intelligent healthcare, and on to motion assistance and robot parkour, AI's reach is everywhere. Behind all of this, vision is AI's super data source: its importance is self-evident, and it is the core driving force behind precise robotic decision-making and execution. The Computer Vision for Panoramic Understanding Lab (CV:PU) takes vision as its entry point, combining computational imaging, multi-dimensional perception, panoramic understanding, and video analysis to tackle real-world perception and modeling problems, addressing perception challenges posed by low illumination, extreme weather, domain shift, open scenarios, label scarcity, highly dynamic multi-object scenes, wide fields of view and broad spectra, and lightweight, low-compute platforms. We also study embodied perception and skill learning, investigate robot affordances and spatial understanding, and improve world models and human-robot interaction, so as to enhance the overall performance and interpretability of robotic systems. Our research extends to application scenarios such as autonomous driving, quadruped robots, intelligent guidance assistance for blind people, and intelligent motion analysis, providing technical support for the future development of robotics.
We currently have openings for postdocs, Ph.D. students, direct-entry Ph.D. students, master's students, research assistants, undergraduate thesis students, and undergraduate research assistants. The most important goal of education is to shape independent character and a free spirit, to encourage attempts and tolerate failure, and to cultivate an intellectual community with a sense of responsibility for the nation and the ambition to transform life, society, and the world. We are deeply aware that university education is not merely about imparting knowledge, nor even only about developing abilities; more importantly, it is about creating an egalitarian academic atmosphere in which interaction among students inspires scientific thinking and scientific research, forming that "independent character." The group emphasizes an atmosphere of open, equal discussion and advocates collaboration in the form of co-work. The CV:PU group is young and communicates well; beyond the group itself, members study and do research together with students of other advisors at Karlsruhe Institute of Technology, the College of Optical Science and Engineering at Zhejiang University, and the School of Artificial Intelligence and Robotics at Hunan University, allowing ample cross-disciplinary collaboration and innovation. Ph.D. students have at least two opportunities to attend conferences abroad during their studies, and master's students at least one. Students who excel in research with capacity to spare and complete their projects ahead of schedule can be recommended for roughly one-year joint training at renowned overseas groups such as NVIDIA, KIT, TUM, NTU, and UAH. If you are interested in {Computer Vision, Scene Understanding, Robot Learning, Video Understanding, Computational Imaging, Embodied Intelligence, Spatial Intelligence, Autonomous Driving}, feel free to contact me.
Related links: Personal Homepage, Google Scholar, ResearchGate, DBLP, GitHub, Group Gallery
Contact: kailun.yang@hnu.edu.cn
2025.07 – present Hunan University, School of Artificial Intelligence and Robotics, Professor, doctoral and master's supervisor
2023.11 – 2025.06 Hunan University, School of Robotics, Professor, doctoral and master's supervisor
2023.02 – 2023.10 Hunan University, School of Robotics, Associate Professor, doctoral and master's supervisor
2019.11 – 2023.01 Karlsruhe Institute of Technology (KIT), Germany, Computer Vision for Human-Computer Interaction (CV:HCI) Lab, Postdoctoral Researcher
2014.09 – 2019.06 Zhejiang University, State Key Laboratory of Modern Optical Instrumentation, Ph.D.
2017.09 – 2018.09 University of Alcalá (UAH), Spain, Robotics and Electronic Safety (RobeSafe) Research Group, Visiting Ph.D. Student
2012.09 – 2014.06 Peking University, National School of Development, Dual Bachelor's Degree in Economics
2010.09 – 2014.06 Beijing Institute of Technology, School of Optics and Photonics, Measurement & Control Technology and Instrumentation, Bachelor's Degree
Research Projects:
[1] Continual Autonomous Driving Scene Parsing Driven by Panoramic Computational Imaging. NSFC General Program, 2025.01-2028.12 (PI)
[2] Robotic Visual Perception and Assistance Technology. NSFC Excellent Young Scientists Fund (Overseas), 2024.01-2026.12 (PI)
[3] Panoramic-Polarization Fusion Perception for Quadruped Robots toward Embodied Spatial Understanding. Hunan Provincial Key R&D Program, 2025.07-2028.06 (PI)
[4] Reinforcement-Learning-Based Quadruped Bionic Locomotion with Emotional Expression. Open-competition project of the Hunan University-China Mobile Joint Research Institute of Industrial Intelligence, 2025.11-2026.10 (PI)
[5] Spatio-Temporal Perception Based on Panoramic Images. Joint R&D project between Hunan University and Suzhou Lingjing Spatial Intelligence Technology Co., Ltd., 2026.01-2026.08 (PI)
[6] Brain-Inspired Collaborative Perception for Unmanned Systems under Heterogeneous Views. Open project of the State Key Laboratory of Autonomous Intelligent Unmanned Systems, 2025.11-2026.10 (PI)
[7] Edge-Cloud Collaborative Panoramic Visual Perception for Quadruped Robots. Open project of the State Key Laboratory of Industrial Control Technology, 2025.01-2025.12 (PI)
[8] Accessible Maps: Barrier-free maps to improve the occupational mobility of people with visual or mobility impairments. Project of the German Federal Ministry of Labour and Social Affairs (BMAS) (01KM151112), 2019.11-2022.12 (key participant)
[9] KIT Future Fields. KIT campus project, 2021.01-2023.01 (key participant)
[10] Visual Precise Localization Technology. Industry-funded project (K-Heng 20180747), 2018.05-2020.04 (key participant)
[11] Visual Sensing Technology Fusing Multi-Dimensional Parameters. Industry-funded project (K-Heng 20181674), 2018.08-2019.08 (key participant)
[12] Semantic Perception for Navigation Assistance. Zhejiang University overseas exchange program, 2017.09-2018.09 (PI)
[13] Visual Assistance for Blind People Based on 3D Terrain Sensing. Public-welfare project of the Department of Agriculture and Social Development (KN20161853), 2016.01-2017.12 (key participant)
Publications:
Computer Vision and Scene Understanding:
[1] J. Zhang, K. Yang†, H. Shi, S. Reiß, K. Peng, C. Ma, H. Fu, P.H.S. Torr, K. Wang, R. Stiefelhagen. Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024 [PDF]
[2] K. Peng†, D. Wen, M.S. Sarfraz, Y. Chen, J. Zheng, D. Schneider, K. Yang†, J. Wu, A. Roitberg, R. Stiefelhagen. Mitigating Label Noise using Prompt-Based Hyperbolic Meta-Learning in Open-Set Domain Generalization. International Journal of Computer Vision (IJCV), 2026 [PDF]
[3] K. Yang†, X. Hu, R. Stiefelhagen. Is Context-Aware CNN Ready for the Surroundings? Panoramic Semantic Segmentation in the Wild. IEEE Transactions on Image Processing (TIP), 2021 [PDF]
[4] S. Li*, F. Teng*, Y. Cao, K. Yang†, Z. Li†, Y. Wang. NRSeg: Noise-Resilient Learning for BEV Semantic Segmentation via Driving World Models. IEEE Transactions on Image Processing (TIP), 2026 [PDF]
[5] Q. Jiang*, S. Gao*, Y. Gao, K. Yang†, Z. Yi, H. Shi, L. Sun, K. Wang†. Minimalist and High-Quality Panoramic Imaging with PSF-aware Transformers. IEEE Transactions on Image Processing (TIP), 2024 [PDF]
[6] H. Shi*, C. Pang*, J. Zhang*, K. Yang†, Y. Wu, H. Ni, Y. Lin, R. Stiefelhagen, K. Wang†. CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity. IEEE Transactions on Image Processing (TIP), 2024 [PDF]
[7] J. Lin, J. Chen, K. Yang†, A. Roitberg, S. Li, Z. Li†, S. Li. AdaptiveClick: Click-aware Transformer with Adaptive Focal Loss for Interactive Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024 [PDF]
[8] K. Peng, A. Roitberg, K. Yang†, J. Zhang, R. Stiefelhagen. Delving Deep into One-Shot Skeleton-based Action Recognition with Diverse Occlusions. IEEE Transactions on Multimedia (TMM), 2023 [PDF]
[9] H. Li, Q. Hu, B. Zhou, Y. Yao, J. Lin, K. Yang†, P. Chen†. CFMW: Cross-modality Fusion Mamba for Robust Object Detection under Adverse Weather. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2025 [PDF]
[10] F. Teng*, J. Zhang*, K. Peng, Y. Wang, R. Stiefelhagen, K. Yang†. OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation. IEEE Transactions on Artificial Intelligence (TAI), 2024 [PDF]
[11] K. Yang†, J. Zhang, S. Reiß, X. Hu, R. Stiefelhagen. Capturing Omni-Range Context for Omnidirectional Segmentation. In CVPR, 2021 [PDF]
[12] Y. Zheng, K. Peng, X. Zheng, K. Yang†. Seeing Beyond: Extrapolative Domain Adaptive Panoramic Segmentation. In CVPR, 2026 [PDF]
[13] Y. Zhang, M. Duan, K. Peng†, Y. Wang, D. Wen, D.P. Paudel, L. Van Gool, K. Yang†. ProOOD: Prototype-Guided Out-of-Distribution 3D Occupancy Prediction. In CVPR, 2026 [PDF]
[14] K. Luo*, H. Shi*, S. Wu, F. Teng, M. Duan, C. Huang, Y. Wang, K. Wang, K. Yang†. Omnidirectional Multi-Object Tracking. In CVPR, 2025 [PDF]
[15] J. Zhang, K. Yang†, C. Ma, S. Reiß, K. Peng, R. Stiefelhagen. Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation. In CVPR, 2022 [PDF]
[16] J. Zhang*, R. Liu*, H. Shi, K. Yang†, S. Reiß, H. Fu, K. Peng, K. Wang, R. Stiefelhagen. Delivering Arbitrary-Modal Semantic Segmentation. In CVPR, 2023 [PDF]
[17] H. Shi*, Z. Wang*, S. Guo*, M. Duan, S. Wang, T. Chen, K. Yang†, L. Wang†, K. Wang†. OneOcc: Semantic Occupancy Prediction for Legged Robots with a Single Panoramic Camera. In CVPR, 2026 [PDF]
[18] K. Peng, D. Wen, K. Yang†, A. Luo, Y. Chen, J. Fu, M.S. Sarfraz, A. Roitberg, R. Stiefelhagen. Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler. In NeurIPS, 2024 [PDF]
[19] S. Wu*, F. Teng*, H. Shi*, Q. Jiang, K. Luo, K. Wang, K. Yang†. QuaDreamer: Controllable Panoramic Video Generation for Quadruped Robots. In CoRL, 2025 [PDF]
[20] Y. Cao*, J. Zhang*, X. Zheng, H. Shi, K. Peng, H. Liu, K. Yang†, H. Zhang†. Unlocking Constraints: Source-Free Occlusion-Aware Seamless Segmentation. In ICCV, 2025 [PDF]
[21] K. Peng, C. Yin, J. Zheng, R. Liu, D. Schneider, J. Zhang, K. Yang†, M.S. Sarfraz, R. Stiefelhagen, A. Roitberg. Navigating Open Set Scenarios for Skeleton-based Action Recognition. In AAAI, 2024 [PDF]
[22] Y. Cao*, J. Zhang*, H. Shi, K. Peng, Y. Zhang, H. Zhang†, R. Stiefelhagen, K. Yang†. Occlusion-Aware Seamless Segmentation. In ECCV, 2024 [PDF]
[23] K. Peng*, J. Fu*, K. Yang†, D. Wen, Y. Chen, R. Liu, J. Zheng, J. Zhang, M.S. Sarfraz, R. Stiefelhagen, A. Roitberg. Referring Atomic Video Action Recognition. In ECCV, 2024 [PDF]
[24] K. Zeng, H. Shi, J. Lin, S. Li, J. Cheng, K. Wang, Z. Li†, K. Yang†. MambaMOS: LiDAR-based 3D Moving Object Segmentation with Motion-aware State Space Model. In MM, 2024 [PDF]
[25] K. Peng, D. Schneider, A. Roitberg, K. Yang†, J. Zhang, C. Deng, K. Zhang, M.S. Sarfraz, R. Stiefelhagen. Towards Video-based Activated Muscle Group Estimation in the Wild. In MM, 2024 [PDF]
[26] X. Hu, K. Yang, L. Fei, K. Wang. ACNet: Attention Based Network to Exploit Complementary Features for RGBD Semantic Segmentation. Most Cited Paper in ICIP 2019 [PDF]
[27] Q. Wang*, J. Zhang*, K. Yang†, K. Peng, R. Stiefelhagen. MatchFormer: Interleaving Attention in Transformers for Feature Matching. Top-3 Cited Paper in ACCV 2022 [PDF]
[28] J. Zhang, K. Yang†, A. Constantinescu, K. Peng, K. Müller, R. Stiefelhagen. Trans4Trans: Efficient Transformer for Transparent Object Segmentation to Help Visually Impaired People Navigate in the Real World. Main Publication in Google Scholar Metrics in ICCVW, 2021 [PDF]
Extreme Photonics and Computational Imaging:
[1] S. Gao, K. Yang†, H. Shi, K. Wang†, J. Bai. Review on Panoramic Imaging and Its Applications in Scene Understanding. IEEE Transactions on Instrumentation and Measurement (TIM), 2022 [PDF]
[2] Q. Jiang*, H. Shi*, S. Gao, J. Zhang, K. Yang†, L. Sun, H. Ni, K. Wang†. Computational Imaging for Machine Perception: Transferring Semantic Segmentation beyond Aberrations. IEEE Transactions on Computational Imaging (TCI), 2024 [PDF]
[3] X. Qian*, Q. Jiang*, Y. Gao, S. Gao, Z. Yi, L. Sun, K. Wei, H. Li, K. Yang†, K. Wang†, J. Bai. Towards Single-Lens Controllable Depth-of-Field Imaging via Depth-Aware Point Spread Functions. IEEE Transactions on Computational Imaging (TCI), 2025 [PDF]
[4] K. Xiang, K. Yang, K. Wang. Polarization-driven Semantic Segmentation via Efficient Attention-bridged Fusion. Optics Express (OE), 2021 [PDF]
[5] K. Yang, L.M. Bergasa, E. Romera, K. Wang. Robustifying Semantic Cognition of Traversability across Wearable RGB-Depth Cameras. Editors' Pick at Applied Optics (AO), 2019 [PDF]
[6] K. Yang, K. Wang, H. Chen, J. Bai. Reducing the Minimum Range of a RGB-Depth Sensor to Aid Navigation in Visually Impaired Individuals. Editors' Pick at Applied Optics (AO), 2018 [PDF]
[7] Q. Jiang, Z. Yi, S. Gao, Y. Gao, X. Qian, H. Shi, L. Sun, J. Niu, K. Wang†, K. Yang†, J. Bai. Representing Domain-Mixing Optical Degradation for Real-World Computational Aberration Correction via Vector Quantization. Optics & Laser Technology (JOLT), 2024 [PDF]
[8] K. Zhou, K. Yang, K. Wang. Panoramic Depth Estimation via Supervised and Unsupervised Learning in Indoor Scenes. Applied Optics (AO), 2021 [PDF]
[9] X. Yin*, H. Shi*, Y. Bao*, Z. Bing, Y. Liao, K. Yang†, K. Wang†. E-3DGS: 3D Gaussian Splatting with Exposure and Motion Events. Applied Optics (AO), 2025 [PDF]
[10] K. Yang, K. Wang†, X. Zhao, R. Cheng, J. Bai, Y. Yang, D. Liu. IR Stereo RealSense: Decreasing Minimum Range of Navigational Assistance for Visually Impaired Individuals. Journal of Ambient Intelligence and Smart Environments (JAISE), 2017 [PDF]
[11] H. Chen, K. Yang, W. Hu, J. Bai, K. Wang. Semantic Visual Odometry Based on Panoramic Annular Imaging. Acta Optica Sinica, 2021 [PDF]
Autonomous Driving and Human-Computer Interaction:
[1] K. Yang, X. Hu, L.M. Bergasa, E. Romera, K. Wang. PASS: Panoramic Annular Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2019 [PDF]
[2] K. Yang†, X. Hu, Y. Fang, K. Wang, R. Stiefelhagen. Omnisupervised Omnidirectional Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2020 [PDF]
[3] A. Jaus, K. Yang†, R. Stiefelhagen. Panoramic Panoptic Segmentation: Insights Into Surrounding Parsing for Mobile Agents via Unsupervised Contrastive Learning. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2023 [PDF]
[4] J. Zhang, K. Yang†, R. Stiefelhagen. Exploring Event-driven Dynamic Context for Accident Scene Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2021 [PDF]
[5] J. Zhang, K. Yang†, A. Constantinescu, K. Peng, K. Müller, R. Stiefelhagen. Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2022 [PDF]
[6] R. Liu, K. Yang†, A. Roitberg, J. Zhang, K. Peng, H. Liu, Y. Wang, R. Stiefelhagen. TransKD: Transformer Knowledge Distillation for Efficient Semantic Segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2024 [PDF]
[7] J. Zhang*, H. Liu*, K. Yang*†, X. Hu, R. Liu, R. Stiefelhagen. CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers. IEEE Transactions on Intelligent Transportation Systems (T-ITS), Top-10 Popular Article, Most Cited Paper in 2023 [PDF]
[8] S. Li, J. Lin, H. Shi, J. Zhang, S. Wang, Y. Yao, Z. Li†, K. Yang†. DTCLMapper: Dual Temporal Consistent Learning for Vectorized HD Map Construction. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2024 [PDF]
[9] J. Lin*, J. Chen*, K. Peng*, X. He, Z. Li†, R. Stiefelhagen, K. Yang†. EchoTrack: Auditory Referring Multi-Object Tracking for Autonomous Driving. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2024 [PDF]
[10] H. Shi*, Y. Zhou*, K. Yang†, X. Yin, Z. Wang, Y. Ye, Z. Yin, S. Meng, P. Li, K. Wang†. PanoFlow: Learning 360° Optical Flow for Surrounding Temporal Understanding. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2023 [PDF]
[11] J. Zhang, C. Ma, K. Yang†, A. Roitberg, K. Peng, R. Stiefelhagen. Transfer beyond the Field of View: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2021 [PDF]
[12] H. Shi*, S. Wang*, J. Zhang, X. Yin, G. Wang, J. Zhu, K. Yang†, K. Wang†. Offboard Occupancy Refinement with Hybrid Propagation for Autonomous Driving. IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2025 [PDF]
[13] J. Sun, W. Sun†, G. Zhang†, K. Yang†, S. Li, X. Meng, N. Deng, C. Tan. CT-UIO: Continuous-Time UWB-Inertial-Odometer Localization Using Non-Uniform B-spline with Fewer Anchors. IEEE Transactions on Mobile Computing (TMC), 2025 [PDF]
[14] Z. Wang*, K. Yang*†, H. Shi, P. Li, F. Gao, J. Bai, K. Wang†. LF-VISLAM: A SLAM Framework for Large Field-of-View Cameras with Negative Imaging Plane on Mobile Agents. IEEE Transactions on Automation Science and Engineering (T-ASE), 2023 [PDF]
[15] Z. Wang, K. Yang†, H. Shi, Y. Zhang, Z. Xu, F. Gao, K. Wang†. LF-PGVIO: A Visual-Inertial-Odometry Framework for Large Field-of-View Cameras using Points and Geodesic Segments. IEEE Transactions on Intelligent Vehicles (T-IV), 2024 [PDF]
[16] H. Shi*, Q. Jiang*, K. Yang†, X. Yin, Z. Wang, K. Wang†. Beyond the Field-of-View: Enhancing Scene Visibility and Perception with Clip-Recurrent Transformer. IEEE Transactions on Intelligent Vehicles (T-IV), 2024 [PDF]
[17] Z. Yi*, H. Shi*, K. Yang†, Q. Jiang, Y. Ye, Z. Wang, K. Wang†. FocusFlow: Boosting Key-Points Optical Flow Estimation for Autonomous Driving. IEEE Transactions on Intelligent Vehicles (T-IV), 2023 [PDF]
[18] L. Sun, K. Yang, X. Hu, W. Hu, K. Wang. Real-time Fusion Network for RGB-D Semantic Segmentation Incorporating Unexpected Obstacle Detection for Road-driving Images. Main Publication in Google Scholar Metrics in IEEE Robotics and Automation Letters (RA-L), 2020 [PDF]
[19] S. Li, K. Yang†, H. Shi, J. Zhang, J. Lin, Z. Teng, Z. Li†. Bi-Mapper: Holistic BEV Semantic Mapping for Autonomous Driving. IEEE Robotics and Automation Letters (RA-L), 2023 [PDF]
[20] F. Teng*, K. Luo*, S. Wu*, S. Li, P. Guo, J. Wei, J. Zhang, K. Peng, K. Yang†. Hallucinating 360°: Panoramic Street-View Generation via Local Scenes Diffusion and Probabilistic Prompting. In ICRA, 2026 [PDF]
[21] W. Li, K. Peng†, D. Wen, R. Liu, M. Duan, K. Luo, K. Yang†. Segment-to-Act: Label-Noise-Robust Action-Prompted Video Segmentation Towards Embodied Intelligence. In ICRA, 2026 [PDF]
[22] L. Kong, J. Lin, S. Li, K. Luo, Z. Li†, K. Yang†. CoBEVMoE: Heterogeneity-aware Feature Fusion with Dynamic Mixture-of-Experts for Collaborative Perception. In ICRA, 2026 [PDF]
[23] J. Zheng, J. Zhang, K. Yang†, K. Peng, R. Stiefelhagen. MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments. In ICRA (Finalist for Best Paper Award on Human-Robot Interaction), 2024 [PDF]
[24] J. Zhang, K. Yang†, R. Stiefelhagen. ISSAFE: Improving Semantic Segmentation in Accidents by Fusing Event-based Data. In IROS, 2021 [PDF]
[25] J. Zhao*, F. Teng*, K. Luo, G. Zhao, Z. Li, X. Zheng†, K. Yang†. Unveiling the Potential of Segment Anything Model 2 for RGB-Thermal Semantic Segmentation with Language Guidance. In IROS, 2025 [PDF]
[26] Y. Huang, F. Yang, G. Zhu, G. Li, H. Shi, Y. Zuo, W. Chen, Z. Li†, K. Yang†. Resource-Efficient Affordance Grounding with Complementary Depth and Semantic Prompts. In IROS, 2025 [PDF]
[27] Z. Wang, Y. Li, L. Xu, H. Shi, Z. Ma, Z. Chu, C. Li, F. Gao, K. Yang†, K. Wang†. SF-TIM: A Simple Framework for Enhancing Quadrupedal Robot Jumping Agility by Combining Terrain Imagination and Measurement. In IROS, 2025 [PDF]
[28] W. Jia, F. Yang, M. Duan, X. Chen, Y. Wang, Y. Jiang, W. Chen, K. Yang†, Z. Li†. One-Shot Affordance Grounding of Deformable Objects in Egocentric Organizing Scenes. In IROS, 2025 [PDF]
[29] K. Yang†, L.M. Bergasa, E. Romera, R. Cheng, T. Chen, K. Wang. Unifying Terrain Awareness through Real-Time Semantic Segmentation. Main Publication in Google Scholar Metrics in IV, 2018 [PDF]
[30] A. Jaus, K. Yang†, R. Stiefelhagen. Panoramic Panoptic Segmentation: Towards Complete Surrounding Understanding via Unsupervised Contrastive Learning. Best Paper Award at IV 2021 [PDF]
[31] E. Romera, L.M. Bergasa, K. Yang, J.M. Alvarez, R. Barea. Bridging the Day and Night Domain Gap for Semantic Segmentation. Main Publication in Google Scholar Metrics in IV, 2019 [PDF]
[32] K. Yang†, L.M. Bergasa, E. Romera, X. Huang, K. Wang. Predicting Polarization beyond Semantics for Wearable Robotics. In Humanoids, 2018 [PDF]
[33] J. Feng, K. Yang, J. Lin, G. Yang. A Survey on Motion Segmentation Techniques for Visual SLAM. Acta Automatica Sinica, 2026 [PDF]
Patents:
[1] K. Yang, K. Luo, S. Wu, F. Teng, M. Duan, C. Huang, Y. Wang, Z. Li. A visual multi-object tracking method based on panoramic images. Granted. Patent No. ZL202511146427.7.
[2] K. Yang, S. Wu, F. Teng, K. Luo. A panoramic street-view image generation method and system. Granted. Patent No. ZL202510323125.6.
[3] K. Yang, Z. Yang, K. Luo, F. Teng. A 3DGS-based panoramic reconstruction method, electronic device, and storage medium. Granted. Patent No. ZL202510305523.5.
[4] K. Yang, Y. Zhang, K. Luo, M. Duan, W. Yao. A 3D semantic occupancy prediction method for parking-slot occupancy detection. Granted. Patent No. ZL202510287278.X.
[5] K. Yang, K. Lin, F. Teng, K. Luo, W. Yao. A panoramic dynamic NeRF reconstruction and rendering method. Granted. Patent No. ZL202510291639.8.
[6] K. Yang, C. Wang, F. Teng, S. Li, Z. Li. A map topology reasoning method fusing SD maps. Granted. Patent No. ZL202510286364.9.
[7] K. Yang, Y. Huang, F. Teng, Z. Li. A kitchen-scene tool understanding method and system based on an affordance segmentation network. Granted. Patent No. ZL202411324813.6.
[8] K. Yang, J. Liu, H. Zhang, Z. Li. A light-field camera semantic segmentation method, system, and electronic device. Granted. Patent No. ZL202411008638.X.
[9] K. Yang, X. Hu, D. Sun, H. Li. A continuity-aware segmentation method for panoramic images. Granted. Patent No. ZL202010198068.0.
[10] K. Yang, K. Wang, R. Cheng. A single-camera polarization information prediction method. Granted. Patent No. ZL201810534076.0.
[11] K. Yang, K. Wang, H. Yu, W. Hu. Smart assistive glasses for blind people. Granted; the technology secured a Pre-A financing round worth tens of millions of RMB. Patent No. ZL201610590755.0.
[12] K. Yang, K. Wang, R. Cheng, H. Chen. A sound-encoding interaction system based on an RGB-IR camera. Transferred (transfer fee: RMB 600,000). Patent No. ZL201610018944.0.
[13] K. Yang, K. Wang, C. Wang. An intelligent vehicle reversing assistance system and method. Granted. Patent No. ZL201510186028.3.
Honors and Awards:
[1] Stanford World's Top 2% Scientists, 2020, 2023, 2024, 2025.
[2] ScholarGPS Top 0.05% Scholar worldwide, 2025.
[3] Huawei Young Scholar, 2025.
[4] IEEE TIM Outstanding Reviewer of 2024, 2025.01.
[5] Excellent Advisor of Undergraduate Theses (Projects), Class of 2024, Hunan University, 2024.06.
[6] IEEE ICRA 2024 Finalist for Best Paper Award on Human-Robot Interaction, 2024.04.
[7] ACCV 2022 Outstanding Reviewer, 2022.12.
[8] ECCV 2022 Outstanding Reviewer, 2022.10.
[9] IEEE Intelligent Vehicles Symposium (IV) 2021 Best Paper Award, 2021.07.
[10] National Scholarship for Ph.D. Students, 2018.12.
[11] ICFIP 2018 Best Presentation Award, 2018.03.
[12] Champion of the 3rd "Chuang Qingchun" China Youth Internet Entrepreneurship Competition, 2017.08.
[13] Champion and Best Player, Graduation Cup Football Tournament, School of Optics and Photonics, Beijing Institute of Technology, 2014.06.