Paper Reviews
Korean translations of papers, background knowledge, and related notes.

Flow Matching for Generative Modeling
Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matt Le
https://arxiv.org/abs/2210.02747
"We introduce a new paradigm for generative modeling built on Continuous Normalizing Flows (CNFs), allowing us to train CNFs at unprecedented scale. Specifically, we present the notion of Flow Matching (FM), .."
2025.03.31
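Flow Matching regresses a vector field onto the velocity of a simple probability path between noise and data. A minimal numpy sketch of the conditional FM training target, assuming the linear (optimal-transport) path from the paper; the 2-D toy data and batch size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: x0 are noise samples, x1 are "data" samples (2-D)
x0 = rng.standard_normal((64, 2))
x1 = rng.standard_normal((64, 2)) + 5.0

# Linear probability path: x_t = (1 - t) * x0 + t * x1
t = rng.uniform(size=(64, 1))
x_t = (1.0 - t) * x0 + t * x1

# Regression target for the learned vector field: u_t = x1 - x0
u_t = x1 - x0

# A model v(t, x_t) would be trained by MSE against u_t
def fm_loss(v_pred):
    return np.mean((v_pred - u_t) ** 2)
```

A perfect predictor drives the loss to zero: `fm_loss(u_t)` returns `0.0`.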
Planning with Diffusion for Flexible Behavior Synthesis
Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine
https://arxiv.org/abs/2205.09991
"Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classic.."
2025.03.19
Diffusion Policy: Visuomotor Policy Learning via Action Diffusion
Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, Shuran Song
https://diffusion-policy.cs.columbia.edu/
"This paper introduces Diffusion Policy, a new way of generating robot behavior by representing a robot's visuomotor policy as.."
2025.03.19
LDP: A Local Diffusion Planner for Efficient Robot Navigation and Collision Avoidance
Wenhao Yu, Jie Peng, Huanyu Yang, Junrui Zhang, Yifan Duan, Jianmin Ji, Yanyong Zhang
https://arxiv.org/abs/2407.01950
"The conditional diffusion model has been demonstrated as an efficient tool for learning robo.."
2025.03.17
FiLM: Visual Reasoning with a General Conditioning Layer
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville
https://arxiv.org/abs/1709.07871
"We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation .."
2025.03.13
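A FiLM layer conditions a network by applying a per-channel scale (gamma) and shift (beta), both predicted from the conditioning input. A minimal numpy sketch; the feature-map shapes and constant gamma/beta values are invented for illustration (in practice they come from a conditioning network):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: per-channel scale and shift."""
    # features: (batch, channels, ...); gamma, beta: (batch, channels)
    while gamma.ndim < features.ndim:
        gamma = gamma[..., None]   # append singleton dims for broadcasting
        beta = beta[..., None]
    return gamma * features + beta

x = np.ones((2, 3, 4, 4))          # e.g. a conv feature map
gamma = np.full((2, 3), 2.0)       # per-channel scale
beta = np.full((2, 3), -1.0)       # per-channel shift
y = film(x, gamma, beta)           # every element becomes 2*1 - 1 = 1.0
```

The same function works for 1-D features or conv maps, since the singleton dimensions are added until the shapes broadcast.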
Potential Based Diffusion Motion Planning
Yunhao Luo, Chen Sun, Joshua B. Tenenbaum, Yilun Du
https://energy-based-model.github.io/potential-motion-plan/
"Effective motion planning in high dimensional spaces is a long-standing open problem in robotics. One class of traditional motion planning algorithms corresponds to potential-based motion planning. An .."
2025.02.28
A Lifelong Learning Approach to Mobile Robot Navigation
Bo Liu, Xuesu Xiao, Peter Stone
https://arxiv.org/abs/2007.14486
"This paper presents a self-improving lifelong learning framework for a mobile robot navigating in different environments. Classical static navigation methods require environment-specific in-situ system adjustment, e.g. from .."
2025.02.02
TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation
Junli Ren*, Yikai Liu*, Yingru Dai, Junfeng Long, Guijin Wang†
https://top-nav-legged.github.io/TOP-Nav-Legged-page/
2024 CoRL. Everyone has already done everything I wanted to do first...
https://www.youtube.com/watch?v=CzsE8kEf5lo&ab_channel=YikaiLiu
2025.01.30
Neural Kinodynamic Planning: Learning for KinoDynamic Tree Expansion
Tin Lai, Weiming Zhi, Tucker Hermans, Fabio Ramos
https://ieeexplore.ieee.org/document/10801948
"We integrate neural networks into kinodynamic motion planning and present the Learning for KinoDynamic Tree Expansion (L4KDE) method. Tree-based planning approaches, .."
2025.01.23
SR-LIO: LiDAR-Inertial Odometry with Sweep Reconstruction
Zikang Yuan, Fengtian Lang, Tianle Xu, Xin Yang
https://arxiv.org/abs/2210.10424
"This paper proposes a novel LiDAR-Inertial odometry (LIO), named SR-LIO, based on an iterated extended Kalman filter (iEKF) framework. We adapt the sweep reconstruction method, which segments and .."
2025.01.18
DTG: Diffusion-based Trajectory Generation for Mapless Global Navigation
Jing Liang, Amirreza Payandeh, Daeun Song, Xuesu Xiao, Dinesh Manocha
https://arxiv.org/abs/2403.09900
"We present a novel end-to-end diffusion-based trajectory generation method, DTG, for mapless global navigation in challenging outdoor scenar.."
2024.12.18
DTG: inference
Results after 1 epoch and after 5 epochs. First, let's check what is inside the pkl files provided as the dataset.

Train Dataset: raw sensor data for model training
- pose: the robot's current position/orientation
- vel: velocity data (sequence of 50)
- imu: IMU sensor data (sequence of 20)
- camera: camera image
- lidar: 3D LiDAR data
- lidar2d: 2D LiDAR scan data
- targets: goal positions (100)
- trajectories: ground-truth trajectory
- local_map: local map data

The data is a dictionary; inspecting it shows, e.g., pose with shape (3, 4), whose first element has shape (4,), vel..
2024.12.09
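The dictionary inspection above can be sketched as follows. This is a minimal example, not the DTG codebase: the sample is a toy stand-in built in memory, and the vel/imu channel counts are assumptions; only the keys and the pose shape come from the post.

```python
import io
import pickle
import numpy as np

# Toy stand-in for one training sample (real .pkl files hold keys like
# pose, vel, imu, camera, lidar, lidar2d, targets, trajectories, local_map)
sample = {
    "pose": np.zeros((3, 4)),   # robot pose as a 3x4 transform
    "vel": np.zeros((50, 2)),   # 50-step velocity sequence (2 channels assumed)
    "imu": np.zeros((20, 6)),   # 20-step IMU sequence (6 channels assumed)
}

# Round-trip through pickle in memory, then print each key's type and shape
buf = io.BytesIO()
pickle.dump(sample, buf)
buf.seek(0)
data = pickle.load(buf)
for key, value in data.items():
    print(f"{key}: type={type(value).__name__}, shape={value.shape}")
```

Pointing the same loop at a real dataset file (replacing the in-memory buffer with `open(path, "rb")`) reproduces the key/shape dump shown in the post.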