References
191 references
[1] Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Harada, K., Yokoi, K., & Hirukawa, H. (2003). Biped walking pattern generation by using preview control of zero-moment point. Proc. IEEE ICRA. doi:10.1109/ROBOT.2003.1241826.
[2] Pratt, J., Carff, J., Drakunov, S., & Goswami, A. (2006). Capture point: A step toward humanoid push recovery. Proc. IEEE-RAS Humanoids. doi:10.1109/ICHR.2006.321385.
[3] Pratt, J., Koolen, T., de Boer, T., Rebula, J., Cotton, S., Carff, J., Johnson, M., & Neuhaus, P. (2012). Capturability-based analysis and control of legged locomotion, Part 2: Application to M2V2, a lower-body humanoid. International Journal of Robotics Research.
[4] Feng, S., Whitman, E., Xinjilefu, X., & Atkeson, C. G. (2014). Optimization-based full body control for the DARPA Robotics Challenge. Journal of Field Robotics. doi:10.1002/rob.21559.
[5] Radosavovic, I., Xiao, T., Zhang, B., Darrell, T., Malik, J., & Sreenath, K. (2024). Real-world humanoid locomotion with reinforcement learning. Science Robotics. doi:10.1126/scirobotics.adi9579. arXiv:2303.03381.
[6] Kajita, S., Hirukawa, H., & Harada, K. (2014). Introduction to Humanoid Robotics. Springer. doi:10.1007/978-3-642-54536-8.
[7] Westervelt, E. R., Grizzle, J. W., & Chevallereau, C. (2007). Feedback Control of Dynamic Bipedal Robot Locomotion. CRC Press.
[8] Reher, J., & Ames, A. D. (2021). Algorithmic foundations of dynamic bipedal robots with an emphasis on underactuated locomotion. Annual Review of Control, Robotics, and Autonomous Systems. doi:10.1146/annurev-control-071020-032422.
[9] Koenemann, J., Del Prete, A., & Tassa, Y. (2015). A whole-body model predictive control framework for humanoid robots. Proc. IEEE/RSJ IROS. doi:10.1109/IROS.2015.7353596.
[10] Kober, J., Bagnell, J. A., & Peters, J. (2013). Reinforcement learning in robotics: A survey. International Journal of Robotics Research. doi:10.1177/0278364913495721.
[11] Wensing, P. M., Posa, M., & Hu, Y. (2024). Optimization-based control for dynamic legged robots. IEEE Transactions on Robotics. arXiv:2211.11644.
[12] Gu, Z., Li, J., & Shen, W. (2025). Humanoid locomotion and manipulation: Current progress and challenges in control, planning, and learning. arXiv preprint 2501.02116.
[13] Peng, X. B., Abbeel, P., Levine, S., & van de Panne, M. (2018). DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics 37(4). arXiv:1804.02717.
[14] Hurst, J. W. (2019). Cassie bipedal robot and the ATRIAS lineage. Agility Robotics / Oregon State University technical report.
[15] Hwangbo, J., Lee, J., Dosovitskiy, A., Bellicoso, D., Tsounis, V., Koltun, V., & Hutter, M. (2019). Learning agile and dynamic motor skills for legged robots. Science Robotics 4(26). doi:10.1126/scirobotics.aau5872. arXiv:1901.08652.
[16] Siekmann, J., Godse, Y., Fern, A., & Hurst, J. (2021). Blind bipedal stair traversal via sim-to-real reinforcement learning. Proc. RSS. arXiv:2105.08328.
[17] Agility Robotics. (2025). Motor Cortex: An always-on safety layer for Digit. Agility Robotics technical announcement. https://agilityrobotics.com
[18] Boston Dynamics & RAI Institute. (2025). Electric Atlas reinforcement learning pipeline. Boston Dynamics–RAI Institute partnership announcement, February 2025. https://bostondynamics.com
[19] Figure AI. (2026). Helix 02: Fully-onboard VLA with System 0. Figure AI announcement, January/February 2026. https://figure.ai
[20] Wensing, P. M., Wang, A., Seok, S., Otten, D., Lang, J., & Kim, S. (2017). Proprioceptive actuator design in the MIT Cheetah: Impact mitigation and high-bandwidth physical interaction for dynamic legged robots. IEEE T-RO. (Discussed in detail in Chapter 4.)
[21] Katz, B., Di Carlo, J., & Kim, S. (2019). Mini Cheetah: A platform for pushing the limits of dynamic quadruped control. Proc. IEEE ICRA.
[22] Seok, S., et al. (2013). Design principles for highly efficient quadrupeds and implementation on the MIT Cheetah robot. Proc. IEEE ICRA.
[23] Makoviychuk, V., et al. (2021). Isaac Gym: High performance GPU-based physics simulation for robot learning. NeurIPS.
[24] Rudin, N., Hoeller, D., Reist, P., & Hutter, M. (2021). Learning to walk in minutes using massively parallel deep reinforcement learning. Proc. CoRL.
[25] Mittal, M., et al. (2023). Orbit: A unified simulation framework for interactive robot learning environments. (Now Isaac Lab.)
[26] Zakka, K., et al. (2025). MuJoCo Playground: A unified platform for robot learning.
[27] Hwangbo, J., et al. (2019). Learning agile and dynamic motor skills for legged robots. Science Robotics.
[28] Lee, J., Hwangbo, J., Wellhausen, L., Koltun, V., & Hutter, M. (2020). Learning quadrupedal locomotion over challenging terrain. Science Robotics.
[29] Kumar, A., Fu, Z., Pathak, D., & Malik, J. (2021). RMA: Rapid motor adaptation for legged robots. Proc. RSS.
[30] Radosavovic, I. (2024). From catalysts to convergence: A paradigm shift in humanoid robotics. UC Berkeley EECS PhD dissertation and public lectures.
[31] Tobin, J., et al. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. Proc. IROS.
[32] He, T., et al. (2025). ASAP: Aligning simulation and real-world physics for learning agile humanoid whole-body skills.
[33] Tang, C., Abbatematteo, B., & Hu, J. (2025). Deep reinforcement learning for robotics: A survey of real-world successes. Annual Review of Control, Robotics, and Autonomous Systems. doi:10.1146/annurev-control-030323-022510. arXiv:2408.03539.
[34] Schwarke, C., Klemm, V., & Tordesillas, J. (2024). Learning quadrupedal locomotion via differentiable simulation. Proc. CoRL. arXiv:2403.14864.
[35] Chi, C., et al. (2023). Diffusion policy: Visuomotor policy learning via action diffusion. Proc. RSS.
[36] Figure AI. (2025). Helix: A vision-language-action model for generalist humanoid control. Figure AI tech blog, February 2025. https://figure.ai
[37] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform. arXiv preprint.
[38] AgiBot. (2026). GO-2: Asynchronous dual-system humanoid control. ACL 2026.
[39] Agility Robotics. (2025). Motor Cortex: An always-on safety layer for Digit. https://agilityrobotics.com
[40] Unitree Robotics. (2024). Unitree G1 humanoid platform and `unitree_rl_gym`. Unitree product release.
[41] Yang, H., et al. (2025). OmniRetarget: Interaction-preserving data generation for humanoid whole-body loco-manipulation and scene interaction. arXiv preprint 2509.26633.
[42] Li, K., et al. (2025). ManipTrans: Efficient dexterous bimanual manipulation transfer via residual learning. Proc. CVPR. arXiv:2503.21860.
[43] Seok, S., Wang, A., & Otten, D. (2013). Design principles for highly efficient quadrupeds and implementation on the MIT Cheetah robot. Proc. IEEE ICRA. doi:10.1109/ICRA.2013.6631038.
[44] Paine, N., Oh, S., & Sentis, L. (2014). Design and control considerations for high-performance series elastic actuators. IEEE/ASME Transactions on Mechatronics. doi:10.1109/TMECH.2013.2264338.
[45] Kim, H.-S., Kim, Y.-J., & NAVER LABS. (2017). AMBIDEX: Cable-driven dual-arm manipulator from NAVER LABS. NAVER LABS / KOREATECH joint research.
[46] Kim Lab, MIT Biomimetics Robotics. (2017). Impact Mitigation Factor (IMF) and MIT Cost-of-Bandwidth-per-Ampere (CBA) for legged design.
[47] Unitree Robotics. (2024). Unitree G1 humanoid platform and `unitree_rl_gym`. Unitree product release.
[48] Unitree Robotics. (2023–2024). Unitree H1 humanoid platform specifications.
[49] Boston Dynamics. (2024). Electric Atlas: 56-DoF all-electric humanoid. Boston Dynamics announcement, April 2024.
[50] Liao, Q., Zhang, B., & Huang, X. (2024). Berkeley Humanoid: A research platform for learning-based control. arXiv preprint 2407.21781.
[51] Fourier Intelligence. (2024). Fourier GR-1 / GR-2 humanoid platform.
[52] Shi, H., Wang, W., & Song, S. (2025). ToddlerBot: Open-source ML-compatible humanoid platform for loco-manipulation. arXiv preprint 2502.00893.
[53] Gu, X., Zhang, Y., Wu, K., et al. (2024). Humanoid-Gym: Reinforcement learning for humanoid robot with zero-shot sim2real transfer. arXiv preprint 2404.05695.
[54] Sferrazza, C., Huang, D.-M., Lin, X., Lee, Y., & Abbeel, P. (2024). HumanoidBench: Simulated humanoid benchmark for whole-body locomotion and manipulation. Proc. RSS. arXiv:2403.10506.
[55] Wang, Y., et al. (2025). Booster Gym: An end-to-end reinforcement learning framework for humanoid robot locomotion. arXiv preprint 2506.15132.
[56] Zakka, K., et al. (2025). MuJoCo Playground: An open-source framework for GPU-accelerated robot learning and sim-to-real. arXiv preprint 2502.08844.
[57] Seo, S., et al. (2025). Learning sim-to-real humanoid locomotion in 15 minutes. arXiv preprint 2512.01996.
[58] AgiBot. (2026). Genie Sim 3.0: A high-fidelity comprehensive simulation platform for humanoid robot. arXiv preprint 2601.02078.
[59] Genesis Team. (2024). Genesis: A generative and universal physics engine for robotics and beyond. Open-source release, December 2024.
[60] Fujimoto, S., van Hoof, H., & Meger, D. (2018). Addressing function approximation error in actor-critic methods. Proc. ICML. arXiv:1802.09477.
[61] Akkaya, I., Andrychowicz, M., et al. (2019). Solving Rubik's cube with a robot hand. arXiv preprint.
[62] Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G., & Black, M. J. (2019). AMASS: Archive of motion capture as surface shapes. Proc. ICCV. arXiv:1904.03278.
[63] Siekmann, J., Green, K., Warila, J., Fern, A., & Hurst, J. (2020). Learning memory-based control for human-scale bipedal locomotion. Proc. RSS. arXiv:2006.02402.
[64] Harvey, F. G., Yurick, M., Nowrouzezahrai, D., & Pal, C. (2020). Robust motion in-betweening (LAFAN1 dataset). ACM TOG / SIGGRAPH.
[65] Dao, J., Duan, H., Apgar, T., & Hurst, J. (2022). Sim-to-real learning of all common bipedal gaits via periodic reward composition. Proc. IEEE ICRA. arXiv:2011.01387.
[66] Radosavovic, I., et al. (2024). Humanoid locomotion as next token prediction. NeurIPS. arXiv:2402.19469.
[67] Cheng, X., et al. (2024). Expressive whole-body control for humanoid robots. Proc. RSS. arXiv:2402.16796.
[68] Fu, Z., et al. (2024). HumanPlus: Humanoid shadowing and imitation from humans. Proc. CoRL. arXiv:2406.10454.
[69] He, T., et al. (2024). Learning human-to-humanoid real-time whole-body teleoperation (H2O). Proc. IEEE/RSJ IROS. arXiv:2403.04436.
[70] He, T., et al. (2024). OmniH2O: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. Proc. CoRL. arXiv:2406.08858.
[71] Zhuang, Z., et al. (2024). Humanoid parkour learning. Proc. CoRL. arXiv:2406.10759.
[72] He, T., et al. (2024). HOVER: Versatile neural whole-body controller for humanoid robots. Proc. IEEE ICRA 2025. arXiv:2410.21229.
[73] Luo, Z., et al. (2024). Universal humanoid motion representations for physics-based control (PHC/PULSE). Proc. ICLR. arXiv:2310.04582.
[74] He, T., et al. (2025). Learning getting-up policies for real-world humanoid robots. arXiv preprint 2502.12152.
[75] Seo, H., et al. (2025). FastTD3: Simple, fast, and capable reinforcement learning for humanoid control. arXiv preprint 2505.22642.
[76] Ze, Y., et al. (2025). TWIST: Teleoperated whole-body imitation system. arXiv preprint 2505.02833.
[77] Wang, X., et al. (2025). From experts to a generalist: Toward general whole-body control for humanoid robots. arXiv preprint 2506.12779.
[78] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint 1707.06347.
[79] Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., & Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. Proc. IROS. arXiv:1703.06907.
[80] OpenAI et al. (2019). Learning dexterous in-hand manipulation. IJRR. arXiv:1808.00177.
[81] Handa, A., et al. (2023). DeXtreme: Transfer of agile in-hand manipulation from simulation to reality. Proc. IEEE ICRA. arXiv:2210.13702.
[82] He, T., et al. (2024). Agile but safe (ABS): Learning collision-free high-speed legged locomotion. Proc. RSS. arXiv:2401.17583.
[83] Radosavovic, I., et al. (2024). Real-world humanoid locomotion with reinforcement learning. Science Robotics. arXiv:2303.03381.
[84] Gu, X., et al. (2025). Advancing humanoid locomotion: Mastering challenging terrains with denoising world model learning. arXiv preprint 2408.14472.
[85] Yao, X., et al. (2025). Real2Sim2Real: Self-supervised system identification for zero-shot humanoid deployment. arXiv preprint 2506.12769.
[86] Vaswani, A., et al. (2017). Attention is all you need. Proc. NeurIPS.
[87] Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. Proc. ICML.
[88] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Proc. NeurIPS.
[89] Kroemer, O., Niekum, S., & Konidaris, G. (2021). A review of robot learning for manipulation: Challenges, representations, and algorithms. JMLR.
[90] Pang, B., et al. (2021). Convergence analysis of soft-actor-critic + entropy bonus in continuous control.
[91] Yarats, D., Fergus, R., Lazaric, A., & Pinto, L. (2022). Mastering visual continuous control: Improved data-augmented reinforcement learning (DrQ-v2).
[92] Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., & Le, M. (2022). Flow matching for generative modeling. Proc. ICLR.
[93] Zhao, T. Z., Kumar, V., Levine, S., & Finn, C. (2023). Learning fine-grained bimanual manipulation with low-cost hardware (ACT). Proc. RSS.
[94] Kim, M. J., et al. (2024). OpenVLA: An open-source vision-language-action model. arXiv preprint.
[95] Black, K., et al. (2024). π0: A vision-language-action flow model for general robot control. arXiv preprint.
[96] Wolf, R., Shi, Y., Liu, S., & Rayyes, R. (2025). Diffusion models for robotic manipulation: A survey.
[97] Bjorck, J., et al. (2025). GR00T N1: An open foundation model for generalist humanoid robots. NVIDIA technical report and arXiv preprint.
[98] NVIDIA. (2025). GR00T N1.5: Improved foundation model for generalist humanoid robots. NVIDIA technical announcement.
[99] Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
[100] Vaswani, A., et al. (2017). Attention is all you need. Proc. NeurIPS. arXiv:1706.03762.
[101] Lipman, Y., et al. (2022). Flow matching for generative modeling. Proc. ICLR.
[102] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system. Figure AI announcement, January/February 2026.
[103] Bjorck, J., et al. (2025). GR00T N1: Open foundation model for generalist humanoid robots. NVIDIA technical report and arXiv preprint.
[104] AgiBot. (2026). GO-2 asynchronous dual-system humanoid control architecture.
[105] Agility Robotics. (2025). Motor Cortex: Whole-body control foundation model for Digit.
[106] Cheng, X., et al. (2025). From experts to a generalist: Toward general whole-body control for humanoid robots. arXiv preprint 2506.12779.
[107] Yuan, M., et al. (2025). A survey of behavior foundation model: Next-generation whole-body control system of humanoid robots. arXiv preprint 2506.20487.
[108] Billard, A., & Kragic, D. (2019). Trends and challenges in robot manipulation. Science.
[109] Padalkar, A., et al. (2023). Open X-Embodiment: Robotic learning datasets and RT-X models. arXiv preprint.
[110] Brohan, A., et al. (2023). RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint.
[111] Grandia, R., et al. (2024). Perceptive pedipulation with local obstacle avoidance.
[112] Weinberg, A., et al. (2024). Survey of learning-based approaches for robotic in-hand manipulation.
[113] Tedrake, R. (2024). Toyota Research Institute Large Behavior Models for robot manipulation. TRI presentations and partial releases.
[114] Liu, S., et al. (2024). Visual whole-body control for legged loco-manipulation.
[115] Wang, A., et al. (2024). Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation.
[116] Ha, H., et al. (2024). Universal manipulation interface: In-the-wild robot teaching without in-the-wild data (UMI).
[117] Physical Intelligence. (2025). π0.5: A vision-language-action model with open-world generalization.
[118] Black, K., et al. (2025). π0-FAST: Autoregressive π0 with FAST tokenizer.
[119] Pertsch, K., et al. (2025). FAST: Efficient action tokenization for vision-language-action models.
[120] Shukor, M., et al. (2025). SmolVLA: A vision-language-action model for affordable and efficient robotics.
[121] Yang, H., et al. (2025). Humanoid-VLA: Generalist VLA with dynamic whole-body control.
[122] Ze, Y., et al. (2025). TWIST: Teleoperated whole-body imitation system.
[123] Zhang, J., et al. (2025). FALCON: Learning force-adaptive humanoid loco-manipulation.
[124] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform.
[125] AgiBot. (2025). AgiBot GO-1: ViLLA generalist embodied foundation model.
[126] AgiBot. (2026). AgiBot GO-2: Asynchronous dual-system architecture.
[127] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system.
[128] Di Carlo, J., Wensing, P. M., Katz, B., Bledt, G., & Kim, S. (2018). Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. Proc. IEEE/RSJ IROS.
[129] Nelson, G., Saunders, A., & Raibert, M. (2012/2019). Atlas: The hydraulic humanoid for the DARPA Robotics Challenge. Annual Review of Control, Robotics, and Autonomous Systems (retrospective).
[130] Boston Dynamics. (2024). Electric Atlas: 56-DoF all-electric humanoid. Boston Dynamics announcement, April 2024.
[131] Boston Dynamics. (2024). All New Atlas: Electric Atlas reveal video. April 17, 2024.
[132] Boston Dynamics. (2020+). Spot and its commercial lessons for Atlas. Product pipeline.
[133] Xie, K., et al. (2024). Hierarchical reinforcement learning for humanoid whole-body control. arXiv preprint 2412.10210.
[134] Boston Dynamics and RAI Institute. (2025). Partnership for Atlas reinforcement learning. February 2025.
[135] Hyundai Motor Group. (2025). HD Hyundai Robotics and Hyundai–Boston Dynamics ecosystem.
[136] Hyundai Motor Group. (2026). Hyundai Metaplant Georgia and Atlas deployment. January 2026.
[137] Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., & Schaal, S. (2013). Optimal distribution of contact forces with inverse-dynamics control. International Journal of Robotics Research.
[138] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system. (Cited for contrast.)
[139] Hurst, J. W. (2019). Cassie bipedal robot and the ATRIAS lineage. Agility Robotics / Oregon State University.
[140] Figure AI. (2024). Figure 02 humanoid and the OpenAI partnership. Figure AI announcement, August 2024.
[141] Agility Robotics & GXO. (2024). Digit × GXO: First commercial humanoid Robots-as-a-Service deployment. Agility/GXO press releases, June 2024 onward.
[142] Tesla. (2024). Optimus Gen 2 / Gen 3 engineering reveals. Tesla demonstrations, December 2023 and subsequent AI Day events.
[143] 1X Technologies. (2024). Neo Gamma humanoid and NVIDIA GR00T integration. 1X public demonstrations, 2024–2025.
[144] Sanctuary AI. (2024). Phoenix Gen 7 humanoid with hydraulic hands and Carbon AI. Sanctuary AI product announcement, 2024.
[145] Figure AI. (2025). Helix: A vision-language-action model for generalist humanoid control. Figure AI tech blog, February 2025.
[146] Figure AI. (2025). Accelerating real-world logistics with Helix. Figure AI tech blog, 2025.
[147] Agility Robotics. (2025). Motor Cortex: A whole-body control foundation model for Digit. Agility Robotics announcement, August 2025.
[148] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system. Figure AI announcement, January/February 2026.
[149] Unitree Robotics. (2024). Unitree H1 humanoid platform specifications. Unitree product pages and public technical materials, 2023–2024.
[150] Unitree Robotics. (2024). Unitree G1 humanoid platform and the `unitree_rl_gym` open-source training framework. Unitree product launch, June 2024.
[151] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform for scalable intelligent embodied systems. AgiBot technical announcement, 2025.
[152] AgiBot. (2026). Genie Sim 3.0: A high-fidelity comprehensive simulation platform for humanoid robot learning. AgiBot technical announcement, 2026.
[153] AgiBot. (2025). AgiBot GO-1: ViLLA generalist embodied foundation model. AgiBot technical announcement, March 2025.
[154] AgiBot. (2026). AgiBot GO-2: Asynchronous dual-system humanoid control architecture. AgiBot technical announcement, 2026.
[155] Bjorck, J., et al. (2025). GR00T N1: An open foundation model for generalist humanoid robots. NVIDIA technical report and arXiv preprint 2503.14734.
[156] Fourier Intelligence. (2024). Fourier GR-1 / GR-2 humanoid platforms. Fourier product announcements, 2023–2024.
[157] Xiaomi. (2023). Xiaomi CyberOne humanoid platform. Xiaomi announcement, August 2022, with follow-up updates through 2023.
[158] Zhang, Y., et al. (2025). FALCON: Learning force-adaptive humanoid loco-manipulation.
[159] Won, Y. S. (2025). Analysis of the humanoid-centered Korean AI robot ecosystem. Electronics and Telecommunications Trends 40(6), 102–116.
[160] Oh, H. J. (2024). The evolution of humanoid robots and future challenges. Electronics and Telecommunications Trends 214.
[161] MOTIE. (2025). K-Humanoid Alliance launch announcement. Ministry of Trade, Industry and Energy.
[162] KIET. (2025). Humanoid robotics in manufacturing: A strategic outlook. Korea Institute for Industrial Economics and Trade white paper.
[163] SNU. (2025). Seoul National University Physical AI program and humanoid initiatives.
[164] POSTECH. (2025). POSTECH and the K-Humanoid Alliance academic track.
[165] Kim, J., et al. (2024). KAIST humanoid: 12 km/h bipedal walking in the HuboLab lineage.
[166] Hyundai Motor Group. (2025). HD Hyundai Robotics and the Hyundai–Boston Dynamics ecosystem.
[167] Hyundai Motor Group. (2026). Hyundai Metaplant Georgia and Atlas deployment. Hyundai announcement, January 2026.
[168] LG Electronics. (2025). LG Electronics robotics strategy and K-Humanoid participation.
[169] Doosan Robotics. (2024). Doosan Robotics: M-series and humanoid plans.
[170] Rainbow Robotics. (2025). Rainbow Robotics RB-Y1 and the HUBO lineage.
[171] Kim, Y. J., et al. (2017). AMBIDEX: NAVER LABS' cable-driven dual-arm manipulator.
[172] Rebellions. (2025). Rebellions ATOM: A Korean AI chip for robot inference.
[173] DEEPX. (2024). DEEPX: Korean edge AI chips for robotics.
[174] SK On, LG Energy Solution, & Samsung SDI. (2025). Korean battery makers' humanoid programs.
[175] Lee, J., et al. (2020). Learning quadrupedal locomotion over challenging terrain. Science Robotics. arXiv:2010.11251.
[176] World Economic Forum. (2025). Physical AI: Powering a new era of industrial operations. WEF white paper.
[177] MOTIE. (2025). M.AX (Manufacturing AI Transformation) Alliance launch. Ministry of Trade, Industry and Energy, December 2025.
[178] MOTIE. (2025). K-Humanoid Alliance launch announcement.
[179] KIET. (2025). Humanoid robotics in manufacturing: A strategic outlook. Korea Institute for Industrial Economics and Trade.
[180] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform.
[181] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system.
[182] Shukor, M., et al. (2025). SmolVLA: A vision-language-action model for affordable and efficient robotics.
[183] Physical Intelligence. (2025). π0.5: A vision-language-action model with open-world generalization.
[184] MOTIE. (2025). M.AX (Manufacturing AI Transformation) Alliance launch.
[185] Hyundai Motor Group. (2026). Hyundai Metaplant Georgia and Atlas deployment.
[186] Agility Robotics & GXO. (2024). Digit × GXO: First commercial humanoid Robots-as-a-Service deployment.
[187] ABI Research. (2025). Humanoid robot shipment projections, 2025–2032.
[188] Fourier X. (2025). Humanoid market diffusion path: From industry to services to homes.
[189] Morgan Stanley. (2024). Humanoid robots: A $3 trillion market opportunity.
[190] Blankemeyer, G. (2025). Humanoid robot labor economics and job-displacement outlook.
[191] Agility Robotics. (2025). Motor Cortex: A whole-body control foundation model for Digit.