Consolidated References

179 references

[1] Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Harada, K., Yokoi, K., & Hirukawa, H. (2003). Biped walking pattern generation by using preview control of zero-moment point. Proc. IEEE ICRA. doi:10.1109/ROBOT.2003.1241826.
[2] Pratt, J., Carff, J., Drakunov, S., & Goswami, A. (2006). Capture point: A step toward humanoid push recovery. Proc. IEEE-RAS Humanoids. doi:10.1109/ICHR.2006.321385.
[3] Pratt, J., Koolen, T., de Boer, T., Rebula, J., Cotton, S., Carff, J., Johnson, M., & Neuhaus, P. (2012). Capturability-based analysis and control of legged locomotion, Part 2: Application to M2V2, a lower-body humanoid. International Journal of Robotics Research.
[4] Feng, S., Whitman, E., Xinjilefu, X., & Atkeson, C. G. (2014). Optimization-based full body control for the DARPA Robotics Challenge. Journal of Field Robotics. doi:10.1002/rob.21559.
[5] Radosavovic, I., Xiao, T., Zhang, B., Darrell, T., Malik, J., & Sreenath, K. (2024). Real-world humanoid locomotion with reinforcement learning. Science Robotics. doi:10.1126/scirobotics.adi9579. arXiv:2303.03381.
[6] Kajita, S., Hirukawa, H., & Harada, K. (2014). Introduction to Humanoid Robotics. Springer. doi:10.1007/978-3-642-54536-8.
[7] Westervelt, E. R., Grizzle, J. W., Chevallereau, C., Choi, J. H., & Morris, B. (2007). Feedback Control of Dynamic Bipedal Robot Locomotion. CRC Press.
[8] Reher, J., & Ames, A. D. (2021). Dynamic walking: Toward agile and efficient bipedal robots. Annual Review of Control, Robotics, and Autonomous Systems. doi:10.1146/annurev-control-071020-032422.
[9] Koenemann, J., Del Prete, A., Tassa, Y., et al. (2015). Whole-body model-predictive control applied to the HRP-2 humanoid. Proc. IEEE/RSJ IROS. doi:10.1109/IROS.2015.7353843.
[10] Kober, J., Bagnell, J. A., & Peters, J. (2013). Reinforcement learning in robotics: A survey. International Journal of Robotics Research. doi:10.1177/0278364913495721.
[11] Wensing, P. M., Posa, M., & Hu, Y. (2024). Optimization-based control for dynamic legged robots. IEEE Transactions on Robotics. arXiv:2211.11644.
[12] Gu, Z., Li, J., & Shen, W. (2025). Humanoid locomotion and manipulation: Current progress and challenges in control, planning, and learning. arXiv:2501.02116.
[13] Peng, X. B., Abbeel, P., Levine, S., & van de Panne, M. (2018). DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics 37(4). arXiv:1804.02717.
[14] Hurst, J. W. (2019). Cassie bipedal robot and the ATRIAS lineage. Agility Robotics / Oregon State University technical report.
[15] Hwangbo, J., Lee, J., Dosovitskiy, A., Bellicoso, D., Tsounis, V., Koltun, V., & Hutter, M. (2019). Learning agile and dynamic motor skills for legged robots. Science Robotics 4(26). doi:10.1126/scirobotics.aau5872. arXiv:1901.08652.
[16] Siekmann, J., Godse, Y., Fern, A., & Hurst, J. (2021). Blind bipedal stair traversal via sim-to-real reinforcement learning. Proc. RSS. arXiv:2105.08328.
[17] Agility Robotics. (2025). Motor Cortex: An always-on safety layer for Digit. Agility Robotics technical announcement. https://agilityrobotics.com
[18] Boston Dynamics & RAI Institute. (2025). Electric Atlas reinforcement learning pipeline. BD–RAI partnership announcement, February 2025. https://bostondynamics.com
[19] Figure AI. (2026). Helix 02: Fully-onboard VLA with System 0. Figure AI announcement, January/February 2026. https://figure.ai
[20] Wensing, P. M., Wang, A., Seok, S., Otten, D., Lang, J., & Kim, S. (2017). Proprioceptive actuator design in the MIT Cheetah: Impact mitigation and high-bandwidth physical interaction for dynamic legged robots. IEEE Transactions on Robotics. (Full details in Chapter 4.)
[21] Katz, B., Di Carlo, J., & Kim, S. (2019). Mini Cheetah: A platform for pushing the limits of dynamic quadruped control. Proc. IEEE ICRA.
[22] Seok, S., et al. (2013). Design principles for highly efficient quadrupeds and implementation on the MIT Cheetah robot. Proc. IEEE ICRA.
[23] Makoviychuk, V., et al. (2021). Isaac Gym: High performance GPU-based physics simulation for robot learning. NeurIPS.
[24] Rudin, N., Hoeller, D., Reist, P., & Hutter, M. (2021). Learning to walk in minutes using massively parallel deep reinforcement learning. Proc. CoRL.
[25] Mittal, M., et al. (2023). Orbit: A unified simulation framework for interactive robot learning environments. IEEE Robotics and Automation Letters. (Now Isaac Lab.)
[26] Zakka, K., et al. (2025). MuJoCo Playground: A unified platform for robot learning.
[27] Hwangbo, J., et al. (2019). Learning agile and dynamic motor skills for legged robots. Science Robotics.
[28] Lee, J., Hwangbo, J., Wellhausen, L., Koltun, V., & Hutter, M. (2020). Learning quadrupedal locomotion over challenging terrain. Science Robotics.
[29] Kumar, A., Fu, Z., Pathak, D., & Malik, J. (2021). RMA: Rapid motor adaptation for legged robots. Proc. RSS.
[30] Radosavovic, I. (2024). From catalysts to convergence: A paradigm shift in humanoid robotics. UC Berkeley EECS dissertation and public talks.
[31] Tobin, J., et al. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. Proc. IROS.
[32] He, T., et al. (2025). ASAP: Aligning simulation and real-world physics for learning agile humanoid whole-body skills.
[33] Tang, C., Abbatematteo, B., & Hu, J. (2025). Deep reinforcement learning for robotics: A survey of real-world successes. Annual Review of Control, Robotics, and Autonomous Systems. doi:10.1146/annurev-control-030323-022510. arXiv:2408.03539.
[34] Schwarke, C., Klemm, V., & Tordesillas, J. (2024). Learning quadrupedal locomotion via differentiable simulation. Proc. CoRL. arXiv:2403.14864.
[35] Chi, C., et al. (2023). Diffusion policy: Visuomotor policy learning via action diffusion. Proc. RSS.
[36] Figure AI. (2025). Helix: A vision-language-action model for generalist humanoid control. Figure AI tech blog, February 2025. https://figure.ai
[37] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform. arXiv preprint.
[38] AgiBot. (2026). GO-2: Asynchronous dual-system humanoid control. ACL 2026.
[39] Agility Robotics. (2025). Motor Cortex: An always-on safety layer for Digit. https://agilityrobotics.com
[40] Unitree Robotics. (2024). Unitree G1 humanoid platform and `unitree_rl_gym`. Unitree product release.
[41] Yang, H., et al. (2025). OmniRetarget: Interaction-preserving data generation for humanoid whole-body loco-manipulation and scene interaction. arXiv:2509.26633.
[42] Li, K., et al. (2025). ManipTrans: Efficient dexterous bimanual manipulation transfer via residual learning. Proc. CVPR. arXiv:2503.21860.
[43] Seok, S., Wang, A., & Otten, D. (2013). Design principles for highly efficient quadrupeds and implementation on the MIT Cheetah robot. Proc. IEEE ICRA. doi:10.1109/ICRA.2013.6631038.
[44] Paine, N., Oh, S., & Sentis, L. (2014). Design and control considerations for high-performance series elastic actuators. IEEE/ASME Transactions on Mechatronics. doi:10.1109/TMECH.2013.2264338.
[45] Kim, Y.-J., et al. (2017). AMBIDEX: Cable-driven dual-arm manipulator. NAVER LABS / KOREATECH joint research.
[46] Kim Lab, MIT Biomimetics Robotics. (2017). Impact Mitigation Factor (IMF) and MIT Cost-of-Bandwidth-per-Ampere (CBA) for legged design.
[47] Unitree Robotics. (2023–2024). Unitree H1 humanoid platform specifications.
[48] Boston Dynamics. (2024). Electric Atlas: 56-DoF all-electric humanoid. Boston Dynamics announcement, April 2024.
[49] Liao, Q., Zhang, B., & Huang, X. (2024). Berkeley Humanoid: A research platform for learning-based control. arXiv:2407.21781.
[50] Fourier Intelligence. (2024). Fourier GR-1 / GR-2 humanoid platform.
[51] Shi, H., Wang, W., & Song, S. (2025). ToddlerBot: Open-source ML-compatible humanoid platform for loco-manipulation. arXiv:2502.00893.
[52] Gu, X., Zhang, Y., Wu, K., et al. (2024). Humanoid-Gym: Reinforcement learning for humanoid robot with zero-shot sim2real transfer. arXiv:2404.05695.
[53] Sferrazza, C., Huang, D.-M., Lin, X., Lee, Y., & Abbeel, P. (2024). HumanoidBench: Simulated humanoid benchmark for whole-body locomotion and manipulation. Proc. RSS. arXiv:2403.10506.
[54] Wang, Y., et al. (2025). Booster Gym: An end-to-end reinforcement learning framework for humanoid robot locomotion. arXiv:2506.15132.
[55] Zakka, K., et al. (2025). MuJoCo Playground: An open-source framework for GPU-accelerated robot learning and sim-to-real. arXiv:2502.08844.
[56] Seo, S., et al. (2025). Learning sim-to-real humanoid locomotion in 15 minutes. arXiv:2512.01996.
[57] AgiBot. (2026). Genie Sim 3.0: A high-fidelity comprehensive simulation platform for humanoid robot. arXiv:2601.02078.
[58] Genesis Team. (2024). Genesis: A generative and universal physics engine for robotics and beyond. Open-source release, December 2024.
[59] Fujimoto, S., van Hoof, H., & Meger, D. (2018). Addressing function approximation error in actor-critic methods. Proc. ICML. arXiv:1802.09477.
[60] Akkaya, I., Andrychowicz, M., et al. (2019). Solving Rubik's cube with a robot hand. arXiv preprint.
[61] Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G., & Black, M. J. (2019). AMASS: Archive of motion capture as surface shapes. Proc. ICCV. arXiv:1904.03278.
[62] Siekmann, J., Green, K., Warila, J., Fern, A., & Hurst, J. (2020). Learning memory-based control for human-scale bipedal locomotion. Proc. RSS. arXiv:2006.02402.
[63] Harvey, F. G., Yurick, M., Nowrouzezahrai, D., & Pal, C. (2020). Robust motion in-betweening (LAFAN1 dataset). ACM TOG / SIGGRAPH.
[64] Siekmann, J., Godse, Y., Fern, A., & Hurst, J. (2021). Sim-to-real learning of all common bipedal gaits via periodic reward composition. Proc. IEEE ICRA. arXiv:2011.01387.
[65] Radosavovic, I., et al. (2024). Humanoid locomotion as next token prediction. NeurIPS. arXiv:2402.19469.
[66] Cheng, X., et al. (2024). Expressive whole-body control for humanoid robots. Proc. RSS. arXiv:2402.16796.
[67] Fu, Z., et al. (2024). HumanPlus: Humanoid shadowing and imitation from humans. Proc. CoRL. arXiv:2406.10454.
[68] He, T., et al. (2024). Learning human-to-humanoid real-time whole-body teleoperation (H2O). Proc. IEEE/RSJ IROS. arXiv:2403.04436.
[69] He, T., et al. (2024). OmniH2O: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. Proc. CoRL. arXiv:2406.08858.
[70] Zhuang, Z., et al. (2024). Humanoid parkour learning. Proc. CoRL. arXiv:2406.10759.
[71] He, T., et al. (2024). HOVER: Versatile neural whole-body controller for humanoid robots. Proc. IEEE ICRA 2025. arXiv:2410.21229.
[72] Luo, Z., et al. (2024). Universal humanoid motion representations for physics-based control (PHC/PULSE). Proc. ICLR. arXiv:2310.04582.
[73] He, T., et al. (2025). Learning getting-up policies for real-world humanoid robots. arXiv:2502.12152.
[74] Seo, Y., et al. (2025). FastTD3: Simple, fast, and capable reinforcement learning for humanoid control. arXiv:2505.22642.
[75] Ze, Y., et al. (2025). TWIST: Teleoperated whole-body imitation system. arXiv:2505.02833.
[76] Wang, X., et al. (2025). From experts to a generalist: Toward general whole-body control for humanoid robots. arXiv:2506.12779.
[77] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv:1707.06347.
[78] Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., & Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. Proc. IROS. arXiv:1703.06907.
[79] OpenAI et al. (2019). Learning dexterous in-hand manipulation. International Journal of Robotics Research. arXiv:1808.00177.
[80] Handa, A., et al. (2023). DeXtreme: Transfer of agile in-hand manipulation from simulation to reality. Proc. IEEE ICRA. arXiv:2210.13702.
[81] He, T., et al. (2024). Agile but safe (ABS): Learning collision-free high-speed legged locomotion. Proc. RSS. arXiv:2401.17583.
[82] Radosavovic, I., et al. (2024). Real-world humanoid locomotion with reinforcement learning. Science Robotics. arXiv:2303.03381.
[83] Gu, X., et al. (2025). Advancing humanoid locomotion: Mastering challenging terrains with denoising world model learning. arXiv:2408.14472.
[84] Yao, X., et al. (2025). Real2Sim2Real: Self-supervised system identification for zero-shot humanoid deployment. arXiv:2506.12769.
[85] Vaswani, A., et al. (2017). Attention is all you need. Proc. NeurIPS.
[86] Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. Proc. ICML.
[87] OpenAI et al. (2019). Learning dexterous in-hand manipulation. IJRR. arXiv:1808.00177.
[88] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Proc. NeurIPS.
[89] Kroemer, O., Niekum, S., & Konidaris, G. (2021). A review of robot learning for manipulation: Challenges, representations, and algorithms. JMLR.
[90] Pang, B., et al. (2021). Convergence analysis of soft actor-critic with entropy bonus in continuous control.
[91] Yarats, D., Fergus, R., Lazaric, A., & Pinto, L. (2022). Mastering visual continuous control: Improved data-augmented reinforcement learning (DrQ-v2).
[92] Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., & Le, M. (2023). Flow matching for generative modeling. Proc. ICLR. arXiv:2210.02747.
[93] Zhao, T. Z., Kumar, V., Levine, S., & Finn, C. (2023). Learning fine-grained bimanual manipulation with low-cost hardware (ACT). Proc. RSS.
[94] Kim, M. J., et al. (2024). OpenVLA: An open-source vision-language-action model. arXiv preprint.
[95] Black, K., et al. (2024). π0: A vision-language-action flow model for general robot control. arXiv preprint.
[96] Wolf, R., Shi, Y., Liu, S., & Rayyes, R. (2025). Diffusion models for robotic manipulation: A survey.
[97] Bjorck, J., et al. (2025). GR00T N1: An open foundation model for generalist humanoid robots. NVIDIA technical report and arXiv preprint.
[98] NVIDIA. (2025). GR00T N1.5: Improved foundation model for generalist humanoid robots. NVIDIA technical announcement.
[99] Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
[100] Vaswani, A., et al. (2017). Attention is all you need. Proc. NeurIPS. arXiv:1706.03762.
[101] Lipman, Y., et al. (2023). Flow matching for generative modeling. Proc. ICLR.
[102] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system. Figure AI announcement, January/February 2026.
[103] Bjorck, J., et al. (2025). GR00T N1: Open foundation model for generalist humanoid robots. NVIDIA technical report and arXiv preprint.
[104] AgiBot. (2026). GO-2 asynchronous dual-system humanoid control architecture.
[105] Agility Robotics. (2025). Motor Cortex: Whole-body control foundation model for Digit.
[106] Cheng, X., et al. (2025). From experts to a generalist: Toward general whole-body control for humanoid robots. arXiv:2506.12779.
[107] Yuan, M., et al. (2025). A survey of behavior foundation model: Next-generation whole-body control system of humanoid robots. arXiv:2506.20487.
[108] Billard, A., & Kragic, D. (2019). Trends and challenges in robot manipulation. Science.
[109] Padalkar, A., et al. (2023). Open X-Embodiment: Robotic learning datasets and RT-X models. arXiv preprint.
[110] Brohan, A., et al. (2023). RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint.
[111] Grandia, R., et al. (2024). Perceptive pedipulation with local obstacle avoidance.
[112] Weinberg, A., et al. (2024). Survey of learning-based approaches for robotic in-hand manipulation.
[113] Tedrake, R. (2024). Toyota Research Institute Large Behavior Models for robot manipulation. TRI presentations and partial releases.
[114] Liu, S., et al. (2024). Visual whole-body control for legged loco-manipulation.
[115] Fu, Z., Zhao, T. Z., & Finn, C. (2024). Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation.
[116] Chi, C., et al. (2024). Universal manipulation interface: In-the-wild robot teaching without in-the-wild data (UMI).
[117] Physical Intelligence. (2025). π0.5: A vision-language-action model with open-world generalization.
[118] Black, K., et al. (2025). π0-FAST: Autoregressive π0 with FAST tokenizer.
[119] Pertsch, K., et al. (2025). FAST: Efficient action tokenization for vision-language-action models.
[120] Shukor, M., et al. (2025). SmolVLA: A vision-language-action model for affordable and efficient robotics.
[121] Yang, H., et al. (2025). Humanoid-VLA: Generalist VLA with dynamic whole-body control.
[122] Ze, Y., et al. (2025). TWIST: Teleoperated whole-body imitation system.
[123] Zhang, Y., et al. (2025). FALCON: Learning force-adaptive humanoid loco-manipulation.
[124] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform.
[125] AgiBot. (2025). AgiBot GO-1: ViLLA generalist embodied foundation model.
[126] AgiBot. (2026). AgiBot GO-2: Asynchronous dual-system architecture.
[127] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system.
[128] Di Carlo, J., Wensing, P. M., Katz, B., Bledt, G., & Kim, S. (2018). Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. Proc. IEEE/RSJ IROS.
[129] Nelson, G., Saunders, A., & Playter, R. (2019). The PETMAN and Atlas robots at Boston Dynamics. In Humanoid Robotics: A Reference. Springer.
[130] Boston Dynamics. (2024). All New Atlas: Electric Atlas reveal video. April 17, 2024.
[131] Boston Dynamics. (2020+). Spot and its commercial lessons for Atlas. Product pipeline.
[132] Xie, K., et al. (2024). Hierarchical reinforcement learning for humanoid whole-body control. arXiv:2412.10210.
[133] Boston Dynamics and RAI Institute. (2025). Partnership for Atlas reinforcement learning. February 2025.
[134] Hyundai Motor Group. (2025). HD Hyundai Robotics and Hyundai–Boston Dynamics ecosystem.
[135] Hyundai Motor Group. (2026). Hyundai Metaplant Georgia and Atlas deployment. January 2026.
[136] Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., & Schaal, S. (2013). Optimal distribution of contact forces with inverse-dynamics control. International Journal of Robotics Research.
[137] Figure AI. (2026). Figure 03 + Helix 02: General-purpose humanoid system. (Cited as contrast.)
[138] Figure AI. (2024). Figure 02 humanoid with OpenAI partnership. Figure AI announcement, August 2024.
[139] Agility Robotics & GXO. (2024). Digit × GXO: First commercial humanoid Robots-as-a-Service deployment. Agility/GXO press releases, June 2024 onward.
[140] Tesla. (2024). Optimus Gen 2 / Gen 3 engineering disclosures. Tesla press demos, December 2023 and subsequent AI Day events.
[141] 1X Technologies. (2024). Neo Gamma humanoid and NVIDIA GR00T integration. 1X public demos, 2024–2025.
[142] Sanctuary AI. (2024). Phoenix Gen 7 humanoid with hydraulic hands and Carbon AI. Sanctuary AI product release, 2024.
[143] Figure AI. (2025). Helix accelerating real-world logistics. Figure AI tech blog, 2025.
[144] Unitree Robotics. (2024). Unitree H1 humanoid platform specifications. Unitree product page and public technical disclosures, 2023–2024.
[145] Unitree Robotics. (2024). Unitree G1 humanoid platform and `unitree_rl_gym` open-source training framework. Unitree product launch, June 2024.
[146] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems. AgiBot technical announcement, 2025.
[147] AgiBot. (2025). AgiBot GO-1: ViLLA generalist embodied foundation model. AgiBot technical announcement, March 2025.
[148] AgiBot. (2026). AgiBot GO-2: Asynchronous dual-system humanoid control architecture. AgiBot technical announcement, 2026.
[149] Fourier Intelligence. (2024). Fourier GR-1 / GR-2 humanoid platform. Fourier product announcements, 2023–2024.
[150] Xiaomi. (2022). Xiaomi CyberOne humanoid platform. Xiaomi announcement, August 2022, with subsequent updates through 2023.
[151] Zhang, Y., et al. (2025). FALCON: Learning force-adaptive humanoid loco-manipulation.
[152] Won, Y. S. (2025). Analysis of Korea's humanoid-centric AI robot ecosystem. Electronics and Telecommunications Trends 40(6), 102–116. (In Korean.)
[153] Oh, H. J. (2024). Evolution and future challenges of humanoid robots. Electronics and Telecommunications Trends 214. (In Korean.)
[154] MOTIE. (2025). K-Humanoid Alliance launch announcement. Ministry of Trade, Industry and Energy, Republic of Korea.
[155] KIET. (2025). Humanoid robotics in manufacturing: Strategic outlook. Korea Institute for Industrial Economics and Trade white paper.
[156] SNU. (2025). Seoul National University Physical AI program and humanoid initiatives. SNU announcement.
[157] POSTECH. (2025). POSTECH and K-Humanoid Alliance academic track.
[158] Kim, J., et al. (2024). KAIST Humanoid: 12 km/h biped with HuboLab lineage. KAIST technical disclosure.
[159] Hyundai Motor Group. (2025). HD Hyundai Robotics and Hyundai-Boston Dynamics ecosystem. Hyundai press release.
[160] Hyundai Motor Group. (2026). Metaplant Georgia and Atlas deployment. Hyundai announcement, January 2026.
[161] LG Electronics. (2025). LG Electronics robot strategy and K-Humanoid participation. LG press release.
[162] Doosan Robotics. (2024). Doosan Robotics: M-series and humanoid plans.
[163] Rainbow Robotics. (2025). Rainbow Robotics RB-Y1 and HUBO lineage.
[164] Kim, Y. J., et al. (2017). AMBIDEX: Cable-driven dual-arm manipulator from NAVER LABS.
[165] Rebellions. (2025). Rebellions ATOM: Korean AI chip for robot inference.
[166] DEEPX. (2024). DEEPX: Korean edge AI chip for robotics.
[167] SK On, LG Energy Solution, Samsung SDI. (2025). Korean battery company humanoid programs.
[168] Lee, J., et al. (2020). Learning quadrupedal locomotion over challenging terrain. Science Robotics. arXiv:2010.11251.
[169] World Economic Forum. (2025). Physical AI: Powering the new age of industrial operations. WEF White Paper.
[170] MOTIE. (2025). M.AX (Manufacturing AI Transformation) Alliance launch. Ministry of Trade, Industry and Energy, Republic of Korea, December 2025.
[171] MOTIE. (2025). K-Humanoid Alliance launch announcement.
[172] AgiBot. (2025). AgiBot World Colosseo: A large-scale manipulation platform. AgiBot technical announcement, 2025.
[173] Shukor, M., et al. (2025). SmolVLA: A vision-language-action model for affordable and efficient robotics.
[174] MOTIE. (2025). M.AX (Manufacturing AI Transformation) Alliance launch.
[175] Hyundai Motor Group. (2026). Metaplant Georgia and Atlas deployment.
[176] ABI Research. (2025). Humanoid robot shipment projections 2025–2032.
[177] Fourier X. (2025). Humanoid market diffusion path: Industry to service to home.
[178] Morgan Stanley. (2024). Humanoid robots: The $3 trillion market opportunity.
[179] Blankemeyer, G. (2025). Humanoid robot labor economics and job displacement outlook.