Acta Metallurgica Sinica (English Letters) ›› 2025, Vol. 38 ›› Issue (12): 2077-2101.DOI: 10.1007/s40195-025-01934-x
Jiaye Li1,2, Xinyuan Zhang1,3, Chunlei Shang3, Xing Ran4, Zhe Wang4,5, Chengjiang Tang4, Xiaohang Zhang1,2, Mingshuo Nie6, Wei Xu1,2, Xin Lu1,2
Received: 2025-05-22
Revised: 2025-07-30
Accepted: 2025-08-20
Online: 2025-12-10
Published: 2025-11-11
Contact: Mingshuo Nie, ms_nie@163.com; Wei Xu, weixu@ustb.edu.cn
About the authors: Jiaye Li and Xinyuan Zhang contributed equally to this work.
Jiaye Li, Xinyuan Zhang, Chunlei Shang, Xing Ran, Zhe Wang, Chengjiang Tang, Xiaohang Zhang, Mingshuo Nie, Wei Xu, Xin Lu. Reinforcement Learning in Materials Science: Recent Advances, Methodologies and Applications[J]. Acta Metallurgica Sinica (English Letters), 2025, 38(12): 2077-2101.
Fig. 1 Evolution and effect of reinforcement learning in materials science. a Historical development stages; b numbers of publications and citations based on data from the Web of Science; c applications of reinforcement learning in the field of materials science
| Topic | Characteristics | Content |
|---|---|---|
| Advantages | Model-free | Dynamics-agnostic learning |
| | Trial-and-error learning | Complex system adaptability |
| | Off-policy | Historical data utilization |
| | Low computational cost | Low-dimensional state-action spaces |
| Limitations | Scalability issues | Storage-computation cost |
| | Explore-exploit trade-off | Epsilon-sensitivity |
| | Discretization requirement | Continuous state difficulty |
| | Hyperparameter sensitivity | Learning rate/discount rate sensitivity |
Table 1 Advantages and limitations of Q-learning
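As a concrete illustration of the mechanics summarized in Table 1, the minimal Python sketch below implements epsilon-greedy, off-policy tabular Q-learning. The `env` interface (`reset`/`step`), the discretization of states and actions, and all hyperparameter values are illustrative assumptions, not settings drawn from the studies reviewed here.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Minimal tabular Q-learning sketch (hypothetical env interface).

    `env` is assumed to expose reset() -> state and
    step(action) -> (next_state, reward, done), with states and
    actions already discretized (the "discretization requirement"
    listed in Table 1).
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore-exploit trade-off controlled by epsilon
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)
            # Off-policy temporal-difference update toward the greedy target
            target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q
```

The learning rate `alpha`, discount factor `gamma`, and exploration rate `epsilon` correspond directly to the hyperparameter and exploration sensitivities listed under the limitations in Table 1.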
Fig. 4 Important structural parameters of cellular metamaterials optimized by Q-learning. a Flowchart of the reinforcement learning optimization of the objective Ω; b final Ω values for each episode and the band structure of the initial state; c Ω values at every step during the final episode and the band structure of the final state; d evolution paths during the final episode for four tests with different initial states; e state ratios for the four tests with different initial states [59]
Fig. 5 A novel framework leveraging reinforcement learning to predict porosity in metal laser-powder bed fusion processes. a Agent-environment interaction of the traditional reinforcement learning framework adapted for L-PBF optimization; b average reward per episode received by the agent versus standard deviation; c Q-value associated with each state of the parameter space, to be considered as a processing map, with four micrographs and their corresponding location on the map [61]
| Topic | Characteristics | Content |
|---|---|---|
| Advantages | High-dimensional state spaces | Deep neural networks for complex inputs |
| | End-to-end learning | Automated feature extraction |
| | Experience replay mechanism | Improved sample efficiency |
| | Target network | Enhanced training stability |
| Limitations | Overestimation bias | Suboptimal solution generation |
| | Low sample efficiency | Sparse-reward inefficiency |
| | Limited memory | Temporal modeling deficiency |
| | Discrete action space | Discrete-action limitation |
Table 2 Advantages and limitations of DQN
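The ingredients summarized in Table 2 — a deep Q-network, experience replay, and a frozen target network — can be sketched in a few lines of PyTorch. The network architecture, the replay-buffer handling, and all hyperparameters below are illustrative assumptions rather than the configurations used in the cited works.

```python
import random
import numpy as np
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP mapping a state vector to per-action Q-values."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        return self.net(x)

def dqn_update(online, target, replay, optimizer, batch_size=64, gamma=0.99):
    """One gradient step on a minibatch drawn from the replay buffer.

    `replay` is assumed to be a list of (s, a, r, s2, done) tuples of
    NumPy-compatible values (experience replay, cf. Table 2).
    """
    batch = random.sample(replay, batch_size)
    s, a, r, s2, d = (torch.as_tensor(np.array(x), dtype=torch.float32)
                      for x in zip(*batch))
    q = online(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # The frozen target network stabilizes the bootstrapped target
        y = r + gamma * target(s2).max(dim=1).values * (1.0 - d)
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the target network is refreshed periodically, e.g. `target.load_state_dict(online.state_dict())` every few hundred updates, which provides the training stability noted in Table 2.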
Fig. 6 Reinforcement learning for multicomponent alloy compositional design: a Schematic of the proposed reinforcement learning-based alloy design; b component fractions and maximum ΔH proposed by the reinforcement learning agent converge within 4,000 training episodes before synthesis and characterization; c evolution of GP regressor performance over iterations; d GP diagonal plot highlighting; e interactions required to train the reinforcement learning agent with the surrogate; f T-SNE visualization of all experimental compositions containing Ti, Ni, Cu, Hf, and Zr [31]
Fig. 7 Reinforcement learning applied to optimize clinch joint characteristics. a Illustration of the deep reinforcement learning framework used in clinch joint simulations; b total reward at each time step for different random seeds; c overview of the resulting clinch joint and its key quality-related geometric characteristics [69]
| Topic | Characteristics | Content |
|---|---|---|
| Advantages | Continuous action generation | Policy-induced continuation |
| | Real-time action evaluation | Temporal action critic |
| | Model-free | Nonlinear direct learning from data |
| | Dynamic strategy adjustment | Process-robust online adaptation |
| Limitations | High implementation complexity | Dual-network training |
| | Hyperparameter sensitivity | Sensitivity in data-scarce contexts |
| | Low sample efficiency | High-interaction-data dependence |
| | Lack of physical constraints | Material-AI transfer paradox |
Table 3 Advantages and limitations of actor-critic
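To make the actor-critic structure in Table 3 concrete, the sketch below performs a one-step advantage actor-critic update with a Gaussian policy over continuous actions; the architectures, tensor shapes, and hyperparameters are illustrative assumptions only.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianActorCritic(nn.Module):
    """Actor outputs a Gaussian over continuous actions; critic estimates V(s)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, action_dim))
        self.log_std = nn.Parameter(torch.zeros(action_dim))
        self.critic = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, 1))

def actor_critic_update(model, optimizer, s, a, r, s2, done, gamma=0.99):
    """One-step update from a batch of transitions (tensors of shape (B, ...))."""
    v = model.critic(s).squeeze(-1)
    with torch.no_grad():
        v2 = model.critic(s2).squeeze(-1)
    td_target = r + gamma * v2 * (1.0 - done)
    advantage = (td_target - v).detach()  # critic's real-time evaluation of the action
    dist = Normal(model.actor(s), model.log_std.exp())
    actor_loss = -(dist.log_prob(a).sum(-1) * advantage).mean()
    critic_loss = nn.functional.mse_loss(v, td_target)
    loss = actor_loss + critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The critic's temporal-difference error is the real-time evaluation signal that steers the actor, which is the dual-network coupling behind both the flexibility and the implementation complexity listed in Table 3.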
Fig. 8 A safe and efficient fast charging strategy for energy storage materials based on a reduced-order model (ROM) and soft actor-critic (SAC). a Flowchart of the fast charging strategy based on SAC; b reward value during training episodes; c maximum terminal voltage; d maximum core temperature; e minimum side reaction; f training performance comparison of SAC and other deep reinforcement learning (DRL) algorithms [76]
Fig. 9 Exploration of nanocluster potential energy surfaces using actor-critic-based DRL. a Schematic of the actor-critic DRL framework; b evolution of episodic rewards during training; c K-means clustering analysis of unique minimum energy configurations identified during training; d, e Energy profiles from early training episodes; f, g, h, i Distribution metrics of Cu nanoclusters during pre- and post-policy learning across twenty episodes [74]
| Topic | Characteristics | Content |
|---|---|---|
| Advantages | Continuous action spaces | Direct continuous control |
| | Improved stability | Dual memory architecture |
| | Deterministic policy optimization | Deterministic policy function learning |
| | Flexible exploration | Noise-enhanced multimodal processing |
| Limitations | High sample needs | Dual-network data hunger |
| | Poor performance on small datasets | Small-data convergence difficulty |
| | Limited evaluation metric | Proxy metric limitation |
| | High-dimensional action spaces | Curse of dimensionality in control |
Table 4 Advantages and limitations of DDPG
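The DDPG characteristics collected in Table 4 — deterministic continuous control stabilized by target networks and soft (Polyak) updates — are illustrated by the hedged sketch below; the `Actor`/`Critic` architectures, the soft-update rate `tau`, and the batch format are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy mapping states to bounded continuous actions."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, action_dim), nn.Tanh())

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q(s, a) network acting on concatenated state-action vectors."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def soft_update(target, source, tau=0.005):
    """Polyak-average the online weights into the target network."""
    for t, p in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * p.data)

def ddpg_update(actor, critic, actor_t, critic_t, actor_opt, critic_opt,
                batch, gamma=0.99, tau=0.005):
    """One DDPG step on a batch of (s, a, r, s2, done) tensors."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        # Deterministic target action evaluated by the target critic
        y = r + gamma * (1.0 - done) * critic_t(s2, actor_t(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # The actor ascends the critic's estimate of its own deterministic actions
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    soft_update(actor_t, actor, tau)
    soft_update(critic_t, critic, tau)
    return critic_loss.item(), actor_loss.item()
```

Exploration is typically added at interaction time by perturbing `actor(s)` with Gaussian or Ornstein-Uhlenbeck noise, which is the noise-enhanced exploration noted in Table 4.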
Fig. 10 Coupling of an analytical rolling model and reinforcement learning to design a pass schedule. a Schematic structure of the DDPG algorithm; b evolution of the reward constituents after each iteration; c comparison between the measurements and FRM-predicted values [40]
Fig. 11 Accelerating thermal deformation of titanium aluminide TNM-B1 via reinforcement learning. a Schematic of the reinforcement learning and finite element co-simulation framework using the DDPG reinforcement learning algorithm; b action profiles for the bone compression simulation; c reward profiles for the bone compression simulation; d stress profiles of elements in the bone compression simulation [86]
Fig. 12 The processes and results of the activation and methanation using the reinforcement learning Monte Carlo tree search algorithm. a Framework diagram of the Monte Carlo tree search-policy gradient (MCTS-PG) reinforcement learning method; b, c test results and datasets of MCTS-PG and the original MCTS; d the search and screening workflow of MCTS-PG [41]
| [1] | C. Lee, C. Lim, Technol. Forecast. Soc. Change 167, 120653 (2021) |
| [2] | Y. Liu, T. Zhao, W. Ju, S. Shi, J. Materiomics 3, 159 (2017) |
| [3] | T. Zhou, Z. Song, K. Sundmacher, Engineering 5, 1017 (2019) |
| [4] | Y. Su, H. Fu, Y. Bai, X. Jiang, J. Xie, Acta Metall. Sin. 56, 1313 (2020) |
| [5] | X.Y. Zhou, H.H. Wu, J. Zhang, S. Ye, T. Lookman, X. Mao, J. Mater. Sci. Technol. 223, 91 (2025) |
| [6] | B. Singh, R. Kumar, V.P. Singh, Artif. Intell. Rev. 55, 945 (2022) |
| [7] | V.N. Vapnik, S. Mukherjee, in 13th Annual Conference on Neural Information Processing Systems (NIPS), Denver, CO, US, November 29-December 4, 1999 |
| [8] | J. Bi, V.N. Vapnik, in 16th Annual Conference on Computational Learning Theory (COLT) and 7th Kernel Workshop, Washington, D.C., US, August 24-27, 2003 |
| [9] | H.A. Simon, Q. J. Econ. 69, 99 (1955) |
| [10] | H.A. Simon, Psychol. Rev. 63, 129 (1956) |
| [11] | R. Sarikaya, G.E. Hinton, A. Deoras, IEEE/ACM Trans. Audio Speech Lang. Process. 22, 778 (2014) |
| [12] | G.E. Hinton, A. Krizhevsky, S.D. Wang, in 21st International Conference on Artificial Neural Networks, Espoo, Finland, June 14-17, 2011 |
| [13] | R.S. Sutton, D. McAllester, S. Singh, Y. Mansour, in 13th Annual Conference on Neural Information Processing Systems (NIPS), Denver, CO, US, November 29-December 4, 1999 |
| [14] | O. Khatib, S. Ren, J. Malof, W.J. Padilla, Adv. Funct. Mater. 31, 2101748 (2021) |
| [15] | E. Zhang, R. Zhang, N. Masoud, Transp. Res. Part C Emerg. Technol. 149, 104063 (2023) |
| [16] | H.L. Chen, H. Mao, Q. Chen, Mater. Chem. Phys. 210, 279 (2018) |
| [17] | I. Steinbach, Model. Simul. Mater. Sci. Eng. 17, 073001 (2009) |
| [18] | I. Singer-Loginova, H. Singer, Rep. Prog. Phys. 71, 106501 (2008) |
| [19] | J. Ågren, Curr. Opin. Solid State Mater. Sci. 1, 355 (1996) |
| [20] | S. Gorsse, O.N. Senkov, Entropy 20, 899 (2018) |
| [21] | A. Kroupa, Comput. Mater. Sci. 66, 3 (2013) |
| [22] | R.K. Vasudevan, E. Orozco, S.V. Kalinin, Mach. Learn. Sci. Technol. 3, 04LT03 (2022) |
| [23] | K. Choudhary, B. DeCost, C. Chen, A. Jain, F. Tavazza, R. Cohn, C.W. Park, A. Choudhary, A. Agrawal, S.J. Billinge, NPJ Comput. Mater. 8, 59 (2022) |
| [24] | L.Q. Chen, Annu. Rev. Mater. Res. 32, 113 (2002) |
| [25] | Z. Ding, Y. Huang, H. Yuan, H. Dong, Introduction to Reinforcement Learning, Deep Reinforcement Learning: Fundamentals, Research and Applications, 1st edn. (Springer, Singapore, 2020), pp. 47-123 |
| [26] | R.S. Sutton, A.G. Barto, Introduction to Reinforcement Learning, 1st edn. (MIT Press, Cambridge, 1998), pp. 223-260 |
| [27] | Z.K. Liu, Zentropy: Theory and Fundamentals, 1st edn. (CRC Press, Florida, 2024), pp. 273-348 |
| [28] | Z.H. Du, J. Cheng, X. Ran, Z. Wang, Y.X. He, X.H. Zhang, X.Y. Zhu, J.Z. Zhang, W. Xu, X. Lu, J. Alloys Compd. 1010, 177748 (2025) |
| [29] | T.Q. Bui, X. Hu, Eng. Fract. Mech. 248, 107705 (2021) |
| [30] | M. Perrut, Aerosp. Lab 9, 1 (2015) |
| [31] | Y. Xian, P. Dang, Y. Tian, X. Jiang, Y. Zhou, X. Ding, J. Sun, T. Lookman, D. Xue, Acta Mater. 274, 120017 (2024) |
| [32] | X. Pei, J. Pei, H. Hou, Y. Zhao, NPJ Comput. Mater. 11, 27 (2025) |
| [33] | J. Chung, B. Shen, A.C.C. Law, Z. Kong, J. Manuf. 65, 822 (2022) |
| [34] | J. Liu, Q. Qian, Comput. Mater. Sci. 221, 112075 (2023) |
| [35] | G. Huang, Y. Guo, Y. Chen, Z. Nie, Materials 16, 5977 (2023) |
| [36] | D. Seo, D.W. Nam, J. Park, C.Y. Park, M.S. Jang, ACS Photonics 9, 452 (2022) |
| [37] | A.K. Shakya, G. Pillai, S. Chakrabarty, Expert Syst. Appl. 231, 120495 (2023) |
| [38] | E. Osaro, Y.J. Colon, AIChE J. 70, e18611 (2024) |
| [39] | B. Liu, D. Zhao, X. Lu, Y. Liu, IEEE Trans. Semicond. Manuf. 38, 210 (2025) |
| [40] | C. Idzik, A. Kraemer, G. Hirt, J. Lohmar, J. Intell. Manuf. 35, 1469 (2024) |
| [41] | Z. Song, Q. Zhou, S. Lu, S. Dieb, C. Ling, J. Wang, J. Phys. Chem. Lett. 14, 3594 (2023) |
| [42] | H.Y. Jeong, J. Park, Y. Kim, S.Y. Shin, N. Kim, J. Mater. Res. Technol. 23, 1995 (2023) |
| [43] | L. Rosafalco, J.M. De Ponti, L. Iorio, R.V. Craster, R. Ardito, A. Corigliano, Sci. Rep. 13, 21836 (2023) |
| [44] | C. Gao, D. Wang, J. Build. Eng. 74, 106852 (2023) |
| [45] | S. Sun, X. Lan, H. Zhang, N. Zheng, Patt. Recognit. Artif. Intell. 35, 1 (2022) |
| [46] | A. Oroojlooy, D. Hajinezhad, Appl. Intell. 53, 13677 (2023) |
| [47] | J. Chen, H. Xing, Z. Xiao, L. Xu, T. Tao, IEEE Internet Things J. 8, 17508 (2021) |
| [48] | F. Song, H. Xing, X. Wang, S. Luo, P. Dai, Z. Xiao, B. Zhao, IEEE Trans. Mob. Comput. 22, 7387 (2023) |
| [49] | K. Arulkumaran, M.P. Deisenroth, M. Brundage, A.A. Bharath, IEEE Signal Process. Mag. 34, 26 (2017) |
| [50] | K.T. Butler, D.W. Davies, H. Cartwright, O. Isayev, A. Walsh, Nature 559, 547 (2018) |
| [51] | J. Wei, X. Chu, X.Y. Sun, K. Xu, H.X. Deng, J. Chen, Z. Wei, M. Lei, InfoMat 1, 338 (2019) |
| [52] | S. Meyn, IEEE Trans. Autom. Control 69, 8323 (2024) |
| [53] | Y.H. Wang, T.H.S. Li, C.J. Lin, Eng. Appl. Artif. Intell. 26, 2184 (2013) |
| [54] | Q. Yan, H. Wang, F. Wu, Comput. Oper. Res. 144, 105823 (2022) |
| [55] | Y. Goldberg, M.R. Kosorok, Ann. Stat. 40, 529 (2012) |
| [56] | Z. Tong, H. Chen, X. Deng, K. Li, K. Li, Inf. Sci. 512, 1170 (2020) |
| [57] | I. Sajedian, H. Lee, J. Rho, Sci. Rep. 9, 10899 (2019) |
| [58] | T. Shah, L. Zhuo, P. Lai, A. De la Rosa-Moreno, F. Amirkulova, P. Gerstoft, J. Acoust. Soc. Am. 150, 321 (2021) |
| [59] | S. Han, Q. Han, N. Ma, C. Li, Thin-Walled Struct. 191, 111071 (2023) |
| [60] | X. Zhang, X. Ran, Z. Wang, W. Xu, X. Zhu, Z. Du, J. Zhang, R. Li, Y. Li, X. Lu, J. Mater. Sci. Technol. 237, 323 (2025) |
| [61] | A.M.F. Mohamed, F. Careri, R.H.U. Khan, M.M. Attallah, L. Stella, Scr. Mater. 255, 116377 (2025) |
| [62] | V. Mnih, K. Kavukcuoglu, D. Silver, A.A. Rusu, J. Veness, M.G. Bellemare, A. Graves, M. Riedmiller, A.K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Nature 518, 529 (2015) |
| [63] | M.M. Afsar, T. Crump, B. Far, ACM Comput. Surv. 55, 145 (2023) |
| [64] | J. Clifton, E. Laber, Annu. Rev. Stat. Appl. 7, 279 (2020) |
| [65] | L. Liu, Q. Fu, Y. Lu, Y. Wang, H. Wu, J. Chen, J. Build. Eng. 80, 108044 (2023) |
| [66] | J. Dornheim, L. Morand, S. Zeitvogel, T. Iraki, N. Link, D. Helm, J. Intell. Manuf. 33, 333 (2022) |
| [67] | H. Kong, J. Yan, H. Wang, L. Fan, Neural Comput. Appl. 32, 14431 (2020) |
| [68] | J. Viquerat, P. Meliga, A. Larcher, E. Hachem, Phys. Fluids 34, 111301 (2022) |
| [69] | C. Zirngibl, F. Dworschak, B. Schleich, S. Wartzack, Prod. Eng. 16, 315 (2021) |
| [70] | L. Li, Y. Li, W. Wei, Y. Zhang, J. Liang, Inf. Sci. 647, 119494 (2023) |
| [71] | N.D. Nguyen, T.T. Nguyen, P. Vamplew, R. Dazeley, S. Nahavandi, Neural Comput. Appl. 33, 10335 (2021) |
| [72] | K. Hu, M. Li, Z. Song, K. Xu, Q. Xia, N. Sun, P. Zhou, M. Xia, Neurocomputing 599, 128068 (2024) |
| [73] | P. Jaiswal, Ph.D. Thesis, University at Buffalo, 2019 |
| [74] | R.K. Raju, J. Phys. Chem. A 128, 9122 (2024) |
| [75] | V. Wuerz, C. Weissenfels, Comput. Methods Appl. Mech. Eng. 435, 117617 (2025) |
| [76] | Z. Wei, X. Yang, Y. Li, H. He, W. Li, D.U. Sauer, Energy Storage Mater. 56, 62 (2023) |
| [77] | X. Wang, S. Wang, X. Liang, D. Zhao, J. Huang, X. Xu, B. Dai, Q. Miao, IEEE Trans. Neural Netw. Learn. Syst. 35, 5064 (2024) |
| [78] | J. Ma, M. Zhang, K. Ma, H. Zhang, G. Geng, Proc. Inst. Mech. Eng. D J. Automob. Eng. 239, 1505 (2025) |
| [79] | P. Gupta, A. Pal, V. Vittal, IEEE Trans. Power Syst. 37, 365 (2022) |
| [80] | M. Zhang, Y. Zhang, Z. Gao, X. He, IEEE Access 8, 177734 (2020) |
| [81] | Y. Zhang, W. Zhao, J. Wang, Y. Yuan, Neurocomputing 608, 128423 (2024) |
| [82] | E.H. Sumiea, S.J. Abdulkadir, H.S. Alhussian, S.M. Al-Selwi, A. Alqushaibi, M.G. Ragab, S.M. Fati, Heliyon 10, e30697 (2024) |
| [83] | W. Zeng, J. Wang, Y. Zhang, Y. Han, Q. Zhao, Int. J. Adv. Manuf. Technol. 120, 7277 (2022) |
| [84] | J. Ruan, M. Ponder, I. Parkes, W. Blejde, G. Chiu, N. Jain, in American Control Conference, Atlanta, GA, US, June 8-10 (IEEE, 2022) |
| [85] | J.A. Stendal, M. Bambach, J. Intell. Manuf. 35, 3331 (2024) |
| [86] | D.A. Woodford, Mater. Des. 14, 231 (1993) |
| [87] | T. Alfakih, M.M. Hassan, A. Gumaei, C. Savaglio, G. Fortino, IEEE Access 8, 54074 (2020) |
| [88] | A. Bouchard-Cote, S.J. Vollmer, A. Doucet, J. Am. Stat. Assoc. 113, 855 (2018) |
| [89] | X. Li, C. Wang, L. Zhang, S. Zhou, J. Huang, M. Gao, C. Liu, M. Huang, Y. Zhu, H. Chen, Acta Metall. Sin.-Engl. Lett. 37, 1858 (2024) |
| [90] | X. Teng, J. Pang, F. Liu, C. Zou, X. Bai, S. Li, Z. Zhang, Acta Metall. Sin.-Engl. Lett. 36, 1536 (2023) |
| [91] | H. Wang, Z. Duan, Q. Guo, Y. Zhang, Y. Zhao, Comput. Mater. Contin. 77, 1393 (2023) |
| [92] | P. Wu, C. Zhao, E. Cui, S. Xu, T. Liu, F. Wang, C. Lee, X. Mu, Int. J. Extrem. Manuf. 6, 052007 (2024) |
| [93] | Z. Xue, R. Tan, H. Wang, J. Tian, X. Wei, H. Hou, Y. Zhao, J. Colloid Interface Sci. 651, 149 (2023) |
| [94] | Z. Chen, Z. Zhao, Y. Hao, X. Chen, L. Zhou, J. Wang, T. Ying, B. Chen, X. Zeng, Acta Metall. Sin.-Engl. Lett. 38, 245 (2025) |
| [95] | Z. Liu, Q. Zhou, X. Liang, X. Wang, G. Li, K. Vanmeensel, J. Xie, Int. J. Extrem. Manuf. 6, 022002 (2024) |
| [96] | X. Wang, X. Ji, B. He, D. Wang, C. Li, Y. Liu, W. Guan, L. Cui, Acta Metall. Sin.-Engl. Lett. 36, 573 (2023) |
| [97] | M. Luo, R. Li, D. Zheng, J. Kang, H. Wu, S. Deng, P. Niu, Int. J. Extrem. Manuf. 5, 035005 (2023) |
| [98] | X. Sun, M. Chen, T. Liu, K. Zhang, H. Wei, Z. Zhu, W. Liao, Int. J. Extrem. Manuf. 6, 012003 (2023) |
| [99] | S. Shi, X. Liu, Z. Wang, H. Chang, Y. Wu, R. Yang, Z. Zhai, J. Manuf. Process. 120, 1130 (2024) |
| [100] | S. Sui, S. Guo, D. Ma, C. Guo, X. Wu, Z. Zhang, C. Xu, D. Shechtman, S. Remennik, D. Safranchik, Int. J. Extrem. Manuf. 5, 042009 (2023) |
| [101] | C. Ling, Q. Li, Z. Zhang, Y. Yang, W. Zhou, W. Chen, Z. Dong, C. Pan, C. Shuai, Int. J. Extrem. Manuf. 6, 015001 (2023) |