Abstract
Recycling steel at scale is hindered by tramp elements such as Cu and Sn, which degrade material properties. Atomistic simulations using foundational machine-learned interatomic potentials (MLIPs) trained on large databases, such as Materials Project, Alexandria, and OMAT, offer a promising approach to studying the effects of these impurities. However, fine-tuning these models to specific systems can lead to catastrophic forgetting, that is, the loss of general chemical knowledge acquired during pretraining. Here, we evaluate forgetting in three foundational MLIPs, CHGNet, SevenNet-O, and MACE, by fine-tuning on a data set of bcc-based structures containing only Fe atoms. When evaluated on a subset of the Materials Project data set after fine-tuning with a learning rate of 0.0001, CHGNet and SevenNet-O exhibited only minor increases in RMSE of 0.047 and 0.022 eV/atom, respectively, indicating minimal forgetting. In contrast, fine-tuned MACE exhibited catastrophic forgetting despite additional mitigation strategies such as layer freezing and data set replay; we attribute this to architectural sensitivity. These results highlight the importance of fine-tuning hyperparameters, model architecture, and data set design, with the fine-tuned CHGNet and SevenNet-O models showing potential for efficient and transferable modeling of recycled steels.
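As an illustration of the fine-tuning workflow summarized above, the sketch below fine-tunes the pretrained CHGNet foundational model on an Fe-only data set using the publicly documented `chgnet` Trainer API. The data variables (`fe_structures`, `energies_per_atom`, `forces`) and all hyperparameters other than the 0.0001 learning rate quoted in the abstract are illustrative assumptions, not the authors' actual settings.

```python
# Minimal sketch (assumptions): fine-tuning pretrained CHGNet on a bcc-Fe data set,
# analogous to the workflow described in the abstract. Data variables are placeholders.
from chgnet.model import CHGNet
from chgnet.trainer import Trainer
from chgnet.data.dataset import StructureData, get_train_val_test_loader

chgnet = CHGNet.load()  # load the pretrained (Materials Project-trained) foundational model

# Wrap the Fe-only training data in CHGNet's dataset class
dataset = StructureData(
    structures=fe_structures,    # list of pymatgen Structure objects (placeholder)
    energies=energies_per_atom,  # DFT energies in eV/atom (placeholder)
    forces=forces,               # DFT forces in eV/Angstrom (placeholder)
)
train_loader, val_loader, test_loader = get_train_val_test_loader(
    dataset, batch_size=8, train_ratio=0.9, val_ratio=0.05
)

# Fine-tune with the small learning rate (1e-4) mentioned in the abstract;
# the remaining hyperparameters are illustrative choices.
trainer = Trainer(
    model=chgnet,
    targets="ef",        # fit energies and forces
    optimizer="Adam",
    criterion="MSE",
    learning_rate=1e-4,
    epochs=20,
    use_device="cuda",
)
trainer.train(train_loader, val_loader, test_loader)
```

Layer freezing, one of the mitigation strategies mentioned for MACE, can likewise be sketched in PyTorch by setting `requires_grad = False` on selected submodules before training; the exact layers frozen in this study are not specified in the abstract.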
| Field | Value |
|---|---|
| Original language | English |
| Number of pages | 23 |
| Journal | Journal of Phase Equilibria and Diffusion |
| DOIs | |
| Publication status | Published - 2026 |
Keywords
- CHGNet
- fine-tuning
- iron
- MACE
- machine-learned interatomic potentials
- SevenNet-O