About
MACE (Message-passing Atomic Cluster Expansion) is a cutting-edge open-source machine learning library for computational chemistry and materials science. It predicts many-body atomic interactions with exceptional accuracy and efficiency, enabling the generation of machine learning force fields (MLFFs) suitable for large-scale molecular dynamics simulations. At the core of MACE is its use of higher-order equivariant message-passing neural networks, which respect the physical symmetries of atomic systems (rotational, translational, and permutation invariance), leading to superior accuracy and data efficiency compared to conventional ML potentials.

Key capabilities include out-of-the-box foundation models pre-trained on diverse datasets (e.g., ANI-1x, MD22, liquid water), support for fine-tuning via LoRA and multihead replay, multi-GPU training for large datasets, analytical Hessians, CUDA acceleration via cuEquivariance, and ASE calculator integration. It also supports electrostatic MACE and heterogeneous data training workflows.

MACE is particularly well suited for computational chemists, materials scientists, and researchers in molecular simulation who need accurate interatomic potentials beyond classical force fields. With interfaces to LAMMPS and OpenMM, users can plug MACE models directly into established molecular dynamics pipelines. The project is academic in origin, hosted on GitHub, and actively supported through GitHub Discussions.
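To make the ASE integration concrete, here is a minimal sketch of loading a MACE foundation model as an ASE calculator. It assumes the mace-torch package is installed and uses the mace_mp entry point, which downloads a pre-trained foundation model on first use; exact model names and keyword arguments may vary between releases.

```python
# Minimal sketch: a MACE foundation model as an ASE calculator.
# Assumes the mace-torch package; mace_mp fetches a pre-trained
# foundation model on first use.
from ase.build import molecule
from mace.calculators import mace_mp

atoms = molecule("H2O")
atoms.calc = mace_mp(model="medium", device="cpu")  # use device="cuda" on a GPU

print("Energy (eV):", atoms.get_potential_energy())
print("Forces (eV/Å):", atoms.get_forces())
```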
Key Features
- Higher-Order Equivariant Message Passing: Uses physically motivated equivariant neural architectures that respect rotational, translational, and permutation symmetries for superior accuracy (see the symmetry-check sketch after this list).
- Pre-trained Foundation Models: Ships with ready-to-use foundation models trained on datasets like ANI-1x, MD22, and liquid water for immediate use in simulations.
- Fine-Tuning Support: Supports LoRA fine-tuning, multihead replay fine-tuning, and multihead training to adapt foundation models to custom datasets.
- MD Engine Integration: Native interfaces to LAMMPS and OpenMM allow MACE force fields to be used in established molecular dynamics pipelines.
- Multi-GPU & Large Dataset Training: Scales to large datasets with multi-GPU training support and CUDA acceleration via the cuEquivariance library.
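To illustrate what the symmetry guarantees mean in practice, the following sketch rotates a molecule and checks that the predicted energy is unchanged while the forces rotate along with the structure. It assumes the mace_mp ASE calculator from mace-torch; the model size, dtype, and tolerances are illustrative assumptions.

```python
# Sketch: checking rotational invariance (energy) and equivariance (forces)
# of a MACE model via its ASE calculator interface.
import numpy as np
from ase.build import molecule
from mace.calculators import mace_mp

calc = mace_mp(model="small", device="cpu", default_dtype="float64")

atoms = molecule("CH4")
atoms.calc = calc
e0, f0 = atoms.get_potential_energy(), atoms.get_forces()

# Rotate the whole molecule 90 degrees about the z-axis.
rotated = atoms.copy()
rotated.rotate(90, "z")
rotated.calc = calc
e1, f1 = rotated.get_potential_energy(), rotated.get_forces()

Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
assert np.isclose(e0, e1, atol=1e-6)          # energy is invariant
assert np.allclose(f0 @ Rz.T, f1, atol=1e-5)  # forces are equivariant
```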
Use Cases
- Generating transferable machine learning force fields for organic molecules using datasets like ANI-1x.
- Running NVT molecular dynamics simulations of liquid water using MACE foundation models (see the MD sketch after this list).
- Fine-tuning a pre-trained MACE model on custom experimental or DFT data for a specific material system.
- Integrating MACE potentials into LAMMPS or OpenMM for large-scale atomistic simulations.
- Performing active learning workflows to iteratively improve MACE models with targeted data collection.
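As a starting point for the NVT use case above, here is a minimal sketch driving a MACE foundation model with ASE's Langevin thermostat. A single water molecule stands in for a real periodic liquid-water box, and the timestep, temperature, and friction values are illustrative assumptions rather than recommended settings.

```python
# Sketch: NVT dynamics with a MACE foundation model via ASE's Langevin thermostat.
from ase import units
from ase.build import molecule
from ase.md.langevin import Langevin
from mace.calculators import mace_mp

atoms = molecule("H2O")  # stand-in for a real liquid-water box
atoms.calc = mace_mp(model="medium", device="cpu")

dyn = Langevin(atoms,
               timestep=0.5 * units.fs,
               temperature_K=300,
               friction=0.01 / units.fs)
dyn.run(100)  # 100 MD steps at constant temperature
```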
Pros
- High Accuracy and Data Efficiency: Equivariant architecture achieves state-of-the-art accuracy on benchmark datasets while requiring less training data than competing approaches.
- Ready-to-Use Foundation Models: Pre-trained models are available out of the box, enabling fast deployment for common chemical systems without training from scratch.
- Broad Ecosystem Integration: Works natively with ASE, LAMMPS, and OpenMM, fitting seamlessly into existing computational chemistry workflows.
Cons
- Steep Learning Curve: Requires familiarity with machine learning concepts, molecular dynamics, and Python to use effectively; not suited for non-technical users.
- Primarily Linux-Focused: Like most HPC and ML research tools, MACE is primarily designed for Linux/HPC environments, with limited out-of-the-box Windows support.
Frequently Asked Questions
What is MACE used for?
MACE is used to train machine learning force fields (MLFFs) that can predict atomic interactions in molecular and materials systems, enabling fast and accurate molecular dynamics simulations.
Is MACE free and open-source?
Yes, MACE is fully open-source and free to use. The source code is available on GitHub under an open-source license.
Which molecular dynamics engines does MACE integrate with?
MACE provides interfaces to LAMMPS (via ML-IAP) and OpenMM, two of the most widely used molecular dynamics simulation packages.
Can pre-trained MACE models be fine-tuned on new data?
Yes. MACE supports multiple fine-tuning strategies, including LoRA fine-tuning, multihead replay fine-tuning, and standard multihead training, to adapt pre-trained models to new datasets (a loading sketch follows below).
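Once a trained or fine-tuned model file exists, it can be used like any other ASE calculator. The sketch below assumes the generic MACECalculator interface from mace-torch; the model file name is hypothetical, and the constructor keyword (model_paths) may differ across versions.

```python
# Sketch: using a trained or fine-tuned model file as an ASE calculator.
# "my_finetuned_mace.model" is a hypothetical path to a model produced
# by MACE's training/fine-tuning workflow.
from ase.build import molecule
from mace.calculators import MACECalculator

calc = MACECalculator(model_paths="my_finetuned_mace.model", device="cpu")

atoms = molecule("CH3OH")
atoms.calc = calc
print("Energy (eV):", atoms.get_potential_energy())
```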
Does MACE support GPU acceleration?
Yes, MACE supports CUDA acceleration through the cuEquivariance library and also supports multi-GPU training for large-scale dataset workflows.