
Introduction

Recent advances in AI, particularly reinforcement learning, have enabled data-driven approaches to challenging combinatorial optimization and physics problems, including Max-Cut and the search for ground states of Ising spin systems [1–6]. These problems are central to graph optimization and statistical physics and serve as standard benchmarks for evaluating learning-based optimization methods. Building on prior work in learning combinatorial optimization algorithms [2,6] and applying reinforcement learning to spin systems [1,3,4,5], this track encourages the development of AI agents that achieve strong performance, generalization, and scalability on benchmarks.

We design two challenge tasks that allow participants to explore learning-based optimization methods and contribute to AI-driven scientific discovery. We welcome students, researchers, and engineers interested in AI, physics, and optimization to participate.

Tasks

Each team may participate in one or both tasks. Awards and recognition will be given for each task.

Task I: Graph Max-Cut

Develop reinforcement learning agents to solve Max-Cut problems on large graphs. Agents must be trained in a distribution-wise fashion across families of graphs, utilizing GPU-based environments for sampling.

Dataset

Synthetic graphs generated from the following distributions:

  • BA (Barabási–Albert)
  • ER (Erdős–Rényi)
  • PL (Power-Law)
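For local experimentation, graphs from these three families can be generated with NetworkX. The generator parameters below are illustrative, not the official contest settings:

```python
import networkx as nx

# Sample graphs from the three distributions (illustrative parameters).
ba = nx.barabasi_albert_graph(n=100, m=4, seed=0)          # BA
er = nx.erdos_renyi_graph(n=100, p=0.08, seed=0)           # ER
pl = nx.powerlaw_cluster_graph(n=100, m=4, p=0.1, seed=0)  # PL (power-law cluster)

for name, g in [("BA", ba), ("ER", er), ("PL", pl)]:
    print(name, g.number_of_nodes(), g.number_of_edges())
```

Note that `powerlaw_cluster_graph` is one of several power-law generators; the official dataset may use a different one.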

Each graph file follows:

n m           # number of nodes and edges  
u v w         # edge from node u to v with weight w  
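A minimal parser for this format might look as follows. It assumes whitespace-separated fields; whether node indices start at 0 or 1 is not specified above, so the parser preserves the indices as given:

```python
from io import StringIO

def read_graph(f):
    """Parse the contest graph format: a header line 'n m', then m lines 'u v w'."""
    n, m = map(int, f.readline().split())
    edges = []
    for _ in range(m):
        u, v, w = f.readline().split()
        edges.append((int(u), int(v), float(w)))
    return n, edges

# Tiny in-memory example: a 3-node path graph.
sample = StringIO("3 2\n1 2 1.0\n2 3 1.0\n")
n, edges = read_graph(sample)
print(n, edges)  # 3 [(1, 2, 1.0), (2, 3, 1.0)]
```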

Goal

Maximize the cut value using RL agents with multiple training environments.
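For reference, the cut value of a node partition is the total weight of edges whose endpoints lie on different sides. A minimal sketch (the `cut_value` helper and the edge list are illustrative, not part of the starter kit):

```python
def cut_value(edges, partition):
    """Total weight of edges crossing the cut; `partition` maps node -> 0 or 1."""
    return sum(w for u, v, w in edges if partition[u] != partition[v])

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.0)]  # a weighted triangle
part = {0: 0, 1: 1, 2: 0}                        # node 1 alone on one side
print(cut_value(edges, part))  # 3.0
```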

The starter kit is available at GitHub – Task I Starter Kit.




Task II: Finding Ground State Energy of Ising Model

This task benchmarks the reliability of AI agents for scientific simulation, specifically for the Ising model. Finding the ground state energy of the Ising model is computationally difficult but fundamental to simulating complex physical systems. Participants are expected to develop reinforcement learning or foundation models that will be evaluated on their ability to efficiently locate ground states in large-scale Ising spin lattices.
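For a spin configuration s in {-1, +1}^n with a symmetric coupling matrix J, the standard zero-field Ising energy is H(s) = -1/2 * s^T J s; the contest's exact convention (e.g., any external-field term) is defined in the starter kit. A minimal sketch:

```python
import numpy as np

def ising_energy(J, s):
    """Zero-field Ising energy H = -1/2 * s^T J s for spins s in {-1, +1}^n,
    with symmetric coupling matrix J (zero diagonal)."""
    return -0.5 * s @ J @ s

# Two-spin ferromagnetic example: aligned spins give the ground state.
J = np.array([[0.0, 1.0], [1.0, 0.0]])
print(ising_energy(J, np.array([1, 1])))   # -1.0 (ground state)
print(ising_energy(J, np.array([1, -1])))  # 1.0
```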

  • Goal: Minimize the Hamiltonian energy on large-scale Ising lattices.
  • Dataset: We use a compiled standard dataset of Ising model instances hosted on Hugging Face.
  • Metric: We provide an evaluator program in the starter kit that computes the geometric mean across per-instance results.
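The exact aggregation is defined by the official evaluator; for reference, the standard geometric mean over positive per-instance scores is:

```python
import math

def geometric_mean(scores):
    """Geometric mean of positive per-instance scores,
    computed in log space for numerical stability."""
    return math.exp(sum(math.log(x) for x in scores) / len(scores))

print(geometric_mean([1.0, 4.0]))  # 2.0
```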

The starter kit is available at GitHub – Task II Starter Kit.




[1] Lin, Levy, et al. "Reinforcement Learning for Ising Models: Datasets and Benchmark." NeurIPS 2025 Workshop on Machine Learning and the Physical Sciences.

[2] Liu, Xiao-Yang, and Ming Zhu. "K-Spin Ising Model for Combinatorial Optimizations over Graphs: A Reinforcement Learning Approach." NeurIPS 2023 Workshop on Optimization for Machine Learning.

[3] Hibat-Allah, Mohamed, et al. "Variational Neural Annealing." Nature Machine Intelligence (2021).

[4] Mills, Kyle, et al. "Finding the Ground State of Spin Hamiltonians with Reinforcement Learning." Nature Machine Intelligence (2020).

[5] Fan, Changjun, et al. "Searching for Spin Glass Ground States through Deep Reinforcement Learning." Nature Communications (2023).

[6] Barrett, Thomas, et al. "Exploratory Combinatorial Optimization with Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence (2020).

Contact

Contact email: rlsolvercontest@outlook.com

Contestants can ask questions via:

  • Discord
  • QQ Group: 922523057
  • WeChat Group