Juan Camilo Ramirez — Research, Projects, Publications

About

Juan Ramirez is a fourth-year PhD candidate at Mila and the University of Montreal, supervised by Simon Lacoste-Julien.

He works on constrained learning—developing scalable algorithms that enforce requirements such as fairness, sparsity, and safety in neural networks via Lagrangian methods.

He develops Cooper, an open-source PyTorch library for constrained learning, and co-organized the NeurIPS 2025 Workshop on Constrained Learning.

Research interests: Constrained Learning, Feasible Learning, Reliable & Trustworthy AI, Min-max Optimization, Sparsity.

Contact: juan.ramirez@mila.quebec

Actively seeking research internships for 2026.

View CV →

Featured Projects

Cooper: A PyTorch Library for Constrained Deep Learning

A practical toolbox for training deep nets under constraints.

I co-develop Cooper, an open-source PyTorch library for non-convex constrained optimization that lets you train deep models under explicit constraints. The goal is to make it practical to state requirements (fairness, sparsity, safety) as constraints and actually enforce them during training — the only assumption is that the constraints are (sub-)differentiable in PyTorch. [Docs] [Paper]
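
For intuition, here is a minimal sketch of the Lagrangian descent-ascent pattern that such a library automates. This is not Cooper's actual API; the model, loss, constraint, budget, and learning rates below are made-up placeholders for illustration only.

    import torch

    # Illustrative problem: minimize a task loss subject to constraint(theta) <= 0.
    # All names here (model, task_loss, constraint_fn, budget) are hypothetical.
    model = torch.nn.Linear(10, 1)
    primal_opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    # One Lagrange multiplier per inequality constraint, kept non-negative.
    multiplier = torch.tensor(0.0, requires_grad=True)
    dual_opt = torch.optim.SGD([multiplier], lr=1e-2, maximize=True)

    def task_loss(batch):
        x, y = batch
        return torch.nn.functional.mse_loss(model(x), y)

    def constraint_fn():
        # Example constraint: keep the L1 norm of the weights under a budget (a sparsity proxy).
        budget = 1.0
        return model.weight.abs().sum() - budget  # positive value means the constraint is violated

    for batch in [(torch.randn(32, 10), torch.randn(32, 1))] * 100:
        lagrangian = task_loss(batch) + multiplier * constraint_fn()

        primal_opt.zero_grad()
        dual_opt.zero_grad()
        lagrangian.backward()
        primal_opt.step()   # gradient descent on the model parameters
        dual_opt.step()     # gradient ascent on the multiplier

        with torch.no_grad():
            multiplier.clamp_(min=0.0)  # project the multiplier onto the non-negative orthant

The multiplier grows while the constraint is violated and shrinks once it is satisfied, so the constraint's weight in the objective is tuned automatically rather than fixed as a penalty coefficient.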

Position: Adopt Constraints Over Penalties in Deep Learning

Most models are only encouraged to satisfy requirements. We argue they should be guaranteed to.

This position paper argues that penalty terms added to the loss are fundamentally unreliable in deep learning: they nudge models toward desirable behavior but cannot guarantee that requirements are met. We advocate constrained optimization instead: stating requirements (fairness, robustness, sparsity, safety) as explicit constraints and enforcing them directly during training.
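
Schematically (my notation, not necessarily the paper's), the contrast is between a penalized objective and a constrained problem solved through its Lagrangian relaxation:

    % Penalty: \lambda is a fixed hyperparameter; nothing forces g(\theta) to be small.
    \min_{\theta} \; f(\theta) + \lambda \, g(\theta)

    % Constraint: the requirement level \epsilon is explicit, and \lambda becomes a
    % Lagrange multiplier adapted during training until the constraint is satisfied.
    \min_{\theta} \; f(\theta) \quad \text{s.t.} \quad g(\theta) \le \epsilon
    \qquad \leadsto \qquad
    \min_{\theta} \, \max_{\lambda \ge 0} \; f(\theta) + \lambda \left( g(\theta) - \epsilon \right)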

Feasible Learning: A Sample-Centric Paradigm

Instead of optimizing the average, we enforce per-sample performance.

In high-stakes settings, failure on a single example can be unacceptable. Even in lower-stakes settings, average metrics can hide systematic harm — a model can look good on paper while still failing specific users or groups. Feasible Learning trains models by solving a feasibility problem: enforce a target loss on every training sample, rather than just optimizing the average. Accepted at AISTATS 2025.
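
Schematically (again in my notation, not necessarily the paper's), the shift is from minimizing an average to solving a feasibility problem:

    % Standard empirical risk minimization: optimize the average loss.
    \min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big( f_\theta(x_i), y_i \big)

    % Feasible Learning: no average objective; require every sample to meet a target loss \epsilon.
    \text{find } \theta \quad \text{s.t.} \quad \ell\big( f_\theta(x_i), y_i \big) \le \epsilon \quad \text{for } i = 1, \dots, n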

News

2020 and earlier

  • Jose Gallego-Posada, a PhD student at Mila, will supervise my undergraduate thesis. I will be working on deep generative models.

  • Now part of McKinsey & Co. in Belgium as a research intern, working with Antoine Stevens and Patrick Dehout on ML for the agricultural and chemical industries.

  • Arrived in Louvain-la-Neuve, Belgium, for an exchange semester at Université Catholique de Louvain.

  • I have been appointed president of CIGMA-OE, the Mathematical Engineering chapter of the Student Organization at Universidad EAFIT.

  • I was awarded a full scholarship for the Bachelor's degree in Mathematical Engineering at Universidad EAFIT.

  • Scored among the top 0.1% on the Colombian high school examination, the ICFES.

  • Ranked first in the National Chemistry Olympiads of Universidad de Antioquia.

Publications

* denotes equal contribution. ^ denotes equal supervision.

Preprints

  1. Juan Ramirez and S. Lacoste-Julien. Dual Optimistic Ascent (PI Control) is the Augmented Lagrangian Method in Disguise. arXiv:2509.22500, 2025.
  2. Juan Ramirez, M. Hashemizadeh, and S. Lacoste-Julien. Position: Adopt Constraints Over Penalties in Deep Learning. arXiv:2505.20628, 2025.
  3. J. Gallego-Posada*, Juan Ramirez*, M. Hashemizadeh*, and S. Lacoste-Julien. Cooper: A Library for Constrained Optimization in Deep Learning. arXiv:2504.01212, 2025.

Conference

  1. Juan Ramirez*, I. Hounie*, J. Elenter*, J. Gallego-Posada*, M. Hashemizadeh, A. Ribeiro^, and S. Lacoste-Julien^. Feasible Learning. In AISTATS, 2025.
  2. M. Sohrabi*, Juan Ramirez*, T. H. Zhang, S. Lacoste-Julien, and J. Gallego-Posada. On PI Controllers for Updating Lagrange Multipliers in Constrained Optimization. In ICML, 2024.
  3. M. Hashemizadeh*, Juan Ramirez*, R. Sukumaran, G. Farnadi, S. Lacoste-Julien, and J. Gallego-Posada. Balancing Act: Constraining Disparate Impact in Sparse Models. In ICLR, 2024.
  4. J. Gallego-Posada, Juan Ramirez, A. Erraqabi, Y. Bengio, and S. Lacoste-Julien. Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints. In NeurIPS, 2022.

Workshop

  1. Juan Ramirez, R. Sukumaran, Q. Bertrand, and G. Gidel. Omega: Optimistic EMA Gradients. LatinX in AI workshop at ICML, 2023.
  2. Juan Ramirez and J. Gallego-Posada. L0onie: Compressing COINs with L0-constraints. Sparsity in Neural Networks Workshop, 2022.
  3. J. Gallego-Posada, Juan Ramirez, and A. Erraqabi. Flexible Learning of Sparse Neural Networks via Constrained L0 Regularization. LatinX in AI Workshop at NeurIPS, 2021.

Service

Teaching Assistantships