Juan Ramirez

PhD Candidate · Mila & Université de Montréal · Expected graduation: mid-2027

Constrained deep learning

I develop scalable methods for training neural networks that satisfy explicit requirements such as fairness, sparsity, and safety.

My research focuses on constrained optimization, feasible learning, and Lagrangian methods. I am supervised by Simon Lacoste-Julien and co-develop Cooper, an open-source PyTorch library for constrained optimization in deep learning.

Selected Work

Perspective

Position: Adopt Constraints Over Penalties in Deep Learning

This paper argues that fixed penalty terms are the wrong default for enforcing explicit requirements in deep learning. Instead, when a problem naturally has targets to satisfy, we should solve it as a constrained optimization problem with tailored methods rather than hope that penalty tuning recovers the right trade-off.
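
As a rough illustration (the notation here is mine, not the paper's): the penalty approach folds the requirement into the objective with a fixed weight, while the constrained formulation states the target level explicitly.

```latex
% Penalized: a fixed weight c must be tuned in the hope of hitting the target
\min_{\theta} \; f(\theta) + c \, g(\theta)

% Constrained: the requirement level \epsilon is part of the problem statement
\min_{\theta} \; f(\theta) \quad \text{subject to} \quad g(\theta) \le \epsilon
```

The weight c that achieves a given level \epsilon is problem-dependent and may shift over training, whereas the constrained formulation hands that adaptation to the solver.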

Open-source software

Cooper: A PyTorch Library for Constrained Deep Learning

Cooper is an open-source package for solving constrained optimization problems in deep learning. It implements Lagrangian-based first-order update schemes and makes it easy to combine constrained optimization algorithms with PyTorch models, autograd, and modern training pipelines. See the code and docs.
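
As a minimal sketch of the style of update Cooper automates (plain PyTorch with placeholder objective and constraint; this is not Cooper's own API, for which see the docs): gradient descent on the parameters paired with projected gradient ascent on a Lagrange multiplier.

```python
import torch

# Toy problem: minimize f(theta) subject to g(theta) <= 0, via gradient
# descent-ascent on L(theta, lam) = f(theta) + lam * g(theta), with lam >= 0.
theta = torch.randn(10, requires_grad=True)
lam = torch.zeros(())  # Lagrange multiplier, updated manually

primal_optimizer = torch.optim.SGD([theta], lr=1e-2)
dual_lr = 1e-2

def f(theta):
    return (theta ** 2).sum()      # placeholder objective

def g(theta):
    return 1.0 - theta.mean()      # placeholder constraint: mean(theta) >= 1

for step in range(1_000):
    primal_optimizer.zero_grad()
    lagrangian = f(theta) + lam * g(theta)
    lagrangian.backward()
    primal_optimizer.step()        # descent step on theta
    with torch.no_grad():
        lam += dual_lr * g(theta)  # ascent step on the multiplier
        lam.clamp_(min=0.0)        # projection onto lam >= 0
```

With many constraints, the multiplier update becomes its own optimizer over a vector of multipliers; that bookkeeping, along with more sophisticated update schemes, is what the library handles.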

Learning paradigm

Feasible Learning: A Sample-Centric Paradigm

Feasible Learning trains models by solving a feasibility problem that bounds the loss on every training example, rather than optimizing for average performance. It is a sample-centric alternative to ERM for settings where tail behavior and per-example reliability matter.
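
Schematically (my notation, with \ell the per-example loss and \epsilon a target level), the contrast with ERM is:

```latex
% Empirical Risk Minimization: optimize the average loss
\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \ell(\theta; x_i, y_i)

% Feasible Learning: find any model meeting a bound on every example
\text{find } \theta \quad \text{such that} \quad
\ell(\theta; x_i, y_i) \le \epsilon \quad \text{for all } i = 1, \dots, n
```

Any feasible point controls the worst-case training loss directly, rather than as a by-product of a good average.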

Theory

Dual PI Control is the Augmented Lagrangian Method in Disguise

This paper shows that optimistic gradient ascent on the dual variables of the Lagrangian is equivalent to gradient descent-ascent on the augmented Lagrangian in the single-step, first-order regime commonly used in constrained deep learning. The result helps explain the practical success of dual optimism and establish its limits.
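
A rough sketch of the correspondence, in my own notation and for a single equality constraint g(\theta) = 0:

```latex
% Augmented Lagrangian with penalty coefficient c
L_c(\theta, \lambda) \;=\; f(\theta) + \lambda\, g(\theta) + \tfrac{c}{2}\, g(\theta)^2

% Its primal gradient matches the plain Lagrangian's, but evaluated
% at the look-ahead multiplier \lambda + c\, g(\theta):
\nabla_{\theta} L_c(\theta, \lambda) \;=\;
\nabla f(\theta) + \big(\lambda + c\, g(\theta)\big)\, \nabla g(\theta)
```

Since g(\theta) is also the dual ascent direction, \lambda + c\, g(\theta) amounts to a one-step extrapolation of the multiplier, which is roughly the sense in which dual optimism and the augmented Lagrangian coincide; see the paper for the precise statement and its limits.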

News

Archive

2020 and earlier

  • Jose Gallego-Posada, a PhD student at Mila, will supervise my undergraduate thesis. I will be working on deep generative models.
  • Now at McKinsey & Co. in Belgium as a research intern, working with Antoine Stevens and Patrick Dehout on ML for the agricultural and chemical industries.
  • Arrived in Louvain-la-Neuve, Belgium, for an exchange semester at Université Catholique de Louvain.
  • I have been appointed president of CIGMA-OE, the Mathematical Engineering chapter of the Student Organization at Universidad EAFIT.
  • I was awarded a full scholarship for the Bachelor's degree in Mathematical Engineering at Universidad EAFIT.
  • Scored among the top 0.1% on the ICFES, the Colombian national high school examination.

Publications

* denotes equal contribution. ^ denotes equal supervision.

Preprints

  1. Juan Ramirez, M. Hashemizadeh, and S. Lacoste-Julien. Position: Adopt Constraints Over Penalties in Deep Learning. arXiv:2505.20628, 2025.
  2. J. Gallego-Posada*, Juan Ramirez*, M. Hashemizadeh*, and S. Lacoste-Julien. Cooper: A Library for Constrained Optimization in Deep Learning. arXiv:2504.01212, 2025.

Conference

  1. Juan Ramirez and S. Lacoste-Julien. Dual Optimistic Ascent (PI Control) is the Augmented Lagrangian Method in Disguise. In ICLR, 2026.
  2. Juan Ramirez*, I. Hounie*, J. Elenter*, J. Gallego-Posada*, M. Hashemizadeh, A. Ribeiro^, and S. Lacoste-Julien^. Feasible Learning. In AISTATS, 2025.
  3. M. Sohrabi*, Juan Ramirez*, T. H. Zhang, S. Lacoste-Julien, and J. Gallego-Posada. On PI Controllers for Updating Lagrange Multipliers in Constrained Optimization. In ICML, 2024.
  4. M. Hashemizadeh*, Juan Ramirez*, R. Sukumaran, G. Farnadi, S. Lacoste-Julien, and J. Gallego-Posada. Balancing Act: Constraining Disparate Impact in Sparse Models. In ICLR, 2024.
  5. J. Gallego-Posada, Juan Ramirez, A. Erraqabi, Y. Bengio, and S. Lacoste-Julien. Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints. In NeurIPS, 2022.

Workshop

  1. Juan Ramirez, R. Sukumaran, Q. Bertrand, and G. Gidel. Omega: Optimistic EMA Gradients. LatinX in AI Workshop at ICML, 2023.
  2. Juan Ramirez and J. Gallego-Posada. L0onie: Compressing COINs with L0-constraints. Sparsity in Neural Networks Workshop, 2022.
  3. J. Gallego-Posada, Juan Ramirez, and A. Erraqabi. Flexible Learning of Sparse Neural Networks via Constrained L0 Regularization. LatinX in AI Workshop at NeurIPS, 2021.

Service

Invited Talks

Teaching Assistantships