About

I am a fourth-year PhD candidate at Mila and DIRO, supervised by Simon Lacoste-Julien.

I am currently seeking internships for 2026. My CV is available here.

I work in constrained learning: developing scalable algorithms that make deep learning models more reliable and controllable. Constrained optimization provides a principled way to enforce requirements in modern AI systems—such as fairness, sparsity, and safety. Highlights of my work can be found in the Featured Projects section below.

Research interests: Constrained Deep Learning and Applications, Feasible Learning.

Contact: juan.ramirez@mila.quebec

Featured Projects

Position Paper: Adopt Constraints Over Penalties in Deep Learning

How can we reliably enforce requirements like fairness and safety in deep learning?

In this position paper, we argue against enforcing requirements on models through penalty terms, an approach that is often unreliable. Instead, we advocate for tailored constrained optimization algorithms, which provide a more robust framework for ensuring desirable properties.
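
To make the contrast concrete, here is a toy sketch in plain PyTorch. It is illustrative only: the problem, step sizes, and penalty weight are assumptions, not taken from the paper.

    import torch

    x = torch.tensor([2.0, 2.0], requires_grad=True)

    def objective(x):
        return (x ** 2).sum()   # stay close to the origin ...

    def violation(x):
        return 1.0 - x.sum()    # ... subject to x[0] + x[1] >= 1 (violation <= 0)

    # Penalty approach (shown for contrast): a fixed weight must be hand-tuned.
    # Too small and the requirement is ignored; too large and the objective is.
    penalty_weight = 10.0       # hypothetical value; finding it is the hard part
    penalized_loss = objective(x) + penalty_weight * torch.relu(violation(x))

    # Constrained approach: the multiplier is adapted by dual ascent, applying
    # pressure only while the requirement is violated.
    lam = torch.tensor(0.0, requires_grad=True)
    primal_opt = torch.optim.SGD([x], lr=0.05)                 # descend on x
    dual_opt = torch.optim.SGD([lam], lr=0.05, maximize=True)  # ascend on lam

    for _ in range(500):
        lagrangian = objective(x) + lam * violation(x)
        primal_opt.zero_grad(); dual_opt.zero_grad()
        lagrangian.backward()
        primal_opt.step()
        dual_opt.step()
        with torch.no_grad():
            lam.clamp_(min=0.0)  # inequality multipliers stay nonnegative

Here the multiplier settles wherever it needs to for the constraint to hold, which is exactly the tuning burden the penalty approach leaves to the practitioner.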


Cooper: A PyTorch Library for Constrained Deep Learning

How can researchers easily apply constrained optimization in their work?

I co-develop Cooper, an open-source library for non-convex constrained optimization. It is designed to integrate seamlessly into existing PyTorch workflows, facilitating the wider adoption of constrained methods. [Documentation] [Accompanying Paper]
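
As an illustration of the intended workflow, here is a minimal sketch that constrains the squared parameter norm of a small model. It is written from memory of the documentation, so treat the exact class and argument names (Constraint, CMPState, SimultaneousOptimizer, roll, and their signatures) as assumptions that may differ across Cooper versions; defer to the docs.

    import torch
    import cooper

    class NormConstrainedProblem(cooper.ConstrainedMinimizationProblem):
        """Minimize a regression loss subject to a bound on the squared parameter norm."""

        def __init__(self, model, norm_bound):
            super().__init__()
            self.model, self.norm_bound = model, norm_bound
            self.constraint = cooper.Constraint(
                constraint_type=cooper.ConstraintType.INEQUALITY,
                multiplier=cooper.multipliers.DenseMultiplier(num_constraints=1),
            )

        def compute_cmp_state(self, inputs, targets):
            loss = torch.nn.functional.mse_loss(self.model(inputs), targets)
            sq_norm = sum(p.pow(2).sum() for p in self.model.parameters())
            violation = sq_norm - self.norm_bound  # feasible when <= 0
            state = cooper.ConstraintState(violation=violation)
            return cooper.CMPState(loss=loss, observed_constraints={self.constraint: state})

    model = torch.nn.Linear(5, 1)
    cmp = NormConstrainedProblem(model, norm_bound=1.0)
    primal_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    dual_opt = torch.optim.SGD(cmp.dual_parameters(), lr=1e-2, maximize=True)
    optimizer = cooper.optim.SimultaneousOptimizer(
        cmp=cmp, primal_optimizers=primal_opt, dual_optimizers=dual_opt
    )

    # One training step: Cooper evaluates the problem and updates both players.
    inputs, targets = torch.randn(32, 5), torch.randn(32, 1)
    optimizer.roll(compute_cmp_state_kwargs={"inputs": inputs, "targets": targets})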


Feasible Learning: A Sample-Centric Paradigm

Machine learning beyond optimizing for average performance.

The deployment of AI in high-stakes scenarios demands models that satisfy strict performance criteria on individual samples. This work introduces a novel learning paradigm where a model is trained to meet a target performance on each training sample, rather than optimizing an average metric.
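
A hand-rolled toy sketch of the idea in plain PyTorch is below (the data, threshold, and step sizes are assumptions, and this is not the paper's implementation): each training sample gets its own Lagrange multiplier, and training searches for any model that meets the per-sample target rather than the best average.

    import torch

    torch.manual_seed(0)
    X = torch.randn(100, 5)
    w_true = torch.randn(5, 1)
    y = X @ w_true + 0.01 * torch.randn(100, 1)     # (nearly) realizable data

    model = torch.nn.Linear(5, 1)
    epsilon = 0.05                                  # per-sample target loss (assumed)
    lambdas = torch.zeros(100, requires_grad=True)  # one multiplier per sample

    primal_opt = torch.optim.SGD(model.parameters(), lr=0.05)
    dual_opt = torch.optim.SGD([lambdas], lr=0.05, maximize=True)

    for _ in range(2000):
        per_sample_loss = (model(X) - y).pow(2).squeeze(1)  # shape (100,)
        violations = per_sample_loss - epsilon              # want <= 0 for every sample
        lagrangian = (lambdas * violations).sum()           # no averaged objective at all
        primal_opt.zero_grad(); dual_opt.zero_grad()
        lagrangian.backward()
        primal_opt.step()
        dual_opt.step()
        with torch.no_grad():
            lambdas.clamp_(min=0.0)                 # multipliers stay nonnegative

    with torch.no_grad():
        final_losses = (model(X) - y).pow(2).squeeze(1)
        print(f"samples meeting the target: {(final_losses <= epsilon).float().mean():.0%}")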


News

2025

2024

2023

2022

2021

2020 and beyond

  • Jul 2020: Jose Gallego-Posada, a PhD student at Mila, will supervise my undergraduate thesis. I will be working on deep generative models.

  • Jun 2019: Joined McKinsey & Co. in Belgium as a research intern, working with Antoine Stevens and Patrick Dehout on ML for the agricultural and chemical industries.

  • Jan 2019: Arrived in Louvain-la-Neuve, Belgium for an exchange semester at Université Catholique de Louvain.

  • Nov 2017: I have been appointed president of CIGMA-OE, the Mathematical Engineering chapter of the Student Organization at Universidad EAFIT.

  • Dec 2015: I was awarded a full scholarship for the Bachelor's degree in Mathematical Engineering at Universidad EAFIT.

  • Dec 2015: Scored among the top 0.1% on the Colombian high-school examination (ICFES).

  • Dec 2015: Ranked first in the National Chemistry Olympiads of Universidad de Antioquia.


Publications

* denotes equal contribution. ^ denotes equal supervision.

Preprints

  1. Juan Ramirez and S. Lacoste-Julien. Dual Optimistic Ascent (PI Control) is the Augmented Lagrangian Method in Disguise. arXiv preprint arXiv:2509.22500, 2025.

  2. Juan Ramirez, M. Hashemizadeh and S. Lacoste-Julien. Position: Adopt Constraints Over Penalties in Deep Learning. arXiv preprint arXiv:2505.20628, 2025.

  3. J. Gallego-Posada*, Juan Ramirez*, M. Hashemizadeh* and S. Lacoste-Julien. Cooper: A Library for Constrained Optimization in Deep Learning. arXiv preprint arXiv:2504.01212, 2025.

Conference

  1. Juan Ramirez*, I. Hounie*, J. Elenter*, J. Gallego-Posada*, M. Hashemizadeh, A. Ribeiro^ and S. Lacoste-Julien^. Feasible Learning. In AISTATS, 2025.

  2. M. Sohrabi*, Juan Ramirez*, T. H. Zhang, S. Lacoste-Julien and J. Gallego-Posada. On PI Controllers for Updating Lagrange Multipliers in Constrained Optimization. In ICML, 2024.

  3. M. Hashemizadeh*, Juan Ramirez*, R. Sukumaran, G. Farnadi, S. Lacoste-Julien and J. Gallego-Posada. Balancing Act: Constraining Disparate Impact in Sparse Models. In ICLR, 2024.

  4. J. Gallego-Posada, Juan Ramirez, A. Erraqabi, Y. Bengio and S. Lacoste-Julien. Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints. In NeurIPS, 2022.

Workshop

  1. Juan Ramirez, R. Sukumaran, Q. Bertrand and G. Gidel. Omega: Optimistic EMA Gradients. LatinX in AI Workshop at ICML, 2023.

  2. Juan Ramirez and J. Gallego-Posada. L0onie: Compressing COINs with L0-constraints. Sparsity in Neural Networks Workshop, 2022.

  3. J. Gallego-Posada, Juan Ramirez and A. Erraqabi. Flexible Learning of Sparse Neural Networks via Constrained L0 Regularization. LatinX in AI Workshop at NeurIPS, 2021.



Service


Juan Ramirez ©