I am a second-year PhD student at Mila and DIRO, under the supervision of Simon Lacoste-Julien.
I am interested in constrained optimization for deep learning. I work on
reliably solving constrained optimization problems that exhibit some of
the following characteristics: non-convexity, non-smoothness,
non-differentiability, stochastic constraints, and large numbers of
constraints.
Check out Cooper, a library for constrained optimization in PyTorch that I co-created.
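To give a flavor of the Lagrangian-based approach behind this line of work: a constrained problem can be recast as a min-max problem over the Lagrangian and solved by simultaneous gradient descent (on the primal variable) and projected ascent (on the multiplier). The toy problem and code below are an illustrative sketch only, not Cooper's actual API.

```python
# Toy constrained problem: minimize f(x) = x^2  subject to  x >= 1,
# written with the constraint g(x) = 1 - x <= 0.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x), with lam >= 0.
# Solved by simultaneous gradient descent-ascent (illustrative sketch,
# not Cooper's API).

def solve(lr=0.05, steps=5000):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        grad_x = 2 * x - lam                  # dL/dx
        grad_lam = 1 - x                      # dL/dlam = constraint violation
        x -= lr * grad_x                      # descent step on the primal variable
        lam = max(0.0, lam + lr * grad_lam)   # projected ascent on the multiplier
    return x, lam

x, lam = solve()
print(x, lam)  # analytical optimum: x = 1, multiplier lam = 2
```

At the solution, the multiplier equals the sensitivity of the optimum to the constraint; tuning it automatically is what replaces hand-tuned penalty coefficients.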
Previously, I was an intern at Simon's group at Mila under the
supervision of Jose Gallego-Posada.
Before that, I graduated with a BSc in Mathematical Engineering from
Universidad EAFIT. During the BSc, I spent a
summer at McKinsey & Co. as a research intern.
Research interests: (Non-Convex) Constrained Optimization, Min-Max Optimization, Deep Learning, Sparsity, Fairness.
Contact: juan43ramirez (at) gmail (dot) com
Sep 4: Together with António Góis, I will be TAing Simon Lacoste-Julien's graduate course on Probabilistic Graphical Models at Mila.
Jul 23: Arriving in Honolulu, Hawaii, where I will have the pleasure of presenting Omega: Optimistic EMA Gradients. Omega, our method for stochastic min-max optimization, was awarded an oral at the LatinX in AI workshop at ICML 2023!
Jan 27: I will attend Khipu 2023 in Montevideo, Uruguay. Glad to be a part of the Latin American AI community.
Oct 14: See you at the PyTorch Conference 2022 in New Orleans! Cooper will be making an appearance (poster).
Sep 14: I will be presenting my first NeurIPS paper in New Orleans: Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints.
Sep 05: Excited to start a PhD in Artificial Intelligence under the supervision of Simon Lacoste-Julien at Mila and the University of Montreal.
Jul 25: Taking part in CIFAR's Deep Learning + Reinforcement Learning (DLRL) Summer School.
Jul 8: Attending the Sparsity in Neural Networks workshop. We are presenting our latest work on implicit neural representations: L0onie: Compressing COINs with L0-constraints.
Mar 15: We have released Cooper: a library for Lagrangian-based constrained optimization in PyTorch.
Dec 17: Flexible Learning of Sparse Neural Networks via Constrained L0 Regularization won the best poster award at the LatinX in AI Workshop @ NeurIPS 2021!
Nov 1: Starting an internship at Mila, a fundamental machine learning research lab. I will join Simon Lacoste-Julien's group under the supervision of Jose Gallego-Posada.
Oct 22: My first ever paper, Flexible Learning of Sparse Neural Networks via Constrained L0 Regularization, has been accepted at the LatinX in AI Workshop at NeurIPS 2021!
Aug 2: During the next three weeks, I will attend (virtually) Neuromatch's Deep Learning Summer School.
Jul 18: I am attending an ML conference for the first time: ICML 2021.
Jul 1: I graduated from the BSc in Mathematical Engineering at Universidad EAFIT! Plus, got a mention for my contributions to the student organization. Check out my Linkedin post.
Jan 23: I started Mandarin Chinese lessons at Instituto Confucio. 我会说一点汉语.
Jan 14: I am auditing Ioannis Mitliagkas' graduate course on Deep Learning Theory at Université de Montréal.
Jul 2020: Jose Gallego-Posada, a PhD student at Mila, will be my supervisor for my undergraduate thesis. I will be working on deep generative models.
Jun 2019: Now part of McKinsey & Co. in Belgium as a research intern, working with Antoine Stevens and Patrick Dehout on ML for the agricultural and chemical industries.
Jan 2019: Arrived in Louvain-la-Neuve, Belgium, for an exchange semester at the Université Catholique de Louvain.
Nov 2017: I have been appointed as president of CIGMA-OE, the Mathematical Engineering chapter of the Student Organization at Universidad EAFIT.
Dec 2015: I was awarded a full scholarship for the Bachelor's degree in Mathematical Engineering at Universidad EAFIT.
Dec 2015: Scored in the top 0.1% on the Colombian high school examination, the ICFES. Unfortunately, I did not obtain an Andrés Bello prize.
Dec 2015: Ranked first in the National Chemistry Olympiads of Universidad de Antioquia.
Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints. J. Gallego-Posada, J. Ramirez, A. Erraqabi, Y. Bengio and S. Lacoste-Julien. NeurIPS, 2022.
Balancing Act: Constraining Disparate Impact in Sparse Models. M. Hashemizadeh*, J. Ramirez*, R. Sukumaran, G. Farnadi, S. Lacoste-Julien and J. Gallego-Posada. arXiv preprint, 2023.
Omega: Optimistic EMA Gradients. J. Ramirez, R. Sukumaran, Q. Bertrand and G. Gidel. LatinX in AI workshop at ICML, 2023.
L0onie: Compressing COINs with L0-constraints. J. Ramirez and J. Gallego-Posada. Sparsity in Neural Networks Workshop, 2022.
Flexible Learning of Sparse Neural Networks via Constrained L0 Regularization. J. Gallego-Posada, J. Ramirez and A. Erraqabi. LatinX in AI Workshop at NeurIPS, 2021.
J. Gallego-Posada and J. Ramirez (2022). Cooper: a general-purpose library for constrained optimization in PyTorch [Computer software].
Fall 2023: Probabilistic Graphical Models by Simon Lacoste-Julien