Who am I?
My name is Albert Manuel Orozco Camacho; I am originally from Guadalajara, Jalisco, México 🇲🇽.
👁️👁️ My last name is Orozco Camacho. As a proud Latin American, I strongly prefer to use both words in any context, especially in academic publications. I consider these subtleties important heritage markers, and I am often curious about the naming customs of other cultures.
I enjoy all things artificial intelligence. 😎
Presently: Doing a PhD at Mila and Concordia University
Previously: Research Master’s at Mila and McGill University
In 2022, I completed a Master’s degree at Mila and the Reasoning and Learning Lab of McGill University, under Prof Reihaneh Rabbany’s supervision. It all started in 2019, about 9 months before the SARS-CoV-2 pandemic hit 🦠.
During that time, I worked at the intersection of Network Science and Natural Language Processing.
My MSc thesis can be found here: 🐦
What happened before? 🤔 Undergrad degree at UNAM!
I did my undergrad at the Facultad de Ciencias of the Universidad Nacional Autónoma de México (UNAM). I graduated in 2018 with a degree in Computer Science.
During this time, I was very happy to work under Dr Ivan Vladimir Meza Ruiz’s guidance on several speech and language processing projects, which led to teaching experiences, hackathons, and a publication.
Before I started playing with modern-day language models, I spent a year playing with IIMAS’ Golem service robot. This was a very wholesome experience that allowed me to learn the essence of AI before the Deep Learning boom.
My undergraduate studies concluded with a famous dissertation in which I was able to showcase a bit of my flippant side towards the academic world. I deeply believe that humans should seek some degree of joy in their professional activities; moreover, it is worth taking one’s life a bit less seriously from time to time.
My BSc thesis can be found here: 🦕
This was a very early effort at generating coherent meme captions for some popular 2010s characters. It was based on the Show-and-Tell model, a precursor of modern-day image-captioning systems, which combined a pre-trained InceptionV3 model (on ImageNet) with an LSTM for text generation.
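For the curious, the core idea of Show-and-Tell can be sketched in a few lines: the CNN encoder's image feature vector is projected into the word-embedding space and fed to the LSTM as the first "token", after which the decoder predicts the caption word by word. Below is a minimal, hypothetical PyTorch sketch (the dimensions, class name, and toy inputs are my own assumptions; the original used actual InceptionV3 features and a trained vocabulary):

```python
import torch
import torch.nn as nn

class ShowAndTellSketch(nn.Module):
    """Hypothetical sketch of a Show-and-Tell style captioner:
    a CNN image feature seeds an LSTM decoder over a word vocabulary."""

    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Project the image feature (e.g. InceptionV3 pooled output) into embedding space
        self.img_proj = nn.Linear(feat_dim, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feat, captions):
        # Image feature acts as the first "word" of the input sequence
        img_tok = self.img_proj(img_feat).unsqueeze(1)   # (B, 1, E)
        word_toks = self.embed(captions)                 # (B, T, E)
        seq = torch.cat([img_tok, word_toks], dim=1)     # (B, T+1, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                          # (B, T+1, V) logits

# Toy stand-ins for InceptionV3 features and caption token ids
model = ShowAndTellSketch()
feats = torch.randn(2, 2048)
caps = torch.randint(0, 10000, (2, 5))
logits = model(feats, caps)
print(logits.shape)  # torch.Size([2, 6, 10000])
```

At inference time one would instead feed the image token, sample a word from the logits, and feed it back in a loop until an end-of-caption token appears (greedy or beam search).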