Hi, I’m Michael Ryan, a Master’s student studying Artificial Intelligence at Stanford University. I’m fortunate to be doing NLP research as a member of Dr. Diyi Yang’s SALT Lab! This spring I am the Head Course Assistant for CS221 (Artificial Intelligence: Principles and Techniques) with Dr. Nima Anari, Dr. Moses Charikar, and Dr. Sanmi Koyejo.
My research interests include LLM personalization across cultures, languages, and individuals. Currently I am studying the effects of preference tuning on LLMs, and I am also exploring prompt optimization strategies via DSPy. Previously I was an undergraduate researcher in Dr. Wei Xu’s NLP X Lab at Georgia Tech.
Have a look at my CV, or if you’re in a hurry, check out my resume!
MS in Computer Science, 2025
Stanford University
BSc in Computer Science (Intelligence & Systems/Architecture), 2023
Georgia Institute of Technology
We release the MultiSim benchmark, a collection of 27 resources spanning 12 distinct languages and containing over 1.7 million complex–simple sentence pairs. This benchmark aims to encourage research on more effective multilingual text simplification models and evaluation metrics. Our experiments using MultiSim with pre-trained multilingual language models reveal exciting performance gains from multilingual training in non-English settings.