00 Overview
```mermaid
timeline
    title My AI Security Journey Timeline
    section Philosophical Beginnings (2021-2022)
        Fall 2021 : Discovered Effective Altruism at MIT : Engaged with EA/AIS community and literature
        Spring 2022 : Leadership role in EA MIT (now Impact@MIT)
    section Research Experience (2022)
        Summer 2022 : SERI Summer Research Fellowship in Berkeley : Red-teaming research with GPT-2 models
    section Organization & Growth (2022-2024)
        2022-2023 : Operations Director at MIT AI Alignment (MAIA)
        2023-2024 : Technical training and network building : Tools development for AI Security orgs
    section Applied Work (2024-2025)
        Fall 2024 : METR evaluation infrastructure contributions
        Feb 2025 : AI risk demonstration at Congressional Exhibition
        Spring 2025 : MIT AI Security Institute initiative
    section Future Path
        Fall 2025 : Early MIT graduation : Full-time AI Security career focus
```
01 Philosophical Beginnings
01.01 Discovery and Introduction (Fall 2021)
My AI Security journey began at MIT in 2021 when I discovered Effective Altruism[^1] as a freshman seeking career direction. I quickly immersed myself in popular EA/AIS literature[^2] and in community engagement through the EA Intro Fellowship, two EAGs, and an AI Security workshop. During this period, I wrote an article on Transformative AI to solidify some of my fundamental thoughts on AI for a general audience.
01.02 Community Leadership (Spring 2022)
In Spring 2022, I transitioned to a leadership role within EA MIT (now Impact@MIT), running MIT’s EA Intro Fellowship, and managing our office space in the MIT Student Center.
02 Zooming in on AI Risks
02.01 SERI Summer Research Fellowship (Summer 2022)
Summer 2022 marked my formal research entry through the (now discontinued) SERI Summer Research Fellowship in Berkeley, CA. During this intensive program, I worked under Stephen Casper of the MIT Algorithmic Alignment Group[^3] developing RL fine-tuning techniques for GPT-2 to autonomously identify diverse prompts that elicit harmful model outputs (violence, disinformation, etc.), work that resulted in the paper Explore, Establish, Exploit: Red Teaming Language Models from Scratch.
03 Organizational Leadership (2022-2024)
03.01 MAIA Operations Director (Fall 2022-Spring 2023)
From Fall 2022, I served as Operations Director on MIT AI Alignment's (MAIA) executive board, managing organizational strategy, communications infrastructure, and technical problem-solving—occasionally extending to resolving high-stakes administrative issues among members.[^4]
03.02 Technical Growth and Network Building (2023)
During 2023, I completed AI Security Fundamentals (technical track).[^5] I also attended specialized workshops (two AISST-MAIA technical workshops, one policy-focused workshop, and another hosted at Constellation for university group organizers) and conferences (EAGxLATAM and others).
Throughout this period, I continuously developed perspectives on AI Security strategy and philosophical/social concerns in light of a radical future, influenced by my work, readings, and interactions with the wider AI Security community.
04 Applied AI Security Work
04.01 METR Evaluation Infrastructure (Fall 2024)
In Fall 2024, I transitioned to more applied work, contributing to METR's evaluation infrastructure as a contractor. My work involved developing CLI tools, evaluation templates, and installers[^6] for Vivaria, a platform used to conduct AI capability and risk evaluations in partnership with OpenAI, Anthropic, and the US/UK AI Security institutes.
04.02 Policy Engagements (Winter 2024-2025)
In February 2025, I co-presented a demonstration of targeted phone-line attacks at the Congressional Exhibition on Advanced AI (hosted by the Center for AI Policy, or CAIP, and supported by Congressman Bill Foster of Illinois), showcasing the potential risks of AI misuse to congressional staffers.
04.03 Current Projects (Spring 2025)
Currently, I’m collaborating with the MIT Algorithmic Alignment Group on evaluations for AI R&D automation capabilities. I’m also spearheading efforts with MIT Faculty to establish a formal MIT AI Security Institute while maintaining active involvement with MAIA.
05 Near-Term Plans
For the past three-plus years, my career has focused on mitigating risks from advanced AI, with my current emphasis on technical governance and policy work. I plan to graduate a semester early (Fall 2025) from MIT with a BS in Computer Science, concentrating in AI & Decision Making, to enter this field full-time.
Looking beyond graduation, I aim to continue working at the intersection of technical AI Security research and policy development, helping to build robust governance frameworks for increasingly capable AI systems. My ultimate goal is to help ensure that AI development remains safe, beneficial, and aligned with human values as these technologies become more powerful and transformative.
Footnotes
[^1]: Most know EA as a fringe philosophical movement affiliated with what I believe is the largest case of crypto fraud as of April 2025. My historical relationship with EA is complex, and I don't interact with the community much. I intend to write about this eventually.

[^2]: Initially: The Precipice, Doing Good Better, Human Compatible, Superintelligence, and numerous AIS/LW articles. Later expanded to include The Sequences, Joe Carlsmith's "Otherness and control in the age of AGI", Uncontrollable, Superforecasting, Life 3.0, and others.

[^3]: Ironically going from MIT to Berkeley and joining a research project with someone at MIT 💀

[^4]: In my tenure at MAIA, I also developed MopMan (an operations management system built with AirTable), GatPack (a LaTeX-based packet generation tool), a repository of AIS university group resources, and the AIS @ MIT Directory (contact me for access!).

[^5]: I also completed about one week of the ARENA curriculum, but that's hardly substantial.

[^6]: Unfortunately, the team had no desire to maintain the Homebrew formula. Some parts of it lived on elsewhere, but the project overall was scrapped.