Anthropic’s Post


We’re starting a Fellows program to help engineers and researchers transition into doing frontier AI safety research full-time. Beginning in March 2025, we’ll provide funding, compute, and research mentorship to 10–15 Fellows with strong coding and technical backgrounds.

Fellows will have access to:
- A weekly stipend of $2,100
- ~$10k per month for compute & research costs
- 1:1 mentorship from an Anthropic researcher
- Shared workspaces in the Bay Area and London

Fellows will collaborate with Anthropic researchers for 6 months on projects in areas such as:
- Adversarial robustness & AI control
- Scalable oversight
- Model organisms of misalignment
- Interpretability

Fellows can participate while affiliated with other organizations (e.g., while in a PhD program). At the end of the program, we expect Fellows to be stronger candidates for roles at Anthropic, and we might directly extend some full-time offers.

Apply by January 20 to join our first cohort! Full details are at the following link: https://lnkd.in/dDNzwZCp


Shipping the strongest product while keeping an eye on the long game is what makes you folks, hands down, my long-term bull case. Love it.

Claudia Martinez

💡Global Events @ Microsoft | Corporate Unicorn | Empowering Teams & Making AI Innovation Work IRL ✨

1mo

Finally - a fellowship that gets it! As someone who speaks both 'tech' and 'human' fluently, love seeing a program that values connecting the dots between innovation and reality 🎯 Anthropic Quick q: How much do you value the 'translation skills' for breaking down complex AI concepts? (Asking for all of us who make technical magic make sense to humans daily 💫) Already sharing this gem with my network - because someone has to build the bridge between #AI dreams and AI reality! ✨🌉

Ayush K

Founder of CalmEmail — building AI email assistants for founders | love building and teaching about AI agent based software

1mo

In other words: if you're doing anything other than pursuing a PhD, don't apply; you will get auto-rejected.

Exciting initiative, Anthropic! Programs like this play a crucial role in building the future of safe and responsible AI. At LLMate, we share a similar commitment to advancing AI technology while prioritizing its ethical and safe deployment. Best of luck to all the talented engineers and researchers applying to the Fellows program—this is a fantastic opportunity to contribute to frontier AI safety research with access to top-notch mentorship, funding, and resources. Looking forward to seeing the impact of this cohort’s work on the AI safety landscape!


We often find that part of the journey for domain experts is building the professional curiosity necessary to explore niches in AI that might feel counterintuitive in traditional domains. Does the Fellows program have an angle on broadening worldviews and perspectives to support that? Engineers and academics often carry a very specific (and useful) perspective or approach to challenges that can get bogged down in new fields (IYKYK). We need more engineers and researchers in this space. But we also need them thinking differently…


This is fantastic news! Anthropic's commitment to AI safety is inspiring. We are excited to see the contributions of the Fellows program and the positive impact it will have on the field.

This is an incredible initiative, Anthropic. I am excited to see the impact this program will have on shaping the future of AI!

This is a great opportunity for engineers and researchers passionate about AI! Excited to see the impact this program will have on the future of AI development!


This research is much needed! We're looking forward to the good that comes from this program.


This sounds great! 🎉
