
A 3-month research fellowship developing core topics in cooperative AI, with world-class mentorship.

Applications are now closed.


About The Fellowship

The fellowship is a full-time 3-month research program for participants from diverse backgrounds around the world to pursue AI safety research from a cooperative AI perspective. The fellowship will run from January to April 2026 in Cape Town, South Africa, and will kick off with a week-long retreat.

While working from the AI Safety Cape Town co-working space, participants will receive mentorship from top researchers in the field of cooperative AI, including from organisations such as Google DeepMind, the University of Oxford, and MIT. Alongside this, participants will be provided with resources for building their knowledge and network in cooperative AI, and financial support covering their living and travel expenses.

The aim of this program is to prepare fellows for research careers in cooperative AI, and to support the burgeoning AI safety and cooperative AI ecosystem in South Africa. In line with this, the University of Cape Town (UCT) will be launching the African AI Safety Hub at the UCT AI Initiative. We aim to support this emerging institution with research direction setting and talent from this program.


Key Details

Location: In-person in Cape Town, South Africa.

Application Deadline: 1 October 2025.

Start Date: 31 January 2026.

Duration: Full-time for 3 months, ending 30 April 2026.

Stipend: $3,000 (R53,000) per month for living expenses. Note that this is generous given the comparatively low cost of living in South Africa.

Accommodation: Private room in a group house with other fellows.

Amenities: We will provide an office space (with a beautiful view of Table Mountain), and workday meals.

Travel Support: We cover flights to and from Cape Town.

Visas: We are unable to provide visa sponsorship; however, visitor visas are easy to acquire for citizens of many countries and last up to 90 days, with a relatively simple extension process. We can provide support with extending your visitor visa.

Compute Budget: We will provide compute based on your project requirements.

Participants: We are looking for candidates from across the globe for this program. Additionally, we give special consideration to applicants who would otherwise have trouble accessing in-person programs in the UK or US due to visa requirements.

If you have any questions, please see our FAQ section or watch the webinar recording for more details.


Research Areas

Multi-Agent Safety

AI for Facilitating Human Cooperation

Mitigating Gradual Disempowerment

Wildcard


Mentorship

Each fellow will be matched with an expert mentor, who will provide supervision for the duration of the fellowship. In addition, fellows will be supported by a research manager who will provide general research advice, career coaching, and ensure they are on track to meet their goals.

We have gathered some truly world-class mentors for this fellowship, and expect more to join in the coming weeks. Prospective mentors include:

Divya Siddarth

Collective Intelligence Project

Vincent Conitzer

Carnegie Mellon University and University of Oxford

Michiel Bakker

MIT & Google DeepMind

Sahar Abdelnabi

Microsoft, Max-Planck Institute for Intelligent Systems, Tübingen AI Center

Joel Z. Leibo

Google DeepMind

Zhijing Jin

University of Toronto

Lewis Hammond

Cooperative AI Foundation & University of Oxford

Tan Zhi Xuan

National University of Singapore

Max Kleiman-Weiner

University of Washington


Application Process

This will be a five-phase application process. The first phase will take approximately 45-60 minutes to complete. We encourage you to submit your application even if it feels unpolished; we value authenticity and substance over perfect presentation. We value inclusion and encourage applicants from diverse backgrounds. Please contact us if you require special accommodation in order to apply.

Phase 1 - Initial Review - deadline 1 October 2025:

Applications are reviewed on a rolling basis, with decisions made by 6 October 2025. Early submission is encouraged.

Phase 2 - Paid Work Sample (2-3 hours):

Selected candidates will be asked to complete a compensated research task. Successful applicants will be notified by 20 October.

Phase 3 - Interview:

Selected candidates will participate in a 45-60 minute interview with program staff. Successful applicants will be notified by 29 October.

Phase 4 - Mentor Matching and Offers:

Selected candidates will be interviewed by one or more mentors, matched on research interests and compatibility. Mentors will then make offers to suitable candidates.

Phase 5 - Final Offers:

At this stage, our team reviews the final mentor-mentee pairings to ensure the proposed projects are within scope. We expect the vast majority of candidates who pass Phase 4 to be accepted.


Selection Criteria

We welcome participants from anywhere in the world and at many levels of experience, though a basic understanding of machine learning is required (i.e. the equivalent of having completed one undergraduate course in ML). Our intention for this fellowship is to catalyse career growth in early-stage researchers who aim to contribute significantly to the fields of cooperative AI and AI safety. As such, we are looking for candidates with high potential whose careers could be significantly accelerated by this program.

To assess this, we evaluate applications based on the following criteria:

General Program Fit:

We look for candidates who have demonstrated the ability to complete projects, solve problems independently, and drive results despite obstacles or uncertainty. We also look for candidates who have demonstrated prior engagement with topics in AI safety or cooperative AI through reading, coursework, workshops, conferences, or other learning activities.

Domain Competence & Research Skills:

We value experience relating to the field that you wish to contribute to. This may include relevant coursework, skills, publications or other evidence of track record, appropriate to your career stage. We also strongly value research experience, though this is not strictly required.

Career Goals:

We expect clear alignment between the fellowship and your career aspirations in cooperative AI or AI safety research. We look for candidates with thoughtful, well-articulated plans for contributing to the field.

Research Proposal Potential:

We will examine the quality and feasibility of your proposed research within our tracks. We evaluate understanding of the research area, connection to existing literature, and potential for meaningful contribution within the 3-month timeline. Note that we expect many fellows will end up working on projects quite different from their original proposal. Our main motivation for including this section is to test your ability to synthesize ideas and develop a promising direction.

Counterfactual:

We also consider the potential counterfactual impact of the fellowship on your career trajectory. We give particular consideration to candidates who would have limited access to similar opportunities elsewhere, those from underrepresented communities, or those who could significantly benefit from exposure to the cooperative AI research community.


Why Cooperative AI

Powerful AI systems are increasingly being deployed with the ability to autonomously interact with the world. This is a profound change from the more passive, static AI services with which most of us are familiar, such as chatbots and image generation tools.

In the coming years the competitive advantages offered by autonomous, adaptive agents will likely drive their adoption in high-stakes domains with increasingly complex and important tasks. In order to fulfil their roles, these advanced agents will need to communicate and interact with each other and with people, giving rise to new multi-agent systems of unprecedented complexity.

While the broader fields of AI safety and AI governance often focus on individual AI systems, cooperative AI focuses specifically on multi-agent safety and how AI can overcome cooperation challenges between many actors. This includes reducing risks associated with interactions between advanced AI agents, as well as making use of AI to overcome human cooperation challenges. You can learn more about cooperative AI through the Cooperative AI self-paced online course.

Through the fellowship, we are supporting global talent in advancing research during a crucial phase of AI development. Our partners in South Africa and abroad aim to facilitate collaboration across continents to solve safety and alignment problems, enabling researchers to build ongoing relationships that lead to impactful careers.


Why South Africa

We expect rapid AI adoption in Africa given that it is, demographically, the youngest and fastest-growing continent. We believe that preparing African nations with societal safeguards for the mass adoption of AI will be crucial for preventing and mitigating human suffering. We also believe that AI can be used beneficially in this context to uplift human coordination and resolve resource-sharing problems. This perspective aligns with the Continental Strategy on AI outlined by the African Union.

Given this, South Africa is an excellent home for this program, as it hosts the top academic institutions on the continent. In particular, the University of Cape Town – the continent's highest-ranked institution and a core partner of this program – has strong national and continental academic ties and a rapidly expanding internal AI ecosystem. AI Safety South Africa (AISSA) has been working alongside the University of Cape Town to integrate AI safety topics into the university's curriculum since AISSA's inception. With this in mind, we aim to support the burgeoning AI safety and cooperative AI ecosystem in South Africa through the fellowship, including supporting the establishment of the African AI Safety Hub at the University of Cape Town.

Furthermore, due to more lenient visa requirements, we expect that hosting this program in South Africa will enable a more diverse pool of applicants to contribute to this critical, globally relevant field. Lastly, as an added bonus, the fellowship takes place during the South African summer, with sunny beaches just 15 minutes away from the co-working space!


Timeline

2025 - Applications

26 August
Phase 1 applications open.
18 September
Info session webinar.
1 October
Phase 1 applications close.
6 October
Applicants proceed to the paid work sample phase.
20 October
Applicants proceed to the interview phase.
29 October
Fellows are matched with mentors.
29 October - 13 November
Interviews with mentors and final offers.

2026 - Fellowship

31 January
Kickoff retreat in Cape Town.
9 February
Research phase begins.
February - March
Research continues.
30 April
Fellowship Conclusion.

Partners

The fellowship is a collaboration between the Cooperative AI Foundation (CAIF), Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS), the AI Initiative at the University of Cape Town (UCT), and AI Safety South Africa (AISSA). AISSA is driving the project, building on the PIBBSS fellowship methodology, with research oversight from CAIF. The initiative serves as both a talent pipeline and a research direction-setting mechanism for UCT's emerging African AI Safety Hub, and is funded by the AI Safety Tactical Opportunities Fund and the Cooperative AI Foundation.


AI Safety South Africa (AISSA)

A capacity building organisation focused on developing skills, networks, and community for preventing global catastrophic outcomes from advanced AI. AISSA drives impact through education, research, community, and partnerships.

The Cooperative AI Foundation (CAIF)

A charitable entity backed by a $15 million philanthropic commitment from Macroscopic Ventures. CAIF's mission is to support research that will improve the cooperative intelligence of advanced AI for the benefit of all.

The UCT AI Initiative

A research, teaching, and knowledge translation ecosystem dedicated to advancing world-class AI rooted in African realities. The initiative's mission is to design technologies that drive justice, dignity, and collective flourishing. The AI Initiative will have focus areas in AI applications, including improving outcomes in health, climate, and poverty, as well as in AI safety and foundational AI theory.

Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS)

A research initiative aiming to leverage insights on the parallels between intelligent behaviour in natural and artificial systems towards progress on important questions in AI risk, governance and safety. PIBBSS has successfully run multiple research fellowships, developing a unique methodology for mentoring early-career researchers in AI safety.


Frequently Asked Questions

If you have further questions, please reach out to us at info@cai-research-fellowship.com.


Information Session Webinar Recording