The application deadline is 28 September 2025.
The fellowship is a three-month research program for participants from diverse backgrounds around the world to pursue AI Safety research from a cooperative AI perspective. It will run from January to April 2026, kicking off with a week-long retreat and potentially culminating in a conference.
Participants will receive mentorship from top researchers in the field of cooperative AI, from organisations including Google DeepMind, the University of Oxford, and the Collective Intelligence Project. Alongside this, participants will be provided with resources for building knowledge in AI Safety, as well as comprehensive financial support covering their living and travel expenses while in Cape Town.
There will be opportunities for collaboration with other cohort members, a week-long retreat, workshops and access to a co-working space.
Alumni from previous iterations of similar fellowships have gone on to work in academia, join leading AI Safety labs, work for the UK AISI, or start independent research projects.
Location: In-person in Cape Town, South Africa.
Application Deadline: 28 September 2025.
Start Date: 10 January 2026.
Duration: Full-time for 3 months, ending 13 April 2026.
Stipend: $1,170 (R21,000) per month for living expenses. Note that this is generous given the comparatively low cost of living in South Africa.
Accommodation: Private room in a group house with other fellows, or a stipend of $669 (R12,000) per month if you arrange your own accommodation.
Travel Support: We cover flights to and from Cape Town.
Visas: We are unable to provide visa sponsorship; however, visitor visas are easy to acquire for citizens of many countries and last up to 90 days, with a relatively simple extension process. We can provide support with handling your visitor visa extension.
Compute Budget: We offer up to $1,500 in compute credits, depending on your project requirements.
Eligibility: We welcome participants from all kinds of backgrounds, but a fundamental understanding of machine learning is required. Additional expertise in multi-agent reinforcement learning (MARL), game theory, complex systems, mathematics, economics, or international governance is appreciated. We are committed to the growth of a diverse and inclusive research community and welcome applicants from underrepresented backgrounds.
Participants: There will be 6-12 fellows, at least 30% of whom will be from South Africa or South African institutions; the remaining 50-70% of the cohort will be drawn from a diverse pool of international applicants. We aim to give special consideration to applicants who would otherwise have trouble accessing in-person programs in the UK or US due to visa requirements.
We have identified the following tracks for the fellowship.
This research area serves to develop the nascent field of understanding the risks of, and preventing harm from, large systems of autonomous AI agents. The focus is on cooperation problems in mixed-motive settings: situations where individual agents or groups would benefit from working together, yet have incentives that make cooperation difficult or unstable. See the associated grant areas on the Cooperative AI Foundation website for more details on the kinds of work this area includes.
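To make the idea of a mixed-motive cooperation problem concrete, here is a minimal illustrative sketch in Python (our example, not part of the fellowship materials): a two-player prisoner's dilemma in which both players prefer mutual cooperation to mutual defection, yet each player's individually best response is to defect.

```python
# Illustrative example (not from the fellowship materials): a two-player
# prisoner's dilemma, the canonical mixed-motive cooperation problem.
# Payoff values are hypothetical; any values with T > R > P > S give the
# same incentive structure.

COOPERATE, DEFECT = 0, 1

# PAYOFFS[(my_action, their_action)] -> my payoff
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,  # R: reward for mutual cooperation
    (COOPERATE, DEFECT): 0,     # S: sucker's payoff
    (DEFECT, COOPERATE): 5,     # T: temptation to defect
    (DEFECT, DEFECT): 1,        # P: punishment for mutual defection
}

def best_response(their_action: int) -> int:
    """Return the action that maximises my payoff against a fixed opponent action."""
    return max((COOPERATE, DEFECT), key=lambda a: PAYOFFS[(a, their_action)])

# Defection is the best response to either opponent action...
assert best_response(COOPERATE) == DEFECT
assert best_response(DEFECT) == DEFECT
# ...yet mutual defection is worse for both players than mutual cooperation:
assert PAYOFFS[(DEFECT, DEFECT)] < PAYOFFS[(COOPERATE, COOPERATE)]
print("Individually rational play leads to a collectively worse outcome.")
```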
Many of the greatest challenges that humanity faces can be understood as cooperation challenges, where we would benefit from working together yet have incentives that make cooperation difficult or unstable. In this area, we would like to see proposals to develop AI tools that help humans resolve major cooperation challenges. By virtue of their potentially greater ability to identify mutually beneficial agreements or to create novel institutional designs, for example, AI systems could have a huge positive impact by helping humans to cooperate.
See the associated grant area on the Cooperative AI Foundation website for more details.
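As a toy illustration of the kind of computation involved in identifying mutually beneficial agreements (a hypothetical sketch of ours with made-up utilities, not a description of any funded project), the snippet below enumerates candidate agreements and keeps only those that Pareto-improve on the status quo, i.e. leave no party worse off and at least one strictly better off.

```python
# Toy sketch (hypothetical values): search candidate agreements for ones that
# Pareto-dominate the status quo.

# Hypothetical utility each party assigns to the no-agreement baseline.
STATUS_QUO = {"party_a": 2.0, "party_b": 2.0}

# Hypothetical candidate agreements with each party's utility for them.
CANDIDATES = {
    "share_resource_evenly": {"party_a": 3.0, "party_b": 3.0},
    "a_takes_all":           {"party_a": 5.0, "party_b": 0.0},
    "b_takes_all":           {"party_a": 0.0, "party_b": 5.0},
    "joint_investment":      {"party_a": 4.0, "party_b": 2.0},
}

def pareto_improves(agreement: dict, baseline: dict) -> bool:
    """True if no party is worse off and at least one party is strictly better off."""
    no_worse = all(agreement[p] >= baseline[p] for p in baseline)
    some_better = any(agreement[p] > baseline[p] for p in baseline)
    return no_worse and some_better

# Keep only the agreements every party would rationally accept.
mutually_beneficial = {
    name: utils for name, utils in CANDIDATES.items()
    if pareto_improves(utils, STATUS_QUO)
}
print(mutually_beneficial)  # 'share_resource_evenly' and 'joint_investment' survive
```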
As AI deployment increases and critical social systems (such as the economy, the state, and culture) become less reliant on human labor and cognition, the extent to which humans can explicitly or implicitly align such social systems could dramatically decrease. Competitive pressures and 'wicked' interactions across systems and scales could make it systematically difficult to avoid outsourcing critical societal functions to AI. As a result, these systems, and the outcomes they produce, might drift further from providing what humans want. In this area, we're looking to develop mitigations that preserve human agency and ensure that our institutions serve us.
Pitch us a project! Your project must aim to reduce the risk of catastrophic outcomes arising from AI, but beyond that there are no constraints. Note that you may be less likely to be matched with a mentor (and therefore accepted to the fellowship) if you choose this option, but we will make an effort to find mentors for exceptional candidates who don't exactly fit the tracks above.
Each fellow will be matched with an expert mentor, who will provide feedback on their project. In addition, fellows will be guided by a research manager who will facilitate their research process and connect them to relevant resources during weekly meetings.
There will be a three-phase application process. The first phase will take approximately 45-60 minutes to complete. We encourage you to submit your application even if it feels unpolished; we value authenticity and substance over perfect presentation.
We will evaluate applications based on research potential, technical capability, career alignment, motivation, and personal fit. Applications are reviewed on a rolling basis, with decisions made by 6 October 2025. Early submission is encouraged.
Selected candidates will be asked to complete a compensated research task ($25/hour).
Final candidates will participate in a 45-60 minute interview with program staff. A second round of interviews may be held if needed.
Successful applicants will receive offers by 29 October 2025 and will be matched with mentors based on research interests and compatibility. If you receive an offer from a single mentor, you will work with that mentor should you accept the fellowship; if you receive offers from multiple mentors, you will be able to choose between them. Selected applicants will have initial meetings with their matched mentors before making final decisions about the fellowship.
A description of the fellowship structure can be found here. For any further queries, please contact info@cai-research-fellowship.com.
We value inclusion and encourage applicants from diverse backgrounds. Please contact us if you require special accommodation in order to apply.
You don't need to have a specific project in mind when applying for the fellowship. Throughout the interview process, we learn more about each applicant's interests and help them find a suitable mentor and project.
Our key criterion is the potential to conduct impactful cooperative AI research. As evidence of this, we evaluate applications in terms of personal fit, technical skills and career goals. Further instructions can be found on the application form.
Powerful AI systems are increasingly being deployed with the ability to autonomously interact with the world. This is a profound change from the more passive, static AI services with which most of us are familiar, such as chatbots and image generation tools.
In the coming years, the competitive advantages offered by autonomous, adaptive agents will drive their adoption both in high-stakes domains and as intelligent personal assistants capable of being delegated increasingly complex and important tasks. In order to fulfil their roles, these advanced agents will need to communicate and interact with each other and with people, giving rise to new multi-agent systems of unprecedented complexity.
While the broader field of AI Safety aims to resolve alignment and safety issues in AI to mitigate existential risk, cooperative AI focuses specifically on multi-agent safety and how AI can overcome cooperation challenges. This includes reducing risks associated with interactions between advanced AI agents, as well as making use of AI to overcome human cooperation challenges. You can learn more about cooperative AI through the Cooperative AI self-paced online course.
Through the fellowship, we are supporting global talent in advancing research during this crucial phase of AI development. Our partners in South Africa and abroad aim to facilitate collaboration across continents to solve safety and alignment problems, enabling researchers to build ongoing relationships that lead to impactful careers.
*Cooperative intelligence is an agent's ability to achieve its goals in ways that also promote social welfare, in a wide range of environments and with a wide range of other agents.[1]
We expect rapid AI adoption in Africa, given that it is demographically the youngest and fastest-growing continent. We believe that preparing African nations with societal safeguards for the mass adoption of AI will be crucial for preventing and mitigating human suffering. We also believe that AI can be used beneficially in this context to improve human coordination and resolve resource-sharing problems. This perspective aligns with the Continental Strategy on AI outlined by the African Union.
Given this, South Africa is an excellent home for this program, as it hosts the top academic institutions on the continent. In particular, the University of Cape Town, the continent's highest-ranked institution and a core partner of this program, has strong national and continental academic ties and a rapidly expanding AI ecosystem. Since its inception, AI Safety South Africa (AISSA) has been working alongside the University of Cape Town to integrate AI safety topics into the university's curriculum. With this in mind, CAIRF aims to support the burgeoning AI Safety ecosystem in South Africa, including the establishment of the African AI Safety Hub at the University of Cape Town.
Lastly, due to South Africa's more lenient visa requirements, we expect that hosting this program there will enable a more diverse pool of applicants to contribute to this critical, globally relevant field.
The fellowship is a collaboration between the Cooperative AI Foundation, PIBBSS, The AI Initiative at UCT, and AI Safety South Africa. We build on the PIBBSS fellowship methodology, with research oversight from CAIF. This initiative serves as both a talent pipeline and a research direction-setting mechanism for the University of Cape Town's emerging AI Safety Hub.
Cooperative AI Foundation (CAIF): A new charitable entity, backed by a $15 million philanthropic commitment from Macroscopic Ventures. CAIF's mission is to support research that will improve the cooperative intelligence of advanced AI for the benefit of all.
PIBBSS: Facilitates knowledge transfer from fields studying intelligence in natural systems toward building human-aligned AI.
The AI Initiative at UCT: Provides a rich ecosystem that enables researchers and students to do excellent AI research, creating tools and opportunities that facilitate the transition to a just society.
AI Safety South Africa (AISSA): A capacity-building organisation focused on developing skills and community in South Africa through events, courses, and co-working at AI Safety Cape Town.
If you have further questions, please reach out to us at info@cai-research-fellowship.com.