Centre for the Future of Intelligence · University of Cambridge
A full-time, research-intensive master's programme equipping the next generation of researchers, policymakers and industry leaders to analyse and navigate the ethical, social and practical dimensions of artificial intelligence.
We Are
Based at the Centre for the Future of Intelligence at the University of Cambridge, we bring together philosophers, social scientists, computer scientists, legal scholars, designers, cultural theorists and policy researchers with a shared mission: ensuring AI goes well for humanity. That breadth of disciplines, viewpoints and methods, all focused on AI, is what makes CFI unusual.
You Are
You want to do serious research on AI and its implications, whether your next step is a PhD, a role in policy or government, or a position in industry. You enjoy interdisciplinary challenge and want to develop real research skills. We welcome applicants from philosophy, social science, computer science, law, policy, design, humanities and beyond.
Programme Overview
The programme covers AI ethics, governance, safety, evaluation, the economics and geopolitics of AI, human-AI relationships, cultural and critical perspectives, and the future of work, while allowing students to pursue specialised interests through independent research and engagement with the range of expertise at CFI.
Join students from philosophy, law, computer science, history, political science, economics and beyond. Different perspectives are compared, challenged and integrated throughout the year.
Assessments aren't tied to specific modules, so you're free to research whatever interests you within the programme's scope, guided by expert supervision.
A structured core provides shared foundations. Elective modules change each year to reflect the research landscape, letting you go deeper into the areas that matter most to you.
Attend seminars, reading groups, conferences and events at CFI, while drawing on Cambridge's wider ecosystem in science, philosophy, law and policy.
Course Structure
The programme runs full-time across the three Cambridge terms. Taught modules build shared foundations and specialist knowledge. A mix of essays, presentations, group work and other formats develops your ability to research and communicate independently.
Michaelmas term: Core modules & electives. Research Essay 1 (5,000 words).
Lent term: Elective modules & seminars. Research Essay 2 (7,000 words). Works-in-progress presentations.
Easter term: Dissertation (up to 12,000 words). Presentation. Supervision and revision.
Two core modules provide shared foundations: an introduction to key concepts, theories and debates in AI ethics and society, and a technical module building intuition for how AI and ML systems work. Students attend at least four additional elective modules from a list that changes each year.
Students work individually with domain experts to produce four pieces of written work of increasing length and depth. You receive dedicated one-to-one supervision for each essay, building from shorter analytical exercises to a full dissertation. Those intending doctoral work will develop a well-planned PhD proposal.
How You'll Learn
This isn't a passive lecture programme. We use teaching formats that develop skills you can't pick up alone at home with a chatbot: arguing on your feet, working in teams, thinking under pressure.
Argue different sides of live controversies in AI policy and ethics. We also use "anti-debate" formats where the goal is arriving at truth together rather than winning.
Work in small teams on research questions and present your findings. The kind of collaboration that policy and industry roles actually require.
Work through real-world scenarios: international AI governance negotiations, organisational crises, decision-making under uncertainty. Then reflect on what happened and why.
Develop work in stages: proposal, draft, feedback, revision. The focus is on how you think, not just what you hand in at the end.
Write short module reflections and share them with your cohort. Give and receive peer feedback on each other's thinking.
Assessment
When anyone can generate polished text with AI, assessment has to go deeper. We test whether you actually understand what you're writing about and can defend it.
Written: Four essays of increasing length (3,000 to 12,000 words), each supervised one-to-one. We ask for original analytical or empirical contributions, not literature reviews.
Oral: Present your developing research to peers and faculty. Get live feedback and sharpen your arguments before they reach the page.
AI-integrated: Some assignments involve working with AI tools as part of the process: generating, analysing, critiquing or building on AI outputs. The point is to test your judgement, not your ability to produce text.
Collaborative: Some assessment happens live in the classroom: group problem-solving, in-class exercises, collaborative analysis. Teachers see how you actually think and work with others.
Written: A short synthesis after each module: key insights, an original idea, connections to your own research. These are shared with the cohort so everyone learns from each other.
Since this is an MPhil on AI and society, we treat the programme's own use of AI tools as part of the intellectual project. Early in the year, a dedicated session covers how to use LLMs well and where they go wrong.
Prompt engineering and using LLMs for literature discovery, brainstorming and stress-testing arguments
Understanding LLM limitations: hallucination, sycophancy, reasoning failures, distributional biases
Using AI for tutoring, creative thinking, getting feedback on drafts, and exploring counterarguments
Co-designing assessment norms: what does intellectual integrity look like in an era of capable language models?
Using AI to strengthen your own reasoning: stress-testing arguments, checking consistency, surfacing blind spots
Indicative Modules
Elective topics vary each year, reflecting the current research interests of staff and developments in the field. The following are examples of modules that have been or may be offered.
Core: Key concepts, theories and debates: AI capabilities and risks, bias, fairness, moral reasoning, machine decision-making, value alignment, and anticipating future challenges.
Core: How AI and ML systems are built, evaluated and deployed: from regression and classification to reinforcement learning and language modelling.
Elective: A strategic role-playing game exploring international AI governance — used with real policymakers in industry and government. Teams role-play states and AI companies navigating transformative change.
Elective: Emerging legal frameworks for GPAI — the EU AI Act, systemic risk regulation, governance under uncertainty, and the role of capability evaluation in law.
Elective: Why robust evaluation matters, alternative approaches, and the challenges of assessing increasingly capable systems for safety and societal impact.
Elective: Empirical approaches to AI's societal effects: public attitudes, misinformation, epistemic ecosystems, human–AI interaction and the social psychology of AI.
Elective: Can machines have minds — or only the appearance of minds? Philosophical and neuroscientific perspectives on AI consciousness, moral status and digital welfare.
Elective: Technical and legal definitions of fairness, justice and accountability, tensions between them, and practical auditing methods. Cases from criminal justice, healthcare and finance.
Elective: How AI intersects with colonialism, global power and epistemic inequality. Decolonial and indigenous approaches to more just technological futures.
Elective: The epistemic power of AI: accuracy, the risks of knowing too much, classification as policy, and the ethical stakes of data-driven prediction.
Elective: How AI transforms national security, military strategy and geopolitics. Autonomous weapons, surveillance, cyber capabilities and arms control challenges.
Elective: How AI reshapes labour markets, productivity, wealth distribution and economic policy. Automation, job displacement, new forms of work, and debates around redistribution and growth.
Elective: How stories, media and cultural imaginaries shape the development and reception of AI. Feminist, STS and critical theory perspectives on technology and power.
Elective: Tools for anticipating AI trajectories. Superforecasting, scenario planning, and frameworks for high-stakes decisions under deep uncertainty.
Module offerings and formats are indicative and subject to change. Not all modules listed will be available in a given year.
People
The programme is directed by researchers at the Centre for the Future of Intelligence and draws on a network of contributors from Cambridge, other universities and frontier AI organisations.
Modules are taught by researchers from CFI and the broader Cambridge community, spanning philosophy, social science, computer science, law, policy, HCI and design, and cultural and media studies. This means you encounter a wide range of disciplines, methods and perspectives throughout the programme.
The programme regularly features guest lectures from researchers and practitioners at other universities, policy organisations, frontier AI labs and industry, covering AI safety, governance, philosophy, economics, law and international security.
What You'll Gain
Graduates leave with the conceptual tools, practical skills and professional networks to pursue research, policy, governance or careers at the intersection of AI and society.
Critical thinking: evaluating evidence, arguments and AI outputs carefully and honestly.
Clear communication, written and oral, developed through essays, presentations and debates.
AI literacy: how frontier systems work, how to use them as research tools, and where they fail.
Broad foundations across philosophy, social science, computer science, law, economics and public policy as they relate to AI.
Forecasting and decision-making under uncertainty. Tools for thinking about where AI is heading and what that means.
Research skills in AI governance, risk assessment, safety, regulation and policy.
Thinking on your feet, developed through live debates, presentations and in-class exercises.
Training in independent research, culminating in a supervised dissertation on a topic of your choice.
A launchpad for doctoral research, policy roles in government and international organisations, or positions at AI companies where analytical depth matters.
Student Voices
I particularly loved the flexibility of this course. The assessments aren't tied to specific modules, so you're free to research whatever interests you. That freedom made the course especially rewarding. With the guidance of my supervisors, I had the space to develop my own ideas — and realised I wanted to pursue a PhD.
The network I got exposed to, and the signal of the master's programme, meant I could secure a full-time role at the AI Safety Institute. CFI enabled me to draw connections between topics that domain experts often missed — enabling impactful research usually only possible later in one's career.
One of the best aspects is the diverse cohort. Coming from different cultural backgrounds, academic disciplines and professional experiences, I learned so much about AI ethics from a variety of viewpoints. Everyone encouraged me to carve my own academic path and explore intersections between AI, ethics, law and philosophy.
How to Apply
We're looking for people who are passionate about the implications of AI, committed to interdisciplinary perspectives, and who bring a range of academic backgrounds and experiences.
Precise dates and further information are available on the postgraduate admissions portal.
For queries: education@lcfi.cam.ac.uk