diff --git a/app/routes/how-can-i-help.career.tsx b/app/routes/how-can-i-help.career.tsx
index 80c5ce20..3640ccc6 100644
--- a/app/routes/how-can-i-help.career.tsx
+++ b/app/routes/how-can-i-help.career.tsx
@@ -243,28 +243,31 @@ const GovernancePath = () => (

Why this is important

- To ensure humanity benefits from advanced AI and mitigates catastrophic risks, working
- to solve the technical challenge of AI alignment is not enough. We must ensure that
- before it is solved, AI is tested, overseen, and does not grow too quickly. If AI
- alignment is solved, we must carefully deploy solutions. Both these tasks will require
- organized efforts and global coordination.
+ To ensure humanity benefits from advanced AI and mitigates catastrophic risks, technical
+ solutions for AI alignment must be complemented by effective public policy and corporate
+ oversight to keep development tightly controlled and at a cautious pace. Even with
+ successful AI alignment, robust governance is essential to ensure consistent
+ implementation across all sectors and regions.

-

Where these people usually work

-

Coming soon...

+

+ Where professionals in AI governance usually work
+

+

+ AI governance professionals work in settings like government agencies, international
+ organizations, regulatory bodies, think tanks, research institutions, and private
+ companies, developing policies, analyzing risks, and shaping governance frameworks for
+ the safe use of AI technologies.
+

You might be a good fit if...

- You might want to consider working in these areas if the following tasks fit your
- abilities and interests: Building support for international treaties regulating AI;
- preventing the deployment of AI systems that pose a significant and direct threat of
- catastrophe; mitigating the negative impact of AI technology on other catastrophic
- risks, such as nuclear weapons and biotechnology; slowing down AI progress when we
- aren't on track to make AI safe; building government capacity to evaluate frontier AI
- for danger; examining the threats misaligned AI poses to social infrastructure;
- analyzing which policies would discourage an AI arms race.
+ You might be a good fit for a career in AI governance if you have a background in
+ political science, law, international relations, or economics, or if you have technical
+ expertise in AI or cybersecurity. You could also thrive in this field if you're skilled
+ in research, advocacy, or communicating complex ideas clearly.

diff --git a/app/routes/how-can-i-help.donate.tsx b/app/routes/how-can-i-help.donate.tsx
index d8c95abb..79151aed 100644
--- a/app/routes/how-can-i-help.donate.tsx
+++ b/app/routes/how-can-i-help.donate.tsx
@@ -94,15 +94,15 @@ export default function Donate() {
 }
 >

- If you are involved in the AI safety community, we encourage you to directly fund
- research, projects, or other expenses (e.g. a plane ticket to a conference) for
- impactful individuals or small organizations.
+ If you have insights into the key obstacles or opportunities that could make a big
+ difference, we encourage you to directly fund research, projects, or other expenses
+ (like a plane ticket to a conference) for impactful individuals or small organizations.

- Donating directly puts you in the seat of the grantmaker. If you have insight into an
- avenue that deserves funding, you're often likely to make a better decision than a
- grantmaker, who may lack nuanced understanding of individuals and smaller organizations
- in the space.
+ Donating directly can bypass traditional grantmaking, provide immediate impact, and
+ allow you to share valuable insights. It diversifies funding sources, reduces reliance
+ on large donors, and supports those who would otherwise face a lengthy grant application
+ process.