Building Poland's capacity for AI Safety research, policy and governance.
Here's an argument for why we should care.
AI systems far smarter than us may be created soon. AI is advancing fast, and this progress may result in human-level AI. But human-level is not the limit, and shortly after, we would probably see superintelligent AI.
These systems may end up opposed to us. AI systems may pursue goals of their own, those goals may not match ours, and that mismatch could bring them into conflict with us.
The consequences could be catastrophic, up to and including human extinction. AI may defeat us and take over, leading to humanity's extinction or permanent ruin. Even if we avoid that outcome, AI still has huge implications for the world, including great benefits if it is developed safely.
We need to get our act together. Experts are worried, but humanity doesn’t have a real plan to avert disaster, and you may be able to help.
We are working towards: