AI Safety: Designing our Future
I recently completed an AGI Strategy course at BlueDot Impact. The journey reinforced my belief that a successful AI implementation cannot exist without a heavy focus on safety. Whether you are just starting your career, leading a department, or founding a company, the conversation around AI safety is no longer optional. It is the foundation upon which a sustainable future for humanity with AI is built.
The time has come for us to reframe our perspective: AI safety isn't a hurdle to innovation but a prerequisite for it.
Where Do We Stand Today?
In the essay Preparing for Launch, authors Tim First, Tao Burga, and Tim Hwang argue that for society to truly benefit from AI, we must grapple with two core issues:
The Pace of Progress: We are not prioritizing the most pressing global issues where AI could be a force for good, and therefore the benefits we’re hoping to achieve may not get here when we want or need them.
Poor Incentives: The industry is poorly incentivized to address the risks that AI progress brings.
Currently, we are in an aggressive "innovation race." Companies are over-extending themselves, betting on future gains to justify the massive investments required to stay ahead. This "race to the top" mentality leaves very little room for safety. In fact, as of 2025, the Emerging Technology Observatory indicates that only 3% of AI-related articles focus on AI safety.
When the focus is entirely on getting to the top quickly, safety is often treated as a luxury we’ll "figure out later," or worse, painted as pessimism. But "later" could arrive faster than we think.
The Gift of the "Popping" AI Bubble
There is a lot of talk about the "AI bubble" finally bursting. While some see this as a setback, I encourage you to see it as a gift, a gentle nudge to pause.
AI isn't going anywhere, and the trajectory toward superintelligence remains unchanged. However, this period offers us a unique opportunity to catch up and reflect. The same aggressive competition that created the bubble is also the force driving the race toward superintelligent AI. By learning from the millions of dollars lost and the thousands of jobs affected by this initial "boom," we can better prepare ourselves and take action to prevent the risks associated with the next leap: the potential intelligence explosion.
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." — Irving John Good (1965)
Understanding the Intelligence Explosion
To understand why safety is a strategic necessity, we can start from the concept of "ultraintelligent" machines described above. Futurist Ray Kurzweil's Law of Accelerating Returns helps explain the speed of this shift. It notes that technological progress is not linear, but exponential. Because each new generation of technological advancement is used to build the next, the rate of progress itself increases over time. We aren't just moving forward; we are accelerating.
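To make that feedback loop concrete, here is a toy model of my own (an illustration, not a formula from Kurzweil or the essay): assume capability C grows at a rate proportional to the capability already accumulated, with some constant k. In that case,

    \frac{dC}{dt} = kC \quad\Longrightarrow\quad C(t) = C_0\, e^{kt}

Under that assumption, reinvesting each generation of capability into building the next yields exponential rather than linear growth, which is the "accelerating" part of the law.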
Think of your grandparents. They saw the world shift from radio to mobile video calls within a single lifetime. They adapted because that change, however fast, was human-led. However, in an intelligence explosion, the curve of innovation is shaped by the AI itself. We are used to being the ones in the driver's seat of progress; we are not yet prepared for a world where the car begins to redesign its own engine while we are still inside.
This isn't a reason to be pessimistic or fearful. It is a reason to be intentional. We currently have more power to define our path than we ever will.
Beyond the Office: Individual Accountability
It is easy to feel like the future is being decided in a few lab rooms in Silicon Valley or Beijing. While those engineers and founders hold immense responsibility, so do you.
Every interaction you have with an AI system is a data point in our collective trajectory. In an era of hyper-information, playing the "ignorance card" is no longer a viable strategy. Information on how these systems are built, the data they use, and the safety priorities of the companies behind them is available to anyone willing to look. Staying informed is a choice.
Our participation, whether by green-lighting a corporate project or simply buying the latest AI-integrated gadget, is an endorsement of a company's safety culture. When we choose to ask questions and demand transparency, we do more than just improve a single project; we signal our values to the market and to policymakers.
Societies still hold power through the leaders we elect and the policies we support. By making AI safety a visible priority in our communities, we influence the platforms of those who lead our nations. Policy follows public interest. If we show that we care about safety, it becomes a prerequisite for those seeking our trust and our votes.
The AI Safety Toolkit
Questions for the Intentional Leader and Consumer
To help you move from reflection to action, I've grouped these questions by the spheres of influence most of my readers occupy, both at work and as individual consumers.
Strategic Alignment (The Big Picture)
The Aspiration: What is the overarching goal of this project or product? Is it fundamentally aligned with human flourishing? What is human flourishing to you? Is it merely tied to material necessities?
The Long View: How could the way I use AI today inadvertently contribute to a risky future with superintelligence? (See a list of potential risks here).
Stakeholder Impact: Beyond the immediate users, who else does this impact? Have we considered diverse demographic groups and/or the broader environment (e.g., other sentient creatures)?
Conscious Consumption (Your Personal Use)
Data Sovereignty: How is my personal data being used to train these models? Is it fueling a "race" I don't agree with?
Corporate Accountability: Before I buy this product or service, do I know how this company views safety? Are they transparent about their alignment research?
The Participation Vote: Does my participation in this specific AI service push us toward an unsafe outcome by incentivizing speed over security?
Operational Governance & Design (For the Builders)
Responsibility: Who is ultimately responsible if this system fails? (If you don't have an answer, you likely need a governance strategy).
Adversarial Thinking: Have we considered how this system could be made to fail or be exploited by a malicious actor?
The "Kill Switch": Is there a verified, safe process for decommissioning this system if a significant misalignment is detected?
Transparency: For high-stakes decisions, are we using interpretable "white box" models, or are we relying on opaque "black boxes"?
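To give builders a concrete feel for that last question, here is a minimal sketch of a "white box" model (my own illustration in Python with scikit-learn; nothing here comes from the toolkit itself): a small decision tree whose full decision logic can be printed and audited, something an opaque deep network of similar accuracy cannot offer.

    # Minimal sketch: an interpretable "white box" classifier whose decision
    # rules are human-readable, in contrast to an opaque "black box" model.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # A shallow tree keeps the learned rules short enough to review by hand.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Every prediction can be traced to an explicit if/then path.
    print(export_text(tree, feature_names=list(data.feature_names)))

For high-stakes decisions, being able to read a model's reasoning like this is the kind of transparency the question above is probing for; where an opaque model is unavoidable, the burden shifts to external audits and monitoring.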
Safety is not about being a pessimist; it is about being a responsible architect of the future. This doesn't mean we must live in a state of constant over-analysis or perfectionism. We don't need to double-check every minor interaction; we simply need to build the habit of being mindful. By doing what we can, where we can, we help ensure that the "last invention man need ever make" is one that lets us thrive for generations to come.