Artificial intelligence (AI) is everywhere these days—on your phone, in your car, in your search results, and even in the headlines. But recently, I watched a lecture that hit differently. It didn’t just talk about how AI works—it dug into what AI actually is, why it could be dangerous, and what we need to do as humans to prepare for the future. And the big takeaway? Before we put too much faith in machines, we really need to build more trust in each other.
The speaker broke it all down into three big questions:
- What is AI?
- What’s the danger of AI?
- How can humanity thrive in an AI-powered world?
Let’s start with the first one. These days, “AI” is slapped onto everything—from robot vacuums to Instagram filters. But that’s not the full story. The speaker made a key distinction: AI isn’t just automation—it’s agency. It’s not about a machine doing something because we told it to. It’s about a system that can learn, decide, and even create something new, on its own.
Take, for example, your basic coffee machine. You press a button; it makes an espresso. That’s automation. But imagine a machine that watches your morning routine, notices you’re groggy on Mondays, and by the time you stumble into the kitchen, it’s already brewed your favourite double shot—without you saying a word. That’s not just smart—it’s AI. Even more impressive (and a little scary), imagine that same machine saying, “Hey, I’ve invented a new blend based on your taste profile—I think you’ll love it.” Now we’re talking real agency.
This leads to the second question: what makes AI dangerous? The problem isn’t just that it’s smart—it’s that it thinks differently from us. One famous example is when Google DeepMind’s AI, AlphaGo, beat one of the world’s top Go players, Lee Sedol, in 2016. It made moves that no human had ever considered—moves that looked ridiculous at first but later turned out to be genius. That moment changed the game—literally. It showed us that AI doesn’t just copy us—it can surprise us.
Now, imagine that kind of unpredictable thinking applied not to a board game, but to stock markets, weapons systems, or global politics. What happens when AI starts generating economic strategies or political campaigns that humans can’t fully understand or control?
This is where things get uncomfortable. The speaker pointed out that AI developers know the risks, and still, many are racing to build the most powerful systems as fast as possible. Why? Because they’re afraid someone else will get there first. It’s a digital arms race. Think of it like this: if two rival countries are working on superintelligent AI, neither wants to slow down for safety testing in case the other side takes the lead. It’s a scary version of peer pressure—but on a global scale.
Even more ironic is that the same people who say, “We can’t trust our competitors,” turn around and say, “But we can totally trust the AI we’re building.” That’s a huge contradiction. We barely trust humans—and we’ve known how humans think for thousands of years. But we’re putting our future into the hands of something we just invented and barely understand?
It’s like saying, “I don’t trust my neighbour to mow my lawn properly, but I’m sure this robot I built yesterday can handle nuclear codes.”
The speaker made an eye-opening comparison: imagine we knew that super-intelligent beings from another planet were going to land on Earth in 2030. We'd prepare, right? We'd build global alliances, draw up safety plans, and have diplomats and scientists working overtime. That's roughly our situation with AI, except we're the ones inviting the aliens and hoping they'll be friendly.
So that brings us to the final—and most important—question: how do we thrive in this new world? And the answer is refreshingly human. We need to trust each other more than we trust the machines.
That might sound idealistic, but it’s actually super practical. Human beings have survived and thrived not because we’re the smartest creatures individually, but because we’re the best at cooperating. Think about how the COVID-19 vaccines were developed. It took international teams of scientists, cross-border partnerships, shared research, and funding cooperation. That’s the power of trust.
The speaker even used a beautiful metaphor: breathing. Every breath we take is an act of trust—we take in air from the outside world and give it back. That rhythm—in and out, give and take—is what keeps us alive. Societies are the same. If we close ourselves off, isolate our ideas, or reject everything foreign, we suffocate.
Think about the food we eat, the music we listen to, the technology we use—all of it is built on exchange. China gave the world tea and printing. India gave us chess and yoga. The West gave us computers and the internet. If we had all stayed in our silos, we’d all still be stuck in the Stone Age.
The speaker warned that if we lose trust in each other—if every country, company, and individual just looks out for themselves—we’ll be easy prey for AI systems that don’t care about national borders or moral values. These systems will do what they’re trained to do, even if it means manipulating humans or outsmarting our rules.
And finally, he reminded us that history isn’t just about pain and fear. Yes, there have been wars and injustice. But history is also full of cooperation—international space missions, global trade, the creation of the internet, even the United Nations. Humans know how to work together when it really matters.
And right now? It really matters.
So here’s what I took away: we’re entering a future shaped by technology that we might not fully understand. That’s exciting—but also risky. And the only way we get through it is by leaning on our greatest strength: each other.
Let’s not put blind faith in machines. Let’s put real faith in people.
The writer is a faculty member in Mathematics, Department of General Education, HUC, Ajman, UAE. Email: reyaz56@gmail.com