The first time I ever used ChatGPT was during my freshman year of college. I was drowning in assignments, all due the next day, with no luck or inspiration to complete any of them. It was just me and a blinking cursor, reminding me that I had nothing. I asked the AI to help me outline a paper I had to present for my public speaking class. Within seconds, a full structure appeared in front of me. It wasn’t perfect, but it was quick and convenient. Looking back, that moment stuck with me. If this machine can do in seconds what I’d normally stay up all night for, what does that mean for the future of my creativity and education?
Artificial intelligence is no longer some far-off sci-fi fantasy like Terminator. It’s here, in action. It’s learning fast, and it’s already transforming how we work, live, and even think each day. From ChatGPT writing essays to facial recognition tracking our every move, AI is hiding somewhere in our daily routines, whether we recognize it or not. But with every great innovation come even bigger questions. Is AI truly a helpful tool, or are we rushing into a future whose effects we don’t fully understand?
There’s no denying it: AI is convenient. It can draft emails, generate PowerPoints, answer tough research questions, and even simulate human conversation. Students get help with their work, businesses save time and money, and everything in between becomes more efficient. In a speed-obsessed world, AI is the shortcut.
But what happens when convenience costs us?
The real-world consequences are right in front of us. Artists, writers, and customer service workers are being replaced by algorithms. AI can mimic a writer’s tone or an artist’s style, but it can’t recreate lived experience or genuine creative innovation. AI doesn’t know what it feels like to grow up in a certain place, to lose a loved one, or simply to like something. Yet it is being used to create art, write movie scripts, and even compose music. Will there be enough room left for human creativity and voice?
On college campuses, AI is frowned upon because it is often treated as a loophole for finishing assignments faster. Some students use it as inspiration and a brainstorming tool for essays, while others simply copy and paste entire assignments. The line between help and plagiarism is blurring, so how do professors adapt? Some are banning it outright, while others are embracing the change and encouraging students to use it to their advantage. The real challenge is teaching students to think critically when a tool can do the thinking for them.
According to a 2023 World Economic Forum report, around 83 million jobs will be lost to automation by 2027, while 69 million new jobs will be created. That might sound balanced, but many of these new roles will require advanced technical skills or access to higher education, which not everyone has. The gap between those who can keep up and those who can’t will widen, with real consequences.
Then there’s the fear factor that keeps us all up at night. Deepfakes are becoming more and more realistic, making it harder to tell real videos from digitally manipulated ones. One of the biggest dangers of AI isn’t that it will destroy humanity Terminator-style; it’s that it will quietly reshape our reality, and by the time we finally realize it, it will be too late.
As AI continues to evolve alongside humanity, one thing is clear: we are not fully in control. Most of these systems are built and regulated by privately owned tech companies that aren’t always honest or transparent, and that all too often prioritize profit over ethics. And because AI systems learn from the internet, a place full of bias and misinformation, they often reflect the worst parts of society rather than the best.
Still, the answer isn’t to just stop AI development. It’s to demand responsibility and transparency.
So what do we do now?
As students, we’re not powerless. We can educate ourselves about how AI works, use it responsibly, and stay informed about new developments. We can call out the unethical use of AI in schools, media, and politics. We can advocate for campus policies that promote responsible AI use rather than banning it outright.
The development of AI doesn’t have to be a threat to humanity, but it certainly can become one if we let it. The real power does not lie in these man-made machines and algorithms; it lies in how we choose to use them.