Many of us grew up with the “Terminator” movie franchise, “The Matrix” movies, “A.I. Artificial Intelligence”, “I, Robot”, “Minority Report”, and so on. Before that, we were given Isaac Asimov’s “Three Laws of Robotics”. Since the earliest imaginings of anything along the lines of Artificial Intelligence, humans have maintained a healthy dose of skepticism.

With the seemingly sudden onset of Generative AI, every facet of professional and personal life has been affected. From corporate IT to the music industry to graphic design, people are struggling to make heads or tails of the legal and ethical challenges associated with this new field of content generation.

I love the Terminator franchise, but that’s partly because I’m a big Arnold Schwarzenegger fan, and can anyone dismiss the impact the original “The Matrix” had on the world? But these movies and TV shows, as well as many others, have fostered an extremely cynical view of a real-life future where we, at best, cooperate with “the machines” or, at worst, are seen by rapidly learning AI as redundant or as an obstacle to be removed.

Now, before you think I am a relic of the past, completely against Generative AI, I must admit that the image attached to this post was AI-generated. And I played with ChatGPT right after its launch to generate a bare-bones Ansible playbook to serve as a starting point for some new automation, especially after spending five years almost exclusively performing configuration management at scale with Puppet. It’s easy, it’s convenient, it’s fun. I get it!
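
To give a sense of what I mean by “bare-bones,” the result looked something along the lines of the sketch below. This is purely illustrative; the nginx task and the host group name are stand-ins, not the actual output I got back.

```yaml
# Illustrative only: a minimal, bare-bones playbook of the sort a quick
# ChatGPT prompt might produce. A starting point, not production-ready code.
- name: Install and start a web server
  hosts: webservers        # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```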

My company’s security group has since forbidden all AI-generated code and is monitoring traffic to those sites. With the convenience and ease of access to generated code comes the concern that a developer may put poorly secured code into a production environment. Worse still, if a developer is rushing to meet a deadline, or is simply having a lazy day, are they taking the time to understand the code they are implementing?

Jeff Goldblum’s character in “Jurassic Park” says something that applies remarkably well to Generative AI: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” We are seeing AI-generated art, music, books, and other written works. Will this eventually displace artists, musicians, and authors? I, for one, hope not.

And what about AI pretending to be human? There is at least one account detailing how an AI bot attempted to bypass a CAPTCHA by impersonating a human needing assistance. I don’t see AI as inherently evil, but by its very nature (can you say AI has a “nature”? I just did), it will seek out any means necessary to solve a problem. Are there guardrails? Sure, but you can’t always outwit unbridled ingenuity. 

I still believe that Generative AI can be used for good. But as Peter Parker’s Uncle Ben said, “With great power comes great responsibility.” I’ve also observed in my lifetime that with greater freedom to do good comes greater freedom to do evil. Generative AI is a tool, like a hammer or money. It’s not good or evil on its own. It’s how it’s used that determines the outcome. Which way will humanity go? Only time will tell.