The Dangers of AI?

Authored By: Mike Sonneveldt


Google Gemini AI goes woke

Last week, we watched Google scramble to do damage control on a story that blew up the internet. Users of Google's AI Gemini flooded X.com (formerly known as Twitter) with images of female and Asian founding fathers, black Greek soldiers, and even a stoic black George Washington.

Most of the posts on X.com mocked Google's AI project and pointed out the obvious bias flowing from the program. However, the episode raised a larger question: can artificial intelligence be used in a productive, free-market way without bringing destruction in Terminator-esque fashion?

In an email to staff, which leaked to Bloomberg, Alphabet CEO Sundar Pichai called the latest debacle "completely unacceptable." He said in the email:

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias – to be clear, that's completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We're already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale.

Our mission to organize the world's information and make it universally accessible and useful is sacrosanct. We've always sought to give users helpful, accurate, and unbiased information in our products. That's why people trust them. This has to be our approach for all our products, including our emerging AI products.

We'll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we've made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let's focus on what matters most: building helpful products that are deserving of our users' trust.

Pichai's wording may seem hopeful, but in reality, the foundational issue never gets addressed. Google trained the AI to act the way it did.

AI programs such as Google's Gemini work through two mechanisms. The first is the training data, drawn from massive sources such as Reddit and vast swaths of the web. The second is the programmers themselves, who set the parameters for how that data is used and what counts as an acceptable output. To achieve the output they wanted, the developers tuned the program to produce "diverse" images. Users of X.com made the world painfully aware of the baked-in programming when Gemini repeatedly refused to produce images of "white" anything. White doctor? That's not diverse and could be harmful. White family? Could reinforce harmful ethnic stereotypes. White person eating white bread with mayonnaise? Definitely harmful.

But when users asked Gemini to produce images of black doctors, black families, or strong black men, it had no issue filling all four panels with the desired ethnicity and sex of the request.

The episode of woke Google falling flat on its face may be funny to us, but the reality of where artificial intelligence is headed has some smiling and others crying. Many fear the dangers of AI.

 

The future of AI...found in the past

If you want a quick two-minute read on how people react to new technology, look no further than History.com and its write-up on the Luddites. Their story seems eerily familiar.

Toward the tail end of the Industrial Revolution, a group of artisan weavers and textile workers found themselves squeezed out of the economy. How? The financial burden of the Napoleonic Wars forced Britain to become more efficient in production and economics, which pushed out the more expensive artisans who had spent their whole lives mastering their craft.

The Luddites began destroying textile machines and attacking factories to slow the march of progress. The British government responded by making machine-breaking punishable by death. The army rounded up dozens, who were hanged or shipped off to Australia. A couple died in a confrontation with the military during an attack on a mill.

Today, we witness people attacking self-driving cars over fears of the dangers of AI. They believe Skynet is establishing itself and claim that AI is the sign of Satan.

 

The dangers of AI

The biggest danger of AI is the unknown. Nobody knows what will happen in 10-20 years. After all, we sit at the bottom of a paradigm shift. In business, the adoption of a new paradigm typically traces an S-curve, and a gap often exists between the end of one paradigm and the beginning of the next.

Technology is no different. A technology's capability rushes upward as more and more people work to advance it. After a while, progress either slows as development tops out, or a new technology is invented to take its place.

We may watch the same happen with AI. Imagine getting onto the roller coaster and putting down the lap bar. The coaster leaves the station and the massive hill looms ahead. Nerves start to dance as you catch the chain and start clicking your way up the hill. We as humanity find ourselves sitting in the coaster of artificial intelligence awaiting the steep uphill rise. Many people freak out because they fear the big drop on the other side. Some fear the height that the coaster rises to.

In AI, we are watching an interesting convergence of technology, programming, and even biology. Elon Musk's Neuralink embodies his vision of connecting human brains with technology to save humanity. Something deep inside many people quakes at the thought.

Others fear the loss of employment across the world. Goldman Sachs predicts that over 300 million jobs could be lost or diminished by the growth of AI.

 

What's the upside?

Job loss concerns plenty of people. However, there was a time when manual labor made the world turn. If you needed a ditch dug, dozens of men would grab their shovels and pickaxes to start the work. Railroads were built by hundreds of men sweating in the middle of nowhere for months at a time. Roads were laid down by the same type of men.

The invention of equipment such as the steam shovel drove these men out of work. However, we cannot look at our current generation and say, "Well, if we just had more ditches for men to dig…" We find plenty of available jobs in America and the number of jobs seems to always expand with the population. The invention of all kinds of technology never destroyed the working population. It merely forced a migration of workers from one field to another. The invention of the automobile did not kill off all the blacksmiths in the world. It opened up new opportunities for labor to repair engines, build autos on the assembly line, and design the frame in CAD. We modified our work skills with the shifting of the economy.

AI's introduction into the creative sphere may crush a lot of jobs or industries. While that may sound like a terrifying thing, we must remember that new industries, needs, and opportunities will develop alongside the paradigm shift of technology.

 

Can AI help my business?

AI may also help business owners streamline positions and perfect their products and marketing. Here's a perfect example:

Content for Self-Evident often requires a few pictures or images. The baseline approach would be to take them ourselves, which requires a ton of time, effort, know-how, and patience. If we wish to save time and money, we can use a stock image service. Some are free and provide passable images; others require monthly subscriptions but give unlimited downloads. Either option greatly reduces the effort and time required to get those pictures.

But what if we used AI? The AI generators available provide a nearly instantaneous product, plenty of options, and a refresh button to keep spinning the wheel and see what lands. That means the opportunity to get exactly what we want in almost no time and for little expense. Now compare that to commissioning a photographer to capture the perfect image, which could take weeks and cost hundreds of dollars for a single shot.

 

What are my thoughts on the dangers of AI?

I take a wait-and-see attitude. We all know the possibilities of danger with such a powerful tool in the wrong hands. But I also recognize the power of taking this technology and pursuing new and creative ways to do business, ministry, and personal development. To write off the technology in a Luddite-style attack is to ignore the powerful potential of something you cannot stop anyway.

In other words, the coaster is going up the hill whether you like it or not. Might as well get on and enjoy the ride.

Self-Evident Ministries
