For our industry, AI offers a powerful tool for things like workflow connectivity, analytics, and shop floor data analysis, as well as for developing marketing content. But there are legitimate fears around AI, as well. When it comes to addressing those fears, the responsibility remains an individual one.
Here on WhatTheyThink, we’ve talked about the benefits and challenges of implementing artificial intelligence in the printing industry, specifically in marketing and production workflow. But those issues are only a tiny fraction of a broader discussion about the benefits, challenges, and sometimes even outright dangers of this powerful technology.
For our industry, AI offers up a powerful tool for things like workflow connectivity, analytics, and shop floor data analysis. Its “faster than human” ability to process data means increased uptime, slashed inefficiency, decreased service costs, and more. On the marketing side, it speeds the development of content, whether a blog post, a sales letter, or web copy. Like any tool, you have to know how to use it properly, but once you figure it out, it becomes very useful in your toolbox.
But there are legitimate fears around AI, as well. On the content side, for example, how do you police authenticity? Enforce honesty? Determine truth? What about students using AI to generate essays for school? Politicians creating political ads using fake AI-generated images? Or scammers creating an AI-generated replica of a child’s voice to extort parents by tricking them into thinking their child has been kidnapped? These are very real dangers, and they are impacting us both professionally and personally right now.
Identifying the True from the False
As we, as a society, continue to wrestle with the complexities of AI, we are reminded of the weight of each person’s individual responsibility to learn to identify the true from the false.
It’s like U.S. Secret Service agents learning to identify counterfeit money. New agents don’t start their anti-counterfeiting training by analyzing a counterfeit bill. They start by studying an authentic piece of U.S. currency. What is it made of? What security measures establish its authenticity? Where are those anti-counterfeiting measures located on the bill? Once you know the true, only then can you start to recognize the false. So it is with AI-generated content. Someone could use AI to crank out all sorts of marketing content, but is that content accurate? It might sound authoritative, but when somebody with knowledge of the subject matter digs deeper, they may find that it’s full of errors.
We’ve looked at some of these dangers here on WhatTheyThink. Several months ago (back when the paper supply chain was a much bigger issue), for example, we tested AI’s ability to generate copy on alternative printing substrates for use when the preferred substrate was not available. When we asked ChatGPT to come up with a list of alternative substrates, among the alternatives it listed were digital printing and bamboo paper. Someone who does not understand the industry might gloss right over those two items without a thought. But those in the industry (those who recognize the “true”) understand that digital printing is a process, not a substrate, and that bamboo paper, while a substrate, is appropriate only for very short runs and would be an alternative only under very limited circumstances. At least in its current state, ChatGPT was not able to make those distinctions.
In a separate article, we asked ChatGPT to come up with lists of research studies and case studies on the effectiveness of floor graphics for increasing retail sales. ChatGPT did, in fact, generate list after list of powerful examples, and when asked to supply the original citations, ChatGPT did so. But when we checked those sources, none of them existed. Not one. Every citation had been fabricated by the algorithm, just like the studies themselves.
Taking on a Life of Its Own
These are unintentional inaccuracies. In a recent interview, Geoffrey Hinton, former Google scientist, multi-book author, and widely considered the “godfather of artificial intelligence,” raised a very different concern. What happens if (and when) AI intentionally deceives? Humans often deceive one another (and themselves), and artificial intelligence is patterned on human speech and thought. Why would artificial intelligence not start to deceive, as well?
There was a recent article in the New York Times in which the author decided to have an extended conversation with the AI-powered Bing chatbot to see what happened. The conversation started out innocently enough, but somewhere along the way things took a dark turn. Suddenly, the chatbot turned on the writer, trying to convince him that he was unhappy in his marriage, that he was really in love with the chatbot, and that he should leave his wife and be with the chatbot instead. The incident left the writer “deeply unsettled,” and for good reason. (You can access the entire conversation between the writer and the chatbot here.)
When asked how to avoid these types of deceptions, Hinton indicated that these algorithms are too difficult to regulate once they are out of the box. You would essentially have to anticipate every deceptive permutation and come up with a hedge against each one. It would be far better to design ethics into the algorithm from the beginning.
“If you’ve ever written a computer program, you know that if you’ve got a program that’s trained to do the wrong thing and you’re trying to do the right thing by putting guardrails around it, it’s a losing proposition,” he says. “You have to think of every way in which things might go wrong. It’s much better to start with ethical principles and say, ‘You’re always going to follow these principles.’”
(Read the transcript of this powerful interview here, starting at 10:20:09.)
Right now, we’re not there. We are only at the beginning of thinking about regulating and reining in this powerful technology, with all of its benefits and dangers. Therefore, if we want AI to act ethically, whether as a society, an industry, or individuals, that leaves us with one option: Each of us, individually, must choose the ethical path at the outset.
"avoid it" - Google News
June 21, 2023 at 11:28AM
https://ift.tt/lB5WKsM
The Darker Side of AI: It's Up to Us to Avoid It - WhatTheyThink
"avoid it" - Google News
https://ift.tt/Xjc6rzF
https://ift.tt/P4KmInY
No comments:
Post a Comment