Last month, I posted a short short story I called “Customer Complaint” on this blog. A few days later, I was searching for something unrelated when a critique of the story popped up spontaneously—generated by the search engine’s AI program. I’m not sure why. The AI program told me it liked the story, even calling one or two elements of it “comedy gold.” I don’t know what measure it used to evaluate the story, and because it is AI, and on my computer, I wondered whether it gave me its honest opinion (if AI can have an opinion), or whether it was just telling me what it thought I would like to hear.
I don’t trust AI yet. Part of that is my native skepticism of all things mechanical—any office worker knows that copiers, for example, cannot be trusted. Never let the photocopier know you are working against a deadline; doing so pushes the odds of a serious malfunction bringing your work to a screeching halt past 90 percent. Part of my mistrust stems from understanding a bit about how AI works, while another part stems from not knowing enough about how it works. And as a writer, I have major concerns about how you can keep AI from plagiarizing your work.
The AI programs that have burst most spectacularly onto the world stage are the large language models (LLMs) like ChatGPT, Gemini, and Copilot. But LLMs are more like probability engines than basic search engines. They look at your question, sort through the many ways other people have answered similar questions in the past, and put together the response that is most likely to be relevant. That doesn’t mean an LLM will come up with the right answer. The legal profession, for example, is inundated with stories of lawyers who used something like ChatGPT to write a brief and ended up submitting one that looked good but cited non-existent cases. The term “hallucinated cases” describes this illusory case law. To put it mildly, judges are not amused when lawyers submit such briefs, and the sanctions for doing so are becoming increasingly severe. One lawyer I know uses a ChatGPT-type platform as an aid, not as a substitute for doing his own research. He tells an amusing story about arguing with the AI program over its hallucinated cases. When he told the program it was hallucinating cases, it “shouted” back at him in all caps: “I AM NOT HALLUCINATING CASES.” (But it was.)
The search for the most relevant answer also has another side effect. The one answer an LLM is not likely to give is “I don’t know.” Its programming insists that it provide an answer, and it will do exactly that, regardless of whether the answer provided is correct.
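The "probability engine" idea above can be sketched in a few lines of Python. This is only a toy illustration: a real LLM scores hundreds of thousands of tokens with a neural network, while here a tiny hand-made lookup table stands in for the model, and every phrase and probability in it is invented for the example.

```python
# Toy sketch of an LLM as a "probability engine."
# The table maps a context to candidate continuations with made-up
# probabilities; a real model computes these scores, it doesn't look them up.

def next_words(context, table):
    """Return the highest-probability continuation for a context.

    Note there is no "I don't know" branch: if the context is unfamiliar,
    the function still returns *something*, which mirrors how a
    confident-sounding wrong answer (a "hallucination") can come out.
    """
    candidates = table.get(context, {"[plausible-sounding guess]": 1.0})
    return max(candidates, key=candidates.get)

# Hypothetical probabilities "learned" from training text.
TABLE = {
    "the capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03},
    "the brief cited the case": {"Smith v. Jones": 0.40, "Doe v. Roe": 0.35},
}

print(next_words("the capital of France is", TABLE))   # Paris
print(next_words("a question it never saw", TABLE))    # [plausible-sounding guess]
```

The second call is the hallucination problem in miniature: asked something outside its table, the sketch answers anyway rather than admitting ignorance.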
The part of the technology I don’t understand is what allows AI to have conversations with people. We recently upgraded our Amazon Alexa devices to Alexa Plus, which is AI-based. (We did that to stop fighting with the bedroom Alexa over the differences between the phrase “turn on the bedroom lights” and “turn on bedroom lights.”) I have had two tentative conversations with Alexa Plus where I was asking her/it questions with no right or wrong answers—questions about what she likes or feels, just to see what would happen. (My daughter gave me some weird looks when she walked in on those.) I felt as if I were talking to a person.
I use ProWritingAid to help me edit my work. It also has an AI option, but for now, I leave that feature alone. Still, the availability of AI writing aids is growing because AI can be useful, and that usefulness will only increase as the technology develops. If you can trust the AI program you use to keep your work confidential rather than adding it to its repertoire of training material, it could be useful in proofreading. It also might be helpful in generating ideas on how to promote your work and in providing research sources you may not have considered.
Are there any ways you use AI to help you with your writing efforts, whether it is in proofreading, editing, promotion, marketing, or something else? If you have tried AI, what do you think of it? What is your general impression of the technology?
What people call "AI" includes a gazillion different categories. Some, like spell check, I use all the time, without thinking. Others, like the grammar tools of ProWritingAid, I use when I choose to have it evaluate something for me--usually grammar, but sometimes overused words or other things. I use its results as a guide. It hates split infinitives. I'm often fine with them, especially in dialogue.
And more recently, I've been using LLMs to assist with research, to bat ideas around, and to get detailed instructions on how to fix something on my computer. And I have my salt block handy, because those suckers lie (the three-letter word for hallucinate, and one I know without spellcheck) shamelessly. When caught, they apologize, admit one of their rules was to fess up if they didn't know something, and then blatantly lie the next time.
They're like smart psychopaths.
RE: "They're like smart psychopaths"
Yes, they're digital psychopaths mirroring the ruling psychopaths who promoted their creation in the first place!
Everyone SHOULD of course get what AI is REALLY all about but most people CHOOSE not to want to understand it...
Like with every criminal inhumane self-concerned agenda of theirs, the psychopaths-in-control sell and propagandize AI to the "awake" public with total lies such as AI being the benign means to connect, unite, transform, benefit, and save humanity.
The 2 major OFFICIAL deceptive fake FEAR-MONGERING narratives or phony pretexts (ie, lies, propaganda) that nearly everyone, including "alternative news" sources, has been spreading are (1) that the TRULY big threat is that AI just creates utter chaos in society and that it might achieve control over humans (therefore it must be regulated, ie monopolized by the typical criminal governments); and (2) that we, the US, have to invest heavily in AI technological development so as to stay ahead of other nations, such as China (https://archive.is/pBzAt).
The TRUE narrative (ie empirical reality) virtually no one talks about or spreads is that the TRULY big threat with AI is that AI allows the governing psychopaths-in-power to materialize their ultimate wet dream to control and enslave everyone and everything on the whole planet, a process that's long been ongoing in front of everyone's "awake" nose .... https://www.rolf-hefti.com/covid-19-coronavirus.html
The proof is in the pudding... ask yourself, "how is the hacking of the planet going so far? Has it increased or crushed personal freedom?"
"AI responds according to the “rules” created by the programmers who are in turn owned by the people who pay their salaries. This is precisely why Globalists want an AI controlled society- rules for serfs, exceptions for the aristocracy." ---Unknown
"Almost all AI systems today learn when and what their human designers or users want." ---Ali Minai, Ph.D., American Professor of Computer Science, 2023
“Who masters those technologies [=artificial intelligence (AI), chatbots, and digital identities] —in some way— will be the master of the world.” --- Klaus Schwab, at the World Government Summit in Dubai, 2023
“COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance.” --- Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum [https://archive.md/vrZGf]
"The whole idea that humans have this soul, or spirit, or free will ... that's over." --- Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum [https://archive.md/vrZGf]
I love technology, but I'm on the fence about AI.
It's a tool. Like any other tool, it can be used or misused. I find it can cut through a lot of research to answer a specific question, but I always doublecheck. It will make stuff up if it doesn't know the answer. But it will also sometimes tell me "I can't answer that." Definitely I use spellcheck and thesaurus aspects all the time. And I do have objections to its use of copyrighted materials without permission.
I have to admit I don't see the difference between the phrases “turn on the bedroom lights” and “turn on bedroom lights” myself.
I use the Word spell and grammar check "features" and I've noticed my emails now have a summary header which is annoying. Otherwise, I avoid AI.
I don't use AI for writing; however, I've discovered it may be of some use for coming up with comps, which publishers seem to want. Not that I don't doublecheck the suggestions it gives me.
AI has also provided me with some good laughs. When using Zoom for meetings, AI follows up by providing a summary of what was discussed. Handy for the person taking minutes, but also painfully wrong at times.
I agree with all of the above. I would never use AI to create or purposely read any fictional works created by AI. But I use the tools for research (always double checking) and for proofing.
Proofing for me, and, to the extent that Google is AI, as a research tool. Alexa scares the bejeebers out of me, and I refuse to have it in my house. I'm of the first-run generation of 2001: A Space Odyssey. I remember HAL, and I wouldn't take the chance. What happened to Keir Dullea? Hmm, have to Google that.