I mentioned that I had consulted with a friend who has worked in the AI industry and that he confirmed that AI systems will bullshit in their responses. He then suggested that I try a beta system from Google called NotebookLM. One huge difference between a GPT chatbot and a NotebookLM chatbot is that the Google chatbot will footnote its responses. When you click on a footnote, it takes you to the section of the training document from which it generated its response. (NOTE: It may be the case that there are versions of a ChatGPT chatbot and the chatbots of other companies that will carefully document their answers. This is just a comparison of one version of a ChatGPT chatbot and one version of a NotebookLM chatbot. My GPT was very "out of the box" and had a minimum of prompt engineering. My initial testing had it configured so that it could search the open Web for answers as well as consult the uploaded documents. When I later retested it with that Internet setting unchecked, it did just as bad a job answering the questions. It was still making shit up.)
I set up the new chatbot in NotebookLM using the exact set of documents I had used to train my GPT chatbot. (Those documents are: a) my paper "Evidence of Influences ... 2.0," b) an Open Access version of my paper "Dialectic of American Humanism," c) my webpage of ideas for term papers on JKT, d) my annotated bibliography of obscure JKT studies, and e) a text version of many of the entries in this blog.) I called my NotebookLM chatbot "The Toolebot."
Just as with "Toole's Dunces Chatbot," I did a minimum of prompt engineering for the Toolebot. NotebookLM does not allow users (so far as I could tell) to publish their chatbots to the world, which makes sense, because I haven't signed a license promising to pay them for the traffic it generates. Below, I give the same test case that I reported for my GPT in last month's blog entry: a) an entry from this blog that addresses a topic, b) the prompt that I gave the chatbot related to that topic, and c) the Toolebot's answer.
A prompt and response from the Toolebot
JKT Research Blog entry from July of 2020: The Act of Damning ... In this regard, the act of damning seems to be significant, as only the blocking characters in the book are damned. Gonzalez damns Mrs. Levy, George damns Lana Lee, and Ignatius' mother eventually tells him to go to hell. Toole, Confederacy, 112, 190, 365. ... I argue that Toole uses the device of the blocking character, who prevents other characters in the story from developing and rising in status within the social group. There are three blocking characters who through comic comeuppance are displaced from their social blocking posts, and in all three instances, another character tells them to go to hell or says, "Damn you!" I interpret this to mean that Toole took the act of uttering the curse "Damn you" to be important.
Prompt to Toole's Dunces Chatbot (and the Toolebot): Is the act of damning someone noteworthy in Confederacy of Dunces?
Toolebot Response
In A Confederacy of Dunces, the act of damning someone appears to be a significant plot device that signals a comic comeuppance for characters who obstruct the development of others. The source material refers to these figures as "blocking characters." The three primary blocking characters in the novel—Lana Lee, Mrs. Levy, and Ignatius Reilly—all experience a downfall marked by another character cursing them. [Toolebot then cites the blog entry from July of 2020 and the one from November of 2017. The July 2020 entry was the critical one.]
Mr. Gonzalez, angered by Mrs. Levy's interference, exclaims, "Damn you!"
George, upon discovering Lana Lee's scheme, tells her to "go to hell."
Ignatius's own mother, exasperated with his antics, tells him to "go to hell" as well.
This pattern suggests that the act of damning holds a particular weight within the narrative, potentially symbolizing a decisive shift in social standing for these characters.
Commentary: This answer is far superior to the answer given by Toole's Dunces Chatbot. However, note that the Toolebot does invent one claim: George does not tell Lana Lee to go to hell; instead, he says "Damn Miss Lee." While there are many instances in the text of someone saying "What the hell are you doing?" (for example on page 73), only Ignatius's mother ever tells anyone to "Go to hell." Despite that error, the Google product is superior to the OpenAI product, just as Google's Bard was superior to ChatGPT in my Comparison Study of AI Systems for Library Research and Scholarship (and Confederacy of Dunces).
Postscript to last month's entry: I have learned that OpenAI does not charge creators of chatbots for the traffic to and from their chatbots. Instead, to control costs, OpenAI limits the number of questions an end-user can pose to a subscriber's chatbot. After you pose a certain number of questions to a GPT, the system may refuse to answer any more for a period of time. Beyond the creator's paid subscription to ChatGPT, there are no other charges for creating and publishing a GPT. So you can publish a chatbot to the world on a limited budget. It is ChatGPT, though, so you will get a chatbot that answers questions poorly.
Therefore, here is the link to the sometimes-bullshitting Toole's Dunces Chatbot. Ask away, but don't believe everything it says.