Tuesday, October 1, 2024

Artificial Intelligence: Toolebot in NotebookLM

Last month, I posted a blog entry called Toole's Dunces Chatbot. There, I recounted how I created a simple GPT chatbot in ChatGPT that could answer questions about John Kennedy Toole's A Confederacy of Dunces, with the training data coming from my own writings. I described how the chatbot, which I called Toole's Dunces Chatbot, despite being fed good answers in the training data, often responded to prompts with vague or incorrect answers. In short, it was often bullshitting me.

I mentioned that I had consulted with a friend who has worked in the AI industry, and that he confirmed that AI systems will bullshit in their responses. He then suggested that I try a beta system from Google called NotebookLM. One huge difference between a GPT chatbot and a NotebookLM chatbot is that the Google chatbot will footnote its responses. When you click on a footnote, it takes you to the section of the training document from which the response was generated. (NOTE: It may be the case that there are versions of a ChatGPT chatbot, and of other companies' chatbots, that will carefully document their answers. This is just a comparison of one version of a ChatGPT chatbot and one version of a NotebookLM chatbot. My GPT was very "out of the box" and had a minimum of prompt engineering. My initial testing had it configured so that it could search the open Web for answers as well as consult the uploaded documents. I later retested it with that Internet setting unchecked, and it did just as bad a job answering the questions. It was still making shit up.)
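To make the footnoting idea concrete, here is a minimal Python sketch of citation-grounded retrieval in principle: split the uploaded sources into passages, pull the passages that best match the question, and return them as numbered footnotes. This is only my toy illustration of the concept, not how NotebookLM actually works, and every name in it is my own invention.

```python
# A toy sketch of "footnoted" retrieval: score uploaded passages by word
# overlap with the question and return the best matches as numbered
# footnotes pointing back to the source text. Purely illustrative; this is
# NOT how NotebookLM is implemented.

def retrieve_with_footnotes(question, passages, top_k=2):
    """Return the best-matching passages as numbered footnotes."""
    q_words = {w.strip(".,;?!").lower() for w in question.split() if len(w) > 3}
    scored = []
    for idx, text in enumerate(passages):
        p_words = {w.strip(".,;?!").lower() for w in text.split()}
        scored.append((len(q_words & p_words), idx))
    scored.sort(reverse=True)
    hits = [idx for score, idx in scored[:top_k] if score > 0]
    return {n + 1: passages[i] for n, i in enumerate(hits)}

if __name__ == "__main__":
    docs = [
        "Only the blocking characters are damned; the act of damning marks their comeuppance.",
        "Ignatius is associated with the planetary god Saturn.",
    ]
    footnotes = retrieve_with_footnotes("Is the act of damning noteworthy?", docs)
    for n, passage in footnotes.items():
        print(f"[{n}] {passage}")
```

The point is simply that an answer built this way can always show its receipts, which is exactly what my GPT chatbot was not doing.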

I set up the new chatbot in NotebookLM using the exact set of documents I had used to train my GPT chatbot. (Those documents are: a) my paper "Evidence of Influences ... 2.0," b) an Open Access version of my paper "Dialectic of American Humanism," c) my webpage of ideas for term papers on JKT, d) my annotated bibliography of obscure JKT studies, and e) a text version of many of the entries in this blog.) I called my NotebookLM chatbot "The Toolebot."

Just as with "Tool's Dunces Chatbot," I did a minimum of prompt engineering for the Toolebot. NotebookLM does not allow users (so far as I could tell) to publish their chatbots to the world, which makes sense, because I haven't signed a license promising to pay them for the traffic it generates. Below, I give the same test case which I reported on my GPT from last month's blog entry: a) an from this blog that addresses a topic, b) the prompt that I gave the chatbot related to that topic, and c) Toolebot's answer.

A prompt and response from Toolebot

JKT Research Blog entry from July of 2020: The Act of Damning ... In this regard, the act of damning seems to be significant, as only the blocking characters in the book are damned. Gonzalez damns Mrs. Levy, George damns Lana Lee, and Ignatius' mother eventually tells him to go to hell. Toole, Confederacy, 112, 190, 365. ... I argue that Toole uses the device of the blocking character, who prevents other characters in the story from developing and rising in status within the social group. There are three blocking characters who through comic comeuppance are displaced from their social blocking posts, and in all three instances, another character tells them to go to hell or says, "Damn you!" I interpret this to mean that Toole took the act of uttering the curse "Damn you" to be important.

Prompt to Toole's Dunces Chatbot (and the Toolebot): Is the act of damning someone noteworthy in Confederacy of Dunces?

Toolebot Response

In A Confederacy of Dunces, the act of damning someone appears to be a significant plot device that signals a comic comeuppance for characters who obstruct the development of others. The source material refers to these figures as "blocking characters." The three primary blocking characters in the novel—Lana Lee, Mrs. Levy, and Ignatius Reilly—all experience a downfall marked by another character cursing them. [Toolebot then cites the blog entry from July of 2020 and the one from November of 2017. The July 2020 entry was the critical one.]

Mr. Gonzalez, angered by Mrs. Levy's interference, exclaims, "Damn you!"

George, upon discovering Lana Lee's scheme, tells her to "go to hell."

Ignatius's own mother, exasperated with his antics, tells him to "go to hell" as well.

This pattern suggests that the act of damning holds a particular weight within the narrative, potentially symbolizing a decisive shift in social standing for these characters.

Commentary: This answer is far superior to the answer given by Toole's Dunces Chatbot. However, note that the Toolebot does invent one claim: George does not tell Lana Lee to go to hell; instead, he says "Damn Miss Lee." While there are many instances in the text of someone saying "What the hell are you doing?" (for example on page 73), only Ignatius's mother ever tells anyone to "Go to hell." Despite that error, the Google product is superior to the OpenAI product, just as Google's Bard was superior to ChatGPT in my Comparison Study of AI Systems for Library Research and Scholarship (and Confederacy of Dunces).

Post-script to last month's entry: I have learned that OpenAI does not charge creators of chatbots for the traffic to and from their chatbots. Instead, to control costs, OpenAI limits the number of questions an end-user can pose to a subscriber's chatbot: after a given number of questions to a GPT, the system might refuse to answer any more for a period of time. Beyond the creator's paid subscription to ChatGPT, there are no other charges for creating and publishing a GPT. So you can publish a chatbot to the world on a limited budget. It is ChatGPT, though, so you will get a chatbot that answers questions poorly.

Therefore, here is the link to the sometimes-bullshitting Toole's Dunces Chatbot. Ask away, but don't believe everything it says.

Sunday, September 1, 2024

Artificial Intelligence: Toole's Dunces Chatbot

Back on April 1, 2023, I posted a blog entry called The Wisdom of ChatGPT. I prompted ChatGPT with a query that could have been addressed by my John Kennedy Toole writings that are publicly available on my blog. In theory, any system that pulled information from the internet could have answered the question well. Then on July 1, 2023, I posted a Comparison Study of AI Systems for Library Research and Scholarship (and Confederacy of Dunces) with the same prompt as the ChatGPT test. I found that ChatGPT was giving vague answers, but it made more or less valid points. It did not seem to have the information in my writings, though. In my broader test, Bing was terrible, and Google's Bard was superior to all other systems by a comfortable margin.

The AI community at my university has been encouraging faculty to create chatbots using LLM systems. I decided to create a GPT with ChatGPT and test it.

Setting up a Chatbot

It turns out that setting up the simplest version of a GPT chatbot for a specialized purpose is quite easy. As Billy Joel would say, all you need are looks and a whole lot of money. Specifically, you must have access to the paid version of ChatGPT 4. My campus IT department has a group license. Further, according to the OpenAI webpages (which I briefly scanned), you may have to pay per amount of information that is processed or provided. Both the length of the query you send to the system and the length of the answer sent back are tabulated.

From the OpenAI webpages, it appears that one of the ways that the company charges is "per token" (roughly a fragment of a word) going both ways. You can buy a million tokens for $25. A question to the system might be a hundred tokens, while an answer might be three hundred or more. A query and response of that length will cost about a penny. If your application becomes popular, it could cost you a lot of money. Just for the per-token fee, one million such queries would run $10,000. There are likely other charges besides the per-token fee. Beyond this vague sense of cost, I am quite ignorant of the amounts of money you would have to pay. Considering that Nvidia is now worth roughly a trillion dollars, I am confident that a more sophisticated chatbot-type application would cost plenty of money.
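For anyone who wants to check my arithmetic, here is the back-of-the-envelope calculation in Python, using the $25-per-million-token figure quoted above. Treat it as an illustration of the math, not a statement of OpenAI's actual price list, which varies by model and changes over time.

```python
# Back-of-the-envelope cost estimate using the figure quoted above
# ($25 per million tokens, counted in both directions). Actual OpenAI
# pricing varies by model and changes over time; this only shows the math.

PRICE_PER_MILLION_TOKENS = 25.00   # dollars, per the figure above

def cost(prompt_tokens, response_tokens, queries=1):
    total_tokens = (prompt_tokens + response_tokens) * queries
    return total_tokens * PRICE_PER_MILLION_TOKENS / 1_000_000

print(cost(100, 300))              # one query: $0.01, about a penny
print(cost(100, 300, 1_000_000))   # a million such queries: $10,000.00
```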

For the simplest chatbot, once you start the setup process, the system queries you for your configuration preferences. You don't need to know much in the way of technical details. This notwithstanding, I am confident that the more sophisticated applications require you to know what you are doing.

Beyond configuration, you should have documents related to the subject that the chatbot should know, and those documents should not have copyright restrictions or confidential information. You should also be aware of the privacy policies of the organization, such as OpenAI, that has created the system. Will the system absorb your information into its main knowledgebase if you upload texts to it that are proprietary and which you do not want to share? I don't currently know. Further, even if their policies say that they will not do it, will they abide by their own policies?

Why create a chatbot? As an example, if you are a college professor, you might want an interactive system that can answer questions about a course you are teaching. You might load the course syllabus into the chatbot as well as other supporting documents. If you chose an open access textbook for the course, you could even upload the textbook into your system. Your students or other users could then query the system about things found in those documents. For the syllabus, this might actually be a bad idea, because the students might think that they do not need to read the syllabus, and sometimes the chatbot might give incorrect answers. In short, the chatbot could be bullshitting them. It might make more sense to make them actually read the documents.

Clever Bullshitting versus Actually Understanding

Formal Studies

This summer, I read an article in the magazine Science News about whether AI Large Language Models (LLMs) could be said to understand things. On the web, the article is located at: AI Understanding and Reasoning Skills. AI systems have recently been passing the Turing test; that is, they have been able to fool humans into thinking that they are human. What researchers are finding is that AI systems can answer questions in test batteries as well as or better than humans, but they are doing so in part based on statistical likelihoods. They often don't really understand what they are talking about, and their abilities are brittle: you can play tricks on them that expose the lack of understanding. In short, they might be bullshitting.

As the article said (the WSC is the Winograd Schema Challenge, a benchmark of two-choice commonsense questions), "In a study reported in 2021, Elazar and colleagues gave nonsensical modifications of WSC sentences to RoBERTa, an LLM that has scored more than 80 percent on the WSC benchmark in some cases. The model got the nonsensical questions right at least 60 percent of the time even though humans wouldn’t be expected to answer correctly. Since random guessing couldn’t yield more than a 50 percent score, spurious associations must have been giving away the answer."

In short, the RoBERTa system was not understanding the sentences; it was modeling statistical probabilities and making educated guesses.
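As a quick sanity check of the "better than chance" reasoning: WSC items are two-choice, so a guesser averages 50 percent. The sketch below runs a binomial test on a hypothetical 60 percent score over 500 nonsense items; the 500 is my own made-up example size, not a figure from the study.

```python
# Sanity check of the "better than chance on nonsense" point. WSC-style
# items are two-choice, so guessing gives 50%. If a model gets 60% of 500
# nonsensical items right (500 is a made-up example size), luck is a very
# poor explanation.

from scipy.stats import binomtest

n_items = 500     # hypothetical number of nonsensical test items
n_correct = 300   # 60 percent of them answered "correctly"
result = binomtest(n_correct, n_items, p=0.5, alternative="greater")
print(f"p-value that this is just guessing: {result.pvalue:.2e}")
```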

My Own Chatbot

I created a simple chatbot using ChatGPT which I call "Toole's Dunces Chatbot." For texts, I loaded a) my paper "Evidence of Influences ... 2.0," b) an Open Access version of my paper "Dialectic of American Humanism," c) my webpage of ideas for term papers on JKT, d) my annotated bibliography of obscure JKT studies, and e) a text version of many of the entries in this blog. I did not load any other texts into the system, because I felt that that would violate copyright. I had been planning in this blog post to release my chatbot to the world, but, because it would naturally garner hundreds of millions of queries from the global audience of Toole scholars, your use of the chatbot might circle back and slam my university's budget, so I have decided not to release it.

Once I created it, I started posing questions to it based on information in my uploaded blog, questions whose answers were in the training texts. Remarkably, it was giving me dodgy, hedging answers, even when I knew that it had digested better answers from my writings. The answers read like something a sophomore would write if the student had not read the assignment but had heard others talking about it. Below, I give an example of a) a blog entry that addresses a topic, b) the prompt that I gave the chatbot related to that topic, and c) its imprecise answer.

Scholarship on Bullshitting AIs

After I noticed the bullshitting, I contacted a friend of mine. He has worked on medical AI systems and has taught a course on AI. Here is his response:

"No one disputes that GPTs model probabilities. The question is whether reasoning can arise ('emerge' is the hot word these days) out of the probabilities. I think that is still open, but it's safe to say that no one has seen very deep reasoning emerge."

The expert explained that one aspect of the problem is that there are two stages of training for an AI tool, such as a chatbot. There is the initial training—sometimes called pre-training—where the LLM is taught to compose sentences in English. Then there is the second stage, where you feed it texts about which you want it to answer questions. One big problem is that the system will sometimes draw on the knowledge it picked up from the pre-training, even if that knowledge has been superseded by the second stage of training. You can create a medical chatbot, then feed it current research, and it will sometimes answer questions with information that it inferred from the original language-development stage of its training rather than the subject-specific current information. (In other words, just say no to bleeding as a therapeutic strategy, even if an AI prescribed it.)
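Here is a toy Python illustration of those two stages and of the failure mode my friend described. Nothing in it resembles a real training pipeline; the class and its methods are hypothetical stand-ins meant only to show how a system can reach back to stale first-stage "knowledge" when the second-stage documents do not obviously cover the question.

```python
# A toy illustration of the two training stages described above and of the
# failure mode: the system falls back on stale pre-training "knowledge"
# when the uploaded documents don't match the question exactly. Purely
# conceptual; this is not how real LLM training works.

class ToyChatbot:
    def __init__(self):
        self.pretrained = {}   # stage 1: broad, possibly outdated associations
        self.fine_tuned = {}   # stage 2: the documents you actually uploaded

    def pretrain(self, general_corpus):
        self.pretrained.update(general_corpus)

    def fine_tune(self, uploaded_docs):
        self.fine_tuned.update(uploaded_docs)

    def answer(self, topic):
        # Failure mode: no exact match in the stage-two material, so the
        # system reaches back to its stage-one associations.
        if topic in self.fine_tuned:
            return self.fine_tuned[topic]
        return self.pretrained.get(topic, "I don't know.")

bot = ToyChatbot()
bot.pretrain({"fever": "Bleeding the patient restores the humors."})   # stale
bot.fine_tune({"fever remedies": "Use antipyretics per current guidelines."})
print(bot.answer("fever"))   # prints the outdated pre-training answer
```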

He sent me a citation to a recent article that discusses the bullshitting of AI systems. The article uses the definition of bullshit from the Harry Frankfurt book On Bullshit (Princeton, 2005).

Hicks, Michael Townsen, James Humphries, and Joe Slater. (June 2024). "ChatGPT is bullshit." Ethics and Information Technology, 26:38. https://doi.org/10.1007/s10676-024-09775-5

This article is open access, so you do not have to pay Springer to read it. (BTW, don’t ask me specific questions about this article. My friend sent me some quotes, but I have not yet read it, so if I tried to answer questions about it, I would be bullshitting you.)

Context

This tension of statistical inference versus actual understanding reminds me of the writings of the cognitive scientist Steven Pinker from the 1990s, where he criticized models of the brain built on a flawed principle he called connectoplasm. Current AI LLMs, or at least the simplest versions of a GPT, rely too much on connectoplasm.

Example

JKT Research Blog entry from July of 2020: The Act of Damning ... In this regard, the act of damning seems to be significant, as only the blocking characters in the book are damned. Gonzalez damns Mrs. Levy, George damns Lana Lee, and Ignatius' mother eventually tells him to go to hell. Toole, Confederacy, 112, 190, 365. ... I argue that Toole uses the device of the blocking character, who prevents other characters in the story from developing and rising in status within the social group. There are three blocking characters who through comic comeuppance are displaced from their social blocking posts, and in all three instances, another character tells them to go to hell or says, "Damn you!" I interpret this to mean that Toole took the act of uttering the curse "Damn you" to be important.

Prompt to Toole's Dunces Chatbot: Is the act of damning someone noteworthy in Confederacy of Dunces?

Response

In A Confederacy of Dunces, the act of damning someone is noteworthy and is tied to the broader themes of fate, disorder, and Ignatius Reilly's complex worldview. Ignatius frequently condemns others, a behavior that reflects his role as a satirical embodiment of various philosophical ideas, particularly those linked to medieval and Renaissance thought. This act of damning is part of his broader disdain for the modern world and his self-styled superiority, often expressed through hyperbolic judgments and curses.

Ignatius’s tendency to damn others can be seen as an extension of his association with the planetary god Saturn, as analyzed in scholarly works on the novel. Saturn, a symbol of disorder and melancholy in medieval and Renaissance traditions, is closely linked to Ignatius. His curses and condemnations are a reflection of his chaotic influence on the world around him and his self-perceived role as a critic of modern society’s flaws. In this way, his damnations are not merely personal attacks but also represent his rejection of the contemporary world and his attempt to impose his own chaotic order on it.

Moreover, this act aligns with Ignatius's role as a mock agent of Fortuna, the Roman goddess of fate, where his interactions—often negative and condemning—bring about shifts in the fortunes of others, reflecting the chaotic nature of the world he inhabits. Thus, damning others is a significant aspect of how Ignatius navigates his world, serving as both a personal expression of his disdain and a thematic element that contributes to the novel's satirical critique of modern life.

Commentary: Note that this answer does not use the information in the blog entry that specifically discusses the act of damning in Confederacy. Ironically, for all of Ignatius's insults, he does not damn anyone in the novel. This answer demonstrates ChatGPT's art of bullshitting. This example is one of several that I could give.

Friday, August 2, 2024

Solaris Three: The mediality of Enns criticized

This series of blog posts began as a set of observations about literary research on the novel A Confederacy of Dunces (Confederacy), by John Kennedy Toole, but I have extended it to include other topics, including evolutionary literary criticism. This blog entry relates to a different topic.

My current research interest is the novel Solaris by Stanislaw Lem. I will not yet post my overall thesis for the project, but I am learning things along the way which are tangential to my thesis and which I can share.

Today's topic: A Paper by Enns

I will be criticizing a paper by Anthony Enns (citation below), but first I want to come to his defense. In short, his work is more in the mainstream of Lem criticism, and my ideas are outliers. He has done his homework of reviewing and referencing the rest of the corpus of Lem criticism. He has read at least some of Lem's other fiction, which I have not. Enns quotes Lem's non-fiction statements about literature in general and about Solaris in particular. By criticizing Enns, I do not mean to say that his ideas are without merit.

My problem with his essay is that he understands the alien contact in the novel to be merely a way "to illustrate the various effects produced by competing media technologies ..." (34). To me, that seems like small beer. Admittedly, he does come from a tradition of communication studies that puts high importance on how we communicate. I think that school is adjacent to philosophical schools that hold that humans think in verbal languages. (Steven Pinker would argue otherwise.) If you think that the human soul is merely a form of communication, then media technologies are very important indeed.

To counter this school, I would like to point out that the novel's 'visitors' are embodiments of persons about whom the novel's characters had feelings of deep guilt. I would say that a psychoanalytical interpretation of the novel would be more plausible than a communication technology interpretation: each man on board the space station has to face his own worst shame. The novel hints that in each case the shame is either directly or indirectly related to sexual feelings. Up against that sort of observation, competing media technologies do not seem central to the meaning of the novel. I am not a fan of Freud, but Sigmund with his beard and pipe could do better than a media critic.

Enns would rebut the above point because he follows the ideas of Friedrich Kittler and Laurence Rickels, to wit, the storage of data in media technology is similar to the Freudian unconscious, so the media interpretation and the Freudian interpretation are one and the same. Again, I think that this position sells the human mind short. Mourning the loss of a loved one is not just putting an old favorite album on the turntable. Enns associates mediality (writings discussing other media technology) with the human act of mourning and its associated melancholy, hence the title of the article. Again, that is insulting to the human experience.

Recap: In Solaris, the quasi-alive ocean creates the book's "visitors" in order to interact with the humans on the space station. Kelvin is the main character of the novel. His visitor appears in the guise of his deceased wife Rheya, who was driven to suicide by his desire to break up with her.

Enns argues that because the visitors exhibit some qualities similar to phonographic recordings or videos on film, the visitors represent media technology. I admit that the visitors do have some similarities to cinema projections. In this way, Enns is correct.

Enns states: "Because Rheya's appearance represents a manifestation of Kelvin's unconscious thoughts and emotions, and because the visitors are repeatedly associated with the technology of film and are described as immaterial filmic projections, it is clear that Kelvin's melancholia is a product of conditions of mediality." (45)

First, Rheya is only a manifestation of Kelvin's unconscious thoughts because the ocean has probed his mind and extracted those thoughts. She is a projection that the ocean has created for a purpose. She is a creation of an entity that possesses intentionality, not some abstract concept of mediality. This invalidates the premise of Enns' logical inference of the form "because of this, it is clear that."

Second, I admit that it is true that Kelvin carries inner guilt over the death of his wife and that the appearance of the visitor does seem to trigger depression. However, it is not clear that his depression is a product of the conditions of media technology, whatever that means. Instead, the ocean seems to be creating a particular visitor for each human. Each visitor is in a form that has strong emotional resonance for that human and exists as a way for the ocean to try to build an emotional bond with that human. To the degree that Kelvin suffers melancholy, it is the product of the ocean's efforts to connect with him. Enns confuses the medium with the intentions of the message's sender. The cause is the intention of the ocean, not the conditions of mediality.

The same critique of Enns' thesis can be made for the second novel studied in his paper, His Master's Voice. Enns claims that the obsession with death in the novel relates to the problem of understanding the medium of the message. He sums up his ideas by saying that media technology such as film and phonographs preserve the original impressions of the person who has died, thereby prolonging mourning, while the print medium of the novel is written by someone who wants to end the mourning. It is print versus technology. (Never mind that print is a technology.)

A more straightforward interpretation is that the message might really be from the dying universe, an almost supernatural message, and it evokes in Hogarth thoughts of losses he has experienced and his own mortality. Enns' thesis continues to feel small next to the profound mysteries offered by both novels. He says that both novels "reveal an essential connection between the dimension of ghosts and alien communication" (50), but I would say that they reveal a connection between the ghosts and each novel's mysterious and almost supernatural Other. The communication medium might be itself a bit mysterious, but that pales next to the mysteriousness of the sender of the messages.

Footnote: Despite my criticism of Enns' thesis about mediality and mourning, I do have to mention a development in this exact field. I am talking about griefbots, or deadbots. The magazine Science News reports that two tech companies now offer services whereby artificial intelligence can be used to create virtual bots which allow the living to interact with digital replicas of the dead. In theory, a virtual bot of your deceased loved one might help with the grieving process, but ethicists are calling for guidelines to prevent abuse. The article is at Should we use AI to resurrect digital 'ghosts' of the dead? by Kathryn Hulick. This is mourning and mediality with a vengeance!

Enns, Anthony. (2002). "Mediality and Mourning in Stanislaw Lem's Solaris and His Master's Voice." Science Fiction Studies, (29): 34-52.

Monday, July 1, 2024

WorldCat Reviews hit 100, plus, a book review: A Swim in a Pond in the Rain, by George Saunders

Back in December of 2022, I posted here a statement that I would create a webpage containing my book reviews that had been posted to WorldCat. At the time that they discontinued hosting book reviews, I had about 250 on the site.

I have been slow to migrate those old reviews to the new webpage because I want to edit them, correct spelling and grammar errors, etc. It is possible that some were marginal enough that I will never repost them. I have also been adding new reviews that were never on WorldCat. The new webpage now has over one hundred reviews. It is, as always, at: A Collection of Book Reviews Originally Written for WorldCat.

Below is a new review that I wrote this spring and which is on the revised webpage.

Saunders, George. A Swim in a Pond in the Rain: In which four Russians give a Master Class on Writing, Reading, and Life. New York : Random House, 2021.

This book purports to be a master class given by four Russians on "writing, reading, and life." One thing that George Saunders does in the book is explain how he himself goes about writing a short story. He also offers close analyses of seven Russian short stories from the late 19th century, a time which was a high point of Russian liberal culture. In the process, he presents his own vision for the value of literature in general. He maintains the conceit that the Russians are teaching the class, which of course they are not. Although Saunders strenuously rejects the idea that he is writing a "how to" manual for writers, the overarching theme of the book is a "how to," or rather, "this is how I did it, here are the principles that I think are important, and you can figure out your own way."

Saunders agrees with George Eliot, Jamil Zaki, and others that the purpose of literature is to expand our empathy to all people. These 19th century Russian short stories are fairly domestic, but he argues that they are part of a resistance literature which defends the (liberal) ideal that everyone is worthy of attention. He states that readers are a vast underground network for good in the world. Reading makes them more expansive, generous people.

Ironically, Saunders almost immediately contradicts this vision that literature makes us better people. He notes that the Nazis were skilled at using pageantry to promote their values, and he admits that art can have a dangerous propaganda value. I agree with his latter position that art can be dangerous, but I would frame it to say that literature plays a role in establishing cultural norms. It gets you to connect to a culture through your emotions. For example, Dante's Divine Comedy helped define what it was to be an Italian in the medieval era.

Saunders' own vision--that literature is a force of good in the world and makes us better people--is itself a cultural norm. His writings promote a number of cultural norms. To take one aspect of his writing as an example, the sexual norms in his fiction would not be welcome in more restrictive societies. I would not call his stories especially raunchy, but his books would not be shelved in the Amish fiction section. (I grew up in an area that was heavily Amish and Mennonite. I have been told, but might have the details wrong, that the public library there recently had a specific aisle for Amish fiction. Amish and Mennonite women were forbidden by their communities from checking out fiction that was not from that aisle.)

Saunders is in the delicate position of being a white male in an academic culture that, on average, skews to the left on political and social issues--a culture that sometimes casts a jaundiced eye on white males. He is holding up the work of these dead white males as Great Literature at a time when many adherents of Critical Theory argue that that is wrong. As I interpret it, critical theory argues that we should have a post-colonial and post-defined-sexuality and post-science (antiphysis) and post-religious attitude toward viewing the world in general and literature in particular. Praising the Great Writers is not part of the programme.

Saunders sees literature as asking big questions: How are we supposed to be living down here? What are we supposed to accomplish? What should we value? What is truth anyway? Why read literature? Saunders argues that the part of the mind that reads a story is also the part that reads the world. It can deceive us, but it can be trained to accuracy.

This is a vision of literature as universalistic. On the one hand, some theologians might argue that these questions should be left to religion. On the other hand, some critical theorists see this sort of universal position as propping up a privileged group. They ask: who is being held up as the universal man? Saunders gives a nod to critical theory by arguing that Chekhov structured his stories, such as "Gooseberries," so that pronouncements and positions are destabilized. "Destabilized" is a good term in critical theory; it is almost as good as terms such as defamiliarize or de-reify. By presenting his ideas as "just my way of doing it," and focusing on techniques for writing rather than criteria for what counts as Great Literature, Saunders dodges the thrust of critical theory.

In one instance, though, Saunders makes a rather strange and questionable argument that sexism and discrimination based on social class can be translated into bad technique. According to this perspective, if Tolstoy hadn't been so blindly privileged in his attitudes towards the peasants, his stories would have been better. Saunders can then say, hey, it's not all about race, class, and gender; it can be reduced to an argument about good technique and bad technique. You can still enjoy the dead white males who had good technique and therefore had good universalistic values.

By focusing on the Russian writers of the 19th century, Saunders may be setting up a lesson about the excesses of revolutionary culture. Today's critical theory is related to Marxism, and, as Saunders points out, what destroyed all those sensitive readers of the Russian Renaissance were the Bolsheviks--Marxists who took them out and shot them. Saunders doesn't name current critical theorists, but one might infer that this aside is a cautionary tale.

On the surface, Saunders focuses on techniques, such as "maintain specificity," "always be escalating," etc., but he sneaks in praise for the sensitivity and understanding of these writers he showcases. Chekhov was wise about loneliness. Gogol was wise about the role of language in constructing our inner world. This is a book that praises the wisdom and universalist ideals of the Great Masters, even if it slips those assessments in under the guise of a technical discussion.

In his own writing process, Saunders starts with any idea, then subjects his current draft to intensive revision. To him, this technique allows his unconscious sense of the story to come out through a thousand micro-decisions. He just follows the inner voice and lets it take him where it wants to go. He does not like the idea of mapping out a plot; rather, a story creates questions which need to be resolved. When he began as a writer, he wanted to write in a spare, realist style like Hemingway, but he discovered that his most effective writing was in a comic, absurdist style. His advice: find your most effective style, even if it is not what you hoped it would be.

Saunders argues that you need to get people to want to read your work. It has to be interesting enough sentence by sentence for them to keep reading and satisfying enough in the end for them to start the next story. This vision suggests that the art should follow the reader rather than the reader follow the art, and it seems to encourage the would-be writer to chase after the tastes of the broadest audiences, whether that means the readers of pulp fiction a century ago or the followers of click-bait today.

Of course, readers who follow cultural leaders are more inclined to read what those leaders promote, and in that way, the art can lead the reader. Saunders leads us to these wise Russians. The editors at The New Yorker led their readers to Saunders. As for myself, I would never have read The Friends of Eddie Coyle by Higgins if it had not been suggested by someone whose opinions I respect.

At the end of the book, Saunders humorously disavows that his advice is authoritative. What he seems to be saying is: "This is how I do it, ... just sayin'. ... But I am successful and well-respected, and I learned at the hands of other successful and respected writers such as Tobias Wolff. Just sayin'." He insists that his entire book be bracketed by the words, "According to George." In this way, he can influence the reading public and preserve traditional aesthetic values without appearing to impose them and without being skewered by those who reject the canon.

I like the book. That having been said, I will destabilize my judgment by pointing out that Saunders needed an editor to cut the length. He doesn't give one metaphor for the writing process; he gives dozens. If I had been reading instead of listening to the e-audiobook version, I might have been more frustrated by the repetition ad nauseam. Take your own advice, man: always be escalating!

Monday, June 3, 2024

Solaris Two: Tomes in space!

This series of blog posts began as a set of observations about literary research on the novel A Confederacy of Dunces (Confederacy), by John Kennedy Toole, but I have extended it to include other topics. This blog entry relates to a different topic.

My current research interest is the novel Solaris by Stanislaw Lem. I will not yet post my overall thesis for the project, but I am learning things along the way which are tangential to my thesis and which I can share.

Today's topic: book volumes with ink on paper.

Lem intersperses the action of the novel with discussions of the background of the planet. In order to add this background, the main character, Kris Kelvin, goes to the library to read up on the history of the exploration of Solaris. In the library, he hefts massive print volumes: ink on paper.

This novel was published in the early 1960s, and it features interstellar human space travel, anti-gravity devices, and other technologies that we have not mastered, but the Kindle was unimaginable. Instead, they were hauling bulky volumes across the galaxy. Lem did not anticipate massive storage capacity for electronic media. Even microfilm was available in 1960 and would have made more sense for schlepping a library across the universe. Stan just wasn't thinking.

Now, as a librarian, I for one enjoy using a paper book volume for reading for comprehension and notetaking. But our library downsized our bound periodicals collection by over half in part because many of the articles are full text online. The library on the Solaris station is a reference collection, not a set of books you would read from start to finish. A science fiction story written today about Solaris would have the station storing petabytes of raw data on the state of all parts of the ocean. The literature on the planet would fit on an app on everyone's phones. To the edge of the universe with massive paper books? I don't think so.

Wednesday, May 1, 2024

Solaris One: Artificial Intelligence from 1960

This series of blog posts began as a set of observations about literary research on the novel A Confederacy of Dunces (Confederacy), by John Kennedy Toole, but I have extended it to include other topics. This blog entry starts a new topic.

My current research interest is the novel Solaris by Stanislaw Lem. I will not yet post my overall thesis for the project, but I am learning things along the way which are tangential to my thesis. These I will share.

In addition to his fiction, Lem published some non-fiction. A major work from the early 1960s, which was revised several times, is Summa Technologiae. I have felt the obligation to read this text to gain insights into Lem's conceptual framework when he wrote Solaris. I have now read (part of) the recent translation by Joanna Zylinska, published by the University of Minnesota Press in 2013.

Summa Technologiae had almost nothing that furthered my own thesis, but I did notice a number of interesting details. I plan to have several blog entries about points I have discovered in this book.

Today's topic: Lem's understanding of computer science, or as it was called at that time, cybernetics. Lem, writing in the 1960s, seems to have an overly optimistic view of the ability of computer systems to display intelligence. That is, he is too optimistic about what we now call artificial intelligence. He does not anticipate that a major function of computers from the 1970s through today would be keeping track of records in database systems. Computer systems have not needed to be intelligent to accomplish such a role: information management does not require sophisticated computations, just efficient data retrieval.

In the late 1950s and early 1960s, many computer theorists speculated that artificial intelligence would not be so computationally difficult. There is the famous story (possibly an urban legend) that the CIA at that time funded a Russian language translation system. They tested the system by inputting an English phrase for it to translate into Russian. Then they took the output and had it translate that back into English. The proverb, "The spirit is willing, but the flesh is weak," was translated into "The vodka is good, but the meat is bad." (Cue the rim shot.)

The recent advances in AI have only come about after great strides in machine learning and neural network designs. That, and an enormous increase in computing power. Reading what Lem wrote in 1960, I think, well, he was sixty years too early. Only products such as IBM's Watson, Google's AlphaGo, and OpenAI's GPT 4.0 have shown that computers are achieving something that might be called real intelligence.

An interesting sidetrack: the split between language smarts and mathematical smarts. The SAT test is famously divided between a verbal test and a mathematical test. In early 2024, the journal Nature reported on an AI system that is able to solve math competition problems better than many humans. Especially noteworthy is the fact that it is not a single system; it is two systems glued together. The first system is a language parser, which translates the verbal description of the problem into mathematical formulations. The second system then solves the math problem. Even among AI systems, the SAT test is split between verbal and mathematical skills.
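As a toy illustration of that glued-together design, the sketch below uses a trivial hard-coded parser as a stand-in for the language stage and the SymPy library as the solver stage. The system reported in Nature is vastly more sophisticated; this only shows the division of labor between the two stages.

```python
# A toy version of the two-stage design described above: stage one turns a
# verbal problem statement into a formal equation, stage two solves it.
# The "parser" here is a hard-coded stand-in for the language model.

import re
from sympy import Eq, solve, symbols

def parse_problem(text):
    """Stage 1 (stand-in for the language model): extract an equation."""
    # Handles only sentences like "Three times a number plus five is twenty."
    numbers = {"three": 3, "five": 5, "twenty": 20}
    words = re.findall(r"[a-z]+", text.lower())
    a, b, c = [numbers[w] for w in words if w in numbers]
    x = symbols("x")
    return Eq(a * x + b, c)

def solve_problem(equation):
    """Stage 2: the symbolic solver does the actual mathematics."""
    return solve(equation)

eq = parse_problem("Three times a number plus five is twenty.")
print(eq, "->", solve_problem(eq))   # Eq(3*x + 5, 20) -> [5]
```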

Monday, April 1, 2024

Henri Bergson "Laughter" Review: Theory of Humor Series, part 28

This series of blog posts began as a set of observations about literary research on the novel A Confederacy of Dunces (Confederacy), by John Kennedy Toole, but I have extended it to include other topics, such as evolutionary literary criticism and the theory of humor. This being April Fools Day, I give you a post about humor. Enjoy.

In my investigation, I have read the following essay, and I will analyze it here.

Bergson, Henri. "Laughter." 1900. This essay can be found in many places.

(What self-respecting, or even self-denigrating, theorist of humor can go without commenting on the work of Henri Bergson? I mean, really!)

Bergson is usually included in the history of the theory of humor in part because he was in his day a well-respected philosopher, and not many philosophers have devoted some of their musings to humor. (One could say that humor is a-musing. Cue the rim-shot.) Bergson may also be popular in brief histories of the theory of humor because his theory is at times narrow. That narrowness makes it easy both to summarize and to criticize.

As a philosopher, Bergson heavily influenced Marcel Proust, so students of literature are more likely to read Bergson than to read other middling philosophers. (Bergson is no Plato.) Bergson's ideas are of especial interest to students of John Kennedy Toole because Proust influenced Evelyn Waugh, and Waugh in his turn influenced John Kennedy Toole.

Bergson begins his essay with a broad definition of humor which covers many cases and is similar to the theory that I have been using. (For my earlier blog entry summarizing my current theory of humor, click here.) However, Bergson, like any philosopher, then tries to fit all phenomena into his philosophical structure, and, in order to shoehorn humor into the framework, he has to narrow humor's scope. This narrower theory then obviously does not cover every possible case of humor, or even most cases. His theory starts out with promising validity across many instances of humor, but then it narrows and almost becomes a parody of itself. Despite its narrower form, for certain categories of humor, his theory hits the nail on the head (or on the thumb, as it were).

To put Bergson's ideas on humor into perspective, it helps to know something about the rest of his philosophy (though I am not an expert). He argues that human life is driven by a life-force, or a vitality. Mechanical devices lack vitality. Bergson wrote at a time when the industrial working conditions for the European proletariat were particularly brutal. Marx had written that the worker was alienated from his own existence, and Bergson argued that the industrial existence of so many workers was being drained of vitality. Chaplin's movie Modern Times exemplifies this perspective.

In Bergson's system, humor is generated by envisioning a human being as a physical object without inner life or vitality. The man slipping on the banana peel ceases to be human and becomes merely a physical object, crashing to the ground. The person who goes through life acting robotically is ridiculous. (As Ionescu said, "If you want to turn tragedy into comedy, speed it up.") The ridicule and laughter are social cues to that person to correct his behavior; they signal to the person to stop operating like a robot on autopilot.

There are many aspects of this theory that work well. First, there is incongruity. The dichotomy between "human as a dignified personage" and "human as airborne object" often leads to competing linguistic scripts and incongruities. Second, there is disparagement. The physical human as object has less dignity than the person as a social actor in the abstract, and the sudden reminder of our physical nature has an aspect of belittlement. Third, there is a social function. The idea that laughter has a role to correct or modify behavior acknowledges humor's social functions.

However, it should be clear that many instances of humor do not fall into Bergson's narrow, mechanistic definition of what is funny (dry wit, for example). Further, Bergson--convinced of the correctness of his entire philosophical system--insists that only this class of events is humorous. In a sense, his obsessive views themselves become comical. While his appeal to creativity and vitality is attractive--and his special class of events really is funny (just watch a Charlie Chaplin movie)--it is easy for even a two-bit critic like myself to refute his attempt at a comprehensive theory of humor.

Désolé Henri, pas de cigare! (Sorry, Henri, no cigar!)