Chatbots Killed the Academic Star

Michael Munger

Having been an editor at three journals, I can say that the most difficult thing a journal must do is find referees. So I was thinking last week of how nice it would be to have a “refbot” (software trained on what makes for a good, publishable article) to use as a referee. And then I suddenly realized that I was contemplating a new kind of singularity.

The word “singularity” comes from Latin, and then French, meaning unusual, exceptional, unique behavior. Scientifically, a singularity is what lies beyond an event horizon, the penetration of which results in the suspension or negation of all the physical rules that govern mechanics, electromagnetism, pretty much everything.

Mathematically, a singularity is a point at which an operation is not defined. It’s not so much that the operation can’t be done, but rather that it literally cannot be defined or understood using the normal logical rules of premises or induction. Think “divide by zero” or “invert an N×N matrix whose rank K < N.”
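
To see why the matrix case is a true singularity, consider a worked two-by-two illustration (my own, chosen for simplicity): the second row is twice the first, so the rank is 1 rather than 2, the determinant is zero, and no inverse exists:

    \[
    A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix},
    \qquad \det A = (1)(4) - (2)(2) = 0,
    \qquad A^{-1} \text{ undefined}.
    \]

Asking for A^{-1} here is exactly like asking for 1/0: the question is grammatical, but no answer can be defined.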

Your Own Singularity

All that may seem pretty metaphysical, and “you wouldn’t understand” is what grifters always say before they ask for money. That’s not what’s happening here. I’m talking about a singularity we can all easily understand:

AI chatbots start writing papers and sending them to journals (this is pretty much happening now, folks). At first, the chatbots would be used by specific researchers to assist in writing a paper that was going to be written anyway. But there’s a lot of downtime, and writing is easy.

Once the papers are sent to the journal, refbots, “trained” for the specific journal by analyzing the corpus of “good” (previously published) academic papers in that journal, perhaps leavened by other academic work the editors aspire to have their journal mimic, will evaluate the submissions. The refbots will investigate contradictions, look for consensus, and check references. (That may sound pretty superficial, but it would be better than about two-thirds of the human referees journals can actually find, and it would be fast. On the other hand, it appears that perhaps the tech is not quite there yet.)

The feedback loop is then closed by the next generation of chatbots scanning the published literature and deciding what is important, citing that work in the next round of published articles. The articles that attract the most citations from the next generation of chatbot authors, and the next, will get higher status in search engines that return the “best” articles for human researchers to use, after the selection process has culled the dross.

Notice that a “generation” might be no longer than a week, perhaps even as little as a few seconds, as machine learning becomes faster and faster. There is no reason to wait for articles to be “published” in the usual fashion, as the system would dynamically update itself, without human mediation of any kind. Before long, all the academic journal articles will have been written.
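
For the flavor of it, here is a toy sketch of that loop in Python. It is entirely hypothetical: the “bots” are stand-ins driven by random scores rather than real language models, and every function name is my own invention.

    import random

    random.seed(42)  # deterministic toy run

    def chatbot_write(n_papers):
        """Generate candidate papers; 'quality' stands in for model output."""
        return [{"quality": random.random(), "citations": 0}
                for _ in range(n_papers)]

    def refbot_review(candidates, published):
        """Accept papers that resemble the existing corpus: a paper passes
        if its quality beats the mean quality of prior publications."""
        if published:
            bar = sum(p["quality"] for p in published) / len(published)
        else:
            bar = 0.5  # cold-start threshold before any corpus exists
        return [p for p in candidates if p["quality"] >= bar]

    def cite(published, n_cites):
        """Next-generation bots cite existing work, favoring already-cited
        papers (a rich-get-richer dynamic)."""
        if not published:
            return
        for _ in range(n_cites):
            weights = [1 + p["citations"] for p in published]
            random.choices(published, weights=weights)[0]["citations"] += 1

    published = []
    for generation in range(5):  # a "generation" could be a week, or seconds
        accepted = refbot_review(chatbot_write(100), published)
        published.extend(accepted)
        cite(published, n_cites=200)
        print(f"generation {generation}: accepted {len(accepted)} of 100, "
              f"corpus size {len(published)}")

Even this crude version displays the dynamic: because accepted papers must beat the average of the existing corpus, the acceptance bar only ratchets upward, citations pile onto early winners, and no human appears anywhere in the loop.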


Chatbots can already write bad prose, the kind we routinely publish in “professional” journals. And a “refbot” could be trained (though not easily, it appears) to discriminate publishable from unpublishable papers. The academic scandals that have arisen from fake, nonsensical submissions, including the Sokal Hoax and the Lindsay, Pluckrose, and Boghossian “Grievance Studies” experiment, have been interpreted incorrectly. The secondary responses to the Grievance Studies publications are more nearly correct: the problem is not so much that fake studies can be published, but rather that much of the other work published in journals is, effectively, fake. (There is plenty of compelling evidence, which I will not review here.)

You can see the problem: chatbots can write quickly but badly, and they can produce wads of stuff that will clog the journals. The worst problem is that if the chatbots write enough, some of it will seem good. That’s already a problem in music, where autotune and computer-generated melodies and lyrics produce an overwhelming tsunami of pop songs. It doesn’t matter that most are bad, because some will get through and become hits.


But chatbots killed the academic star, or more accurately they will kill the academic profession, at least as we have known it. The standard career strategy (“write and publish a whole bunch of papers even if no one ever reads them”) will no longer be viable. It’s not clear this is bad, in most disciplines. Scholarship, as opposed to research, might come back into vogue, meaning that a professor teaching Shakespeare might actually focus on the text of Shakespeare rather than on their “latest research” about Hamlet’s colonial commitment or gender dysphoria. To put it more bluntly: once the ability to produce text is infinite, we might refocus on good text and the reasons why it is good.

Bad for Me

In the title of this piece, I claimed that chatbots killed the academic “star.” But I was thinking, as usual, of myself. As I have noted many times, my comparative advantage in academics is writing quickly; to paraphrase Othello, “I am one who wrote not wisely but pretty fast.” It is not necessary, in academics, to write well, and some good writing is even viewed with suspicion as being not serious. Fortunately (for me), a lot of academic writing is bad. I have even written down some suggestions on “how to write less badly” and had others say the advice was useful. (One commenter said the title was badly written; yes, sweetie, that was the point!)


I can already see the end (of me) on the horizon. Being “one who wrote not wisely but pretty fast” will not be distinctive, come the singularity, because GPT-4 entities can write badly and really, really fast. Sure, the text will need to be edited, and refbots will be needed to judge between competing versions of the same account. But my advantages in doing these things are far less significant than the advantage I have always had in writing (relatively) quickly.

Courtesy: AIER