By Jacob Derin
Over the weekend, I attended SacHacks 2021, which describes itself as “the first major intercollegiate hackathon in the Sacramento, California area.” It consisted of a series of presentations by computer science experts while students worked in groups to complete programming projects.
One of these presentations described a natural language program that could help songwriters develop lyrics for their songs. It focused on how machine learning works: computer scientists have been teaching computers to understand and produce language by exposing them to many examples of human writing. This is the process that allowed IBM’s Watson to win Jeopardy.
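The presenter’s actual system wasn’t shown, but the basic idea of learning language patterns from examples can be illustrated with a toy sketch. The following first-order Markov model (a deliberately simple stand-in, not the lyric-writing program itself) “learns” which word tends to follow which by counting adjacent pairs in example text, then generates a new line by sampling those counts:

```python
import random
from collections import defaultdict

# Toy illustration only: count which word follows which in a tiny corpus.
corpus = "the sun sets low the sun rises high the moon rises slow".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate a short line by repeatedly sampling a likely next word.
random.seed(0)
word = "the"
line = [word]
for _ in range(5):
    word = random.choice(transitions.get(word, corpus))
    line.append(word)
print(" ".join(line))
```

Modern systems use far more sophisticated statistics than this, but the principle is the same: the program never decides what to say, it only reproduces patterns it has counted.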
This raised some interesting questions about whether or not a computer program can be creative. While the presenter made the case that computer programs can only ever augment human creativity, I don’t see any reason why they couldn’t do creative work themselves.
It’s been 24 years since Deep Blue defeated the reigning chess champion, Garry Kasparov. Chess experts and computer scientists had long predicted that it would take massive advances in the field for computers to outperform humans in a game like chess. It required too much creative, flexible thinking, they argued.
More recently, computers defeated the best Jeopardy champions and then beat the human Go champion for the first time. If the history of computer science has taught us anything, it’s to have some humility about what computers are capable of doing.
Still, the intuition that computers will never replace human artists feels ingrained in a different way. Ascendancy in chess, Go and Jeopardy is one thing, but writing poetry or works of fiction feels much more human. These are works of emotion, and there’s something undeniably uncanny about the possibility of computer-generated artwork.
But that doesn’t mean that computers can’t do it. There’s nothing a human artist can do that a computer couldn’t. This sounds a little strange, but reflecting on the creative process reveals it to be true. The human brain, after all, is little more than a biological machine. Given enough practice, it learns which combinations of words, colors and sounds seem to “work well” together, and it does so by running its own sort of program.
That’s how learning works. Fire the same neurons together enough times, and you make a strong pathway between them, just like programming a computer chip.
Computers have been writing poetry, for instance, for some time now. Some of it is even relatively passable. One of these poems, written using an algorithm created by a Duke University student, was even published in a Duke literary journal:
“A home transformed by the lightning
the balanced alcoves smother
this insatiable earth of a planet, Earth.
They attacked it with mechanical horns
because they love you, love, in fire and wind.
You say, what is the time waiting for in its spring?
I tell you it is waiting for your branch that flows,
because you are a sweet-smelling diamond architecture
that does not know why it grows.”
Using poetry as an example, of course, lowers the bar somewhat. Poetry doesn’t have to conform to the same standards of grammar and logic that prose does. However, artificial intelligence is now also capable of writing “natural language” prose that is very hard to tell apart from human-written text.
How long will it be before we have the first passably written computer-generated novel? It seems conceivable that we’ll have one before too long.
Does this mean that human creative work is going to go the way of manual labor jobs? Are we going to be outsourcing our novel-writing, screenwriting and painting to computers in the near future? I don’t think so.
As I argued in a previous article for the Davis Vanguard at UC Davis, writing jobs are safe from automation, at least for the foreseeable future. Even the very impressive achievements of natural-language programs that I’ve been describing don’t come close to the creative visions of the world’s great poets or authors. Some day that might change, but there will always be a fundamental difference between computer programs and people: we understand what we’re writing.
When computers learn to write as we do, they learn the syntax, or grammar, of language but never the semantics, or meaning, of it. Computers can only ever mimic human creativity by reshuffling examples they’ve seen in new and interesting ways. No matter how good computers get at this, they can’t express an emotion or an opinion.
Just as humans still play chess even though chess programs hopelessly outmatch the best chess players, we’ll still be writing and creating long after computers master the ability. The very fact that a feeling person, one who understands what they’ve written, authored a text will distinguish human writing from the computer variety.
I’ll leave you with an example to demonstrate what I mean. There’s a website out there called the Library of Babel. A computer program can generate every possible 3,200-character page that could be written, and in that sense every such page is stored there. It might be difficult to wrap your head around this. This website contains every single page of Shakespeare’s plays, the United States Constitution and the Encyclopedia Britannica. It even contains every single page of Shakespeare’s plays as they would have been written if each character had been replaced with a zebra.
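The scale involved is easy to state but hard to grasp. As a back-of-the-envelope sketch (assuming, as the site describes, a 29-symbol alphabet of 26 letters plus space, comma and period, and 3,200-character pages), the number of distinct possible pages is 29 raised to the 3,200th power:

```python
# Rough sketch of the Library of Babel's page count, assuming a
# 29-symbol alphabet and 3,200 characters per page (per the site's
# own description; not its actual code).
ALPHABET_SIZE = 29
PAGE_LENGTH = 3200

total_pages = ALPHABET_SIZE ** PAGE_LENGTH
digits = len(str(total_pages))
print(f"Distinct possible pages: a number with {digits} digits")
```

That count runs to thousands of digits, vastly more pages than there are atoms in the observable universe, which is why almost all of them are meaningless noise.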
Even this very article was there before I wrote it (albeit stored in two separate pages, because it’s longer than 3200 characters). But did that make my attempt to write it futile? I don’t think so. If I did, I wouldn’t have written it.
What I took away from my experience at SacHacks is that we’re still exploring the limits of our technological power. The next generation of students will need a solid foundation in computer science if it’s going to keep that work going, and so the more programs like SacHacks that are out there, the better.
Jacob Derin is a third-year English and Philosophy major at UC Davis.
Fascinating article Jacob. Thanks for posting your thoughts on a subject foreign to me.
I do take partial exception to one point.
“No matter how good computers get at this, they can’t express an emotion or an opinion”
While I agree that computers cannot express emotion, I am not so sure about opinion. Computers are clearly capable of enumerating a number of options when confronted with a problem, given all relevant data. If they then prioritize the options from most effective to least, are they not in effect expressing an “opinion” on which path should be chosen?
This is an interesting and contentious question in the philosophy of language. For instance, this idea of “unconscious opinions” has been criticized by the philosopher Daniel Dennett as one reason why the concept of a philosophical zombie is incoherent.
I tend to think that only conscious subjects can express opinions. What a computer does when it assembles a sentence that contains information about a topic couldn’t properly be called formulating an opinion. Computers have no “position” to speak of. They just translate inputs into outputs.
You can argue that we do the same thing, but people understand that it’s happening and computers don’t.