
Say 'A.I.' one more time, I dare you

Silicon Valley has created an automated bullshit engine that can't tell the difference between truth and nonsense. That's bad news for the nation's corporate executives.


We go through this every few years. Some new tech is introduced and it's declared to be the official beginning of The Future, the one thing that will finally bring about the utopia that we were promised in cartoons and in creepy short films from the 1950s where plastic-faced pastel-wearing housewives swooned over vacuum cleaners that could do your taxes, or whatever it was the narrators were going on about.

The last big one was cryptocurrency, and the less said about that one the better. Companies suddenly decided that they had to be on The Blockchain now, despite nobody in any of the executive offices knowing what the hell that meant or what the hell good it would do. Taco Bell would announce that from now on their tacos were on the blockchain, and somebody somewhere got an extra ten million in their bank account for thinking it up. Digitized monkey pictures were worth a fortune, because you could buy a digitized monkey picture that looked slightly different from anybody else's digitized monkey picture and take that, titans of industry, look upon my digitized monkey pictures and weep.

And then everyone looked at all of the stuff that they had put on the blockchains, and quietly noted that actually none of it did anything or helped anything, it was all just shoving tulip bulbs in your pocket and declaring yourself king of the local Gold's Gym, and slowly the blockchain mania faded and the technology went back to the things it was actually designed for: money laundering and buying things that you didn't want your government finding out about.

We already seem to be at the peak of the "Artificial Intelligence" craze, in that every company has now added it to everything and—stop me if you've heard this one before—only afterward noticed that the technology turns out to be absolute crap for everything except a narrow subset of tasks in which absolute crap is the desired output.

It's saying something that Taco Bell—or rather Yum Brands, the owning megacorporation—could cheerily announce back in April that oh yeah fer sure they'll be AI'ing the hell out of your local Taco Bell and the resulting Wall Street Journal report treated it as big news instead of an over-the-top prank.

“Our vision of [quick-service restaurants] is that an AI-first mentality works every step of the way,” Park said in an interview. “If you think about the major journeys within a restaurant that can be AI-powered, we believe it’s endless.”

If you want to know the highest and best use of AI, you're looking at it. It's generating Corporate Executive Speak.

Yum’s SuperApp, a mobile app for restaurant managers to track and manage operations—Park calls it “a coach in your pocket”—is testing a generative AI boost, he said. Team members can ask the app questions like “How should I set this oven temperature?” rather than turning to training materials or tapping through an app interface. 

Good news, everyone. The managers at your local fast-food joint don't know what temperature their ovens should be set to when cooking the food that you're going to be shoving down your gullet. We will solve this problem by introducing a new technology that is making headlines for its eagerness to be horrifyingly, dangerously wrong.

And then the article goes on to suggest that some other uses of AI might be as the disembodied voice that takes your order in the drive-thru line, and damn it, that might actually be a good example of how to use AI because if I'm going to be ordering twenty tacos at 10:30 at night then by God I do think I'd rather give my order to USS Voyager Captain Kathryn Janeway than to a despair-riddled high schooler speaking to me in a voice so riddled with static and distortion that neither of us can hear more than three in every five words of the conversation anyway.

So fine, I'll give them that one, and encourage them to use "AI" for that already easily automated task instead of just strapping some basic voice recognition software to a Vocaloid and calling it done. And it will work out great, except when internet connectivity goes out and the building-sized computer cluster in Greenland that powers the whole thing might as well be two wires connected to a potato for all the good it'll do.


What chafes almost the most about this version of "Artificial Intelligence" is that it ... isn't? It's certainly artificial, but it's not intelligence, and it's nowhere near what both scientists and futurists mean when they weigh the possibilities (quite good) and ramifications (very alarming) of constructing a device that is self-aware. The current corporate orgasms are for the best-yet commercialized "Large Language Model," which is the generalized name for an algorithm that does the following:

  1. Collect and store as much textual (and visual, and audio, and other) data as possible, and index and cross-index it all into an enormous database.
  2. Allow users to type in a plain-text question or prompt.
  3. Given the words and phrases the user typed in, search for what past humans have written in response to similarly phrased questions, copy them down and spit them out again as single-phrase-or-sentence snippets in what amounts to the world's first purpose-built plagiarism engine. What's the first declarative statement past humans have often used in their response? Copy that down first. What phrases tend to follow? Slap 'em in there. And so on.

There's more to it than that, of course, in that the language model must diagram sentences with a bit more vigor than you ever did in school, making sure the phrases it constructs out of the pureed pink slime of all of humanity's documented thoughts still end up as valid sentences after it's remolded them. And it's genuinely neat that we can now do that; what makes it possible now when it wasn't before is that building database clusters containing a good chunk of everything ever written in a given language, all organized by type and topic, is now a relative triviality, and the ridiculous computing power it takes to reconstruct all of it back out of the micro-plagiarism slurry is now something that's available to anyone willing to pay for it.
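
If you want to see the bones of the idea, here's a deliberately crude sketch in Python: a toy Markov-chain babbler, the primitive ancestor of the approach. To be clear, this is an illustration and not how the commercial models are actually built (those use neural networks trained to predict the next token, not a literal lookup table), but it captures the stitch-together-what-humans-already-wrote spirit. The corpus string is invented for the demo.

    import random
    from collections import defaultdict

    def build_model(corpus, order=2):
        # Index every two-word phrase against the words that have
        # followed it: the "what do humans tend to say next?" table.
        model = defaultdict(list)
        words = corpus.split()
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def generate(model, order=2, max_words=40):
        # Stitch "new" prose together out of other people's phrasings.
        output = list(random.choice(list(model.keys())))
        for _ in range(max_words):
            followers = model.get(tuple(output[-order:]))
            if not followers:
                break
            output.append(random.choice(followers))
        return " ".join(output)

    # A tiny stand-in corpus, invented for this demo. Swap in a few
    # terabytes of everything humanity ever wrote and you have the
    # general shape of the thing.
    corpus = (
        "our vision of restaurants is that an AI-first mentality works "
        "every step of the way and our vision of tires is that an "
        "AI-first mentality works every step of the value chain"
    )
    print(generate(build_model(corpus)))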

But it's nothing more than an iteration of the same algorithms that power Siri, or Google's own search, and all of those borrow from the simplistic call-and-response language model of Eliza. It's just chatbots all the way down. Always has been.

What Silicon Valley brought to the table this time around is a bit of light criminality, the same sort of "well, what if we just ignored all the damn rules" version of innovation that brought us Ubers instead of taxis and scattered unregistered micro-hotels anywhere a landlord had a vacant house to put to the task. "What if we ignored all copyright restrictions" was the breakthrough that made it all possible; you can't really say it's copyright theft if you're stealing from everywhere, just a few words at a time. And the real human beings whose work is being continually coughed back up in little ChatGPT hairballs have been absolutely livid about that—and will probably never be compensated for the plunder, because when the captains of industry decide that the "economy" can only survive by cheating people out of what little they have, new laws usually get passed to make sure the cheating can happen.

If you're an intellectual sort, or merely tedious, there are probably a lot of arguments you want to make right now about how Actually this does too count, because human "intelligence" is just the same plagiarism engine in squishy meat form. There's no thought that hasn't been thought before in almost the very same phrasing; every word we use comes from an old neologism that caught on. The human brain itself functions as a deeply cross-indexed database that is built to store things we learn from someone else and to reassemble them, sometimes badly, the next time the subject comes up.

And to that I say: Listen, shut up. That's not what we're talking about and you know it. The reason Rosie the Robot counts as "artificial intelligence" and your Roomba doesn't isn't because one can talk and the other can't. It's because Rosie knows when you're bullshitting her.

But what really, really galls me about the current "AI" trend is that the Random Content Generator that is ChatGPT and its competitors could have absolutely enormous economic benefits—if only companies were using the damn things right.


Consider the defining features of these new "AI" language models.

  • Able to offer an analysis of anything you ask it to.
  • But can't tell the difference between true statements and false ones—and doesn't care.
  • Answers are frequently interlaced with confusing gibberish, bizarre conspiracy claims, or sudden-onset racism.
  • Mainly repeats whatever other, more important people are saying.
  • Is prone to offering advice that would kill people if actually followed.

Are you kidding? We have loads of jobs that consist of nothing more than that. At least 20% of our economy consists of that.

The problem is that the idiots in charge of these products keep proposing the stupidest possible "uses" for them, suggesting they perform duties like flying our commercial airliners or giving medical advice. They want to take a technology that still can't figure out how much wood glue you should top a pizza with, and they want to give it knives for arms and tell it to perform cancer surgeries.

And "AI" can't do any of those things in its current chatbot incarnations, because "AI" is hopelessly unsuited for any task in which success is actually, you know, important. What it's good at is offering up warmed-over bullshit, presenting it as confidently as possible, and sociopathically trotting on its way again with no interest in what happens next. Those are the jobs that can be automated right out of existence, and as a fortunate coincidence, those are also the salaries that are bleeding companies dry the most right now.

Corporate Executive

The job of any corporate officer senior enough to have their own office consists entirely of what AI is already best at. The job of a CEO is to show up in meetings, say important-sounding things, and demand the company shift directions or outsource something or rebrand something based on (1) personal whim or (2) a magazine article or (3) something someone said on the golf course that morning. None of it needs to make any logical sense. The job of the rest of the executive team is to praise whatever the CEO says, then go out and tell the lower-level grunts that they're all getting pay cuts because none of them have thought of anything even half as brilliant.

Here, let me show you a job that could be done right now with ChatGPT.

“Our vision of [quick-service restaurants] is that an AI-first mentality works every step of the way,” Park said in an interview. “If you think about the major journeys within a restaurant that can be AI-powered, we believe it’s endless.”

Jeebus McCrackers, that's the most chatbot-automatable set of words I've ever heard. That is what AI is good at, so why are we having squishy meatsuits do the job that mindless automatons were born to fill?

You want corporate vision? I got all the vision you want in this little box, pal, just press the button and it's yours.

"Our vision of tire stores is that a death laser-centric mentality can bring synergies at all points in the value chain. If you think about the market externalities of tires, death laser paradigms have the potential to rotate all of them inward by clambutt percent."

There you go, boom, done. If Michael Eisner or Steve Jobs had said that, the editors of the major business papers would all have wet their pants in ecstasy and you'd have CEOs in markets from international shipping to high finance all screaming that they want their brand to be the biggest name in clambutt by the end of next Thursday.

What "AI" brings to the table is a complete indifference as to whether Pivoting To Clambutt works, doesn't work, or is nonsensical on its face—which is precisely how corporate executive teams already function. The worst that can happen, when an executive team announces that from now on they're going to be putting your tires on the blockchain or will be asking customers to pay a monthly subscription fee to keep using their heated car seats, is nothing. The actual outcome of every enforced "direction" doesn't factor into executive pay scales in the slightest; if anything, screwing up your company's financials to such a state that it would take a near-miracle for anyone to right the ship will only get you paid a bigger bonus for agreeing to leave.

In a world in which anyone with a smartphone now has access to an endless supply of important-sounding random bullshit, why the hell would a struggling car company pay Elon Musk $56 billion to say things like "make truck more pointy" before wandering off to do another racism? Elon Musk's jobs are, quite literally, the most automatable ones on this planet. You don't even need computers for this one: put a chicken in a room full of printed-out suggestions sourced from the internet, pick up the first three papers it craps on and do whatever they say. Program the Random Chicken algorithm into an Excel spreadsheet, charge Tesla a dirt-cheap $28 billion to use it, and everybody wins.
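
For the shareholders in the audience, here's a minimal sketch of the Random Chicken algorithm, no poultry required. The strategy strings are invented for illustration:

    import random

    def random_chicken(suggestions, picks=3):
        # The chicken craps on three random printouts; we do what they say.
        # random.sample plays the chicken, at enormous savings in feed.
        return random.sample(suggestions, picks)

    # A hypothetical strategy pool sourced from the internet.
    strategies = [
        "make truck more pointy",
        "pivot to clambutt",
        "put the tires on the blockchain",
        "charge a monthly subscription for the heated seats",
    ]
    print(random_chicken(strategies))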

Entertainment Industry Executive

A special subset of the Important Corporate Executive is the Hollywood studio executive. It is the same job, except that here creativity isn't just unnecessary, it's actively unwanted. Studio executives are falling over themselves trying to come up with ways to get rid of already dirt-cheap writers and other creatives, which is both a tough job and one not likely to save any money.

Meanwhile, we've got ourselves a technology here that comes up with nothing but derivative ideas and stuff stolen from others. These studio stiffs talk a big game for people who could be automated out of their jobs by any internet-connected coffee grinder, and the coffee grinder probably won't assault every third person who comes into their office.

Newspaper Opinion Columnist

Now here's everyone's go-to example of a profession in which facts matter not a damn bit and being able to bullshit with genteel fluidity is the only real job requirement. I defy you to read anything written by Bret Stephens or the other stalwarts of our editorial pages and prove it wasn't written by a soulless and fact-challenged automaton.

Whether we're talking about the Iraq War, a deadly pandemic, tax cuts for rich people, or anything else that the op-ed pages have gone on about in the last few decades, "political pundit" is the profession to be in if you want to be able to make high-minded predictions for the future that turn out to be Extremely Damn Wrong nine out of ten times. The same pundits who told us the Iraq War would both pay for itself and usher in a new era of peace in the Middle East are either still there to this day or are only not there because they died of natural causes and had to be carried out by the janitorial staff. It is the highest pinnacle of Being Wrong.

There are few jobs where your advice can kill a hundred thousand people without it impacting your paycheck or your future career prospects. If an electrician wires a house wrong and somebody dies, they're going to be facing lawsuits, jail time, or both. If a contractor screws up which pipe connects to which and sewage starts flooding out of school drinking fountains, anyone who doesn't have a connection to a Republican-held state government is going to face some consequences.

Take to the pages of a bona fide national newspaper to tell people that wearing masks causes hip dysplasia or that dropping a bomb on the Eiffel Tower is the only way to convince the world's dolphins to share their salty wisdom with us, though, and you're all good. A job where gibberish is welcome and being wrong can only improve your standing? It's the perfect chatbot racket. The only thing that's kept "AI" from taking over the entire opinion section is that it doesn't get invited to the right parties, and I promise you that's going to change real soon now.

That of course leads naturally to the one job that AI can do better than any human—the very definition of a job that requires a sociopathic indifference to the truth, an ability to bullshit your way through any conversation, and which imposes absolutely no penalty for being horrifically, child-killingly wrong.

Member of Congress

A thought experiment: Let's say that an automaton conspiracy replaced the whole of Congress, right now, with ChatGPT. The C-SPAN cameras began broadcasting simulacra of lawmakers; their images, their voices, and their ideas were all replaced by "AI."

How long would it take Americans to notice? Six months? A year? Would they ever notice?

The House, the Senate, the Supreme Court, and at least half of all presidential administrations could be trivially replaced by today's Bullshit Generation Algorithms. It would be easy. You could manufacture a completely fake human, put them on a fake stage in a fake convention center in a fake town for a fake campaign leading to a fake election, and nobody would bat an eye at the resulting footage.

What's that? The ChatGPT-generated presidential candidate is veering from their campaign speech into a baffling rant about how if he was hypothetically stuck in the water between a sinking battery-powered boat and a nearby shark, he'd stay on the boat instead of swimming towards the shark?

YEAH, WE CALL THAT A SUNDAY. NOBODY CARES.

In politics, nobody's even going to bat an eye at an electronic "hallucination" like that. It wouldn't even be a two-day news story. We have sitting members of Congress going off about this or that conspiracy theory three times a day; the bar to become one of the people responsible for writing our laws is so low that "AI" fell over it years ago.

In fact, I propose that scientists take inspiration from our House and Senate in coming up with a standard unit of measure for how "intelligent" their systems are. We could name it the Tuberville, and companies would compete with each other to maximize their measured Tuberville scores.

It would be a perfectly scientific and measurable test, too. Panelists would be placed in a room with two computers displaying two chat screens. One chat partner would be the AI system to be tested; the other would be Alabama's Sen. "Coach" Tommy Tuberville. Each panelist would get 30 minutes to decide which chat partner is more intelligent, and the resulting ratio would be the Tuberville Score of the tested AI.

If ten panelists picked the AI as the smarter conversation partner and just one picked Sen. Coach Tommy Tuberville, the AI's rating would be 10 Tubervilles. If 50 panelists picked the fake human for every one that chose the real one, the company could boast that it had achieved the 50 Tuberville threshold.
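
The scoring arithmetic is simple enough to sketch in Python. This is a toy illustration, obviously; no such benchmark exists yet:

    def tuberville_score(picked_ai, picked_tuberville):
        # Ratio of panelists who judged the AI the smarter chat partner.
        if picked_tuberville == 0:
            return float("inf")  # the coveted Infinite Tuberville
        return picked_ai / picked_tuberville

    print(tuberville_score(10, 1))  # 10 Tubervilles
    print(tuberville_score(50, 1))  # the 50 Tuberville threshold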

It would be like the high-altitude tests conducted by test pilots during the early Space Race, with companies competing to create the plane that could fly the highest, or fastest, or longest. Perhaps there is a Tuberville threshold that results in the equivalent of a sonic boom—a logical victory so decisive, in the contest between computer and Tuberville, that it rattles windows for a mile in every direction.

So that's my idea of how we can actually start using "AI" right the hell now, rather than waiting for it to get smart enough to give medical advice or drive our cars without killing anyone. Our corporate betters need to quit with this delusional nonsense about what professional jobs can be outsourced to their new machines and get on with outsourcing the only jobs "AI" is already qualified to take on: their own.

Have an AI tell us that self-driving cars are just two years away, just you wait. Have them insist that no, what America needs most right now is a reboot of Indiana Jones that stars a computer-generated Young Adam Sandler. Have the computers work out what the new conspiracy theories should be—stop leaving it to stay-at-home creeps with American flag-themed avatars. Have the computers give stupid speeches about how our Freedom is under threat because something-something spin-the-wheel oh-look-it's-bunnies-this-time.

Enough already. Silicon Valley has finally created what humanity has long dreamed of: a fully functional bullshit machine. Now let's use it to automate all the jobs in America that pay the nation's wrongest people to bullshit us person-to-person like chumps.

Show me any job that pays over $10 million a year and I'll show you a job that a chatbot could take over tomorrow. Your move, shareholders of the world. Do you want to keep chasing pipe dreams, or do you want to embark on the most effective round of capitalist cost-cutting the world has ever seen?

