Andrew Gentile: Google’s Payline Is Where The Candy Is


This commentary is from Andrew K. Gentile, a Sheffield resident and independent electrical engineer.

Artificial intelligence can be extremely valuable when applied to the right kind of problems. AI can sift through the medical records of 10 million cancer patients, finding patterns that a human couldn’t. It can find genetic markers that indicate a patient’s chances of getting cancer, giving the patient the chance to treat it before the disease becomes symptomatic.

It’s a huge benefit for medicine, like being able to see the future. AI can do this so effectively because it has a mountain of information to work with: years of medical records per patient.


Of course, AI is also used for less admirable tasks, such as ad targeting. Google has a huge amount of information about me. I’m sure it knows things about me that I don’t even know. It continuously scans my emails and texts. It monitors my web activity and even listens to my conversations.

And from this dataset, it extracts keywords, which then become the inputs for the AI program assigned to my surveillance. The problem Google’s AI is trying to solve is how to select articles or videos that will catch my attention.

I get a lot of quiz articles with titles like “Only 7% of People Can Answer This ’70s TV Quiz” or “The Unfortunate Towns in Every State.”

In a way, my News Feed looks like the checkout line at the grocery store, cluttered with mostly useless items placed there to test my restraint. Although I might grab a candy bar while waiting to check out, I do so out of boredom or on impulse. It’s not that I really wanted the candy. In fact, if candy were sold only in the candy aisle, I might never buy candy at all.

Likewise, I read articles in my News Feed because they’re there, and they’re there because I read them. This circular feedback is Google’s self-fulfilling prophecy. Convenience trumps everything.

I could argue that Google’s AI isn’t working, because it doesn’t know what I really want to read. Google mistakes convenience and curiosity for genuine interest. The only feedback Google gets is whether or not I open an article. There may be other clues, such as how long I spend on the article and whether I open it more than once. Beyond that, Google has no idea why I opened it.

But Google’s AI wasn’t programmed to learn my literary preferences. It was programmed to maximize my value as a Google product. My attention is what Google sells, and the more of my attention Google can capture, the more valuable I am as a product. Google only has to attract my attention, not hold it. Distracting me is both easier and more profitable than understanding me.

Google makes money by cluttering my view of the online world. Unfortunately, this is a win-lose business model. The more of these items I have to sift through, the more likely it is that a headline will catch my eye and I’ll click on it. And of course I do. Everyone does.

The AI doesn’t just adapt to me; I adapt to it. Over time, I’ll probably come to look more like what Google thinks I am. Maybe I already do. That’s Google’s payline, and that’s where the candy is.

Did you know that VTDigger is a non-profit organization?


