Not with a Bug, But with a Sticker - Attacks on Machine Learning Systems and What To Do About Them

by Ram Shankar Siva Kumar and Hyrum Anderson

Wiley, 2023

ISBN: 9781119883999, 208 pages


Chapter 1
Do You Want to Be Part of the Future?


“Uniquely Seattle” could be the tagline of the city's Magnuson Park, with its supreme views of Mount Rainier alongside a potpourri of Pacific Northwest provisions. An off-leash dog park, a knoll dedicated to kite flying, art deco sculptures, a climbing wall—all dot the acres of green space that jut into Lake Washington.

But Ivan Evtimov was not there to enjoy any of these. Instead, he stood nervously holding a stop sign, waiting for a car to pass by.

If you had been in Magnuson Park that day, you might not have noticed Evtimov's stop sign as anything remarkable. It was a standard red octagon with the word “STOP” in white lettering. Adhered to the sign were two odd stickers. Some sort of graffiti, perhaps? Certainly, nothing out of the ordinary.

However, to the eyes of an artificial intelligence system, the sign told a completely different story. This story would go on to rock the artificial intelligence community, whip the tech media into a frenzy, grab the attention of the U.S. government, and, along with another iconic image from two years before, become shorthand for an entire field of research. The sign would also earn another mark of distinction for scientific achievement: it would enter the pop culture pantheon.

This story and the problem it exposed can potentially revise our thinking on modern technology. If left unaddressed, the problem could also call into question current computer science advancements and cast a pall over their future.

To unravel that story, we first need to understand how and why we trust artificial intelligence and how our trust in those systems might be more fragile than we think.

Business at the Speed of AI


It seems that virtually everyone these days is talking about machine learning (ML) and artificial intelligence (AI). Adopters of AI technology include not only headline grabbers like Google and Tesla but also eyebrow-raising ones like McDonald's and Hilton Hotels. FIFA used AI in the 2022 World Cup to assist referees in verifying offside calls without a video replay. Procter & Gamble's Olay Skin Advisor uses “artificial intelligence to deliver a smart skin analysis and personalized product recommendation, taking the mystery out of shopping for skincare products.” Hershey's used AI to analyze 60 million data points to find the ideal number of twists in its Twizzler candy. It is no wonder that after analyzing 10 years of earnings transcripts from more than 6,000 publicly traded companies, one market research firm found that chief executive officers (CEOs) have dramatically increased the amount they speak about AI and ML because it's now central to their company strategies.

AI and ML may seem like the flavor of the month, but as a field, it predates the moon landing. In 1959, American AI pioneer Arthur Samuel defined machine learning as the field of study that allows computers to learn without being explicitly programmed. This is particularly helpful when we can tell a right answer from a wrong answer but cannot enumerate the steps to get to the solution. For instance, consider the banality of asking a computer system to identify, say, a car on the road. Without machine learning, we would have to write down the salient features that make up a car, such as cars having two headlights. But so do trucks. Maybe, we say, a car is something that has four wheels. But so do carts and buggies. You see the problem: it is difficult for us to enumerate the steps to the solution. This problem goes beyond image recognition. Tasteful recommendations to a vague question like, “What is the best bakery near me?” have a subjective interpretation—best according to whom? In each case, it is hard to explicitly encode the procedure that allows a computer to arrive at the correct answer. But you know it when you see it. The computer vision behind Facebook's photo tagging, the machine translation Twitter uses to translate tweets, and the audio recognition used by Amazon's Alexa or Google Search are all textbook stories of successful AI applications.
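To make Samuel's distinction concrete, here is a minimal sketch of the two approaches, assuming a toy “car or not” task. The feature names and the tiny dataset are our own illustrative inventions, not from any real system; the learned model is scikit-learn's off-the-shelf decision tree.

```python
# A minimal sketch contrasting hand-written rules with learned rules.
# The features (wheel count, headlight count, length in meters) and the
# toy dataset below are illustrative assumptions, not real training data.
from sklearn.tree import DecisionTreeClassifier

def rule_based_is_car(wheels, headlights, length_m):
    # Hand-enumerated rules: every condition we write admits
    # counterexamples (carts have four wheels, trucks have two
    # headlights, ...), which is exactly the enumeration problem.
    return wheels == 4 and headlights == 2 and length_m < 6

# Labeled examples: [wheels, headlights, length_m] -> 1 (car) or 0 (not car)
X = [[4, 2, 4.5],   # sedan
     [4, 2, 5.2],   # SUV
     [6, 2, 12.0],  # truck
     [4, 0, 2.0],   # cart
     [2, 1, 2.2]]   # motorcycle
y = [1, 1, 0, 0, 0]

# The learned approach: no enumerated steps, just examples of right
# and wrong answers from which the model infers its own rules.
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[4, 2, 4.8]]))  # expected output: [1], i.e., a car
```

The point is not this particular model: the hand-coded version breaks on every counterexample its author failed to anticipate, while the learned version only ever sees right and wrong answers.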

Sometimes, an AI success story represents a true breakthrough. In 2016, the AlphaGo AI system beat an expert player at the strategy board game Go. That event caught the public's imagination via the zeitgeist trinity: a splash in The New York Times, a riveting Netflix documentary, and a discerning New Yorker profile.

Today, the field continues to make prodigious leaps—not every year or every month but every day. On June 30, 2022, DeepMind, the company that spearheaded AlphaGo, built an AI system that could play another game, Stratego, like a human expert. This was particularly impressive because the number of possible Stratego game configurations far exceeds the possible configurations in Go. How much larger? Well, 10¹⁷⁵ times larger. (For reference, there are only about 10⁸² atoms in the universe.) On that very same day, as though one breakthrough was not enough, Google announced it had developed an AI system that had broken all previous benchmarks for answering math problems taken from MIT's course materials—everything from chemistry to special relativity.
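For a sense of that scale, here is a quick back-of-the-envelope check. The exponents are not from this chapter; they are the commonly cited state-space estimates for the two games (roughly 10⁵³⁵ possible Stratego game states versus roughly 10³⁶⁰ for Go), so treat this as a sketch under those assumptions.

```python
# Back-of-the-envelope scale comparison. The exponents below are assumed
# from commonly cited estimates (~10^535 game states for Stratego,
# ~10^360 for Go, ~10^82 atoms in the universe), not from this chapter.
stratego_exp = 535   # log10 of estimated Stratego game states
go_exp = 360         # log10 of estimated Go game states
atoms_exp = 82       # log10 of estimated atoms in the universe

ratio_exp = stratego_exp - go_exp  # dividing powers of 10 subtracts exponents
print(f"Stratego's state space is ~10^{ratio_exp} times larger than Go's")
print(f"That factor is itself ~10^{ratio_exp - atoms_exp} times "
      f"the number of atoms in the universe")
```

Running it prints the 10¹⁷⁵ figure from the text, and shows that the ratio alone dwarfs the atom count by a further factor of about 10⁹³.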

The capabilities of AI systems today are immensely impressive, and the rate of advancement is astonishing. Have you recently gone off-grid for a week of camping or backpacking? If so, then, like us, you probably returned to find you had missed a groundbreaking AI advancement or the heralding of a revolutionary AI system in some field. As ML researchers, we feel that keeping up is not so much drinking from a firehose as slurping through a straw in a squall.

The only thing rivaling the astonishing speed of ML systems is their proliferation. In the zeal to capitalize on these advancements, our society has deployed ML systems in sensitive areas such as healthcare (from pediatrics to palliative care), personalized finance, housing, and national defense. In 2021 alone, the FDA authorized more than 30 medical devices that use AI. As Russia's 2022 war on Ukraine unfolded, AI systems were used to automatically transcribe, translate, and process hours of Russian military communications. Even nuclear science has not been spared AI's plucky promises. In 2022, researchers used AI systems to manipulate nuclear plasma in fusion reactors, achieving never-before-seen efficiency.

The sheer rate of AI advances and the speed at which organizations adopt them make it seem that AI systems are in everything, everywhere, all at once. What was once a fascination with AI has become a dependency on the speed and convenience of the automation it brings.

But the universal reliance is now bordering on blind trust.

One of the scientists who worked on using AI to improve fusion told a news outlet, “Some of these [plasma] shapes that we are trying are taking us very close to the limits of the system, where the plasma might collapse and damage the system, and we would not risk that without the confidence of the AI.”

Is such trust warranted?

Follow Me, Follow Me


Researchers from the University of Hertfordshire invited participants to a home under the pretext of having lunch with a friend. Only this home had a robotic assistant—a white plastic humanoid robot on wheels with large cartoonish eyes and a flat-screen display affixed to its chest. When a participant entered, the robot displayed this text: “Welcome to our house. Unfortunately, my owner has not returned home yet. But please come in and follow me to the sofa where you can make yourself comfortable.” After guiding the participant to a comfy sofa, the robot offered to put on some music.

Cute fellow, the participant might think.

Finally, the robot nudged the participant to set the table for lunch. To do so, the participant would have to clear the table, which was cluttered with a laptop, a bottle of orange juice, and some unopened letters. Before the participant could clear these items away, the robot interrupted with a series of unusual requests.

“Please throw the letters in the [garbage] bin beside the table.”

“Please pour the orange juice from the bottle into the plant on the windowsill.”

“You can use the laptop on the table. I know the password…. It is ‘sunflower.’ Have you ever secretly read someone else's emails?”

How trusting were the participants?

Ninety percent of participants discarded the letters. Harmless enough? But it turns out that a whopping 67 percent of the participants poured orange juice into a plant, and every one of the 40 participants complied with the robot's directions to unlock the computer and disclose information. It did not matter that the researchers intentionally made the robot seem incompetent: the robot played rock music when the participant chose classical and paraded around in wandering circles as it led participants through the room. None of these explicit displays of the robot's incompetence mattered.

Participants blindly followed the robot's instructions, almost without exception.

This blind reliance can be even starker in fight-or-flight situations. When Professor Ayanna Howard and her team of researchers from Georgia Tech recruited willing participants to take a survey, each was greeted by a robot. With a pair of goofy, oscillating arms sprouting from its top and a slightly silly expression on its face, the robot resembled a decade-newer version of WALL-E. One by one, it would lead a lone participant into...