We are told that not only will AI take our jobs but it will take our bosses’ jobs and their bosses’ jobs, and pretty soon AI will be running the world…

We can see those films on Netflix any night.

Science writer and science fiction author Charles Q. Choi, in a longish piece at the Institute of Electrical and Electronics Engineers’ online magazine, Spectrum, talks about the real world, where “Neural networks can be disastrously brittle, forgetful, and surprisingly bad at math.” AI frequently flubs, and it is not clear how to make it flub less. Here are brief notes on three of the seven examples he offers:

“Brittle”: 97% of AIs could not identify a school bus flipped on its side. Not helpful in an emergency.

There are numerous troubling cases of AI brittleness. Fastening stickers on a stop sign can make an AI misread it. Changing a single pixel on an image can make an AI think a horse is a frog. Neural networks can be 99.99 percent confident that multicolor static is a picture of a lion. Medical images can get modified in a way imperceptible to the human eye so medical scans misdiagnose cancer 100 percent of the time. And so on.

Charles Q. Choi, “7 Revealing Ways AIs Fail” at IEEE Spectrum (September 21, 2021)

There are doubtless ways to reduce bad guesses. But we are dealing with systems in which no independent thinking is involved, so progress may be slow, variable, and uncertain.
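To see why a single pixel can flip a classification, consider a deliberately fragile toy classifier (my own sketch, not one of Choi’s examples): a nearest-prototype model over tiny 3×3 “images” that leans almost entirely on one pixel. Real attacks are subtler, but the mechanism is similar in spirit: the model trusts a narrow, non-robust feature.

```python
# Toy illustration of brittleness (a hypothetical nearest-prototype
# classifier, NOT any system described in the Spectrum article).
# The two class prototypes differ in only one pixel, so flipping
# that single pixel flips the classification.

horse = [0.0] * 9        # prototype for class "horse" (all-zero 3x3 image)
frog = [0.0] * 9
frog[4] = 4.0            # "frog" prototype differs only in the center pixel

def dist(a, b):
    # Euclidean distance between two flattened images
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(img):
    # Assign the image to the nearer prototype
    return "horse" if dist(img, horse) < dist(img, frog) else "frog"

img = [0.0] * 9          # identical to the "horse" prototype
before = classify(img)   # classified as "horse"

img[4] = 4.0             # change exactly one pixel
after = classify(img)    # now classified as "frog"

print(before, after)
```

The toy is rigged, of course, but it makes the point: a classifier that concentrates its decision on a few brittle features can be flipped by a tiny, targeted change.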

“Forgetful” Instead of building on memory from year to year, AI can “forget” important stuff.

In the beginning, the researchers trained their neural network to spot one kind of deepfake. However, after a few months, many new types of deepfake emerged, and when they trained their AI to identify these new varieties of deepfake, it quickly forgot how to detect the old ones.

This was an example of catastrophic forgetting—the tendency of an AI to entirely and abruptly forget information it previously knew after learning new information, essentially overwriting past knowledge with new knowledge. “Artificial neural networks have a terrible memory,” Tariq says.

Charles Q. Choi, “7 Revealing Ways AIs Fail” at IEEE Spectrum (September 21, 2021)

Again, proposed remediation strategies may very well work, but the limitation remains fundamental: There is no one “in there” to do the remembering. No one “in there” is concerned about forgetting.
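Catastrophic forgetting can be sketched in a few lines. The following toy (a single logistic-regression “neuron” of my own construction, not Tariq’s deepfake detector) is trained on Task A, then trained only on Task B, whose labels conflict with A. The gradient updates for B simply overwrite the weights that encoded A:

```python
# Toy illustration of catastrophic forgetting (a hypothetical
# one-layer model, NOT the deepfake detector in the article).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(w, X, y, steps=500, lr=0.5):
    # Plain gradient descent on the logistic loss
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

X = rng.normal(size=(200, 2))
y_a = (X[:, 0] > 0).astype(float)   # Task A: label depends on first feature
y_b = 1.0 - y_a                     # Task B: the opposite rule

w = np.zeros(2)
w = train(w, X, y_a)
acc_a_before = accuracy(w, X, y_a)  # high: the model has learned Task A

w = train(w, X, y_b)                # now train ONLY on Task B
acc_a_after = accuracy(w, X, y_a)   # collapses: Task A has been overwritten

print(acc_a_before, acc_a_after)
```

Nothing in the model “remembers” Task A; the weights are just numbers, and the latest gradients win. That is the overwriting Choi describes, in miniature.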

“Surprisingly bad at math”: Despite some AIs crunching…

Continue reading at Mind Matters.