Machine Learning Tackles Hidden Business Challenges

Score one for the human brain. In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease.

“It’s a clever and important study that reminds us that ‘deep learning’ isn’t really that deep,” said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work.

The result comes from the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we’ll want their visual processing to be at least as good as the human eyes they’re replacing.

It won’t be easy. The new work accentuates the sophistication of human vision — and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.

“There are all sorts of weird things happening that show how brittle current object detection systems are,” said Amir Rosenfeld, a researcher at York University in Toronto and co-author of the study along with his York colleague John Tsotsos and Richard Zemel of the University of Toronto.

Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.

The Elephant in the Room

Eyes wide open, we take in staggering amounts of visual information. The human brain processes it in stride. “We open our eyes and everything happens,” said Tsotsos.

Artificial intelligence, by contrast, creates visual impressions laboriously, as if it were reading a description in Braille. It runs its algorithmic fingertips over pixels, which it shapes into increasingly complex representations. The specific type of AI system that performs this process is called a neural network. It sends an image through a series of “layers.” At each layer, the details of the image — the colors and brightnesses of individual pixels — give way to increasingly abstracted descriptions of what the image portrays. At the end of the process, the neural network produces a best-guess prediction about what it’s looking at.

“It’s all moving from one layer to the next by taking the output of the previous layer, processing it and passing it along to the next layer, like a pipeline,” said Tsotsos.
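The pipeline Tsotsos describes can be sketched in a few lines. This is a toy forward pass, not a real detector: the "image" is four pixel values, the weights are random stand-ins, and the three output classes are hypothetical. Each layer transforms the previous layer's output until the final layer emits a best-guess probability distribution.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "image": four pixel intensities. All weights are random stand-ins.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 3))]

def forward(pixels):
    """Run the pipeline: each layer re-describes the previous layer's output."""
    h = pixels
    for w in layers[:-1]:
        h = relu(h @ w)
    return softmax(h @ layers[-1])  # best-guess probabilities over 3 classes

probs = forward(np.array([0.1, 0.9, 0.4, 0.2]))
```

Note the strictly one-way flow: nothing in `forward` can revisit an earlier layer once it has passed it, which is exactly the limitation the study probes.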

Neural networks are adept at specific visual chores. They can outperform humans in narrow tasks like sorting objects into best-fit categories — labeling dogs with their breed, for example. These successes have raised expectations that computer vision systems might soon be good enough to steer a car through crowded city streets.

They’ve also provoked researchers to probe their vulnerabilities. In recent years there have been a slew of attempts, known as “adversarial attacks,” in which researchers contrive scenes to make neural networks fail. In one experiment, computer scientists tricked a neural network into mistaking a turtle for a rifle. In another, researchers waylaid a neural network by placing an image of a psychedelically colored toaster alongside ordinary objects like a banana.

This new study has the same spirit. The three researchers fed a neural network a living room scene: A man seated on the edge of a shabby chair leans forward as he plays a video game. After chewing on this scene, a neural network correctly detected a number of objects with high confidence: a person, a couch, a television, a chair, some books.

Neural Network Struggles with Perception of the Elephant in the Room

Then the researchers introduced something incongruous into the scene: an image of an elephant in semiprofile. The neural network started getting its pixels crossed. In some trials, the elephant led the neural network to misidentify the chair as a couch. In others, the system overlooked objects, like a row of books, that it had correctly detected in earlier trials. These errors occurred even when the elephant was far from the mistaken objects.

Snafus like those extrapolate in unsettling ways to autonomous driving. A computer can’t drive a car if it might go blind to a pedestrian just because a second earlier it passed a turkey on the side of the road.

And as for the elephant itself, the neural network was all over the place: Sometimes the system identified it correctly, sometimes it called the elephant a sheep, and sometimes it overlooked the elephant completely.

“If there is actually an elephant in the room, you as a human would likely notice it,” said Rosenfeld. “The system didn’t even detect its presence.”

Everything Connected to Everything

When human beings see something unexpected, we do a double take. It’s a common phrase with real cognitive implications — and it explains why neural networks fail when scenes get weird.

Today’s best neural networks for object detection work in a “feed forward” manner. This means that information flows through them in only one direction. They start with an input of fine-grained pixels, then move to curves, shapes, and scenes, with the network making its best guess about what it’s seeing at each step along the way. As a consequence, errant observations early in the process end up contaminating the end of the process, when the neural network pools together everything it thinks it knows in order to make a guess about what it’s looking at.

“By the top of the neural network you have everything connected to everything, so you have the potential to have every feature in every location interfering with every possible output,” said Tsotsos.

The human way is better. Imagine you’re given a very brief glimpse of an image containing a circle and a square, with one of them colored blue and the other red. Afterward you’re asked to name the color of the square. With only a single glance to go on, you’re likely to confuse the colors of the two shapes. But you’re also likely to recognize that you’re confused and to ask for another look. And, critically, when you take that second look, you know to focus your attention on just the color of the square.

“The human visual system says, ‘I don’t have the right answer yet, so I have to go backwards to see where I might have made an error,’” explained Tsotsos, who has been developing a theory called selective tuning that explains this feature of visual cognition.

Most neural networks lack this ability to go backward. It’s a hard trait to engineer. One advantage of feed-forward networks is that they’re relatively straightforward to train: process an image through a fixed stack of layers and get an answer. But if neural networks are to have license to do a double take, they’ll need a sophisticated understanding of when to draw on this new capacity (when to look twice) and when to plow ahead in a feed-forward way. Human brains switch between these different processes seamlessly; neural networks will need a new theoretical framework before they can do the same.
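One way to picture the missing “double take” is a wrapper that re-runs a detector on a suspect region when the first pass is unsure. Everything below is a stand-in: the detector’s “confidence” is just image contrast, and the threshold is an arbitrary assumption. The point is the control flow (notice, then look again), not the vision.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for "not sure yet"

def detect(patch):
    """Stand-in detector: returns (label, confidence).
    Confidence here is just contrast; a real detector would be a network."""
    confidence = float(patch.std() / (patch.std() + 1.0))
    label = "object" if patch.mean() > 0.5 else "background"
    return label, confidence

def detect_with_double_take(image, region):
    """First pass on the whole image; if unsure, go back and look again."""
    label, conf = detect(image)
    if conf < CONFIDENCE_THRESHOLD:
        r0, r1, c0, c1 = region  # focus attention on the confusing region
        label, conf = detect(image[r0:r1, c0:c1])
    return label, conf

image = np.zeros((4, 4))
image[1:3, 1:3] = 1.0  # a bright "object" in the center
label, conf = detect_with_double_take(image, region=(1, 3, 1, 3))
```

In this toy run the first pass mislabels the scene as background, and only the focused second look at the given region recovers the object. The hard research problem Tsotsos points to is deciding, without being told, when and where that second look is needed.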

How Deep Neural Networks Adjust Connections to Improve Data Processing

Neural networks are designed to mimic how the human brain works by adjusting the connections between nodes (analogous to nerve cells). These networks process data by “learning” from examples and fine-tuning connections based on the data fed to them.

As they adjust, predictions and data processing accuracy improve. This method is especially useful for handling complex data, such as images or unstructured text, which are harder for traditional algorithms to interpret.
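A minimal sketch of that fine-tuning, under simplifying assumptions: one linear neuron with two connection strengths, nudged by gradient descent on four made-up examples whose target simply copies the second input. Real networks do this across millions of connections, but the update rule has the same shape.

```python
import numpy as np

# Four made-up examples; the target simply copies the second input feature.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

w = np.zeros(2)  # two "connection strengths", initially zero
lr = 0.5
for _ in range(200):
    pred = X @ w                      # current predictions
    grad = X.T @ (pred - y) / len(y)  # average error, per connection
    w -= lr * grad                    # strengthen or weaken each connection
```

After training, the weights settle near [0, 1]: the network has “learned” that only the second input matters, purely from the examples it was fed.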

Overcoming Human-Like Perception Issues in AI Systems

AI systems still struggle with human-like perception, such as understanding context or emotions in a conversation. These issues arise because AI cannot fully interpret things the way humans do.

For example, a machine might misinterpret slang or subtle meaning in a conversation, which can impact decision-making. Addressing these issues requires improving data processing and training AI models with more context-rich data to better understand human behavior.

The Role of Adversarial Attacks in Business Security

Adversarial attacks occur when harmful data is fed into an AI system to trick it into making incorrect predictions or decisions. In business, these attacks can compromise AI security, leading to wrong recommendations, fraud, or vulnerabilities in sensitive data.

For example, a manipulated image or file could fool an AI model used in fraud detection, causing it to miss critical patterns. Businesses must constantly monitor and secure AI systems to protect against these risks.
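A toy illustration of the idea, assuming a hypothetical linear fraud score with made-up weights: because an attacker who knows (or can estimate) the model’s weights can aim a small perturbation directly against them, the decision flips, in the spirit of fast-gradient-style attacks. Nothing here is a real fraud model.

```python
import numpy as np

# Hypothetical linear "fraud detector": score > 0 means flag the transaction.
w = np.array([1.5, -2.0, 0.5])  # made-up learned weights
x = np.array([2.0, 1.0, 1.0])   # a transaction the model correctly flags

def score(v):
    return float(w @ v)

# Fast-gradient-style attack: nudge every feature against the weights' signs.
eps = 1.0
x_adv = x - eps * np.sign(w)
```

The perturbed transaction `x_adv` differs from `x` by at most `eps` in each feature, yet its score crosses zero and the model no longer flags it, which is why such attacks matter for fraud detection.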

Improving Accuracy and Reliability in ML Systems for Business

To improve machine learning accuracy, businesses must focus on high-quality data and continuous training. Regularly testing models against real-world data ensures the system adapts and performs well over time.

Machine learning in business also relies on monitoring models to identify any drift (changes in how data behaves), allowing businesses to make quick adjustments. The more diverse and accurate the data, the more reliable the results.
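Drift monitoring can start as simply as comparing live data statistics against the training distribution. The score and the alert threshold below are illustrative assumptions, and the data is synthetic; production systems typically use proper statistical tests instead.

```python
import numpy as np

def drift_score(reference, live):
    """Crude drift check: how many reference standard deviations the live
    mean has shifted. Real pipelines use tests like Kolmogorov-Smirnov or PSI."""
    return abs(live.mean() - reference.mean()) / (reference.std() + 1e-9)

rng = np.random.default_rng(42)
training_data = rng.normal(loc=100.0, scale=10.0, size=1000)  # e.g. order values
recent_data = rng.normal(loc=150.0, scale=10.0, size=1000)    # behavior shifted

if drift_score(training_data, recent_data) > 3.0:  # assumed alert threshold
    print("drift detected: retrain or adjust the model")
```

Here the live data’s mean has moved several reference standard deviations from the training data, so the check fires; a model trained on the old distribution would need retraining or adjustment.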

Preparing Businesses for the Future of Machine Learning

The future of machine learning will see even more integration into business operations, from automated decision-making to smarter customer interactions. Preparing for this shift means adopting AI solutions now, focusing on improving business intelligence through data, and ensuring strong AI security practices are in place. With advancements in neural networks, businesses will continue to improve data processing, making it easier to turn complex data into actionable insights.

FAQs

How does machine learning solve business challenges?

Machine learning in business helps by automating processes, identifying patterns, and providing data-driven insights. Paired with AI-driven business intelligence, it allows businesses to make faster, more informed decisions, improving operations, marketing, and customer service.

What are the practical uses of machine learning in business?

Some uses of machine learning in business are:

  • Business intelligence: Analyzing large datasets to uncover trends and optimize strategies.
  • AI solutions: Automating tasks like customer service (chatbots) and marketing (personalized campaigns).
  • Data processing: Machine learning helps process and interpret massive amounts of data quickly, improving efficiency.

How do deep neural networks improve data processing?

Neural networks are designed to mimic human brain functions, improving data processing by recognizing complex patterns. This allows businesses to analyze unstructured data (like images or text) more accurately, offering better decision-making and insights.

What are human-like perception issues in AI?

AI struggles with human-like perception because it cannot understand context or interpret sensory information the way humans do. For example, a neural network might misinterpret an image or fail to understand sarcasm in text, leading to errors in data processing.

How do adversarial attacks affect business security?

Adversarial attacks involve manipulating data to trick AI models into making incorrect decisions. These attacks can compromise AI security by causing fraud, misclassification, or data breaches, leading to financial losses or reputational damage.

How can businesses improve machine learning accuracy?

Some ways businesses can improve machine learning accuracy are:

  • Data quality: Using clean, relevant, and up-to-date data improves model accuracy.
  • Model training: Continuously training the machine learning model with diverse datasets ensures better predictions.
  • Testing and evaluation: Regularly testing the model helps identify and fix inaccuracies, boosting machine learning accuracy.

Source: Quanta Magazine
