Is the future of AI supervised?

By Susan Weiner
March 14, 2022

“The future is already here – it’s just not evenly distributed.” – William Gibson

We are gravitating toward a technological singularity. Futurists such as Louis Rosenberg, Ray Kurzweil, and Patrick Winston have predicted timelines for “superintelligence” (between 2030 and 2045). But are these timelines realistic? And which approaches (supervised, semi-supervised, or unsupervised learning) will get us there?

Andrew Ng, founder and CEO of Landing AI, swears by smart-sized, “data-centric” AI, while Yann LeCun, vice president and chief AI scientist at Meta, thinks that “the revolution will not be supervised”. Instead, LeCun proposes using self-supervised learning to build AI systems with common sense, bringing us closer to human-level intelligence.

Data

Supervised learning uses a labeled data set to teach models how to map inputs to desired outputs. The algorithm measures its accuracy through the loss function, adjusting itself until the error has been minimized. Supervised learning is currently the most popular machine learning approach, with applications in fraud detection, sales forecasting, and inventory optimization.
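
To make that concrete, here is a minimal sketch of supervised learning, assuming scikit-learn and a toy fraud-detection dataset (neither is mentioned in the article): the model fits labeled input-output pairs, and a loss function measures how far its predictions are from the true labels.

    # Minimal supervised-learning sketch: labeled inputs mapped to desired outputs.
    # Library choice (scikit-learn) and the toy data are illustrative assumptions.
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss

    # Labeled dataset: each input vector is paired with the output it should map to.
    X_train = [[520.0, 1], [30.0, 0], [4999.0, 1], [12.5, 0]]   # e.g. transaction features
    y_train = [1, 0, 1, 0]                                       # e.g. fraud / not fraud

    model = LogisticRegression()
    model.fit(X_train, y_train)      # fitting adjusts parameters to reduce the loss

    # The loss function quantifies the gap between predictions and true labels.
    print(log_loss(y_train, model.predict_proba(X_train)))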

In a recent O’Reilly study, 82% of respondents said their company prefers supervised learning over unsupervised or semi-supervised learning, and Gartner expects supervised learning to remain the most popular type of machine learning in 2022.

“The last decade has seen a leap towards deep learning. This decade may be towards data-centric AI,” said Andrew Ng. While deep learning networks have made huge progress over the past decade, he believes the way forward is to improve the dataset while keeping the neural network architecture fixed.

Break the bias

Biased or duplicated data is another issue that affects the performance of an AI system. According to Yann LeCun, supervised learning works well in domains with well-defined boundaries, where the types of inputs observed during deployment do not differ significantly from those used during training. However, it is not easy to create large amounts of clean, unbiased, labeled data.

Yann LeCun recommends self-supervised learning (SSL) to tackle the data problem. SSL trains a system to learn a good representation of its inputs in a task-independent way. It uses unlabeled data to learn representations from large training sets, and then uses a small amount of labeled data to achieve good performance on a supervised task. Only a few labeled examples are needed for training, and the system can handle inputs that differ from the training samples. SSL also reduces the system’s susceptibility to biases in the data.
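
A rough sketch of that two-stage recipe, written in PyTorch with an illustrative masked-reconstruction pretext task and toy tensors (the specific architecture and pretext task are assumptions, not what LeCun describes):

    # Two-stage SSL sketch (PyTorch): pretrain a representation on unlabeled data,
    # then fine-tune a small head on a handful of labels. Shapes and the pretext
    # task are illustrative assumptions.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

    # Stage 1: self-supervised pretext task (here: reconstruct masked inputs).
    decoder = nn.Linear(32, 128)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
    for _ in range(100):
        x = torch.randn(256, 128)                    # stand-in for unlabeled data
        mask = (torch.rand_like(x) > 0.25).float()   # hide a quarter of each input
        loss = ((decoder(encoder(x * mask)) - x) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: supervised head trained on only a few labeled examples.
    head = nn.Linear(32, 2)
    opt2 = torch.optim.Adam(head.parameters())
    x_few, y_few = torch.randn(50, 128), torch.randint(0, 2, (50,))
    for _ in range(100):
        logits = head(encoder(x_few).detach())       # reuse the frozen representation
        loss = nn.functional.cross_entropy(logits, y_few)
        opt2.zero_grad()
        loss.backward()
        opt2.step()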

According to Andrew Ng, biased data leads to biased systems. Data-centric AI lets you engineer a specific subset of the data. If performance is biased on one subset of the data but fine for most of the dataset, it is counterproductive to modify the entire neural network architecture just to improve performance on that subset. But if you can engineer that subset of the data, you can solve the problem in a targeted way.
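
One way to read that advice in code is slice-level error analysis: measure accuracy per subset of the data rather than only in aggregate, then fix the data for the lagging slice. The helper and slicing key below are hypothetical, not from the article.

    # Data-centric error analysis sketch: evaluate per slice instead of only overall.
    # The "slice" tag and any thresholds are illustrative assumptions.
    from collections import defaultdict

    def accuracy_by_slice(examples, predict):
        """examples: list of dicts with 'features', 'label', and a 'slice' tag."""
        hits, totals = defaultdict(int), defaultdict(int)
        for ex in examples:
            totals[ex["slice"]] += 1
            hits[ex["slice"]] += int(predict(ex["features"]) == ex["label"])
        return {s: hits[s] / totals[s] for s in totals}

    # If one slice lags while the rest are fine, target that slice's data
    # (collect more of it, relabel it, or augment it) instead of redesigning the model.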

Fake it until you make it

But what if we don’t have enough data to begin with? According to Andrew, you don’t always need huge data sets to train a system: 50 carefully designed examples will be enough for the neural network to understand what it’s supposed to learn. In other words, the focus needs to shift from big data to good data.

Gartner predicts that 60% of the data used to train AI systems will be synthetic. Nvidia has released Omniverse Replicator, a powerful engine for generating synthetic data for training neural networks.

Synthetic data plays a central role in data-centric AI. Using synthetic data goes beyond a simple pre-processing step to augment the data set for a learning algorithm, Andrew Ng said. However, since the use of synthetic data is not without controversy, methods such as data augmentation, improving labeling consistency, or collecting more data also make sense.
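
As an illustration of the data-augmentation alternative, here is a small NumPy sketch that produces label-preserving variants of a training image; the particular transforms (horizontal flip, light noise) are assumptions made for the example.

    # Simple data-augmentation sketch for images as NumPy arrays.
    import numpy as np

    def augment(image: np.ndarray, rng: np.random.Generator) -> list[np.ndarray]:
        """Return a few label-preserving variants of one training image."""
        flipped = image[:, ::-1]                           # horizontal flip
        noisy = image + rng.normal(0, 0.01, image.shape)   # small Gaussian noise
        return [image, flipped, noisy]

    rng = np.random.default_rng(0)
    augmented_set = [v for img in [np.zeros((32, 32))] for v in augment(img, rng)]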

Supervised learning can lead to disastrous results if training datasets are not properly vetted. For example, an earlier version of ImageNet contained photos of naked children, porn actresses, and college parties, all pulled from the Internet without consent. Meanwhile, the 80 Million Tiny Images dataset contained a variety of racist, sexist, and otherwise offensive annotations, including nearly 2,000 images tagged with the N-word and labels such as “rape suspect” and “child molester”.

In short: SSL

Yann LeCun said that real progress in AI depends on our ability to make machines learn how the world works, just as humans and animals do: largely by observing it, and a little by acting in it. Such world models allow us to perceive, interpret, reason, plan, and act. So how can machines learn world models? he asked. Two pertinent questions to consider here are:

First, what learning paradigm should we use to train world models? Yann LeCun’s answer is SSL. He gave an example: ask a machine to watch a video and learn a representation of what will happen next in it. In doing so, the machine can acquire vast background knowledge about how the world works, much as humans and animals do.

Second, what architecture should world models use? Yann LeCun proposed a new deep macro-architecture called the Hierarchical Joint Embedding Predictive Architecture (H-JEPA). For example, instead of predicting future frames of a video clip, a JEPA learns abstract representations of the video clip and of the clip’s future, so that the latter is easily predicted based on its understanding of the former. This can be accomplished using some of the latest advancements in non-contrastive SSL methods, in particular a method known as VICReg (Variance, Invariance, Covariance Regularization), he says.
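
For intuition, here is a hedged PyTorch sketch of the three VICReg terms computed on two batches of embeddings from two views of the same inputs; the coefficients and details are illustrative, not the paper’s exact recipe.

    # VICReg-style loss sketch: invariance, variance, and covariance terms.
    import torch

    def vicreg_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                    sim_w=25.0, var_w=25.0, cov_w=1.0) -> torch.Tensor:
        n, d = z_a.shape
        # Invariance: embeddings of two views of the same input should match.
        sim = torch.nn.functional.mse_loss(z_a, z_b)
        # Variance: keep each embedding dimension's std above a margin (avoids collapse).
        std_a = torch.sqrt(z_a.var(dim=0) + 1e-4)
        std_b = torch.sqrt(z_b.var(dim=0) + 1e-4)
        var = torch.relu(1 - std_a).mean() + torch.relu(1 - std_b).mean()
        # Covariance: push off-diagonal covariance toward zero (decorrelates dimensions).
        def cov_term(z):
            zc = z - z.mean(dim=0)
            cov = (zc.T @ zc) / (n - 1)
            off_diag = cov - torch.diag(torch.diag(cov))
            return (off_diag ** 2).sum() / d
        return sim_w * sim + var_w * var + cov_w * (cov_term(z_a) + cov_term(z_b))

    loss = vicreg_loss(torch.randn(64, 32), torch.randn(64, 32))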

In practical AI systems, we are leaning towards larger architectures pre-trained with SSL on large amounts of unlabeled data for a wide variety of tasks. Meta AI has language translation systems in which a single neural network handles hundreds of languages. Meta also has multilingual speech recognition systems capable of processing languages with very little data, let alone annotated data, Yann LeCun said.
