Building trust with responsible AI

By Susan Weiner
April 2, 2022

Artificial intelligence is used in almost every aspect of life. For some, AI symbolizes growth and productivity, but it also raises questions about the fairness, privacy, and security of the systems that use it. Many legitimate concerns exist, including biased decisions, workforce displacement, and weak security. The stakes are especially high for embodied systems: a self-driving car that makes a mistake can cause injury or death. Responsible AI addresses these challenges and makes AI systems more trustworthy and accountable.

A responsible AI system must meet the following objectives:

  • Interpretability: Interpreting a model means obtaining an explanation of how it makes its predictions. When an AI system makes a prediction for a user, that user is likely to want an explanation, even when the prediction is correct. A responsible AI framework should describe how interpretable models are built; a minimal sketch of one such technique follows this list.
  • Fairness: AI systems can make biased judgments about particular groups of people, and that bias usually originates in the training data. The more interpretable a model is, the easier it is to assess its fairness and correct any bias. A responsible AI framework therefore needs to explain how fairness is assessed and what to do when a model makes biased predictions.
  • Safety and security: AI systems do not behave deterministically. When faced with new situations, they are prone to making poor choices, and they can even be deliberately manipulated into making harmful decisions. We must therefore ensure the safety and security of these systems.
  • Data governance: The data used must be of high quality. If the data the AI consumes contains errors, the system can make bad decisions.
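
To make the interpretability objective concrete, here is a minimal sketch of one widely used model-agnostic technique, permutation importance, implemented with scikit-learn. The dataset and model are illustrative placeholders, not anything prescribed by this article.

```python
# A minimal sketch of permutation importance: shuffle each feature and
# measure how much test accuracy drops. Dataset/model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy most are the ones the model
# actually relies on -- a simple, human-readable explanation.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")
```

Explanations like this help a user see which inputs drove a prediction, which is the first step toward the accountability discussed below.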

How do we ensure that we are building responsible AI systems?

Reliable, user-centric AI systems should be built using basic best practices for software systems, combined with practices that address machine-learning-specific issues. Keep the following points in mind when designing reliable and responsible AI.

  • Design to augment and support users, giving them a variety of options. Use a human-centered design approach: build models with appropriate disclosures, clarity, and control for users, engage a wide range of users and usage scenarios, and incorporate their feedback before and throughout project development.
  • Rather than relying on a single metric, use several, so you can better understand the tradeoffs between different types of errors and experiences. Make sure your metrics are appropriate for the context and purpose of your system; a fire-alarm system, for example, should have high recall even if that means an occasional false alarm (see the sketch after this list).
  • ML models reflect the data they are trained on, so be sure to understand your raw data. Where that is not possible, as with sensitive raw data, understand your input data as well as you can while preserving confidentiality.
  • Understand the limitations of your dataset and communicate them to users whenever possible.
  • Regular testing and quality assurance ensure that your model works as expected and remains reliable.
  • Continuous monitoring and updating keep the AI functioning properly. Be sure to incorporate user feedback when you update the system.
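
As a concrete illustration of the multi-metric point above, here is a minimal sketch comparing accuracy, precision, and recall for a hypothetical fire-alarm classifier; the labels are invented for illustration.

```python
# A minimal sketch of evaluating with several metrics instead of one.
# The labels below are made up to illustrate a fire-alarm classifier.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # 1 = fire actually present
y_pred = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]  # alarm's predictions

# Accuracy alone looks fine (0.7), but it hides the one missed fire:
# recall is 2/3, while the two false alarms only cost precision (0.5).
print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
```

For a fire alarm, the missed fire (imperfect recall) is far more costly than the two false alarms, which is exactly the kind of tradeoff a single headline metric would hide.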

Users won’t use your services if they don’t trust your AI. People will not trust systems that use information they do not want to share, or that they believe will lead to biased conclusions. Explaining decisions, and taking accountability for them, goes a long way toward building trust. The need for this trust is what drives responsible AI.

Consulting Intern: Currently in her third year of a B.Tech at the Indian Institute of Technology (IIT), Goa. She is an ML enthusiast with a keen interest in data science, and a quick learner who strives to stay well acquainted with the latest developments in artificial intelligence.

