
New Code Pattern to Help Evaluate AI Fraud Prediction Models

A new code pattern has been developed to help you gain better insights and explainability by learning how to use the AI Explainability 360 (AIX360) Toolkit to demystify the decisions made by a machine learning model. This not only helps policymakers and data scientists build trusted, explainable AI applications, but also promotes transparency for everyone. To demonstrate the use of the AI Explainability 360 Toolkit, we apply the AIX360 algorithms to the existing fraud detection code pattern.

Helping Detect Fraud

Imagine a situation in which you visit a bank to apply for a $1M loan. The loan officer uses an AI-powered system that predicts whether you are eligible and, if so, how large the loan can be. Now suppose the AI system recommends that you are not eligible. You might have a few questions:

  • Will you, as a customer, be content with the service?
  • Would you want justification for the decision made by the AI system?
  • Should the loan officer verify the decision made by the AI system, and would you want them to understand the underlying mechanism of the AI model?
  • Should the bank fully trust and rely on the AI-powered system?

You may agree that it's not enough to just make predictions. Often, you need a deeper understanding of why a decision was made. There are many reasons to understand the underlying mechanism of machine learning models, including:

  • Human readability
  • Bias mitigation
  • Justifying decisions
  • Interpretation of decisions
  • Ensuring that there is trust and confidence in AI systems

The new code pattern helps answer these questions. In it, three algorithms are at work:

  • The Contrastive Explanations Method (CEM) algorithm, available in the AI Explainability 360 Toolkit, explains a prediction by highlighting what should be minimally present in, and what should be necessarily absent from, the applicant's profile to justify the model's decision.
  • The AI Explainability 360 ProtoDash algorithm works with an existing predictive model to show how the current customer compares to others with similar profiles and repayment records, alongside the model's prediction for that customer. This helps the loan officer assess the applicant's risk. Based on the model's prediction and the explanation of how it arrived at that recommendation, the loan officer can make a more informed decision.
  • The Generalized Linear Rule Model (GLRM) algorithm in the AI Explainability 360 Toolkit gives a data scientist an improved level of explainability when deciding whether the model can be deployed (see the sketch after this list).
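
To give a flavor of the GLRM step, here is a minimal sketch using the rule-based models in AIX360 on a tabular fraud dataset with a binary label. The file name, label column, and regularization values are illustrative assumptions and are not taken from the code pattern's notebook.

```python
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from aix360.algorithms.rbm import FeatureBinarizer, LogisticRuleRegression

# Illustrative dataset: applicant features plus a binary "fraud" label (both hypothetical).
df = pd.read_csv("fraud_dataset.csv")
y = df.pop("fraud")
X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=42)

# GLRM operates on binarized features, so discretize the columns first.
fb = FeatureBinarizer(negations=True)
X_train_b = fb.fit_transform(X_train)
X_test_b = fb.transform(X_test)

# Fit a logistic rule regression model; lambda0/lambda1 trade accuracy for rule simplicity.
glrm = LogisticRuleRegression(lambda0=0.005, lambda1=0.001)
glrm.fit(X_train_b, y_train)

print("Test accuracy:", accuracy_score(y_test, glrm.predict(X_test_b)))
print(glrm.explain())  # human-readable rules and coefficients for the data scientist to review
```

The rules returned by explain() are what let the data scientist judge whether the model's behavior is reasonable enough to deploy.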

Flow

Architecture diagram: analyzing fraud prediction AI models

  1. Log in to IBM Watson® Studio powered by Spark, initialize IBM Cloud Object Storage, and create a project.
  2. Upload the .csv data file to IBM Cloud Object Storage.
  3. Load the data file into the Watson Studio notebook.
  4. Install the AI Explainability 360 Toolkit and the Adversarial Robustness Toolbox in the Watson Studio notebook (a minimal install sketch follows this list).
  5. Get visualizations for explainability and interpretability of the AI model for the three different types of users.
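
As a flavor of step 4, here is a minimal notebook-cell sketch that installs both toolkits from their PyPI packages and verifies the imports; the code pattern's notebook may pin specific versions.

```python
# Minimal sketch of step 4 in a Watson Studio notebook cell.
# aix360 is the PyPI package for the AI Explainability 360 Toolkit;
# adversarial-robustness-toolbox is the PyPI package for ART.
!pip install aix360 adversarial-robustness-toolbox

# Verify the installation by importing the top-level packages.
import aix360
import art
```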

Using the Code Pattern

It's quick to get started using the code pattern. Simply:

  1. Create an account with IBM Cloud.
  2. Create a new Watson Studio project.
  3. Add the data.
  4. Create the notebook.
  5. Insert the data as a pandas DataFrame (see the sketch after this list).
  6. Run the notebook.
  7. Analyze the results.
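
As a simplified stand-in for steps 5 through 7, the sketch below reads the uploaded .csv into a pandas DataFrame and runs a quick sanity check before the explainability cells. In Watson Studio, the "Insert to code" option on the data asset generates the actual Cloud Object Storage access code; the file name and label column used here are illustrative assumptions.

```python
import pandas as pd

# Hypothetical stand-in for the cell that Watson Studio's "Insert to code"
# option generates; it reads a local copy of the uploaded .csv instead.
df = pd.read_csv("fraud_dataset.csv")

# Quick sanity checks before running the explainability cells:
# dataset shape and class balance of the (hypothetical) fraud label.
print(df.shape)
print(df["fraud"].value_counts(normalize=True))
```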

This new code pattern is part of The AI 360 Toolkit: AI models explained use case series, which helps stakeholders and developers fully understand the AI model lifecycle and make informed decisions.
