AI ethics is growing up — towards an AI maturity model organizations can use

  • Recruited staff with corporate development skills, preferably with AI experience, to act as traditional consultants: attacking a problem with professionalism, from selecting the problem to solve, through data governance, to the craft of building and testing a model for, among other things, credible results and fairness.
  • Developed material to guide these efforts, minimizing talk of ethics and maximizing advice for successful AI without causing harm to people or the organization. For example, the AI Ethics Maturity Model published by Salesforce and authored by my colleague Kathy Baxter.
  • Unravel complexity in your supply chain
  • Automate repetitive tasks
  • Real-time chatbot systems
  • Augmented (business) intelligence
  • Customer recommendation engines
  • Customer churn modeling
  • Dynamic or demand pricing strategies
  • Customer segmentation and market research
  • Fraud detection
  • Sales forecasting
  • It is usually a known application (low-hanging fruit).
  • It has no impact; therefore, it does not increase your credibility.
  • It neither proves anything nor develops around a compelling issue.
  • Discriminatory practices
  • Inadequate data governance
  • Intrusive personalization
  • Your model performs worse than the one it replaced
  • Overreaching, like IBM’s Watson Oncology model
  • Your model predicts the past, not the future
  • Unanticipated disruption in upstream and downstream systems and processes
  • Adoption failure
  • Be careful with algorithms designed for high throughput. High volume leads to uniformity, and uniformity leads to problems. A model that can read 10,000 resumes a day may be appealing, but its recommendations may be too uniform to be valuable.
  • Products you use with embedded AI must be considered. Few vendors are willing to disclose their proprietary algorithms, but you bear the responsibility for the results.
  • Data sourced from outside your organization, or the complexity of blending multiple data sources, is the leading cause of errant AI applications.
  • The “social context” refers to people. Anything that affects people is in the social context and is subject to meticulous analysis.
  • Fairness: This is the most challenging aspect to understand. Fairness has many definitions and is context-oriented. New mathematical models are emerging to test fairness.
  • Subsequential bias is the secondary and tertiary unintended effect of your model. As your model operates, no matter how thoroughly you scrubbed out the unethical aspects, its results can create opportunities for unethical secondary and tertiary effects.
  • Data: ML isn’t developed in Excel. The volume of data needed for an ML model is vastly more than a human can examine for errors or faults. Data quality tools are helpful to a point but only for one data source at a time. Merging tables creates hidden problems that even current data management tools don’t always spot.
  • ML and even deep learning can produce unpredictable errors when facing situations that differ from the training data, because such systems are susceptible to “shortcut learning”: statistical associations in the training data let the model produce correct answers for the wrong reasons. Machine learning, neural nets, and deep learning do not learn concepts; instead, they learn shortcuts that connect answers on the training set.
  • Adversarial Perturbations: Adversarial attacks involve generating slightly perturbed versions of the input data that fool the classifier (i.e., change its output) but stay almost imperceptible to the human eye.
  • Immutability: Great care must be taken to ensure the model cannot be tampered with.
  • When an organization pressures development teams, the ethics risks increase.
  • You adopt an “It’s only the math” excuse or “That’s how we do it.”
  • You engage in fairwashing: concocting misleading excuses for the results.
  • You don’t know that you’re doing these things.
  • The whole process is complicated and opaque in operation.
  • The organization is not used to introspection before embarking on a solution.
  • There is an “aching desire” to do something cool that obscures your judgment.
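The fairness bullet above mentions that new mathematical models are emerging to test fairness. One of the simplest is the demographic parity difference, which compares positive-prediction rates across groups; the metric implementation and the toy data below are illustrative, not from the article:

```python
def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rate between any two groups.

    preds:  iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with preds
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" is selected 75% of the time, group "b" only 25%
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A gap of zero means both groups are selected at the same rate; what counts as an acceptable gap is exactly the context-dependent judgment the article describes.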
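The data bullet warns that merging tables creates hidden problems that tools don't always spot. A common one is silent row inflation when a join key you believed was unique is not; the tables below are invented for illustration, and pandas can surface the fault with `merge`'s `validate` argument:

```python
import pandas as pd

orders = pd.DataFrame({"cust_id": [1, 1, 2], "amount": [10, 20, 30]})
# The dimension table has an unnoticed duplicate key for customer 2
customers = pd.DataFrame({"cust_id": [1, 2, 2], "region": ["N", "S", "S"]})

merged = orders.merge(customers, on="cust_id")
# Three orders silently become four rows; revenue is double-counted
row_count = len(merged)            # 4
total = merged["amount"].sum()     # 90, not the true 60

# Declaring the expected key relationship raises the error early
try:
    orders.merge(customers, on="cust_id", validate="many_to_one")
    caught = False
except pd.errors.MergeError:
    caught = True
```

Nothing in the default merge flags the inflation; only the declared expectation (`many_to_one`) turns a hidden data problem into a loud failure.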

My take

Instruction in ethics has proven ineffective in helping organizations deliver trustworthy applications. The ethics community seems to be evolving into one of the professional services, with skill in all aspects of MLOps. This is a positive step. However, organizations like the EU, UNESCO, and many others will continue to pound the ethical aspect, to the detriment of helpful guidance.



Neil Raden

Consultant, Mathematician, Author; focused on iAnalytics, Applied AI Ethics, and Data Architecture