The Ether: On Ethics

Casey Simple shares thoughts on the ethics of AI

I've been working at an enterprise-scale software company for 30 years, primarily on transformation efforts around new technology. As a result, I've seen both misfires and great innovation, which have given me a solid understanding of ethics within companies. Compliance is about asking ourselves, “Can we do this? Is it legal? Do our actions have legal implications?” Ethics looks at the “shoulds” in decision-making, not the “cans.” It is about asking ourselves, “Should we do this? Does this fit our brand? Does it follow our principles?”

Many times we don't make the effort to go that in-depth in our decision-making. Through my experience working with many different companies and people involved in serious decision-making, I've learned some things about how best to approach ethical decisions at work.


One thing to consider is implementing an ethics board. Most large companies, and some small ones, have compliance operations buried somewhere in the Legal or HR department. Those operations look at the legality of things. Ethics boards are what we should be using when it comes to new innovation. Because AI combines data and automated decision-making, it gives us an opportunity to look at those kinds of decisions in a new way.

The ethics board should be a high-level board reporting to the C-suite. It should examine whether or not we should carry out an action and how that action will impact the brand. Large companies receive news of data breaches often, and those breaches have brand implications.

If someone were to leak how your algorithms work to the public, would your organization be able to provide a good understanding of why you have those algorithms in place, what the decision-making behind those algorithms is, and how the data is appropriately used in those algorithms?

As another example, imagine your organization is a bank that uses AI to make loan decisions based on historical trends in past loan applications. But suppose those past applications reflect some discriminatory practices. Have you examined the data closely enough to make good decisions about how you use it and to remove those discriminatory practices? If there were a breach around that algorithm, and you hadn't done that review, would you be scrambling to give a good explanation for why you implemented that particular AI algorithm in your loan application process?
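A review like this can start with something as simple as a statistical check on the historical data before it ever trains a model. Here is a minimal sketch, not any particular bank's method, using the common "four-fifths rule" heuristic; the record fields ("group", "approved") are hypothetical placeholders for whatever protected attribute and outcome your data actually carries.

```python
# Minimal sketch of a disparate-impact check on historical loan data.
# Four-fifths rule heuristic: a group whose approval rate falls below
# 80% of the highest group's rate is a red flag worth reviewing.
# Field names ("group", "approved") are hypothetical placeholders.

def approval_rates(records):
    """Map each group to its approval rate."""
    totals, approved = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if r["approved"] else 0)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(records, threshold=0.8):
    """Return groups whose approval rate is below the threshold
    fraction of the best-performing group's rate."""
    rates = approval_rates(records)
    best = max(rates.values())
    return sorted(g for g, rate in rates.items() if rate < threshold * best)

# Illustrative history: group B is approved far less often than group A.
history = (
    [{"group": "A", "approved": True}] * 8 + [{"group": "A", "approved": False}] * 2 +
    [{"group": "B", "approved": True}] * 4 + [{"group": "B", "approved": False}] * 6
)
print(flag_disparate_impact(history))  # group B gets flagged for review
```

A flag here does not prove discrimination; it tells the ethics board exactly which slice of the historical data deserves a closer human look before the model ships.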

There is potential risk in AI. We are moving so fast with the application of algorithms and data use that sometimes we don’t take a step back and look closely at the ethical decisions that are being made as we implement these algorithms.

Obviously, this kind of review takes time, but a number of companies have already put guidelines in place. Google has its seven principles around AI. Start with guardrails like these, which at least create an environment where people are thinking about critical principles as they develop AI.

If you have a robust, high-level ethics board, you can take things to the next level. The board should include people who come from a variety of areas in the organization. There should be a CIO presence. There should be a CMO presence. There should probably be an HR presence. There should certainly be a CFO presence, and maybe the board reports to the CFO, depending on where that role sits in the company. That is a typical home for an independent, strategic review board where people look at how decisions are made and what questions they might have about those decisions.

Notice that I did not mention the CSO, because that role often sits under or is closely associated with the CIO. Security is certainly an important piece, but oftentimes it is really part of the compliance side. The ethics committee is concerned with examining decision-making and the use of data, not with how data could be misused or the security around it.

An ethics board for AI should not meet merely once a year. There should be regular meetings to ensure continuous review as progress is made in the use of AI. Review should be part of the routine process of implementing AI. Apply those guiding principles and look at things regularly so that review becomes a normal part of the AI development process.

There are key benefits to establishing an ethics board at this level. The action item to consider is to go beyond your guidelines and principles and really begin to develop a robust review of AI's implications at your company. With this potential come some missteps and misfires along the way to discovery, and there will be much discussion around those.

Sometimes there is a mode of thinking in enterprise companies that if the work is not related to something like medical data, and it is just mundane commercial transactions, then ethical review is not important. Those transactions carry just as much potential risk to customer relationships and the brand. Just because you think something is mundane doesn't mean you shouldn't review it.

There are two advantages to taking a closer look at enterprise ethics. One is the ability to inform ourselves about how we put data and algorithms together to implement AI, and to understand whether there are things we could be doing differently. How are we handling outliers, for example? Do we ignore them? Do we incorporate them? Do we review them? Do they have an impact on decision-making? Is there bias, or anything else that could be ethically questionable?
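The outlier questions above can be made concrete with a simple check. The sketch below, which assumes nothing beyond a plain list of numeric values, flags outliers with the common 1.5×IQR rule and shows how much keeping or dropping them moves a summary statistic; it is an illustration of the review step, not a prescription for any particular pipeline.

```python
# Sketch of an outlier review step using the 1.5*IQR rule:
# which points are outliers, and how much does keeping or
# dropping them change the result a decision might rest on?

import statistics

def iqr_outliers(values, k=1.5):
    """Return values lying more than k interquartile ranges
    outside the first and third quartiles."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

scores = [52, 55, 54, 53, 56, 51, 55, 98]  # one extreme value
outliers = iqr_outliers(scores)
kept = [v for v in scores if v not in outliers]

print("outliers:", outliers)
print("mean with outliers:", statistics.mean(scores))
print("mean without outliers:", statistics.mean(kept))
```

Whatever the board decides about a flagged point, ignore it, keep it, or investigate it, the decision is now explicit and reviewable rather than an accident of the data pipeline.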

The second is that we will have a much better understanding of how decisions are made within the company in general, and that will inform our human judgment. As we review this kind of decision-making for AI, it lets us flex those same muscles for the human judgment that occurs inside the company as well, and improve our ethical decision-making as a company overall.