
Auditors May Have to Keep Artificial Intelligence From Cooking the Books


Some smart people are starting to contemplate how to keep artificial intelligence safe and in check. It’s a fascinating topic, especially after Facebook shut down an AI chatbot experiment in August when the bots began developing their own shorthand language. Plus, I have a feeling it’s going to trickle into the scope of work for an audit team through Sarbanes-Oxley and internal control testing. Here’s why:

Who else has access to all public companies to enforce future AI compliance mandates?

I can’t think of a better group. Auditors are already poking around the IT department for other controls. It only seems logical that AI would fall into the lap of the IT audit and attestation teams to ensure that the financial data (and humanity, too) is safe from manipulation. No one wants to see a self-serving robot cooking the books. You can’t put an artificially intelligent machine in jail, after all. And its antics may have nothing to do with the developer if the robot devised its devious plan to wreak havoc without human interference.

Maybe this seems silly, but it’s a big issue according to AI pioneers. In a recent TED Talk, Stuart Russell quoted Alan Turing from 1951:

Even if we could keep the machines in a subservient position, for instance, by turning off the power at strategic moments, we should, as a species, feel greatly humbled.

Russell says that being able to shut the power off is important, along with some other safety considerations, when we set out to build a super-intelligent robot. He suggests three principles (i.e., programmed characteristics) that all AI should have; a toy sketch follows the list:

  1. Altruism or “that the robot’s only objective is to maximize the realization of human objectives, of human values.”
  2. Humility or “avoidance of single-minded pursuit of an objective.”
  3. Learning from bad behavior or negative human interactions (e.g., the robot is turned off for not acting appropriately and doesn’t do it again).
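
Purely for illustration, here’s a toy Python sketch of what those three characteristics could look like in code. Everything in it (the SafeAgent class, the benefit scoring, the banned-action logic) is a hypothetical of mine, not anything Russell has published:

```python
class SafeAgent:
    """Toy sketch of the three principles above. Illustrative only."""

    def __init__(self, candidate_objectives):
        # Humility: the agent is uncertain which objective humans really
        # want, so belief is spread across candidates rather than locked
        # onto a single goal it would pursue single-mindedly.
        self.beliefs = {o: 1.0 / len(candidate_objectives)
                        for o in candidate_objectives}
        self.banned_actions = set()

    def choose_action(self, actions, benefit):
        # Altruism: the only score is the estimated benefit to humans,
        # averaged over the agent's uncertain beliefs about their values.
        allowed = [a for a in actions if a not in self.banned_actions]
        return max(allowed, key=lambda a: sum(
            p * benefit(a, o) for o, p in self.beliefs.items()))

    def record_shutdown(self, last_action):
        # Learning from bad behavior: a human hitting the off switch is
        # treated as evidence the last action was unwanted, so the agent
        # never repeats it.
        self.banned_actions.add(last_action)


# Hypothetical usage: the agent prefers the action humans likely value.
agent = SafeAgent(["accurate_reporting", "cost_savings"])
score = lambda action, obj: 1.0 if action == "post_entry" else 0.1
print(agent.choose_action(["post_entry", "hide_loss"], score))  # post_entry
agent.record_shutdown("hide_loss")  # switched off after a bad act
```

The point of the sketch is that the off switch feeds back into behavior: shutting the agent down teaches it something, rather than merely interrupting it.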

Sam Harris, a philosopher who also thinks about AI, likewise warns of the dangers of runaway superintelligence, especially as it becomes more general and less task-specific. The thing these two guys both allude to is the danger of letting AI loose without internal controls.

I foresee IT general controls (ITGCs) evolving to include artificial intelligence control considerations and testing. I envision it dropping into the familiar list:

  • Logical access controls
  • Physical access controls
  • Computer operations (backups, job processing) controls
  • Program change management controls
  • Program development controls
  • Artificial intelligence and human override controls

An example control might read:

Artificially intelligent machines and applications are programmed to permit human override and shutdown.
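
To make that concrete, here’s a minimal Python sketch of the pattern such a control might require: an agent loop that checks a human-created kill switch before every step. The flag-file path and the loop itself are my own assumptions, not an established standard:

```python
import os
import time

# Hypothetical kill-switch path a human operator can create to force shutdown.
KILL_SWITCH = "/etc/ai/override.flag"

def human_override_requested():
    # The control in action: any human-created flag file halts the machine.
    return os.path.exists(KILL_SWITCH)

def run_agent(step):
    while True:
        if human_override_requested():
            print("Human override detected; shutting down.")
            break  # the machine permits shutdown instead of resisting it
        step()         # do one unit of whatever the AI does
        time.sleep(1)  # re-check the override between steps
```

An auditor’s test of operating effectiveness could be as simple as creating the flag file in a test environment and confirming the process actually halts.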

It may be a while before this type of control consideration pops up in the mainstream and auditors begin testing it. But it’s worth putting into the universe, since it may happen sooner than we think.

Earlier:
Automation to Artificial Intelligence: New Frontiers for Auditors

Image: Photo by Alex Knight on Unsplash