In case you missed the news earlier this month, all of the PCAOB reports are out for the Big 4's 2015 cycle of inspections. What schadenfreude awaits us? Let’s take a look.
Big picture -- in terms of overall deficiency rate -- PwC is the winner this year. Though PwC is only slightly ahead, the firm had deficiencies in 22% of the audits selected for inspection, compared to Deloitte (24%), EY (29%), and KPMG (38%). However, let’s refrain from patting PwC on the back, because a big chunk of the failures were serious enough to require restatements. Although KPMG came in last again this year, we can still give the firm the “most improved” award after its abysmal showing last year.
In recent history, the deficiency rates are neck and neck. The ranking from one year to another is somewhat arbitrary. After all, PCAOB examiners inspect only a sample of about 50 audits from each of these firms. I’m sure in some years a particular firm gets lucky and starts to pull ahead.
It’s the meat and potatoes of the reports that proves more meaningful and, dare I say, more fun.
Personally, I like to see how the IT audit teams stack up when it comes to deficiencies related to IT general controls (“ITGCs”). In sum, let’s just say there’s room for improvement.
For example, PwC, EY, and KPMG all had deficiencies related to their IT control testing. Surprisingly, Deloitte walked away from this matchup relatively unscathed. Nice job, Deloitte -- we’ll give you some props for that. But, again, maybe you just got lucky.
That said, this year Deloitte did have some issues with the accuracy and completeness of system-generated reports, which falls into the IT testing bucket. We can all agree that even if a report looks pretty, it may not be accurate or worthy of our precious billable time; someone should double-check that it’s legit. But hey, everyone struggles with that one, and it’s been a PCAOB favorite to call out for a while.
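For the uninitiated, that double-check usually boils down to a reconciliation back to the source data: does every source record appear in the report, and do the amounts agree? A minimal sketch of the idea (the field names and figures below are hypothetical, not from any PCAOB report):

```python
# Illustrative sketch: validating a system-generated report before relying on it.
# All data, field names, and amounts here are made up for demonstration.

def validate_report(report_rows, source_rows, key="invoice_id", amount="amount"):
    """Check completeness (every source record made it into the report)
    and accuracy (the amounts agree) before placing any reliance on it."""
    report_by_key = {r[key]: r[amount] for r in report_rows}
    missing = [s[key] for s in source_rows if s[key] not in report_by_key]
    mismatched = [s[key] for s in source_rows
                  if s[key] in report_by_key and report_by_key[s[key]] != s[amount]]
    return {"complete": not missing, "accurate": not mismatched,
            "missing": missing, "mismatched": mismatched}

source = [{"invoice_id": 1, "amount": 100}, {"invoice_id": 2, "amount": 250}]
report = [{"invoice_id": 1, "amount": 100}]  # invoice 2 never made it in

result = validate_report(report, source)
# result["complete"] is False -- the report dropped invoice 2
```

Pretty report, missing a record: exactly the kind of thing that slips by when nobody re-foots the thing.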
As for the rest of them -- it’s an interesting mix of issues:
PwC screwed up its testing of compensating controls after determining that an ITGC change management control was crap for Issuer J. Based on the report’s findings, I imagine this is what happened (with a little poetic license, of course):
1. The auditors decided the control in place to stop someone from tinkering with the source code that processes revenue transactions (read: the computer’s instructions for handling revenue) didn’t pass muster. Exception noted.
2. After the finding surfaced, the IT audit staff was beaming with pride over such a juicy exception, the audit manager wanted to wring their necks, and the client felt sick.
3. The auditors circled the wagons and scrambled to think of other controls -- doing as little additional testing as possible -- that supported the idea that even though someone at the company “could” change the code without going through the formal change process, it was unlikely that anyone did.
4. Of the six compensating controls they came up with, four were so random that the PCAOB didn’t see how they even addressed the risk of unauthorized changes.
5. A fifth pointed back to the original change management control (which, remember, was not working). I’m sure they were hoping no one would notice the circular logic.
6. The last one was just a nice idea. No one actually did any testing to verify that the control worked.
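The triage the PCAOB effectively performed on those six controls can be sketched in a few lines: a compensating control only counts if it actually addresses the risk, isn’t the failed control wearing a hat, and was actually tested. Everything below -- control names, the risk label, which controls fail which check -- is hypothetical:

```python
# Hypothetical sketch of triaging claimed compensating controls. A control
# only "compensates" if it addresses the risk, isn't the failed control
# itself, and was actually tested. All names below are invented.

FAILED_CONTROL = "change_management_approval"
RISK = "unauthorized_code_change"

claimed = [
    # the circular one: it IS the control that failed
    {"name": "change_management_approval",
     "addresses": {"unauthorized_code_change"}, "tested": True},
    # a "random" one: doesn't address the risk at all
    {"name": "quarterly_headcount_review",
     "addresses": {"staffing"}, "tested": True},
    # the nice idea: relevant, but nobody tested it
    {"name": "code_diff_monitoring",
     "addresses": {"unauthorized_code_change"}, "tested": False},
]

def triage(control):
    if control["name"] == FAILED_CONTROL:
        return "circular: this IS the failed control"
    if RISK not in control["addresses"]:
        return "irrelevant: does not address the risk"
    if not control["tested"]:
        return "untested: nice idea, no evidence"
    return "ok"

results = {c["name"]: triage(c) for c in claimed}
```

Run the six controls from the report through a filter like this and, per the PCAOB, exactly zero come out tagged "ok."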
The auditors also took a leap of faith that ITGCs would be clean and didn’t adjust sampling to beef up the application control testing. According to the report:
The Firm tested an application control at an interim date using a sample of one item for each relevant scenario; this approach was based on an assumption that ITGCs were operating effectively, which was not supported due to the deficiencies in the testing of ITGCs that are described above. As a result, the Firm's testing of this control was insufficient.
Come on, that’s just lazy.
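The logic the inspectors are getting at is simple: a test-of-one per scenario only hangs together if the ITGCs underneath are effective; once ITGC testing falls apart, so does the sample. Here’s an illustrative sketch -- the sample sizes are placeholders I made up, not anyone’s actual methodology:

```python
# Illustrative only: how application-control sample sizes depend on whether
# ITGCs can be relied upon. The numbers are placeholders, not any firm's
# (or the PCAOB's) actual guidance.

def app_control_sample_size(itgcs_effective, scenarios):
    """With effective ITGCs, testing one item per relevant scenario may be
    defensible; without that reliance, each scenario needs a real sample."""
    per_scenario = 1 if itgcs_effective else 25  # 25 is a made-up figure
    return per_scenario * scenarios

# The approach in the report: one item per scenario, assuming clean ITGCs
relied = app_control_sample_size(True, scenarios=3)

# Once the ITGC testing failed, that assumption (and the sample) should
# have been thrown out and the testing expanded
not_relied = app_control_sample_size(False, scenarios=3)
```

Three items versus seventy-five. You can see why nobody wanted to revisit the assumption.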
EY had a doozy called out for Issuer A. Apparently, EY failed to test ITGCs altogether for a handful of the client’s critical systems. In other words, the auditors blindly trusted those systems to spit out the numbers for revenue transactions. Sure, the client was complex, with multiple locations and lots of different systems for recording revenue, but that’s no excuse. If anything, it’s even more reason to do a good job.
In addition to blindly trusting some of the client’s systems, these same auditors of Issuer A blindly trusted internal audit, too. Inspectors found that the team just attached internal audit’s workpapers -- identified deficiencies and all -- without any further testing to figure out what those deficiencies meant in the grand scheme of things. Another case of sheer laziness.
Oh, and as an aside, EY also fell victim to the same finding that Deloitte ran into this year when they failed to test for the accuracy and completeness of reports.
KPMG had two issuers with big ITGC findings. The first one, for Issuer B, is a nasty audit deficiency. KPMG found a bunch of control deficiencies and then didn’t do enough additional work to make the risk go away. Instead, the firm basically waved them off (with bogus compensating controls) and issued a report without a significant deficiency or material weakness. The PCAOB said:
Specifically, for each control either (1) the Firm failed to identify that the compensating control was not designed to prevent or detect unauthorized changes to these applications and data, as the compensating control was focused on the approval of planned changes to the systems' code or (2) the compensating control was also affected by the ITGC deficiencies.
Another case of circular logic, it seems. See, PwC, you’re not alone.
The next finding, this time for Issuer G, relates to administrator access, so it’s exciting. After all, I did say that companies had better tighten up their access or they’ll hate themselves later. In this case, the client handed out access to an application willy-nilly, and it created a significant fraud risk. KPMG didn’t catch that some personnel had a little too much power and could manipulate data if they wanted to:
The Firm failed to sufficiently test ITGCs over user access to the issuer's. Specifically, the Firm failed to evaluate the appropriateness of the access of certain personnel who had administrator-level access both to ALL applications and to databases supporting the ALL calculation, even though the Firm had identified this access when testing user-access controls.
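The test the inspectors wanted is not complicated: pull everyone holding administrator-level access and check them against whoever was actually approved to have it. A hypothetical sketch (the users, roles, and approved list below are all invented):

```python
# Hypothetical sketch of a user-access test: flag anyone holding
# administrator-level access who isn't on the approved list. Every name
# and role here is invented for illustration.

def flag_unapproved_admins(access_grants, approved_admins):
    """Return users with admin rights who were never approved for them."""
    return sorted({g["user"] for g in access_grants
                   if g["role"] == "admin" and g["user"] not in approved_admins})

grants = [
    {"user": "alice", "role": "admin"},      # approved -- fine
    {"user": "bob",   "role": "admin"},      # not approved -- a finding
    {"user": "carol", "role": "read_only"},  # not admin, not in scope
]

findings = flag_unapproved_admins(grants, approved_admins={"alice"})
# findings == ["bob"]
```

Per the report, KPMG had already identified the access while testing user-access controls; the miss was never evaluating whether that access was appropriate -- the second half of the test above.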
I’ll anxiously await the next cycle of inspections. Maybe we’ll see some improvement, maybe not. Let’s hope for under 20%? Can we dare to dream? In the meantime, auditors had better dig deep on their ITGC testing -- or at least try not to forget about it this year (coughEYcough).