The Auditor (And Compliance Professional) As Behavioral Scientist
By Jose Tabuena, JD, CFE, CHC
As the compliance field evolves, auditors should take heed of the power of data analytics and predictive models. Program evaluation is an area ripe for applying such techniques, both to assess compliance effectiveness and to nudge employee behavior toward supporting an ethical workplace. But keep in mind that predictive models yield benefits only if they are appropriately acted upon.
Behavioral science provides a powerful set of tools for acting on data analytic indications when behavior change is the order of the day. Specifically, “behavioral economics” combines elements from economics and psychology to understand human behavior, even when it is irrational.
The U.S. Department of Justice (DoJ) has signaled strong messages on the importance of having an “effective” compliance program, finally bringing the conundrum of program measurement to the forefront. Although the Federal Sentencing Guidelines and their “elements” of compliance have existed for over twenty years, the formal standards and processes by which compliance programs are currently measured for effectiveness remain notoriously sketchy. The government's trend toward providing more guidance has continued, with the DoJ stating that it plans to release a set of sample questions to give companies an idea of what investigators and prosecutors are concerned with.

Apart from the ability of “effective” compliance programs to reduce the risks of high fines and liability, management has a financial stake in measuring the effectiveness of a compliance program. Operating a compliance program requires a significant investment in time and resources. Poorly functioning compliance programs are likely to waste money, divert scarce resources, and operate sub-optimally with respect to mitigating serious, business-threatening risks.
Moreover, the positive effects of a compliance program may include better financial performance. Studies have started to show that, in the long run, a truly ethical and law-abiding corporation is more likely to benefit on several measures: customer loyalty, increased employee retention, and a strengthened public reputation.
The role of the new DoJ compliance counsel, in assisting federal prosecutors to develop appropriate benchmarks for evaluating compliance programs, is to provide expert guidance to help prosecutors evaluate whether the implementation of such measures has been effective and has had a remedial effect. Naturally, there is acute interest among compliance professionals in the work and impact of the DoJ compliance counsel. This position will be a focal point for determining the benchmarks for effective compliance programs, and there is legitimate concern about whether sufficient input from the industry compliance community will be considered in connection with future developments. Compliance professionals have had more than 20 years of practical experience in direct observation of what effectiveness means for organizational compliance programs, and the DoJ is only now zeroing in on this in a focused and systematic manner. The hope is that the DoJ will allow for constructive input from the compliance community on the meaningful measures of an effective compliance program.
Applying the “law” is not enough
The legal system is replete with examples where assumptions about how the world works, used as the basis for establishing laws and regulations, have proven dreadfully wrong. Take the value of eyewitness testimony as one example. For much of history, prosecutors could argue for convictions based on the strength of a single eyewitness: the more confident the witness, the more seemingly infallible the testimony. That is, until psychologists conducted controlled studies on the reliability of eyewitness perceptions and the ability to accurately recall from memory.
After initial resistance, the criminal justice system eventually recognized that eyewitness testimony can be extremely unreliable depending on the circumstances of the event and how potential suspects are presented to the witness. As a result, strict procedures for showing photographs and lineups for suspect identification have evolved. Many judges now allow psychologists to provide expert testimony on eyewitness reliability during trials. The emergence of DNA testing and the release of wrongly convicted individuals further demonstrate the danger of untested assumptions.
The modern American law school started with the belief that law can be understood and taught as a science. This belief was based on the ideology that what mattered was understanding and rationalizing the law applied in courtrooms by judges. The search for underlying principles provided the basis for the science of law. The body of cases, correctly analyzed, would reveal a set of internally consistent principles inherent in either human nature or culture and expressed case by case through the judges.
This approach of the law as a science has since fallen by the wayside. One only has to look at the divided opinions of the U.S. Supreme Court to recognize the fallacy of the law as a robust science. However, the myth that legal principles result in rational truth still persists. One example is the definition of an effective compliance program under the Federal Sentencing Guidelines. The elements of an effective program seem conceptually sound, but how do we know that applying them actually promotes a culture of compliance and prevents violations of law?
The fallacy is that while legal principles may seem rigorous in theory, they may not reflect reality. The idea of a classic mathematical proof is to begin with a series of statements that can be assumed to be true or that are self-evidently true. Then by arguing logically, it is possible to arrive at a conclusion. If the statements are correct and the logic is flawless, then the conclusion will be undeniable.
Scientific theory, on the other hand, can never be proved with the certainty of a mathematical theorem. It is only considered highly likely based on the evidence available. Scientific proof relies on perception and observation, both of which are fallible and provide only approximations of the truth. This is why experiments are performed to test the predictive power of a scientific hypothesis.
Legal principles often make assumptions about human behavior—such as the accuracy of eyewitness perceptions or the view that investors act rationally in financial markets. But science has started to reveal the weaknesses and subtleties underlying those assumptions.
Applying behavioral science
Principles, such as compliance program components, shouldn’t be taken on faith. When practical, the underlying elements should be field-tested using randomized controlled trials to measure their validity.
For instance, simply having a code of conduct and related compliance policies is obviously not enough to influence employee behavior. So what is it about a code of conduct, and about how it is written, communicated, and taught to the workforce, that can make a real difference?
In the field of behavioral economics, priming has proven to be an effective tool for subtly encouraging honest behavior. Priming occurs when an individual is exposed to a specific stimulus that influences his or her ensuing actions. In studies by behavioral economist Dan Ariely, researchers “primed” people with a stimulus involving morality and then observed how often cheating occurred when participants solved small math problems. When the participants were asked to recall the Ten Commandments, cheating significantly decreased compared with those who were instead asked to recall the names of Shakespeare's sonnets.
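Auditors who want to field-test such interventions in their own organizations, as suggested above, could compare outcomes between a randomly assigned “primed” group and a control group. The sketch below is purely illustrative: the scenario of flagged expense reports, the group sizes, and the incident counts are hypothetical assumptions, and the analysis simply applies a standard two-proportion z-test to the results.

```python
# Illustrative sketch: comparing incident rates between a randomly assigned
# "primed" group (e.g., shown an honesty reminder before submitting expense
# reports) and a control group. All figures below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results from a randomized pilot
control_incidents, control_n = 46, 500  # flagged reports, control group
primed_incidents, primed_n = 28, 500    # flagged reports, primed group

# Two-proportion z-test: is the primed group's incident rate lower?
stat, p_value = proportions_ztest(
    count=[primed_incidents, control_incidents],
    nobs=[primed_n, control_n],
    alternative="smaller",  # one-sided: primed rate < control rate
)

print(f"Control rate: {control_incidents / control_n:.1%}")
print(f"Primed rate:  {primed_incidents / primed_n:.1%}")
print(f"One-sided p-value: {p_value:.4f}")
# A small p-value suggests the priming intervention is associated with fewer
# flagged reports -- evidence worth weighing, not proof by itself.
```

Even a simple test like this turns an intuition ("the reminder seems to help") into an estimate that can be tracked and compared over time.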
Similar studies provide additional behavioral insights. It is easier to be just a little dishonest. Experiments show that we are more likely to cheat over a small amount of money than a large amount. People also tend to find it harder to be dishonest when interacting with another person than with an impersonal mechanism. The belief that we make rational decisions is a myth that belies the complexity of human behavior.
How do you know a program is working?
How can the auditor tasked with evaluating a compliance program take into account the findings of behavioral scientists? In the short history of the compliance profession, a variety of distinct approaches have been attempted. Yet any approach taken in isolation may yield unreliable information.
An auditor evaluating an established compliance program could start with evidence that the organization has consistently implemented the elements of a program as defined by the Federal Sentencing Guidelines. But that is just the beginning. The experienced program evaluator recognizes that measuring implementation is different from the more difficult task of evaluating effectiveness.
One might look to see whether the compliance program incorporates “best practice” features adopted by leading companies. As to the code of conduct, one could inquire whether it was written in simple, understandable text and distributed to all employees. However, experience shows that just because employees received a reasonably well-designed code of conduct does not mean that they understood it, found it useful, or took it seriously.
Academic research indicates that the strongest indicators of workplace misconduct are fear of retaliation and the degree of confidence employees feel when raising issues. So data on employees' willingness to address matters with their immediate supervisor or to use the compliance hotline, as well as their views on what would happen if they reported misconduct, can prove meaningful as a measure of effectiveness.
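As a rough illustration of how such survey signals could be summarized for a program evaluation, the sketch below aggregates hypothetical responses into a willingness-to-report rate and a fear-of-retaliation rate by department. The survey fields, the five-point scale, and the thresholds are assumptions made for illustration, not an established standard.

```python
# Illustrative sketch: summarizing hypothetical culture-survey responses into
# two simple signals per department. Field names, the five-point scale, and
# all data are assumptions made for illustration only.
from collections import defaultdict

# Each record: (department, "I would report misconduct" 1-5, "I fear retaliation" 1-5)
responses = [
    ("Sales", 4, 2), ("Sales", 2, 5), ("Sales", 5, 1),
    ("Finance", 5, 1), ("Finance", 4, 2), ("Finance", 3, 3),
    ("Operations", 2, 4), ("Operations", 3, 4), ("Operations", 1, 5),
]

by_dept = defaultdict(list)
for dept, would_report, fears_retaliation in responses:
    by_dept[dept].append((would_report, fears_retaliation))

for dept, rows in sorted(by_dept.items()):
    n = len(rows)
    # Share answering 4 or 5 ("agree"/"strongly agree") on willingness to report
    willing = sum(1 for wr, _ in rows if wr >= 4) / n
    # Share answering 4 or 5 on fear of retaliation
    fearful = sum(1 for _, fr in rows if fr >= 4) / n
    print(f"{dept:<11} willing to report: {willing:.0%}   fears retaliation: {fearful:.0%}")
```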
The current obstacle is the lack of an accepted methodology for consistent measurement, along with the absence of a comprehensive set of metrics against which to benchmark a compliance program. The means by which organizations measure the effectiveness of their programs still vary, and in some cases organizations can be lulled into a false sense of security by evaluations that may not be empirically based or reliable.
That is why the recent moves by the DoJ, and particularly the hiring of a compliance counsel, are such promising developments. Compliance professionals have been seeking open discussion and analysis of the measurement challenge, including consideration of possible outcome measures by which organizations could demonstrate the impact of their programs (e.g., observed misconduct, frequency and nature of reporting, fear of retaliation, direct measurement in risk areas where this is possible). Doing so could encourage companies to undertake high-quality evaluative efforts, and prompt boards of directors to review and reflect on the results of such efforts.
Subject matter expertise
When considering the compliance program as a broad control and evaluating program elements, don't neglect the value of technical expertise. While auditors have expertise in the methodology of program evaluation (itself a valuable skill), subject matter expertise is just as essential. It does happen that auditors miss a significant problem because the evaluation approach was structurally blind to the domain and members of the review team did not truly understand the details of “how it works.” And technical folks are sometimes nudged outside their core expertise, such as when audit and professional services teams strive for high utilization of their staff. Have a fraud specialist on the team for financial controls, a cyber-expert during an information security review, and definitely have a compliance specialist when evaluating a compliance program.
As the field of compliance management continues to mature, reliable means to evaluate compliance program effectiveness will increasingly become imperative. This is true not only for auditors assisting operational leaders who must effectively manage risk, but for those in enforcement who need to make informed decisions, consistent with announced policies, relating to prosecution and punishment.
Originally published in Compliance Week