A Day in the Future - Tomorrow's Tasks and Reality of a GRC Officer
Patrick Henz, Head of Governance and Compliance, Primetals Technologies
As the scheduled audit by the Government’s Department of Artificial Intelligence approached, June, the company’s GRC Officer, recalled that 10 years ago, when she began her career, integrity was about humans and defined as “value-based behavior.”
The business environment back then was as complicated as today, so personal integrity had to guide employees in scenarios that had not yet been defined by laws and guidelines.
June liked different philosophies and had been familiar with Ayn Rand’s work, especially her final and perhaps most important novel, “Atlas Shrugged.” Here Rand wrote: “Achievement of your happiness is the only moral purpose of your life, and that happiness, not pain or mindless self-indulgence, is the proof of your moral integrity, since it is the proof and the result of your loyalty to the achievement of your values.”
As head of GRC, June understood the interaction between processes and behavior, and wanted to support the employees to help them find happiness at work. To achieve this, she assessed tools and guidelines to ensure they were strong, but no more bureaucratic than necessary. She explained the company’s values to the employees, and why they were relevant not only for the organization, but also for the employees and society. Just the day before, June had updated the simulation used to illustrate to new hires the cost of corruption in a fictional country. For other training events, she used different cases to discuss ethical blindness and make people aware of how their behavior could be manipulated in stressful situations. Emotions, positive or negative, could subconsciously influence decisions. In extreme situations, individuals might act against their own values and attitudes. June felt as though she were the corporate storyteller. Her predecessors sat around the fireplace, but today Virtual Reality glasses created the imperfect illusion.
It was a rainy afternoon, and June went to get another cup of coffee before continuing her analysis of the algorithm of the Artificial Intelligence (AI) application. Her tasks were challenging. Intelligent software had replaced about 40 percent of the human workforce, including expert and middle-management positions. Nevertheless, everything was still based on human creativity and integrity. Even the more advanced software still operated with coding done by human software designers. Because of this, the IT department had become one of her target groups. Ironically, they had been hired with the support of the HR AI software to ensure they were a good fit for the organization and represented an adequate level of diversity. The latter was key, as diverse groups seemed to produce the best results. Different points of view not only infuse fresh ideas, but also make the group less vulnerable to psychological biases.
Even with positive human values and the best coding, the AI’s decision could only be as good as the utilized sensors and the data input they created. One of June’s daily morning routines was to ensure that the required information streams reached the AI. On the screen she could see the data packages, including their size in megabytes. Even though a filter tried to identify “fake data,” she verified the efficiency of the software by taking samples from the filtered-out group along with the ones that passed the filter. In the end, artificial and human employees are not that different: if they work with flawed information, their decisions and output are suboptimal. The integrity of data was a priority, and a risk, as hackers who were not able to directly enter the company’s IT infrastructure could still sabotage the flow of information before it reached the organization.
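June’s sampling routine can be sketched in code. The following is a minimal, hypothetical illustration (the record fields, the `is_fake` reviewer label, and the zero-size filter rule are all invented for the example): to audit a filter honestly, you must sample from both the rejected stream (catching false positives, genuine data thrown away) and the accepted stream (catching false negatives, fakes that slipped through).

```python
import random

def audit_filter(records, passes_filter, sample_size=5, seed=42):
    """Audit a 'fake data' filter by sampling from BOTH streams.

    Each record carries a ground-truth 'is_fake' label, assumed to be
    supplied by a human reviewer after the fact.
    """
    rejected = [r for r in records if not passes_filter(r)]
    accepted = [r for r in records if passes_filter(r)]
    rng = random.Random(seed)
    sample_rej = rng.sample(rejected, min(sample_size, len(rejected)))
    sample_acc = rng.sample(accepted, min(sample_size, len(accepted)))
    # False positives: genuine records the filter threw away.
    false_pos = sum(1 for r in sample_rej if not r["is_fake"])
    # False negatives: fake records that slipped through.
    false_neg = sum(1 for r in sample_acc if r["is_fake"])
    return {"false_positives": false_pos, "false_negatives": false_neg}

# Toy data: package size in megabytes plus a reviewer-supplied label.
records = [
    {"size_mb": 1.2, "is_fake": False},
    {"size_mb": 0.0, "is_fake": True},
    {"size_mb": 3.4, "is_fake": False},
    {"size_mb": 0.0, "is_fake": True},
    {"size_mb": 2.1, "is_fake": False},
]
# Hypothetical filter rule: treat zero-size packages as fake.
report = audit_filter(records, lambda r: r["size_mb"] > 0)
```

Sampling only the accepted stream would leave false positives invisible, which is why the sketch draws from both sides of the filter.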
In addition to the regular monitoring, June analyzed the AI’s algorithm directly from time to time. This established a control to prevent someone from adding code that did not belong inside the application and could alter its behavior. Her biggest challenge was the autonomous learning. The AI analyzed the incoming information and learned from it. Based on this, the software adapted its decisions. The learned data had been stored on a server, but June was not able to read it as easily as the coding. The only possibility was to perform different stress tests with the AI. She created several business scenarios, some common and some rare. She included her Virtual Twin, which had been created to offer first-level support for employees’ questions, inside these procedures. Luckily the software had not shown any problems so far. It would be an awkward situation for June to deactivate an app that was based on her knowledge, character and outer appearance. She felt optimistic that the scheduled audit by the Government’s Department of Artificial Intelligence would go smoothly.
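Because the learned state is opaque, the stress test treats the AI as a black box: run a suite of common and rare scenarios with documented expected decisions, and flag any deviation. A minimal sketch, in which the decision function, the `risk_score` field and the 0.7 threshold are all hypothetical stand-ins:

```python
def decide(scenario):
    """Stand-in for the AI's opaque decision function.

    Hypothetical rule: escalate anything above a risk threshold.
    """
    return "escalate" if scenario["risk_score"] > 0.7 else "approve"

def stress_test(decide_fn, scenarios):
    """Run each scenario and report decisions that deviate from the
    expected, documented behavior. An empty result means the learned
    behavior still matches the baseline."""
    failures = []
    for s in scenarios:
        got = decide_fn(s)
        if got != s["expected"]:
            failures.append((s["name"], s["expected"], got))
    return failures

# A suite mixing common cases with rare, high-risk ones.
scenarios = [
    {"name": "routine purchase", "risk_score": 0.10, "expected": "approve"},
    {"name": "sanctioned counterparty", "risk_score": 0.95, "expected": "escalate"},
    {"name": "borderline agent fee", "risk_score": 0.71, "expected": "escalate"},
]
failures = stress_test(decide, scenarios)
```

Re-running the same suite after each learning cycle turns June’s ad-hoc checks into a regression test for the AI’s behavior.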
It was nearly time to go home. Before June shut off her computer, she checked her favorite news portals one more time. At this moment, she realized there was no clear separation between artificial and human anymore, only a gray area. Even though she had no microchips implanted, as some of her co-workers had, she depended on the internet and could not imagine life without it. Her smartphone apps had become a part of her life.
With the last sip of coffee, June closed the door of her office to drive back home.