This article first appeared in the MIT Sloan Management Review Blog on October 24, 2017.
Software robots have emerged as a potential way for organizations to achieve clear cost savings. As an evolution of automated application testing and quality assurance (QA) platforms, software bots can simulate the activities that humans perform via screens and applications. A software bot can be trained to perform QA tasks, such as tapping and swiping through an app as a human would, or to execute repetitive service tasks such as generating bills. It does this either by recording what humans do or by following scripts that are straightforward to construct. The result is the ability to automate rote tasks that do not require complex decision-making, without changing the underlying systems.
Deploying bots is deceptively simple. Robotic process automation (RPA) software vendors are actively pitching their platforms, and professional services organizations are talking up the possibility of cost savings with minimal project spending and limited transformational pain. The result is significant corporate interest. Some forward-looking organizations are using bots and other rapid process- and data-automation tool sets to free up budget and resources to kick off large-scale reengineering programs. Others are simply using the tools to give themselves a bit of breathing room to figure out where to go next with their core platforms.
Given the push toward digital agility and away from legacy systems, it’s not surprising that organizations are executing pilots with bots across their operations. But there are five major risks to consider when designing a bot strategy.
1. If bot deployment is not standardized, bots could become another legacy albatross.
The way in which business organizations are adopting bots brings to mind an earlier wave of application adoption: the measures taken to address the Y2K software bug at the end of the 20th century. To deal with the time-clock change at the turn of the century, many organizations worked around the limitations of their legacy systems. Business users embraced the increasing power of Microsoft Excel and Access to create complex, business-critical applications on their desktops. But as those custom-made computing tools proliferated, so did the problems caused by the lack of a strong controls framework, QA, release-management processes, and other formalized IT practices. Companies then had to spend large sums of money tracking down all their wayward tools and slowly eliminating them from critical functions.
Today’s explosion of bots threatens to repeat this pattern. In many cases, the configurations of underlying applications, networks, or data services may need to be changed to allow the bots to work effectively with them. Often, the real power of bots can be realized only alongside other technology tools. For example, a bot might extract information from several hard-to-access systems and push it into a database for use by data-transformation tools, calculators, and models. These integrations require IT involvement to design and deploy properly. Without that expertise, a script designer might simply push the data into an Excel file as a proxy database, setting up another custom-tool remediation exercise: a large number of scripts, running on an even larger number of bots, without the standards and source-code controls that are critical to any modern enterprise technology platform. That remediation will take budget and management focus away from badly needed investments in application modernization.
The bottom line is that the scripts that program bots are software code and should be treated as such. They need to be designed using industry-standard methodologies that focus on reuse and abstraction, and they should be versioned and properly logged so that QA processes can be executed against them. It is critical that bot implementation be tightly coordinated between business users, technology teams, and, where appropriate, third-party companies hired to write the scripts. Bots should be put into production through the same tested processes that are used for all enterprise software applications.
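To make this concrete, here is a minimal sketch of what treating a bot script as software can look like. It is hypothetical Python rather than any particular RPA vendor’s tooling: extract_invoices() stands in for the screen-scraping step the platform would perform, and load_invoices() shows the script carrying an explicit version, logging its work, and writing to a shared database (billing.db here) instead of an ad-hoc spreadsheet.

```python
# Hypothetical bot script written as "real" software: versioned, logged,
# and loading into a shared database rather than an ad-hoc Excel file.
import logging
import sqlite3
from datetime import datetime, timezone

SCRIPT_VERSION = "1.4.0"  # bumped and tagged in source control with every release

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("billing_bot")


def extract_invoices():
    """Placeholder for the screen-scraping step performed by the RPA tool."""
    return [
        {"invoice_id": "INV-1001", "amount": 250.00},
        {"invoice_id": "INV-1002", "amount": 97.50},
    ]


def load_invoices(rows, db_path="billing.db"):
    """Push extracted rows into a shared database for downstream tools to use."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS invoices ("
            "invoice_id TEXT PRIMARY KEY, amount REAL, loaded_at TEXT, script_version TEXT)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO invoices VALUES (?, ?, ?, ?)",
            [
                (r["invoice_id"], r["amount"],
                 datetime.now(timezone.utc).isoformat(), SCRIPT_VERSION)
                for r in rows
            ],
        )
    log.info("Loaded %d invoices (script version %s)", len(rows), SCRIPT_VERSION)


if __name__ == "__main__":
    load_invoices(extract_invoices())
```

Because the load step is an ordinary, versioned function writing to an ordinary database, it can be code-reviewed, unit-tested, and rolled back like any other piece of enterprise software.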
2. Bots might make innovation more difficult — and slower.
As bots are trained to interact with Windows and browser-based applications, they will become a dependency for any change to those underlying systems. If an IT team needs to roll out an upgrade, a critical patch, or any enhancement, it will need to consider how the system change will affect the bots that interact with it. Unless handled very carefully, this will potentially slow down the process of innovation.
Unlike humans, who adapt easily to small changes in the way a specific screen works or in the data contained within a dropdown menu, bot scripts may fail outright when faced with even minor changes to a user interface. When a bot “breaks,” it has the potential to cause substantial data corruption because it won’t realize that the work it is doing is wrong and won’t know that it should stop to ask questions, as a human would. Of course, some of this risk can be mitigated by good programming, but that assumes a formal software-development methodology has been used to develop the scripts, an approach that often is not taken. Even something as innocuous as changing the internal name of a screen object in application source code as part of a production release (a piece of information that no user ever sees) can break a bot script that relies on it.
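To illustrate the failure mode, the sketch below uses Selenium-style browser automation as a stand-in for an RPA platform’s selector logic; the URL and the btnGenerateBill element ID are hypothetical. The script is bound to an identifier no user ever sees, and a well-built script stops and raises an alarm when that identifier disappears rather than pressing on.

```python
# Sketch of a bot bound to a control's internal ID; the URL and IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()
try:
    driver.get("https://billing.example.internal/invoices")
    # The lookup below depends on an internal name no user ever sees.
    # If a release renames "btnGenerateBill", it raises NoSuchElementException.
    driver.find_element(By.ID, "btnGenerateBill").click()
except NoSuchElementException:
    # Fail fast and alert a human instead of silently continuing with bad data.
    raise SystemExit("UI changed underneath the bot; stopping before any records are written.")
finally:
    driver.quit()
```

Most RPA platforms bind to screen controls in a broadly similar way; the defensive handling in the except branch is what keeps a renamed control from turning into corrupted downstream data.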
By introducing bots into their environments, companies have potentially created a set of dependencies that are poorly documented (or not documented at all), are not designed to adapt to change, and most likely lack traceability. This creates further barriers to changing core systems, requiring more testing and verification to ensure that no bot scripts are broken. It also complicates QA environments, as they now need to encompass both the core application and the bots that run on it.
3. Broad deployment of bots, done too quickly, can jeopardize success.
The risk of taking a broad approach to bot deployment from the start is that developing the overall governance framework can consume a significant share of an organization’s budget before the organization has really determined how to make its bot investments effective. That limits the organization’s ability to build momentum around its automation efforts and potentially allows small, early failures to put the entire program in jeopardy.
A better strategy is to start small, demonstrate success, and then expand the overall automation program. While it is important to approach bot systems strategically, involving both process users and IT, it’s also important to learn through the first few deployments how best to analyze and optimize bot platforms. This can be done through six- to eight-week deliverables. The organization can then build on what it has learned and start to collect accurate measurements of efficiencies and cost savings.
4. Business-process owners have no incentive to automate themselves or their staffs out of jobs.
It is unreasonable to assume that the people who own a process are the right people to automate it. A key premise underlying the process-automation programs that many organizations have underway is that bots will reduce the headcount required to execute core functions. Even if using bots will clearly improve the efficiency of the process and even if staff whose jobs are replaced by the use of bots get redeployed elsewhere in the company, it is a rare operations manager who will actively work to reduce the size of his or her group. Salaries and bonuses are often tied to the number of people who work for a specific manager, creating a disincentive to trade improved productivity for fewer workers.
On the other hand, process-owner expertise is necessary to understand the scope and behavior of a process so that it can be automated properly. A better solution might be to first scan multiple processes to produce a heat map that prioritizes them, then have the process owners describe in detail how each of their processes works, and finally bring in outsiders to automate the routine work.
5. Bots don’t eliminate the need for rethinking core platforms.
As organizations build bot strategies and tactical plans, they need to keep in mind the hammer-and-nail analogy: When you give someone a shiny new hammer, suddenly every problem starts looking like a nail.
It’s true that bot platforms can help automate manual processes and improve productivity. It’s also true that there are other tools that can achieve even higher levels of productivity and cost savings, often in conjunction with bots. These tools include end-to-end process digitization, rapid process reengineering, user self-service interfaces, custom-tool remediation, and machine learning. It’s important to fill out the toolbox, so to speak, with a range of efficiency solutions and not bring down the bot hammer to fix every problem.
The technology infrastructure in many companies suffers from consistent underinvestment. While bots can free up some resources, they don’t eliminate the need for organizations to take a hard look at their IT capabilities and think about how to modernize them. There is a risk that the success of small automation exercises will lead management to conclude that it can avoid the expense and risk of larger initiatives. That isn’t the case.