Analytics and RPA — the perfect team

Hand draws process chart

I keep hearing that robotic process automation (RPA) is simple to use, doesn’t require a programmer, and can automate any business process. Unfortunately, we all know this isn’t entirely true. But, even if it were, there is more to automation than the RPA tool itself. Let’s look at a standard approach to automating business processes.

Five phases to getting automation right

In 2010, OpenConnect worked with several customers to develop the OpenConnect Approach to Process Automation. This approach consists of five phases — Discover, Design, Build, Execute, and Measure — as shown in the diagram below. It is an end-to-end, iterative process that helps companies (a.) identify automation opportunities based on their automation strategy, (b.) design and build the solution, and (c.) measure the results of the robots.

Five phases of the automation process

This process has been used by several of our customers for many years, helping them successfully implement automation robots to improve quality, increase throughput, and reduce costs. But how is this process any different than others? After all, it’s very similar to many other processes you find in the RPA world. The major difference is how this process is implemented and executed.

The OpenConnect Approach to Process Automation uses a combination of analytics tools and automation tools to provide:

  • A list of automation opportunities in order of importance
  • The best path to take to get the highest throughput
  • The steps taken for each process/task
  • A document showing the details of the process/task
  • Robot measurements that can be used to help identify improvements

The importance of analytics

I think everyone would agree that analytics can improve automation, and many RPA companies provide analytics. But most RPA analytics tools available today show only what the robots did, not how to automate the process in the first place. The analytics data is created by the robots’ activity; so, before you get any data to analyze — i.e., Measure — you first must Discover, Design, Build, and Execute. And, while the data helps you improve the existing robots, it doesn’t help you build the process logic to begin with.

Since RPA is automating what people do, we need to know what people do. The standard way to understand what people do is — to ask them! Sit down with a subject matter expert (SME) or a business analyst (BA), and have him/her walk you through the process. People who have used this method know that it’s not the best way to get the information you need. First, if you don’t ask the right questions, your process is going to have major holes that you won’t discover until you start to build the logic. Second, if you ask 10 people how to do something, you’ll probably get at least five different answers. What people tell you will be based on their experience; so, if you get a more experienced user, you might get more accurate data.

You can capture a few workers’ activities . . .

There are some tools available that allow you to capture what a person does and then use that information in the RPA tool to create your robot logic. To use one of these tools, you first install a piece of software on one or more desktop computers, and then have people start the recorder every time they execute a process. Typically, the tool provides an option to enter information for each step. This could include business logic or notes about why the person did something.

It’s true that this can help get the information into the RPA tool more quickly, but the data is only as good as the person executing the process. Also, what about process variations? Does the user have to navigate every possible path for this to be accurate? And how is the business logic captured? Finally, how reliable is the data when the person knows his/her activities are being recorded, much less when he/she is actually starting each recording manually?

So, on the face of it, although this initially may sound like a great feature, all it really does is help you get the data (useful or not) into the RPA tool. However, most RPA tools already have a “record” feature, so why couldn’t you just do the same thing within the RPA tool itself?

. . . or you can capture and analyze the workforce’s activities

Now, let’s consider a better way. Instead of capturing what one or two people are doing, why not capture the activity of the entire workforce?

Let’s say you have a fairly complex process, one that takes a user several weeks to learn and a couple of months to master. It contains multiple decision points and complex logic. Let’s say also that you have 200 people executing this process multiple times every day. If you could passively capture the data for every user, for every process, and for multiple days or weeks, you would have every variation, every decision point, and every screen used in the process. And users wouldn’t even be aware that their activity was being captured, so you’d get realistic data from actual users doing actual work — not from a few users being very careful about how they execute a process because they know they’re being recorded.

That sounds right; but how do you sift through all the data so you can make sense of it?

First, you must identify the users who execute the process the most often and with the highest efficiency. I have heard many people say that you don’t need to improve the process for RPA, because robots don’t care if it’s efficient. Well, the robots may not care, but you should care; because, the more efficient the process, the fewer robots you’ll need in the first place! That’s why I believe it’s important to learn the most efficient way to execute a process; and the right analytics can help you gain that knowledge.
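To make that concrete, here is a minimal sketch of how passively captured activity data might be ranked to find those top performers. It assumes a hypothetical CSV export with one row per completed process execution (user, process, and handle time in seconds); the file name and field names are illustrative, not any particular product’s format.

```python
import csv
from collections import defaultdict

def rank_users(activity_csv, process_name, top_n=10):
    """Rank users who execute a process most often and most efficiently.

    Expects rows like: user,process,handle_seconds
    (a hypothetical export format; adapt it to your analytics tool).
    """
    counts = defaultdict(int)
    total_time = defaultdict(float)

    with open(activity_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["process"] != process_name:
                continue
            counts[row["user"]] += 1
            total_time[row["user"]] += float(row["handle_seconds"])

    # Favor users with many executions first, then the lowest average handle time.
    scored = [
        (user, counts[user], total_time[user] / counts[user])
        for user in counts
    ]
    scored.sort(key=lambda x: (-x[1], x[2]))
    return scored[:top_n]

# Example usage (hypothetical file and process name):
# for user, executions, avg_seconds in rank_users("workforce_activity.csv", "claim-update"):
#     print(f"{user}: {executions} executions, {avg_seconds:.0f}s average")
```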

How finding the “happy path” helps you

After you’ve narrowed down who the top users are, you then must understand the paths they take and identify the best one — commonly called the “happy path.” To do this, you need a tool that can automatically show you the process, and all its variations, in a graphical format that’s easy to understand and analyze.

The process map should be detailed enough to show the decision points, but not so detailed that it adds unnecessary clutter. Maybe User A goes to Screen 3 followed by Screen 2, but User B goes to Screen 2 followed by Screen 3. Does it really matter? In most cases, probably not. But it could. That’s why you must have control over the data that goes into the process map. For some activities, you might include several screens per activity; but, for others, you may want only one screen per activity. (An activity is displayed as a box in the process map, and lines connect the various activities based on user navigation.) You should also be able to analyze each variation by human “think-time.” If your goal is to reduce costs, automating the path with the greatest amount of human “think-time” will provide the best ROI.
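As a rough illustration of comparing variations, the sketch below groups captured sessions by the exact sequence of activities they traversed, then ranks the variants by accumulated human think-time. The session structure and field names are assumptions made for the example, not any specific tool’s output.

```python
from collections import defaultdict

def rank_paths(sessions):
    """Group sessions by the exact sequence of activities and rank the variants.

    `sessions` is assumed to be a list of dicts like:
      {"path": ["Screen 1", "Screen 3", "Screen 2"], "think_seconds": 42.0}
    """
    freq = defaultdict(int)
    think = defaultdict(float)
    for s in sessions:
        key = tuple(s["path"])
        freq[key] += 1
        think[key] += s["think_seconds"]

    variants = [
        {"path": list(k), "sessions": freq[k], "total_think_seconds": think[k]}
        for k in freq
    ]
    # The variant with the most accumulated think-time is often the best ROI target;
    # the most frequent variant is a natural candidate for the "happy path."
    return sorted(variants, key=lambda v: -v["total_think_seconds"])

# Example (hypothetical captured data):
# top = rank_paths(captured_sessions)[0]
# print("Highest think-time variant:", " -> ".join(top["path"]))
```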

Once you’ve identified the “happy path,” you should be able to document the selected process. Since all of the data is in the system, it should be a simple matter of clicking on a button to create the documentation. But how do you capture the business logic? You can capture what people do and how they do it but, for now, you can’t capture why people do it. The why refers to the business logic. We know that a user entered a specific value into a field, causing a specific action; but we don’t know why the user entered that value in that field.

This is where you need to include an SME, so he/she can help document those rules. Still, this is much different than the manual process I mentioned at the beginning of this article. We aren’t asking an SME to go through the process and provide details; instead, we’re asking the SME to review a document that already has all the screen-shots and navigation steps (the what) and asking him/her to add the why (the business logic). The SME simply adds text to explain why the user entered that value in that field.

Summing up: The right way

Since most of the time spent building automation logic falls in the Discover and Design phases of the process (again, please refer to the diagram), using the right analytics can reduce the time to automate by as much as 50%. It also improves the quality of the automation and helps you identify the most efficient paths, which reduces the number of robots required. Fewer robots means a more economical deployment, purely and simply. This is the right way to automate.

Don’t be fooled when you hear people say they “identify what people do and automatically automate it.” I’m very skeptical about such claims if those making them have acted without proper analysis and the benefit of business logic provided by a knowledgeable human! To be sure, we eventually will get there with artificial intelligence, but we aren’t there yet.

With RPA, it’s not just “Point A to Point B”

RPA - Not just Point A to Point B

Imagine you decide one day to go shopping for a new ride. You do a quick Web search for what’s available, what’s in your price range, who’s selling what in your area, and so forth. However, something strange occurs. No matter where you go and no matter how you word your search, you keep coming up with a silver-colored, base-level, four-door sedan — mind you, it’s the same basic silver car, over and over. Every dealer’s website emphasizes that you don’t need different vehicle choices because it doesn’t matter what you get: all you want to do is get from Point A to Point B.

You say to yourself, “This can’t be right. What if I need more room to carry my family and friends around? What if I want something with more horsepower? What if I want a different color? What if I want some special features?”

And you’ve got it: This can’t be right. Yet that’s precisely how some vendors want you to think about robotic process automation (RPA). They want you to think it’s pretty much a commodity now, that one product is about the same as another, and that it doesn’t matter all that much which one you get. “Additional features? Bah! You don’t need those. Ability to handle complex tasks? Pfft! That’s not what RPA does! Server-based connectivity with mainframes? Ridiculous! It doesn’t matter.”

Well . . . yes, maybe you do; yes, it does; and yes, it does.

As much as some of the vendors out there may want to pretend that there’s no difference among the offerings, it just isn’t so. We’ll set that straight in a moment.

RPA: More than just a silver sedan

First, let’s start at the beginning. What exactly is RPA?

In its simplest form, RPA means automating human-executed processes with no changes to the existing infrastructure or applications. Most RPA vendors interpret this to mean a software robot running on a desktop and interacting with the same user interfaces used by humans. Moreover, the general perception apparently is that there are two types of automation: RPA and everything else.

In this mindset, RPA is billed as the easy entry product that quickly automates mundane tasks to free up users to do the more important and interesting work; so, naturally, many people consider all RPA products to be equal when it comes to automating processes. It comes down to a feature checklist to determine which RPA product(s) to choose for each organization: whatever is an RPA product — by this interpretation, something that runs on a desktop and interacts with the user interface — will be considered. It’s silver sedans all the way down.

(OK, sorry; wrong metaphor.)

There is one big problem with this: not everybody wants or needs a silver sedan! There are many different types of processes that can be automated. Here are two big mistakes organizations make:

  • They lump all RPA products into the same bucket and look at only the feature list.
  • They consider only mundane, simple tasks/processes for RPA automation.

Companies, evaluate your processes

When considering automation opportunities (as we’ve discussed in earlier blog posts here), companies should first look at all processes, from the complex to the simple (“mundane”), and then find the right tool — and they shouldn’t look at just the tools that fit the simplistic definition of RPA.

There is a continuum of process complexity that ranges from simple through moderate to complex. Traditional RPA tools can handle simple, and some moderate, processes, while they struggle with some moderate and most complex processes. For these reasons, your automation strategy should include several tools, one for each complexity level. Many companies already choose at least a couple of products for their standard RPA solution. But these choices typically are based on either the best two feature sets (the classic “look-for-the-most-bullets-in-the-feature-list” approach) or one product for unattended automation and one for attended (also called assisted) automation.

Some families need multiple vehicles: a nice-sized crossover to carry everybody around, an econo-car for workday commuting, and maybe even a sporty car for one or more teenagers. One silver sedan just won’t cut it. And, when it comes to RPA, one product isn’t enough. What you really should have is one RPA tool for attended automation and two or three others for unattended automation. The reason you should have only one attended solution is that the users who interact with it can be confused by having to learn multiple products. You should find the best-of-breed product and do a thorough evaluation of it with end users to make sure it meets their needs. For attended automation, the users are key to your success, so find a product that is easy to use, yet powerful enough to handle users’ needs.

The reason you need more than one unattended solution is that, again, the same silver sedan can’t fit everyone’s needs. One RPA product might not be able to successfully automate some of the more complex processes, and another might not be cost effective for simple processes. You should look at the problems you’re trying to solve and choose the appropriate solution to each problem.

Simple to moderate processes

Simple to moderate processes are those processes that have simple logic and few decision points. For those, look at standard desktop-based RPA tools. You can find a list of the usual suspects by searching on the Web; but don’t choose one just because it has the highest rating from Gartner or Forrester or another analyst you may prefer. Instead, look at the strengths and weaknesses of each tool and compare that data to your actual requirements.

Desktop-based RPA tools should be used to automate mundane, simple tasks. These products are designed to sit on a desktop and automate processes exactly like a human. They are perfect for simple processes such as data entry, but may not be appropriate for complex processes that require advanced business logic or have many decision points. Many of these products also provide a server to allow you to monitor the desktops and robots, and to dynamically scale when needed. Still, don’t be fooled by the presence of a server here: in most cases, the server does not contain any business logic or allow robots to share information. This is not server-based RPA.

Moderate to complex processes

For automating moderate to complex processes, you should look for solutions that specifically state that they can automate complex processes. These solutions are usually server-based, as opposed to desktop-based, and usually have some proprietary technology that allows them to access applications more efficiently and accurately, and with high scalability and availability.

These systems are designed to handle complex logic and usually provide more than just desktop access. For example, access to Web-based applications might be accomplished through a server-based Web browser allowing higher scalability, efficiency and accuracy. Another example is mainframe “green-screen” access. A desktop-based RPA product interacts with mainframes through a desktop-based terminal emulator via either EHLLAPI or screen-scraping, both of which present major challenges unless access to the mainframe is a very small part of the overall picture.

What to look for in RPA

Here are some guidelines that can help you choose the right solution (a rough classification sketch follows the list):

  • How long does it take an end user to learn the process?
    – Hours to a few days = simple process.
    – Days to a week = moderate process.
    – More than a week = complex process.
  • How large is the daily inventory of work and how many humans does it take to complete it?
    – Requires 10 or fewer workers = low-volume process.
    – Requires 11 to 50 workers = medium-volume process.
    – Requires more than 50 workers = high-volume process.
  • What is the impact to the business if the process isn’t executed properly?
    – Minimal impact = low.
    – Fines that could hurt the bottom line = moderate.
    – Fines and interest that add up to millions = high.
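As a rough way to turn these guidelines into something repeatable, the sketch below maps answers to the three questions onto complexity, volume, and impact labels. The thresholds mirror the list above; the function and label names are purely illustrative.

```python
def classify_process(days_to_learn, workers_required, impact):
    """Classify a process using the rough guidelines above.

    impact: "minimal", "fines", or "fines_and_interest_in_millions"
    (labels are illustrative; adapt them to your own impact scale).
    """
    if days_to_learn <= 3:
        complexity = "simple"
    elif days_to_learn <= 7:
        complexity = "moderate"
    else:
        complexity = "complex"

    if workers_required <= 10:
        volume = "low"
    elif workers_required <= 50:
        volume = "medium"
    else:
        volume = "high"

    impact_level = {"minimal": "low",
                    "fines": "moderate",
                    "fines_and_interest_in_millions": "high"}.get(impact, "unknown")

    return {"complexity": complexity, "volume": volume, "impact": impact_level}

# Example: a process that takes two weeks to learn, needs 120 workers,
# and carries heavy financial penalties when worked incorrectly.
# print(classify_process(14, 120, "fines_and_interest_in_millions"))
# -> {'complexity': 'complex', 'volume': 'high', 'impact': 'high'}
```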

The bottom line: Where the rubber meets the road

Remember, basic silver sedans won’t serve everyone when it comes to RPA. Even though the desktop RPA vendors have managed to make a name for themselves, there are better RPA solutions available for high-volume, high-value, complex processes. You don’t need a complex, BPM-like solution to automate these processes. RPA technology will work — so long as the architecture of the solution is designed to handle it.

Free RPA: What’s the big deal?

Automation

You probably know by now that at least one company in our industry is offering a free RPA tool. You may be wondering: is that good or bad for the RPA industry? I guess that depends on whether you’re a provider or a buyer. I think the buyer perspective is obvious, so let’s look at it from the provider perspective.

If you’re a provider, and you have a desktop-based RPA tool, then you should be concerned. You should be looking for ways to add value to your product, so you can focus on the additional value you provide and not the RPA functionality. Of course, you could also try to compete strictly on features and functions, but it’s hard to compete with free. Or maybe you believe you already have a product that can compete with free. I guess you’ll soon find out.

But is it good or bad for the industry? I believe the answer is both.

On the one hand, it’s good because it will force vendors to look for ways to improve their tools beyond RPA. On the other hand, it’s bad because it makes it easier for companies to implement desktop RPA tools. Why is that bad? Because RPA is not necessarily the right solution for all situations. RPA has definite advantages and can be the right solution, but it can also cause a lot of problems if it’s not implemented correctly.

Best practices for implementing RPA

That consideration leads to the obvious question, “So just exactly how do you implement RPA correctly?” Here are some best practices I would suggest.

  1. Access applications and data using the most efficient, accurate, scalable way available. — The desktop user interface isn’t the best, but may be the only option. I’ll soon talk about what you should do if it’s not.
  2. Make sure the IT department is involved and that you’re on the change control list. — Caution: If you’re using RPA with an application that sees a lot of UI changes, this could hamper IT and delay application improvements requested by other departments; and that will not make you very popular with the folks in IT. Remember: RPA tools rely on a consistent user interface. If your IT department changes the user interface to improve usability, RPA robots may stop working.
  3. Centralize the logic and runtime execution so that you have better control. — A free RPA tool could lead to rogue departments automating applications without the knowledge of their IT department. This could cause major issues and cost the organization a lot of money. It gets back to having a good automation strategy and, possibly, a center of excellence. [Editor’s note: Kevin discusses these and other aspects in his five-part series on robotic process automation, which begins here.]
  4. Provide full auditing and reporting capabilities.
  5. To avoid issues with changing UIs, use APIs, Web services, and database calls as much as possible.
  6. Use RPA when it’s the only option available, and avoid the desktop as much as possible. — I’ll explain that in more detail next.

You really can get around the desktop

Now you might be saying, “Okay, so just how do you ‘avoid the desktop as much as possible,’ and why is that important?”

First, by “the desktop,” I really mean Windows. We all know that Windows updates occur frequently, and Windows applications rely on the underlying operating system to function properly. I have seen updates in Windows and/or applications kill RPA robots.

That’s why you should avoid desktops if possible; but how?

Believe it or not, there are other ways to access many of the same applications without accessing a Windows desktop environment. OpenConnect has been providing automation solutions for over 10 years, and has always looked at automation from an enterprise level. That’s led us to stress the superiority of a server-based approach.

There are two common types of applications that can be accessed from a server: mainframe 3270 applications and Web applications. The OpenConnect automation platform, AutoiQ, has a proprietary data connector that allows it to connect directly to mainframes via the TN3270 protocol. With AutoiQ, you interact with the exact same screens that end users access on their desktop, except that there is no desktop. All sessions run in memory, and a single server supports thousands of concurrent sessions. There’s no desktop and no Windows, just pure, raw 3270 data, processed 15 times faster than desktop-based RPA tools can do it; and the process is 100% accurate.

Embrace Web services and a server-based environment

In a similar vein, there are two different ways to access Web applications other than through a desktop-based browser. The first is through Web services. They’re machine-to-machine communications and are typically available in modern Web applications. By accessing the application through Web services, you completely avoid the user interface altogether. No change control issues occur, because you’re not accessing the user interface.
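To illustrate the difference, here is a minimal sketch of a robot step that updates a record through a REST-style Web service rather than by driving a browser. The endpoint, payload, and authentication scheme are hypothetical placeholders, not any real application’s API.

```python
import requests  # third-party HTTP library

def update_claim_status(base_url, claim_id, new_status, api_token):
    """Update a record via a (hypothetical) Web service instead of the UI.

    No screens are scraped and no browser is driven, so user-interface
    changes cannot break this step.
    """
    response = requests.put(
        f"{base_url}/claims/{claim_id}",
        json={"status": new_status},
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    response.raise_for_status()  # fail loudly so the robot can log and retry
    return response.json()

# Example usage (placeholder values):
# update_claim_status("https://claims.example.internal/api", "12345", "FINALIZED", token)
```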

The other way to access Web applications is using a server-based browser environment. Server-based browsers don’t run on Windows and don’t depend on specific browsers’ functionality. Note that this approach won’t work for all applications; but you should be able to minimize the desktop access.

A few other items that can be avoided include file access and file parsing (reading). Network access to files and the parsing of files can all take place on a server. It is common to have to read Excel, Word, and PDF files as part of an automated process. All of these file types, and more, can be accessed using server-based technology — again, avoiding the desktop as much as possible.
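For example, a server-side robot can read an Excel workbook or a PDF directly, with no desktop involved. The sketch below uses the openpyxl and pypdf libraries; the file paths are placeholders.

```python
from openpyxl import load_workbook   # pip install openpyxl
from pypdf import PdfReader          # pip install pypdf

def read_excel_rows(path):
    """Yield each row of the first worksheet as a tuple of cell values."""
    workbook = load_workbook(path, read_only=True, data_only=True)
    sheet = workbook.active
    for row in sheet.iter_rows(values_only=True):
        yield row

def read_pdf_text(path):
    """Return the extracted text of every page, joined together."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# Example usage (placeholder paths on a shared network drive):
# for row in read_excel_rows("/data/inbound/claims_batch.xlsx"):
#     print(row)
# print(read_pdf_text("/data/inbound/remittance_advice.pdf"))
```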

Here’s why you should minimize the desktop’s role

The goal isn’t to eliminate the desktop. That’s probably impossible for most environments. But you should be able to minimize the number of desktops required, thus minimizing the risk. Your approach should be to look for ways to do the work on the server, instead of a desktop. You may find that you end up with 25% to 50% fewer desktops than you originally thought you would need.

With this approach, the desktop becomes just another data source; so, if there’s a free tool available, that’s good. The value is in the server and how it executes processes and manages robots — not in the tool used on the desktop.

Free RPA is a move in the right direction

In my opinion, a free RPA tool is good news. It will impel the industry to create tools that provide automation the right way. That means using the right tool for the job, or application, rather than doing everything on the desktop.

I believe the presence of free RPA tools will move our industry to act in ways that will benefit all of us, users and providers alike. I also believe that, regardless of the automation tool you select, you’ll have a better chance of fulfilling your automation goals if you follow the guidelines I’ve described in this post and elsewhere.

Mainframe modernization: Start with the user interface

Mainframe modernization - data center access

What is mainframe modernization? It depends on who you ask.

The mainframe is an interesting beast. In fact, just as the term mainframe modernization means different things to different people, so does the term mainframe by itself. The core of the mainframe includes the operating system, applications, and access interfaces.

Let’s take a standard mainframe environment. The mainframe is running CICS or IMS. Users access the applications using a terminal emulator, also referred to as a “green screen.” If you talk to a system architect, he might use the term mainframe modernization to mean converting back-end applications to be more modern. If you talk to an end user, he/she will tell you it means an easy-to-use graphical interface. Those are two completely different visions.

Most large organizations’ IT departments take the architect’s view — modernizing the back-end mainframe. This could consist of converting COBOL code to Java, or running COBOL applications under a Linux environment, or converting the database from IMS to DB2 to make it easier to integrate with external applications. All of this takes time, possibly years, and lots of money. So, you have to ask the question: “What’s the goal?”

Hide in plain sight

If the goal is to reduce costs, you should look at the back end to find ways to modernize and reduce MIPS costs. However, if your goal is to improve productivity, usability, and overall cost of operations, you should consider updating the front end (user interface) first. To do that, you need a layer of abstraction. That’s a software layer that understands how to communicate with two different layers while hiding the layers from each other. As an example, you could have a modern Web-based user interface that communicates with legacy “green-screen” applications. The UI doesn’t know it’s communicating with “green screens,” and the “green screens” don’t know they’re being converted to a Web interface.

Here’s how you might implement this solution. You want to first design your user interface without regard for the mainframe’s communications protocol or architecture. Simply design your user interface using standard UX practices. Now, you need a layer of abstraction — a layer that knows how to talk to the mainframe, using the standard native protocol, and communicate with the new user interface. If you decide, as most companies do, to create a Web-based user interface, you would use the standard Web architecture used by all your internal Web applications. Maybe you use J2EE or Microsoft, or maybe your standard is Struts or Ruby on Rails. Whatever it is, just design your Web application the same way you would if it were using a standard relational database.

Can we talk, mainframe?

Now you need a way to communicate with the mainframe “green screens.” There are several options available. OpenConnect has one called ConnectiQ. It’s a server-based platform that communicates directly with the mainframe, using the TN3270 protocol. It consumes the raw 3270 packets and interprets them in memory. It then converts that information into standard Web services that can be called and consumed by your Web application. The user interface doesn’t know it’s talking to an old legacy mainframe application, and the mainframe application doesn’t know it’s communicating with a modern Web-based user interface. In other words, you have a layer of abstraction; get it? In this case, the layer of abstraction is ConnectiQ. The Web application simply communicates with ConnectiQ, using standard Web services.
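Purely as an illustration (not ConnectiQ’s actual API), the new Web application might consume the abstraction layer like any other Web service. The URL, endpoint, and fields below are invented for the example.

```python
import requests

ABSTRACTION_LAYER = "https://abstraction.example.internal/api"  # placeholder URL

def get_account_summary(account_number):
    """Fetch data that ultimately lives on a mainframe "green screen".

    The Web application only sees a JSON Web service; the abstraction layer
    handles the 3270 conversation behind the scenes.
    """
    response = requests.get(
        f"{ABSTRACTION_LAYER}/accounts/{account_number}/summary",
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Later, when the back end is modernized, only the abstraction layer's target
# changes; the user-interface code that calls this function stays the same.
```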

Why is the layer of abstraction so important? Because, once you have the new UI in place, you can start working on modernizing the back end applications, databases, and network interfaces. Again, that could take years, but you already have a nice, intuitive, easy-to-use, easy-to-learn user interface which should be saving money; so maybe the urgency isn’t quite as great.

The key is: once the back end is modernized, you simply change your layer of abstraction so it now communicates with the new back end. Users don’t see any changes — other than, perhaps, better performance. In the example above, you would remove ConnectiQ and use different Web service calls in your Web applications. This approach also allows you to modernize a little at a time. You can move one application, or function, at a time and have a mixture of Web service calls to ConnectiQ and the new mainframe interface. Again, the users don’t know the difference. They’re just using a nice, new, modern interface.

The benefits of this approach are numerous — including improved user productivity, simpler and faster training to bring people onboard more quickly, better and more timely customer support, and access from any Web browser so work-from-home users don’t need any special software.

Macros, schmacros

Let’s also talk about a very serious problem related to mainframe terminal emulators: macros!

Almost all terminal emulators provide a macro interface that allows a user to press a “Record” key and record a set of keystrokes. She can then play back the recording any time. For example, a common problem is copying data from one screen onto another one. The macro would be recorded to navigate to the first screen, copy specific fields, navigate back to the original screen, and paste the data into that screen.

This option saves a lot of time, and users can create their own macros. However, that’s actually the problem. You end up with hundreds or even thousands of user macros running. Most aren’t written correctly. Macros are not “intelligent,” and have no way to know if they’re working correctly. If a user hits the “Play” button while on the wrong screen, the macro is still going to try to run. In some situations, this can actually cause harm.

So, why am I talking about macros in a mainframe modernization blog post? Simple: if you modernize your user interface, you can provide the same benefits that macros provide without the risk of users creating their own, possibly harmful macros. You now have control.

Users will ask for changes that help them be more productive. That’s great, and now you have a centralized way to provide those changes. The best part is that all users benefit, not just the one person who knew how to create a macro.

Front first, back last

Here’s the bottom line. Please your users first to improve quality, reduce costs, and improve employee satisfaction. Then, look for ways to modernize your back end. But remember, you don’t have to modernize your back end after you modernize the front end. If users are more productive and operations costs are down, you already may have achieved the results you sought. Also, you already know how to maintain the back end. So why introduce new technology, processes, and procedures if you don’t have to?

Robotic process automation: Part V (conclusion)

Part V: Managing your backlog of automation opportunities

Setting priorities is critical in a successful implementation of RPA. Here is a proven strategy for making it go more smoothly.

So you now have an RPA product and you implemented an automation strategy (or Center of Excellence). This means you should have a change control process. With it, you can plan changes to your automation solution before they actually occur.

This also means you have a strategy for identifying and prioritizing automation opportunities. You see, most people feel pretty good about their automation strategy — until multiple business units come into the mix. Then, the number of opportunities on the table increases to a level that makes prioritizing more difficult. You can easily end up having to tell one business unit that it will have to wait longer than expected because another, higher-priority opportunity came in.

Worse: when you prioritize by committee, your automation team can become a scapegoat for business units that aren’t meeting financial goals. You can also have some very uncomfortable meetings when you discuss the priority list.

Make sure you have a strategy team (sometimes called a steering committee) with executive involvement. This ensures that all decisions are based on the overall company needs, not just because one group “yelled” louder than the others!

You could also put in place a standard way of measuring the opportunities. This greatly increases the probability that everyone involved will agree on the priority list.

Let’s explore these various ways to identify and prioritize your opportunities.

setting priorities

Strategy team

A strategy team identifies and prioritizes automation opportunities. It does so by working with the various business units; each unit proposes potential automation opportunities with the unit’s respective priorities. The strategy team should consist of at least one executive, one or more members of the automation team, and one member from each business unit. This team will put all of the various groups’ opportunities in a single prioritized list. The team should meet regularly — once per week or once per month, depending on the number of business units and opportunities.

This approach works well if you have a strong strategy team. Members of this team must be able to say “no” to a business unit if the opportunity isn’t a good fit. They also have to manage the expectations of each business unit and explain why that unit’s highest priority is well below that of other business units. Communication is key, and a well-documented, well-thought-out process is vital.

Problems occur when a member from one business unit constantly fights for higher priorities for that unit’s opportunities. A scoring system can help eliminate this infighting, but the system has to take into account each business unit’s key attributes. The scoring has to make sense for everyone.

Estimated cost savings

Another approach is to use the estimated cost savings as your key measurement. You can require each business unit to provide the potential cost savings for each opportunity it provides — but it has to show its math! One of the biggest problems organizations have is business units’ overestimating their cost savings just to get their project to the top of the list. They should be required to show the real numbers.

You can provide each business unit an Excel template that has fields for all of the data points required. The resulting spreadsheet can then be submitted to the automation team for review and inclusion in the backlog of work. The template will calculate the potential cost savings based on the time it takes a human to do the work and the potential automation success rate. You can estimate the cost of the work to be automated by first calculating the total human time spent in a process. Here’s how you do it:

  1. Multiply the time it takes for a human to execute the process by the number of times your company executes the process each year.
  2. Divide this number by the productive hours expected for each employee, and you get the number of full-time equivalents (FTEs) your company requires to do the work.
  3. Multiply the number of FTEs by the all-in cost of an FTE to get the total cost of the process.
  4. Using subject matter experts, calculate the percentage of the work that you can automate successfully.
  5. Multiply that by the total cost to get your estimated cost savings.

This arms the automation team with hard data to help prioritize the backlog. It reduces the infighting and helps the process run more smoothly.
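A minimal sketch of that calculation follows. The figures in the example are invented; in practice, the productive-hours and all-in-cost numbers should come from your own finance team.

```python
def estimated_annual_savings(minutes_per_item, items_per_year,
                             productive_hours_per_fte, all_in_cost_per_fte,
                             automatable_fraction):
    """Estimate cost savings using the five steps above (illustrative only)."""
    total_hours = (minutes_per_item / 60.0) * items_per_year    # step 1
    ftes = total_hours / productive_hours_per_fte               # step 2
    total_cost = ftes * all_in_cost_per_fte                     # step 3
    return total_cost * automatable_fraction                    # steps 4-5

# Example with invented numbers: a 12-minute task done 250,000 times a year,
# 1,800 productive hours and $75,000 all-in cost per FTE, 60% automatable.
# print(round(estimated_annual_savings(12, 250_000, 1_800, 75_000, 0.60)))
# -> roughly $1.25 million
```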

Workforce/process analytics

Another approach is to use analytics to identify and prioritize automation opportunities. With the resulting data, you can then present your findings to the various business units. That way, you can get their buy-in and cooperation, instead of relying on them to provide the information. Depending on the analytics solution you use, you might also get additional benefits such as:

  • Workforce productivity improvement
  • Process improvement
  • Elimination of self-reporting of work/time

When choosing an analytics solution to use to identify automation opportunities, look for one that can measure workforce activity. That capability should allow you to capture the time it takes each user to execute processes. Then, compose a list of processes sorted by total time spent, which translates directly to cost. Process improvement products that measure the details of a process can also help in prioritizing. More importantly, they can help identify the process steps. This can help you automate more quickly, because RPA robots take the same steps as humans.
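As a small illustration, the sketch below aggregates captured activity by process and sorts by total time, which is effectively sorting by cost. The event-record shape is an assumption made for the example.

```python
from collections import defaultdict

def processes_by_total_time(events):
    """Sort processes by the total user time captured against them.

    `events` is assumed to be an iterable of dicts like:
      {"process": "pended-claim-review", "user": "u123", "seconds": 310}
    """
    totals = defaultdict(float)
    for e in events:
        totals[e["process"]] += e["seconds"]
    return sorted(totals.items(), key=lambda kv: -kv[1])

# The top of this list is where the workforce spends the most time, and
# therefore where automation savings are likely to be largest.
# for process, seconds in processes_by_total_time(captured_events)[:10]:
#     print(process, round(seconds / 3600), "hours")
```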

OpenConnect’s approach to process automation

OpenConnect has developed an approach to process automation that includes both workforce and process analytics tools to help identify and prioritize automation opportunities. We developed this approach nearly a decade ago, and many large customers have validated it. I have personally worked with many of these companies and have seen how powerful the combination of analytics and automation can be. If you follow the suggestions we have made in this series, I believe you will have similarly impressive results.

This is the conclusion of a five-part series about robotic process automation. The previous articles in the series are:

  • Part I, “What is RPA, and why should CIOs care?”
  • Part II, “Creating a process automation strategy”
  • Part III, “Choosing the right process automation solution”
  • Part IV, “Pitfalls in implementing process automation”

Excel is a trademark of Microsoft Corporation.

Robotic process automation: Part IV of a five-part series

implementing process automation

Part IV: Pitfalls in implementing process automation

You bought your shiny new RPA tool, and now you’re ready to build your software robots. Do you have an automation strategy? If not, see Part II. If you do, then you already have your top five to 10 automation opportunities defined, along with the expected ROI of each. You probably already have your requirements document for the first opportunity. You have your team in place and ready to go. But how do you really go about implementing process automation?

First considerations

Before you hand over your RPA tool to a programmer, make sure you have applications — preferably test versions — and test data ready to go. After all, you don’t want your programmer using production applications and data! Make sure you have enough test data that the programmer can go through each scenario at least a dozen times.

Perhaps you have the type of data that either disappears or is moved to another state after processing. If so, make sure your programmer understands this and, therefore, waits until the end of the programming process before actually submitting the data.

Keep in mind that the programmer will run into issues, and must run the scenario multiple times with multiple pieces of data. That’s why you need enough records with the same issue — so the programmer can thoroughly test each scenario and each business rule.

Types of testing

Testing is obviously a crucial part of implementing process automation.

When programming is complete, the programmer should run unit tests on each scenario and document the results. Some tools can self-document the unit tests.

The business should examine these results before moving to the next stage, system integration testing. All this really means is that you configure the new process into the server and then run the tests again, to make sure everything works from end to end. If your automation solution is a standalone desktop RPA tool, you probably won’t need to do system integration testing; but, with a more powerful, server-based automation solution, you will have that need.

The last step before moving to production is user acceptance testing (UAT). To do proper and thorough UAT, you need several records for each scenario. You also need records that fit multiple scenarios, so you can make sure the robots can handle multiple scenarios for a single record. A typical volume for each scenario is 100 records but, depending on your situation, you may need more, or fewer.

The importance of audit logs

After running UAT, the business will evaluate the results, using the standard tools it has available. However, if your automation solution provides audit logs — and it should — that can be a great way to validate that the robots followed the correct rules. What often occurs is that the business tells you the robots didn’t work an item correctly. If you don’t have auditing capabilities, you’re looking at a long day of trying to figure out the problem.

I have worked with customers who said the robots did the work incorrectly. Yet, the audit logs showed the robots did exactly what they were programmed to do, based on the requirements document. It turned out that the requirements themselves were wrong. That’s a simple fix, with little time spent troubleshooting.

Trust me: make sure you have a product that provides audit logs!

When it’s “go” time

Now, finally, it’s time to deploy to the production environment. Here’s a typical scenario: everything works in unit testing, system integration testing, and UAT, but fails in production. What happened? There are two common causes for this:

  • The data used in all of the testing phases didn’t properly represent production data. You can fix this only by having good data. It’s critical that the data used in testing matches the production data.
  • Moving the code from test to production changed something. This may be due to the tool you’re using. Some products provide a deployment function that ensures the code you deploy is exactly the same code you tested. This is a great feature! You should make sure it’s part of any product you choose.

Implementing process automation is a major, and wise, step for your company. Here’s hoping this post will help make it go a little more smoothly.

Robotic process automation: Part III of a five-part series

Choices

Part III: Choosing the right process automation solution

Do a search for process automation or robotic process automation (RPA). You’ll get hundreds of results for tools that let you build software robots. Does it really matter which one you choose? They’re all the same, right?

Wrong!

What’s right for one department may not be right for you. Your IT team might have a solution they’d like you to use, but it may be too complex and too expensive for you. So how do you really choose an automation solution?

Understand the problem

First, you need to understand the problem you’re trying to solve.

  • Are you trying to reduce head count, or simplify a process so as to increase productivity?
  • How complex are your processes?
  • Do your processes contain a lot of complex logic, or are they fairly simple?
  • Do you have to access a lot of applications, or just one?
  • Will you need a high volume of robots, or just a few?

There are two types of automation solutions available.

Process automation, also called business process automation, is a server-based solution that uses application programming interfaces (API) and Web services to interact with applications.

The other is RPA. It runs on a desktop PC and interacts with applications the same way a human does — through the user interface.

Both have advantages and disadvantages. Multiple factors will influence your choice. Don’t choose an automation tool simply because of the hype surrounding it.

So, again, we come back to . . . how do you know which way to go?

Maybe you have very simple processes that anyone can learn to do in a short time. If a human can learn the process with less than a day of training and maybe a few days of experience, that’s a simple process. In that case, you probably need only a simple RPA product. However, if your process has many different scenarios, and each can take a week or more of training and several weeks of experience before a human can perform it, you have a complex process. If that’s your situation, you should be reluctant to purchase an RPA product until you do your due diligence.

Hybrid solutions

Let’s say you determine that you have a complex process and you therefore don’t believe an RPA tool is appropriate for your needs. What do you do?

You could contact your IT department and see if it has a process automation solution that can handle complex processes. But that will work only if your application has an available API with which the automation solution can interact.

However, there is an alternative.

There are some hybrid solutions that constitute the best of both worlds. They’re server-based solutions that can interact with applications using an API or Web services. They also have RPA components that can interact with applications through their user interfaces. These hybrid solutions usually aren’t considered to be RPA, so they’re a little more difficult to find in a Web search — but it’s worth your time to seek them out.

Hybrid process/RPA products allow you to handle complex processes while still utilizing applications’ existing user interfaces. This type of solution can be implemented and managed by your IT team, and it fits into their overall automation strategy. In addition, some business units have been successful implementing this type of solution without IT involvement, because it doesn’t require complex programming or special interfaces. One cautionary note, though: if you’re going to implement this type of solution, it’s important that you have a good automation strategy (see Part II).

Part IV: Pitfalls in implementing process automation.

Using process automation to improve auto-adjudication

Cutting costs through auto-adjudication

The need to measure success — or the lack thereof — is critical to any business. A healthcare payer is no exception, and typically looks at its auto-adjudication (AA) rate for medical claims. This refers to the percentage of claims that pass automatically through the payer’s claims system with no human intervention. Some payers use different terms — such as pass-through rate, or first-pass rate, or operational first-pass rate.

This is an important number for various reasons. First, it’s a key performance indicator (KPI) that measures the efficiency of the company’s paying the claims. Many believe that having a high AA rate means the organization is providing efficient services to its customers with a high quality standard and for the lowest price possible. In fact, this number is so important for some organizations that it has become a KPI for executive bonuses.

Still, as advances continue in medicine and medical care, and medical codes and regulations become more complex, there’s an increasing likelihood of over- or under-paying claims through AA. Do we really know the amount of money wasted because the system auto-adjudicated claims it should have pended? And what about the reverse of that — claims that the company pended for review by a human, especially an expensive human (nurse, doctor) through medical review, when the answer was obvious? I have seen situations where the cost of a person reviewing a claim was more than the cost of just paying the claim!

Cutting costs - measure success

How some use the AA rate

Some payers use the AA rate as a measure of potential cost savings for planned improvements. Yes, this can help determine which cost-cutting projects to approve. But you have to be careful how you use this, because it’s usually not as accurate as you might think.

For example, let’s say a company wants to reduce operational costs in its claims department. A standard way to incorporate the AA rate into the cost calculation is to divide the operational budget by the percentage of claims that do not auto-adjudicate (the remaining AA percentage). This gives you a cost per remaining AA percentage point. So, if your budget is $6 million per year and your AA rate is 85%, then your cost per remaining AA percentage point is $6 million divided by 15, or $400,000. I have seen this figure as low as $200K and as high as $2.5 million. One would tend to think that each percentage point of improvement will yield that amount of cost savings; so, in this example, each point of improvement would save $400,000.
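Here is that arithmetic as a tiny sketch, using the example figures from the paragraph above.

```python
def cost_per_remaining_aa_point(annual_budget, aa_rate_percent):
    """Operational cost attributed to each percentage point of claims not auto-adjudicated."""
    remaining_points = 100.0 - aa_rate_percent
    return annual_budget / remaining_points

# Example from the text: a $6 million budget and an 85% AA rate.
# print(cost_per_remaining_aa_point(6_000_000, 85))   # 400000.0
```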

But not so fast . . .

However, it’s not that simple. This is especially true if you’re using process automation, or robotic process automation (RPA), to help improve your AA rate.

Let’s say your improvement project is to automate the processing of pended claims. You need to identify claims that can give you the biggest impact on your AA rate. Typically, this is done by identifying the top errors — that is, the top edit codes. The natural thing to do would be to choose the edit codes that are on the most claims. So, perhaps you choose the most common edit codes, and the total number of claims with those edit codes is enough to provide a potential improvement of 1%. But there are several things you still need to consider:

  • First, what is the success rate you expect? In other words, of the [x] thousands of claims that the automation solution handles, what percent will it actually finalize and count toward the AA rate?
  • Second, are there edit codes you cannot automate? These tend to land on claims alongside some of the edit codes you have marked for automation. If there are such codes, you may want to reevaluate your automation opportunities. Any claim that contains an edit code that can’t be automated will still have to be touched by a human. That claim therefore won’t count toward AA improvement. After doing your analysis to determine the potential success rate, you will most likely end up with a success rate of somewhere between 30% and 50%. This means you need to identify another set of edit codes to get back to the expected 1% improvement.
  • Another issue with this approach is that you really won’t get a 1% improvement. The reason is that some of the edit codes you choose could be very simple to work and therefore don’t consume much human time. As a result, your cost-savings calculation is going to fall short of expectations. (A rough estimating sketch follows this list.)
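The sketch below rolls those considerations into one estimate: it removes claims that also carry edit codes you cannot automate, then discounts the remainder by the expected success rate. All of the inputs are placeholders you would pull from your own claims data.

```python
def projected_aa_improvement(total_claims_per_year,
                             claims_with_target_codes,
                             claims_also_having_blocked_codes,
                             expected_success_rate):
    """Estimate the AA-rate lift (in percentage points) from automating a set of edit codes.

    Claims that also carry a non-automatable edit code still need a human touch,
    so they cannot count toward the AA rate even if part of the work is automated.
    """
    eligible = claims_with_target_codes - claims_also_having_blocked_codes
    finalized_by_robots = eligible * expected_success_rate
    return 100.0 * finalized_by_robots / total_claims_per_year

# Example with invented numbers: 2,000,000 claims/year, 20,000 claims carry the
# targeted codes, 6,000 of those also carry codes that can't be automated,
# and a 40% expected success rate.
# print(projected_aa_improvement(2_000_000, 20_000, 6_000, 0.40))  # ~0.28 points
```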

An alternate method

There is another way. Instead of measuring your success by your AA rate, measure it by the average cost to process a claim.

This approach will allow you to reduce costs in several ways — including automating claims processing — but without the constraint of improving the AA rate. In the long run, you reach the same goal as AA improvement: cost savings. However, now you can automate errors with no regard to the potential improvement in AA. You no longer worry about other errors that might be on a claim. After all, any work being done by the automation solution reduces the time a human spends on the claim.

You can also implement workforce improvements using analytics tools to help your users be more productive. In turn, this reduces the cost of a claim, because users are able to work more claims in the same amount of time. Instead of measuring the cost per claim (which could be difficult), perhaps you can measure the cost per percentage point of AA rate. Through automation and/or enhanced workforce productivity, you could reduce costs. This means the cost of each percentage of AA will go down — even if you don’t improve the AA itself.

A win-win way to measure success

Here’s the bottom line. If your overall goal is to reduce costs, then you should look at alternatives to improving your AA rate. If you choose to implement the claim cost or cost-per-AA percentage, you will still improve your AA rate; but you also will save a lot more money and become more efficient and accurate, which also will reduce rework.

Robotic process automation: Part II of a five-part series

Process automation - automation strategy

Part II: Creating a process automation strategy

It’s simple, right? You bought a robotic process automation (RPA) tool, thinking you were going to automate a bunch of processes and save the company millions.

It worked fine for a couple of simple processes, but now you’re not sure what to automate next. Even worse, after only a few months, the automation robots you implemented are having issues. What happened? It was all working on Friday!

Your main problem isn’t that you chose the wrong automation tool (although that might also be an issue). Your problem is that you didn’t put together an automation strategy before you bought the tool.

So, what’s an automation strategy?

An automation strategy is the planning and management of your automation initiative. A typical automation strategy should answer the following questions:

  • Which processes do we want to automate?
  • With which applications will we have to interact?
  • Who maintains those applications, and how often are they updated?
  • Who needs to know that we have automation running against the applications?
  • How do we find new automation opportunities?
  • How do we get notified before changes are made to an application?
  • Is there test data available, so we don’t have to test with production data?
  • With what security rules and regulations do we have to be compliant?
  • Which departments (e.g., IT, Security, HR) need to be involved?
  • What resources will need to be involved, and for how long?
  • What’s the overall cost of implementing the solution?
    Hint: the automation product is probably the least expensive part of the solution.

To be sure, even with all of those items to manage, automation provides a high return on your investment. I have helped many companies implement automation into their environments and, for every one of them, it’s always been worth the time and expense. It’s definitely worth the time and money to implement process automation but, to have real success with it, you must do it the right way.

Five steps

Here are my five steps to implement a successful automation strategy:

  1. Don’t reinvent the wheel.
    Talk to someone who has already implemented a successful strategy. Maybe it’s a manager of another group in your organization, or a trusted partner. Copying another successful implementation in the same industry is the best way to get started quickly.
    Another option is to find a vendor that understands your industry and your problems. There are a lot of automation tools out there, and many are very generic. Consequently, most vendors don’t have specific domain expertise. To them, all automation is the same. I can tell you: it’s not! There’s a big difference between automating a simple desktop task and automating, for example, complex medical claims.
    Do a Web search to find the right vendor and solution; but don’t search for automation or even robotic process automation; you’ll find very generic solutions from vendors who have no expertise in your area. Instead, make your searches specific — for example, automating medical claims.
  2. Pick your team carefully.
    Yes, that’s right: you’ll need an automation team. It should consist of at least one business analyst, one technical person (this could be an IT resource), at least one programmer (someone has to “build” the software robots), and a decision-maker or approval committee (to decide “What do we automate next?”). You also should have a subject matter expert (SME), but this role could — and should — be rotated among several people. Each SME has expertise with certain processes, so engage only the best SME for the process you intend to automate next.
  3. Define your process.
    Your process should start with identifying the next (or first) automation opportunity. It should also include the method used to identify automation opportunities; and, before you begin your first one, you should have a list of five to 10 opportunities. In fact, for each opportunity, you should be able to calculate the potential ROI, and that should be the highest criterion on which you base the decision to automate a process. However, basing it on cost is only one option. You could use the “squeaky wheel” approach. This is where you choose to automate processes that draw the most employee complaints. Although this might result in minimal cost savings, it could be worth it to make your users happier!
    Once you define the list of automation opportunities, you need to implement them. The rest of the process should be defining the requirements with the SME, validating the business rules with the business analyst, documenting the step-by-step instructions with business rules, and providing those rules to the programmer for creation of the software robot.
    Testing is critical. Make sure that (a.) you have valid test data that truly represents real production data and (b.) that data covers all of the scenarios implemented in the robot. That’s the only way to test your robot fully before it moves to production.
  4. Have a backup plan.
    What happens if you have 10 robots running all day and, suddenly, they all stop? Let’s say that, up to this point, your automation solution has been so successful that you were able to eliminate 50 FTEs. But now, you don’t have the people to do the work and your robots aren’t running. Your backup plan must take into account this nasty scenario and others like it.
  5. Measure, measure, measure.
    Make sure your software robots provide data about the work they’re doing. This is some of the most valuable data in your organization. Use an analytics tool (if your automation product doesn’t already have one built in) and analyze the data. What you’ll get from it is not only what your robots did but, more importantly, what they didn’t do; and what they didn’t do is exactly what they should be doing next. That insight helps you understand how to improve your robots, and the value of doing so (a small log-analysis sketch also follows this list).
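
To make step 3 a little more concrete, here is a minimal sketch, in Python, of how you might rank a backlog of automation opportunities by estimated first-year ROI. The opportunity names, savings, and cost figures are hypothetical placeholders; plug in your own estimates.

    # Minimal sketch: rank hypothetical automation opportunities by estimated ROI.
    # All names and dollar figures below are made-up examples, not real data.
    opportunities = [
        # (name, estimated annual savings ($), estimated implementation cost ($))
        ("Claims status lookups",   120_000, 30_000),
        ("Invoice data entry",       80_000, 40_000),
        ("Password reset tickets",   25_000, 10_000),
        ("Provider address updates", 60_000, 15_000),
    ]

    def simple_roi(annual_savings, implementation_cost):
        """First-year ROI as a ratio: (savings - cost) / cost."""
        return (annual_savings - implementation_cost) / implementation_cost

    ranked = sorted(opportunities, key=lambda o: simple_roi(o[1], o[2]), reverse=True)

    for name, savings, cost in ranked:
        print(f"{name:30s} first-year ROI: {simple_roi(savings, cost):.1f}x")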
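
And for step 5, here is a small sketch of the kind of analysis you might run on the robots’ work-item data. The file layout and field names (outcome, exception_reason) are assumptions for illustration only; the point is simply to count what the robots completed versus what they handed back to humans, because the hand-backs show you where the robots should improve next.

    # Minimal sketch: summarize what the robots did and, more importantly, didn't do.
    # The file name and column names are hypothetical; adapt them to your robots' output.
    import csv
    from collections import Counter

    completed = 0
    handed_back = Counter()

    with open("robot_work_items.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["outcome"] == "completed":
                completed += 1
            else:
                # Work items the robot routed back to a human, grouped by reason.
                handed_back[row["exception_reason"]] += 1

    print(f"Completed by robots: {completed}")
    print("Handed back to humans, by reason:")
    for reason, count in handed_back.most_common():
        print(f"  {reason}: {count}")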

If you carefully plan and take advantage of the knowledge and experience of others, you will have a successful automation strategy. Remember: don’t automate unless you have a strategy in place ahead of time!

Part III: Choosing the right process automation solution.

Robotic process automation: Part I of a five-part series

Robotic process automation

Part I: What is RPA, and why should CIOs care?

Robotic process automation (RPA) has become very popular in the last couple of years. It is now such a mainstream topic that you can find RPA-related articles in publications such as the New York Times1 and the Wall Street Journal2. So, what is RPA, and why has it become so popular?

Automation — specifically, process automation — has been around for more than a decade. RPA, however, is a specific type of process automation. The robotic part means a software robot that mimics a human; more to the point, it means software that interacts with the same applications and interfaces that humans use. This is important to understand: it has many implications, and it’s the major characteristic that distinguishes RPA from generic process automation and business process automation (BPA).

Robotic process automation

There are several reasons RPA has become popular in recent years. Businesses are constantly under pressure to reduce costs, but they often can’t rely on their IT departments to be responsive to their needs. RPA allows businesses to use automation without requiring IT involvement. Mind you, that approach is not ideal! However, it can help them implement a solution much faster. Even some IT departments have embraced RPA, because it’s easy to implement and requires no application or infrastructure changes. This allows IT to deliver value to the business more quickly and efficiently. However, many see it as merely a Band-Aid® for legacy applications in which they don’t want to invest.

Part of a strategy

It’s a fact that automation reduces costs — in some cases, dramatically. But most CIOs don’t see RPA as strategic. Instead, they view RPA as a tactical approach that’s best for the business users or individual IT departments to use on a case-by-case basis. Still, CIOs should embrace RPA for what it is, rather than avoiding it for what it’s not. Yes, RPA is a Band-Aid for some situations, but it’s also an effective way to save money quickly. To be sure, a company should use RPA as part of an overall automation strategy or a digital transformation initiative. The company should not use RPA as a “one-off” solution for one process and one department.

RPA tools also have limitations. Why? Because, in most cases, the entire targeted process runs on a desktop PC. As a result, you’re limited by the flexibility of the RPA tool and by the robustness (or lack thereof) of the desktop environment. If your process has complex logic, it’s less likely that an RPA tool can automate it. Also, certain types of application interfaces work well for humans but are a challenge for automation; two examples are mainframe applications and Web-based applications. You usually can access both using server-based technology that’s more powerful and more accurate than a default desktop interface allows.

A better way

Why not use the best of both worlds? That would be a server-based process automation solution that utilizes RPA technology to interact with the desktop applications when it makes sense to do so.

Here’s an example. Let’s say you have a process that uses a client/server application, a Web-based application, and a mainframe application, along with some standard desktop applications such as Excel® and Outlook®. With a pure RPA tool, you will have to create a process that runs on the desktop and interacts with all of these applications. If the logic isn’t too complex, then you might be okay, but it’s still a lot of applications with which to interact. However, if the logic is complex — well, good luck. This will be a challenge for most RPA tools.

A better approach is to have a server-based solution that runs all of the business logic and process information. This gives it the power to process complex logic and keep track of the hundreds of software robots that could be running at any one time. Such an approach also enables the server to connect directly with the Web-based application, using standard Web services. This eliminates otherwise required “screen-scraping” on the desktop or navigating a browser’s document object model (DOM). Ideally, the server-based solution can also access the mainframe applications directly on the network without “screen-scraping” a desktop-based mainframe emulator. (Of course, the desktop applications have to reside on a desktop, and that’s where the RPA tool comes into play. However, there is no need for complex logic to run on the desktop, since the logic is running where it should be: on the server.)
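
As a rough illustration of the server-side integration described above, here is a sketch of a robot step calling a Web-based application’s service interface directly instead of scraping its browser UI. The endpoint, fields, and token are entirely hypothetical; the point is that the call runs on the server, with no desktop browser or DOM navigation involved.

    # Minimal sketch: a server-side robot step that calls a Web service directly,
    # instead of screen-scraping the same application's browser interface.
    # The URL, fields, and token below are hypothetical placeholders.
    import requests

    def get_claim_status(claim_id, api_token):
        """Look up a claim's status through the application's Web service."""
        response = requests.get(
            f"https://claims.example.com/api/v1/claims/{claim_id}/status",
            headers={"Authorization": f"Bearer {api_token}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["status"]

    # Example use inside a server-based robot:
    #   status = get_claim_status("CLM-12345", api_token)
    #   if status == "pended":
    #       ...hand the item to the desktop RPA step, or route it to a person...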

This approach allows you to take advantage of the RPA tools that are getting much attention, while still implementing them in a more robust, server-based environment that can be part of a strategic automation initiative. This is no Band-Aid approach. And it’s something a CIO can and should embrace: yes, bring in RPA and allow the business units to use it, but control it and implement it in a centralized, enterprise-level environment.

Part II: Creating a process automation strategy.
Band-Aid is a registered trademark of Johnson & Johnson Services, Inc. Excel and Outlook are registered trademarks of Microsoft Corporation.


1. http://www.nytimes.com/2016/02/28/magazine/the-robots-are-coming-for-wall-street.html

2. http://www.wsj.com/articles/robots-on-track-to-bump-humans-from-call-center-jobs-1466501401