AS TEXT MESSAGING has come to replace telephone calls as
the primary means of interpersonal communication, apps such
as SwiftKey have popped up to help those with less-than-nimble thumbs. SwiftKey and other “intelligent keyboards” quickly
learn a user’s writing and typing style. They operate in the background of a smartphone and take note of frequently used expressions, punctuation, emojis, and slang in a user’s text messages, emails, and social media posts. Before long, they’re able to predict what a user is trying to say and autosuggest words, drastically reducing the time it takes to type out a message.
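The prediction step these keyboards perform can be illustrated with a minimal frequency-based sketch. The message history, the bigram approach, and the function name are all invented for illustration; commercial keyboards use far more sophisticated language models.

```python
from collections import Counter, defaultdict

# Hypothetical sample of a user's past messages.
history = [
    "running late be there soon",
    "be there in five",
    "running late sorry",
]

# Count which word most often follows each word (a simple bigram model).
bigrams = defaultdict(Counter)
for message in history:
    words = message.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def autosuggest(prev_word, k=3):
    """Suggest the k words the user most often types after prev_word."""
    return [w for w, _ in bigrams[prev_word].most_common(k)]

print(autosuggest("running"))  # the user's most frequent continuation
```

The more messages the model sees, the better its counts reflect one user's habits, which is why such keyboards improve with use.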
While these apps are arguably helpful and “intelligent,” they
do require a human’s touch to succeed—and they aren’t without drawbacks. Before a user realizes it, “chicken noodle soup”
can be autocorrected to “Chuck Norris soup.” The internet is full
of enough #autocorrectfails that savvy users know to slow down
a bit to avoid embarrassing typos. Human beings understand
that texts sent to bosses and colleagues require more care than a
quick note to friends or significant others.
Computer-assisted coding (CAC) serves a similar function in the lives of coding and health information management
(HIM) professionals. CAC software uses natural language processing (NLP) to extract and translate transcribed free-text data
or computer-generated discrete data into information for billing
and coding purposes. Over time, the software picks up on a coding professional’s frequently used codes—especially when used
in a specialty hospital—and quickly becomes more precise,
learning from instances when a coding professional overrides
the CAC’s suggested code with one that’s more accurate.
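The extract-suggest-override loop described above can be sketched at a very high level. The phrase-to-code table, the function names, and the codes themselves are toy illustrations, not any vendor's actual NLP engine.

```python
# Toy sketch of a CAC-style suggest-and-learn loop (illustrative only).
suggestions = {"pneumonia": "J18.9", "heart failure": "I50.9"}

def suggest_codes(note_text):
    """Return codes whose trigger phrases appear in the free-text note."""
    text = note_text.lower()
    return {phrase: code for phrase, code in suggestions.items() if phrase in text}

def record_override(phrase, corrected_code):
    """When a coder overrides a suggestion, remember the more accurate code."""
    suggestions[phrase] = corrected_code

note = "Patient admitted with community-acquired pneumonia."
print(suggest_codes(note))            # engine's initial suggestion
record_override("pneumonia", "J18.1") # coder picks a more specific code
print(suggest_codes(note))            # future suggestions reflect the override
```

The key point the sketch captures is that the software only improves because a knowledgeable coder corrects it; the override is the training signal.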
Like apps that make texting faster, CAC’s success is contingent
on the reasoning, knowledge, and editing skills of the human
beings who use it. Before ICD-10-CM/PCS went live in 2015,
CAC was hailed by many in the industry as a miraculous tool for
preventing massive coding slowdowns that some predicted the
new code set would unleash.
Since that time, however, reality has set in and tempered the
expectations of coding professionals and the many CAC vendors that promised life-changing results. With the ICD-10 transition in the rear-view mirror, it’s time to re-evaluate the following promises CAC initially offered: that it would improve coding
accuracy and documentation quality; that it would increase
productivity; that it would reduce the need for coders and transition others into coding auditors; that it would provide a positive return on investment; and that CAC could make intelligent,
human-free decisions based on documentation.
Expectations Meet Reality
In the nearly four years since ICD-10 went into effect, there has
been no evidence to suggest that CAC will be replacing the need
for coding professionals any time soon. But that’s not to say it’s
been completely unhelpful. In fact, CAC has helped providers
in expected ways. In 2013, the AHIMA Foundation worked on a
study with the Cleveland Clinic, with funding from CAC vendor
3M, to predict how the use of CAC technology would impact accuracy and productivity with ICD-10.1
The AHIMA Foundation was able to validate that the time it took the study’s coding professionals to code inpatient records using CAC was significantly shorter than for those who didn’t use the technology, a 22 percent reduction in time per record. Additionally, it found that Cleveland Clinic was able to reduce the time it took to code without decreasing quality, as measured by recall and precision for both procedures and diagnoses.
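The recall and precision metrics mentioned here compare the codes assigned to a record against a gold-standard set. A minimal sketch of that calculation, with invented code sets standing in for real audit data, might look like:

```python
# Hypothetical example: compare assigned codes to a gold-standard audit set.
gold = {"J18.9", "I10", "E11.9"}    # codes the audit says belong on the record
assigned = {"J18.9", "I10", "R05"}  # codes the coder actually assigned

true_positives = gold & assigned
precision = len(true_positives) / len(assigned)  # share of assigned codes that are correct
recall = len(true_positives) / len(gold)         # share of correct codes that were assigned

print(f"precision={precision:.2f} recall={recall:.2f}")
```

Measuring both matters: precision drops when the coder (or the CAC engine) adds codes that don't belong, while recall drops when required codes are missed.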
For Monica Pinette, MBA, RHIA, CDIP, CCS, CPC, now the assistant vice president of HIM at UConn Health, the AHIMA Foundation’s findings weren’t all that different from what she found when she was preparing for the ICD-10 transition with CAC at a previous employer, St. Francis Hospital and Medical Center in Hartford, CT. While at St. Francis, Pinette led her coding team through extensive training with CAC prior to the ICD-10 transition in 2015, with the expectation that the new code set would slow them down, especially when coding procedures.
Pinette says the industry standard for the number of charts coded per hour was 2.5 records using ICD-10. However, her coding staff was easily able to code three or four charts per hour with CAC. “Even though we had the implementation of ICD-10 and it was predicted we’d slow down, CAC helped us avoid productivity losses. Coders were able to exceed their expectations,” Pinette says.
The CAC software also helped coding professionals familiarize themselves with the new code set more quickly. “With CAC it would actually highlight procedure codes and diagnosis codes and slate them for you. Then, coders could use the CAC’s evidence-based feature where you could go back and validate the procedures and diagnosis codes [suggested by the CAC engine]. And in a way it kind of helped to teach the coders by seeing those codes over and over again,” Pinette says.
Pinette’s facility used CAC for both outpatient and inpatient coding, but she says it was the most beneficial on the inpatient side, because inpatient coding professionals have the additional challenge of assigning PCS codes and choosing DRGs.
Working with Limitations
Like many people, Pinette says her coding professionals were
concerned, at first, that CAC would be so useful that it would
replace them, but it quickly became clear to them this wouldn’t
be the case.
“I think people with less knowledge of coding operations think
‘Oh, CAC does the coding for you’ but that’s not true at all. It
does take human intervention because not every code that is
given by the CAC is necessarily correct or needed for coding
accuracy and ensuring the bill goes out on the claim appropriately. It does take human intervention and analysis on the outpatient side to look at edits and things like that in addition to
using the CAC feature,” Pinette says.
Deanna Klure, RHIT, CCS, CDIP, director, coding education,
nosology, CAC/clinical documentation improvement (CDI)
business applications at Kaiser Permanente, stresses that it’s
important that coding professionals and their managers remember that CAC is just a tool—a very effective one—but a tool
that’s as fallible as the humans who use it.
For example, on a given chart the CAC may autosuggest 10