Tips for Getting the Most Out of CAC
By Daniel Land, RHIA, CCS
Marshall McLuhan, the Canadian philosopher who predicted the World Wide Web nearly 30 years before its invention, said that “we shape our tools and thereafter our tools shape us.”1
It is important to keep this in mind in any discussion of computer-assisted coding (CAC), since this ever-evolving tool is not
completely self-sustaining or self-operating.
CAC was not designed to make truly meaningful decisions about
the context of the health record and does not replace the need
for human logic and intelligent decision-making. Rather, coding
professionals—the ultimate drivers of the codes reported—are responsible for applying official coding guidance, following coding
conventions, ensuring compliance with regulations, and utilizing
common sense while reviewing CAC’s auto-suggested codes.
A partnership exists between coding professionals and CAC
in that optimal usage of the technology allows CAC to learn and
improve over time. In turn, coding professionals can benefit
from CAC’s mapping logic. For example, CAC may suggest the
correct ICD-10-PCS code for a procedure that would otherwise
pose an indexing challenge. The process of validating auto-suggested codes helps to continually refine coding professionals’ knowledge and critical thinking skills. This article shares tips
from coding experts on how to best interact with CAC in order to
improve coding efficiency and accuracy while helping to build a
better product for the future.
A Brief Overview of CAC
CAC is a software tool designed to assist with documentation and
code assignment by reviewing the patient record and suggesting
codes. While these suggested codes are automatically generated,
they require validation from a human coding professional based
on the documentation. The process of validation allows the coding professional to identify inconsistencies or gaps in documentation related to the totality of the patient record.
CAC can operate via natural language processing (NLP) or structured input. NLP uses artificial intelligence to identify terms in a text-based document and convert them into medical codes.
Structured input is based on menu items chosen via a template that is then blended into the medical record. The provider selects a diagnosis from the menu, and it is then translated into code by the software.
CAC was designed to increase coding efficiency, productivity,
and consistency for healthcare organizations. Although CAC
software has greatly improved over the past few years, it is still
far from perfect and can increase coding errors and claims denials if not built and used properly. For
example, accepting CAC-generated codes without careful validation could lead to erroneously reported MCCs and CCs and
incorrectly assigned DRGs.
Awareness of the following tips can help coding professionals
use CAC to their advantage while ensuring revenue integrity
and data quality:
Providers often use different terminology to describe the same diagnosis or procedure, and that terminology may not always match the CAC’s NLP mapping. This can result in incorrect auto-suggested diagnosis and procedure codes, so careful validation of auto-suggested codes is necessary to prevent incorrect code assignment.
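A minimal sketch of that mismatch risk, assuming a hypothetical engine dictionary that recognizes only the full term (the term, the abbreviation, and the code are illustrative, not taken from any real product):

```python
# Hypothetical engine dictionary that knows only the spelled-out term.
DICTIONARY = {"hypertension": "I10"}

def match(note: str) -> list[str]:
    """Return codes for dictionary terms found verbatim in the note."""
    note_lower = note.lower()
    return [code for term, code in DICTIONARY.items() if term in note_lower]

# The provider charted the common shorthand "HTN" instead of "hypertension",
# so no code is auto-suggested and the diagnosis could be missed entirely.
print(match("Hx: HTN, on lisinopril."))   # [] -- shorthand is not recognized
print(match("History of hypertension."))  # ['I10'] -- full term is recognized
```

The silent miss is the more dangerous failure mode here: an incorrect suggestion at least appears on screen for validation, while a missed term generates nothing for the coding professional to review.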
NLP will identify every instance of a word in the set parameters of a search. For example, the term “diabetes”
can yield an auto-suggested code for diabetes type II, un-