Wednesday, December 17, 2008

Myth: It is practical to assign ICD-10-CM codes manually

The proposed rule to mandate the switch to ICD-10-CM states:

It would be impractical to attempt to manually assign SNOMED–CT codes. The number of terms and level of detail in a reference of clinical terminology such as SNOMED CT cannot be effectively managed without automation,...

By implication, then, it would be practical to assign ICD-10-CM codes manually. Otherwise this supposed disadvantage of SNOMED-CT would not be a factor in HHS' decision to reject SNOMED-CT.

Let us examine this claim further.

ICD-10-CM, by all accounts we have seen, including the proposed rule itself, contains approximately 68,000 codes.

First, we think the very notion that the human brain can cope with 68,000 codes, reliably and manually selecting the correct few to assign to a patient visit or hospitalization, has no face validity.

Second, even with the manual assignment of the roughly 13,000 codes of ICD-9-CM, coding variability has been tremendous and reliability low. The Department of Veterans Affairs (VA) conducted a study that found substantial variability in assignment of ICD-9-CM codes:

Based on this study, OHI concluded that the coding of the primary and secondary diagnoses varied widely. The implications of this variability has to be considered when assessing the validity of health services research, health care program planning, quality assurance, utilization review, and resource allocation for VA Medical Centers based on ICD-9-CM codes or DRG information.

While OHI was not evaluating the coding "error rate" in this study, the coding variability observed in the study was comparable to error rates noted in earlier Institute of Medicine (IOM) studies. We found a 60.6 percent agreement in the primary diagnosis code among the original coders and our expert coder. The IOM studies documented a 65.2 percent agreement on the principal diagnosis code, in 1977, and a 63.4 percent agreement on the principal diagnosis code of the records analyzed in 1980. Thus, in all three studies there was approximately a 2/3's agreement in the coding of the medical record.

Even among the expert coders, there was a 19 percent disagreement on the primary diagnosis code. Since our expert coders were highly qualified, this high rate of disagreement caused OHI to question the reliability of the selection of the primary diagnosis and, thus, the accuracy of coded information.

A study of ICD-9-CM coding in psychiatry concluded:

The question was addressed how well mental health professionals were able to translate diagnostic formulations into ICD-9-CM codes. This was done with three coder groups and under two conditions. It was found that there was insufficient interrater agreement on the ICD-codes in all groups and conditions. This finding then was related to the inadequacies of the ICD-system itself. It was concluded that current mental health statistics that are based on the ICD-9-CM coding system are without scientific value.

A study of ICD-9-CM coding in intensive care concluded:

In a multicenter database designed primarily for epidemiological and cohort studies in ICU patients, the coding of medical diagnoses varied between different observers. This could limit the interpretation and validity of research and epidemiological programs using diagnoses as inclusion criteria.

Other nations have already switched to ICD-10 or to their own national variant of it (none of which contains even half as many as 68,000 codes). What has their experience been with ICD-10? Better coding? No.

One study of the reliability of coding with ICD-10 concluded:

The refinement of the ICD-10 accompanied by innumerous coding rules has established a complex environment that leads to significant uncertainties even for experts. Use of coded data for quality management, health care financing, and health care policy requires a remarkable simplification of ICD-10 to receive a valid image of health care reality.

A study from Canada even compared the quality of coding between ICD-9 and ICD-10 and concluded:

The implementation of ICD-10 coding has not significantly improved the quality of administrative data relative to ICD-9-CM.

So then: manual assignment of the ~13,000 ICD-9-CM codes in the U.S. and elsewhere, and manual assignment of the ~13,000-30,000 codes of ICD-10 (depending on national variant), have not been "effectively managed".

It brings to mind the old adage: "those who live in glass houses should not throw stones."

So what of SNOMED-CT? How many disease codes are we looking at?

The July 2008 version of SNOMED-CT, by contrast, has 63,731 active disease concepts. [1]

SNOMED-CT, therefore, actually has fewer disease codes than ICD-10-CM! It is hard to imagine that manual assignment of SNOMED-CT disease codes could be managed any less effectively than manual assignment of ICD-10-CM disease codes.[2]

Myth: Busted.

[1] Because SNOMED-CT, unlike ICD-10-CM, comes in machine-readable format, these kinds of exact counts are easy to make.

[2] Note that we are not advocating SNOMED-CT for disease coding. Studies conducted thus far have shown a lack of reliability in SNOMED-CT disease coding as well.
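To illustrate the point in note [1]: because SNOMED-CT ships as plain tab-delimited release files, a count of active disease concepts is a few lines of scripting. The sketch below is a hypothetical minimal example, not the actual tooling used for the figure above; the column names, the status code `0` meaning "current", and the "(disorder)" semantic tag in the fully specified name are assumptions modeled on the RF1-style Concepts table.

```python
# Hedged sketch: counting active "disorder" concepts in a SNOMED CT
# RF1-style Concepts file. Column layout, status code 0 = current, and
# the "(disorder)" semantic tag are assumptions for illustration only.
import csv
import io

# A tiny in-memory stand-in for the real tab-delimited Concepts file.
SAMPLE = (
    "CONCEPTID\tCONCEPTSTATUS\tFULLYSPECIFIEDNAME\n"
    "22298006\t0\tMyocardial infarction (disorder)\n"
    "40468003\t0\tViral hepatitis, type A (disorder)\n"
    "123037004\t0\tBody structure (body structure)\n"
    "1000000\t1\tRetired concept example (disorder)\n"
)

def count_active_disorders(text):
    """Count rows whose status is current ("0") and whose fully
    specified name carries the "(disorder)" semantic tag."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return sum(
        1
        for row in reader
        if row["CONCEPTSTATUS"] == "0"
        and row["FULLYSPECIFIEDNAME"].endswith("(disorder)")
    )

print(count_active_disorders(SAMPLE))  # 2 of the 4 sample rows qualify
```

Run against the full release file instead of the sample text, the same filter yields an exact count in seconds, which is the contrast with ICD-10-CM's non-machine-readable distribution that note [1] draws.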