RSNA 2010: Quality Improvement? - MRI Wait Time

This past Sunday I had the opportunity to spend the day at the RSNA 2010 trade show at McCormick Place in downtown Chicago. Following my attendance at RSNA 2009, I posted some observations about a p-chart (a type of statistical quality control chart) on display at one of the research exhibits, along with what I would have recommended to the researchers as a consultant.

In my professional opinion, that particular p-chart was unfortunately constructed incorrectly. While the following snapshot is more of a general run chart than a control chart, it seems clear from comments made later in the associated exhibit (any indicator of the specific project has been purposely excluded) that this chart was intended to provide input to a process improvement effort.
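For reference on the p-chart point: a correctly constructed p-chart plots the proportion nonconforming in each subgroup against three-sigma limits centered on the overall proportion. Here is a minimal sketch of how those limits are constructed, using hypothetical subgroup counts (the raw data behind the exhibit was not available):

```python
import math

# Hypothetical subgroup data: nonconforming counts and subgroup sizes
# for six periods; not the exhibit's actual data.
nonconforming = [12, 9, 15, 11, 8, 14]
subgroup_sizes = [200, 180, 220, 210, 190, 205]

# Center line: overall proportion nonconforming across all subgroups.
p_bar = sum(nonconforming) / sum(subgroup_sizes)

# Three-sigma limits vary with each subgroup's size n, and the lower
# limit is floored at zero: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n)
for count, n in zip(nonconforming, subgroup_sizes):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)
    p = count / n
    status = "out of control" if (p > ucl or p < lcl) else "in control"
    print(f"n={n}: p={p:.3f} LCL={lcl:.3f} UCL={ucl:.3f} {status}")
```

Note that the limits widen as subgroups shrink, which is one reason subgroup sizes need to be reported alongside the plotted proportions.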

[Figure: monthly MRI wait time run chart (90th percentile)]

This run chart is simply a plot of wait time in days for magnetic resonance imaging (MRI) by month (at the 90th percentile). The objective of this process improvement effort was to decrease routine wait times. Four aspects of this run chart immediately jumped out at me: (1) the plot of year-to-date average is seemingly nonsensical, (2) the values of the data points decrease and then increase over the time period provided (and there are multiple waves of increasing and decreasing periods in the interim), (3) the data points are not lined up with markers on the x-axis, and (4) the wait time is plotted by month.

The plot of year-to-date average is seemingly nonsensical. Which year is this average intended to reflect? And why was average chosen by the individuals who constructed this run chart? The only way I can see the "YTD AVG" providing any value is if it were the mean for the entire date range. The raw data was not made available in the exhibit, so let's assume that this is the case.
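To see why the label matters: a true year-to-date average is recomputed each month and resets every January, whereas the mean over the entire date range is a single constant. A quick sketch with hypothetical monthly values:

```python
# Hypothetical monthly 90th-percentile wait times in days; not the
# exhibit's actual data.
waits = {
    "2008-11": 80, "2008-12": 90,
    "2009-01": 100, "2009-02": 103, "2009-03": 110,
}

year, total, count = None, 0.0, 0
for month, wait in waits.items():
    if month[:4] != year:           # a true YTD average resets each January
        year, total, count = month[:4], 0.0, 0
    total += wait
    count += 1
    print(f"{month}: wait={wait} YTD avg={total / count:.1f}")

# The mean over the entire date range, by contrast, is one constant
# for the whole chart.
print(f"overall mean = {sum(waits.values()) / len(waits):.1f}")
```

Under the first interpretation, the plotted line would shift with each new month and reset at year boundaries, which may be why it reads as nonsensical here.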

The values of the data points decrease and then increase over the time period provided (and there are multiple waves of increasing and decreasing periods in the interim). Run charts like this are intended to be used for process improvement, in this case cycle-time improvement, and statistical control must first be established in order to demonstrate meaningful improvement; as it stands, this run chart does not provide any evidence of an increase in process sigma level. It is possible that a process improvement initiative occurred at some point between April 2008 and February 2010, but if that is the case it is not clear why wait time began to increase again over time.
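To make "statistical control" concrete on a run chart, analysts typically test for non-random patterns such as shifts (a run of consecutive points on one side of the median) and trends (a run of consecutive increases or decreases). A minimal sketch, assuming the common six-point shift and five-point trend rules and hypothetical data:

```python
from statistics import median

def run_chart_signals(values, shift_len=6, trend_len=5):
    """Flag non-random patterns: a shift of shift_len consecutive
    points on one side of the median, or a trend of trend_len
    consecutive strictly increasing or decreasing points."""
    med = median(values)
    signals = []

    # Shift rule: points exactly on the median break the run.
    side_run, prev_side = 0, 0
    for i, v in enumerate(values):
        side = (v > med) - (v < med)   # +1 above, -1 below, 0 on median
        if side != 0 and side == prev_side:
            side_run += 1
        else:
            side_run = 1 if side != 0 else 0
        prev_side = side
        if side_run == shift_len:
            signals.append(("shift ending at index", i))

    # Trend rule: ties break the run.
    trend_run, prev_dir = 1, 0
    for i in range(1, len(values)):
        direction = (values[i] > values[i - 1]) - (values[i] < values[i - 1])
        if direction != 0 and direction == prev_dir:
            trend_run += 1
        else:
            trend_run = 2 if direction != 0 else 1
        prev_dir = direction
        if trend_run == trend_len:
            signals.append(("trend ending at index", i))

    return signals

# Hypothetical monthly 90th-percentile wait times (days): a decline
# followed by a rebound, loosely shaped like the exhibited chart.
waits = [95, 98, 103, 97, 80, 65, 52, 40, 38, 45, 60, 75, 88, 92]
print(run_chart_signals(waits))
```

A chart that fires signals in both directions, as the exhibited one appears to, is precisely what makes an isolated decline unconvincing on its own.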

The data points are not lined up with markers on the x-axis. Admittedly, I would typically consider this aspect minor, but it can be a bit distracting. For example, for which month is the first data point plotted? Presumably April, but perhaps it is actually March.

The wait time is plotted by month. Why are aggregates being used at such an early stage? Why not just break down the data points day-by-day? Or perhaps even better, booking-by-booking? It is not possible to determine, for instance, whether a data point reflects wait time for a single MRI visit, or 50. Let's take a look at the results that the exhibitors provided. Perhaps all of these initial concerns are unwarranted.

[Figure: monthly MRI wait time results]

The fact that 103 days was the MRI wait time for February 2009 confirms that all of the data points are offset to the left of the x-axis markers. But the fact that MRI wait times decreased for a four-month stretch in the midst of a 22-month time period does not indicate anything of significance unless a short-term measure was known to be in place. Unfortunately, there are three downward trends throughout this 22-month time period, so this four-month period does not provide any assurance of process improvement.

The exhibitors indicate that the MRI wait times increased in March 2009 due to "decreased Provincial MRI funded hours", but what about the increase that occurred later that fall? It is also unfortunately not clear why only the 90th percentile is displayed in the run chart. It is probable that focus was directed at the greatest MRI wait times, but aggregating at an apparently arbitrary monthly level is likely to lead to false conclusions.
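To illustrate how the choice of aggregation level changes what the chart can reveal, here is a sketch that computes the 90th percentile of hypothetical booking-level wait times at monthly versus weekly granularity; the data and grouping choices are all illustrative:

```python
from statistics import quantiles
from collections import defaultdict
from datetime import date, timedelta
import random

random.seed(1)

# Hypothetical booking-level wait times (days): two MRI bookings per
# calendar day for roughly two months; not the exhibit's actual data.
start = date(2009, 2, 1)
bookings = [(start + timedelta(days=i // 2), random.gauss(90, 20))
            for i in range(120)]

def p90(values):
    # The 9th decile cut point is the 90th percentile.
    return quantiles(values, n=10)[-1]

def summarize(key_fn, label):
    groups = defaultdict(list)
    for d, wait in bookings:
        groups[key_fn(d)].append(wait)
    for key in sorted(groups):
        print(label, key, f"p90={p90(groups[key]):.1f}", f"n={len(groups[key])}")

# One point per month hides within-month variation...
summarize(lambda d: (d.year, d.month), "month")
# ...which weekly (or booking-level) grouping makes visible.
summarize(lambda d: d.isocalendar()[:2], "week")
```

The monthly view collapses dozens of bookings into a single point, while finer grouping exposes the sample sizes and within-month variation that the exhibited chart hides.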

Booking turnaround times apparently decreased from about 35 days to less than 2 days, but no data is provided to determine whether this improvement has been sustained (and given what has already been discussed about the run chart, there is no reason to believe that there is statistical control in this case, either). The data simply needs to be provided at a more granular level. Doing so will provide more data points and presumably more variation. Only if statistical control is achieved can cost savings and the accompanying quality improvement be measured.
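One conventional way to assess statistical control once granular data is available is an individuals and moving range (XmR) chart, which derives natural process limits from the average moving range. A minimal sketch on hypothetical daily turnaround values spanning the reported improvement:

```python
# XmR (individuals and moving range) chart limits: hypothetical daily
# booking turnaround times in days, not the exhibit's actual data.
waits = [34, 36, 31, 3, 2, 1, 2, 1, 2, 2, 1, 2, 1, 1, 2]

mean_x = sum(waits) / len(waits)
moving_ranges = [abs(b - a) for a, b in zip(waits, waits[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Natural process limits use the 2.66 constant (3 / d2 for n=2).
ucl = mean_x + 2.66 * mr_bar
lcl = max(0.0, mean_x - 2.66 * mr_bar)

print(f"center={mean_x:.1f} LCL={lcl:.1f} UCL={ucl:.1f}")
for i, x in enumerate(waits):
    if not lcl <= x <= ucl:
        print(f"point {i} ({x}) is outside the natural process limits")
```

Out-of-limit points signal special-cause variation; sustained performance within limits recomputed after the change is what would demonstrate that the improvement held.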
