How did you select which of this year’s 773 poster presentations to attend?  Whether you approach poster viewing as a meticulous planner, carefree vagabond, or somewhere in between, the poster tours offer a unique alternative: the chance to consider innovative science in the company of colleagues with common interests. Each tour is guided by subject matter experts from one of the AACC Divisions.

This year, I joined the Informatics Division tour, led by Kenneth Blick, PhD, and Christopher McCudden, PhD. In under an hour, we stopped at five posters that the leaders had hand-picked based on abstract quality and author reputation. The first presenter, Alexander Leichtle, MD, demonstrated that inpatient mortality can be predicted with good accuracy using laboratory values. Multiple models were used and, despite differences in parameterization, afforded surprisingly similar predictions. (His work was recently published and is available online: http://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0159046.)

Right next door, Burak Bahar, MD, showed off an open-source, freely available web tool for evaluating and visualizing method comparison data. Using R and Shiny, he has incorporated a variety of plotting and statistical options into the web app.  Be sure to check it out the next time you find yourself comparing methods (https://bahar.shinyapps.io/method_compare).
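As one example of what such tools compute, a Bland-Altman analysis summarizes paired measurements as a mean bias and 95% limits of agreement. The sketch below is a generic illustration of that calculation, not Bahar's actual code; the function name and data are mine:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bland-Altman summary for paired measurements from two methods:
    returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 standard deviations of the differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Example with made-up paired results: method A reads ~0.1 units lower.
bias, limits = bland_altman([1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 3.1, 4.1])
```

A web app like Bahar's layers plotting and regression options (e.g., Passing-Bablok) on top of summaries like this.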

Mark Cervinski, PhD shared his novel approach for rapidly detecting the onset of systematic error using an average of deltas. Briefly, sequential test results acquired between 18 and 26 hours apart were used to calculate patient deltas, the average of which was monitored for shifts in test performance. Using this approach, nine patient deltas would be needed, on average, to detect a 0.8 g/dL change in total protein measurements. Cervinski's preliminary studies indicated that this strategy is most effective for high-volume and highly reproducible tests.
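The idea can be sketched roughly as follows. This is a minimal illustration, not Cervinski's actual implementation; the function name is mine, while the 18-26 hour window, the nine-delta average, and the 0.8 g/dL limit follow the total protein example from the poster:

```python
from statistics import mean

def average_of_deltas(results, window=9, limit=0.8):
    """Rough sketch of an average-of-deltas shift monitor.

    results: list of (current_value, previous_value, hours_apart) tuples
    for sequential results from the same patients.  A delta is kept only
    when the paired results are 18-26 hours apart; the average of the
    most recent `window` deltas is then compared against `limit`.
    Returns True when that average suggests a systematic shift.
    """
    deltas = [cur - prev for cur, prev, hrs in results if 18 <= hrs <= 26]
    if len(deltas) < window:
        return False  # not enough qualifying deltas yet to evaluate
    return abs(mean(deltas[-window:])) >= limit
```

With stable results the deltas hover near zero; a persistent 0.9 g/dL shift in paired total protein results would trip the 0.8 g/dL limit once nine qualifying deltas accumulate.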

We next visited a poster by Ashleigh Muenzenmeyer, BS, C(ASCP)CM on data mining tools to establish robust and pertinent reference intervals from existing patient data.  For each of 12 common analytes, she established reference intervals using more than 250 samples, which is well above the traditionally recommended minimum of 120 samples.  The improved statistical power of this analysis allowed partitioning of some reference intervals by sex, a distinction that had not previously been in place and was not indicated in the assay manufacturer's package insert.
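For context, the nonparametric approach commonly used for this kind of analysis takes the central 95% of the collected results (the 2.5th and 97.5th percentiles), which is why guidance calls for at least 120 results. The sketch below is a bare-bones illustration; the function name is mine, and it omits the outlier exclusion and partitioning steps a real data-mining workflow performs:

```python
import statistics

def reference_interval(values):
    """Nonparametric central 95% reference interval: the 2.5th and
    97.5th percentiles of the observed results.  Splitting the data
    into 40 quantiles yields cut points every 2.5%, so the first and
    last cut points are exactly the percentiles we need."""
    qs = statistics.quantiles(values, n=40)
    return qs[0], qs[-1]
```

With more than 250 samples per analyte, percentile estimates like these become stable enough to support partitioning by sex.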

Jim Nichols, PhD finished off our poster tour with a thorough sigma analysis of numerous chemistry tests in serum/plasma and urine using different sources for total allowable error, including CLIA, Ricos (biological variation), and RCPA.  The four automated chemistry analyzers evaluated varied in performance, and Nichols found the resulting sigma metrics to be in poor overall agreement across the allowable-error sources.  He emphasized that sigma metrics have limitations and that laboratorians should carefully consider their total allowable error goals and estimates of bias.
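The sigma metric itself folds the allowable total error together with observed bias and imprecision, which is why the choice of allowable-error source matters so much. A minimal sketch using the standard formula follows; the example numbers are illustrative, not from the poster:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric = (allowable total error - |bias|) / CV,
    with all three inputs expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# The same assay can look very different under different TEa sources:
# with 2% bias and 2% CV, a 10% TEa gives sigma 4.0, but a 6% TEa
# (e.g., from a stricter biological-variation goal) gives only 2.0.
```

This sensitivity to the TEa source, and to the bias estimate, underlies the poor agreement Nichols observed.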

A common theme shared by authors of all visited posters was their ability to engage the audience using a combination of carefully crafted graphics and concise explanations. According to Mari DeMarco, PhD, a visually appealing layout can draw someone in—then it is up to the presenting author to convey why the study and results matter.  These presenters certainly did just that, and what fun to experience it alongside colleagues enthusiastic about harnessing clinical laboratory data to answer important questions.

If you are looking to spice up your poster viewing experience at next year’s meeting, remember to catch your colleagues for an AACC Division poster walk (or two).