Date of Award

January 2018

Document Type

Open Access Thesis

Degree Name

Medical Doctor (MD)

Department

Medicine

First Advisor

Melinda S. Sharkey

Second Advisor

Jonathan Grauer

Abstract

As there is no “gold standard” for determining whether a fracture is caused by accident or abuse, agreement among medical providers is paramount. Using abstracted medical record data from children <36 months of age presenting to a level 1 pediatric emergency department (ED), we examined the extent of agreement between specialists who evaluate children with fractures for suspected abuse. To simulate clinical scenarios, two pediatric orthopaedists and two child abuse pediatricians (CAPs) reviewed the full medical record abstraction and imaging, whereas two pediatric radiologists reviewed only a brief history and imaging.

Each physician independently rated each case using a 7-point ordinal scale designed to distinguish accidental from abusive injuries. For any discrepancy in independent ratings, the two specialists discussed the case and reached a joint rating. We analyzed three types of agreement: (1) within specialties, using independent ratings; (2) between specialties, using joint ratings; and (3) between clinicians (orthopaedists and CAPs) with more versus less experience. Agreement between pairs of raters was assessed using Cohen’s weighted kappa.
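For reference, a minimal sketch of the weighted kappa statistic for two raters on a k-category ordinal scale is given below; it assumes linear disagreement weights, since the abstract does not state which weighting scheme was used.

\[
  \kappa_w \;=\; 1 - \frac{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, o_{ij}}
                          {\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, e_{ij}},
  \qquad
  w_{ij} = \frac{\lvert i - j \rvert}{k - 1},
\]

where $o_{ij}$ is the observed proportion of cases rated $i$ by one rater and $j$ by the other, $e_{ij} = p_{i\cdot}\, p_{\cdot j}$ is the proportion expected by chance from the raters’ marginal rating frequencies, and $k = 7$ for the scale used here. A value of $\kappa_w = 1$ indicates perfect agreement and $\kappa_w = 0$ indicates agreement no better than chance.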

From 2007 to 2010, 551 children presented to the Yale New Haven Children’s Hospital Pediatric ED with 572 fractures. Twenty-eight cases (5.1%) had fractures with a consensus rating indicating abuse. The skull was the most commonly fractured bone, and rib fractures had the highest association with an abuse consensus rating (86.7%). Within the county, the annual incidence of children presenting with an abusive fracture was 2.4 per 10,000 children <36 months of age; per ED visit, the incidence was 2.2 per 10,000 visits.

Orthopaedists (κ=.78) and CAPs (κ=.67) had substantial within-specialty agreement, while radiologists (κ=.53) had moderate agreement. Orthopaedists and CAPs had almost perfect between-specialty agreement (κ=.81), while agreement was much lower between orthopaedists and radiologists (κ=.37) and between CAPs and radiologists (κ=.42). More-experienced clinicians had substantial between-specialty agreement (κ=.80), whereas less-experienced clinicians had moderate agreement (κ=.60). These findings suggest that the level of clinical detail a physician receives and his or her experience in the field both affect the level of agreement when evaluating fractures in young children. The lack of clinical data provided to the radiologists limited their ability to designate a fracture as definitively abusive or accidental, likely lowering their observed agreement scores.

Comments

This is an Open Access Thesis.
