Evaluating the effects of noninteractive and machine-assisted interactive manual clinical text annotation approaches on the quality of reference standards

Publication Type dissertation
School or College School of Medicine
Department Biomedical Informatics
Author South, Brett Ray
Title Evaluating the effects of noninteractive and machine-assisted interactive manual clinical text annotation approaches on the quality of reference standards
Date 2014-05
Description Manual annotation of clinical texts is often used as a method of generating reference standards that provide data for training and evaluation of Natural Language Processing (NLP) systems. Manually annotating clinical texts is time consuming, expensive, and requires considerable cognitive effort on the part of human reviewers. Furthermore, reference standards must be generated in ways that produce consistent and reliable data, but they must also be valid in order to adequately evaluate the performance of those systems. The amount of labeled data necessary varies depending on the level of analysis, the complexity of the clinical use case, and the methods that will be used to develop automated machine systems for information extraction and classification. Methods that potentially reduce cost and manual human workload, introduce task efficiencies, and reduce the amount of labeled data necessary to train NLP tools for specific clinical use cases are an active area of research inquiry in the clinical NLP domain. This dissertation integrates a mixed-methods approach, drawing on methodologies from cognitive science and artificial intelligence, with manual annotation of clinical texts. Aim 1 of this dissertation identifies factors that affect manual annotation of clinical texts. These factors are further explored by evaluating approaches that may introduce efficiencies into manual review tasks applied to two different NLP development areas: semantic annotation of clinical concepts and identification of information representing Protected Health Information (PHI) as defined by HIPAA. Both experiments integrate different priming mechanisms using noninteractive and machine-assisted methods. The main hypothesis for this research is that integrating pre-annotation or other machine-assisted methods within manual annotation workflows will improve the efficiency of manual annotation tasks without diminishing the quality of generated reference standards.
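To make the pre-annotation idea concrete, the sketch below is an illustration only; it is not drawn from the dissertation. It assumes a simple regex-based PHI pre-annotator (the patterns, labels, and helper names are hypothetical) and uses exact-match span F1 as one possible check that machine-assisted suggestions, once reviewed, still agree with an independently created reference.

```python
# Hypothetical sketch: regex-based PHI pre-annotation feeding a manual review
# step, plus a span-level F1 check against an independent reference standard.
import re
from typing import List, Tuple

Span = Tuple[int, int, str]  # (start offset, end offset, label)

# Assumed example patterns; a real pre-annotator would cover many more PHI types.
PHI_PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def pre_annotate(text: str) -> List[Span]:
    """Suggest candidate PHI spans for a human reviewer to accept, correct, or reject."""
    spans: List[Span] = []
    for label, pattern in PHI_PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), label))
    return sorted(spans)

def span_f1(predicted: List[Span], reference: List[Span]) -> float:
    """Exact-match span F1: one way to verify pre-annotation has not degraded quality."""
    pred, ref = set(predicted), set(reference)
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(ref)
    return 2 * precision * recall / (precision + recall)

note = "Patient seen on 03/14/2013; callback number 801-555-0199."
suggested = pre_annotate(note)   # machine-assisted suggestions shown to the reviewer
reviewed = suggested             # stand-in for the reviewer-corrected spans
print(suggested, span_f1(reviewed, suggested))
```

In a workflow like the one hypothesized in the abstract, the reviewer would edit the suggested spans rather than annotate from scratch, and agreement measures such as the F1 above (or inter-annotator agreement statistics) would be used to confirm that the resulting reference standard remains consistent and valid.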
Type Text
Publisher University of Utah
Subject Information Technology; Medicine
Subject MESH Information Storage and Retrieval; Health Insurance Portability and Accountability Act; Natural Language Processing; Semantics; Reference Standards; Computer Security; Medical Informatics; Electronic Health Records; Workflow; Cognition; Classification; Protected Health Information; Annotation
Dissertation Institution University of Utah
Dissertation Name Doctor of Philosophy
Language eng
Relation is Version of Digital reproduction of Evaluating the Effects of Noninteractive and Machine-Assisted Interactive Manual Clinical Text Annotation Approaches on the Quality of Reference Standards
Rights Management (c) Brett Ray South
Format Medium application/pdf
Format Extent 80,793,559 bytes
Source Original in Marriott Library Special Collections
ARK ark:/87278/s6zd1b36
Setname ir_etd
ID 196654
Reference URL https://collections.lib.utah.edu/ark:/87278/s6zd1b36