Multi-modality Image Fusion in Craniofacial Transplantation: A Novel Anatomical Perspective

Darren M. Smith, MD; Vijay S. Gorantla, MD, PhD; Joseph E. Losee, MD
University of Pittsburgh
2012-02-15

Presenter: Darren M. Smith, MD

Affidavit:
I certify that the material proposed for presentation in this abstract has not been published in any scientific journal or previously presented at a major meeting. This is 100% original work by the resident.

Director Name: Joseph E. Losee, MD

Author Category: Chief Resident Plastic Surgery
Presentation Category: Clinical
Abstract Category: Craniomaxillofacial

How does this presentation meet the established conference educational objectives?
Participants will address new basic science and clinical science research, techniques, and procedures relevant to plastic and reconstructive surgery. Specifically, new methods of planning craniofacial CTA will be discussed.

How will your presentation be used by practicing physicians in the audience?
A novel workflow is presented that may enhance outcomes in craniofacial transplantation and may also be applied to other complex reconstructive procedures.

BACKGROUND
Functional and aesthetic outcomes are difficult to optimize in facial transplantation, a three-dimensionally complex procedure. We have previously presented a workflow that combines data from 3DCT, diffusion tensor imaging (DTI), and stereophotogrammetry to generate a single 3D representation of patient anatomy (blood vessels, nerves, bone, muscle, and photorealistic skin) that supports real-time user interaction and modification. Here, we describe the first "virtual patients" constructed with these methods and demonstrate their utility in planning craniofacial transplantation procedures.

MATERIALS AND METHODS
The patient's facial skeleton was automatically segmented from CT datasets and converted to polygonal meshes. Blood vessels were similarly captured from CT angiography. Nerve tracts were visualized with DTI and converted to polygonal tubes. Semi-automatic image segmentation was employed to generate polygonal meshes of the facial musculature. Photorealistic 3D skin models were generated with stereophotogrammetry. These anatomical models were combined in 3D animation software adopted from the film industry (Maya, Autodesk) to produce a polygon-based model of the patient's skin, blood vessels, muscle, facial skeleton, and nerve tracts. The same software was then used to simulate a facial transplant for this patient.
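
For illustration only, a minimal sketch of the CT-to-mesh step described above is shown below; it is not the authors' validated pipeline. The file names and the ~300 HU bone threshold are assumptions, and marching cubes stands in for whichever surface-extraction method the workflow actually uses. The resulting OBJ file is a format that Maya can import directly.

    # Sketch: extract a bone isosurface from a CT volume and export it as an
    # OBJ mesh for import into a 3D animation package such as Maya.
    import nibabel as nib                       # reads NIfTI-format CT volumes
    from skimage.measure import marching_cubes  # isosurface extraction

    ct = nib.load("patient_ct.nii.gz")          # hypothetical CT dataset
    volume = ct.get_fdata()                     # voxel intensities (assumed to be in HU)
    voxel_size = ct.header.get_zooms()[:3]      # physical spacing in mm

    # ~300 HU is an assumed threshold separating bone from soft tissue.
    verts, faces, _, _ = marching_cubes(volume, level=300.0, spacing=voxel_size)

    # Write a Wavefront OBJ file (vertex indices are 1-based in OBJ).
    with open("facial_skeleton.obj", "w") as f:
        for v in verts:
            f.write(f"v {v[0]:.4f} {v[1]:.4f} {v[2]:.4f}\n")
        for tri in faces:
            f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")

Analogous export steps could feed the vessel, nerve-tract, muscle, and skin meshes into the same scene for interactive manipulation.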

RESULTS
Data from previously disparate and unwieldy DTI, CT, CTA, and surface-scan datasets were integrated into detailed 3D computer graphics anatomical models that support real-time end-user manipulation and modification. Virtual facial transplantation and operative planning were performed on these models.

CONCLUSIONS
Facial transplantation is a three-dimensionally complex procedure. We demonstrate, for the first time, computer graphics-based fusion of multiple imaging modalities to develop patient-specific 3D anatomical models that support real-time user interaction and modification.

OVSPS Conference