
PhD Thesis Proposal: Brandon Taylor, "Towards the Automatic Translation of American Sign Language"


Speaker
Brandon Taylor
PhD candidate, HCII

When
-

Where
Gates-Hillman Center 6115

Description

Committee:
Dan Siewiorek, Chair (HCII)
Anind Dey (HCII)
Carolyn Rose (HCII)
Roberta Klatzky (HCII)
Asim Smailagic (ECE)

Abstract:
There are estimated to be more than a million Deaf and severely hard-of-hearing
individuals living in the United States. For many of these individuals,
American Sign Language (ASL) is their primary means of communication. However,
for most day-to-day interactions, native ASL users must either get by with a
mixture of gestures and written communication in a non-native language or seek
the assistance of an interpreter. Whereas advances towards automated
translation between many other languages have benefitted greatly from decades
of research into speech recognition and Statistical Machine Translation, ASL's
lack of aural and written components has limited exploration into automated
translation of ASL.

Previous research efforts into sign language detection have met with limited
success, primarily due to inaccurate handshape tracking. Without this vital
component, research into ASL detection has been limited to isolated components
of ASL or to restricted vocabulary sets that reduce the need for accurate
handtracking. However, improvements in 3D cameras and advances in handtracking
techniques suggest that some of these technical sensing limitations may no
longer exist. By combining state-of-the-art handtracking techniques with ASL
language modeling, there is an unexplored opportunity to develop a system
capable of fully capturing ASL.
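As an illustration of the kind of pipeline such sensing enables, the sketch below classifies a handshape from 3D joint positions reported by a depth camera: joint positions are reduced to inter-joint angles (invariant to hand position and scale) and matched against per-handshape templates. All function names, features, and templates here are hypothetical assumptions for illustration, not details from the proposal.

```python
import math

def joint_angle_features(joints):
    """Reduce a chain of 3D joint positions to the angles at each
    interior joint, making the features invariant to where the hand
    is in space and how large it appears to the camera."""
    feats = []
    for a, b, c in zip(joints, joints[1:], joints[2:]):
        v1 = tuple(ai - bi for ai, bi in zip(a, b))
        v2 = tuple(ci - bi for ci, bi in zip(c, b))
        dot = sum(x * y for x, y in zip(v1, v2))
        n1 = math.sqrt(sum(x * x for x in v1)) or 1.0
        n2 = math.sqrt(sum(x * x for x in v2)) or 1.0
        # Clamp to [-1, 1] to guard against floating-point drift.
        feats.append(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return feats

def nearest_handshape(feats, templates):
    """Return the template handshape whose angle profile is closest
    (squared Euclidean distance) to the observed features."""
    def dist(name):
        return sum((f - g) ** 2 for f, g in zip(feats, templates[name]))
    return min(templates, key=dist)
```

A real system would learn templates (or a discriminative classifier) from labeled capture data rather than hand-specify them, but the invariant-feature-then-match structure is the same.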

In this work, I propose to develop the first ASL translation system
capable of detecting all five necessary parameters of ASL (Handshape, Hand
Location, Palm Orientation, Movement, and Non-Manual Features). This work
will build on existing handtracking techniques and explore the features
that best discriminate among the 40 distinct handshapes used in ASL. An
ASL language model will be incorporated into the detection algorithm to
improve sign detection. Finally, the system will output a form of
transcribed ASL that allows sign detection to be separated from
ASL-to-English language translation.
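One common way a language model can be folded into detection, sketched below, is Viterbi decoding: per-segment detector log-likelihoods are re-scored with a bigram model over sign sequences, so sequence context resolves ambiguous segments. The signs, glosses, scores, and function names below are illustrative assumptions, not the proposal's actual method.

```python
import math

def viterbi(obs_scores, bigram, signs):
    """Decode the most likely sign sequence.

    obs_scores: list of {sign: log-likelihood} dicts, one per segment,
                from the sign detector.
    bigram:     {(prev, cur): log-probability} sign language model;
                unseen pairs get a flat penalty.
    signs:      the sign vocabulary."""
    UNSEEN = -10.0  # flat log-penalty for bigrams absent from the model
    best = {s: obs_scores[0].get(s, -math.inf) for s in signs}
    back = []
    for scores in obs_scores[1:]:
        nxt, ptr = {}, {}
        for cur in signs:
            # Best predecessor under detector score + language model.
            prev = max(signs, key=lambda p: best[p] + bigram.get((p, cur), UNSEEN))
            ptr[cur] = prev
            nxt[cur] = (best[prev] + bigram.get((prev, cur), UNSEEN)
                        + scores.get(cur, -math.inf))
        best = nxt
        back.append(ptr)
    # Backtrace from the best final sign.
    path = [max(signs, key=lambda s: best[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In this toy setup, a segment whose detector scores weakly favor one gloss can still be decoded correctly when the bigram model strongly prefers a neighboring sign sequence, which is the effect the proposal seeks from incorporating a language model.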

Document: http://www.brandonttaylor.com/proposal