 

Toward an example-based machine translation from written text to ASL using virtual agent animation


Mehrez Boulares and Mohamed Jemni

Modern computational linguistic software cannot yet reproduce important aspects of sign language translation. A review of prior work shows that most automatic sign language translation systems ignore many of these aspects when generating animation, so the resulting interpretation loses part of the intended meaning. This problem stems from treating sign language as a derivative of spoken language, whereas it is a complete language with its own unique grammar. This grammar relies on semantic-cognitive models of space, time, action and facial expression to represent complex information, and accounting for it makes sign interpretation more efficient, smooth and expressive, with natural-looking human gestures. All of these aspects give useful insights into the design principles that have evolved in natural communication between people. In this work we focus on American Sign Language (ASL), because it is one of the simplest and most standardized sign languages. Our goals are: to translate written text from any language into ASL animation; to model as much of the raw information as possible using machine learning and computational techniques; and to produce ASL animations that are more expressive, natural looking and understandable. Our method combines linguistic annotation of the input text with semantic orientation to generate facial expressions. We use genetic algorithms coupled with learning/recognition systems to produce the most natural form, and we rely on fuzzy logic to detect emotion and to compute the degree of interpolation between facial expressions. In summary, we present a new expressive language, Text Adapted Sign Modeling Language (TASML), which describes the aspects required for a good sign language interpretation. This paper is organized as follows: the next section presents an experimental study of how the Space/Time/SVO form affects the comprehension of ASL animation. Section 3 describes our technical considerations.
Section 4 presents the general approach we adopted to develop our tool. Finally, we give some perspectives and future work.
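The abstract's fuzzy-logic step — computing a degree of interpolation between facial expressions from a detected emotion — can be illustrated with a minimal sketch. The membership functions, the three fuzzy sets, their representative blend levels, and the function names below are all illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a fuzzy-logic blend: map an emotion intensity
# score in [0, 1] to a degree of interpolation between two facial poses
# (e.g. "neutral" and "happy"). All sets and values are assumptions.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def blend_weight(intensity):
    """Defuzzified degree of interpolation toward the target expression.

    Three fuzzy sets ('low', 'medium', 'high') cover the intensity axis;
    the output is a membership-weighted average of representative blend
    levels for each set (0.1, 0.5, 0.9 -- assumed values).
    """
    memberships = (
        triangular(intensity, -0.5, 0.0, 0.5),  # low
        triangular(intensity, 0.0, 0.5, 1.0),   # medium
        triangular(intensity, 0.5, 1.0, 1.5),   # high
    )
    levels = (0.1, 0.5, 0.9)
    total = sum(memberships)
    return sum(m * l for m, l in zip(memberships, levels)) / total

def interpolate(neutral_pose, target_pose, intensity):
    """Linearly blend two facial poses (lists of parameters) by the fuzzy weight."""
    w = blend_weight(intensity)
    return [n + w * (t - n) for n, t in zip(neutral_pose, target_pose)]
```

The point of the fuzzy layer is that the blend weight varies smoothly with the detected emotion intensity instead of snapping between a few discrete expressions, which matches the paper's goal of natural-looking transitions.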

Keywords: American Sign Language, Animation, Natural Language Generation, Accessibility Technology for the Deaf, biological algorithms, facial expression, emotion, machine learning, Fuzzy logic.



ABOUT THE AUTHORS

Mehrez Boulares
is currently a PhD student under the supervision of Prof. Mohamed Jemni. He received his Master's degree in Computer Science in September 2009 from the Tunis College of Sciences and Techniques (ESSTT), University of Tunis, Tunisia. His research interests are in the area of sign language processing; his current topics of interest include computer graphics and the accessibility of ICT for persons with disabilities.

Mohamed Jemni
is Professor of ICT and Educational Technologies at the University of Tunis, Tunisia, and head of the Research Laboratory of Technologies of Information and Communication (UTIC). Since August 2008 he has been the general chair of the Computing Center El Khawarizmi, the Internet services provider for the higher education and scientific research sector. His research projects involve tools and environments for e-learning, the accessibility of ICT for persons with disabilities, and parallel and grid computing.

