Generating Facial Expressions for Speech
Journal article
Abstract: This article reports results from a program that produces high‐quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning‐based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce convincing animation. Towards this end, we have produced a high‐level programming language for three‐dimensional (3‐D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as “focus,” “topic” and “comment,” “theme” and “rheme,” or “given” and “new” information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule‐governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: what is contrastive/background information in the given context, and what is the “topic” or “theme” of the discourse? The system maps the meaning representation onto the choice and placement of pitch accents, onto the facial expressions that convey them, and onto the coordination of speech and facial expressions. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest‐level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.
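The abstract describes a rule‐governed chain from discourse meaning (theme/rheme, given/new) to pitch accents, then to conversational facial signals, and finally to FACS Action Units. The following Python sketch is a hypothetical illustration of that kind of rule chaining, not the authors' system or language; the rule tables, information‐structure labels, and AU numbers are assumptions for demonstration only.

# A minimal, hypothetical sketch (not the authors' implementation) of the
# rule chain outlined in the abstract: discourse information on each word
# selects a pitch accent, the accent selects conversational facial signals,
# and the signals bottom out in FACS Action Units (AUs).
# All rule tables, labels, and AU numbers here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Word:
    text: str
    information: str                      # e.g. "rheme-new", "theme-given" (assumed labels)
    pitch_accent: Optional[str] = None
    facial_signals: List[str] = field(default_factory=list)
    action_units: List[int] = field(default_factory=list)

ACCENT_RULES = {"rheme-new": "H*", "theme-new": "L+H*"}      # accent placement rules (assumed)
SIGNAL_RULES = {"H*": ["eyebrow_raise", "head_nod"],         # conversational signals per accent (assumed)
                "L+H*": ["eyebrow_raise"]}
FACS_RULES = {"eyebrow_raise": [1, 2]}                       # AU1 + AU2 (inner/outer brow raiser)

def annotate(words: List[Word]) -> List[Word]:
    """Chain the rules: information structure -> accent -> signals -> AUs."""
    for w in words:
        w.pitch_accent = ACCENT_RULES.get(w.information)
        if w.pitch_accent:
            w.facial_signals = SIGNAL_RULES.get(w.pitch_accent, [])
            for signal in w.facial_signals:
                w.action_units.extend(FACS_RULES.get(signal, []))
    return words

if __name__ == "__main__":
    utterance = [Word("I", "theme-given"),
                 Word("bought", "theme-given"),
                 Word("APPLES", "rheme-new")]
    for w in annotate(utterance):
        print(w.text, w.pitch_accent, w.facial_signals, w.action_units)

In this toy run only the new rheme word receives an accent and the corresponding facial signals; synchrony, coarticulation, affect, and the other functional groups described in the abstract are outside the scope of the sketch.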
Year of publication: 1996
Authors: Catherine Pélachaud, Norman I. Badler, Mark Steedman
Publisher: Wiley
Source: Cognitive Science
Keywords: Face recognition and analysis, Hand Gesture Recognition Systems, Human Motion and Animation
Other links: Cognitive Science (PDF)
Cognitive Science (HTML)
ScholarlyCommons (University of Pennsylvania) (PDF)
ScholarlyCommons (University of Pennsylvania) (HTML)
Open access: bronze
Volume: 20
Issue: 1
Pages: 1–46