RUSSIAN MULTICHANNEL DISCOURSE
PRINCIPLES OF ANNOTATION (updated 10.12.2018)

Two kinds of mark-up are used for corpus annotation: primary and secondary. Primary mark-up includes the annotation of the vocal component (the verbal and prosodic channels), of manual and cephalic gestures, and of the oculomotor channel.

In vocal annotation, the flow of speech is divided into significant fragments: elementary discourse units (EDUs), words, filled and silent pauses, and non-speech sounds. Vocal annotation also attributes characteristics to EDUs and their elements. For an accurate account of each participant's vocal contribution to the communication, a special score format was additionally developed.

To analyze manual gestures, a novel method has been developed that breaks the flow of manual kinetic behavior down into phases of stillness and distinct motions, which then form functional units: individual gestures, adaptors, posture changes, and their groups. These units are then specified for characteristics such as handedness, phase structure, and functional type. Cephalic annotation is based on the same principles as those developed for manual annotation within this project.

Oculomotor annotation involves exporting the eye-tracking data onto the video scene ("superimposing the marker of visual attention"). With the help of the Tobii Pro Glasses Analyzer software, timeline data containing all fixations longer than 100 ms are extracted and then manually annotated to specify the target of each gaze.
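
As an illustration of this duration criterion, the Python sketch below filters fixations from a tab-separated eye-tracker export. The column names ('EventType', 'Start', 'Duration') and the event label 'Fixation' are assumptions for the sake of the example: the actual headers depend on the export settings of the analyzer software.

import csv

MIN_FIXATION_MS = 100  # duration threshold used in the oculomotor annotation

def extract_fixations(tsv_path):
    """Keep only fixations longer than MIN_FIXATION_MS from a TSV export.

    The column names below are hypothetical; adjust them to the actual
    headers produced by the eye-tracker export.
    """
    fixations = []
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if row["EventType"] != "Fixation":
                continue  # skip saccades, blinks, and other event types
            duration_ms = float(row["Duration"])
            if duration_ms > MIN_FIXATION_MS:
                fixations.append((float(row["Start"]), duration_ms))
    return fixations  # list of (start time in ms, duration in ms) pairs

Each retained fixation is then annotated manually with the target of the gaze, as described above.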

Secondary mark-up includes the annotation of torso movements, facial expressions, and proxemics, a phonetic transcription, and referential annotation, which is based on the mark-up of all verbal expressions with specific reference. You can find samples of the primary and secondary mark-up in the "Corpus" tab.

To investigate how participants interact with one another and how the different channels of communication correlate, it is practical to work with the unified multichannel annotation created in the ELAN software. ELAN allows simultaneous tracking of the vocal, oculomotor, cephalic, and manual events of the three main participants of a session (for more details, see the description of the tiers of the multichannel annotation). Below you will find an example of such an annotation, created for a dialogue fragment taken from session #22; first, a short sketch shows how the resulting .eaf file can be read programmatically. To facilitate the process, technical instructions are provided here.
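
Since ELAN stores its annotations in the XML-based .eaf format, the tiers of the unified annotation can also be inspected outside ELAN. The following is a minimal Python sketch, not the project's own tooling: it assumes a standard .eaf file and reads only time-aligned (ALIGNABLE_ANNOTATION) tiers, ignoring symbolically associated reference tiers.

import xml.etree.ElementTree as ET

def read_eaf_tiers(eaf_path):
    """Collect the time-aligned annotations of every tier in an .eaf file."""
    root = ET.parse(eaf_path).getroot()
    # The TIME_ORDER section maps time-slot IDs to times in milliseconds;
    # unaligned slots (without a TIME_VALUE) default to 0 in this sketch.
    slots = {
        ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE", "0"))
        for ts in root.iterfind("TIME_ORDER/TIME_SLOT")
    }
    tiers = {}
    for tier in root.iterfind("TIER"):
        events = []
        for ann in tier.iterfind("ANNOTATION/ALIGNABLE_ANNOTATION"):
            start = slots[ann.get("TIME_SLOT_REF1")]
            end = slots[ann.get("TIME_SLOT_REF2")]
            value = ann.findtext("ANNOTATION_VALUE", default="")
            events.append((start, end, value))
        tiers[tier.get("TIER_ID")] = events
    return tiers

# Example: list the first events of each tier of the sample annotation below.
for tier_id, events in read_eaf_tiers("Pears22-mult-fragment.eaf").items():
    print(tier_id, events[:3])

The pympi-ling library provides a ready-made Eaf class for the same task; the standard-library version above merely keeps the sketch self-contained.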

AN EXAMPLE OF THE MULTICHANNEL ANNOTATION
Audio:
Pears22N-au-fragment.wav (7.3 MB)
Pears22R-au-fragment.wav (7.3 MB)
Pears22C-au-fragment.wav (7.3 MB)

Video:
Pears22N-vi-fragment.avi (516.9 MB)
Pears22R-vi-fragment.avi (838.9 MB)
Pears22C-vi-fragment.avi (444.8 MB)
Pears22W-vi-fragment.avi (200.6 MB)

Eye tracker files:
Pears22N-ey-fragment.avi (69.9 MB)
Pears22R-ey-fragment.avi (86.7 MB)

Annotation:
Pears22-mult-fragment.eaf (2.5 MB)
Pears22-mult-fragment.pfsx (92 KB)
Download all files as one archive (2.1 GB)