
Behavioural Data

Rich behavioural data is captured from participants during the Experience, organized into four sections: eye data, face data, hand data, and body data. Each section generates its own CSV file for easy analysis: ParticipantEyesData.csv, ParticipantFaceData.csv, and ParticipantBodyData.csv. For hand tracking, the data is split into two files: ParticipantRightHandData.csv and ParticipantLeftHandData.csv.
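As a sketch of how one of these files could be read for analysis, the snippet below parses a sample in the shape of ParticipantRightHandData.csv. The column names used here are hypothetical; check the headers in your actual export.

```python
import csv
import io

# Hypothetical sample mimicking an exported file such as
# ParticipantRightHandData.csv. Real column names may differ.
sample = io.StringIO(
    "Timestamp,PalmPosX,PalmPosY,PalmPosZ\n"
    "0.00,0.43,1.10,0.25\n"
    "0.02,0.44,1.09,0.26\n"
)

# Read the rows into dictionaries keyed by column name.
rows = list(csv.DictReader(sample))

for row in rows:
    # Positions are in meters, relative to the environment's origin.
    x = float(row["PalmPosX"])
    print(f"t={row['Timestamp']}s: palm is {x * 100:.0f} cm from center on X")
```

The same pattern applies to the eye, face, and body files; only the filename and the column names change.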
Understanding Positional Data

In the virtual environment, one unit equals one meter in real life. All positions are measured relative to the origin point (center) of the environment, not the participant's center.

For example, if a participant's right palm is at X-axis position 0.43, the palm is 43 centimeters from the environment's center, not the participant's center.

Aligning the Environment
At the start of the Experience, the Participant appears at the position set in its Transform. To ensure positional data is accurate, it is recommended to set the environment's GameObject position to the origin (0,0,0). This simplifies the alignment of items within the environment. For example, if a GameObject is positioned at (2,0,0), it is 2 meters from the center of the environment, making the data easy to interpret.
