Recording observations

The script data_review/review_behavioural_data.m provides a GUI for manually scoring behavioural responses from the per-condition videos generated by generate_circ_stim_ufmf.

Workflow

  1. Generate per-condition videos using generate_circ_stim_ufmf (see Visualisation).
  2. Navigate to the folder containing the condition videos.
  3. Run review_behavioural_data.m from within that folder.
  4. The script opens each video in turn and presents a GUI with dropdown scoring fields and a free-text observation box.
  5. Scores and observations are submitted to a Google Form, which stores the results in a linked Google Sheet.

The script accepts an optional filter argument to review only videos of a specific stimulus type (e.g. "gratings", "flicker", "phototaxis", "reversephi", "static", "offset") rather than all videos in the folder.
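The filtering step amounts to matching the stimulus-type keyword against the video filenames before the GUI opens each one. A minimal sketch of that selection logic, written here in Python for illustration (the actual script is MATLAB, and the filenames and extension are assumptions):

```python
from pathlib import Path

def select_videos(folder, stim_filter=None):
    """Return per-condition videos in folder, optionally restricted to one
    stimulus type by substring match on the filename (illustrative logic;
    the real script's matching rule may differ)."""
    videos = sorted(Path(folder).glob("*.mp4"))  # assumed video extension
    if stim_filter is None:
        return videos
    return [v for v in videos if stim_filter in v.name]
```

With no filter every video in the folder is returned in sorted order, mirroring the default call review_behavioural_data() shown under Usage.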

Observation fields

Each condition is scored on the following dimensions using a dropdown (0–10 scale):

Centring: Degree to which flies aggregate near the arena centre (0 = no centring, 10 = tightly clustered at centre)
Turning: Strength of directional turning response to the stimulus
Rebound: Presence and strength of rebound behaviour after stimulus offset
Distribution: Spatial distribution of flies during the stimulus
Locomotion: General locomotion level during the stimulus
Diversity: Variability of behaviour across individual flies

A free-text field is also available for recording any additional observations.

Metadata

The script automatically extracts metadata from the video filename (date, time, condition number, condition name, repetition) and includes it in the Google Form submission alongside the scores.
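The extraction described above can be sketched as a single pattern match over the filename. The naming scheme below is a hypothetical stand-in (the real pattern comes from generate_circ_stim_ufmf and may differ), shown in Python for illustration:

```python
import re

# Hypothetical filename layout: <date>_<time>_cond<NN>_<name>_rep<N>.mp4
# The real naming scheme is produced by generate_circ_stim_ufmf.
PATTERN = re.compile(
    r"(?P<date>\d{8})_(?P<time>\d{6})_cond(?P<cond_num>\d+)_"
    r"(?P<cond_name>[A-Za-z]+)_rep(?P<rep>\d+)"
)

def parse_video_name(filename):
    """Extract date, time, condition number, condition name and repetition
    from a video filename; returns None if the name does not match."""
    m = PATTERN.search(filename)
    return m.groupdict() if m else None
```

The returned dictionary can then be merged with the dropdown scores to form one submission record per video.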

See the logging system page for how to set up a Google Form for automatic data logging.
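Programmatic Google Form submission generally works by POSTing URL-encoded entry.<id> fields to the form's formResponse endpoint. The sketch below only assembles such a POST body; the form URL, entry IDs, and field mapping are placeholders you would replace with the values from your own form (see the logging system page):

```python
from urllib.parse import urlencode

# Placeholder endpoint -- copy the real formResponse URL and entry IDs
# from your form's pre-filled link; they will differ.
FORM_URL = "https://docs.google.com/forms/d/e/FORM_ID/formResponse"

def build_submission(scores, observations, metadata):
    """Assemble the URL-encoded POST body for a Google Form submission.
    The entry.<id> keys are hypothetical examples."""
    fields = {
        "entry.1000001": metadata["cond_name"],
        "entry.1000002": str(scores["centring"]),
        "entry.1000003": str(scores["turning"]),
        "entry.1000004": observations,
    }
    return urlencode(fields).encode("ascii")
```

Sending the body (e.g. with urllib.request.urlopen in Python, or webwrite in MATLAB) records one row in the linked Google Sheet per scored video.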

Usage

% Review all condition videos in the current folder
review_behavioural_data()

% Review only grating conditions
review_behavioural_data("gratings")

% Review only reverse-phi conditions
review_behavioural_data("reversephi")

Available filter options: "gratings", "flicker", "phototaxis", "reversephi", "static", "offset".

When to use this tool

The manual scoring GUI is most useful for:

  • Initial quality control — quickly reviewing whether experiments produced expected behavioural responses
  • Qualitative assessment — recording subjective observations about centring, turning, and rebound behaviours that may not be fully captured by the automated metrics
  • Identifying outliers — flagging experiments with unusual behaviour for further investigation

The scored data complements the quantitative analysis pipeline and can be correlated with the automated metrics to validate observations.