API

Public

OndaVision.BrainVisionMetadataType
BrainVisionMetadata

BrainVision-specific recording and channel metadata that has no counterpart in the Onda signal or annotation schemas.

Fields are grouped into four categories:

Per-channel supplementary (parallel to the full [Channel Infos] list, VHDR order, 1-based):

  • channel_names::Vector{String}: original-case channel names from VHDR.
  • channel_references::Vector{String}: reference-channel field from [Channel Infos]; "" when absent.
  • coordinates::NamedTuple: column table (; channel, radius, theta, phi) from the [Coordinates] section. Zero-length vectors when the section is absent; use isempty(metadata.coordinates.channel) to check.

Recording conditions (parsed from the [Comment] free-text block):

  • amplifier_info::Dict{String,String}: recording-level key-value pairs from the "Amplifier Setup" sub-section (e.g. "Sampling Rate [Hz]"); empty when absent.
  • amplifier_channels::NamedTuple: per-channel hardware configuration table with columns number, name, phys_chn, resolution, low_cutoff, high_cutoff, notch; zero-length rows when absent.
  • software_filters::NamedTuple: per-channel software filter table; columns are number, low_cutoff, high_cutoff, notch (optionally with name inserted after number when amplifier channel names are available); zero-length rows when absent or disabled.
  • impedances::Dict{String,Union{Float64,Missing}}: per-channel/electrode impedance measurements in kOhm; missing for unknown values (??? in file); empty when absent.

Free-form / generic metadata:

  • comment::String: raw [Comment] section text; "" when absent.
  • user_infos::Dict{String,String}: [User Infos] key-value pairs (BrainVision v2.0+); empty otherwise.
  • channel_user_infos::Dict{String,String}: [Channel User Infos] key-value pairs (BrainVision v2.0+); empty otherwise.

Marker supplement:

  • marker_dates::Vector{Union{String,Missing}}: the date column from the VMRK [Marker Infos] table, in the same order as the annotations table returned by read_brainvision_onda. Empty when no marker file is present. Dates are strings of the form YYYYMMDDhhmmssμμμμμμ; missing when the field is absent for a given marker.
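The "zero-length vectors when absent" convention can be sketched as follows; `coords` is a stand-in value, not a real `BrainVisionMetadata`, and the column element types are illustrative:

```julia
# Sketch: `coordinates` is a NamedTuple column table, and an absent
# [Coordinates] section yields zero-length columns rather than `nothing`.
coords = (; channel=Int[], radius=Float64[], theta=Float64[], phi=Float64[])

has_coordinates(c) = !isempty(c.channel)

has_coordinates(coords)  # false: the section was absent
```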
source
OndaVision.ChannelSubsetLPCMFormatType
ChannelSubsetLPCMFormat{F} <: Onda.AbstractLPCMFormat

An AbstractLPCMFormat wrapper that deserializes a full multi-channel LPCM binary file but returns only a subset of channels.

This is used when a BrainVision file contains channels with different units or resolutions, requiring multiple Onda SignalV2 records that each point to the same .eeg data file but cover different channel subsets.

The file_format string encodes the base format, total channel count, and 1-based channel indices so that Onda.load can reconstruct the format automatically. For example:

"lpcm.subset.32.1,2,3"                # MULTIPLEXED, 32 total channels, indices 1-3
"lpcm.vectorized.subset.32.27,28"     # VECTORIZED, 32 total channels, indices 27-28
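A hypothetical decoder for this string layout (the real parsing lives inside OndaVision; this only illustrates the documented encoding):

```julia
# Split a subset format string into (base format, total channels, kept indices).
function decode_subset_format(s::AbstractString)
    parts = split(s, '.')
    i = findfirst(==("subset"), parts)
    i === nothing && error("not a subset format string: $s")
    base    = join(parts[1:i-1], '.')             # e.g. "lpcm" or "lpcm.vectorized"
    total   = parse(Int, parts[i+1])              # total channel count in the file
    indices = parse.(Int, split(parts[i+2], ',')) # 1-based channel indices kept
    return (; base, total, indices)
end

decode_subset_format("lpcm.subset.32.1,2,3")
# (base = "lpcm", total = 32, indices = [1, 2, 3])
```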

See also: VectorizedLPCMFormat, LPCMFormat

source
OndaVision.VectorizedLPCMFormatType
VectorizedLPCMFormat{S} <: Onda.AbstractLPCMFormat

An AbstractLPCMFormat for non-interleaved (vectorized) LPCM binary data, where all samples for channel 1 are stored contiguously, followed by all samples for channel 2, and so on.

This is the layout used by BrainVision's DataOrientation=VECTORIZED format, as opposed to Onda's default interleaved (multiplexed) layout.

The total number of time points is inferred dynamically from the byte count during deserialization, so this format can be constructed from just a SamplesInfoV2 (or anything with channels and sample_type fields).
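The two layouts can be illustrated with a toy recording (this is a layout sketch, not the package implementation):

```julia
# The same 2-channel, 3-sample Int16 recording stored both ways.
samples = Int16[1 2 3; 10 20 30]          # channels × time

multiplexed = vec(samples)                 # interleaved: 1, 10, 2, 20, 3, 30
vectorized  = vec(permutedims(samples))    # contiguous per channel: 1, 2, 3, 10, 20, 30

# Vectorized data deserializes by inferring the time-point count from the length:
n_channels = 2
n_samples  = length(vectorized) ÷ n_channels
back = permutedims(reshape(vectorized, n_samples, n_channels))
back == samples  # true
```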

See also: LPCMFormat

source
OndaVision.brainvision_annotationsMethod
brainvision_annotations(vmrk, sample_rate; recording, channel_names)
brainvision_annotations(vmrk_filename, sample_rate; recording, codepage, channel_names)
brainvision_annotations(vhdr_filename; recording, codepage, channel_names)

Convert BrainVision marker data to an Onda-compatible annotation table.

The lowest-level method accepts a pre-parsed VMRK dictionary (as returned by read_vmrk) and an explicit sample_rate in Hz. The file-based method reads the VMRK file from disk. The convenience method reads the VHDR file, automatically locates the VMRK file it references, and derives sample_rate from the SamplingInterval key.

Return value

A NamedTuple column table that complies with the onda.annotation@1 Legolas schema (i.e. passes Onda.validate_annotations). Columns:

  • recording::Vector{UUID}: the supplied recording UUID, repeated for every row
  • id::Vector{UUID}: a fresh random UUID per annotation
  • span::Vector{TimeSpan}: half-open [start, stop) time interval in nanoseconds, derived from the marker's 1-based sample position and duration in samples. Instantaneous markers (points == 0) are given a one-sample span.
  • marker_type::Vector{String}: raw BrainVision type string (e.g. "Stimulus", "Response", "New Segment")
  • description::Vector{String}: raw description field (e.g. "S255", "")
  • channel: Vector{Int} when channel_names === nothing (0 means all channels); Vector{Union{String,Missing}} otherwise (missing for channel 0)

Keyword arguments

  • recording: a UUID for the recording (default: random).
  • codepage: character encoding forwarded to read_vhdr / read_vmrk.
  • channel_names: controls the type of the channel output column.
    • nothing (default for the low-level methods): keep raw integer channel numbers.
    • true (default for the VHDR convenience method): resolve channel numbers to lowercase names using the [Channel Infos] from the VHDR file.
    • An AbstractVector{String} or AbstractDict{Int,String}: explicit 1-based mapping from channel index to name.
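The span arithmetic described above can be sketched as below; `marker_span_ns` is a hypothetical helper (the package builds `TimeSpan` values, and its exact rounding may differ):

```julia
# A marker at 1-based sample `position` lasting `points` samples becomes a
# half-open nanosecond interval at `sample_rate` Hz; points == 0 (an
# instantaneous marker) is treated as a one-sample span.
function marker_span_ns(position::Int, points::Int, sample_rate::Float64)
    len   = max(points, 1)                                   # instantaneous → 1 sample
    start = round(Int, (position - 1) * 1e9 / sample_rate)
    stop  = round(Int, (position - 1 + len) * 1e9 / sample_rate)
    return (start, stop)
end

marker_span_ns(1, 0, 500.0)  # (0, 2_000_000): one sample at 500 Hz
```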
source
OndaVision.brainvision_to_signalMethod
brainvision_to_signal(vhdr_filename; codepage=nothing, recording=uuid4(),
                      sensor_type="eeg", sensor_label=sensor_type)

Read a BrainVision VHDR header file and return a Vector{SignalV2} pointing to the associated EEG binary data file.

When all channels share the same unit and resolution, a single SignalV2 is returned with a standard "lpcm" or "lpcm.vectorized" file format.

When channels differ in unit or resolution, they are grouped by (unit, resolution) and one SignalV2 is returned per group. Each group uses a ChannelSubsetLPCMFormat-backed file format string that encodes the total channel count and 1-based channel indices so that Onda.load can read the correct channels from the shared binary file.

Keyword arguments

  • codepage: character encoding passed through to read_vhdr. Accepted values are "UTF-8" and "Latin-1".
  • recording: a UUID identifying the recording (default: random).
  • sensor_type: Onda sensor type string (default: "eeg").
  • sensor_label: Onda sensor label string (default: same as sensor_type). When multiple groups are produced, "_$(unit)" is appended to distinguish them.
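The grouping rule can be sketched with stand-in data (illustrative values; the real function reads units and resolutions from the VHDR):

```julia
# Channels sharing (unit, resolution) end up in the same SignalV2 group.
units       = ["microvolt", "microvolt", "microsiemens"]
resolutions = [0.1, 0.1, 1.0]

groups = Dict{Tuple{String,Float64},Vector{Int}}()
for (i, key) in enumerate(zip(units, resolutions))
    push!(get!(groups, key, Int[]), i)   # collect 1-based channel indices per group
end

groups  # ("microvolt", 0.1) => [1, 2]; ("microsiemens", 1.0) => [3]
```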
source
OndaVision.get_segmentsMethod
get_segments(vmrk::Dict{String,Any}) -> NamedTuple

Extract the "New Segment" markers from the return value of read_vmrk.

Each "New Segment" marker records the start of a new continuous recording block. The date field, when present, contains the recording timestamp in the format YYYYMMDDhhmmssμμμμμμ (year, month, day, hour, minute, second, microsecond).

Returns a Tables.jl-compatible NamedTuple column table with the same columns as "Marker Infos" (type, description, position, points, channel, date), containing only the rows where type == "New Segment".
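The row filtering can be sketched on a stand-in column table (`get_segments` does this internally on the real "Marker Infos" table):

```julia
markers = (type=["New Segment", "Stimulus", "New Segment"],
           description=["", "S255", ""],
           position=[1, 100, 5001],
           points=[0, 0, 0],
           channel=[0, 0, 0],
           date=Union{String,Missing}["20240101120000000000", missing, missing])

keep = markers.type .== "New Segment"
segments = map(col -> col[keep], markers)   # NamedTuple of filtered columns

segments.position  # [1, 5001]
```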

source
OndaVision.parse_amplifier_setupMethod
parse_amplifier_setup(comment::String) -> (info, channels) or nothing

Parse the "Amplifier Setup" sub-section from a VHDR [Comment] string.

Returns nothing if no amplifier setup section is found in comment.

Otherwise returns a 2-tuple (info, channels) where:

  • info is a Dict{String,String} containing the three header key-value pairs, typically "Number of channels", "Sampling Rate [Hz]", and "Sampling Interval [µS]".

  • channels is a Tables.jl-compatible NamedTuple column table whose columns are Vector{String} with names number, name, phys_chn, resolution, low_cutoff, high_cutoff, notch. Each row corresponds to one channel in the amplifier channel table.

source
OndaVision.parse_impedancesMethod
parse_impedances(comment::String) -> Dict{String, Union{Float64, Missing}} or nothing

Parse the impedance table from a VHDR [Comment] string.

Returns nothing if no Impedance [kOhm] at ... header line is found.

Otherwise returns a Dict{String, Union{Float64, Missing}} mapping each channel name to its measured impedance in kOhm. Unknown impedances (recorded as ??? in the file) are represented as missing.

Channel names may contain spaces (e.g. "CP 6", "F3 3 part") or special characters such as + and -. The section may be preceded by optional prose lines (e.g. "Impedances Imported from actiCAP Control Software:") which are ignored.
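The value rule alone can be sketched as follows (a hypothetical helper; the real parser also handles the header line and channel names with spaces):

```julia
# "???" in the file means the impedance is unknown and becomes `missing`;
# any other value parses as a Float64 in kOhm.
parse_impedance(s::AbstractString) = s == "???" ? missing : parse(Float64, s)

parse_impedance("12.5")  # 12.5
parse_impedance("???")   # missing
```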

source
OndaVision.parse_software_filtersMethod
parse_software_filters(comment::String) -> NamedTuple or nothing

Parse the "Software Filters" sub-section from a VHDR [Comment] string.

Returns nothing if the section is absent or marked as "Disabled".

Otherwise returns a Tables.jl-compatible NamedTuple column table. When an "Amplifier Setup" section is also present in comment and its channel count matches, a name column (channel names from the amplifier table) is inserted after number, giving 5 columns total:

number  name  low_cutoff  high_cutoff  notch

Without matching amplifier data the table has 4 columns:

number  low_cutoff  high_cutoff  notch

Each column is a Vector{String} with one entry per channel row.

source
OndaVision.read_brainvisionMethod
read_brainvision(filename; codepage=nothing)

Read a BrainVision recording from a VHDR header file, returning the sample data as a numeric array after applying per-channel resolution scaling.

filename must be a path to a .vhdr header file. The EEG data file and (optionally) the marker file are resolved relative to the directory containing the VHDR file.

Keyword arguments

  • codepage: character encoding passed through to read_vhdr and read_vmrk. Accepted values are "UTF-8" and "Latin-1". When nothing (the default), the encoding is auto-detected from each file.

Return value

The return type depends on the number and lengths of "New Segment" markers found in the marker file:

  • No marker file, or a single "New Segment" marker: returns a Matrix{Float64} of shape (n_channels, n_samples).
  • Multiple "New Segment" markers with equal segment lengths: returns an Array{Float64, 3} of shape (n_channels, segment_length, n_segments).
  • Multiple "New Segment" markers with unequal segment lengths: returns a Vector{Matrix{Float64}}, one matrix per segment, each of shape (n_channels, segment_samples).

All values are in the physical unit specified by the channel info (typically µV), with the per-channel resolution factor from the VHDR [Channel Infos] section applied.

Supported formats

  • DataFormat: BINARY
  • DataOrientation: MULTIPLEXED or VECTORIZED
  • BinaryFormat: INT_16 or IEEE_FLOAT_32
  • DataType: TIMEDOMAIN (default when key is absent)
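The resolution scaling can be sketched with toy values (illustrative; the real function reads the factors from the VHDR [Channel Infos] section):

```julia
# Per-channel resolution scaling: each row (channel) of the raw stored
# samples is multiplied by that channel's resolution factor.
raw = Int16[100 200; 1 2]              # channels × samples, stored values
resolutions = [0.1, 1.0]               # per-channel factors

scaled = Float64.(raw) .* resolutions  # vector broadcasts along rows

scaled  # [10.0 20.0; 1.0 2.0]
```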
source
OndaVision.read_brainvision_ondaMethod
read_brainvision_onda(vhdr_filename; codepage=nothing, recording=uuid4(),
                      sensor_type="eeg", sensor_label=sensor_type)

Read a BrainVision recording from a VHDR header file and return a named tuple (; signals, annotations, metadata) containing:

  • signals::Vector{SignalV2}: Onda signal descriptors (see brainvision_to_signal).
  • annotations::NamedTuple: Onda-compliant annotation table (see brainvision_annotations); passes Onda.validate_annotations. Empty (zero-row) when no marker file is found.
  • metadata::BrainVisionMetadata: all BrainVision-specific information that has no counterpart in the Onda schemas, including electrode coordinates, hardware/software filter settings, impedances, and recording comments.

The VHDR file is read once; the VMRK is read at most once. All keyword arguments are forwarded as documented below.

Keyword arguments

  • codepage: character encoding passed to read_vhdr/read_vmrk. Accepted values are "UTF-8" and "Latin-1".
  • recording: a UUID identifying the recording (default: random). The same UUID is embedded in every SignalV2 and in every annotation row.
  • sensor_type: Onda sensor type string (default: "eeg").
  • sensor_label: Onda sensor label string (default: same as sensor_type).
source
OndaVision.read_vhdrMethod
read_vhdr(filename; codepage=nothing)
read_vhdr(io::IO; codepage=nothing)

Returns a nested dictionary of configuration entries from a BrainVision VHDR file.

filename may be any entity with an appropriate open method.

The BrainVision VHDR format is similar to the Windows INI configuration format, but has a few additional extensions.

Keyword arguments

  • codepage: the character encoding to use when decoding the file. Accepted values are "UTF-8" and "Latin-1". When nothing (the default), the encoding is determined automatically from the Codepage key in the file, falling back to "Latin-1" if the key is absent.

Return value

Returns a Dict{String, Any} with the following structure:

  • "identification": the identification line string (e.g. "BrainVision Data Exchange Header File Version 1.0")
  • Each section name maps to a Dict{String, String} of key-value pairs, except the "Comment" section which maps to a raw String of arbitrary text.

Format notes

  • Lines beginning with ; are comments and are ignored (except inside [Comment]).
  • The [Comment] section contains arbitrary free-form text.
  • Files without a Codepage key are assumed to be Latin-1 encoded.
source
OndaVision.read_vmrkMethod
read_vmrk(filename)
read_vmrk(io::IO)

Returns a dictionary of entries from a BrainVision VMRK marker file.

filename may be any entity with an appropriate open method.

The BrainVision VMRK format is similar to the Windows INI configuration format. The [Marker Infos] section is parsed into a Tables.jl-compatible NamedTuple column table with columns type, description, position, points, channel, and date.

Return value

Returns a Dict{String, Any} with the following structure:

  • "identification": the identification line string
  • "Common Infos": a Dict{String, String} of key-value pairs
  • "Marker Infos": a NamedTuple column table (see below)
  • Any additional sections map to Dict{String, String}

The "Marker Infos" table has columns:

  • type::Vector{String}: marker type (e.g. "Stimulus", "Response")
  • description::Vector{String}: marker description (may be empty)
  • position::Vector{Int}: 1-based sample position
  • points::Vector{Int}: duration in samples (0 = instantaneous)
  • channel::Vector{Int}: channel number (0 = applies to all channels)
  • date::Vector{Union{String,Missing}}: optional recording timestamp; missing when absent

Format notes

  • Lines beginning with ; are comments and are ignored.
  • Files without a Codepage key are assumed to be Latin-1 encoded.
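One `Mk<N>=` value can be sketched as below; `parse_marker` is a hypothetical helper (the package's parser additionally validates Mk numbering and integer fields, and this sketch does not handle commas inside descriptions):

```julia
# Split one marker value of the form <type>,<description>,<position>,<points>,<channel>[,<date>].
function parse_marker(value::AbstractString)
    fields = split(value, ',')
    return (type=String(fields[1]),
            description=String(fields[2]),
            position=parse(Int, fields[3]),
            points=parse(Int, fields[4]),
            channel=parse(Int, fields[5]),
            date=length(fields) >= 6 ? String(fields[6]) : missing)
end

parse_marker("Stimulus,S255,1000,1,0")
# (type = "Stimulus", description = "S255", position = 1000, points = 1, channel = 0, date = missing)
```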
source
OndaVision.write_brainvisionMethod
write_brainvision(base_path, signals; annotations=nothing, metadata=nothing)

Write Onda signals and optional annotations to a BrainVision recording.

Arguments

  • base_path::AbstractString: path prefix (e.g. "/tmp/subj01" or "/tmp/subj01.vhdr"). The extension is stripped if present. Three files are written: <base>.vhdr, <base>.eeg, and optionally <base>.vmrk.
  • signals::AbstractVector{SignalV2}: one or more signal descriptors. All signals must share the same sample_rate and sample_type.
  • annotations::Union{NamedTuple,Nothing}: optional Onda annotation table as returned by brainvision_annotations. When provided, a VMRK file is written. Default: nothing (no markers).
  • metadata::Union{BrainVisionMetadata,Nothing}: optional metadata struct as returned by read_brainvision_onda. Used to recover original-case channel names, references, coordinates, and comment blocks. Default: nothing.

Returns

The path to the written .vhdr file as a String.

Validation

  • All signals must have the same sample_rate and sample_type.
  • All signals must describe the same time span.
source

Internal

OndaVision._BRAINVISION_UNIT_MAPConstant
_BRAINVISION_UNIT_MAP

Map of BrainVision unit strings to Onda-compatible lowercase snake_case unit names. Handles both U+00B5 (MICRO SIGN) and U+03BC (GREEK SMALL LETTER MU).

source
OndaVision._IDENTIFICATION_REConstant
_IDENTIFICATION_RE

Regular expression that matches the mandatory first line of a VHDR file. Accepts both the canonical spelling "BrainVision" and the legacy form "Brain Vision".

source
OndaVision._ONDA_TO_BV_UNIT_MAPConstant
_ONDA_TO_BV_UNIT_MAP

Map of Onda-compatible lowercase snake_case unit names to BrainVision unit strings. This reverses the mapping in `_BRAINVISION_UNIT_MAP` from signal.jl.

source
OndaVision._SUPPORTED_CODEPAGESConstant
_SUPPORTED_CODEPAGES

Character encodings accepted by the codepage keyword argument of read_vhdr and read_vmrk. Currently ("UTF-8", "Latin-1").

source
OndaVision._VMRK_IDENTIFICATION_REConstant
_VMRK_IDENTIFICATION_RE

Regular expression that matches the mandatory first line of a VMRK file. Accepts the canonical spelling "BrainVision", the legacy form "Brain Vision", and variants with or without a comma before "Version".

source
OndaVision._bv_unit_from_ondaMethod
_bv_unit_from_onda(unit::String) -> String

Convert an Onda unit string to a BrainVision unit string. Known units are mapped; unknown units are returned as-is.

source
OndaVision._check_vhdr_vmrk_consistencyMethod
_check_vhdr_vmrk_consistency(vhdr_ci, vmrk_ci)

Cross-check shared fields between the VHDR and VMRK [Common Infos] dictionaries. Emits a @warn for each inconsistency (DataFile mismatch, Codepage mismatch); VHDR values take precedence in all cases.

source
OndaVision._detect_codepageMethod
_detect_codepage(bytes) -> String

Scan raw file bytes for a Codepage= key (all-ASCII, safe for any encoding) and return its value. Falls back to "Latin-1" when the key is absent.

source
OndaVision._format_resolutionMethod
_format_resolution(r::Float64) -> String

Format resolution for VHDR [Channel Infos] section. Drops trailing .0 for integer values.

source
OndaVision._julia_sample_typeMethod
_julia_sample_type(sample_type::AbstractString) -> DataType

Convert an Onda sample_type string to the corresponding Julia numeric type.

source
OndaVision._latin1_to_utf8Method
_latin1_to_utf8(bytes) -> String

Convert a Latin-1 (ISO-8859-1) encoded byte vector to a UTF-8 String.

Each byte value in 0x80..0xFF is mapped to the corresponding Unicode code point U+0080..U+00FF (the Latin-1 Supplement block) and encoded as two UTF-8 bytes. Bytes below 0x80 are passed through unchanged.
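A straightforward rendition of the documented conversion (equivalent in effect, though not necessarily the package's implementation):

```julia
# Latin-1 byte values equal their Unicode code points, so each byte maps
# directly to a Char; printing to an IOBuffer produces the UTF-8 encoding.
function latin1_to_utf8(bytes::AbstractVector{UInt8})
    io = IOBuffer()
    for b in bytes
        print(io, Char(b))
    end
    return String(take!(io))
end

latin1_to_utf8(UInt8[0x6d, 0xb5, 0x56])  # "mµV" (0xB5 is MICRO SIGN)
```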

source
OndaVision._make_3dMethod
_make_3d(data, starts, ends, seg_len, n_segs) -> Array{Float64,3}

Build an (n_channels × seg_len × n_segs) 3-D array by copying contiguous column slices from data. All segment lengths must be equal (the caller is responsible for verifying this before calling).

source
OndaVision._normalize_bv_unitMethod
_normalize_bv_unit(unit::AbstractString) -> String

Convert a BrainVision unit string to an Onda-compatible lowercase snake_case alphanumeric unit name.

Known units (µV, µS, nV, mV, V, S, C, etc.) are mapped to their full names. Unknown units are lowercased with non-alphanumeric characters replaced by _, and a warning is emitted.
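The fallback rule for unknown units can be sketched as below (a hypothetical helper covering only the documented lowercase-and-replace step; the warning and the known-unit table are omitted):

```julia
# Lowercase the unit, then replace every non-alphanumeric character with '_'.
fallback_unit(unit::AbstractString) = replace(lowercase(unit), r"[^a-z0-9]" => "_")

fallback_unit("mm/s")  # "mm_s"
```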

source
OndaVision._parse_channel_infoMethod
_parse_channel_info(ch_info::Dict{String,String}, n_channels::Int)

Parse channel names, resolutions, and units from the [Channel Infos] section. Returns (names::Vector{String}, resolutions::Vector{Float64}, units::Vector{String}).

source
OndaVision._parse_channel_referencesMethod
_parse_channel_references(ch_info::Dict{String,String}, n_channels::Int) -> Vector{String}

Extract the reference-channel field (field 2) from each Ch<N> entry in the [Channel Infos] dictionary. Returns an empty string for any channel whose reference field is absent or blank.

source
OndaVision._parse_coordinatesMethod
_parse_coordinates(coords, ch_info, n_channels) -> NamedTuple

Parse the [Coordinates] section into a column table with fields channel, radius, theta, phi. coords is the raw Dict{String,String} stored under "Coordinates" in the VHDR dict, or nothing when the section is absent.

Returns a NamedTuple with zero-length vectors when the section is absent or empty, so callers can use isempty(result.channel) to detect absence without a nothing check.

source
OndaVision._parse_marker_infosMethod
_parse_marker_infos(entries) -> NamedTuple

Parse the raw Dict{String,String} of [Marker Infos] key-value pairs into a typed NamedTuple column table with columns matching _MARKER_COLS.

Each entry has the form Mk<N>=<type>,<description>,<position>,<points>,<channel>[,<date>]. Validates that keys form the consecutive sequence Mk1…MkN and that all integer fields are valid; the optional date field becomes missing when absent.

source
OndaVision._parse_resolutionsMethod
_parse_resolutions(ch_info, n_channels) -> Vector{Float64}

Parse the per-channel resolution factor from each Ch<N> entry in the [Channel Infos] dictionary.

Each entry has the form <name>,<ref>,<resolution>[,<unit>][,...]. A missing or empty resolution field defaults to 1.0.

source
OndaVision._parse_vhdrMethod
_parse_vhdr(content) -> Dict{String,Any}

Parse a decoded (UTF-8) VHDR string into a nested Dict, validating structural requirements from the BVCDF 1.0 specification along the way.

The identification line is stored under "identification". Each INI section becomes a Dict{String,String} keyed by section name, except [Comment] which is stored as a raw String.

source
OndaVision._parse_vmrkMethod
_parse_vmrk(content) -> Dict{String,Any}

Parse a decoded (UTF-8) VMRK string into a nested Dict, validating structural requirements from the BVCDF 1.0 specification along the way.

The identification line is stored under "identification". [Common Infos] becomes a Dict{String,String}. [Marker Infos] is parsed into a typed NamedTuple column table via _parse_marker_infos. Any additional sections are passed through as Dict{String,String}.

source
OndaVision._resolve_channelsMethod
_resolve_channels(channels, channel_names) -> Vector

Return the channel output column. When channel_names is nothing, return a copy of the raw integer channels vector. Otherwise map each integer to a name (or missing for channel 0, which denotes "all channels").

source
OndaVision._split_segmentsMethod
_split_segments(data, starts, ends, n_segs) -> Vector{Matrix{Float64}}

Return a Vector of 2-D matrices, one per segment, by slicing columns starts[s]:ends[s] from data for each segment index s. Used when segment lengths are unequal and a 3-D array cannot be formed.

source
OndaVision._validate_vhdrMethod
_validate_vhdr(result)

Post-parse structural validation for a parsed VHDR Dict. Checks that all mandatory sections and keys required by the BVCDF 1.0 specification are present, that NumberOfChannels is a positive integer, and that the [Channel Infos] section contains exactly the expected Ch1…ChN entries. Errors on any violation; warns (rather than errors) if Codepage is absent.

source
OndaVision._validate_vmrkMethod
_validate_vmrk(raw_sections)

Post-parse structural validation for a parsed VMRK file. Checks that [Common Infos] and [Marker Infos] sections are present and that all mandatory [Common Infos] keys exist. Warns (rather than errors) if Codepage is absent.

source
OndaVision._write_eegMethod
_write_eeg(io, signals)

Read samples from all signals and write them as a combined MULTIPLEXED binary file.

source
OndaVision._write_vmrkMethod
_write_vmrk(io, base_name, annotations, metadata, sample_rate, channel_names)

Write a BrainVision VMRK marker file.

source