Data Handling

DataHandling.MissingTimeline(data, side, days=None, filtered=None)[source]

Checks for missing points in the timeline (a point is missing when the difference between consecutive points is >= 11 minutes) and returns a new corrected timeline dict, filling missing LFP values through mean interpolation.

new_dic = {
    'X': corrected timeline,
    'Y': corrected LFP,
    'X_missing': missing timeline points,
    'Y_missing': missing LFP values
}

Parameters:
  • data -- dict, from DataHandling.getData()

  • side -- str

Returns:

dict
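
As a minimal sketch of the gap-filling logic described above (hypothetical helper name; a 10-minute sampling interval is assumed, and the real function operates on the dict returned by DataHandling.getData()):

```python
def fill_timeline_gaps(x_minutes, y_lfp, max_gap=11, step=10):
    """Insert mean-interpolated samples wherever consecutive points
    are >= max_gap minutes apart (10-minute sampling assumed)."""
    new_x, new_y = [x_minutes[0]], [y_lfp[0]]
    miss_x, miss_y = [], []
    for x0, x1, y0, y1 in zip(x_minutes, x_minutes[1:], y_lfp, y_lfp[1:]):
        if x1 - x0 >= max_gap:
            n_missing = int(round((x1 - x0) / step)) - 1
            fill = (y0 + y1) / 2.0  # mean interpolation of the two neighbours
            for k in range(1, n_missing + 1):
                t = x0 + k * step
                new_x.append(t)
                new_y.append(fill)
                miss_x.append(t)
                miss_y.append(fill)
        new_x.append(x1)
        new_y.append(y1)
    return {'X': new_x, 'Y': new_y, 'X_missing': miss_x, 'Y_missing': miss_y}
```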

DataHandling.add_utc_conversion_to_dates(data, offset)[source]

Recursively convert all 'Date'-related fields to UTC with offset (time-shifted). Assumes naive datetimes are in the specified local timezone.

Parameters:
  • data -- dict, from DataHandling.getData()

  • offset --

DataHandling.correctMissingTimeline(Data, Timeline, axis='X', days=None, filtered=None)[source]

Corrects timeline recordings for both hemispheres.

Parameters:
  • Data -- dict, from DataHandling.getData()

  • Timeline -- dict

  • axis -- str

Returns:

dict, corrected Timeline dict

DataHandling.extract_time_offset(Data, string=False)[source]

Gets the time offset from the organized Data dictionary.

Parameters:
  • Data -- dict, from DataHandling.getData()

  • string -- boolean, defaults to False. If True, returns a string of the time offset; otherwise returns a timedelta object.

Returns:

timedelta/string

DataHandling.getBatteryInfo(Data)[source]

Returns battery information.

Parameters:

Data -- dict, from DataHandling.getData()

DataHandling.getData(file_path)[source]

Creates an organized dictionary from the information in the JSON file. The dictionary has 7 main keys, organized by recording mode and device information:

Data = {
    'Device': all information relative to device, stimulation, programming, session metadata, battery and electrode status,
    'Survey': recording data from BrainSense Survey (TD and PSD),
    'Setup': recording data from BrainSense Setup (stimulation ON/OFF and PSD artifact checking),
    'Streaming': recording data from BrainSense Streaming (stimulation ON/OFF),
    'Indefinite': recording data from BrainSense Indefinite Streaming,
    'Events': at-home marked events, recorded 30 s PSD,
    'Timeline': at-home LFP recordings,
}

Parameters:

file_path -- str, path to JSON file.

Returns:

dict
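
A minimal sketch of what a getData-style loader does with the seven top-level keys (hypothetical helper names; the real function performs far more parsing per section):

```python
import json

SESSION_KEYS = ['Device', 'Survey', 'Setup', 'Streaming',
                'Indefinite', 'Events', 'Timeline']

def organize_session(raw):
    """Bucket an already-parsed JSON export under the seven keys;
    sections absent from the file come back empty."""
    return {key: raw.get(key, {}) for key in SESSION_KEYS}

def load_session(file_path):
    """Read the JSON export from disk and organize it."""
    with open(file_path) as f:
        return organize_session(json.load(f))
```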

DataHandling.getDeviceInfo(Data)[source]

Returns device information.

Parameters:

Data -- dict, from DataHandling.getData()

DataHandling.getEventLog(Data)[source]

Returns log of marked events.

Parameters:

Data -- dict, from DataHandling.getData()

DataHandling.getEventSummary(Data)[source]

Returns event summary (number of occurrences of each event).

Parameters:

Data -- dict, from DataHandling.getData()

Returns:

dict
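
The occurrence count above amounts to a tally over the event log; a sketch, assuming each event record carries an 'EventName' field (the field name is an assumption):

```python
from collections import Counter

def summarize_events(event_log):
    """Count how many times each marked event name occurs, given a
    list of event dicts with an 'EventName' field (name assumed)."""
    return dict(Counter(e['EventName'] for e in event_log))
```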

DataHandling.getEventsElectrodes(data)[source]

Returns active electrodes at beginning and end, for at-home recordings.

Parameters:

data -- dict, from DataHandling.getData()

Returns:

list, [initial, final]

DataHandling.getImpedance(Data)[source]

Returns monopolar and bipolar impedance test results, per hemisphere.

Parameters:

Data -- dict, from DataHandling.getData()

Returns:

list, [status (str), Left (dict), Right (dict)]

DataHandling.getLastSessionDate(Data)[source]

Returns date of previous session.

Parameters:

Data -- dict, from DataHandling.getData()

DataHandling.getModeSignals(Data, mode)[source]

Returns signals of respective mode, from Data dictionary.

Parameters:
  • Data -- dict, from DataHandling.getData()

  • mode -- str

Returns:

list

DataHandling.getModes(Data)[source]

Gets the recording modes present in the Data dictionary and verifies whether Data is empty or damaged (i.e., whether recordings are available).

Modes:
  • Survey

  • Setup

  • Indefinite (Streaming)

  • Streaming

  • Timeline

  • Events

Parameters:

Data -- dict, from DataHandling.getData()

Returns:

list

DataHandling.getPatientInfo(Data)[source]

Returns patient information.

Parameters:

Data -- dict, from DataHandling.getData().

DataHandling.getRecordingDuration(mode, recording)[source]

Returns recording duration.

Parameters:
  • mode -- str

  • recording -- dict

Returns:

float

DataHandling.getSessionDate(Data)[source]

Returns string of session date.

Parameters:

Data -- dict, from DataHandling.getData()

DataHandling.getSessionDuration(Data)[source]

Gets session duration.

Parameters:

Data -- dict, from DataHandling.getData()

Returns:

int

DataHandling.getSignals(Data, mode)[source]

Returns signals in respective mode, properly handled into a clean dictionary.

dict = {
    'Date': utl.parse_datetime(rec['FirstPacketDateTime']),
    'Channel': rec['Channel'],
    'Y': raw,
    'X': time_array
}

Parameters:
  • Data -- dict, from DataHandling.getData()

  • mode -- str

Returns:

list, list of signal dictionaries.

DataHandling.getStimStatus(Data)[source]

Gets status (ON/OFF) of stimulation, from beginning and end of recording session.

Parameters:

Data -- dict, from DataHandling.getData()

Returns:

tuple, (initial,final)

DataHandling.getStimulation(Data, side='Right')[source]

Returns at-home stimulation amplitude.

Parameters:
  • Data -- dict, from DataHandling.getData()

  • side -- str, default to 'Right'

Returns:

dict

DataHandling.getStreamingStimulation(Data, date)[source]

Returns active stimulation group.

Parameters:
  • Data -- dict, from DataHandling.getData()

  • date -- str

Returns:

str

DataHandling.getStreamingStimulationValues(data)[source]

Returns streaming stimulation amplitude.

Parameters:

data -- dict, from DataHandling.getData()

Returns:

str (status), dict (right amplitude), dict (left amplitude)

DataHandling.info_electrodes(data, version='Initial', side=0)[source]

Returns information on the active electrodes from TIMELINE mode (side 0 - Left; side 1 - Right).

Parameters:
  • data -- dict, from DataHandling.getData()

  • version -- str, default to 'Initial'. Selects which configuration version to access.

  • side -- int, default to 0 ('Left')

Returns:

dict

Features

Features.export_plot(info, plot_type, size=None, line_types=None, dpi=600)[source]

Creates a matplotlib plot and exports it as a PNG image, according to the info prepared in dearpygui.

If plot_type is '2D' or 'CC' (Timeline/normal PSD or cross-correlation):
  • signals: list of dicts

  • colors: list, rgba codes

If plot_type is '3D' or '3DC' (spectrogram or coherogram):
  • signals: list of tuples.

  • colors: str, name of colormap

If plot_type is 'Bar' or 'Events' (phase-amplitude coupling or event summary):
  • signals: list of dicts.

  • colors: list, rgba codes

If plot_type is 'SMP' (Show Mean Power over time):
  • signals: dict, 'Y1': signals plotted on left y axis, 'Y2': signals plotted on right y axis.

  • colors: list, rgba codes

If plot_type is 'Pie' (event summary count):
  • signals: single list of values.

  • colors: list, rgba codes

Parameters:
  • info -- signals (list), labels (list of signal's labels), colors (list), title (str, name of plot), axis (list, names of axis), limits (list, range of axis)

  • plot_type -- str, type of plot: '2D' (Timeline/PSD), 'CC' (cross-correlation), '3D' (spectrogram), '3DC' (coherogram), 'Bar' (phase-amplitude coupling), 'Events' (event summary), 'Pie' (event summary count)

  • size -- tuple, width and height of the original plot. Optional, defaults to None; width and height fall back to matplotlib's default figure size.

  • line_types -- list, type of line plotted in dpg. Optional, default to None.

Features.export_signals(info, filename, tfd=False)[source]

Writes the signals contained in the info dictionary into a "Signals_(unknown).csv" file.

Parameters:
  • info -- dict, keys are columns

  • filename -- str, name of file

  • tfd -- boolean, optional. Defaults to False; if True, exports 3D signals (spectrogram/coherogram).

Features.extract_psd_features(p, band, bands=None)[source]

Extracts features from spectral density, or coherence, methods. Returns a dictionary.

Features:
  • Mean Power (mean, std. dev)

  • % Mean Power (mean, std. dev)

  • Sum Power (AUC)

  • % Sum Power (AUC)

  • Peak Power (power, frequency)

Parameters:
  • p -- dict, computed PSD ('X' = freqs, 'Y' = psd values)

  • band -- list, band[0] = str, band name; band[1] = list, [f0,f1]

Returns:

dict, {feature: value}

Features.extract_timeline_features(p)[source]

Extracts features from timeline recordings, returned in a dictionary.

Features:
  • Mean Power (mean, std. dev)

  • Peak Power

  • Peak Time

  • Sensing Freq. (Hz)

Parameters:

p -- dict, computed PSD ('X' = time array, 'Y' = lfp timeline values)

Returns:

dict, {feature: value}

Features.get_sum_mean_power(p, band=None)[source]

Calculates and returns the sum of the mean power across all bands.

Parameters:
  • p -- dict, computed PSD ('X' = freqs, 'Y' = psd values)

  • band -- dict, 'band name': [f0,f1]

Returns:

float

Features.mean_power_all_bands(stream, bands)[source]

Computes mean power for frequency bands over a stream of events.

Parameters:
  • stream -- Description

  • bands -- Description

Returns:

dict, {band: mean}

Features.mean_power_band(log, band)[source]

Calculates and returns the band-respective mean power.

Parameters:
  • log -- dict, signal information: 'X'/'Frequency' --> x axis; 'Y'/'Power' --> y axis

  • band -- dict, 'band name': [f0,f1]

Returns:

np.array, mean power
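
The core of a mean-power-in-band computation can be sketched as follows (hypothetical helper name; the real function works on the 'X'/'Y' signal dict described above):

```python
import numpy as np

def band_mean_power(freqs, psd, f0, f1):
    """Mean PSD value within the closed frequency band [f0, f1]."""
    freqs, psd = np.asarray(freqs), np.asarray(psd)
    mask = (freqs >= f0) & (freqs <= f1)  # samples inside the band
    return float(psd[mask].mean())
```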

Features.peak_per_band(stream, bands)[source]

Retrieves all peak values and positions, in stream, for each band.

Parameters:
  • stream -- dict, signal information 'X'/'Frequency' --> x axis; 'Y'/'Power' --> y axis

  • bands -- dict, 'band name': [f0,f1]

Features.peak_power(stream)[source]

From stream dictionary, return tuple with peak value and peak position.

Parameters:

stream -- dict, signal information 'X'/'Frequency' --> x axis; 'Y'/'Power' --> y axis

Features.std_dev_streams(streams, bands)[source]

Calculates and returns a dictionary of the standard deviation per band.

Parameters:
  • streams -- list, list of signal dictionaries: 'X'/'Frequency' --> x axis; 'Y'/'Power' --> y axis

  • bands -- dict, 'band name': [f0,f1]

Returns:

dict, 'band name': float standard deviation

Preprocessing

Preprocessing.MeanSignal(signals, key='Y')[source]

Calculates and returns the mean of a specific key in each signal dictionary of the signals list.

Parameters:
  • signals -- list, list of dictionaries of signal data.

  • key -- str/int, defaults to 'Y'; key of the dictionary

Returns:

np.array, computed mean

Preprocessing.StdDevSignal(signals, key=None)[source]

Calculates and returns the standard deviation of a specific key in each signal dictionary of the signals list.

Parameters:
  • signals -- list, list of dictionaries of signal data.

  • key -- str/int

Returns:

np.array

Preprocessing.call_function(func_name, param_dict, module='ss')[source]

Calls a function from a module by name, prints its documentation, and handles exceptions. If the given arguments don't work, the function retries with the module's default arguments.

Modules:
  • 'ss': scipy.signal

  • 'mtf': mne.time_frequency

  • 'si': scipy.integrate

  • 'ssw': scipy.signal.windows

  • 'np': numpy

Parameters:
  • func_name -- str, function name

  • param_dict -- dict, required and optional arguments

  • module -- str, module name

Returns:

output of func(*args,**kwargs)
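
The retry-with-defaults behaviour can be sketched as below (a simplification: `registry` stands in for the module lookup that call_function performs, and only TypeError triggers the fallback here):

```python
def call_by_name(registry, func_name, param_dict):
    """Look a function up by name and call it with param_dict; if the
    supplied arguments do not fit its signature, retry with the
    function's own defaults."""
    func = registry[func_name]
    try:
        return func(**param_dict)
    except TypeError:
        return func()  # fall back to default arguments only
```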

Preprocessing.call_function2(func_name, *args, **kwargs)[source]

Calls a function from scipy.signal by name, prints its documentation, and handles exceptions. If the given arguments don't work, the function retries with the module's default arguments.

Parameters:
  • func_name -- str, function name

  • args -- tuple, required positional arguments

  • kwargs -- dict, optional keyword arguments

Returns:

output of func(*args,**kwargs)

Preprocessing.convert_from_db(value)[source]

Convert a dB value back to linear scale using 10^(dB/10).

Parameters:

value -- int/float

Returns:

int/float

Preprocessing.convert_to_db(value)[source]

Convert a value to dB scale using 10 * log10.

Parameters:

value -- int/float

Returns:

int/float
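
The two conversions above are exact inverses; a minimal sketch:

```python
import math

def convert_to_db(value):
    """Linear -> dB: 10 * log10(value)."""
    return 10 * math.log10(value)

def convert_from_db(db):
    """dB -> linear: 10 ** (dB / 10)."""
    return 10 ** (db / 10)
```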

Preprocessing.extract_params_and_returns(docstring)[source]

Extracts the 'Parameters' and 'Returns' sections from a Numpydoc-formatted documentation string.

Parameters:

docstring -- str, documentation string of a function.

Returns:

A string containing only the 'Parameters' and 'Returns' sections.

Return type:

str

Preprocessing.get_date_from_ts(ts)[source]

Returns the datetime object corresponding to the given timestamp ts.

Parameters:

ts -- int/float

Returns:

datetime.datetime

Preprocessing.get_default_kwargs(func_name, module='ss')[source]

Retrieves optional keyword parameters, and their default values, from func_name.

Modules:
  • 'ss': scipy.signal

  • 'mtf': mne.time_frequency

  • 'si': scipy.integrate

  • 'ssw': scipy.signal.windows

  • 'np': numpy

Parameters:
  • func_name -- str, function name

  • module -- str, module name. Defaults to ss

Returns:

dict
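
This kind of lookup is presumably built on signature introspection; a self-contained sketch of the mechanism (hypothetical helper name, applied here to a stand-in function rather than a real scipy one):

```python
import inspect

def default_kwargs(func):
    """Return {name: default} for every parameter of `func` that
    declares a default value."""
    return {name: p.default
            for name, p in inspect.signature(func).parameters.items()
            if p.default is not inspect.Parameter.empty}
```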

Preprocessing.get_functions(module='ss')[source]

Retrieves and saves all function names and callables into a dictionary.

Modules:
  • 'ss': scipy.signal

  • 'mtf': mne.time_frequency

  • 'si': scipy.integrate

  • 'ssw': scipy.signal.windows

  • 'np': numpy

Parameters:

module -- str

Returns:

dict, {func_name: obj}

Preprocessing.get_required_params(func_name, module='ss')[source]

Retrieves required parameters, and default values, from func_name.

Modules:
  • 'ss': scipy.signal

  • 'mtf': mne.time_frequency

  • 'si': scipy.integrate

  • 'ssw': scipy.signal.windows

  • 'np': numpy

Parameters:
  • func_name -- str, function name

  • module -- str, module name. Defaults to ss

Returns:

dict

Preprocessing.removeCardiacComponent(raw, band, filterMethod='Fixed Peak Find')[source]

Applies a 5th-order Butterworth bandpass filter with cardiac artifact removal. Using a Wiener filter and scipy.signal.find_peaks, an ECG template is created and subtracted from the filtered signal, removing the cardiac component. Adapted from https://github.com/Fixel-Institute/BRAVO_SSR/blob/main/modules/LocalPerceptDatabase.py#L327

Parameters:
  • raw -- dict, {'Y': signal values}

  • band -- Description

  • filterMethod -- Description

Returns:

filtered signal, signal with only bandpass

Utils

utils.access_by_path(dictionary, path_parts)[source]

Access the dictionary by the provided list of keys/indices.

Parameters:
  • dictionary -- dict, list

  • path_parts -- list

utils.after_point(s)[source]

Extracts the substring after the first '.'.

Parameters:

s -- str

Returns:

str

utils.after_underscore(s)[source]

Extracts the substring after the first '_'.

Parameters:

s -- str

Returns:

str

utils.convert_to_timestamp(date_input)[source]

Converts date_input into timestamp.

Parameters:

date_input -- datetime, str

Returns:

float

utils.extract_date(d)[source]

Returns only the date part of a datetime object, as a string.

Parameters:

d -- datetime

Returns:

str

utils.extract_time(dt)[source]

Extracts and returns the time portion from a datetime object as a string (HH:MM:SS).

Parameters:

dt -- datetime

Returns:

str

utils.find_closest_index(new_time, target_hour, target_minute, target_second=0)[source]

Finds the index of the sample in new_time closest to the given hour and minute.

Parameters:
  • new_time -- list of datetime.datetime, sample times

  • target_hour -- int/float

  • target_minute -- int/float

  • target_second -- int/float

Returns:

int, if the target time is out of bounds return None.
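
A sketch of the nearest-sample search (hypothetical helper name; the exact out-of-bounds check of the real function is not reproduced, an empty input simply yields None here):

```python
from datetime import datetime, timedelta

def closest_index(times, hour, minute, second=0):
    """Index of the datetime in `times` nearest to hour:minute:second
    on the first sample's day; None when `times` is empty."""
    if not times:
        return None
    target = times[0].replace(hour=hour, minute=minute,
                              second=second, microsecond=0)
    # pick the index minimizing the absolute time difference
    return min(range(len(times)),
               key=lambda i: abs((times[i] - target).total_seconds()))
```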

utils.find_key_path(dictionary, target_key, path='')[source]

Recursively finds and prints the path of a key in a nested dictionary (including lists).

Parameters:
  • dictionary -- dict, list

  • target_key -- int/str

  • path -- str, optional. Looks for target_key in dictionary[path] ex.: path = '[key1][key2]'

Returns:

list, names of all paths that lead to target_key

utils.full_date2str(d)[source]

Converts datetime object to full date string

Parameters:

d -- datetime.datetime

Returns:

str

utils.get_timestamp_from_dt(dt_string)[source]

Parses a datetime string and returns a timestamp of the time portion.

Parameters:

dt_string -- str

Returns:

float

utils.parse_datetime(string)[source]

Converts string into datetime.datetime object.

Parameters:

string -- str, "%Y-%m-%dT%H:%M:%S.%fZ" OR "%Y-%m-%dT%H:%M:%SZ" OR "%Y-%m-%d %H:%M:%S"

Returns:

datetime.datetime object
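
The three accepted formats suggest a try-each-in-order parse; a minimal sketch (the behaviour on an unrecognized string is an assumption):

```python
from datetime import datetime

FORMATS = ("%Y-%m-%dT%H:%M:%S.%fZ", "%Y-%m-%dT%H:%M:%SZ", "%Y-%m-%d %H:%M:%S")

def parse_datetime(string):
    """Try each accepted format in order; raise on no match."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(string, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized datetime string: {string!r}")
```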

utils.parse_path(path)[source]

Convert the string path into a list of keys and indices. Example: '[key1][key2]' --> [key1,key2]

Parameters:

path -- str

Returns:

list
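
parse_path and access_by_path (documented above) form a natural pair; a sketch of both, where treating purely numeric parts as list indices is an assumption:

```python
import re

def parse_path(path):
    """'[key1][key2]' -> ['key1', 'key2']."""
    return re.findall(r"\[([^\]]+)\]", path)

def access_by_path(obj, path_parts):
    """Walk nested dicts/lists by the parsed parts; numeric parts
    index into lists (assumed behaviour)."""
    for part in path_parts:
        obj = obj[int(part)] if isinstance(obj, list) else obj[part]
    return obj
```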

utils.parse_time(string)[source]

Converts time string into datetime.datetime object.

Parameters:

string -- str, "%H:%M:%S"

Returns:

datetime.datetime object

utils.read_event_file(filename)[source]

Reads a text file and returns a list of (hour, minute, second, event) tuples. If seconds are not provided, they default to 0.

Parameters:

filename -- str, must be path/file.txt

Returns:

list

utils.read_lines(filename)[source]

Converts a txt file into a dictionary, with the first line providing the keys.

Parameters:

filename -- str, must be path/file.txt

Returns:

dict

utils.resource_path(*parts)[source]
utils.string2numbers(string)[source]

Converts a string of numbers separated by ',' (TicksInMs) into a np.array. Example: '1,2,3,,4,6,7' --> np.array([1,2,3,4,6,7])

Parameters:

string -- str

Returns:

np.ndarray
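
The documented behaviour, including skipping the empty field produced by a doubled comma, fits in a one-line sketch:

```python
import numpy as np

def string2numbers(string):
    """'1,2,3,,4,6,7' -> array([1., 2., 3., 4., 6., 7.]);
    empty tokens from doubled commas are dropped."""
    return np.array([float(tok) for tok in string.split(',') if tok])
```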