
UMD Winter Storm Workshop, January 2019


Workshop held in the context of Winter Storm 2019.

This workshop will introduce the Eelbrain pipeline for analyzing MEG data. We will analyze data from an MEG experiment based on an N400 paradigm using visually presented word pairs.

If you are planning to attend and have not already done so, please send me an email so I can keep you updated (and feel free to email me with any questions or concerns).

Preparation

First session (1/15)

  • All you need is a Python interpreter and terminal. If unsure, install Anaconda as described below.

For sessions 2 and later:

  • Make sure you have Anaconda installed; it can be downloaded from here (prefer the Python 3.x version, but that is not crucial).

  • Make sure you have a good Python editor; PyCharm (Community Edition) is a good choice, but on Windows VS Code, which is installed with Anaconda, is also sufficient.

  • Install the required Python packages as described on the Installing page. Make sure you install Eelbrain version 0.29b0.

  • If you haven’t received a link to the tutorial dataset, email me so I can share the data with you.

The Sessions

This is a tentative plan for the different days. For the MEG sessions, if we don’t finish in one session we will pick up where we left off in the next one.

1/15: Basics

We will talk about basic concepts of Python that will be needed for the following sessions. We’ll develop some hopefully useful examples involving reading from text files. Basically, if you already know what a class is and how to define one in Python, then you probably do not need to come.
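For orientation, here is a minimal sketch of the kind of Python covered in this session; the file name, the `WordPair` class, and the file format are hypothetical examples, not part of the workshop materials:

```python
class WordPair:
    "A prime-target word pair, as used in an N400 paradigm."

    def __init__(self, prime, target, related):
        self.prime = prime
        self.target = target
        self.related = related  # True if the words are semantically related

    def __repr__(self):
        return f"WordPair({self.prime!r}, {self.target!r}, related={self.related})"


def read_pairs(filename):
    "Read tab-separated prime/target/related lines from a text file."
    pairs = []
    with open(filename) as fid:
        for line in fid:
            prime, target, related = line.strip().split('\t')
            pairs.append(WordPair(prime, target, related == '1'))
    return pairs
```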

1/16: Sensor space analysis

I will present an overview of the pipeline and then we will analyze data from the MEG experiment. We will start with raw data coming directly from the MEG lab at UMD, and look for an N400 using permutation-based mass-univariate statistics.
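To give a flavor of what such a test looks like, here is a rough sketch of a sensor-space cluster-based permutation test in Eelbrain. The dataset `ds` (with a sensor-by-time NDVar called 'meg' and condition/subject labels) and the pickle file name are assumptions for illustration; the actual analysis in the session will differ in detail:

```python
from eelbrain import *

# load a saved Dataset of per-condition sensor data (hypothetical file name)
ds = load.unpickle('epochs.pickle')

# related-measures t-test, cluster-corrected with a permutation distribution
res = testnd.ttest_rel(
    'meg', 'condition', 'unrelated', 'related', match='subject', ds=ds,
    tstart=0.300, tstop=0.500,  # analysis window around the N400
    pmin=0.05,                  # threshold for forming clusters
    samples=10000,              # number of permutations
)
print(res.clusters)                  # table of clusters with corrected p-values
plot.TopoButterfly(res.difference)   # inspect the condition difference
```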

1/17: Source space analysis

We will source-localize the data and repeat the test for an N400 in source space (with minimum norm estimates). We will also talk about different parameters that affect the source localization.
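As a preview, this is a condensed sketch of minimum norm source localization using MNE-Python, which Eelbrain uses under the hood; the file names and parameter values are placeholders, and the workshop pipeline manages these steps for you:

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# hypothetical file names for one subject
evoked = mne.read_evokeds('subject-ave.fif', condition=0)
cov = mne.read_cov('subject-cov.fif')
fwd = mne.read_forward_solution('subject-fwd.fif')

# parameters such as loose orientation and depth weighting affect the solution
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method='MNE')
```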

1/22: Catch-up, visualization

We will first catch up with goals from the previous two sessions. We will then use any remaining time to talk about questions and concerns that may have come up, for example, how to generate figures and other ways to explore the data.
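As one small example of the kind of thing we might cover, Eelbrain figures can be saved to file for publication; the plot class, dataset, and file name below are illustrative:

```python
from eelbrain import plot

p = plot.TopoButterfly('meg', ds=ds)  # assumes a Dataset "ds" as in the sensor-space sketch
p.save('n400-butterfly.pdf')          # write the figure to a PDF file
```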