
@wmvanvliet
Created February 20, 2018 03:18
Normalization of a forward solution
@verbose
def normalize_forward_solution(forward, depth=0.8, limit_depth_chs=True,
                               loose=0.2, use_cps=True, projs=None,
                               verbose=None):
    """Normalize a forward solution.

    Parameters
    ----------
    forward : instance of Forward
        Forward solution to normalize.
    depth : None | float in [0, 1]
        Depth weighting coefficient. If None, no depth weighting is
        performed. Defaults to 0.8.
    limit_depth_chs : bool
        If True, use only grad channels in depth weighting. If grad channels
        aren't present, only mag channels will be used (if no mag, then EEG).
        If False, use all channels. Defaults to True.
    loose : float in [0, 1]
        Value that weights the source variances of the dipole components
        that are tangential to the cortical surface. If loose is 0, the
        solution is computed with fixed orientation. If loose is 1, it
        corresponds to free orientations. Defaults to 0.2.
    use_cps : bool
        Whether to use cortical patch statistics to define normal
        orientations. Only used when converting to surface orientation
        (i.e., for surface source spaces and ``loose < 1``).
        Defaults to True.
    projs : list of Projection | None
        Any SSPs to apply to the forward solution before normalizing.
        Defaults to None, which means no projections are applied.
    verbose : bool, str, int, or None
        If not None, override default verbose level (see :func:`mne.verbose`).

    Returns
    -------
    fwd_norm : instance of Forward
        The normalized forward solution.
    """
    pass
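The ``loose`` parameter described in the docstring can be illustrated with a minimal, self-contained sketch. This is a hypothetical helper (not part of MNE-Python) assuming free-orientation sources stored as consecutive triples with the two tangential components first and the surface-normal component last:

```python
import numpy as np

def loose_orientation_prior(n_sources, loose=0.2):
    """Hypothetical sketch: build per-component source variances.

    Each source contributes three orientation components, assumed ordered
    [tangential, tangential, normal]. The tangential components are scaled
    by ``loose``; the normal component keeps variance 1.
    """
    var = np.ones(n_sources * 3)
    var[0::3] *= loose  # first tangential component of every source
    var[1::3] *= loose  # second tangential component of every source
    return var
```

With ``loose=0`` the tangential variances vanish (fixed orientation); with ``loose=1`` all components are weighted equally (free orientation).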
@wmvanvliet (Author)
There already seems to be a lot of code in our codebase that performs some form of normalization of the forward model. It would be great to bundle that into a single function to avoid repeating ourselves. This would, for example, unlock depth weighting and loose orientation priors for beamformers.

I could take a stab at this, but MNE-Python is really big. Could we pool some knowledge in this GitHub issue about the various ways forward models get normalized throughout the codebase?

Here is an initial stab at a function signature.
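One ingredient the function would need is depth weighting. A minimal sketch of the usual idea, assuming a plain NumPy gain matrix with free-orientation sources stored as consecutive column triples (the function name, the ``limit`` cap, and the column layout are illustrative assumptions, not MNE-Python's implementation):

```python
import numpy as np

def depth_weights(gain, depth=0.8, limit=10.0):
    """Hypothetical sketch: depth weights from gain-matrix column norms.

    gain : ndarray, shape (n_channels, n_sources * 3)
        Free-orientation gain matrix; each source occupies 3 columns.
    """
    n_src = gain.shape[1] // 3
    # Total squared gain per source, summed over channels and the
    # 3 orientation columns of each source.
    g2 = (gain ** 2).reshape(gain.shape[0], n_src, 3).sum(axis=(0, 2))
    # Deep sources couple weakly to the sensors (small g2), so raising
    # to a negative power gives them larger weights.
    w = g2 ** -depth
    # Cap the dynamic range so very deep sources don't dominate.
    return np.minimum(w, w.min() * limit ** 2)
```

These weights would then scale the source covariance (or equivalently the gain columns) before computing an inverse operator or beamformer.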
