Help generating cochleagrams in brian2hears

Hello.

The questions below relate to simulations that form part of an ongoing project to further develop the model and theory described in a paper we just put out:

Jones, S. D., & Westermann, G. (2022). Under-resourced or overloaded? Rethinking working memory deficits in developmental language disorder. Psychological Review. Advance online publication. https://doi.org/10.1037/rev0000338

If you can help, we’ll of course at the very least put you in the Acknowledgements of this new paper, but we would also be very happy to invite you on board as an author.

Questions

We’re looking to re-create, in brian2hears, the cochleagram inner hair cell (IHC) manipulation shown in this poster (i.e., dropping the low-pass cutoff of the IHC from 3000 Hz to 1000 Hz to 50 Hz): https://twitter.com/JoshHMcDermott/status/1505242257615687680/photo/1

The authors of the work above made use of this package: https://github.com/msaddler/bez2018model, which is a wrapper for the source code associated with:

Bruce, I. C., Erfani, Y., & Zilany, M. S. A. (2018). A phenomenological model of the synapse between the inner hair cell and auditory nerve: Implications of limited neurotransmitter release sites. Hearing Research, 360, 40–54.

As can be seen in the DEMO.ipynb file within that repository, the package has an ‘IhcLowPass_cutoff’ parameter, which we’d like to manipulate in order to disrupt IHC phase locking (again, see the poster). Does anyone have any pointers as to how something similar can be done in brian2hears? I’m currently working essentially from the Cochleagram and Auditory Nerve Fibre examples in the brian2hears documentation, but no joy as yet.
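
To make that concrete, below is a minimal sketch of the pipeline we have been experimenting with, adapted from the auditory nerve fibre example. The assumption (which we’d very much like checked) is that brian2hears’ first-order `LowPass` filterbank, applied after the rectification/compression stage and optionally cascaded, can stand in for the BEZ2018 IHC membrane low-pass; the cutoff values are just the three conditions from the poster:

```python
from brian2 import *
from brian2hears import *

sound = Sound.whitenoise(100*ms)    # stand-in for our word stimuli
cf = erbspace(20*Hz, 20*kHz, 100)   # 100 channels across the hearing range

# Basilar membrane filtering
gfb = Gammatone(sound, cf)

# IHC transduction: half-wave rectification + compression,
# as in the brian2hears auditory nerve fibre example
ihc = FunctionFilterbank(gfb, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))

# IHC membrane low-pass: the knob we take to correspond to the
# BEZ2018 'IhcLowPass_cutoff'. Sweep 3*kHz -> 1*kHz -> 50*Hz to
# progressively degrade phase locking to temporal fine structure.
ihc_cutoff = 50*Hz                  # one of the three poster conditions
ihc_lp = ihc
for _ in range(3):                  # cascade first-order stages for a steeper rolloff
    ihc_lp = LowPass(ihc_lp, ihc_cutoff)

cochleagram = ihc_lp.process()      # array of shape (samples, channels)
```

In particular, we are unsure whether a cascade of first-order `LowPass` stages is an acceptable stand-in for the higher-order IHC low-pass filter in the BEZ2018 model, or whether there is a more principled way to do this in brian2hears.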

In addition, a second manipulation we’d like to run is to generate a range of cochleagrams simulating the early development of frequency discrimination in the cochlea and auditory brainstem, so that all cochleagrams have the same number of neurons, but some are tuned to wide frequency bands and some to narrower, more ‘adult-like’ bands. This could be done crudely by changing the number of filters passed to erbspace, but if anyone has any more refined ideas that would be fantastic.
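
Our current best guess (again, feedback welcome) is to keep erbspace fixed and instead widen each filter via the `Gammatone` class’s bandwidth-scaling parameter `b`, which multiplies the ERB at each centre frequency (the brian2hears default is 1.019). That keeps the channel count and centre frequencies constant across conditions. The `b` values below are purely illustrative, not calibrated against infant tuning data:

```python
from brian2 import *
from brian2hears import *

sound = Sound.whitenoise(100*ms)
cf = erbspace(20*Hz, 20*kHz, 100)   # same channel count in every condition

# b scales each filter's bandwidth relative to the ERB at its centre
# frequency; larger b -> broader, less mature tuning, with centre
# frequencies and channel count unchanged.
cochleagrams = {}
for b in [1.019, 2.0, 4.0]:         # illustrative values only
    gfb = Gammatone(sound, cf, b=b)
    ihc = FunctionFilterbank(gfb, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))
    cochleagrams[b] = ihc.process()
```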

In sum, then, we are looking to generate (i) a low temporal resolution, phase-locking-degraded condition (as per the poster linked above), and (ii) a spectrally degraded (i.e., wide tuning band) condition. The resulting cochleagrams will be passed to a CNN, which will be trained and tested on a spoken word recognition task. The geometry of the CNN’s internal representations will then be assessed as in our Psychological Review paper linked above.

Thank you very much,

Sam and Co.