# Mathematics for Neuroscientists

## Gabbiani, F. & Cox, S.

1st Edition, September 2010

English

Hardcover

490 pages

2000 g

23 × 29 × 3 cm

### ISBN 9780123748829

### Editorial ACADEMIC PRESS


### Key Features

- Introduces the reader to mathematical concepts of importance for the analysis of data and the formulation of theories in neuroscience, enabling all neuroscientists to apply and use those concepts in their research.

- Covers linear algebra, ordinary and partial differential equations, Fourier transforms, probability, and stochastic processes, applying them to frequently encountered problems in neuroscience: linear algebra and differential equations to solve the Hodgkin-Huxley equations, probability to describe stochastic synaptic release, stochastic processes to describe noise in neurons, and Fourier transforms to describe the receptive fields of visual neurons.

- Introduces the numerical methods used to implement algorithms related to these mathematical concepts, including the tools necessary to solve systems of differential equations, issues of numerical stability, discrete Fourier transforms, discrete convolutions, computation of power spectra and cross-correlations, and the generation of sample stochastic processes and random variables.
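As a minimal illustration of the kind of numerical method listed above, consider forward-Euler integration of a passive membrane equation, dV/dt = -(V - E)/τ. The book's own code is in Matlab; the sketch below is in Python, and the function and parameter names are illustrative, not taken from the book:

```python
import math

def euler_passive_membrane(v0, e_leak, tau, dt, n_steps):
    """Forward-Euler integration of dV/dt = -(V - E_leak) / tau.

    Returns the membrane voltage at each time step. Stability requires
    dt << tau; here dt/tau = 0.01, well inside the stable regime.
    """
    v = v0
    trace = [v]
    for _ in range(n_steps):
        v += dt * (-(v - e_leak) / tau)
        trace.append(v)
    return trace

# The numerical solution should relax toward E_leak, tracking the
# exact solution V(t) = E_leak + (V0 - E_leak) * exp(-t / tau).
trace = euler_passive_membrane(v0=-50.0, e_leak=-70.0, tau=10.0, dt=0.1, n_steps=1000)
exact = -70.0 + 20.0 * math.exp(-100.0 / 10.0)  # exact value at t = 100 ms
```

Comparing `trace[-1]` against `exact` is precisely the kind of accuracy check the book's numerical-stability discussions revolve around.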

| Need | Feature | Benefit |
| --- | --- | --- |
| An easy-to-follow introduction to the mathematical tools most commonly used in Neuroscience | Integrates mathematical analysis and modeling tools systematically by using them to address specific, well-known problems in Neuroscience | Saves significant time in understanding and adapting mathematical methods to common problems |
| A reference of tools for applying mathematical concepts to one's own problems | A toolbox of Matlab code modules solving the most common mathematical problems encountered in the Neuroscience laboratory | Modules can be directly adapted for use in the laboratory, saving development time and cost |
| A toolset for the systems neuroscientist to support thinking about their results in a broader theoretical framework | A systematic introduction to the most commonly used tools for creating cellular models of brain function, opening up theoretical modeling to the mathematically less inclined neuroscientist | Opens a new horizon for systems neuroscientists: more sophisticated analysis and modeling of their results |

### Description

The central aim of Mathematical Neuroscience is to establish a language with which to build and test a platform for the quantitative investigation of the central problems of neuroscience. Success is measured not only by our ability to reaffirm or synthesize existing theories but by our ability to guide existing, or suggest novel, experiments. Although Mathematical Neuroscience has developed alongside and in conjunction with Experimental Neuroscience, both sub-disciplines have matured to a level where experts in one may be novices in the other and where passage to the research frontier carries a fairly high entry cost. The best way to train aspiring neuroscientists is to integrate instruction in mathematics and biology early.

Currently no introductory text simultaneously develops concrete biological and mathematical skills as a means to a deeper understanding of neuroscience. "Introduction to Mathematical Neuroscience" is just such a text.

The book will introduce mathematical and computational tools in precisely the contexts that first established their importance for neuroscience. It will develop partial differential equations via the seminal work of Hodgkin and Huxley (1952) on nerve conduction, probability theory following the beautiful work of Fatt and Katz (1951) on synaptic transmission, dynamical systems theory in the context of FitzHugh's (1955) critical investigation of action potential threshold, and linear algebra in the context of Hines' (1984) important work on dendritic processing. In addition, the book will apply Fourier transforms to describe neuronal receptive fields following Enroth-Cugell and Robson's (1966) work on retinal ganglion cells and its subsequent extension to Hubel and Wiesel's (1962) characterization of cat cortical neurons. Lastly, it will introduce and motivate statistical decision methods starting with the historical photon detection experiments of Hecht, Shlaer and Pirenne (1942). All mathematical concepts will be introduced from simple to complex using the most widely used computing environment in the field, Matlab.

From this foundation, Introduction to Mathematical Neuroscience will embark on a number of important modern themes, such as the role of noise in shaping the responses of neurons at both the subthreshold and suprathreshold levels, the role of active conductances in determining the transfer properties of dendritic cables and the backpropagation of action potentials, the role of neuronal calcium signaling, the coding and decoding of information in neuronal spike trains, and the role of correlations in population coding.
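One of these themes, subthreshold membrane noise, is treated via the Ornstein-Uhlenbeck process (chapter 18.4). A rough Euler-Maruyama sketch in Python, illustrative only (the book's own code is Matlab, and the parameter names below are assumptions):

```python
import math
import random

def ou_process(theta, sigma, dt, n_steps, x0=0.0, seed=1):
    """Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
    dX = -theta * X dt + sigma dW, a standard model of subthreshold
    membrane-voltage fluctuations."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# The stationary variance of the OU process is sigma^2 / (2 * theta);
# the sample variance of a long path, past its transient, should settle
# near that value (0.125 for these parameters).
path = ou_process(theta=1.0, sigma=0.5, dt=0.01, n_steps=100000)
tail = path[10000:]
mean = sum(tail) / len(tail)
var = sum((v - mean) ** 2 for v in tail) / len(tail)
```

Checking the sample variance against sigma² / (2θ) is a simple sanity check on the discretization, of the kind the book's numerical chapters emphasize.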

This book will provide a grounded introduction in the fundamental concepts of mathematics, neuroscience and their combined use, thus providing the reader with a spring-board to cutting-edge research topics and fostering a tighter integration of mathematics and neuroscience for future generations of students.

"A Concrete Introduction to Mathematical Neuroscience" will cover approximately 400 pages of text with accompanying illustrations. A companion website accessible to students will contain solutions to approximately half of the exercises, comprising both theoretical derivations and Matlab code to solve numerical problems and reproduce the book's figures. The remaining solutions will be included in an instructor's website accessible to faculty adopting the book for an academic course.

The book was developed from a course given over the last six years, simultaneously, to undergraduate and graduate students in Computational and Applied Mathematics, Science, and Engineering at Rice University and to Neuroscience graduate students at Baylor College of Medicine.

The book will alternate between mathematical chapters, introducing important concepts and numerical methods, and neurobiological chapters, applying these concepts and methods to specific topics. We aim to cover topics ranging from classical cellular biophysics up to systems-level neuroscience.

Each of the book's chapters will be relatively short, roughly corresponding to the material usually presented in one or two seventy-five-minute lectures.

Starting at an introductory mathematical level, presuming no more than calculus through elementary differential equations, the level will build up as increasingly complex techniques are introduced and combined with earlier ones. Each chapter will include a comprehensive series of exercises with solutions, taken from the set we developed over the years. Matlab code will be included for each computational figure, allowing the reader to reproduce it. Each chapter will close with bibliographical notes referring the reader to more specialized literature, along with additional mathematical material needed either to deepen the reader's understanding or to introduce basic concepts for less mathematically inclined readers.

### Readership

* Graduate and postgraduate students in Neuroscience and Psychology looking for an introduction to mathematical methods in Neuroscience

* Researchers in Neuroscience and Psychology looking for a quick reference for mathematical methods

* Students in applied mathematics, the physical sciences, and engineering who want an introduction to Neuroscience in a mathematical context

### Quotes

"I really think this book is very, very important. This is precisely what has been missing from the field and is badly needed. Non-physicists or non-mathematicians coming to neuroscience try hard to get up to speed in the basic maths needed to get by but give up because there is no clear explication of this. Please publish this book." Dr. Kevin Franks, research fellow, Richard Axel's laboratory, Columbia University, NYC

"I think this is a very interesting and worthwhile proposal. The idea of presenting sufficient maths to understand the theoretical neuroscience, alongside the neuroscience itself, is appealing. The inclusion of Matlab code for all examples and computational figures is an excellent idea. Many readers will want to use and explore the code, either to directly aid their understanding, or as the basis for their own ongoing research, and Matlab is a widely used tool in this area."

David Corney, research fellow, Institute of Ophthalmology, University College London.

**Table of Contents**

1 Introduction

- 1.1 Mathematical Notation

2 The Passive Isopotential Cell

- 2.1 The Nernst Potential
- 2.2 Membrane Conductance
- 2.3 Membrane Capacitance & Current
- 2.4 Synaptic Conductance
- 2.5 Exercises

3 Differential Equations

- 3.1 Exact Solution
- 3.2 Moment Methods
- 3.3 Approximate Solution
- 3.4 Synaptic Input
- 3.5 Exercises

4 The Active Isopotential Cell

- 4.1 The Delayed Rectifier Potassium Channel
- 4.2 The Sodium Channel
- 4.3 The Hodgkin–Huxley Equations
- 4.4 The Transient Potassium Channel
- 4.5 Exercises

5 The Quasi-Active Isopotential Cell

- 5.1 The Quasi-Active Model
- 5.2 Approximate Solution
- 5.3 Exact Solution
- 5.4 A Persistent Sodium Current
- 5.5 An HCN Current
- 5.6 Exercises

6 The Passive Cable

- 6.1 The Discrete Passive Cable Equation
- 6.2 Exact Solution via Eigenvector Expansion
- 6.3 Approximate Solution via Euler’s Method
- 6.4 The Passive Cable Equation
- 6.5 Synaptic Input
- 6.6 Exercises

7 The Passive Dendritic Tree

- 7.1 The Discrete Passive Tree
- 7.2 Eigenvector Expansion
- 7.3 Approximate Solution and Synaptic Integration
- 7.4 The Passive Dendrite Equation
- 7.5 Equivalent Cylinder
- 7.6 Branched Eigenfunctions
- 7.7 Exercises

8 The Active Dendritic Tree

- 8.1 The Active Cable
- 8.2 The Quasi-Active Cable
- 8.3 Exercises

9 Neuronal Calcium Signaling

- 9.1 Voltage Gated Calcium Channels
- 9.2 Diffusion, Buffering and Extraction of Cytosolic Calcium
- 9.3 Calcium Release from the Endoplasmic Reticulum
- 9.4 Calcium in Spines
- 9.5 Presynaptic Calcium and Transmitter Release
- 9.6 Exercises

10 The Singular Value Decomposition

- 10.1 Introduction
- 10.2 Principal Component Analysis and Spike Sorting
- 10.3 Synaptic Plasticity and Principal Components
- 10.4 Neuronal Model Reduction via Balanced Truncation
- 10.5 Exercises

11 Reduced Single Neuron Models

- 11.1 The Leaky Integrate-and-Fire Neuron
- 11.2 Simplified Models of Bursting Neurons
- 11.3 Exercises

12 Probabilities I

- 12.1 Events and Random Variables
- 12.2 Binomial Random Variables
- 12.3 Poisson Random Variables
- 12.4 Gaussian Random Variables
- 12.5 Cumulative Distribution Functions
- 12.6 Conditional Probabilities
- 12.7 Sum of Independent Random Variables
- 12.8 Exercises

13 Synaptic Transmission and Quantal Release

- 13.1 Basic Synaptic Structure and Physiology
- 13.2 Discovery of Quantal Release
- 13.3 Compound Poisson Model of Synaptic Release
- 13.4 Comparison with Experimental Data
- 13.5 Quantal Analysis at Central Synapses
- 13.6 Facilitation, Potentiation and Depression of Synaptic Transmission

14 Probabilities II

- 14.1 Transformation of Random Variables
- 14.2 Random Vectors
- 14.3 Exponential and Gamma Distributed Random Variables
- 14.4 The Homogeneous Poisson Process
- 14.5 Exercises

15 Quantification of Spike Train Variability

- 15.1 Spontaneous Activity of Nerve Cells
- 15.2 Interspike Interval Histograms and Coefficient of Variation
- 15.3 Refractory Period
- 15.4 Spike Count Distribution and Fano Factor
- 15.5 Renewal Processes
- 15.6 Stationarity
- 15.7 Return Maps and Serial Correlation Coefficients

16 Fourier Transform and Distributions

- 16.1 Linear Time-Invariant Systems
- 16.2 Fourier Transform and Convolution Theorem
- 16.3 Discrete Fourier Transform
- 16.4 Discrete Impulse and Dirac Distribution
- 16.5 Derivatives of Distributions
- 16.6 Exercises

17 Stochastic Processes

- 17.1 Definition and General Properties
- 17.2 Gaussian Processes
- 17.3 Point Processes
- 17.4 The Inhomogeneous Poisson Process
- 17.5 Spectral Analysis
- 17.6 Exercises

18 Membrane Noise

- 18.1 Single Channel Recordings
- 18.2 Two-State Channel Model
- 18.3 Multi-State Channel Models
- 18.4 The Ornstein-Uhlenbeck Process
- 18.5 Synaptic Noise
- 18.6 Exercises

19 Power and Cross-Spectra

- 19.1 Cross-Correlation and Coherence
- 19.2 Estimator Bias and Variance
- 19.3 Numerical Estimate of the Power Spectrum
- 19.4 Exercises

20 Natural Light Signals and Phototransduction

- 20.1 Wavelength and Intensity
- 20.2 Spatial Properties of Natural Light Signals
- 20.3 Temporal Properties of Natural Light Signals
- 20.4 Phototransduction Model
- 20.5 Exercises

21 Firing Rate Codes and Early Vision

- 21.1 Definition of Mean Instantaneous Firing Rate
- 21.2 Visual System and Visual Stimuli
- 21.3 Spatial Receptive Field of Retinal Ganglion Cells
- 21.4 Characterization of Receptive Field Structure
- 21.5 Spatio-Temporal Receptive Fields
- 21.6 Static Non-Linearities
- 21.7 Exercises

22 Models of Simple and Complex Cells

- 22.1 Introduction
- 22.2 Simple Cell Models
- 22.3 Non-Separable Receptive Fields
- 22.4 Receptive Fields of Complex Cells
- 22.5 Motion-Energy Model
- 22.6 Hubel-Wiesel Model
- 22.7 Multiscale Representation of Visual Information
- 22.8 Exercises

23 Stochastic Estimation Theory

- 23.1 Minimum Mean-Square Error Estimation
- 23.2 Estimation of Gaussian Signals
- 23.3 Linear Non-Linear (LN) Models
- 23.4 Exercises

24 Reverse-Correlation and Spike Train Decoding

- 24.1 Reverse-Correlation
- 24.2 Stimulus Reconstruction
- 24.3 Exercises

25 Signal Detection Theory

- 25.1 Testing Hypotheses
- 25.2 Ideal Decision Rules
- 25.3 ROC Curves
- 25.4 Multi-Dimensional Gaussian Signals
- 25.5 Fisher Linear Discriminant
- 25.6 Exercises

26 Relating Neuronal Responses and Psychophysics

- 26.1 Single Photon Detection
- 26.2 Signal Detection Theory and Psychophysics
- 26.3 Motion Detection
- 26.4 Exercises

27 Population Codes

- 27.1 Cartesian Coordinate Systems
- 27.2 Overcomplete Representations
- 27.3 Frames
- 27.4 Maximum Likelihood
- 27.5 Estimation Error and Cramér-Rao Bound
- 27.6 Population Coding in the Superior Colliculus
- 27.7 Exercises

28 Neuronal Networks

- 28.1 Introduction
- 28.2 Small Integrate and Fire Networks
- 28.3 Large Integrate and Fire Networks
- 28.4 Integrate and Fire Networks with Plastic Synapses
- 28.5 Exercises

29 Annotated Bibliography

30 Solutions to Exercises

**Tel** 91 593 99 99

**Fax** 91 448 21 88

**Address**

C/ Raimundo Lulio, 1, 28010 Madrid, Spain.

© 2022 Axón Librería S.L.
