Information

Have the effects of MRIs and other electromagnetic devices on human psychology been studied?


What are the effects of MRIs and other electromagnetic devices on the human brain? Naturally, their fields could move metals toward or through brain cells' membranes. I can also imagine that the experience itself (lying in a strange position, meditating, and attempting to stay as still as possible for an hour and a half) might cause some long-term effect. Have the effects of MRIs and other electromagnetic devices on human psychology or neurology been studied comparatively? Could you also mention the mental health aspects?


Yes, the effects of electromagnetic fields on the brain have been studied extensively, especially with respect to cellphones, WiFi routers, and other small devices that emit such fields. The results are typically mixed, but the overall consensus is that "it depends" - on the strength of the field, the environment, the length of exposure, and so on - and most modern consumer devices operate at levels well below safety margins.

The effects of MRIs on cognitive function have been studied to some degree, starting in the 1990s, early in their history. The most prominent known cognitive effects include dizziness, vertigo, and impaired concentration and awareness. Most of these effects are temporary (limited to the time spent inside the MRI machine), are typically found only in high-field MRIs, and in most cases require movement (movement through the static field is theorized to induce a changing magnetic field).

The long-term effects of exposure to MRIs are not well known. However, as MRIs are in very common use, and are often a core part of studies on mental health and cognitive function, their safety is indirectly tested regularly. As such, there is not much impetus to conduct a controlled long-term study. Note also that MRI-related health risks tend to be more important for the operators who are chronically exposed, rather than for the patients who are only briefly exposed.

For your case, you should find out the field strength of the MRI used: the majority of clinical systems operate at 1.5 T, while commercial systems are available between 0.2 T and 7 T. Most of the known effects on cognitive function are found with MRIs above 3 T, which are relatively uncommon. Even if you were inside a particularly powerful MRI machine, as long as you remained perfectly still, no significant magnetic field fluctuations should occur; and even if you did move, the effects are temporary.


4. Conclusion

De Flora et al. wrote the following: “Since the late 20th century, chronic degenerative diseases have overcome infectious disease as the major causes of death in the 21st century, so an increase in human longevity will depend on finding an intervention that inhibits the development of these diseases and slows their progress” [33].

Could such an intervention be located right beneath our feet? Earthing research, observations, and related theories raise an intriguing possibility about the Earth's surface electrons as an untapped health resource—the Earth as a “global treatment table.” Emerging evidence shows that contact with the Earth—whether being outside barefoot or indoors connected to grounded conductive systems—may be a simple, natural, and yet profoundly effective environmental strategy against chronic stress, ANS dysfunction, inflammation, pain, poor sleep, disturbed HRV, hypercoagulable blood, and many common health disorders, including cardiovascular disease. The research done to date supports the concept that grounding or earthing the human body may be an essential element in the health equation along with sunshine, clean air and water, nutritious food, and physical activity.



This article presents a systematic review of published scientific studies on the potential ecological effects of radiofrequency electromagnetic fields (RF-EMF) in the range of 10 MHz to 3.6 GHz (from amplitude modulation, AM, to lower band microwave, MW, EMF).

Methods

Publications in English were searched in ISI Web of Knowledge and Google Scholar with no restriction on publication date. Five species groups were identified: birds, insects, other vertebrates, other organisms, and plants. Not only clearly ecological articles, such as field studies, were taken into consideration; laboratory studies investigating the effects of RF-EMF on biological endpoints with clear ecological significance, such as fertility, reproduction, behaviour, and development, were also included.

Results

Information was collected from 113 studies from original peer-reviewed publications or from relevant existing reviews. A limited number of ecological field studies was identified. The majority of the studies were conducted in a laboratory setting on birds (embryos or eggs), small rodents, and plants. In 65% of the studies (50% of the animal studies and about 75% of the plant studies), ecological effects of RF-EMF were found at both high and low dosages. No clear dose–effect relationship could be discerned. Studies finding an effect applied longer durations of exposure and focused more on the GSM frequency ranges.

Conclusions

In about two-thirds of the reviewed studies, ecological effects of RF-EMF were reported at both high and low dosages. The very low dosages are compatible with real field situations and could be found under environmental conditions. However, a lack of standardisation and a limited number of observations limit the possibility of generalising results from the organism to the ecosystem level. We propose that future studies conduct more repetitions of observations and explicitly use the available standards for reporting RF-EMF-relevant physical parameters in both laboratory and field studies.

Highlights

► Scientific literature was screened for articles on ecological effects of RF-EMF. ► RF-EMF had a significant effect on birds, insects, other vertebrates, other organisms and plants in 70% of the studies. ► Development and reproduction of birds and insects are the most strongly affected endpoints. ► There is a lack of field- and ecological studies on populations and the interaction of species. ► There is an urgent need for repetitions of studies finding effects and investigations into effects on ecosystems.


Research With Human Volunteers

  • A method called “microdosing” can provide vital information on the safety of an experimental drug and how it is metabolized in humans prior to large-scale human trials. Volunteers are given an extremely small one-time drug dose, and sophisticated imaging techniques are used to monitor how the drug behaves in the body. Microdosing can replace certain tests on animals and help screen out drug compounds that won’t work in humans so that they are never tested in animals.
  • Advanced brain imaging and recording techniques—such as functional magnetic resonance imaging (fMRI)—with human volunteers can be used to replace archaic experiments in which rats, cats, and monkeys have their brains damaged. These modern techniques allow the human brain to be safely studied down to the level of a single neuron (as in the case of intracranial electroencephalography), and researchers can even temporarily and reversibly induce brain disorders using transcranial magnetic stimulation.

Sebastien Point

Electrohypersensitivity syndrome may have little to do with actual exposure to electromagnetic radiation. It instead may be better understood as a phobia explained by anxiety disorder mechanisms.

The development of communication technologies (such as mobile phones and other wireless devices) has been followed over the past twenty years by the appearance of a new self-assessed, so-called electrohypersensitivity (EHS) syndrome, which some people—including some medical doctors—attribute to permanent exposure to domestic, low levels of electromagnetic radiation. EHS syndrome is not the only potential concern raised by people worried about electromagnetic radiation; effects on cognition, sleep, and tumor initiation or growth have also been pointed out—without convincing scientific evidence or satisfying proposals regarding potential biological mechanisms. But until now, EHS (also often called electromagnetic hypersensitivity) syndrome seems to have been the most prominent controversy about the supposed health effects of exposure to electromagnetic radiation. Some studies (Hallberg and Oberfeld 2006; Eltiti et al. 2007; Schröttner and Leitgeb 2008) tried to count EHS-affected people, which is a difficult task because of the lack of a medical description of EHS syndrome. Results of these studies are shown in Figure 1. A continuously increasing trend is observed in EHS self-diagnoses between 1985 and 2005, although the latest studies suggest a decrease.

Figure 1: Percentage of EHS individuals in the population as measured in different European countries between 1985 and 2008 (adapted from Bellayer 2016).

In the face of numerous claims regarding adverse health effects of electromagnetic radiation in general—and EHS syndrome in particular—the French Agency for Food, Environmental, and Occupational Health & Safety (ANSES) emphasized in its 2018 report the need for taking care of people suffering from EHS syndrome, although it did not recognize a causal relationship between electromagnetic radiation and EHS syndrome (ANSES 2018). In parallel, for some years, an increasing number of court decisions have recognized the right of individuals suffering from EHS syndrome to refuse wireless technologies for medical reasons, and some judges have even affirmed a causal relationship between electromagnetic radiation and EHS syndrome. Mass media are also interested in the problem, and it is easy to find numerous online testimonials of people suffering from EHS syndrome.

Scientists have tried to clarify the nature of this EHS syndrome and to verify the reality of the alleged relationship with exposure to electromagnetic radiation. In this article, I will summarize results produced by provocation studies and share my view that the etiology of the EHS syndrome is not related to real exposure to electromagnetic radiation. Instead, a phobia-like disorder should be investigated as a possible explanation of EHS syndrome.

Provocation Studies

Biological effects of electromagnetic radiation are taken into account by normative standards based on International Commission on Non-Ionizing Radiation Protection (ICNIRP) statements (ICNIRP 1998). Normative exposure limits aim to protect people from high doses of electromagnetic radiation, which can result in electrical or thermal effects (or both) depending on radiation frequency. Indeed, induced currents or temperature elevation are the only proven biological adverse effects associated with acute exposure to electromagnetic radiation (Perrin and Souques 2012). In Europe, directive 2013/35/EU has been created to protect workers from exposure to conditions able to generate such effects. However, EHS syndrome is claimed to appear when people are exposed to low levels of electromagnetic radiation.

A provocation study or trial is a form of medical clinical trial frequently used in this field. Participants are exposed either to a substance or device claimed to provoke a response or to a sham substance or device that should provoke no response. Several provocation studies have been conducted in an attempt to reveal a potential relationship between exposure to electromagnetic radiation at low levels and EHS syndrome. The premise is quite simple: as EHS people claim to suffer from various symptoms when they are exposed to electromagnetic radiation, experiments should be able to examine the phenomenon by comparing the level of discomfort of EHS people across exposure conditions, measured against a control population. Such studies must deal with a potential low-dose effect, which makes experiments sensitive to confounding variables. Thus, to be usable, such experiments must at least be single-blinded and, if possible, double-blinded, randomized, and counterbalanced.
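The design described above can be illustrated with a minimal simulation. This is a sketch, not a real trial protocol: the rating model, sample size, and noise parameters are invented for illustration, and the null hypothesis (ratings independent of the true, blinded condition) is built in by construction.

```python
import random
import statistics

def simulate_provocation_trial(n_participants=40, seed=0):
    """Sketch of a double-blind, counterbalanced provocation trial.

    Each participant completes two sessions (real vs. sham exposure)
    in counterbalanced order, and rates discomfort on a 0-10 scale.
    Under the null hypothesis modeled here, ratings do not depend on
    the true (blinded) exposure condition at all.
    """
    rng = random.Random(seed)
    real_scores, sham_scores = [], []
    for i in range(n_participants):
        # Counterbalance: half the sample gets real first, half sham first.
        order = ["real", "sham"] if i % 2 == 0 else ["sham", "real"]
        for condition in order:
            # Hypothetical rating model: baseline discomfort plus noise,
            # independent of the actual condition (pure null effect).
            rating = 3.0 + rng.gauss(0, 1.5)
            (real_scores if condition == "real" else sham_scores).append(rating)
    return statistics.mean(real_scores), statistics.mean(sham_scores)

real_mean, sham_mean = simulate_provocation_trial()
```

Under this null model, the real and sham condition means differ only by sampling noise, which is exactly the pattern the reviewed provocation studies report for EHS participants.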

Several authors have tried to create such experiments. One full and relevant analysis of these provocation studies has been made by E. van Rongen and colleagues (2009), who conclude that in provocation studies a causal relation between electromagnetic radiation exposure and symptoms "has not been demonstrated, suggesting that psychological factors such as conscious expectation of effect may play an important role in this condition." This conclusion is shared by Renáta Szemerszky and colleagues (2010), who studied the role of the nocebo effect in physical symptoms reported at a frequency of 50 hertz. They found a considerable nocebo effect and noted the formation of "a vicious circle of psychosocial factors, such as enhanced perception of risk and expectations, self-monitoring, somatisation and somatosensory amplification, causalization, and misattribution."

A Cognitive Approach

On the one hand, provocation studies failed to determine a causal relationship with exposure to low levels of electromagnetic radiation, and there is no serious candidate for a biological mechanism. On the other hand, results of some studies show a significant number of people claiming to suffer from EHS syndrome. These contradictory observations call for a new pathway for investigations. As suggested by some authors, one possible etiology is psychological. I suggest a cognitive mechanism for explaining EHS syndrome as a potential form of phobia in which confirmation bias could have a central role. A view of my mechanism proposal is shown in Figure 2. The rationale and hypothesis used for building it are discussed below.

Biological Vulnerability

Anxious symptomatology has been described as under the influence of some genetic factors (Jardine et al. 1984; Kendler et al. 1992; Andrews 1996). Studying the etiology of social phobia, Ronald M. Rapee and Richard G. Heimberg (1997) suggested that genetic factors may explain preferential allocation of attention toward danger. The role of the biological factor has been integrated in many models of anxiety, including the Jones and Barlow biopsychosocial model of post-trauma stress (Jones and Barlow 1992) and the Clark model of panic (Clark 1986). In the latter model, people suffering from panic disorders are excessively sensitive to some normal or pathological body perceptions, which they consider a catastrophic threat and which can trigger a panic attack. The biological manifestation of this panic attack can amplify the perception of the threat. Some recent results have shown that this biological vulnerability to anxiety disorders, including depression and post-trauma stress, could be related to the volume of the hippocampi (Campbell and MacQueen 2004; Bremner et al. 1995). The widely accepted theory of anxiety by Jeffrey A. Gray and Neil McNaughton (2003) also points out the central role of the hippocampi in anxious behaviors. But some studies suggest that hippocampal volume evolution is not a result of anxiety, trauma, or depression but would be, on the contrary, a causative factor (Gilbertson et al. 2002). Biological vulnerability and preferential allocation of one's attention to threat is accepted as one of the bases for the development of anxiety disorders. To consider EHS syndrome as a form of phobia, one has to make the fundamental hypothesis that EHS individuals carry, prior to the appearance of symptoms, this type of biological vulnerability.

Alleged Exposure as Neutral Stimulus and Mental Representation of Health Disorders

Anxiety troubles are often associated with a neutral stimulus, as understood in a Pavlovian approach. If EHS syndrome is a phobia, alleged exposure to electromagnetic radiation could play the role of neutral stimulus. From my perspective, several cues support this assumption.

As demonstrated by provocation studies, people, including those claiming an EHS syndrome, are not able to detect electromagnetic radiation at domestic, low levels. As low-level radiofrequency electromagnetic radiation is neither visible nor detectable, exposure can be inferred only on the basis of cues such as the proximity of mobile phones or antennas. Yet some research has shown a strong relationship between the loss of perceived control and anxiety troubles (Gallagher et al. 2014). Exposure to electromagnetic radiation could be a neutral stimulus powered by the feeling that the person has no control over this exposure.

News media are increasingly focused on potential effects of electromagnetic radiation, especially with the current deployment of 5G technology. This media treatment could encourage people to focus on mobile phones, antennas, or radiofrequency emitters existing in their environment, and on physical perceptions when they encounter or use them. Such a hypothesis is supported by recent results, especially those of Anne-Kathrin Bräscher and colleagues (2017), who showed that worrisome information on health effects of WiFi increases discomfort during sham exposure; it also increases sensitivity to tactile stimuli. In their study, Michael Witthöft and G. James Rubin (2013) concluded that "media reports about the adverse effects of supposedly hazardous substances can increase the likelihood of experiencing symptoms following sham exposure and developing an apparent sensitivity to it."

Symptoms claimed by EHS individuals are various and nonspecific. According to Marjukka Hagström and colleagues (2013), the most common symptoms are "stress" (60.3 percent), "sleeping disorders" (59.3 percent), and "fatigue" (57.2 percent). These symptoms are compatible with anxiety disorders, which can affect organs, glands, and the nervous system. In a cognitive approach, instead of considering anxiety (and all its associated potential symptoms) as a collateral consequence of any real electrosensitivity, all these symptoms could be considered as existing prior to the appearance of EHS syndrome (as a result of a biological vulnerability) and only recently becoming associated with the neutral stimulus (the alleged exposure to electromagnetic radiation).

It has been shown that reducing or avoiding electromagnetic radiation can help individuals suffering from EHS syndrome recover a better level of comfort (Hagström et al. 2013). Such avoidance behaviors and immediate well-being gains are typically observed in anxiety or panic disorders; alternatively, in the case of EHS individuals, some counter-phobic objects can be purchased, such as anti-radiation chips applied to mobile phones to allegedly reduce their emission level. Following such avoidance or defense behaviors, decreasing discomfort could act as confirmation that the source of discomfort was related to electromagnetic radiation. This would possibly reinforce the belief of EHS individuals in adverse health effects of electromagnetic radiation.

It is also well accepted that anxious individuals do allocate their attention to threat (Dalgleish and Watts 1990), which could be an evolutionary strategy to protect the organism (Ohman 1996). Considering EHS syndrome as an anxiety disorder, this attentional focus on physical perceptions, related to the belief that the subject is exposed and that this exposure can result in adverse health effects, could amplify and reinforce the belief in a real threat.

The feedback mechanism I explored here (reinforcement of belief by attentional focus on symptoms and good results of avoidance strategy) could make the EHS individuals enter a loop of fear whose main engine is confirmation bias.

Figure 2: Possible theoretical model of EHS syndrome considered as an anxiety disorder.

Conclusion

Science has failed to show a causal relationship between real exposure to electromagnetic radiation and EHS syndrome. There is, however, a growing accumulation of evidence that EHS syndrome is related to psychological factors. I have proposed that EHS syndrome could be a form of phobia explained by mechanisms occurring in other anxiety disorders. Biological vulnerability to anxiety—existing prior to the appearance of EHS syndrome—would cause people to confound symptoms of anxiety with a biological effect of electromagnetic radiation. Alleged exposure to electromagnetic radiation would play the role of neutral stimulus. Finally, confirmation bias—based on attentional focus and avoidance strategies—would push people still deeper into their belief.

Researchers should now stop (or at least reduce) their efforts to find a less-and-less plausible link between EHS syndrome and electromagnetic radiation and instead concentrate their work on understanding more accurately the mechanisms pushing people toward the wrong belief that domestic, low levels of electromagnetic radiation are deleterious. Researchers should find a way to help these people by decreasing anxiety and discomfort through adapted cognitive behavioral therapy techniques.


Brain Imaging With Magnetoencephalography During Rest and During Speech and Language Processing

Inverse Models for Reconstructing Brain Activity From Measurements

Inverse algorithms are used to solve the bioelectromagnetic inverse problem, i.e., estimating neural source model parameters from MEG and EEG measurements obtained outside the human head. Because the source distributions are inherently four-dimensional (three dimensions in space and one in time) and only a few measurements are made outside the head, estimation is ill-posed; in other words, there is no unique solution for a given set of measurements. To circumvent this problem of nonuniqueness, various estimation procedures incorporate prior knowledge and constraints about source characteristics, such as possible source locations, the source spatial extent, the total number of sources, or the source–frequency/time–frequency characteristics.

Inverse algorithms can be broadly classified into two categories: parametric dipole fitting and tomographic imaging methods. Parametric dipole fitting methods assume that a small set of current dipoles (usually two to five) can adequately represent some unknown source distribution. In this case, the dipole locations and moments form a set of unknown parameters, which are typically found using either a nonlinear least-squares fit, multiple signal classification algorithms, or maximum likelihood estimation methods (Mosher, Baillet, & Leahy, 1999). Parametric dipole fitting has been successfully used clinically for localization of early sensory responses in somatosensory and auditory cortices. Fig. 15.2 shows an example of parametric dipole localization in the context of auditory-evoked responses and shows that responses to early auditory peaks can often be localized to activity arising from a source located in the superior temporal plane, from the auditory cortex and its immediate environs. However, the localization of higher-order auditory cortical functions is not always consistent and reliable with these methods across paradigms or subjects.
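The structure of dipole fitting can be sketched with a toy one-dimensional example. The forward model below is invented for illustration (a point source with inverse-square falloff, nothing like a real MEG lead field), and the nonlinear search over location is replaced by a grid scan; the key property it shares with real dipole fitting is that, for each candidate location, the dipole moment has a closed-form linear least-squares solution, so only the location search is nonlinear.

```python
def dipole_fit(sensors, measurements, grid):
    """Toy single-dipole fit by grid scan over candidate locations.

    Illustrative forward model: a point source at location x with moment q
    produces b_i = q / (x - s_i)**2 at sensor position s_i. For each
    candidate x the optimal moment q is linear least squares in closed
    form; we keep the (x, q) pair with the smallest residual.
    """
    best = None
    for x in grid:
        if any(abs(x - s) < 1e-6 for s in sensors):
            continue  # skip candidates sitting exactly on a sensor
        gains = [1.0 / (x - s) ** 2 for s in sensors]
        # Closed-form least-squares moment for this candidate location.
        q = sum(g * b for g, b in zip(gains, measurements)) / sum(g * g for g in gains)
        residual = sum((b - q * g) ** 2 for g, b in zip(gains, measurements))
        if best is None or residual < best[2]:
            best = (x, q, residual)
    return best

sensors = [0.0, 1.0, 2.0, 3.0]
true_x, true_q = 1.4, 2.0
data = [true_q / (true_x - s) ** 2 for s in sensors]
x_hat, q_hat, res = dipole_fit(sensors, data, [i / 100 for i in range(40, 300)])
```

With noiseless data the scan recovers the generating location and moment; the local-minima and model-order problems discussed below arise only once multiple dipoles and noise are involved.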

Figure 15.2 . (A) Auditory-evoked responses to a train of tone pips occurring 200 ms apart. Blue waveforms correspond to the right hemisphere and the purple waveforms correspond to the left hemisphere. The magnetic field topography on the sensor array is shown as colored circles above for the first four peak responses. (B) Amplitude and latencies of the first four response peaks showing hemispheric similarities in latency and amplitudes. (C) Dipole localization of each of the four peaks shows activity arising from auditory cortex and its immediate environs.

From Hairston, I. S., & Nagarajan, S. S. (2007). Neural mechanisms of the time-order error: an MEG study. Journal of Cognitive Neuroscience, 19, 1163–1174.

Two major problems exist in dipole fitting procedures. First, because of the nonlinear optimization, there are problems of local minima when more than two dipoles' parameters are estimated. This is usually manifested as sensitivity to initialization, and some subjectivity is involved in evaluating the validity of solutions. Brute-force search methods have a huge computational burden—exponential in the number of parameters. A second, more difficult, problem is that parametric methods often require a priori knowledge of the number of dipoles. However, such information about model order is not always known in advance, especially for complex brain-mapping conditions. Although information- and decision-theoretic criteria have been proposed to address this problem, the success of these approaches is currently unclear, especially in real data sets. Whereas parametric dipole methods are ideal for point or focal sources, they perform poorly for distributed clusters of sources. Nevertheless, many studies to date using MEG have used dipole fitting procedures to make inferences about cortical activity.

Tomographic imaging is an alternative approach to the inverse problem. These methods impose constraints on source locations based on anatomical and physiological information that can be derived from other imaging modalities. Anatomical MRI provides excellent spatial resolution of head and brain anatomy, whereas fMRI techniques provide an alternative measure of neural activation based on associated hemodynamic changes. Because of the high degree of overlap in activity measured using multiple modalities, such information can be used to improve solutions to the inverse problem. If we assume that the dominant sources are the transmembrane and intracellular currents in the apical dendrites of the cortical pyramidal cells, the source image can be constrained to the cortex, which can be extracted from a registered volume MRI of the subject's head. Furthermore, the orientation of the cells normal to the cortical surface can be used to constrain the orientation of the cortical current sources. By tessellating the cortex into disjoint regions and representing the sources in each region by an equivalent current dipole oriented normal to the surface, the forward model relating the sources and the measurements can be written as a linear model with additive noise. Such a formulation transforms the inverse problem into a linear imaging method because it involves estimating electrical activity at discrete locations over a finely sampled reconstruction grid based on discrete measurements. This imaging problem, although linear, is also highly ill-posed because of the limited number of sensor measurements available compared with the number of elements in the tessellation grid.

Various solutions have been proposed for solving the tomographic imaging problem, and because there are many more unknowns to estimate simultaneously (source amplitudes and time courses) than there are sensor data, the problem is underdetermined.

Instead of simultaneously estimating all sources, a popular alternative is to scan the brain and estimate the source amplitude at each source location independently. It can be shown that such scanning methods are closely related to whole-brain tomographic methods, and the most popular scanning algorithms are adaptive spatial filtering techniques, more commonly referred to as "adaptive beamformers," or just "beamformers" (Sekihara & Nagarajan, 2008).

Adaptive beamformers are simple to implement, are powerful techniques for characterizing cortical oscillations, and are closely related to other tomographic imaging methods. However, one major problem with adaptive beamformers is that they are extremely sensitive to the presence of strongly correlated sources. Although they are robust to moderate correlations, in the case of auditory studies, because the auditory cortices are largely synchronous in their activity across the two hemispheres, these algorithms tend to perform poorly on auditory-evoked data sets without work-arounds (Fig. 15.5), and many modifications have been proposed for reducing the influence of correlated sources (Dalal, Sekihara, & Nagarajan, 2006). The simplest such work-around is to use the half of the sensors corresponding to each hemisphere separately, and this approach works surprisingly well for cross-hemispheric interactions. Other modifications to the original algorithms have been proposed in the literature, which require some knowledge about the location of the correlated source region (Dalal et al., 2006; Quraan & Cheyne, 2010).
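The core of an adaptive beamformer can be sketched in a few lines. For a candidate source location with lead-field vector l and sensor covariance R, the minimum-variance weights are w = R⁻¹l / (lᵀR⁻¹l), which pass the source at that location with unit gain while suppressing variance from everywhere else. The two-sensor numbers below are toys, and the inverse covariance is passed in precomputed to keep the sketch short.

```python
def beamformer_weights(R_inv, leadfield):
    """Minimum-variance adaptive beamformer weights for one candidate
    source location: w = R^{-1} l / (l^T R^{-1} l), where l is the
    lead-field vector of the location and R the sensor covariance.
    R_inv is the precomputed inverse covariance (toy 2-sensor example).
    """
    m = len(leadfield)
    # R^{-1} l
    Rl = [sum(R_inv[i][j] * leadfield[j] for j in range(m)) for i in range(m)]
    # l^T R^{-1} l (normalization enforcing unit gain toward the source)
    denom = sum(leadfield[i] * Rl[i] for i in range(m))
    return [v / denom for v in Rl]

# With identity covariance the weights reduce to l / (l^T l),
# so w^T l = 1 (unit gain) by construction.
w = beamformer_weights([[1.0, 0.0], [0.0, 1.0]], [0.6, 0.8])
```

The sensitivity to correlated sources noted above comes from the covariance term: when two sources covary strongly, suppressing variance from one also cancels signal from the other, which is why the hemisphere-splitting work-around helps for bilateral auditory activity.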

Many algorithms have also been proposed for simultaneous estimation of all source amplitudes, and such solutions require specification of prior knowledge about the sources, either implicitly or explicitly, in the form of probability distributions. In these cases, the solutions often require a Bayesian inference procedure estimating some aspect of the posterior distribution given the data and the priors. We showed that the many seemingly disparate algorithms for tomographic source imaging can be unified, and in some cases shown to be equivalent, using a hierarchical Bayesian modeling framework with a general form of prior distribution (called a Gaussian scale mixture) and two different types of inferential procedures (Wipf & Nagarajan, 2008). These insights allow for continued development of novel algorithms for tomographic imaging in relation to prior efforts in this enterprise. It has been shown that significant improvements in performance can be achieved by modern Bayesian inference methods that allow for accurate reconstructions of a large number of sources from typical configurations of MEG sensors (Wipf, Owen, Attias, Sekihara, & Nagarajan, 2010; Zumer, Attias, Sekihara, & Nagarajan, 2007, 2008). Fig. 15.3 shows source reconstructions of auditory-evoked responses using one such novel algorithm, called Champagne, as well as reconstructions from popular benchmark algorithms for comparison, which highlight inconsistent localizations, poorer spatial resolution, and sensitivity to correlated sources and noise.

Figure 15.3. Auditory-evoked field results for seven different subjects. Results from Champagne (CHAMP) are shown in the left-most column, and results from the benchmark algorithms (minimum variance adaptive beamformers (MVAB), sLORETA (SL/dSPM), and MCE) are shown in the other three columns. Only Champagne shows consistent localization of auditory cortices in both hemispheres for all subjects.


Response-time modeling of the effect of scanner environment

All three experiments displayed slowed responses as a result of the MRI environment, but only Experiment 3 showed increased error rates. Consequently, based on mean correct RTs and error rates, the most likely candidate explanation of these effects is the attentional focus account. Because the attentional focus account assumes that the stress induced by the MRI scanner diverts attention away from the imperative stimulus, responses slow down and become more error-prone. One reason why this accuracy effect is not observed in Experiments 1 and 2 might be that another mechanism counters the increase in erroneous responses. That is, if participants are less focused, they might respond to the potential behavioral deterioration with an increase in response caution. In terms of the LBA model, this means an increase in the threshold parameter.

This hypothesis is scrutinized using formal LBA modeling of the RT distributions for both correct and error responses, together with error rates. First, we present our general approach for obtaining the best set of model parameters (in terms of different ways of equating parameters across conditions). Next, we present the results of this analysis and their implications for the parameter estimates.

Methods

For each experiment, we defined the top model as the model in which all parameters were allowed to vary across sessions (Behavioral and MRI; some exceptions are discussed below). In addition, the top models only allowed those parameters to vary across other experimental conditions for which there is consensus in the literature. For example, to account for directional cueing in Experiment 1, we allowed the threshold parameter in the LBA to vary between response alternatives (cf. Forstmann et al., 2010). This parameter allows for a difference in the amount of evidence that is required for a response between the choice options, similar to the difference in prior probability that is induced in Experiment 1.

Using the top model, we generated a model hierarchy of simpler models (Heathcote & Love, 2012). That is, we generated all possible models that contained fewer free parameters by fixing a parameter across conditions. Thus, the simplest models that were fit estimate the same parameters of each type for all conditions. The best-fitting parameter values of the simplest models were used as initial guesses for more complex models. In this way, we reduced the impact of local minima in the parameter space. The models were fit using maximum likelihood (Donkin, Brown, & Heathcote, 2011).
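The hierarchy-generation step can be sketched as follows. The parameter names and the restriction to a single session factor are illustrative assumptions, not the authors' code; the point is only that every nested pattern of "free vs. fixed" is enumerated, and simpler models are fit first so their estimates can seed the more complex ones.

```python
from itertools import product

# Hypothetical parameter labels; the top model frees all of them
# across the Behavioral and MRI sessions.
PARAMS = ["v", "sv", "A", "b_minus_A", "t0"]

def model_hierarchy(params=PARAMS):
    """Enumerate every nested parameterization of the top model.

    Each model is a dict mapping a parameter to True (free to vary
    across sessions) or False (fixed across sessions). The all-False
    model is the simplest; the all-True model is the top model.
    """
    models = [dict(zip(params, flags))
              for flags in product([False, True], repeat=len(params))]
    # Sort so simple models are fit first, seeding the complex ones.
    models.sort(key=lambda m: sum(m.values()))
    return models

hierarchy = model_hierarchy()
# 5 parameter types -> 2**5 = 32 candidate models.
```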

The best model across participants was determined using AIC (Akaike, 1974). AIC is a measure that balances the fit of a model against its number of free parameters. The number of free parameters is a proxy for model complexity, as models with more parameters are inherently more flexible; because of this, in a model hierarchy more complex models necessarily fit better (e.g., Collyer, 1985; Myung & Pitt, 1997). Thus, AIC allows for inferences on which parameters should be kept free across experimental conditions and which should not. To determine the best model, we summed the AIC scores of individual participants and computed the AIC weights of each model (Wagenmakers & Farrell, 2004). These reflect the probability that a particular model is the best model of the set of fitted models (given the data).
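The AIC-weight computation (Wagenmakers & Farrell, 2004) is standard and can be sketched as below; the three summed AIC values are hypothetical.

```python
import numpy as np

def aic(log_lik, k):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def aic_weights(aics):
    """Akaike weights: w_i = exp(-dAIC_i / 2) / sum_j exp(-dAIC_j / 2),
    where dAIC_i = AIC_i - min(AIC). Each weight is interpretable as the
    probability that model i is the best of the candidate set, given
    the data."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical summed AICs for three candidate models:
weights = aic_weights([1000.0, 1004.0, 1010.0])
# The lowest-AIC model carries most of the weight; a 4-point AIC gap
# corresponds to a weight ratio of exp(2), roughly 7 to 1.
```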

In addition to AIC, we also computed the Bayesian information criterion (BIC; Schwarz, 1978). BIC is typically more conservative than AIC in allowing model complexity. This was also observed in our model comparisons; nevertheless, the results of the BIC analyses generally agreed with the AIC analyses. For this reason, we focus our analysis on the AIC-based results, and will only briefly mention any deviations according to BIC.
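Why BIC is the more conservative criterion follows directly from the two formulas: per free parameter, AIC charges a constant 2 while BIC charges ln(n), so for more than about e² ≈ 8 observations BIC penalizes complexity harder. A sketch, assuming n observations and a maximized log-likelihood ln L:

```python
import math

def aic(log_lik, k):
    """AIC = 2k - 2 ln L: constant penalty of 2 per free parameter."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """BIC = k ln(n) - 2 ln L: penalty grows with sample size n."""
    return k * math.log(n) - 2 * log_lik

# At equal fit (same ln L), BIC exceeds AIC whenever ln(n) > 2,
# i.e. for any realistic number of RT trials, so BIC favors the
# simpler model more often, as described in the text.
```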

We performed Bayesian analysis of variance—similar to the analyses on the behavioral data—on the parameters of the top model. This analysis allows us to infer whether a parameter was free because of differences due to the experimental manipulations, or whether individual differences between participants forced AIC model selection to allow a free parameter, even though the variance in the estimates is not systematic with respect to conditions. The parameter estimates used in this analysis were a weighted average of estimates from all of the different model parameterizations we fit, obtained using model averaging (Hoeting, Madigan, Raftery, & Volinsky, 1999; Raftery, 1995). We could have first selected one model as the true model for all participants, based on a criterion such as having the smallest total AIC, and analyzed the parameter estimates from only that model. However, this fails to take into account all of the available information (e.g., the total AIC might not be much larger for some other models, making selection of just one model questionable) and individual differences (e.g., different models may be better for different individuals). Instead, we used the AIC values for each model to weight (Wagenmakers & Farrell, 2004) the contribution of their parameters to the overall estimates used in the Bayesian analysis of variance. The resulting estimates were fairly close to those of the model with the best overall AIC but better take into account uncertainty in the model selection process.
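The model-averaging step can be sketched as follows, with hypothetical parameter estimates and AIC values; each model's estimates contribute in proportion to its Akaike weight, so model-selection uncertainty is retained rather than discarded.

```python
import numpy as np

def model_averaged_estimates(estimates, aics):
    """AIC-weighted model averaging of parameter estimates.

    estimates: (n_models, n_params) array, one row of fitted values
    per model parameterization. aics: AIC score of each model.
    Returns the weighted average across models, using Akaike weights.
    """
    estimates = np.asarray(estimates, dtype=float)
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()
    w = np.exp(-0.5 * delta)
    w /= w.sum()
    return w @ estimates

# Two hypothetical models estimating (threshold, t0):
est = model_averaged_estimates([[1.0, 0.20], [1.2, 0.24]], [500.0, 502.0])
# The result lies between the two models' estimates, pulled toward
# model 1 because its AIC is 2 points lower.
```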

Experiment 1: Choice bias

The top LBA model allowed all five model parameters to vary across sessions. Thus, the drift rate v, non-decision time t0, and b − A, as well as the variance parameters A and sv, were free to vary across sessions. We estimated the difference between the threshold and the upper bound of the start point distribution (b − A) to ensure that the threshold b could not be below the start point. Because the LBA estimates separate drift rates for correct and incorrect response accumulators, v and the drift rate variance sv were also allowed to vary with the type of response (that is, correct vs. incorrect). In addition, we allowed b − A to vary per accumulator as a function of the cue type (left vs. right), in line with previous models estimating bias effects (Forstmann et al., 2010).

Experiment 2: Speed-accuracy trade-off

As for Experiment 1, the top LBA model allowed all model parameters to vary across sessions. Also similarly, separate drift rate parameters (both mean drift rate v and the standard deviation of drift rate sv) were estimated for correct and incorrect responses. In addition, the threshold, drift rates, and non-decision time were all free to vary with the speed-accuracy manipulation. Many studies show that speed-accuracy trade-off behavior can be obtained through different threshold values (e.g., Boehm, Van Maanen, Forstmann, & Van Rijn, 2014; Forstmann et al., 2008; Mulder et al., 2010; Van Maanen et al., 2011; Winkel et al., 2012). However, some studies also show different parameter estimates for non-decision times (Heitz & Schall, 2012; Mulder et al., 2013) or drift rates (Dambacher & Hübner, 2014; Heitz & Schall, 2012; Ho et al., 2012; Mulder et al., 2010; Rae et al., 2014).

Experiment 3: Task difficulty

Again, all model parameters were free to vary across sessions. Because there was no credible effect of ITI (see Results), the top LBA model for Experiment 3 did not include the ITI as a factor on all parameters. Instead, we allowed only the threshold parameter to vary with the ITI. Based on the literature (Churchland et al., 2011; Donkin, Averell, Brown, & Heathcote, 2009; Ho, Brown, & Serences, 2009; Mulder et al., 2013; Palmer et al., 2005), we allowed drift rate (mean and standard deviation) to vary with coherence.

Results

Experiment 1: Choice bias

AIC model selection preferred the following model with an AIC weight of wAIC = 1.0: threshold was allowed to vary with the cue, drift rate was allowed to vary with session and response type (correct/incorrect), and non-decision time was allowed to vary with session only. Thus, these free parameters contributed to the explanation of the data, according to AIC. Footnote 2 Fig. 5 shows that this model indeed captures both accuracy and the RT distribution. Allowing a parameter to be free across conditions does not necessarily entail that it will also systematically differ across participants. To test this, we submitted each parameter to a Bayesian ANOVA. The factors that were included in the ANOVA model depended on the conditions for which a free parameter value was estimated, but in all cases included Session. Here, we focus the analysis on the likelihood that a parameter differed per Session. In particular, we focus on the threshold parameter b, the non-decision time parameter t0, and the sum and difference of the drift rates for correct and incorrect responses, vc + ve and vc − ve, respectively.

Fig. 5 The LBA model fits the data of Experiment 1 well. Upper panels: .1, .5, .9 RT quantiles for correct responses. Lower panels: error rates. The data as well as the model predictions are averaged across participants

These four parameters represent the four theoretical accounts that we focus on. Fig. 6 (left column) shows the mean parameter estimates across participants representing the four theoretical accounts. Table 2 shows Bayes factors for omitting the crucial Session factor from an ANOVA model. For Experiment 1, there is evidence that the non-decision time parameter t0 varied with Session, as the Bayes factor for including it in the ANOVA model is quite large. For the remaining parameters b, vc + ve, and vc − ve, the evidence for including the Session factor is inconclusive. Footnote 3 The threshold parameter did show evidence in favor of including the cue factor in the ANOVA model (BF = 1.3 × 10^11), consistent with previous studies that induced a bias in responding (Forstmann et al., 2010; Mulder et al., 2012). Also, the start point variability parameter (A) showed evidence in favor of a cue-related effect (BF = 681), indicating that variability due to the distance towards the threshold was smallest for neutral cues, and larger for both valid and invalid cues.

Fig. 6 LBA model parameters for the best models for Experiments 1–3. Left column: Experiment 1. Middle column: Experiment 2. Right column: Experiment 3. Top row: non-decision time t0, representing delayed response execution. Second row: threshold of the correct accumulator b1, representing response caution. Third row: sum of drift rates vc + ve, representing overall arousal. Bottom row: drift rate difference vc − ve, representing attentional focus. Error bars are within-subject standard errors of the mean

Based on the results of Experiment 1, the most likely explanation is that participants increase the amount of motor control they exert on responding while inside the MRI scanner. This is reflected by an increased non-decision time parameter, resulting in slower yet equally accurate responses.

Experiment 2: Speed-accuracy trade-off

AIC model selection on the LBA model hierarchy showed that, in order to best balance model complexity and fit across the whole set of participants, the threshold and non-decision time should be allowed to differ for the two sessions, as should the variability parameters A and sv (AIC weight of this model wAIC = 1.0). Also, the threshold and drift rate parameters should be allowed to vary as a function of the SAT manipulation, and drift rate (mean and standard deviation) should differ for correct and incorrect response accumulators. This model provided a good fit to the data (Fig. 7). Footnote 4 Fig. 6 (middle column) shows the mean parameter estimates across participants, and Table 2 again shows Bayes factors for omitting the Session factor from an ANOVA model. Omitting the Session factor from the ANOVA model yielded no conclusive evidence for any of the theoretical accounts.

Fig. 7 The LBA model fits the data of Experiment 2 well. Upper panels: .1, .5, .9 RT quantiles for correct responses. Lower panels: error rates. The data as well as the model predictions are averaged across participants

The SAT manipulation did influence the estimate of the threshold (BFSAT = 6.3 × 10^21), but again there was no interaction (all BFs < 0.20). This finding is in line with previous work reporting that SAT involves adjustments in response caution (e.g., Boehm et al., 2014; Forstmann et al., 2008; Mulder et al., 2010; Van Maanen et al., 2011; Winkel et al., 2012). However, in addition to a change in threshold, there were also effects of SAT on the drift rates. In particular, the drift rate difference, which according to AIC could only be affected by the SAT manipulation, was indeed higher on accuracy trials relative to speed trials (BFSAT = 1.2 × 10^19). This finding is in agreement with recent literature reporting similar results (Dambacher & Hübner, 2014; Heitz & Schall, 2012; Ho et al., 2012; Mulder et al., 2010; Rae et al., 2014). Finally, there is also evidence for including separate non-decision time parameters for the three SAT conditions (BFSAT = 40). Previous studies also report that non-decision time parameters may differ between speed and accuracy instructions (Heitz & Schall, 2012; Mulder et al., 2013).

Experiment 3: Task difficulty

The LBA model that was preferred by AIC was a model in which threshold varied with ITI and session, mean drift rate varied with session and coherence, and non-decision time varied with session only (wAIC = 1.0). As is common for LBA models, the mean drift rate as well as its standard deviation varied depending on whether a response was correct or not. Footnote 5 The fit of this model is generally quite good (Fig. 8). The mean parameter estimates are presented in the right column of Fig. 6.

Fig. 8 The LBA model fits the data of Experiment 3 well. Upper panels: .1, .5, .9 RT quantiles for correct responses. Lower panels: error rates. The data as well as the model predictions are averaged across participants

According to a Bayesian ANOVA, the non-decision time parameter was higher in the MRI session than in the behavioral session (BF = 6.2; see also Table 2). The threshold parameter did not differ between sessions or ITIs, nor was there an interaction (BFs < 0.32; see also Table 2). Neither did the sum of drift rates (all BFs < 0.22). The drift rate difference, however, varied with both the coherence and the Session (BFCoherence = 7.3 × 10^22; BFSession = 75). There was no evidence in favor of an interaction between these effects (BFCoherence×Session = 0.21).

The decrease in drift rate difference in the MRI session is consistent with an attentional focus explanation of the effects of scanner environment. That is, if attentional focus is reduced, the extraction of information from the stimulus is negatively affected, which is reflected in decreased drift rates.

The increase in drift rate difference with coherence is consistent with the literature (Churchland et al., 2011; Donkin et al., 2009; Ho et al., 2009; Mulder et al., 2013; Palmer et al., 2005). That is, as the motion direction becomes more recognizable, the extraction of information from the stimulus in favor of the correct alternative increases. Consequently, the difference in drift rates between the correct and the incorrect alternative increases as well.

In agreement with the RT and accuracy analyses, the model parameters did not differ between the first and the second session, either in isolation (all BFs < 0.75) or in interaction with the session type (all BFs < 1.14). Thus, the conclusion that there is no appreciable practice effect from the first to the second session seems justified. This result adds justification to the interpretation of Experiments 1 and 2, since it is likely that there, too, there is no large effect of practice across session orders.


Health risks linked to electromagnetic field exposure

A study of real-world exposure to non-ionizing radiation from magnetic fields in pregnant women found a significantly higher rate of miscarriage, providing new evidence regarding their potential health risks. The Kaiser Permanente study was published today in the journal Scientific Reports (Nature Publishing Group).

Non-ionizing radiation from magnetic fields is produced when electric devices are in use and electricity is flowing. It can be generated by a number of environmental sources, including electric appliances, power lines and transformers, wireless devices and wireless networks. Humans are exposed to magnetic fields via close proximity to these sources while they are in use.

While the health hazards from ionizing radiation are well-established and include radiation sickness, cancer and genetic damage, the evidence of health risks to humans from non-ionizing radiation remains limited, said De-Kun Li, MD, PhD, principal investigator of the study and a reproductive and perinatal epidemiologist at the Kaiser Permanente Division of Research in Oakland, California.

"Few studies have been able to accurately measure exposure to magnetic field non-ionizing radiation," Dr. Li said. "In addition, due to the current lack of research on this subject, we don't know the biological threshold beyond which problems may develop, and we also don't yet understand the possible mechanisms for increased risks."

In a new study funded by the National Institute of Environmental Health Sciences, researchers asked women over age 18 with confirmed pregnancies to wear a small (a bit larger than a deck of cards) magnetic-field monitoring device for 24 hours. Participants also kept a diary of their activities on that day, and were interviewed in person to better control for possible confounding factors, as well as how typical their activities were on the monitoring day. Researchers controlled for multiple variables known to influence the risk of miscarriage, including nausea/vomiting, past history of miscarriage, alcohol use, caffeine intake, and maternal fever and infections.

Objective magnetic field measurements and pregnancy outcomes were obtained for 913 pregnant women, all members of Kaiser Permanente Northern California. Miscarriage occurred in 10.4 percent of the women with the lowest measured exposure level (1st quartile) of magnetic field non-ionizing radiation on a typical day, and in 24.2 percent of the women with the higher measured exposure level (2nd, 3rd and 4th quartiles), a nearly three times higher relative risk. The rate of miscarriage reported in the general population is between 10 and 15 percent, Dr. Li said.

"This study provides evidence from a human population that magnetic field non-ionizing radiation could have adverse biological impacts on human health," he said.

Strengths of this study, Dr. Li noted, included that researchers used an objective measuring device and studied a short-term outcome (miscarriage) rather than one that will occur years or decades later, such as cancer or autoimmune diseases. The study's main limitation is that it was not feasible for researchers to ask participants to carry the measuring device throughout pregnancy.

Dr. Li noted that the potential health risk of magnetic-field non-ionizing radiation needs more research. "We hope that the finding from this study will stimulate much-needed additional studies into the potential environmental hazards to human health, including the health of pregnant women."


Electromagnetic Fields Affect Human Cells

Electromagnetic fields, similar to those found in overhead power lines, can have a biological effect on human cells, an effect that could contribute to the complex cellular process that leads to cancer, research at Michigan State University shows.

The work of James E. Trosko, a professor of pediatrics and human development, and colleagues is published in the October issue of Environmental Health Perspectives, the journal of the National Institute of Environmental Health Sciences.

"Our studies have contributed to what many other studies have shown, and that is that there is a biological effect of the energy imparted by extremely low frequency EMF (ELF-EMF) on living systems," Trosko said.

Until now, he said, most studies on the biological and health effects of ELF-EMF had been "inconclusive or contradictory."

"Until now, the weight of the theoretical and experimental evidence has suggested that ELF-EMF did not have the ability to interact with genetic material to damage it, thereby causing mutations, which we know can lead to cancer," Trosko said.

Trosko and his colleagues studied the effects of ELF-EMF on mouse leukemia cells that had the potential to mature into cells producing hemoglobin after exposure to a chemical. Hemoglobin is the substance which is needed to bind oxygen in blood. They found that electromagnetic fields of 60 hertz and of strengths ranging from .05 to 10 gauss interfered with the chemically induced maturation process in the mouse cells and allowed the cells to continue to proliferate.

After four days of exposure, about 35 percent of the chemically treated cells that were exposed to ELF-EMF showed these effects.

What Trosko and his colleagues found is that ELF/EMF is not a tumor initiator, but rather a potential tumor promoter.

"ELF-EMF doesn't seem to mutate genes, which could convert a normal cell to an 'initiated' cell," he said. "But it can turn them on and off at inappropriate times, causing these initiated cells to proliferate when normally they would just sit there quietly doing nothing."

"The whole point of our study was not to see if extremely low frequency EMF causes cancer, but if it changes gene expression," Trosko said. "The bottom line is we showed there is a biological effect of EMF as measured by altering the expression of the hemoglobin-producing gene.

"I think it's important to note that there is a distinction between a biological effect and a health effect. Just because I sit under a high power transmission line, and just because that exposure might alter some biological activity in my body, that doesn't automatically mean I'm going to get cancer. And even if I should get cancer, it does not mean ELF-EMF had anything to do with the production of that particular cancer."

Trosko pointed out that the process in which a cell changes from a regular, healthy cell to a cancerous one is long and complex, involving different molecular/biochemical steps.

"These initiated cells need promoting agents to bring about cancer," he said. "They could be natural, such as hormones or chemicals in the food we eat. Or they could be man-made chemicals, drugs or pollutants.

"Most importantly, in order to act as a tumor promoter, many conditions must be met, including the ability of the promoter to overcome natural suppressing effects on cell proliferation, timing of the exposure to the promoter, absence of anti-promoters, and exposure for regular and long periods of time."

Other members of Trosko's research team were Gang Chen, Brad L. Upham, Wei Sun, and Chia-Cheng Chang, all of the MSU Department of Pediatrics and Human Development; Edward J. Rothwell and Kun-Mu Chen of MSU's Department of Electrical Engineering; and Hiroshi Yamasaki of the International Agency for Research on Cancer in Lyon, France.

The work was funded by a grant from the Electrical Power Research Institute.

Story Source:

Materials provided by Michigan State University.


Background

Bipolar and major depressive disorder

Bipolar disorder and major depressive disorder (MDD) are mental disorders with a 12-month prevalence in the EU of about 1 and 7%, respectively [1]. They are associated with a reduced quality of life, an increased mortality risk, and are a major cause of inability to work [2,3].

The management of depression includes psychosocial treatment approaches, pharmacotherapy and, for the most severe and treatment resistant patients, electroconvulsive treatment (ECT).

Electroconvulsive treatment

The idea that convulsions could treat mental illness can be traced to the 16th century, when camphor oil was used to induce convulsions. Seizure induction by application of electrical current to the human brain was introduced by the Italians Cerletti and Bini in 1938 [4]. Since its introduction, ECT has been applied to various psychiatric and some somatic conditions. Modern ECT has fewer indications and has been developed with the aim of reducing side effects [5].

For some patients in a treatment-resistant depressive episode, ECT is the only treatment that is effective. ECT is generally considered to be safe and has shown efficacy in randomized controlled trials [6]. A recent randomized controlled trial found ECT to be more effective than pharmacological treatment for treatment-resistant bipolar depression [7]. However, the treatment is still controversial and stigmatized [8]. This can in part be attributed to our lack of knowledge, since its mechanisms of action are still largely unknown.

Some have compared ECT to lobotomy [9] or hypothesize that ECT affects the brain in a manner similar to severe stress or trauma [10]. Others regard it as a safe treatment that is underused [11], and a systematic review found no persistent cognitive deficits after ECT [12]. A recent randomized controlled trial of right unilateral ECT in treatment resistant depression found no changes in general neurocognitive function, but reduced autobiographical memory consistency after ECT [13]. This finding is in line with subjective patient reports [14], and further research is required.

The NICE guidelines state: “Consider ECT for acute treatment of severe depression that is life-threatening and when a rapid response is required, or when other treatments have failed” [15]. This is in line with the Norwegian national guidelines, which recommend ECT in major depression when other treatments have been ineffective (Evidence level A, [16]), and there has been an increase in its use in recent years [17].

Increased knowledge gained through thorough scientific investigations can reduce stigmata and inform patients and health care providers to make appropriate use of ECT. Better understanding of ECT and its mechanisms of action may help patients to cope with side effects and contribute to the development of new treatment options.

Possible mechanisms of action of ECT

More than one hundred theories have been suggested for the effects of ECT [5]. Although changes to brain structure in major depression have been confirmed by several meta-analyses [18-20] and ECT-induced structural and functional changes have been characterized (for recent reviews see [21-23]), we still lack a unifying theory for its mechanisms of action. The project will focus on three suggested effects of ECT, each reflecting proposed pathophysiological changes and mechanisms of action, see below. As a multidisciplinary study, results from the neuroradiological measures can be correlated with biomarkers in blood and behavioral parameters; e.g., improvement/remission after ECT should be correlated with improved performance on neuropsychological testing. For dichotic listening, improved scores in the forced left condition would indicate better cognitive control.

Hippocampal volume

The human nervous system adapts to challenges. It can be changed by learning as well as by pathological conditions, such as psychiatric disorders. One structure that has been studied in great detail in this regard is the hippocampus, a structure that is important for learning and memory. Hippocampal volumes are reduced in major depressive disorder [19,24,25] and in a number of other psychiatric and somatic disorders (reviewed in [26]). The volume reduction of the hippocampus has been associated with the duration of untreated depression [27]. On the other hand, increased hippocampal volumes can occur after extensive learning, e.g. studying to become one of London’s taxi drivers [28,29]. The increase in hippocampal volumes may be related to neurogenesis, which has been shown to occur in animal models [30,31]. In primates, the proliferation of granular cells in the dentate gyrus of the hippocampus was shown to be reduced by stress [30]. Seizures induce neurogenesis in rodents [32], and animal models have shown electroconvulsive seizures to have effects on neurotransmitters, gene expression, growth factors (such as brain-derived neurotrophic factor, BDNF; vascular endothelial growth factor, VEGF; fibroblast growth factor, FGF) and neuropeptides (such as neuropeptide Y, NPY; thyrotropin-releasing hormone, TRH; VGF), and to lead to synaptic remodeling and cellular proliferation (reviewed in [33]). Research from animal models also indicates that electroconvulsive shocks can reverse the effect of cortisol and even cause an increase in hippocampal volumes (reviewed in [5]).

Increased levels of BDNF have been reported following ECT [34], and BDNF has been suggested as a potential biomarker for depression [35]. Neurogenesis has been shown to occur in the dentate gyrus of the hippocampus in adult humans [36], and ECT-induced neuroplasticity is gaining more focus as a framework for understanding the effects of ECT [23]. A few studies of humans have reported increased hippocampal volumes and/or other structural changes following ECT [37-41]. Nordanskog et al. [39,41] performed manual segmentation without complete blinding of the MRI time point that was traced (before or after ECT), introducing a potential observer bias. Dukart et al. [38] used voxel-based morphometry, while Tendolkar et al. [40] and Abbot et al. [37] both used FreeSurfer [42] for volumetric segmentation and analysis. Compared with these studies, we will recruit more patients and use state-of-the-art automatic segmentation procedures, and radiology readers will be blinded to study group and the time point of MRI scans. In addition, by applying multimodal imaging, structural changes can be assessed with respect to changes in diffusion properties, susceptibility weighted imaging (SWI) and fluid attenuated inversion recovery (FLAIR) imaging. Our design will allow longitudinal tracking of brain changes 1-2 hours after the first ECT, after treatment completion, and at 6-month follow-up.

Gamma-Aminobutyric acid (GABA) and connectivity

ECT has anticonvulsive effects and is sometimes used in the treatment of status epilepticus [43]. One hypothesis suggests that the magnitude of the increase in seizure threshold induced by ECT is important for the antidepressant efficacy [44]. Drugs that enhance GABAergic neurotransmission are known for their anticonvulsant effect, and the role of amino acid neurotransmission systems, particularly reduced function of GABAergic neurotransmission, has been increasingly appreciated in major depression (for reviews, see [45,46]). Tiagabine, a selective GABA reuptake inhibitor, was shown to be effective in the treatment of depression with anxiety [47]. A post mortem study of gene expression in elderly depressed patients found alterations in GABA and glutamate pathway markers indicating diminished activity in the anterior cingulate cortex (ACC) [48]. A recent meta-analysis suggested increased resting-state activity in the rostral ACC as a biomarker for treatment response in major depression, and a shift from GABA- to glutamate-mediated modulation was suggested [49].

Interestingly, one early study found increased concentrations of cortical GABA after ECT in depressed patients by use of proton magnetic resonance spectroscopy (1H-MRS) [50]; however, this finding has, to our knowledge, not been reproduced by other groups.

It has recently been suggested that “hyperconnectivity” in networks involved in mood regulation can be reduced after a course of ECT [51,52]. This finding may seem contrary to findings of increased fractional anisotropy (FA) in frontal limbic projections after a course of ECT [53,54]. FA is often regarded as a measure of white matter tract integrity, and increased axonal integrity may seem counterintuitive if one expects reduced connectivity after ECT. Possible explanations could be that the projections that are “enhanced” by ECT are GABAergic, or that improved integrity of certain projections may lead to more coordinated electrical activity in these projections, which overall is detected as “reduced connectivity”.

Our project will encompass measures of neurotransmitters (GABA and glutamate by 1H-MRS) and diffusion parameters (e.g., fractional anisotropy, mean diffusivity, and separation of restricted and hindered water by Restriction Spectrum Imaging, RSI [55]), enabling longitudinal investigations of GABA and glutamate levels as well as white matter properties in the same patients.

ECT and harmful effects

ECT has been a controversial treatment from its introduction. The most important side effects are related to memory impairment [6], and it is recommended that patients’ cognitive functioning be monitored both during and after treatment [16]. A recent randomized controlled trial in treatment-resistant bipolar depression found reduced autobiographical memory consistency after ECT but no deterioration of general neurocognitive function [13]. Structural damage to the human brain has, to our knowledge, never been documented to be caused by ECT. Case studies with rare complications, such as subdural hematoma, have been published [56]; however, a study using cerebral computed tomography in 40 patients before and after ECT detected no changes caused by ECT, even with convulsions lasting several minutes [57]. Both conventional MRI and diffusion weighted imaging (DWI, a sequence that is sensitive to edema) have failed to find structural damage [58]. However, changes on DWI have been shown for patients after status epilepticus [59]. If brain injury occurs as a consequence of ECT, one may expect to find microhemorrhages. SWI is extremely sensitive to hemorrhages and is routinely used in imaging of stroke [60]. However, SWI, as an indicator of microvascular disintegrity, has to our knowledge never been applied after ECT. In addition, by using 1H-MRS we will measure N-acetylaspartate (NAA), which is primarily localized in neurons and considered a marker of neuronal integrity [61].

Our project will use high field strength, state of the art MRI and combine RSI, SWI and 1 H-MRS which should enable detection of more subtle post-ECT effects.

Hypotheses and aims

Based on the above discussions, the following main hypotheses define the outline and aims of this project:

Hippocampal volumes increase after ECT treatment. A) Specifically there is increased volume of the dentate gyrus, which would suggest that the increase is caused by neurogenesis. B) Changes in hippocampal structure correlate with treatment response, neurocognitive measures and increased concentrations of neurotrophic factors in blood samples.

ECT causes increased levels of the neurotransmitter GABA and changes the glutamate/GABA balance. A) GABA concentrations correlate with treatment response. B) Genes regulating GABA synthesis and cycling are up-regulated or activated. C) A subset of cortical projections is strengthened a possible mechanism causing reduced connectivity in frontal areas.

ECT does not cause measurable signs of harmful effects to the brain. A) No changes are detected on microvascular (SWI) and microstructural (RSI) imaging. NAA (measurend in the ACC) is unaffected. B) Possible immediate post-ECT effects, e.g. edema, that is detectable by diffusion weighted imaging, are reversible.

Methods and design

The study is prospective and observational, and all patients will receive the standard ECT treatment, as it is provided at the ECT-department at the Haukeland University Hospital.

A flow chart of the study design is shown in Figure 1, and details on study measures and variables are listed in Table 1.


Research With Human Volunteers

  • A method called “microdosing” can provide vital information on the safety of an experimental drug and how it is metabolized in humans prior to large-scale human trials. Volunteers are given an extremely small one-time drug dose, and sophisticated imaging techniques are used to monitor how the drug behaves in the body. Microdosing can replace certain tests on animals and help screen out drug compounds that won’t work in humans so that they are never tested in animals.
  • Advanced brain imaging and recording techniques—such as functional magnetic resonance imaging (fMRI)—with human volunteers can be used to replace archaic experiments in which rats, cats, and monkeys have their brains damaged. These modern techniques allow the human brain to be safely studied down to the level of a single neuron (as in the case of intracranial electroencephalography), and researchers can even temporarily and reversibly induce brain disorders using transcranial magnetic stimulation.

Have the effects of MRIs and other electromagnetic devices on human psychology been studied?

This article presents a systematic review of published scientific studies on the potential ecological effects of radiofrequency electromagnetic fields (RF-EMF) in the range of 10 MHz to 3.6 GHz (from amplitude modulation, AM, to lower band microwave, MW, EMF).

Methods

Publications in English were searched in ISI Web of Knowledge and Google Scholar with no restriction on publication date. Five species groups were identified: birds, insects, other vertebrates, other organisms, and plants. We considered not only clearly ecological articles, such as field studies, but also laboratory studies investigating the effects of RF-EMF on biological endpoints with clear ecological significance, such as fertility, reproduction, behaviour and development.

Results

Information was collected from 113 original peer-reviewed publications or relevant existing reviews. Only a limited number of ecological field studies were identified. The majority of the studies were conducted in a laboratory setting on birds (embryos or eggs), small rodents and plants. In 65% of the studies, ecological effects of RF-EMF were found (in 50% of the animal studies and about 75% of the plant studies), at both high and low dosages. No clear dose–effect relationship could be discerned. Studies finding an effect applied longer durations of exposure and focused more on the GSM frequency ranges.

Conclusions

In about two thirds of the reviewed studies, ecological effects of RF-EMF were reported at both high and low dosages. The very low dosages are compatible with real field situations and could be encountered under environmental conditions. However, a lack of standardisation and a limited number of observations limit the possibility of generalising results from the organism to the ecosystem level. We propose that future studies conduct more repetitions of observations and explicitly use the available standards for reporting RF-EMF-relevant physical parameters in both laboratory and field studies.

Highlights

► Scientific literature was screened for articles on ecological effects of RF-EMF. ► RF-EMF had a significant effect on birds, insects, other vertebrates, other organisms and plants in 70% of the studies. ► Development and reproduction of birds and insects are the most strongly affected endpoints. ► There is a lack of field and ecological studies on populations and the interactions of species. ► There is an urgent need for repetitions of studies finding effects and for investigations into effects on ecosystems.


Response-time modeling of the effect of scanner environment

All three experiments displayed slowed responses as a result of the MRI environment, but only Experiment 3 showed increased error rates. Consequently, based on mean correct RTs and error rates, the most likely candidate explanation of these effects is the attentional focus account. The attentional focus account assumes that the stress induced by the MRI scanner diverts attention away from the imperative stimulus, so that responses slow down and become more error-prone. One reason why this accuracy effect is not observed in Experiments 1 and 2 might be that another mechanism counters the increase in erroneous responses. That is, if participants are less focused, they might respond to the potential behavioral deterioration with an increase in response caution. In terms of the LBA model, this means an increase in the threshold parameter.

This hypothesis is scrutinized using formal LBA modeling of the RT distributions for both correct and error responses, together with error rates. First, we present our general approach for obtaining the best set of model parameters (in terms of different ways of equating parameters across conditions). Next, we present the results of this analysis and their implications for the parameter estimates.

Methods

For each experiment, we defined the top model as the model in which all parameters were allowed to vary across sessions (Behavioral and MRI; some exceptions are discussed below). In addition, the top models only allowed those parameters to vary across other experimental conditions for which there is consensus in the literature. For example, to account for directional cueing in Experiment 1 we allowed the threshold parameter in LBA to vary between response alternatives (cf. Forstmann et al., 2010). This parameter allows for a difference in the amount of evidence that is required for a response between the choice options, similar to the difference in prior probability that is induced in Experiment 1.

Using the top model, we generated a model hierarchy of simpler models (Heathcote & Love, 2012). That is, we generated all possible models that contained fewer free parameters by fixing a parameter across conditions. Thus, the simplest models that were fit estimate the same parameters of each type for all conditions. The best fitting parameter values of the simplest models were used as initial guesses for more complex models. This way, we reduce the impact of local minima in the parameter space. The models were fit using maximum likelihood (Donkin, Brown, & Heathcote, 2011).
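The hierarchy-generation step described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: the parameter names are a generic LBA set, and each model is represented simply by which parameters are free to vary across conditions.

```python
from itertools import product

# Illustrative LBA parameter names (drift rate, threshold, start-point
# range, drift variability, non-decision time); the paper's top models
# used a similar set.
PARAMS = ["v", "b", "A", "sv", "t0"]

def model_hierarchy(params):
    """Enumerate every model in which each parameter is either fixed
    across conditions (False) or free to vary (True)."""
    for flags in product([False, True], repeat=len(params)):
        yield dict(zip(params, flags))

models = list(model_hierarchy(PARAMS))
# 2^5 = 32 models, ranging from "everything fixed" (the simplest
# model) up to the top model with all parameters free.
simplest = min(models, key=lambda m: sum(m.values()))
```

In the fitting scheme described in the text, the simplest of these models would be fit first, and its best-fitting values would seed the starting points for progressively more complex models, reducing the impact of local minima.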

The best model across participants was determined using AIC (Akaike, 1974). AIC is a measure that balances the fit of a model with the number of free parameters. The number of free parameters is a proxy for model complexity, as models with more parameters are inherently more flexible. Because of this, in a model hierarchy more complex models necessarily fit better (e.g., Collyer, 1985; Myung & Pitt, 1997). Thus, AIC allows for inferences on which parameters should be kept free across experimental conditions and which should not. To determine the best model, we summed the AIC scores of individual participants, and computed the AIC weights of each model (Wagenmakers & Farrell, 2004). These weights reflect the probability that a particular model is the best model of the set of fitted models (given the data).
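The AIC and the Akaike weights of Wagenmakers and Farrell (2004) are simple enough to state directly. A minimal Python sketch, using made-up AIC values for three hypothetical candidate models:

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike (1974): AIC = 2k - 2 ln L, penalizing free parameters."""
    return 2 * n_params - 2 * log_likelihood

def aic_weights(aics):
    """Akaike weights: the probability that each candidate model is the
    best of the fitted set, given the data (Wagenmakers & Farrell, 2004)."""
    delta = np.asarray(aics, float) - np.min(aics)  # distance from best model
    rel_likelihood = np.exp(-0.5 * delta)
    return rel_likelihood / rel_likelihood.sum()

# Hypothetical summed AICs for three nested models; the middle model
# (lowest AIC) should receive the bulk of the weight.
w = aic_weights([1520.3, 1504.8, 1506.1])
```

The same `delta`/`exp` scheme applies to BIC, which differs from AIC only in replacing the `2k` penalty with `k ln n`.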

In addition to AIC, we also computed the Bayesian Information Criterion (BIC; Schwarz, 1978). BIC is typically more conservative than AIC in allowing model complexity. This was also observed in our model comparisons; nevertheless, the results of the BIC analyses generally agreed with the AIC analyses. For this reason, we focus our analysis on the AIC-based results, and will only briefly mention any deviations from these according to BIC.

We performed Bayesian analysis of variance—similar to the analyses on the behavioral data—on the parameters of the top model. This analysis allows us to infer whether a parameter was free because of differences due to the experimental manipulations, or whether individual differences between participants forced AIC model selection to allow a free parameter, even though the variance in the estimates is not systematic with respect to conditions. The parameter estimates used in this analysis were a weighted average of estimates from all of the different model parameterizations we fit, obtained using model averaging (Hoeting, Madigan, Raftery, & Volinsky, 1999; Raftery, 1995). We could have first selected one model as the true model for all participants, based on a criterion like having the smallest total AIC, and analyzed the parameter estimates from only that model. However, this fails to take account of all of the available information (e.g., the total AIC might not be much larger for some other models, making selection of just one model questionable) and individual differences (e.g., different models may be better for different individuals). Instead, we used the AIC values for each model to weight (Wagenmakers & Farrell, 2004) the contribution of their parameters to the overall estimates used in the Bayesian analysis of variance. The resulting estimates were fairly close to those of the model with the best overall AIC but better take into account uncertainty in the model selection process.
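The model-averaging step amounts to a weighted mean of each parameter's per-model estimates, with Akaike weights computed per participant. A minimal sketch with invented numbers (the estimates and AICs below are illustrative, not data from the study):

```python
import numpy as np

def akaike_weights(aics):
    """Per-participant Akaike weights from a list of model AICs."""
    delta = np.asarray(aics, float) - np.min(aics)
    rel = np.exp(-0.5 * delta)
    return rel / rel.sum()

def model_averaged_estimate(estimates, aics):
    """Average one parameter across models, weighting each model's
    estimate by that model's Akaike weight for this participant."""
    return float(akaike_weights(aics) @ np.asarray(estimates, float))

# Hypothetical threshold estimates for one participant from three models
estimates = [1.10, 1.25, 1.18]
aics = [820.4, 812.9, 815.5]
b_avg = model_averaged_estimate(estimates, aics)
```

The averaged value lies close to the estimate of the best model (lowest AIC) but is pulled toward competitors in proportion to how nearly they tie, which is exactly the model-selection uncertainty the text describes.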

Experiment 1: Choice bias

The top LBA model allowed all five model parameters to vary across sessions. Thus, the drift rate v, the non-decision time t0, and b − A, as well as the variance parameters A and sv, were free to vary across sessions. We estimated the difference between the threshold and the upper bound of the start point distribution (b − A) to ensure that the threshold b could not be below the start point. Because LBA estimates separate drift rates for correct and incorrect response accumulators, v and the drift rate variance sv were also allowed to vary with the type of response (that is, correct vs. incorrect). In addition, we allowed b − A to vary per accumulator as a function of the cue type (left vs. right), in line with previous models estimating bias effects (Forstmann et al., 2010).
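A small forward simulation makes the roles of these parameters concrete. This is a generic LBA trial simulator, not the fitting code used in the study, and the parameter values are invented for illustration:

```python
import numpy as np

def lba_trial(rng, v, b, A, sv, t0):
    """Simulate one LBA trial: each accumulator starts uniformly in
    [0, A] and races linearly toward threshold b with a drift rate
    drawn from N(v_i, sv); the first to arrive gives the response."""
    v = np.asarray(v, float)
    start = rng.uniform(0.0, A, size=v.shape)
    drift = rng.normal(v, sv)
    # Accumulators with non-positive drift never reach the threshold
    time = np.where(drift > 0, (b - start) / drift, np.inf)
    winner = int(np.argmin(time))
    return winner, t0 + float(np.min(time))

rng = np.random.default_rng(2025)
# Invented parameters; accumulator 0 plays the "correct" response
results = [lba_trial(rng, v=[1.2, 0.4], b=1.0, A=0.5, sv=0.3, t0=0.2)
           for _ in range(2000)]
accuracy = float(np.mean([c == 0 for c, _ in results]))
```

Raising b (or b − A) slows responses but reduces errors (response caution); raising t0 shifts the whole RT distribution without affecting accuracy; lowering the drift-rate difference between accumulators produces slower, more error-prone responses, which is how the model separates the theoretical accounts discussed above.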

Experiment 2: Speed-accuracy trade-off

As for Experiment 1, the top LBA model allowed all model parameters to vary across sessions. Also similarly, separate drift rate parameters (both the mean drift rate v and the standard deviation of drift rate sv) were estimated for correct and incorrect responses. In addition, the threshold, drift rates, and non-decision time were all free to vary with the speed-accuracy manipulation. Many studies show that speed-accuracy trade-off behavior can be obtained through different threshold values (e.g., Boehm, Van Maanen, Forstmann, & Van Rijn, 2014; Forstmann et al., 2008; Mulder et al., 2010; Van Maanen et al., 2011; Winkel et al., 2012). However, some studies also show different parameter estimates for non-decision times (Heitz & Schall, 2012; Mulder et al., 2013) or drift rates (Dambacher & Hübner, 2014; Heitz & Schall, 2012; Ho et al., 2012; Mulder et al., 2010; Rae et al., 2014).

Experiment 3: Task difficulty

Again, all model parameters were free to vary across sessions. Because there was no credible effect of ITI (see Results), the top LBA model for Experiment 3 did not include the ITI as a factor on all parameters. Instead, we allowed only the threshold parameter to vary with the ITI. Based on the literature (Churchland et al., 2011; Donkin, Averell, Brown, & Heathcote, 2009; Ho, Brown, & Serences, 2009; Mulder et al., 2013; Palmer et al., 2005), we allowed drift rate (mean and standard deviation) to vary with coherence.

Results

Experiment 1: Choice bias

AIC model selection preferred the following model with an AIC weight of wAIC = 1.0: threshold was allowed to vary with the cue, drift rate was allowed to vary with session and response type (correct/incorrect), and non-decision time was allowed to vary with session only. Thus, these free parameters contributed to the explanation of the data, according to AIC (Footnote 2). Fig. 5 shows that this model indeed captures both accuracy and the RT distribution. Allowing a parameter to be free across conditions does not necessarily entail that it will also systematically differ across participants. To test this, we submitted each parameter to a Bayesian ANOVA. The factors that were included in the ANOVA model depended on the conditions for which a free parameter value was estimated, but in all cases included the sessions. Here, we focus the analysis on the likelihood that a parameter differed per Session. In particular, we focus on the threshold parameter b, the non-decision time parameter t0, and the sum and difference of the drift rates for correct and incorrect responses, vc + ve and vc − ve, respectively.

The LBA model fits the data of Experiment 1 well. Upper panels: .1, .5, .9 RT quantiles for correct responses. Lower panels: error rates. The data as well as the model predictions are averaged across participants

These four parameters represent the four theoretical accounts that we focus on. Fig. 6 (left column) shows the mean parameter estimates across participants representing the four theoretical accounts. Table 2 shows Bayes factors for omitting the crucial Session factor from an ANOVA model. For Experiment 1, there is evidence that the non-decision time parameter t0 varied with Session, as the Bayes factor for including it in the ANOVA model is quite large. For the remaining parameters b, vc + ve and vc − ve, the evidence for including the Session factor is inconclusive (Footnote 3). The threshold parameter did show evidence in favor of including the cue factor in the ANOVA model (BF = 1.3 × 10^11), consistent with previous studies that induced a bias in responding (Forstmann et al., 2010; Mulder et al., 2012). Also, the start point variability parameter (A) showed evidence in favor of a cue-related effect (BF = 681), indicating that variability due to the distance towards the threshold was smallest for neutral cues, and larger for both valid and invalid cues.

LBA model parameters for the best models for Experiments 1–3. Left column: Experiment 1; middle column: Experiment 2; right column: Experiment 3. Top row: non-decision time t0, representing delayed response execution; second row: threshold of the correct accumulator b1, representing response caution; third row: sum of drift rates vc + ve, representing overall arousal; bottom row: drift rate difference vc − ve, representing attentional focus. Error bars are within-subject standard errors of the mean

Based on the results of Experiment 1, the most likely explanation is that participants increase the amount of motor control they exert on responding while inside the MRI scanner. This is reflected by an increased non-decision time parameter, resulting in slower yet equally accurate responses.

Experiment 2: Speed-accuracy trade-off

AIC model selection on the LBA model hierarchy showed that, in order to best balance model complexity and fit across the whole set of participants, the threshold and non-decision time, as well as the variability parameters A and sv, should be allowed to differ for the two sessions (AIC weight of this model wAIC = 1.0). Also, the threshold and drift rate parameters should be allowed to vary as a function of the SAT manipulation, and drift rate (mean and standard deviation) should differ for correct and incorrect response accumulators. This model provided a good fit to the data (Fig. 7; Footnote 4). Fig. 6 (middle column) shows the mean parameter estimates across participants, and Table 2 again shows Bayes factors for omitting the Session factor from an ANOVA model. Omitting the Session factor from the ANOVA model yielded no conclusive evidence for any of the theoretical accounts.

The LBA model fits the data of Experiment 2 well. Upper panels: .1, .5, .9 RT quantiles for correct responses. Lower panels: error rates. The data as well as the model predictions are averaged across participants

The SAT manipulation did influence the estimate of the threshold (BF SAT = 6.3 × 10^21), but again there was no interaction (all BFs < 0.20). This finding is in line with previous work reporting that SAT involves adjustments in response caution (e.g., Boehm et al., 2014; Forstmann et al., 2008; Mulder et al., 2010; Van Maanen et al., 2011; Winkel et al., 2012). However, in addition to a change in threshold, there were also effects of SAT on the drift rates. In particular, the drift rate difference, which according to AIC could only be affected by the SAT manipulation, was indeed higher on accuracy trials relative to speed trials (BF SAT = 1.2 × 10^19). This finding is in agreement with recent literature reporting similar results (Dambacher & Hübner, 2014; Heitz & Schall, 2012; Ho et al., 2012; Mulder et al., 2010; Rae et al., 2014). Finally, there is also evidence for including separate non-decision time parameters for the three SAT conditions (BF SAT = 40). Previous studies also report that non-decision time parameters may differ between speed and accuracy instructions (Heitz & Schall, 2012; Mulder et al., 2013).

Experiment 3: Task difficulty

The LBA model that was preferred by AIC was a model in which the threshold varied with ITI and session, the mean drift rate varied with session and coherence, and the non-decision time varied with session only (wAIC = 1.0). As is common for LBA models, the mean drift rate as well as its standard deviation varied depending on whether a response was correct or not (Footnote 5). The fit of this model is generally quite good (Fig. 8). The mean parameter estimates are presented in the right column of Fig. 6.

The LBA model fits the data of Experiment 3 well. Upper panels: .1, .5, .9 RT quantiles for correct responses. Lower panels: error rates. The data as well as the model predictions are averaged across participants

According to a Bayesian ANOVA, the non-decision time parameter was higher in the MRI session than in the behavioral session (BF = 6.2, see also Table 2). The threshold parameter did not differ between the sessions, the ITIs, or their interaction (BFs < 0.32, see also Table 2). Neither did the sum of drift rates (all BFs < 0.22). The drift rate difference, however, increased with both the coherence and the Session (BF Coherence = 7.3 × 10^22; BF Session = 75). There was no evidence in favor of an interaction between these effects (BF Coherence × Session = 0.21).

The decrease in drift rate difference that we found is consistent with an attentional focus explanation of the effects of the scanner environment. That is, if attentional focus is reduced, the extraction of information from the stimulus is negatively affected, which is reflected in decreased drift rates.

The increase in drift rate difference with coherence is consistent with the literature (Churchland et al., 2011; Donkin et al., 2009; Ho et al., 2009; Mulder et al., 2013; Palmer et al., 2005). That is, as the motion direction becomes more recognizable, the extraction of information from the stimulus in favor of the correct alternative increases. Consequently, the difference in drift rates between the correct and the incorrect alternative increases as well.

In agreement with the RT and accuracy analyses, the model parameters did not differ between the first and the second session, neither in isolation (all BFs < 0.75) nor in interaction with the session type (all BFs < 1.14). Thus, the conclusion that there is no appreciable practice effect from the first to the second session seems justified. This result also supports the interpretation of Experiments 1 and 2, since it is likely that practice had no large effect across session orders there either.


4. Conclusion

De Flora et al. wrote the following: “Since the late 20th century, chronic degenerative diseases have overcome infectious disease as the major causes of death in the 21st century, so an increase in human longevity will depend on finding an intervention that inhibits the development of these diseases and slows their progress” [33].

Could such an intervention be located right beneath our feet? Earthing research, observations, and related theories raise an intriguing possibility about the Earth's surface electrons as an untapped health resource—the Earth as a “global treatment table.” Emerging evidence shows that contact with the Earth—whether being outside barefoot or indoors connected to grounded conductive systems—may be a simple, natural, and yet profoundly effective environmental strategy against chronic stress, ANS dysfunction, inflammation, pain, poor sleep, disturbed HRV, hypercoagulable blood, and many common health disorders, including cardiovascular disease. The research done to date supports the concept that grounding or earthing the human body may be an essential element in the health equation along with sunshine, clean air and water, nutritious food, and physical activity.


Background

Bipolar and major depressive disorder

Bipolar disorder and major depressive disorder (MDD) are mental disorders with a 12-month prevalence in the EU of about 1% and 7%, respectively [1]. They are associated with a reduced quality of life and an increased mortality risk, and are a major cause of inability to work [2,3].

The management of depression includes psychosocial treatment approaches, pharmacotherapy and, for the most severe and treatment resistant patients, electroconvulsive treatment (ECT).

Electroconvulsive treatment

The idea that convulsions could treat mental illness can be traced to the 16th century, when camphor oil was used to induce convulsions. Seizure induction by application of electrical current to the human brain was introduced by the Italians Cerletti and Bini in 1938 [4]. Since its introduction, ECT has been applied to various psychiatric and some somatic conditions. Modern ECT has fewer indications and has been developed with the aim of reducing side effects [5].

For some patients in a treatment-resistant depressive episode, ECT is the only treatment that is effective. ECT is generally considered to be safe and has shown efficacy in randomized controlled trials [6]. A recent randomized controlled trial found ECT to be more effective than pharmacological treatment for treatment-resistant bipolar depression [7]. However, the treatment is still controversial and stigmatized [8]. This can in part be attributed to our lack of knowledge, since the mechanisms of action are still largely unknown.

Some have compared ECT to lobotomy [9] or hypothesize that ECT affects the brain in a manner similar to severe stress or trauma [10]. Others regard it as a safe treatment that is underused [11], and a systematic review found no persistent cognitive deficits after ECT [12]. A recent randomized controlled trial of right unilateral ECT in treatment resistant depression found no changes in general neurocognitive function, but reduced autobiographical memory consistency after ECT [13]. This finding is in line with subjective patient reports [14], and further research is required.

The NICE guidelines state: “Consider ECT for acute treatment of severe depression that is life-threatening and when a rapid response is required, or when other treatments have failed” [15]. This is in line with the Norwegian national guidelines, which recommend ECT in major depression when other treatments have been ineffective (Evidence level A, [16]), and there has been an increase in its use in recent years [17].

Increased knowledge gained through thorough scientific investigation can reduce stigma and inform patients and health care providers so that they can make appropriate use of ECT. A better understanding of ECT and its mechanisms of action may help patients to cope with side effects and contribute to the development of new treatment options.

Possible mechanisms of action of ECT

More than one hundred theories have been suggested for the effects of ECT [5]. Although changes to brain structure in major depression have been confirmed by several meta-analyses [18-20], and ECT-induced structural and functional changes have been characterized (for recent reviews see [21-23]), we still lack a unifying theory for its mechanisms of action. The project will focus on three suggested effects of ECT, each reflecting proposed pathophysiological changes and mechanisms of action; see below. As a multidisciplinary study, results from the neuroradiological measures can be correlated with biomarkers in blood and behavioral parameters; e.g., improvement/remission after ECT should be correlated with improved performance on neuropsychological testing. For dichotic listening, improved scores in the forced-left condition would indicate better cognitive control.

Hippocampal volume

The human nervous system adapts to challenges. It can be changed by learning as well as by pathological conditions, such as psychiatric disorders. One structure that has been studied in great detail in this regard is the hippocampus, a structure that is important for learning and memory. Hippocampal volumes are reduced in major depressive disorder [19,24,25] and in a number of other psychiatric and somatic disorders (reviewed in [26]). The volume reduction of the hippocampus has been associated with the duration of untreated depression [27]. On the other hand, increased hippocampal volumes can occur after extensive learning, e.g. studying to become one of London’s taxi drivers [28,29]. The increase in hippocampal volumes may be related to neurogenesis, which has been shown to occur in animal models [30,31]. In primates, the proliferation of granular cells in the dentate gyrus of the hippocampus was shown to be reduced by stress [30]. Seizures induce neurogenesis in rodents [32], and animal models have shown electroconvulsive seizures to have effects on neurotransmitters, gene expression, growth factors (such as brain-derived neurotrophic factor - BDNF, vascular endothelial growth factor - VEGF, fibroblast growth factor - FGF) and neuropeptides (such as neuropeptide Y - NPY, thyrotropin-releasing hormone - TRH, VGF), and to lead to synaptic remodeling and cellular proliferation (reviewed in [33]). Research from animal models also indicates that electroconvulsive shocks can reverse the effect of cortisol and even cause an increase in hippocampal volumes (reviewed in [5]).

Increased levels of BDNF have been reported following ECT [34], and BDNF has been suggested as a potential biomarker for depression [35]. Neurogenesis has been shown to occur in the dentate gyrus of the hippocampus in adult humans [36], and ECT-induced neuroplasticity is gaining more focus as a framework for understanding the effects of ECT [23]. A few studies of humans have reported increased hippocampal volumes and/or other structural changes following ECT [37-41]. Nordanskog et al. [39,41] performed manual segmentation without complete blinding of the MRI time point that was traced (before or after ECT), introducing a potential observer bias. Dukart et al. [38] used voxel-based morphometry, while Tendolkar et al. [40] and Abbot et al. [37] both used FreeSurfer [42] for volumetric segmentation and analysis. Compared with these studies, we will recruit more patients, use state-of-the-art automatic segmentation procedures, and blind radiology readers to study group and the time point of MRI scans. In addition, by applying multimodal imaging, structural changes can be assessed with respect to changes in diffusion properties, susceptibility weighted imaging (SWI) and fluid attenuated inversion recovery (FLAIR) imaging. Our design will allow longitudinal tracking of brain changes 1-2 hours after the first ECT, after treatment completion, and at 6-month follow-up.

Gamma-Aminobutyric acid (GABA) and connectivity

ECT has anticonvulsive effects and is sometimes used in the treatment of status epilepticus [43]. One hypothesis suggests that the magnitude of the increase in seizure threshold induced by ECT is important for the antidepressant efficacy [44]. Drugs that enhance GABAergic neurotransmission are known for their anticonvulsant effect, and the role of amino acid neurotransmission systems, particularly reduced function of GABAergic neurotransmission, has been increasingly appreciated in major depression (for reviews, see [45,46]). Tiagabine, a selective GABA reuptake inhibitor, was shown to be effective in the treatment of depression with anxiety [47]. A post mortem study of gene expression in elderly depressed patients found alterations in GABA and glutamate pathway markers, indicating diminished activity in the anterior cingulate cortex (ACC) [48]. A recent meta-analysis suggested increased resting-state activity in the rostral ACC as a biomarker for treatment response in major depression, and a shift from GABA- to glutamate-mediated modulation was suggested [49].

Interestingly, one early study found increased concentrations of cortical GABA after ECT in depressed patients by use of proton magnetic resonance spectroscopy (1H-MRS) [50]; however, this finding has, to our knowledge, not been reproduced by other groups.

It has recently been suggested that “hyperconnectivity” in networks involved in mood regulation can be reduced after a course of ECT [51,52]. This finding may seem contrary to findings of increased fractional anisotropy (FA) in frontal limbic projections after a course of ECT [53,54]. FA is often regarded as a measure of white matter tract integrity, and increased axonal integrity may seem counterintuitive if one expects reduced connectivity after ECT. Possible explanations could be that the projections that are “enhanced” by ECT are GABAergic, or that improved integrity of certain projections may lead to more coordinated electrical activity in these projections, which overall is detected as “reduced connectivity”.

Our project will encompass measures of neurotransmitters (GABA and glutamate by 1H-MRS) and diffusion parameters (e.g. fractional anisotropy, mean diffusivity, and separation of restricted and hindered water by Restriction Spectrum Imaging, RSI [55]), enabling longitudinal investigations of GABA and glutamate levels as well as white matter properties in the same patients.

ECT and harmful effects

ECT has been a controversial treatment since its introduction. The most important side effects are related to memory impairments [6], and it is recommended that patients’ cognitive functioning is monitored both during and after treatment [16]. A recent randomized controlled trial in treatment-resistant bipolar depression found reduced autobiographical memory consistency after ECT but no deterioration of general neurocognitive function [13]. Structural damage to the human brain has, to our knowledge, never been documented to be caused by ECT. Case studies with rare complications, such as subdural hematoma, have been published [56]; however, a study using cerebral computed tomography in 40 patients before and after ECT detected no changes caused by ECT, even with convulsions lasting several minutes [57]. Both conventional MRI and diffusion-weighted imaging (DWI, a sequence that is sensitive to edema) have failed to find structural damage [58]. However, changes on DWI have been shown in patients after status epilepticus [59]. If brain injury occurs as a consequence of ECT, one may expect to find microhemorrhages. SWI is extremely sensitive to hemorrhages and is routinely used in imaging of stroke [60]. However, SWI, as an indicator of compromised microvascular integrity, has to our knowledge never been applied after ECT. In addition, by using 1H-MRS we will measure N-acetylaspartate (NAA), which is primarily localized in neurons and considered a marker of neuronal integrity [61].

Our project will use high-field-strength, state-of-the-art MRI and combine RSI, SWI, and 1H-MRS, which should enable detection of more subtle post-ECT effects.

Hypotheses and aims

Based on the above discussions, the following main hypotheses define the outline and aims of this project:

Hippocampal volumes increase after ECT treatment. A) Specifically there is increased volume of the dentate gyrus, which would suggest that the increase is caused by neurogenesis. B) Changes in hippocampal structure correlate with treatment response, neurocognitive measures and increased concentrations of neurotrophic factors in blood samples.

ECT causes increased levels of the neurotransmitter GABA and changes the glutamate/GABA balance. A) GABA concentrations correlate with treatment response. B) Genes regulating GABA synthesis and cycling are up-regulated or activated. C) A subset of cortical projections is strengthened, a possible mechanism causing reduced connectivity in frontal areas.

ECT does not cause measurable signs of harmful effects to the brain. A) No changes are detected on microvascular (SWI) or microstructural (RSI) imaging. NAA (measured in the ACC) is unaffected. B) Possible immediate post-ECT effects, e.g. edema detectable by diffusion-weighted imaging, are reversible.

Methods and design

The study is prospective and observational, and all patients will receive standard ECT treatment as provided at the ECT department of Haukeland University Hospital.

A flow chart of the study design is shown in Figure 1, and details on study measures and variables are listed in Table 1.


Brain Imaging With Magnetoencephalography During Rest and During Speech and Language Processing

Inverse Models for Reconstructing Brain Activity From Measurements

Inverse algorithms are used to solve the bioelectromagnetic inverse problem, i.e., estimating neural source model parameters from MEG and EEG measurements obtained outside the human head. Because the source distributions are inherently four-dimensional (three in space and one in time) and only a few measurements are made outside the head, estimation is ill-posed; in other words, there is no unique solution for a given set of measurements. To circumvent this problem of nonuniqueness, various estimation procedures incorporate prior knowledge and constraints about source characteristics, such as possible source locations, the source spatial extent, the total number of sources, or the source frequency/time-frequency characteristics.
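The nonuniqueness is easy to demonstrate numerically: whenever there are fewer sensors than candidate sources, any component lying in the null space of the lead-field matrix can be added to a solution without changing the measurements at all. The sketch below uses a random matrix as a stand-in for a real lead field (a simplifying assumption; actual lead fields come from a biophysical head model):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 10                      # 4 sensors, 10 candidate sources (toy sizes)
L = rng.standard_normal((m, n))   # hypothetical lead-field matrix

x1 = rng.standard_normal(n)       # one source configuration
# Build a second, different configuration by adding a null-space component:
# project a random vector onto the null space of L.
z = rng.standard_normal(n)
z_null = z - L.T @ np.linalg.lstsq(L @ L.T, L @ z, rcond=None)[0]
x2 = x1 + z_null

# Both configurations produce (numerically) identical sensor measurements.
print(np.allclose(L @ x1, L @ x2))   # True: the inverse problem is nonunique
```

Any prior-based estimator can then be read as a rule for selecting one of these infinitely many data-compatible solutions.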

Inverse algorithms can be broadly classified into two categories: parametric dipole fitting and tomographic imaging methods. Parametric dipole fitting methods assume that a small set of current dipoles (usually two to five) can adequately represent some unknown source distribution. In this case, the dipole locations and moments form a set of unknown parameters which are typically found using a nonlinear least-squares fit, multiple signal classification algorithms, or maximum likelihood estimation methods (Mosher, Baillet, & Leahy, 1999). Parametric dipole fitting has been successfully used clinically for localization of early sensory responses in somatosensory and auditory cortices. Fig. 15.2 shows an example of parametric dipole localization in the context of auditory-evoked responses and shows that responses to early auditory peaks can often be localized to activity arising from a source located in the superior temporal plane, from the auditory cortex and its immediate environs. However, the localization of higher-order auditory cortical functions is not always consistent and reliable with these methods across paradigms or subjects.

Figure 15.2 . (A) Auditory-evoked responses to a train of tone pips occurring 200 ms apart. Blue waveforms correspond to the right hemisphere and the purple waveforms correspond to the left hemisphere. The magnetic field topography on the sensor array is shown as colored circles above for the first four peak responses. (B) Amplitude and latencies of the first four response peaks showing hemispheric similarities in latency and amplitudes. (C) Dipole localization of each of the four peaks shows activity arising from auditory cortex and its immediate environs.

From Hairston, I. S., & Nagarajan, S. S. (2007). Neural mechanisms of the time-order error: an MEG study. Journal of Cognitive Neuroscience, 19, 1163–1174.

Two major problems exist in dipole fitting procedures. First, because of nonlinear optimization there are problems of local minima when more than two dipole parameters are estimated. This is usually manifested by sensitivity to initialization, and some subjectivity is involved in evaluating the validity of solutions. Brute force search methods have a huge computational burden—exponential in the number of parameters. A second, more difficult, problem in parametric methods is that these methods often require a priori knowledge of the number of dipoles. However, such information about model order is not always known a priori, especially for complex brain-mapping conditions. Although information and decision theoretic criteria have been proposed to address this problem, the success of these approaches is currently unclear, especially in real data sets. Whereas parametric dipole methods are ideal for point or focal sources, they perform poorly for distributed clusters of sources. Nevertheless, many studies to date using MEG have used dipole fitting procedures to make inferences about cortical activity.
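Because the dipole moment enters the forward model linearly while the position enters nonlinearly, dipole fitting is often organized as an outer search over candidate positions with an inner closed-form fit of the moment, which is exactly the brute-force strategy mentioned above. A toy one-dimensional illustration is sketched below; the forward function is an arbitrary smooth gain chosen for simplicity, not a physical MEG model:

```python
import numpy as np

sensors = np.linspace(-1.0, 1.0, 12)          # sensor positions (toy 1-D array)

def forward(pos):
    # Hypothetical gain of a unit-moment dipole at `pos` (not a real MEG model).
    return 1.0 / (1.0 + (sensors - pos) ** 2)

true_pos, true_moment = 0.3, 2.0
data = true_moment * forward(true_pos)

# Brute-force search over candidate positions; the moment is linear in the
# model, so it is fit in closed form at each grid point.
best = None
for pos in np.linspace(-1.0, 1.0, 201):
    g = forward(pos)
    moment = (g @ data) / (g @ g)             # least-squares moment for this pos
    err = np.sum((data - moment * g) ** 2)
    if best is None or err < best[0]:
        best = (err, pos, moment)

err, pos_hat, moment_hat = best
print(round(pos_hat, 2), round(moment_hat, 2))   # recovers roughly 0.3 and 2.0
```

With several dipoles the search space grows exponentially, which is why nonlinear optimizers (and their local minima) are used instead of exhaustive grids.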

Tomographic imaging is an alternative approach to the inverse problem. These methods impose constraints on source locations based on anatomical and physiological information derived from other imaging modalities. Anatomical MRI provides excellent spatial resolution of head and brain anatomy, whereas fMRI techniques provide an alternative measure of neural activation based on associated hemodynamic changes. Because of the high degree of overlap in activity measured using multiple modalities, such information can be used to improve solutions to the inverse problem. If we assume that the dominant sources are the transmembrane and intracellular currents in the apical dendrites of the cortical pyramidal cells, the source image can be constrained to the cortex, which can be extracted from a registered volume MRI of the subject’s head. Furthermore, the orientation of the cells normal to the cortical surface can be used to constrain the orientation of the cortical current sources. By tessellating the cortex into disjoint regions and representing sources in each region by an equivalent current dipole oriented normal to the surface, the forward model relating the sources and the measurements can be written as a linear model with additive noise. Such a formulation transforms the inverse problem into a linear imaging method because it involves estimating electrical activity at discrete locations over a finely sampled reconstruction grid based on discrete measurements. This imaging problem, although linear, is also highly ill-posed because of the limited number of sensor measurements available compared with the number of elements used in the tessellation grid.

Various solutions have been proposed for solving the tomographic imaging problem, and because there are many more unknowns to estimate simultaneously (source amplitudes and time courses) than there are sensor data, the problem is underdetermined.
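A standard way to pick a unique solution for such an underdetermined linear system is the Tikhonov-regularized minimum-norm estimate, which selects the smallest-norm source distribution consistent with the data. A minimal numpy sketch, again with a random matrix standing in for a real lead field:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 16, 200                      # far fewer sensors than grid points
L = rng.standard_normal((m, n))     # toy lead-field matrix (assumption)

x_true = np.zeros(n)
x_true[[40, 120]] = [1.0, -1.5]     # two active grid points
b = L @ x_true + 0.01 * rng.standard_normal(m)

# Tikhonov-regularized minimum-norm estimate:
#   x_hat = L^T (L L^T + lambda * I)^{-1} b
lam = 1e-2
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), b)

# The estimate reproduces the measurements well, but spreads energy over
# many grid points: the classic low spatial resolution of minimum-norm maps.
print(np.linalg.norm(L @ x_hat - b) / np.linalg.norm(b))  # small residual
```

The choice of the regularization parameter trades off data fit against the norm of the estimate; in practice it is tied to the sensor noise level.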

Instead of simultaneously estimating all sources, a popular alternative is to scan the brain and estimate source amplitude at each source location independently. It can be shown that such scanning methods are closely related to whole-brain tomographic methods, and the most popular scanning algorithms are adaptive spatial filtering techniques, more commonly referred to as “adaptive beamformers”, or just “beamformers” (Sekihara & Nagarajan, 2008).

Adaptive beamformers are simple to implement, are powerful techniques for characterizing cortical oscillations, and are closely related to other tomographic imaging methods. However, one major problem with adaptive beamformers is that they are extremely sensitive to the presence of strongly correlated sources. Although they are robust to moderate correlations, in the case of auditory studies, because auditory cortices are largely synchronous in their activity across the two hemispheres, these algorithms tend to perform poorly for auditory-evoked data sets without work-arounds (Fig. 15.5), and many modifications have been proposed for reducing the influence of correlated sources (Dalal, Sekihara, & Nagarajan, 2006). The simplest such work-around is to use the half of the sensors corresponding to each hemisphere separately, and this approach works surprisingly well for cross-hemispheric interactions. Other modifications to the original algorithms have been proposed in the literature, which require some knowledge about the location of the correlated source region (Dalal et al., 2006; Quraan & Cheyne, 2010).
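For a single uncorrelated source, the minimum-variance beamformer scan reduces to computing, at each grid point, the output power 1/(lᵀR⁻¹l) of a unit-gain spatial filter, where l is the lead-field column and R the sensor covariance. A toy numpy sketch (random stand-in lead field, one simulated oscillatory source) illustrating the scan:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, t = 12, 50, 500               # sensors, grid points, time samples
L = rng.standard_normal((m, n))
L /= np.linalg.norm(L, axis=0)      # unit-norm lead-field columns (toy model)

s = np.sin(2 * np.pi * 10 * np.arange(t) / 250.0)   # 10 Hz source activity
B = np.outer(L[:, 17], 5.0 * s) + 0.5 * rng.standard_normal((m, t))

R = B @ B.T / t                     # sample covariance of the sensor data

# Minimum-variance adaptive beamformer scan: for each grid point, the output
# power of the unit-gain spatial filter is 1 / (l^T R^{-1} l).
Rinv = np.linalg.inv(R)
power = 1.0 / np.einsum('ij,ij->j', L, Rinv @ L)

print(int(np.argmax(power)))        # 17: the active grid point
```

With two strongly correlated sources the covariance no longer contains two distinct rank-one signal components, which is the root of the correlated-source failure described above.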

Many algorithms have also been proposed for simultaneous estimation of all source amplitudes, and such solutions require specification of prior knowledge about the sources, either implicitly or explicitly specified in the form of probability distributions. In these cases, the solutions often require a Bayesian inference procedure estimating some aspect of the posterior distribution given the data and the priors. We showed that the many seemingly disparate algorithms for tomographic source imaging can be unified and shown, in some cases, to be equivalent, using a hierarchical Bayesian modeling framework with a general form of prior distribution (called a Gaussian scale mixture) and two different types of inferential procedures (Wipf & Nagarajan, 2008). These insights allow for continued development of novel algorithms for tomographic imaging in relation to prior efforts in this enterprise. Significant improvements in performance have been achieved by modern Bayesian inference methods that allow for accurate reconstructions of a large number of sources from typical configurations of MEG sensors (Wipf, Owen, Attias, Sekihara, & Nagarajan, 2010; Zumer, Attias, Sekihara, & Nagarajan, 2007, 2008). Fig. 15.3 shows source reconstructions of auditory-evoked responses using one such novel algorithm, called Champagne, as well as reconstructions from popular benchmark algorithms for comparison, highlighting inconsistent localizations, poorer spatial resolution, and sensitivity to correlated sources and noise.
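The hierarchical Bayesian idea can be illustrated with a minimal sparse Bayesian learning loop: each source gets its own variance hyperparameter, and an EM-style update drives the variances of unsupported sources toward zero. This is a simplified sketch in the spirit of such algorithms, not the actual Champagne implementation, and the lead field is again a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 10, 40                        # sensors, candidate sources
L = rng.standard_normal((m, n))      # toy lead-field matrix (assumption)

x_true = np.zeros(n)
x_true[9] = 2.0                      # single active source
b = L @ x_true + 0.01 * rng.standard_normal(m)

lam = 1e-3                           # assumed noise variance
gamma = np.ones(n)                   # per-source variance hyperparameters
for _ in range(200):
    G = L * gamma                    # L @ diag(gamma)
    Sb = lam * np.eye(m) + G @ L.T   # model data covariance
    x_hat = gamma * (L.T @ np.linalg.solve(Sb, b))         # posterior mean
    M = np.linalg.solve(Sb, L)
    post_var = gamma - gamma**2 * np.einsum('ij,ij->j', L, M)
    gamma = x_hat**2 + post_var      # EM update of the hyperparameters

print(int(np.argmax(gamma)))         # 9: only the active source survives
```

Pruning variances to zero is what gives these methods their sparsity and their resistance to the diffuse solutions of minimum-norm estimates.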

Figure 15.3. Auditory-evoked field results for seven different subjects. Results from Champagne (CHAMP) are shown in the leftmost column, and results from the benchmark algorithms (minimum-variance adaptive beamformers (MVAB), sLORETA (SL/dSPM), and MCE) are shown in the other three columns. Only Champagne shows consistent localization of auditory cortices in both hemispheres for all subjects.


Health risks linked to electromagnetic field exposure

A study of real-world exposure to non-ionizing radiation from magnetic fields in pregnant women found a significantly higher rate of miscarriage, providing new evidence regarding its potential health risks. The Kaiser Permanente study was published in the journal Scientific Reports (Nature Publishing Group).

Non-ionizing radiation from magnetic fields is produced when electric devices are in use and electricity is flowing. It can be generated by a number of environmental sources, including electric appliances, power lines and transformers, wireless devices and wireless networks. Humans are exposed to magnetic fields via close proximity to these sources while they are in use.

While the health hazards from ionizing radiation are well-established and include radiation sickness, cancer and genetic damage, the evidence of health risks to humans from non-ionizing radiation remains limited, said De-Kun Li, MD, PhD, principal investigator of the study and a reproductive and perinatal epidemiologist at the Kaiser Permanente Division of Research in Oakland, California.

"Few studies have been able to accurately measure exposure to magnetic field non-ionizing radiation," Dr. Li said. "In addition, due to the current lack of research on this subject, we don't know the biological threshold beyond which problems may develop, and we also don't yet understand the possible mechanisms for increased risks."

In a new study funded by the National Institute of Environmental Health Sciences, researchers asked women over age 18 with confirmed pregnancies to wear a small magnetic-field monitoring device (a bit larger than a deck of cards) for 24 hours. Participants also kept a diary of their activities on that day and were interviewed in person to better control for possible confounding factors, as well as to assess how typical their activities were on the monitoring day. Researchers controlled for multiple variables known to influence the risk of miscarriage, including nausea/vomiting, past history of miscarriage, alcohol use, caffeine intake, and maternal fever and infections.

Objective magnetic field measurements and pregnancy outcomes were obtained for 913 pregnant women, all members of Kaiser Permanente Northern California. Miscarriage occurred in 10.4 percent of the women with the lowest measured exposure level (1st quartile) of magnetic field non-ionizing radiation on a typical day, and in 24.2 percent of the women with the higher measured exposure level (2nd, 3rd and 4th quartiles), a nearly three times higher relative risk. The rate of miscarriage reported in the general population is between 10 and 15 percent, Dr. Li said.

"This study provides evidence from a human population that magnetic field non-ionizing radiation could have adverse biological impacts on human health," he said.

Strengths of this study, Dr. Li noted, included that researchers used an objective measuring device and studied a short-term outcome (miscarriage) rather than one that will occur years or decades later, such as cancer or autoimmune diseases. The study's main limitation is that it was not feasible for researchers to ask participants to carry the measuring device throughout pregnancy.

Dr. Li noted that the potential health risk of magnetic-field non-ionizing radiation needs more research. "We hope that the finding from this study will stimulate much-needed additional studies into the potential environmental hazards to human health, including the health of pregnant women."


Electromagnetic Fields Affect Human Cells

Electromagnetic fields, similar to those found in overhead power lines, can have a biological effect on human cells, an effect that could contribute to the complex cellular process that leads to cancer, research at Michigan State University shows.

The work of James E. Trosko, a professor of pediatrics and human development, and colleagues is published in the October issue of Environmental Health Perspectives, the journal of the National Institute of Environmental Health Sciences.

"Our studies have contributed to what many other studies have shown, and that is that there is a biological effect of the energy imparted by extremely low frequency EMF (ELF-EMF) on living systems," Trosko said.

Until now, he said, most studies on the biological and health effects of ELF-EMF had been "inconclusive or contradictory."

"Until now, the weight of the theoretical and experimental evidence has suggested that ELF-EMF did not have the ability to interact with genetic material to damage it, thereby causing mutations, which we know can lead to cancer," Trosko said.

Trosko and his colleagues studied the effects of ELF-EMF on mouse leukemia cells that had the potential to mature into hemoglobin-producing cells after exposure to a chemical. Hemoglobin is the substance that binds oxygen in blood. They found that electromagnetic fields of 60 hertz, at strengths ranging from 0.05 to 10 gauss, interfered with the chemically induced maturation process in the mouse cells and allowed the cells to continue to proliferate.

After four days of exposure, about 35 percent of the chemically treated cells that were exposed to ELF-EMF showed these effects.

What Trosko and his colleagues found is that ELF-EMF is not a tumor initiator, but rather a potential tumor promoter.

"ELF-EMF doesn't seem to mutate genes, which could convert a normal cell to an 'initiated' cell," he said. "But it can turn them on and off at inappropriate times, causing these initiated cells to proliferate when normally they would just sit there quietly doing nothing."

"The whole point of our study was not to see if extremely low frequency EMF causes cancer, but if it changes gene expression," Trosko said. "The bottom line is we showed there is a biological effect of EMF as measured by altering the expression of the hemoglobin-producing gene.

"I think it's important to note that there is a distinction between a biological effect and a health effect. Just because I sit under a high power transmission line, and just because that exposure might alter some biological activity in my body, that doesn't automatically mean I'm going to get cancer. And even if I should get cancer, it does not mean ELF-EMF had anything to do with the production of that particular cancer."

Trosko pointed out that the process in which a cell changes from a regular, healthy cell to a cancerous one is long and complex, involving different molecular/biochemical steps.

"These initiated cells need promoting agents to bring about cancer," he said. "They could be natural, such as hormones or chemicals in the food we eat. Or they could be man-made chemicals, drugs or pollutants.

"Most importantly, in order to act as a tumor promoter, many conditions must be met, including the ability of the promoter to overcome natural suppressing effects on cell proliferation, timing of the exposure to the promoter, absence of anti-promoters, and exposure for regular and long periods of time."

Other members of Trosko's research team were Gang Chen, Brad L. Upham, Wei Sun, and Chia-Cheng Chang, all of the MSU Department of Pediatrics and Human Development; Edward J. Rothwell and Kun-Mu Chen, of MSU's Department of Electrical Engineering; and Hiroshi Yamasaki of the International Agency for Research on Cancer in Lyon, France.

The work was funded by a grant from the Electrical Power Research Institute.

Story Source: Materials provided by Michigan State University.


Sebastien Point

Electrohypersensitivity syndrome may have little to do with actual exposure to electromagnetic radiation. It instead may be better understood as a phobia explained by anxiety disorder mechanisms.

Development of communication technologies (such as mobile phones and other wireless devices) has been followed in the past twenty years by the appearance of a new, self-assessed, so-called electrohypersensitivity (EHS) syndrome, which some people—including some medical doctors—attribute to permanent exposure to domestic, low levels of electromagnetic radiation. EHS syndrome is not the only potential concern raised by people worried about electromagnetic radiation; effects on cognition, sleep, and tumor initiation or growth have also been pointed out—without convincing scientific evidence or satisfying proposals regarding potential biological mechanisms. But until now, EHS (also often called electromagnetic hypersensitivity) syndrome seems to have been the most prominent of the supposed health effects attributed to exposure to electromagnetic radiation. Some studies (Hallberg and Oberfeld 2006; Eltiti et al. 2007; Schröttner and Leitgeb 2008) tried to count EHS-affected people, which is a difficult task because of the lack of a medical description of EHS syndrome. Results of these studies are shown in Figure 1. A continuously increasing trend is observed in EHS self-diagnoses between 1985 and 2005, although the latest studies suggest a decrease.

Figure 1: Percentage of EHS individuals in the population as measured in different European countries between 1985 and 2008 (adapted from Bellayer 2016).

In the face of numerous claims regarding adverse health effects of electromagnetic radiation in general—and EHS syndrome in particular—the French Agency for Food, Environmental, and Occupational Health & Safety (ANSES) emphasized in its 2018 report the need for taking care of people suffering from EHS syndrome, although it did not recognize a causal relationship between electromagnetic radiation and EHS syndrome (ANSES 2018). In parallel, for some years, an increasing number of court decisions have recognized the right of individuals suffering from EHS syndrome to refuse wireless technologies for medical reasons, and some judges have ruled that there is a causal relationship between electromagnetic radiation and EHS syndrome. Mass media are also interested in the problem, and it is easy to find numerous online testimonials of people suffering from EHS syndrome.

Scientists have tried to clarify the nature of this EHS syndrome and to verify the reality of the alleged relationship with exposure to electromagnetic radiation. In this article, I will summarize results produced by provocation studies and share my view that the etiology of the EHS syndrome is not related to real exposure to electromagnetic radiation. Instead, a phobia-like disorder should be investigated as a possible explanation of EHS syndrome.

Provocation Studies

Biological effects of electromagnetic radiation are taken into account by normative standards based on International Commission on Non-Ionizing Radiation Protection (ICNIRP) statements (ICNIRP 1998). Normative exposure limits aim to protect people from high doses of electromagnetic radiation, which can result in electrical or thermal effects (or both) depending on radiation frequency. Indeed, induced currents or temperature elevation are the only proven biological adverse effects associated with acute exposure to electromagnetic radiation (Perrin and Souques 2012). In Europe, directive 2013/35/EU has been created to protect workers from exposure to conditions able to generate such effects. However, EHS syndrome is claimed to appear when people are exposed to low levels of electromagnetic radiation.

A provocation study or trial is a form of medical clinical trial frequently used in this field. Participants are exposed either to a substance or device claimed to provoke a response or to a sham substance or device that should provoke no response. Several provocation studies have been conducted in an attempt to reveal a potential relationship between exposure to electromagnetic radiation at low levels and EHS syndrome. The premise is quite simple: as EHS people claim to suffer from various symptoms when they are exposed to electromagnetic radiation, experiments should be able to examine the phenomenon by comparing the level of discomfort of EHS people under different exposure conditions against a control population. Such studies must deal with a potential low-dose effect, which makes experiments sensitive to confounding variables. Thus, to be usable, such experiments must at least be single-blinded and, if possible, double-blinded, randomized, and counterbalanced.

Several authors have tried to create such experiments. One full and relevant analysis of these provocation studies has been made by E. van Rongen and colleagues (2009), who conclude that in provocation studies a causal relation between electromagnetic radiation exposure and symptoms “has not been demonstrated, suggesting that psychological factors such as conscious expectation of effect may play an important role in this condition.” This conclusion is shared by Renáta Szemerszky and colleagues (2010), who studied the role of the nocebo effect in physical symptoms reported at a frequency of 50 hertz. They found a considerable nocebo effect and noted the formation of “a vicious circle of psychosocial factors, such as enhanced perception of risk and expectations, self-monitoring, somatisation and somatosensory amplification, casualization, and misattribution.”

A Cognitive Approach

On the one hand, provocation studies have failed to determine a causal relationship with exposure to low levels of electromagnetic radiation, and there is no serious candidate for a biological mechanism. On the other hand, results of some studies show a significant number of people claiming to suffer from EHS syndrome. These contradictory observations call for a new pathway for investigation. As suggested by some authors, one possible etiology is psychological. I suggest a cognitive mechanism explaining EHS syndrome as a potential form of phobia in which confirmation bias could play a central role. An overview of my proposed mechanism is shown in Figure 2. The rationale and hypotheses used to build it are discussed below.

Biological Vulnerability

Anxious symptomatology has been described as under the influence of some genetic factors (Jardine et al. 1984; Kendler et al. 1992; Andrews 1996). Studying the etiology of social phobia, Ronald M. Rapee and Richard G. Heimberg (1997) suggested that genetic factors may explain preferential allocation of attention toward danger. The role of the biological factor has been integrated in many models of anxiety, including the Jones and Barlow biopsychosocial model of post-trauma stress (Jones and Barlow 1992) and the Clark model of panic (Clark 1986). In this latter model, people suffering from panic disorders are excessively sensitive to some normal or pathological body perceptions, which they consider a catastrophic threat and which can trigger a panic attack. The biological manifestation of this panic attack can amplify the perception of the threat. Some recent results have shown that this biological vulnerability to anxiety disorders, including depression and post-trauma stress, could be related to the volume of the hippocampi (Campbell and MacQueen 2004; Bremner et al. 1995). The widely accepted theory of anxiety by Jeffrey A. Gray and Neil McNaughton (2003) also points out the main role of the hippocampi in anxious behaviors. But some studies suggest that hippocampal volume evolution is not a result of anxiety, trauma, or depression but would be, on the contrary, a causative factor (Gilbertson et al. 2002). Biological vulnerability and preferential allocation of one’s attention to threat is accepted as one of the bases for the development of anxiety disorders. To consider EHS syndrome as a form of phobia, one has to make the fundamental hypothesis that EHS individuals carry, prior to the appearance of symptoms, this type of biological vulnerability.

Alleged Exposure as Neutral Stimulus and Mental Representation of Health Disorders

Anxiety troubles are often associated with a neutral stimulus, as understood in a Pavlovian approach. If EHS syndrome is a phobia, alleged exposure to electromagnetic radiation could play the role of neutral stimulus. From my perspective, several cues support this assumption.

As demonstrated by provocation studies, people, including those claiming an EHS syndrome, are not able to detect electromagnetic radiation at domestic, low levels. As low-level radiofrequency electromagnetic radiation is neither visible nor detectable, exposure can be inferred only on the basis of cues such as the proximity of mobile phones or antennas. Yet some research has shown a strong relationship between the loss of perceived control and anxiety troubles (Gallagher et al. 2014). Exposure to electromagnetic radiation could be a neutral stimulus powered by the feeling that the person has no control over this exposure.

News media are more and more focused on potential effects of electromagnetic radiation, especially with the current deployment of 5G technology. This media treatment could encourage people to focus on mobile phones, antennas, or radiofrequency emitters existing in their environment, and on physical perceptions when they encounter or use them. Such a hypothesis is supported by recent results, especially those of Anne-Kathrin Bräscher and colleagues (2017), who showed that worrisome information on health effects of wifi increases discomfort during sham exposure; it also increases sensitivity to tactile stimuli. In their studies, Michael Witthöft and G. James Rubin (2013) concluded that “media reports about the adverse effects of supposedly hazardous substances can increase the likelihood of experiencing symptoms following sham exposure and developing an apparent sensitivity to it.”

Symptoms claimed by EHS individuals are various and nonspecific. According to Marjukka Hagström and colleagues (2013), the most common symptoms are “stress” (60.3 percent), “sleeping disorders” (59.3 percent), and “fatigue” (57.2 percent). These symptoms are compatible with anxiety disorders, which can affect organs, glands, and the nervous system. In a cognitive approach, instead of considering anxiety (and all associated potential symptoms) as a collateral consequence of any real electrosensitivity, all these symptoms could be considered as existing prior to the appearance of the EHS syndrome (as a result of a biological vulnerability) and as having recently become associated with the neutral stimulus (the alleged exposure to electromagnetic radiation).

It has been shown that reducing or avoiding electromagnetic radiation can help individuals suffering from EHS syndrome recover a better level of comfort (Hagström et al. 2013). Such avoidance behaviors and immediate well-being gains are typically observed in anxiety or panic disorders; alternatively, in the case of EHS individuals, some counter-phobic objects can be purchased, such as anti-radiation chips applied to mobile phones to allegedly reduce their emission level. Following such avoidance or defense behaviors, decreasing discomfort could act as a confirmation that the source of discomfort was related to electromagnetic radiation. This would possibly reinforce the belief of EHS individuals in adverse health effects of electromagnetic radiation.

It is also well accepted that anxious individuals do allocate their attention to threat (Dalgleish and Watts 1990), which could be an evolutionary strategy to protect the organism (Ohman 1996). Considering EHS syndrome as an anxiety disorder, this attentional focus on physical perceptions, related to the belief that the subject is exposed and that this exposure can result in adverse health effects, could amplify and reinforce the belief in a real threat.

The feedback mechanism I explored here (reinforcement of belief by attentional focus on symptoms and good results of avoidance strategy) could make the EHS individuals enter a loop of fear whose main engine is confirmation bias.

Figure 2: Possible theoretical model of EHS syndrome considered as an anxiety disorder.

Conclusion

Science has failed to show a causal relationship between real exposure to electromagnetic radiation and EHS syndrome. There is, however, a growing accumulation of evidence that EHS syndrome is related to psychological factors. I have proposed that EHS syndrome could be a form of phobia explained by mechanisms occurring in other anxiety disorders. Biological vulnerability to anxiety—existing prior to the appearance of EHS syndrome—would cause people to confound symptoms of anxiety with a biological effect of electromagnetic radiation. Alleged exposure to electromagnetic radiation would play the role of neutral stimulus. Finally, confirmation bias—based on attentional focus and avoidance strategies—would push people still deeper into their belief.

Researchers should now stop, or at least scale back, efforts to find a less and less plausible link between EHS syndrome and electromagnetic radiation and should instead concentrate on understanding more accurately the mechanisms pushing people toward the wrong belief that domestic, low levels of electromagnetic radiation are deleterious. Researchers should also find ways to help these people by decreasing anxiety and discomfort through adapted cognitive behavioral therapy techniques.