
Have there been any recent replications of the Hofling hospital experiment?

In the original Hofling hospital experiment (1966):

A person would telephone a nurse, say that he was a doctor (giving a fictitious name), and ask the nurse to administer 20 mg of a fictitious drug named "Astroten" to a patient, promising to provide the required signature for the medication later. A bottle labelled "Astroten" had been placed in the drug cabinet, but there was no drug of that name on the approved list. The label clearly stated that 10 mg was the maximum daily dose.

There are several reasons why the nurse should refuse the request:

  1. They don't know the doctor
  2. They don't know the drug
  3. The requested dose is greater than the labelled maximum dose
  4. Paperwork was not provided

Nonetheless, 21 of 22 nurses complied. One of the (many) disturbing aspects of the experiment is that even with so many reasons to refuse, the nurses tended to obey. The implication is that with fewer reasons to refuse, they would be even more likely to obey.

This turned out not to be quite true, as per a partial replication by Rank and Jacobson (1977), in which they eliminated one of the reasons: They used a drug known to the nurses (Valium) instead of a fictitious drug, and somewhat counterintuitively got much lower compliance rates.

So if you wanted to have a particular hospital patient "eliminated", it seems to me that Hofling provides a simple plan: Make a phone call from a payphone somewhere to the nurse's station at the hospital, pretending to be a doctor, and ask them to administer a lethal dose of some poison - the only caveat is that as Rank and Jacobson showed, you don't want to refer to a drug they are familiar with, so ideally a lethal toxin placed in the drug cabinet ahead of time (or equivalently, a mislabelled toxin).

Rank and Jacobson suggest that the reduction in obedience may be partially due to cultural changes: changes in the relationship between doctors and nurses, the self-esteem of nurses, changes in the legal landscape and fear of lawsuits, and so on. Have there been any recent replications of the experiment that suggest how easy / hard it is today to have a patient anonymously murdered by a nurse?


Other Studies

It is worthwhile to at least investigate the outcomes of the following studies, so you can refer to them as support (or the reverse) in a piece of written work.

  • In one field study, people were approached on the street and given one of three instructions – for example: "Pick up this bag for me [points to bag]".
  • In the hospital study, the instruction broke many hospital guidelines, including acting without a signed order from a doctor.
  • The nurse was stopped before giving the medication, and as an extra precaution the drug was harmless anyway.
  • The hospital study has good ecological validity, as it took place in a real-life setting.
  • It is questionable how representative the street sample is, since it only tests those who happened to be on that street at the time.
  • The hospital study contradicts Milgram's findings, as it shows very high obedience when an order is given over the telephone.
  • A very small sample was used.


Ethics of Human Experimentation

There is no doubt that research involving human subjects is indispensable and has led to improvements in quality of life and numerous medical breakthroughs. At the same time, as the above examples show, human experimentation has often been at the limit of what is ethically acceptable.

Jenner’s vaccine experiment was fortunately successful, but exposing a child to a deadly disease in the name of medical research is today considered unethical. The HeLa cell and Tuskegee experiments have been cited as examples of racial discrimination in science. The Stanford study has been heavily criticized as unethical due to its lack of fully informed consent from the "prisoners", to whom the arrests came as a surprise. The NIH treatment of short children is often seen as a profitable pharmacologic solution to what is fundamentally a social problem.

In addition, in order to ensure sufficient participation in research, human experimentation was frequently done among the most vulnerable population groups such as prisoners, poor people, minorities, mental patients, and children.

So how can researchers achieve a balance and justify exposing individual human subjects to risk for the sake of the advancement of science?

Ethical guidelines for human research

Ethical guidelines for regulating the use of human subjects in research were developed in response to numerous unethical experiments carried out throughout the 20th century. In the past sixty years, there has been a rapid emergence of various codes, regulations, and acts to govern ethical research in humans. In addition, several organizations were put in place to help monitor human experimentations.

The Nuremberg Code

The Nuremberg Code is a set of international rules and research ethics principles that were created to protect human test subjects. The code was established in 1947 as a result of the Nuremberg trials at the end of the Second World War. Originally, the code aimed to protect human subjects from any cruelty and exploitation similar to what the prisoners endured during the war.

The Nuremberg Code states that voluntary consent in research is essential and that participants have the right to ask to end treatment at any moment. Furthermore, treatments can be carried out only by licensed professionals, who must terminate their study if the subjects are in danger.

The Nuremberg Code remains the most important document in the history of the ethics of medical research. It serves as a blueprint for today's principles that ensure the rights of subjects in human experimentation.

The Belmont Report

The Belmont Report was established in 1978 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The report describes the ethical behaviors in research that involve human subjects. It includes three ethical principles that must be taken into account when using human subjects for research:

  • Respect for persons: individuals should be treated as autonomous agents and people with diminished autonomy are entitled to protection
  • Beneficence: maximizing benefits and minimizing possible harms in human experimentation, that is, acting in the best interest of the participant
  • Justice: fair treatment and unbiased selection of subjects, ensuring that the burdens and benefits of research are distributed fairly.

The Belmont Report provides the moral framework for understanding regulations on the use of humans in experimental methods in the United States.

Food and Drug Administration regulations

The Food and Drug Administration (FDA) is the highest authority of human subjects protection in research in the United States. The FDA regulations for the conduct of clinical trials have been in effect since the 1970s. These regulations require informing participants that they could be used as control subjects or given a placebo, and that in certain cases alternative therapies may exist, as well as obtaining their written consent.

Ethics committees

To protect the rights and well-being of research participants, and at the same time allow researchers to obtain meaningful results and insights into human behavior, all current biomedical and psychological research must go through a strict ethical review process.

Ethics committees assess and review trial designs. They approve, review, and monitor all research involving humans. Their task is to verify that subjects are not exposed to any unnecessary risks according to the key ethical guidelines including the assurance of confidentiality, informed consent, and debriefing.

Ethics committees in the European Union are bodies responsible for oversight of medical or human research studies in EU member states.

Institutional review boards

In the United States, ethics committees are usually known as institutional review boards. Institutional review boards (IRBs), also called ethical review boards, are independent ethics committees that review Health and Human Services research proposals involving human subjects. The aim of an institutional review board is to ensure that proposals meet the ethical foundations of the regulations.

Any study conducted by a university or research organization has to be approved by an institutional review board, often even before investigators can apply for funding. This is as much the case for research in anthropology, economics, political science, and sociology as it is for clinical or experimental research in medicine and psychology.


The nurses were thought to have allowed themselves to be deceived because of their high opinions of the standards of the medical profession. The study revealed the danger to patients that existed because the nurses' view of professional standards induced them to suppress their good judgement.


Hofling Hospital Experiment Final Term Paper

HOFLING HOSPITAL EXPERIMENT
In 1966, the psychiatrist Charles K. Hofling conducted a two-part experiment inspired by Milgram’s research on obedience (Milgram, 1963, 1965). It consisted of a survey and a field study on obedience in the nurse-physician relationship: primarily, what happens when nurses are required to carry out a procedure that goes against their professional standards, and secondly, whether nurses were aware of their own tendencies in the level of obedience they displayed.

The Method, Participants, & Materials
Three psychiatric hospitals in the Midwest took part in this study, with one hospital acting as the control group. The control group consisted of a total of thirty-three nurses (twelve graduate nurses and twenty-one student nurses) who would complete the survey during the field study period. The field experiment would be conducted in twenty-two wards (twelve public and ten private) of the remaining two psychiatric hospitals. The twenty-two nurse participants were closely matched for age, sex, race, marital status, length of working week, professional experience and area of origin. An imaginary scenario was explained to the group of nurses and nursing students, who were expected to answer not only what they would do, but also what they predicted the majority of other nurses would do in the same situation (Hofling, Brotzman, Dalrymple, Graves & Pierce, 1966).

Hofling then arranged for a memo to be sent to all of the participants to remind them of their responsibilities with regard to changes in medication for patients. The nurses were observed to see whether they adhered to the guidelines provided; otherwise, a violation of hospital policy would have transpired. Per the memo, (1) medication orders and instructions could not be accepted over the telephone.


Keith E Rice's Integrated SocioPsychology Blog & Pages

PART 3
Dispositional and Situational
The 2 approaches to explaining obedience were to some extent reconciled via the work of Alan Elms (Alan Elms & Stanley Milgram, 1966).

One of Milgram’s assistants, Elms tested sub-samples of the 20 most obedient and the 20 most defiant from Milgram’s first 4 experiments, using Adorno’s F-Scale questionnaire. He found that those who tested highest on the F-Scale gave stronger shocks and held the shock buttons down longer than those who were low scorers. Participants were also asked a series of open-ended questions about their relationship with their parents and their attitudes towards the experimenter (authority figure) and the ‘learner’. Elms reported that participants high in authoritarianism were more likely to see the learner as responsible for what happened to him, rather than themselves or the experimenter, who was seen as an admirable figure by many of the authoritarian participants. They also often spoke in negative terms about their fathers. Though Elms’ sample groups were small, the implication is that there is indeed a dispositional element in blind obedience – so that some will respond to a situation demanding obedience more than others.

In Integrated SocioPsychology terms the vMEME most likely to obey blindly the orders of a legitimate authority is BLUE. However, the ruthlessness of the authoritarian personalities – and possibly their enjoyment of inflicting pain on others – suggests that they may also be high in the Psychoticism Dimension of Temperament.

Research into situational factors in obedience
In addition to Milgram’s obedience experiments, their variations and numerous replications, there have been a number of other important studies into obedience.

A very different study from Milgram’s was that of Wim Meeus & Quinten Raaijmakers (1985) – though they overtly took their inspiration from his work. 1980s Dutch culture was much more liberal than early 1960s American culture, so the intention was to see if the power of obedience to a higher authority would still apply in a different cultural setting. They also wanted to eradicate certain ambiguities in Milgram’s study – primarily that the levels of shock appeared to be dangerous, confirmed by the learner/victim going silent and the levers being labelled ‘severe shock’, etc, yet the participants had been told there would be no permanent tissue damage.

In the baseline procedure there were 39 volunteer participants aged 18-55, both male and female and of at least Dutch high school education. 24 of the volunteers were allocated to the experimental group while 15 were put in a control group. The experiment lasted about 30 minutes. The participants were given the role of ‘interviewer’ and ordered to harass a ‘job applicant’ (actually a confederate) to make him nervous while sitting a test to determine whether he would get the job. Although the premise of the set-up was that the experimenters were researching the relationship between psychological stress and test achievement, the participants were also told that the ‘applicant’ did not know the real purpose of the study – they heard the applicant being told that poor performance on the test would not affect his job prospects – and that the job being applied for was real.

The applicant, listening via a speaker in a different room, had to answer 32 multiple-choice questions read out in 4 sets by the ‘interviewer’. The harassing consisted of 15 negative statements – 5 each for the second, third and fourth question sets. (No negative statements were made during the first question set.) These appeared on a TV screen, telling the interviewer when to make the remarks and what to say. The comments built from mild criticism – “Your answer to question 9 was wrong” – to devastating utterances such as “This job is too difficult for you. You are only suited to lower functions.” No errors were made in the first question set but 10 were made over the next 3 sets – 8 being enough to ‘fail’ the test. The applicant had been instructed to begin confidently but to protest at the negative statements – eg: “But surely…” and “My answer wasn’t wrong, was it?” The applicant acted increasingly distressed until reaching the point – at the eighth or ninth negative statement – where he begged the interviewer to stop.
The applicant then accused the interviewer of lying to him about the study and withdrew his consent. The interviewers were told to ignore the applicant’s interruptions and were given 4 verbal prods to continue the remarks if they refused to go on. The participants were told that electrodes on the applicant’s skull were measuring tension, which was displayed numerically on a sequence panel running from 15 to 65. The experimenter, next to the participant, added verbal comments on the stress indicators displayed, such as “normal” or “intense”. The graphic shows how the stress level and errors were manipulated. 91.7% of the participants (22 out of 24) obeyed by disturbing and criticising the applicant with all 15 statements when told to do so by the researcher. The mean number of stress remarks given was 14.81. None of the participants in the experimental condition put up any real opposition to the experimenter’s demands.

With the control group the participants could choose when to make the negative statements and could stop making them at any time during the test. When the participants in the control group stopped the negative statements, the applicant had been instructed to stop making errors and their ‘tension’ levels would drop. No one in the control group made the stress remarks.

Meeus & Raaijmakers made 2 variations on the baseline. Firstly the experimenter set up the study, ordered the stress remarks and then left the room (22 participants). Secondly 2 confederates played co-interviewers alongside the real participant – protesting after stress remark 8 (causing the experimenter to go through the 4 verbal prods) and refusing, first one and then the other, to continue after stress remark 10 (when the applicant withdrew their consent to the experiment) – though the experimenter asked the real participant to continue (19 participants). Removal of the experimenter and introducing rebellious peers both led to a substantial reduction in obedience amongst the real participants – 36.4% and 15.8% respectively were fully obedient. The graphic shows the relative influence of the 2 variation conditions. The researchers explained the reduction in obedience in the ‘experimenter’ absent condition as being due to the participant having to take personal responsibility. They attributed the reduction in obedience in the ‘rebellious peers’ condition as both having to take personal responsibility and having the peers to model.
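As a quick arithmetic check, the obedience rates quoted above follow directly from the raw counts. Note that the fully obedient counts for the two variations (8 of 22 and 3 of 19) are inferred here from the reported percentages rather than stated in the text:

```python
# Obedience rates in Meeus & Raaijmakers' baseline and two variations.
# The counts for the variations (8/22 and 3/19) are inferred from the
# reported 36.4% and 15.8% figures, not stated explicitly in the source.
conditions = {
    "baseline":            (22, 24),
    "experimenter absent": (8, 22),
    "rebellious peers":    (3, 19),
}

for name, (obedient, total) in conditions.items():
    rate = 100 * obedient / total
    print(f"{name}: {obedient}/{total} = {rate:.1f}% fully obedient")
```

Running this reproduces the 91.7%, 36.4% and 15.8% figures quoted above.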

While Meeus & Raaijmakers did indeed demonstrate that, even in a more liberal culture than that of Milgram’s studies, people would obey an authority figure and go against their better nature to do something designed to hurt another person, the study has been heavily criticised as lacking mundane realism and, therefore, ecological validity as the task was hardly an everyday scenario.

A study with rather more ecological validity was that of Charles Hofling et al (1966). 3 psychiatric hospitals in the American Midwest took part in this study, with one of them acting as a control. 22 unwitting nurses from the other two hospitals were used for the experiment, across 22 wards – 12 public and 10 private. Participants were closely matched for age, sex, race, marital status, length of working week, professional experience and area of origin. While alone on the ward on night duty – 7-9 PM, just before evening visiting or just after it, when doctors are not normally around and medication is not normally administered – they received a phone call from an unknown “Doctor Smith from the Psychiatric Department” (the authority figure) asking them to administer 20 mg of ‘Astroten’ to a patient, ‘Mr Jones’, who needed the drug urgently. The caller, who claimed to be running late, said he would sign the authorisation papers when he arrived at the hospital in about 10 minutes’ time. Amongst the ward’s drugs were bottles containing fake pills labelled “Astroten 5 mg. Maximum dose 10 mg. Do not exceed the stated dose.” (The capsules in fact contained glucose, harmless to most patients.) The researcher playing Doctor Smith used a written script to standardise the conversation and all conversations were recorded. The conversation was planned to end when either:-

  • the nurse complied and went to administer the medication
  • the nurse refused consistently to give the medication
  • the nurse went to get advice
  • the nurse became emotionally upset
  • the call went on for more than 10 minutes

A researcher (a real doctor) stopped nurses who were moving towards the patient’s bed with the ‘medication’, and all nurses were debriefed within 30 minutes of the telephone conversation. 12 graduate nurses and 21 student nurses from the control hospital completed a questionnaire about what they would do if they were asked to administer a medicine unauthorised for use on their ward by a doctor they didn’t know. The telephone conversations were generally brief, without much resistance from the nurses. 21 of the 22 nurses started to administer the Astroten. In debrief, 16 nurses said they felt they should have been more resistant to the caller; none became hostile to the caller. Only 11 nurses admitted during the debrief to being aware of the dosage limits for Astroten; the other 10 did not notice the limit but judged that the drug must be safe anyway if a doctor had ordered them to administer it. Nearly all admitted they should not have followed the orders, as they were in contravention of hospital policy. However, many of the nurses stated that obeying a doctor’s orders without question was a fairly common occurrence: 15 nurses said they could recall similar incidents and that doctors were displeased if nurses resisted their orders. Amongst the control group, 10 of the 12 graduate nurses and all 21 students said they would not have administered the medication. Most believed other nurses would behave in the same way.

Hofling et al’s study does support Milgram’s Agency Theory. The nurses’ actions indicated they were in an agentic state, recognising and responding to the doctor’s authority. It also has high ecological validity. The nurses were unaware of the set-up so their behaviour was natural. Comparison between the control group’s questionnaire responses to the actual practice of the nurses in the experimental situation shows the difference between what people think they would do and what they actually do.

Steven Rank & Cardell Jacobson (1977), however, queried the mundane realism of the study, in that the nurses had no knowledge of the drug involved and had no opportunity to seek advice from anyone of equal or higher status (both of which would apply in most hospital situations). They replicated Hofling’s experiment, but the instruction was to administer Valium at 3 times the recommended level, the telephoned instruction came from a real, known doctor on the hospital staff, and the nurses were able to consult with other nurses before proceeding. Under these conditions, only 2 out of 18 nurses prepared the medication as requested. Rank & Jacobson concluded: “nurses aware of the toxic effects of a drug and allowed to interact naturally will not administer a medication overdose merely because a physician orders it.”

However, Eliot Smith & Diane Mackie (1995) reported that there is a daily 12% error rate in US hospitals and that “many researchers attribute such problems largely to the unquestioning deference to authority that doctors demand and nurses accept.” The same year, Annamarie Krackow & Thomas Blass gave a questionnaire to 68 nurses which asked about the last time they had disagreed with a doctor’s order. 2 factors emerged as key to whether the nurses would or wouldn’t obey. Most important was whether the nurses recognised the doctor as a legitimate authority with the right to make the decision in question. However, the nurses were also influenced by the consequences for the patient: if these would be serious, the nurses were more likely to take responsibility and challenge the order.

Rather less dramatic, but still debatable in terms of mundane realism and ecological validity, was Leonard Bickman’s field experiment in 1974. He had 3 male experimenters, dressed as either a milkman, a uniformed guard or a civilian in a sports coat and tie, make demands of passers-by in a New York City street. They gave one of 3 orders:-

  • “Pick up this bag for me” – pointing to litter
  • “This fellow is overparked at the meter but doesn’t have any change. Give him a dime” – nodding in the direction of a confederate fumbling for change by a parking meter
  • “Don’t you know you have to stand at the other side of the pole? This sign says ‘No standing'” – to a participant at a bus stop

The passers-by were most likely to obey the guard (38%) and least likely to obey the civilian (14%). As Bickman concluded, in support of Milgram and the concept of legitimate authority, a uniform has immense social power. In a variation of the study Bickman found people even obeyed the guard when he walked away after giving the order!

Brad Bushman (1988) replicated Bickman’s study but with 3 female confederates and found only slightly lower levels of obedience.

Research into dispositional factors in obedience
Robert Altemeyer (1981) worked with 3 of the authoritarian personality traits he thought constituted ‘right-wing authoritarianism’ (RWA):-

  1. Conventionalism – an adherence to ‘conventional’ norms and values
  2. Authoritarian aggression – hostility towards people who violate such norms and values
  3. Authoritarian submission – uncritical submission to legitimate authority

Altemeyer tested the relationship between RWA and obedience by instructing his participants to give themselves increasing levels of electric shock when they made mistakes on a learning task. He found a significant positive correlation between RWA scores and the level of shock the participants were willing to give themselves.

Education, cognitive complexity, politics and authoritarianism
In what appears to be a complex series of interlocking factors, education – with related increases in cognitive complexity – and political preferences all influence, or are influenced by, authoritarianism and how willing somebody might be to obey.

Milgram (1974) noted that less-educated people were consistently more obedient than well-educated people. Similarly, Elms noted that the less-educated were the most obedient and authoritarian. C P Middendorp & J D Meloen (1990) also found a link between poorer education and authoritarianism.

While education isn’t the only factor leading to greater cognitive complexity, it almost always does improve cognitive complexity. A number of developmentalists have found a correlation between greater complexity and political, social and moral views. Frenkel-Brunswik (1951) found that prejudice is negatively correlated with greater cognitive complexity. Lawrence Kohlberg (1963) found morality becomes more complex with greater cognitive development. Jane Loevinger (1976) stated that, as people’s thinking becomes more complex, so they become much more aware of others and their needs. In Gravesian terms, GREEN (liberal) thinking is more complex than BLUE/ORANGE (economically conservative), BLUE (rigidly conservative) and PURPLE/BLUE (socially conservative).

Quite a bit of evidence points to lesser cognitive complexity being associated with having right-wing political views. Gordon Hodson & Michael Busseri (2012) found that people with low childhood intelligence tend to grow up to have racist and anti-gay views. Jonathan Haidt, Craig Joseph & Jesse Graham note there is a “consistent difference between liberals and conservatives” on several measurements related to cognitive complexity. Emma Onraet et al (2015) offer an explanation for this: “Right-wing ideologies provide well-structured and ordered views about society and intergroup relations, thereby psychologically minimizing the complexity of the social world. Theoretically, therefore, those with fewer cognitive resources drift towards right-wing conservative ideologies in an attempt to increase psychological control over their context.”

Unsurprisingly perhaps, Laurent Bègue et al (2014) found that people who defined themselves as more ‘left-wing’ were less obedient than people who saw themselves as ‘right-wing’. In a fake game show, contestants had to give (fake) electric shocks to other contestants. The researchers found a negative correlation between the strength of left-wing political views and the intensity of shock the contestant was willing to administer.

However, not all research supports the association of lesser cognitive ability with right-wing political views. Luke Conway et al (2016) asked over 2000 participants – equally Democratic and Republican – to write statements about different domains in their lives eg:-

  • Climate change
  • Death penalty
  • Sex relations except in marriage are always wrong
  • Drinking alcohol
  • Socialism
  • Refugees/immigration
  • Separate roles for men and women
  • Capitalism
  • Abortion

Complexity in the statements was rated on a scale of 1 (simple) to 7 (highly complex). Republicans were much more complex on some topics and Democrats on others – there was no overall direction of travel from simplicity to complexity.


Findings

Asch measured the number of times each participant conformed to the majority view. On average, about one third (32%) of the participants who were placed in this situation went along and conformed with the clearly incorrect majority on the critical trials.

Over the 12 critical trials, about 75% of participants conformed at least once, and 25% of participants never conformed.

In the control group, with no pressure to conform to confederates, less than 1% of participants gave the wrong answer.
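As a back-of-envelope illustration (not a figure reported by Asch), the 32% average conformity rate over the 12 critical trials corresponds to roughly 3.84 conforming responses per participant:

```python
# Illustrative arithmetic only: the mean number of conforming responses
# implied by the figures quoted above (12 critical trials, 32% average
# conformity rate across participants).
critical_trials = 12
mean_conformity_rate = 0.32  # ~32% of critical-trial responses conformed

mean_conforming_trials = critical_trials * mean_conformity_rate
print(f"Implied mean conforming trials per participant: {mean_conforming_trials:.2f}")
```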




Critical Evaluation


The Milgram studies were conducted in laboratory-type conditions, and we must ask whether this tells us much about real-life situations. We obey in a variety of real-life situations that are far more subtle than instructions to give people electric shocks, and it would be interesting to see what factors operate in everyday obedience. The sort of situation Milgram investigated would be more suited to a military context.

Orne and Holland (1968) accused Milgram’s study of lacking ‘experimental realism’, i.e., participants might not have believed the experimental set-up they found themselves in and knew the learner wasn’t receiving electric shocks.

“It’s more truthful to say that only half of the people who undertook the experiment fully believed it was real, and of those two-thirds disobeyed the experimenter,” observes Perry (p. 139).

Milgram's sample was biased:

Yet a total of 636 participants were tested in 18 separate experiments across the New Haven area, which was seen as being reasonably representative of a typical American town.

Milgram’s findings have been replicated in a variety of cultures; most replications lead to the same conclusions as Milgram’s original study, and some have found even higher obedience rates.

However, Smith and Bond (1998) point out that with the exception of Jordan (Shanab & Yahya, 1978), the majority of these studies have been conducted in industrialized Western cultures and we should be cautious before we conclude that a universal trait of social behavior has been identified.


Top 10 Unethical Psychological Experiments

Psychology is a relatively new science which gained popularity in the early 20th century with Wilhelm Wundt. In the zeal to learn about human thought processes and behavior, many early psychologists went too far with their experimentation, leading to stringent ethics codes and standards. Though these experiments were highly unethical, it should be mentioned that they paved the way for our current ethical standards of experimentation, and that should be seen as a positive. There is some crossover on this list with the Top 10 Evil Human Experiments. Three items from that list are reproduced here (items 8, 9, and 10) for the sake of completeness.

The Monster Study was a stuttering experiment on 22 orphan children in Davenport, Iowa, in 1939 conducted by Wendell Johnson at the University of Iowa. Johnson chose one of his graduate students, Mary Tudor, to conduct the experiment and he supervised her research. After placing the children in control and experimental groups, Tudor gave positive speech therapy to half of the children, praising the fluency of their speech, and negative speech therapy to the other half, belittling the children for every speech imperfection and telling them they were stutterers. Many of the normal speaking orphan children who received negative therapy in the experiment suffered negative psychological effects and some retained speech problems during the course of their life. Dubbed “The Monster Study” by some of Johnson’s peers who were horrified that he would experiment on orphan children to prove a theory, the experiment was kept hidden for fear Johnson’s reputation would be tarnished in the wake of human experiments conducted by the Nazis during World War II. The University of Iowa publicly apologized for the Monster Study in 2001.

South Africa’s apartheid army forced white lesbian and gay soldiers to undergo ‘sex-change’ operations in the 1970s and 1980s, and submitted many to chemical castration, electric shock, and other unethical medical experiments. Although the exact number is not known, former apartheid army surgeons estimate that as many as 900 forced ‘sexual reassignment’ operations may have been performed between 1971 and 1989 at military hospitals, as part of a top-secret program to root out homosexuality from the service.

Army psychiatrists, aided by chaplains, aggressively ferreted out suspected homosexuals from the armed forces, sending them discreetly to military psychiatric units, chiefly Ward 22 of 1 Military Hospital at Voortrekkerhoogte, near Pretoria. Those who could not be ‘cured’ with drugs, aversion shock therapy, hormone treatment, and other radical ‘psychiatric’ means were chemically castrated or given sex-change operations.

Although several cases of abused lesbian soldiers have been documented so far, including one botched sex-change operation, most of the victims appear to have been young white males, 16 to 24 years old, drafted into the apartheid army.

Dr. Aubrey Levin (the head of the study) is now Clinical Professor in the Department of Psychiatry (Forensic Division) at the University of Calgary’s Medical School. He is also in private practice, as a member in good standing of the College of Physicians and Surgeons of Alberta.

This study was not necessarily unethical, but the results were disastrous, and its sheer infamy puts it on this list. Famed psychologist Philip Zimbardo led this experiment to examine the behavior of individuals when placed into the role of either prisoner or guard, and the norms these individuals were expected to display.

Prisoners were put into a situation purposely meant to cause disorientation, degradation, and depersonalization. Guards were not given any specific directions or training on how to carry out their roles. Though the students were at first unsure how to carry out their roles, they eventually had no problem. The second day of the experiment saw a rebellion by the prisoners, which brought a severe response from the guards. Things only went downhill from there.

Guards implemented a privilege system meant to break solidarity between prisoners and create distrust between them. The guards became paranoid about the prisoners, believing they were out to get them. This led the guards to control every aspect of prison life, even the prisoners’ bodily functions. Prisoners began to experience emotional disturbances, depression, and learned helplessness. During this time, prisoners were visited by a prison chaplain. They identified themselves by number rather than by name, and when asked how they planned to leave the prison, the prisoners were confused. They had completely assimilated into their roles.

Dr. Zimbardo ended the experiment after six days, when he realized just how real the prison had become to the subjects. Though the experiment lasted only a short time, the results are very telling of how quickly someone can abuse their power when put into the right circumstances. The scandal at Abu Ghraib that shocked the U.S. in 2004 is a prime example of Zimbardo’s findings.

While animal experimentation can be incredibly helpful in understanding humans, and in developing life-saving drugs, there have been experiments which go well beyond the realms of ethics. The monkey drug trials of 1969 were one such case. In this experiment, a large group of monkeys and rats were trained to inject themselves with an assortment of drugs, including morphine, alcohol, codeine, cocaine, and amphetamines. Once the animals were capable of self-injecting, they were left to their own devices with a large supply of each drug.

The animals were so disturbed (as one would expect) that some tried so hard to escape that they broke their arms in the process. The monkeys taking cocaine suffered convulsions and in some cases tore off their own fingers (possibly as a consequence of hallucinations), one monkey taking amphetamines tore all of the fur from his arm and abdomen, and in the case of cocaine and morphine combined, death would occur within 2 weeks.

The point of the experiment was simply to understand the effects of addiction and drug use, a point which, I think, most rational and ethical people would agree did not require such horrendous treatment of animals.

In 1924, Carney Landis, a psychology graduate at the University of Minnesota, developed an experiment to determine whether different emotions create facial expressions specific to that emotion. The aim of this experiment was to see if all people have a common expression when feeling disgust, shock, joy, and so on.

Most of the participants in the experiment were students. They were taken to a lab and their faces were painted with black lines, in order to study the movements of their facial muscles. They were then exposed to a variety of stimuli designed to create a strong reaction. As each person reacted, they were photographed by Landis. The subjects were made to smell ammonia, to look at pornography, and to put their hands into a bucket of frogs. But the controversy around this study was the final part of the test.

Participants were shown a live rat and given instructions to behead it. While all the participants were repelled by the idea, fully one third did it. The situation was made worse by the fact that most of the students had no idea how to perform this operation in a humane manner and the animals were forced to experience great suffering. For those who refused to perform the decapitation, Landis would pick up the knife and cut the animal’s head off for them.

The consequences of the study were actually more important for their evidence that people are willing to do almost anything when asked in a situation like this. The study did not prove that humans have a common set of unique facial expressions.

John Watson, the father of behaviorism, was a psychologist who was apt to use orphans in his experiments. Watson wanted to test the idea of whether fear was innate or a conditioned response. Little Albert, the nickname given to the nine-month-old infant that Watson chose from a hospital, was exposed to a white rabbit, a white rat, a monkey, masks with and without hair, cotton wool, burning newspaper, and a miscellanea of other things for two months without any sort of conditioning. Then the experiment began by placing Albert on a mattress in the middle of a room. A white laboratory rat was placed near Albert and he was allowed to play with it. At this point, the child showed no fear of the rat.

Then Watson would make a loud sound behind Albert’s back by striking a suspended steel bar with a hammer whenever the baby touched the rat. On these occasions, Little Albert cried and showed fear as he heard the noise. After this was done several times, Albert became very distressed when the rat was displayed. Albert had associated the white rat with the loud noise and was producing the fearful or emotional response of crying.

Little Albert started to generalize his fear response to anything fluffy or white (or both). The most unfortunate part of this experiment is that Little Albert was not desensitized to his fear. He left the hospital before Watson could do so.

In 1965, psychologists Martin Seligman and Steven Maier conducted an experiment in which three groups of dogs were placed in harnesses. Dogs from group one were released after a certain amount of time, with no harm done. Dogs from group two were paired up and leashed together, and one from each pair was given electrical shocks that could be ended by pressing a lever. Dogs from group three were also paired up and leashed together, one receiving shocks, but the shocks didn’t end when the lever was pressed. Shocks came randomly and seemed inevitable, which caused “learned helplessness,” the dogs assuming that nothing could be done about the shocks. The dogs in group three ended up displaying symptoms of clinical depression.

Later, group three dogs were placed in a box by themselves. They were again shocked, but they could easily have ended the shocks by jumping out of the box. Instead, these dogs simply “gave up,” again displaying learned helplessness.

The notorious Milgram Study is one of the most well-known psychology experiments. Stanley Milgram, a social psychologist at Yale University, wanted to test obedience to authority. He set up an experiment with “teachers,” who were the actual participants, and a “learner,” who was an actor. Both the teacher and the learner were told that the study was about memory and learning.

Both the learner and the teacher received slips that they were told had been assigned randomly, when in fact both read “teacher.” The actor claimed to have received a “learner” slip, so the teacher was deceived. The two were placed in separate rooms and could only hear each other. The teacher read a pair of words, followed by four possible answers to the question. If the learner answered incorrectly, the teacher was to administer a shock, with the voltage increasing with every wrong answer. If the answer was correct, there would be no shock, and the teacher would advance to the next question.

In reality, no one was being shocked. A tape recorder with pre-recorded screams was hooked up to play each time the teacher administered a shock. When the shocks got to a higher voltage, the actor/learner would bang on the wall and ask the teacher to stop. Eventually all screams and banging would stop and silence would ensue. This was the point when many of the teachers exhibited extreme distress and would ask to stop the experiment. Some questioned the experiment, but many were encouraged to go on and told they would not be responsible for any results.

If at any time the subject indicated his desire to halt the experiment, he was told by the experimenter: “Please continue. The experiment requires that you continue. It is absolutely essential that you continue. You have no other choice, you must go on.” If after all four prods the teacher still wished to stop, the experiment was ended. Only 14 out of 40 teachers halted the experiment before administering the final 450-volt shock, though every participant questioned the experiment, and none firmly refused to stop the shocks before 300 volts.

In 1981, Tom Peters and Robert H. Waterman Jr. wrote that the Milgram Experiment and the later Stanford prison experiment were frightening in their implications about the danger lurking in human nature’s dark side.

Dr. Harry Harlow was an unsympathetic person, using terms like the “rape rack” and “iron maiden” in his experiments. He is most well-known for the experiments he conducted on rhesus monkeys concerning social isolation. Dr. Harlow took infant rhesus monkeys who had already bonded with their mothers and placed them in a stainless steel vertical chamber device alone with no contact in order to sever those bonds. They were kept in the chambers for up to one year. Many of these monkeys came out of the chamber psychotic, and many did not recover. Dr. Harlow concluded that even a happy, normal childhood was no defense against depression, while science writer Deborah Blum called these “common sense results.”

Gene Sackett of the University of Washington in Seattle, one of Harlow’s doctoral students, stated he believes the animal liberation movement in the U.S. was born as a result of Harlow’s experiments. William Mason, another of Harlow’s students, said that Harlow “kept this going to the point where it was clear to many people that the work was really violating ordinary sensibilities, that anybody with respect for life or people would find this offensive. It’s as if he sat down and said, ‘I’m only going to be around another ten years. What I’d like to do, then, is leave a great big mess behind.’ If that was his aim, he did a perfect job.”

In 1965, a baby boy was born in Canada named David Reimer. At eight months old, he was brought in for a standard procedure: circumcision. Unfortunately, during the process his penis was burned off. This was due to the physicians using an electrocautery needle instead of a standard scalpel. When the parents visited psychologist John Money, he suggested a simple solution to a very complicated problem: a sex change. His parents were distraught about the situation, but they eventually agreed to the procedure. They didn’t know that the doctor’s true intention was to prove that nurture, not nature, determined gender identity. For his own selfish gain, he decided to use David as his own private case study.

David, now Brenda, had a constructed vagina and was given hormonal supplements. Dr. Money called the experiment a success, neglecting to report the negative effects of Brenda’s surgery. She acted very much like a stereotypical boy and had conflicting and confusing feelings about an array of topics. Worst of all, her parents did not inform her of the horrific accident she had suffered as an infant. This sent a devastating tremor through the family. Brenda’s mother was suicidal, her father was an alcoholic, and her brother was severely depressed.

Finally, Brenda’s parents gave her the news of her true gender when she was fourteen years old. Brenda decided to become David again, stopped taking estrogen, and had a penis reconstructed. Dr. Money reported no further results beyond insisting that the experiment had been a success, leaving out many details of David’s obvious struggle with gender identity. At the age of 38, David committed suicide.

This article is licensed under the GFDL because it contains quotations from Wikipedia.
