The Normal, the Pathological & the Perfect: Lessons from Genetics

A reflection on what constitutes "normal" and "pathological" in medicine, and how the science of genetics both clarifies and challenges established notions of these concepts. Genetics demonstrates that it is in fact "normal" to be "pathological", and that perfection has little to do with the absence of defects. These insights have implications for the philosophical consideration of "personhood" and "dignity".



Dignity, Healthcare & the Incarnation

This presentation, given on November 12, 2022, at the Fall Conference of the de Nicola Center for Ethics & Culture at the University of Notre Dame, attempts to summarize the uses and senses of the term "dignity" in different historical periods, in order to consider whether the word is helpful in contemporary bioethical debate.



The Criminal Mind & the Moral Imagination: What Psychopaths Can Teach Us about Conscience

This article was published on February 6, 2018 in Public Discourse: The Journal of the Witherspoon Institute. It is the product of an ongoing project on the "notion" of conscience, and how it has been understood over different periods of history.

Does every human being have a conscience? Are some people simply unable to engage in moral deliberation?

Plato’s Alcibiades, Shakespeare’s Iago, and Hollywood’s Hannibal Lecter have become stereotypes of the amoral: individuals whose callous disregard for the wellbeing of others seems almost a caricature of a real human being. Yet increasing evidence suggests that a small fraction of human beings are, in fact, morally incapacitated. From an early age, nothing restrains them from engaging in deceit, manipulation, and violence.

These conscience-free individuals first came to the attention of psychologists studying hardened criminals, but it’s become increasingly apparent that many—perhaps most—remain in society. And they exact a toll of conflict and suffering disproportionate to their numbers. Only now are psychologists beginning to realize who they are and how they operate. By considering what happens to people when a moral conscience is lacking, we may better appreciate just how our conscience makes us human.

The Conscience as an Awakening

The development of the human personality may be seen as a progressive awakening to ourselves, to others, and to the world. In most cases, we first begin to experience this process within the intimacy of a family. Within a family, we first learn if and why we are valued. We learn to value others—or not to. We learn to act within the bounds of an external authority, which can present itself to us as benevolent and just, indifferent and passive, or threatening and arbitrary.

As we acquire our sense of self and learn to relate to others, we acquire the capacity to understand and share the feelings of another person, particularly when they suffer or are in some way distressed. Researchers have demonstrated evidence of empathic responses in children as young as age one.

Over time, we learn to classify actions as morally “good” or “bad.” Learning to “do good” and “avoid evil” happens in many ways: by immediately witnessing the actions of others (example); by reading, viewing or hearing the accounts of actual or fictional events (literature); even by simple explanation of moral values (instruction). We populate our psyches with content drawn from these experiences. This learning process leads to the development of what some have called a “moral imagination”: a store of past experiences from which an individual may draw in attempting to resolve concrete moral dilemmas.

As our capacity for moral reasoning develops, we become aware of an internal witness to our actions, which passes judgment on our thoughts, words, and deeds, nudging us toward the good. A refined conscience is one that struggles sincerely and carefully to discern and do what is good and avoid evil whatever the cost. It fosters a heightened sensitivity to the welfare of others. It leads us to take responsibility for our actions.

What Happens When Conscience Doesn’t Develop?

In some people, however, this moral awakening never takes place. In 1939, Scottish psychiatrist David Henderson published a book called Psychopathic States describing patterns of extreme egocentric behaviors in otherwise sane, rational individuals. These people stood out because, at the time, most criminals were believed to be either “imbeciles,” “insane,” or both. Two years later, US psychiatrist Hervey Cleckley published the first edition of The Mask of Sanity, a series of character portraits of individuals who were cunning, heartless liars acting without any ethical restraint. They weren’t crazy. They simply had no inner sense of right and wrong.

Before Henderson and Cleckley, few in the psychiatric establishment had conceived of a sane, intelligent person repeatedly committing vicious, often violent crimes without hesitation or remorse. Their work served as the basis for a modern understanding of the psychopathic personality.

In the 1980s, Canadian psychologist Robert Hare began a lifelong project to define scientifically rigorous diagnostic criteria for psychopathy based on structured interviews and analysis of the lives of hardened criminals. His subjects were identified based on the list of traits described by Cleckley. He and his graduate students at the University of British Columbia developed what is today known as the “Psychopathy Checklist—Revised” (PCL-R), which has become the standard diagnostic test for psychopathy. According to Dr. Hare, psychopaths are

social predators who charm, manipulate and ruthlessly plow their way through life, leaving behind a broad trail of broken hearts, shattered expectations and empty wallets. Completely lacking in conscience and in feelings for others, they selfishly take what they want, do as they please, violating social norms and expectations without the slightest sense of guilt or regret.

When the Hare test is given to normal control subjects, most score a one or two on a scale of forty. Most general population inmates score about twenty or twenty-one. Psychopaths, by definition, score thirty or higher. Among incarcerated criminals in the US, about 20 percent fulfill the Hare diagnostic criteria for psychopathy. But this relatively small fraction accounts for at least 50 percent of all violent crime in the United States. They are also much more likely to re-offend, with recidivism rates of about 80 percent.

Most psychopathic individuals have experienced a traumatic and dysfunctional childhood. Data supporting the inheritance of predisposing genetic factors has been offered as well. In addition, characteristic and reproducible patterns of abnormality have been described when individuals with psychopathic PCL-R scores are evaluated by functional MRI. The brains of psychopaths are markedly deficient in neural areas critical for three aspects of moral judgment: they are unable to recognize situations as “moral”; they are unable to reach a decision about a moral issue; and they are unable to suppress a response pending the resolution of a moral dilemma.

Understanding Psychopathy

Given the reproducible correlation between clinical diagnostic criteria and neuroanatomical abnormalities, some have proposed classifying psychopathy as a distinct illness. For now, the psychiatric establishment classifies psychopathy as a type of antisocial personality disorder. The fifth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-5) defines antisocial personality disorder as “a pervasive pattern of disregard for, and violation of, the rights of others that begins in childhood or early adolescence and continues into adulthood.”

The word “pervasive” implies that every aspect of such a person’s personality is committed to this type of behavior. And, by definition, psychopathic behavior patterns emerge before age fifteen. These are individuals consumed by the desire to possess and dominate by any means at their disposal. The following video clip is a case in point: a young man with a violent criminal history describes his behavior. The complete video is very disturbing and should be viewed with discretion.

In their interpersonal relations, psychopaths are charming manipulators, incapable of attachment to others. They exhibit a callous disregard for the wellbeing of others. For them, lying is habitual and automatic. With their lies, they define their own reality in order to maintain some control of their surroundings. Relationships are just opportunities for manipulation and control. Other people are dehumanized and valued only as possessions. Psychopaths are stimulated by exerting control over others. Sometimes they live a nomadic life, moving from place to place to live for a time off the people they’ve charmed until they get bored and move on to their next victim. They completely lack empathy.

Their ability to feel and express emotion is absent or, at best, shallow. They are rarely anxious. They feel no remorse or guilt. If they do exhibit an emotion, it is a “show,” the “mime” of a reaction they have observed in others. It is a deception intended to facilitate their manipulation of others. While intelligent and rational, they are unable to identify any reason to restrain themselves from committing a crime or hurting others. They easily—even eagerly—violate social norms.

They don’t take criticism well and often retaliate against their critic. If confronted with the harm they have inflicted, they aim to portray themselves as the victim. They never acknowledge responsibility for their actions and are insensitive to punishment. Some investigators have postulated that they experience an intense, chronic anger; but their anger is cold, without any emotional arousal. It may be of interest here to consider that Aquinas understood anger as a form of sorrow.

Some of these traits are apparent in this clip of Richard Kuklinski, a man convicted of multiple contract murders in the greater New York City area who confessed to hundreds more. While committing his crimes over decades, he lived a superficially ordinary life — punctuated by episodes of domestic violence — with a wife and three children in suburban New Jersey. After Kuklinski has described his crimes in detail over several hours, the interviewer offers to turn the tables and asks whether Kuklinski wants to ask him a question. Kuklinski asks, “What do you think of me?”

Given the fact that Kuklinski’s life had been given over to murder, deceit, and all forms of violence, his references to “love,” “caring,” and “friendship” may be attempts to manipulate the interviewer.

Not All Psychopaths Are Criminals

Yet not all psychopaths are violent criminals. Research suggests that about 2 percent of males and perhaps 0.5 percent of females in the United States are psychopaths by Hare’s criteria, making it likely that most psychopaths are never convicted of crimes. Assuming a total US population of 325 million, there would be about 3.8 million psychopaths in the population: about 3 million males and 785,000 females. These people live in our society. They may rise to influential positions in large corporations, government, academic institutions, hospitals, and church organizations. Like their criminal counterparts, they don’t feel bad when they do bad things. They might be recognized as strategic thinkers who plot their personal advancement at the expense of others, enjoying the pain they inflict.
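As a rough check on this arithmetic, the short sketch below simply applies the 2 percent and 0.5 percent rates quoted above to a total population of 325 million. The roughly even male/female split is an assumption added for the illustration, not a figure from the article, so the result lands near the article's estimate rather than matching it exactly.

```python
# Minimal sketch of the prevalence arithmetic described above.
# Assumption (not from the article): the population splits roughly 50/50
# between males and females.

total_population = 325_000_000

male_share = 0.5      # assumed split, for illustration only
female_share = 1 - male_share

male_rate = 0.02      # ~2% of males meet Hare's criteria (per the article)
female_rate = 0.005   # ~0.5% of females

males = total_population * male_share * male_rate
females = total_population * female_share * female_rate

print(f"Estimated male psychopaths:   {males:,.0f}")    # ~3.3 million
print(f"Estimated female psychopaths: {females:,.0f}")  # ~0.8 million
print(f"Total estimate:               {males + females:,.0f}")  # ~4 million, the same order as the article's ~3.8 million
```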

Psychologists Paul Babiak and Robert Hare have summarized the research regarding these “industrial” or “corporate” psychopaths in the book Snakes in Suits. The book reviews how psychopaths manipulate their way toward promotions, the effects of their presence on the work environment, and the superficial similarities and substantial differences between leadership skills and psychopathic traits. Based on early data using the Hare psychopathy test, they found that around 3 percent of 203 corporate professionals selected by their companies for management development programs fulfilled criteria for psychopathy. These authors are developing a clinical test of psychopathy designed for the workplace.

We can easily identify likely psychopathic personalities in recent history: the tyrants of twentieth-century Europe and Asia, government leaders of North Korea and Cuba, those responsible for the global financial crisis, and so on. A very small number of ill-intentioned individuals can generate immense suffering and turmoil.

At a more personal level, however, what do psychopathic behaviors tell us about the conscience? To lack a conscience means:

– To consciously misrepresent our intentions toward others;

– To manipulate others for the sake of domination and control;

– To mask our selfish intentions by apparently altruistic actions;

– To divide others into pawns, patrons or patsies based on their utility;

– To find satisfaction in the turmoil we cause;

– To lack the experience of the range of emotions that animate a human life;

– To deny responsibility for our actions;

– To see our self as the only thing that matters.

To have a conscience means:

– To recognize the evil in every human heart beginning with our own;

– To seek a better way of living;

– To recognize the healing value of remorse;

– To value empathy and compassion toward the suffering of others;

– To take responsibility for our actions;

– To take responsibility for the wellbeing of others.

Just as evil may be understood as the absence of good, darkness as the absence of light, and cold as the absence of heat, so the absence of conscience presents itself as profoundly dehumanizing and destructive. How these human beings have become so callous and self-centered remains a mystery. No treatment seems to help, and many express the conviction—mistaken, perhaps—that they are unable to change their behavior.

What is clear is that fostering the development of a refined moral conscience—for example, through stable family life, literature, and artful explanation—should be a priority for both individual parents and for society as a whole.

A Medical History of the Contraceptive Pill

This article was first presented on November 7, 2008, at the conference of the Center for Ethics & Culture, University of Notre Dame. A version also appeared on the MercatorNet website on January 15, 2010. A short time later, the well-known Australian feminist Anne Summers quoted from it in her column in the Sydney Morning Herald.

The acceptance of the hormonal contraceptive pill has been cited as one of the most important historical events of the 20th century because of its effects on marriage and family life. In this paper, I would like to discuss the medical history of the development of the pill, presenting historical events primarily from the perspective of science and scientists, but also – and necessarily – from the perspective of other personalities outside of science whose contributions to the cause were just as important.

I will focus on the 50-year period from 1910 – the year of the birth of reproductive endocrinology as a scientific discipline – to 1960 – the year in which the first orally active hormonal contraceptive was approved for sale to the general public in the US.

At the turn of the 20th century, there was growing confidence in the power of the medical sciences to finally understand human physiology and the pathophysiology of diseases. The source of this confidence was due in no small part to advances in the field of endocrinology: the study of hormones and the glands that produce them.





 

Ernest Starling (1866-1927) was the British physiologist who in 1905, along with William Bayliss, established the existence of “hormones”, chemicals which were produced in one tissue of the body, delivered into the bloodstream and elicited a response from another distant tissue.





The term "hormone" was coined in 1905 by the British physiologist Ernest Starling, after the Greek word meaning "to incite to activity". In the early 20th century, a variety of chemicals were found to have "hormonal" effects in humans: they were produced in one tissue, entered the bloodstream and incited a specific effect on another distant and unrelated tissue. Insulin, thyroxine, testosterone and cortisone were discovered at this time and were found to have remarkable restorative properties when given to patients with a number of common diseases. This enthusiasm for the therapeutic potential of "hormones" also extended to the area of human reproduction In the late 19th century, Professor Brown-Sequard, at the age of 72, injected himself with the extracts of guinea pig testicles and reported their rejuvenating effects. Between 1910 and 1930, the hormones estrogen and progesterone were found to play important roles in the physiology of female mammalian reproduction, and so by around 1940, it became apparent that in human females, fertility depended upon the complex interactions of a hierarchy of hormones which affected the ovaries first, and then the uterus. The ovaries were found to be the source of a "female factor" in human development (the ova or egg): complementary to and just as necessary for human reproduction as the "male factor". This picture -- which seems so clear today -- was in fact a dramatic insight, considering that even in the second half of the 19th century many scientists still believed that conception occurred when the male factor – a seed – was sown into the female womb– the soil. A woman’s contribution to conception was thought to be that off a passive receptacle offering a favorable environment for the germination of the seed. The picture that now emerged meant that fertility in women was in the normal case orderly and cyclical, and therefore predictable. This new understanding of human reproduction seemed to lift the veil on an awesome event that, until then, had been shrouded in mystery. Science began to expose the mystery, to reveal the mechanics of how human beings came about, and made that event accessible and open to manipulation. 

Controlling fertility 

The notion that fertility in mammals could be controlled by the manipulation of reproductive hormones was first proposed by the Austrian physiologist Ludwig Haberlandt, who based his hypothesis on his own work with laboratory animals. He and others observed that the ovarian follicle would not mature and ovulation would not occur during normal pregnancy, and that this suppressive effect was mediated by progesterone.





Ludwig Haberlandt (1885-1932) was the Austrian physiologist who first suggested that the female reproductive cycle could be controlled by the administration of hormones.

 







Progesterone is produced by the corpus luteum during the second half of the normal menstrual cycle, and its blood levels remain elevated only if pregnancy occurs. As long as certain levels of progesterone persist in the circulation, the hormonal signals favoring the shedding of the endometrium, the ripening of additional ovarian follicles, and the release of ova do not occur. If pregnancy does not occur during a normal menstrual cycle, the corpus luteum stops producing progesterone, the endometrium stops growing and menstrual bleeding occurs. If pregnancy does occur, the zygote produces human chorionic gonadotropin (HCG), which prevents the involution of the corpus luteum. Persistent progesterone production by the corpus luteum sustains the viability of the endometrium so that development of the embryo can continue.

So scientists like Haberlandt, who sought to develop a birth control pill, focused their efforts on finding a chemical that would mimic the normal effects of progesterone. In effect, they sought a chemical that would induce a pseudo-pregnant state. Haberlandt was probably the first to propose that "hormonal sterilization" -- which he had successfully induced in several animal models -- could and should be applied to humans. He even developed a progestin-based oral contraceptive which -- a few decades after his death in 1932 -- was studied and distributed as the contraceptive "Infecundin" in Eastern Europe by Marxist governments. His original observations on the contraceptive potential of progesterone in animals proved to be of use to others years later in planning strategies for the control of human fertility.

Overcoming cultural resistance

Before moving on with the story, a brief digression: it is worth noting that while scientists were unraveling the mysteries of human reproduction, cultural attitudes during the first half of the 20th century strongly discouraged public discussion of sexual matters, and many -- if not most -- people agreed that the use of contraceptive devices was somehow wrong. In many states, dispensing contraceptives or information about them was a felony. So advocates of the birth control pill had to overcome not only important scientific hurdles but also widely held cultural and religious objections.

There was the clear perception among early advocates of birth control that acceptance of a contraceptive pill would be difficult to achieve. Margaret Sanger led the campaign in the US that would gradually -- over decades -- desensitize the general public on matters of sex. A brilliant and remarkably tenacious woman, she wrote pamphlets, published newspapers and books, smuggled birth control devices, founded birth control clinics and got arrested -- all to raise the issue of birth control from the perspective of women’s rights, at the same time publicly downplaying her own anarchist and eugenicist leanings.


Margaret Sanger (1879-1966) was an early and ardent supporter of birth control, founding the American Birth Control League in 1921. In 1942, the group was renamed Planned Parenthood Federation of America, today a major provider of contraception and abortion throughout the world.


She succeeded in her efforts, and she and her friends were pleasantly surprised when, after the pill’s release in 1960, popular opposition to birth control rapidly diminished. While Sanger was primarily a political activist using the language and methods of class warfare to foster grassroots support for her movement, others sought to justify the need for birth control through geopolitical arguments. Generously funded by important private foundations, eugenicist academics like Frank Notestein and Kingsley Davis developed demographic models that were appealing by virtue of their simplicity and logic. They were also founded on a vision of the human person in which self-interest was the driving force of every human action and a human being’s capacity for transcendence was ruled out a priori.


The demographic transition theory,[1] first proposed in 1929 and developed further by Notestein, Davis and Coale, attempts to explain how economic growth can determine changes in human population. These early advocates of birth control argued that aggressive use of family planning could accelerate the achievement of low birth rates and low death rates (phase 4) in developing countries, a situation they believed to be optimal for social stability and human fulfillment.

Aided by their academic prestige and an abundance of financial resources, they argued the need for birth control as the most effective way of avoiding the otherwise inevitable depletion of the earth’s resources by a growing population of consumers. Both of these groups skillfully influenced public opinion in the post-war period to soften opposition to the pill.

Scientific obstacles

The scientific obstacles to development of the pill were substantial as well. Most experts had settled on progesterone as the preferred agent, but progesterone -- like estrogen and most other steroid hormones -- was digested, not absorbed, when taken by mouth. Chemists sought to alter the structure of the naturally occurring hormone so that it could be absorbed orally and still retain its natural effects.

Raw progesterone was also very expensive because it was very hard to come by. Until the 1940s, European pharmaceutical firms held a virtual monopoly on steroid hormone production, but their sources were limited to crude biological material from animal products: glands obtained from slaughterhouses and hormone derivatives obtained from the urine of pregnant animals. (For example, the name of the drug "Premarin", commonly used as an estrogen supplement, is a contraction of "pregnant mares' urine".) These inefficient, low-yield processes were inadequate to meet the high demand for hormones. The challenge was to discover alternate sources of steroid molecules. This was possible because the hormones involved in human reproduction belong to a large group of natural chemical compounds with a very similar structure: all steroids share four basic rings, and their different effects in the body depend upon the addition or deletion of side chains to the rings.

Among those taking up the challenge was Russell Marker, a chemist who sought and found an important source of steroid hormones in plants. Between 1939 and 1943, Marker and his team demonstrated that plant compounds called "sapogenins" could be used as precursors of steroid synthesis. He devoted himself to identifying plants with high concentrations of sapogenins. In 1941, while on a search for plant sources in New Mexico, he stumbled on a reference book describing a tuberous plant of the Dioscorea family: a common yam plant called "barbasco" by the natives of eastern Mexico (Veracruz), where it grew abundantly. He decided to analyze Dioscorea plants, obtained samples and confirmed very high levels of a sapogenin called diosgenin. The yam species Dioscorea villosa was particularly suited to his work.

The Mexican connection

Marker developed a simple, five-step method of converting diosgenin to progesterone, a process called the "Marker degradation". He approached Parke-Davis, an American pharmaceutical firm that had supported his early research at Penn State, encouraging them to set up a production center in Mexico, but they declined. Being a pragmatic person, he sought local support by checking the Mexico City telephone book for chemistry labs that might be willing and able to work with steroid hormones. He found "Laboratorios Hormona", a company founded by two eastern European émigrés, Emeric Somlo and Federico Lehmann. When they first met, Marker showed them a sample of his work: 80 grams of pure progesterone, representing almost one third of the world’s supply of the chemical. They agreed to form a new company, naming it Syntex Laboratories, from "synthesis" and "Mexico". After a year of successful work, Marker and his partners disagreed over finances, and he left the company, taking the instructions for synthesizing progesterone from diosgenin along with him.





Russell Marker (1902-1995) developed a procedure to produce large quantities of progesterone from “diosgenin”, a chemical extracted from the root of a yam plant which grows abundantly in eastern Mexico. Unable to interest U.S. pharmaceutical firms in his method, he co-founded Syntex Laboratories in Mexico with two Eastern European businessmen, but was no longer associated with the company when Syntex scientists succeeded in producing the first oral contraceptive.







Syntex executives then hired George Rosenkranz, a Swiss-trained Hungarian chemist who had been stranded in Havana since 1942; the Pearl Harbor attack had occurred while he was on his way to Quito, Ecuador, to found a university chemistry department there. Rosenkranz was eventually able to piece together Marker’s process, and within a year Syntex was once again producing progesterone from diosgenin. By the 1970s, Syntex had become a billion-dollar corporation and one of the world’s largest producers of steroid hormones. It should be noted that Syntex sought first to produce industrial amounts of corticosteroids; progesterone derivatives were a secondary priority. But when Upjohn and Pfizer in the US beat the Eastern Europeans at Syntex in developing efficient techniques for the mass production of cortisone, Syntex’s priorities shifted. The Mexican chemist and technicians who had been assigned to the lower-priority research – synthesizing an orally absorbed version of progesterone – now held the key to the company’s future.






Mexican biochemist Luis Miramontes (1925-2004), working under Carl Djerassi, succeeded in producing a chemical with progesterone-like effects that was absorbed into the body after oral ingestion. This chemical -- called norethindrone – eventually became the first birth control pill.




It was a young Mexican chemist, Luis Miramontes, who succeeded under the supervision of the Austrian-born Carl Djerassi. They called the orally-active progestational substance norethindrone. Djerassi later claimed that they did not at that time intend to produce a hormonal contraceptive specifically, simply a compound with potentially marketable uses. Regardless, he has been identified and identifies himself as the father of the birth control pill, and he has expressed his sense of fatherhood artistically, in the form of irreverent theatrical works exploring contemporary attitudes toward human reproduction.

Expansion to the United States

Syntex proved to be a very efficient hormone factory, but as a Mexican company it lacked access to the US market and so sought to develop partnerships with established US firms. Its efforts eventually led to Shrewsbury, Massachusetts, and the Worcester Foundation for Experimental Biology.

Gregory Pincus was a zoologist, an authority on the reproduction of mammals and an eccentric. He was the first to succeed at in vitro fertilization in mammals, using rabbits. Later in the 1930s, he produced a rabbit by parthenogenesis, and he allowed his work to be profiled in Collier’s magazine. In that article – the cover story – he was misquoted as admitting his intention to attempt IVF in humans. As a result, according to pill biographer Bernard Asbell, the public came to view him as a kind of Dr. Frankenstein. He was denied tenure at Harvard, and so became an independent consultant, founding the Worcester Foundation for Experimental Biology in 1944.





Gregory Pincus (1903-1967) performed studies in animals to confirm the contraceptive effects of norethinodrel. His data were used to justify human research using the same chemical. He collaborated closely with the obstetrician John Rock, and was supported financially and politically by Katherine Dexter McCormick, Margaret Sanger and other birth control activists.





The Worcester Foundation was intended to be a place for drug companies to send promising chemicals to test their pharmacological effects in animals. Its initial attempts, working primarily for G.D. Searle, were disasters, and the Worcester Foundation almost went out of business several times. The change in Pincus’s fortunes began in 1951, when he met Margaret Sanger at a dinner hosted by Abraham Stone, an executive of the Planned Parenthood Federation. Sanger had been looking for a scientist with his skills who would also be willing to take on a controversial project. Pincus’s collaboration with Sanger and Planned Parenthood started him down the path to the Pill, but their financial support was sporadic and limited. Real progress in identifying an orally active birth control pill did not occur until Katherine Dexter McCormick became involved, lending abundant financial support to the project.

McCormick was the heiress by marriage to the International Harvester fortune. Born into a distinguished family of progressive Chicago attorneys, she was the second woman ever to graduate from MIT, and the first with a degree in the sciences. She gave up plans to study medicine and reluctantly married Stanley McCormick, who within 18 months had to be institutionalized for schizophrenia. They had no children, and lived apart for most of their lives. She was an ardent supporter of women’s suffrage and birth control, having first crossed paths with Margaret Sanger in 1917.






Katherine Dexter McCormick (1875-1967) used her immense fortune to support the clinical development of the oral contraceptive pill. Historians acknowledge that her active assistance was critical to the success of the scientific work.






After Mr. McCormick’s death in 1947, she immediately stopped funding schizophrenia research, and shifted her attention to other projects, in particular birth control. Historians acknowledge the critical importance of her financial backing in accelerating the development of the pill. Because of McCormick and a few other private benefactors, the pill was produced using not a single dollar of public money.

The road to the Pill

With McCormick’s close involvement and funding, Pincus was able to ratchet up his efforts. He and his colleague, Min Chueh Chang, screened hundreds of hormonal products in animal models, and in the end concluded that two – norethindrone, discovered by Miramontes and Djerassi at Syntex in 1951, and norethinodrel, an almost identical compound produced by Frank Colton at G.D. Searle two years later – were the best candidates for human trials. Pincus found that both of these compounds retained potent progestational effects when given orally and were effective contraceptives in a variety of mammals. The next step along the path to approval of the Pill would require studying these compounds in women, and so they turned to a physician: the obstetrician-gynecologist John Rock. Harvard-trained, a pioneer in the treatment of human infertility, and Irish-Catholic, Rock – from the perspective of politically astute birth control activists – was the perfect man for the job. When he was approached by advocates of birth control, Rock had already devoted decades to the study of human reproduction and embryology, and was renowned for being the first – along with his Harvard colleague Arthur Hertig – to successfully fertilize a human egg in vitro.






Harvard gynecologist John Rock (1890-1984) was among the first modern students of human embryology. He performed hysterectomy procedures on his patients very early in pregnancy to obtain his study samples. In 1944, he and his assistant Miriam Menkin published the first report of a successful human in vitro fertilization procedure. He also directed the first human clinical trials of the hormonal contraceptive pill. This research led in 1960 to the formal approval of the Pill for human use.









Pincus and Rock had known each other in the 1930s. Pincus had followed Rock’s attempts to develop methods to detect ovulation and had provided him with chemicals for clinical testing. They had lost touch in the 1940s, but renewed their contact during a chance meeting at a scientific convention in 1952. At the meeting, they learned that both were using the same compounds – estrogen and progesterone – to achieve opposite ends: Pincus, contraception in rabbits, and Rock, conception in infertile women. Within a short time, they agreed to collaborate to develop an oral contraceptive pill.

Clinical trials begin

In 1954, Rock began the first clinical trials under the guise of another fertility study. This was the first-ever human trial of an oral contraceptive. The drug Rock used was the one developed by Colton at Searle, named norethinodrel. Rock and Pincus selected norethinodrel because Djerassi’s version – norethindrone – caused enlargement of male rats’ testicles, and so it was feared that it might have "masculinizing effects" that would hamper its acceptance as an oral contraceptive. What they did not know was that the Searle product they selected had been inadvertently contaminated with a small amount of estrogen, which likely masked the androgenic effect of norethinodrel.

The first human trials of norethinodrel were designed to measure ovulation rates and other effects on the reproductive system. It was first tested in Brookline, Massachusetts, on 50 women who were patients of Rock’s infertility clinic. None of them showed evidence of ovulation while taking the pill. The second group tested included 23 female medical students from the University of Puerto Rico. Within a few months of the trial, half of the women withdrew from the study -- despite veiled threats of adverse academic repercussions by some of the investigators -- citing side effects and cumbersome data collection requirements. A third group -- patients at the local psychiatric hospital in Worcester -- was also given norethinodrel and provided data that eventually led to the approval of large-scale trials to monitor its contraceptive effects specifically.

The first large-scale study of norethinodrel's contraceptive effect was conducted in Puerto Rico, a site not chosen randomly. Comstock laws were never in force there; it was an island with very high population density; and the local government was very cooperative, having established a network of family planning clinics with the assistance of Planned Parenthood years before. In addition, Pincus noted that the US press would be less likely to interfere if studies were conducted away from the mainland. The study began in April 1956 under the supervision of Edris Rice-Wray, an American-trained physician with ties to the medical school and the public health department of Puerto Rico. She had run a family planning clinic there and trained health care workers in contraceptive use. Norethinodrel proved to be a highly effective contraceptive, despite the side effects that began to emerge.

Pincus, Rock and Searle executives were soon satisfied that the contraceptive efficacy of norethinodrel had been adequately demonstrated. In 1957, G.D. Searle marketed norethinodrel as Enovid -- not for contraception but for "menstrual problems". The package insert prominently noted as a "warning" that the drug could induce temporary infertility as a side effect. This was in effect Searle’s trial balloon. They quickly learned that prescriptions for the Pill far exceeded the number of women who had previously complained of menstrual problems.

The 50th anniversary

Three years later -- on May 9, 1960 -- the FDA approved the Pill for use as birth control. This was the first product approved by the FDA that was designed not to treat an illness but to modify a normal physiological process.

The half century defined by the emergence of the science of human reproduction around 1910 and the approval of the first hormonal contraceptive pill in 1960 was a period of dramatic scientific and social change. Science demystified human reproduction, breaking down the process into its basic components and making it accessible to manipulation. These new scientific insights offered some people – a relatively small group of influential individuals persuaded by a materialist worldview and eugenicist principles – an opportunity to advance their ideology of social engineering on a global scale. They were radical political activists working together with brilliant, ambitious scientists and academics, and wealthy agnostics, who eventually succeeded in presenting birth control as a moral imperative for modern societies. An analysis of the effects of the hormonal contraceptive on individuals and societies over the ensuing 50 years should provide important opportunities to value the mystery of the generation of new human life more fully.

For further reading 
+ The Fertility Doctor by Margaret Marsh and Wanda Ronner is a recent account of the tragic life of John Rock written by two sisters, one a historian, the other a gynecologist. They were given access to personal papers by the family of Dr. Rock.
+ Birth Control in America: The Career of Margaret Sanger by David Kennedy steers a course between hagiography and hostility.
+ Katherine Dexter McCormick: Pioneer for Women's Rights takes a benevolent view of its subject, but does not skate over her troubled life.
+ The scholarly book Sexual Chemistry by Lara Marks is an excellent detailed account of the history of the Pill. A scholarly article by Marks and Suzanne White may be found online here.
+ For a less detailed history, see “On the Pill: A Social History of Oral Contraceptives, 1950-1970” by Elizabeth Siegel Watkins.
+ Carl Djerassi – self-proclaimed father of the birth control pill – has written two entertaining books about his own role, “This Man's Pill” and “The Pill, Pygmy Chimps and Degas' Horse”.

Footnotes
[1] For a detailed discussion of the demographic transition model see this Rand Corporation publication.

Popular Attitudes toward Voluntary Death: Suicide and Euthanasia from Antiquity to the Post-Modern

This is a draft of a paper given at the "Dialogue of Cultures" Conference sponsored by the Center for Ethics & Culture at the University of Notre Dame from November 30 to December 2, 2007. The conference intended to consider the difficulties and opportunities of dialogue in a time of conflict. It was inspired by the Regensburg address of Benedict XVI, a lecture given on September 12, 2006, at the German university where he had been a professor of theology from 1969 to 1977. In his talk, the Pope quoted a Byzantine emperor who had been in dialogue with his Islamic opponent during a war between Christians and followers of Muhammad in the 14th century. The point of the lecture was to illustrate the irrationality of using violence to impose religious faith, and to trace the philosophical roots of how such violence could be justified. Following this lecture, Islamic activists rioted in different parts of the world in response to their apparent misreading of the text and the purpose of the address.

While the majority of human deaths are “natural” – the result of aging, injury or disease – the choice of “voluntary” death -- suicide, assisted suicide and euthanasia (SASE) – has been a human response to the problems of life since the beginning of recorded history. In the West, social acceptance of these actions has varied over time. Two periods of transition in popular attitudes toward voluntary death can be identified. The first shift – from approval to disapproval -- may be defined by the transition from the pagan cultures of antiquity toward a new Judeo-Christian civilization built on Greco-Roman foundations. The second shift – which we are currently living through – is characterized by the emergence of cultural attitudes more tolerant of voluntary death driven by a gradual loss of the sense of the transcendence of human existence.

In this talk, I hope to point out some of the historical factors which surrounded and to a certain degree caused these shifts in attitude. These factors include conflicting views of the nature of human life; how diseases were understood and medicine was practiced; and how scientific advances might have influenced the transitions.

For the Greek and Roman societies that preceded the appearance of Christianity, a “good death” could be either natural or voluntary. The practice of voluntary death was widespread, and depending upon the circumstances was considered to be a reasonable act. To relieve the pain or distress of an incurable illness, to avoid a humiliation or indignity, to end an unhappy or tiresome life, or to express a sense of triumph over Fate by ending one’s life voluntarily in old age were all felt to be justifiable or even honorable reasons to end one’s own life. In some cases, the governing regime of a Greek or Roman city would reserve doses of the appropriate poisons to give to those to whom voluntary death was permitted. In certain areas of the Greco-Roman world, suicide was a privilege reserved for the social elites and not permitted to soldiers, slaves or criminals. When a person committed suicide without apparent justification, the corpse was sometimes mutilated and buried shamefully in an unmarked grave.

Pagan antiquity was characterized by a pessimistic attitude toward human existence, and individual human life lacked special significance or value. Within this context, the various philosophical schools shaping Greco-Roman culture reflected an eclectic and tolerant attitude toward voluntary death. In general terms, materialist schools, including the Stoics and Epicureans, adopted permissive or supportive attitudes toward SASE. Their view of the human individual included no sense of personal immortality. Death was annihilation, a natural, personal dissolution. With death, individuality and personality ceased to exist. In contrast, Greek philosophical schools admitting the existence of transcendent, spiritual realities in the Cosmos tended to limit or caution against voluntary death. These groups -- including the Platonic, Aristotelian and Pythagorean schools -- acknowledged the possibility of a personal, individual existence after death.

From the Pythagorean school, a unique group of philosopher-physicians emerged who challenged the attitudes toward SASE that prevailed at the time. Best known among them was Hippocrates. These physicians distinguished themselves for several reasons.

First, they attempted to work according to a set of well-defined professional standards, entered into by means of a solemn oath to the gods. Often the healers of antiquity were merchants of tonics and cure-alls, “root cutters”, slaves or soldiers, and so they did not always enjoy the confidence of the general public. The oath taken by these physicians, recognized today as the Hippocratic Oath, included the first known prohibition of physician assisted suicide in Western history. This prohibition represented a minority opinion at the time.

Second, they tended to reject the widespread, popular notion of disease as divine punishment, and sought natural rather than supernatural explanations for them. They developed a rational process of clinical problem solving – patient interview and examination, diagnosis, prognosis and therapy – which continues to be the standard medical practice today.

Third, they viewed human existence as bound up with the whole of nature, in an orderly not chaotic way. They recognized a tension and balance of opposing qualities throughout the Cosmos, and so they developed an understanding of human physiology made up of four humors (blood, phlegm, yellow bile, black bile) located in four organs (heart, brain, gall bladder and spleen). These elements were correlated with human personality traits or temperaments (sanguine (cheerful), phlegmatic (sluggish), choleric (irritable), melancholic (sad)). Similar patterns could be identified in nature including the four elements of the cosmos (fire, earth, water, air) and the four seasons of the year (summer, autumn, winter, spring), the four natural sensations (hot, cold, dry, wet) and in the four geographical points (east, north, south, west). Diseases resulted from disequilibrium between man’s humors and nature’s elements. The task of medicine was to restore the equilibrium. For over 1500 years -- through the Middle Ages and into the 18th century -- medical practice was based on these principles.

With the progressive collapse of the Roman political order and the gradual emergence of a Christian culture, popular acceptance of voluntary death declined. According to some scholars (for example, Rodney Stark), the clear contrast between pagan and Christian approaches toward the sick was an important factor contributing to this change in attitude toward SASE. In the Greco-Roman world, basic forms of welfare and philanthropy were based on principles of reciprocity and self-interest. There was no public duty toward the sick, and sympathy for strangers was considered irrational.

Below you will read the eyewitness accounts of two plagues: one by the historian Thucydides, describing what he saw during the plague of Athens in 430 BC; the other by Pontius the Deacon and Cyprian, bishop of Carthage, describing the plague that struck that city in 251 AD. Thucydides captures the despair and lawlessness which overcame the Athenians when confronting a devastating epidemic:

“People were afraid to visit one another, and so they died with no one to look after them, and many houses were emptied because there was no one to provide care. … The doctors were incapable of treating the disease because of their ignorance of the right methods…. Equally useless were the prayers made in the temples, consultation of the oracles, and so forth. In the end people were so overcome by their sufferings that they paid no further attention to such things. The great lawlessness that grew everywhere in the city began with this disease, for as the rich suddenly died and the poor took over their estates, people saw before their eyes such quick reversals that they dared to do freely things they would have hidden before, things they would never have admitted they did for pleasure. And so, because they thought their lives and their property were equally ephemeral, they justified seeking quick satisfaction in easy pleasures. As for doing what had been considered noble, no one was eager to take any further pains, because they thought it uncertain whether they should die or not before they achieved it. But the pleasure of the moment and whatever contributed to that were set up as standards of nobility and usefulness. No one was held back in awe either by fear of the gods or by the laws of men: not by the gods because men concluded that it was the same whether they worshipped or not, seeing that they all perished alike; and not by the laws, because no one expected to live till he was tried and punished for his crimes. But they thought that a far greater sentence hung over their heads now, and that before this fell they had a reason to get some pleasure in life. Such was the misery that weighed on the Athenians.”[1]

Compare this account with letters written by the Christians of Carthage. They had just (barely) survived a persecution by the Emperor Decius and now faced a devastating epidemic similar in every respect to the plague of Athens. Pontius describes the reaction of the pagan population, and Cyprian describes the Christian response.

"There broke out a dreadful plague, and excessive destruction of a hateful disease invaded every house in succession of the trembling populace, carrying off day by day with abrupt attack numberless people, every one from his own house. All were shuddering, fleeing, shunning the contagion, impiously exposing their own friends, as if with the exclusion of the person who was sure to die of the plague, one could exclude death itself also. There lay about the meanwhile, over the whole city, no longer bodies, but the carcases of many, and, by the contemplation of a lot which in their turn would be theirs, demanded the pity of the passers-by for themselves."[2]

From Cyprian of Carthage we read:

"This trial -- that now the bowels, relaxed into a constant flux, discharge the bodily strength; that a fire originated in the marrow ferments into wounds of the fauces; that the intestines are shaken with a continual vomiting; that the eyes are on fire with the injected blood; that in some cases the feet or some parts of the limbs are taken off by the contagion of diseased putrefaction; that from the weakness arising by the maiming and loss of the body, either the gait is enfeebled, or the hearing is obstructed, or the sight darkened -- is profitable as a proof of faith. What grandeur of spirit it is to struggle with all the powers of an unshaken mind against so many onsets of devastation and death! What sublimity, to stand erect amid the desolation of the human race, and not to lie prostrate with those who have no hope in God; but rather to rejoice, and to embrace the benefit of the occasion; that in thus bravely showing forth our faith, and by suffering endured, going forward to Christ by the narrow way that Christ trod, we may receive the reward of His life and faith according to His own judgment!"[3]

Clearly, this way of looking at the brutal world in which they lived was different than that of the pagans: the Christians had discovered new values that influenced the way they faced the dilemma of human suffering. There are well documented accounts of Christians during this plague caring for and not abandoning the dying, including those who had lapsed under the recent persecution and those who had been their persecutors.

And so this first transition from approval to disapproval of voluntary death -- a change that might accurately be characterized as revolutionary rather than evolutionary – seems to have been initiated by the heroic attitude of service of ordinary people toward their suffering peers, actions which clashed with the prevailing standards of socially acceptable behavior. In these first steps of transition, we can glean a merging of those rich elements of Greco-Roman culture compatible with Christian anthropology, an anthropology that was first lived, and then explained.

In the Regensburg lecture, Benedict XVI alludes to the “inner rapprochement between Biblical faith and Greek philosophical inquiry” as an event of decisive historical importance. The first systematic argument against suicide in Christian thought appears to occur in the early 5th century, when Saint Augustine argued in “The City of God” against the justification of suicide by Christian women who had been violated by barbarian soldiers. He made his case based on both logical argumentation and reference to Scripture. Elsewhere, he also admonished against assisted suicide, in the following terms:

"…it is never licit to kill another: even if he should wish it, indeed if he request it because, hanging between life and death, he begs for help in freeing the soul struggling against the bonds of the body and longing to be released; nor is it licit even when a sick person is no longer able to live".[4]

Later the philosophy of Thomas Aquinas marked a high point in the philosophical development of Christian culture. It may be considered to be the fruit of centuries of practical human experience, and of the engagement of human reason with the sources of divine revelation. The Thomistic argument against voluntary death was unequivocal and grounded on the analogy between the Creator and his creature.

Over 1500 years -- from the 4th to the 19th centuries -- notions of reverence for individual human life, of the dignity of self-sacrifice on behalf of others, and of the redemptive value of suffering took root in the popular imagination. In time pagan tolerance toward voluntary death came to be considered profoundly objectionable. This objection was expressed throughout society in popular customs, literary works, legal systems and medical practices which formally and enthusiastically prohibited SASE.

Over the same period of time, while the intellectual foundations of a Catholic culture were being laid, devastating epidemics swept through Europe and other parts of the world over and over again. Despite profound ignorance of their causes, despite a lack of effective treatment to cure or prevent them, despite terrible suffering, there was universal opposition toward suicide, assisted suicide or mercy killing during the historical period in which Judeo-Christian attitudes prevailed among the general population. During this time, Christian institutions for the care of the sick grew in efficiency and organization, becoming the direct precursors of today’s hospitals.

Weakening of the collective social disapproval of SASE began with a decline in the moral authority of the Roman Catholic Church. This decline culminated in the Protestant Reformation of the 16th century, and eventually led to the institutionalization of “secular” opposition toward the Catholic Church. In Regensburg, Benedict XVI traced this decline to an idea of God as a hyper-transcendent, unapproachable being who can contradict himself in his creation. As God receded from the human orbit, the social customs, legal principles, academic and political institutions, and economic practices that reflected the principle of reverence for individual human life slowly weakened. Eventually, intellectuals, scientists and clergy would begin to seek justifications for SASE.

The first break probably occurred among English intellectuals who began to debate a variety of rationalizations of suicide and voluntary death in the 18th and 19th centuries, and with particular enthusiasm following the French Revolution. The debates were motivated in part by an apparent epidemic of suicides throughout England in the 18th century, a phenomenon which led to suicide being designated “the English malady”. These debates were limited to elite intellectual circles, with very little sympathy gained among the general population. By the mid 19th century, several additional factors contributed to growing interest in and impetus for justifications of SASE.

First, social and political upheavals (the Reformation, the Thirty Years War, the Enlightenment, the Reign of Terror, the Napoleonic Wars) led to a generalized pessimism and moral relativism throughout Europe.

Second, materialist philosophical projects developed in direct opposition to the basic principles of Christian anthropology. Among these projects was the theory of “transmutation” or “evolution”, which held that matter possessed the intrinsic capacity to randomly develop all known forms of life over very long stretches of time. By offering evidence for the “natural selection” of traits that conferred survival advantages on animals, Charles Darwin provided scientific support for the theory of evolution. His findings were used by others, including his cousin Francis Galton and the philosopher Herbert Spencer, to advocate a social philosophy promoting the improvement of human hereditary traits through selective breeding of humans, birth control and euthanasia in order to create healthier, more intelligent people, to save society's resources and to lessen human suffering. Darwin and others reasoned that charitable efforts to treat the sick and support the mentally or physically disabled could adversely affect the human race, leading to a “degeneration” of the human condition by favoring the survival of “defectives”. Their approach became known as “eugenics” or “the self-direction of human evolution”, a view that found support among intellectual circles in Anglo-Saxon countries first and later -- with particularly terrible consequences -- in Germany.

Third, developments in science led to a surge in popular confidence in the ability of physicians to uncover the basic causes of human diseases and to treat them effectively. Two early scientific advances are especially relevant to our topic.

The first is the verification of the “germ theory” of Koch and Pasteur. It could now be said that the mysterious diseases that repeatedly killed large fractions of European and other peoples over the centuries were caused by invisible living organisms. Soon after, vaccines proved effective in preventing these diseases, and by the first half of the 20th century, antibiotics began to cure many of those infected.

A second event affecting the SASE debates was the discovery of analgesic and anesthetic chemicals. In preceding centuries, physicians had little to offer patients in pain, but toward the middle of the 19th century, chemicals that could reversibly alter human consciousness and pain perception -- including chloroform, ether and morphine – could be given to the sick and dying to ease their distress. Some intellectuals argued that these chemicals should be used to cause the deaths of persons who were suffering excessively. However, no physicians are known to have publicly recommended this practice for their patients until the “euthanasia movement” began, first in Great Britain and soon after -- by the early 20th century -- in the United States.

Historical evidence suggests that the euthanasia movement of the late 19th and early 20th century and the contemporary right-to-die movement are campaigns organized by social elites – including intellectuals, Protestant and Unitarian clergy and wealthy agnostics – to overcome deep-seated popular opposition to voluntary death in its various forms.

In summary, over the course of Western history, changes in social acceptance of SASE have been driven primarily by re-examinations of the dominant cultural views regarding the nature and meaning of human existence in a particular historical period, and to a lesser extent by developments in medical care. One could argue that the first shift -- from approval to disapproval of voluntary death -- was driven by a “grass roots” movement, where ordinary people, inspired by a new understanding of who they were, acted with extraordinary heroism and encouraged others to do the same. The second shift on the other hand presents itself as a “top-down” imposition of ideology on a popular culture which struggles to retain its Judeo-Christian identity. The first shift involved the assimilation of Greco-Roman values of rationality into a lifestyle characterized by the radical gift of self. The second involves the withdrawal of man into himself – according to Benedict, into the “realm of the subjective” -- and his alienation from God, the source of rationality.

------------------------------------------------------
[1] Thucydides, The Peloponnesian War, 50-54.

[2] Pontius, The Life and Passion of Cyprian; the full text may be found at http://www.users.drew.edu/ddoughty/Christianorigins/persecutions/cyprian.html

[3] Saint Cyprian of Carthage, De mortalitate; the full text may be found at http://www.ewtn.com/library/PATRISTC/ANF5-15.TXT

[4] St. Augustine, Epistola 204, 5: Corpus Scriptorum Ecclesiasticorum Latinorum 57, 320. Quoted in Pope John Paul II, Evangelium vitae, 66.