Pages

Tuesday, October 21, 2014

The Woeful State of Our Knowledge of the Brain, and the Director of the National Institute of Mental Health




The major theme of this blog is how family systems issues have been denigrated in psychiatry and even psychology in favor of a disease model for everything.  

For this reason, you might think that I would be highly critical of the National Institute of Mental Health and its director, Dr. Thomas Insel. He has led the agency to focus almost entirely on neuroscience to the exclusion of research into family dynamics and other types of social psychological phenomena. Without an understanding of those latter factors, I believe it is impossible to really understand even the neuroscience, let alone human behavior in general.

Dr. Thomas Insel

Insel has been particularly supportive of President Obama’s BRAIN initiative (Brain Research Through Advancing Innovative Neurotechnologies), with its emphasis on such things as biomaterials, engineering, and nanoscience. He has been critical of the field’s diagnostic manual, the DSM-5, because it is limited to observable signs and symptoms rather than the causes (etiology) of the various mental disorders.

While I am certainly critical of his reluctance to direct research dollars toward social psychological and other such issues in mental disorders, I have nothing against studying the brain in more detail.

I was pleasantly surprised by a recent article about Insel in Clinical Psychiatry News (August 2014) showing that he is at least realistic about both the current state of our knowledge about the brain and the prospects for our ever really understanding it.

I was particularly pleased about the following facts he emphasized to the newspaper:

“We can get cells to turn into neurons, but getting cells to turn into circuits is still a challenge. We don’t even know what a neural circuit is. We don’t know where it begins, where it ends; we don’t know how big it has to be; we don’t know exactly what the dynamics are.”

“…it may turn out that our brains simply aren’t smart enough to figure out how they work…It may be just a cosmic joke that we’re evolved enough to ask these questions, but not evolved enough to answer them. We’ll have to see.”

And then there are his comments about the whole issue of emergent properties that characterize anything as complicated as the human brain. As an analogy, think of a car. Is it just a collection of bolts, screws, gears, and metal shafts? Well, yes and no. 

It is definitely composed of those things, but if you look at those things alone (reductionism), you’d be hard pressed to understand a vehicle that can transport humans and other cargo over long distances. Other properties of the car emerge from the interactions of the parts.

So for the brain:

“…there is this whole body of work now that says, ‘Don’t worry about those hundred cells, or even those individual cells, and don’t even worry about looking at the circuitry because the key activity in the brain that is associated with attention, and thought, and consciousness is very slow oscillatory activity
…these oscillations that go in and out of the cortex create the dynamic of the cortex – some people call these cortical avalanches – seem to be pretty important for the way the mind works.

It would be like saying we want to know what’s on a television screen, and that you actually do better if you step back and get the whole picture, but don’t worry about any given pixel, because the emergent property of that television show is actually the whole thing together.’”

You can say that again!

Tuesday, October 14, 2014

Book Review: How Not to Be Wrong by Jordan Ellenberg





A while back (11/2/11) I reviewed the book Stats.con by James Penston. That book discussed how the statistics used in randomized clinical trials can be highly deceptive. How Not to be Wrong also covers some aspects of statistical misuse, in more detail, and certainly in a much more entertaining way. Some of his comments are funny as hell. 


Jordan Ellenberg

Consider the widespread use of a statistic called the p value, which estimates the probability that a result at least as striking as the one observed could have arisen by chance alone rather than reflecting an actual meaningful finding. A study is generally considered positive if the p value is 5% or less.

5% is of course not 0%. If there is really nothing going on, about one study in twenty will still come out looking positive purely by chance. But what happens if journals publish only the positive studies and not the negative ones, when there may be a large number of negative studies, and when the positive results are never reproduced (replicated) in a second study? Well, people start believing things that are not true, that's what.

The reason is that, as the author points out, improbable things actually happen quite frequently, especially if you do lots and lots of things, like experiments. 
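To make the arithmetic concrete, here is a small simulation (my own sketch in Python, not anything from the book): every hypothesis tested is false, yet roughly one experiment in twenty still clears the p < .05 bar. If only those runs get written up, the published record looks impressively positive.

```python
import random
import statistics
from math import erf, sqrt

random.seed(0)

def run_null_experiment(n=30):
    """Compare two groups drawn from the SAME distribution (i.e., no real effect)
    and return an approximate two-sided p-value."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    # crude normal approximation to the p-value -- good enough for illustration
    return 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))

experiments = 1000
published = sum(1 for _ in range(experiments) if run_null_experiment() < 0.05)
print(f"{published} of {experiments} null experiments were 'publishable' (p < .05)")
# Expect a number in the neighborhood of 50: a journal that prints only these
# will be full of findings that are pure noise.
```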

Another issue he mentions is that, if your sample size is too small, the chances increase dramatically that one of your subjects will be an outlier who artificially distorts the average for whatever characteristic you are measuring. With a small sample, you are more likely to get a few extra prodigies or slackers in a study of people's ability to perform certain tasks. A famous example: if Bill Gates walks into a bar with a few other people, the average guy in the room is a billionaire.  
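A quick sketch of the Bill Gates effect, with made-up numbers purely for illustration:

```python
# Ten bar patrons with ordinary net worths (dollars)
patrons = [40_000, 55_000, 32_000, 71_000, 48_000,
           60_000, 25_000, 90_000, 38_000, 52_000]
print(sum(patrons) / len(patrons))   # about 51,000: a sensible "average guy"

patrons.append(80_000_000_000)       # Bill Gates walks into the bar
print(sum(patrons) / len(patrons))   # about 7.3 billion: one outlier swamps the average
```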

Here’s how the author starts out a discussion of the p value problem (pages 145-146):

"Imagine yourself a haruspex; that is, your profession is to make predictions about future events by sacrificing sheep and then examining the features of their entrails...You do not, of course, consider your predictions to be reliable merely because you follow the practices commanded by the Etruscan deities. That would be ridiculous. You require evidence. And so you and your colleagues submit all your work to the peer-reviewed International Journal of Haruspicy, which demands without exception that all published results clear the bar of statistical significance.    
      
Haruspicy, especially rigorous evidence-based haruspicy, is not an easy gig. For one thing, you spend a lot of your time spattered with blood and bile. For another, a lot of your experiments don't work. You try to use sheep guts to predict the price of Apple stock, and you fail; you try to model Democratic vote share among Hispanics, and you fail…The gods are very picky and it's not always clear precisely which arrangement of the internal organs and which precise incantations will reliably unlock the future. Sometimes different haruspices run the same experiment and it works for one but not the other — who knows why? It's frustrating…
      
But it's all worth it for those moments of discovery, where everything works, and you find that the texture and protrusions of the liver really do predict the severity of the following year's flu season, and, with a silent thank-you to the gods, you publish
      
You might find this happens about one time in twenty.
      
That's what I'd expect, anyway. Because I, unlike you, don't believe in haruspicy. I think the sheep's guts don't know anything about the flu data, and when they match up it's just luck. In other words, in every matter concerning divination from entrails, I'm a proponent of the null hypothesis [that there is no connection between the sheep entrails and the future]. So in my world, it's pretty unlikely that any given haruspectic experiment will succeed.
      
How unlikely? The standard threshold for statistical significance, and thus for publication in IJoH, is fixed by convention to be a p-value of .05, or 1 in 20... If the null hypothesis is always true — that is, if haruspicy is undiluted hocus-pocus —then only one in twenty experiments will be publishable.
      
And yet there are hundreds of haruspices, and thousands of ripped-open sheep, and even one in twenty divinations provides plenty of material to fill each issue of the journal with novel results, demonstrating the efficacy of the methods and the wisdom of the gods. A protocol that worked in one case and gets published usually fails when another haruspex tries it, but experiments without statistically significant results do not get published, so no one ever finds out about the failure to replicate. And even if word starts getting around, there are always small differences the experts can point to that explain why the follow-up study didn't succeed."

The book covers many subjects about which the non-mathematically-inclined can learn to think in a mathematical way in order to avoid coming to certain wrong conclusions and to zero in on correct ones. Many of these, however, are irrelevant to this blog – the chapters on lotteries come to mind. I of course found those parts a bit less interesting. But the chapters relevant to medical studies are so right on.

Another important topic the author covers is known mathematically as regression to the mean. This phenomenon can lead, for example, to overestimates of the genetic component of human traits, and it explains why fad diets always seem to work at first but later on everyone seems to forget about them. As mentioned, when you average any measurement applied to human beings, the averages can be deceptive.  

In addition to the sample size considerations described above, you can get into trouble if you start with a sample of people who are, on average, higher on the relevant variable than the average person in the general population.

If two tall people marry, their progeny will usually be, on average, tall compared to others in the general population. However, they are not all that likely to be taller than their parents. As Ellenberg states, “…the children of a great composer, or scientist, or political leader, often excel in the same field, but seldom so much as their illustrious parents” (p. 301). Their heredity mingles with chance environmental considerations, and pushes them back toward the population average. That is the meaning of regression to the mean.

To understand this, think about those who embark on weight loss diets. One needs to consider the fact that most people’s weight tends to fluctuate a few pounds either way depending on a lot of chance factors, such as their happening by an ice cream truck. And when are people most likely to start a diet? When their weight is at the top of their range! So by the law of averages, they are probably in many instances going to lose weight whether they diet or not. But when they do diet, guess what happens? They attribute the loss to the fantastic new diet!
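Here is a rough simulation of that dieting scenario (a toy model of my own, not the author's): each person has a stable "set point" weight plus a few pounds of random day-to-day fluctuation. Enroll only the people who happen to be near the top of their own range today, and they will weigh less at follow-up on average even though nothing was done to them.

```python
import random

random.seed(1)

set_points = [random.uniform(140, 220) for _ in range(10_000)]  # each person's stable weight

def weigh(set_point):
    """One weigh-in: the set point plus a few pounds of random day-to-day fluctuation."""
    return set_point + random.uniform(-5, 5)

baseline = [weigh(sp) for sp in set_points]

# Enroll as "dieters" only the people currently near the top of their own range
dieters = [(sp, w) for sp, w in zip(set_points, baseline) if w - sp > 3]

followup = [weigh(sp) for sp, _ in dieters]
start = sum(w for _, w in dieters) / len(dieters)
later = sum(followup) / len(followup)

print(f"average weight at enrollment: {start:.1f} lb")
print(f"average weight at follow-up:  {later:.1f} lb  (no diet was involved)")
# The "dieters" drop about four pounds on average purely by regression to the mean.
```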

I cannot say for certain, but I wonder if studies on borderline personality disorder (BPD) yield misleading results because of regression to the mean. Long-term follow-up studies on patients with the disorder seem to indicate that it goes away after a few years in a significant percentage of subjects. This finding is misleading, however, when you look closer. 

To make the BPD diagnosis, the subject needs to exhibit 5 of the 9 possible criteria. Many of the "improved" subjects merely went from 5 criteria down to 4 of them, and were therefore not diagnosed with BPD any longer. Actually, they became just what we call "subthreshold" for the disorder. Their problematic relationships, however, were still pretty much the same.

These results could mean that subjects with BPD naturally vacillate between meeting criteria for the disorder and being subthreshold, or between exhibiting a higher number of the criteria and a lower one. If so, then a significant proportion of the subjects who qualified for the diagnosis at the beginning of a long-term follow-up study were at their worst when they entered it. The study results may therefore reflect regression to the mean, and say little else of significance about the long-term prognosis for the disorder.
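That conjecture can be sketched the same way (purely hypothetical numbers, not data from any actual BPD study): suppose each subject's underlying severity never changes, but the number of criteria met at any single interview bounces around it by a point. Selecting only those who meet five or more criteria at intake guarantees that some will look "remitted" at follow-up.

```python
import random

random.seed(2)

def criteria_met(true_severity):
    """Criteria counted at one interview: true severity plus a little random bounce."""
    return max(0, min(9, true_severity + random.choice([-1, 0, 0, 1])))

# A pool of people whose underlying severity hovers right around the cutoff of 5
pool = [random.choice([4, 5, 6]) for _ in range(10_000)]

cohort = [sev for sev in pool if criteria_met(sev) >= 5]              # diagnosed at intake
still_positive = sum(1 for sev in cohort if criteria_met(sev) >= 5)   # re-interviewed later

print(f"met criteria at intake:          {len(cohort)}")
print(f"still met criteria at follow-up: {still_positive}")
print(f"now 'subthreshold':              {len(cohort) - still_positive}")
# A sizable fraction "remits" even though no subject's underlying severity changed at all.
```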

Other important statistical issues the author discusses clearly and brilliantly include assuming that two variables are related in a linear fashion when they are not (non-linearity: cause-and-effect relationships that are not based purely on an increase in one variable always leading to either an increase or a decrease in another); torturing the data until it confesses (running multiple tests on your study data, controlling for different things, until something significant seems to pop up); and the following problem inherent in studies designed to see if two things, like being married and smoking, are correlated: 

"Surely the chance is very small that the proportion of married people is exactly the same as the proportion of smokers in the whole population. So, absent a crazy coincidence, marriage and smoking will be correlated, either positively or negatively."

Anyone who is serious about critically evaluating the medical literature owes it to themselves to read this book.

Tuesday, October 7, 2014

Electronic Health Records: A Slippery S.(L) O.A.P.




In the last few years, the federal government has been pushing doctors to adopt software for keeping medical records electronically, and in response several companies have climbed all over one another trying to sell Electronic Health Record (EHR) systems. 

In Medicare, the law authorized higher fee-for-service rates to “reward” those doctors who began to use them - which in actuality is a payment penalty for those who did not. The cost of the software is, by the way, exorbitant - almost prohibitively so for doctors in individual practices or small practice groups in relatively low-paying specialties like pediatrics and psychiatry.

The EHRs were supposed to increase efficiency and produce cost savings. Lab results from all of a patient’s doctors would be instantly available so that tests would not be repeated needlessly. Doctors would have instant access to prior records without having to mail away for them, and every current doctor would be able to see what the patient’s other current doctors were doing.

Those are admirable goals. Unfortunately, there are some problems with the currently available software that actually have had the effect of negatively impacting patient care. 

Whether these problems were foreseen or unforeseen is a debatable proposition. George Dawson, in his blog Real Psychiatry, certainly makes the case that some of the “changes” we have seen in recent practice patterns caused by the use of EHRs are highly consistent with the goals of the money-grubbing, profiteering-at-the-expense-of-patient-health managed care insurance industry. Or as we like to call it, mangled care.

I complained about some aspects of one EHR system I was using in a previous post. I must admit I had been wondering if I might be unusual in having noticed significant problems.

Well, the American Medical Association (AMA) has noticed them, and they have had enough! According to the AMA Wire on 9/16/14, the AMA has belatedly pointed out the obvious and has taken some action. Well, sort of. Of course, it will probably go absolutely nowhere.

It’s no secret that many physicians are unhappy with their electronic health record (EHR) systems, thanks in large part to cumbersome processes and limited features that get in the way of patient care. Now a panel of experts has called for EHR overhaul, outlining the eight top challenges and solutions for improving EHR usability for physicians and their patients.

This new framework for EHR usability—developed by the AMA and an external advisory committee of practicing physicians and health IT experts, researchers and executives—focuses on leveraging the potential of EHRs to enhance patient care, improve productivity and reduce administrative costs. Here are the eight solutions this group identified to address the biggest challenges.

In my previous post on this issue, I discussed the extraneous forms like treatment plans and symptom checklists that waste my time, as well as the difficulty in locating specific information in the overly-long patient record. In this post, rather than list the eight proposed "solutions," I will instead focus on a problem that was near the top of the concerns expressed in the above article:

Poor EHR design gets in the way of face-to-face interaction with patients because physicians are forced to spend more time documenting required information of questionable value. Features such as pop-up reminders, cumbersome menus and poor user interfaces can make EHRs far more time consuming than paper charts.

Although physicians spend significant time navigating their EHR systems, many physicians say that the quality of the clinical narrative in paper charts is more succinct and reflective of the pertinent clinical information. A lack of context and overly structured data capture requirements, meanwhile, can make interpretation difficult.


EHRs need to support medical decision-making with concise, context-sensitive real-time data. To achieve this, IT developers may need to create sophisticated tools for reporting, analyzing data and supporting decisions. These tools should be customized for each practice environment.

Ah yes, the quality and interpretability of the proverbial doctor’s progress notes have gone down the toilet.

So what makes a good progress note? A good progress note does not just describe what the patient looks like at that particular visit, coupled with a plan for what the doctor is going to do next. It should also indicate what the doctor is thinking about the patient, the patient's symptoms, and the diagnosis. Specifically, which of the patient’s symptoms have changed, and if so, what is the change due to? The medication prescribed? Side effects? A misdiagnosis? A placebo effect?

Does a change in the patient’s clinical picture suggest an alternate diagnosis? Are there any side effects from the medications the doctor prescribed? How does the patient's clinical presentation relate to any treatment that has been rendered? Do any observed changes in the patient's condition mean the doctor should change the treatment or continue it as is? If a change in medication is planned, which symptoms is the doctor trying to get under better control? If there has been no response to treatment, to what does the doctor attribute the lack of improvement?

In reading over a medical report, another doctor can fairly easily ascertain the answers to the above questions from a relatively brief narrative. On the other hand, the answers to these questions cannot be ascertained from a simple checklist. No how, no way.

In the old days when I trained, we were instructed to use a so-called “S.O.A.P.” note. The abbreviation stands for the different types of information that should be included in the note:

Subjective: A description of the report from the patient regarding his or her own symptoms and overall improvement or lack thereof, as well as any reports of side effects. In the case of antidepressants, the timing of any changes in symptoms should also be ascertained and described, to help rule out a placebo effect.

Objective: What does the doctor observe when looking at the patient? Are there any changes in the patient's physical examination? What changes in the patient’s outward mental status have transpired since the last visit? What are the results of any lab tests that have been ordered?

Assessment: What does the doctor think these results mean regarding the patient’s diagnosis and treatment?

Plan:  What is the doctor going to do next to handle any problematic side effects of treatment, or to handle any failure of the patient to improve?

In psychiatry, a good progress note should also contain information about any changes in the patient's psychosocial situation - particularly any stressors: job changes, divorce, major family battles, deaths, children getting into trouble and the like. This information is important in determining whether any changes in the patient's clinical picture are due to environmental stressors or psychological reactions, and not due to the medication or its failure.
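For what it's worth, nothing about electronic records per se forbids this. A system could keep the S.O.A.P. skeleton while leaving each section as free narrative text; here is a minimal sketch (hypothetical field names and sample content, not any vendor's actual schema):

```python
from dataclasses import dataclass

@dataclass
class SoapNote:
    """A progress note that keeps the S.O.A.P. sections as free-text narrative
    rather than reducing them to checkboxes."""
    subjective: str    # the patient's own report: symptoms, side effects, timing of changes
    objective: str     # exam findings, changes in mental status, lab results
    assessment: str    # what the doctor thinks it all means for diagnosis and treatment
    plan: str          # what will be done about side effects or lack of improvement
    psychosocial: str  # stressors since the last visit: job changes, divorce, family battles

note = SoapNote(
    subjective="Reports sleeping better within two days of starting the medication.",
    objective="Less psychomotor agitation than at the last visit; labs pending.",
    assessment="Improvement came too quickly for a true drug effect; placebo response suspected.",
    plan="Hold the dose steady and reassess in four weeks.",
    psychosocial="Started a new job; ongoing conflict with eldest daughter.",
)
print(note.assessment)
```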

At the multispecialty clinic where I work part time, the useful S.O.A.P. progress note format is at risk of being abandoned. There are still good notes, but many of the progress notes contain almost no indication of what the doctor was thinking about the effects, let alone the pros and cons, of the patient’s treatment.  

Between scrolling through all of the checklists and the extraneous, next-to-worthless notes, I find myself wasting an amazing amount of valuable time that I could be spending actually talking to my patients.

I certainly wish the AMA well in addressing these problems. I’m not holding my breath.

Tuesday, September 30, 2014

Dependency Conflicts in People Who Practically Raised Themselves




In my post of December 17, 2013, Older Siblings and Neglectful Parents, I described one interesting pattern of family dysfunction that I have been seeing in my practice. It developed in families in which the parents had abdicated their responsibilities as parents in one way or another. I showed how an older sibling would sometimes step into the void thus created to take the parental reins, so to speak, and how the younger siblings would later displace their anger at the parents onto the “substitute.”

This post is about a different (or at times additional) pattern that may develop in families in which the parents are not doing their job.

In this particular situation, the parents were emotionally unavailable to their kids most of the time as the children grew up. To complicate matters further, they did not set any limits on their children’s behavior during the teenage years. Teens from such families would be allowed to come and go as they pleased. They might start skipping school or not doing homework - and the parents would do nothing about it. They might come home drunk or stoned, and the parents would not seem to even notice. They might start getting into minor trouble with the authorities.

One might say that children in this kind of environment pretty much raise themselves. Some continue to get into trouble and do poorly, while others settle down and make something of themselves. In either event, when it comes to their romantic relationships, anyone who might be interested in them eventually finds themselves in a very specific damned-if-you-do, damned-if-you-don’t bind.

Children who had been neglected in this way are missing something important, and they want it. They secretly long for someone who will love them, show an interest in them, take care of them, and even set limits with them in all the ways their parents did not. And from the outside, they seem to other people to need those things desperately. They often seem out of control in some way, and appear to be in need of someone to give them proper guidance.

So what happens when someone tries to take care of them? They get angry or even rageful! The logic goes something like this: “I had no one in my life who parented me the way I needed. I had to take care of everything myself and make all of my own decisions. How dare you tell me how to live my life???"

In therapy speak, this is one form of a classic dependency conflict: I desperately want someone to take care of me and guide me, but I resent it when anyone tries to do that. It’s as if they are saying, “Where were you when I really needed you? No Johnny-come-lately is going to question me about my own decisions!”

Add to this another family systems issue: the neglectful parents had often been neglectful because deep inside they felt themselves too inadequate to parent well. They secretly feel guilty about what their children had to do to survive. If their child seems independent and self-sufficient, they feel less guilty. 

On the other hand, seeing someone do for their child what they did not makes them feel even guiltier, so there is pressure on the child to be self-sufficient and not depend on others. If the child is not independent, the parents may become depressed or act out self-destructively.

Rather than having a “Dependent Personality Disorder,” as the DSM might suggest, these "adult children" are actually counter-dependent. They are deathly afraid of their own dependency needs, and continue to try to manage their lives all by themselves, just like they always had to.

In a way, this type of family situation is the polar opposite of the intrusive helicopter parenting that is also a common occurrence in today's American culture. Despite being the seeming opposite of neglectfulness, helicopter parenting can also lead to a situation in which its victims look like they need someone to take care of them but then resent it when anyone tries. 

This follows from something I call the principle of opposite behaviors: opposite family behavior leads to the same or a very similar result. It occurs because the extreme, polarized behaviors of the parents represent opposite poles of the exact same conflict - or two sides of the same coin, if you will. 

In a future post, I will show how an internal conflict in parents such as these can lead to a situation in which two brothers or two sisters develop characteristics that seem like extreme opposites, or how one generation of family members can go to one extreme with a particular behavior, the next to the other extreme, and the third back to the first extreme. That phenomenon would be next to impossible to explain if behavior were primarily determined by one's genetic propensities.

Tuesday, September 23, 2014

Hidden Assumptions in Conclusions about Research Data in Psychology





In evaluating the conclusions of the authors from the results of any “empirical” study, two important questions one should ask oneself are: What assumptions are the authors making, and are those assumptions justified?

In today’s world, particularly in studies of the psychology of human beings, study authors often make assumptions which they do not bother to spell out in their reports, so their conclusions may seem logical. However, if they were to spell out those assumptions, everyone would immediately recognize them as completely and obviously preposterous.

In his book How Not to Be Wrong, Jordan Ellenberg mentions an illustrative anecdote about the importance of hidden assumptions involving a group of government scientists during World War II. Their task was to determine where on warplanes to best place armor, since too much armor weighed the planes down and decreased their maneuverability. The scientists closely examined the airplanes that were returning home safely.

At first, they inspected the planes in order to determine where the bullet holes mostly were. They figured that the parts of the plane that were hit the most often should be where the most armor should be placed, since (as the thinking went) those places must be where being hit was the most likely. Strangely, the engine seemed to be the part of the planes most frequently spared from bullet holes.

Wrong strategy. They should have been looking at where the bullet holes mostly were not. The planes hit in those places were the ones that were not making it home safely! If the engine got hit, the plane crashed. If a plane had been hit in the places they were looking at, it was apparently much less likely to crash, since it made it home. The armor should therefore be put around the engine. But only one scientist in the group made this seemingly obvious point before everyone else saw how obvious it was! And these were some of the best minds in the field.
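Here is a small simulation of that survivorship trap (the numbers are invented purely to show the logic): hits are spread evenly over the plane, but engine hits usually bring the plane down, so the planes available for inspection back at base are systematically missing engine damage.

```python
import random

random.seed(3)

SECTIONS = ["engine", "fuselage", "wings", "tail"]

def fly_mission():
    """Each plane takes a few hits spread uniformly over its sections.
    An engine hit downs the plane 90% of the time; hits elsewhere are survivable."""
    hits = [random.choice(SECTIONS) for _ in range(random.randint(1, 4))]
    survived = all(h != "engine" or random.random() > 0.9 for h in hits)
    return hits, survived

holes_on_survivors = {s: 0 for s in SECTIONS}   # holes counted on RETURNING planes only
for _ in range(10_000):
    hits, survived = fly_mission()
    if survived:
        for h in hits:
            holes_on_survivors[h] += 1

print(holes_on_survivors)
# The engine shows far fewer holes than the other sections -- not because it is rarely hit,
# but because the planes hit there rarely made it home to be counted.
```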

So let me take a study that I recently found during my weekly literature search on borderline personality disorder (BPD) on the medical database Ovid.  I’m just going to discuss the abstract, since that is all most doctors are ever going to read, if they read anything at all. (The authors did not spell out their assumptions any better in the body of the paper, but the odds are no one is going to actually read that anyway).  

Here is the abstract:

Authors: Nicol K, Pope M, Sprengelmeyer R, Young AW, Hall J.
Title: Social judgment in borderline personality disorder.
Source: PLoS ONE [Electronic Resource]. 8(11):e73440, 2013.
Abstract:
BACKGROUND: Those with a diagnosis of BPD often display difficulties with social interaction and struggle to form and maintain interpersonal relationships. Here we investigated the ability of participants with BPD to make social inferences from faces.
METHOD: 20 participants with BPD and 21 healthy controls were shown a series of faces and asked to judge these according to one of six characteristics (age, distinctiveness, attractiveness, intelligence, approachability, trustworthiness). The number and direction of errors made (compared to population norms) were recorded for analysis.
RESULTS: Participants with a diagnosis of BPD displayed significant impairments in making judgments from faces. In particular, the BPD Group judged faces as less approachable and less trustworthy than controls. Furthermore, within the BPD Group there was a correlation between scores on the Childhood Trauma Questionnaire (CTQ) and bias towards judging faces as unapproachable.
CONCLUSION: Individuals with a diagnosis of BPD have difficulty making appropriate social judgments about others from their faces. Judging more faces as unapproachable and untrustworthy indicates that this group may have a heightened sensitivity to perceiving potential threat, and this should be considered in clinical management and treatment.

Now, many other studies have shown that patients with BPD are actually better at reading faces than controls, so in trying to draw any conclusions of course we have to figure out why different studies get different results. But ignoring that for the time being, let us just look at this one study abstract in isolation.

The conclusion was that the subjects with BPD had "significant impairments" and "difficulties" in making judgments. To be fair, the authors also used the words "heightened sensitivity to perceiving potential threat," which is actually a far more accurate description of their findings. But it is the words "impairments" and "difficulties" that will jump out at most readers. And in the body of the paper, those terms are in fact more in line with the conclusions discussed by the authors than the phrase "heightened sensitivity."

In using these terms, the authors are making some rather strange assumptions. A clue that they are doing so is also in the abstract: it indicates that the patients with BPD were far more traumatized as children than the controls.

That being the case, it is highly likely that the people in the social environment of the BPD subjects were far more likely to have hostile intentions than those of the controls. In such an environment, you’d have to be an idiot not to generally have a high index of suspicion when evaluating the faces of people. 

The assumption the authors seem to be making is that the BPD subjects were just naturally worse at reading faces, rather than that they were justifiably more suspicious of other people - the latter conclusion being the one that would be predicted by error management theory.

So the assumptions they seem to be making that need to be questioned are:

1. We can just ignore the social context of research subjects in making these sorts of judgments about people’s abilities.

2. People rarely if ever use their brains to develop strategies for dealing with other people - strategies that have little to do with their innate abilities.

Clearly, those are really stupid assumptions.

Tuesday, September 16, 2014

Faking Psychiatric Conditions for Fun and Profit


The nurse was on to McMurphy's ruse...because she was actually paying attention


A story from the New York Times on  August 27, 2014 caught my eye:

“Ex-Police Officer Pleads Guilty to Playing Role in a Disability Fraud Scheme  By JAMES C. McKINLEY Jr.

A former New York City police officer accused of playing a major role in a scheme to defraud the Social Security Administration pleaded guilty on Wednesday and agreed to testify against his co-defendants. Prosecutors said that the former officer, Joseph Esposito, was one of four people who concocted a scheme that bilked the federal government out of more than $27 million. 

The group allegedly helped scores of police officers, firefighters and other city workers obtain disability benefits by feigning mental illnesses, in some cases by falsely claiming they had been psychologically scarred by the terrorist attacks on the city on Sept. 11, 2001…

Court papers… described Mr. Esposito’s role as pivotal. He recruited many of the people who applied for the benefits and introduced them to three others accused of helping to run the operation …referred most of the applicants to two psychiatrists for treatment and to establish a year’s worth of medical records. On several telephone calls recorded by the authorities, Mr. Esposito was captured coaching applicants on how to mimic the symptoms of depression and post-traumatic stress when being examined by doctors…

With diagnoses and treatment records from the doctors in hand, Mr. Hale and Mr. Lavallee would complete and submit applications to Social Security, using stock phrases like “I don’t have any interest in anything” and “I am up and down all night long.”

Psychiatric symptoms cannot be measured objectively under the best of circumstances. Doctors must rely on patients' self-reports or on how they appear in the examining room. And people can be excellent actors in situations like this without ever having taken an acting lesson in their lives. Faking a psychiatric syndrome is in most cases extremely easy to do.

So it does not necessarily follow that a psychiatrist is not doing his or her job correctly if he or she is deceived into thinking a patient meets DSM criteria for one disorder or another. This is especially true when a patient is only seen in the doctor’s office, where an appointment may last a relatively short time. It is obviously more advantageous if a psychiatrist has a way of observing patients when the patients do not realize they are being observed. In a hospital setting, for example, patients may let down their guard during a quiet afternoon spent socializing with other patients, not realizing that a nurse is watching them out of the corner of her eye.

However, the job of the schemer/faker has gotten considerably easier, whether they are trying to fake a disability claim, looking for an amphetamine prescription, or even trying to enroll in a study for which subjects get paid. This is because diagnostic interviews have gotten shorter and shorter, and doctors have begun to rely on the use of shortcuts such as symptom checklists – two things that I have been ranting about frequently on this blog. 

Under these circumstances, dishonest patients do not have to worry much about being caught in an apparent contradiction, nor do they need concern themselves with describing their symptoms in detail in a way which might seem to the examiner atypical for the condition they are faking. The doctors ask no follow-up questions, the answers to which might then raise suspicions that they are possibly being duped.

The use of the all-important follow-up questions is particularly vital in sorting out the clinical significance of a psychiatric symptom that seems to be present. A good psychiatrist functions much like a good investigative news reporter. He or she can look for signs that the patient does not know exactly what the doctor needs to know, is exaggerating symptoms, or is possibly making some unspoken assumptions. The doctor can then ask for further clarification, which is an excellent technique for unmasking possible fabrications or half-truths.

Another recent trend that makes it easier for a patient with a hidden agenda to fake a psychiatric disorder is the tendency of some doctors to type away on an electronic medical record while the patient reports his or her symptoms - instead of making eye contact with patients and observing them carefully while they talk. Cues to fakery that involve facial expressions and body language will of course be missed. Not to mention that the doctor's attention is being split between two tasks instead of one, making all clues to dishonesty less likely to be noticed.



Of course, even a doctor who does a real and complete diagnostic interview the way it is supposed to be done can still be faked out. But doctors who do not do one are far more likely to be duped. Apparently, many of them do not really care if they are – as long as they get paid.

Tuesday, September 9, 2014

Corruption of the Evidence Base in Evidence-based Medicine




"‘Published' and ‘true' are not synonyms" ~ Brian Nosek, psychology professor at the University of Virginia in Charlottesville

Publish or perish. Obtain outside funding for research or lose your teaching position. 

Academic medicine and psychology have always been like that to some extent, but it’s been getting worse and worse lately. It’s a wonder anyone wants to become an academic these days. In academic medicine, there has also been a new push: invent something you can patent like a new drug or device that will make a profit for the University.

Is it any wonder that some academics start cheating when their livelihoods depend on it and they are under this sort of pressure? Or that business interests would try to take advantage of their plight to enhance their own sales and bottom lines? This sort of hanky panky has been increasing at an alarming rate.

Now of course, I am not arguing against the practice of doing clinical research and randomized controlled studies of various treatments, or against experimental psychology. These activities remain important even in light of all the corruption that is going on. They are one of the major differences between real scientists and snake oil salesmen, like those we see in much of the so-called “complementary and alternative medicine” industry. And just because a study is industry funded, that does not automatically mean that it is dishonest and not to be trusted.

What the increasing level of corruption means is that we have to pay more and more careful attention to the details of the studies that do make it into print.

First, we have to be on the lookout for outright fraud. An article published in the Proceedings of the National Academy of Sciences by Fang, Steen and Casadevall (October 16, 2012, 109[42], pp. 17028-17033) found that the percentage of scientific articles retracted because they were found to contain outright fraudulent data has increased about tenfold since 1975!

Journals also retract articles because of problems with a study that do not involve actually faking data, but the Fang article found that only 21.3% of retractions were attributable to innocent error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (stealing other people's material) (9.8%). 

The authors also found that journals often soft-pedal the reasons for the retractions that they do make. Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. Zoe Corbyn of the journal Nature opined that authors and journals may use opaque retraction notices to save face or avoid libel charges.

Second, we have to pay more attention to the design of the studies, the outcome measures used, and the statistical tricks employed to arrive at the study’s conclusions. We have to look to see if the abstract of a study, which is what most practitioners read if they read anything at all, actually summarizes the findings correctly.

We also have to look closely to see whether the results are suspect because of the way the sample of subjects was selected and/or screened. I described in a previous post an excellent example of authors completely mischaracterizing the sample of subjects in a journal article published in the premier medical journal of our times.

Research in psychology and psychiatry has problems that are unique to those fields and which are very important. In fact, Cook and Campbell, in their book Quasi-Experimentation: Design and Analysis Issues for Field Settings, point out that randomized trials in our field are not truly experimental in the scientific sense, but are instead what they call “quasi-experimental.” 

This is primarily because of a major problem in such studies that concerns the nature of subjects selected for a study.

People are by their very nature extremely complicated. True scientific experiments must assign subjects at random to various treatment or placebo groups. However, in the social sciences, subsets of research subjects are very likely to differ from each other in many ways other than the presence of the treatment whose effects are being tested.

Conclusions from studies about cause and effect are much more complicated in psychiatry and psychology than they are in, say, physics. In physics, the matter under study is isolated from external sources of influence. Obviously, most controlled studies in medicine do not keep the subject under wraps, and under the complete control of the experimenters, for months at a time. 

Second, in physics, other variables that change over time can be kept out of the experiment's environment. Not so with aspects of people’s lives. Third, measurement instruments in psychology are often based on highly subjective criteria such as self-report data or rather limited observations interpreted by the experimenter.

Cook and Campbell also show how experimenters can manipulate the subjects in ways that can determine in advance the results they are going to get. This is because experimenters are usually dealing with variables that are distributed continuously rather than classified one way or the other on the basis of some discrete characteristic. As examples, how much does someone have to drink in order to be classified as an alcoholic? How often do you have to engage in risky impulsive behavior to be classified as having poor impulse control?



Both potential causes and potential effects in psychology and psychiatry are distributed in the usual manner - in a bell-shaped distribution curve. Let's say that the variable (on the "X axis") above is how often subjects engage in risky behavior. Some people will rarely do so, others will do so often. Both extremes are seen infrequently in the population, however. Most people fall somewhere in the middle. So in determining whether one group of subjects (say, people with a certain diagnosis) is more prone to risky behavior than another, where should we draw the line on the X axis in determining who has a problem with this and who does not?

As it turns out, a potential cause for any given effect can appear to be necessary (the effect never appears unless the cause does), sufficient (the presence of a cause alone is enough to produce the effect, although other causes may produce the effect as well), both, or neither in a given experiment depending on where the experimenter chooses to draw the line for both the cause and the effect in determining whether they are present or absent, as shown in the following graph:



At points A, the cause appears to be both necessary and sufficient. If points B are used, the cause appears to be necessary but not sufficient. Dichotomize the variables at points C, and the cause appears to be sufficient but not necessary! A tricky experimenter can use this knowledge to design the study in advance to get the results he or she wants to get.
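Here is a sketch of how that works (the cutoffs and the data are invented for illustration): generate a continuous "cause" and a continuous "effect" that are genuinely related, then dichotomize both at different places and watch the apparent logic of the relationship change.

```python
import random

random.seed(4)

# A continuous cause and a continuous effect that are genuinely (but noisily) related
data = []
for _ in range(100_000):
    cause_level = random.gauss(0, 1)
    effect_level = cause_level + random.gauss(0, 0.3)
    data.append((cause_level, effect_level))

def dichotomize(cause_cut, effect_cut):
    """Split both variables at the chosen cutoffs and report how often the 'effect'
    appears with and without the 'cause'."""
    with_cause = [e > effect_cut for c, e in data if c > cause_cut]
    without_cause = [e > effect_cut for c, e in data if c <= cause_cut]
    print(f"cause cut {cause_cut:+.1f}, effect cut {effect_cut:+.1f}:  "
          f"P(effect | cause) = {sum(with_cause) / len(with_cause):.2f},  "
          f"P(effect | no cause) = {sum(without_cause) / len(without_cause):.2f}")

dichotomize(0.0, 0.0)   # like points A: the cause looks (roughly) necessary AND sufficient
dichotomize(0.0, 1.5)   # like points B: necessary (effect rarely appears without it) but not sufficient
dichotomize(1.5, 0.0)   # like points C: sufficient (cause nearly guarantees the effect) but not necessary
```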

In fact, there are probably no necessary or sufficient causes for most medical and psychiatric conditions, but only risk factors which increase the likelihood that a condition will appear. To steal an analogy from another field of medicine, there will always be people who smoke a lot but who do not get lung cancer, and there will always be people who never smoke who do.