The UK Science Council define science as “the pursuit and application of knowledge and understanding of the natural and social world following a systematic methodology based on evidence”.
As the Council explain, this methodology has some criteria…
“Scientific methodology includes the following:
Objective observation: measurement and data (possibly although not necessarily using mathematics as a tool)
Evidence
Experiment and/or observation as benchmarks for testing hypotheses
Induction: reasoning to establish general rules or conclusions drawn from facts or examples
Repetition
Critical analysis
Verification and testing: critical exposure to scrutiny, peer review and assessment”
Sounds pretty straightforward, doesn’t it? A bit too straightforward, perhaps. We don’t all have a background in science, so it’s not surprising that at times there is some confusion over what science is, how it is done, and what it can actually tell us about the natural and social world. Fortunately for us, three excellent open access articles have been published in The Conversation this week which address some of the issues for communicating science:
Where’s the proof in science? There is none
Astrophysicist Geraint Lewis from the University of Sydney kicks off this fabulous trio by explaining one of the misconceptions of science: proof. Yes, that’s right, science doesn’t really prove anything. So what does it do? In his article, Geraint hands the last word over to Richard Feynman. I think I will too:
“I have approximate answers and possible beliefs in different degrees of certainty about different things, but I’m not absolutely sure of anything.”
Clearing up confusion between correlation and causation
Correlations are all to do with relationships between different factors, like chocolate and Nobel Prize winners. Take a look at the graph below, produced by Franz Messerli of Columbia University. Apparently there is a correlation between a country’s chocolate consumption and its number of Nobel laureates. We can even put a number on this – a ‘P value’ to describe the ‘strength’ of the correlation. In this case the value was 0.0001, which means that “there is a less than one-in-10,000 probability of getting results like these if no correlation exists”. But as mathematicians Jon Borwein and Michael Rose from the University of Newcastle (UON), Australia explain, correlation does not imply causation, which means that increasing your chocolate consumption won’t increase your chances of winning the Nobel Prize. http://theconversation.com/clearing-up-confusion-between-correlation-and-causation-30761
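That “one-in-10,000” figure can be illustrated with a quick permutation test. Here is a hedged sketch in Python using invented numbers (the data below are made up for illustration, not Messerli’s): shuffling one variable destroys any real relationship, so the fraction of shuffles that produce a correlation at least as strong as the observed one estimates the p-value.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-country figures (NOT Messerli's data): chocolate consumption
# (kg per capita per year) and Nobel laureates per 10 million people.
chocolate = [2, 3, 4, 5, 6, 7, 9, 10, 11, 12]
nobels = [1, 2, 2, 6, 5, 9, 12, 14, 16, 25]

r_observed = pearson(chocolate, nobels)

# Permutation test: shuffle one variable to destroy any real relationship,
# then count how often chance alone gives a correlation at least as strong.
# That fraction is the p-value: the probability of "results like these
# if no correlation exists".
random.seed(1)
trials = 10_000
extreme = sum(
    abs(pearson(chocolate, random.sample(nobels, len(nobels)))) >= abs(r_observed)
    for _ in range(trials)
)
p_value = extreme / trials
print(f"r = {r_observed:.2f}, p = {p_value:.4f}")
```

Of course, a tiny p-value only says the correlation is unlikely to be a fluke; it says nothing about what causes what.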
Why research beats anecdote in our search for knowledge
“Certainty is seductive”, writes philosopher Tim Dean from the University of New South Wales, “so we tend to cling to it. We hunt for evidence that buttresses it, while ignoring or rejecting evidence that threatens to undermine it”. Research, on the other hand, embraces uncertainty. It isn’t about finding evidence to back your point of view; it’s about increasing our knowledge – and doing so with a scientifically backed evidence base. For researchers, sometimes that means re-evaluating your point of view. http://theconversation.com/why-research-beats-anecdote-in-our-search-for-knowledge-30654
Image: From Franz Messerli’s paper Chocolate Consumption, Cognitive Function, and Nobel Laureates, published in the New England Journal of Medicine. Franz added a disclaimer to his paper: that he reports “regular daily chocolate consumption, mostly but not exclusively in the form of Lindt’s dark varieties.” Unfortunately the paper is behind a paywall, but if you fancied a look you can find it here: www.dx.doi.org/10.1056/NEJMon1211064
Check out this great video of not only the Peacock Spider dance, but the sounds they make during the dance. Tune into Science Friday on NPR for more.
Hopefully, this should clear up many misconceptions about the scientific method, along with our favorite post on the definition of a theory (no, it’s never going to “graduate” into a Law!) http://goo.gl/4xqQIf
Fareed Zakaria explains some ideas behind conspiracy theories.
Science is a process guided by a simple set of rules scientists follow to make sure that what they do actually works. We have learned over the past four centuries that these are the bare minimum common sense rules to follow. Anything less and you will make mistakes and mislead people into believing falsehoods. Cold hard experience has taught us this over the generations.
Too many people believe science is like some kind of religion, where people just hypothesize and decide that their hypotheses are true, and believe in these hypotheses dogmatically. Evolution and climate change have been specifically targeted by politicians who want people to believe this about science, and all of science suffers as a result of this misinformation. All of science suffers when more and more people misunderstand what it is.
Quote:
1. Make an Observation — “What is happening?”
An Observation is when you notice something in the world around you and decide you want to find out more about it.
2. Define the Question — “Why is this happening?”
Defining the question creates an idea that can be tested using a series of Experiments.
3. Form a Hypothesis — “I think this happens because…”
A Hypothesis is a statement that uses a few Observations, without any experimental evidence, to define why something happens.
4. Perform Experiments — “Let’s test my Hypothesis…”
An Experiment is a series of tests to see if your Hypothesis is correct or incorrect. For each test, record the data you discover.
5. Analyze the Data — “Was my Hypothesis right?”
Analyzing data takes what you found in your Experiments and compares it to your Hypothesis. If needed, perform another Experiment to gather better data.
6. Conclusion — “Experiments show my Hypothesis was…”
Forming a Conclusion presents the Experimental Data and explains how it supports or rejects the Hypothesis. Often, Scientists will take this Conclusion and perform other Experiments on it to discover new things.
(end quote)
7. Request Peer Review — “Did you get the same answer as me?”
Ask other scientists to perform the same Experiments you did, to check your work and make sure you didn’t make mistakes, and to see if they come to the same Conclusion as you did. The more people who get the same answers as you, the more confidence everyone has that you are right.
(thanks to Earl Matthews for sharing this to my stream)
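For fun, the seven steps above read almost like pseudocode. Here is a toy Python sketch of the loop, testing a made-up hypothesis about a biased coin (everything here is invented for illustration, not a real experiment):

```python
import random

random.seed(42)

# 1. Observation: our coin seems to land heads suspiciously often.
# 2. Question: why is this happening? Is the coin biased towards heads?
# 3. Hypothesis: the coin lands heads more than half the time.
TRUE_HEADS_PROB = 0.7  # the hidden "nature" our experiment probes (invented)

def run_experiment(flips=1000):
    """4. Experiment: flip the coin many times and record the data."""
    heads = sum(random.random() < TRUE_HEADS_PROB for _ in range(flips))
    return heads / flips

# 5. Analyze: compare the observed rate against the fair-coin expectation.
observed_rate = run_experiment()

# 6. Conclusion: does the data support or reject the hypothesis?
conclusion = "supported" if observed_rate > 0.5 else "rejected"
print(f"heads rate = {observed_rate:.2f}, hypothesis {conclusion}")

# 7. Peer review: other "scientists" repeat the experiment independently.
replications = [run_experiment() for _ in range(5)]
agree = sum(rate > 0.5 for rate in replications)
print(f"{agree} of 5 replications agree with the conclusion")
```

The point of step 7 shows up in the last two lines: one result could be a fluke, but independent replications that keep agreeing build confidence.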
For those who like science, I’m sure you can appreciate the value of having real scientists on G+ share their knowledge. Zuleyka Zevallos does an excellent job addressing three issues that I also work on:
Correlation does not equal causation
Sensationalized headlines are dangerous
Science outreach without jargon is important
The study itself is fascinating and her discussion of it puts a different emphasis on it that helped me gain more from it. I naturally focused on the imaging aspect.
The images they show are volume differences from one group to another. The very complex part of it is that you have to use a brain atlas to make sure you are comparing the hippocampus in subject A with the hippocampus in subject B, for example. The atlas is necessary because, as the study demonstrates, not everyone has the same size substructures of the brain. John Csernansky is one of my boss’s collaborators, so I can talk to him about the study.
I’ve talked about sensationalized headlines and science outreach before:
Why Correlation is not Causation: Cannabis Use & Schizophrenia
As so often happens, a post from Science on Google+, a community I help moderate, has got me thinking about how easy it is for headlines to quickly lead to #ScienceMediaHype . A post with a link to a news story has the headline, “Teen Marijuana Use Linked with Schizophrenia” (http://goo.gl/w09d7L). As a sociologist with an interest in mental health, this sets off alarm bells. The discussion on our community quickly turned into a debate about the correlation presented in the headline. As a few of our community members pointed out, correlation does not equal causation. My post provides a summary of the actual study and I discuss the sociological problems associated with media coverage of mental illness.
Study on Working Memory
The linked article does an okay job of describing the study but the headline and its focus over-extends the study’s findings. This is the problem with media stories: headlines can shape the way the public understands scientific findings. Journalists present quotes from scientists that fit the angle of their story, putting less emphasis on other aspects of the research.
The study is published in Schizophrenia Bulletin. The sample includes 44 healthy controls, 10 people with a history of cannabis use disorder (CUD), 28 schizophrenia participants with no history of substance use, and 15 schizophrenia patients with a CUD history (http://goo.gl/EQQx2K). Ninety percent of the participants who had schizophrenia already had a CUD history prior to their mental illness. Most of the cannabis users were heavy users, smoking cannabis daily or at least weekly, and most also smoked cigarettes heavily, a variable that the researchers wanted to test.
The study actually tests working memory deficiency, not the cause-effect relationship between schizophrenia and cannabis use. Participants were given memory activities and then brain imaging was used to see their brain patterns. I’ve included the three images from the study. Neuroscience is not my area of research, but I include the diagrams in case other researchers are interested.
The researchers note that without substance abuse, schizophrenia inhibits cortical development. They note less is known about how cannabis affects brain symmetry, though it is known to disrupt the hippocampus, which is related to our limbic system. The hippocampus is linked to information retention for both short and long-term memory as well as other functions like spatial navigation.
The study finds that use of cannabis at an early age affected memory function. The sample had an average age of 24, so patterns associated with longer-term development need further study.
The study observes that cannabis users and the participants living with schizophrenia both have problems with memory tasks. At the same time, the researchers note that the brain asymmetry observed when carrying out memory tasks may be linked to a “neurobiological vulnerability” among schizophrenia sufferers. That is, the observed pattern may be the outcome of a predisposition to substance abuse. For example, the researchers note that similar brain patterns are found amongst cocaine users. So, to put it another way, people with schizophrenia may be drawn to substance abuse. More on this below.
Finally, the authors conclude what many of our community members had been discussing: that there is no direct cause and effect relationship. “Although our data may be compatible with a causal hypothesis, the cross-sectional data do not allow us to test causal relationships or reject alternative explanations. Thus, the shape differences could be explained as either due to the effects of chronic cannabis abuse or the presence of biomarkers that characterize a vulnerability to the effects of cannabis.”
Their research notes that with laws changing, cannabis may be more readily available to youth with a predisposition towards schizophrenia. This makes their research all the more pivotal, both in terms of better understanding how brain development is affected by schizophrenia, and the social, health and subjective reasons why youth may engage in cannabis use at different stages of their disease.
Research on Correlation
There are many studies that have linked self-reported cannabis use in early adulthood to an increased risk of developing schizophrenia later in life. One of the most widely cited studies involves Swedish conscripts of 1969, with a follow up study confirming the results (http://goo.gl/7uw0hK). Nevertheless, the direct association between cannabis use and schizophrenia is disputed. For example, there is more to be learned about the relationship between the time that someone starts using cannabis and their first schizophrenic episode (http://goo.gl/waOgfR).
There are three hypotheses to explain why young people with schizophrenia use cannabis (http://goo.gl/ZOSk7H).
1) Cannabis triggers the disease in people with a predisposition, as is suggested by the study at hand.
2) People with schizophrenia use cannabis as a way to self-medicate or manage their experience of the disease. The research does not support this strongly, though cannabis use may give sufferers a perception of control over their disease. This is not something to be dismissed and requires further research. One study from 2012 suggests that one component of marijuana, cannabidiol, may be used to treat schizophrenia. Unlike THC, the component responsible for the intoxicating effects of marijuana, cannabidiol may reduce the symptoms of psychosis, but further research is needed to fully test this treatment (http://goo.gl/NPGKjm).
3) Cannabis use can trigger schizophrenia through confounding variables. This is the prevailing medical view, though the association is not so neatly woven together.
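Confounding is easy to demonstrate with a toy simulation. In the invented model below, an underlying “vulnerability” score independently drives both cannabis use and symptom severity; neither causes the other, yet they correlate. The model and all numbers are illustrative only, not the actual epidemiology of schizophrenia:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 5000

# Hypothetical confounder: a "vulnerability" score that independently
# raises both cannabis use and symptom severity (purely illustrative).
vulnerability = [random.gauss(0, 1) for _ in range(n)]
cannabis_use = [v + random.gauss(0, 1) for v in vulnerability]
symptoms = [v + random.gauss(0, 1) for v in vulnerability]

# Neither variable causes the other, yet they correlate strongly...
raw_r = pearson(cannabis_use, symptoms)
print(f"raw correlation: {raw_r:.2f}")

# ...and the correlation largely vanishes once the confounder is controlled
# for, here by comparing only people with near-identical vulnerability.
stratum = [i for i in range(n) if abs(vulnerability[i]) < 0.1]
within_r = pearson([cannabis_use[i] for i in stratum],
                   [symptoms[i] for i in stratum])
print(f"correlation at fixed vulnerability: {within_r:.2f}")
```

This is why the cross-sectional data in the study at hand cannot settle causation: a third factor can manufacture a correlation out of nothing.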
Problems with Media Coverage of Mental Illness
The article that covered this story went for a shock value headline. Headlines prime audiences about what they should expect from a scientific article. Sensationalised headlines invite personal opinion based on individual experience. This may range from “I smoke pot and I’m fine” to disparaging comments about “crazy” people. This is the problem with the way in which news headlines shape public discussions of science. Should people read an article before discussing the soundbite? Of course. Does this happen in practice? Not as much as it should. The idea that sensational headlines “sell” is flawed. Shock headlines sometimes get people to click on a link. On a social media site like Google+, some people will go off the headline and the text in a post. In fact, most of the social media research shows that people rarely want to click away from the social site they’re currently on (more on this in another post). This is why we stress in our community that posts should summarise the science behind an article in detail. The link should be there for people who want to read further (and if you write a good post, people will be more willing to click on a link!).
While research suggests that people are sceptical of media reports, not everyone is trained to think about research the way scientists do. Paywalls also stop people from reading the study for themselves (as well as the technical language used in academic journals). This is why scientists need to step up and debunk bad science journalism.
Moving Beyond Individual Speculation
The research has established a correlation, but causation is disputed. The evidence strongly suggests that cannabis compounds schizophrenia, and the gravity of this finding should not be downplayed. Nevertheless, media stories that run with a causation headline only serve to spread misinformation.
Bad science writing invites cannabis users to say: “I use it and I’m fine!” It also serves to reinforce a cultural stereotype that people with mental illness might have avoided their condition if they’d only stayed away from recreational drugs. In the end, the causation narrative does more damage and serves only to stigmatise both cannabis users and schizophrenia sufferers as deviants. People living with schizophrenia are doubly stigmatised, as their cannabis use makes it seem as if they are wilfully contributing to their illness. Research is seeking to better understand why some people with schizophrenia rely on cannabis.
Mental health is a serious matter. It shouldn’t be reduced to an alarmist headline. Mental illness is part of the human condition. Addressing the treatment of schizophrenia requires compassion, not judgement. Most of all it needs solid scientific research, not dismissive condemnation based on personal or social conjecture.
Thanks Zuleyka Zevallos for this fantastic post about Popular Science disabling comments on their website. I can’t agree more that scientists need to reach out more to the general public. There is so much misinformation out there.
Here’s the LiveScience summary of the article you referenced.
Trolls’ Online Comments Skew Perception of Science
One way to deal with the “nasty effect” is to delete comments and block people. Some people will say that doing so is censorship. I have to thank A.V. Flox for writing an outstanding post about why that is not the case.
Of course that only works on your posts. For other posts, you’re at the mercy of the owner of that post. I know there is a ton of misinformation about vaccines and pharmaceutical companies, and that has the potential to actually harm people. I go out of my way to explain to people how herd immunity is compromised when one person believes in the anti-vaccine nonsense. This is too important to let a few trolls get in the way.
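The arithmetic behind herd immunity is worth spelling out. The standard textbook rule is that transmission stalls once more than 1 - 1/R0 of the population is immune, where R0 is the basic reproduction number (the average number of people one case infects in a fully susceptible population). A small sketch; the R0 figures are rough, commonly cited ballpark values, not precise measurements:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune so that each
    infected person passes the disease to fewer than one other person
    on average: 1 - 1/R0."""
    return 1 - 1 / r0

# Rough ballpark R0 values for illustration only.
for disease, r0 in [("measles", 15), ("polio", 6), ("seasonal flu", 1.5)]:
    threshold = herd_immunity_threshold(r0)
    print(f"{disease:>12}: R0 ~ {r0:>4} -> ~{threshold:.0%} must be immune")
```

The formula makes the stakes concrete: for something as contagious as measles, coverage has to stay above roughly 93%, so even a small pocket of unvaccinated people can compromise herd immunity.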
On the plus side, I can say how nice it is when my science posts cause someone to reach out to me to ask about science, especially young students. Here’s a post about one example.
How Informed Science Can Counter the “Nasty Effect”
Popular Science recently announced they were closing down their comments section. This has led to many debates, including discussions on our community. I will discuss the role of public science moderation in the context of one scientific study that Popular Science used to support its decision to close its comments section. The research shows that people who think they know about science are easily swayed by negative internet discussions, but these people are more likely to be poorly informed about science in the first place. For this reason, popular science publications and scientists need to step up their public engagement, not shy away from it due to the so-called “nasty effect” of negative comments made through social media.
Problems with Measuring the “Nasty Effect”
In support of its decision to close down comments on its blog, Popular Science cited a study published in July in the Journal of Computer-Mediated Communication. The study set out to measure online incivility, or as the researchers call it, the “nasty effect” that online comments can have on people’s understanding of emerging technologies.
The researchers surveyed around 2,300 people, measuring their “familiarity” with science (in their study, nanotechnology). The researchers did not measure levels of general education nor scientific knowledge specifically. They measured socioeconomic status by aggregating education and income. This variable was not tested against knowledge. This matters because education shapes not only our ability to think critically. It also gives us the mental tools to process new information, as well as giving us the research skills to seek out alternative and reputable sources of information. Scientific training teaches us how to read articles and data from an objective perspective, using objective theories, concepts and methods. More importantly, it teaches us to argue from a place of knowledge, not from emotion or personal opinion.
The researchers did not measure where people got their information, lumping different newspapers into one category, TV in another, and then the internet. The problem here is that if people are generally getting most of their information from poor sources, their thinking is already coloured by misinformation.
The researchers find that irrespective of their subjective ideas about how much they think they know about science, negative comments influenced people’s opinions. Religious people and those who already held low levels of support for nanotechnology were more likely to perceive a risk in this technology after reading negative discussion. The researchers do not engage with these findings.
Understanding, support and risks associated with science might be understood as the socialisation of science. These biases don’t just exist in individual minds; they are shaped by prior education and exposure to poor scientific debate, whether through family culture, religious schooling, or media use.
What this tells us is that people who think they know about science are swayed by others’ negativity. The distinction between “surface” science and “deeper” science might help put this into perspective.
Surface versus Deep Science Communication
Many people think they know science because they find science news and certain factoids and images interesting. This might be seen as “surface level” science. Pop science is lots of fun, but there is wide scope for science to be misleading when it is reported incorrectly. This is the tip of the iceberg as far as science communication is concerned.
Nurturing deeper level scientific engagement is achieved by reading the science directly. This is difficult if you don’t have a science degree, because science is written in technical language. Plus, articles are hidden behind paywalls that require institutional access. Unless you have a personal fortune to invest in these collections, it’s hard to get access.
The other way to achieve deeper scientific knowledge is by engaging with scientists directly. This is where blogs and social media can help make science debates more accessible. In a community setting, the conversation is shaped through moderation. This was not measured in the study, and this is something that Pop Science has essentially given up on.
How might opinions be swayed when real scientists jump in to lead, moderate and comment on popular science discussions?
Science is about informed debate, not personal opinions. There’s no point putting out science into the public if we give up on informed discussion.
A Call for Scientists to Support Public Debate
It’s interesting that Popular Science is keeping their other social media channels open for discussion, suggesting perhaps that they are happy to support debate so long as it’s not in their direct domain (their website). It seems they are washing their hands of moderation, letting people comment on Facebook, Twitter and so on without feeling the same pressure to respond to comments. This will only feed the same “familiarity” with science, without the informed discussion. In this way, it only contributes to poor public engagement with science, rather than supporting spaces where the public might learn to think more critically about science.
I sympathise with the difficult task of moderation from personal experience here and in the other communities that I help moderate. It is much easier to publish in journals read by our peers and to present at conferences where everyone already has the same training. But if scientists and popular science news publications give up on public debate, what’s the point of putting out science into the world? The public will continue to write and debate science, picking up little snippets – which are often incorrect. The only outcome is that science continues without informed discussion.
If you’re a qualified scientist and you’re a part of our community, consider contributing to the discussion. We’d like to see more posts written by experts who can make science more accessible. Even if you tell us about your latest research project, or if you do a critical summary of your latest publication, this would improve science outreach. Don’t just throw out a link to your blog post or copy and paste your abstract, tell us about the science!
I was intrigued that so many scientists wrote about their research on this thread about our future community hangouts (http://goo.gl/iLzZCI). I wonder why more of these people are not writing to the rest of the community about their work. Could there be a fear of the “nasty effect”? Is it simply too daunting to write for a larger audience, or is there a fear that it might be too time consuming? Write about what you know. Write about the science you’re currently reading. Write about your lab work. Remember that basic concepts, theories and methods that seem old hat to you would be interesting to others. Link to original sources to give people an opportunity to read the science directly if they have access.
It’d be great to see more of you sharing your research with our community.