Alarming science discovery…

Here’s a news article, In cancer science, many ‘discoveries’ don’t hold up, about an oncology researcher who tried to replicate some preclinical studies before moving forward with potential drug development (thanks to a post via Branimir Vasilić: http://goo.gl/wJyMx).

The news article summarizes a commentary in the journal Nature, titled, Drug development: Raise standards for preclinical cancer research.

Notice the difference in the titles? Here’s a similar case, where Rajini Rao points out that the news article is titled Eggs unlimited: an extraordinary tale of scientific discovery, versus Potential Egg Stem Cells Reignite Debate in the journal Science. That discussion is here: http://goo.gl/Yq1ls

I want to focus on the oncology debate since I do cancer research. However, the comments from the article, and my own, are relevant to many areas of research.

http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

Drug development: Raise standards for preclinical cancer research

C. Glenn Begley & Lee M. Ellis

Nature 483, 531–533 (29 March 2012) doi:10.1038/483531a

Published online 28 March 2012

Here are 5 reasons why oncology research might not be replicated

Endpoints

As the authors point out, endpoints in cancer research can be less quantitative compared to, say, statin research, where cholesterol level is the endpoint. In cancer studies, tumor size is sometimes an endpoint. As an imaging person, I can say my field frequently frowns on this, because a drug can cause tumor swelling, i.e., an increase in size, while actually causing tumor cell death. Not everyone has access to expensive imaging equipment or the skills to use the many imaging modalities, so a lot of cancer drug researchers rely on caliper measurements of the tumor, even though most would acknowledge that a tumor is rarely a perfect sphere whose diameter alone gives its size.
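
To make the caliper issue concrete, here is a quick sketch (my own made-up numbers, not from the commentary; the length × width² / 2 formula is just a common caliper convention): depending on which axis you treat as the "diameter", a sphere assumption can be off by a factor of several.

```python
# Sketch: two common ways to turn caliper readings into a tumor "volume".
# The modified-ellipsoid formula (length x width^2 / 2) is a widely used
# caliper convention; the sphere formula assumes one diameter tells all.
# The numbers below are made up purely for illustration.
import math

def volume_ellipsoid(length_mm, width_mm):
    """Modified-ellipsoid caliper volume, in mm^3."""
    return length_mm * width_mm ** 2 / 2.0

def volume_sphere(diameter_mm):
    """Volume if the tumor really were a perfect sphere, in mm^3."""
    return math.pi * diameter_mm ** 3 / 6.0

# An elongated tumor, 12 mm long and 6 mm wide:
print(volume_ellipsoid(12, 6))  # 216 mm^3
print(volume_sphere(12))        # ~905 mm^3 if you measured the long axis
print(volume_sphere(6))         # ~113 mm^3 if you measured the short axis
```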

Cutting edge

The authors suggest that some of the irreproducible results could be due to publications that were cutting edge, i.e., a researcher found something completely new or unexpected and published quickly. In addition, some technology used in a publication might simply not be available to Amgen; oxygen imaging, for example, is available in maybe 3-4 labs in the world.

Competition

Although this may sound terrible to the general public, there have been cases where researchers have omitted a key ingredient or step from a method in order to keep a competitive advantage.

Narrow scope

Begley and Ellis state that the robustness of some results was checked. For example, a publication might get phenomenal results with a particular tumor cell-line or model. When Amgen tried to broaden the scope, e.g., trying a different cell-line or model, the “narrow” promising results turned out to be less robust.

Statistics

Another issue is improper statistics. Quite often scientists haven’t had enough statistical training, or do not consult a statistician, and therefore use an incorrect method or interpretation.
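
As one concrete example of the kind of trap involved (my illustration, not one the authors give): testing many endpoints without correcting for multiple comparisons almost guarantees a spurious "significant" result.

```python
# Sketch: the multiple-comparisons trap. If a study tests k independent
# endpoints and no true effect exists, the chance of at least one
# "significant" p < 0.05 grows quickly with k.
alpha = 0.05
for k in (1, 5, 10, 20):
    family_wise = 1 - (1 - alpha) ** k
    print(f"{k:2d} endpoints -> P(at least one false positive) = {family_wise:.2f}")
# 20 endpoints -> P(at least one false positive) = 0.64
# A Bonferroni correction would test each endpoint at alpha / k instead.
```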

Conclusion

Interestingly, Begley mentions that the studies do not use enough predictive biomarkers (an area of focus for my research, to which I hope to contribute a solution). The authors’ suggestion to show tumor models where there is a negative result is often not possible when a grant funds a particular cancer or model. I totally agree with the selective-presentation aspect of their paper. Unfortunately, I don’t think it is uncommon for a publication to have a figure that is stated to be “representative” of all the data, when in fact it was carefully selected as the best example. As some commenters on the online version of this Nature article state, it’s interesting that Begley and Ellis do not list the publications they tried to replicate, thereby making it impossible to replicate their own analysis. Transparency?

Edit: I want to be clear that I don’t condone some of these reasons for the lack of reproducible publications. I want to emphasize that there are some reasons why a drug company might not be able to replicate a publication, and therefore there is no need for Reuters or Yahoo News to say the sky is falling for scientists.

For ScienceSunday

#sciencesunday #scienceeveryday

Comments

  1. Rajini Rao
    April 8, 2012

    Chad Haney, first of all, kudos on the excellent observation on the differences in choice of titles between the more sensationalistic lay press and the measured, more scientifically accurate ones in peer-reviewed professional journals. I wish these media would tone down the hype and do the public the favor of presenting more nuanced reports. Second, the problem is that we’ve become very good at curing cancer in mice. The standard assay of using athymic/nude/immunodeficient mice to graft in human tumors gives promising results that fail to hold up in the clinic. I have a story to add about the importance of proper data management and statistical analysis: the Duke scandal, in which clinical trials had to be halted because data were mishandled, apparently began with improper merging of Excel spreadsheets!
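
    To make that failure mode concrete, here is a hypothetical reconstruction (not the actual Duke data): pasting columns side by side instead of joining on a sample ID silently scrambles which sample got which label.

    ```python
    # Hypothetical sketch of the spreadsheet-merge failure mode (not the
    # actual Duke data): pasting columns positionally instead of joining
    # on a shared sample ID.
    import pandas as pd

    labels = pd.DataFrame({"sample": ["S1", "S2", "S3"],
                           "group":  ["treated", "treated", "control"]})
    # The measurements arrive sorted in a different order.
    values = pd.DataFrame({"sample": ["S3", "S1", "S2"],
                           "value":  [0.2, 1.8, 1.6]})

    # Wrong: positional paste, like copying a column between spreadsheets.
    wrong = pd.concat([labels, values["value"].reset_index(drop=True)], axis=1)
    print(wrong)   # S1 is assigned 0.2, which actually belongs to S3

    # Right: an explicit join on the shared sample ID.
    print(labels.merge(values, on="sample"))
    ```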

    The cutting edge criticism applies to all fields of science: sadly, the most retractions are associated with the highest-profile journals.

  2. Chad Haney
    April 8, 2012

    Branimir Vasilić I don’t think I properly thanked you for passing along the news article. Your comments will hopefully stimulate others to join the discussion.

    There are two main points to my post. First, the news media tend to create titles and articles that sensationalize issues. Second, I wanted to defuse the “alarming” aspect of the news article by giving some examples of why publications might not be replicated.

    I’m having difficulty editing the post. Once I sort that out, I will modify the conclusion to make it clearer that I don’t necessarily condone the sources of non-repeatability. I agree that the authors bring up some serious issues with publishing research.

    Regarding not reproducing results because the conditions are very particular and therefore may not generalize, maybe the biologists/physiologists can give some insight on that. However, here’s another example. Say a group develops a transgenic mouse that has a certain gene knocked out. Suppose that gene is responsible for resistance to a particular therapy. If that model helps one understand the resistance, is it not still valuable even though it may be extremely difficult to replicate? Another example: Dr. Pollard at Notre Dame stumbled upon a rat that spontaneously produces prostate cancer. That rat colony is unique to him and Notre Dame, if I’m not mistaken. So is his work invalid because no one else has that rat colony? Again, I’m not saying the system doesn’t have its faults. I’m trying to say that there are legitimate reasons why some work may not be reproduced by a drug company and that the news article title was sensationalized.

    I do agree with the authors that journals should create a system where comments could be posted for articles to highlight any issues that the reviewers may have missed.

    As with some of the online commenters on the article, I agree that some of the suggestions by the authors are either disingenuous or naive. In light of the budget for NIH research, their comment that “cancer researchers must commit to making the difficult, time-consuming and costly transition towards new research tools..” is great in theory, but in practice, if a foundation gives you funding to research prostate cancer, you can’t justify using the funds to also test colon cancer. With everyone fighting for the ever-decreasing research dollar, some labs are letting people go. Unfortunately, that means there are not enough people to do some of the things the authors suggest. If there were more funding and the US government put more emphasis on research and science, we would have the resources to implement some of their suggestions.

  3. Chad Haney
    April 8, 2012

    Very well said, Branimir Vasilić. To follow up on my comment about the rats from Dr. Pollard, I had to go through a lot of paperwork so that he could bring a few of his rats to the University of Chicago. When you add the legal issues of dealing with a drug company, I could see where an academic researcher might not want to bother “helping” Begley & Ellis (e.g., a non-disclosure agreement).

  4. Rajini Rao
    April 8, 2012

    Too many variables in biology, particularly in whole animal studies. That’s why they’re so hard. Don’t forget to applaud the efforts of researchers who attempt to see the forest for the trees. The innate complexity and variability of a physiological system is orders of magnitude higher than a physicist may encounter, IMO. Researchers use genetically defined, inbred strains of mice and follow standard (institution approved) protocols. Yet, mice will respond differently because we cannot control for all the differences between individual mice, let alone compute any potential differences between one animal facility and another. That’s why regulations for animal use are so tight. If a scientist wants to bring a colony of mice to our facility, they have to pass lengthy quarantine and testing procedures for fear of spreading mites and all manner of disease.

    Having worked with purified proteins and tightly controlled enzymatic reactions, or even so-called simple unicellular organisms myself, I have a deep appreciation for animal models which we have been using for the past year. One just gets used to seeing larger error bars and “whacky results” (translation, we don’t understand them).
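
    To put a rough number on those error bars (a back-of-the-envelope sketch with made-up figures): the required group size grows with the square of the animal-to-animal scatter relative to the effect.

    ```python
    # Sketch: how inter-animal variability drives sample size. Standard
    # normal approximation for a two-sided, two-sample comparison at
    # alpha = 0.05 and 80% power: n per group ~ 2 * (1.96 + 0.84)^2 / d^2,
    # where d = effect / standard deviation. All numbers are made up.
    def n_per_group(effect, sd):
        d = effect / sd
        return round(2 * (1.96 + 0.84) ** 2 / d ** 2)

    print(n_per_group(effect=100, sd=50))    # tight, in vitro-like scatter: ~4
    print(n_per_group(effect=100, sd=200))   # noisy, in vivo scatter: ~63
    ```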

  5. Chad Haney
    April 8, 2012

    Rajini Rao I hope our other science friends jump in here. I agree with some of the concerns that the authors bring up, but I disagree with the way that Yahoo News and Begley and Ellis present/discuss the issues. Branimir Vasilić mentioned that he’s a physicist and you are a biologist. I’m an engineer who was drawn from chemical engineering to biomedical engineering for the very issue you mention: variability. In chemical engineering you can more or less control all of the variables in an experiment. In biomedical research, you hope to control as many variables as possible, but with an intact living animal, you simply can’t control everything. I love that challenge. Doing research non-invasively is a challenge that is translational to the clinic.

  6. Zephyr López Cervilla
    April 8, 2012

    I bet if they try to reproduce the experiments using a computer simulation model, then they will be able to reproduce the same results about 100 times out of 100.
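
    After all, a simulation with a fixed random seed really does reproduce itself every single time, for example:

    ```python
    # A deterministic "experiment": with a fixed seed, the simulation gives
    # the same result 100 times out of 100 -- unlike mice.
    import random

    def simulated_experiment(seed=42):
        rng = random.Random(seed)
        return sum(rng.gauss(0, 1) for _ in range(100))

    runs = [simulated_experiment() for _ in range(100)]
    print(len(set(runs)))  # 1 -- perfectly "reproducible"
    ```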

  7. Zephyr López Cervilla
    April 9, 2012

    Chad Haney said:

    1. “The news article summarizes a commentary in the journal Science, titled, Drug development: Raise standards for preclinical cancer research.”

    2. “Notice the difference in the titles? Here’s a similar discussion where +Rajini Rao points out that the news article is titled, Eggs unlimited: an extraordinary tale of scientific discovery vs. Potential Egg Stem Cells Reignite Debate in the journal Science.

    3. “As some commenters on the online version of this Science article state, it’s interesting that Begley and Ellis do not list the publications they tried to replicate, thereby limiting the possibility to replicated their article.”

    – It is not Science, it is the journal Nature: nature.com/nature/journal/v483/n7391/full/483531a.html

    4. “there is no need for Yahoo news to say the sky is falling for scientist.”

    – It is not Yahoo News, it is Reuters: reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

    Sharon Begley works for Reuters (Senior U.S. Health & Science Correspondent) and previously covered science and medicine at Newsweek and WSJ:

    <>

    Yahoo was simply reproducing a Reuters report word for word.

    Additionally, the issue has also been covered by other specialized news outlets, such as Medical Xpress, with a similar interpretation of the paper:

    <<(Medical Xpress) -- C. Glenn Begley, formerly head of cancer research at pharmaceutical giant Amgen and Lee M. Ellis a cancer researcher at the University of Texas, have published a paper together in Nature that is sure to cause a storm of controversy in the cancer research community. They say they have found that more than ninety percent of papers published in science journals describing "landmark" breakthroughs in preclinical cancer research, describe work that is not reproducible, and are thus, just plain wrong.>>

    <>

    <>

    medicalxpress.com/news/2012-03-duo-preclinical-cancer-results-plain.html

    Finally, the editors of Nature, a peer-reviewed journal, also considered the lack of reproducibility of enough significance to accept Ellis and Begley’s article for publication.

  8. Chad Haney
    April 9, 2012

    Zephyr López Cervilla Thanks for catching the mix-up in journals. However, you made the same mistake I made. The other thread I referred to did talk about the journal Science, and I apparently still had my mind on that thread. I’ve corrected the post.

    I don’t think it really matters if it’s Reuters, Yahoo News, or any other news outlet. I pointed out that the titles differ and that one is more sensationalized.

    I think it’s hypocritical for Begley and Ellis to not list the “landmark” publications that they attempted to replicate. I’ve edited the post to emphasize that Begley and Ellis wrote a commentary, not a peer-reviewed article. I say at the end that I don’t condone some of the issues and I agree that the system is not perfect.

  9. Zephyr López Cervilla
    April 9, 2012

    Those were comments based on their own research over the last decade, not simply their personal opinion.

    Besides, this lack of reproducibility hasn’t been an isolated finding. For instance, here is another paper, published as correspondence in the journal Nature Reviews Drug Discovery, tackling the same issue:

    – Prinz, F., Schlange, T. & Asadullah, K. Believe it or not: how much can we rely on published data on potential drug targets? Nature Rev. Drug Discov. 10, 712 (2011).

    Open access: nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html

    In my opinion it isn’t hypocritical for Begley and Ellis not to list the publications that they couldn’t replicate. As Begley explains:

    <<Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings. “The world will never know” which 47 studies — many of them highly cited — are apparently wrong, Begley said.>>

    So the alternative, disclosing the publications whose results they failed to replicate, would have meant losing the cooperation of the original authors in trying to find out the cause of the lack of reproducibility of those results.

  10. Rajini Rao
    April 15, 2012

    Chad Haney and Zephyr López Cervilla, sorry that I am late to this discussion. I just had a chance to read the Nature commentary by Begley and Ellis and frankly, I am appalled that they are allowed to get away with essentially maligning the work of their peers without showing us any evidence to support their claims. Whether confidentiality agreements were signed, or whether teams of 100s of scientists at Amgen were involved, this is just hearsay and cannot be considered credible information. No wonder it is just a “commentary”. The scientific method is to publish a peer reviewed paper in a reputable journal that rebuts the earlier finding. This is hard work, but contradictory findings do get published all the time, and articles do get retracted as a result. This is the only valid way ahead.

    Zephyr López Cervilla, what if you were a scientist and your competitor published such a commentary stating that all the work done in your field was not reproducible? If this happened in my field, you bet that there would be some nasty letters to the editor for a start.

    Chad’s comment on sensationalization by the lay press is a good one. It cuts both ways: sometimes, modest gains are hailed as the next miracle cure, other times flaws and errors are blown out of proportion as impending crisis. Both do a disservice to the public.

    I’m also not clear exactly what aspect of the papers being evaluated was not found “robust”. Specifically, the response of a cancer to a particular drug, or the paper in its entirety (much more serious)? Often, drug response is only part of a publication, especially if it is mechanistic in focus, or trying to establish the role of a gene in cancer. In such cases, the knockdown of the gene alters tumor proliferation or metastasis, and then a drug known to target the particular protein under study is introduced into the mouse as “proof of principle” to show that similar trends are observed. It would be good to know if the papers in question focused on testing a lead compound against the tumor. That would be more likely if the research was coming out of an academic lab instead of industry.

    Finally, Zephyr López Cervilla, I was not sure if you have access to the Nature paper as it is behind a paywall. In case you do not, I’m cutting/pasting some of the comments in response to the article. I would be interested in your thoughts (sorry for making this post too long, but I know you specialize in long posts Zephyr, so you won’t mind).

    ———–

    Comments taken from Nature Commentary by Begley and Ellis:

    Greg M said: The claims presented here are pretty outlandish. Particularly relevant to “Hematology and Oncology” we now know that mice housed under different conditions with different microflora can have vastly different outcomes in any model, not just cancer. To suggest academic incompetence or outright unethical behavior is offensive, and is a particularly narrow view of why experiments are difficult to reproduce. Further, as indicated in Table 1, the entire definition of not-reproducible hinges on a priori profit motive of “robust” differences (whatever that means). There is always room for improvement in science, but this entire article is disingenuous and belittling to those of us who are on the front lines.

    Marcelo Behar said: At first I thought this was an April Fool’s joke: an article complaining about non-reproducible results and poor publishing practices that did not show the data underlying their own “results”. I laughed at loud at the claim “The scientific community assumes that the claims in a preclinical study can be taken at face value”… thought it was pretty hilarious. But I am not so sure this is a prank… so just in case here are my 2 cents.I will not deny that cherry-picked results, poor controls, inadequate number of repeats, non-publishable negative results, or bad experimental habits in general are real problems in all scientific disciplines including biomedical research. However, this article is just sensationalism at it worst: making over-generalizing, grandiose claims without providing any supporting evidence. Which specific articles were picked, what criteria was used to categorize something as a Landmark finding, how were the claims tested, what reproducibility criteria were used, etc… speaking of cherry picked results, lack of controls, and poor publishing standards! I am not familiar with the internal decision-making process in big pharma but if this article is serious, perhaps they should consider hiring scientists from a community that does not “assumes that the claims in a preclinical study can be taken at face value”. Leaving aside dishonest data manipulation, problems arising from incomplete data, bad controls, poor practices, or limited applicability of the results are usually evident from a critical review of the protocols/methods. Cheers

    Uli Jung said: While I applaud every kind of whistle blowing that helps improve transparency and decrease fraud and such in science – this comment is rather ridiculous. Half-way whistle blowing does not work, sorry. Who finds fraud, write about it in a reputable magazine but does not make sure the people committing it are actually exposed … What is that, a self-serving publicity boost while supporting the network of the fraudulent actions through silence?

    Usually there is that “conflict of interest” declaration in publications. A comment where someone labelled as “consultant” acts as the “good guy” who gets “halfway” exposes fraud but diligently avoids supporting the credibility (and maybe avoid personal attacks?) by not exposing the data of the claim … is not needing a conflict of interest declaration?

    —————————-

  11. Chad Haney
    April 15, 2012

    Rajini Rao thanks for the reply and support. I refrained from copying the online commentary to support my comments, as I feared it wouldn’t make a difference. However, maybe others will appreciate the online commentary.

  12. Rajini Rao
    April 15, 2012

    I pasted the commentaries because they echoed my own thoughts precisely and I was too lazy to make them myself. There is a correct way to “whistle blow” and this Commentary is not the way.

  13. Zephyr López Cervilla
    April 16, 2012

    “I am appalled that they are allowed to get away with essentially maligning the work of their peers without showing us any evidence to support their claims. Whether confidentiality agreements were signed, or whether teams of 100s of scientists at Amgen were involved, this is just hearsay and cannot be considered credible information.”

    – What you call appalling is the very essence of investigative/watchdog journalism. Quite often journalists will resort to the granted right/privilege of not having to disclose their sources of information, a mechanism that favors the right of the public to be informed, even in cases in which the information has been obtained by illicit or questionable means.

    You should take into account that they are not only reporting their own results from trying to reproduce previous work, but also giving testimony about the explanations (or, if you prefer, confessions) that other researchers gave for the lack of reproducibility of their published experiments:

    <>

    1. Begley CG and Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature 483, 531–533 (29 March 2012).

    nature.com/nature/journal/v483/n7391/full/483531a.html

    <<"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning.">>

    2. Sharon Begley. In cancer science, many “discoveries” don’t hold up. Reuters. March 27, 2012

    reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

    That’s exactly what journalists quite often have to do.

    In my opinion the way they present information in this article fits rather well with the entitlement to protection of information sources that both domestic and international law grant to reporters:

    <>

    en.wikipedia.org/wiki/Protection_of_sources

    “The scientific method is to publish a peer reviewed paper in a reputable journal that rebuts the earlier finding. This is hard work, but contradictory findings do get published all the time, and articles do get retracted as a result.”

    – As I explained above, in this commentary article they aren’t doing science but investigative journalism, along with a proposal of changes. Therefore, the method followed, the way the findings are presented, and the criterion the publisher applied in accepting the article don’t necessarily have to be the same as for a peer-reviewed article:

    <>

    <>

    en.wikipedia.org/wiki/Investigative_journalism

    You may argue that they aren’t professional reporters, yet their article has been published in a periodical journal. On the other hand, other non-professional reporters (e.g., Julian Assange) have also resorted to this “reporters’ privilege” for not disclosing their sources. Additionally, any scientific researcher can to some extent be considered a reporter, since they frequently publish articles in scientific journals, even though they’re not paid by the publishers.

    “your competitor published such a commentary stating that all the work done in your field was not reproducible? If this happened in my field, you bet that there would be some nasty letters to the editor for a start.”

    – Regardless of its relieving effect, it’s unlikely that your letter could convince many readers of the falsity of those statements (I’m assuming you would write an open letter).

    In my opinion, it’d be more effective to give some references supporting that much of the work published in your research field has been reproduced by other independent researchers, or that most findings have been useful in other research, or that they have found some practical applications.

    Also, are researchers who work for pharmaceutical companies the competitors of researchers devoted to basic science? They don’t usually compete for the same funding sources. I’d tend to believe that their work is rather complementary. Without basic research, pharmaceutical companies couldn’t develop new useful applications. Without improvements in pharmaceutical applications, most of the funding provided for basic biomedical research by public agencies and charitable institutions would be withdrawn. So in the end there’s a mutual interest in each other’s success.

    “Chad’s comment on sensationalization by the lay press is a good one. It cuts both ways: sometimes, modest gains are hailed as the next miracle cure, other times flaws and errors are blown out of proportion as impending crisis.”

    – In this particular case, the lay press is a reputed news agency, and the journalist is their senior health and science correspondent, former science editor and science columnist at Newsweek and former science columnist at the WSJ. Besides, on many occasions she’s just quoting the comments of her interviewees rather than freely interpreting their words in a sensationalized way:

    <<“It drives people in industry crazy. Why are we seeing a collapse of the pharma and biotech industries? One possibility is that academia is not providing accurate findings,” he said.>> [2]

    “I’m also not clear exactly what aspect of the papers being evaluated was not found “robust”. Specifically, the response of a cancer to a particular drug, or the paper in its entirety (much more serious)?”

    – I guess he referred to statistical robustness:

    http://en.wikipedia.org/wiki/Robust_statistics

    As mentioned in the article written by Ellis and Begley:

    <>

    They had probably fixed a priori a defined criterion to evaluate the results of all the experiments, analogous to the criterion applied to determine whether results reach statistical significance.
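
    The commentary doesn’t publish Amgen’s actual rule, but a pre-specified criterion could be as simple as this hypothetical sketch:

    ```python
    # Hypothetical sketch of a pre-specified reproducibility criterion (the
    # commentary does not disclose Amgen's actual rule): call a finding
    # reproduced only if the replicate effect points the same way as the
    # original and is significant on its own.
    from scipy import stats

    def reproduced(original_effect, treated, control, alpha=0.05):
        _, p = stats.ttest_ind(treated, control)
        replicate_effect = (sum(treated) / len(treated)
                            - sum(control) / len(control))
        same_direction = (replicate_effect > 0) == (original_effect > 0)
        return same_direction and p < alpha
    ```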

    “we now know that mice housed under different conditions with different microflora can have vastly different outcomes in any model, not just cancer.”

    – That’s why they repeated the experiments (according to the quotes provided by Reuters, in some cases even 50 times; see above) and contacted the authors so they could try to reproduce the experiments under conditions as similar as possible:

    <> [1]

    And they recommended some alternative approaches to prevent these issues:

    <> [1]

    “Marcelo Behar said:”… “However, this article is just sensationalism at it worst: making over-generalizing, grandiose claims without providing any supporting evidence. Which specific articles were picked,”

    “Which specific articles were picked, what criteria was used to categorize something as a Landmark finding, how were the claims tested, what reproducibility criteria were used, etc… speaking of cherry picked results, lack of controls, and poor publishing standards!”

    – Marcelo Behar seems to be confused. This article written by Ellis and Begley isn’t a research paper but a piece of journalism. The primary sources of information are the authors’ own testimonies about their findings. In this case there’s a good reason why they don’t provide supporting evidence, as I argued above: if they provided the articles that they had picked, they would be disclosing which experiments they had tried to reproduce without success, something that they had agreed not to do to ensure the collaboration of the original authors.

    As for over-generalizing, that isn’t actually the case. Ellis and Begley provide the number of papers analyzed, and the number of those whose results they couldn’t confirm. It is up to the reader to decide how generalizable their results are:

    <>
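
    For context, the commentary’s headline figures were 53 “landmark” papers with only 6 confirmed; the 47 unconfirmed studies mentioned above are the remainder. Assuming those figures, any reader can put an uncertainty interval on the rate:

    ```python
    # The commentary's figures: 53 "landmark" papers, findings confirmed in
    # only 6 (the 47 unconfirmed studies quoted above are the remainder).
    # A Wilson score interval shows how far the ~11% rate might generalize.
    import math

    def wilson_ci(successes, n, z=1.96):
        p = successes / n
        denom = 1 + z ** 2 / n
        center = (p + z ** 2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
        return center - half, center + half

    lo, hi = wilson_ci(6, 53)
    print(f"confirmed: {6 / 53:.1%}, 95% CI roughly {lo:.1%} to {hi:.1%}")
    # confirmed: 11.3%, 95% CI roughly 5.3% to 22.6%
    ```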

    The only good point that Marcelo Behar makes other than his explicit recognition of some of the main conclusions of Ellis and Begley,

    “I will not deny that cherry-picked results, poor controls, inadequate number of repeats, non-publishable negative results, or bad experimental habits in general are real problems in all scientific disciplines including biomedical research.”

    is his demand to know the reproducibility criteria that they applied, information that Ellis and Begley could probably have provided without disclosing which experiments they had tried to reproduce.

    His references to April Fools’ jokes, to his “laughing at loud”, or to the convenience of “big pharma” hiring scientists “from a community that does not “assumes that the claims in a preclinical study can be taken at face value”” are in very poor taste and add no extra support to his criticism, rather the other way around (his implied call to peer unionism is remarkable).

    Additionally, he states something that isn’t necessarily true:

    “problems arising from incomplete data, bad controls, poor practices, or limited applicability of the results are usually evident from a critical review of the protocols/methods.”

    There’s no way that a critical review of the protocols/methods can make evident the number of negative results obtained during experiments that haven’t been published (the direct consequence of what he calls “cherry-picked results”), information that may be crucial for the applicability of the findings.
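
    The “done it six times and got this result once” anecdote quoted earlier is exactly this file-drawer problem, and the arithmetic is unforgiving:

    ```python
    # The "six times, got this result once" anecdote quoted earlier, as
    # arithmetic: even if a treatment does nothing, running the experiment
    # six times and publishing the best run crosses p < 0.05 surprisingly
    # often.
    p_single = 0.05
    print(1 - (1 - p_single) ** 6)  # ~0.26: about a 1-in-4 chance of a fluke
    ```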

    “Uli Jung said: While I applaud every kind of whistle blowing that helps improve transparency and decrease fraud and such in science”… “Half-way whistle blowing does not work, sorry. Who finds fraud, write about it in a reputable magazine but does not make sure the people committing it are actually exposed”

    – As I wrote above, the nondisclosure of your information sources is a “privilege” granted to investigative reporting (and more so in general proposals). In this particular case the whistleblowers are the same people whose work is under scrutiny.

    <<– Numerous interviews with on-the-record sources as well as, in some instances, interviews with anonymous sources (for example whistleblowers)>>

    en.wikipedia.org/wiki/Investigative_journalism

    Additionally, the intention of the article written by Ellis and Begley wasn’t to expose the people committing what Uli Jung refers to as “fraud” (an expression they explicitly discard for that kind of practice), but to propose a number of

    <> [1]

    that in their opinion could largely prevent such flaws in the system from persisting in the future. Their suggestions are listed in the ‘Recommendations’ box ([1], page 533) and are also mentioned in the main text of the article:

    <>

    5. Fanelli, D. PLoS ONE 5, e10271 (2010).

    <<But there are no perfect stories in biology. In fact, gaps in stories can provide opportunities for further research — for example, a treatment that may work in only some cell lines may allow elucidation of markers of sensitivity or resistance. Journals and grant reviewers must allow for the presentation of imperfect stories, and recognize and reward reproducible results, so that scientists feel less pressure to tell an impossibly perfect story to advance their careers.>>

    Many outside observers could easily interpret most of your criticism and overzealousness as a defensive corporatist reaction, which would be detrimental to the public perception of scientific research. Researchers aren’t expected to be accountable for the actions of their peers. You’re entitled to defend your work, and to some extent the work of others who have been working with you; you can also legitimately evaluate the validity of particular results from the work of your peers; but vehement claims denying any possible misconduct by colleagues unknown to you go beyond your competence, will probably be perceived as a lack of truthfulness and honesty, and can generate more distrust.

    In contrast, if you limit your comments to comparing the criticism with your personal experience in a restrained way, and/or give well-known examples that don’t fit the criticism (if they exist), your opinion will be regarded as more reliable and honest by most outside observers.

    —————-

  14. Chad Haney
    April 16, 2012

    Zephyr López Cervilla what do you do for a living? Being succinct is clearly not required for your occupation.

  15. Chad Haney
    April 26, 2012

    Rhona Finkel It’s a public post. You are welcome to reference it in your blog. I’m glad someone got something from it. Let me know if you need help referencing it.

  16. Chad Haney
    April 26, 2012

    Rhona Finkel for every post, if you click on the date, it will bring you to the permanent link like this:

    https://plus.google.com/107896084561441926092/posts/LqAHEDPPgYt

