No, really, it is. Jon Hiller and I are both in Chicago and both work with imaging, but that's not the point. The point is, you might not ever get a chance to see an electron microscope in action or see an experiment live at a National Lab. So mark your calendar for another #SSHOw
Originally shared by ScienceSunday
Join us for another Science HOA, brought to you by ScienceSunday, as we talk to Dr Jon Hiller about electron microscopy! Jon is an electron microscopist in the Nanoscience & Technology Division at Argonne National Laboratory. Jon's research includes the development of state-of-the-art electron and ion beam instrumentation for materials and nanoscale research. He is best known for his work in three-dimensional Focused Ion Beam (FIB) tomography and complex sample fabrication for electron microscopy. His characterization of diamond thin films has led to the development of the artificial retina.
We will be discussing all of this, along with a live, on-air demonstration of scanning electron microscopy! Jon has also kindly offered to let you, the audience, choose objects that you would like to see under the electron microscope! So if you have any questions for Jon, or suggestions for samples to image, please leave them on the Event page as always.
Buddhini Samarasinghe and Scott Lewis will be hosting this event.
Dwarf species of fanged dinosaur emerges from southern Africa
The link below is from The University of Chicago; it's slightly different from the other news blurbs you've probably seen already. Also, the full article is Open Access. Be warned: the PDF is 125 MB, though there is a 25 MB version.
Paul C. Sereno, “Taxonomy, Morphology, Masticatory Function and Phylogeny of Heterodontosaurid Dinosaurs,” ZooKeys online, Oct. 3, 2012.
I've done some imaging for Paul. Leave a comment if you would like Dr. Paul Sereno of The University of Chicago to discuss Pegomastax africanus on a #SSHOw. I will try to tempt Paul into a HO if there is enough interest.
Round 3 of the #SSHOw (ScienceSunday HO-woot) will be about the upcoming “Birds of Egypt” exhibit at the Oriental Institute, focusing on how medical imaging helped.
#ScienceEveryday when it isn’t #ScienceSunday
Originally shared by ScienceSunday
Join us this Sunday where JP Brown from The Field Museum, Christian Wietholt from VSG, Rozenn Bailleul-LeSuer from the Oriental Institute (OI), and Chad Haney from the University of Chicago (ScienceSunday co-curator) preview the upcoming Birds of Egypt exhibit. They will be discussing their contributions to the project, mainly focusing on how computed tomography (CT) helped examine the artifacts non-destructively.
We hopefully guided you through the maze of the GMO corn hysteria in the media. Here's a link to some of the statistical issues: http://goo.gl/epcnr Next week's #SSHOw will be on the mummy bird that I posted a while ago (http://goo.gl/sbzJq).
Today's #SSHOw (http://goo.gl/0eGrh) will discuss GMO corn, and in particular the poorly done experiment/publication that sparked the media storm (Séralini et al., Food and Chemical Toxicology, 2012). http://goo.gl/5GOWa
Orac dissects the paper quite nicely, although I think he repeatedly says mice when he means rats. http://goo.gl/SSE2F
The two areas I will comment on are the tumor rat model and a statistical issue dealing with multiple comparisons.
Spontaneous Tumors in Rats
Orac points to a study from 1979 in which 81% of Sprague-Dawley rats, the same strain used in the Séralini paper, developed tumors. When you use an animal model to look at tumor development, you need to know the prevalence of spontaneous tumor development. The control group(s) have to be designed so that you can differentiate "normal" spontaneous tumor development in the control groups from that in the experimental groups. Part of that design is having a sufficient number of animals for adequate statistical power. Using previously published data, the authors could have done a power analysis to determine the proper sample size. For these types of studies, where you are not doing intricate daily or weekly interventions/experiments, i.e., just keeping the animals for long periods while monitoring mortality, it is not uncommon to use 3-5 times the number of animals in the Séralini study.
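A power analysis like the one described above can be sketched with the standard arcsine (Cohen's h) approximation for comparing two proportions. This is a minimal illustration, not the analysis the authors should have run: the 81% baseline is the spontaneous-tumor rate from the 1979 study, while the 95% "treated" rate is a hypothetical effect size I picked purely for demonstration.

```python
# Two-proportion power analysis sketch (arcsine / Cohen's h approximation).
# Baseline 0.81 is the 1979 Sprague-Dawley spontaneous-tumor rate;
# the 0.95 treated rate is a hypothetical effect chosen for illustration.
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cohens_h(p1, p2):
    """Cohen's effect size h for two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

def sample_size(p1, p2):
    """Per-group n for a two-sided test, alpha = 0.05, power = 0.80."""
    h = abs(cohens_h(p1, p2))
    z_a = 1.959964  # z for alpha/2 = 0.025
    z_b = 0.841621  # z for 80% power
    return math.ceil(2 * ((z_a + z_b) / h) ** 2)

def power_at_n(p1, p2, n):
    """Approximate power with n animals per group, alpha = 0.05."""
    h = abs(cohens_h(p1, p2))
    return norm_cdf(h * math.sqrt(n / 2.0) - 1.959964)

n_needed = sample_size(0.81, 0.95)      # roughly 78 animals per group
power_10 = power_at_n(0.81, 0.95, 10)   # well under 20% power
```

Even for this fairly large hypothetical effect, ten animals per group leaves the study badly underpowered against an 81% spontaneous background.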
As an example, I had the privilege to collaborate with Prof Morris Pollard at Notre Dame who developed the Lobund-Wistar rat model. Lobund-Wistar rats spontaneously develop prostate adenocarcinoma (PA) at a mean age of 26 months. In the publication of the model, out of 72 L-W rats, 19 (26%) developed large PAs. Imagine if Prof Pollard only used 10 male rats as in the Séralini study.
We use a transgenic mouse model for spontaneous ductal carcinoma in situ; invasive carcinomas develop in 100% of the mice. The mice, derived from the C3(1)/Tag line, are called SV40-Tag mice. SV40-Tag stands for Simian virus 40 T-antigen, a trans-activating protein that is essential for viral gene expression.
The point is you have to know the tumor prevalence in the rodent model you are using and plan the control groups accordingly.
Multiple Comparisons/Sample Size
The study mentions that they used Discriminant Analysis (DA) to partition groups, i.e., you lump all the variables (factors) together and use DA to tease out which factors influence the outcome, e.g., tumor size, biochemical markers, etc. In image analysis we use Linear Discriminant Analysis (LDA, http://en.wikipedia.org/wiki/Discriminant_analysis, http://goo.gl/oyNzh) to segment (classify) pixels. Say you want to automatically segment tumor from normal tissue using several image types of the same sample. You have thousands of pixels to work with, not 10. The method isn't robust with 10 or fewer samples in each of the 20 groups used (note I separated the male and female groups in the Séralini study). Also, in the context of machine learning, you have to have a training set. In my example, you give the program a set of pixels that you know belong to each group before testing the pixels you want to classify.
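The pixel-classification use of LDA described above can be sketched with a two-class Fisher discriminant. The "tumor" and "normal" intensity pairs here are synthetic data invented for illustration; real segmentation would train on labeled pixels from actual images, typically with many classes and channels.

```python
# Minimal two-class Fisher LDA sketch for pixel classification.
# The training "pixels" are synthetic two-channel intensities,
# invented only to illustrate the method.
import numpy as np

rng = np.random.default_rng(0)
# Hundreds of labeled training pixels per class, two image channels each.
tumor  = rng.normal([5.0, 3.0], 0.8, size=(300, 2))
normal = rng.normal([2.0, 1.0], 0.8, size=(300, 2))

m_t, m_n = tumor.mean(axis=0), normal.mean(axis=0)
# Within-class scatter matrix (pooled, unnormalized).
S_w = np.cov(tumor.T) * (len(tumor) - 1) + np.cov(normal.T) * (len(normal) - 1)
w = np.linalg.solve(S_w, m_t - m_n)   # Fisher discriminant direction
threshold = w @ (m_t + m_n) / 2.0     # midpoint between projected class means

def classify(pixels):
    """Label a pixel 'tumor' if its projection exceeds the threshold."""
    return np.where(pixels @ w > threshold, "tumor", "normal")

labels = np.array(["tumor"] * 300 + ["normal"] * 300)
acc = np.mean(classify(np.vstack([tumor, normal])) == labels)
```

With 300 training pixels per class the discriminant is stable; with 10 samples per group, the scatter matrix estimate (and hence the classifier) becomes unreliable, which is the robustness problem noted above.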
A quick review of null hypothesis testing: a type I error is when the null hypothesis is true but is rejected, i.e., a false positive. A type II error is when the null hypothesis is false but is incorrectly accepted as true, i.e., a false negative. Remember, the null hypothesis can never be proven.
Here are examples from the Wiki:
Suppose the treatment is a new way of teaching writing to students, and the control is the standard way of teaching writing. Students in the two groups can be compared in terms of grammar, spelling, organization, content, and so on. As more attributes are compared, it becomes more likely that the treatment and control groups will appear to differ on at least one attribute by random chance alone.
Suppose we consider the efficacy of a drug in terms of the reduction of any one of a number of disease symptoms. As more symptoms are considered, it becomes more likely that the drug will appear to be an improvement over existing drugs in terms of at least one symptom.
Suppose we consider the safety of a drug in terms of the occurrences of different types of side effects. As more types of side effects are considered, it becomes more likely that the new drug will appear to be less safe than existing drugs in terms of at least one side effect.
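The inflation described in the three examples above is easy to quantify: with k independent comparisons each tested at alpha = 0.05, the chance of at least one false positive is 1 - 0.95^k. A Bonferroni correction (testing each at alpha/k) is one standard way to pull that back down; it is shown here only as an illustration, not as what the paper did.

```python
# Family-wise false-positive rate for k independent comparisons at
# alpha = 0.05, with and without a Bonferroni correction (alpha / k).
alpha = 0.05
ks = [1, 5, 10, 20]
uncorrected = {k: 1 - (1 - alpha) ** k for k in ks}
bonferroni  = {k: 1 - (1 - alpha / k) ** k for k in ks}
# With 20 comparisons, uncorrected testing gives roughly a 64% chance
# of at least one spurious "finding"; Bonferroni keeps it near 5%.
```

So by the time a study compares 20 groups across many endpoints without correction, a handful of nominally "significant" differences is exactly what chance alone predicts.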
Statistical power is the probability of detecting a real effect when it exists, i.e., of avoiding a type II error (false negative). Prof Pollard's study has a statistical power of around 97%, while the Séralini study's is probably closer to 45%.
In Memoriam
In the process of digging up the study by Prof Pollard, I realized he had passed away. I met him when he was in his late 80s, to do an experiment for him at the University of Chicago. He was an impressive man and scientist. It is really a shame he is most often known for his son.
Pollard worked at all of these things until his very last days. “I can’t imagine doing anything else,” he said recently. “I think if you are doing something meaningful and important and you stop doing it, you’ll always look back with regret.”