Minimal volume with minimal information

The WIRED interview with Elizabeth Holmes, founder and CEO of Theranos, has been making the rounds. In Di Cleverly’s post

https://plus.google.com/+DiCleverly/posts/XTn2SDnE9LB

I mentioned that in addition to the minimal amount of blood used, there is a minimal amount of detail. Searching elsewhere, even the Theranos website, there aren’t many details. In Di Cleverly’s post, the only decent information came from patents. If you’ve ever read a patent, then you know that it’s often difficult to sort out what’s really going on. So, with Di Cleverly’s help, we have a better picture of what’s going on. A lot of this post is my guessing at some of the details, partly because I’m busy, partly because I’m lazy, and partly because there isn’t a lot out there without really digging. Did I mention I’m lazy, I mean busy?

What was mentioned: small volume and centralized facility

For those who haven’t seen the WIRED or Medscape articles, Elizabeth Holmes dropped out of Stanford at the age of 19 and eventually started Theranos with her college funds. The interview talks about how the small volume of blood, from a pin prick, can make the experience, and therefore patient compliance, better. Ms. Holmes talks about reduced and transparent pricing. Essentially none of the technology is discussed. A centralized facility is mentioned, but is that an essential part? In other words, how much can be done off site (e.g., at Walgreens)? Before any young readers decide to drop out like Ms. Holmes or Bill Gates, I think Dave Thomas, founder of Wendy’s, makes a good example.

Thomas, realizing that his success as a high school dropout might convince other teenagers to quit school (something he later claimed was a mistake), became a student at Coconut Creek High School. He earned a GED in 1993.

http://en.wikipedia.org/wiki/Dave_Thomas_(businessman)

Detective Work: ESR and microfluidics

On Di Cleverly’s post some detective work was done and a few things came to light, mostly via the patents. The small “nanotainer” is used in a novel centrifuge to get information about the blood sample. Red blood cells (RBCs) are called erythrocytes and are just one component of blood. If you put whole blood in a glass tube, the RBCs will eventually sink to the bottom and the plasma will stay at the top. You can speed up this process by using a centrifuge (a device that spins the tubes at many times the force of gravity). The rate at which the RBCs settle is called the erythrocyte sedimentation rate, or ESR. ESR alone can tell you something about your health.

An increased ESR may be due to:

Anemia

Cancers such as lymphoma or multiple myeloma

Kidney disease

Pregnancy

Thyroid disease

Common autoimmune disorders include:

Lupus

Rheumatoid arthritis in adults or children

Very high ESR levels occur with less common autoimmune disorders, including:

Allergic vasculitis

Giant cell arteritis

Hyperfibrinogenemia (increased fibrinogen levels in the blood)

Macroglobulinemia – primary

Necrotizing vasculitis

Polymyalgia rheumatica

An increased ESR may be due to some infections, including:

Body-wide (systemic) infection

Bone infections

Infection of the heart or heart valves

Rheumatic fever

Severe skin infections, such as erysipelas

Tuberculosis

Lower-than-normal levels occur with:

Congestive heart failure

Hyperviscosity

Hypofibrinogenemia (decreased fibrinogen levels)

Low plasma protein (due to liver or kidney disease)

Polycythemia

Sickle cell anemia

Source: http://goo.gl/zKstuW 

The patent mentions a novel centrifuge device that captures either video or still images of the sample. There are two greyscale figures from the patent in the album below. With image analysis, the ESR can be measured without human intervention, which minimizes errors.
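
The patents don’t spell out the algorithm, but here is a minimal sketch of how automated image analysis might locate the plasma/RBC boundary and estimate an ESR. Everything here is hypothetical: the intensity profiles, the pixel scale, and the timing are made-up stand-ins, not anything from Theranos.

```python
import numpy as np

def estimate_interface(column_profile):
    """Index of the sharpest intensity change along a vertical intensity
    profile of the tube (plasma is brighter, settled RBCs darker), used
    as a crude stand-in for the plasma/RBC boundary."""
    gradient = np.abs(np.diff(column_profile.astype(float)))
    return int(np.argmax(gradient)) + 1

def esr_mm_per_hr(profile_t0, profile_t1, mm_per_pixel, minutes_elapsed):
    """Sedimentation rate from two frames taken `minutes_elapsed` apart."""
    fall_px = estimate_interface(profile_t1) - estimate_interface(profile_t0)
    return fall_px * mm_per_pixel * (60.0 / minutes_elapsed)

# Toy profiles: bright plasma (200) above dark RBCs (50); the boundary
# drops from pixel 30 to pixel 45 over 30 minutes.
p0 = np.array([200] * 30 + [50] * 70)
p1 = np.array([200] * 45 + [50] * 55)
print(esr_mm_per_hr(p0, p1, mm_per_pixel=0.5, minutes_elapsed=30))  # → 15.0
```

A real system would work on noisy 2D images rather than a clean 1D profile, but the principle is the same: track the interface over time instead of having a human read a graduated tube.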

Microfluidics

Another patent talks about microfluidic devices. I’m assuming those are lab-on-a-chip (LOC) devices. LOCs use microelectromechanical systems (MEMS) to do analysis on very small volumes of fluid. Here’s an example from Harvard that captures trace amounts of tumor cells.

http://goo.gl/jPGZlr

Although GeneChips or DNA microarrays aren’t LOCs, it is possible they are being used by Theranos. An image of an Affymetrix GeneChip is included in the album below. http://goo.gl/GpMyjx Note the small Eppendorf tubes in the foreground. Those are larger than the Theranos “nanotainer”, but Eppendorf does make tubes the same size as the “nanotainer”. Both the “nanotainer” and Eppendorf tubes have conical bottoms to facilitate removal of all of the liquid. The GeneChips have target DNA probes attached to the device. If a target gene is expressed, it will bind with the probe on the chip. The readout is typically some type of light, whether chemiluminescence, fluorescence, or some combination. The amount of information from these chips has caused an explosion in bioinformatics and in computer processing dedicated to speeding up the analysis of these microarrays.

http://en.wikipedia.org/wiki/DNA_microarray

Because the samples are going to a centralized facility, it’s possible that real-time polymerase chain reaction (often abbreviated qPCR; not to be confused with reverse-transcription PCR, which is what RT-PCR usually stands for) is also being used. Real-time PCR amplifies a DNA sample while measuring how much product accumulates at each cycle, so it can quantify how much DNA was there to begin with.
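
The quantification behind real-time PCR rests on simple exponential math: the fewer cycles it takes the fluorescence signal to cross a threshold, the more template you started with. Here’s a toy sketch; the copy numbers and the perfect-doubling assumption are purely illustrative.

```python
import math

def cycles_to_threshold(initial_copies, threshold_copies, efficiency=1.0):
    """Cycles needed for PCR to amplify `initial_copies` up to
    `threshold_copies`, assuming each cycle multiplies the DNA by
    (1 + efficiency); efficiency=1.0 means perfect doubling."""
    return math.log(threshold_copies / initial_copies, 1 + efficiency)

# With perfect doubling, 10 starting copies reach 10 million copies in
# about 20 cycles. Real-time instruments read this threshold cycle (Ct)
# off the fluorescence curve to back-calculate the starting amount.
print(round(cycles_to_threshold(10, 1e7)))  # → 20
```

This is why a sample with ten times more starting DNA crosses the threshold about 3.3 cycles earlier (log2 of 10), which is the rule of thumb used to build qPCR standard curves.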

Therapeutics and Diagnostics = Theranostics

I didn’t find any information to suggest that the name Theranos has anything to do with the term theranostics, i.e., therapeutics and diagnostics.

Pharmacogenomics aims to identify the genetic basis of variability in drug efficacy and safety, and ultimately develop diagnostics that can individualize pharmacotherapy. Theragnostics, a term denoting the fusion of therapeutics and diagnostics, is receiving increasing attention as pharmacogenomics moves to applications at point of patient care.

Shifting emphasis from pharmacogenomics to theragnostics

http://goo.gl/6nrkmp

Rapid molecular theranostics in infectious diseases.

Picard FJ, Bergeron MG.

Drug Discov Today. 2002 Nov 1;7(21):1092-101.

http://www.ncbi.nlm.nih.gov/pubmed/12546841

An example of theranostics from my boss and colleagues is a platform that combines doxorubicin (cancer therapy), Herceptin (targeting, for diagnosis), and DOTA-Gd(III) (for MRI detection, i.e., diagnosis). The Herceptin directs the construct to cancer cells. Gadolinium, chelated to the construct as DOTA-Gd(III), allows you to see it with MRI (it enhances the contrast against background tissue), and the doxorubicin provides therapy at the target (the tumor).

pH-Responsive Theranostic Polymer-Caged Nanobins: Enhanced Cytotoxicity and T1 MRI Contrast by Her2 Targeting

http://goo.gl/4PHkLo

So that’s what I could sort out with the help of Di Cleverly’s post and my own digging through a couple patents. If you have ideas or comments, feel free to ask.

#ScienceSunday  

Impact

During a HOA (Hangout on Air) there was a discussion about Open Access journals. Brent Neal wrote a follow-up post

Open Access

http://goo.gl/PbQaO0

where I mentioned that journal impact factors should be discussed, as they play a role in assessing the quality of a journal. Some Open Access journals are good and some are not so much. How can you tell? There’s some nuance and disagreement about impact factors, but I’ll get to that later.

First, I want to give a little background and continue the conversation about Open Access journals. In Brent’s post, he mentioned predatory publishers and noted that we have all gotten spam from them, i.e., requests to consider Open Access journal X when we publish our next manuscript. One of the negative sides of predatory Open Access that I’ve experienced is related to peer review and the role of the editor. After you have done your job reviewing a manuscript and recommended whether it should be accepted for publication, sent back for major revision, or rejected outright, the editor takes the recommendations from all referees into consideration and informs the author(s) of his/her decision. The problem is that some predatory Open Access journals charge the authors a significant amount, sometimes more than $1,500. In the case I am thinking of, the manuscript was poorly written and was essentially what is known as a quick communication being submitted as a full research article. The manuscript was very verbose to try to justify full article vs. quick communication. The editor kept pushing to accept the article, and to accept it as a full research article. I can only guess that the motivation was the fee charged to the author(s).

Setting aside the issue of predatory journals, how does one assess the quality of a journal and, more importantly, a specific journal article? I’ll discuss an example. You probably hear scientists on G+ request peer-reviewed citations when “debating” with people. I put debating in quotes because people often don’t know what it really means; it does not mean arguing, but I’ll save that for another post. In a debate with a commenter on one of my posts (sorry, I couldn’t find the comment to link), he finally gave a link to a peer-reviewed article in the Bulletin of Insectology. I’m not an entomologist, so I have no idea of the accuracy or impact of that particular article. So what do you do?

Phone a friend

As on the Who Wants to Be a Millionaire show, one option is to ask an expert. Maybe you know an entomologist. One of the great things about G+ is that you might actually have one in your circles. Alas, I don’t know any, or at least couldn’t think of one. The next option is to assess the quality of the journal using impact factors.

Impact Factor (IF)

Journal Citation Reports are produced by the Institute for Scientific Information (ISI) and can be found on the ISI Web of Knowledge site, owned by Thomson Reuters. You have to have a subscription to the site, so this ties in with the Open Access discussion from an access point of view as well. The impact factor is the number of citations a journal’s articles from the two preceding years received in a given year, divided by the total number of citable items the journal published in those two years. The idea of the IF is that it gives you a sense of the average importance, or impact, of a journal’s articles. You can imagine where there can be problems with this, e.g., what if a journal publishes a low number of articles per year? The Wiki below goes through more explanation and some alternatives, like PageRank-based metrics. A good example in the Wiki is a single article that was cited over 6,000 times, while the journal’s other articles were cited far less.
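
To make that definition concrete, here is a toy calculation of a 2-year impact factor. The journal and its citation counts are entirely hypothetical; this just shows the arithmetic Thomson Reuters performs.

```python
def impact_factor(citations_in_year, citable_items, year):
    """2-year impact factor for `year`: citations received in `year` to
    items published in the two preceding years, divided by the number
    of citable items published in those two years."""
    prev = (year - 1, year - 2)
    cites = sum(citations_in_year[year].get(y, 0) for y in prev)
    items = sum(citable_items.get(y, 0) for y in prev)
    return cites / items

# Hypothetical journal: citations received in 2013, broken down by the
# publication year of the cited article. The 2010 citations don't count
# toward the 2013 IF because they fall outside the 2-year window.
citations = {2013: {2011: 120, 2012: 180, 2010: 40}}
items = {2011: 60, 2012: 90}
print(impact_factor(citations, items, 2013))  # → 2.0
```

Notice how sensitive the number is to the denominator: a journal publishing only a few dozen citable items can swing its IF with a handful of well-cited papers, which is one of the caveats discussed below.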

Getting back to the Bulletin of Insectology, I looked it up in the Journal Citation Reports. Its impact factor is 0.44. In the post with the “debate” I had referenced an article in the Proceedings of the National Academy of Sciences (PNAS). Its impact factor is 9.737. Just for reference, Science has an IF of 31.027 and Nature has an IF of 38.597. You’ll see them along with other details in the citation report, in the attached figure. Here’s the problem: most people agree that you can’t really compare IFs across different disciplines. One reason is that some research might take longer to complete and publish. So if one discipline churns out more publications, that will affect the IF. The number of articles from the Bulletin of Insectology is only 42. Remember, the IF divides by the number of citable items, so a low number should actually help its IF.

Entomology is a reasonable category in which to compare the Bulletin of Insectology with other journals in the same discipline. In the next figure below, you’ll see a screenshot of the first page of results (sorted by IF) for journals in entomology. The IFs range from 13.589 down to 1.926. Without being able to “phone a friend”, one would conclude that the Bulletin of Insectology is either an obscure journal, a new journal, or one that is not ranked highly in entomology.

So the Bulletin of Insectology article that was linked is peer reviewed, which is good, but it likely has some issues preventing it from being published in a better journal in the field of entomology.

In our own fields of research, we often don’t pay too much attention to the IF because we know which journals our peers/colleagues are publishing in. Unfortunately, some academic administrations will use impact factors to judge the quality of someone’s track record for promotion purposes. Again, if someone is not in your field, the IF is one way to assess quality, albeit with the caveats mentioned.

http://en.wikipedia.org/wiki/Impact_factor

Homer GIF via Reddit

#ScienceSunday  

Peer review causes humility

If only the average Joe or Jane could experience having their manuscript ripped to shreds during peer review. Sometimes it’s legitimate and sometimes it’s just a referee who woke up on the wrong side of the bed. Either way, the author has to suck it up and genuflect.

Brian Koberlein explains why science is humbling.

#ScienceSunday  

Originally shared by Brian Koberlein

Humility

Yesterday’s post about the big bang and cosmic origins struck a few nerves.  Responses ranged from vulgar insults to dismissals of the post as “just a theory.”  But more subtle were the criticisms that declared the post lacked humility.  Scientific knowledge is never perfect, and to claim the validity of the big bang is to go too far.  When communicating to the general public scientists should never say “we know”, only that “we might know.” Scientists should show more humility.

Such criticism fails to recognize that the power of science is its humility.  In fact, the scientific process is based on the assumption that individual scientists won’t easily show humility on their own, so it is imposed upon them. There are three basic tenets of scientific research: it must be based upon verifiable data, it must be done publicly, and it must be open to criticism.

Most people view scientific evidence as repeatable experiments that can be done in the lab.  For this reason the findings of evolution or cosmology are often countered with “you weren’t there.”  But verifiable data is much broader than simply lab experiments.  It is a process of gathering data that clearly documents when, where and how the data was gathered.  If you gather observational data, the burden is on you to document its origin.  If you use data gathered by others, you must clearly cite your sources.

Once you have your observational results or theoretical work, the next step is to present it publicly.  This could be a conference, a preprint archive, a book, or submission to a research journal.  A scientific discovery is meaningless if it isn’t disseminated.  Publication provides a record of the work, so it can’t be tossed down the memory hole.  Make a significant discovery, and the record is there.  Make a foolish claim, and that’s there too.  It’s the latter possibility that strikes fear into scientists everywhere, because  publishing your work isn’t sufficient.  When you make your research public your colleagues now have a chance to pull the work apart and see if it really says what you think it says.  It gets subjected to peer review.

Peer review can be the most frustrating and most humiliating aspect of scientific research.  That’s why it’s considered the gold standard of science.  Having research published in a peer-reviewed journal means that the work has been examined by other experts in your field, and has been found clear and without obvious error.  It doesn’t mean it’s perfect, but it does mean the work has been held to a high standard and survived.  This is why when I write about new scientific work I focus on peer reviewed articles.  When I write about work that hasn’t been peer reviewed, I clearly say so.

Of course even after conducting your research, organizing your results, checking it with friendly colleagues, presenting it publicly and submitting it to peer review, you still aren’t done.  You’re never done, because at any time someone can critically review your work again.  If you have a great theory and your predictions don’t support new findings, we look for something better.  No matter how famous, or how many awards you may have, anyone can be toppled by new scientific discovery.

That’s the deal.  Keep pushing back against ideas.  Keep working to develop better theories.  Always, always keep in mind that your theories might just be wrong.

What survives is an understanding of the universe that is robust.  It is a confluence of evidence that supports a deep theoretical framework.  It is knowledge humbly gathered, and put forward with humility.  Through a process that recognizes human fallibility.  It is humanity’s best understanding of what is real and true about the cosmos.

This is why I present ideas like the big bang with the claim that we know.  We Know.  We know because thousands of individuals have devoted their lives to understanding the universe.  Devoted their lives to getting it right.  Relying on a process that forces us to be humble, and forces us to defend our ideas over and over.

In my posts I always strive to present our best understanding of the universe in a way that is clear and meaningful.  That’s why I try to limit moderation of the comments.  It is a kind of peer review.  I write about science to the best of my ability, and everyone is free to criticize it.  I’ve made mistakes in my posts and been called on them.  I’ve been praised and thanked for making things clear.  I’ve also been called a liar. A fool. Prideful. Deceitful. Ignorant. Arrogant.

Fair enough.  That’s the deal.

Image:  Excerpt from da Vinci’s notebooks.

An Academic Valentine: Blue for you or Pretty in pink?

Rajini Rao’s #AcademicValentine reminded me of this post about how pH can determine the color of Hydrangeas. Enjoy some science on St. Valentine’s day.

An Academic Valentine: The Science Behind Flower Color

http://goo.gl/8eOG6o via Rajini Rao 

#ScienceEveryday

Originally shared by Chad Haney

Blue for you or Pretty in Pink?

About a week ago I posted some pictures of my Hydrangeas that were just starting to bloom. http://goo.gl/Gn47h  I noticed that on the same plant, some of the flowers were blue and others were pink. I knew that pH played a role, but I found out that it is actually the aluminum in the soil that makes the blue pigment possible. So for ScienceSunday, curated by Allison Sekuler, Rajini Rao, Robby Bowles, and me, I had to dig up more info to post along with pictures from today.

When the pH is acidic, aluminum in the soil, mostly from clay, allows a metal complex of aluminum and an anthocyanin, named delphinidin 3-monoglucoside, to form. After the pictures, the first figure shows the aluminum complex. The next figure shows various blue flowers with sections cut, revealing the pigment cells and protoplasts.

Although the next two figures are about Morning glories, they were too interesting to pass up. A certain ScienceSunday co-curator always has her eyes on certain channels. Similar to the previous figure, there is a cross section cut revealing the pigmented cells. However, the paper and figure go on to discuss how the Morning glory does not have metal complexation. The petal color changes during flower opening due to pH changes, which were measured in the second part of the figure. The final figure shows the purported ion channel mechanism.

Plants can be beautiful. When you throw in a dash of science, they can be beautiful and intriguing.

Edit: I forgot to add that a lot of insects leave hydrangeas alone. Why? Aluminum toxicity – a win-win for us gardeners.

Sources: 

Kumi Yoshida, Mihoko Mori and Tadao Kondo

Nat. Prod. Rep., 2009, 26, 884-915

DOI: 10.1039/B800165K http://goo.gl/VGlZH

http://goo.gl/CcFg6

So is it Men At Work – Blue For You (1983) or The Psychedelic Furs – Pretty In Pink ?

#ScienceSunday #ScienceEveryday

Science imagery

I was going to write a post about the Visualizing Science 2013 contest but my #ScienceSunday  co-curators beat me to it. Check out the images and videos. If you have questions, the ScienceSunday team will try to get you an answer.

Originally shared by ScienceSunday

Visualizing Science

Science you’d hang on your living room wall

Earlier this week, we shared a great example of scientific visualization, showcasing the Pseudomonas bacteria in a large green, bacteria-covered hand (http://goo.gl/bWtKeP, via William McGarvey).

That was just one of many amazing scientific images from the 2013 Visualization Challenge sponsored by Science and the National Science Foundation, so here are several more beauties to behold.

The challenge includes entries in several categories, including illustration, posters & graphics, photography, games & apps, and video. So even this group of images just scratches the surface of the 200+ entries they received. You can see many more of the entries yourself, and learn about the science behind the images here: http://goo.gl/Bgx1n1

The images we highlight here illustrate a range of scientific results and phenomena, the descriptions of which are from the Science article linked above:

Spherical Nucleic Acids

(by Quintin Anderson, The Seagull Company, Midland, Texas; Chad Mirkin and Sarah Petrosko, Northwestern University, Evanston, Illinois)

The floating golden sphere, bristling with corkscrew strands of RNA, drifts majestically toward the jostling lipid bilayer that surrounds a cell. Slowly, gently, it squeezes through the layer until it is inside the cell.

Breezing across cell membranes is just one talent of these spherical nucleic acids (SNAs) developed by nanotechnology pioneer Chad Mirkin at Northwestern University. Once inside a cell, they can fend off attacks from enzymes, which makes them hot prospects as vehicles for delivering gene therapy treatments. SNAs also bind strongly to complementary strands of genetic material, an ability being used in a commercial medical diagnostics system called Verigene.

Mirkin commissioned Quintin Anderson, creative director at scientific animation firm The Seagull Company, to create a video explaining his research to colleagues and funders. The toughest part, Anderson says, was creating the lipid bilayer. “There are hundreds of thousands of lipids in those scenes and it required a complicated mathematical algorithm to create the random movements.”

The Life Cycle of a Bubble Cluster: Insight from Mathematics, Algorithms, and Supercomputers

(Robert I. Saye and James A. Sethian, Lawrence Berkeley National Laboratory and the University of California, Berkeley)

“Isn’t that just a photograph of soap bubbles?” Robert Saye and James Sethian hear that all the time when people see their poster. “Naturally we are eager to point out that it is in fact a visualization of a physics computational model,” says Saye, who recently completed his Ph.D. with Sethian at the Lawrence Berkeley National Laboratory and the University of California, Berkeley.

Predicting how bubbles in a foam rearrange and rupture is a tough modeling problem, because it involves intricately coupled processes that operate at very different scales. The soap films are only micrometers thick, while the gas pockets themselves might be centimeters across. Meanwhile, individual films rupture in milliseconds; bubbles rearrange in a fraction of a second; and liquid inside the film drains over tens of seconds or longer.

Running a simulation at the smallest scales to predict the macroscopic effects would eat up vast amounts of computer power. “Instead, we found a way to separate distinct time and space scales, and allow these to communicate so that the most important physics affecting foam dynamics are captured,” Saye says. The model, published last year (Science, 10 May 2013, p. 720), could be useful in devising lightweight materials or optimizing industrial processes, he and Sethian suggest.

This image is part of a larger poster that was entered in the contest, and you can see a video of the foam simulation at Bursting Bubbles at UC Berkeley.

Cortex in Metallic Pastels

(Greg Dunn and Brian Edwards, Greg Dunn Design, Philadelphia, Pennsylvania; Marty Saggese, Society for Neuroscience, Washington, D.C.; Tracy Bale, University of Pennsylvania, Philadelphia; Rick Huganir, Johns Hopkins University, Baltimore, Maryland)

With a Ph.D. in neuroscience and a love of Asian art, it may have been inevitable that Greg Dunn would combine them to create sparse, striking illustrations of the brain. “It was a perfect synthesis of my interests,” Dunn says.

Cortex in Metallic Pastels represents a stylized section of the cerebral cortex, in which axons, dendrites, and other features create a scene reminiscent of a copse of silver birch at twilight. An accurate depiction of a slice of cerebral cortex would be a confusing mess, Dunn says, so he thins out the forest of cells, revealing the delicate branching structure of each neuron.

Dunn blows pigments across the canvas to create the neurons and highlights some of them in gold leaf and palladium, a technique he is keen to develop further.

“My eventual goal is to start an art-science lab,” he says. It would bring students of art and science together to develop new artistic techniques. He is already using lithography to give each neuron in his paintings a different angle of reflectance. “As you walk around, different neurons appear and disappear, so you can pack it with information,” he says.

The painting was commissioned for the Johns Hopkins University School of Medicine’s Brain Science Institute, but, Dunn says, “I want to be able to communicate with a wide swath of people.” He hopes that lay viewers will see how the branching structures of neurons mirror so many other natural structures, from river deltas to the roots of a tree. “I want to help people to appreciate the beauty of the brain.”

You can read Greg Dunn’s description of how he came to merge art and science in this uniquely beautiful way at http://goo.gl/yYNmgc, and you can check out much more of his art+science work – and even order a print of this image to hang on your wall – here: www.gregadunn.com.

Invisible Coral Flows

(Vicente I. Fernandez, Orr H. Shapiro, Melissa S. Garren, Assaf Vardi, and Roman Stocker, Massachusetts Institute of Technology, Cambridge)

The swirling patterns moving around these coral polyps may look like fireworks streaking across a long-exposure photograph—but they are the result of a cunning technique that uses false colors to help compress time and movement into a single picture.

The image shows two Pocillopora damicornis polyps roughly 3 millimeters apart, colored pink. To reveal how the corals’ wafting cilia beat the water into a vortex, the team tracked particles in the water by video and super-imposed successive frames to highlight the flow (gold). About 90 minutes later, the coral polyps have changed position (shown in purple), altering the water flow (cyan), “but the vortex stayed roughly the same,” says Massachusetts Institute of Technology environmental engineer Vicente Fernandez, part of the research team that produced the image. The spacing between points in the vortex tracks even reveals the speed of the particles, he adds: “Up close you can see the steps of individual particles, see where the flow is strongest.” Fernandez says that the team drew inspiration from the palette used by Andy Warhol in his Flowers prints, which feature vivid, strongly contrasting colors.

The vortex helps draw nutrients toward the coral and sweep away waste products, says Fernandez’s colleague Orr Shapiro, an ecologist at the Weizmann Institute of Science in Rehovot, Israel. “Everywhere I look at corals now I find these vortical swirls,” he adds.

h/t to DJ Spin for inspiring the post

#ScienceSunday   #scisunABS  

Evolution vs. Creation

Watch/listen to Bill Nye debate Ken Ham about evolution and creation.

edit

I forgot to link this article (h/t Filippo Salustri) about why this debate is a waste of time. via Salon http://goo.gl/ZelyN1

#ScienceEveryday  

Originally shared by Liz Krane

RIGHT NOW:  Bill Nye Debates Creationist Ken Ham Live

The videos that sparked the debate:

Bill Nye: Creationism Is Not Appropriate For Children

Ken Ham Responds to Bill Nye “The Humanist Guy”

Why is Bill Nye even doing this in the first place? The Science Guy says, “I decided to participate in the debate because I felt it would draw attention to the importance of science education here in the United States.”

“Tuesday’s debate will be about whether Ham’s creation model is viable or useful for describing nature. We cannot use his model to predict the outcome of any experiment, design a tool, cure a disease or describe natural phenomena with mathematics.

These are all things that parents in the United States very much want their children to be able to do; everyone wants his or her kids to have common sense, to be able to reason clearly and to be able to succeed in the world.”

http://religion.blogs.cnn.com/2014/02/04/why-im-debating-creationist-ken-ham/

NPR will be covering the debate here:

http://www.npr.org/blogs/thetwo-way/2014/02/04/271648691/watch-the-creationism-vs-evolution-debate-bill-nye-and-ken-ham

http://www.youtube.com/watch?v=z6kgvhG3AkI

Holy mackerel, Yonatan Zunger tells us where we are in the cosmos

Sorry, no TL;DR on this one. This post shows a pure love of knowledge. Wow.

Originally shared by Yonatan Zunger

Climbing the Cosmic Ladder: How we know where we are in the universe

Almost every question you could ask about astronomy begins with asking where things are. Is that star you see big, bright, and far away, or small, dim, and close by? Do the stars and galaxies fall into patterns, or are they spread out at random, like raisins in a plum pudding? Or even more basically – just how far away are the stars? As late as the 1920’s, one of the great open questions in astronomy was whether our Sun was the center of the galaxy, and whether our galaxy was the entire universe. Many other galaxies had been observed by then – but we had no idea what they were, other than bright, fuzzy things full of stars, which might be close by or far away.

Today I’d like to explain to you just how we measure out the universe. The steps are all much more straightforward than you might think; in fact, many of the first steps involve nothing more complicated than a stick and a hole in the ground. This was inspired by seeing Lauren Gunderson’s play Silent Sky, which is about the life of Henrietta Leavitt: she was the early-twentieth-century astronomer who made one of the most critical discoveries in this entire list, and made it possible to (at last!) prove the existence of other galaxies. We’ll meet her in some detail, as well as many of the other figures in this story – but most of all, I want you to get a chance to see how science is actually done.

The challenge of measuring space, of course, is that you can’t just go out to the nearest star dragging a tape measure behind you. (I checked at Home Depot, and they didn’t carry any tape measures of nearly the right size) What we’re going to do instead is use a sequence of tricks, each of which lets us use one thing which we measure to measure something bigger: first the distance between two cities, then the size of the Earth and the Moon, then the distance to the Sun, and so on and so on until we’ve measured the entire universe. The combined series of steps we can use to measure any distance is called the “Cosmic Distance Ladder,” since each step lets us get to the next. The tools we’ll need will start out extremely simple – rules, sticks, and holes in the ground – and will gradually add on telescopes, cameras, and computers. 

From the Earth to the Sun, with Greeks

The very first measurement of space we’d like is to know the distance from the Earth to the Sun. This turns out to be a very simple measurement, in that it requires no fancy equipment or deep ideas – in fact, even the ancient Greeks knew how to do it. It turns out to be a little complicated because without at least a little fancy equipment (namely, telescopes) it’s hard to measure precisely enough, which is why we didn’t get good numbers for this distance until the 1600’s. But despite its simplicity, this idea has stuck with us, and in fact it remained the best way to measure the distance from the Earth to the Sun until 1964, when it became possible to measure it directly using radar – and it’s still the way we measure the distance to the nearest stars. 

The basic idea, which we’ll use in numerous ways, is the sort of trigonometry you learned in high school about measuring tall buildings. Say you want to measure the distance to some far-away object overhead. Go to two different places which are as far apart as you can manage; that distance is your “baseline distance.” Measure the angle of the object above the horizon in both places. The difference between the two angles is called the parallax; with that and some triangles (I’ll put the details at the bottom of the article, if you want to see them – they’re not hard) you can calculate the distance to the distant object. 

Eratosthenes, the “father of geography,” used this method to measure the circumference of the Earth around 240BCE. He made his measurements in a clever way. Say that you have a deep well somewhere along the Tropic of Cancer. The Tropics have the interesting property that at exactly one moment of the year – noon on the Summer Solstice – the Sun is directly overhead, which means that the Sun would shine directly down the well and illuminate it. (North of the Tropic, the Sun is never directly overhead; South of the Tropic, the Sun is overhead more than once a year. Having it be overhead exactly once a year is useful because, in a day before telephones or wristwatches, knowing that the Sun is directly overhead at a known place at one precise moment is the best way to coordinate measurements from a long way away) And there was, in fact, a well famous for exactly this behavior in the town of Syene, present-day Aswan. So all Eratosthenes had to do was to go somewhere due north of Syene (not east or west so that the time of noon would be the same – he measured from Alexandria, which is almost due north) and measure the angle of the Sun right then; he would now know the angle of the Sun in Syene, its angle wherever he was, and the distance between the two points. With a small bit of math (see below if you want the details), that makes it easy to compute the curvature of the Earth.

Eratosthenes didn’t use any fancy tricks to measure the Sun’s angle, either. Don’t think of a sextant: think of a pole sticking up and casting a shadow. If you draw that triangle out, you’ll see that the tangent of the Sun’s angle in the sky is precisely the ratio of the height of the pole to the length of the shadow.

Using nothing more than this, Eratosthenes measured the circumference of the Earth to be 250,000 stadia – about 25,000 miles, within a percent of its true value. Almost all of his error came down to his not having a good figure for the distance from Syene to Alexandria.

The same method can be used to measure many other objects, and since we’re going to keep using it, it’s worth talking a bit more about the math. Instead of working out how to measure things from the surface of a sphere, we can do a slightly simpler calculation about how to measure things when standing on a flat surface. It’s not hard to add the extra factors for the Earth being round, but this abstracts out the key bits of the math, and we’ll actually need the flat version of this later. 

Say you take your two measurements a baseline distance b apart, and you measure two angles, A and A+P. A, the lesser of the two angles, is called the “inclination;” P, the difference between the two angles, is the “parallax.” Then the distance to the distant object – again, you can find the derivation (and pictures) below – is 

L = b sin A / sin P

This equation is important enough that I want to give it here and talk about it, because we can learn a lot of things just by looking at it.

First, for almost all of the things we’re going to measure, L is going to be much bigger than b. (For example, the distance to the Sun versus the distance between two cities) Since the sine of any number is between -1 and 1, this means that the ‘sin A’ term can’t make things any bigger; that means that the only way for L to get big is for sin P to be very close to zero, which means that P is going to be very close to zero, as well. That means that the angle of parallax is going to be tiny, and so the limiting factor in this measurement technique is going to come from how small of an angle difference you can accurately measure.

Second, if you have a minimum P you can measure, then the biggest L you can possibly measure is b / sin P. That means that the bigger of a baseline you can get, the less of a problem angular resolution becomes – so you want the biggest one you can find. 

Third, the quantity in the numerator is (b sin A). The sine of A is equal to one when A is 90°– i.e., when the object is directly overhead – and zero when A is 0° – i.e., when the object is at the horizon. So when the object isn’t directly overhead, that’s the same as having a smaller b, and that means that this will work well for overhead objects and not so well for the rest.
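To make the formula concrete, here’s a minimal Python sketch of the calculation – the baseline and angles are invented numbers, chosen purely to illustrate the observations above:

```python
import math

def parallax_distance(baseline, inclination_deg, parallax_deg):
    """L = b * sin(A) / sin(P): distance from a baseline b and the two
    measured angles A (inclination) and A + P (P being the parallax)."""
    A = math.radians(inclination_deg)
    P = math.radians(parallax_deg)
    return baseline * math.sin(A) / math.sin(P)

# Hypothetical observation: two sites 1,000 km apart sight an object
# nearly overhead (A = 89 degrees) with a half-degree parallax.
print(round(parallax_distance(1000.0, 89.0, 0.5)))  # roughly 115,000 km
```

Notice how the tiny sin P in the denominator does all the work: halve the parallax you can measure, and the reachable distance doubles.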

Now, the Greeks tried to use this method to measure the distance to both the Sun and the Moon. It turned out to work fairly well for the Moon, but not so well for the Sun, because the practical limit on how small an angle they could measure was about seven arc-minutes. (60 arc-minutes is a degree) It turns out that, even with the entire Earth as a baseline, you would need to measure an angle about one-sixtieth of that size, so they couldn’t directly measure the distance to the Sun. But they could measure the distance to the Moon, and in the second century BCE, Hipparchus measured that distance to within 7%, still using nothing more complicated than sticks. Aristarchus had earlier tried to use such a measurement to calculate the distance to the Sun, using the rather clever observation that when the Moon is half-full, the line from the center of the Moon to the center of the Sun must be perpendicular to the line from the center of the Moon to the observer’s eye. From that, a measurement of the angle in the sky between the Sun and the Moon, and the distance to the Moon, he could calculate the distance to the Sun.

Aristarchus ended up off by a factor of 20, but if you consider that he was trying to measure this angle without a protractor or a sextant, but just with clever configurations of sticks, that’s pretty damned impressive. That was already enough to realize that the Sun must be immensely larger than the Earth, which led to the first proposals that the Earth might not, in fact, be at the center of the universe. 
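Aristarchus’ half-full-Moon trick boils down to a single trigonometric ratio: the Sun is 1/cos θ times as far away as the Moon, where θ is the measured Sun–Moon angle. A quick sketch shows just how unforgiving that cosine is when θ is close to 90°:

```python
import math

def sun_moon_distance_ratio(theta_deg):
    # At exact half-moon the Earth-Moon-Sun angle is 90 degrees, so the
    # Earth-Sun distance is the Earth-Moon distance divided by cos(theta).
    return 1.0 / math.cos(math.radians(theta_deg))

print(round(sun_moon_distance_ratio(87.0)))   # Aristarchus' angle: Sun ~19x farther
print(round(sun_moon_distance_ratio(89.85)))  # near the modern angle: ~382x farther
```

An error of under three degrees in the measured angle changes the answer by a factor of twenty – which is why his result was so far off even though the idea was perfectly sound.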

The Age of Exploration

Despite these errors, the Greek measurements were actually the best in the world all the way until the 17th century, when two major developments happened. The first was the invention of the telescope, which allowed the angles to be measured much more precisely. This allowed Godefroy Wendelin to measure the distance to the Sun, using exactly the same method, to within a factor of two in 1635. 

The second breakthrough was a theory of how the planets move about the Sun. Kepler took Copernicus’ idea of a heliocentric universe, together with Tycho Brahe’s meticulous records of planetary motion, and worked out the actual trajectories of the planets around the Sun. He didn’t know the distance from the Earth to the Sun any better than anyone else, but his equations did explain how the period of a planet’s orbit – the time it takes to go around the Sun – was related to its distance from the Sun, and so he could calculate the ratio of any planet’s distance from the Sun to the Earth’s.

With this, people quickly realized another possibility. Every so often, Venus passes between the Earth and the Sun. With a telescope, this could be very accurately observed, down to the second that it crossed the Sun’s disk. By measuring this transit time from different points on Earth, and using the known ratio of the Earth-Sun and Venus-Sun distances – which Kepler had calculated – we could measure the distance to the Sun very precisely!

And now for the hard part: transits of Venus come in pairs eight years apart, separated by more than a century. The first observed transit was in 1639, when Jeremiah Horrocks made a measurement which agreed with Wendelin’s direct result. The next chance was the pair of transits in 1761 and 1769 – for which Edmond Halley (of comet fame) had called on astronomers, decades in advance, to travel around the world and measure the transit from everywhere they could.

The results were dramatic, dangerous, and often lethal: this quest was happening in the midst of the Seven Years’ War, which was essentially the first worldwide war, and involved considerable challenge since the problem of measuring longitude hadn’t been solved yet, either, which meant that only a few places’ longitudes were well-enough known to take the measurement. One of these was Tahiti, which led to the commissioning of a British captain by the name of James Cook to go there and measure it – leading to his first voyage around the world. Another pair were Charles Mason and Jeremiah Dixon, who survived an attack by French gunships to make a measurement from Cape Town; they did well enough that it landed the pair of them another job, surveying the Pennsylvania-Maryland border. Jean-Baptiste Chappe d’Auteroche made his 1761 observation while trying to avoid being lynched by peasants who thought that his telescope was causing floods; his 1769 observation went better, but the entire crew and a nearby village were then overtaken by a plague, with only one survivor making it back to France with the precious data.

The result was good: by 1771, when the data could be gathered and compared, the distance from the Earth to the Sun was for the first time known to within a few percent. (And in the process, Venus was discovered to have an atmosphere, much of the South Pacific got explored, and several people were bankrupted or killed: see http://goo.gl/kzGRFU and http://goo.gl/r46wu3 for a bit more of this mad history)

The answer: the mean distance from the Earth to the Sun is approximately 93 million miles, and the Sun is about 110 times the diameter of the Earth. 

From the Sun to the Stars

What’s especially beautiful about this technique is that it can be used to measure the nearby stars as well, with the addition of one simple trick: instead of measuring the stars from two different points on the Earth, measure the star from the same point on Earth twice, six months apart. The Earth will have moved to the opposite side of the Sun – which means that your baseline is now twice the distance from the Earth to the Sun, about 300 million kilometers. (Which is why it was so useful to measure that distance first!) Measurement technology improved considerably as well with the invention of two more simple tools: the camera and the micrometer, which made it possible to photograph the sky and then measure angles (by the beginning of the 20th century) as small as about a tenth of an arc-second. (One arc-second is 1/60th of an arc-minute, or 1/3,600th of a degree)

In fact, this leads to the definition of the most common unit used in describing space, the “parsec” (“pc” for short) which is an abbreviation for “parallax-second.” One parsec is the distance that a star would have to be for its angle of parallax to be one arc-second, about 3.26 light-years. This turns out to be a remarkably convenient unit for talking about stars: the nearest star to us, Proxima Centauri, is about 1.3pc away, while the Milky Way is about 32,000pc (32kpc) across. It also makes our earlier formula simpler, since for very small angles P, sin P ≈ P, so for things that are directly overhead, L ≈ b / P (measuring P in radians). Since by definition 1pc = b / 1 arc-second, L (in parsecs) is just 1 / P. (In arc-seconds)
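The parsec arithmetic is worth seeing once with real numbers; this sketch uses Proxima Centauri’s measured parallax of about 0.77 arc-seconds:

```python
def distance_parsecs(parallax_arcsec):
    # For the tiny angles involved, sin P ~ P, so the distance in parsecs
    # is just 1 / P in arc-seconds -- that's the whole point of the unit.
    return 1.0 / parallax_arcsec

d_pc = distance_parsecs(0.77)   # Proxima Centauri's measured parallax
print(round(d_pc, 2))           # ~1.3 parsecs
print(round(d_pc * 3.26, 1))    # ~4.2 light-years
```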

So here’s how you would measure the distance to a star at the beginning of the twentieth century. Point your telescope at a star, precisely measuring the angle of the scope, and take a photograph. Measure each star’s position relative to the edge of the photographic plate, using a micrometer. With this and your scope angle data, you should be able to find the exact angular position of each star you photographed. Repeat every few months. On any given plate, most of the stars will be far enough away that they don’t move; you can use this fact to cancel out the errors from your telescope angle measurements always being slightly off. The stars that have moved, you can measure the distance to. 

The very first such measurement was taken in 1838 (without photography; this was done by measuring directly on the telescope!) and was of 61 Cygni, 3.5pc away. By the year 1900, we had measured the distance to over 60 stars. 

In fact, this method didn’t get noticeably easier until the invention of modern computers, which can do the painful alignment and measurement for us. In 1990, the Hipparcos satellite made a survey of over 100,000 stars to a precision of one-thousandth of an arc-second, giving us the most detailed map to date of our immediate stellar neighborhood. This remains the gold standard for all other measurement of the stars: every other measurement we’ll discuss builds on top of this.

Now, if you’ve been following closely, you’ll have noticed one major problem. I just said that the best that modern technology can do is to measure angles to within one milliarcsecond, which is to say, distances within 1,000pc. Our galaxy alone is 32,000pc across; how can we even know that, much less understand distances beyond that? It turns out that direct measurement – what we’ve been discussing so far – is fundamentally limited. To go beyond it required a new insight.

Henrietta Leavitt’s Standard Candle

The critical discovery that made it possible to measure the entire universe was due to two people in the early 20th century: Henrietta Leavitt and Ejnar Hertzsprung. It had long been understood that if you had a “standard candle” – a star whose brightness you somehow knew – then by comparing the star’s actual brightness to its apparent brightness in the sky, you could immediately tell how far away it was. (Detailed math below, if you want) The hard part, of course, was that nobody had any way to know how bright any particular star actually was.
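The math behind a standard candle is just the inverse-square law: the candle’s light spreads out over a sphere of area 4πd², so comparing its true brightness to its apparent brightness gives the distance directly. A minimal sketch, with brightness in arbitrary made-up units:

```python
import math

def candle_distance(luminosity, apparent_flux):
    # Inverse-square law: F = L / (4 * pi * d^2). Knowing the candle's
    # true luminosity L and measuring its flux F, solve for d.
    return math.sqrt(luminosity / (4.0 * math.pi * apparent_flux))

L = 1.0                     # true brightness, arbitrary units
F1 = L / (4.0 * math.pi)    # the flux this candle would show at distance 1
print(round(candle_distance(L, F1), 2))         # distance 1.0
print(round(candle_distance(L, F1 / 10.0), 2))  # 10x dimmer = sqrt(10) ~ 3.16x farther
```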

Leavitt was a “computer” at Harvard Observatory; her job was to analyze the photographic plates produced by the great telescope there and build a catalogue of all of the stars. While doing this, she realized that she could identify a special class of stars called Cepheid Variables – stars whose brightness gradually oscillates, growing brighter and dimmer over a period of days, weeks, or months – by overlaying (and carefully position-matching) photographic plates. By doing this, she built up a catalogue of nearly 20,000 Cepheid variable stars; prior to this, only a few dozen were known.

She then looked at the Cepheids inside two of those mysterious blobs in the sky, the Large and Small Magellanic Clouds, which between the two of them contained nearly 1,800 variable stars. She made the assumption that those clouds were each bounded objects, and so all of the Cepheids within each cloud were roughly the same distance from the Earth – and therefore, their relative apparent brightnesses were also their relative absolute brightnesses. When she did this, she discovered that their period (the time it takes them to go from bright to dim) and their peak brightness were directly related! This suddenly meant that if you measured the period of a Cepheid variable (easy, using a clock), you could know its brightness relative to all other Cepheid variables.

Hertzsprung immediately sprang into action, using parallax to measure the distances to a few nearby Cepheids – and combining this with Leavitt’s result, we suddenly had the first map of not just the nearby stars, but the distant ones as well. 

And the Magellanic Clouds? The larger one turned out to be about 50,000 parsecs away – the first galaxy discovered outside of our own.

This led to an immense outpouring of discoveries. One of Leavitt’s colleagues at Harvard was Annie Cannon, who discovered that most stars could be classified entirely by their color, with a host of their other properties following directly from that. Hertzsprung, working with Henry Russell, used the new distance measurements to build up a table of the actual brightnesses of stars, and discovered that those, too, were correlated with a star’s classification. They made a plot of brightness against color – the famous “Hertzsprung-Russell diagram” (see the album) – and saw that on this plot, all of the stars lay along a few distinct curves. One curve, in particular, contained most of the stars: the “main sequence,” the ones whose properties Cannon’s classification predicted best. Suddenly, it became clear what was happening: main sequence stars were simply ordinary stars in several different sizes, and the stars which fit on the other curves represented different kinds of object altogether, such as white dwarfs or supergiants. With the understanding that certain stars were all of the same type, it became possible to start to build an understanding of how they worked on the inside – which led to even more extraordinary discoveries, such as the 1957 “B2FH” paper which showed how the chemical elements of the universe are manufactured inside stars. (And, from nothing but nuclear physics and stellar dynamics, managed to calculate the abundances of the different chemical elements in the universe to surprising precision!)

As a side note about history, Harvard Observatory became one of the major factories of female astronomers in the early twentieth century, being home to Leavitt, Cannon, and later Cecilia Payne-Gaposchkin, discoverer of the chemical composition of stars. Originally, the observatory, under its director Edward Pickering, had an entirely male staff of computers. Frustrated with their poor performance, Pickering one day exclaimed that his housekeeper could do a better job – and to prove the point, fired the lot of them and hired his housekeeper, Williamina Fleming, who proceeded to build a team (entirely composed of women) which ended up making the most thorough star catalogue to date, and a whole host of major scientific discoveries. Leavitt’s work was, perhaps, the most important of them all, and it’s considered likely that she would have received the Nobel Prize had she not died of cancer only a few years afterwards, before the full significance of her result could be proved. (The Nobel is not awarded posthumously.)

Building the Cosmic Ladder with Explosions

Almost every advance in the measurement of the universe since Leavitt has been the discovery of a new kind of standard candle. In each case, the method is the same. First, some category of stellar object is discovered whose brightness is related to some other property which is easily measurable. (This generally requires finding a bunch of them in some situation where we have reason to believe that they’re right next to each other, and then looking for a pattern, just like Leavitt did) Next, the nearer members of the class have their distance measured using existing techniques, and now we have a new way to measure distance. The new candles can be useful if they show up in places where the old candles don’t – perhaps more commonly in our area, or perhaps farther away.

To give you an example of this, consider one of our most important candles today, the type Ia supernova. Supernovae are what happens when a star ends its life and explodes; they’re among the brightest things in the universe, a single supernova briefly outshining its entire host galaxy and releasing, over the course of seconds to minutes, more energy than the Sun will emit in its entire lifetime. A few decades ago, it was noticed that if you plotted how the brightness of a supernova changed over time, you got a wide range of curves, but there was some significant fraction of supernovae for which the curves were all shaped the same.

It took some theory to convince ourselves that this wasn’t a coincidence, and there really was a reason why these particular supernovae were all matching up. Normally, a star is compressed by gravity, and is held up by the force of the fusion reactions which power it. When the star runs out of fuel, the reaction stops, and the star collapses. Sometimes, the increased pressure as the star collapses further lets new kinds of fusion reaction start, and this is how supergiants (etc) are formed; but ultimately, the star runs out of fuel completely and keeps collapsing. As it does this, very soon most of the star is falling in faster than the speed of sound, and when it breaks the sound barrier it forms a sonic boom. That boom compresses the center even more (triggering various nuclear reactions) and propels the rest of the star’s matter outwards at high speed: a supernova. 

However, if the star is small enough – the size of our Sun or so – this doesn’t happen. Instead of going through more and more fusion reactions and then exploding, the star simply settles down to a quiet state known as a “white dwarf,” where all the matter collapses down until it is supported not by fusion, but by the simple, quantum-mechanical refusal of electrons to be in the same space as one another. White dwarfs are extremely simple: they have almost no internal structure, and they are held up by principles simple enough that they’re a standard subject for undergraduate thermodynamics classes. They also have a fairly narrow range of masses, which is well-understood: above a certain critical mass (the Chandrasekhar limit, about 1.38 times the mass of the Sun), this electron pressure is no longer enough to hold them up. Instead, the electrons get absorbed directly into the protons to form neutrons (yes, this can happen at extremely high pressures). The result is what’s called a “neutron star” – just like a white dwarf, only now it’s supported by the quantum-mechanical refusal of neutrons to be in the same space as one another. A bit of math (more than I’m going to go into in this post, because it’s “easy” if you know quantum mechanics and statistical thermodynamics) shows that the neutron stars are a lot smaller, but stable up to a much higher mass. (At which point they will collapse into black holes)

Now, imagine that you have a binary star – two stars orbiting one another – and one of the stars finally dies, forming a white dwarf. In the process of collapsing to form a white dwarf, there’s generally some exploding going on, and some destabilization of orbits, with the net result that quite often the surviving star is now gradually losing its own gas as it gets pulled off by its white dwarf neighbor. This means that the white dwarf is gradually gaining mass… which it will do, until it hits the Chandrasekhar limit.

And as the star approaches this limit, catastrophe strikes – though not, it turns out, the collapse into a neutron star described above. In a white dwarf made mostly of carbon and oxygen, the rising density and temperature ignite runaway fusion first. The reaction starts in the center of the star, where the pressure and temperature are highest; the energy it releases sets off more fusion in turn, and the burning front tears through the entire star in a matter of seconds, blowing it apart completely. We get a supernova.

What’s beautiful is this: White dwarfs are really simple systems; two white dwarfs of the same size are basically all alike. (They don’t have chemical structure or anything; the atoms are long-since smashed) Neutron stars are similarly simple. And this catastrophe strikes at a well-defined mass – the Chandrasekhar limit – which means that all of these explosions look basically the same. Which means that we should, in fact, have a bunch of identical-looking supernovae around the universe!

And so now we do the same thing we did before. Say you have a Type Ia supernova in a galaxy that also has Cepheid variables in it. This means that you know, to within the size of a galaxy or so, how far away that supernova was – which means that you can calculate its brightness. And you do this with a bunch of Type Ia supernovae, and check your math, and suddenly you know how bright all of them are: including the ones which happen much further away. (Note that, if you measure enough supernovae, you can fix up a lot of the “to within the size of a galaxy or so” problem, because about half of the supernovae you see will be a galaxy’s-width closer than the Cepheid and the other half will be farther, so if you average it out that all cancels out)

The Expanding Universe

There is one final trick we have up our sleeves, which is perhaps the most magical of all, and is the reason that Edwin Hubble had a telescope named after him. He discovered that the universe is expanding.

Let’s first explain what he discovered, and then talk about what it means, and finally how we can use it to measure things. It turns out that it’s actually very easy to measure the speeds of stars, using the Doppler effect. This is the effect that makes a siren sound higher-pitched when it’s coming at you, and lower-pitched when it’s moving away from you; the same effect applies to light, with lights moving towards you being shifted towards blue, and lights moving away from you being shifted towards red. (That’s actually a consequence of Relativity)

We know the original colors of the stars because of a very useful fact about chemistry and quantum mechanics: when you light any atom or molecule on fire, the light which it emits is a particular color – that is, if you look at the light through a prism, you’ll see a very particular spectrum – which is a “fingerprint” of that substance. (In fact, it was through this sort of analysis of the colors of stars that Payne-Gaposchkin discovered their chemical composition in detail) These fingerprints are very well-known, so if you see a Hydrogen fingerprint shifted a certain distance in the blue or red direction, you immediately know how fast the star is moving.
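The velocity arithmetic implied here fits in one line: the fractional shift in wavelength, times the speed of light, gives the speed. Below is a minimal sketch, assuming a non-relativistic shift (fine for nearby stars); the hydrogen-alpha rest wavelength is real, but the “observed” value is invented purely for illustration:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def recession_velocity(observed_nm, emitted_nm):
    # Non-relativistic Doppler estimate: v ~ c * (shift / rest wavelength).
    # Fine for small shifts; large redshifts need the full relativistic formula.
    return C_KM_S * (observed_nm - emitted_nm) / emitted_nm

# Hydrogen-alpha is emitted at 656.28 nm in the lab; suppose we observe
# it at 656.50 nm (a hypothetical measurement):
print(round(recession_velocity(656.50, 656.28)))  # ~100 km/s, moving away
```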

Hubble made a chart of the distances and speeds of the galaxies he could measure, and made a fascinating pair of discoveries.

First, almost all of the galaxies are moving away from us.

Second, the speed of a galaxy is proportional to its distance from us.

You might think, at first, that this is a sign that we are simply tremendously unpopular among the heavens. But when you draw a three-dimensional picture, it becomes clearer what’s going on. Imagine a balloon, with dots (representing the galaxies) drawn all over it. As you inflate the balloon, the distances between the dots increase; every dot would see the other dots moving away from it. What we’re seeing isn’t that the galaxies are moving away from us in particular, but that our “balloon” – the spacetime of the universe itself – is expanding.

This fit perfectly in with Einstein’s theory of relativity, which had predicted that it was possible for the universe to steadily expand or contract. (Einstein himself had noted that the equations allowed it, then adjusted them specifically to prevent it; he later referred to that adjustment as his “biggest blunder.”) Ever since Hubble’s discovery, we have surveyed these motions in great detail in order to establish the geometry of the universe: and the universe is indeed steadily expanding. In fact, since the light we see from distant objects is light from the distant past, we can use this to study how the expansion is changing over time. The great surprise – discovered in 1998, using exactly the Type Ia supernovae described above – is that the expansion is actually speeding up, driven by something we have provisionally named “dark energy.” (The alternatives would have been an expansion that gradually slows to a halt, or one that reverses, with the universe collapsing under its own gravity in a sort of “big crunch.” The data rule both out – which has profound implications for theoretical physics, a subject that’s beyond the scope of this post.)

Now, apart from being an interesting subject, how is this useful for measurement? Well, remember that Hubble discovered that an object’s speed is proportional to its distance from us, and this has been confirmed for a huge variety of objects. Since speed is easy to measure, this gives us our best method of all of estimating the distance to the most distant objects in the universe. (It doesn’t work well for nearby objects, since their motion within or relative to our galaxy becomes more significant; but if it’s far enough away that we aren’t pulling towards or away from it because of gravity, it works out. This starts at a distance of 10Mpc or so) Distances to such remote objects are, in fact, typically measured in “redshift:” the fractional amount by which light wavelengths get stretched, so that an object at redshift z has its wavelengths multiplied by 1+z. For small redshifts, this is related to distance by the simple formula

z = D/Dₕ

where D is the distance and Dₕ is the Hubble distance, about 4.4 billion parsecs. (There’s actually a lot more to this measurement than that, since the curvature of the universe isn’t perfectly simple – see the references if you want to know more. This number, and this equation, follow from studying the speeds and distances of hundreds of thousands of objects, which had to be obtained using all of the other methods we discussed before; and because this stands as the topmost rung of our “cosmic ladder,” it has the most uncertainties in it. Expect the values, and possibly also the shape of the equation, to change in coming years as we get more data)
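Taken at face value, that formula turns a measured redshift directly into a distance. A minimal sketch, using the 4.4-billion-parsec Hubble distance quoted above, and subject to all the caveats about this being the wobbliest rung of the ladder:

```python
D_HUBBLE_GPC = 4.4  # Hubble distance, in billions of parsecs (value quoted above)

def redshift_to_distance_gpc(z):
    # Low-redshift approximation z = D / D_H, solved for D.
    # At large z the true relation becomes model-dependent.
    return z * D_HUBBLE_GPC

print(round(redshift_to_distance_gpc(0.01), 3))  # 0.044 Gpc, i.e. 44 Mpc
```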

This is what lets us examine the most distant objects in the entire universe. There are about 50 known objects at a distance of z=8 or farther; the farthest of them, the romantically named protogalaxy UDFj-39546284, is at z=11.9 – which means that its light has been on its way to us for roughly 13.4 billion years, making it one of the oldest objects ever seen. (In fact, the universe itself is believed to be only 13.8 billion years old, so this galaxy formed in the very first days of the cosmos)

So I hope that I’ve given you a good tour of the way we know where we are in our universe. The ideas start out very simple, with trigonometry, sticks, and holes, and end up involving space telescopes and the fundamental structure of space-time, but they all fit into a single progression: and hopefully, now you’ll never feel lost in space again.

Further Reading

You can read an overview of the Cosmic Distance Ladder, including all the parts I skipped, at http://en.wikipedia.org/wiki/Cosmic_distance_ladder .

Eratosthenes and his campaign to measure absolutely everything: http://en.wikipedia.org/wiki/Eratosthenes

and the history of measuring the Earth:

http://en.wikipedia.org/wiki/History_of_geodesy

The article on Hipparchus has a lot more details about how he measured the distance to the Moon:

http://en.wikipedia.org/wiki/Hipparchus

The history of measuring the distance from the Earth to the Sun: http://en.wikipedia.org/wiki/Astronomical_Unit#History

The crazy, dangerous history of measuring the Transit of Venus:

http://www.astronomy.ohio-state.edu/~pogge/Ast161/Unit4/venussun.html

You can read about Henrietta Leavitt at

http://en.wikipedia.org/wiki/Henrietta_Swan_Leavitt

If you’re in the San Francisco Bay Area, you can see Silent Sky, a truly excellent play about her and about these discoveries:

http://www.theatreworks.org/shows/1314-season/silentsky

If you want to know about Cecilia Payne-Gaposchkin and her work, you can start here:

https://plus.google.com/+YonatanZunger/posts/NbZzUHmybti

If you want to know about B2FH and how matter is produced in stars, you can start here:

https://plus.google.com/+YonatanZunger/posts/EfmdR6VWvRM

If you enjoy the astronomy in this story, you should look at this article about the main sequence of stars:

http://en.wikipedia.org/wiki/Main_sequence

It walks you through the Hertzsprung-Russell diagram, and will lead you down a splendid rabbit hole of what the stars are made of, how they work, the kinds of stars out there, what might support life, and so on.

For the story of the Great Debate of early twentieth-century astronomy, about whether or not there was anything beyond our galaxy, you can start with

http://en.wikipedia.org/wiki/Great_Debate_(astronomy)

For Type Ia supernovae, start with 

http://en.wikipedia.org/wiki/Type_Ia_supernova

If you want to know more technical details of white dwarfs, you might start from Kittel & Kroemer’s textbook on statistical thermodynamics:

http://books.google.com/books/about/Thermal_Physics.html?id=c0R79nyOoNMC

There’s a very good non-technical discussion of the expansion of the universe at

http://en.wikipedia.org/wiki/Metric_expansion_of_space

and of Hubble’s Law in particular at

http://en.wikipedia.org/wiki/Hubble%27s_law

This may lead you to a list of the most distant objects in the universe

http://en.wikipedia.org/wiki/List_of_the_most_distant_astronomical_objects

and ultimately, to how we know the age of the universe itself:

http://en.wikipedia.org/wiki/Age_of_the_universe

For those of you who feel like you didn’t get enough math

Which is fair enough, since I skimmed over most of it. Let’s start with Eratosthenes’ calculation of the circumference of the Earth. You can see the figure he used in one of the attached images. The two lines with white arrowheads represent the local verticals in Alexandria and Syene; the lines with black arrowheads represent the direction to the Sun. (Since the Sun is so far away, those two lines are parallel to each other.) The lines at Syene match, because the Sun shone straight down the well; the lines at Alexandria differ by an angle θ, which is also the angle at which the shadow was cast. But if you continue the white-headed lines down to the center of the Earth, where they meet, that means that the angle between them must also be θ. That means that the arc from Alexandria to Syene is the same fraction of the circumference of the Earth that the angle of the shadow is a fraction of a complete circle. Eratosthenes measured this fraction to be about 1/50 – that is, an angle of 7°12’ – and so the circumference of the Earth must be 50 times the distance between the cities, or 250,000 stadia.
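Eratosthenes’ arithmetic can be checked in a few lines; here is a minimal sketch in Python, using the 5,000-stadia distance between Alexandria and Syene implied by the 250,000-stadia result above:

```python
import math

# Angle of the shadow at Alexandria: 7 degrees 12 arcminutes = 7.2 degrees.
shadow_angle_deg = 7 + 12 / 60

# That angle's fraction of a full circle...
fraction = shadow_angle_deg / 360  # = 1/50

# ...equals the two cities' fraction of the Earth's circumference.
distance_stadia = 5000  # Alexandria-Syene distance, in stadia
circumference = distance_stadia / fraction

print(fraction)       # 0.02, i.e. 1/50
print(circumference)  # 250000.0 stadia
```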

Next, let’s look at the origins of the equation for measuring overhead objects. The next figure in this post shows two angle measurements being taken of a distant star. They read two different angles, A and A+P. The distance between the two measurements is b, and the unknown distance from the measurement to the star is L. The inner angle opposite to A+P is 180°-A-P; since the sum of the angles of a triangle is 180°, the angle at the star is P.

If we draw the line x, we now have two right triangles. It then follows from the definition of the sine and looking at the lower triangle that x = b sin A; and similarly, from the upper triangle, x = L sin P. Combining these two equations, we immediately find that L = b sin A / sin P.
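The final formula is a one-liner in code. As a sketch (the 1 AU baseline and 1-arcsecond angle below are the textbook definition of a parsec, used here as a sanity check rather than a measurement from the post):

```python
import math

def distance_by_parallax(b, A_deg, P_deg):
    """Distance L to the object, given baseline b and the angles A and P
    (in degrees) from the triangle above: L = b sin A / sin P."""
    A = math.radians(A_deg)
    P = math.radians(P_deg)
    return b * math.sin(A) / math.sin(P)

# A 1 AU baseline and a 1-arcsecond parallax angle give a distance of
# one parsec, about 206,265 AU, by definition.
one_arcsec = 1 / 3600  # in degrees
L = distance_by_parallax(1.0, 90.0, one_arcsec)
print(L)  # ~206265 AU
```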

Note that if we had measured the other leg of the large triangle instead, its length Y would be (by the Pythagorean Theorem) Y² = L² + b² + 2 b L sin(A+P) = L²(1 + 2(b/L) sin(A+P) + (b/L)²). Taking the square root of both sides and using the series expansion for √(1+x), this gives that Y = L (1 + (b/L) sin(A+P) + O(b/L)²), which is to say that the difference between Y and L is a correction of order b/L, which is tiny; thus we can safely consider either leg to be the one we measure.

Finally, to understand how the luminosity of a star relates to distance, imagine a star glowing in empty space. Luminosity is a measure of how much light it emits per second; imagine all of these photons streaming away from it. Because the star is spherical, this light is emitted equally in all directions. 

Draw an imaginary sphere of radius R centered on the star. All of the star’s light must hit this sphere, since there’s nothing in between which could absorb or emit light, and we know that the light has to be evenly spread across the sphere. 

Now imagine there’s a light detector, such as an eye or a telescope, somewhere on the surface of this sphere, a distance R from the star, with a surface area A. The fraction of the star’s light which strikes this detector is equal to the fraction of the sphere’s area which A covers, that is, A/4πR². This means that the brightness that a given detector will see decreases as the square of its distance from the star. (This is also why the Sun would roast you if you were right next to it, but not if you’re at the distance of the Earth.) If you were to move the same detector out to a new distance L, it would intercept a fraction A/4πL² – that is, the power measured would be multiplied by a factor of (R/L)².
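The inverse-square falloff above can be sketched numerically (the luminosity and distances here are arbitrary illustrative values):

```python
import math

def flux(luminosity, distance):
    """Power per unit area at a given distance from an isotropic source:
    the luminosity spread evenly over a sphere of area 4*pi*R^2."""
    return luminosity / (4 * math.pi * distance**2)

# Doubling the distance cuts the received brightness by a factor of 4.
f1 = flux(1.0, 1.0)
f2 = flux(1.0, 2.0)
print(f1 / f2)  # 4.0
```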

When measuring raw power output, we would stop here, saying that the power striking A is equal to PA/4πR² (P being the power emitted by the star). When measuring brightness, however, it’s often convenient to describe the brightness not in absolute terms, but compared to some standard brightness. It’s also common (both in astronomy and photography) to talk about brightness in terms of the magnitude, the logarithm of brightness, because the human eye’s sensitivity to brightness is logarithmic. In astronomy, we use the magnitude, defined to be

m = m₀ – 2.5 log₁₀ (P / P₀)

where m₀ and P₀ are reference values. Different reference values are chosen for different colors of light, so that a single magnitude value corresponds to a given “brightness” as seen by the naked eye. The minus sign means that smaller magnitudes are brighter; the factor of 2.5 is chosen so that five steps of magnitude correspond exactly to a factor of 100 in light intensity. The Sun has an apparent magnitude, seen from Earth, of -26.74.
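A minimal sketch of this definition, taking m₀ = 0 and P₀ = 1 for the reference (illustrative values, not a real photometric zero point):

```python
import math

def magnitude(P, P0=1.0, m0=0.0):
    """Apparent magnitude from received power P, relative to a reference
    power P0 of magnitude m0: m = m0 - 2.5 log10(P / P0)."""
    return m0 - 2.5 * math.log10(P / P0)

# A source 100x brighter than the reference is exactly 5 magnitudes
# brighter (i.e. its magnitude is smaller by 5).
print(magnitude(100.0) - magnitude(1.0))  # -5.0
```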

The “absolute magnitude” of a star is defined to be its magnitude as viewed from a distance of exactly 10pc, and so measures the star’s intrinsic brightness. That is, 

M = m₀ – 2.5 log₁₀ (P(10pc) / P₀)

If we instead were to view it from a distance L, then the power we receive would be equal to the power at 10pc times (10pc/L)²; that is,

M(L) = m₀ – 2.5 log₁₀ ((P(10pc) / P₀) * (10pc/L)²)

Using the normal rules for a logarithm, and letting L be in units of parsecs, we can clean that up to read

M(L) = m₀ – 2.5 log₁₀ (P(10pc) / P₀) + 5 (log₁₀ L – 1) 

or

M(L) = M + 5 (log₁₀ L – 1)

We therefore have a simple relationship between the absolute magnitude M, the magnitude M(L) as viewed from Earth, and the distance L between the star and the Earth. This formula has to be slightly corrected for distant objects, because in that case the motion of the distant object due to the expansion of the universe causes its color to shift, and so different calibration constants have to be used for M and M(L).
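This distance-modulus relation inverts easily. A minimal sketch, in the post’s M(L) notation (the Sun’s absolute magnitude of 4.83 is a standard value, not taken from the post):

```python
import math

def apparent_magnitude(M_abs, L_pc):
    """Magnitude as seen from L_pc parsecs, given absolute magnitude
    M_abs, via the relation M(L) = M + 5 (log10 L - 1)."""
    return M_abs + 5 * (math.log10(L_pc) - 1)

def distance_pc(M_abs, m_app):
    """Invert the relation to recover the distance in parsecs."""
    return 10 ** ((m_app - M_abs) / 5 + 1)

# At exactly 10 pc, apparent and absolute magnitudes agree by definition.
print(apparent_magnitude(4.83, 10.0))  # 4.83 (the Sun's absolute magnitude)

# A star with M = 0 observed at magnitude 10 is 1000 pc away.
print(distance_pc(0.0, 10.0))  # 1000.0
```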

Photography uses a similar logarithmic scale, but instead of using the brightness of Vega as a reference point, it uses the faintest difference capturable by ISO 100 film. EV₁₀₀ is related to apparent magnitude by the formula

EV₁₀₀ = -1.32(M + 1.35)

which is useful in astrophotography. (So for example, the Sun has an EV₁₀₀ of +33.57; viewed through an S/N 14 welding mask – equivalent to 18.5 stops – it then has an EV₁₀₀ of approximately 15, comparable to a scene illuminated by full sunlight.)

Science breakthrough of 2013

I know it’s a bit late; I almost forgot about this. The Scientific American top ten is here: http://goo.gl/n2PzKV What’s your favorite from Science?

You can read more about the runners-up here:

http://news.sciencemag.org/2013/12/sciences-top-10-breakthroughs-2013

and more detail here:

✣ Sleep: The Ultimate Brainwasher?

http://goo.gl/i8uxa5

✣ Source of High-Energy Cosmic Rays Nailed at Last

http://goo.gl/hAAdEe

✣ The CRISPR Craze

http://goo.gl/5G0owy

✣ ScienceShot: Bringing Up Brains

http://goo.gl/DC1AEe

✣ Gut Bugs Could Explain Obesity-Cancer Link

http://goo.gl/NdyPYU

✣ Appendix Evolved More Than 30 Times

http://goo.gl/3meOcN

✣ New Solar Cell Material Acts as a Laser As Well

http://goo.gl/24pPSG

✣ Structural Biology Triumph Offers Hope Against a Childhood Killer

http://goo.gl/VYnBvM

✣ Cell Investigating Breakthrough Stem Cell Paper

http://goo.gl/7XG0fp

✣ ScienceShot: Another Way to a Clear View

http://goo.gl/HYL15e

✪ Cancer Immunotherapy

http://goo.gl/h3oO6s

This is my lazy #ScienceSunday  post. Ask questions and I’ll try to dig up more info on the topics that I know about.

http://www.youtube.com/watch?v=9X-Cl9CMVzg&feature=share

Wow, such science. Many fun. So clever

Happy #FidoFriday! This article about an old, very old, sexually transmitted cancer in dogs is being shared a lot. I think the version from Tommy Leung is clever and fun. He’s such a science hipster.

#ScienceEveryday  

Originally shared by Tommy Leung

Such Transmissible Cancer. Much Old. So Doge. Wow.

As far as sexually-transmitted diseases go, Canine Transmissible Venereal Tumour (CTVT) is one hell of a weird one. It is one of only two known lines of cancer cells that actually act as infectious agents (the other being Devil Facial Tumour Disease, DFTD; see: https://plus.google.com/u/0/111479647230213565874/posts/ZjPVkCK52nU). (For a review of these two clonally transmissible cancers, see: http://www.nature.com/onc/journal/v27/n2s/abs/onc2009350a.html).

Whereas there are various other pathogens/infectious agents such as the Human papillomavirus (HPV) which can trigger the growth of cancer, in the case of CTVT, it is the cancer cell itself which is the infectious agent.

Essentially, CTVT is a line of dog cells that has evolved into something that acts like a clonally-reproducing pathogen. Genetic analyses indicate that this cell line originated about 11,000 years ago and that CTVT contains traces of DNA linking it back to the earliest days of dog domestication. In a new study published in Science, it seems that the original animal that gave rise to CTVT might have been a wolf-dog hybrid closely related to an Alaskan malamute.

To find out more follow this link here: http://www.newscientist.com/article/dn24926-infectious-cancer-preserves-dog-genes-for-11000-years.html

#scienceeveryday   #doge   #cancer   #molecularbiology   #genetics  

Be The Match, I’m a match

In 1998 I registered to be a bone marrow donor with the Be The Match organization. Nothing really happened until a week ago. I received an email in addition to the letter below, stating that I might be a match for a six-year-old boy. There was additional paperwork to go through to make sure that I was willing to continue and consent to further testing. Be The Match was able to use my old sample to confirm that I am a match for the boy (second letter from LifeSource below). So now I wait until the boy is ready for the transplant. I can’t tell you how it feels to possibly be able to help this boy.

☺Why bone marrow transplant?

There are many diseases that can benefit from bone marrow transplants, e.g. leukemia, lymphoma, and sickle cell anemia. In the case of sickle cell anemia, the patient’s red blood cells can become crescent shaped (hence the name) and get stuck in the capillaries. It’s very painful. For leukemia and lymphoma, the patient often receives either chemotherapy or radiation therapy to essentially wipe out the cancerous blood cells. In all of these conditions, the donor’s bone marrow helps make healthy blood for the patient.

☺How do they do the transplant?

Many of you have probably heard of the big needles that are used to get the bone marrow from the donor. Under anesthesia, special needles are used to extract liquid bone marrow from the left and right sides of the pelvic bone (from the back). http://goo.gl/l0byD6 The liquid bone marrow is then infused into the patient intravenously. It takes about 15 days for the donor stem cells to engraft, i.e., find their way to the bone marrow and start producing blood cells. A more recent method, peripheral blood stem cell (PBSC) donation, is more like a real-time platelet donation. With a platelet donation, the donated blood is spun at high speed; centrifugal force drives the blood cells to the bottom of the tube while the plasma (including platelets) stays in the liquid portion of the sample. In the PBSC method, the donor is given drugs to increase the number of circulating stem cells. Blood is drawn from one arm, the stem cells are removed using a technique called apheresis, and the rest of the blood is returned to the donor.

I don’t know much about the patient and I won’t know for some time. If the patient’s family chooses, I may learn more later. Do me a favor and have positive thoughts for this six-year-old boy so that everything goes well.

More info:

http://bethematch.org/

Happy #ScienceSunday