The Big Bang and Cosmology

The night sky is, for the most part, dark.

This observation has undoubtedly been made many times over, but in the nineteenth century, Heinrich Olbers realized that this simple fact contradicted the then-prevailing picture of a static, eternal, infinite Universe. If the Universe is infinitely old and infinitely large (and also homogeneous on large scales), then no matter which way one looks in the night sky, one’s line of sight must eventually land on a star, and therefore the night sky should be a blazing sphere of light. This is Olbers’s Paradox — he was not the first to draw this conclusion, but he is the one best known for it.

Even when we consider the fact that less light reaches us from more-distant stars, Olbers’s Paradox is still not resolved. If we picture a series of “shells” of stars, centered on the Earth, we can see that while the light reaching us from each individual star is proportional to \frac{1}{{d}^{2}} as a consequence of the Inverse Square Law, the number of stars per shell is directly related to the surface area of the shell and scales with {d}^{2}. Therefore the light reaching us from each shell is the same and the paradox still stands.


Illustration of Olbers’s Paradox (Credit: Htkym / Wikipedia)

Of course, the solution to Olbers’s Paradox is that the Universe is not both infinitely old and infinitely large.

The theory of the Universe beginning with a Big Bang was first proposed by Georges Lemaître in 1927, as part of a solution to the Einstein field equations. (Einstein himself believed in a static universe and had to introduce a “cosmological constant”, Λ, into his equations to compensate.) Edwin Hubble provided concrete observational evidence for an expanding Universe when he noticed that galaxies farther from the Earth are moving away at a faster rate; this is now called Hubble’s Law, with recessional velocity equaling Hubble’s constant (H0) times distance, v = H0 d. Hubble’s constant is currently thought to be ~ 71 \frac{km/s}{Mpc}, but it is worth noting that Hubble’s constant is not truly a constant, as it changes as the Universe expands.
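To put numbers on Hubble’s Law, here is a minimal sketch in Python (assuming the rough H0 of 71 km/s/Mpc quoted above; the function names are ours, not anything standard):

```python
# Hubble's Law: v = H0 * d
# Assumes H0 = 71 km/s/Mpc, the rough value quoted above.

H0 = 71.0  # km/s per Mpc

def recession_velocity(distance_mpc):
    """Recessional velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

def hubble_distance(velocity_kms):
    """Distance (Mpc) implied by a measured recessional velocity (km/s)."""
    return velocity_kms / H0

print(recession_velocity(100))   # a galaxy 100 Mpc away recedes at ~7100 km/s
print(hubble_distance(21300))    # ~300 Mpc for a 21,300 km/s recession velocity
```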

Rough Timeline of the Big Bang / Early Universe

  • t = 0 — BANG! (not really an explosion)
  • t < {10}^{-44} sec — Planck era, quite literally our idea of it is limited to “?????”
  • t = {10}^{-36} to {10}^{-34} sec — Universe inflates dramatically, increasing in size by a factor of {10}^{50}, strong force becomes distinct
  • t = {10}^{-12} to {10}^{-10} sec — Electromagnetic and weak forces become distinct
  • t = {10}^{-6} to {10}^{-5} sec — hadrons (protons, neutrons, etc.) and leptons (electrons, positrons etc.) form from quarks
  • t = 1 sec — annihilation of matter and antimatter has slowed, matter dominates even though they should have been created in equal amounts
  • t = {10}^{2} sec — the nuclei of Hydrogen and Helium (and small amounts of Lithium and Beryllium) are formed in what’s called “Big Bang Nucleosynthesis”

A brief history of everything — click to expand to a readable size (Credit: CERN)

About 380,000 years after the Big Bang, the Universe had finally cooled off enough for atoms to form (on the order of 3000 K), thus allowing photons to travel through space without being constantly scattered by free electrons. Due to the ongoing expansion of the Universe, the radiation from this point in time has been cosmologically redshifted to the point where it now falls in the microwave part of the EM spectrum. Thus, we call it the Cosmic Microwave Background Radiation (CMBR). The CMBR was first detected by Arno Penzias and Robert Wilson in the 1960s, as a faint bit of microwave noise coming from, well, everywhere. Data most famously collected by the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP) shows that this “noise” matches almost exactly with what we would expect from a cosmologically redshifted version of the 3000 K blackbody radiation curve.


The CMBR as observed by Planck — yes, it’s not from WMAP, Planck is more recent and we like it more (Credit: ESA and the Planck Collaboration)

After around 1 billion years, stars and galaxies have finally formed and the universe as we know it has started to take shape. But as we observe the beginnings of the Universe, we also wonder, what will be its eventual fate? To determine this, we must turn to some rather complicated cosmology.

Cosmologists define a density parameter {\Omega}_{total}, which is the ratio of the average density of (for lack of a better term) stuff in the Universe — we can’t call it matter because only a fraction of it is matter — to a critical density, {\rho}_{crit}. This critical density is just enough for the Universe’s expansion rate to slow down to zero as time approaches infinity (there is a quick numerical sketch of it after the list below). There are three main possibilities:

  • {\Omega}_{total} > 1 (ρ > {\rho}_{crit}) — a closed Universe, it will eventually reach a maximum size and then start collapsing (“Big Crunch”)
  • {\Omega}_{total} = 1 (ρ = {\rho}_{crit}) — a flat or critical Universe, the expansion rate of the Universe will approach zero as time goes on
  • {\Omega}_{total} < 1 (ρ < {\rho}_{crit}) — an open Universe, it will expand forever, but the rate of expansion may be constant or it may be increasing
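For the curious, the critical density itself comes from the standard relation {\rho}_{crit} = \frac{3{H}_{0}^{2}}{8\pi G}, which isn’t derived in this post. Here is a quick numerical sketch of it and of Ω, again assuming H0 ≈ 71 km/s/Mpc:

```python
import math

# Critical density: rho_crit = 3 H0^2 / (8 pi G).
# A sketch assuming H0 = 71 km/s/Mpc (converted to SI below).

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22                # meters per megaparsec

H0 = 71.0 * 1000 / MPC_IN_M        # 71 km/s/Mpc -> 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(rho_crit)                    # ~9.5e-27 kg/m^3, i.e. a few hydrogen atoms per cubic meter

def omega_total(rho):
    """Density parameter for a given average density rho (kg/m^3)."""
    return rho / rho_crit
```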

Different fates of the Universe — orange = closed, green = flat/critical, blue = open, red = open and accelerating (Credit: NASA GSFC)

Data from distant Type Ia supernovae and the CMBR appear to support a model where ρ is very close to {\rho}_{crit}, but the expansion of the Universe is nonetheless accelerating. However, this raises another problem — the amount of mass that we can see and measure out in the universe is only about 4% of what is required to match {\rho}_{crit}. This is where the “stuff” we referred to earlier comes into play.

First of all, astronomers have noticed that galaxies and clusters appear to contain much more mass than we can actually see. Rather unimaginatively, they named this invisible source of mass dark matter. Even the combined amount of regular matter and dark matter is nowhere near enough to match {\rho}_{crit}, but the presence of dark matter still doesn’t explain why the expansion rate of the Universe is increasing. Cosmologists believe that another type of stuff, dark energy, actually creates a “negative pressure” (that is to say, it repels other stuff), thus causing the acceleration.


Distribution of matter and energy in the Universe (Credit: NASA / JPL-Caltech / T. Pyle)

Whew.

We thank you for sticking with us through that, as theoretical cosmology is not exactly our strong suit as astronomy geeks, but we hope you enjoyed our attempt to — quite literally — explain the Universe.

==========

Sources and links for further reading:


Coordinates

Coordinates. You have to have them for everything you do (at least in astronomy), they’re a pain in the neck to convert, and you have to learn to read all of them. There are three that we need to discuss—the altitude-azimuth coordinate system, the equatorial coordinate system, and the galactic coordinate system. The latter two are conventionally written in reference to the J2000 epoch.

First, we’ll discuss the simplest of these—the altitude-azimuth (or horizon) coordinate system. It’s a system based on you (yes, for once, the universe does indeed revolve around you). This system basically takes where you are right now and puts all the stars and whatnot in relation to you. To understand this system, you first must know what a zenith is. For any of you who are slightly literate, you know that “zenith” means that something is at its highest point. So in astronomy, the point that is directly above you is the zenith. Because the heavens are treated as a sphere but you can only see half of it at any given time, the altitude is the angle measured up from the flat horizon line to the star, running from 0° at the horizon to 90° at the zenith. Now, we have to measure the azimuth. The azimuth is basically the angle along the horizon, measured from due North (conventionally eastward), to the point directly below the star. Confused? Here’s a diagram:

For reference (although you have doubtless already figured this out), the altitude-azimuth system is measured in degrees, just like our very own latitude/longitude system. This system is difficult to use in practice, purely because it completely depends on where you are. Because of this, observers at different locations (or at different times of night) measure different coordinates for the very same star. For this reason, astronomers hate using this system, preferring instead the equatorial coordinate system, which we’ll talk about next.

The equatorial coordinate system is the one that you all have heard of—the one with the right ascension and declination stuff. (And if you haven’t, then what the heck are you doing in astronomy?!) The system, as its name implies, is based on Earth’s latitude/longitude system, but isn’t affected by the Earth’s rotation. Declination (DEC) is the astronomy version of latitude—and it’s measured from the equator, too! It’s just the degrees north or south of the celestial equator, which is Earth’s equator projected out onto the celestial sphere. Right ascension (RA) is—you guessed it—the longitude of astronomy, and it is measured from the vernal equinox, which is just astronomy’s version of the Prime Meridian. Right ascension is usually measured in hours, minutes, and seconds, going all the way up to 24 hours. The full 360 degrees equals 24 hours, so 15 degrees equals one hour, and one degree equals four minutes of time (there’s a quick conversion sketch just below). Lovely picture here:

Because this is based on something that doesn’t change as the Earth rotates, the RA/DEC coordinates don’t change on a day-to-day—or even year-to-year—basis. They stay essentially constant, drifting only very slowly because of the precession of Earth’s axis, which is why coordinates are quoted for a particular epoch like J2000.
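Since the 15-degrees-per-hour business tends to trip people up, here is a tiny sketch of converting a right ascension given in hours, minutes, and seconds into decimal degrees (the example coordinates are approximate):

```python
def ra_to_degrees(hours, minutes, seconds):
    """Convert right ascension from h:m:s to decimal degrees (15 degrees per hour)."""
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

# Example: Betelgeuse's RA is roughly 5h 55m 10s
print(ra_to_degrees(5, 55, 10))   # ~88.79 degrees
```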

Again, this is the coordinate system most often used by astronomers—and it is also the system in common use in Science Olympiad. If you are competing in Astronomy/Reach for the Stars, please know this system better than the back of your hand, and you will have a chance of not failing miserably in these events.

The last system you really need to know is the galactic coordinate system. The thing about the other two coordinate systems is that they’re based off of Earth—and the celestial equator is tilted about 62.87° with respect to the plane of the Milky Way (henceforth referenced as “the Galaxy”). And that’s a problem (as are our egos, but we won’t go into that). This is really a lot like the equatorial system, only it’s based off the Galaxy midplane, which is the Galaxy’s equator. (Technically, it is based off a plane through the Sun parallel to the Galaxy midplane, but really, the two are so close together that it doesn’t make a difference.) Latitude (b) is the angle above or below the Galaxy midplane, measured toward the North Galactic Pole (NGP), which is the direction perpendicular to the midplane; longitude (l) is the angle measured along the midplane, starting from the direction of the Galactic centre. We measure both of these from the Sun. (We design a system so as to not be based off of Earth, so what do we base it off of? Yes, that’s right—the Sun, despite the fact that it’s nowhere near the centre of the galaxy.) Before we go any further, a diagram to help all of you mind-numbingly confused people.

As you can see from the diagram, there is a line that connects the Sun to the galactic centre. This is the line that longitude (the universal symbol of which is l) is measured from. Sweep along the Galaxy midplane from this line until you reach the point that sits directly beneath (or above) the star, and there you have it! Your longitude coordinate.

We measure latitude (the symbol of which is b) very similarly. Since the Sun sits essentially in the midplane, with the NGP directly overhead (what a coinkydink, don’t you think?), latitude is simply the angle, as seen from the Sun, between the Galaxy midplane and the star itself. (Use your logic and a bit of trigonometry here—assuming the same height above the Galaxy midplane, the farther out your star is, the smaller the angle. This is why you have to know the distance.)

Or, for a slightly better description, taken from Carroll and Ostlie:

“The Galactic coordinate system exploits the natural symmetry introduced by the existence of the Galactic disk. The intersection of the midplane of the Galaxy with the celestial sphere forms what is very nearly a great circle, known as the Galactic equator. This orientation is depicted in Fig. 24.16. Galactic latitude (b) and Galactic longitude (l) are defined from a vantage point taken to be the Sun, as shown in Fig. 24.17. Galactic latitude is measured in degrees north or south of the Galactic equator along a great circle that passes through the north Galactic pole. Galactic longitude (also in degrees) is measured east along the Galactic equator, beginning near the Galactic center, to the point of intersection with the great circle used to measure Galactic latitude.

By international convention, the J2000.0 equatorial coordinates of the north Galactic pole (b = 90°) are:

α_NGP: 12h 51m 26.28s

δ_NGP: +27° 7′ 41.7″,

and the origin of the Galactic coordinate system (l₀ = 0°, b₀ = 0°) corresponds to

α₀: 17h 45m 37.20s

δ₀: −28° 56′ 9.6″.”

Now, for conversions. Brace yourselves for some brutal math rearing its ugly head. If you do not know how to do spherical trigonometry—which is nowhere near nice, normal plane trigonometry—then stay away and keep your sanity. YOU NEED TO HAVE TAKEN PRECALC TO BE ABLE TO UNDERSTAND THIS (or at least advanced trig).

Since the altitude-azimuth system is based on, well, us, we can’t convert it to anything without also knowing exactly where and when the observation was made. But equatorial system to the galactic coordinate system and vice versa? Astronomy says yes.

The equations for converting equatorial coordinates to galactic coordinates are as follows:

\sin {b} =\sin { {\delta}_{NGP} } \sin {\delta} +\cos { {\delta}_{NGP} } \cos {\delta} \cos { (\alpha -{\alpha}_{NGP}) } \\ \cos {b} \sin { ({l}_{NCP}-l) } =\cos {\delta } \sin { (\alpha -{\alpha}_{NGP}) } \\ \cos {b} \cos { ({l}_{NCP}-l) } =\cos { {\delta}_{NGP} } \sin {\delta} -\sin { {\delta}_{NGP} } \cos {\delta} \cos { (\alpha -{\alpha}_{NGP}) }

The equations for converting galactic coordinates to equatorial coordinates are as follows:

\sin {\delta} =\sin { {\delta}_{NGP} } \sin {b} +\cos { {\delta}_{NGP} } \cos {b} \cos { ({l}_{NCP}-l) } \\ \cos {\delta} \sin { (\alpha -{\alpha}_{NGP}) } =\cos {b} \sin { ({l}_{NCP}-l) } \\ \cos {\delta} \cos { (\alpha -{\alpha}_{NGP}) } =\cos { {\delta}_{NGP} } \sin {b} -\sin { {\delta}_{NGP} } \cos {b} \cos { ({l}_{NCP}-l) }

The “ℓ” is the same as l; I’m just not good enough with LaTeX to make fancy fonts work. (For the record, {l}_{NCP} is the galactic longitude of the north celestial pole.) I won’t work examples this time because if you saw the equations that I had to type in…well, suffice it to say you wouldn’t want to either.
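For anyone who would rather let a computer do the spherical trigonometry, here is a sketch in Python of the equatorial-to-galactic equations above. The NGP coordinates come from the Carroll and Ostlie values quoted earlier; the value of l_NCP (about 122.932°) is not given in the text above and is assumed here as the conventional J2000 figure.

```python
import math

# Sketch of the equatorial -> galactic conversion above, using the J2000
# NGP coordinates quoted from Carroll & Ostlie. The galactic longitude of the
# north celestial pole, l_NCP ~ 122.932 deg, is the conventional J2000 value
# and is assumed here (it is not given in the text).

A_NGP = math.radians(192.8595)   # alpha_NGP = 12h 51m 26.28s
D_NGP = math.radians(27.1283)    # delta_NGP = +27 deg 07' 41.7"
L_NCP = 122.932                  # degrees

def equatorial_to_galactic(ra_deg, dec_deg):
    """Convert RA/Dec (degrees, J2000) to galactic longitude/latitude (degrees)."""
    a = math.radians(ra_deg)
    d = math.radians(dec_deg)
    sin_b = (math.sin(D_NGP) * math.sin(d)
             + math.cos(D_NGP) * math.cos(d) * math.cos(a - A_NGP))
    b = math.asin(sin_b)
    # The other two equations give sin(l_NCP - l) and cos(l_NCP - l);
    # atan2 recovers the angle with the correct quadrant.
    y = math.cos(d) * math.sin(a - A_NGP)
    x = (math.cos(D_NGP) * math.sin(d)
         - math.sin(D_NGP) * math.cos(d) * math.cos(a - A_NGP))
    l = L_NCP - math.degrees(math.atan2(y, x))
    return l % 360.0, math.degrees(b)

# The galactic centre (RA ~266.405 deg, Dec ~ -28.936 deg) should land near l = 0, b = 0.
print(equatorial_to_galactic(266.405, -28.936))
```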

And that’s a wrap, folks. Do have fun.

__________________________________________________________

TL;DR: The altitude-azimuth coordinate system, the equatorial coordinate system, and the galactic coordinate system are the three main coordinate systems seen in astronomy. The altitude-azimuth system is based on where you are standing and is measured in degrees. The equatorial system is based on the orientation of the Earth; longitude (right ascension) is measured in hours, while latitude (declination) is measured in degrees. The galactic coordinate system is based on the orientation of the Milky Way and the Sun, and measures both longitude and latitude in degrees. The math for converting is awful—brutally difficult and takes absolutely forever.

Further Reading:

http://spider.seds.org/spider/ScholarX/coords.html

http://www.shodor.org/refdesk/Resources/Applications/AstronomicalCoordinates/

http://www.physics.uc.edu/~sitko/AdvancedAstro2011/1-TheSky/Sky.pdf

https://dept.astro.lsa.umich.edu/ugactivities/Labs/coords/index.html

http://astronomy.swin.edu.au/cosmos/N/North+Galactic+Pole

http://www.physics.uc.edu/~sitko/Fall2002/1-Sky/sky.html

*Credit for information also goes to Carroll/Ostlie.

Light Curves

First, a bit of housekeeping: we apologize for the fact that this week’s post is slightly late, and unfortunately it will also be somewhat shorter, as we have both been insanely busy this week (we fear this will be a recurring theme), but ’tis a post nevertheless.

———-

Light curves are almost exactly what they sound like — they plot the brightness of an object over time. They’re typically used with all kinds of variable stars, in particular eclipsing binaries, pulsating variables, and supernovae.

If the light curve seems to repeat itself, like these idealized examples, you’ve got some kind of periodic variable. You can determine the period of the star simply by looking at how long it takes the light curve to start repeating a cycle.

Credit: Davison E. Soper at University of Oregon

If it looks something like this, you’ve got yourself an eclipsing binary — the brightness is mostly constant, except for the dips where one star passes in front of the other and blocks some of the light from it.

Eclipsing Binary light curve

Credit: Institute for Astronomy at the University of Hawaii

If it looks like this, then you’ve got a cataclysmic variable star, more specifically, a supernova. Light curves from Type Ia supernovae are particularly important because they can be calibrated as standard candles to let us determine the distance to the exploded star (as for why this is possible, that is a topic for a future post). Furthermore, Type II supernovae are classified based on their light curves, with Type II-P having a “plateau” of relatively constant brightness shortly after maximum magnitude before decaying away, while Type II-L tend to just fade away in a relatively linear fashion.

SN light curves

Credit: University of Oregon

A very useful astronomy tool based off light curves is the O-C diagram, typically used for periodic variable stars. O-C stands for “observed minus calculated”, and (this seems to be another recurring theme today) it’s exactly what it sounds like. First, you look at the collected data for a variable star and fit a model (an epoch and a period) that predicts when each maximum, or eclipse minimum, should occur. To create an O-C diagram, you plot time (or cycle number) on the x-axis, just like for a light curve, but on the y-axis you plot the observed time of each maximum minus the time your model calculated for it.
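Here is a minimal sketch of the bookkeeping, with a made-up epoch, period, and observed times of maximum purely for illustration:

```python
# A minimal O-C sketch for a periodic variable: compare observed times of
# maximum light with the times predicted by an assumed ephemeris
# T_calc(n) = T0 + n * P. All numbers here are invented for illustration.

T0 = 2455000.0      # assumed epoch of a maximum (Julian Date)
P  = 0.5168         # assumed period in days

observed_maxima = [2455000.000, 2455051.700, 2455103.400, 2455155.100]

for t_obs in observed_maxima:
    n = round((t_obs - T0) / P)         # cycle count since the epoch
    t_calc = T0 + n * P                 # predicted time of this maximum
    print(n, round(t_obs - t_calc, 4))  # O-C in days; here it grows steadily,
                                        # meaning the true period is slightly
                                        # longer than the assumed one
```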

O-C diagram for AB And

Credit: AANDA (“Starspots and photometric noise on observed minus calculated (O-C) diagrams” by A. Kalimeris, H. Rovithis-Livaniou and P. Rovithis)

If your O-C diagram shows a straight horizontal line at zero, like in the first half of the diagram above, then your model accurately predicts the behavior of the star, and you should pat yourself on the back. However, as always in science, you can be wrong. If the line has a positive slope (like in the second half of the diagram above), then the real period is longer than what you thought it was; if the line has a negative slope, the real period is shorter than your predicted period. And finally, if the O-C diagram shows a curved line, then the period is changing for some reason, which may warrant further investigation. Of course, there are more complicated ways in which you can be wrong, but we won’t address them here, in order to save time and minimize confusion.

———-

Sources and links for further reading:

Keeping Time in Astronomy

We are sorry this post was so untimely, but you see it was to show how important keeping time is (okay, just bear with us).  Yes, it’s about time for this post!  But why?  That’s because it’s about time of course!  Timekeeping is ultimately derived from watching the motions, or at least the light, of the Sun, Moon, and other objects.  Things can appear slow, fast, or like nothing in terms of time; it’s all relative, of course.  In fact, to an extent we can say that these clocks have driven us cuckoo!

The basis of time is the SI unit, the second, a special little s that is the only base unit whose everyday multiples (minutes, hours, days) don’t follow our normal SI powers of ten.  Where could this even come from?  It used to be defined as one second of one minute of one hour of one mean solar day, therefore being 1/86,400 of a solar day.  Now we can use the wonders of the atomic clock!  The reason for the switch is that the Earth-based day isn’t constant, which forces all sorts of complicated “leap” corrections; there is even a leap second along with the leap year.  These leap times were introduced to keep the calendar and the clock in step with the Earth’s irregular motions.  All these factors have led scientists and astronomers to develop many definitions of time.

To start, we have the year.  On average it is about 365.25 days.  So, where does the decimal come from?  First, there are a few different ways to keep time.  Sidereal time tracks the rotation of the Earth with respect to DISTANT STARS; this comes from observing the sky.  Solar time (related to synodic motion) is measured with respect TO THE SUN — a daily observation of when the Sun returns to the same place in the sky.  How much of a difference could this make?  Well, the mean solar day is about 24 hours, while the sidereal day is 23h 56m 4s.  On top of that difference, think about how the Earth moves in space: we speed up and slow down in our orbit throughout the year, so the apparent solar day itself varies, and over a few years (let alone billions of years) these errors add up.  To be direct, the motions of the Earth are irregular enough that they alone are reason to develop more accurate timekeeping.  Also, the sidereal year is one orbit around the Sun relative to the stars, while the tropical year is measured between two successive spring equinoxes.  That alone creates a difference of about 20 minutes per year, so this too builds up over time.

You can see not only is this revolutionary, but it is also timely (from Prof. Richard Pogge, Astronomy 161: An Introduction to Solar System Astronomy, http://www.astronomy.ohio-state.edu/~pogge/Ast161/Unit2/time.html, listed below ).

What would astronomy or science be without having more than a few ways to do something?  There is also standard time.  This arose with railroads and telegraphs, which needed clocks at different locations to be synchronized rather than each town keeping its own local solar time.  This leads into time zones, which divide the Earth into zones of roughly 15 degrees of longitude.  And that links to Universal Time (UT), in which time is expressed as an offset from the time at the Prime Meridian.  UT replaced Greenwich Mean Time (GMT), which had multiple conflicting definitions.  UT is technically closer to a mean solar time, with Greenwich as the reference.

But then there is more, of course.  Eventually, with all these irregularities, scientists decided that our definitions were a bit faulty, so the second was defined again, this time atomically.  The second is now defined as 9,192,631,770 cycles of the radiation from a hyperfine transition in cesium-133 (this is a specific isotope, but if you get your hands on cesium in general…well, please be responsible/have fun with the explosion).  This definition is what the notable atomic clock is built on.

Another type of advanced timekeeping is Ephemeris Time (ET), based on observing the motions of the planets and the Sun.  ET was briefly used to define the SI second, but it has since been phased out as we have discovered better ways of timekeeping.  Now we’ll return to something nuclear.  Radioactive decay can also serve as a clock: for example, H-3 (tritium) beta decays to He-3 with a half-life of about 12.3 years, so measuring how much has decayed tells you how much time has elapsed.  Next we have something very astronomical: pulsar time, which uses pulsars (rapidly rotating neutron stars) whose pulse periods are so stable that the best of them rival atomic clocks.

Lastly, we have one of the more important astronomy-related methods of keeping time: Julian Dates (JD).  This is a continuous count of days since noon Universal Time on January 1, 4713 BCE (in the proleptic Julian calendar).  This may seem quite arbitrary, but the reasoning was that at the time of its development there were no known historical events before this year, so it avoids negative dates and BC/BCE/AD headaches.  The starting point also lines up with solar and lunar calendar cycles.  Roughly 2.5 million days have elapsed since then, and it may not seem obvious, but converting a calendar date into a JD has to account for leap years, leap days, and the other quirks of the calendar.  The payoff is that a single running number is much easier to work with and can show differences down to fractions of a second in data collection.  To make life easier, below we have these formulas (every division here is an integer division: you drop the remainder):

a=\left\lfloor \frac { 14-month }{ 12 } \right\rfloor \\ y=year+4800-a\\ m=month+12a-3

For dates in the Gregorian calendar:

\\ JD=day+\left\lfloor \frac { 153m+2 }{ 5 } \right\rfloor +365y+\left\lfloor \frac { y }{ 4 } \right\rfloor -\left\lfloor \frac { y }{ 100 } \right\rfloor +\left\lfloor \frac { y }{ 400 } \right\rfloor -32045

For dates in the Julian calendar:

\\ JD=day+\left\lfloor \frac { 153m+2 }{ 5 } \right\rfloor +365y+\left\lfloor \frac { y }{ 4 } \right\rfloor -32083
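Here is a sketch of those formulas in Python, where the // operator performs the integer (floor) division the formulas require:

```python
# A sketch of the Julian Day Number formulas above; every division is an
# integer (floor) division, which is what the // operator does in Python.

def julian_day_number(year, month, day, gregorian=True):
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    if gregorian:
        return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045
    else:  # date given in the (proleptic) Julian calendar
        return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

print(julian_day_number(2000, 1, 1))   # 2451545 -- the J2000.0 epoch (at noon)
```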

Aside from this, we should note a common notation: J2000.  This is related to epochs — J2000.0 is the instant January 1, 2000 at 12:00 (JD 2,451,545.0), and coordinates or orbital elements tagged J2000 are referred to that moment.

==========

TL;DR

Time can be kept by tracking specific stars in the sky, by tracking the Sun, with atomic clocks and pulsars, or simply by counting the days from a specific date.  Time may not seem directly important, but a lot of work has been put into this concept.  It is the basis of a large portion of physics and technology, and astronomy itself benefits immensely from being able to keep track of time for objects in an orderly way.  So next time your clock wakes you up in the morning, remember not to throw it across the room, because it’s just another way of reminding us how important time is.  Also, it means that you should get the heck out of bed or else you’ll be late.

==========

Sources:

Time in general

http://physics.nist.gov/cuu/Units/second.html 

http://www.astunit.com/astunit_tutorial.php?topic=time

http://www.maa.mhn.de/Scholar/times.html

http://www.maa.clell.de/Scholar/calendar.html

http://www.astronomy.ohio-state.edu/~pogge/Ast161/Unit2/time.html

http://astronomy.nmsu.edu/nicole/teaching/ASTR505/lectures/lecture08/slide08.html

http://curious.astro.cornell.edu/timekeeping.php

http://www.skyandtelescope.com/howto/basics/3304611.htm

http://www.optcorp.com/edu/articleDetailEDU.aspx?aid=2193

Julian Dates

http://aa.usno.navy.mil/data/docs/JulianDate.php

http://scienceworld.wolfram.com/astronomy/JulianDate.html

http://curious.astro.cornell.edu/question.php?number=88

http://www.tondering.dk/claus/cal/julperiod.php#formula

Metallicity and Star Populations

No, star populations are not the number of stars in a specific galaxy or the universe or whatnot. No, it has nothing to do with numbers of stars. In fact, these “populations” are actually classes of stars.

To begin with, we need to discuss metallicity. As you (probably) know, stars are almost completely made up of hydrogen and helium. So any other element found in a star would be considered a “metal”. In astronomy, carbon, neon, and fluorine are all considered metals. (And also every other element not hydrogen or helium.) So when you study astronomy, you really must forget all those silly little things taught to you in chemistry class (assuming you have ever had a chemistry class before, of course). Metallicity is essentially the amount of a star that isn’t made up of hydrogen or helium, and the symbol for metallicity is generally Z.

Anyways, the amount of metal found in a star is its metallicity. And metallicity is also divided up into several classes (called populations), depending on the amount of metal in the star. And that, friends, is what a star population is.

There are three categories of metallicity: Population I, Population II, and Population III. Each population, respectively, has decreasing metallic content and increasing age (theoretically—I don’t think anyone has ever tried to go out and sample stuff from a star…). To make all this a lot clearer, I’m going to try to explain in the next few paragraphs.

First off, let’s talk about Population I stars. These are the young stars, having more metal than the other populations (because of the development of heavier elements over billions and billions of years). In a galaxy, they are usually located in the disk (particularly the spiral arms), rather than in the older bulge and halo. For these stars, Z~0.02, and up to about 0.03.

The next class is called Population II stars. They are the old stars, with very little metal (because they formed at a time when heavier elements were scarce). They are found mostly in the galactic halo, significantly above or below the disk, and in globular clusters. The metallicity of these stars is around Z~0.001. And since the kinematics, positions, and chemical compositions of Population I and II stars are different, they provide us with a wealth of information about the Milky Way.

Between Population I and Population II stars, we have the intermediate, or disk population. Those are the stars that just kind of loiter around somewhere in the galaxy, somewhat between the Population I and II stars. They have a medium amount of metal, and they just kind of float around the place. These are your average, every-day, middle-age stars. In fact, they are so average that they get a special name, just to make them feel better. Although sometimes, astronomers like to just put these stars in Population I or II categories, which personally I think is just idiotic. They don’t match the definition, in any case. Do note, though, that just like middle-aged people, they’re there…but the other people on either side of their age range are just as numerous. They don’t have a set metallicity or even a set range; they just are.

And then we have Population III stars. We don’t know if these exist. That’s right, folks, even the existence of this class of stars is purely theoretical. These are the stars that are thought to contain no metal at all, so their metallicity is essentially Z~0. Cool, huh? But the thing is…we’ve never found a Population III star. They would have to have formed from the pristine gas left over from the Big Bang, and with such high masses and energies, they would have burned out fairly (very) quickly. Theoretically, just after the Big Bang, there was only hydrogen and helium, with trace amounts of lithium and beryllium. There were no heavier elements, and apparently the lithium and beryllium didn’t really make their way into the stars. So these stars contain just about no metal, are nearly as old as the Universe itself, and are purely theoretical. My personal favourite, really. So if on a test they ask you “What population is <star>?”, don’t put Population III. They don’t exist (at least, not definitely).

Our galaxy is actually mostly Population I stars (assuming the classification of the disk population into Populations I and II, of course), because Population II stars are very old and many of them have burned out. Really, the easiest Population II stars to spot are the ones in globular star clusters, and I’m sure that all of you have studied globular star clusters very extensively.

By mass, our Sun is about 1.8% metals. But for comparing stars we don’t quote that percentage directly; the Sun is the standard of comparison for any other star’s metallicity, with the Sun’s [Fe/H] defined to be 0. Using the metallicity calculation below, any star with [Fe/H] < 0 is automatically Population II, and any star with [Fe/H] > 0 is automatically Population I. (Again, this irritates me to no end because of the fact that there is a disk population.) Because we are egotistical gits, everything has to be based on our Sun, so the metallicities found in our galaxy range from about -5.4 to 0.6. Bit skewed? Yes. Obviously, our Sun is on the metal-rich side of that range, and yet we still call it “neutral”.

Then we have the calculations for metallicity. They are complicated, insane, and involve higher mathematics. If you have not taken Algebra 2 (or don’t know what logs are), you might want to stay away from this section. It’s like the equilibrium constant stuff, just not quite as complex. But if you don’t know logs and enjoy torturing yourself greatly, go ahead and read this.

The measurement of metallicity actually comes from the amount of iron in the star. Not every bit of non-hydrogen/helium in the star, but simply iron. It serves as a basis for comparing the ratios of iron to hydrogen in relation to our sun, and it’s the general way to figure metallicity. Now, iron is not the most abundant metal in stars (or even close to it, really), but it’s among the easiest to measure with the technology that we have now. So since we decided to be lazy, this is what we get.

VERY IMPORTANT: The equation below is a RATIO of how much metal is in the star. It is the most commonly used form, but it is NOT a value. The value is what is in the paragraphs above. I hope that clears up any confusion.

Now, the general formula for deriving metallicity is this:

\left[ \frac{Fe}{H} \right] =\log_{10}{ { \left( \frac{ {N}_{Fe} }{ {N}_{H} } \right) }_{star} } -\log_{10}{ { \left( \frac{ {N}_{Fe} }{ {N}_{H} } \right) }_{sun} }

If all of you know the law of logs that states that \log{a} -\log{b} =\log{ \frac{a}{b} }, then you can simplify that thing up there to:

\left[ \frac{Fe}{H} \right] =\log_{10}{ \left[ \frac{ { \left( {N}_{Fe}/{N}_{H} \right) }_{star} }{ { \left( {N}_{Fe}/{N}_{H} \right) }_{sun} } \right] }

which, by the way, is much, much more helpful to solve. The whole \left[ \frac{Fe}{H} \right] thing is in fact not division, but actually a representation of the logarithmic ratio of the abundance of iron in a star in comparison to the Sun. {N}_{Fe} means the number of iron atoms in a given amount of volume, and {N}_{H} is the number of hydrogen atoms in a given amount of volume.
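As a quick sanity check of the formula, here is a small sketch; the solar iron-to-hydrogen number ratio used (about 3.2 × 10⁻⁵) is an assumed round value for illustration:

```python
import math

# A small sketch of the [Fe/H] ratio defined above. The solar iron-to-hydrogen
# number ratio used here (~3.2e-5) is an assumed round value for illustration.

N_FE_OVER_N_H_SUN = 3.2e-5

def fe_h(n_fe_over_n_h_star):
    """Logarithmic iron abundance of a star relative to the Sun."""
    return math.log10(n_fe_over_n_h_star / N_FE_OVER_N_H_SUN)

print(fe_h(3.2e-5))   # 0.0   -> solar metallicity
print(fe_h(3.2e-6))   # -1.0  -> ten times less iron than the Sun (Population II territory)
print(fe_h(6.4e-5))   # ~+0.3 -> metal-rich (Population I)
```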

According to the age-metallicity relation, a higher amount of iron generally means a younger star. However, the universe doesn’t like simple laws that are set in stone, and therefore has to make it more confusing. Because the age estimate is basically based on the amount of iron a star has, the messy statistics of where that iron comes from make everything very misleading. For example, Type Ia supernovae (abbreviated as SN Ia) are responsible for the majority of iron production, and significant numbers of them don’t even appear until roughly a billion years after star formation begins. So early on there’s really not that much iron going around the interstellar medium. And even after the SN Ia events occur, the iron doesn’t mix evenly throughout everything. So essentially, one region may get lots of iron, and another region? Not so much. So even though the stars in both those regions are the same age, the stars in region 2 seem older. Bit problematic, right?

And that concludes metallicity and star populations. If you have any questions, you can (attempt) to email me.

__________________________________________________________

TL;DR – Metallicity is the metal content of a star (in astronomy, a metal is anything that’s not H or He). Stars are divided into populations based on their metallicity, with Pop I stars being metal-rich, Pop II stars being metal-poor, and theoretical Pop III stars having no metals at all. The equations for metallicity compare the amounts of iron and hydrogen in the star to the amounts in the sun (because we base everything off our own star). Metallicity doesn’t necessarily show the age of a star, since Type Ia supernovae scatter iron unequally throughout their surroundings.

__________________________________________________________

I like typing in British English. A lot. So if some things look misspelled…check to see if it’s the British spelling first before calling me out.

ALSO:

Q: How many astronomers does it take to change a light bulb?

A: What’s a light bulb?

Spectral Classes and the H-R Diagram

We previously mentioned the development of spectral classification in our History posts, but now we can really understand the science behind it.

Edward Pickering and Williamina Fleming (you remember them, don’t you?) originally classified stars based on the strength of hydrogen spectral lines — stars with the strongest hydrogen lines were class A, the next strongest were class B, and so on through class N. Later, Antonia Maury started to rearrange Fleming’s spectral classes, and then Annie Jump Cannon further rearranged them into the modern order OBAFGKM, which orders stars by temperature. O-stars are the hottest, typically >30 000 K, while M-stars are the coolest at between 2000 K and 3500 K. These classes are each divided into ten subclasses 0-9; for example, the hottest F-stars are F0 and the coolest are F9.

Stellar Spectra

Credit: University of Arizona [click for full-size image]

Characteristic spectral lines:

  • O — relatively weak H; strong He II and neutral He; Si IV; doubly ionized N, O, and C.
  • B — stronger H; neutral He (maximum intensity around B2); singly ionized O, N, Ne, Mg, Si.
  • A — very strong H (strongest at A0); ionized metals (Fe II, Mg II, Si II, Ca II).
  • F — weak H; both ionized and neutral metals.
  • G — weaker H; Ca II very strong; neutral metals (Fe, Ca, Na, Mg, Ti).
  • K — even weaker H; strong Ca II and neutral Ca; neutral metals; TiO appearing in the coolest K stars.
  • M — very weak H, if visible at all; neutral Ca; molecular bands like TiO, VO, and CN.

As one can see in the image below, the relative strength of spectral lines depends heavily on a star’s temperature. Spectral line strength can be described by the combination of the Boltzmann and Saha equations (both rather complicated). Basically, as the temperature rises, more atoms will be able to elevate their electrons to higher energy levels and thus produce absorption lines. However, if temperatures are too high, the atoms will have absorbed enough energy to be ionized and will not have the necessary electrons to excite to produce absorption lines.
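To see the first half of that argument in numbers, here is a sketch of just the Boltzmann part (the Saha equation, which handles the ionization side, is left out), estimating the fraction of neutral hydrogen atoms sitting in the n = 2 level that can produce visible Balmer absorption lines:

```python
import math

# Boltzmann-equation sketch only (the Saha equation, which handles ionization,
# is ignored here): the ratio of neutral hydrogen atoms in n=2 to n=1, which is
# what controls the strength of the visible Balmer absorption lines.

K_B_EV = 8.617e-5          # Boltzmann constant in eV/K

def n2_over_n1(T):
    """Ratio of hydrogen atoms in n=2 to n=1 (statistical weights g_n = 2 n^2)."""
    g1, g2 = 2.0, 8.0
    delta_E = 10.2          # eV between n=1 and n=2
    return (g2 / g1) * math.exp(-delta_E / (K_B_EV * T))

for T in (3000, 6000, 10000, 30000):
    print(T, n2_over_n1(T))
# The ratio climbs steeply with temperature, but at the highest temperatures
# most hydrogen is ionized (the Saha part), which is why Balmer lines peak
# around class A rather than class O.
```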

Temperature Dependence for Spectral Lines

Credit: KCVS at The King’s University College (Alberta, Canada)

It should be noted that there are more spectral classes than just OBAFGKM — here we present a brief survey of those you’re most likely to cross paths with in Astronomy. Classes L, T, and Y are reserved for stars/substellar objects with temperatures progressively cooler than about 2000 K. Class W (or WR) indicates a Wolf-Rayet star, a massive, evolved star whose powerful stellar winds have blown away most of the hydrogen in its atmosphere. Carbon stars, dying red giants with a large amount of carbon in their atmospheres, are spectral class C. White dwarfs are classified as D, because they are made of degenerate matter. Objects like neutron stars and black holes aren’t given a spectral class, because they are stellar remnants rather than stars.

———-

However, our discussion of spectral classes merely serves as an introduction to the Hertzsprung-Russell diagram, which is perhaps the single most important graph-chart-diagram-thing that you’ll ever encounter in astronomy (a slightly idealized version is shown in the image below). The H-R diagram is named after Ejnar Hertzsprung and Henry Norris Russell, who discovered it independently by plotting the luminosity of stars against color or spectral type, both of which are proxies for the star’s temperature.

H-R Diagram

Credit: ESA

This H-R diagram has double axes, showing the link between temperature and spectral class, and between absolute magnitude and luminosity. The numbering on some of the axes does seem to be going the wrong direction, but by convention, high temperatures are on the LEFT and numerically larger (i.e., dimmer) absolute magnitudes are towards the BOTTOM (this is partly due to the fact that the entire magnitude system is “backwards”). Alternately, the x-axis may be labeled in terms of the stars’ B-V color index, but no matter, it’s still an H-R diagram.

The main sequence appears as a diagonal band across the H-R diagram, clearly showing the link between luminosity and temperature for these stars — actually, both are dependent on a third factor, but we’ll cover that later. Above the main sequence are giant stars, which are relatively cool but have swelled up enough to cause an increase in overall luminosity. Even further above that lie the supergiants, very large stars that are also extremely luminous. Below the main sequence lie the white dwarfs, which are dim but rather hot; they will eventually migrate to the lower right as they cool off.

———-

When Hertzsprung and Russell plotted their data, they found that for classes G, K, and M there was a wide range of stellar luminosities. Independently, they both decided to call the brighter stars “giants”, since for two stars of the same temperature, the more luminous one must also be larger.

This led to the development of the Morgan-Keenan spectral classification, which sorts stars by not only spectral class but also by luminosity class. Stars in luminosity classes I-V are progressively less luminous and smaller in size for their spectral class. The luminosity classes correspond to supergiants, bright giants, giants, subgiants, and dwarfs (NOT white dwarfs, just main sequence stars). Class 0 has been tacked on for incredibly luminous “hypergiants”, while on the other end of the luminosity spectrum, classes VI and VII used to designate subdwarfs and white dwarfs but have since fallen out of common usage.

To give a familiar example, the Sun is a G2V star — its spectral class is G2, so it is slightly cooler than the 6000 K of a G0 star, and it is a main sequence “dwarf” as indicated by luminosity class V.

———-

Perhaps most practically, H-R diagrams can be used to determine the age of star clusters. Imagine that a cluster has just been formed, with stars of all masses. This forms a “perfect” main sequence (ZAMS, the Zero-Age Main Sequence) when the H-R diagram of that cluster is plotted. As the highest-mass stars end hydrogen core fusion, they migrate away from the ZAMS towards the red giant branch, and the H-R diagram for the cluster starts to form a hook shape. The point where the hook starts to diverge from the ZAMS is called the main-sequence turnoff point.

H-R Diagram Clusters

Credit: Mike Guidry, University of Tennessee (through Penn State University)

After pinpointing the main-sequence turnoff, we can now find the age of the cluster. The stars located right at the turnoff point have just reached the end of their main-sequence lifetime; therefore, by determining their ages, we can determine the age of the cluster as a whole. The main-sequence life expectancy of a star is approximately: T = 1/(M^2.5), where M is in solar masses and T is in solar lifetimes (1 solar lifetime ≈ 10 billion years).
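A quick sketch of that estimate in action (the 3-solar-mass turnoff is a made-up example):

```python
# Sketch of the main-sequence lifetime estimate T = 1 / M^2.5
# (M in solar masses, T in solar lifetimes of ~10 billion years),
# applied to a cluster's turnoff mass. The 3-solar-mass turnoff is invented.

SOLAR_LIFETIME_GYR = 10.0

def ms_lifetime_gyr(mass_solar):
    """Approximate main-sequence lifetime in billions of years."""
    return SOLAR_LIFETIME_GYR / mass_solar**2.5

# If stars of ~3 solar masses are just now leaving the main sequence,
# the cluster is roughly this old:
print(ms_lifetime_gyr(3.0))   # ~0.64 Gyr, i.e. about 640 million years
```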

As shown in the image above, young clusters like h + χ Persei have turnoff points towards the left of the main sequence, since only the most massive stars in the cluster have had time to evolve off the main sequence. Old clusters like M67 have turnoff points closer to the middle or right side of the ZAMS because less massive stars have also had time to evolve off the main sequence.

And thus concludes our explanation of spectral classes and the H-R diagram. We do hope it was…stellar.

———-

TL;DR — Stars are divided into spectral classes O, B, A, F, G, K,  and M (and several others that are less common), each of which is characterized by certain absorption lines. The absorption lines depend on temperature, so spectral class is also an indicator of temperature. The H-R diagram typically plots temperature against luminosity; the main sequence stretching across the diagram shows the correlation between greater luminosity and greater temperature. The Morgan-Keenan luminosity classes show how luminous a star is for its temperature. The age of a cluster can be determined through its H-R diagram, by seeing where the cluster stars are beginning to leave the main sequence.

———-

Sources and links for further reading:

Apparent/Absolute magnitude, Color Index

Before we begin, for those who didn’t see the About page, we’re sorry but we have to limit posts to one per week.  This is because we both have piles of schoolwork to do, but we will try to keep up with one post a week.

This post will involve more about light, as we said it’s quite important.  In fact, this post will discuss how we can use light to predict distances from Earth.  At this point, math should be expected, along with checking units and significant figures.  In fact, we will be introducing one of the most important equations for distance calculations in an Astronomer’s arsenal.

Now we go on to the next part of our journey, to Greece of course, where we are joined by the man Hipparchus. He developed a system of apparent magnitudes (denoted m), which ranks how bright stars appear as seen from Earth.  For some reason he decided that it would be more logical to say that as the numbers decreased the stars became brighter, resulting in a scale ranging from m=1, the brightest stars, to m=6, the dimmest visible to the naked eye.  This was mainly because no highly accurate equipment was available, but the idea is still extremely important, as apparent magnitude describes how bright an object appears to an observer on Earth.  Originally the system was just based on naked-eye observations, but modern astronomers decided to fix it up.

Now the scale is logarithmic and compares ratios of brightness between stars.  Brightness here means radiant flux, measured in watts per square meter.  It was decided that a flux ratio of 100 would correspond to a magnitude difference of 5 (emphasis on difference), which gives the formula:

B2/B1 = 100^((m1 − m2)/5)

Taking the log of both sides we get:

m1-m2 = -2.50 log(B1/B2)

With that we can show that when the brightness ratio equals 100, we take log(100), which equals 2, and multiply by -2.50 to get -5.  This still works, since the scale says an object is brighter as its magnitude decreases.  In addition, a difference of one magnitude corresponds to a factor of 100^(1/5) = 10^0.4, which is approximately 2.512.  So a 1st magnitude star is 2.512 times as bright as a 2nd magnitude star, and 2.512^2, or about 6.310, times as bright as a 3rd magnitude star.  Also, Hipparchus’s scale has been extended well beyond its original range of magnitudes.  The Sun, for example, is now m=-26.83.
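If you’d like to check these numbers yourself, here is a small sketch of the magnitude and brightness-ratio relations:

```python
import math

# A sketch of the magnitude <-> brightness-ratio relations above.

def brightness_ratio(m1, m2):
    """How many times brighter object 1 appears than object 2."""
    return 100 ** ((m2 - m1) / 5)

def magnitude_difference(b1_over_b2):
    """Magnitude difference m1 - m2 for a given brightness ratio B1/B2."""
    return -2.5 * math.log10(b1_over_b2)

print(brightness_ratio(1.0, 2.0))    # ~2.512: one magnitude is a factor of about 2.512
print(brightness_ratio(1.0, 6.0))    # 100: five magnitudes is a factor of 100
print(magnitude_difference(100.0))   # -5.0: the 100x brighter object has a magnitude 5 lower
```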

Next we need to establish how we can quantify radiant flux, denoted F — the brightness we have been talking about.  Earlier we mentioned it is measured in watts per square meter, which follows from spreading a star’s luminosity L over the surface area of a sphere of radius r.  This is the inverse square law (we can call this brightness or flux; we will now be using F for flux):

F = L/(4πr^2)

Now that we have explained how we rank objects as seen with the unaided eye and defined brightness, we have to show how to find the intrinsic magnitudes of objects.  How would this be done, though?  Astronomers created a system of absolute magnitudes, denoted M (or Mv for the visual band), which is the magnitude a star or object would have if it were placed at a set distance of 10 parsecs.  This works because, instead of having all sorts of objects with different intrinsic brightnesses at different distances from the Earth, everything is compared at one standard distance.  With that and the inverse square law in mind, we can write a flux ratio comparing what we see to what we would see from that set distance.  Again, a difference of 5 magnitudes between the apparent magnitudes of two stars corresponds to a flux ratio of 100; this is the very same brightness comparison formula we had earlier.  We can manipulate that into something called the distance modulus.  We can say that:

100^((m − M)/5) = F10/F = (d/10 pc)^2

Here F is the flux we actually receive from the star and F10 is the flux we would receive if it sat 10 parsecs away, so the ratio depends only on the star’s distance d.  This can therefore be rearranged to show a star’s distance:

d = 10^((m − M + 5)/5) pc

Or a star’s apparent and absolute magnitudes:

m − M = 5 log(d/10) = 5 log(d) − 5

If you were wondering how this could be useful if we don’t necessarily know the distance or absolute magnitudes of every star (you could certainly find the apparent magnitude as it is defined by how an observer would see it from Earth), that is a very good question.  Later we will discuss that for certain stars the absolute magnitude is extremely consistent and can be used to find distances very well.
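Before moving on, here is a quick numerical sketch of the distance modulus relations above (ignoring extinction, which we get to below):

```python
import math

# A sketch of the distance modulus relations above (no extinction).

def distance_pc(m, M):
    """Distance in parsecs from apparent magnitude m and absolute magnitude M."""
    return 10 ** ((m - M + 5) / 5)

def distance_modulus(d_pc):
    """m - M for an object at distance d (parsecs)."""
    return 5 * math.log10(d_pc) - 5

# Example: a star with m = 10 and M = 0 sits at 1000 pc.
print(distance_pc(10.0, 0.0))     # 1000.0
print(distance_modulus(1000.0))   # 10.0
```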

There is still more to this story.  The apparent and absolute magnitudes mentioned so far are bolometric magnitudes, which count flux from a star across all wavelengths of light.  Measuring that directly is difficult, so in practice it is easier to measure specific wavelength bands, especially since certain objects can be analyzed better in them.  For this we have to look at what we use.  UBV filters are used to find a star’s apparent magnitude and color.  U is the ultraviolet magnitude, with a filter centered at 365 nm and a bandwidth of 68 nm; B is the blue magnitude, with a filter at 440 nm and a bandwidth of 98 nm; and V is the visual magnitude (sometimes considered green), with a filter at 550 nm and a bandwidth of 89 nm.  Differences between these filtered magnitudes form color indices, which quantify a star’s color and, through the association between color and temperature mentioned in the last post, its temperature.  The indices are defined as differences between the U, B, and V magnitudes (written here with absolute magnitudes, though the same differences hold for apparent magnitudes):

U − B = M_U − M_B

and

B − V = M_B − M_V

Since we already noted that as magnitudes increase the brightness decreases, a star with a smaller B−V index is bluer (it is brighter in B than in V), which also tells us the star is hotter.  The same applies for U−B: lower values mean more ultraviolet output and therefore a hotter star.  So, overall, the purpose of U−B and B−V is to show quantitatively what the color and temperature of a star are.

With this we can next note the bolometric correction, or BC, which relates the bolometric and visual magnitudes (m_bol and M_bol are simply the bolometric versions of m and M):

BC = m_bol − V = M_bol − M_V

There is another factor influencing these formulas.  It is called interstellar extinction, which produces the effect known as interstellar reddening; the extinction is denoted A and is expressed in magnitudes.  When it isn’t noted in the question you should ignore this, but it is good to know all the factors influencing this important distance equation.  Interstellar extinction refers to interstellar dust that absorbs or scatters light from an object.  The effect is stronger at shorter wavelengths, which interact more strongly with the dust grains.  Therefore, proportionally more of the red light gets through, and if something appears redder than it “should”, dust is present.  This was established after comparisons between expected and observed colors showed a systematic discrepancy.  If a question mentions some amount of reddening, the following corrections are made:

The distance modulus becomes d = 10^(0.2 (m − M + 5 − A_V))

The B−V values change too, since the color is reddened:

True (intrinsic) color = (B_0 − V_0), observed color = (B − V)

(B − V) = (B_0 + A_B) − (V_0 + A_V)

(B − V) = (B_0 − V_0) + (A_B − A_V) = intrinsic color + color excess

Since extinction is stronger at shorter wavelengths, it increases the B and U magnitudes relative to V (remember, a bigger magnitude means dimmer).  A test can also ask about this in terms of ratios such as A_B/A_V or A_U/A_V.  In that case you would be given the value of A_V; multiplying the ratios by A_V gives A_B and A_U, so that you can correct your B−V or U−B values.
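Here is a sketch of these corrections; the ratio A_B/A_V ≈ 1.3 used in the example is just an illustrative assumption, since a problem will normally hand you A_V and whatever ratios it wants you to use:

```python
# A sketch of the extinction corrections above. The ratio A_B/A_V ~ 1.3 is
# an illustrative assumption; a problem will normally give you A_V and the
# ratios it wants you to apply.

def dereddened_color(b_minus_v_observed, a_v, a_b_over_a_v=1.3):
    """Intrinsic (B-V)_0 = observed (B-V) minus the color excess A_B - A_V."""
    a_b = a_b_over_a_v * a_v
    return b_minus_v_observed - (a_b - a_v)

def distance_pc_with_extinction(m, M, a_v):
    """Distance modulus corrected for A_V magnitudes of visual extinction."""
    return 10 ** ((m - M + 5 - a_v) / 5)

print(dereddened_color(0.95, 1.0))                 # (B-V)_0 = 0.65 when the color excess is 0.3
print(distance_pc_with_extinction(10.0, 0.0, 1.0)) # ~631 pc instead of 1000 pc
```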

The last thing to note is the color-color diagram.  This plots the U−B and B−V indices for stars against each other, so it also reflects temperature and color.  Stars aren’t perfect blackbodies, so even though they come close, the points don’t fall exactly along the line a perfect blackbody would follow.  Here is an example, but know that a color-color diagram for an object containing many stars (like a cluster) will look different:

==========

TL;DR: Developing methods of organizing stars and understanding distances is important in Astronomy, since this allows astronomers to better understand our place in the universe and to construct formulas that describe it quantitatively.  Apparent magnitude is how bright something appears to be, while absolute magnitude is how bright it would appear from a set distance of 10 pc.  From these, the distance modulus can be derived to find the distance to most objects.  Color indices are also used to show the temperature and color of stars and to characterize them better.  Lastly, a correction must be made for interstellar dust.

==========

Sources and further reading: