Star light, star bright

From the very beginnings of history, human beings have attempted to make sense of the world and its varied, mutable contents. Long before the first cities were built or the first writing developed, the night sky had been patterned and named. The brightest stars were known individually, and formed the key points of constellations on to which various mythically significant beings were mapped. Also noticed were five wandering “stars” which varied not only in position but also in brilliance, and so were accorded particular significance.

At first, the universe was thought of as a limited system—originally a flat earth overarched with a hemispherical cover upon which the stars hung as lamps. Thus the apparent variation in the brightness of the stars one from another was easily accounted for by assuming the sizes of these lamps varied. Sirius and the major stars in Orion were clearly large lamps burning high-quality oil, whereas the barely distinguishable scatter of minor stars were little more than rush lights.

Following Aristotle, European cosmology of the Middle Ages had the Earth at the centre of a set of concentric spherical shells. The outermost were those of the Firmament, which bore the fixed stars, and the Primum Mobile, which moved all the other spheres in the system, and thus the stars, planets, sun and moon. The Firmament was thought of as being opaque, but punctured with holes through which the divine light flooding the Primum Mobile could be seen.

Clearly, since the world was at the centre, all the stars were effectively the same distance from us, and their differences in brightness could be accounted for by having holes of appropriate size.

In these models of the cosmos there was a neat match of appearance with reality: stars which appeared bright were indeed bright, and those which appeared dim were indeed dim. The effects of diffraction also make bright lights appear larger than faint ones, even when, like stars, they are pinpoint sources. Thus α Crucis looks bigger than β Crucis, and, until the nineteenth century, stars were as often talked about by size as by brilliance; α was said to be larger, or greater, than β. It is from this fallacy that the term “magnitude” is derived, a term which we use when discussing stellar brightness today.

In the sixteenth century the geocentric model of the universe began to creak under the strain of its inability to make long-term predictions of planetary positions. The idea of spherical shells became suspect, and that of a geocentric universe was challenged.

In 1576 Thomas Digges, supporting Copernicus’s heliocentric cosmology, put forward the idea that beyond the orbit of Saturn there was an “. . . orb of stars fixed infinitely up [which] extendeth itself in altitude spherically and [is] therefore immovable”. Thus he no longer thought of the stars as being all at the same, not very great, distance from us, but saw the solar system embedded in infinite space with stars throughout, and some so distant that they would be invisible.

During the eighteenth century the question of the shape and size of the universe became the obsession of William Herschel, whose astronomical career was spectacularly launched by his discovery of Uranus, the first addition to the family of the five traditional planets.

Herschel became the foremost maker of large telescopes of his day, and with these he visually discovered more celestial objects than any observer before or since. Believing that his great 48-inch-diameter telescope enabled him to see to the edge of the heavens, he hoped to establish the distance of the furthest stars by gauging their brightness, on the basis that the further away the star, the dimmer it will appear.

A generation before Herschel, Edmond Halley had postulated that perhaps all stars were of about the same intrinsic brightness, and that their variation in apparent brightness was due only to their varying distance from us. Because there existed neither an adequate theory of the nature of stars, nor instruments of sufficient accuracy to make direct measurements of the distance of even the closest stars, this was not an unreasonable conjecture.

Herschel seized upon Halley’s suggestion and used it as the foundation of his project. Indeed, if he wished to advance his programme at all he could not do otherwise, for if stars were of various intrinsic brightnesses then he could not distinguish between a dim one close at hand and a bright one far distant. He would have no basis for judging distance.

The very excellence of Herschel’s telescopes yielded strong evidence against this idea of approximate similarity, for they revealed not only large numbers of clusters of stars but also the fact that these contained stars of widely varying brightness. Professional astronomers, notably the Astronomer Royal, Nevil Maskelyne, pointed out to Herschel that such clusters could be explained in either of two ways. It could be that they were actual clusters of stars having various intrinsic brightnesses. Alternatively, if their constituent stars were generally similar then they must be long fingers of stars lying along the observer’s line of sight out into the depths of space. This last was shown to be statistically improbable, and thus the betting must be that stars were of various brightnesses. Herschel obstinately refused to countenance this objection until very late in his career.

Around 130 BC Hipparchus, generally regarded as the greatest astronomer of antiquity, established that the naked eye could detect five equal steps of brightness between the brightest and faintest visible stars. The bright stars, such as Aldebaran and Altair, he described as being of the first magnitude, and those on the edge of visibility as being of the sixth magnitude. It is a refined form of this scale which we still use today, having extended it to cover both brighter and dimmer objects than those considered by Hipparchus.

Originally, astronomers spoke of stars of, say, the first or third magnitude, and it was quite natural that the ordinals started with the brightest and increased numerically for the fainter objects. However, today we have dropped the ordinal, and talk of a star being magnitude two or magnitude four point five. This can be confusing, as we generally associate higher numerical values with increases in quantity, whatever the measurement may be.

Since the magnitude scale originated with stars which are not, in fact, the brightest, the scale has had to be extended with negative values to include more brilliant objects such as Sirius, the planets at their brightest, or the Sun. Also, as the light grasp of telescopes exceeds that of the naked eye, so the range of positive magnitudes has been extended to describe the fainter stars and those faint patches of light which mark the galaxies.
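To give a feel for how far the scale now stretches, here is a short sketch in Python; the figures are approximate round values chosen for illustration, and are not drawn from this article.

    # Approximate visual magnitudes; the values are illustrative
    # round figures, not measurements quoted in this article.
    magnitudes = {
        "Sun": -26.7,
        "Full Moon": -12.7,
        "Venus at its brightest": -4.6,
        "Sirius": -1.5,
        "Vega": 0.0,
        "Naked-eye limit": 6.5,
        "Pluto": 14.0,
    }

    # Print the objects from brightest (most negative) to faintest.
    for name, m in sorted(magnitudes.items(), key=lambda kv: kv[1]):
        print(f"{name:24s} {m:+6.1f}")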

As Herschel strained to measure the distances of the stars he greatly refined the techniques of comparing their brightness, and demonstrated that a star appearing to be of magnitude 1 was delivering to the observer 100 times as much light as one of magnitude 6. This was confirmed by Pogson in 1856, who showed that a difference of one magnitude corresponded to a brightness ratio of 2.512, since 2.512^5 = 100.
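Pogson’s ratio is easy to verify; the sketch below assumes nothing beyond the definition that five magnitudes correspond to a factor of exactly 100.

    # Pogson's definition: five magnitudes = a factor of exactly 100,
    # so one magnitude is the fifth root of 100.
    step = 100 ** (1 / 5)
    print(step)                       # ~2.512

    # Apparent brightness ratio implied by a magnitude difference:
    def flux_ratio(m_faint, m_bright):
        return step ** (m_faint - m_bright)

    print(flux_ratio(6, 1))           # ~100 -- Herschel's result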

This means that the magnitude scale obeys the Weber-Fechner law, which states that the physiological response of the eye to a physical stimulus is proportional to the logarithm of the energy flux. Our hearing obeys the same general law: perceived even steps in the intensity of sound are caused by multiplications of the impinging sound energy. This is why apparently insignificant increases on the decibel scale, dB, correspond to large increases in the perceived noise level.
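The decibel is defined in just this logarithmic way, so the analogy can be made concrete; the snippet below is a general sketch, not anything specific to astronomy.

    import math

    # Sound level in decibels relative to a reference power p0.
    def decibels(p, p0=1.0):
        return 10 * math.log10(p / p0)

    # Doubling the energy adds only ~3 dB, just as multiplying a
    # star's light by 2.512 shifts it by a single magnitude.
    print(decibels(2))      # ~3.01
    print(decibels(100))    # 20.0 for a hundredfold increase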

While Herschel developed techniques which enabled him to compare the brightnesses of stars, he was still restricted to judging apparent magnitudes, that is, their brightness as seen from Earth.

He understood the inverse square law, which states that the light falling on a given point is proportional to the intensity of the source and inversely proportional to the square of its distance from the point. But this did not help him as much as it might have, for he had no standard light source out amongst the stars which was at a known distance, and so could be used for comparison.
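A minimal sketch of the law, assuming an isotropic source whose light spreads over a sphere of area 4πd²:

    import math

    # Flux received at distance d from a source of luminosity L:
    # the light is spread over a sphere of area 4 * pi * d**2.
    def flux(luminosity, distance):
        return luminosity / (4 * math.pi * distance ** 2)

    # Doubling the distance quarters the received light:
    print(flux(1.0, 1.0) / flux(1.0, 2.0))   # 4.0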

It was not until some sixteen years after Herschel’s death that the accuracy of the appropriate instruments was developed sufficiently for Bessel, in 1838, to measure the annual parallax of the star 61 Cygni, and thus its distance from Earth. Its real brightness, its absolute magnitude, could then be calculated.
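The conversion from parallax to distance is direct: a star showing an annual parallax of one arcsecond lies at one parsec, and the distance scales as the reciprocal. The sketch below uses a rounded 0.31 arcseconds for 61 Cygni, close to Bessel’s published figure; the modern value is slightly smaller.

    # Distance in parsecs is the reciprocal of the annual parallax
    # in arcseconds (0.31" is roughly Bessel's figure for 61 Cygni).
    def distance_pc(parallax_arcsec):
        return 1.0 / parallax_arcsec

    d = distance_pc(0.31)
    print(d)          # ~3.2 parsecs
    print(d * 3.26)   # ~10.5 light years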

The absolute magnitude of a star is now defined as its brightness when viewed from a standard distance of 10 parsecs (32.6 light years). Today the absolute magnitude at visual wavelengths (Mv) has been established for many stars, and this enables us to compare them directly one with another.

Since certain classes of stars have distinctive characteristics, once we have established their absolute magnitude we can calculate their actual distance from their apparent magnitude (mv) and the inverse square law. The Sun, mv = -26.7, which is impressively bright, is Mv = 5, which places it, along with α Centauri, amongst the rather modest stars. In contrast Canopus, mv = -0.7, is Mv = -8.5: brighter by 13.5 magnitudes, or a factor of 2.512^13.5, which makes it better than 250,000 times as luminous as the Sun.
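The arithmetic behind these figures can be sketched with the distance modulus, m - M = 5 log10(d / 10 pc); the magnitudes below are the ones quoted above, so the implied distance for Canopus reflects the article’s figures rather than modern measurements.

    # Distance from apparent and absolute magnitude via the
    # distance modulus: m - M = 5 * log10(d / 10 pc).
    def distance_pc(m, M):
        return 10 ** ((m - M + 5) / 5)

    # Luminosity ratio implied by a difference in absolute magnitude:
    def luminosity_ratio(M_faint, M_bright):
        return 100 ** ((M_faint - M_bright) / 5)

    # Canopus against the Sun, with the magnitudes quoted above:
    print(luminosity_ratio(5, -8.5))   # ~251,000
    print(distance_pc(-0.7, -8.5))     # ~363 parsecs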

Between 1908 and 1912 the American astronomer Henrietta Leavitt discovered the relationship between the pulsation period and light variation of the type of star known as Cepheid variables. These are rather bright variable stars, and once the distance, and hence absolute magnitude, of one of them was established they were used as standard candles to measure the distances of the further parts of our galaxy, and even of neighbouring galaxies. This was a great leap forward, for annual parallaxes, which use the diameter of the Earth’s orbit as their baseline, are feasible only out to about 30 parsecs (100 light years), a range small compared to the diameter of our galaxy, about 40,000 parsecs.
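A sketch of how a Cepheid’s period yields a distance is given below; the calibration coefficients (-2.81 and -1.43) are one commonly quoted modern period-luminosity fit, chosen purely for illustration, and are not figures from this article.

    import math

    # An illustrative period-luminosity calibration (assumed, not
    # from this article): Mv = -2.81 * log10(P / days) - 1.43
    def cepheid_abs_mag(period_days):
        return -2.81 * math.log10(period_days) - 1.43

    # Combine with the distance modulus to turn an observed Cepheid
    # into a distance:
    def cepheid_distance_pc(period_days, apparent_mag):
        M = cepheid_abs_mag(period_days)
        return 10 ** ((apparent_mag - M + 5) / 5)

    # A 10-day Cepheid observed at magnitude 15 would lie at roughly:
    print(cepheid_distance_pc(10, 15.0))   # ~70,000 parsecs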

One of the jobs of the Hubble telescope is to observe Cepheids out to distances of about 15,000,000 parsecs, something only possible in the clarity of space. These measurements will greatly improve our estimates of the size of the observable universe, as they will provide accurate distances up to some fifteen times the distance of Maffei I, the most distant galaxy of the “local group”, that cluster of some 17 galaxies which forms our neck of the cosmic wood.

We, on the other hand, are bound to the surface of our planet, and our view is dimmed by the atmosphere with its burden of water and dust. Also, the incoming light is distorted by the movement of the air, particularly rising and falling parcels of different temperatures, and masked by the scattered light from our various nocturnal activities. The map of the Southern Cross shows all the stars in it down to apparent magnitude 6.5, which is the limit for the unaided human eye under the very best conditions: no moon and neither cloud nor haze. The magnitudes are shown in tenths.