Pogson’s ratio quantifies the brightness difference between stars on a logarithmic scale. The modern system for measuring stellar magnitude is typically credited to Norman Robert Pogson, an English astronomer who formalized it in 1856 so that astronomers could compare the apparent brightness of celestial objects consistently.

Pogson defined the magnitude scale such that a difference of 5 magnitudes corresponds to a brightness ratio of exactly 100. The choice was anchored to the ancient scale of the Greek astronomer Hipparchus, whose naked-eye classification ran from first magnitude for the brightest stars to sixth magnitude for the faintest visible ones: that span of five steps corresponds to roughly a factor of 100 in brightness, so Pogson fixed the ratio at exactly 100.

To derive this, Pogson took a logarithmic approach, because human perception of brightness is roughly logarithmic (the Weber–Fechner law). Mathematically, a change of 1 magnitude therefore corresponds to a brightness ratio equal to the fifth root of 100 (since 5 magnitudes correspond to a factor of 100), which is approximately 2.512.
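
For concreteness, the single-magnitude step can be written out explicitly; five such steps compound to exactly the factor of 100 fixed by the definition:

\[ 100^{1/5} = 10^{2/5} \approx 2.512, \qquad \left(100^{1/5}\right)^{5} = 100. \]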

Thus, the formula for the difference in magnitude between two stars is defined as:

\[ m_1 - m_2 = -2.5 \log_{10}\left(\frac{F_1}{F_2}\right) \]

where \( m_1 \) and \( m_2 \) are the apparent magnitudes of the two stars, and \( F_1 \) and \( F_2 \) are their respective fluxes (observed brightnesses). The negative sign ensures that the brighter star, the one with the higher flux, ends up with the numerically lower magnitude, in keeping with the historical convention that brighter stars carry lower magnitudes.
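
As a minimal illustration of the relation above (a sketch with hypothetical helper names, not taken from any particular photometry library), the formula and its inverse can be expressed in a few lines of Python:

```python
import math

def magnitude_difference(flux1, flux2):
    """Pogson's relation: return m1 - m2 for fluxes F1 and F2 (same units)."""
    return -2.5 * math.log10(flux1 / flux2)

def flux_ratio(m1, m2):
    """Inverse relation: return F1 / F2 for apparent magnitudes m1 and m2."""
    return 100 ** (-(m1 - m2) / 5)

# A star 100 times brighter is exactly 5 magnitudes lower:
print(magnitude_difference(100.0, 1.0))  # -5.0

# Hipparchus' span: a 1st-magnitude star vs. a 6th-magnitude star
print(flux_ratio(1.0, 6.0))              # 100.0 (the brighter star delivers 100x the flux)

# One magnitude corresponds to the fifth root of 100:
print(flux_ratio(0.0, 1.0))              # ~2.512
```

Both helpers are just Pogson's formula and its rearrangement; plugging in a flux ratio of 100 returns a magnitude difference of exactly -5.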

This derivation allows astronomers to accurately quantify and compare the brightness of stars and other celestial objects, using a scale that reflects the logarithmic nature of human perception. Pogson’s system laid the foundation for the modern photometric systems used in astronomical observations.