“Everybody talks about the weather,” an aphorism attributed to Mark Twain goes, “but nobody does anything about it.” It’s not from lack of trying: Our ancestors have been attempting to do something about the weather — or at least predict it — since 650 BC, when Babylonians based their prognostications on cloud patterns.
Without accurate instruments or a way to rapidly communicate measurements, however, early weather forecasting remained mostly a matter of aphorisms. Variations of “Red sky at night, sailor’s delight; red sky at morning, sailor take warning” pop up in the Bible (Matthew 16:2-3) and in Shakespeare’s Venus and Adonis.
Aristotle codified much of what the ancients thought they knew about weather in Meteorologica, the standard treatise for nearly 2,000 years. Unfortunately, Aristotle was mostly wrong. For example, he thought west winds were cold because they blew from the sunset.
Progress had to wait until the invention of instruments to measure humidity, temperature and air pressure. Nicholas of Cusa, a German cardinal and mathematician, first designed a hygrometer to measure humidity in the mid-1400s; the first practical device was made by Francesco Folli in 1664. Galileo Galilei developed one of the first thermometers around 1592. Another Italian, Evangelista Torricelli, invented the barometer in 1643.
With those tools, observing the weather became a popular hobby. Thomas Jefferson recorded the temperature in Philadelphia on July 4, 1776, as 76 degrees. He made regular observations at Monticello, collected weather reports from as far away as Quebec and the Mississippi River, and participated in the new nation’s first simultaneous weather-observation project. George Washington, too, was a weather buff, making his last data entry the day before he died.
Another amateur weatherman, British chemist Luke Howard, came up with the modern system for classifying clouds. He set out his nomenclature in an 1802 talk, explaining that clouds are “good visible indicators” of atmospheric conditions, much as “the countenance [is] of the state of a person’s mind or body.” In 1805, Sir Francis Beaufort, an Irish-born British admiral, codified the measurement of wind speeds. His original 13-step Beaufort Scale, developed aboard the HMS Woolwich, categorized wind conditions by their effects on the sails of a man-of-war, from “just sufficient to give steerage” to “that which no canvas sails could withstand.” A version is still used today.
But there was still no speedy way for weather observers to compare notes. Weather forecasting — as opposed to mere record-keeping — didn’t take off until after the invention of the telegraph in the late 1830s and early 1840s. In 1849, the Smithsonian Institution took advantage of the new technology, distributing weather instruments and creating an observation network of 150 volunteers whose reports were combined into weather maps. “Synoptic weather forecasting,” based on the observation of surface wind, storm systems and other conditions, was born.
In 1870, a joint congressional resolution called on the secretary of war “to provide for taking meteorological observations at the military stations in the interior of the continent and at other points in the States and Territories” — the beginning of what would become the National Weather Service. The Department of Agriculture took over the job from the military in 1890, overseeing what was now officially the Weather Bureau.
For the new agency to make accurate forecasts, however, it had to have a way to look upward, to where the weather action was. William Abner Eddy, an American accountant and journalist, made the first observation of temperatures aloft in 1894. Eddy had acquired a Malay kite at the World’s Columbian Exposition in Chicago. He modified it to create his own diamond-shaped kite, five of which, linked together, carried a self-recording thermometer skyward.
Kites were replaced by aircraft and then radiosondes, first used in the United States in 1937. (Sonde is French for probe.) These hydrogen- or helium-filled balloons carry devices that transmit readings to a ground station; they can reach an altitude of more than 20 miles before bursting. Today, nearly 900 stations all over the globe launch radiosondes every 12 hours.
Even with these improvements, forecasting remained mostly a matter of observing weather in one place and guessing where it would move. In 1904, Norwegian physicist Vilhelm Bjerknes suggested a better way: Use mathematical equations to understand and predict changes in the atmosphere. Unfortunately, an early attempt at “numerical weather prediction” by British mathematician Lewis Fry Richardson showed how impractical such an endeavor then was. He spent several months calculating a six-hour forecast near Munich that proved wildly inaccurate. Undaunted, in a 1922 book Richardson envisioned 64,000 mathematicians performing the calculations simultaneously.
Two other technological breakthroughs made global weather prediction possible. The first was the computer. Hungarian-American mathematician John von Neumann realized that weather prediction was an ideal application for the new ENIAC computer. He assigned Jule Charney to lead a team that streamlined and computerized Richardson’s equations. In April 1950, Charney’s group made a series of successful 24-hour forecasts for North America.
The next breakthrough came a decade later, on April 1, 1960, with the launch of TIROS 1 (an acronym for Television Infrared Observation Satellite), the first weather satellite. Its 78-day mission gave meteorologists their first look at weather from above. Weather satellites soon went far beyond pictures, employing instruments such as “atmospheric sounders,” which measure conditions at various levels of an atmospheric column. Though similar to radiosonde observations, satellite data are more complete spatially, filling in gaps between ground stations.
Technology likewise improved the ability to get the word out about weather forecasts. Edward B. Rideout became the first radio weatherman in 1925 on WEEI in Boston. He finagled his way onto the air by convincing the station’s engineers that the weather affected radio-transmission quality. James Fidler, billed as “radio’s original weatherman” for his broadcasts during the severe winter of 1933-1934 at Ball State’s station in Muncie, Ind., became the first TV weatherman in 1940 at WLW-TV in Cincinnati. Fidler went on to develop the direct broadcast program for the Weather Bureau, which became the National Weather Service in 1970.
TV weather announcers became ubiquitous in the 1950s, and forecasting continued to improve. Various models predict from six to 16 days of weather, though it’s commonly accepted that long-range forecasts of two weeks or more are likely to be inaccurate. The Weather Channel began running 24 hours a day on cable television in May 1982, ensuring that we would nevermore want for something to talk about.
- 340 BC: Aristotle writes Meteorologica
- 1592: Galileo makes a thermometer
- 1643: Barometer is invented
- 1805: Beaufort Scale measures wind strength
- 1890: Weather Bureau founded as part of the Department of Agriculture
- 1894: William Eddy records air temperature with kites
- 1915: First radio weather forecast airs
- 1919: American Meteorological Society forms
- 1937: US begins using radiosondes
- 1950: First computerized forecasts are issued
- 1960: First weather satellite is launched
- 1982: The Weather Channel goes on air
- The Weather Bureau published its first weather map, covering only the District of Columbia area, in 1895.
- Robert B. Thomas founded The Old Farmer’s Almanac in 1792. His secret forecasting formula, still used today, is kept in a black tin box at the publication’s Dublin, NH, offices.
- The decision to invade Normandy on June 6, 1944, was made after weather forecasts predicted a favorable combination of tides and winds.
From the November 2011 issue of Family Tree Magazine