Who Invented Night Vision and How Does It Work?

July 16, 2024

When the air campaign of Operation Desert Storm began on January 17, 1991, television viewers across the world were presented with some of the most awe-inspiring images of modern, high-tech warfare ever broadcast: stealth bombers dropping precision “smart bombs” on Iraqi command posts, helicopters and ground attack aircraft picking off swathes of enemy vehicles, and tanks duking it out in the desert – all captured in the eerie green glow of night vision. Lifting the protective cloak of darkness has been the dream of all armies since the dawn of human civilization, and today night vision technology is so advanced that battles can now be effectively fought at any hour, day or night. But how does this technology actually work, and who invented it? Well, slap on your NODs as we dive into the fascinating science and history of seeing in the dark.

The story of night vision begins in 1800 with German-born British astronomer and polymath Sir William Herschel. While trying to develop a light filter that would allow him to better observe the surface of the sun, Herschel made a curious discovery:

“What appeared remarkable was that when I used some of them, I felt a sensation of heat, though I had but little light; while others gave me much light, with scarce any sensation of heat.”

To determine which parts of the visible light spectrum transmitted the most heat, Herschel built a device he dubbed a spectroradiometer. Sunlight was passed through a prism to split it into its constituent colours, which were projected onto a screen. Herschel then used a thermometer to measure the temperature within each coloured band. He discovered that the violet end of the spectrum transferred the least heat, and the red end the most. Many scientists might have left it at that, but Herschel decided to go one step further, placing his thermometer just beyond the red band, into an area with no visible light. To his surprise, this region was hottest of all, leading Herschel to conclude:

“…that the full red falls still short of the maximum of heat; which perhaps lies even a little beyond visible refraction. In this case, radiant heat will at least partly, if not chiefly, consist, if I may be permitted the expression, of invisible light; that is to say, of rays coming from the sun, that have such a momentum as to be unfit for vision.”

In the 1880s, this “radiant heat” or “invisible light” was dubbed infrared, meaning “below the red end of the spectrum.” Today, we know that light is a form of electromagnetic radiation and is composed of waves (yes: also particles, but we will save the quantum physics of it all for another video), and that its colour and other properties are determined by its wavelength. The part of the electromagnetic spectrum which humans can perceive extends from about 380 to 700 nanometres. Above this in energy, at wavelengths from about 10 to 380 nanometres, is the ultraviolet band; while below, extending from about 700 nanometres all the way to 1 millimetre, is the infrared band. Infrared radiation is given off by all objects hotter than absolute zero, and is excellent at transferring thermal energy; indeed, much of the heat we feel from the sun is transferred to our bodies via infrared radiation. Objects at different temperatures give off different wavelengths of infrared; for example, humans mainly radiate heat in the long-wavelength infrared band from around 8 to 15 micrometres, while hotter objects like vehicle engines also emit short and mid-wavelength infrared in the 1.4 to 8 micrometre band.
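For readers who like to see the numbers, the link between an object’s temperature and the wavelength it radiates most strongly is given by Wien’s displacement law. Here is a minimal Python sketch; the temperatures chosen (roughly skin temperature and a hot engine surface) are illustrative assumptions, not figures from any particular scope’s specification:

```python
# Wien's displacement law: the wavelength at which a blackbody radiates
# most strongly is inversely proportional to its temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Return the blackbody peak-emission wavelength in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

# Skin (~310 K) peaks deep in the long-wave infrared; a hot engine
# surface (~600 K, an assumed illustrative figure) peaks in the mid-wave band.
print(f"Human body (310 K): {peak_wavelength_um(310):.1f} micrometres")
print(f"Hot engine (600 K): {peak_wavelength_um(600):.1f} micrometres")
```

Running this gives roughly 9.3 micrometres for a human body and 4.8 micrometres for the hotter surface, which is why people show up in long-wave detectors while engines also light up shorter-wavelength ones.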

However, Herschel’s method of detecting infrared radiation was crude and cumbersome, making this “radiant heat” difficult to study. Then, in 1878, American astronomer and inventor Samuel Langley – most famous as a direct rival of the Wright Brothers – invented an infrared detection instrument called a bolometer. This comprised two thin strips of platinum or palladium coated in lampblack, one shielded from light and the other not. When infrared radiation struck the unshielded strip, it was absorbed by the lampblack and heated up the strip, causing its electrical resistance to change. This change could then be detected using a sensitive instrument called a galvanometer. While simple, Langley’s bolometer was remarkably sensitive, able to detect the body heat of a cow at a range of 400 metres.
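The bolometer principle is simple enough to sketch in a few lines of Python. The strip resistance and temperature rise below are illustrative assumptions; the point is only that a metal’s resistance shifts in proportion to even a tiny amount of absorbed heat, which a bridge circuit and galvanometer can then register:

```python
# Bolometer sketch: absorbed infrared warms a thin platinum strip,
# nudging its electrical resistance upward. The shielded twin strip
# stays cold, so comparing the two in a bridge cancels ambient drift.
ALPHA_PT = 0.0039  # temperature coefficient of resistance for platinum, per kelvin

def resistance_after_heating(r0_ohms: float, delta_t_kelvin: float) -> float:
    """Resistance of a platinum strip after warming by delta_t kelvin."""
    return r0_ohms * (1 + ALPHA_PT * delta_t_kelvin)

r0 = 100.0  # cold strip resistance in ohms (assumed illustrative value)
r_warm = resistance_after_heating(r0, 0.01)  # a mere hundredth of a kelvin of heating
print(f"Resistance change: {r_warm - r0:.4f} ohms")
```

Even that hundredth-of-a-kelvin warming shifts the resistance by a few milliohms – small, but well within reach of a sensitive galvanometer bridge.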

At around this same time, scientists like Ferdinand Braun in Germany and Jagadish Chandra Bose in India discovered that certain crystalline minerals could be used to detect electromagnetic waves. These discoveries later led to the development of crystal radios, the first widely-available detectors for receiving commercial radio broadcasts. They also inspired one of the first attempts to use infrared radiation for practical purposes. In 1917, an American inventor named Theodore W. Case discovered that the compound thallous sulphide exhibited photoconductivity – a change in electrical resistance under illumination – in the infrared band. Funded by the U.S. Army, Case attempted to exploit this effect to communicate over longer distances and through hazier atmospheres than was possible using regular signal mirrors or heliographs. And while he succeeded in transmitting infrared messages over 28 kilometres, the unreliability of his thallous sulphide detector and its tendency to break down with repeated exposure to light soon put an end to his research.

However, all of the infrared detection devices developed to this point could only measure the presence or intensity of infrared radiation; they could not display any sort of image of said radiation’s source. The first device capable of doing so was the evaporograph, developed in 1929 by Dr. Marianus Czerny from the University of Frankfurt. Originally intended to allow anti-aircraft gunners to spot their targets by the heat of their engines, the evaporograph consisted of a sealed, semi-evacuated chamber containing silicone oil vapour and a thin, transparent celluloid membrane. When infrared radiation – such as that from the heat of an enemy aircraft’s engines – was focused by a germanium dioxide lens onto the membrane, it caused differential evaporation and condensation of the oil, producing optical distortions that could be picked up either by the human eye or a television imaging tube. Though not used during the Second World War, the technology of the evaporograph was considered so strategically important that it remained classified in the UK until 1956.

Five years later, engineers G. Holst and H. De Boer, working for Philips in the Netherlands, developed a fully-electronic infrared detector which would form the basis of nearly all night vision technology to come. Known as an image converter tube or Holst Glass, this comprised an evacuated glass tube with one end coated in a thin layer of caesium and silver oxide to form a photocathode. Behind this were a series of tubular accelerating and focusing anodes, and finally a phosphor-coated screen. When infrared radiation struck the photocathode, it released electrons via the photoelectric effect. These electrons were then accelerated and focused by the anodes onto the phosphor screen, which converted them into a visible image of the infrared source. In 1941, the Holst Glass was refined by Radio Corporation of America engineer Vladimir Zworykin – a key figure in the development of television – to create the RCA 1P25 image converter tube, which was widely used in American night vision gear near the end of the Second World War.
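The reason the Holst Glass responds to near-infrared at all comes down to the photoelectric effect: a photon can only kick an electron out of the photocathode if its energy exceeds the material’s emission threshold. A quick Python sketch makes the cutoff visible; the ~1.1 electron-volt threshold assumed here for a caesium-silver-oxide surface is an illustrative figure, not a measured specification:

```python
# Photoelectric-effect sketch: a photocathode emits electrons only when
# the incoming photon's energy exceeds its emission threshold.
H_EV_S = 4.1357e-15  # Planck's constant, in electron-volt seconds
C = 2.998e8          # speed of light, in metres per second

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon of the given wavelength, in electron-volts."""
    return H_EV_S * C / (wavelength_nm * 1e-9)

THRESHOLD_EV = 1.1  # assumed threshold for a caesium-silver-oxide photocathode

for wavelength in (700, 950, 1500):  # red light, near-infrared, short-wave infrared
    energy = photon_energy_ev(wavelength)
    print(f"{wavelength} nm -> {energy:.2f} eV, frees electrons: {energy > THRESHOLD_EV}")
```

Under this assumed threshold, red light and near-infrared around 950 nanometres free electrons, but 1500-nanometre photons do not – which is why these early tubes could see an infrared spotlight but were blind to the far longer wavelengths of body heat.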

However, the first military to deploy electronic night vision gear in combat was that of Nazi Germany. German infrared detectors were based on the work of Edgar Kutzscher of the University of Berlin, who in 1933 discovered that lead sulphide – better known as the mineral galena – exhibits photoconductivity within the short and mid-wavelength infrared band. Among the first such devices was the Fahr- und Zielgerät (“driving and aiming device”) or FG 1250 Sperber, developed by optics company Carl Zeiss AG and first issued in 1941. However, since short and mid-wavelength infrared radiation is only given off by very hot objects, these detectors were by necessity active, and had to be used with large infrared spotlights to illuminate the target. This not only made early night vision gear like the FG 1250 extremely heavy and bulky – meaning it could only be carried aboard tanks, half-tracks, and other vehicles – but it rendered the user extremely visible and vulnerable if the enemy also happened to have infrared detection capability. Later in the war, German weapons manufacturer C.G. Haenel developed a miniaturized version of this technology called the Zielgerät 1229 or Vampir. This consisted of an infrared spotlight and detector scope mounted atop an StG 44 assault rifle, powered by a large battery in a wooden box and a smaller battery fitted into a standard-issue gas mask carrier tube. The whole assembly was strapped to a regular infantry backpack frame and weighed a whopping 15 kilograms. Carried by specialized troops known as Nachtjäger or “night hunters”, Vampir units were used in small numbers on the Eastern Front starting in February 1945.

Meanwhile, the US military was developing night vision gear based on a completely different – and far simpler – technology. Known as Metascopes, these devices were developed by the Institute of Optics at the University of Rochester in New York and used a series of special phosphor compounds to convert infrared into visible light. In a typical metascope, a spherical mirror gathered infrared light and focused it onto a phosphor-coated button. The resulting visible light image was then viewed using a periscopic magnifying optic. In order to function, these phosphors first had to be “excited” or “charged” by exposing them to visible or ultraviolet light or even ionizing radiation. This charge gradually wore off with prolonged exposure to infrared light, so Metascopes featured double-sided, rotating phosphor “buttons” so that one side could be charged using an internal battery-powered lamp or radioactive radium source while the other was being used – allowing near-continuous operation.

Metascopes were first used in combat during the Operation Torch landings in North Africa in November 1942. Compared to other contemporary night vision technology, the images produced by metascopes were relatively low-resolution, making them unsuited for general observation work. Instead, they were largely used by the U.S. Navy for clandestine ship-to-ship signaling at night, using regular Morse Code signal lamps fitted with infrared filters. Later, most Navy ships were fitted with a system of mast-mounted infrared signal lamps code-named NANCY. Smaller handheld versions were also developed for the U.S. Army, and were largely used – along with infrared flashlights – by paratroopers for regrouping after night drops. And to learn more about how these forgotten devices worked, please check out the author’s video on the subject over on his channel Our Own Devices.

Meanwhile, the National Defense Research Council or NDRC – an organization set up in 1941 to help the U.S. Armed Forces with weapons-related research and development – was developing a series of practical electronic night-vision scopes based on the RCA 1P25 image converter tube. The first of these, the C1 and C3 telescopes, were developed for the U.S. Navy as more sensitive, higher-resolution replacements for the earlier metascopes; around 13,500 were produced by the end of the war. The C1 and C2 had actually been trialled by the Army, but were found to be too heavy and bulky for field use. Instead, the NDRC developed a more compact infrared scope called the Type D, two of which could be joined together to form infrared binoculars known as the Type B. Various hands-free mounts were devised to allow jeep and tank drivers to operate their vehicles in pitch-darkness, illumination being provided by a set of infrared headlights powered by an onboard generator. Another planned use was to allow assault glider pilots to home in on infrared beacons set up on the landing zone.

Between July 1941 and April 1943, extensive testing of the Type B infrared binoculars was conducted at Fort Benning, Georgia; Fort Belvoir, Virginia; Aberdeen Proving Ground, Maryland; and Fort Knox, Kentucky. The tests proved that driving military vehicles in pitch darkness was entirely feasible, but the movement of the binoculars relative to the driver’s eyes tended to produce severe motion sickness. The solution, it was determined, was to mount the binoculars to the driver’s head instead of the vehicle, and to this end a rather goofy-looking night vision helmet was duly developed. However, refinement of this concept proved difficult, and the equipment was not ready by the time the war ended.

However, in July 1943 U.S. Army Ground Forces headquarters requested the development of two portable infrared devices – one handheld and one for mounting on a rifle. These devices had to include both an imaging scope and an infrared spotlight, weigh no more than 15 pounds, and have a 6-hour power supply. RCA duly developed a pair of devices dubbed the Snooperscope and the Sniperscope. The Snooperscope, intended for reconnaissance work, mounted a detector scope and 30-watt infrared lamp on a single handle and was powered by a 4 kilogram power supply carried in a separate satchel. This contained a 6-volt lead-acid battery and an electronic oscillator to step up the battery output to the 4,000 volts needed to run the imaging tube. The total weight of the equipment was 10 kilograms. The Sniperscope was nearly identical, though designed to be mounted on a specially-modified M1 Carbine known as the T3.

Trials of the Snooperscope and Sniperscope took place at Fort Belvoir and Fort Benning in January and February of 1944. Though initial testing revealed several flaws, such as poor image resolution, difficult-to-manipulate controls, and lamp lenses prone to cracking in the rain, later trials with improved prototypes proved that the basic concept was sound, with soldiers being able to identify and accurately hit targets at ranges of up to 200 feet in pitch darkness. After improvements were made to the prototypes to improve their reliability and ruggedness, 1,420 Snooperscopes and 715 Sniperscopes were manufactured by Electronic Laboratories of Indianapolis and shipped to the European, China-Burma-India, and Pacific Theatres for field testing. However, hostilities ended in the first two theatres before the scopes could reach combat, so in April 1945 the remaining units were distributed among seven U.S. Army and U.S. Marine Corps Divisions participating in the invasion of Okinawa – the final objective in the American island-hopping campaign before the invasion of the Japanese Home Islands.

The weight and bulk of the infrared equipment made it unsuitable for use on combat patrols, so it was mainly used to defend static positions against infiltration by Japanese combat engineers at night. In this role, the Snooperscope and Sniperscope proved remarkably effective, accounting, by some estimates, for nearly a third of Japanese casualties inflicted by the Divisions issued with this gear. But the newfangled devices were not without their issues. Beyond the weight problem, the short range of the equipment made it unsuited to the relatively open terrain on Okinawa; indeed, the evaluation team’s final report stated that the Sniperscope was ideally suited to jungle combat as encountered on other Pacific islands, where this limited range was less of an issue. Another major problem on Okinawa was U.S. Forces’ extensive use of star shells for battlefield illumination, which constantly blinded the infrared scopes and their operators and made them difficult to use effectively. Yet despite these shortcomings, it was an impressive debut for the first generation of military night vision gear.

In the post-war era, the Sniperscope was upgraded and re-designated the M3, in which form it saw service during the Korean War. Its direct descendant, the AN/PAS-4, also saw service in the early stages of the Vietnam War. In 1956, however, an RCA engineer named A.H. Sommer discovered a new tri-alkali photocathode material composed of various mixtures of sodium, potassium, antimony, and caesium which was not only far more sensitive than the earlier silver-caesium-oxide combination but also had a broader spectral response, allowing it to detect light in the visible and near-infrared range. This allowed the construction of completely passive image intensifier tubes which could detect and amplify faint sources of light such as airglow, moonlight, or starlight to produce a visible image. As such starlight scopes did not require active illumination, they could be much lighter and much safer for the operator to use. However, these advantages came with one big caveat: as they needed some kind of faint ambient light to operate, starlight scopes could not be used in pitch darkness. This level of night vision technology is typically termed Generation 1, while WWII-era active infrared scopes are retroactively termed Generation 0.

The first starlight scopes to see combat were the AN/PVS-1 and AN/PVS-2, developed under the U.S. Army’s Surveillance, Target Acquisition and Night Observation or STANO program. These were developed by the U.S. Army Electronics Command and Wollensak Optical Company of Rochester, New York starting in 1964 and began reaching U.S. troops in Vietnam in 1967. Measuring 45 centimetres long and weighing a whopping 2.7 kilos, the AN/PVS-2 contained three image intensifier tubes stacked one behind the other, so that each amplified the output of the one in front of it. These scopes were designed to be mounted on a variety of weapons, including the M14 and M16 rifles, the M60 machine gun, and even the M79 grenade launcher and M67 recoilless rifle; in practice, however, they were mainly used on the two rifles, as the recoil from heavier weapons tended to shake the delicate scopes to pieces, while the muzzle flash temporarily “bloomed” or whited out the intensifier tube, making aimed follow-up shots impossible.
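The payoff of stacking tubes is simple multiplication: each stage amplifies the output of the one before it, so the overall light gain is the product of the stage gains. A short Python sketch illustrates the idea; the per-stage gain of roughly 40 times is an assumed, illustrative figure, not the AN/PVS-2’s actual specification:

```python
# Cascaded image intensifiers: the overall luminous gain is simply the
# product of each stage's individual gain.
from math import prod

def cascade_gain(stage_gains):
    """Overall gain of intensifier stages optically coupled in series."""
    return prod(stage_gains)

# Three stages of an assumed ~40x apiece yield tens of thousands of times
# amplification - enough to turn starlight into a usable aiming image.
print(cascade_gain([40, 40, 40]))
```

The multiplication also explains the scope’s bulk: three full tubes in a row, plus their coupling optics, made for a long and heavy package.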

There were other problems as well. While significantly lighter than earlier active scopes, the AN/PVS-2 proved too heavy and bulky to carry on active combat patrols, while the high-pitched whine produced by its electronics tended to give its users’ position away. Thus, like its WWII predecessors, the scope was mainly used in the static role to defend outposts against enemy attacks at night. Worse, the sheer weight of the scope often caused it to shake loose from its mount, making it impossible to maintain zero. Consequently, the scopes were typically used for observation and to direct the fire of other weapons rather than as practical weapon sights.

Despite these early teething problems, the AN/PVS-2 formed the basis for nearly every passive night vision optic up until the present day, and the technology was rapidly improved to make it more compact, sensitive, and versatile. For example, the AN/PVS-2B introduced Automatic Brightness Correction or ABC, which automatically compensated for rapid changes in ambient light and minimized blooming. Then, in the mid-1970s, the Optic Electronic Corporation of Dallas, Texas developed the Generation 2 image intensifier tube, which added a new component called a microchannel plate – a thin disc composed of millions of tiny glass tubes – between the photocathode and the phosphor screen.

When electrons from the photocathode strike the microchannel plate, they bounce around inside the channels and knock loose more electrons via a process known as an electron avalanche. This results in significantly greater amplification within a single intensifier tube, eliminating the need to cascade multiple tubes together and allowing night vision scopes to be made lighter and more compact. The first Generation 2 scope to enter U.S. military service was the AN/PVS-4, officially adopted in 1978. Fifteen centimetres shorter and one kilogram lighter than its Vietnam-era ancestors, the AN/PVS-4 proved highly successful, with over 150,000 units being manufactured between 1985 and 2002. In the mid-1980s, the original Generation 2 intensifier tube was replaced with a more advanced Generation 3 model, which differed from previous generation tubes in two main respects. First, the older tri-alkali photocathode material was replaced with an even more sensitive gallium arsenide composition; and second, the electrostatic focusing electrodes were removed in favour of a fibre optic inverter assembly – a bundle of optic fibres twisted 180 degrees to flip the image from the phosphor screen right-side up. This allowed the tube to be lighter and more compact and the viewing eyepiece to be simpler.
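The avalanche inside a microchannel can be modelled as repeated multiplication: every time an electron slams into a channel wall, it knocks loose several secondary electrons, each of which repeats the process further down the channel. The yield and collision count below are assumed, illustrative values rather than measured figures for any particular plate:

```python
# Microchannel-plate sketch: each wall collision multiplies the electron
# count by the secondary-emission yield, so gain grows exponentially with
# the number of collisions down the channel.
def mcp_gain(secondary_yield: float, collisions: int) -> float:
    """Electron multiplication after repeated wall strikes in one channel."""
    return secondary_yield ** collisions

# e.g. an assumed ~2 secondary electrons per strike over ~12 strikes
print(f"Gain: {mcp_gain(2.0, 12):.0f}")
```

Exponential growth is the whole trick: a single wafer-thin plate delivers thousands-fold amplification, doing the work of an entire cascade of Generation 1 tubes.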

And this brings us neatly to the present day. While many manufacturers of civilian night vision gear claim that their products are “Generation 4”, according to official U.S. military nomenclature, there is no such thing, with all current passive night-vision technology technically being Generation 3 with various upgrades. For example, most current night vision optics feature a system called Bright Source Protection or BSP, which modulates the voltage supplied to the microchannel plate to prevent concentrated light sources from blooming out the tube. Another common feature called autogating rapidly switches the tube power supply on and off, reducing the duty cycle – that is, the fraction of time the tube is actually powered – and extending its service life.
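The duty-cycle idea behind autogating reduces to one fraction: the share of each gating period during which the tube is energized. A tiny Python sketch, with pulse timings that are illustrative assumptions rather than any manufacturer’s figures:

```python
# Autogating sketch: the tube's power is pulsed rapidly, so it is only
# energized for a fraction of each gating period (the duty cycle).
def duty_cycle(on_microseconds: float, period_microseconds: float) -> float:
    """Fraction of each gating period the tube is powered on."""
    return on_microseconds / period_microseconds

# e.g. powered 20 microseconds out of every 100-microsecond period
print(f"Duty cycle: {duty_cycle(20, 100):.0%}")
```

Because the pulsing happens far faster than the eye can follow, the image looks continuous while the tube itself spends most of its life switched off.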

Increasingly, traditional image enhancement-based night vision is being replaced on the battlefield by thermal imaging, sometimes known as Forward-Looking Infrared or FLIR when used aboard aircraft. Thermal imaging scopes like the U.S. Military’s AN/PAS-13 operate in the medium-to-long wavelength infrared band, allowing them to detect human bodies, vehicle engines, and other common heat sources. They can also see further through fog and smoke than visible light scopes. Technologically speaking, most thermal scopes and cameras are very similar to ordinary digital cameras, using special charge-coupled devices and other photosensors designed to respond to infrared wavelengths. Other designs use miniaturized versions of the bolometer circuit invented by Samuel Langley in 1878. In all cases, however, the focusing lenses cannot be made of ordinary glass, which is opaque to these infrared wavelengths. Instead, most use special lenses made from germanium, calcium fluoride, or crystalline silicon. Another design challenge unique to thermal scopes and cameras is preventing the thermal emissions of the camera itself from overwhelming the detector. For this reason, many thermal detectors must be actively cooled in order to function properly – either with cryogenic gases, mechanical cryocoolers, or solid-state thermoelectric coolers called Peltier devices.
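What makes warm targets pop out of a cool background is the Stefan-Boltzmann law: total radiated power per unit area grows as the fourth power of temperature, so even modest temperature differences produce a usable signal contrast. A quick Python sketch, using illustrative assumed temperatures for skin and a mild background:

```python
# Stefan-Boltzmann sketch: power radiated per square metre of an ideal
# blackbody surface scales with the fourth power of its temperature.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W per square metre per K^4

def radiant_exitance(temp_kelvin: float) -> float:
    """Blackbody power radiated per square metre of surface, in watts."""
    return SIGMA * temp_kelvin ** 4

body = radiant_exitance(310)        # skin, ~310 K (assumed)
background = radiant_exitance(293)  # ~20 C surroundings (assumed)
print(f"Skin: {body:.0f} W/m^2, background: {background:.0f} W/m^2")
```

A mere 17-kelvin gap yields a difference of roughly a hundred watts per square metre, which is exactly the contrast a microbolometer array turns into a glowing silhouette.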

And that, dear viewers, is the story of night vision up to the present day. As you can see, what we typically think of as “night vision” is a bit of a misnomer, as this technology needs at least a small amount of visible light or near-infrared light to function and cannot be used in total darkness. Still, combined with thermal imaging, image intensifier scopes have succeeded in lifting the age-old protective cover of night, leaving few truly safe places on the battlefield.


The post Who Invented Night Vision and How Does It Work? appeared first on Today I Found Out.
