NASA Jet Propulsion Laboratory



MS (180-800C)
4800 Oak Grove Drive
Pasadena, CA 91109
United States


The Jet Propulsion Laboratory (JPL) has been a NASA Field Center since NASA was created in 1958. JPL, an operating division of the California Institute of Technology (Caltech), performs research, development and related activities for NASA. The people of JPL share a common objective: research and development in the national interest.


Three characteristics shape JPL's philosophy, mission, and goals. As part of Caltech, JPL pursues the highest standards of scientific and engineering achievement; excellence, objectivity, and integrity are its guiding principles. As NASA's lead center for unmanned exploration of the solar system, JPL directs unmanned planetary missions for the United States. And JPL helps the United States solve technological problems, performing research, development, and spaceflight activities for NASA and other agencies.

Technology Disciplines

Technologies (showing 10 of 558):
"Earth Now" iPhone App
"Eyes on the Earth 3D"
(This replaces NTR#47891) Radiometric Calibration of UAVSAR Images
3D Endoscope to Boost Safety, Cut Cost of Surgery
3D Orbit Visualization for Earth Observing Missions
A 90-102 GHz CMOS Based Pulsed-Echo Fourier Transform Spectrometer: A New Tool for In Situ Detection and Millimeter-Wave Cavity-Based Molecular Spectroscopy
A Closer Look at Quality Control
A Free Space Optical Receiver for Data Detection and Radio Science Measurements
A Generic, Extensible, Configurable Push Pull Framework for Large Scale Science Missions
A GN&C Covariance Analysis Tool (G-CAT) for Descent and Landing


Facilities (showing 10 of 62):
10-Ft Vertical Space Simulator
24 inch Telescope
25-Ft Space Simulator
48-inch Telescope
Acoustics Noise Test Cell
ARC Scientific Instrumentation Evaluation Remote Research Aircraft (SIERRA)
Atomic Oxygen Test Facility, 121-100
Computer Vision Lab
DFRC B-200 Super King Air Research/Support Aircraft





The Experimental Program to Stimulate Competitive Research, or EPSCoR, establishes partnerships with government, higher education, and industry that are designed to effect lasting improvements in a state's or region's research infrastructure, R&D capacity, and hence its national R&D competitiveness.

The EPSCoR program is directed at those jurisdictions that have not in the past participated equitably in competitive aerospace and aerospace-related research activities. Twenty-four states, the Commonwealth of Puerto Rico, the U.S. Virgin Islands, and Guam currently participate. Five federal agencies, including NASA, conduct EPSCoR programs.

NASA EPSCoR Jurisdictions and their Directors
View EPSCoR Directors by State/Jurisdiction

The goal of EPSCoR is to provide seed funding that will enable jurisdictions to develop an academic research enterprise directed toward long-term, self-sustaining, nationally-competitive capabilities in aerospace and aerospace-related research.



Image Sensors Enhance Camera Technologies

Originating Technology/NASA Contribution

Buzz Aldrin standing on the stark surface of the Moon. The towering gas pillars of the Eagle Nebula. The rocky, rust-colored expanses of Mars. Among NASA’s successes in space exploration have been the indelible images the Agency’s efforts have returned to Earth. From the Hubble Space Telescope to the Hasselblad cameras in the hands of Apollo astronauts, many of NASA’s missions involve technologies that deliver unprecedented views of our universe, providing fuel for scientific inquiry and the imagination.

Less known than Hubble’s galactic vistas or the Mars rovers’ panoramic landscapes is the impact NASA has had on the era of digital photography on Earth. While the first digital camera was built by Eastman Kodak in 1975, the concept originated with Jet Propulsion Laboratory (JPL) engineer Eugene Lally, who in the 1960s described the use of mosaic photosensors to digitize light signals and produce still images. During the following decades, NASA continued the work of developing small, light, and robust image sensors practical for use in the extreme environment of space.

In the 1990s, a JPL team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to significantly miniaturize cameras on interplanetary spacecraft yet maintain scientific image quality. An image sensor contains an array of photodetectors called pixels that collect single particles of light, or photons. (The word “pixel”—short for picture element—was first published in 1965 by JPL engineer Frederic Billingsley.) The photons entering the pixel are converted to electrons, forming an electrical signal a processor then assembles into a picture. CMOS sensors offered a number of qualities appealing to NASA compared to the charge-coupled device (CCD), the prevalent image sensor at the time. Crafted by the same process used to build microprocessors and other semiconductor devices, CMOS image sensors can be manufactured more easily than CCDs and at a lower cost. The CMOS sensor components are integrated onto a single chip, unlike CCDs, which have off-chip components. This integrated setup consumes as much as 100 times less power than CCDs, allows for smaller camera systems, and can be designed with radiation-hard pixel architectures for space applications.
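The photon-to-picture pipeline described above can be illustrated with a minimal sketch. All numbers here are hypothetical (quantum efficiency, full-well capacity, array size); real sensors also add per-pixel amplification, noise handling, and color filtering.

```python
import numpy as np

# Hypothetical 4x4 pixel array: photon counts collected during one exposure.
photons = np.random.default_rng(0).poisson(lam=500, size=(4, 4))

# Each photon has some probability (the quantum efficiency) of freeing an electron.
QUANTUM_EFFICIENCY = 0.6
electrons = (photons * QUANTUM_EFFICIENCY).astype(int)

# The processor maps each pixel's collected charge to a digital brightness
# value, here scaled to an 8-bit range against the pixel's full-well capacity.
FULL_WELL = 1000  # electrons a pixel can hold before saturating
image = np.clip(electrons / FULL_WELL * 255, 0, 255).astype(np.uint8)

print(image.shape)  # one brightness value per pixel
```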

At JPL, Fossum invented the CMOS active-pixel sensor (CMOS-APS), which integrates active amplifiers inside each pixel that boost the electrical output generated by the collected photons. The CMOS-APS featured improved image quality over passive-pixel sensors (without amplifiers) and included a number of on-chip functions, providing for complete miniature imaging systems that operate quickly with low power demands. JPL validated the technology through a series of prototypes.


Fossum realized the CMOS-APS technology would be useful not only for imaging in space but on Earth as well. In 1995, he, his colleague and then-wife Sabrina Kemeny, and three other JPL engineers founded Photobit, based in Pasadena, California. Photobit exclusively licensed the CMOS-APS technology from JPL, becoming the first company to commercialize CMOS image sensors.

“We saw an expanding number of applications for these miniaturized cameras,” says Roger Panicacci, one of Photobit’s founders. The company quickly positioned itself on the cutting edge of the field of CMOS imaging, and by June 2000, it had shipped 1 million sensors for use in popular Web cameras, machine vision solutions, dental radiography, pill cameras, motion-capture, and automotive applications. The company was featured in Spinoff 1999 and founders Fossum, Panicacci, Kemeny, and Robert Nixon were inducted into the Space Foundation’s Space Technology Hall of Fame that same year.

In 2001, the company was acquired by semiconductor memory producer Micron Technology, of Boise, Idaho, and became a division of Micron Imaging Group. With the exploding popularity of the camera phone in the mid-2000s, the CMOS-APS proved ideal for crafting cameras that fit into slim cell phones and produce good photos without draining batteries. Riding the wave of camera phone demand, in 2006 the group became the world’s leading supplier of CMOS image sensors. In 2008, Micron Imaging Group was spun off from Micron to form Aptina Imaging Corporation, based in San Jose, California. That same year, it shipped its 1 billionth sensor.

Product Outcome

Aptina has continued to improve on the original, NASA-developed CMOS-APS. The company has invented increasingly small pixel architectures, as well as a process for optimizing the amount of light that hits a pixel, boosting sensitivity and image quality while allowing the company’s customers to design more compact camera systems.

“Our technology is taking advantage of semiconductor innovation,” says Panicacci, now Aptina’s vice president of product development. “As transistors shrink, we can build smaller pixels, meaning that, in a given area of silicon, we can provide higher and higher resolution for products like camera phones.”

Aptina’s line of sensors allows for advanced camera features like electronic pan, tilt, and zoom, as well as applications requiring motion detection, target tracking, and image compression. The sensors are also incorporated into the company’s line of system-on-a-chip (SOC) devices—synergistic packages that enhance imaging, are easier and cheaper to integrate into products, and provide benefits like anti-shake compensation that corrects blurring from subject motion or an unsteady camera.

Aptina has grown from its NASA roots into a leader of the CMOS image sensor industry. Its sensors are currently integrated into one of every three cell phone cameras and are part of every major brand of personal computer camera worldwide, as well as many embedded cameras for notebook computers. The company is also advancing CMOS sensors for digital still and video cameras, products that have traditionally featured CCD sensors. It has produced the first 10-megapixel CMOS image sensor for point-and-shoot cameras, a device incorporating the company’s High Speed Serial Pixel Interface (HiSPi) capabilities, enabling a camera to create high-definition (HD) imagery.

The NASA-derived CMOS-APS can also be found in other, less obvious applications. Aptina produces tiny sensors for use in endoscopes for minimally invasive medical diagnostic procedures. The sensors do not generate potentially painful heat during examinations and are cheap enough to allow for disposable scope tubes, eliminating potential complications from improperly sterilized scopes. The company also worked with a medical imaging partner to develop the PillCam, an ingestible camera for imaging a patient’s gastrointestinal tract.

The automotive and surveillance industries represent other major markets for Aptina’s sensors. Major international auto brands like Daihatsu and Volvo employ Aptina designs for applications like backup cameras that help with parking and ensure safe reverse motion. Aptina estimates its customers will be building up to 25 million backup camera systems annually by 2011, potentially reducing backover accidents by about 20,000 per year. The company also has partnerships within the field of network surveillance, designing imaging technology that can spot cheating in casinos or intruders in unauthorized areas.

Last year, Aptina became a stand-alone company, and while Photobit once saw 1 million sensors shipped as a major milestone, says Panicacci, Aptina often ships over 1 million sensors a day. As demand rises for high-end capabilities like HD imaging and the market for camera products booms, Aptina’s NASA-developed technology should play an even greater role in products benefiting the public every day. Research by the International Data Corporation, an independent market research company, predicts annual sales of more than 1 billion camera phones beginning in 2010, all featuring one or more CMOS-APS cameras.

HiSPi™ is a trademark of Aptina Imaging Corporation. PillCam® is a registered trademark of Given Imaging Ltd.

GPS Software Packages Deliver Positioning Solutions

Originating Technology/NASA Contribution

To better understand and predict global climate, scientists look to the Earth’s oceans. Natural forces like wind, storms, and heat affect ocean surface and sea level, and these changes can shed light on short- and long-term global climate patterns.

With the goal of tracking ocean currents and temperature over time, in 1979, NASA’s Jet Propulsion Laboratory (JPL) started planning for TOPEX, a topography experiment to launch a satellite altimeter into space to measure the height of the world’s oceans. Before scientists could launch TOPEX, however, they needed a way to obtain precise location information about where TOPEX was when it took measurements. To this end, JPL developed an innovative software program called the GPS (global positioning system)-Inferred Positioning System and Orbit Analysis Simulation Software, abbreviated as GIPSY-OASIS, to process and calculate data to determine a spacecraft’s position.

In 1992, TOPEX was sent into orbit, and JPL achieved better-than-expected results. GIPSY-OASIS was able to pinpoint the location of TOPEX within just 2 centimeters. In addition, long-term results from TOPEX allowed scientists to observe El Niño, an oscillation of the ocean-atmosphere system characterized by unusually warm ocean temperatures, for the very first time.

Since the success of TOPEX, JPL has refined GIPSY-OASIS to become a sophisticated system that calculates accurate positioning information—not just for NASA, but for commercial entities as well. A companion software package called Real-Time GIPSY (RTG) was developed to provide positioning in certain time-critical applications.


First featured in Spinoff 1999, the GIPSY and RTG software packages incorporate special GPS algorithms developed at JPL to deliver highly accurate positioning capabilities to a broad array of space, airborne, and terrestrial applications. In 2004, the precision GPS software was inducted into the Space Foundation's Space Technology Hall of Fame. Hundreds of commercial and noncommercial licenses for GIPSY and RTG have been released, including more than 200 science and nonprofit user licenses of GIPSY on a no-fee research basis.

One of the commercial organizations licensing GIPSY is Longmont, Colorado-based DigitalGlobe. In 2004, Doug Engelhardt, a principal systems engineer at DigitalGlobe, was looking for a new method for accurate orbit determination of the company’s high-resolution imaging satellites. When he learned about the advanced capabilities of GIPSY, including its processing speed and accuracy, he decided to try it out.

Product Outcome

While TOPEX takes measurements of Earth’s oceans, DigitalGlobe’s satellites take pictures of Earth’s surface. Like JPL, DigitalGlobe needs to know precisely where a satellite is when it gathers information. In order to place pictures accurately on a map, DigitalGlobe must know both the location of the satellite and the direction it was pointing at the moment each picture was taken. By combining this information, DigitalGlobe can assemble the pictures accurately to create high-resolution imagery of Earth.

Engelhardt explains that DigitalGlobe’s satellites receive location information in space just like a hand-held GPS unit receives location information on Earth. The satellite then transmits the GPS location data as well as image data to one of the company’s ground stations. The data is then sent to DigitalGlobe’s headquarters, where GIPSY processes it. “As soon as we receive the data, the GIPSY process is kicked off. For every image that is processed, we need to know the precise location of the satellite, at the time it was taking the picture. After this is determined, the precise location data is linked with the images,” he says.
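The linking step Engelhardt describes, finding where the satellite was at the instant each picture was shot, can be sketched as a simple interpolation between timestamped position fixes. The numbers below are purely illustrative, and GIPSY's actual orbit determination is far more sophisticated than linear interpolation.

```python
import numpy as np

# Hypothetical GPS fixes: time (seconds) and satellite position along one axis (km).
fix_times = np.array([0.0, 10.0, 20.0, 30.0])
fix_positions = np.array([7000.0, 7005.0, 7010.0, 7015.0])

# Each image carries the timestamp at which it was taken.
image_times = np.array([4.0, 17.5, 26.0])

# Estimate the satellite's position at each image timestamp,
# then link that position to the image.
image_positions = np.interp(image_times, fix_times, fix_positions)
print(image_positions)
```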

By utilizing the licensed JPL technology, DigitalGlobe is able to produce imagery with highly precise latitude and longitude coordinates. This imagery is then made available to customers through an online platform and image library.

As one of two providers of high-resolution Earth imagery products and services, DigitalGlobe supplies imagery for a variety of uses within defense and intelligence, civil agencies, mapping and analysis, environmental monitoring, oil and gas exploration, infrastructure management, Internet portals, and navigation technology. As of March 2010, the company’s content library had more than 1 billion square kilometers of Earth imagery, with 33 percent of it being less than 1 year old.

The company has a variety of city, state, and county government clients. Among other projects, these groups use the imagery for emergency response and infrastructure planning. “They keep track of the landscape in areas that are being developed with shopping centers, roads, and housing. They might look at the amount of trees and landscaping in an area, and then run computations on the amount that is paved, and how much runoff they need to account for in the drainage systems,” describes Engelhardt.

Another major client of DigitalGlobe is Google, a provider of Internet search tools and services. Google uses the imagery for its Google Earth and Google Maps applications, providing a base layer of satellite imagery that Google can place overlay information on, such as roads, residences, and businesses. With Google Maps, users enter a single address or a starting and ending address, and then have the option to view the location or route with or without satellite pictures. If users choose the satellite view, they can see landmarks and other points of interest along the way. With Google Earth, users can enter an address or location and view, zoom in on, and fly through detailed satellite imagery of that specific location.

For a similar mapping platform, DigitalGlobe supplies imagery to Bing, a provider of Internet search tools and services from Microsoft. In addition, Nokia purchases DigitalGlobe imagery to use with its GPS service on select Nokia cell phones. Insurance companies also purchase DigitalGlobe’s satellite imagery to get a distant look at damage resulting from natural disasters like hurricanes and floods, without having to visit the location. The news media are another frequent user of DigitalGlobe’s satellite imagery to provide a bird’s eye view of a newsworthy location.

Engelhardt says the NASA license has been invaluable to the company’s success. “The capability provided by JPL has been huge. A vast amount of research and effort went into the software so it could be used by industry.”

For the future, DigitalGlobe plans to focus on standardizing the imagery so it is easy for clients to use. “We have imagery and the customer has applications, but getting the formats to go between one another is a lot of work. As we build the commercial market, that is one of our big efforts. The government has been using satellite imagery for years, but for commercial customers, it is brand new.”

Google Earth™ and Google Maps™ are trademarks of Google Inc. Bing™ is a trademark of Microsoft Corporation.

“People told me, ‘You’re an idiot to work on this,’” Eric Fossum recalls of his early experiments with an alternate form of digital image sensor at NASA’s Jet Propulsion Laboratory (JPL).

His invention of the complementary metal oxide semiconductor (CMOS) image sensor would go on to become NASA’s single most ubiquitous spinoff technology, dominating the digital imaging industries and enabling cell phone cameras, high-definition video, and social media as we know it.

By the early 1990s, sensors based on the charge-coupled device (CCD) had enabled high-quality digital photography, but Fossum believed he could make imagers with smaller and lighter machinery using CMOS technology to create what he called active pixel sensors. It had been tried before, but CMOS technology had since improved, and Fossum and his team figured out how to eliminate the visual noise that had stymied earlier attempts.

Using CMOS sensors, they were able to produce images at lower voltages and without the high charge-transfer efficiencies that CCD imagers required, and almost all the other camera electronics could be integrated onto the same chip as the pixel array, a development that would make CMOS imagers more compact, reliable, and inexpensive.

With a license from the California Institute of Technology, which manages JPL, in 1995 Fossum and colleagues founded a company, which they later sold, to develop the technology.

In the end, it was the cell phone camera, which needed to be small and energy-efficient, that drove the widespread mass production of CMOS image sensors. Resulting improvements to the technology and its manufacture drove costs down and quality up until CCD-based devices couldn’t compete.

CMOS imagers have enabled small, high-definition video cameras, including the popular body-mountable action cameras marketed by San Mateo, California-based GoPro.

By 2015, the technology’s market, which also includes the automotive, surveillance, and medical industries, reached nearly $10 billion.

There has been much talk of self-driving cars lately, but farmers have enjoyed self-driving tractors for more than a decade, in part due to a partnership between John Deere and the Jet Propulsion Laboratory (JPL).

In the 1990s, scientists at JPL, where the first global tracking system for Global Positioning System (GPS) satellites had been developed, were working to stream satellite tracking data in real time via the Internet. The result was the Real-Time GIPSY (RTG) software. GIPSY refers to the GNSS-Inferred Positioning System, wherein GNSS stands for Global Navigation Satellite System.

RTG ended up being one of NASA’s most important contributions to modern society, enabling accurate GPS navigation anywhere on the planet.

In 2001, NavCom, owned by John Deere, licensed the RTG software and also contracted with JPL to receive data from the center’s global network of reference stations. John Deere, based in Moline, Illinois, had already developed its own GPS receivers for tractor guidance, but when the company released the first receivers to tap into NASA’s ground stations and incorporate JPL’s software in 2004, it could finally offer self-driving equipment worldwide.

The trackers were accurate down to about four inches, not quite as accurate as John Deere’s real-time kinematics (RTK)-based trackers, but much more affordable. The RTK system required the purchase of one or more signal towers.

Typically, when a farmer crisscrosses a field pulling a seeder, plow, or other equipment, the rows overlap by about 10 percent, meaning a significant portion of the field receives double the necessary resources, and the job takes longer than necessary. Eliminating overlap also cuts down on fuel costs and wear and tear on the machinery. And higher accuracy also means more reliable yield maps, which are created by combining location data with mass flow data from sensors on a harvesting combine.
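The savings from eliminating that overlap are simple to estimate. The field size and seeding rate below are hypothetical, chosen only to make the 10 percent figure concrete; actual numbers vary by crop and operation.

```python
# Hypothetical 100-hectare field seeded at 50 kg of seed per hectare.
field_ha = 100
seed_rate_kg_per_ha = 50

# With ~10% pass overlap, the overlapped strips are treated twice,
# so the effective area covered is about 10% larger than the field.
overlap = 0.10
seed_with_overlap = field_ha * (1 + overlap) * seed_rate_kg_per_ha
seed_without_overlap = field_ha * seed_rate_kg_per_ha

savings = seed_with_overlap - seed_without_overlap
print(savings)  # ~500 kg of seed saved on this hypothetical field
```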

