I’m not quite sure how I got myself to work this morning and if I’m being honest I don’t really care. Between when I opened my eyes and when I opened my laptop, my brain sent out a steady stream of instructions that got me up, dressed, fed and out the door – and I was conscious of only a fraction of those commands.
That’s human intelligence at work, seamlessly fusing millions of data points together to create a complete picture.
Artificial intelligence works the same way, and as data fusion technology advances, a richer, more detailed picture is coming into focus.
Consider the variety of sources and types of data now available in the earth intelligence space.
To begin, Terris accesses images from five primary sources.
- Satellites: There are two main types: high-altitude satellites, the originals, which are very costly to launch; and low earth orbit (LEO) satellites, which are cheaper to build and launch.
- Aerial: This category includes airplanes, helicopters, and balloons – basically any vehicle that can cover large distances with cameras and sensors attached. Aerial images can be clearer than satellite images but can’t cover as much area.
- Unmanned aerial vehicles (UAVs), a.k.a. drones: These are the cheapest to obtain, easiest to operate, and offer the clearest images, but of the three aerial sources, UAVs cover the least terrain.
- Ground (terrestrial) images: These are the images we create every day on our phones, cameras, and video equipment that show us the view from where we or the camera happen to be.
- Underwater (bathymetric): Light-based and acoustic data that measures water depth and the topography of the land beneath the water.
From there, each of these sources can produce visual data in several forms; three are especially common.
- Realistic images are the real-life photos and videos we see and create.
- Ultraviolet (UV) images help with remote sensing of rocks, minerals and other natural earth-surface materials.
- Infrared (IR) images detect radiation energy in the form of heat from the earth’s surface.
Those sources can also layer on elevation data.
- Radar uses radio waves to detect the distance, direction and velocity of objects, weather and terrain.
- Lidar uses light in the form of lasers to measure the distance of an object or landmass and is commonly used to create high-resolution maps across a wide variety of applications including autonomous vehicles, gaming, mining, transportation and agriculture.
- Sonar uses sound waves to measure distances and detect objects underwater.
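Radar, lidar and sonar ranging all reduce to the same time-of-flight arithmetic: emit a pulse, time the echo, and halve the round trip. A minimal sketch (the pulse timings are made up for illustration):

```python
# Time-of-flight ranging: a pulse travels to the target and back,
# so distance = (wave speed * round-trip time) / 2.

SPEED_OF_LIGHT_M_S = 299_792_458   # lidar (laser light) and radar (radio waves)
SPEED_OF_SOUND_WATER_M_S = 1_500   # sonar: rough speed of sound in seawater

def time_of_flight_distance(round_trip_s: float, wave_speed_m_s: float) -> float:
    """Distance to a target from a pulse's round-trip travel time."""
    return wave_speed_m_s * round_trip_s / 2

# A lidar echo returning after 2 microseconds puts the target ~300 m away.
print(round(time_of_flight_distance(2e-6, SPEED_OF_LIGHT_M_S), 1))       # 299.8
# A sonar ping returning after 0.4 s implies ~300 m of water depth.
print(round(time_of_flight_distance(0.4, SPEED_OF_SOUND_WATER_M_S), 1))  # 300.0
```

The same function serves all three sensors; only the wave speed changes, which is why lidar resolves fine detail over short ranges while sonar handles deep water.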
Now comes the customization: layering any combination of geographic information system (GIS) data – both publicly available and proprietary to the client – to complete the picture with deeper, more specific insights about any place on earth.
Examples of this type of data include:
- weather and climate patterns;
- property boundaries and ownership;
- climate trends;
- terrain;
- infrastructure, such as roads, railways, airstrips, energy fields, transmission lines;
- vegetation, such as trees, shrubs, and grass;
- waterways;
- buildings;
- migration patterns; and
- solar irradiance, which measures the sun’s energy reaching the surface.
That’s the opportunity data fusion offers: the ability to seamlessly mix and merge layers of data to create images that provide the viewer with a deeper understanding of their world, so they can make faster, better informed decisions.
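The layering described above can be sketched as stacking co-registered grids over the same patch of ground into one per-pixel record. This is a toy illustration only – the layer names and values are invented, not any real product’s data model:

```python
# Toy data fusion: merge equally sized, co-registered raster layers
# into a single grid of per-pixel feature records.

from typing import Dict, List

Grid = List[List[float]]

def fuse_layers(layers: Dict[str, Grid]) -> List[List[Dict[str, float]]]:
    """Combine named grids into one grid of {layer_name: value} dicts."""
    names = list(layers)
    rows = len(layers[names[0]])
    cols = len(layers[names[0]][0])
    return [
        [{name: layers[name][r][c] for name in names} for c in range(cols)]
        for r in range(rows)
    ]

# Three 2x2 layers over the same ground patch (made-up values).
fused = fuse_layers({
    "elevation_m": [[120.0, 121.5], [119.8, 118.2]],
    "infrared": [[0.31, 0.35], [0.28, 0.40]],
    "solar_irradiance_w_m2": [[810.0, 805.0], [820.0, 798.0]],
})
print(fused[0][0])  # every layer's reading for the top-left pixel, in one place
```

Each pixel now carries all of its layers at once, which is the essence of the "deeper understanding" a fused view offers.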
Earth intelligence about any place, visible in one place.
I like the sound of that.