Many tasks in industry are perfect for robots: they are repetitive, take place in hazardous or remote environments, and require a great deal of manual data collection. These tasks are a pain point for thousands of industrial companies across the globe, and therefore also an opportunity for Cognite. We are in a unique position because we facilitate digital twins for our customers: we can add context to the data captured by robots, which in turn enriches the operational digital twins.
You can read more about Solving the Robot Data Problem with Industrial DataOps in this Hart Energy article by Francois Laborie, Cognite President of North America.
Cognite Data Fusion (CDF) gives industrial companies a powerful data foundation for automation. With sensor data, asset hierarchies, and spatial information available in one place, robotics systems - from drones to wheeled and four-legged robots - can connect to digital twins and collect data automatically through CDF's APIs. CDF makes this integration efficient and scalable, and it simplifies inspection mission planning.
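As a rough illustration, the sketch below uses the Cognite Python SDK (cognite-sdk) to pull the asset context a robot mission planner might consume. The external ID and the environment-based authentication are assumptions made for the example, not a description of any specific robot integration.

```python
from cognite.client import CogniteClient

# Credentials are assumed to be configured in the environment
# (see the cognite-sdk documentation for authentication options).
client = CogniteClient()

# Hypothetical external ID of the equipment the robot should inspect.
ASSET_XID = "pump-4711"

# Look up the asset in the digital twin.
asset = client.assets.retrieve(external_id=ASSET_XID)
if asset is None:
    raise ValueError(f"No asset with external id {ASSET_XID}")
print(asset.name, asset.description)

# List the sensor time series attached to this asset, e.g. to decide
# which gauges the robot should verify on site.
for ts in client.time_series.list(asset_external_ids=[ASSET_XID], limit=25):
    print(ts.external_id, ts.unit)
```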
With CDF as the backbone of our robotics efforts, we can enable our customers to perform both telepresence and autonomous inspections at scale, and to gather whatever data they need while doing so. Some examples of data that help our customers optimize production and maintenance (a short sketch of how such readings flow back into CDF follows the list):
- Evaluating the physical state of the plant, for example by monitoring whether a valve is open
- Detecting gas leaks, hot spots, or faulty equipment using thermal images
- Monitoring pressure, temperature, tank, and lubrication levels through gauge reading
- Detecting, locating, and monitoring the growth rate of corrosion
- Adapting rapidly to failures by evaluating all captured photos for liquid spills
- Acting as a first responder or planning operations through telepresence, from a remote operations center or from home
- Monitoring asset health, tracking objects, and detecting changes with custom computer vision models
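For instance, a gauge reading extracted from a robot's camera image can be written back to CDF as a datapoint on a time series tied to the inspected asset. The sketch below assumes a recent version of the Cognite Python SDK (where datapoints live under client.time_series.data); the external ID and the reading itself are hypothetical.

```python
from datetime import datetime, timezone

from cognite.client import CogniteClient

client = CogniteClient()  # credentials assumed to be configured in the environment

# Hypothetical time series representing a pressure gauge on the inspected asset.
GAUGE_TS_XID = "pump-4711:pressure-gauge"

# Value read off the gauge by the robot's vision pipeline (hypothetical).
reading_bar = 3.7

# Insert the reading as a single datapoint stamped with the current time.
client.time_series.data.insert(
    [(datetime.now(timezone.utc), reading_bar)],
    external_id=GAUGE_TS_XID,
)
```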
Use Cases
There are plenty of tasks that robots can do just as well as, or better than, human beings. This makes it possible for industrial companies to streamline their operations and improve how they perform day-to-day tasks. Some examples of use cases where robots perform on par with human operators are:
- Detecting blocked escape routes
- Detecting improper use of HSE equipment
- Daily updated 360° photos for street-view-like experiences
- Daily updated photogrammetry models
- Surface temperature monitoring
- P&ID line walks
- Vibration inspection
- Temperature signature monitoring (IR) for machine health monitoring
- Detecting broken cables
- Locating industrial tools
- Security monitoring
- Noise/audio change detection
- Evaluating load-bearing structures
- General change detection of an area
- Faster and safer first response
- Checking the state of emergency lights
- Tracking where equipment or vehicles are located and keeping the digital twin updated, making every physical object searchable
- Carrying base stations to collect data from transmitters with limited bandwidth
Robot Data Management - the data storage problem
Cognite Data Fusion is the ideal place to store data captured by robots. Whenever a robot finishes a mission, it can link the inspection data directly to the relevant asset. Cognite Data Fusion's AI-powered contextualization engine helps determine which asset the data belongs to, either by reading tags in images or the video stream, or by using computer vision and spatial information and comparing them with the asset's 3D assembly.
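As a minimal sketch of what this can look like in practice, the snippet below uploads an inspection photo to CDF and links it to the asset identified by the contextualization step. The file path, external ID, and metadata are hypothetical, and credentials are assumed to be configured in the environment.

```python
from cognite.client import CogniteClient

client = CogniteClient()

# Hypothetical outcome of contextualization: the asset the photo belongs to.
asset = client.assets.retrieve(external_id="pump-4711")

# Upload the captured photo and attach it to that asset, with mission metadata.
uploaded = client.files.upload(
    "inspections/2023-05-10/pump-4711.jpg",  # hypothetical local path
    name="pump-4711 inspection photo",
    asset_ids=[asset.id],
    metadata={"mission": "weekly-gauge-round", "robot": "spot-01"},
)
print("Uploaded file id:", uploaded.id)
```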
Robot Data Management - navigation with minimal training
Not only is Cognite Data Fusion the ideal place to store and contextualize data captured by robots, it also accelerates robot deployment. Using the contextualized 3D model, Cognite Data Fusion supports assisted telepresence: an operator can send the robot to a specific piece of equipment with a two-click operation. Autonomous inspection missions are assisted by the same AI-powered contextualization engine, which ensures that the appropriate data is collected in every mission.
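To make the two-click idea concrete, here is a hypothetical sketch of turning the clicked equipment's position (taken from the contextualized 3D model) into a navigation goal for the robot. The coordinates, standoff distance, and data structures are illustrative assumptions, not CDF or robot-vendor APIs.

```python
import math
from dataclasses import dataclass


@dataclass
class NavigationGoal:
    """Target position for the robot, in the plant/3D model coordinate frame."""
    x: float
    y: float
    heading_rad: float  # face the equipment on arrival


def goal_toward_asset(robot_xy: tuple[float, float],
                      asset_xy: tuple[float, float],
                      standoff_m: float = 1.5) -> NavigationGoal:
    """Compute a goal that stops `standoff_m` short of the clicked asset.

    `asset_xy` would come from the contextualized 3D model (for example the
    centroid of the 3D nodes mapped to the selected asset); here it is just
    a hypothetical input.
    """
    rx, ry = robot_xy
    ax, ay = asset_xy
    dx, dy = ax - rx, ay - ry
    dist = math.hypot(dx, dy)
    heading = math.atan2(dy, dx)
    if dist <= standoff_m:
        # Already close enough; stay put but face the equipment.
        return NavigationGoal(rx, ry, heading)
    scale = (dist - standoff_m) / dist
    return NavigationGoal(rx + dx * scale, ry + dy * scale, heading)


# Operator's two clicks: pick the asset in the 3D viewer, then confirm "go".
goal = goal_toward_asset(robot_xy=(0.0, 0.0), asset_xy=(12.4, 3.7))
print(goal)
```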