The MindConnect Service exposes an API that enables shop floor devices to send data securely and reliably to MindSphere. It opens the MindSphere platform to custom applications that collect and send data to be stored and used by applications in the cloud.
The MindConnect Service enables the development of custom data collectors, also referred to as custom agents. These software applications act as data sources that upload the collected data into MindSphere.
To access this service, you need the respective roles listed in MindConnect API roles and scopes.
The custom agent needs a field-side network infrastructure that forwards and routes outbound HTTPS requests to the Internet.
MindConnect supports multiple agent device classes: powerful hardware platforms as well as resource-constrained devices. All target agent platforms must comply with the following minimum requirements:
- HTTP processing
- JSON parsing
- JSON Web Token (JWT) generation
- HMAC generation (preferably SHA2 based hashing)
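The last two requirements can be met with very little code on most platforms. The following sketch shows HMAC-SHA256 generation and a minimal HS256-signed JWT using only the Python standard library; the claims in the example token are illustrative and do not represent the exact MindSphere token schema.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT specification requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_jwt_hs256(payload: dict, secret: bytes) -> str:
    """Build a JWT signed with HMAC-SHA256 (alg HS256)."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(secret, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

# Illustrative claims only; the real token content is defined by MindSphere.
token = make_jwt_hs256({"iss": "my-agent", "iat": int(time.time())}, b"shared-secret")
```

A resource-constrained device only needs an HMAC-SHA256 primitive and base64url encoding to produce such tokens; no full crypto library is required.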
Data Source Configuration
Before an agent uploads data, MindSphere needs additional configuration to interpret the agent's data stream. This configuration requires the following definitions within MindSphere:
- Data Source Provisioning
- Property Set Provisioning
- Mapping for Data Source and Property Set
A Data Source is a logical group that holds so-called Data Points. Data Points hold metadata about a specific metric that the agent generates or measures.
For example, if an agent measures ambient temperature and pressure data, each of these two measurements needs to be defined as a separate Data Point:
- Data Point 1: Temperature measurement
- Data Point 2: Pressure measurement
A Data Source, on the other hand, is an object that encapsulates and groups Data Points.
Data Points are defined via a Data Source Configuration. MindSphere provides the dataSourceConfiguration endpoint of the Agent Management Service for this purpose. For more details refer to Creating a Data Source Configuration.
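To make the relationship between a Data Source and its Data Points concrete, the sketch below builds a configuration body for the temperature/pressure example above. The field names (`configurationId`, `dataSources`, `dataPoints`, etc.) are assumptions for illustration; the normative schema is defined by the Agent Management Service.

```python
# Illustrative request body for the dataSourceConfiguration endpoint.
# Field names are assumptions, not the normative Agent Management schema.
data_source_configuration = {
    "configurationId": "environment-config-001",
    "dataSources": [
        {
            "name": "EnvironmentSensors",
            "description": "Ambient sensors on the shop floor",
            "dataPoints": [
                {"id": "DP-Temperature", "name": "Temperature", "type": "DOUBLE", "unit": "°C"},
                {"id": "DP-Pressure", "name": "Pressure", "type": "DOUBLE", "unit": "hPa"},
            ],
        }
    ],
}
```

Note how the Data Source (`EnvironmentSensors`) merely groups the two Data Points; each measurement keeps its own id, type, and unit.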
Standard data types
The MindConnect Service uses so-called standard data types.
Standard in this context means:
- The API defines how standard data types have to be transmitted, e.g. how metadata and production data need to be formatted as HTTPS payloads.
- For each of the standard data types, there are predefined routing mechanisms which allow that information to be parsed and stored automatically on (virtual) assets within MindSphere.
- For each of the standard data types, there is a preconfigured mass data storage available.
- Data of standard types can be accessed and queried in a standardized way by applications and analytical tools in MindSphere.
In contrast to custom data types, no additional configuration or coding is required for parsing and storing data provided by a custom agent. This is fully automated functionality provided by MindSphere.
MindSphere supports the following standard payload data types for production data:
- Time Series
Time Series are simple Data Point values that change constantly over time, e.g. values from analog sensors like a temperature sensor. This also applies to any other measured values that have an associated timestamp.
- Events
Events are based on machine events, e.g. emergency stop or machine failure occasions. However, this mechanism can also be used to propagate custom agent-driven notifications, e.g. if you do on-site threshold monitoring and want to report a violated threshold.
- File Upload
With this data type you can upload files of up to 9 MB per exchange call. The files are attached to the corresponding (virtual) asset, e.g. device log files or complex sensor structures. Files that are uploaded can be referenced by their parent (virtual) asset in MindSphere. The content of these files is not parsed by MindSphere; custom applications or analytical tools are required to interpret and visualize the data.
- Data Models
A description of the agent-side asset hierarchy and configuration, including measurement points. For some custom agents it is more convenient to upload the data model to MindSphere directly. MindSphere uses this data to dynamically create (virtual) assets, aspects, variables or mappings.
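Of these standard types, Time Series is the most common. The sketch below shows what a batch of time series records for the temperature/pressure example might look like: each record carries a timestamp plus one value per Data Point. The field names (`timestamp`, `values`, `dataPointId`, `qualityCode`) are assumptions for illustration, not the exact wire format.

```python
# Illustrative time series records; field names are assumptions.
# Each record pairs a timestamp with one value per Data Point.
time_series = [
    {
        "timestamp": "2024-05-01T12:00:00.000Z",
        "values": [
            {"dataPointId": "DP-Temperature", "value": "21.5", "qualityCode": "0"},
            {"dataPointId": "DP-Pressure", "value": "1013.2", "qualityCode": "0"},
        ],
    },
    {
        "timestamp": "2024-05-01T12:00:10.000Z",
        "values": [
            {"dataPointId": "DP-Temperature", "value": "21.6", "qualityCode": "0"},
            {"dataPointId": "DP-Pressure", "value": "1013.1", "qualityCode": "0"},
        ],
    },
]
```

Because each value references a Data Point id, MindSphere can route it to the mapped property automatically once the Data Point Mapping (described below) is in place.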
Data Point Mapping
A Data Point Mapping needs to be defined so that MindSphere can interpret the data flowing from the agent to MindSphere. The Data Source Configuration holds metadata about the agent side, whereas a Property Set holds metadata about the IoT side. Finally, MindSphere needs information to map Data Point metadata to property metadata. This configuration is called Data Point Mapping and defines a mapping from each Data Point to a property.
For more details refer to Creating a Data Point Mapping.
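A single mapping entry connects one agent-side Data Point to one property of a (virtual) asset. The sketch below illustrates this shape for the temperature Data Point; all field names (`agentId`, `entityId`, `propertySetName`, etc.) are assumptions for illustration, not the normative mapping schema.

```python
# Illustrative Data Point Mapping entry; field names are assumptions.
# It links an agent-side Data Point to an IoT-side property of an asset.
data_point_mapping = {
    "agentId": "agent-001",
    "dataPointId": "DP-Temperature",
    "entityId": "asset-wind-turbine-01",
    "propertySetName": "Environment",
    "propertyName": "Temperature",
}
```

One such entry is needed per Data Point, so the pressure Data Point from the earlier example would get its own mapping to a pressure property.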
The exchange endpoint of the MindConnect API provides the agent with the capability of uploading its data to MindSphere. This data can be of type:
- Time Series
- File Upload
- Event Upload
The format conforms to a subset of the HTTP multipart specification, but permits at most two levels of nesting.
For more details refer to Consuming exchange services.
A custom agent consumes the MindConnect Service to realize the following tasks:
- Upload time series
- Upload files
- Poll for events
- Create, receive and acknowledge business events (Alarms)
- Describe and upload asset data models
- Upload data of custom data types for custom handling in MindSphere
- Download files from MindSphere repository (e.g. for firmware or configuration updates)
The manager of a wind farm wants to collect sensor data from a wind turbine.
A developer writes a field application (agent) which collects the sensor data. The data is sent to MindSphere via the MindConnect API.
Except where otherwise noted, content on this site is licensed under the MindSphere Development License Agreement.