
Integrating the AWS Cloud via MindConnect Integration

MindConnect Integration can transfer data from any cloud storage service into MindSphere. In this example, multiple variable definitions for an aspect are read from a CSV file, which is stored in an AWS S3 bucket. A pipeline is set up to automatically create an asset, which has an aspect with these variables.

General Information

Duration: 60 mins
Tested with MindSphere version: Release Notes 8th October 2018.

Prerequisites

Preparing Data in AWS

This section describes how to create an S3 bucket in AWS and upload aspect data from a CSV file to it.

Creating an AWS User with API Access

  1. Open the AWS IAM console via https://console.aws.amazon.com/iam/ (login required).
  2. Choose "Users" and then "Add user" in the navigation pane.
  3. Enter a user name for the user.
  4. Select "Programmatic access".
  5. Click on "Next: Permissions".
  6. Select "Attach existing policies to user directly" and pick the "AmazonS3ReadOnlyAccess" policy. (You can update the policies later, if necessary.)
  7. Finish the process by clicking "Next: Review" and then "Create user".
  8. Download the access key ID and secret access key and save them. You will not have access to these keys again after this step.

The generated access keys provide access to the AWS S3 APIs.

Creating an S3 Bucket

  1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Click on "Create bucket".
  3. Enter a bucket name and select an AWS region, e.g. "sensor-bucket-mindsphere" and "EU (Frankfurt)".
  4. Do not make any other configurations and click on "Create".

The new S3 Bucket is available in the Amazon S3 console.

Uploading a CSV file to the S3 Bucket

  1. Create a CSV file for an aspect with multiple variables using the same format as shown below:

    AspectName;SensorType;DataType;Unit;
    breweryAspect;temperature;DOUBLE;°C;
    breweryAspect;motorVoltage;DOUBLE;V;
    breweryAspect;fluidFlow;DOUBLE;m³/h;
    breweryAspect;pressureBefore;DOUBLE;bar;
    breweryAspect;pressureAfter;DOUBLE;bar;
    
  2. Select your S3 bucket (here "sensor-bucket-mindsphere") in AWS.

  3. Drag and drop the CSV file into the Amazon S3 console window.
  4. Click "Upload".

The CSV file has been uploaded to your S3 bucket.
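The CSV file can also be generated programmatically. The following is a minimal sketch in Python; the file name aspect_variables.csv is an illustrative assumption, and the windows-1252 encoding matches the encoding the Flat File service is configured with later in this tutorial.

```python
# Sketch: generate the variable-definition CSV in the format shown above.
# The output file name is an illustrative assumption.
import csv

VARIABLES = [
    ("temperature", "DOUBLE", "\u00b0C"),
    ("motorVoltage", "DOUBLE", "V"),
    ("fluidFlow", "DOUBLE", "m\u00b3/h"),
    ("pressureBefore", "DOUBLE", "bar"),
    ("pressureAfter", "DOUBLE", "bar"),
]

def write_aspect_csv(path, aspect_name, variables):
    # windows-1252 matches the Encoding selected later in the Flat File block
    with open(path, "w", newline="", encoding="windows-1252") as f:
        writer = csv.writer(f, delimiter=";")
        # trailing empty field reproduces the trailing ";" of each line
        writer.writerow(["AspectName", "SensorType", "DataType", "Unit", ""])
        for name, data_type, unit in variables:
            writer.writerow([aspect_name, name, data_type, unit, ""])

write_aspect_csv("aspect_variables.csv", "breweryAspect", VARIABLES)
```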

Integrating an Amazon S3 Bucket into MindConnect

A MindConnect integration is set up in three steps:

  1. Creating an account: MindConnect Integration requires an account for storing access information to connect to other cloud-based applications, e.g. Amazon S3 bucket or Siemens MindSphere.
  2. Adding an operation: MindConnect Integration uses application specific operations for reading or writing data to the connected cloud-based applications. Each operation performs one application specific task.
  3. Creating a pipeline: MindConnect Integration uses pipelines to define processes for transmitting, interpreting and transforming data. Pipelines can combine multiple operations and other pipelines to define multistep workflows.

Creating a MindConnect Integration Account

  1. Open MindConnect Integration from the MindSphere Launchpad.
  2. Log in with your MindConnect Integration credentials.
  3. Go to "Connect".
  4. Open the application "Amazon Simple Storage Service (S3)".
  5. Click on "Add New Account".
  6. Enter a name for the account, e.g. "My_S3_Bucket".
  7. Enter the access key ID and secret access key generated before.
  8. Do not change any other settings.
  9. Click "Save".

The account has been created.

Adding a new Operation

  1. Switch to the "OPERATIONS" tab and click on "Add New Operation".
  2. Enter a name for your operation, e.g. "RetrieveS3Object".
  3. Select the account (here "My_S3_Bucket") from the drop-down and click on "Next".
  4. Select "GetObject" from the list of operations.
  5. Click "Next" without making any further configurations in the following dialogues.
  6. Check the information and click "Finish".

The "GetObject" operation is now visible in the list.

Setting up a Pipeline

This pipeline transfers the data from the S3 bucket into MindConnect. It is constructed using building blocks, which first retrieve the data from the CSV file, then convert it into bytes and finally store them in a document. The building blocks, as well as the input and the output for the pipeline itself, are configured after the assembly.

  1. Switch to the "INTEGRATIONS" tab and click on "Add New Integration".
  2. Select "Orchestrate two or more applications" in the pop-up window.
  3. Enter a name for your integration, e.g. "My_S3_Bucket_Pipeline".
  4. Open "Applications" in the tool bar on the left.
  5. Search for "Amazon Simple Storage Service (S3)" and drag it under the integration block so it connects with the anchor point.
  6. Click on the settings icon of this block and select the account (here "My_S3_Bucket") and operation (here "RetrieveS3Object") from the drop-down menus.
  7. Open "Services" in the tool bar on the left.
  8. Drag "IO" from Services under the "Amazon Simple Storage Service S3" block so it connects with the anchor point.
  9. Open the drop-down menu of the "IO" block and select "streamToBytes".
  10. Drag "Flat File" from Services under the "IO" block so it connects with the anchor point.
  11. Open the drop-down menu of the "Flat File" block and select "delimitedDataBytesToDocument".
  12. Click "Save".

The finished pipeline looks as shown below: My_S3_Bucket_Pipeline

Note

For detailed information on Orchestrated Integrations and Point-to-Point Integrations, refer to the MindConnect Integration documentation.

Configuring the Input/Output Signature

Every MindConnect integration requires an input/output signature, which must define at least one input parameter. Output parameters are optional. This integration shall take an S3Object as input parameter and output a document with rows and columns.

  1. Click on the menu icon at the very right of the integration block on top of your pipeline.
  2. Select "Define Input/Output Signature".
  3. Click on the plus button in the Input tab to create an input field.
  4. Enter a name, e.g. S3Object, and set the type to "String".
  5. Switch to the Output tab and click the plus button to add an output field of type "Document".
  6. Click the plus button again to add another field of type "Document" and activate the "Array" checkbox. This field is nested inside the other Document and represents the rows of the CSV file.
  7. Add four fields of type "String" representing the columns of the CSV file. The intended structure is shown below: My_S3_Bucket_Pipeline Output Signature
  8. Click "Apply" and then "Save".

The Input/Output signature is defined.
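The nested structure this signature describes can be pictured as follows. The inner field names are assumptions that mirror the CSV header columns; the name rows for the array matches the "/document/rows" path used later when configuring the asset-type pipeline.

```python
# Sketch of the data shape the Output signature describes: a Document
# containing an array "rows" of Documents (one per CSV data row), each
# with four String fields. Field names mirror the CSV header columns.
output_document = {
    "rows": [
        {
            "AspectName": "breweryAspect",
            "SensorType": "temperature",
            "DataType": "DOUBLE",
            "Unit": "\u00b0C",
        },
        # ... one entry per data row in the CSV
    ]
}
```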

Configuring the Mapping for the Operation

The "Amazon Simple Storage Service (S3)" block is configured to retrieve an S3Object from the Amazon S3 bucket and forward it as a stream.

  1. Click on the menu icon at the very right of the "Amazon Simple Storage Service (S3)" block.
  2. Select "Map Input and Output".
  3. Configure the input mapping as shown below: My_S3_Bucket_Pipeline Operation Input
  4. Double-click on the field bucketName in "RetrieveS3ObjectInput" and enter the name of your S3 bucket (here "sensor-bucket-mindsphere").
  5. Click "Next".
  6. Configure the output mapping as shown below: My_S3_Bucket_Pipeline Operation Output
  7. Click on "Finish" and then "Save".

The mapping enables the operation "RetrieveS3Object" to read the data from the S3 bucket and output it as a stream.

Configuring the Mapping for the IO Service

The "IO" block is configured to convert the stream into bytes and forward it.

  1. Open the "Map Input and Output" dialogue of the "IO" block.
  2. Configure the input mapping as shown below: My_S3_Bucket_Pipeline IO Input
  3. Configure the output mapping as shown below: My_S3_Bucket_Pipeline IO Output
  4. Click "Finish" and then "Save".

The mapping enables the streamToBytes service to convert the input stream into bytes.

Configuring the Mapping for the Flat File Service

The "Flat File" block is configured to interpret the bytes and store them in a document with rows and columns.

  1. Open the "Map Input and Output" dialogue of the "Flat File" block.
  2. Configure the input mapping as shown below: My_S3_Bucket_Pipeline Flat File Input
  3. Double-click on the other four fields in "delimitedDataBytesToDocument Input" and fill in the following values:

    Parameter                  Selection
    fieldQualifier             "Semicolon"
    textQualifier              "none"
    useHeaderRowForFieldNames  "true"
    Encoding                   "windows-1252: Windows Latin"
  4. Configure the output mapping as shown below: My_S3_Bucket_Pipeline Flat File Output

  5. Click on "Finish" and then "Save".

The mapping enables the delimitedDataBytesToDocument to interpret the bytes it receives as text and store it in a document.
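What the IO and Flat File blocks do can be sketched with the Python standard library. This is an illustrative stand-in, not the service implementation: the stream is read into bytes (the streamToBytes analogue), decoded with the windows-1252 encoding configured above, and split on the ";" delimiter, taking field names from the header row as useHeaderRowForFieldNames prescribes.

```python
# Sketch of the IO and Flat File blocks: read the object stream into
# bytes, then parse the semicolon-delimited text into a document
# (a dict holding a list of row dicts).
import csv
import io

def delimited_bytes_to_document(data: bytes) -> dict:
    text = data.decode("windows-1252")
    reader = csv.DictReader(io.StringIO(text), delimiter=";")
    # drop the empty trailing column produced by the trailing ";"
    rows = [{k: v for k, v in r.items() if k} for r in reader]
    return {"rows": rows}

stream = io.BytesIO(
    "AspectName;SensorType;DataType;Unit;\n"
    "breweryAspect;temperature;DOUBLE;\u00b0C;\n".encode("windows-1252")
)
document = delimited_bytes_to_document(stream.read())  # streamToBytes analogue
```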

Testing the Integration (optional)

  1. Click "Test" at the top right.
  2. Enter a name for the output document, e.g. "MindSphereAsset_Creation.csv".
  3. Click "Run".

The integration is executed in real time and the results are displayed on the "Test Results" panel. Make sure the success message is shown and verify that the data from the CSV file is displayed in the document.

Integrating the AWS Cloud into MindSphere

The following steps show how to set up an automatic pipeline that creates an asset in MindSphere with an aspect containing the variables given in the CSV file. This pipeline automatically creates the required aspect type, asset type and asset.

Creating an Account for Connecting to MindSphere

If you already have an account for the "Siemens MindSphere" application in MindConnect Integration, jump to Adding MindSphere Operations.

  1. Open the application "Siemens MindSphere" in MindConnect Integration.
  2. Click on "Add new account".
  3. Enter an account name, e.g. "MindSphere_AWS_Integration".
  4. Leave the other settings as default and click "Save".

MindConnect Integration can now import data into your MindSphere tenant.

Adding MindSphere Operations

The pipeline requires operations for the "Siemens MindSphere" application to create assets, asset types and aspect types, as well as read aspect types.

Create these operations using the configuration details listed below. The required steps are the same as described above.

Custom Name (Step 1)     Operation (Step 2)
"CreateAsset_AWS"        "Create An Asset"
"CreateAssetType_AWS"    "Create Or Update An Asset Type"
"CreateAspectType_AWS"   "Create Or Update An Aspect Type"
"GetAspectType_AWS"      "Read An Aspect Type"

Setting up the Pipeline for Creating Aspect Types

This pipeline shall create a new aspect type, if an aspect type of this name does not exist yet.

  1. Add a new orchestrated integration.
  2. Set up the integration "MindSphereAspectType_AWS" as shown below using the following blocks:

    Group          Block name               Configuration
    Control Flow   try catch                -
    Applications   Siemens MindSphere 3.0   Account: "MindSphere_AWS_Integration", Operation: "CreateAspectType_AWS"
    Applications   Siemens MindSphere 3.0   Account: "MindSphere_AWS_Integration", Operation: "GetAspectType_AWS"

    CreateAspectType Pipeline

  3. Click "Save".
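The try/catch arrangement implements a create-if-absent pattern, which can be sketched as follows. The in-memory registry and the function names are hypothetical stand-ins for the MindSphere Aspect Type operations.

```python
# Sketch of the try/catch logic: attempt to create the aspect type; if
# the create call fails (e.g. because the id already exists), fall back
# to reading the existing aspect type instead. The dict "registry"
# stands in for the MindSphere API.
registry = {}

def create_aspect_type(aspect_type_id, payload):
    if aspect_type_id in registry:
        raise ValueError(f"{aspect_type_id} already exists")
    registry[aspect_type_id] = payload
    return payload

def get_aspect_type(aspect_type_id):
    return registry[aspect_type_id]

def create_or_get(aspect_type_id, payload):
    try:                 # "try" branch: CreateAspectType_AWS
        return create_aspect_type(aspect_type_id, payload)
    except ValueError:   # "catch" branch: GetAspectType_AWS
        return get_aspect_type(aspect_type_id)

first = create_or_get("mytenant.breweryAspect", {"name": "breweryAspect"})
again = create_or_get("mytenant.breweryAspect", {"name": "breweryAspect"})
```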

Configuring the Input/Output Signature

Define the input signature as shown below in order to fill the required fields for creating aspect types using input parameters.

  1. Open the "Define Input/Output Signature" dialogue of the "MindSphereAspectType_AWS" block.
  2. Create 3 input fields of type "String" and 1 input field of type "Document" as shown below:

    CreateAspectType Pipeline Signature

  3. Click on "Apply" and then "Save".

Configuring the Mapping

The first "Siemens MindSphere" block is configured to fill all required fields for creating a new aspect type. The second "Siemens MindSphere" block is configured to check if an aspect type with the given aspectTypeId already exists on the tenant.

  1. Configure the input mapping for the "Siemens MindSphere" block in the "try" section according to the figure and table below:

    CreateAspectType Pipeline Try Input

    Field        Value
    category     "dynamic"
    scope        "private"
    searchable   "true"
    length       Do not set any value
    qualitycode  "false"
  2. Configure the output mapping as shown below: CreateAspectType Pipeline Try Output

  3. Click "Finish" and then "Save".
  4. Configure the input mapping for the "Siemens MindSphere" block in the "catch" section as shown below:

    CreateAspectType Pipeline Catch Input

  5. Configure the output mapping as shown below: CreateAspectType Pipeline Catch Output

  6. Click "Finish" and then "Save".
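For orientation, the mapping above corresponds roughly to building a request body like the following from the parsed CSV rows. This is a sketch only; treat the exact JSON field names as assumptions about the Asset Management operation, not its documented schema.

```python
# Sketch: assemble an aspect-type request body from the parsed CSV
# rows, using the fixed values from the mapping table above. The JSON
# field names are assumptions for illustration.
def build_aspect_type_body(rows):
    return {
        "category": "dynamic",
        "scope": "private",
        "variables": [
            {
                "name": row["SensorType"],
                "dataType": row["DataType"],
                "unit": row["Unit"],
                "searchable": True,
                "length": None,       # "Do not set any value"
                "qualityCode": False,
            }
            for row in rows
        ],
    }

body = build_aspect_type_body([
    {"SensorType": "temperature", "DataType": "DOUBLE", "Unit": "\u00b0C"},
])
```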

Testing the Pipeline for Creating Aspect Types (optional)

Test the integration by manually providing the input values as shown below: Input details

Note

If your test fails and asks you to update the If-match header, try using a different aspect name.

Setting up the Pipeline for Creating Asset Types

  1. Add a new orchestrated integration.
  2. Set up the integration "MindSphereCreateAssetType_AWS" as shown below using the following blocks:

    Group          Block name               Configuration
    Control Flow   for each                 -
    Services       String                   Service: "concat"
    Services       String                   Service: "concat"
    Applications   Siemens MindSphere 3.0   Account: "MindSphere_AWS_Integration", Operation: "CreateAssetType_AWS"

    CreateAssetType Pipeline

  3. Click "Save".

Configuring the Input/Output Signature

Define the input signature as shown below in order to fill the required fields for creating asset types using input parameters.

  1. Open the "Define Input/Output Signature" dialogue of the "MindSphereCreateAssetType_AWS" block.
  2. Create 2 input fields of type "String" and 1 input field of type "Document" as shown below:

    CreateAssetType Pipeline Signature

  3. Click on "Apply" and then "Save".

  4. Select "/document/rows" for the input field of the "for each" block.
  5. Click "Save".

Configuring the Mapping

The "String" blocks are configured to construct a string analogous to {tenantName}.{assetTypeId}. The "Siemens MindSphere" block is configured to fill all required fields for creating an asset type.

  1. Configure the input mapping for the upper "String" block according to the figure and table below:

    CreateAssetType Pipeline String1 Input

    Field Value
    inString1 "."
  2. Configure the output mapping as shown below: CreateAssetType Pipeline String1 Output

  3. Click "Finish" and then "Save".
  4. Configure the input mapping for the lower "String" block according to the figure below: CreateAssetType Pipeline String2 Input
  5. Configure the same output mapping as in step 2.
  6. Click "Finish" and then "Save".
  7. Configure the input mapping for the "Siemens MindSphere" block as shown below:

    CreateAssetType Pipeline MindSphere Input

    Field Value
    scope "private"
    parentTypeId "core.basicasset"
  8. Configure the output mapping as shown below: CreateAssetType Pipeline MindSphere Output

  9. Click "Finish" and then "Save".
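The two chained "concat" services above can be sketched as plain string concatenation; the tenant and asset type names are illustrative values.

```python
# Sketch of the two chained "concat" services: the upper String block
# joins "." with the assetTypeId, the lower String block prepends the
# tenant name, yielding "{tenantName}.{assetTypeId}".
def concat(in_string1, in_string2):
    return in_string1 + in_string2

tenant_name = "mytenant"          # illustrative values
asset_type_id = "breweryAssetType"

with_prefix = concat(".", asset_type_id)    # upper String block
full_id = concat(tenant_name, with_prefix)  # lower String block
# full_id == "mytenant.breweryAssetType"
```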

Testing the Pipeline for Creating Asset Types (optional)

Test the integration by manually providing the input values as shown below: CreateAssetType Pipeline Test

Setting up the Pipeline for Creating Assets

This pipeline retrieves data from AWS and transfers it into MindSphere as aspect data of an asset.

  1. Switch to the tab "Develop".
  2. Add a new orchestrated integration.
  3. Set up the integration "MindSphereCreateAsset_AWS" as shown below using the following blocks:

    Group          Block name                      Configuration
    Integrations   My_S3_Bucket                    -
    Control Flow   Transform Pipeline              -
    Integrations   MindSphereAspectType_AWS        -
    Control Flow   Transform Pipeline              -
    Integrations   MindSphereCreateAssetType_AWS   -
    Services       String                          Service: "concat"
    Services       String                          Service: "concat"
    Applications   Siemens MindSphere 3.0          Account: "MindSphere_AWS_Integration", Operation: "CreateAsset_AWS"

    CreateAsset Pipeline

  4. Click "Save".

Configuring the Input/Output Signature

The pipeline receives variable definitions from a CSV file in AWS. In order to create an asset with these variables, the user must provide required parameters which cannot be read from the input file and specify the input file. The user input is defined in the input signature as shown below.

  1. Open the "Define Input/Output Signature" dialogue of the "MindSphereCreateAsset_AWS" block.
  2. Create 4 input fields of type "String" as shown below:

    CreateAsset Pipeline Input Signature

  3. Optionally, create 1 output field of type "String" named assetId.

  4. Click on "Apply" and then "Save".

Configuring the Mapping

The following configurations enable the integration to read an input file from AWS and create an aspect type with the variables provided in the file. Afterwards, the integration creates an associated asset type and instantiates it.

My_S3_Bucket_Pipeline

This block retrieves the user defined CSV file and forwards the content as document.

  1. Configure the input mapping for the "My_S3_Bucket_Pipeline" block according to the figure below:

    CreateAsset Pipeline My_S3_Bucket_Pipeline Input

  2. Configure the output mapping as shown below: CreateAsset Pipeline My_S3_Bucket_Pipeline Output

  3. Click "Finish" and then "Save".
Upper Transform Pipeline

This block reads the first entry in the document and forwards it as AspectName.

  1. Open the mapping dialogue for the upper "Transform Pipeline" block.
  2. Add a new field of type "String" named AspectName in the Pipeline Output.
  3. Configure the mapping as shown below: CreateAsset Pipeline Transform1
  4. Click "Finish" and then "Save".
MindSphereCreateAspectType_AWS

This block creates an aspect type using the AspectName on the user defined tenant and forwards its configuration as document.

  1. Configure the input mapping for the "MindSphereCreateAspectType_AWS" block according to the figure below:

    CreateAsset Pipeline Aspect Type Input

  2. Click "Next", "Finish" and then "Save".

Lower Transform Pipeline

This block reads the aspectTypeId and name of the aspect type and forwards them as aspectTypeIdArray.

  1. Open the mapping dialogue for the lower "Transform Pipeline" block.
  2. Add a new field of type "Document" and name aspectTypeIdArray in the Pipeline Output.
  3. Add 2 fields of type "String" named aspectTypeId and name inside this document.
  4. Configure the mapping as shown below: CreateAsset Pipeline Transform2
  5. Click "Finish" and then "Save".
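The transformation above can be sketched as follows. The field names aspectTypeId and name follow the steps above; the shape of the incoming aspect-type document is an assumption for illustration.

```python
# Sketch of the lower Transform Pipeline: pick the aspectTypeId and
# name out of the aspect-type document and forward them as a
# single-entry aspectTypeIdArray.
def to_aspect_type_id_array(aspect_type_doc):
    return [
        {
            "aspectTypeId": aspect_type_doc["aspectTypeId"],
            "name": aspect_type_doc["name"],
        }
    ]

array = to_aspect_type_id_array(
    {"aspectTypeId": "mytenant.breweryAspect", "name": "breweryAspect"}
)
```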
MindSphereCreateAssetType_AWS

This block creates an asset type using the aspectTypeIdArray on the user defined tenant and forwards its details as document. The assetTypeId is given by the input signature.

  1. Configure the input mapping for the "MindSphereCreateAssetType_AWS" block according to the figure below:

    CreateAsset Pipeline Asset Type Input

  2. Click "Next", "Finish" and then "Save".

Upper String

This block creates a string starting with "." followed by the assetTypeId and forwards it as assetTypeIdWithPrefix.

  1. Configure the input mapping for the upper "String" block according to the figure and table below: CreateAsset Pipeline String1 Input

    Field Value
    inString1 "."
  2. Create a new field of type "String" named assetTypeIdWithPrefix in the "Pipeline Output".

  3. Configure the output mapping according to the figure below: CreateAsset Pipeline String1 Output
  4. Click "Finish" and then "Save".
Lower String

This block creates a string starting with tenantPrefix followed by assetTypeIdWithPrefix and forwards it as assetTypeIdWithPrefix.

  1. Configure the input mapping for the lower "String" block according to the figure below: CreateAsset Pipeline String2 Input
  2. Configure the same output mapping as for the upper "String" block.
  3. Click "Finish" and then "Save".
CreateAsset_AWS

This block creates an asset using the assetTypeIdWithPrefix on the user defined tenant. The assetName is given by the input signature and the parentId is set to a fixed value.

  1. Configure the input mapping for the "Siemens MindSphere" block according to the figure and table below: CreateAsset

    Field     Value
    parentId  Enter the ID of the desired parent asset

    Note: Open the desired parent asset in the Asset Manager and take the parentId from the URL.

  2. Click "Next", "Finish" and then "Save".
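The body assembled by this final block can be sketched as follows: name comes from the input signature, the type id is the assetTypeIdWithPrefix built by the String blocks, and parentId is the fixed value taken from the Asset Manager URL. The JSON field names and placeholder values are assumptions for illustration.

```python
# Sketch of the asset-creation body for the "Create An Asset"
# operation. Field names are assumptions, not the documented schema.
def build_asset_body(asset_name, asset_type_id_with_prefix, parent_id):
    return {
        "name": asset_name,
        "typeId": asset_type_id_with_prefix,
        "parentId": parent_id,
    }

asset = build_asset_body(
    "breweryAsset", "mytenant.breweryAssetType", "<parent-asset-id>"
)
```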

Testing the Pipeline for Creating Assets (optional)

Test the integration by manually providing the input values as shown below: Input Details

In addition to verifying the results in the "Test Results" window, you can open the Asset Manager from the MindSphere Launchpad to inspect the created asset.



Except where otherwise noted, content on this site is licensed under the MindSphere Development License Agreement.