
Managing Docker Models

Docker Image Overview and Constraints

In addition to Python (2 and 3) and R models developed in Jupyter Notebook, PrL supports Docker models. Docker models have the advantage of being able to run any custom code, written in any programming language, on whichever Linux distribution the user prefers. The default operating system for all other model types is the Amazon Linux AMI distribution. There are a few constraints related to the data ingestion and persistence functions in the Docker image setup. The Docker image persisted in Model Management:

  • Consumes data from the /data/input folder
  • Persists data in the /data/output folder

These folders support automated execution by the Job Manager service: Job Manager retrieves the input data for a job and places it in /data/input, and it collects whatever the model writes to /data/output and places it in the designated persistence service location, such as Data Exchange, Predictive Learning Storage, or Integrated Data Lake (IDL).
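
For example, the script that runs inside the container can read every file from /data/input and write its results to /data/output. The sketch below is illustrative only; the file name handling and the "processing" step are hypothetical placeholders for your own model logic:

# my_python_script.py -- minimal sketch of a containerized model script.
import os

INPUT_DIR = "/data/input"    # Job Manager stages job inputs here
OUTPUT_DIR = "/data/output"  # Job Manager collects results from here

for name in os.listdir(INPUT_DIR):
    with open(os.path.join(INPUT_DIR, name)) as f:
        content = f.read()
    result = content.upper()  # placeholder for your actual model logic
    with open(os.path.join(OUTPUT_DIR, name), "w") as f:
        f.write(result)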

About Creating a Docker Image to Use in Model Execution

If you want to create your own Docker image to hold your code or model, you will need, at a minimum, a Dockerfile. Usually, you 'inherit' from one of the public images that provides minimal support for your code or model. Here's a short example:

# Inherit from a minimal public Python base image
ARG BASE_CONTAINER=python:3.9-slim-bullseye
FROM $BASE_CONTAINER

USER root

# Scratch space for temporary files
RUN ["mkdir", "/tmp/input"]
RUN ["mkdir", "/tmp/output"]
RUN chmod -R 777 /tmp

# Folders Job Manager uses to stage job inputs and collect outputs
RUN ["mkdir", "/data"]
RUN ["mkdir", "/data/input"]
RUN ["mkdir", "/data/output"]
RUN chmod -R 777 /data

# Folders for IoT data inputs, outputs, and datasets
RUN ["mkdir", "/iot_data"]
RUN ["mkdir", "/iot_data/input"]
RUN ["mkdir", "/iot_data/output"]
RUN ["mkdir", "/iot_data/datasets"]
RUN chmod -R 777 /iot_data

# Folder for Predictive Learning Storage data
RUN ["mkdir", "/prl_storage_data"]
RUN chmod -R 777 /prl_storage_data

# Install any command-line tools and libraries your code needs
RUN pip install awscli
RUN apt-get update
RUN apt-get install -y wget curl jq

# Copy your code into the image and set its entry point
COPY . .

ENTRYPOINT ["python3", "./my_python_script.py"]

The 'RUN ["mkdir", ...]' lines create the folders Job Manager uses to copy in input files and to collect results. If you do not pass any inputs to your container, or collect any outputs from it, when the image is executed as a job, these folders are not needed. If you want your Docker image to contain additional libraries, you can install them here using 'RUN apt-get install ...'; such commands depend on your base image's operating system and should be adapted accordingly. For detailed instructions on how to design your Dockerfile, see the Dockerfile reference. For more on building your Docker image, see Docker build.
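
For example, assuming the Dockerfile above sits in the current directory, building and tagging the image might look like this ("my-model" and "1.0" are placeholder name and tag values):

# Build the image from the Dockerfile in the current directory
docker build -t my-model:1.0 .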

Persisting a Docker Image in Model Management

Follow these steps to create a new Docker model:

  1. Click the "New Version" button on the Manage Analytical Models page. The Create New Version pop-up window opens.
  2. Select "Docker Image" from the Type drop-down list. The system displays two Docker-relevant controls: a "Generate Token" button and a text field.
  3. Enter a complete Docker image repository path and tag version in the text field.
  4. Do not click the "Generate Token" button yet. First read the information below, then proceed with the remaining steps.

Generate Token Option

Before a Docker Image can be associated with a Model, it must be brought into the Predictive Learning (PrL) service, which requires that you push the Docker Image to the PrL service repository.

Important: Time Constraints Involved with the Generate Token Option

Please note the following time constraints before proceeding:

  • If you fail to complete all of the steps involved in generating a token within the two-hour and 24-hour windows described below, you will need to start the process over. In addition, once the two-hour window closes, you can no longer update the uploaded Docker image.
  • Within 24 hours of generating the token, you must link the Docker image, tag, and repository path to the model. If this is not completed within that time frame, the repository is automatically deleted.

Finish Generating a Token for a Docker Image

Follow the steps below and complete them within two hours of generating your token:

  1. Click the "Generate Token" button. The service generates a unique repository.
  2. Create a tag for your Docker image to serve as a reference to the upload version.
  3. Log in and push the tagged Docker image into the service's repository using the Docker push command, as in the sketch below.
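
As an illustrative sketch, the tag-and-push sequence might look like the following. The image name, repository path, and credentials are placeholders; use the exact values and login command provided when you generate the token:

# Tag the local image with the repository path generated by the service
docker tag my-model:1.0 <generated-repository-path>:1.0

# Authenticate against the service's registry with the generated token,
# then push the tagged image
echo <generated-token> | docker login -u <username> --password-stdin <registry-host>
docker push <generated-repository-path>:1.0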

Next Steps

Once you upload your Docker image, you can:

  • Associate the Docker image with your model by referencing the correct repository and tag in Managing Models.
  • Safely close the dialog window between uploading the Docker image and creating the Model Management entry.

FMU & ONNX Model Execution Support

Predictive Learning supports the execution of FMU and ONNX models:

  1. This feature is available only to Predictive Learning's internal customer, the Product Twin application.
  2. The Product Twin application stores its FMU and ONNX models in Predictive Learning Model Management.
  3. The Predictive Learning application provides the execution environment for running these FMU and ONNX models.
  4. The Product Twin application triggers a Predictive Learning job by providing the FMU or ONNX model details in the job request.
  5. Predictive Learning has dedicated runners for executing the Product Twin FMU and ONNX models stored in Model Management.
  6. For more on FMU and ONNX model development, refer to the Product Twin documentation.

Advantages

  1. FMU and ONNX model executions run in a container-based environment.
  2. Initialization and setup of the container-based environment is fast.
  3. Executions start quickly and finish within the model's expected run time.

Disadvantages

  1. A maximum execution time of 15 minutes is supported for FMU and ONNX model executions.
  2. The execution environment supports a maximum of 4 CPUs and 16 GB of memory.
  3. Currently, this feature is available only from the Product Twin application.

Last update: December 20, 2023

Except where otherwise noted, content on this site is licensed under the Development License Agreement.