Prepare a model for deployment.

Model preparation

Once the model is trained and pushed to the git repo, you need to add two files to your main directory: the main predictor file and requirements.txt. These files are necessary to recreate the working environment and establish a proper run configuration for your model.

├── ...
└── requirements.txt

The main predictor file is responsible for running the model. Input and output data are always sent as JSON in the payload variable.

class PythonPredictor:
    def __init__(self, config):
        """This method is required. It is called once before the API
        becomes available. It performs setup such as downloading /
        initializing the model.

        :param config (required): Dictionary passed from API configuration.
        """
        pass

    def predict(self, payload):
        """This method is required. It is called once per request.
        It preprocesses the request payload, runs inference, and
        postprocesses the inference output.

        :param payload (optional): The request payload.
        :returns: Prediction or a batch of predictions.
        """
        pass

It is good practice to load or download any additional files not stored in the repo (e.g. model weights) in the __init__ method, so that the work is done once at startup rather than on every request.

The input to your model can be sent in different formats, such as JSON and starlette.datastructures.FormData. Go to the API input types section to explore each one in detail.
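As a sketch of how the input types differ in practice, a predict method can branch on the payload's Python type. The FormData branch is left as a comment because it requires starlette and a running server; the field names in it are purely illustrative:

```python
def predict(payload):
    # A JSON body arrives already parsed into Python objects.
    if isinstance(payload, (dict, list)):
        return {"received": payload}
    # A form upload would arrive as starlette.datastructures.FormData:
    # if isinstance(payload, FormData):
    #     return {"filename": payload["file"].filename}
    return "unsupported input type"
```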

The output of your model (the predict method) can also be returned in three different formats: a JSON-parseable object, a string, or a bytes object. Explore them in detail in the API response section.
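The three return formats can be illustrated with a toy predict method; the "format" key and the hard-coded prediction are assumptions made for this sketch:

```python
import json


def predict(payload):
    fmt = payload.get("format", "json")
    if fmt == "json":
        return {"label": "cat", "score": 0.97}    # JSON-parseable object
    if fmt == "text":
        return "label=cat score=0.97"             # string
    return json.dumps({"label": "cat"}).encode()  # bytes object
```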


requirements.txt is the file needed to recreate the environment for the model. It is important to pin the exact version of each library.

You can generate this file automatically by running pip freeze > requirements.txt in your working environment.
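For example, a requirements.txt with exact versions pinned might look like this (the packages and version numbers below are purely illustrative):

```text
numpy==1.24.4
torch==2.1.2
scikit-learn==1.3.2
```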


Sample models

If you are still not comfortable with the approach above, go to our Sample Model Repository to explore more models.

More options

With the Syndicai Platform, you can also deploy models using Cortex and Seldon engines.