Azure-docs: Updating an ML Service deployment

Created on 11 Jul 2019 · 13 comments · Source: MicrosoftDocs/azure-docs

The following code snippet from https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#update does not work with SDK version 1.0.48:
service.update(models = [new_model])

It gives the following error:
Error, both "models" and "inference_config" inputs must be provided in order to update any of the parameters in inference_config.'
Also, the update() method ignores the image parameter. I wanted to set the dependencies parameter, which is available on ImageConfig but not on InferenceConfig. This is a problem: if I provide dependencies with the first deployment, I cannot update them later. In my view, either the update() method should honor the image parameter, or InferenceConfig should be extended with a dependencies parameter.

It is also not clear what the difference is between InferenceConfig and ImageConfig.


Document Details

⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.

Pri1 assigned-to-author corsubsvc doc-bug machine-learninsvc triaged


All 13 comments

@n-a-sz Thank you for your feedback. I have assigned it to the author for a proper update. ^^

@jpe316 Hi Jordan, could you please take a look? Thank you.

Any update on this?

I would add something here:
There is NO way to introduce or update dependencies with the Azure CLI.
My deployment is based on the CLI, and I do not want to switch to the Python SDK.
What should I do?

Following up on this.

In order to update a service, you need to provide both a collection of models and an inference_config.
We have removed the 'dependency' parameter from InferenceConfig in favor of taking files from a specified source directory based on customer feedback.
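A minimal sketch of that call pattern, assuming an existing workspace, registered model, and deployed service (all names and InferenceConfig arguments below are placeholders, not a definitive recipe):

```python
from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import Webservice

# Placeholder names -- substitute your own workspace, model, and service.
ws = Workspace.from_config()
new_model = Model(ws, name="my-model")

# Recreate the inference config; source directory and entry script
# are assumptions for illustration.
inference_config = InferenceConfig(source_directory="./src",
                                   entry_script="score.py")

service = Webservice(ws, name="my-service")
# Per the error message above, models and inference_config must be
# passed together when updating anything in the inference config.
service.update(models=[new_model], inference_config=inference_config)
```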

We are slowly phasing out ImageConfig in favor of InferenceConfig; please move to using InferenceConfig going forward.

@dmitry-lif you should be able to use source_directory from the CLI also.

Thank you. Pushing auxiliary source code along with models sounds reasonable.
Could you please provide an example of how to push a source directory plus multiple models using the CLI?

@jpe316
Ok, thanks for the update!
Will InferenceConfig support additional dependencies like ImageConfig does? In my deployment script, I add in-house Python packages as wheel files to the image and reference the wheel files in the conda environment file, so that I get a proper Python environment.
Currently, I can only update my deployment by removing it and creating a new one.
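For reference, a conda environment file that pulls in an in-house wheel might look like the sketch below (file names, package names, and paths are hypothetical):

```yaml
# myenv.yml -- hypothetical conda environment file
name: myenv
dependencies:
  - python=3.6
  - pip:
      # needed by Azure ML scoring web services
      - azureml-defaults
      # in-house package shipped as a local wheel (hypothetical path/name)
      - ./wheels/inhouse_pkg-0.1.0-py3-none-any.whl
```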

@dmitry-lif Sorry for the late response. To register multiple models, put them in a directory and use az ml model register -p ./models -n sentiment -w myworkspace -g myresourcegroup to add them all as one registration. See https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-existing-model for an example of this.

The source directory (score.py and the other files needed to run the models) is specified in the inference config YAML file. You can find the format/entries for it documented in https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where.
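As a sketch, an inference config file for the CLI looked roughly like this at the time (the key names here are my reading of the documented schema, so treat them as assumptions and verify against the linked page):

```json
{
    "entryScript": "score.py",
    "sourceDirectory": "./src",
    "runtime": "python",
    "condaFile": "myenv.yml"
}
```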

@n-a-sz Sorry for the late response. Investigating how this can be done with the InferenceConfig path now.

@n-a-sz Using a private .whl file with InferenceConfig isn't fully documented today. It relies on the Environment object, and the docs for that are currently being written. Environments are a way to register and version the environments needed for training or deployment. For example, you might have a standard training environment that you reuse for multiple training runs. Instead of declaring the environment every time, you would just say "use this named environment from my workspace."

The Environment object is live in the most recent SDK, so it can be used today. It just might take a few more days before the docs on it are finalized. Here's the pattern:

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# ws is your Workspace object
# Upload the .whl file to your workspace's storage and get back a URL
whl_url = Environment.add_private_pip_wheel(ws, "./sample-0.1.0-py2.py3-none-any.whl", exist_ok=True)
# Load or build the conda dependencies
conda_dep = CondaDependencies(conda_dependencies_file_path="./myenv.yml")
# Add the wheel URL as a pip package
conda_dep.add_pip_package(whl_url)
# Create an environment and attach the conda dependencies to it
myenv = Environment(name="myenv")
myenv.python.conda_dependencies = conda_dep

from azureml.core.model import InferenceConfig

# Use the environment in the InferenceConfig
inference_config = InferenceConfig(source_directory="./test",
                                   entry_script="score.py",
                                   environment=myenv)

# deploy as normal using Model.deploy() (not shown)

To update a deployment that was made with InferenceConfig, you can call the update() method and specify a new/updated InferenceConfig. For example, service.update(inference_config=new_config).

I'll update the deployment docs to talk about this, and publish the update once the environment docs go live.

Hi @n-a-sz Environments docs have been published. You can find a section on private wheels at https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-use-environments.

please-close

Hi @Blackmist This looks cool, thanks for updating me on this!

Hello,
I have the same problem. I want to programmatically update my webservice that was built with the visual designer. I would like to use the python SDK.
I am able to retrain and register the model from a jupyter notebook.

My problem is also that I get the same error for "service.update(models = [new_model])"

=> Error, both "models" and "inference_config" inputs must be provided in order to update any of the parameters in inference_config.'

Yet to create an InferenceConfig I need an entry script (score.py) that knows how to handle my model. When I created the model in the designer and deployed the web service, everything was taken care of automatically. But now I don't know the code to load and use the model.

So my question is: how can I update just the model while keeping the same generic score.py entry script? Is there an easy way where I only need to specify the new model? Or do I have to supply my own file?

Thanks for any help
Philipp

Hi @philippschaefer4 Thanks for asking about this. Unfortunately, using the SDK to update a web service deployed with the designer is not a supported scenario at this time. I'll add a note to the documentation to call out this limitation.
