https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-run-batch-predictions#build-and-run-the-batch-inference-pipeline
-> The snippet imports `DEFAULT_GPU_IMAGE` but then assigns `DEFAULT_CPU_IMAGE` to `base_image`; `DEFAULT_GPU_IMAGE` is the correct value for this GPU batch-scoring example.
[Now (wrong)]

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_GPU_IMAGE

batch_conda_deps = CondaDependencies.create(pip_packages=["tensorflow==1.13.1", "pillow"])
batch_env = Environment(name="batch_environment")
batch_env.python.conda_dependencies = batch_conda_deps
batch_env.docker.enabled = True
batch_env.docker.base_image = DEFAULT_CPU_IMAGE
batch_env.spark.precache_packages = False
```
[Correct]

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_GPU_IMAGE

batch_conda_deps = CondaDependencies.create(pip_packages=["tensorflow==1.13.1", "pillow"])
batch_env = Environment(name="batch_environment")
batch_env.python.conda_dependencies = batch_conda_deps
batch_env.docker.enabled = True
batch_env.docker.base_image = DEFAULT_GPU_IMAGE
batch_env.spark.precache_packages = False
```
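For context, this environment is what the batch inference step consumes. A minimal, non-runnable sketch of how `batch_env` might be wired into a `ParallelRunConfig` (the import path, the `batch_score.py` entry script, and the `compute_target` are placeholders/assumptions, not taken from the linked doc, and the exact API may differ by SDK version):

```python
# Hypothetical sketch: requires an Azure ML workspace, an AmlCompute
# target, and a scoring script; none of these are defined here.
from azureml.pipeline.steps import ParallelRunConfig  # path varies by SDK version

parallel_run_config = ParallelRunConfig(
    source_directory=".",
    entry_script="batch_score.py",   # placeholder scoring script
    mini_batch_size="5",
    error_threshold=10,
    output_action="append_row",
    environment=batch_env,           # the GPU-base-image environment above
    compute_target=compute_target,   # placeholder GPU AmlCompute target
    node_count=2,
)
```

Since the environment uses the GPU base image, the `compute_target` here would need to be a GPU SKU for the image choice to matter.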
⚠Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
@damochiz Thanks for the feedback. We are investigating the issue and will update you shortly.
@damochiz The change has now been submitted, and the doc should be updated within the next 24 hours.