Azure-docs: Azure Face Detect API, how to change the URL picture to a local picture?

Created on 4 Jun 2018 · 18 comments · Source: MicrosoftDocs/azure-docs

This issue is created by @dapsjj from https://github.com/MicrosoftDocs/feedback/issues/410

I am using the Azure Face Detect API, but in the quickstart image_url points to a picture on the network. I want to use my local picture instead. How do I change the URL picture to a local picture?
Can you give me code? I use Python 3.6.
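In short, the answers below converge on this: for a remote picture you POST a JSON body with a url field, and for a local picture you POST the raw file bytes with the Content-Type header set to application/octet-stream. A minimal sketch of both variants (the key, example URL, and file path are placeholders, not values from this thread):

import requests

subscription_key = '<your Face API key>'   # placeholder
face_api_url = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'

# Remote picture: JSON body with a "url" field.
response = requests.post(
    face_api_url,
    headers={'Ocp-Apim-Subscription-Key': subscription_key},
    json={'url': 'https://example.com/picture.jpg'})       # placeholder URL
print(response.json())

# Local picture: raw bytes with Content-Type: application/octet-stream.
with open(r'D:\picture.jpg', 'rb') as image_file:           # placeholder path
    image_data = image_file.read()
response = requests.post(
    face_api_url,
    headers={'Ocp-Apim-Subscription-Key': subscription_key,
             'Content-Type': 'application/octet-stream'},
    data=image_data)
print(response.json())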




All 18 comments

@Powerhelmsman Thanks for the feedback! I have assigned the issue to the content author to investigate further and update the document as appropriate.

@noellelacharite Hi, this is a product question / document-enhancement issue. Can you please check whether you can help, and update the document as necessary? Thanks a lot!

@dapsjj for awareness.

import requests

headers = {'Content-Type': 'application/octet-stream',
           'Ocp-Apim-Subscription-Key': '<KEY>'}
face_api_url = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'

data = open(r'C:\img.jpg', 'rb')
response = requests.post(face_api_url, headers=headers, data=data)
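A wrong key or region comes back as a JSON error body rather than a face list, so it is worth checking the status code before parsing. A small follow-up sketch reusing the response from the snippet above:

response.raise_for_status()   # raises on 401/403, so a bad key or region is obvious
faces = response.json()       # on success: a list of detected faces
print(faces)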

@sonfiree Sir, can you give me the whole code?
I modified my code like this:


import requests
import matplotlib.pyplot as plt
from PIL import Image
from matplotlib import patches
from io import BytesIO


subscription_key = "ZZZZZZZZZZZZZZZZZ"
assert subscription_key
face_api_url = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'
# headers = {'Ocp-Apim-Subscription-Key': subscription_key}
headers = {'Content-Type': 'application/octet-stream', 'Ocp-Apim-Subscription-Key': subscription_key}
params = {
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,' +
    'emotion,hair,makeup,occlusion,accessories,blur,exposure,noise'
}
data = open(r'D:/liuneng1.jpg', 'rb')
response = requests.post(face_api_url, headers=headers, data=data, params=params)
faces = response.json()
image = Image.open(BytesIO(response.content))
plt.figure(figsize=(8, 8))
ax = plt.imshow(image, alpha=0.6)
for face in faces:
    fr = face["faceRectangle"]
    fa = face["faceAttributes"]
    origin = (fr["left"], fr["top"])
    p = patches.Rectangle(
        origin, fr["width"], fr["height"], fill=False, linewidth=2, color='b')
    ax.axes.add_patch(p)
    plt.text(origin[0], origin[1], "%s, %d"%(fa["gender"].capitalize(), fa["age"]),
             fontsize=20, weight="bold", va="bottom")
plt.axis("off")
plt.savefig('D:/test.jpg')
plt.show()

But the error is:

Traceback (most recent call last):
  File "E:/test_opencv/test_120AzureFaceRecognization.py", line 22, in <module>
    image = Image.open(BytesIO(response.content))
  File "E:\Anaconda3\lib\site-packages\PIL\Image.py", line 2519, in open
    % (filename if filename else fp))
OSError: cannot identify image file <_io.BytesIO object at 0x000002021DC8E4C0>
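The traceback points at the root cause: response.content here holds the JSON face list returned by the API, not image bytes, so PIL cannot open it. The working code later in the thread resolves this by opening the local file for display and using response.json() only for the face data; a minimal sketch of that step, reusing the names from the code above:

faces = response.json()                   # the detection results (JSON), not an image
image = Image.open(r'D:/liuneng1.jpg')    # open the local file itself for display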

@dapsjj
It is all written in your error message.
Bad:
data = open(r'D:/liuneng1.jpg', 'rb')
Good:
data = open('file:///D:/liuneng1.jpg', 'rb')

@sonfiree When I modify my code as you told me, the error message is:

Traceback (most recent call last):
    data = open('file:///D:/liuneng1.jpg', 'rb')
OSError: [Errno 22] Invalid argument: 'file:///D:/liuneng1.jpg'

@dapsjj Please post the complete error output from Debug.

@sonfiree
The complete error output from Debug:

E:\Anaconda3\python.exe E:/test_opencv/test_120AzureFaceRecognization.py
Traceback (most recent call last):
  File "E:/test_opencv/test_120AzureFaceRecognization.py", line 20, in <module>
    data = open('file:///D:/liuneng1.jpg', 'rb')
OSError: [Errno 22] Invalid argument: 'file:///D:/liuneng1.jpg'

Process finished with exit code 1

@dapsjj
data = open('D:\liuneng1.jpg', 'rb')

@sonfiree Sorry, the error message has not changed.

@dapsjj
For this to work, edit your code like this:

import httplib, urllib, base64, json, requests

headers = {
    # Request headers
    'Content-Type': 'application/octet-stream',   # this should be the content type
    'Ocp-Apim-Subscription-Key': 'Your key',
}

params = {
    # Request parameters
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': 'age,gender'  
}


data = open('D:\\test.jpg', 'rb').read()  

face_api_url = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'
response = requests.post(face_api_url, params=params, headers=headers, data=data)
faces = response.json()
print faces

Let me know if it works

@pranavdheer
Can you give me Python 3.x code?

The code below should work:

import requests
from io import BytesIO
from PIL import Image, ImageDraw

def draw_face(img):

    subscription_key = 'yourkey'  # Replace with a valid subscription key (keeping the quotes in place).
    BASE_URL = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'  # Replace with your regional Base URL
    headers = {
        # Request headers
        'Content-Type': 'application/octet-stream',   # binary image upload
        'Ocp-Apim-Subscription-Key': subscription_key,
    }
    response = requests.post(BASE_URL, headers=headers, data=img)
    faces = response.json()
    print(faces)

    def getRectangle(faceDictionary):
        # Convert the API's left/top/width/height rectangle into the
        # ((left, top), (right, bottom)) form that PIL's draw.rectangle expects.
        rect = faceDictionary['faceRectangle']
        left = rect['left']
        top = rect['top']
        right = left + rect['width']
        bottom = top + rect['height']
        return ((left, top), (right, bottom))

    output_image = Image.open(BytesIO(img))
    #For each face returned use the face rectangle and draw a red box.
    draw = ImageDraw.Draw(output_image)
    for face in faces:
        draw.rectangle(getRectangle(face), outline='red')
    return output_image

image_path = "path_to_image"

image_data = open(image_path, "rb").read()

image = draw_face(image_data)
image.show()
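If you want to keep the annotated picture rather than only display it, PIL's Image.save can write it out; for example (the output filename below is just an illustration):

image.save('faces_annotated.jpg')   # write the annotated copy to disk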

@james-tn
Sorry, sir!
The code format is too messy.

The following code works for me when specifying a local image path:

import requests
import matplotlib.pyplot as plt
from PIL import Image
from matplotlib import patches
from io import BytesIO
import os

# If you are using a Jupyter notebook, uncomment the following line.
#%matplotlib inline

# Replace <Subscription Key> with your valid subscription key.
subscription_key = "Type_in_Subscription_key_please" #DSFace API

# Set image path from local file.
image_path = os.path.join('Specify the image path on your local machine')

assert subscription_key

# You must use the same region in your REST call as the one where you got
# your subscription keys. For example, if you got your subscription keys
# from westcentralus, replace "westus" in the URI below with "westcentralus".
#
# Free trial subscription keys are generated in the westcentralus region.
# If you use a free trial subscription key, change the region below to
# westcentralus.
face_api_url = 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect'

image_data = open(image_path, "rb")

headers = {'Content-Type': 'application/octet-stream',
           'Ocp-Apim-Subscription-Key': subscription_key}
params = {
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,' +
    'emotion,hair,makeup,occlusion,accessories,blur,exposure,noise'
}

response = requests.post(face_api_url, params=params, headers=headers, data=image_data)
response.raise_for_status()
faces = response.json()

# Display the original image and overlay it with the face information.
image_read = open(image_path, "rb").read()
image = Image.open(BytesIO(image_read))

plt.figure(figsize=(8, 8))
ax = plt.imshow(image, alpha=1)
for face in faces:
    fr = face["faceRectangle"]
    fa = face["faceAttributes"]
    origin = (fr["left"], fr["top"])
    p = patches.Rectangle(
        origin, fr["width"], fr["height"], fill=False, linewidth=2, color='b')
    ax.axes.add_patch(p)
    plt.text(origin[0], origin[1], "%s, %d"%(fa["gender"].capitalize(), fa["age"]),
             fontsize=20, weight="bold", va="bottom")
_ = plt.axis("off")
plt.show()

print(faces)
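One small refinement, if you prefer not to leave the file handle open: read the bytes once inside a with block and reuse them for both the request and the display. A sketch using the same names as above:

with open(image_path, 'rb') as f:
    image_data = f.read()

response = requests.post(face_api_url, params=params, headers=headers, data=image_data)
response.raise_for_status()
faces = response.json()

image = Image.open(BytesIO(image_data))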

@YutongTie-MSFT please reassign this issue to @PatrickFarley

@YutongTie-MSFT This is not a doc bug but a support issue. Please reassign this issue to support. Thank you.

This is indeed a support issue. In any case, the conversation ended months ago, so I think we can consider it resolved.

please-close

