Posting An Image from Webcam to Azure Face Api

Asked by 广开言路 on 2021-01-06 19:12

I am trying to upload an image that I get from my webcam to the Microsoft Azure Face API. I get the image from canvas.toDataURL('image/png'), which contains the data URI.

4 Answers
  • 2021-01-06 19:45

    So I finally got the answer: send the image as a Blob object. First grab the image from the canvas with:

    let data = canvas.toDataURL('image/jpeg');
    

    Afterwards, you can reformat it to a blob data object by running:

    fetch(data)
      .then(res => res.blob())
      .then(blobData => {
        // attach blobData as the body of the POST request
      });
    

    You will also need to switch the Content-Type of the POST request to "application/octet-stream".
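The round trip above can be sketched end to end. This is a minimal sketch, assuming Node 18+ or any browser where `fetch` and `Blob` are globals; the tiny 1×1 PNG data URI stands in for `canvas.toDataURL()` output, and the Face API request shape in the trailing comment is illustrative:

```javascript
// A 1x1 transparent PNG data URI, standing in for canvas.toDataURL('image/png').
const dataUri = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg==';

// fetch() can resolve data: URIs directly, which is what makes this trick work.
async function dataUriToBlob(uri) {
  const res = await fetch(uri);
  return res.blob();
}

// The resulting blob would then be POSTed with the octet-stream content type,
// e.g. (illustrative, not runnable without a real endpoint and key):
// fetch(FACE_API_URL, {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/octet-stream',
//     'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY,
//   },
//   body: blob,
// });
```

The blob keeps the MIME type from the data URI, so no manual parsing of the base64 payload is needed.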

  • 2021-01-06 19:46

    To save someone else six hours, here is my working code. I hope it helps.

    Tools

    • React
    • Typescript
    • React-webcam
    • Mac OS
    • Axios

    Code

    index.tsx

    Constants and ref

    /**
     * Constants
     */
    const videoConstraints = {
      width: 1280,
      height: 720,
      facingMode: 'user',
    };
    /**
     * Refs
     */
    const webcamRef = React.useRef<Webcam>(null);
    

    Callback function

    const capture = React.useCallback(() => {
      const base64Str = webcamRef.current!.getScreenshot() || '';
      const s = base64Str.split(',');
      const blob = b64toBlob(s[1]);
      callCognitiveApi(blob);
    }, [webcamRef]);
    

    In render

    <Webcam audio={false} ref={webcamRef} screenshotFormat="image/jpeg" videoConstraints={videoConstraints} />
    <button onClick={capture}>Capture photo</button>
    

    b64toBlob

    Thanks to creating-a-blob-from-a-base64-string-in-javascript

    export const b64toBlob = (b64DataStr: string, contentType = '', sliceSize = 512) => {
      const byteCharacters = atob(b64DataStr);
      const byteArrays = [];
    
      for (let offset = 0; offset < byteCharacters.length; offset += sliceSize) {
        const slice = byteCharacters.slice(offset, offset + sliceSize);
    
        const byteNumbers = new Array(slice.length);
        for (let i = 0; i < slice.length; i++) {
          byteNumbers[i] = slice.charCodeAt(i);
        }
    
        const byteArray = new Uint8Array(byteNumbers);
        byteArrays.push(byteArray);
      }
    
      const blob = new Blob(byteArrays, { type: contentType });
      return blob;
    };
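A quick way to sanity-check `b64toBlob` is to feed it a known base64 string and inspect the resulting blob. The snippet below re-declares the function in condensed form (without the slicing, which only matters for performance on large inputs) so it stands alone; it assumes Node 18+, where `atob`, `btoa`, and `Blob` are globals:

```javascript
// Condensed stand-alone version of b64toBlob, for a quick check.
const b64toBlob = (b64DataStr, contentType = '') => {
  const byteCharacters = atob(b64DataStr);
  const byteNumbers = new Uint8Array(byteCharacters.length);
  for (let i = 0; i < byteCharacters.length; i++) {
    byteNumbers[i] = byteCharacters.charCodeAt(i);
  }
  return new Blob([byteNumbers], { type: contentType });
};

// 'hello' is 5 bytes, so the blob should report size 5 and the given type.
const blob = b64toBlob(btoa('hello'), 'text/plain');
console.log(blob.size, blob.type); // 5 'text/plain'
```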
    

    callCognitiveApi

    import axios from 'axios';
    
    const subscriptionKey: string = 'This_is_your_subscription_key';
    const url: string = 'https://this-is-your-site.cognitiveservices.azure.com/face/v1.0/detect';
    export const callCognitiveApi = (data: Blob) => {
      const config = {
        headers: { 'content-type': 'application/octet-stream', 'Ocp-Apim-Subscription-Key': subscriptionKey },
      };
      axios
        .post(url, data, config)
        .then((res) => {
          console.log(res);
        })
        .catch((error) => {
          console.error(error);
        });
    };
    

    Result: the detection response is logged to the browser console.

  • 2021-01-06 19:50

    To extend Dalvor's answer: this is the AJAX call that works for me:

    fetch(data)
      .then(res => res.blob())
      .then(blobData => {
        $.post({
            url: "https://westus.api.cognitive.microsoft.com/face/v1.0/detect",
            contentType: "application/octet-stream",
            headers: {
              'Ocp-Apim-Subscription-Key': '<YOUR-KEY-HERE>'
            },
            processData: false,
            data: blobData
          })
          .done(function(data) {
            $("#results").text(JSON.stringify(data));
          })
          .fail(function(err) {
            $("#results").text(JSON.stringify(err));
          });
      });
    

    Full demo code here: https://jsfiddle.net/miparnisari/b1zzpvye/

  • 2021-01-06 19:53

    Oh, you're in luck; I just (successfully!) attempted this 2 days ago.

    Sending base64-encoded JPEGs to the Face API is seriously inefficient: the ratio of encoded output bytes to input bytes is 4:3, a 33% overhead. Just send a byte array instead; it works, and the docs mention it briefly.

    Also, read the canvas as JPEG rather than PNG; PNG just wastes bandwidth for webcam footage.

        ...
    
        var dataUri = canvas.toDataURL('image/' + format);
        var data = dataUri.split(',')[1];
        var mimeType = dataUri.split(';')[0].slice(5);
    
        var bytes = window.atob(data);
        var buf = new ArrayBuffer(bytes.length);
        var byteArr = new Uint8Array(buf);
    
        for (var i = 0; i < bytes.length; i++) {
            byteArr[i] = bytes.charCodeAt(i);
        }
    
        return byteArr;
    

    Now use byteArr as your payload (data:) in $.ajax() for jQuery or iDontUnderStandHowWeGotHereAsAPeople() in any other hipster JS framework people use these days.

    The reverse-hipster way of doing it is:

    var payload = byteArr;
    
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://SERVICE_URL');
    xhr.setRequestHeader('Content-Type', 'application/octet-stream');
    // The Face API also requires the subscription key header:
    xhr.setRequestHeader('Ocp-Apim-Subscription-Key', 'YOUR_SUBSCRIPTION_KEY');
    xhr.send(payload);
    
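The 4:3 overhead figure claimed above can be checked with a one-liner. This sketch assumes Node (where `Buffer` is available; in a browser you would use `btoa` over a binary string instead):

```javascript
// Every 3 raw bytes encode to 4 base64 characters, so 3000 bytes -> 4000 chars.
const raw = new Uint8Array(3000);                 // stand-in for JPEG bytes
const b64 = Buffer.from(raw).toString('base64');  // Node API; browsers use btoa
console.log(b64.length / raw.length);             // 1.3333... (33% larger)
```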