Question
I'm trying to download an audio file (about 250KB) from Firebase Storage and send it to IBM Cloud Speech-to-Text, using Firebase Cloud Functions (Node 8). I'm using axios to send the HTTP GET request to the download URL. axios returns a stream, so I use fs.createReadStream(response) to stream the file to IBM Cloud Speech-to-Text. I don't get an error message; rather, nothing is sent to IBM Cloud Speech-to-Text.
exports.IBM_Speech_to_Text = functions.firestore.document('Users/{userID}/Pronunciation_Test/downloadURL').onUpdate((change, context) => { // this is the Firebase Cloud Functions trigger
  const fs = require('fs');
  const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
  const { IamAuthenticator } = require('ibm-watson/auth');

  const speechToText = new SpeechToTextV1({
    authenticator: new IamAuthenticator({
      apikey: 'my-api-key',
    }),
    url: 'https://api.us-south.speech-to-text.watson.cloud.ibm.com/instances/01010101',
  });

  const axios = require('axios');

  return axios({
    method: 'get',
    url: 'https://firebasestorage.googleapis.com/v0/b/languagetwo-cd94d.appspot.com/o/Users%2FbcmrZDO0X5N6kB38MqhUJZ11OzA3%2Faudio-file.flac?alt=media&token=871b9401-c6af-4c38-aaf3-889bb5952d0e', // the download URL for the audio file
    responseType: 'stream' // is this creating a stream?
  })
    .then(function (response) {
      var params = {
        audio: fs.createReadStream(response),
        contentType: 'audio/flac',
        wordAlternativesThreshold: 0.9,
        keywords: ['colorado', 'tornado', 'tornadoes'],
        keywordsThreshold: 0.5,
      };
      speechToText.recognize(params)
        .then(results => {
          console.log(JSON.stringify(results, null, 2)); // undefined
        })
        .catch(function (error) {
          console.log(error.error);
        });
    })
    .catch(function (error) {
      console.log(error.error);
    });
});
The problem is that the response from axios isn't going into fs.createReadStream(). The documentation for fs.createReadStream(path) says path <string> | <Buffer> | <URL>. response is none of those. Do I need to write response to a buffer? I tried this:
const responseBuffer = Buffer.from(response.data.pipe(fs.createWriteStream(responseBuffer)));
var params = {
  audio: fs.createReadStream(responseBuffer),
but that didn't work either. That first line is smelly...
Or should I use a stream?
exports.IBM_Speech_to_Text = functions.firestore.document('Users/{userID}/Pronunciation_Test/downloadURL').onUpdate((change, context) => {
  const fs = require('fs');
  const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
  const { IamAuthenticator } = require('ibm-watson/auth');

  const speechToText = new SpeechToTextV1({
    authenticator: new IamAuthenticator({
      apikey: 'my-api-key',
    }),
    url: 'https://api.us-south.speech-to-text.watson.cloud.ibm.com/instances/01010101',
  });

  const axios = require('axios');
  const path = require('path');

  return axios({
    method: 'get',
    url: 'https://firebasestorage.googleapis.com/v0/b/languagetwo-cd94d.appspot.com/o/Users%2FbcmrZDO0X5N6kB38MqhUJZ11OzA3%2Faudio-file.flac?alt=media&token=871b9401-c6af-4c38-aaf3-889bb5952d0e',
    responseType: 'stream'
  })
    .then(function (response) {
      response.data.pipe(createWriteStream(audiofile));
      var params = {
        audio: fs.createReadStream(audiofile),
        contentType: 'audio/flac',
        wordAlternativesThreshold: 0.9,
        keywords: ['colorado', 'tornado', 'tornadoes'],
        keywordsThreshold: 0.5,
      };
      speechToText.recognize(params)
        .then(results => {
          console.log(JSON.stringify(results, null, 2));
        })
        .catch(function (error) {
          console.log(error.error);
        });
    })
    .catch(function (error) {
      console.log(error.error);
    });
});
That doesn't work either.
Answer 1:
The problem was that I was passing response from axios when it should have been response.data. I would have figured this out in five minutes with Postman, but Postman doesn't work with streams. The other problem was, as jfriend00 said, that fs.createReadStream was unnecessary. The correct code is:
audio: response.data,
No need for these lines:
const fs = require('fs');
response.data.pipe(createWriteStream(audiofile));
Source: https://stackoverflow.com/questions/60822895/node-js-confusion-about-buffers-streams-pipe-axios-createwritestream-and-cr