How to store files with meta data in LoopBack?

Asked by 别跟我提以往 on 2020-11-28 03:05 · 7 answers · 1708 views

What I want to do: Have an HTML form with a file input inside. When a file is chosen, the file input should upload the file and get back a file id, so when the form is submitted…

7 Answers
  • 2020-11-28 03:38

    For anyone else hitting the problem with LoopBack 3 and Postman where the connection hangs on POST (or returns ERR_EMPTY_RESPONSE), as seen in some comments here: the cause is that Postman sends "application/x-www-form-urlencoded" as the Content-Type!

    Remove that header and add "Accept" = "multipart/form-data" instead. I've already filed a bug against LoopBack for this behavior.

  • 2020-11-28 03:41

    Just pass the data as a "params" object; on the server you can read it from ctx.req.query.

    For example

    On the client side:

    Upload.upload(
    {
        url: '/api/containers/container_name/upload',
        file: file,
        //Additional data with file
        params:{
         orderId: 1, 
         customerId: 1,
         otherImageInfo:[]
        }
    });
    

    On the server side:

    Suppose your storage model name is container

    Container.beforeRemote('upload', function(ctx, modelInstance, next) {
        //OUTPUTS: {orderId:1, customerId:1, otherImageInfo:[]}
        console.log(ctx.req.query); 
        next();
    })
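    Building on that hook, here is a minimal sketch of how the query metadata could be combined with the uploaded file's info into one record to save (the buildFileMeta helper is hypothetical; the orderId/customerId field names come from the client example above):

```javascript
// Hypothetical helper (not part of LoopBack): merge the uploaded file's
// info with the metadata passed in the query string, producing one
// record that can be saved to a metadata model.
function buildFileMeta(fileInfo, query) {
  return {
    name: fileInfo.name,
    container: fileInfo.container,
    orderId: query.orderId,
    customerId: query.customerId
  };
}
```

    Inside the hook it could be called as buildFileMeta(fileInfo, ctx.req.query), with the result passed to a metadata model's create().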
    
  • 2020-11-28 03:45

    Here's a full solution for storing metadata with files in LoopBack.

    You need a container model

    common/models/container.json

    {
      "name": "container",
      "base": "Model",
      "idInjection": true,
      "options": {
        "validateUpsert": true
      },
      "properties": {},
      "validations": [],
      "relations": {},
      "acls": [],
      "methods": []
    }
    

    Create the data source for your container in server/datasources.json. For example:

    ...
    "storage": {
        "name": "storage",
        "connector": "loopback-component-storage",
        "provider": "filesystem", 
        "root": "/var/www/storage",
        "maxFileSize": "52428800"
    }
    ...
    

    You'll need to set this model's data source in server/model-config.json to the loopback-component-storage data source you just defined:

    ...
    "container": {
        "dataSource": "storage",
        "public": true
    }
    ...
    

    You'll also need a file model to store the meta data and handle container calls:

    common/models/files.json

    {
      "name": "files",
      "base": "PersistedModel",
      "idInjection": true,
      "options": {
        "validateUpsert": true
      },
      "properties": {
        "name": {
          "type": "string"
        },
        "type": {
          "type": "string"
        },
        "url": {
          "type": "string",
          "required": true
        }
      },
      "validations": [],
      "relations": {},
      "acls": [],
      "methods": []
    }
    

    And now connect files with container:

    common/models/files.js

    var CONTAINERS_URL = '/api/containers/';
    module.exports = function(Files) {
    
        Files.upload = function (ctx,options,cb) {
            if(!options) options = {};
            ctx.req.params.container = 'common';
            Files.app.models.container.upload(ctx.req,ctx.result,options,function (err,fileObj) {
                if(err) {
                    cb(err);
                } else {
                    var fileInfo = fileObj.files.file[0];
                    Files.create({
                        name: fileInfo.name,
                        type: fileInfo.type,
                        container: fileInfo.container,
                        url: CONTAINERS_URL+fileInfo.container+'/download/'+fileInfo.name
                    },function (err,obj) {
                        if (err !== null) {
                            cb(err);
                        } else {
                            cb(null, obj);
                        }
                    });
                }
            });
        };
    
        Files.remoteMethod(
            'upload',
            {
                description: 'Uploads a file',
                accepts: [
                    { arg: 'ctx', type: 'object', http: { source:'context' } },
                    { arg: 'options', type: 'object', http:{ source: 'query'} }
                ],
                returns: {
                    arg: 'fileObject', type: 'object', root: true
                },
                http: {verb: 'post'}
            }
        );
    
    };
    

    To expose the files API, add the files model to model-config.json as well, and remember to select the correct data source:

    ...
    "files": {
        "dataSource": "db",
        "public": true
    }
    ...
    

    Done! You can now call POST /api/files/upload with the file's binary data in a "file" form field. In return you'll get the id, name, type, and url.
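    As a sketch of what that call looks like on the wire (Node stdlib only; the boundary string and file contents here are arbitrary examples, not part of the solution above), the multipart/form-data body can be built by hand:

```javascript
// Build the multipart/form-data body that POST /api/files/upload
// expects, with the binary data in a "file" form field.
function buildMultipartBody(boundary, fieldName, fileName, contentType, data) {
  var head = Buffer.from(
    '--' + boundary + '\r\n' +
    'Content-Disposition: form-data; name="' + fieldName +
      '"; filename="' + fileName + '"\r\n' +
    'Content-Type: ' + contentType + '\r\n\r\n'
  );
  var tail = Buffer.from('\r\n--' + boundary + '--\r\n');
  return Buffer.concat([head, data, tail]);
}

// It could then be sent with the stdlib http module, e.g.:
// var req = http.request({
//   method: 'POST',
//   path: '/api/files/upload',
//   headers: { 'Content-Type': 'multipart/form-data; boundary=' + boundary }
// }, handleResponse);
// req.end(buildMultipartBody(boundary, 'file', 'a.png', 'image/png', data));
```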

  • 2020-11-28 03:48

    For those looking for an answer to "how to check the file format before uploading": you can use the optional allowedContentTypes setting on the storage connector.

    Add a boot script in the server/boot directory, for example:

    module.exports = function(server) {
        server.dataSources.filestorage.connector.allowedContentTypes = ["image/jpg", "image/jpeg", "image/png"];
    }
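    The effect of that setting can be sketched as a simple allow-list check (illustrative only; the connector performs its own validation internally):

```javascript
// Illustrative allow-list check mirroring what allowedContentTypes
// does: accept only uploads whose MIME type is in the list.
var allowedContentTypes = ['image/jpg', 'image/jpeg', 'image/png'];

function isAllowed(contentType) {
  return allowedContentTypes.indexOf(contentType) !== -1;
}
```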
    

    I hope it will help someone.

  • 2020-11-28 03:55

    For AngularJS SDK users... If you want to use generated methods like Container.upload(), you may need to configure the method in lb-services.js to set the Content-Type header to undefined. This lets the client set the Content-Type header and add the boundary value automatically. It would look something like this:

     "upload": {
        url: urlBase + "/containers/:container/upload",
        method: "POST",
        headers: {"Content-Type": undefined}
     }
    
  • 2020-11-28 04:01

    Depending on your scenario, it may be worth looking at using signatures (or similar) to allow direct uploads to Amazon S3, TransloadIT (for image processing), or comparable services.

    Our first decision with this concept was that, since we are using GraphQL, we wanted to avoid multipart form uploads via GraphQL, which would in turn have to be forwarded to the LoopBack services behind it. We also wanted to keep those servers efficient, without tying up resources with (large) uploads and the associated file validation and processing.

    Your workflow might look something like this:

    1. Create database record
    2. Return record ID and file upload signature data (includes S3 bucket or TransloadIT endpoint, plus any auth tokens)
    3. Client uploads to endpoint

    For cases like banner or avatar uploads, step 1 already exists, so we skip it.

    Additionally, you can add SNS or SQS notifications to your S3 buckets to confirm in your database that the relevant object now has a file attached - effectively a step 4.

    This is a multi-step process, but it works well for removing file-upload handling from your core API. So far it has worked well in our initial implementation (early days in this project) for things like user avatars and attaching PDFs to records.

    Example references:

    http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html

    https://transloadit.com/docs/#authentication
