Performance of Firebase with large data sets

Asked by 一整个雨季 on 2020-11-27 10:39

I'm testing Firebase for a project that may have a reasonably large number of keys, potentially millions.

I've tested loading a few tens of thousands of records using Node, and …

1 Answer
  • 2020-11-27 10:41

    It's simply the limitations of the Forge UI. It's still fairly rudimentary.

    The real-time functions in Firebase are not only suited for, but designed for large data sets. The fact that records stream in real-time is perfect for this.

    Performance is, as with any large data app, only as good as your implementation. So here are a few gotchas to keep in mind with large data sets.

    DENORMALIZE, DENORMALIZE, DENORMALIZE

    If a data set will be iterated, and its records can be counted in thousands, store it in its own path.

    This is bad for iterating large data sets:

    /users/uid
    /users/uid/profile
    /users/uid/chat_messages
    /users/uid/groups
    /users/uid/audit_record
    

    This is good for iterating large data sets:

    /user_profiles/uid
    /user_chat_messages/uid
    /user_groups/uid
    /user_audit_records/uid
    
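Writes to a denormalized layout like the one above can still be atomic via a single multi-location update. The sketch below builds such an update map for one user; the helper name (`buildUserFanout`) and the exact paths are illustrative assumptions, not part of the Firebase API, but the resulting object is the shape you would pass to `ref.update()` in the Firebase JS SDK.

```javascript
// Sketch: building a denormalized, multi-location ("fan-out") update
// for one user. Each top-level path holds one kind of data, so each can
// be iterated on its own without pulling in the others.
function buildUserFanout(uid, profile, groupId) {
  return {
    [`user_profiles/${uid}`]: profile,
    [`user_groups/${uid}/${groupId}`]: true,
    // chat messages and audit records would get their own top-level
    // paths the same way, e.g. user_chat_messages/<uid>/<messageId>
  };
}

const update = buildUserFanout('uid123', { name: 'Ada' }, 'groupA');
// With the real SDK this single call writes all paths atomically:
// firebase.database().ref().update(update)
console.log(Object.keys(update));
```

Because all paths are written in one `update()`, the denormalized copies cannot drift out of sync on a partial failure.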

    Avoid 'value' on large data sets

Use child_added instead, since value must load the entire record set to the client before firing.
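To make the difference concrete, the toy class below mimics how the two event types hand data to your callback. `FakeRef` is a stand-in invented for this sketch, not the real Firebase SDK: `value` delivers one snapshot of everything, while `child_added` delivers one child at a time, so the client can start working before the full set has arrived.

```javascript
// Toy stand-in for a Firebase reference, illustrating delivery semantics.
class FakeRef {
  constructor(data) { this.data = data; }
  on(eventType, callback) {
    if (eventType === 'value') {
      // 'value' hands the ENTIRE record set to the callback at once
      callback(this.data);
    } else if (eventType === 'child_added') {
      // 'child_added' fires once per child, in order
      for (const [key, val] of Object.entries(this.data)) {
        callback({ key, val });
      }
    }
  }
}

const ref = new FakeRef({ msg1: 'hi', msg2: 'hello', msg3: 'hey' });
const received = [];
ref.on('child_added', (snap) => received.push(snap.key));
console.log(received); // each child arrived as its own event
```

With millions of records, the `value` path means millions of records on the wire before your callback runs even once.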

    Watch for hidden value operations on children

    When you call child_added, you are essentially calling value on every child record. So if those children contain large lists, they are going to have to load all that data to return. Thus, the DENORMALIZE section above.
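A rough way to see the cost: compare the bytes of one nested user record against its denormalized profile alone. The sample data below is invented for illustration, but the ratio is the point — with nested chat messages, "list the user names" downloads every message too.

```javascript
// Sketch: payload size of a nested user subtree vs. just the profile.
// A child_added listener on /users receives the whole nested object
// per child; on /user_profiles it receives only the small profile.
const nestedUser = {
  profile: { name: 'Ada' },
  chat_messages: { m1: 'x'.repeat(1000), m2: 'y'.repeat(1000) },
};
const denormalizedProfile = { name: 'Ada' };

const nestedBytes = JSON.stringify(nestedUser).length;
const profileBytes = JSON.stringify(denormalizedProfile).length;
console.log(nestedBytes > 100 * profileBytes); // true: the subtree dwarfs the profile
```

Multiply that per-child overhead by millions of users and the nested layout becomes unusable for iteration, which is exactly what the denormalized paths avoid.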
