Microsoft Sync Framework - Performance and scalability


Some things to keep in mind:

  1. Number of scopes - you might want to keep a 1:1 client-to-server scope ratio instead of pointing 1,500 client scopes at a single server scope. This isolates each client's sync knowledge from the others, so you can even drop and recreate a client scope without affecting the other scopes, and the sync knowledge stays much more compact (see the template-based sketch at the end of this answer for one way to provision a scope per client).

  2. Scope definition - don't dump all tables into one scope. Different tables have different characteristics (e.g., download-only, read-only, infrequently updated, frequently updated), so group tables into scopes based on those characteristics (see the provisioning sketch after this list).

  3. Batching - if the changes are small, don't batch. Batching adds overhead because it has to write the change batches to files and later reconstitute the change dataset from those files (see the batching sketch after this list).

  4. Metadata cleanup - set up metadata retention and a metadata cleanup process. This keeps the sync metadata (rows in the tracking tables and the sync knowledge) from growing without bound (see the cleanup sketch after this list).

  5. WCF config - watch out for your WCF configuration entries such as timeouts, message size, etc. (see the binding sketch after this list). Also be aware of this issue: http://support.microsoft.com/kb/2567595
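
For point 2, a minimal provisioning sketch, assuming the Sync Framework 2.1 SqlSyncProvider APIs; the connection string, table names, and the split into a "lookup" scope and a "transactional" scope are illustrative assumptions, not part of the original answer:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class ScopeProvisioningSketch
    {
        static void Main()
        {
            using (var serverConn = new SqlConnection(
                "Data Source=.;Initial Catalog=ServerDb;Integrated Security=True"))
            {
                // Reference tables that clients mostly download.
                var lookupScope = new DbSyncScopeDescription("LookupScope");
                lookupScope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Products", serverConn));
                lookupScope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("PriceList", serverConn));

                // Frequently updated transactional tables in their own scope.
                var txScope = new DbSyncScopeDescription("TransactionScope");
                txScope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", serverConn));
                txScope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("OrderDetails", serverConn));

                foreach (var scope in new[] { lookupScope, txScope })
                {
                    var provisioning = new SqlSyncScopeProvisioning(serverConn, scope);
                    provisioning.SetCreateTableDefault(DbSyncCreationOption.Skip); // tables already exist
                    if (!provisioning.ScopeExists(scope.ScopeName))
                        provisioning.Apply();
                }
            }
        }
    }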
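
For point 3, a sketch of enabling batching only when large change sets are expected. MemoryDataCacheSize and BatchingDirectory are SqlSyncProvider properties; the ~5 MB threshold, the directory path, and the expectLargeChanges flag are assumptions for illustration:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data.SqlServer;

    static class SyncProviderFactory
    {
        public static SqlSyncProvider Create(string scopeName, SqlConnection conn, bool expectLargeChanges)
        {
            var provider = new SqlSyncProvider(scopeName, conn);
            if (expectLargeChanges)
            {
                // Spool change batches to disk once the in-memory cache exceeds ~5 MB.
                provider.MemoryDataCacheSize = 5000;            // in KB
                provider.BatchingDirectory = @"C:\SyncBatches"; // assumed path; must exist and be writable
            }
            // Leaving MemoryDataCacheSize at its default keeps batching off, which avoids
            // the file write/reassembly overhead for small change sets.
            return provider;
        }
    }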
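
For point 4, a retention-based cleanup sketch using SqlSyncStoreMetadataCleanup; the 30-day retention value and running it from a scheduled job are assumptions:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data.SqlServer;

    class MetadataCleanupJob
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Data Source=.;Initial Catalog=ServerDb;Integrated Security=True"))
            {
                conn.Open();
                var cleanup = new SqlSyncStoreMetadataCleanup(conn)
                {
                    RetentionInDays = 30 // assumed retention: drop tracking metadata older than 30 days
                };
                bool completed = cleanup.PerformCleanup();
                // 'completed' is false when cleanup could not run to completion
                // (for example, if a sync session was in progress).
            }
        }
    }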
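
For point 5, these settings usually live in the WCF app.config/web.config; shown here programmatically instead, with illustrative sizes and timeouts that you would tune to your batch size and network:

    using System;
    using System.ServiceModel;

    static class SyncBindingFactory
    {
        public static BasicHttpBinding Create()
        {
            var binding = new BasicHttpBinding
            {
                MaxReceivedMessageSize = 10 * 1024 * 1024,   // 10 MB per message/batch (assumed)
                SendTimeout            = TimeSpan.FromMinutes(10),
                ReceiveTimeout         = TimeSpan.FromMinutes(10),
                OpenTimeout            = TimeSpan.FromMinutes(1),
                CloseTimeout           = TimeSpan.FromMinutes(1)
            };
            // Reader quotas also cap how much serialized change data can be read.
            binding.ReaderQuotas.MaxArrayLength         = 10 * 1024 * 1024;
            binding.ReaderQuotas.MaxStringContentLength = 10 * 1024 * 1024;
            return binding;
        }
    }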

EDIT:

Also, have a look at other scope considerations here: Sync Framework Scope and SQL Azure Data Sync Dataset Considerations

The sample provided here: http://www.rajneeshnoonia.com/blog/2012/03/n-tier-sync-framework/ is close to your requirements.

1:1 scopes => we define a filter template and configure a scope for each client based on that template. In this scenario, table T1 is included in both S1 and S2, but filters are used to identify the row-level records that belong to each scope (a minimal sketch follows below).
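
A minimal sketch of that template-plus-filter approach, assuming the Sync Framework 2.1 template provisioning APIs; the ClientId filter column, the @client_id parameter, and the scope/template names are assumptions:

    using System.Data;
    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class FilteredScopeSketch
    {
        static void Main()
        {
            using (var serverConn = new SqlConnection(
                "Data Source=.;Initial Catalog=ServerDb;Integrated Security=True"))
            {
                // 1. Define a filter template for T1 once on the server.
                var scopeDesc = new DbSyncScopeDescription("T1_Template");
                scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("T1", serverConn));

                var templateProv = new SqlSyncScopeProvisioning(
                    serverConn, scopeDesc, SqlSyncScopeProvisioningType.Template);
                templateProv.Tables["T1"].AddFilterColumn("ClientId");
                templateProv.Tables["T1"].FilterClause = "[side].[ClientId] = @client_id";
                templateProv.Tables["T1"].FilterParameters.Add(new SqlParameter("@client_id", SqlDbType.Int));
                if (!templateProv.TemplateExists("T1_Template"))
                    templateProv.Apply();

                // 2. Create one filtered scope per client (S1, S2, ...) from that template.
                var clientScope = new SqlSyncScopeProvisioning(serverConn);
                clientScope.PopulateFromTemplate("S1", "T1_Template");
                clientScope.Tables["T1"].FilterParameters["@client_id"].Value = 1; // this client's id
                if (!clientScope.ScopeExists("S1"))
                    clientScope.Apply();
            }
        }
    }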
