Question
Working on single-page applications, I have to write a lot of boilerplate code to keep the client in sync with server-side data.
PouchDB offers an elegant solution to this problem by letting me access the data locally on the client side.
What I don't understand is whether Pouch is suitable as a database proxy in cases where the database is too big to fit entirely in the browser's memory.
As far as I can tell, Pouch works by duplicating a whole remote database, and so can only be used when the whole database fits in the browser's memory.
Example use case
Let's say that I have a database with all Wikipedia articles and I want to manipulate some of them on the client side. Replication is not the way to go; what is needed is proxying. For example, when a query is issued locally on the client side, only the matching results should be transferred. It is not feasible to run a query against the replicated values alone, because the whole database cannot be replicated locally.
Answer 1:
You're right that PouchDB sync wouldn't really do what you want it to do. It's designed to sync entire databases, or predefined subsets of a database using server-side design docs.
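For reference, that "predefined subset" approach looks roughly like the sketch below, assuming a server-side design doc '_design/app' with a filter function named 'by_topic' (both names are hypothetical, for illustration only):

var localDB = new PouchDB('localDB');
var remoteDB = new PouchDB('http://some-site.com:5984/somedb');

// Pull down only the docs that pass the server-side filter, so the
// browser never stores the full database.
localDB.replicate.from(remoteDB, {
  filter: 'app/by_topic',             // hypothetical design-doc filter
  query_params: { topic: 'physics' }  // parameters handed to the filter
}).on('complete', function (info) {
  // the filtered subset is now available locally
}).on('error', function (err) {
  // handle replication errors
});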
If I were you, I would probably still use PouchDB, but I would handle the syncing manually. Something like this:
var localDB = new PouchDB('localDB');
var remoteDB = new PouchDB('http://some-site.com:5984/somedb');

function searchForDocs(docId) {
  // try the local DB first
  localDB.get(docId).catch(function (err) {
    if (err.name !== 'not_found') {
      throw err;
    }
    // not found, so fall back to the remote DB
    return remoteDB.get(docId).then(function (doc) {
      // cache in the local DB
      delete doc._rev;
      return localDB.put(doc).then(function () {
        return doc;
      });
    });
  }).then(function (doc) {
    // do something with our doc
  }).catch(function (err) {
    // handle any errors along the way
  });
}
Using get() is a little simplistic here; in your Wikipedia case you would probably want to do allDocs({startkey: query, endkey: query + '\uffff'}) to find all docs whose IDs start with the query. Or you could use a secondary index.
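To make that concrete, a prefix search with a remote fallback might look like the sketch below (not part of the original answer; it assumes article titles are used directly as doc IDs, and it caches remote matches locally much like the get() example above):

function searchByPrefix(query) {
  var opts = {
    startkey: query,
    endkey: query + '\uffff',
    include_docs: true
  };
  // try the local cache first
  return localDB.allDocs(opts).then(function (res) {
    if (res.rows.length > 0) {
      return res.rows.map(function (row) { return row.doc; });
    }
    // nothing cached, so ask the remote DB for the matching docs only
    return remoteDB.allDocs(opts).then(function (remoteRes) {
      var docs = remoteRes.rows.map(function (row) { return row.doc; });
      // cache the matches; new_edits: false keeps the original revisions,
      // so the docs won't conflict if you sync them again later
      return localDB.bulkDocs(docs, { new_edits: false }).then(function () {
        return docs;
      });
    });
  });
}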
So although you wouldn't be getting the benefits of PouchDB's built-in sync, you do get the benefit of writing the same code against the server as against the client, plus PouchDB's cross-browser support. So I don't think this is a bad way to go about it.
Source: https://stackoverflow.com/questions/25060568/can-pouchdb-proxy-a-big-database-on-the-client-side