pymongo

Mongo query in python if I use variable as value

Submitted by 守給你的承諾、 on 2019-12-24 17:05:06

Question: I am trying to find documents in a Mongo collection with the following query, where Id is a variable I receive as input:

db.collection_name.find({"id": Id})

It doesn't work, but if I hardcode the value, like db.collection_name.find({"id": "1a2b"}), it works. "id" is of string type, and I am using PyMongo to access MongoDB. Code:

client = MongoClient("localhost:27017")
db = client['sample_database']
Id = raw_input("enter id")
cursor = db.collection_name.find({"id": Id})

Answer 1: Try
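A common cause when a hardcoded value matches but a variable does not is stray whitespace (raw_input can hand back a trailing newline on some setups). This is a minimal sketch, not the accepted answer; the build_query helper and the .strip() normalization are assumptions:

```python
def build_query(raw_id):
    # Strip stray whitespace/newlines that would break an exact-match
    # string query; "1a2b\n" will never equal the stored "1a2b".
    return {"id": raw_id.strip()}

# Usage against the question's collection (requires a running mongod):
# from pymongo import MongoClient
# client = MongoClient("localhost:27017")
# db = client['sample_database']
# cursor = db.collection_name.find(build_query(raw_input("enter id")))
```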

Pymongo cursor not looping

Submitted by 隐身守侯 on 2019-12-24 16:34:20

Question: This works:

cursor = collection.find(query).limit(10678)
for doc in cursor:
    print "entered loop"

This doesn't:

cursor = collection.find(query).limit(10679)
for doc in cursor:
    print "entered loop"

It just doesn't enter the loop. Of course this upper limit is different for different collections, which may be because of differences in the size of each doc. So is there any size limitation on cursors in PyMongo? Any clues?

Source: https://stackoverflow.com/questions/14685492/pymongo-cursor-not
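PyMongo documents no per-cursor document cap at that size, so it is worth ruling out the cursor having already been consumed (a cursor can be iterated only once). As a diagnostic sketch (an assumption, not the question's answer), an explicit batch_size makes the server stream results in smaller chunks, and a counting helper confirms whether the loop is entered at all:

```python
def count_docs(cursor):
    # Iterate the cursor once and count documents; a second loop over
    # the same exhausted cursor would yield nothing.
    n = 0
    for _doc in cursor:
        n += 1
    return n

# Usage (requires a live collection):
# cursor = collection.find(query).batch_size(1000).limit(10679)
# print count_docs(cursor)
```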

Insert the $currentDate on mongodb with pymongo

Submitted by 情到浓时终转凉″ on 2019-12-24 10:30:56

Question: I need to test the accuracy of a MongoDB server. I am trying to insert a sequence of data, recording the moment it was sent to the database, so I know when it was inserted. I'm trying this:

#!/usr/bin/python
from pymongo import Connection
from datetime import date, timedelta, datetime

class FilterData:
    @classmethod
    def setData(self, serialData):
        try:
            con = Connection('IP_REMOTE', 27017, safe=True)
            db = con['resposta']
            inoshare = db.resposta
            inoshare.insert(serialData)
            con.close()
        except Exception as
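To record the server's own insertion time rather than the client clock, MongoDB's $currentDate update operator sets a field to the server's clock at write time. The snippet below is a sketch of that approach; the insertedAt field name and the upsert usage are assumptions, not from the question:

```python
def current_date_update(payload):
    # $set writes the payload fields; $currentDate asks the *server*
    # to stamp insertedAt, avoiding client/server clock skew.
    return {"$set": payload, "$currentDate": {"insertedAt": True}}

# Usage (requires a live connection):
# inoshare.update_one({"_id": doc_id}, current_date_update(serialData),
#                     upsert=True)
```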

MongoDB : Can't insert twice the same document

Submitted by 落花浮王杯 on 2019-12-24 10:12:13

Question: In my PyMongo code, inserting the same doc twice raises an error:

document = {"auteur": "romain", "text": "premier post", "tag": "test2", "date": datetime.datetime.utcnow()}
collection.insert_one(document)
collection.insert_one(document)

raises:

DuplicateKeyError: E11000 duplicate key error collection: test.myCollection index: _id_ dup key: { : ObjectId('5aa282eff1dba231beada9e3') }

Inserting two documents with different content works fine. Seems like, according to https://docs.mongodb
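The usual explanation is that insert_one() mutates the dict it is given, adding the generated _id, so the second call re-sends the very same _id. A minimal workaround sketch (the helper name is an assumption): insert a shallow copy with any _id removed, so each insert gets a fresh ObjectId:

```python
def insertable_copy(document):
    # Shallow-copy the document and drop any _id so the driver
    # generates a new ObjectId; the caller's dict stays untouched.
    doc = dict(document)
    doc.pop("_id", None)
    return doc

# collection.insert_one(insertable_copy(document))
# collection.insert_one(insertable_copy(document))  # no DuplicateKeyError
```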

Pymongo aggregation - passing python list for aggregation

Submitted by 蓝咒 on 2019-12-24 09:58:46

Question: Here is my attempt at performing a day-wise aggregation based on timestamp, with all the elements hardcoded inside the query:

pipe = [
    {"$match": {"cid": ObjectId("57fe39972b8dbc1387b20913")}},
    {"$project": {
        "animal_dog": "$animal.dog",
        "animal_dog_tail": "$animal.dog.tail",
        "animal_cat": "$animal.cat",
        "tree": "$fruits",
        "day": {"$substr": ["$timestamp", 0, 10]}
    }},
    {"$group": {
        "_id": "$day",
        "animal_dog": {"$sum": "$animal_dog"},
        "animal_dog_tail": {"$sum": "$animal_dog_tail"
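Since an aggregation pipeline is just a Python list of dicts, the hardcoded stages can be generated from a plain list of field names. This is a sketch under the assumption that every listed field gets the same $project alias and per-day $sum; the build_pipeline helper is hypothetical:

```python
def build_pipeline(cid, fields):
    # Derive flat aliases ("animal.dog" -> "animal_dog") for $project,
    # then sum each alias per day in $group.
    aliases = {f.replace(".", "_"): f for f in fields}
    project = {alias: "$" + path for alias, path in aliases.items()}
    project["day"] = {"$substr": ["$timestamp", 0, 10]}
    group = {"_id": "$day"}
    group.update({alias: {"$sum": "$" + alias} for alias in aliases})
    return [
        {"$match": {"cid": cid}},
        {"$project": project},
        {"$group": group},
    ]

# pipe = build_pipeline(ObjectId("57fe39972b8dbc1387b20913"),
#                       ["animal.dog", "animal.dog.tail", "animal.cat"])
# results = db.collection.aggregate(pipe)
```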

Flask app broken after bson update in Heroku

Submitted by 风格不统一 on 2019-12-24 04:25:12

Question: I have a Flask app that uses mongoengine and runs on Heroku. In it I use the bson package, and after I updated it from 0.5.6 to 0.5.7 I started getting the following error message:

[2018-11-23 05:56:43 +0000] [39] [INFO] Worker exiting (pid: 39)
[2018-11-23 05:56:43 +0000] [40] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/app/.heroku

Why does PyMongo encode uuid.uuid1() as a BSON::Binary?

Submitted by 巧了我就是萌 on 2019-12-24 04:09:36

Question: I'm adding a 'GUID' key with a value of uuid.uuid1() (from the Python uuid module) to all my documents in Mongo. I noticed they are stored not as strings but as type BSON::Binary. I've done some Googling, but I still don't understand the purpose/advantage of this serialization. Can someone explain? Should I convert uuid.uuid1() to a string before storing? How can I use a string to find() by the GUID value, like db.myCol.find({'GUID': aString})?

Answer 1: The default
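If plain string lookups matter more than compactness, one option (an assumption, not the accepted answer) is to store the canonical string form, which a plain string in find() will then match; by default PyMongo serializes uuid.UUID values as BSON Binary, which a string query does not match:

```python
import uuid

def guid_as_string():
    # str(uuid.uuid1()) gives the canonical 36-character form,
    # e.g. "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx".
    return str(uuid.uuid1())

# doc = {"GUID": guid_as_string()}
# db.myCol.insert_one(doc)
# db.myCol.find_one({"GUID": doc["GUID"]})  # plain string lookup works
```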

pymongo replication secondary readreference not work

Submitted by 走远了吗. on 2019-12-24 03:50:49

Question: We have a MongoDB 2.6 replica set, and we use the pymongo driver to connect to the replica set with the following URL:

mongodb://admin:admin@127.0.0.1:10011:127.0.0.1:10012,127.0.0.1:10013/db?replicaSet=replica

with Python code:

from pymongo import MongoClient
url = 'mongodb://admin:admin@127.0.0.1:10011:127.0.0.1:10012,127.0.0.1:10013/db?replicaSet=replica'
db = 'db'
db = MongoClient(
    url,
    readPreference='secondary',
    secondary_acceptable_latency_ms=1000,
)[db]
db.test.find_one()
# more read
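One thing to double-check is the seed-list format itself: hosts in a MongoDB connection URI must be comma-separated, and the URL above has a colon between the first two hosts, which can keep the driver from seeing all members. A small sketch (the replica_uri helper is hypothetical) that builds the URI from a host list so the separators cannot be mistyped:

```python
def replica_uri(user, pwd, hosts, dbname, replica_set):
    # Join hosts with commas; a colon between hosts makes the seed
    # list unparsable and the secondaries invisible to the driver.
    return "mongodb://{0}:{1}@{2}/{3}?replicaSet={4}".format(
        user, pwd, ",".join(hosts), dbname, replica_set)

# url = replica_uri("admin", "admin",
#                   ["127.0.0.1:10011", "127.0.0.1:10012", "127.0.0.1:10013"],
#                   "db", "replica")
# db = MongoClient(url, readPreference="secondary")["db"]
```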

Is there a workaround to allow using a regex in the Mongodb aggregation pipeline

Submitted by 纵饮孤独 on 2019-12-24 03:43:32

Question: I'm trying to create a pipeline that counts how many documents match some conditions, but I can't see any way to use a regular expression in the conditions. Here's a simplified version of my pipeline, with annotations:

db.Collection.aggregate([
    // Pipeline before the issue
    {'$group': {
        '_id': {
            'field': '$my_field',      // Included for completeness
        },
        'first_count': {'$sum': {      // We're going to count the number
            '$cond': [                 // of documents that have 'foo' in
                {'$eq': ['$field_foo', 'foo']}, 1,
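On MongoDB 4.2+ the $regexMatch expression can be used directly inside $cond, which removes the need for the $eq workaround; older servers only accept regular expressions in a $match stage. A sketch of the counting accumulator (the helper name, and the assumption that your server is 4.2+, are mine):

```python
def count_matching_stage(input_expr, pattern):
    # Emits 1 when the field matches the regex, else 0, so $sum
    # counts matching documents. Requires MongoDB 4.2+ ($regexMatch).
    return {"$sum": {"$cond": [
        {"$regexMatch": {"input": input_expr, "regex": pattern}},
        1, 0]}}

# {'$group': {'_id': {'field': '$my_field'},
#             'first_count': count_matching_stage('$field_foo', 'foo')}}
```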

PyMongo’s bulk write operation features with generators

Submitted by 微笑、不失礼 on 2019-12-24 03:27:11

Question: I would like to use PyMongo's bulk write operation features, which execute write operations in batches in order to reduce the number of network round trips and increase write throughput. I also found here that it is possible to use 5000 as a batch size. However, I do not know what the best batch size is, nor how to combine PyMongo's bulk write operations with generators in the following code:

from pymongo import MongoClient
from itertools import groupby
import csv

def iter
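One way to feed a generator into insert_many in fixed-size batches is to slice it with itertools.islice; the batch size of 5000 below just echoes the number mentioned in the question, not a measured optimum:

```python
from itertools import islice

def batches(docs, size=5000):
    # Consume an arbitrary iterable/generator in lists of `size`
    # documents, suitable for passing straight to insert_many().
    it = iter(docs)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# for chunk in batches(doc_generator(), 5000):
#     collection.insert_many(chunk, ordered=False)
```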