NDB not clearing memory during a long request

Submitted by [亡魂溺海] on 2019-11-30 15:20:04

Question


I am currently offloading a long-running job to a TaskQueue to calculate connections between NDB entities in the Datastore.

Basically this queue handles several lists of entity keys that are to be related to another query by the node_in_connected_nodes generator in the GetConnectedNodes class:

from collections import deque

from google.appengine.ext import ndb


class GetConnectedNodes(object):
    """Gets the connected nodes for a list of nodes in a paged way."""

    def __init__(self, node_ids, query):
        self.nodes = [ndb.model.Key('Node', '%s' % x) for x in node_ids]
        self.cursor = 0
        self.MAX_QUERY = 100  # page size for the async batch gets
        self.max_connections = len(node_ids)
        self.connections = deque()
        self.query = query

    def node_in_connected_nodes(self):
        """Generator over the nodes whose connections contain self.query.

        Yields (index of the node in the list, the sources recorded for
        that connection); nodes without a matching connection are skipped.
        """
        while self.cursor < self.max_connections:
            if len(self.connections) == 0:
                # Fetch the next page of at most MAX_QUERY nodes asynchronously.
                end = self.MAX_QUERY
                if self.max_connections - self.cursor < self.MAX_QUERY:
                    end = self.max_connections - self.cursor
                self.connections = deque(ndb.model.get_multi_async(
                    self.nodes[self.cursor:self.cursor + end]))

            node = self.connections.popleft().get_result()
            connection_nodes = node.connections

            if self.query in connection_nodes:
                # Yield (current node index in the list, matching sources).
                yield (self.cursor,
                       node.sources[connection_nodes.index(self.query)])
            self.cursor += 1

Here a Node has a repeated property connections holding the key ids of other Nodes, and a parallel sources property holding the evidence for each of those connections.
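For reference, the Node model looks roughly like this; the property types are a simplified sketch, not the actual definitions:

from google.appengine.ext import ndb

class Node(ndb.Model):
    # Parallel repeated properties: connections[i] holds the key id of a
    # connected Node, sources[i] the evidence for that connection.
    connections = ndb.StringProperty(repeated=True)
    sources = ndb.StringProperty(repeated=True)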

The yielded results are stored in the Blobstore.
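The task drives the generator roughly like this (the names and values here are illustrative, not the real code):

node_ids = ['4839', '3003']  # illustrative key ids
getter = GetConnectedNodes(node_ids, 'HGNC:4839')
results = list(getter.node_in_connected_nodes())
# ... results are then serialized and written to the Blobstore ...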

Now the problem I'm getting is that the memory is somehow not released after each run of the connection generator. The following log shows the memory used by App Engine just before a new GetConnectedNodes instance is created:

I 2012-08-23 16:58:01.643 Prioritizing HGNC:4839 - mem 32
I 2012-08-23 16:59:21.819 Prioritizing HGNC:3003 - mem 380
I 2012-08-23 17:00:00.918 Prioritizing HGNC:8932 - mem 468
I 2012-08-23 17:00:01.424 Prioritizing HGNC:24771 - mem 435
I 2012-08-23 17:00:20.334 Prioritizing HGNC:9300 - mem 417
I 2012-08-23 17:00:48.476 Prioritizing HGNC:10545 - mem 447
I 2012-08-23 17:01:01.489 Prioritizing HGNC:12775 - mem 485
I 2012-08-23 17:01:46.084 Prioritizing HGNC:2001 - mem 564
C 2012-08-23 17:02:18.028 Exceeded soft private memory limit with 628.609 MB after servicing 1 requests total

Apart from some fluctuations, the memory just keeps increasing, even though none of the previous values are accessed. I have found it quite hard to debug this or to figure out whether I have a memory leak somewhere, but I seem to have traced it down to this class. I would appreciate any help.


Answer 1:


We had similar issues with long-running requests. We solved them by turning off the default NDB cache. You can read more about it in the NDB caching documentation.
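For example, with the legacy ndb API you can switch the caches off per request, or per model via the _use_cache / _use_memcache class flags:

from google.appengine.ext import ndb

# Disable the in-context cache (and optionally memcache) for this request.
ctx = ndb.get_context()
ctx.set_cache_policy(False)
ctx.set_memcache_policy(False)

# Or per model:
class Node(ndb.Model):
    _use_cache = False     # skip the in-context cache
    _use_memcache = False  # skip memcache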




Answer 2:


In our case this was caused by having App Engine Appstats enabled.

After disabling it, memory consumption went back to normal.
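For reference, Appstats is normally wired in through appengine_config.py; disabling it amounts to removing (or commenting out) that middleware:

# appengine_config.py -- the standard Appstats hookup.
# Deleting this function (or the whole file) disables Appstats recording.
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)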




Answer 3:


You could call gc.collect() at the start of each request.
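For example, assuming a webapp2 task handler (the handler name is illustrative):

import gc

import webapp2

class PrioritizeHandler(webapp2.RequestHandler):
    def post(self):
        gc.collect()  # reclaim garbage left over from the previous request
        # ... run the GetConnectedNodes work here ...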



Source: https://stackoverflow.com/questions/12095259/ndb-not-clearing-memory-during-a-long-request
