Accelerate a slow loop in Abaqus-python code for extracting strain data from .odb file

Posted by 你说的曾经没有我的故事 on 2020-05-11 00:42:51

Question


I have a .odb file, named plate2.odb, that I want to extract the strain data from. To do this I built the simple code below that loops through the field output E (strain) for each element and saves it to a list.

from odbAccess import openOdb
import pickle as pickle

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a list
E = []
for i in range(1000):
    E.append(odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[i].data)   

# save the data
with open("mises.pickle", "wb") as input_file:
    pickle.dump(E, input_file)

odb.close()

The issue is that the for loop that loads the strain values into the list takes a long time (35 seconds for 1000 elements). At this rate (about 0.035 seconds per query), it would take roughly 2 hours to extract the data for my model with 200,000 elements. Why is this taking so long, and how can I speed it up?

If I do a single strain query outside of any loop it takes about 0.04 seconds, so I know the Python loop machinery itself is not the problem.
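
For reference, a minimal sketch of how the per-query time above can be measured, reusing the same paths and field names as the code above (the standard time module is the only addition):

import time
from odbAccess import openOdb

odb = openOdb('./plate2.odb')

# time 1000 queries that re-traverse the odb tree on every iteration,
# mirroring the loop above
start = time.time()
E = []
for i in range(1000):
    E.append(odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[i].data)
elapsed = time.time() - start
print('total: %.1f s, per query: %.4f s' % (elapsed, elapsed / 1000))

odb.close()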


Answer 1:


I found that I was re-traversing the nested dictionaries of the odb object (steps, frames, fieldOutputs) every time I queried a strain value. To fix the problem, I hoisted the field output object out of the loop and saved it to a smaller variable. My updated code, which runs in a fraction of a second, is below.

from odbAccess import openOdb
import pickle as pickle

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a list
E = []
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
for i in range(1000):
    E.append(EE.values[i].data)  

# save the data
with open("mises.pickle", "wb") as input_file:
    pickle.dump(E, input_file)

odb.close()
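
A further variation along the same lines (not part of the original answer, and the gain, if any, depends on the Abaqus version) is to iterate over EE.values directly instead of indexing it:

# hoist the field output once, then loop over its values directly
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
E = [value.data for value in EE.values]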



Answer 2:


I would use bulkDataBlocks here. This is much faster than going through the values sequence, and using pickle is usually slow and not necessary. Take a look at the FieldBulkData object in the C++ manual (http://abaqus.software.polimi.it/v6.14/books/ker/default.htm). The Python interface is the same, but at least in Abaqus 6.14 it is not documented in the Python Scripting Reference (it has been available since 6.13).

For example:

from odbAccess import openOdb
import numpy as np

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a numpy array
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']

# get a numpy array with your data 
# Not using np.copy here may work also, but sometimes I encountered some weird bugs
Strains = np.copy(EE.bulkDataBlocks[0].data)

# save the data
np.save('OutputPath', Strains)

odb.close()

Keep in mind that if you have multiple element types, there may be more than one bulkDataBlock.
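
One possible way to handle that case (a sketch, not from the original answer) is to stack the data of every block:

import numpy as np

EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']

# concatenate the data arrays of all blocks; the exact partitioning into
# blocks (element type, instance, section point) depends on the model
strains = np.concatenate([np.copy(block.data) for block in EE.bulkDataBlocks])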




Answer 3:


A little late to the party, but I find using operator.attrgetter to be much faster than a for loop or list comprehension in this case.

So instead of @AustinDowney's approach:

E = []
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
for i in range(1000):
    E.append(EE.values[i].data) 

do this:

from operator import attrgetter
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
E = map(attrgetter('data'), EE.values)

This is about the same speed as a list comprehension, but it is much better if you have multiple attributes you want to extract at once (say coordinates or elementId).
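
For example, a sketch of pulling the element label together with the strain data in one pass (this assumes the standard elementLabel attribute of each field value; with several attribute names, attrgetter returns tuples):

from operator import attrgetter

EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']

# each entry becomes an (elementLabel, data) tuple
labels_and_strains = map(attrgetter('elementLabel', 'data'), EE.values)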



Source: https://stackoverflow.com/questions/46573959/accelerate-a-slow-loop-in-abaqus-python-code-for-extracting-strain-data-from-od
