Python multiprocessing from Abaqus/CAE

Submitted by 若如初见 on 2019-12-21 02:48:14

Question


I am using a commercial application called Abaqus/CAE [1] with a built-in Python 2.6 interpreter and API. I've developed a long-running script that I'm attempting to split into simultaneous, independent tasks using Python's multiprocessing module. However, once spawned, the processes just hang.

The script itself uses various objects/methods available only through Abaqus's proprietary cae module, which can only be loaded by starting up the Python bundled with Abaqus/CAE first, which then executes my script with Python's execfile.

To try to get multiprocessing working, I've attempted to run a script that avoids accessing any Abaqus objects and instead just performs a calculation and writes the result to a file [2]. This way, I can run the same script from the regular system Python installation as well as from the Python bundled with Abaqus.

The example code below works as expected when run from the command line using either of the following:

C:\some\path>python multi.py         # <-- Using system Python
C:\some\path>abaqus python multi.py  # <-- Using Python bundled with Abaqus

This spawns the new processes, and each runs the function and writes the result to file as expected. However, when called from the Abaqus/CAE Python environment using:

abaqus cae noGUI=multi.py

Abaqus will then start up, automatically import its own proprietary modules, and then execute my file using:

execfile("multi.py", __main__.__dict__)

where the global namespace arg __main__.__dict__ is set up by Abaqus. Abaqus then checks out licenses for each process successfully, spawns the new processes, and ... and that's it. The processes are created, but they all hang and do nothing. There are no error messages.

What might be causing the hang-up, and how can I fix it? Is there an environment variable that must be set? Are there other commercial systems that use a similar procedure that I can learn from/emulate?

Note that any solution must be available in the Python 2.6 standard library.

System details: Windows 10 64-bit, Python 2.6, Abaqus/CAE 6.12 or 6.14

Example Test Script:

# multi.py
import multiprocessing
import time

def fib(n):
    a, b = 0, 1
    for i in range(n):
        a, b = a+b, a
    return a

def workerfunc(num):
    fname = ''.join(('worker_', str(num), '.txt'))
    with open(fname, 'w') as f:
        f.write('Starting Worker {0}\n'.format(num))
        count = 0
        while count < 1000:  # <-- Repeat a bunch of times.
            count += 1
            a = fib(20)
        line = ''.join((str(a), '\n'))
        f.write(line)
        f.write('End Worker {0}\n'.format(num))

if __name__ == '__main__':
    jobs = []
    for i in range(2):       # <-- Setting the number of processes manually
        p = multiprocessing.Process(target=workerfunc, args=(i,))
        jobs.append(p)
        print 'starting', p
        p.start()
        print 'done starting', p
    for j in jobs:
        print 'joining', j
        j.join()
        print 'done joining', j

[1] A widely known finite element analysis package.

[2] The script blends a fairly standard Python fib() function with examples from PyMOTW.


Answer 1:


I have to write an answer as I cannot comment yet.

What I can imagine as a reason is that Python's multiprocessing spawns a whole new process with its own, non-shared memory. So if you create an object in your script and then start a new process, that new process receives a copy of the memory, and you end up with two objects that can diverge. If something from Abaqus is present in the original Python process (which I suspect), it gets copied too, and that copy could cause such behaviour.

As a solution, I think you could extend Python with C (which can use multiple cores within a single process) and use threads there.
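To illustrate the shared-versus-copied memory point in pure Python (a sketch of mine, not code from the answer): threads created with the standard threading module share the parent's objects, whereas multiprocessing children work on copies. Note that CPython's GIL serializes Python bytecode across threads, which is why the answer reaches for a C extension to get true multi-core parallelism.

```python
import threading

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = a + b, a
    return a

# A dict shared by all threads -- unlike with multiprocessing, no copy is made.
results = {}

def worker(num):
    results[num] = fib(20)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both workers have written into the same results dict.
```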




Answer 2:


Just wanted to say that I have run into this exact issue. My current solution is to compartmentalize my scripting. This may work for you if you're trying to run parameter sweeps over a given model, run geometric variations on the same model, etc.

I first generate scripts to accomplish each portion of my modelling process:

  1. Generate the input file using CAE/Python.
  2. Extract the data I want and put it in a text file.

With these created, I use text replacement to quickly generate N python scripts of each type, one for each discrete parameter set I'm interested in.
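That text-replacement step could be sketched like this (the `{NAME}` placeholder convention and the `fill_template` helper are illustrative assumptions of mine, not the answer's actual scripts):

```python
def fill_template(template, params):
    """Return a copy of template with each {NAME} placeholder
    replaced by the corresponding value from params."""
    out = template
    for name, value in params.items():
        out = out.replace("{" + name + "}", str(value))
    return out

# Generate one CAE script per parameter set from a common template.
template_text = "jobName = '{JOB}'\nload = {LOAD}\n"
scripts = {}
for i, load in enumerate((10.0, 20.0)):
    job = "job_%d" % i
    scripts[job + ".py"] = fill_template(template_text, {"JOB": job, "LOAD": load})
# Each entry in scripts can then be written to disk as its own CAE script.
```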

I then wrote a parallel processing tool in Python to call multiple Abaqus instances as subprocesses. This does the following:

  1. Call CAE through subprocess.call for each model generation script. The script allows you to choose how many instances to run at once to keep you from taking every license on the server.

  2. Execute the Abaqus solver for the generated models using the same approach, with parameters for cores per job and total number of cores used.

  3. Extract data using the same process as 1.

There is some overhead in repeatedly checking out licenses for CAE when generating the models, but in my testing it is far outweighed by the benefit of being able to generate 10+ input files simultaneously.
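A stripped-down sketch of such a driver, assuming the `abaqus` launcher is invocable from `PATH` (the function name, parameters, and `dry_run` switch are illustrative, not the answer's actual tool): a semaphore caps how many Abaqus instances run at once, which is what keeps the tool from checking out every license on the server.

```python
import subprocess
import threading

def run_abaqus_jobs(scripts, max_concurrent=4, dry_run=False):
    """Run each CAE script in its own 'abaqus cae noGUI=...' subprocess,
    with at most max_concurrent instances alive at a time."""
    gate = threading.Semaphore(max_concurrent)  # caps concurrent license checkouts
    results = {}

    def worker(script):
        cmd = ["abaqus", "cae", "noGUI=" + script]
        gate.acquire()
        try:
            if dry_run:
                results[script] = " ".join(cmd)  # record the command instead of running it
            else:
                results[script] = subprocess.call(cmd)  # blocks until that instance exits
        finally:
            gate.release()

    threads = [threading.Thread(target=worker, args=(s,)) for s in scripts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```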

I can put some of the scripts up on Github if you think the process outlined above would be helpful for your application.

Cheers, Nathan



Source: https://stackoverflow.com/questions/44146116/python-multiprocessing-from-abaqus-cae
