How to run several Keras neural networks in parallel

Submitted by 此生再无相见时 on 2019-12-19 05:46:08

Question


I'm trying to use Keras to run a reinforcement learning algorithm. In this algorithm, I'm training a neural network. What's different from other learning problems is that I need to use the neural network itself to generate training data, and repeat this each time the network is updated. I run into a problem when I try to generate the training data in parallel.

The problem is that I can't tell Theano to use the GPU only while training: it will also try to use the GPU when generating training data, and that causes problems when the model is invoked from multiple processes.
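What I have in mind as a workaround (I'm not sure it's the right approach) is to force each worker onto the CPU by setting THEANO_FLAGS inside the child before Theano is imported there, and keep the GPU for the parent process that calls fit. A rough sketch of such a worker is below; it passes the architecture and weights instead of the model object itself, and it only helps if the parent delays its own Keras/Theano import (or the child processes are started fresh), since Theano reads the flags once at import time:

def runMultiEpisode_CPUWorker(modelJson, weights, queue, event, nEpisode):
    # The flags must be set before theano/keras is imported in this process,
    # so the worker compiles its graph on the CPU only.
    import os
    os.environ['THEANO_FLAGS'] = 'floatX=float32,device=cpu'
    from keras.models import model_from_json

    # Rebuild the network from its architecture and copy the weights in.
    qn = model_from_json(modelJson)
    qn.set_weights(weights)

    result = []
    for i in range(nEpisode):
        result.append(runEpisode(qn))

    queue.put(result)
    event.set()

In the parent I would then pass qn.to_json() and qn.get_weights() to the Process instead of qn.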

What's more, Theano won't run in multi-threaded mode even when I put THEANO_FLAGS='floatX=float32,device=cpu,openmp=True' OMP_NUM_THREADS=4 before the python command. This doesn't raise any error, but I can see that only one thread is running.
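One way I check whether those flags are actually being picked up is to print Theano's effective configuration right after the import:

import os
print os.environ.get('THEANO_FLAGS'), os.environ.get('OMP_NUM_THREADS')

import theano
print theano.config.device   # expected: 'cpu'
print theano.config.openmp   # expected: True
print theano.config.floatX   # expected: 'float32'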

Here is my code. It's a simplified version.

import numpy
from numpy import array
import copy
from time import time
import multiprocessing

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD
from keras.models import model_from_json

def runEpisode(qn):
    # Some codes that need qn.predict
    result = qn.predict(array([[1, 3]])) # That's just for demo

    return ([1, 2], 2) # Generated some training data, (X, Y)

def runMultiEpisode(qn, queue, event, nEpisode): # 'queue' is used to return result. 'event' is set when terminates.
    # Run several Episodes
    result = []

    for i in range(nEpisode):
        result.append(runEpisode(qn))

    # Return the result to main process
    queue.put(result)
    event.set()

def runEpisode_MultiProcess(nThread, qn, nEpisode):
    processes = []
    queues = []
    events = []

    rewardCount = 0.0

    # Start processes
    for i in range(nThread):
        queue = multiprocessing.Queue()
        event = multiprocessing.Event()
        p = multiprocessing.Process(target = runMultiEpisode, 
            args = (qn, queue, event, int(nEpisode/nThread)))
        p.start()

        processes.append(p)
        queues.append(queue)
        events.append(event)

    # Wait for result
    for event in events:
        event.wait()

    # Gather results
    result = []

    for queue in queues:
        result += queue.get()

    print 'Got', len(result), 'samples'

    return result

def train(qn, nEpisode):
    newqn = copy.copy(qn)

    # Generate training data
    print 'Running episodes'
    t = time()
    result = runEpisode_MultiProcess(2, qn, nEpisode)
    print 'Time:', time() - t

    # Fit the neural network
    print 'Begin fitting'
    t = time()
    newqn.fit([x[0] for x in result], [x[1] for x in result])

    return newqn

qn = Sequential([Dense(2, input_dim = 2, activation = 'sigmoid'),
    Dense(1, activation = 'linear'),])
qn.compile(optimizer =
    SGD(lr = 0.001,
        momentum = 0.9),
    loss = 'mse',
    metrics = [])

train(qn, 100)

So I have two questions:

  1. Can I tell Theano to use the GPU only when fitting the data?
  2. What might cause Theano not to use multithreading?

Edit: I find that Theano will use multithreading on my own machine, but not on a remote server. I'm wondering whether this is caused by some wrong configuration.
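To compare the two machines, this is the kind of check I'd run on both with the same flags (the script name is just a placeholder); my guess is the difference comes from how BLAS/OpenMP is set up rather than from my code, but that's only a guess:

# Run with: THEANO_FLAGS='floatX=float32,device=cpu,openmp=True' OMP_NUM_THREADS=4 python check_config.py
import theano
import numpy

print theano.config.openmp                    # is OpenMP enabled at all?
print theano.config.openmp_elemwise_minsize   # minimum tensor size before OpenMP kicks in
numpy.__config__.show()                       # which BLAS numpy is linked against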

Source: https://stackoverflow.com/questions/38526762/how-to-run-several-keras-neural-networks-in-parallel
