I run multiple spiders concurrently by posting new `start_urls` to scrapyd, and each scheduled job runs in its own separate process.
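For reference, this is roughly how I schedule each spider via scrapyd's `schedule.json` endpoint (the project and spider names here are placeholders; extra POST parameters like `start_urls` are passed through to the spider as arguments):

```python
import requests

# Schedule one spider run on a local scrapyd instance.
# "myproject" and "myspider" are placeholder names.
resp = requests.post(
    "http://localhost:6800/schedule.json",
    data={
        "project": "myproject",
        "spider": "myspider",
        # Passed to the spider as a spider argument.
        "start_urls": "http://example.com/page1",
    },
)
print(resp.json())  # e.g. {"status": "ok", "jobid": "..."}
```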
How can I get all of the crawled items from these separate processes together in memory?