Access File through multiple threads

Backend · Open · 10 answers · 778 views

天涯浪人 · 2021-01-31 10:52

I want to access a large file (file size may vary from 30 MB to 1 GB) through 10 threads and then process each line in the file and write them to another file through 10 threads

10 Answers
  • 2021-01-31 11:22

    Be aware that the ideal number of threads is limited by the hardware architecture and other factors (you could consider letting a thread pool work out the best number of threads for you). Assuming that "10" is a good number, we proceed. =)

    If you are looking for performance, you could do the following:

    • Read the file using the threads you have and process each one according to your business rule. Keep one control variable that indicates the next expected line to be inserted on the output file.

    • If the next expected line is done processing, append it to a buffer (a Queue) (it would be ideal if you could find a way to insert directly into the output file, but you would have lock problems). Otherwise, store this "future" line inside a binary-search-tree, ordering the tree by line position. A binary-search-tree gives you a time complexity of O(log n) for searching and inserting, which is really fast for your context. Continue to fill the tree until the next "expected" line is done processing.

    • Activate the thread that will be responsible for opening the output file, consuming the buffer periodically, and writing the lines into the file.

    Also, keep track of the smallest ("minor") node of the BST still waiting to be written to the file. You can use it to check whether a future line could be inside the BST before actually searching for it.

    • When the next expected line is done processing, insert it into the Queue and check whether its successor is inside the binary-search-tree. If it is, remove that node from the tree, append its content to the Queue, and repeat the check for the following line.
    • Repeat this procedure until the whole file is done processing, the tree is empty, and the Queue is empty.

    This approach costs:

    • O(n) to read the file (but parallelized)
    • O(1) to insert each ordered line into the Queue
    • O(log n) each to insert into and remove from the binary-search-tree
    • O(n) to write the new file

    plus the costs of your business rule and I/O operations.
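    The bookkeeping described above can be sketched in Java, with a TreeMap standing in for the binary-search-tree and a List standing in for the writer's Queue. All class and method names here are illustrative, not from the answer itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Out-of-order processed lines are parked in a TreeMap (a red-black tree,
// O(log n) insert/lookup) until the next expected line number arrives.
public class OrderedCollector {
    private final TreeMap<Integer, String> pending = new TreeMap<>();
    private final List<String> output = new ArrayList<>(); // stands in for the writer's Queue
    private int nextExpected = 0;

    // Called by the processing threads; synchronized keeps the bookkeeping safe.
    public synchronized void submit(int lineNo, String processedLine) {
        if (lineNo != nextExpected) {
            pending.put(lineNo, processedLine); // a "future" line: park it in the tree
            return;
        }
        output.add(processedLine);
        nextExpected++;
        // Drain any parked lines that are now in sequence.
        while (!pending.isEmpty() && pending.firstKey() == nextExpected) {
            output.add(pending.pollFirstEntry().getValue());
            nextExpected++;
        }
    }

    public synchronized List<String> result() { return output; }

    public static void main(String[] args) {
        OrderedCollector c = new OrderedCollector();
        c.submit(2, "line2"); // finishes early, parked in the tree
        c.submit(0, "line0");
        c.submit(1, "line1"); // releases line2 as well
        System.out.println(c.result()); // [line0, line1, line2]
    }
}
```

    A TreeMap keyed by line position plays the same role as a hand-rolled BST while giving you `firstKey()` for the "minor expected node" check for free.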

    Hope it helps.

  • 2021-01-31 11:33

    I would start with three threads.

    1. a reader thread that reads the data, breaks it into "lines" and puts them in a bounded blocking queue (Q1),
    2. a processing thread that reads from Q1, does the processing and puts them in a second bounded blocking queue (Q2), and
    3. a writer thread that reads from Q2 and writes to disk.

    Of course, I would also ensure that the output file is on a physically different disk than the input file.

    If processing tends to be slower than the I/O (monitor the queue sizes), you could then start experimenting with two or more parallel "processors" that are synchronized in how they read and write their data.
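    A minimal sketch of this three-thread pipeline, using in-memory strings in place of real file I/O and a sentinel value to mark end-of-input. The class, the sentinel, and the uppercase "processing" step are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Three threads coupled by two bounded blocking queues. Bounded queues give
// natural back-pressure: a fast reader blocks on put() when the processor
// falls behind, instead of filling the heap.
public class ThreeStagePipeline {
    private static final String POISON = "__EOF__"; // sentinel marking end of input

    public static List<String> run(List<String> inputLines) {
        BlockingQueue<String> q1 = new ArrayBlockingQueue<>(100); // reader -> processor
        BlockingQueue<String> q2 = new ArrayBlockingQueue<>(100); // processor -> writer
        List<String> written = new ArrayList<>();                 // stands in for the output file

        Thread reader = new Thread(() -> {
            try {
                for (String line : inputLines) q1.put(line);
                q1.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread processor = new Thread(() -> {
            try {
                for (String line = q1.take(); !line.equals(POISON); line = q1.take())
                    q2.put(line.toUpperCase()); // placeholder for the real processing
                q2.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread writer = new Thread(() -> {
            try {
                for (String line = q2.take(); !line.equals(POISON); line = q2.take())
                    written.add(line); // a real writer would append to the output file
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        reader.start(); processor.start(); writer.start();
        try {
            reader.join(); processor.join(); writer.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return written;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("alpha", "beta"))); // [ALPHA, BETA]
    }
}
```

    Because each stage has exactly one thread, lines stay in their original order with no extra bookkeeping.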

  • 2021-01-31 11:33

    One possible way is to create a single thread that reads the input file and puts the lines it reads into a blocking queue. Several worker threads then wait for data from this queue and process it.

    Another possible solution is to split the file into chunks and assign each chunk to a separate thread.

    To avoid blocking you can use asynchronous I/O. You may also take a look at the Proactor pattern from Pattern-Oriented Software Architecture, Volume 2.
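    The first variant (single reader, several queue-fed workers) might look like this; the class name, queue capacity, and trim() "processing" are illustrative, and result order is not preserved here:

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// One reader fills a bounded blocking queue; a pool of workers drains it.
// Note: results come out in completion order, not file order.
public class ReaderWorkerPool {
    private static final String POISON = "__EOF__"; // sentinel marking end of input

    public static Queue<String> run(List<String> lines, int workers) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
        Queue<String> results = new ConcurrentLinkedQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < workers; i++) {
            pool.execute(() -> {
                try {
                    for (String line = queue.take(); !line.equals(POISON); line = queue.take())
                        results.add(line.trim()); // placeholder processing
                    queue.put(POISON); // pass the sentinel on so sibling workers stop too
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        try {
            // The "reader thread" (here simply the calling thread) feeds the queue.
            for (String line : lines) queue.put(line);
            queue.put(POISON);
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(" a ", " b "), 4).size()); // 2
    }
}
```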

  • 2021-01-31 11:35

    I have encountered a similar situation before and the way I've handled it is this:

    Read the file in the main thread line by line and submit the processing of each line to an executor. A reasonable starting point on ExecutorService is here. If you are planning on using a fixed number of threads, you might be interested in the Executors.newFixedThreadPool(10) factory method in the Executors class. The javadocs on this topic aren't bad either.

    Basically, I'd submit all the jobs, call shutdown, and then in the main thread continue writing to the output file, in order, from all the Futures that are returned. You can leverage the blocking nature of the Future's get() method to ensure ordering, but you really shouldn't use multithreading to write, just as you wouldn't use it to read. Makes sense?

    However, 1 GB data files? If I were you, I'd first be interested in meaningfully breaking those files down.

    PS: I've deliberately avoided code in the answer as I'd like the OP to try it himself. Enough pointers to the specific classes, API methods and an example have been provided.

  • 2021-01-31 11:39
    • You should abstract away the file reading. Create a class that reads the file and dispatches the content to a number of threads.

    The class shouldn't dispatch raw strings; it should wrap them in a Line class that contains meta information, e.g. the line number, since you want to keep the original sequence.

    • You need a processing class that does the actual work on the collected data. In your case there is no real work to do; the class just stores the information, but you could extend it someday to do additional things (e.g. reverse the string, append some other strings, ...)

    • Then you need a merger class, that does some kind of multiway merge sort on the processing threads and collects all the references to the Line instances in sequence.

    The merger class could also write the data back to a file, but to keep the code clean...

    • I'd recommend creating an output class that again abstracts away all the file handling.

    Of course, this approach needs a lot of main memory. If you are short on main memory, you'd need a stream-based approach that works more or less in place to keep the memory overhead small.


    UPDATE Stream-based approach

    Everything stays the same except:

    The reader thread pumps the read data into a Balloon. The balloon can hold a certain number of Line instances (the bigger the number, the more main memory you consume).

    The processing threads take Lines from the balloon, and the reader pumps more lines into the balloon as it empties.

    The merger class takes the lines from the processing threads as above and the writer writes the data back to a file.

    Maybe you should use FileChannel in the I/O threads, since it's better suited to reading big files and probably consumes less memory while handling the file (but that's just an educated guess).
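    The Line wrapper and the "balloon" from the update can be sketched like this; a balloon is essentially a bounded BlockingQueue of Line objects, and the class names and capacity are assumptions for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Line wraps the raw string with the metadata the merger needs to restore
// the original sequence; the balloon is a bounded queue whose capacity caps
// how many lines sit in main memory at once.
public class BalloonDemo {
    static final class Line {
        final long number;   // position in the source file, used by the merge step
        final String text;
        Line(long number, String text) { this.number = number; this.text = text; }
    }

    public static void main(String[] args) throws InterruptedException {
        // A balloon holding at most 1000 lines: the reader blocks on put()
        // when it is full and refills it as processors take() lines out.
        BlockingQueue<Line> balloon = new ArrayBlockingQueue<>(1000);
        balloon.put(new Line(0, "first line"));
        Line l = balloon.take();
        System.out.println(l.number + ": " + l.text); // 0: first line
    }
}
```

    The bounded capacity is exactly the knob the answer describes: a bigger balloon trades memory for fewer reader stalls.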

  • 2021-01-31 11:41

    Any sort of IO whether it be disk, network, etc. is generally the bottleneck.

    By using multiple threads you are exacerbating the problem as it is very likely only one thread can have access to the IO resource at one time.

    It would be best to use one thread to read, pass the data off to a pool of worker threads, and write directly from there. But again, if the workers write to the same place there will be bottlenecks, as only one can hold the lock at a time. That's easily fixed by passing the data to a single writer thread.

    In "short":

    A single reader thread writes to a BlockingQueue or the like; this gives the data a natural ordered sequence.

    Worker pool threads then wait on the queue for data, recording each item's sequence number.

    The worker threads write the processed data to another BlockingQueue, this time attaching the original sequence number, so that the writer thread can take the data and write it out in sequence.

    This will likely yield the fastest implementation possible.
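    The writer-side reordering this answer relies on can be sketched as follows: workers hand over (sequence number, text) pairs in whatever order they finish, and the writer parks early arrivals in a min-heap until the next sequence number surfaces. The class and field names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// The writer accepts (sequenceNo, text) pairs out of order and emits lines
// only when the next expected sequence number is at the top of the heap.
public class SequencedWriter {
    static final class Seq {
        final long no;
        final String text;
        Seq(long no, String text) { this.no = no; this.text = text; }
    }

    private final PriorityQueue<Seq> heap =
            new PriorityQueue<>((a, b) -> Long.compare(a.no, b.no));
    private final List<String> out = new ArrayList<>(); // stands in for the output file
    private long next = 0;

    // Called by the worker threads as they finish, in any order.
    public synchronized void accept(long no, String text) {
        heap.add(new Seq(no, text));
        while (!heap.isEmpty() && heap.peek().no == next) {
            out.add(heap.poll().text); // a real writer would append to the file here
            next++;
        }
    }

    public synchronized List<String> lines() { return out; }

    public static void main(String[] args) {
        SequencedWriter w = new SequencedWriter();
        w.accept(1, "b"); // early arrival, parked in the heap
        w.accept(2, "c");
        w.accept(0, "a"); // unblocks the whole run
        System.out.println(w.lines()); // [a, b, c]
    }
}
```

    In a full implementation the accept() calls would come from the workers draining the second BlockingQueue, but the reordering invariant is the same.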
