chunking

F# array_chunk for Sequence

Submitted by 十年热恋 on 2019-11-28 00:58:53
Question: I'm having some trouble making a sequence. Basically I need to chop a sequence into a sequence of arrays. Seq.windowed almost does it, but I don't want duplicate elements. I can get what I want by reading everything into an array first, but I'd rather use a sequence: let array_chunk s (a:int[]) = Array.init (a.Length / s) (fun i -> Array.sub a (i * s) s) and then someSequence |> Seq.to_array |> array_chunk 5. Answer 1: Here's a nice imperative one that'll work with seq and generate arrays of any size. …
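As a point of reference, here is a hedged sketch (not the thread's truncated answer) of a lazy, seq-friendly chunker in F#; it buffers into a ResizeArray and yields a smaller final array for any leftover elements. Later versions of FSharp.Core also provide Seq.chunkBySize for the same need.

    // Minimal sketch, assuming chunks of size n and arbitrary input seqs.
    let chunkSeq n (source: seq<'a>) =
        seq {
            let buffer = ResizeArray<'a>(n)
            for item in source do
                buffer.Add item
                if buffer.Count = n then
                    yield buffer.ToArray()
                    buffer.Clear()
            if buffer.Count > 0 then yield buffer.ToArray()
        }

    // usage: Seq.initInfinite id |> chunkSeq 5 |> Seq.take 3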

Splitting a File into Chunks with Javascript

Submitted by 旧街凉风 on 2019-11-28 00:42:26
Question: I'm trying to take a single File object and split it into chunks of a specified size (1 MB chunks in my example). I figure out how many chunks it would take, then I slice the file starting from the offset (current chunk index * chunk size) and slice off one chunk size. My first slice comes out properly at 1 MB, but every subsequent slice turns out to be 0 bytes. Any ideas why? Working codepen here: http://codepen.io/ngalluzzo/pen/VvpYKz
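A hedged sketch of the slicing logic being described (not the asker's codepen code): Blob.slice takes absolute start and end byte offsets, and passing the chunk size itself as the second argument instead of offset + chunkSize is a common way to get empty slices after the first one.

    // Minimal sketch, assuming a File/Blob "file" from an <input type="file">.
    function sliceFile(file, chunkSize = 1024 * 1024) {
      const chunks = [];
      for (let offset = 0; offset < file.size; offset += chunkSize) {
        // slice(start, end) uses byte offsets; clamp the last chunk to file.size
        chunks.push(file.slice(offset, Math.min(offset + chunkSize, file.size)));
      }
      return chunks;
    }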

split file on Nth occurrence of delimiter

Submitted by 强颜欢笑 on 2019-11-27 22:19:45
Is there a one-liner to split a text file into pieces/chunks after every Nth occurrence of a delimiter? Example (the delimiter below is "+"): entry 1 some more + entry 2 some more even more + entry 3 some more + entry 4 some more + ... There are several million entries, so splitting on every occurrence of the delimiter "+" is a bad idea; I want to split on, say, every 50,000th instance of "+". The Unix commands "split" and "csplit" just don't seem to do this... FatalError answered: Using awk you could: awk '/^\+$/ { delim++ } { file = sprintf("chunk%s.txt", int(delim / 50000)); print >> file; }' < …
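For reference, a runnable form of the same awk idea, with an assumed input file name since the original command is cut off after the input redirect; it starts a new chunk file after every 50,000 lines that consist solely of "+":

    # entries.txt is an assumed file name; "delim" advances on each
    # delimiter-only line, and int(delim / 50000) picks the current chunk file.
    awk '/^\+$/ { delim++ } { file = sprintf("chunk%s.txt", int(delim / 50000)); print >> file }' entries.txt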

Download file in chunks (Windows Phone)

Submitted by 喜你入骨 on 2019-11-27 20:53:05
Question: In my application I can download some media files from the web. Normally I used the WebClient.OpenReadCompleted method to download, decrypt, and save the file to IsolatedStorage. It worked well and looked like this: private void downloadedSong_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e, SomeOtherValues someOtherValues) // delegate, uses additional values { // Some preparations try { if (e.Result != null) { using (isolatedStorageFile = IsolatedStorageFile.GetUserStoreForApplication() …
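The handler above is cut off; the following is a hedged sketch (not the asker's actual code) of the general pattern it describes: copy the response stream into isolated storage in fixed-size chunks rather than buffering the whole file. The target file name and buffer size are illustrative choices.

    // Minimal sketch of a chunked copy from OpenReadCompleted into IsolatedStorage.
    private void DownloadedSong_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
    {
        if (e.Error != null || e.Result == null) return;

        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var target = store.CreateFile("downloaded_song.mp3"))   // assumed name
        using (var source = e.Result)
        {
            var buffer = new byte[64 * 1024];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                // decrypt the chunk here if needed before writing it out
                target.Write(buffer, 0, read);
            }
        }
    }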

Python fastest way to read a large text file (several GB) [duplicate]

Submitted by 孤街浪徒 on 2019-11-27 19:42:49
This question already has an answer here: How to read a large file line by line (10 answers). I have a large text file (~7 GB) and I am looking for the fastest way to read it. I have been reading about several approaches, such as reading chunk by chunk, in order to speed up the process. For example, effbot suggests: # File: readline-example-3.py file = open("sample.txt") while 1: lines = file.readlines(100000) if not lines: break for line in lines: pass # do something, which is said to process 96,900 lines of text per second. Other authors suggest using islice(): from itertools import …
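The excerpt cuts off at the itertools import; as a hedged sketch of the two approaches it mentions, plain iteration over the file object and islice-based batching look roughly like this (file name and batch size are illustrative):

    from itertools import islice

    def process(line):
        pass  # placeholder for the per-line work

    # Simplest: iterate the file object directly; Python buffers the reads.
    with open("sample.txt") as f:
        for line in f:
            process(line)

    # Or pull fixed-size batches of lines with islice.
    with open("sample.txt") as f:
        while True:
            batch = list(islice(f, 100000))
            if not batch:
                break
            for line in batch:
                process(line)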

In Clojure, are lazy seqs always chunked?

Submitted by 烈酒焚心 on 2019-11-27 15:57:53
Question: I was under the impression that lazy seqs were always chunked. => (take 1 (map #(do (print \.) %) (range))) (................................0) As expected, 32 dots are printed because the lazy seq returned by range is chunked into 32-element chunks. However, when instead of range I try this with my own function get-rss-feeds, the lazy seq is no longer chunked: => (take 1 (map #(do (print \.) %) (get-rss-feeds r))) (."http://wholehealthsource.blogspot.com/feeds/posts/default") Only one …
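The question is cut off, but the behaviour it describes comes down to how the source seq is built, not to map or take; a hedged sketch contrasting a chunked source (range) with an element-at-a-time lazy-seq:

    ;; Hedged sketch: chunking is a property of how the seq is produced.
    ;; range yields chunked seqs (chunks of 32); a hand-rolled lazy-seq built
    ;; with cons is realized one element at a time.
    (defn unchunked-nums [n]
      (lazy-seq (when (pos? n)
                  (cons n (unchunked-nums (dec n))))))

    (take 1 (map #(do (print \.) %) (range 100)))          ;; REPL prints 32 dots
    (take 1 (map #(do (print \.) %) (unchunked-nums 100))) ;; REPL prints 1 dot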

What is the best way to chop a string into chunks of a given length in Ruby?

Submitted by 只愿长相守 on 2019-11-27 09:43:07
Question: I have been looking for an elegant and efficient way to chunk a string into substrings of a given length in Ruby. So far, the best I could come up with is this: def chunk(string, size) (0..(string.length-1)/size).map{|i| string[i*size, size]} end >> chunk("abcdef",3) => ["abc", "def"] >> chunk("abcde",3) => ["abc", "de"] >> chunk("abc",3) => ["abc"] >> chunk("ab",3) => ["ab"] >> chunk("",3) => [] You might want chunk("", n) to return [""] instead of []. If so, just add this as the first line …
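The excerpt ends before the suggested tweak; for comparison, a hedged sketch of two common one-liners for the same job (note that . does not match newlines unless the regex has the /m flag):

    # Regex scan: greedy chunks of up to 3 characters, keeping the remainder.
    "abcdefgh".scan(/.{1,3}/)                 # => ["abc", "def", "gh"]
    # Enumerable slicing over the characters, then re-joining each slice.
    "abcde".chars.each_slice(3).map(&:join)   # => ["abc", "de"]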

Large file uploading to ASP.NET MVC

Submitted by 随声附和 on 2019-11-27 09:33:17
Question: I need a way to upload large files (600 MB to 4 GB) in an ASP.NET MVC website. Currently I am using SWFUpload; it works well enough, but it is a huge hit on the webserver because it sends the file in one big upload, plus I have to configure web.config to allow a file that huge, which is a big security risk. In the past, when I was doing Web Forms development, I used NeatUpload, which breaks the file into chunks and uploads them individually. I am looking for a way to upload large files in …
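The question is cut off; as a hedged sketch only (not NeatUpload's or SWFUpload's API), a chunked upload usually pairs a client that posts one slice at a time with an MVC action that appends each slice to a temp file, so the per-request size limit in web.config can stay small. The route, parameter names, and temp path below are assumptions.

    // Inside a Controller; System.IO and System.Web.Mvc are assumed to be referenced.
    [HttpPost]
    public ActionResult UploadChunk(string fileId, int chunkIndex, HttpPostedFileBase chunk)
    {
        var path = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileId + ".part");
        // The first chunk creates the file; later chunks append to it.
        using (var target = new FileStream(path, chunkIndex == 0 ? FileMode.Create : FileMode.Append))
        {
            chunk.InputStream.CopyTo(target);
        }
        return Json(new { received = chunkIndex });
    }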

Efficient (memory-wise) function for repeated distance matrix calculations AND chunking of extra large distance matrices

Submitted by 孤者浪人 on 2019-11-27 06:26:22
Question: I wonder if anyone could have a look at the following code and minimal example and suggest improvements, in particular regarding efficiency when working with really large data sets. The function takes a data.frame, splits it by a grouping variable (a factor), and then calculates the distance matrix for all the rows in each group. I do not need to keep the distance matrices, only some statistics (i.e. the mean, the histogram, ...); after that they can be discarded. I don't know much about …
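The question is cut off; the pattern it describes can be sketched roughly as below (hedged: the column arguments and the choice of statistics are assumptions, and each group's distance matrix is dropped as soon as its summaries are taken):

    # Split by the grouping factor, compute dist() per group, keep only summaries.
    group_dist_stats <- function(df, group_col, value_cols) {
      lapply(split(df[, value_cols], df[[group_col]]), function(block) {
        d <- as.numeric(dist(block))        # this group's distances as a plain vector
        if (length(d) == 0) return(NULL)    # single-row groups have no distances
        list(mean = mean(d), hist = hist(d, plot = FALSE))
      })
    }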

Is there an elegant way to process a stream in chunks?

Submitted by …衆ロ難τιáo~ on 2019-11-27 05:31:57
Question: My exact scenario is inserting data into a database in batches, so I want to accumulate DOM objects and then flush them every 1000. I implemented it by putting code in the accumulator to detect fullness and then flush, but that seems wrong: the flush control should come from the caller. I could convert the stream to a List and then use subList in an iterative fashion, but that too seems clunky. Is there a neat way to take action every n elements and then continue with the stream while only processing the …
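The question is cut off; a hedged, plain-JDK sketch of one way to keep the flush decision with the caller is to drive the stream through its iterator and hand out full batches to a callback (the class, method, and "dao::insertAll" names are illustrative):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.function.Consumer;
    import java.util.stream.Stream;

    final class Batcher {
        // Pulls the stream through its iterator and invokes flush every batchSize
        // elements, plus once more for any trailing partial batch.
        static <T> void inBatches(Stream<T> stream, int batchSize, Consumer<List<T>> flush) {
            Iterator<T> it = stream.iterator();
            List<T> batch = new ArrayList<>(batchSize);
            while (it.hasNext()) {
                batch.add(it.next());
                if (batch.size() == batchSize) {
                    flush.accept(batch);
                    batch = new ArrayList<>(batchSize);
                }
            }
            if (!batch.isEmpty()) {
                flush.accept(batch);
            }
        }
    }

    // usage: Batcher.inBatches(domObjects.stream(), 1000, dao::insertAll);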