large-files

Javascript: Reading only the last X lines of a large server text file

Posted by 本小妞迷上赌 on 2019-12-10 21:34:16
Question: Today I was working with very large log files, which I display via lighttpd on my RPi. They grow bigger every day, so they will soon take a very long time to load. To prevent this, I figured I could add a button that reads, say, only the last 500 lines of the log file. I don't have much experience with JavaScript, but I think this is doable, right? Anyway, I couldn't find any good tutorial describing how to do this in JavaScript, although I found this in PHP…
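
The question asks for JavaScript, but a browser cannot efficiently read just the tail of a file sitting on the server; the usual approach is to do the work server-side (for example in a small CGI script behind lighttpd) by seeking backwards from the end of the file until enough newlines have been collected. A minimal sketch of that technique in Python; the log path and line count are illustrative assumptions:

    import os

    def tail(path, n=500, chunk_size=8192):
        """Return the last n lines of a file without reading the whole thing."""
        with open(path, 'rb') as f:
            f.seek(0, os.SEEK_END)
            pos = f.tell()
            data = b''
            # Read backwards in chunks until we have at least n newlines.
            while pos > 0 and data.count(b'\n') <= n:
                step = min(chunk_size, pos)
                pos -= step
                f.seek(pos)
                data = f.read(step) + data
            return [l.decode('utf-8', errors='replace')
                    for l in data.splitlines()[-n:]]

    if __name__ == '__main__':
        for line in tail('/var/log/mylog.log', 500):   # hypothetical path
            print(line)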

HTML5 - Canvas - Optimization for large images

Posted by 不羁岁月 on 2019-12-10 19:22:03
Question: I need to build an HTML5 canvas that contains a very large image, maybe up to 10-15 MB. My first idea was to split the image into several chunks that are loaded as the user moves horizontally through the canvas. Any thoughts on this idea? Is it a good one? Maybe I'm missing an optimization feature that is already implemented? Answer 1: You're spot on; such tiling is how most apps that serve large images (like Google Maps) work. Unfortunately, there aren't any other clever optimizations to be had…
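
To make the tiling idea concrete, here is a small sketch that pre-cuts the large image into fixed-size tiles server-side with Pillow; the canvas code can then fetch only the tiles that intersect the current viewport as the user pans. File names and the tile size are assumptions:

    from PIL import Image

    def make_tiles(src, out_prefix, tile_size=512):
        """Slice a large image into tile_size x tile_size PNG tiles."""
        img = Image.open(src)
        w, h = img.size
        for y in range(0, h, tile_size):
            for x in range(0, w, tile_size):
                box = (x, y, min(x + tile_size, w), min(y + tile_size, h))
                # Name each tile by its grid position so the client can
                # compute the URL of any tile from canvas coordinates.
                img.crop(box).save(f"{out_prefix}_{x // tile_size}_{y // tile_size}.png")

    make_tiles("huge.png", "tiles/huge", 512)   # hypothetical paths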

Getting the Notepad++ XML parsing error “Extra content at the end of the document” even though there is none

Posted by ╄→尐↘猪︶ㄣ on 2019-12-10 17:08:33
Question: I am getting the above error message when trying to validate my 55 MB XML file in Notepad++. The first error encountered is here (line 1441520 out of 22258651): [screenshot from Notepad++]. I have turned on Show all characters, and nothing suggests that there are any illegal characters at the end of the line; as the screenshot shows, there are no hidden characters other than CR+LF. EDIT: Below is a copy of the record that causes the parsing error in Notepad++: <?xml version=…
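
"Extra content at the end of the document" usually means the parser found data after the closing tag of the root element, most often because two XML documents were concatenated into one file; the truncated record above beginning with <?xml version= points in that direction, since a well-formed file may contain only one XML declaration, at the very start. A sketch that locates every XML declaration in a huge file without loading it into memory (the path is an assumption):

    def find_xml_declarations(path, chunk_size=1 << 20):
        """Return the byte offset of every '<?xml' in the file.
        More than one usually means concatenated documents."""
        offsets = []
        needle = b'<?xml'
        overlap = len(needle) - 1
        with open(path, 'rb') as f:
            base, prev_tail = 0, b''
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                data = prev_tail + chunk
                start = 0
                while (i := data.find(needle, start)) != -1:
                    offsets.append(base - len(prev_tail) + i)
                    start = i + 1
                # Keep the last few bytes: a declaration could straddle chunks.
                prev_tail = data[-overlap:]
                base += len(chunk)
        return offsets

    print(find_xml_declarations('big.xml'))   # hypothetical path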

Phonegap: Downloading large files on Windows Phones

Posted by 痴心易碎 on 2019-12-10 15:27:35
Question: I'm developing a mobile app with PhoneGap, and I need to download zip files that may be over 50 MB. I'm using Cordova's file-transfer plugin to do this job. On iOS and Android this works just fine, but on Windows Phone the application fails on files larger than 20 MB with: System.OutOfMemoryException: Insufficient memory to continue the execution of the program. The PhoneGap documentation has a line saying that downloading is supported only on iOS and…
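
The exception suggests the transfer is being buffered in memory rather than streamed to disk, which is exactly what breaks once files outgrow the app's memory budget. A common workaround on constrained platforms is to fetch the file in pieces with HTTP Range requests and append each piece to storage. As a language-agnostic illustration of that technique, a Python sketch (the URL is hypothetical, and the server must support Range):

    import urllib.error
    import urllib.request

    def download_in_chunks(url, dest, chunk_size=1 << 20):
        """Fetch url piece by piece with Range requests, appending to dest,
        so no more than chunk_size bytes are ever held in memory."""
        offset = 0
        with open(dest, 'wb') as out:
            while True:
                req = urllib.request.Request(
                    url, headers={'Range': f'bytes={offset}-{offset + chunk_size - 1}'})
                try:
                    with urllib.request.urlopen(req) as resp:
                        status = resp.status
                        data = resp.read()
                except urllib.error.HTTPError as e:
                    if e.code == 416:        # requested past end of file: done
                        break
                    raise
                out.write(data)
                offset += len(data)
                if status == 200 or len(data) < chunk_size:
                    break   # server ignored Range (200) or this was the last piece

    download_in_chunks('https://example.com/big.zip', 'big.zip')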

Generate a large file and send it

Posted by 青春壹個敷衍的年華 on 2019-12-10 15:25:01
Question: I have a rather large .csv file (up to 1 million lines) that I want to generate and send when a browser requests it. The code I currently have is (except that I don't actually generate the same data):

    import random
    import tornado.web

    class CSVHandler(tornado.web.RequestHandler):
        def get(self):
            self.set_header('Content-Type', 'text/csv')
            self.set_header('Content-Disposition', 'attachment; filename=dump.csv')
            self.write('lineNumber,measure\r\n')   # file header
            for line in range(0, 1000000):
                # the snippet was cut off here; assumed completion with a random measure
                self.write(','.join([str(line), str(random.random())]) + '\r\n')
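
The catch with a handler like this is that Tornado buffers everything passed to self.write() until get() returns, so the entire million-line CSV is built in memory before a single byte reaches the browser. The usual fix is to write in batches and flush each batch to the socket; a sketch of that pattern in coroutine style (the batch size is an assumption):

    import random
    import tornado.web

    class StreamingCSVHandler(tornado.web.RequestHandler):
        async def get(self):
            self.set_header('Content-Type', 'text/csv')
            self.set_header('Content-Disposition', 'attachment; filename=dump.csv')
            self.write('lineNumber,measure\r\n')
            for line in range(1000000):
                self.write('%d,%f\r\n' % (line, random.random()))
                if line % 10000 == 0:
                    await self.flush()   # push the buffered batch to the client
            await self.flush()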

Binary search over a huge file with unknown line length

Posted by 荒凉一梦 on 2019-12-10 10:38:28
Question: I'm working with huge CSV data files. Each file contains millions of records, each record has a key, and the records are sorted by their key. I don't want to scan the whole file when searching for certain data. I've seen this solution: Reading Huge File in Python. But it assumes that all lines in the file have the same length, which does not hold in my case. I thought about padding each line and then keeping a fixed line length, but I'd like to know if there is a better way to…
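
Padding works but wastes space; the standard alternative keeps the variable-length lines and re-synchronizes after every seek: jump to the middle byte, discard the partial line you landed in, read the next full record, and compare its key. A minimal sketch assuming one record per line, the key in the first comma-separated field, and keys sorted in byte order:

    import os

    def search_sorted_csv(path, target_key):
        """Binary-search a key-sorted CSV by byte offset; lines may vary in length."""
        target = target_key.encode()
        with open(path, 'rb') as f:
            f.seek(0, os.SEEK_END)
            lo, hi = 0, f.tell()
            while lo < hi:
                mid = (lo + hi) // 2
                f.seek(mid)
                f.readline()                  # re-sync to the next line start
                line = f.readline()
                key = line.split(b',', 1)[0] if line else None
                if key is not None and key < target:
                    lo = mid + 1
                else:
                    hi = mid
            f.seek(lo)
            if lo:                            # offset 0 is already a line start
                f.readline()
            for raw in f:
                key = raw.split(b',', 1)[0]
                if key == target:
                    return raw.decode().rstrip('\r\n')
                if key > target:
                    break
        return None

    print(search_sorted_csv('huge.csv', '12345'))   # hypothetical file and key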

Serving Large Protected Files in PHP/Apache

Posted by 与世无争的帅哥 on 2019-12-10 05:22:30
Question: I need to serve up large files (> 2 GB) from an Apache web server. The files are protected downloads, so I need some way to authorize the user. The CMS I'm using checks cookies against a MySQL database to verify the user. On the server, I have no control over max_execution_time and only limited control over memory_limit. My technique has been working for small files: after the user has been authorized in PHP (by the CMS), I use readfile() to serve the file, which is stored above the…
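
Two fixes are common here: hand the transfer back to Apache with mod_xsendfile (PHP does the cookie check, emits an X-Sendfile header, and exits before max_execution_time matters), or serve the file in small chunks so memory use stays flat regardless of file size. As a sketch of the chunked approach (shown in Python like the other sketches here; the framework wiring in the comment is hypothetical):

    def stream_file(path, chunk_size=8192):
        """Yield a large file piece by piece; a WSGI/framework response can
        iterate over this generator instead of buffering the whole file."""
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk

    # Hypothetical usage once the cookie/MySQL check has passed:
    # return Response(stream_file('/protected/big.iso'),
    #                 mimetype='application/octet-stream')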

Processing Huge Files In C#

Posted by 吃可爱长大的小学妹 on 2019-12-10 03:49:10
Question: I have a 4 GB file on which I want to perform a byte-based find and replace. I have written a simple program to do it, but it takes far too long (90+ minutes) for a single find and replace. A few hex editors I have tried can perform the task in under 3 minutes without loading the entire target file into memory. Does anyone know a method by which I can accomplish the same thing? Here is my current code: public int ReplaceBytes(string File, byte[] Find, byte[] Replace) { var Stream = new FileStream…
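
The slowness in code like this almost always comes from processing one byte at a time; the hex editors win because they scan large buffers. A sketch of the buffered technique in Python: read the file in big chunks, keep an overlap of len(find) - 1 bytes so matches spanning a chunk boundary are not missed, and write the result to a new file (with equal-length patterns you could patch in place instead):

    def replace_bytes(src, dst, find, replace, chunk_size=1 << 20):
        """Buffered find/replace over a huge binary file."""
        overlap = len(find) - 1
        with open(src, 'rb') as fin, open(dst, 'wb') as fout:
            tail = b''
            while True:
                chunk = fin.read(chunk_size)
                if not chunk:
                    fout.write(tail)          # flush whatever is left
                    break
                data = (tail + chunk).replace(find, replace)
                if overlap:
                    # Hold back the last bytes: they may begin a match that
                    # only completes in the next chunk.
                    tail = data[-overlap:]
                    fout.write(data[:-overlap])
                else:
                    tail = b''
                    fout.write(data)

    replace_bytes('big.bin', 'big.patched.bin', b'\xde\xad', b'\xbe\xef')  # hypothetical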

Is using istream::seekg too expensive?

Posted by 佐手、 on 2019-12-10 03:37:55
Question: In C++, how expensive is the istream::seekg operation? EDIT: How much can I get away with seeking around a file and reading bytes? What about frequency versus magnitude of offset? I am parsing a large file (4 GB), and I want to know whether it is necessary to consolidate some of my seekg calls. I would assume that the magnitude of the difference in file location plays a role: if you seek more than a page in memory away, it will impact performance, while small seeking is…
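
In practice, seekg itself is cheap (it updates the stream position and may discard the stream's internal buffer), and the dominant cost is whether the read that follows hits the OS page cache: small seeks within an already-cached region are nearly free, while large scattered seeks on a cold file pay for page faults or disk seeks. The question is easy to settle empirically for a given file and disk; a rough micro-benchmark sketch (Python for consistency with the other sketches here; the file path is an assumption):

    import os
    import random
    import time

    def time_reads(path, offsets, read_size=64):
        """Time a seek+read at each offset to compare access patterns."""
        with open(path, 'rb') as f:
            start = time.perf_counter()
            for off in offsets:
                f.seek(off)
                f.read(read_size)
            return time.perf_counter() - start

    size = os.path.getsize('big.dat')                             # hypothetical file
    near = [1000 + random.randrange(4096) for _ in range(10000)]  # clustered offsets
    far = [random.randrange(size) for _ in range(10000)]          # scattered offsets
    print('clustered:', time_reads('big.dat', near))
    print('scattered:', time_reads('big.dat', far))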

Parse XML in C#: combine XmlReader and LINQ to XML

Posted by 浪子不回头ぞ on 2019-12-09 23:00:37
Question: I have to parse a large XML file in C#. I use LINQ to XML. I have a structure like:

    <root>
      <node></node>
      <node></node>
    </root>

I would like to use XmlReader to loop over the nodes and LINQ to XML to work on each one, so that only the current node is in memory. Answer 1: You can do something like this:

    using System.Xml;
    using System.Xml.Linq;

    string path = @"E:\tmp\testxml.xml";
    using (var reader = XmlReader.Create(path))
    {
        bool isOnNode = reader.ReadToDescendant("node");
        while (isOnNode)
        {
            var element = (XElement)XNode.ReadFrom(reader);  // materializes only this <node>
            // ... work with 'element' here ...
            // ReadFrom leaves the reader just past the element; the re-positioning
            // below is an assumed completion of the truncated snippet.
            isOnNode = reader.NodeType == XmlNodeType.Element && reader.Name == "node"
                       || reader.ReadToNextSibling("node");
        }
    }