out-of-memory

Compress large file using SharpZipLib causing Out Of Memory Exception

大城市里の小女人 submitted on 2020-01-24 23:59:27
Question: I have a 453 MB XML file which I'm trying to compress into a ZIP using SharpZipLib. Below is the code I'm using to create the zip, but it's causing an OutOfMemoryException. The same code successfully compresses a 428 MB file. Any idea why the exception is happening? My system has plenty of memory available. public void CompressFiles(List<string> pathnames, string zipPathname) { try { using (FileStream stream = new FileStream(zipPathname, FileMode.Create, FileAccess.Write, …
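
One general technique for compressing a large file without holding it all in memory is to copy it through a small fixed-size buffer into the compressed output stream. The sketch below shows that chunked-streaming pattern in Java (java.util.zip) rather than C#/SharpZipLib, since Java is the language used elsewhere in this listing; the class name and buffer size are illustrative assumptions, not the original poster's code.

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipStreamSketch {
    public static void compressFiles(List<String> pathnames, String zipPathname) throws IOException {
        byte[] buffer = new byte[64 * 1024];                        // bounded buffer, never the whole file
        try (ZipOutputStream zos = new ZipOutputStream(
                new BufferedOutputStream(new FileOutputStream(zipPathname)))) {
            for (String pathname : pathnames) {
                File f = new File(pathname);
                zos.putNextEntry(new ZipEntry(f.getName()));        // one archive entry per input file
                try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        zos.write(buffer, 0, n);                    // copy in 64 KB chunks
                    }
                }
                zos.closeEntry();
            }
        }
    }
}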

What are the workaround options for python out of memory error?

浪子不回头ぞ submitted on 2020-01-24 17:36:24
Question: I am reading an x,y,z point file (LAS) into Python and have run into memory errors. I am interpolating unknown points between known points for a project I am working on. I began working with small files (< 5,000,000 points) and was able to read/write to a numpy array and Python lists with no problem. I have received more data to work with (> 50,000,000 points) and now my code fails with a MemoryError. What are some options for handling such large amounts of data? I do not have to load all data …

Hadoop streaming “GC overhead limit exceeded”

无人久伴 submitted on 2020-01-24 12:20:08
Question: I am running this command: hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp -input "<input dir>" -output "<output dir>" -mapper "grep 20151026" -reducer "wc -l" where <input dir> is a directory with many Avro files, and getting this error: Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded at org.apache.hadoop.hdfs.protocol.DatanodeID.updateXferAddrAndInvalidateHashCode(DatanodeID.java:287) at org.apache.hadoop.hdfs.protocol.DatanodeID.<init>(DatanodeID.java:91) …

Python memory error when memory is available

戏子无情 submitted on 2020-01-24 00:27:08
Question: I have a Python program that reads lines of files and analyzes them. The program intentionally reads many lines into RAM. The program started getting a MemoryError while appending a line (as str) to a list. When I check Task Manager (the program runs on Windows 10), I see that the program's memory usage is at 1635 MB (stable) and the machine's total memory use is below 50%. I read that Python does not limit memory, so what could be the reason? Technical details: I use Python 3.6 …

Out of memory exception in my code

天涯浪子 submitted于 2020-01-23 16:50:52
Question: I am running code for long hours as part of a stress test against an Oracle DB, using Java version "1.4.2". In a nutshell, what I am doing is: while(true) { byte[] data = new byte[1000]; // allocate some memory as a blob stmt = fConnection.prepareStatement(query); // compile an insert query which uses the above blob stmt.execute(); // insert this blob row into the database stmt.close(); } Now I want to run this test for 8-10 hrs. However, apparently after inserting about 15 million records …
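
A minimal sketch of the statement-reuse pattern for a loop like the one above, assuming a hypothetical STRESS_TEST(ID, DATA) table and an already open Connection (both illustrative, not from the post): the INSERT is prepared once outside the loop and closed in a finally block, so each iteration only allocates the 1 KB payload. Newer language features are avoided to stay close to the Java 1.4 setting mentioned in the post.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BlobInsertStress {
    static void insertForever(Connection fConnection) throws SQLException {
        PreparedStatement stmt = fConnection.prepareStatement(
                "INSERT INTO STRESS_TEST (ID, DATA) VALUES (?, ?)");
        try {
            for (long i = 0; ; i++) {
                byte[] data = new byte[1000];    // the 1 KB blob payload from the post
                stmt.setLong(1, i);
                stmt.setBytes(2, data);
                stmt.execute();                  // insert one blob row per iteration
            }
        } finally {
            stmt.close();                        // always release the statement
        }
    }
}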

Gradle java.lang.OutOfMemoryError: Metaspace

ⅰ亾dé卋堺 submitted on 2020-01-23 12:26:46
Question: I am currently working on a Spring Boot 2.1 project configured with Gradle 5.2.1, but I get an out-of-memory error when building the project and cannot understand the exact reason. Please find the attached log: Caused by: org.gradle.cache.CacheOpenException: Could not open proj generic class cache for build file '/Users/mac/project/build.gradle' (/Users/mac/.gradle/caches/5.2.1/scripts/eajdx6l75dt1ypyljdsfupplm/proj/proj3ca90766b0adfce53d4b035e7e9dc5fe). at org.gradle.cache.internal …

OutOfMemoryException when a lot of memory is available

ぃ、小莉子 submitted on 2020-01-22 17:23:10
Question: We have an application running on 5 server nodes (16 cores, 128 GB memory each) that loads almost 70 GB of data on each machine. The application is distributed and serves concurrent clients, so there is a lot of socket usage. Similarly, for synchronization between multiple threads, a few synchronization techniques are used, mostly System.Threading.Monitor. Now the problem is that while the application is running and data is traveling between these server …

Java Out of memory automatic heap dump file name

家住魔仙堡 submitted on 2020-01-22 10:36:04
Question: I have several Java processes and I am trying to manage the heap dumps created when an OOM error occurs. By "manage" I mean: name the heap dump differently based on the originating process, and delete older heap dumps to preserve disk space. When dumping the heap on OOM with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp, the JVM creates a file named java_pidXXXX.hprof in the specified /tmp folder (where XXXX is the PID of the process). Is there any way to specify a different …
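
Two points worth noting alongside this question: -XX:HeapDumpPath also accepts a full file name (e.g. -XX:HeapDumpPath=/tmp/myapp.hprof) rather than just a directory, and a process can write a dump with any name it likes through the HotSpot diagnostic MXBean. The sketch below shows the programmatic route; the /tmp path, the naming scheme, and the Java 9+ ProcessHandle call are illustrative assumptions, not from the original post.

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes an .hprof file whose name encodes the PID and a timestamp.
    public static void dump() throws Exception {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        String file = "/tmp/myservice-" + ProcessHandle.current().pid()
                + "-" + System.currentTimeMillis() + ".hprof";     // target file must not already exist
        bean.dumpHeap(file, true);                                  // true = dump only live objects
    }
}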

Java StackOverflowError thrown in infinite loop

空扰寡人 submitted on 2020-01-22 02:15:30
Question: I have the following function that starts a jsvc daemon for receiving UDP messages: @Override public void start() throws Exception { byte[] buf = new byte[1000]; DatagramPacket dgp = new DatagramPacket(buf, buf.length); DatagramSocket sk = new DatagramSocket(1000); sk.setSoTimeout(0); byte[] rcvMsg = null; run(sk, dgp, rcvMsg); } With a timeout of 0, the socket blocks until another message comes in. This is what triggers the continuous run through the following while loop: …
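
A minimal loop-based sketch of the receive path (assuming, as the title and text suggest, that the original code re-enters run() for every packet, a common cause of StackOverflowError): handling packets in a plain while loop keeps the stack depth constant. The message handling below is a placeholder.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpDaemon {
    public void start() throws Exception {
        byte[] buf = new byte[1000];
        DatagramPacket dgp = new DatagramPacket(buf, buf.length);
        DatagramSocket sk = new DatagramSocket(1000);
        sk.setSoTimeout(0);                          // 0 = block until a packet arrives
        try {
            while (true) {
                sk.receive(dgp);                     // blocks here; no recursive call needed
                String rcvMsg = new String(dgp.getData(), 0, dgp.getLength());
                // ... handle rcvMsg ...
                dgp.setLength(buf.length);           // reset before reusing the packet
            }
        } finally {
            sk.close();
        }
    }
}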