diskspace

Unicorn Eating Memory

こ雲淡風輕ζ, submitted on 2019-12-04 10:47:41
Question: I have an m1.small instance on Amazon with 8 GB of hard disk space, on which my Rails application runs. It runs smoothly for two weeks and after that it crashes, saying the memory is full. The app runs on Rails 3.1.1, unicorn and nginx. I simply don't understand what is taking 13G. I killed unicorn and the 'free' command shows some free memory, while df still says 100%. I rebooted the instance and everything started working fine.

free (before killing unicorn):

                   total     used     free  shared  buffers  cached
    Mem:         1705192  1671580    33612       0   321816  405288
    -/+ buffers/cache:    944476   760716
    Swap:         917500    50812   866688
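A common cause of "free shows memory, df still says 100%" is a deleted file that a running process (like unicorn holding a rotated log) keeps open: the blocks stay allocated until the last descriptor closes, which is why the reboot "fixed" it. A minimal sketch of the effect (paths are throwaway; on a real box `lsof +L1` lists such open-but-deleted files):

```shell
# Demonstrates why 'df' can stay at 100% after a log is deleted: a file
# removed while a process still holds it open keeps its disk blocks
# allocated until the last file descriptor closes.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1024 count=64 2>/dev/null  # 64 KB "log"
exec 3<"$tmpfile"   # keep a descriptor open, like unicorn holding its log
rm "$tmpfile"       # directory entry gone, blocks still counted by df
bytes=$(wc -c <&3)  # the data is still fully readable through the open fd
exec 3<&-           # closing the descriptor is what actually frees the space
echo "still-allocated bytes after rm: $bytes"
```

So before rebooting, restarting (or HUP-ing) the process that holds the deleted log usually releases the space.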

What is the unix command to see how much disk space there is and how much is remaining?

那年仲夏, submitted on 2019-12-04 10:02:55
Question: I'm looking for the equivalent of right-clicking on a drive in Windows and seeing the disk space used and remaining.

Answer 1: Look at the commands du (disk usage) and df (disk free).

Answer 2: Use the df command: df -h

Answer 3: df -g . The -g option gives sizes in GB blocks, and . means the current working directory.

Answer 4: I love doing du -sh * | sort -hr | less to see the largest entries first. (Note: it must be sort -h, not sort -n, to order the human-readable sizes that du -h prints.)

Answer 5: If you want to see how much space each folder occupies: du -sh *
    s – summarize
    h – human readable
    * – list of …
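Putting the answers above together, the distinction worth remembering is that df reports free space per mounted filesystem while du sums actual usage under a directory:

```shell
# df: free/used space per filesystem; du: usage per directory tree.
df -h           # human-readable sizes for every mounted filesystem
df -h .         # just the filesystem holding the current directory
du -sh ./*      # one total per entry in the current directory
du -sh ./* 2>/dev/null | sort -hr | head   # largest first (-h sorts 1.5G > 900M)
```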

How to clean up disk space on OpenShift when 'rhc tidy' does not have enough disk space?

感情迁移, submitted on 2019-12-04 08:53:59
My quota on OpenShift has been exceeded:

    Filesystem                          blocks  quota    limit  grace  files  quota  limit  grace
    /dev/mapper/EBSStore01-user_home01 1048572      0  1048576         6890       0  80000

I found in a different Stack Overflow question that the disk space can be cleaned with 'rhc app-tidy', but when I run this command I get the following error:

    Warning: Gear xxx is using 100.0% of disk quota
    Failed to execute: 'control start' for /var/lib/openshift/xxx/mysql

When I run the following command to see which files are using the most space:

    du -h * | sort -rh | head -50

I get the following output:

    605M wildfly
    320M git/mythings.git
    …
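The deadlock here is that tidy itself needs some free space to restart the cartridges. A hedged sketch of the usual way out: delete the largest regenerable files (logs, tmp) by hand first, then re-run tidy. The demo below uses a throwaway directory so it is self-contained; on a real gear you would point the find at your gear's data and log directories:

```shell
# Sketch: find the biggest deletable files first, then tidy can run.
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/server.log" bs=1024 count=512 2>/dev/null  # fake fat log
printf 'small' > "$demo/app.conf"
# Largest candidates, same spirit as du -h | sort -rh above:
big=$(find "$demo" -type f -size +100k)
echo "$big"
# On the real gear, after removing such files by hand:
#   git gc                  # inside the repo, repacks loose objects
#   rhc app-tidy -a <app>   # should now have room to run (<app> is your app name)
```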

Getting “No space left on device” for approx. 10 GB of data on EMR m1.large instances

一曲冷凌霜, submitted on 2019-12-04 06:47:30
I am getting a "No space left on device" error when running my Amazon EMR jobs using m1.large as the instance type for the Hadoop instances created by the job flow. The job generates at most approx. 10 GB of data, and the capacity of an m1.large instance is supposed to be 420 GB × 2 (according to: EC2 instance types). I am confused how just 10 GB of data could lead to a "disk space full" kind of message. I am aware that this kind of error can also be generated if we have completely exhausted the total number of inodes allowed on the filesystem, but that is …
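Two quick checks narrow this down: ENOSPC is raised when either blocks or inodes run out, and on EC2 instance types like this the large ephemeral volumes are typically mounted separately (e.g. at /mnt) rather than at /, so a job writing to the small root volume can fill it long before reaching 10 GB of useful output:

```shell
# Is it blocks or inodes, and on which filesystem?
df -h   # block usage: is / at 100% while the big ephemeral mount is empty?
df -i   # inode usage: IUse% at 100% also produces "No space left on device"
```

If / is the full one, pointing hadoop.tmp.dir (and any scratch paths) at the large mount is the usual fix.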

Does Postgres rewrite entire row on update?

心已入冬, submitted on 2019-12-04 03:44:01
Question: We run Postgres 9.0 on Windows 2008 Server. A large table contains a bytea column storing binary data ranging from 0-5 MB per row:

    CREATE TABLE files (
      file_id serial NOT NULL,
      data bytea NOT NULL,
      on_disk boolean,
      CONSTRAINT files_pkey PRIMARY KEY (file_id)
    )

Recently we have been updating the on_disk field for each row (not touching the data field). We believe this has eaten up space in our temp tablespace (or something), for two reasons: 1) we started receiving this error …
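Background that explains the symptom (a sketch of the mechanics, not specific to this installation): PostgreSQL's MVCC means every UPDATE writes a new version of the whole row and leaves the old version as a dead tuple until VACUUM reclaims it, so updating on_disk for every row can roughly double the table's footprint. Large bytea values, however, are normally TOASTed out-of-line and are not copied when only on_disk changes.

```sql
-- Each matched row gets a new version; the old versions are dead tuples.
-- The TOASTed bytea payload is not rewritten when only on_disk changes.
UPDATE files SET on_disk = true;

-- Inspect the bloat, then reclaim the dead row versions:
SELECT pg_size_pretty(pg_total_relation_size('files'));
VACUUM VERBOSE files;   -- marks dead space reusable (VACUUM FULL rewrites the table)
```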

How can I reduce the size of a Subversion repository?

点点圈, submitted on 2019-12-04 01:10:07
I have a pair of svn repositories which are significantly larger than the others. They're not too big for svn, but they're taking up a lot of disk space I'd rather be using for something else. What strategies are available for reducing the disk use of svn repositories? I tried the "removing dead transactions" section described here, but that didn't get me anywhere. What else should I try? ETA: Is this question better asked on Server Fault?

Answer: If the disk space used is more important to you than the version history, then you could make a clean checkout and reimport your projects into a new repository.
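Between "delete all history" and "leave it alone" there is a middle ground: rebuild the repository from a filtered dump. A hedged sketch (the repository paths and the excluded path are placeholders, and the whole thing is guarded so it is a no-op on machines without svnadmin):

```shell
# Sketch: shrink an svn repository by replaying a filtered dump into a
# fresh repository. OLD/NEW and /old-big-assets are placeholder names.
OLD=/var/svn/bigrepo
NEW=/var/svn/bigrepo-slim
if command -v svnadmin >/dev/null 2>&1; then
    svnadmin create "$NEW"
    # Exclude a heavy path no longer needed; even with no filter at all,
    # a plain dump | load repacks the store and often shrinks it:
    svnadmin dump "$OLD" | svndumpfilter exclude /old-big-assets \
        | svnadmin load -q "$NEW"
fi
```

Afterwards, point clients at the new repository (relocating working copies), since UUIDs and revision numbers may differ.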

Get available diskspace in ruby

此生再无相见时, submitted on 2019-12-03 14:51:00
Question: What is the best way to get disk-space information with Ruby? I would prefer a pure Ruby solution. If that is not possible (even with additional gems), it could also use any command available in a standard Ubuntu desktop installation and parse the information into Ruby.

Answer 1: You could use the sys-filesystem gem (cross-platform friendly):

    require 'sys/filesystem'
    stat = Sys::Filesystem.stat("/")
    mb_available = stat.block_size * stat.blocks_available / 1024 / 1024

Answer 2: How about simply:

    spaceMb_i = `df -m …
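For the shell-out route in Answer 2, this is what the backticked df call would be parsing (a sketch; -P keeps each filesystem on a single line so field positions are stable, and field 4 of the data line is the available space):

```shell
# Megabytes available on the root filesystem, as a bare number a Ruby
# backtick call could capture directly.
avail_mb=$(df -P -m / | awk 'NR == 2 {print $4}')
echo "MB available on /: $avail_mb"
```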

Git disk usage per branch

て烟熏妆下的殇ゞ, submitted on 2019-12-03 13:15:58
Do you know if there is a way to list the space usage of a git repository per branch? (like df or du would) By "the space usage" for a branch I mean "the space used by the commits which are not yet shared across the other branches of the repository".

Chronial: This doesn't have a clean answer. If you look at the commits contained only in a specific branch, you get a list of blobs (basically file versions). You would then have to check whether those blobs are part of any of the commits in the other branches. After doing that, you will have a list of blobs that are only part of your branch.
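One way to approximate the procedure Chronial describes is to sum the sizes of objects reachable from a branch but from no other branch. The demo below builds a throwaway repository so the pipeline is self-contained; on a real repository you would drop the setup lines and substitute your own branch names:

```shell
# Approximate "bytes unique to branch 'feature'": objects reachable from
# feature but not from the base branch, with their sizes summed.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=a@example.com -c user.name=tmp \
    commit -q --allow-empty -m base
base=$(git -C "$repo" symbolic-ref --short HEAD)   # master or main
git -C "$repo" checkout -q -b feature
head -c 4096 /dev/zero > "$repo/blob.bin"          # 4 KB file only on feature
git -C "$repo" add blob.bin
git -C "$repo" -c user.email=a@example.com -c user.name=tmp commit -q -m big
size=$(git -C "$repo" rev-list --objects feature "^$base" \
       | awk '{print $1}' \
       | git -C "$repo" cat-file --batch-check='%(objectsize)' \
       | awk '{s += $1} END {print s}')
echo "feature-only bytes: $size"
```

Note this is only an estimate of on-disk cost: packfile delta compression means the actual space freed by deleting the branch can be much smaller than the summed object sizes.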

Easiest way to simulate no free disk space situation?

筅森魡賤, submitted on 2019-12-03 08:14:55
Question: I need to test my web app in a scenario where there's no disk space remaining, i.e. I cannot write any more files. But I don't want to fill my hard drive with junk just to make sure there's really no space left. What I want is to simulate this situation within a particular process (actually, a PHP app). Indeed, temporarily prohibiting disk writes for that process would be enough. What's the easiest way to do this? I'm using Mac OS X 10.6.2 with the built-in Apache/PHP bundle. Thanks. Edit: Disk free …
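Two cheap ways to make writes fail without filling the real disk (both sketches; the portable one uses the ulimit shell builtin, whose -f unit is 512-byte blocks):

```shell
# (1) Portable: cap the maximum file size for a subshell so any larger
#     write fails for the process, much like a full disk would.
( ulimit -f 8                                    # cap files at 4 KB
  dd if=/dev/zero of=/tmp/enospc_test bs=1024 count=100 2>/dev/null
)
status=$?                                        # non-zero: the write was cut off
rm -f /tmp/enospc_test
# (2) macOS-specific (the asker's platform, untested sketch): mount a tiny
#     disk image and point the app's writable directory at it:
#       hdiutil create -size 5m -fs HFS+ -volname tiny /tmp/tiny.dmg
#       hdiutil attach /tmp/tiny.dmg   # appears at /Volumes/tiny
echo "write attempt exit status: $status"
```

The disk-image variant is the closer simulation for a PHP app, since writes to the tiny volume fail with a genuine ENOSPC rather than a file-size signal.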
