How to resolve erratic disk usage statistics from du (Linux command)

Updated on September 3, 2017

Recently on my Linux server, one of the disk partitions was full, and I had to delete hundreds of MB of large files to clear up space. Even after clearing the space, the partition still showed as full, with no free space to save my files. To my surprise, I then noticed the disparity below:

The disk usage command (du) showed the used space as only 11G!

-bash-3.2# du -chs /myhome/
11G     /myhome/
11G     total
-bash-3.2#

The above command doesn’t include hidden dot files and directories. Use the command below to count the disk usage of hidden dot files and directories as well:

-bash-3.2# du -sch /myhome/.[!.]* /myhome/*

If you would further like to list the exact files and their absolute paths, use the command below (-x keeps du from crossing into other mounted filesystems):

-bash-3.2# du -h -x /myhome/*
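To see why the dot glob matters, here is a small, self-contained demonstration. The scratch directory and file names under /tmp are illustrative, not from the server above:

```shell
# Scratch directory for the demo; any writable path works.
D=/tmp/du-demo.$$
mkdir -p "$D"

# One hidden file and one regular file, 512 KB each.
dd if=/dev/zero of="$D/.hidden" bs=1024 count=512 2>/dev/null
dd if=/dev/zero of="$D/visible" bs=1024 count=512 2>/dev/null

# A plain glob misses the dot file; the .[!.]* glob picks it up.
PLAIN=$(du -sk "$D"/* | wc -l)
BOTH=$(du -sk "$D"/.[!.]* "$D"/* | wc -l)
echo "plain glob saw $PLAIN entry; dot-aware glob saw $BOTH"

rm -rf "$D"
```

The `.[!.]*` pattern matches names that start with a single dot, while deliberately skipping `.` and `..` themselves.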

But the df command showed the disk partition /myhome as full!

-bash-3.2$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p8      20G   14G  4.7G  75% /
/dev/cciss/c0d0p9     9.7G  1.3G  8.0G  14% /tmp
/dev/cciss/c0d0p3      30G  4.3G   24G  16% /usr
/dev/cciss/c0d0p10     44G   41G     0 100% /myhome
/dev/cciss/c0d0p6      30G   27G  670M  98% /var
/dev/cciss/c0d0p5      30G   20G  8.0G  72% /opt
/dev/cciss/c0d0p2      78G   21G   54G  28% /export
/dev/cciss/c0d0p1     4.9G  173M  4.5G   4% /boot
tmpfs                 7.9G     0  7.9G   0% /dev/shm
tmpfs                 491M   61M  431M  13% /var/lib/ganglia/rrds

Now the challenge is to find out where the remaining space went. Is it hidden somewhere?
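One quick way to confirm the disparity is to compare the two numbers directly. The sketch below (the default mount point is an assumption; substitute your own, e.g. /myhome) sums what du can see against what df reports for the same filesystem:

```shell
# Mount point to inspect; /myhome in this article, /tmp as a default here.
MOUNT=${1:-/tmp}

# Space accounted for by files visible in the directory tree (KB).
# -x keeps du on this filesystem only.
du_kb=$(du -skx "$MOUNT" 2>/dev/null | awk '{print $1}')

# Space the filesystem itself reports as used (KB), portable output format.
df_kb=$(df -kP "$MOUNT" | awk 'NR==2 {print $3}')

echo "du sees: ${du_kb} KB, df sees: ${df_kb} KB, gap: $((df_kb - du_kb)) KB"
```

A large gap between the two numbers is the signature of deleted-but-still-open files.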

Use the lsof command to find the files held open

You might have deleted some of the files, but a process may still be holding them open. Until those file handles are closed, the space will not be freed up. The “lsof” command below will tell you which deleted files are still held open!

-bash-3.2# /usr/sbin/lsof | grep deleted |grep /myhome
java      10249            ramya    1u      REG             104,10 45866276291    4645619 /myhome/var/container.log (deleted)
java      10249            ramya    2u      REG             104,10 45866276291    4645619 /myhome/var/container.log (deleted)
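If lsof is not available, you can do the same scan by hand: every open descriptor of every process is a symlink under /proc/<pid>/fd/, and Linux appends “ (deleted)” to the symlink target when the file has been unlinked. A minimal sketch:

```shell
# Walk every process's open descriptors and report the ones whose
# target file has been unlinked. Entries you cannot read (other
# users' processes, unless run as root) are silently skipped.
list_deleted_open() {
    for fd in /proc/[0-9]*/fd/*; do
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            *" (deleted)") echo "$fd -> $target" ;;
        esac
    done
}

list_deleted_open
```

This is essentially what `lsof | grep deleted` is showing you, read straight from /proc.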

Now you have some clue, don’t you? Yes, the file /myhome/var/container.log, which was deleted, is still held open by process ID 10249. Now go to the corresponding process ID directory under /proc as shown below:

-bash-3.2# cd /proc/10249/fd/

List the files to see the file descriptors linking to the deleted file.

-bash-3.2# ls -lrt | grep /myhome/
lrwx------ 1 ramya ramya 64 Feb 13 14:04 2 -> /myhome/var/container.log (deleted)
lrwx------ 1 ramya ramya 64 Feb 13 14:04 1 -> /myhome/var/container.log (deleted)

Now truncate the corresponding file descriptors to free the space, using the commands below:

-bash-3.2# cat /dev/null > /proc/10249/fd/1
-bash-3.2# cat /dev/null > /proc/10249/fd/2

Note that the inode is still open, but it is now of zero length.

-bash-3.2# /usr/sbin/lsof | grep deleted |grep /myhome
java      10249            ramya    1u      REG             104,10 0    4645619 /myhome/var/container.log (deleted)
java      10249            ramya    2u      REG             104,10 0    4645619 /myhome/var/container.log (deleted)
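The whole sequence can be reproduced end to end without touching a real service. The sketch below uses a background tail process standing in for the Java process above, and an illustrative 1 MB log file under /tmp:

```shell
# 1. Create a 1 MB file and have a long-lived process hold it open.
LOG=/tmp/demo-container.log
dd if=/dev/zero of="$LOG" bs=1024 count=1024 2>/dev/null
tail -f "$LOG" >/dev/null 2>&1 &
PID=$!
sleep 1

# 2. Delete it. The directory entry is gone, but the inode (and its
#    blocks) stay allocated because tail still has it open.
rm "$LOG"

# 3. Find the descriptor that still points at the deleted file.
for fd in /proc/$PID/fd/*; do
    case "$(readlink "$fd")" in
        *demo-container.log*) DELETED_FD=$fd ;;
    esac
done

# 4. Truncate through the descriptor, exactly as in the article.
BEFORE=$(wc -c < "$DELETED_FD")
cat /dev/null > "$DELETED_FD"
AFTER=$(wc -c < "$DELETED_FD")
echo "before: $BEFORE bytes, after: $AFTER bytes"

kill $PID
```

Redirecting into /proc/<pid>/fd/<n> opens the underlying inode with O_TRUNC, which is why the blocks are released immediately even though the process keeps its descriptor.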

Now check the disk partition again and smile: the space has been reclaimed.

-bash-3.2# df -h |grep /myhome
/dev/cciss/c0d0p10     44G   12G   30G  28% /myhome
