No Space Left on Device Error on Linux When There is Space Available
On Linux, you normally see the error “No space left on device” when an application writes data and the disk is full. This can be confusing, because the disk may report plenty of free space available. How can this happen? The answer is inodes.
Inodes are indexes on a disk
Every file and directory on a Unix-style filesystem is assigned an inode; inodes make up the filesystem's structure. When a disk holds many small files, the inodes can run out. This often happens due to small session and cache files from an application that does not clean up after itself.
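To see this in practice, here is a small sketch (safe to run; it only touches a temporary directory) showing that even empty files consume inodes while using virtually no disk space:

```shell
# Each file claims one inode, no matter how small it is.
# 1000 zero-byte files use ~0 bytes of data, but 1000 inodes.
dir=$(mktemp -d)
for i in $(seq 1 1000); do
    touch "$dir/session_$i"
done
count=$(find "$dir" -type f | wc -l)
echo "created $count files, each occupying an inode"
rm -rf "$dir"
```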
The source of these small files is usually easy to find, and a temporary quick fix can get you up and running again within minutes.
Having a lot of small files also hurts the I/O performance of the folder that contains them. Your server most likely had performance issues before it ran out of space.
Note: the command output examples in this article are not related to each other and are gathered from different file systems.
Check the general disk usage first
Before assuming the inodes are the issue, always check the general disk space using the command:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            480M     0  480M   0% /dev
tmpfs            99M   13M   87M  13% /run
/dev/vda1        48G   39G  6.7G  86% /
tmpfs           494M  8.0K  494M   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           494M     0  494M   0% /sys/fs/cgroup
tmpfs            99M     0   99M   0% /run/user/1001
If your disk is not full, proceed to the inode section.
If it’s full, proceed to the next section.
Fixing general disk usage
Find the big files using your favorite disk usage tool
$ sudo du -hs /* 2>/dev/null
8.7M    /bin
1.4M    /etc
16M     /home
620K    /init
44M     /lib
120K    /root
6.1M    /sbin
619M    /usr
681M    /var
-h for human-readable sizes (instead of raw bytes)
-s for a summary per top-level folder (instead of listing every subfolder)
2>/dev/null for hiding any permission errors that may occur
It is also possible to exclude folders using ncdu, a useful graphical CLI tool to easily navigate folders and delete items.
$ sudo ncdu /
ncdu 1.13 ~ Use the arrow keys to navigate, press ? for help
--- / --------------------------------------------------------------------------
  680.6 MiB [XXXXXXXXXX] /var
  618.7 MiB [XXXXXXXXX ] /usr
   43.4 MiB [          ] /lib
   16.0 MiB [          ] /home
    8.6 MiB [          ] /bin
    6.0 MiB [          ] /sbin
    1.3 MiB [          ] /etc
  620.0 KiB [          ] init
 Total disk usage:   1.3 GiB  Apparent size:   1.3 GiB  Items: 68674
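If you prefer plain du, GNU du also supports excluding folders with --exclude, where the pattern is a shell glob matched against each name. A small self-contained sketch (it builds and inspects a temporary directory, so it is safe to run):

```shell
# Demonstration of GNU du --exclude: sizes are summarized while any
# folder matching the glob 'cache' is skipped.
t=$(mktemp -d)
mkdir -p "$t/app/cache" "$t/app/data"
printf 'x%.0s' $(seq 1 100000) > "$t/app/cache/blob"   # ~100 KB of cache

du -sk "$t/app"                    # size including the cache folder
du -sk --exclude=cache "$t/app"    # size without it

rm -rf "$t"
```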
Check disk inodes
You can check the inode usage using the command:
$ df -i
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
udev            122664    363  122301    1% /dev
tmpfs           126362    523  125839    1% /run
/dev/vda1      3145728 336326 2809402   11% /
tmpfs           126362      2  126360    1% /dev/shm
tmpfs           126362      3  126359    1% /run/lock
tmpfs           126362     17  126345    1% /sys/fs/cgroup
tmpfs           126362     10  126352    1% /run/user/1001
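If you want to check this automatically, a one-liner can flag filesystems whose inode usage crosses a threshold (a sketch, assuming GNU df and awk; the 90% threshold is an arbitrary example):

```shell
# Print filesystems whose inode usage (IUse%, column 5) is 90% or higher.
df -i | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 90) print $1, $5 "%" }'
```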
Is your inode usage high (at or near 100%)? Proceed to the following section.
Finding high inode usage
The command below shows the number of files per folder. Start in the root folder of your disk, for example /. After running, cd into the biggest folder and repeat until you find the source of the small files.
cd /
for i in ./*; do echo "$i"; find "$i" | wc -l; done
If a single directory takes a long time, you can cancel the loop (Ctrl+C) and cd into that directory right away.
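The loop above can also be wrapped in a small helper (a sketch; count_files_per_dir is a made-up name, not a standard tool) that sorts the counts, so the directory with the most files ends up on the last line:

```shell
# Count files (and thus inodes) per subdirectory of the given folder,
# sorted ascending so the biggest offender is printed last.
count_files_per_dir() {
    for i in "$1"/*; do
        printf '%s %s\n' "$(find "$i" 2>/dev/null | wc -l)" "$i"
    done | sort -n
}
# Example: count_files_per_dir /
```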
Fixing high inode usage
Fixing high inode usage is just a matter of deleting all the small files.
In the past I encountered a cache folder from a WordPress site that was full of small files. If it is just cache, as in this case, it is safe to delete the files before fixing the issue within the software. Be aware that deleting may take a long time.
rm -Rf /home/domains/domain.nl/web/app/cache/object/*
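Note that rm with a glob can fail with “Argument list too long” when a directory holds a huge number of files, because the shell expands the * itself. A sketch using find instead, which avoids building the argument list (the path is the example path from above; adjust it to your system):

```shell
cache_dir=/home/domains/domain.nl/web/app/cache/object
if [ -d "$cache_dir" ]; then
    # -mindepth 1 keeps the cache directory itself in place;
    # -delete removes everything inside it, one entry at a time.
    find "$cache_dir" -mindepth 1 -delete
fi
```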
If you have an (rsnapshot) backup running, delete these entries from the backup too.
rm -Rf /home/backup/rsnapshot/hourly.x/localhost/home/domains/domain.nl/web/app/cache/object
rm -Rf /home/backup/rsnapshot/daily.x/localhost/home/domains/domain.nl/web/app/cache/object
rm -Rf /home/backup/rsnapshot/weekly.x/localhost/home/domains/domain.nl/web/app/cache/object
rm -Rf /home/backup/rsnapshot/monthly.x/localhost/home/domains/domain.nl/web/app/cache/object
Be sure to exclude these folders in your backup configuration too (such as in /etc/rsnapshot.conf). An example:
exclude web/app/cache
exclude var/session
Implementing quick fixes
Implement a quick fix to keep the system running while you track down the root cause.
For WordPress I set the following as a quick fix in the config:
define( 'WP_CACHE', false );
If the above does not work for any reason, it is possible to create a temporary cron job that runs every day. For example:
1 1 * * * root rm -Rf /home/domains/domain.nl/web/app/cache/object/*
An even better solution is tmpreaper. It only deletes files that have not been accessed for X days, which preserves cache that is still being read.
Example for 1 day:
1 1 * * * root /usr/sbin/tmpreaper 1d --runtime=360 /home/domains/domain.nl/web/app/cache/object/ > /dev/null
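If tmpreaper is not available on your distribution, plain find can approximate it (a sketch; -atime +1 matches files last accessed more than one day ago, which requires the filesystem to record access times, i.e. not mounted with noatime):

```shell
cache_dir=/home/domains/domain.nl/web/app/cache/object
if [ -d "$cache_dir" ]; then
    # Delete only cache files not accessed for more than one day,
    # similar to 'tmpreaper 1d'.
    find "$cache_dir" -type f -atime +1 -delete
fi
```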
Hope this helps!
Thanks for reading!