VFS

Schematics

Virtual Filesystem

VFS is the middle layer of the filesystem stack: when a program calls write() or read(), the call goes to the VFS, which then dispatches it to the appropriate filesystem (or other code). It provides a standard interface for these common operations.
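A small userspace sketch of that idea: the same read()/write() syscalls work on any mounted filesystem because the VFS routes them to whichever driver backs the path. (The helper name and use of a temp file are just for illustration.)

```python
# read()/write() are filesystem-agnostic: the VFS dispatches them to
# the concrete FS (tmpfs, ext4, xfs, ...) -- the caller never knows which.
import os
import tempfile

def roundtrip(data: bytes) -> bytes:
    fd, path = tempfile.mkstemp()          # open() goes through the VFS
    try:
        os.write(fd, data)                 # VFS -> concrete FS write
        os.lseek(fd, 0, os.SEEK_SET)
        return os.read(fd, len(data))      # VFS -> concrete FS read
    finally:
        os.close(fd)
        os.unlink(path)

print(roundtrip(b"hello vfs"))  # b'hello vfs'
```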

for the big brain: [1][2]

Directory entry cache / inode cache

the dentry contains: the name, a pointer to the parent dentry, a pointer to the inode, and a flag for network filesystems (DCACHE_OP_REVALIDATE, which says the dentry must be revalidated)

Inodes have a unique inode number that is derived from, but not identical to, the FS inode number

Dentry/inode caches are purely volatile beings: they never get saved to disk

an inode can be one of:

- regular file

- directory

- symbolic link

- block device

- character device

- socket

- FIFO (First in, first out: Named Pipe)
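The inode type is visible from userspace through stat(2): it is encoded in st_mode, and Python's `stat` module exposes the POSIX S_IS* tests for each of the seven types above. A small sketch (the helper name is made up):

```python
# The inode's file type is encoded in st_mode; stat exposes the
# POSIX S_IS* macros for the seven VFS inode types.
import os
import stat
import tempfile

def inode_type(path: str) -> str:
    mode = os.lstat(path).st_mode  # lstat: do not follow symlinks
    for test, name in [
        (stat.S_ISREG, "regular file"),
        (stat.S_ISDIR, "directory"),
        (stat.S_ISLNK, "symbolic link"),
        (stat.S_ISBLK, "block device"),
        (stat.S_ISCHR, "character device"),
        (stat.S_ISSOCK, "socket"),
        (stat.S_ISFIFO, "FIFO"),
    ]:
        if test(mode):
            return name
    return "unknown"

print(inode_type(tempfile.gettempdir()))  # directory
```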

VFS also takes care of registering and mounting filesystems

Registering is done with struct file_system_type (a big linked list)

Mounting is done with the superblock object
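A simplified model of that registration list, hedged: in the kernel this is a linked list of struct file_system_type searched by name at mount time; the sketch below mimics only that shape (names and the prepend-to-head behavior are illustrative, not the kernel's exact code):

```python
# Simplified model of filesystem registration: registered
# file_system_type objects sit on a singly linked list, searched
# by name when a mount asks for a given filesystem.
from dataclasses import dataclass

@dataclass
class FileSystemType:
    name: str
    next: "FileSystemType | None" = None

file_systems: "FileSystemType | None" = None  # head of the global list

def register_filesystem(fs: FileSystemType) -> None:
    global file_systems
    fs.next = file_systems  # prepend (illustrative simplification)
    file_systems = fs

def find_filesystem(name: str) -> "FileSystemType | None":
    cur = file_systems
    while cur is not None:
        if cur.name == name:
            return cur
        cur = cur.next
    return None

register_filesystem(FileSystemType("ext4"))
register_filesystem(FileSystemType("xfs"))
```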

History:

0.96a:

VFS added to the kernel (right before the EXT filesystem in 0.96c)

6.12:

VFS now allows block sizes bigger than the system page size; the first implementation is in XFS. This will allow bigger filesystems/file sizes and optimizations with some hardware. [3]

Reduced the size of struct file, which represents an open file. On x86, struct file was 232 bytes; after this series it is 184 bytes. This was mostly done by moving out things that shouldn't have been there in the first place. [4]

EXPLANATION: [5]

6.13:

There is a new sysctl knob, fs.dentry-negative, that controls whether the virtual filesystem (VFS) layer deletes a file's kernel-internal directory entry ("dentry") when the file itself is deleted. It seems that some benchmarks do better when dentries are removed, while others benefit from having a negative dentry left behind, so the kernel developers have put the decision into the system administrator's hands. The default value (zero) means that dentries are not automatically deleted, matching the behavior of previous kernels.[6]

There have been some deep reference-counting changes within the VFS layer that yield a 3-5% performance improvement on highly threaded workloads. [6]

Sources: