Help and Support

Instant allocation with SetFileValidData?

by willianwrm on 2024/10/29 07:28:58 PM    
Sparse file allocation is really fast but leads to fragmentation, and FastAllocate is faster than Full Pre-Write but still too slow.
So I was digging around and found out about the Instant File Initialization (IFI) feature used by SQL Server and how they implement it.

The Win32 call sequence consists of (a rough sketch follows the list):
- CreateFile, using the normal file attribute; don't use sparse here
- SetFilePointerEx, to move the file pointer to the position we want, let's say 20 GB
- SetEndOfFile, to declare that the file ends there (similar to sparse up to this point)
- SetFileValidData, where the magic happens; just pass the same 20 GB
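In plain Win32 C the whole thing looks roughly like this (just a sketch written from the docs, not tested; AllocateInstant is a name I made up, error handling is minimal, and it assumes the privilege from the catches below is already enabled):

#include <windows.h>

// Sketch: pre-allocate "size" bytes at "path" without zero-filling.
// Assumes SeManageVolumePrivilege is already enabled on the process token.
static BOOL AllocateInstant(LPCWSTR path, LONGLONG size)
{
    // Normal (non-sparse) file, opened for writing.
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    LARGE_INTEGER li;
    li.QuadPart = size;

    BOOL ok = SetFilePointerEx(h, li, NULL, FILE_BEGIN)  // walk to the target size
           && SetEndOfFile(h)                            // extend the file to that size
           && SetFileValidData(h, size);                 // mark it all valid -> no zero-fill

    CloseHandle(h);
    return ok;
}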

But there are two catches:
1 - it needs admin rights...
2 - the SeManageVolumePrivilege privilege needs to be enabled (a sketch of that follows below)
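Enabling the privilege looks roughly like this (again only a sketch; EnableManageVolumePrivilege is a name I made up, and you need to link against advapi32). One gotcha: AdjustTokenPrivileges can return TRUE even when the privilege wasn't actually granted, so GetLastError has to be checked:

#include <windows.h>

// Sketch: enable SeManageVolumePrivilege on the current process token.
// Only works if the account actually holds the privilege (e.g. an elevated admin).
static BOOL EnableManageVolumePrivilege(void)
{
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return FALSE;

    TOKEN_PRIVILEGES tp;
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

    BOOL ok = LookupPrivilegeValueW(NULL, L"SeManageVolumePrivilege",
                                    &tp.Privileges[0].Luid)
           && AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL)
           && GetLastError() == ERROR_SUCCESS;  // ERROR_NOT_ALL_ASSIGNED means it was not granted

    CloseHandle(token);
    return ok;
}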

Found a working implementation here: https://stackoverflow.com/a/76854526/459583
I tried it in C# and it worked well: it allocated 100 GB in less than one second, and checking the file in Defraggler showed only one fragment.

So, is there any chance this could be implemented in Tixati? If not, is there any way to hand the file allocation off to a shell command?

Thanks for the great app!
by janet on 2024/10/31 01:25:26 AM    
willianwrm-> Thanks for your suggestion; it has been sent to the Dev. He will be looking into this.
by notaLamer on 2024/11/02 11:21:23 PM    
   you can use SetFileValidData to skip this step; the new part of the file will then contain random data from previously deleted files.

Addendum:

   The rules for sparse files are different.

   You should not use SetFileValidData on a file that non-privileged users have read access to; this could leak content from deleted files that belonged to other users.
https://stackoverflow.com/questions/12228042/what-does-setfilevaliddata-doing-what-is-the-difference-with-setendoffile
That's interesting, though not an issue for those running under admin accounts :D Still, it begs the question: why don't OSes/filesystems asynchronously zero out deleted files so that clean space for defragmented allocations is available immediately? Is it because so few applications want instant random I/O over big files?
Finally, yet another instance of MSSQL getting preferential treatment.
by willianwrm on 2024/11/04 02:18:47 PM    
   asynchronously clear old deleted files
My guess is that it's something with a very low payoff: high implementation cost and little to no importance; plus, the FBI would not like it.



