Last active July 29, 2024 17:23
Create a large file for testing
# Please note, the commands below will create unreadable files and should be
# used for testing file size only. If you're looking for something that has
# lines in it, use /dev/urandom instead of /dev/zero. You'll then be able to
# read the number of lines in that file using `wc -l large-file-1mb.txt`

# Create a 1MB file
dd if=/dev/zero of=large-file-1mb.txt count=1024 bs=1024

# Create a 10MB file
dd if=/dev/zero of=large-file-10mb.txt count=1024 bs=10240

# Create a 100MB file
dd if=/dev/zero of=large-file-100mb.txt count=1024 bs=102400

# Create a 1GB file
dd if=/dev/zero of=large-file-1gb.txt count=1024 bs=1048576

# Create a 10GB file
dd if=/dev/zero of=large-file-10gb.txt count=1024 bs=10485760

# Create a 100GB file
dd if=/dev/zero of=large-file-100gb.txt count=1024 bs=104857600

# Create a 1TB file (careful now...)
dd if=/dev/zero of=large-file-1tb.txt count=1024 bs=1073741824
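As the note at the top says, the /dev/urandom variant gives you a file with countable lines; a minimal sketch (the filename is just an example):

```shell
# Create a 1MB file of random, incompressible bytes
dd if=/dev/urandom of=large-file-1mb.txt count=1024 bs=1024

# Random data contains occasional newline bytes, so wc -l reports a line count
wc -l large-file-1mb.txt

# Confirm the size is exactly 1 MiB (1048576 bytes)
stat -c %s large-file-1mb.txt   # GNU coreutils; on BSD/macOS use: stat -f %z
```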
Nice thanks
Thanks :) As @tandersn mentioned, on a ZFS dataset with compression enabled, files written from /dev/zero will compress to almost nothing. Using /dev/random turned out to be really slow, so for testing purposes I turned off compression on the ZFS dataset instead.
Depending on what you are testing, this may not give you what you want. If the filesystem has inherent and enabled compression (like ZFS), these files made with /dev/zero will be compressed to almost nothing. Use /dev/random in that case.
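One way to see this effect (a sketch, assuming GNU coreutils): on a compressing filesystem the apparent size and the on-disk usage diverge for a file of zeros.

```shell
# Write 100 MiB of zeros
dd if=/dev/zero of=zeros.bin count=1024 bs=102400

# Apparent size: always 104857600 bytes
ls -l zeros.bin

# On-disk usage: near zero on a compressing filesystem (e.g. ZFS with
# compression=on); roughly the apparent size on most other filesystems
du -h zeros.bin
```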