Entropy and randomness
Introduction
Entropy is described as 'a numerical measure of the uncertainty of an outcome'. It is often associated with chaos or disorder, but is more simply referred to as randomness.
It is important for a secure operating system to have sufficient quantities of entropy available for various cryptographic and non-cryptographic purposes, such as:
- Generation of cryptographic keys
- Address Space Layout Randomisation (ASLR) - used by default in Alpine of course ;)
- TCP port randomisation (NAT, outbound connection)
- TCP sequence number selection
- Writing random files for testing network functionality and throughput
- Overwriting hard disks prior to reuse or resale or encryption
Entropy is held in a pool, which is fed from various sources. To view the current amount of entropy in the pool:
more /proc/sys/kernel/random/entropy_avail
To view the maximum limit of entropy that the pool can hold:
more /proc/sys/kernel/random/poolsize
On a standard system the limit is 4096 bits (512 bytes). The grsecurity patch used on Alpine increases this limit to 16384 bits (2048 bytes). Entropy is added to the pool in bits from various sources; "the relative number of unknown bits per event is roughly 8/keyboard, 12/mouse, 3/disk, 4/interrupt". This means that on a headless server (without mouse or keyboard attached), which ironically is often the kind of system requiring the most entropy, entropy generation is somewhat limited.
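To watch how the available entropy changes over time (for example while typing at a console or generating disk activity), something like the following should work:
watch -n 1 cat /proc/sys/kernel/random/entropy_avail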
The entropy from the pool can be accessed in two ways by default:
/dev/random - On kernels older than 5.6, this is a blocking resource: it draws on the available entropy in the pool, and if more entropy is required than is available, the process will wait until more entropy has been added to the pool. Due to this behaviour, /dev/random is best used where small amounts of high-quality randomness are required, such as for cryptographic keys.
/dev/urandom - A non-blocking resource on all kernel versions. It takes a seed value from the same entropy pool as /dev/random and runs it through an algorithm, making it a pseudo-random number generator that operates much faster than /dev/random. For this reason, if little entropy is available in the pool, it is recommended not to use /dev/urandom until more entropy has been added. /dev/urandom is best used for non-cryptographic purposes such as overwriting disks.
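As a rough illustration of this split (the output file name below is just an example), a small amount of key material can be read from /dev/random, while /dev/urandom is better suited to producing bulk data:
head -c 32 /dev/random | base64
dd if=/dev/urandom of=/var/tmp/bulk.bin bs=1M count=10
Note that on kernels older than 5.6 the first command may pause if the pool is low.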
Writing to /dev/random or /dev/urandom will mix the written data into the entropy pool, but this will not result in a higher entropy count. This means it will affect the contents read from both files, but it will not make reads from /dev/random faster. For more information see the random(4) manpage.
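This is easy to observe; the entropy count should not increase as a result of a write like this:
cat /proc/sys/kernel/random/entropy_avail
echo "some extra data" > /dev/urandom
cat /proc/sys/kernel/random/entropy_avail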
Wherever entropy is used heavily, it is generally recommended to supply additional entropy sources; some possibilities are listed below. Adding more sources to feed the pool makes an attacker's job more difficult, because there are more sources they would have to gain control over (or at the very least observe at source), and adding more sources of entropy, even weak ones, can only increase the total entropy.
If you are desperate for more entropy and are working on a headless server with no internet connection, you could try generating some via disk activity. Just don't expect any miracles! Here's an example:
dd if=/dev/zero of=/var/tmp/tempfile bs=1M count=200 && find / -size +1k && ls -R / && rm /var/tmp/tempfile && sync
If your server is a 'run-from-ram' setup and so you have no disks to create churn but require more entropy, it is strongly recommended to add alternative entropy sources as discussed below.
Alternative/Additional entropy sources
Note: Much of the content in this section is obsolete since kernel 5.6, or roughly Alpine Linux version 3.13.0.
Haveged
Haveged generates entropy based on CPU flutter. The entropy is buffered and fed into the entropy pool when the available entropy drops below write_wakeup_threshold. Write a value (the number of bits) to it if you wish to change it:
echo "1024" > /proc/sys/kernel/random/write_wakeup_threshold
Or change it via haveged:
haveged -w 1024
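The current threshold can be read back from the same file:
cat /proc/sys/kernel/random/write_wakeup_threshold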
Install haveged, then start and set to autostart at boot:
apk -U add haveged && rc-service haveged start && rc-update add haveged
Further configuration is possible; however, the defaults should work fine out of the box.
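One rough way to confirm that haveged is keeping the pool topped up (the exact numbers will vary) is to drain some entropy and check that the count recovers:
head -c 512 /dev/random > /dev/null
sleep 2
cat /proc/sys/kernel/random/entropy_avail
Without haveged running, the read from /dev/random may take noticeably longer and the count will recover far more slowly.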
Other possibilities
Some other possibilities for entropy generation are:
- timer entropy daemon - should provide on-demand entropy based on variances in the timing of the sleep command.
- video entropy daemon - requires a video4linux device; gathers entropy by taking a couple of images, calculating the differences and then the entropy of that. Can be run on demand or as a cron job.
- audio entropy daemon - requires the ALSA development libraries and an audio device. Generates entropy by reading from the audio device and de-biasing the data.
- GUChaos[Dead Link] - "Give Us Chaos" provides on-demand entropy, by retrieving random blocks of bytes from the Random.org website, and transforms them with a polynumeric substitution cipher before adding them to /dev/random until the entropy pool is filled.
and hardware entropy generators such as:
- Entropy Key - USB hardware entropy generator
It is also possible to replace /dev/random with EGD, the Entropy Gathering Daemon, or to use EGD on systems that are not able to support /dev/random. However, this is not required (or recommended) under normal circumstances.
Testing entropy with ENT
It is possible to test entropy to see how statistically random it is. Such tests only reveal part of the picture, since some data can pass statistical entropy tests whilst not actually being random. Failing a statistical randomness test is, of course, a bad sign!
Make a folder for testing, and get hold of ENT:
mkdir /tmp/test/make
cd /tmp/test/make
wget http://www.fourmilab.ch/random/random.zip
unzip random.zip
make
mv ./ent /tmp/test/
cd /tmp/test
Create some random data. In this example we read from /dev/urandom:
dd if=/dev/urandom of=/tmp/test/urandomfile bs=1 count=16384
Run the ENT test against it:
./ent /tmp/test/urandomfile
Try the same test whilst treating the data as a stream of bits and printing an account of character occurrences:
./ent -b -c /tmp/test/urandomfile
Note any differences against the previous test.
I propose also generating larger streams of data (tens or hundreds of MB) and testing against those too. Any repeating data or patterns (caused by a small or poor seed value, for instance) are much easier to spot across large amounts of data than across small amounts.
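For example, a larger sample (the size and file name here are just illustrative) could be generated and tested like this:
dd if=/dev/urandom of=/tmp/test/bigfile bs=1M count=100
./ent /tmp/test/bigfile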
I also suggest running the test against known non-random files; you may find that such a file shows some characteristics of a random file in some tests, whilst completely failing other randomness tests.
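For example, an ordinary binary such as /bin/busybox makes a convenient known non-random input (any large text or binary file will do):
./ent /bin/busybox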
Finally, once you are done testing with ENT, it's good practice to delete the working folder:
rm -r /tmp/test/
Other tests
Other tests include diehard and dieharder.
Further reading
RFC 4086 - Randomness Requirements for Security
Random number generation: An illustrated primer
Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices, PDF
Analysis of the Linux Random Number Generator, PDF 🔓
How to Eat Your Entropy and Have it Too — Optimal Recovery Strategies for Compromised RNGs, PDF
Security Analysis of Pseudo-Random Number Generators with Input: /dev/random is not Robust, PDF