The plain simple reality of entropy

Or how I learned to stop worrying and love urandom

Entropy, the randomness used in many critical cryptographic processes including key generation, is as important as it is misunderstood. Many myths are fueled by misleading documentation. This presentation aims to provide simple and actionable information while explaining the core technical details and real-world implementations.

Randomness is as simple as it is critical. An application wants some bytes which an attacker can't predict. The clearest example is generating a cryptographic key, but a wide array of functions depends on randomness.

Any time a key is generated, any time a DSA signature is made, any time the memory layout is randomized, applications rely on being able to create strings of bytes that are impossible to predict. If that falls short, everything fails: cryptographic keys are compromised, and exploit mitigations are ineffective.
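For illustration (a sketch of mine, not part of the talk), drawing such an unpredictable key in Python looks like this, using the standard library's `secrets` module, which is backed by the OS CSPRNG:

```python
import secrets

# A 256-bit symmetric key: 32 bytes an attacker can't predict,
# drawn from the operating system's CSPRNG.
key = secrets.token_bytes(32)
print(key.hex())
```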

Entropy, the unpredictable raw material, is usually collected by the operating system and exposed to the applications that need it. Once enough bits of entropy have been collected, it becomes impossible to predict the output of the CSPRNG (cryptographically secure pseudo-random number generator), a stirrer of sorts that expands a seed into an unlimited stream of whitened random bytes, often based on stream ciphers or hashes.

Real risks include using a CSPRNG early in the boot process, when not enough random events have been collected; using a userspace CSPRNG instead of the kernel one and forgetting to seed it; or using a non-cryptographically-secure PRNG.
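A quick sketch of the last two pitfalls combined (my example, not from the talk): a non-CS PRNG seeded with a guessable value, such as the current time, can simply be brute-forced by an attacker who roughly knows when the program ran.

```python
import random
import time

# BAD: Mersenne Twister seeded with a low-entropy, guessable value.
seed = int(time.time())
rng = random.Random(seed)
token = rng.getrandbits(128)  # looks random, but is fully determined by the seed

# Attacker's view: try nearby timestamps until the output matches.
recovered = None
for guess in range(seed - 5, seed + 5):
    if random.Random(guess).getrandbits(128) == token:
        recovered = guess
        break
print("seed recovered:", recovered == seed)
```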

That's just about it. However, there is a lot of misunderstanding around "decreasing entropy". It's a widespread myth that drawing random bytes decreases the "amount" of entropy available. In reality, for an attacker who is essentially trying to predict the CSPRNG output, the difficulty does not decrease no matter how much output is drawn, so developers can avoid introducing additional complexity to conserve entropy.

This is all backed up by presenting a simple toy CSPRNG design and reasoning about its properties.
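A toy design along those lines (my own sketch under the assumption of a hash-based construction, not the one from the talk) might hash a secret seed together with a counter. It makes the "no decrease" argument concrete: every output block is as hard to predict as the seed itself, no matter how many blocks are drawn.

```python
import hashlib

class ToyCSPRNG:
    """Toy hash-based CSPRNG: expands a secret seed into a byte stream.

    Illustrative only -- real code should use the kernel CSPRNG.
    """

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Each block is SHA-256(seed || counter). Predicting any block
            # requires guessing the seed, so drawing output does not
            # "use up" entropy.
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")
            ).digest()
            out += block
            self.counter += 1
        return out[:n]
```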

More practically, the points above translate into "on Linux, just use /dev/urandom or the getrandom(2) syscall". That's the kernel interface to the system CSPRNG. Its inner workings are presented, and they will hopefully make it clear why there is no meaningful difference from the entropy-counting /dev/random.
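In practice that advice is a one-liner; a minimal sketch, assuming Python on Linux (`os.urandom` is backed by getrandom(2) where available, and reading the device node directly is equivalent):

```python
import os

# Preferred: os.urandom, backed by getrandom(2) on modern Linux.
key = os.urandom(32)

# Equivalent: reading /dev/urandom directly (assumes Linux).
with open("/dev/urandom", "rb") as f:
    key2 = f.read(32)
```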

Filippo Valsorda
Hall 2
9:15 p.m.
