Snapping a photo captures more than just image data. Information about the camera and its lens, shutter speed and aperture, date and time, and so on has been bundled into the JPEG since the early days of digital photography. By now, that photo is likely to include a GPS trace as well, and as soon as it leaves your camera, computers are hard at work assisting you in identifying and tagging people and places, with auto-completing textual clarity and database precision. Meanwhile, NSA spooks try to reassure us that they are only interested in the “metadata” of our communications: the who and the when, and maybe some keywords. Without denying the power and efficacy of machine-readable metadata, I argue that for humans to navigate and find meaning in unknown and unsorted material, we will need multimedia tools that immerse us and augment our powers of perception, rather than tools that reduce all navigation to text fields, transcripts, and tags. For temporal media (sound and video), codecs have given us ever greater instantaneous fidelity, but they leave us with few techniques to skim, seek, and survey.
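To make the bundled metadata concrete, here is a minimal sketch in Python using the Pillow library (the filename "photo.jpg" is a placeholder): it walks the tag directories a camera writes into a JPEG, covering the base IFD (make, model, date and time), the Exif sub-IFD (shutter speed, aperture), and the GPSInfo sub-IFD when a GPS trace is present.

```python
from PIL import ExifTags, Image

def dump_tags(ifd, names):
    """Print each tag in an IFD with its human-readable name."""
    for tag_id, value in ifd.items():
        print(f"{names.get(tag_id, hex(tag_id))}: {value}")

# "photo.jpg" is a hypothetical example file.
exif = Image.open("photo.jpg").getexif()

dump_tags(exif, ExifTags.TAGS)                     # Make, Model, DateTime, ...
dump_tags(exif.get_ifd(0x8769), ExifTags.TAGS)     # ExposureTime, FNumber, ...
dump_tags(exif.get_ifd(0x8825), ExifTags.GPSTAGS)  # GPSLatitude, GPSLongitude, ...
```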
Using case studies of documentary film, Freedom of Information Law document dumps, soundbanks, and a hacker conference, I will present experiments and results from several years of developing open source tools to reorient the idea of documentary around its documents.