WebGL in-browser interactive 3D map of the brain by James Gao:
This viewer shows how information about thousands of object and action categories is represented across human neocortex. The data come from brain activity measurements made using fMRI while a participant watched hours of movie trailers. Computational modeling procedures were used to determine how 1705 distinct object and action categories are represented in the brain.
Try it out here
This is the chart of our artistic generation.
Content growth is skyrocketing but time spent consuming content has hit a ceiling. Whichever organization helps us find the content we’re willing to pay for, thereby supporting the valuable artists, wins all the marbles.
(Via The Economist)
Access to Tools: Publications from the Whole Earth Catalog, 1968-1974
Videofreex. The Spaghetti City Video Manual (Praeger, 1973).
im sorry i ruined ur lives and crammed eleven cookies into the vcr
You go to take a photo but the last picture you took was with your phone’s front camera, so you get a moment of your own face, slack and unposed, before you have the chance to close your mouth, tuck your chin back in, and pretend death is not swiftly approaching.
Same experience as catching your reflection in a suddenly darkened, say, iPad screen.
The Descriptive Camera works a lot like a regular camera—point it at a subject and press the shutter button to capture the scene. However, instead of producing an image, this prototype outputs a text description of the scene. Modern digital cameras capture gobs of parsable metadata about photos such as the camera’s settings, the location of the photo, the date, and time, but they don’t output any information about the content of the photo. The Descriptive Camera only outputs the metadata about the content.
As we amass an incredible number of photos, it becomes increasingly difficult to manage our collections. Imagine if descriptive metadata about each photo could be appended to the image on the fly—information about who is in each photo, what they’re doing, and their environment could make it far easier to search, filter, and cross-reference our photo collections. Of course, we don’t yet have the technology that makes this a practical proposition, but the Descriptive Camera explores these possibilities.
The technology at the core of the Descriptive Camera is Amazon’s Mechanical Turk API. It allows a developer to submit Human Intelligence Tasks (HITs) for workers on the internet to complete. The developer sets the guidelines for each task and designs the interface for the worker to submit their results. The developer also sets the price they’re willing to pay for the successful completion of each task. An approval and reputation system ensures that workers are incentivized to deliver acceptable results. For faster and cheaper results, the camera can also be put into “accomplice mode,” where it will send an instant message to another person. That IM contains a link to the picture and a form where they can input a description of the image.
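The post doesn’t show the camera’s actual task-submission code (it used open-source command-line utilities), but the shape of a Mechanical Turk request can be sketched with Python’s standard library alone. The function name, example URL, and field values below are illustrative assumptions, not the project’s real ones; with AWS credentials configured, a parameter dict like this could be passed to boto3’s `create_hit`.

```python
# A minimal sketch of assembling an image-description HIT.
# All names here (build_description_hit, the example image URL) are
# illustrative; the schema URL is Mechanical Turk's ExternalQuestion format.

from xml.sax.saxutils import escape

QUESTION_TEMPLATE = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
    'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
    '<ExternalURL>{url}</ExternalURL>'
    '<FrameHeight>600</FrameHeight>'
    '</ExternalQuestion>'
)

def build_description_hit(image_url, reward="1.25"):
    """Assemble the parameters for a single image-description HIT."""
    return {
        "Title": "Describe this photograph",
        "Description": "Write a short, plain-language description "
                       "of the scene in the photo.",
        "Reward": reward,                    # price per task, in USD
        "MaxAssignments": 1,                 # one worker per photo
        "AssignmentDurationInSeconds": 600,  # time allowed per worker
        "LifetimeInSeconds": 3600,           # how long the HIT stays posted
        "Question": QUESTION_TEMPLATE.format(url=escape(image_url)),
    }

hit = build_description_hit("https://example.com/photo123.jpg")
# With credentials configured, this dict could be passed to
# boto3.client("mturk").create_hit(**hit).
```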
The camera itself is powered by the BeagleBone, an embedded Linux platform from Texas Instruments. Attached to the BeagleBone is a USB webcam, a thermal printer from Adafruit, a trio of status LEDs, and a shutter button. A series of Python scripts define the interface and bring together all the different parts, from capture through processing, error handling, and the printed output. My mrBBIO module is used for GPIO control (the LEDs and the shutter button), and I used open-source command line utilities to communicate with Mechanical Turk. The device connects to the internet via Ethernet and gets power from an external 5 volt source, but I would love to make another version that’s battery operated and uses wireless data. Ideally, the Descriptive Camera would look and feel like a typical digital camera.
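The mrBBIO GPIO calls themselves aren’t shown in the post, so the following is only a generic sketch of the shutter-polling pattern, with the pin reader injected as a callable so the logic runs (and can be tested) without BeagleBone hardware. The function name and arguments are assumptions.

```python
# A hedged sketch of polling a GPIO shutter button. On real hardware
# the read_pin callable would wrap an mrBBIO (or sysfs) pin read.

def wait_for_shutter(read_pin, max_polls=None):
    """Poll the shutter pin until it reads high (pressed).

    read_pin: callable returning 0 or 1 (e.g. a GPIO read).
    Returns the number of polls taken, or None if max_polls elapse.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        if read_pin():
            return polls
    return None

# Simulate a button that goes high on the third poll.
presses = iter([0, 0, 1])
assert wait_for_shutter(lambda: next(presses)) == 3
```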
After the shutter button is pressed, the photo is sent to Mechanical Turk for processing and the camera waits for the results. A yellow LED indicates that the results are still “developing,” in a nod to film-based photo technology. With a HIT price of $1.25, results are typically returned within 6 minutes, and sometimes as fast as 3 minutes. The thermal printer outputs the resulting text in the style of a Polaroid print.
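The “developing” wait described above might look something like the loop below. Only the yellow LED comes from the post; the function names, the green “done” LED, and the poll limit are my assumptions, and the result checker and LED setter are injected so the sketch runs without hardware or an AWS account.

```python
# A hedged sketch of the camera's wait-for-results phase.

def develop(check_results, set_led, poll_limit=360):
    """Wait for the HIT result, lighting the yellow LED while "developing."

    check_results: callable returning the description text, or None
                   if workers haven't finished yet.
    set_led: callable taking a color name ("yellow", "green", "off").
    Returns the description text, or None if poll_limit polls elapse.
    """
    set_led("yellow")            # results still "developing"
    for _ in range(poll_limit):
        text = check_results()
        if text is not None:
            set_led("green")     # done; ready for the thermal printer
            return text
    set_led("off")               # gave up; signal an error elsewhere
    return None
```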
It may not be out of place to note here the difference between gray as spelt with an a, and grey as spelt with an e, the two names being occasionally confounded.
GRAY is a semi-neutral, and denotes a class of cool cinereous colours, faint of hue; whence we have blue grays, olive grays, green grays, purple grays, and grays of all hues in which blue predominates; but no yellow or red grays, the predominance of such hues carrying the compounds into the classes of brown and marone [maroon], of which gray is the natural opposite.
GREY is neutral, and composed of or can be resolved into black and white alone, from a mixture of which two colours it springs in an infinite series.
Field’s Chromatography (1856) is blowing my mind.
The fine language and discussion of color in this book is almost magical. It’s like a spell book for wielding hues and tints; something that must have been a magical talent in the mid 19th century. Field’s book evokes a way of seeing the world and its color in a time before omnipresent photography, digital displays, and ink-jet printing. To communicate color you needed both spellings of grey and evocative words like “cinereous” (an ashy gray, sharing a Latin root with ‘cinders’).
At times Field’s discusses pigments like they’re new technology (which, at the time, they essentially were). A pigment that holds yellow in a fixed way allows new subjects to be communicated, subjects that paintings could never previously present. Passages like these, for someone who never imagines a hue out of reach, are spellbinding.
— Typotheque: Typeface As Programme by Jürg Lehni, re: Donald Knuth
In 2009 Kodak announced that it would stop making Kodachrome, one of the most distinctive films ever created, because the company could not afford to keep up with the digital camera market.
Steve McCurry – the photographer who shot perhaps the most famous Kodachrome image of all time – was given the very last roll of Kodachrome film. This is frame 36 of 36 – the very last photograph taken with Kodachrome film – taken in a cemetery not far from the Kodachrome factory.
You can view the full gallery of all 36 photographs taken with the last roll of film here.
Sesame Street “What is a computer?” (1984)
— Marshall McLuhan in a March 1968 Playboy interview, quoted by Mark Larson. Do bees resent flowers? Do they bemoan the ubiquity of beckoning floral distractions? (via mills)
Ephemeralization, a term coined by R. Buckminster Fuller, is the ability of technological advancement to do “more and more with less and less until eventually you can do everything with nothing”.