‘As We May Think’ – part 3

Besides creating audio records instead of writing information down, an image can say more than a thousand words. Vannevar Bush already envisioned an entirely different experience for researchers.

One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two together.[1]

Although still images together with their tags already provide a lot of information, commenting while photographing might be easier with a video camera. Most devices such as laptops, tablets and mobile phones already contain cameras, but a GoPro or another hands-free camera increases the ease of use. However, video production consists of planning during pre-production, capturing during production and processing during post-production.[2] Bush was already aware of these phases.

As he ponders over his notes in the evening, he again talks his comments into the record. His typed record, as well as his photographs, may both be in miniature, so that he projects them for examination.[3]

The process of recording information is constantly improving, especially since the invention of optical head-mounted displays such as Google Glass. It displays information and communicates with the internet through natural language voice commands. Furthermore, the glasses include a touchpad and a camera that can take photos or record videos simply by voicing the command “record a video”.[4] This technology can improve the research process since it “1) provides workflow guidance to the user, 2) supports hands-free operation, 3) allows the users to focus on their work, and 4) enables an efficient way for collaborating with a remote expert”.[5] Although the glasses were already tested in healthcare and industrial maintenance, the Glass Explorer program was discontinued. Ivy Ross and Tony Fadell took over, but the release date of their new product remains unknown.[6]

[1] Bush, “As We May Think,” Chapter 3.

[2] Dustin Freeman, Stephanie Santosa, Fanny Chevalier, Ravin Balakrishnan and Karan Singh, “LACES: live authoring through compositing and editing of streaming video,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014, 1207-1216, doi: 10.1145/2556288.2557304.

[3] Bush, “As We May Think,” Chapter 3.

[4] “Google Glass,” Wikipedia, accessed March 7, 2016. https://en.wikipedia.org/wiki/Google_Glass.

[5] Xianjun Sam Zheng, Patrik Matos da Silva, Cedric Foucault, Siddharth Dasari, Meng Yuan and Stuart Goose, “Wearable Solution for Industrial Maintenance,” Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, 2015, 311-314, doi: 10.1145/2702613.2725442.

[6] Nick Bilton, “Why Google Glass Broke,” The New York Times, accessed March 7, 2016, http://www.nytimes.com/2015/02/05/style/why-google-glass-broke.html?smid=nytcore-iphone-share&smprod=nytcore-iphone&_r=1.

‘As We May Think’ – part 2

In the first few chapters of “As We May Think” Bush discusses the difficulties of creating a record. He already envisioned solutions to shorten the process of creating and spreading research.

To make the record, we now push a pencil or tap a typewriter. Then comes the process of digestion and correction, followed by an intricate process of typesetting, printing, and distribution. To consider the first stage of the procedure, will the author of the future cease writing by hand or typewriter and talk directly to the record?[1]

After the invention of the Voder, which emitted recognizable speech when typed to, the Vocoder did exactly the opposite: “Speak to it, and the corresponding keys move.” Another technology already in existence when Bush wrote his influential article was the stenotype, “which records in a phonetically simplified language”. “Combine these two elements, let the Vocoder run the stenotype, and the result is a machine which types when talked to.”[2] The technology that enables devices to respond to spoken commands, called speech recognition, now exists in mobile phones, tablets and laptops alike. Its most common uses include dictation, search and issuing commands to computers. Another example of speech recognition is Apple’s Siri, the personal assistant on the company’s smartphones.
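Today that “machine which types when talked to” is an ordinary software component. As a minimal sketch of dictation, assuming Python and the open-source SpeechRecognition package (neither mentioned by Bush nor in the sources above), transcribing a spoken note might look like this; the file name and the choice of Google’s free web recognizer are illustrative assumptions only.

```python
# Hedged sketch: transcribe a recorded spoken note with the SpeechRecognition
# package. The file name "field_note.wav" and the use of the free Google web
# recognizer are assumptions made for this example.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Read the whole recording into memory.
with sr.AudioFile("field_note.wav") as source:
    audio = recognizer.record(source)

try:
    # Send the audio to a speech-to-text service and print the transcript.
    print("Transcript:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The speech was unintelligible to the recognizer.")
except sr.RequestError as err:
    print("Recognition service unavailable:", err)
```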

In the Humanities, and the Digital Humanities especially, researchers have shown an interest in non-text materials because of the “massive increase in the quantity and availability of audiovisual (AV) materials and a rapid development in technology for handling such materials”.[3] In this context speech recognition is mostly used to transcribe audiovisual materials, in particular through speech-to-text transcription.

[Figure: Schematic model of humanities research with AV materials]

Since speech recognition systems are highly domain-specific, they cannot handle every domain, and “even speech from the same domain that differs from the ‘training’ data may be problematic”.[4]

Speech-to-text transcriptions have historically comprised an unpunctuated and unformatted stream of text. There has been considerable recent research into generating ‘richer’ transcriptions annotated with a variety of information that can be extracted from the audio signal and/or an imperfect transcription. […] Investigations have often used speech from only a small set of domains, such as broadcast news and conversational speech. Emotion-related work in particular is very preliminary.[5]
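A common way to make such domain mismatches visible is to score a recognizer’s output against a human reference transcript using the word error rate. The sketch below is only an illustration of that measure; the example sentences and the plain word-level Levenshtein implementation are my own assumptions, not taken from the cited survey.

```python
# Word error rate (WER): edit distance between reference and hypothesis,
# counted in words and normalized by the reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for the word-level Levenshtein distance.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

# Invented example: two misrecognized words out of seven give a WER of ~0.29.
print(word_error_rate("the memex stores books records and communications",
                      "the mimics stores books wreckers and communications"))
```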

Strategies for expanding the role of audiovisual media in the Digital Humanities were discussed during the Digital Humanities Conference 2014 in Lausanne, at a workshop organized by researchers involved in AXES.[6] Research fields that might benefit from speech recognition include film, television, radio and oral history.

[1] Vannevar Bush, “As We May Think,” Atlantic Monthly, July 1945, http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/, Chapter 3.

[2] Bush, “As We May Think,” Chapter 3.

[3] Alan Marsden, Adrian Mackenzie and Adam Lindsay, “Tools for Searching, Annotation and Analysis of Speech, Music, Film and Video – A Survey,” Literary and Linguistic Computing 22, no. 4 (2007): 469-488, accessed January 9, 2017, doi: 10.1093/llc/fqm021.

[4] Marsden et al., “Tools for Searching.”

[5] Marsden et al., “Tools for Searching.”

[6] “AV in DH Special Interest Group,” Access to Audiovisual Archives, February 12, 2015, http://www.axes-project.eu/?p=2419.

‘As We May Think’ – part 1

When Vannevar Bush wrote “As We May Think” in 1945, he could never have imagined the technologies existing today.[1] A few of his ideas became reality in some form or other, but where he “urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge”, his ideas actually affected a much larger population.[2] He came across one particular problem, namely that “publication has been extended far beyond our present ability to make real use of the record.”[3] Even in 1945 machines were relatively cheap and dependable, so according to Bush they would provide a solution. In his opinion a record could only be useful to science when it is continuously extended and stored, but “above all it must be consulted”.[4] First he discusses multiple solutions for recording knowledge, such as a combination of a Vocoder and a stenotype to create a “machine which types when talked to”. He also imagined advanced arithmetical machines that would perform at 100 times present speeds or more.[5] Bush even describes what we would now call data mining or machine learning.

In fact, every time one combines and records facts in accordance with established logical processes, the creative aspect of thinking is concerned only with the selection of data and the process to be employed and the manipulation thereafter is repetitive in nature and hence a fit matter to be relegated to the machine.[6]

Furthermore, he explains the difference between simple selection, which examines every item in turn, and the selection mechanism of a telephone exchange, which narrows down its selection by classes and subclasses, each represented by a digit. However, both of these selection methods rely on indexing, while the human mind “operates by association”.[7] He believes his memex will provide the solution, for it creates trails that tie multiple items together.[8]

[Figure: The memex]
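To make the contrast between indexing and association concrete, here is a toy sketch of a trail as a data structure, written in Python; the class names and the example items (loosely borrowed from Bush’s bow-and-arrow scenario) are my own illustration, not Bush’s design.

```python
# Toy illustration of an associative "trail": items are tied directly to one
# another in the order the researcher connects them, instead of being reached
# through a hierarchical index. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    title: str
    content: str

@dataclass
class Trail:
    """An ordered chain of items tied together by association."""
    name: str
    items: List[Item] = field(default_factory=list)

    def tie(self, item: Item) -> None:
        # The link lives with the trail, not in a central classification scheme.
        self.items.append(item)

    def follow(self) -> None:
        for step, item in enumerate(self.items, start=1):
            print(f"{step}. {item.title}")

# Example: a researcher ties an encyclopedia entry, a photograph and a note.
trail = Trail("bow and arrow")
trail.tie(Item("Encyclopedia entry", "Elastic properties of available materials"))
trail.tie(Item("Photograph", "A Turkish short bow"))
trail.tie(Item("Own note", "Comparison with the English long bow"))
trail.follow()
```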

Finally, Bush concludes not only that new forms of encyclopedias will appear, with trails running through them; he also asks himself: “Must we always transform to mechanical movements in order to proceed from one electrical phenomenon to another?” Imagine that, instead of my typing this document, a machine intercepted the electrical impulses and typed it, without the interference of the mechanical movement of my hands on the keyboard.[9]

[1] Vannevar Bush, “As We May Think,” Atlantic Monthly, July 1945, http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/.
[2] Bush, “As We May Think,” Introduction.
[3] Bush, “As We May Think,” Chapter 1.
[4] Bush, “As We May Think,” Chapter 2.
[5] Bush, “As We May Think,” Chapter 3.
[6] Bush, “As We May Think,” Chapter 4.
[7] Bush, “As We May Think,” Chapters 5-6.
[8] Bush, “As We May Think,” Chapter 7.
[9] Bush, “As We May Think,” Chapter 8.