This weekend was spent both at the ESRI Education User Conference…
…as well as at various local taco places:
I’m in San Diego for the next week for a GIS conference put on by ESRI, the maker of ArcMap and other geospatial software. ESRI’s programs are sort of like the Photoshop of the GIS world: expensive, difficult to learn, encumbered by decades’ worth of legacy interfaces and workflows — but also incredibly capable. Nearly any mapping task you can think of is achievable, if you can figure out how.
In my Digital Humanities work, I more often use geospatial software at the web-browser level: Leaflet.js, CartoDB, and similar. These technologies, among others, help power some of the maps on Yale’s Photogrammar project. But there’s no question that some problems and datasets require the kitchen-sink tools and computational power of ESRI’s Windows-only software stack. So I’m at the ESRI User Conference to learn more about these tools, and to bring any knowledge I can back to the Yale Digital Humanities Lab when I return.
I have to admit I was also looking forward to a different class of Mexican food in San Diego, and Común Taqueria did not disappoint. They put morita chile ash on top of their chips, which can lead you to wonder exactly what the black stuff is on the chip you’re about to put in your mouth — but which is ultimately delicious.
Macworld magazine recently ceased print publication, but an earlier victim of the shift to online news was MacWeek, a restricted-circulation trade paper that was passed around at user group meetings and tech offices alike. Between 1987 and 1999, this weekly tabloid-size glossy was one of the best ways for Mac fans to keep up with the latest news from Cupertino.
I’ve scanned the cover of the first issue, from April 1987, below. Inside are some interesting tidbits, including the launch of PowerPoint (Mac-only, and not yet owned by Microsoft) and the first piece from gossip columnist Mac the Knife.
While home over the holidays I was interested in seeing what the earliest digital document I could find would be. I think the best contender is this circa-1985 5.25” floppy disk, which probably holds WordStar files:
I have a few machines with disk controllers that can use such a floppy disk drive — the drives themselves go for about $10-$30 on eBay. The problems I’m likely to encounter are media failure due to physical degradation and bits flipped over the decades by stray electromagnetic radiation. Either could turn part or all of the files into gibberish. In that case, there’s a modern floppy controller called KryoFlux that hooks up to a modern PC and uses more advanced, frankly heroic techniques to read the bad parts of the disk repeatedly, hundreds of thousands of times. With luck, even badly damaged disks can give up some of their secrets.
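KryoFlux actually works at the raw magnetic-flux level, which is well beyond a blog post. But the core intuition behind all those repeated reads — that many noisy copies of the same data can be combined to recover the original — can be sketched with a simple per-byte majority vote. The sample reads below are invented, and this is an illustration of the idea, not KryoFlux’s actual algorithm:

```python
from collections import Counter

def majority_vote(reads):
    """Combine several noisy reads of the same byte stream.

    Each read is a bytes object of equal length; for every offset,
    keep the byte value that appears most often across the reads.
    """
    result = bytearray()
    for column in zip(*reads):
        value, _count = Counter(column).most_common(1)[0]
        result.append(value)
    return bytes(result)

# Three reads of the same sector, each with different flipped bytes:
reads = [b"WordStar", b"WordSta\xff", b"W\x00rdStar"]
print(majority_vote(reads))  # b'WordStar'
```

No single read was clean, but because the errors land in different places, the vote recovers the original text.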
In early November I had a chance to travel to South Carolina to attend the Charleston Conference.
Together with colleagues from Yale and ProQuest, I presented a panel on Data Mining on Vendor-Digitized Collections. We focused on our analysis of ProQuest’s Vogue Digital Archive — a collection of every issue since Vogue’s inception in 1892 — as a case study of what libraries and scholars can do with vendor data. Our examples were mostly drawn from our public website that showcases various visual and textual experiments with the Vogue data:
Here’s how we framed the larger issue:
This session delves into the rapidly emerging topic of text and data mining (TDM), from the perspectives of a digital humanist, a librarian, a collection development officer and a product manager for a major vendor of digitized content. We will show concrete examples of TDM on a large vendor-digitized in-copyright collection: the Vogue Archive from ProQuest, with over 400,000 pages of text and images dating from 1892 to the present. Several projects in progress at Yale have illuminated the appeal of TDM applications on Vogue to researchers across disciplines ranging from gender studies to art history to computer science. We will address issues of copyright and licensing, file formats and research platforms, new forms of research enabled by TDM, and how vendors and librarians can work to support digital humanities projects. Session attendees who are new to this topic will learn what TDM is and how they might engage with it in their own work. Audience members who have familiarity with TDM will be encouraged to share their experiences and insights.
After the conference was over I had a chance to enjoy a day in the city free of presentation responsibilities. The weather was very pleasant and the sky cooperated to show off the architecture in its best light:
Just got back from the 2014 Digital Humanities conference, held this year in Lausanne, Switzerland.
As you can see from that interior shot of the auditorium, the facilities were more than adequate. Here’s a shot of the SwissTech Convention Center from outside:
The natural setting was dramatic as well, although the mountains were shrouded by clouds when we were there:
Together with a grad student, I presented on a project to map the 170,000 Farm Security Administration photographs taken during the 1930s-40s:
I also co-presented a poster with Mats Malm from Gothenburg, about ways of surfacing related content in large digital literary collections:
We were pretty busy during the week, but there was some time to view the sights in town, such as the Escaliers du marché:
One of the things I am trying to do is take spur-of-the-moment street photographs, alongside more traditional tourist shots of city halls and churches. This couple was sitting in front of the (very impressive) entryway to the cathedral:
Around the corner, a young girl was amusing herself by jumping off of a low wall:
Had a great tour of the University of Nevada, Las Vegas (UNLV) Lied Library, as part of the ALA’s annual convention. The highlight for me was seeing their work in building interactive exhibit walls out of multiple multi-touch displays:
Although the hardware comes from a vendor, the software layer is all written in-house, and supports great features like multiple users zooming different photographs all at the same time:
They’re using this system to highlight their special collections, which include fantastic information about local history:
The Digital Collections team also has an iPad app for use in their exhibits:
Overall this tour was a real wake-up call to think about what libraries can accomplish when they focus on their unique collections and think about presenting their material in new and more accessible ways. Special Collections reading rooms at many institutions can be rather intimidating places, with rules on how to handle delicate material. These rules are there for a reason, but they tend to discourage shoving artifacts around a table to juxtapose or compare them. The mass digitization of Special Collections material gave new life to these items on our computer screens, but didn’t do much to let us physically manipulate the images: we struggle with resizing browser windows and spawning new tabs to get all our material situated on our 11” laptop screens.
The multi-touch exhibit panels I saw at Lied Library, when coupled with UNLV’s own software layer, point towards a future where multiple users can grab, drag, resize and otherwise physically manipulate artifacts on a large surface. Because the images are linked into the metadata in the digital library (ContentDM), there’s good contextual information about what you’re seeing — but it never gets in the way of the visual materials themselves.
Speaking of the visual, I was also struck by some great design work in a newly remodeled multipurpose room:
UNLV describes its goals for this space as follows:
This space will serve as a state-of-the-art venue to showcase UNLV Libraries’ special collections and comprehensive records that document our region’s history—making them accessible to everyone to experience our past by touching and feeling these artifacts. This event space will serve as a center for academic and cultural dialog, panel discussions, readings and lectures by gaming fellows, authors and visiting scholars.
The ‘newest’ computer I’ve added to my collection is a 1987 machine designed by Jef Raskin, the Canon Cat. Based on the ideas of modeless text editing that Raskin had developed while at Apple, including the Swyft hardware and software enhancements to the Apple //, the Cat arguably represents the original vision for the Macintosh project.
Raskin is actually depicted in the Ashton Kutcher film Jobs, in a brief scene where Steve takes over the Macintosh team, unceremoniously ejecting the bearded and professorial Raskin from the group he had led since 1979. The machine that emerged from the new, Steve Jobs-managed Mac team was very different from the minimalist appliance that Raskin envisioned: a high-resolution bitmapped display, a mouse, and sophisticated software that required larger amounts of RAM all pushed the price to $2,500 at launch.
The Canon Cat, which came to market three years later, is the closest realization of Raskin’s original vision for the Mac. Raskin was able to extend the ideas of the “Leap” keys that he had pioneered on the Apple //-based Swyft systems, giving users two new meta-keys (LEAP FORWARD and LEAP BACKWARD) that, when held down during typing, zapped the user to the exact place in the text where those words occurred. With such a radical system of navigation, there was no need for a visible file system or for discrete documents in different windows — the Cat provided a single scrolling window containing everything you had ever written (or at least as much as could fit on a 3.5” disk). The closest parallel today would be navigating a webpage by using the browser’s “Find on page” command.
Although such a system is a big cognitive leap from how most software (previously and since) worked, Raskin claimed that keeping all your writing in a big scrolling list would avoid several levels of cognitive abstraction that traditional GUIs required the user to master. Reading through the Canon Cat manual is interesting because — whether due to Raskin’s focus on simplicity and appliance computing, or the sponsoring corporation Canon’s historical focus on office equipment — one encounters a machine much more limited and simple than the Apple Macintosh, despite shipping three years later. This was a machine for office workers, writers, and others who only needed to manipulate text; Desktop Publishers need not apply. But that doesn’t mean there wasn’t room for innovation: after mastering the distinctive LEAP key system, users could select text and “compute” it using the built-in math functions, or select a phone number underneath a friend’s name and have the Cat’s built-in modem dial them directly. Restricting the functional domain of the computer down to the realm of A-Z meant that the user experience could be tightly honed, the computer booting in mere seconds and the screen instantly responsive with an image of the exact place you had left off typing.
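The LEAP navigation described above boils down to incremental search through one continuous body of text. Here’s a loose sketch of the behavior — an illustration of the interaction model, not the Cat’s actual firmware:

```python
def leap(text, cursor, pattern, forward=True):
    """Jump to the next (or previous) occurrence of `pattern`,
    the way the Cat's LEAP keys move the cursor through one
    continuous document. Returns the match position, or the
    original cursor position if there is no match.
    """
    if forward:
        i = text.find(pattern, cursor + 1)
    else:
        i = text.rfind(pattern, 0, cursor)
    return i if i != -1 else cursor

doc = "the cat sat on the mat while the cat purred"
pos = leap(doc, 0, "cat")       # lands on the first "cat"
pos = leap(doc, pos, "cat")     # LEAP FORWARD to the next "cat"
pos = leap(doc, pos, "cat", forward=False)  # LEAP BACKWARD again
```

With only this one mechanism — plus its mirror image for the other direction — you can get anywhere in the text, which is why the Cat could do without files, windows, or a mouse.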
Just a sneak preview of a website I’m building — the idea is to have an algorithm read through a bunch of magazine articles, find place names, and map those places onto a city or region of a country. Then, when the user hovers over a place, a list of sentences appears on the right, showing the context for each occurrence. Each small red dot is a place mentioned in the text; larger blobs of color show concentrations of places.
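The extraction step could be sketched roughly like this. The gazetteer, coordinates, and sample text below are all invented for illustration — a real pipeline would use proper named-entity recognition and geocoding rather than exact string matching:

```python
import re

# A tiny hand-made gazetteer standing in for a real place-name
# lookup service (coordinates here are approximate, for illustration).
GAZETTEER = {"Lausanne": (46.52, 6.63), "Chicago": (41.88, -87.63)}

def places_with_context(text):
    """Find gazetteer place names in `text`, recording the
    sentence each one appears in as hover-able context."""
    hits = {}
    sentences = re.split(r"(?<=[.!?])\s+", text)
    for sentence in sentences:
        for name, coords in GAZETTEER.items():
            if name in sentence:
                entry = hits.setdefault(name, {"coords": coords, "contexts": []})
                entry["contexts"].append(sentence)
    return hits

article = "We flew to Lausanne. Later, Chicago hosted the fair. Lausanne was lovely."
result = places_with_context(article)
```

Each entry ends up with coordinates for the dot on the map and a list of context sentences for the sidebar — exactly the two pieces of data the hover interaction needs.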
I was curious what it would look like to take this Rand McNally map of Chicago public transportation networks circa 1926 and overlay it on Google’s modern aerial photography of the city.
When I first came to Hyde Park in 1993, there was an El track along 63rd street. That part of the Green Line was torn down by the time I left in 1997.
63rd is certainly lighter and more open, but the demolition of the overhead tracks has hardly spurred economic development. The colored lines of the 1926 map hover over a mostly-vacant streetscape of today.
I did this experiment with ArcGIS (for rectification) and GeoServer (for serving the map tiles into Google Earth).
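ArcGIS handles the rectification internally, but in its simplest form the task comes down to fitting a transform from the scan’s pixel coordinates to geographic coordinates, using ground-control points you click on both maps. Here’s a minimal sketch of the affine case with three control points — the pixel and lon/lat values are made up, not the ones from my Chicago overlay:

```python
def solve3(m, v):
    """Solve a 3x3 linear system m @ s = v via Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    out = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        out.append(det(mc) / d)
    return out

def affine_from_gcps(gcps):
    """Fit x' = a*x + b*y + c and y' = d*x + e*y + f from exactly
    three ground-control points: [((x, y), (lon, lat)), ...]."""
    m = [[x, y, 1.0] for (x, y), _ in gcps]
    abc = solve3(m, [lon for _, (lon, _) in gcps])
    def_ = solve3(m, [lat for _, (_, lat) in gcps])
    return abc, def_

def apply_affine(params, pt):
    (a, b, c), (d, e, f) = params
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)

# Three corners of an imaginary scanned map, pinned to lon/lat:
gcps = [((0, 0), (-87.70, 41.95)),
        ((1000, 0), (-87.55, 41.95)),
        ((0, 1000), (-87.70, 41.80))]
params = affine_from_gcps(gcps)
apply_affine(params, (500, 500))  # → roughly (-87.625, 41.875)
```

Real rectification tools fit higher-order polynomial or spline transforms over many control points with least squares, which is what lets a warped 1926 paper map line up against modern aerial photography.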