Review of “Humanizing Data: Data, Humanities, and the City”

By Katie Mulkowsky (Gallatin ’19)

A variety of activists, community organizers, academics and data practitioners came together on Cooper Square this past weekend for a day-long symposium called “Humanizing Data: Data, Humanities, and the City.” Co-sponsored by the Urban Democracy Lab, NYU Gallatin, NYU Shanghai Center for Data Science and Analytics, Asian/Pacific/American Institute at NYU, and the Institute for Public Knowledge, the April 8 event explored how urban humanities can be both enhanced and complicated by innovative data-centric, digitized projects.

When introducing the symposium’s second panel, “Decolonizing Data,” UDL Associate Director Rebecca Amato perfectly characterized the question underlying each of the day’s presentations: who would want their neighborhood, community, or life to be flattened into a single data set? Data science is undeniably complex and profoundly useful, but it can also fail to capture the very nuanced human experiences which drive policy decisions and render public spaces personal. Each panel grappled with this idea in different ways, highlighting projects which have used data to bolster rather than reduce the narratives composing particular community histories.

As such, the day kicked off with “Queering the Web,” a panel featuring Kimon Keramidas of NYU Draper, Jonathan Ned Katz, Elizabeth Heard of NYU Performance Studies, and Cindi Li, an NYU MA candidate in Social and Cultural Analysis. Together, they pondered how social norms are reinforced by digital media’s computational and design-based paradigms—analyzing how notions of gender and sexuality might subliminally construct the displays on our screens.

Their discussion was closely followed by “Decolonizing Data,” which engaged Heather Lee of NYU Shanghai, NYU Professor Jack Tchen, NYU graduate student Noah Fuller, and Gallatin students Jane Choe (BA ’18) and Molly Elizabeth Smith (BA ’18) in debate surrounding the power dynamics of data collection. They discussed the ways in which urban data science can reproduce knowledge and assumptions, namely those concerning notions of private and public—and even extending to those which determine price. In this context, Professor Tchen asked what a complete data set might represent about the data that it leaves unrepresented. Dr. Lee spoke about the elasticity of knowledge that citizens are open to within this “world of fuzziness” in which we now live.

The panel discussion also centered on ways in which urban data can be used, along with digital media, to reconstruct very salient historical narratives. Tchen and Fuller, for instance, co-teach a Gallatin class in partnership with The Wayfinding Lab called “Indigenous Futures: Decolonizing NYC—Documenting the Lenape Trail.” The seminar acts as a collaborative research project that engages with Algonquian language scholars, digital mappers, and artists to explore the indigenous history of what we now call Broadway. It was incredibly interesting to hear them speak of the course, and to hear the students speak of their experiences taking it, as the project exemplifies ways in which NYU’s campus community can humanize data to its own scholarly advantage. Tchen also effectively “humanized” the meaning of data itself—when speaking of his research, he mentioned information-gathering processes surrounding “the data existing in dumpsters,” noting that how we think about “data” might need to shift before we ask foundational questions about its potential purpose and scope.

After lunch, this conversation was built upon by the day’s third panel, “Activist Geographies.” Each speaker presented a data-oriented project that targets questions of social justice, community space, and memory—illustrating the key point of the symposium: that effective data displays can revolutionize digital humanities scholarship. In more practical terms, they can also be potent tools for activists and legal advocates. Grinnell College Professor Caleb Elfenbein, for example, presented his Mapping Islamophobia project, which traces instances of “anti-Muslim graffiti and offhand comments, vandalism, verbal and physical assault, employment and other forms of discrimination, anti-Muslim protests and public campaigns, local ordinances and state-level legislation targeting Muslim communities in some way, and political rhetoric at the local, state, and national level.” These all concern very tangible and personal acts, none of which can truly be captured by a data point. But mapping them together creates a kind of power in numbers, and even if this “power” does not deeply humanize each instance, it certainly demonstrates the magnitude of a humanitarian problem.

Erin McElroy of the San Francisco Tenants Union then presented her Anti-Eviction Mapping Project, an incredibly detailed database of dispossession and resistance in the California Bay Area. Hailing from Rollins College, Julian Chambliss showed a similar level of narrative drive in his Black Social World in Central Florida. Visiting Scholar Joshua Jelly-Schapiro then closed out the panel, illustrating the various capacities of mapping through a presentation of his book, Nonstop Metropolis. Each speaker used digital resources to engage with specific social problems or stories at a broader level, constructing informational archives which not only provide concrete evidence of specific plights but can also be used comparatively by humanities scholars, in research and in action.

After a series of workshops, a keynote by Gergely Baics of Barnard College and Leah Meisterlin of Columbia University closed out the evening. In all, the symposium provided various examples of research, community engagement, and digital activism that represent a crucial shift in the landscape of digital humanities study. It served to demonstrate that data, when harnessed and represented effectively, can personalize numbers which seem, at their barest, to be impersonal. The artful rendering of stories, problems, and solutions in digital form can elevate them from specificity and circumstance into archives that are not only “real” and “human,” but also lasting.

Brought to you by the NYU Urban Democracy Lab:

Katie Mulkowsky (Gallatin ’19) is a sophomore at NYU Gallatin concentrating in Sustainable Development and Urban Theory.


By Beatrice Glow (Artist-in-Residence at the Asian/Pacific/American Institute at NYU)

I strive to uncover invisible, suppressed stories that lie in the geopolitical shadows of colonialism and migration. As the 2016-17 Artist-in-Residence at the Asian/Pacific/American Institute at NYU, I will research the social history of plants via spice routes and botanical expeditions to create a multiplatform project, Rhunhattan, that will include psychogeographic and immersive tech experiences, as well as object and olfactory work to bring forth the historical and contemporary relationship between the islands of Rhun (located in present-day Banda Island Archipelago of Indonesia) and Manaháhtaan (original Lenape name of Manhattan).

During the 17th-century Spice Wars, Dutch Nieuw Amsterdam was captured by the British and renamed “New York.” By 1667, the Dutch had relinquished their claim to the colony in exchange for Rhun, the sole British colony in the Banda Islands of present-day Indonesia, thereby gaining a monopoly on the lucrative nutmeg and mace trade. This pivotal moment came at a bloody cost for Indigenous peoples: for both the Bandanese and the Lenape people of Manaháhtaan. Over the centuries, as the spice trade faded, Rhun settled into the background while Manaháhtaan rose to unprecedented financial success. The remaining colonial landmarks that continue to link these islands are the present-day National Museum of the American Indian at Bowling Green, which occupies the original site of Fort Amsterdam, and Fort Nassau of the Banda Islands; both forts share the same diamond-shaped architectural structure. In the visual narrative I will be developing, the identical forts act as portals between the two contested sites, collapsing the time and distance between these two islands.

To tell this story of two islands with intertwined fates of land dispossession and erasure during the birth of imperial globalization, propelled forward by countless caravans and ships transporting spice, sugar, and silk, I am reeducating myself about the broken human relationship with land and waters. We are living in debt to our future generations and must learn how the Lenape sustainably managed the island for the sake of futurity over millennia. In a time when massive glaciers the size of lower Manhattan can crash into the ocean without making a media splash, we have a great responsibility to fight apathy. We are living in urgent times, and there is a need to revitalize indigenous cultures and knowledge for environmental stewardship. We need a paradigm shift from falsely believing that human beings are landlords of the Earth to seeing humans as part of the ecosystem.

In the past year, through developing the Wayfinding Project at the A/P/A Institute at NYU, I have been learning about indigenous geography through the groundbreaking work of ecologist Eric Sanderson of the Mannahatta Project at the Bronx Zoo’s Wildlife Conservation Society, and of the Hōkūleʻa, a Polynesian double-hulled canoe circumnavigating the world with ancestral knowledge and the message of Mālama Honua, “To Care for Mother Earth.” These experiences inform a series of upcoming projects dedicated to Manaháhtaan, with an emphasis on native plants. I choose to work with native plants to honor the land that feeds and nurtures us. In addition, planting native plants supports pollinators, thereby strengthening the environment. In the ecosystem, the air, the insects, the algae, the soil, the stones, and the human and non-human animals all depend on each other, and this interdependency must be respected; after all, we are not all going to Mars.

The first event of my residency was A Tale of Two Islands, which took place on September 27, 2016 and began with the planting of three native trees at NYU’s Native Woodland Garden, followed by a performance lecture. Chief Reggie Herb Dancer Ceaser of the Matinecock Nation guided the cultural protocols for the planting, with the participation of the NYU Native American and Indigenous Student Group (NAISG) and Professor Jack Tchen’s Lenape Trail Seminar students. The native trees we planted include hornbeam, which used to grow in the NYU area. On Indigenous Peoples Day, in conjunction with the Wayfinding Project, we will open Lenapeway, a 24-hour window exhibit at 715 Broadway on view from October 10 to December 9, 2016, which realigns the spine of Manaháhtaan – Lenapeway, presently Broadway – with its Lenape heritage. This exhibit will be activated with a guided walk on October 18 through NYU’s Native Gardens, of which there are eleven on campus, created by NYU Grounds Manager George Reis. I am also partnering with Highway 101, ETC (Experiential Tech Community) to create a virtual reality experience, Mannahatta VR, to reimagine precolonial Manhattan and the possibilities of Indigenous Futurism.

In the spring semester, I am planning research travel to the Banda Islands to grasp the other side of this watershed historical moment through interviews, community engagement, visits to the fragments of Fort Nassau, archival research, and connection with the landscape. My hope is to produce a 360-degree video as well as select site-specific augmented reality experiences that will allow viewers to experience both places simultaneously while thinking through the shared experiences of the two islands. In the face of global environmental degradation, inequality, and polarizing debates over political and cultural borders, it is key to recognize that all ethnospheres and biospheres are, like archipelagos, connected beneath the surface.

2016-17 Artist-in-Residence Events

Tuesday, September 27, 2016, 6–9PM

Tuesday, October 18, 2016, 5:30-6:30PM

Monday, November 14, 2016, 2-5PM

Thursday, December 8, 2016, 6:30–8:30PM

Monday, December 12, 2016, 2-5PM

Please visit to receive updates on upcoming activities.

Distracted Reading: Acts of Attention in the Age of the Internet

By Marion Thain (Associate Director of Digital Humanities, NYU)

Date: September 27, 2016


Central to the humanities is the theorization and practice of modes of attention (to cultural artifacts, and to other aspects of the world). Indeed, many of us devote much time to finding ways to redirect our students’ attention away from the distractions of their electronic gadgets. But what if we consider how their distributed focus might model new acts of attention and new ways of reading: how might we rethink pedagogy and/or our own research methods in an era of hyper-connectivity? This event is cosponsored by the NYU FAS Office of Educational Technology.

Panel 1: Sound & Image

Jaime E. Oliver La Rosa
Assistant Professor of Music, NYU

Marina Hassapopoulou
Visiting Assistant Professor of Cinema Studies, NYU

Martha Hollander
Professor, Dept. of Fine Arts, Design, Art History, Hofstra University

Panel 2: Text

Suzanne England
Professor of Social Work, NYU

Ethna D. Lay
Assoc. Professor of Writing Studies, Assoc. Director of Digital Research Center, Hofstra University

Event moderator/organizer: Marion Thain (Associate Director of Digital Humanities, NYU)

This event is part of a project that is being represented in a special issue of Digital Humanities Quarterly; come along to see how you can get involved.


Event Location:
NYU Center for the Humanities
20 Cooper Square
New York, NY
United States

From Smart City to Quantified Community: A New Approach to Urban Science

By Constantine E. Kontokosta (NYU CUSP & NYU Tandon)

Urban planning as a profession shifted radically after World War II. Drawing on the military development of systems engineering and optimization processes for radar and missile control, planners attempted to apply complex-systems models and new decision-making algorithms to create optimized solutions to dynamic problems. The reliance on quantitative analysis and the ascendance of social-scientific and technical solutions to planning problems led to the acceptance of a rational comprehensive model of decision-making at the expense of political and contextual realities. This pivot left the more significant activities, such as agenda-setting and goal-formation, to those government officials or special interests who wielded political or financial capital.

Today, the convergence of two phenomena – the ability to collect, store, and process an expanding volume of data and the increasing level of global urbanization – presents the opportunity and need to use large-scale datasets and analytics to address fundamental problems and challenges of city operations, policy, and planning. Unfortunately, the techno-centric marketing rhetoric around “Smart Cities” has been replete with unfulfilled promises, and the persistent use (and mis-use) of the term Big Data has generated confusion and distrust about potential applications of technology in cities. Despite this, the reality remains that disruptive shifts in ubiquitous data collection (including mobile devices, GPS, social media, and synoptic video) and its analysis will have a profound effect on urban policy and planning and our collective understanding of urban life.

There is an opportunity now to learn from the mistakes of the past and to use new data streams and computing capabilities not in a singular quest for optimal solutions, but rather to enhance and support how communities identify, define, and collectively try to address their most pressing challenges. Problems vary by neighborhood, time, and demographics. Needs are defined by personal expectations, feelings, and values. Practitioners in the emerging field of urban informatics should recognize the importance of difference and develop a grounded appreciation of the social and behavioral dynamics of place.

At NYU’s Center for Urban Science and Progress (CUSP), I am leading work on a major research initiative called the Quantified Community (QC), which will soon expand as it becomes a founding partner of NYC’s Neighborhood Innovation Labs initiative. The intent is to use new methods to collect, fuse, and analyze data to enable improved neighborhood planning and urban design, and, ultimately, positively impact quality-of-life for those who live in cities by addressing persistent questions on how the built environment shapes individual and collective outcomes. This goal is grounded in the need to engage the local community and let residents better understand and ultimately define problems and needs, and to use data analytics to advance potential ways to reduce or eliminate these challenges. It is an experiment in every sense, as many of the “what, why, and how” questions of community data science still remain to be answered, although we are making progress.

We have initially launched the QC in three very distinct neighborhoods: at Hudson Yards, a ground-up “city-within-a-city” on the far west side of Manhattan; in Lower Manhattan, a mixed-use neighborhood that attracts residents, workers, and visitors; and, most recently, in Red Hook, Brooklyn, an economically-distressed community facing significant development and demographic changes. In each of these communities, we are working with different constituents to define problems and build an “informatics infrastructure” to support community planning and local decision-making. At Hudson Yards, which is still a construction site, our partner is the developer who is designing and building the project. In Lower Manhattan, we are partnering with the local non-profit Business Improvement District, whose goals are to improve quality-of-life in the area to increase the neighborhood’s attractiveness to residents, workers, and tourists. And in Red Hook, we are collaborating with a community organization that provides social service support and workforce training for neighborhood residents.

Our work in Red Hook is perhaps the most compelling opportunity to test the potential of data analytics and internet-of-things (IoT) technology to actually enhance well-being in a traditionally under-served community. The case is not clear, and the outcome not assured. But the partnership has already demonstrated that technology – when used transparently, guided by community problem-solving, and translated in a way that a range of stakeholders can understand – can be both a direct source of economic opportunity and a means to rethink how we guide and evaluate urban planning decisions and the role of citizen engagement in that process.

Digital Humanities and Educational Technology: Two Perspectives

By Armanda Lewis and Robert L. Squillace

Connecting Digital Humanities Scholarship and Instructional Practice

By Armanda Lewis (Director, FAS Office of Educational Technology, and Adjunct Faculty, Steinhardt School of Culture, Education, and Human Development, NYU)

During a keynote address given in 1990, the prominent educator and scientist Seymour Papert ruminated on disciplinary innovation (13). Imagining a nineteenth-century surgeon and a nineteenth-century teacher, each with the ability to time travel, he noted that the surgeon would be completely out of sorts in the world of modern medicine, while the teacher would find the contemporary scene more familiar than not. While the doctor of the 1800s would be unable to perform procedures and operate tools advanced by digital technologies, the teacher would find the representative instructional practices of the late twentieth century largely routine. Papert’s main appeal was to harness the power of computers to give students opportunities to create new and deep knowledge, challenge existing ideas, and become critical thinkers.

There are important comparisons to make between this instructional call to action and the promise of the Digital Humanities (DH), which explores how emerging technologies can advance and inform centuries-old humanities research methods and epistemes. As a literary scholar and educational technologist, I frequently draw connections between research practices and teaching and learning. Of interest here are the parallel developments that have allowed both traditional humanist scholarship and instruction to advance in a digitally mediated environment. Though there are many connections to make between these fields, I will highlight three areas: constructionist activities, interdisciplinary collaborations, and multimodality.

A humanist lens involves the scholar’s own systematic reflection on a theme, and rests on rich interpretation, contextualization of arguments, and critical synthesis.  It is a deeply interpretivist approach that relates directly to the main active learning paradigm that has underpinned educational technology theories during the past few decades (Hoover 1). This approach recognizes learning as an ongoing and participatory process transformed by one’s own experiences and perspectives. Humanities thinking has long positioned the researcher as the chief agent through which knowledge building happens and is negotiated. Likewise, active learning reflects the learner’s role in his or her own development.

One trend in both humanities research and teaching practice is the move towards more concrete building (i.e., constructionist) activities, expanding deep thought processes to include creation practices that are iterative and often technologically mediated (Papert and Harel 3). Within the teaching and learning space, this has led to maker movements in which the active thinking that underpins cognitive theories is extended to include students’ participation in artifact creation. This includes everything from circuit building, to digital storytelling, to mobile application design. Within the humanities, the trend towards constructionist practices relates to what Burdick, Drucker, and colleagues term “thinking-through-practice … Digital Humanities is a production-based endeavor in which theoretical issues get tested in the design of implementation, and implementations are loci of theoretical reflection and elaboration” (13). Humanists now design, prototype, and disseminate applications that facilitate citation management (e.g. Zotero), archival storage (e.g. Omeka), and data visualization (e.g. Voyant). In both scholarly and instructional cases, an entrepreneurial and autodidactic spirit prevails.

Another comparable development between humanities research and instruction is the recognized need for interdisciplinary collaborations to advance theory and practice. Interdisciplinarity is not new within the humanities; there are long-standing examples of scholars being inspired by and integrating knowledge from other disciplines. What has emerged is the move from what Ryan Cordell and others have characterized as understanding interdisciplinarity from the perspective of one’s own discipline, to developing competence in diverse epistemological frameworks and accompanying methods. Humanists, for example, are not necessarily becoming coders, but they are developing computational literacies that will unlock new methods and enable them to collaborate with programmers to design new research interfaces and functionality. One study, which reveals new ways of viewing art, was possible only through merging neuroscience and aesthetic studies; it asks questions that could not have been posed through mono-disciplinary approaches (Vessel, Starr, and Rubin 258). From the learning side, theory and practice have advanced from learning as conditioned behavior to learning as a complex interplay between the individual and the environment. As such, instructional practice now applies theories from critical studies, design, neuroscience, psychology, and more.

One additional connection is the move towards multimodality, or communication through distinct forms of representation (e.g. textual, spatial, linguistic, aural, etc.). Multimodality connects to the two previous points, since it is foundational to constructionist knowledge building (it aggregates the cognitive, the social, and the tactile) and it evokes interdisciplinarity (different disciplines represent information differently). Increasingly, and with varying degrees of success, humanist argumentation uses evidence delivered through a variety of modes. At its best, this multimodal argumentation supports claims impossible to address with a single representational form, and it contrasts with traditional textual arguments that are enhanced, but not transformed, by visual or aural appendices. Born-digital scholarship seeks this transformational quality in communicating claims. Learning scholars recognize the importance of supporting multimodality to accommodate distinct learning styles, and they also take advantage of multimodal learning artifacts that can provide a richer understanding of outcomes.

The development of digital technologies has facilitated the advancement of humanities research towards more collaborative, interdisciplinary, and multimodal endeavors. Similarly, deep, contextualized learning that can happen anywhere, anytime, is made possible through networked systems. In thinking about these parallel developments in the humanities and in learning, my hope is that core participants – researchers and practitioners who span perspectives and fields – will continue to explore new ways to connect, challenge, and adapt technologies for knowledge building. Digitally enhanced collaborations have the potential to bring about distributed, gestalt knowledge building that not only advances existing understanding, but creates new forms of understanding.

Works Cited

Burdick, Anne, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp. Digital_Humanities. Cambridge, MA: MIT Press, 2012.

Cordell, Ryan. “DH, Interdisciplinarity, and Curricular Incursion.” Ryan Cordell Blog, 20 Feb. 2012. Web. 9 Dec. 2015.

Hoover, Wesley A. “The Practice Implications of Constructivism.” SEDL Letter 9.3 (1996): 1. Web. 13 Dec. 2015.

Papert, Seymour. “Perestroika and Epistemological Politics.” Constructionism. Eds. Idit Harel and Seymour Papert. Norwood, NJ: Ablex Publishing Corporation, 1991. 13-28.

Papert, Seymour, and Idit Harel. “Situating Constructionism.” Constructionism. Eds. Idit Harel and Seymour Papert. Norwood, NJ: Ablex Publishing Corporation, 1991. 1-12.

Vessel, Edward, G. Gabrielle Starr, and Nava Rubin. “Art Reaches Within: Aesthetic Experience, the Self and the Default Mode Network.” Frontiers in Neuroscience 7 (2013): 258. Web. 12 Dec. 2015.

Epithalamium on the Auspicious Nuptials of Digital Humanities and Educational Technology

By Robert L. Squillace (Associate Dean for Academic Affairs and Educational Technology Liaison, Liberal Studies, NYU)

Digital Humanities and Educational Technology have been like two rivers that arose from different sources and, for many years, ran parallel but separate courses, with little traffic between them. Digital Humanities originated (much earlier) in scholarly projects, like Father Busa’s Index Thomisticus, and long seemed but distantly related to undergraduate education, while Educational Technology (beginning much later) focused on the package of digital conveniences for easing the scut-work of an instructor’s life known as the Learning Management System. Workers in DH communicated almost exclusively with other workers in DH; workers in Ed Tech communicated almost exclusively with other workers in Ed Tech. Even the employment structure of the two fields differed radically, with the development of Digital Humanities being driven largely by faculty members (and DH positions often being faculty lines) while Educational Technology was largely the province of IT departments, its agenda driven by developer/designer teams who were not academics or by instructional technology specialists who held degrees in Education and did not have appointments in humanities departments like English or Art History.

But a confluence between these two streams is nearly at hand. The goals of the two have always been consonant with each other. One shared goal is simply the automation of previously time-consuming tasks – compiling a concordance, averaging grades – by transferring resources and operations from the physical world of paper, ink, and binding (whether a scholarly index or a gradebook) to virtual space. The more visionary goal of both remains a paradigm shift regarding the production and dissemination of knowledge that changes the character of the scholarly and pedagogic enterprises themselves. When realized, its consequences will be many. As scholarship relocates to online platforms in which every action leaves a legible trace, peer review can occupy a post facto position: the actual use other scholars make of one’s work, rather than the opinions of a small review panel, can be what credentials it. Similarly, teaching excellence might finally be better credentialed as the artifacts it produces are made visible and shareable. Easy shared access to visual materials also promises to break the hegemony of print in teaching; a course need no longer be built on a reading list simply because books can be reproduced and shared more cheaply than work in any other medium (in turn, disciplinary boundaries are likely to become even more permeable). And, at its most innovative, Digital Humanities promises to open new ways of seeing humanities artifacts that were not conceivable in a pre-digital world, while Educational Technology similarly promises to re-center education on the activity and achievement of the student rather than the transfer of hoarded knowledge from the professor.

But common goals do not a community make. A community grows out of shared use; the imminent confluence between the DH and Ed Tech streams will come about when the two more fully share the same digital tools. And that integration is beginning to happen. In teaching a humanities course, for instance, an instructor might use a blogging platform like WordPress to develop an off-site dialogue on the major themes of the class while at the same time using the platform as a CMS for disseminating and gathering comment on his or her scholarly work. A curation platform like Omeka has equal application in scholarly work and in courses. The MLA’s nascent Digital Pedagogy in the Humanities platform demonstrates this growing confluence, showing a very widespread use of platforms like WordPress for the presentation of course content, and a smaller but significant attempt to teach students the humanities applications of such platforms.

As these examples multiply, the use of educational technology for humanities courses will focus increasingly on acquainting students with whatever emerge as the standard platforms for creating and sharing scholarly work, rather than on facilitating management functions like keeping track of grades. In particular, the tools that emerge as leaders in the confluence between DH and Ed Tech will share three characteristics:

  • They will be web-delivered (I imagine they will also be mobile-responsive, but that almost goes without saying now)
  • They will be LTI-compliant
  • They will support multiple levels of sophistication in their use, so that both students and scholars can fruitfully employ them

The Learning Tools Interoperability (LTI) standard is breaking the stranglehold of the Learning Management System on large-scale educational technology initiatives, allowing the same tools that humanities professors use in their scholarship to be integrated into their teaching at a new level; indeed, DH tools that meet the LTI standard will have a considerable advantage in gaining adoption and are likely to emerge as the heretofore elusive “industry standards” for digital scholarship. A recent Google Group query (!topic/omeka-dev/scsQNp7QskA) regarding the LTI-compliance of Omeka illustrates the point. While educational technology developers have not always been terribly imaginative in the uses they have made of the special capabilities of digital platforms, the developers of DH tools can take at least one cue from them. Educational technologists have always conceived their work in relationship to the course – a common space shared by students and faculty (and populated automatically from a Student Information System). Their development of a standard that allows tools to be integrated in a single, course-based learning platform is a logical extension of that focus. Indeed, it is a practical necessity if a tool is going to scale up for use by many instructors across many sections.
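For readers curious what "LTI-compliant" means concretely: an LTI 1.1 launch is, at bottom, an HTTP POST from the LMS to the tool, signed with OAuth 1.0a so the tool can trust who is launching it. The following is a rough, standard-library-only Python sketch of that signature check; the parameter names come from the LTI 1.1 specification, while the URL, consumer key, and shared secret are hypothetical placeholders.

```python
# Hypothetical sketch of verifying an LTI 1.1 launch request's
# OAuth 1.0a HMAC-SHA1 signature, using only the standard library.
import base64
import hashlib
import hmac
from urllib.parse import quote


def _enc(s):
    # RFC 5849 percent-encoding: only unreserved characters stay literal.
    return quote(str(s), safe="")


def lti_signature(method, url, params, consumer_secret):
    """Compute the OAuth 1.0a HMAC-SHA1 signature for a launch POST."""
    # Sort the encoded parameters, excluding oauth_signature itself.
    pairs = sorted((_enc(k), _enc(v)) for k, v in params.items()
                   if k != "oauth_signature")
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    # Signature base string: METHOD & encoded URL & encoded params.
    base = "&".join([method.upper(), _enc(url), _enc(param_str)])
    # Signing key: consumer secret plus an empty token secret,
    # since an LTI launch carries no OAuth token.
    key = _enc(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


def verify_launch(method, url, params, consumer_secret):
    """Return True if the request's oauth_signature matches."""
    expected = lti_signature(method, url, params, consumer_secret)
    return hmac.compare_digest(expected, params.get("oauth_signature", ""))
```

A real tool provider would also check the timestamp and nonce against replay, but the sketch suggests why the standard lowers the barrier for scholarly tools: the signing scheme is simple enough that a small DH project can implement it without an LMS vendor's involvement.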

On the other hand, Digital Humanities tools, while often open source, have not always been built with an eye to use in the context of a course shell. That a tool like Gephi, for instance, is not web-delivered complicates its use in pedagogical contexts – it stands alone and outside, while for any wide adoption it would need to be able to stand within some kind of course site, congregated with other tools that serve different pedagogical needs than network mapping. Gephi is ill-designed for scalable use: it requires download to one’s own device, it is not LTI-compliant, and it supports only sophisticated users, its settings options being couched in a highly technical vocabulary. Were one to use Gephi in a humanities course, one would need to spend a great deal of time teaching students to use it, at the expense of the humanities artifacts it was meant to illuminate. But if the larger claims of DH are valid, the best way to teach students humanities ought to be by teaching them to employ digital practices that illuminate humanities artifacts, which means developing tools that can fit more seamlessly into the educational technology environment. The measure of any tool, after all, is how much easier it makes the task it was designed to perform.

At the same time, for those DH practitioners who use digital means to explore digital artifacts themselves (coding languages, network structures, etc.) from a humanistic perspective, the tool itself becomes the object of study, rather than a neutral implement. But tools, by the very fact that they aim to facilitate and simplify, tend to close off exactly what humanistic inquiry seeks to open up – the assumptions and choices that make an instrumentality work the way it does. A watch is a tool to tell time, but a watchsmith’s tools do not facilitate the telling of time; they facilitate opening the watch to see its workings and to manipulate its parts. Tools for bringing that effort into the classroom, for helping students to see the watch from the perspective of the watchsmith, should accompany the development of first-order tools, if the marriage of DH and Ed Tech is not to result in the same mystification of the assumptions behind the inquiry that has so often been criticized in DH.

Analysis Beyond Analytics: Exploring the Transformative Crossover Between Cinema & Media Studies and the Digital Humanities

By Marina Hassapopoulou

Cinema Studies has always been attuned to technological developments and their impact on machine-made art. Even before the first cinematic experiments in interactive storytelling and database narratives in the 1990s (including USC’s The Labyrinth Project, led by Marsha Kinder, and UCLA experiments in cinema forensics, led by Stephen Mamber), the pre-digital work of visionary filmmakers such as Dziga Vertov, Sergei Eisenstein and Luis Buñuel prefigures the database logic that is exemplary of the ways in which digital culture now organizes and interacts with data (Manovich 2001; Kinder). The idea of “making the invisible visible” that drives today’s data visualization design is, then, nothing new if we relate this impetus back to early filmmakers like Vertov, who reinvented the language of cinema to reveal recombinatory patterns in the editing of audiovisual “data” (Manovich 2013, 47). This anachronistically algorithmic approach to film suggests that approaches to film-as-data in Cinema Studies predate current data mining and digital visualization tools, yet overlap with humanistic inquiry at the core of the Digital Humanities to propose new ways of understanding and generating knowledge.


[Vertov’s diagram of editing sequences for Man with a Movie Camera (1929), the Hair salon sequence. The blue boxes annotate shot contents, while the red line between them visualizes the intervals in the editing. Image taken from Cinemetrics.]


[A flowchart tracing the various possibilities of nonlinear scene navigation in the interactive digital film Late Fragment (Anita Doron, Daryl Cloran, Mateo Guez, 2007). Image taken from Late Fragment.]

Although the argument that film could be treated as a database of computational, mineable data even before digital technology has already been made in relation to filmmaking, little attention has been paid so far to the fact that Digital Humanities (DH) approaches to the study of film are also evident as far back as early film theory. In fact, certain influential theorists such as Eisenstein and Lev Kuleshov – now “critical makers” in DH terms – used metric approaches to cinema in their film theories, in addition to their critical practice. Kuleshov, in particular, was keen on investigating the differences in the cultural perception of films produced in different countries by comparing the number of shots and the impact of montage in certain national cinemas. Kuleshov and his team then proceeded to analyze the potential psychological reasons and ideologies behind the cross-cultural discrepancies in the assemblage of nationally specific films, linking them to causes such as the impact of capitalism in various sociopolitical contexts. Kuleshov’s methodology sets a productive precedent for metric-driven inquiry, where the emphasis is not on the methods for data collection and retrieval (in this case, the number and frequency of shots), but on the humanistic research questions that actually drive the need for gathering data in the first place.


[A Kuleshov diagram visualizing the intra-shot montage and the splices of montage segments (marked by lines A, B, C, D, E, F). Reprinted in Critical Visions for Film Theory, p. 143 (see Works Cited).]

The use of metrics – or, as it is now called, cinemetrics – in film history and theory has gained popularity through the recent interdisciplinary work of contemporary scholars such as David Bordwell, Barry Salt, James Cutting, and Kristin Thompson. Yuri Tsivian’s Cinemetrics software offers digital tools for recording data for movie analysis, and publishes the data on the Cinemetrics website to create a collaborative, open-ended database for all users to access and contribute to. These tools, along with the reusable databases they create, make entire periods in film history easier to study and compare.
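The basic cinemetric statistics in question are straightforward to compute. The following sketch (not the actual Cinemetrics software) derives average and median shot length from a hypothetical list of cut timestamps, the kind of data a user might log while watching a film:

```python
# Illustrative sketch: basic cinemetric statistics from cut timestamps.
# The cut times below are hypothetical, not drawn from any real film.
from statistics import median

def shot_lengths(cut_times):
    """Durations (seconds) between consecutive cuts; first entry is film start."""
    return [b - a for a, b in zip(cut_times, cut_times[1:])]

def asl(cut_times):
    """Average shot length (ASL) in seconds, a standard cinemetric measure."""
    lengths = shot_lengths(cut_times)
    return sum(lengths) / len(lengths)

cuts = [0.0, 2.5, 4.0, 9.0, 10.5, 16.0]   # hypothetical cut points
lengths = shot_lengths(cuts)               # [2.5, 1.5, 5.0, 1.5, 5.5]
print(round(asl(cuts), 2))                 # 3.2
print(median(lengths))                     # 2.5
```

Aggregated across many user-submitted films, figures like these are what allow the comparisons of editing pace across periods and national cinemas that the paragraph above describes.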


[The Cinemetrics interface.]


[A user-submitted cinemetric breakdown of Hitchcock’s North by Northwest (1959)]

Just like Kuleshov’s pre-digital data-gathering methods, digital tools must be strategically employed to address compelling research questions in order to justify their use in the analysis of cinema. As Yuri Tsivian argues, “in science as in scholarship, progress is measured not by new answers given to old questions, but by new questions put to old answers” (Cinemetrics). Established methodologies in Cinema Studies, such as close reading, cultural studies, philosophical inquiry, and ideological investigation, must not be forgotten for the sake of privileging distant reading, data analytics, and other software-driven methods. My concern is that, as the analytical tools for the study of cinema shift toward computational methods, so does some of the scholarship being produced, to the point where analytical depth and inquiry are in danger of being reduced to a show-and-tell of the functions of automated systems, with diminished regard for the important “so what?” questions. This shift in focus can lead to significant omissions in the study of film if only digitally driven methodologies are emphasized (especially in light of institutional funding and the reorientation of priorities within the field). As Tara McPherson suggests, a multimodal humanist is not only one who “brings together databases, scholarly tools, networked writing, and peer-to-peer commentary” but also one who can leverage “the potential of visual and aural media” (McPherson 120). In the study of film, as in other areas that overlap with Digital Humanities inquiry, moments of dissonance between computational and humanistic ways of knowing can be just as productive, as Miriam Posner has argued (Posner). Moments of dissonance and inconclusiveness in clusters of data can shed light on, for instance, the diversity of film audience reception practices precisely through their resistance to being neatly classified into patterns.

Even before the rise of DH and the proliferation of digital media in our daily lives, film historians, theorists, and cinephiles began to draw attention to “a people’s history of cinema” that recognizes the importance of local and community production/consumption practices as fundamental aspects of movie history (Klenotic). While computational methods, text mining, GIS mapping, and data analytics have made certain aspects of these micro-histories more accessible – such as the collaborative “new cinema history” project AusCinemas, an interactive map visualization of Australian cinemas – there is room for even more innovation in how to approach “topics not previously thought to possess a history” by fusing traditional methodologies and research questions with digital tools (Maltby 32).

Therefore, it is important for Cinema and Media Studies to propose broader definitions of Digital Humanities approaches in order to maintain certain productive modes of inquiry that stem from older methods of analyzing the moving image; this serves the dual purpose of broadening DH methodologies and extending textual/media analysis beyond analytics. In addition, as scholars such as Katherine Groo and Geoffrey Cubitt have advocated, the methodological boundaries of our fields need to become more open in order to fill in epistemological gaps in the production of historical consciousness and in the study of cinema in all its permutations (Groo; Cubitt 234). The critical repurposing of existing media platforms and the use of film remixing practices, for instance, should be considered part of DH inquiry when they meet the objective of generating new modes of media literacy. While DH projects focusing primarily on the digitization of analog resources highlight the need for greater democratization of access to the collections of cultural institutions, the potential of these digital archives to have a life of their own should also be emphasized. Initiatives that invite the public to critically and interactively engage with archival material – such as the 2009 remix challenge posed by the EYE Film Institute in Amsterdam, which invited the public to remix twenty-one digitally restored fragments from its early, underutilized Dutch films collection – can result in innovatively productive ways of rethinking both historical archives and the ways in which audience participation can produce new modes of inquiry and historiography. The breakdown of hierarchies and institutional gatekeeping in the areas of knowledge production and consumption is therefore essential if we want to cover new epistemological ground and explore the unprecedented opportunities offered by digital tools and user-oriented platforms.

The transformative potential of the amalgamation of Digital Humanities methods and Cinema and Media Studies research is the inspiration behind the upcoming conference co-organized by NYU’s Cinema Studies department and Columbia University’s Film Studies graduate program, and supported by NYU’s Center for the Humanities. ‘Transformations I: Cinema and Media Studies Research Meets Digital Humanities Tools’ is the first installment of this groundbreaking endeavor, taking place at NYU on April 15-16, 2016. The objective of the conference is to expand both the Digital Humanities and the field of Cinema and Media Studies by means of interrelation, and to explore the diversity of new modes of inquiry that emerge from the convergence of these fields. The conference aims to foster a cross-disciplinary conversation among Humanities scholars, computer programmers, and software engineers, and to further investigate the nuances of the term “Digital Humanities.” The conference will feature influential critical makers in the field of Digital Humanities, with a special emphasis on Cinema and Media Studies work. We will consider, both practically and philosophically, the academic preparation of DH-ers and how it differs from – and the ways it can enhance – teaching and research in motion pictures and digital media.

For more information about the conference, including an annotated list of projects that use DH tools for Cinema and Media Studies research, and a selected list of Cinema Studies/Digital Humanities scholarship, visit the Transformations conference website:

Works Cited:

Groo, Katherine. “Cut, Paste, Glitch, and Stutter: Remixing Film History.” Frames Cinema Journal 2 (2012): n. pag. Web. 15 Feb. 2016.

Kinder, Marsha. “Designing a Database Cinema.” Future Cinema: The Cinematic Imaginary after Film. Ed. Jeffrey Shaw and Peter Weibel. Cambridge, MA: The MIT Press, 2003. 346-353. Print.

Klenotic, Jeffrey. “Four Hours of Hootin’ and Hollerin’: Moviegoing and Everyday Life Outside the Movie Palace.” Going to the Movies: Hollywood and the Social Experience of Cinema. Eds. Richard Maltby and Melvyn Stokes. Exeter: University of Exeter Press, 2007. 130-154. Print.

Kuleshov, Lev. “The Principles of Montage.” Critical Visions for Film Theory. Eds. Timothy Corrigan, Patricia White and Meta Mazaj. New York: Bedford/St. Martin’s, 2011. 137-144. Print.

Maltby, Richard. “New Cinema Histories.” Explorations in New Cinema History: Approaches and Case Studies. Eds. Richard Maltby, Daniel Biltereyst, and Philippe Meers. Malden, MA: Wiley-Blackwell, 2011. 3-40. Print.

Manovich, Lev. The Language of New Media. Cambridge, MA: The MIT Press, 2001. Print.

Manovich, Lev. “Visualizing Vertov.” Russian Journal of Communication 5.1 (2013): 44-55. Print.

McPherson, Tara. “Introduction: Media Studies and the Digital Humanities.” Cinema Journal 48.2 (2009): 119-123. Print.

Posner, Miriam. “Digital Humanities and Media Studies: Staging an Encounter.” Society for Cinema and Media Studies Annual Conference. The Drake Hotel, Chicago. 8 March 2013. Workshop lecture.

Tsivian, Yuri. “Taking Cinemetrics into the Digital Age (2005-Now).” Cinemetrics. Web. 2 Mar. 2016.

NYU Digital Humanities Project Showcase

We are pleased to announce an NYU Digital Humanities Project Showcase to be held on Friday, April 29th at NYU’s Center for the Humanities (20 Cooper Square, 5th floor). This event provides a forum for faculty, staff, and students to learn about each other’s work, create connections, and start new conversations. Open to an audience from both inside and outside the university, the event will feature the work of NYU’s vibrant and diverse DH community. The program will include ten-minute project presentations and two-minute lightning talks, and will end with a roundtable discussion devoted to identifying priorities for supporting and building the DH community at NYU.

Members of NYU interested in sharing a DH project should fill out the application form at
Application deadline: March 14th.

This is part of a series of NYU DH events run through the Center for the Humanities by Marion Thain, each in collaboration with colleagues from across the university.

We look forward to hearing about your work,

The Project Showcase organizing committee:
Zach Coble, Digital Scholarship Specialist, NYU Libraries; Kimon Keramidas, Clinical Assistant Professor of DH at Draper; Marion Thain, Associate Director of DH, NYU.

(Please note: this event substitutes for the symposium on the nature of digital evidence mentioned in the blog post below. That symposium is being rescheduled: watch this space!)