Evaluating DH projects

One way to get to know DH projects is to look at a few and really think about their comparative merits.   Your assignment, should you decide to accept it, is to evaluate at least one tool and at least one content-oriented project.  More on how I’m defining those below; oh, and you don’t really have a choice about this assignment, but I couldn’t resist the reference.  At least this exercise doesn’t involve nuclear warheads.

As you know, this is shaping up to be a tough nut to crack in DH: how do we evaluate DH projects?  There are of course several challenges here.  One is that DH products vary so widely that criteria appropriate for one might not suit another.  A second is that, despite evolving best practices, many projects do not offer the kind of information about their creators, creation process, and technical details that evaluation might require, depending on which criteria we think important.  Finally, many of these kinds of projects haven’t been around long enough for scholars to reach a consensus on what matters most.

By analogy, think of another kind of scholarly production that by now we know how to evaluate: books.  There are a zillion intergoogle guides on how to write scholarly book reviews.  They vary, but share common elements: the ground the book covers, or its scope; the book’s thesis; the book’s methodology; the extent to which it succeeds in its aims; and its place in the relevant scholarly literature.  You might think that’s all there is to say about books, anyway.  But you’d be wrong.  With rare exceptions, we don’t discuss the aesthetics of the book itself.  We treat authors as sole actors, rather than considering the role of editors.  Most journals don’t ask that reviewers address audience, the strength of a book’s writing, its length relative to its main points, its bibliographic apparatus (which can account for as much as a third of a book’s word count), and so on.  We’ve come to a consensus on how we’d like to judge books, and this is what these many guides reflect.

Now, the assignment: before class on Wed., March 19, I’d like you to post in the Evaluating DH forum two evaluations, one of a tool and one of a content-oriented project of your choice.  Post these in the Project Evaluations area.  Each must be between 500 and 750 words.  Each must refer to at least two other tools or content-oriented projects for purposes of comparison.  Each must not only explain the project but evaluate it: that is, what did it do well, and what did it not do so well?  Finally, I’d like a third post, again 500–750 words, as a response in the Evaluation Criteria area.  What criteria did you choose, and why?  What’s fair to evaluate a project on, and what should be off-limits?  What’s important, and what’s trivial?

Of course, we’re not the first to consider this problem.  Some things to go on: the MLA has come up with criteria, although they’re necessarily pretty general.  The best general discussion of evaluating digital humanities work is the Journal of Digital Humanities Vol. 1, No. 4, an issue devoted almost entirely to evaluation.

The distinction between these kinds of projects is not hard-and-fast.  For the sake of this exercise, let’s call a tool something you could use on a very wide variety of materials.  Most of these are pretty clear.  For example, Zotero and Omeka are clearly tools that can be used by humanists regardless of material.  By contrast, content-oriented projects revolve around particular materials or intellectual questions, like Mapping the Enlightenment or the Old Bailey project.  That said, I admit the line I’m drawing here can be quite porous, of the Potter Stewart “I know it when I see it” variety.  Many projects have both a methodological component and content that makes them special.  Feel free to muddy the waters as you see fit.