Reply To: Evaluation Criteria


Alex Koch

Dan’s post touches on a key aspect of the evaluation process in the Digital Humanities: the context of the work. As with the Presner quote above, virtually every evaluation guide I reviewed refers to examining the content in the form in which it was intended. My question is: where does that obligation end?

For example, if the project is a web database or an interactive online map, I think we can all agree that a screenshot will not accurately convey the project’s purpose or how it might assist other scholars, meaning an evaluation committee should at least open a web browser before submitting its review. However, one of the review guides I found [which I believe I have already linked to; if not, it’s in one of the 25 tabs currently open in my browser, and I apologize] goes on to say that for a web-based project, the evaluator is also expected to use the preferred web browser, install any necessary plugins, and so on, in order to display the project “as intended.” Now, perhaps this is the Communication coursework from my undergrad talking, but if we are to be evaluated on the usability of a project, how much responsibility should be placed on the audience to get the site working properly?

In some cases the site might even work properly, yet fail to present its information in an easily readable format. For example, the first runner-up for the Digital Humanities Award for Best Infographic or Visualization, the e-Diasporas Atlas, is an impressive project with beautiful visuals and a wealth of cited information, but I cannot for the life of me get the graphs and visuals to appear any larger than the average ad banner on a website. There is an offer to download the source file, and although some of the intended audience may be fairly tech-savvy, they may not know how to open a .gexf file. In fact, I’m certain there are plenty of digital humanists unfamiliar with XML or Gephi.
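(For anyone curious: a .gexf file is just Gephi’s XML graph format, and it can be read with a few lines of Python using the networkx library. What follows is only a rough sketch, assuming Python, networkx, and matplotlib are already installed, and the filename is made up for illustration.)

    import networkx as nx            # assumes the networkx library is installed
    import matplotlib.pyplot as plt  # assumes matplotlib is installed

    # Read the downloaded GEXF file; the filename here is hypothetical
    G = nx.read_gexf("e-diasporas-sample.gexf")
    print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")

    # Draw a quick view of the network that can be zoomed and resized
    nx.draw(G, node_size=15, width=0.2)
    plt.show()

Even this “minimal” step presumes a working Python environment and some comfort with the command line, which is exactly my point about how much a project can quietly ask of its audience.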

I suppose what I’m getting at is that any number of users, even those who consider themselves digital humanists, may not be able to adequately use the features of certain projects. The notion that these projects aren’t valid or strong examples of DH scholarship simply because we may not all have the knowledge or the computing power to adequately evaluate, or even use, them seems unfair and absurd to those doing the research, and it also feels counterintuitive to the spirit of the digital humanities. Additionally, it seems counterproductive for the scholar in question, as many of the evaluation guidelines rely on scholarly cross-promotion via links or peer review.

So does it fall to us to make the content foolproof in order to get the reviews and evaluations we need, or does the responsibility rest with the evaluation committees and the academy?