Evaluation Criteria

    #179
    Andy Schocket
    Keymaster

    How do we evaluate DH work? What should be considered? Let us know here.

    #191
    Daniel Fawcett
    Participant

    In my other post, I gave a quick evaluation of the Mapping the Republic of Letters project. And I indicated that it was a gorgeous, slick project with a lot of interesting things going on. But I also indicated that it wasn’t really a digital humanities project.

    Of course, as I am wont to do, I made that strong statement and then back-pedaled a bit. Well, a lot. I said that it really was a DH project, but one that used DH tools to accomplish very traditional humanities goals. But the more I think about it, the less sure I am of that stance.

    Part of my dithering comes from the “distant reading” idea that Matthew Jockers introduced to us (by way of Franco Moretti). The Mapping project certainly does give us a distant-reading approach to the work of several scholars. How in the world would it be possible to trace the connections between multiple thinkers and all of their intellectual networks in a pre-digital environment? This is particularly important when we realize that many of these texts probably exist only in isolated libraries scattered throughout the world, and that the digital tools allow scholars who are more geographically bound to at least access the information and metadata, if not the texts themselves.

    But Todd Presner’s How to Evaluate Digital Scholarship is, I think, even more helpful in analyzing this particular site (as well as Terralingua, the other site I examined). Presner first asks us to evaluate the work “in the medium in which it was produced and published” (2012). In this way, comparing the Mapping project to, say, non-DH attempts to situate thinkers in context is like comparing apples to submarines. The Mapping project is an attempt to situate the thinkers in a large, data-rich, visual context, and this is something that the Web is simply better at doing than print. On the other hand, Greil Marcus’s book Lipstick Traces, another attempt to situate various thinkers and artists within their context, simply must be evaluated as a book because that is how it was written. They were attempting to do different things. If the Mapping project had wanted to be a traditional humanities project, it would have been published as a book.

    But I still can’t shake the notion that there is something… “traditional,” I suppose, about the Mapping project. In that way, I think it falls short of Presner’s criterion of “risk-taking.” The project seems a bit mundane, just enhanced in size.

    Those are simply some initial thoughts. I hope that we can discuss them further in class.

    #199
    Alex Koch
    Participant

    Dan’s post touches on a key aspect of the evaluation process in the Digital Humanities: the context of the work. Just as Presner was quoted above, virtually every evaluation guide I reviewed refers to examining the content in the form in which it was intended. My question is, when does that stop?

    For example, if the project is a web database or an interactive online map, I think we all agree that a screenshot is not going to accurately depict its purpose or how it might assist other scholars – meaning an evaluation committee should at least open a web browser before submitting their review. However, one of the review guides I found [which I believe I have linked to already, but if not it’s in one of the 25 tabs I currently have open on my browser, and I apologize] goes on to say that if it is a web-based project, the evaluator is also expected to use the preferred web browser, install any necessary plugins, etc., in order to display the project “as intended.” Now perhaps this is the Communication courses of my undergrad talking, but if we’re to be evaluated on the usability of the project, how much responsibility should be placed on the audience to get the site to work properly?

    In some cases, the site might even work properly but fail to present the information in an easily readable format. For example, the first runner-up for the Digital Humanities Award for Best Infographic or Visualization, the e-Diasporas Atlas, is an impressive project with beautiful visuals and a ton of information cited, but I cannot for the life of me get the graphs/visuals to appear any larger than the average ad banner on a website. There’s an offer to download the source file, and although some of the intended audience may be fairly tech savvy, they may not know how to open a .gexf file. In fact, I’m certain there are plenty of digital humanists not familiar with XML or Gephi.
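    (For anyone who does want to peek inside one: a .gexf file is just Gephi’s XML graph format, and it can be read programmatically. Below is a minimal sketch using Python’s networkx library, which includes a GEXF reader – note that the filename e_diasporas.gexf is my own hypothetical stand-in for whatever the downloaded file is actually called, and networkx would need to be installed first.)

```python
# Minimal sketch: inspecting a downloaded Gephi .gexf file with networkx.
# Assumes networkx is installed (pip install networkx) and that
# "e_diasporas.gexf" is a hypothetical local copy of the source file.
import networkx as nx

graph = nx.read_gexf("e_diasporas.gexf")

# Basic orientation: how big is the network?
print(f"{graph.number_of_nodes()} nodes, {graph.number_of_edges()} edges")

# Peek at a few nodes and their attributes to see what the graph holds.
for node_id, attrs in list(graph.nodes(data=True))[:5]:
    print(node_id, attrs)
```

    Of course, expecting every reviewer to do something like this is exactly the usability problem I’m describing.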

    I suppose what I’m getting at is that there are potentially any number of users, even those who consider themselves digital humanists, who may not be able to adequately use the features of certain projects. And while it may seem unfair or absurd to those doing the research to call these projects invalid or weak examples of DH scholarship just because we may not all have the knowledge or the computing power to adequately evaluate or even use them, the situation also feels counterintuitive to the spirit of the digital humanities. Additionally, it seems counterproductive for the scholar in question, as many of the evaluation guidelines refer to scholarly cross-promotion via links or peer review.

    So does it fall to us to make the content foolproof, in order to get the reviews and evaluations we need… or does the responsibility rest with the evaluation committees and the academy?

    #200
    Alex Koch
    Participant

    I failed to mention in my previous post that these non-tech-oriented, supercomputer-less DHers would account for four of the six categories outlined by James Smithies.

    #202
    Shane Snyder
    Participant

    In my other two posts, I looked over four tools and digital projects – DOAJ, Bamboo Dirt, Republic of Letters, and London Lives 1690 to 1800 – all of which had their problems. Something about these projects reeked of conservatism, of turgid academia, and the public aspect of each site decayed along with any real interest in reaching an audience that might otherwise enjoy parsing the data.

    The Republic of Letters and London Lives, in particular, exposed the wounds of DH. It becomes difficult, from my perspective, to justify the idle collecting of data that reaches the eyes of the few and that has no interest in extending itself beyond the age-old habit of canonizing thinkers. As problematic a thinker as he was, Howard Zinn attempted to take history out of the hands of the ruling classes (the so-called “intellectual elites”) and place it into the hands of the people. Again, there are problems with this model that I do not have the historical background to debate. The fact that the medium is different (a book) also changes the conditions. But the question remains: why do public, open-access digital work if bringing the information to the public peddles the ideology that the “public” historically had no say in cultural productions?

    For example, the Republic of Letters can trace the complex networks of correspondence that Voltaire, Benjamin Franklin, and John Locke found themselves a part of, networks whose visual shape across such vast geographic space those thinkers themselves could never have recognized. But the research question is reducible to prominent thinkers whom the public has been told, in history classes, are responsible for the visage culture took on at the time.

    I recall a debate on YouTube between Noam Chomsky and Michel Foucault. Foucault and Chomsky agree on one big point. History, Foucault argues, should not be traceable back to the prominent figures that those in power have told us should become signifiers of that history. This seems especially logical from the standpoint of an internet culture that conducts the bulk of its research from a tattered armchair at home. The internet is a subversive technology, after all (which might account for all of the net neutrality debates cropping up in the wake of the FCC’s recent defeat at the hands of Verizon and its greedy data hoarding). The public thus has a say in what creates and maintains cultural trends.

    The Digital Humanities is in a position to popularize academia by doing the same work as Neil deGrasse Tyson, who, like Carl Sagan before him, has taken up the mantle of the avuncular scientist and philosopher who waxes poetic about the wonder of the cosmos. Digital projects can be academic, yes, but they should be dynamic, too. They should become multimedia experiments in answering big, overarching research questions. They should be like games for the public to interact with. They should be teaching tools for the professor struggling for a topic in the classroom. They may, one day, render the university sterile, but I have a hunch that won’t be the case.

    #207
    Katlin Humrickhouse
    Participant

    I guess I am going to take this question a little differently than others have. I am going to answer it in two ways: how do I evaluate, and how should we evaluate? I worked through this assignment much as I would work through a paper or a skeleton review. Professor Schocket has discussed this approach a couple of times – even in the prompt. So – here were my steps.

    First, I looked at the title and “cover”, or home page, much like I would when evaluating a book. Does the title or cover mean anything? Can anything be taken from it? Or, in this instance, was there much work put into it? Charles Darwin Library is pretty straightforward. I knew it was going to be knowledge-based – academic, in a way.

    Second, I looked at formatting. In a paper, I would generally look at citation style, spacing, organization, and other such traits. For this assignment, it was no different. I took note, probably more subconsciously, of the font style, spacing, and organization. This all falls into the evaluation of the aesthetics. Was it confusing? Was it clean? Was it easy? This all plays into the ease of use of the project. Organization, much as in a paper, is very important in a digital humanities project. In a paper, citations and a bibliography, as well as the organization of content, make the paper easy to navigate. I know that if a paper is written in Chicago style, I will be able to look to the footnotes for a quick citation. If a paper is written in MLA, I will have to flip back to the bibliography. This sort of organization is also needed in a project such as the Charles Darwin Library. I think what I used most to evaluate the tools was ease of use and accessibility. Also, the more support a tool has, the better off it is.

    Third, I looked for a thesis or purpose. The what or the why needs to be answered. If there isn’t a reason for the project or tool, or for a paper or book, then what purpose does it even serve? A thesis or purpose is as important a part of a digital humanities project as it is of a paper or book. And, much as with a paper, a purpose for a digital humanities project can be difficult to come up with. (As some of us, including myself, are finding with our semester-long projects.)

    The big question is: should we use these steps to evaluate digital humanities projects and tools? I say yes! If we can use these steps to evaluate projects and tools, we can come close to finding worthwhile resources in digital humanities. We can even use these steps to evaluate and improve our own digital humanities work. Using these steps to evaluate our own work can make our projects more dynamic as well as academic – which Shane argues for very strongly, and I totally agree.

    #210
    Becky Jenkins
    Participant

    What criteria did I choose to use and why?

    For all of the projects and sites, I provided background information, evaluated the design and impact, and questioned the authoritativeness of each before coming to an opinionated conclusion.

    Under the category of design, I evaluated the navigation and search capabilities, the overall aesthetic, and the overall usability of the project sites, and the ease of use, overall aesthetic, and overall usefulness of the tools.

    Under the category of impact, I researched the reach and affiliations of the tools and projects. This included the amount of web traffic they received, the institutions they were affiliated with, and, in general, how the information has been accessed.

    Under the category of authority, I questioned how rigorous the scholarship was for the projects and how rooted in scholarship the tools were. Offline scholarly work must be rooted in rigorous scholarship, and our tools should be held to the same standard.

    I chose these criteria because I thought they were among the most important standards by which to judge the individual projects. For the websites, design is the number one, most important aspect of a site. As with food, we first taste a website with our eyes, and we make many judgments based on our first look. Most academics are at least proficient in web surfing, and many have come to expect certain amenities and aesthetics when using an online resource. Basics like easy search tools and a clearly navigable path aren’t always included on a site, so we should evaluate each site for these most basic criteria. The final criterion for the website projects was the overall usability of the site for gaining and sharing academic information – if you can’t find the information on the site, even if it’s there somewhere, the site isn’t very useful.

    For the tools, the criteria had to be adjusted, but only slightly. The most important function of any web or DH tool is its usability – for a tool to be useful, it must be used. The overall aesthetic was important, as with the websites, because users have come to expect clean designs and will distrust tools that look unprofessional. Finally, overall usefulness was considered – will this tool help create new knowledge or help create a new way to share information?

    What is fair to evaluate a project on?

    Design and usefulness are the basic evaluation criteria on which it would be fair to judge any project. They go hand in hand, but any project or tool should be aesthetically pleasing (or at least not an eyesore), and it should have a useful design or outcome. I also think the ease of extracting information should be evaluated – in other words, can I use this site or tool to do what I want or to get the information I need? The Salem Project’s lack of a comprehensive search tool is a good example of what I mean – the information is all there, but it takes more work than should be necessary to extract it. A more inclusive method of collecting the available data should be provided.

    What should be off limits?

    As the web ages, many sites are being left behind in design and functionality upgrades. I don’t think it’s fair to judge a site by contemporary standards, but all sites should have some degree of usability, even really old ones like the Salem Witch Archive project. It’s reasonable to assume that not all projects have the budgets or resources to update, even every few years. We should simply be glad the resources are still available!

    What’s important or trivial?

    If I had to choose a single criterion on which to judge any DH tool or project, it would be overall ease of use. Even bad design can be overcome with easy-to-use tools. If a tool is not useful, it’s a waste of resources and time for both the developers and the users. It’s also important to maintain a scholarly level of work – maintaining scholarly standards for documentation and peer review.

    #211
    Andy Schocket
    Keymaster

    It seems that we’ve come up with some more ideas on our own, as well as in conjunction with the reading. Not only what a project does, but its scale; not only where it is, but the possibilities and limits of its medium; not only by whom, but with what sort of partners and input.

    Becky had asked about the criteria I was putting in a Google Doc during our session. Here it is, free for you (or, for that matter, anyone else) to add to or revise: http://goo.gl/MrnDbk

    #215
    Matt Younglove
    Participant

    How to Evaluate a DH project

    Similar to Daniel’s approach, I found myself questioning whether the DH projects I was evaluating were really DH projects or glorified humanities projects aided by fancy tech. This made me redefine my idea of a DH project as just that: “A Humanities Project Aided by Technology.” This definition then gives us a set of parameters that we can delve into further.
    1) How does the tech aid the research?
    2) How feasible would the research be without the tech?
    3) Does the DH project provide opportunity for more in-depth scholarship?
    4) What is the scope of possibilities provided by this project?
    5) What insights (thus far) has the project given? (With a good project, this will change over time.)

    Alex’s comment about how much responsibility is on the audience touches on another interesting and valuable point. Evaluating a DH project on how easily it is used by the public is like evaluating music on how many people like it. It’s subjective, based on a lack of knowledge rather than a plethora of it. I think it’s more worthwhile to evaluate the project based on the results it can produce in the hands of those trained to use the tool at its highest level, as opposed to its lowest. It shouldn’t be compared to the ease of Google or Facebook.

    This brings me to a question I’ve been dealing with quite a bit. I know we are in academia, where evaluation is so critical for tenure and promotion, and that this is the current situation in which we are placed, but why is evaluation so important at this stage of the field? Why not let projects exist and just see what we can accomplish with them? Maybe the easiest way to evaluate each project is to ask, “What does it want to accomplish?” and “Does it accomplish that goal?”
