What criteria did I choose to use and why?
For each of the projects and sites, I provided background information, evaluated the design and impact, and questioned the authoritativeness before reaching my own conclusion.
Under the category of design, I evaluated the navigation and search capabilities of the project sites and the overall aesthetic and usability of the projects; for the tools, I considered ease of use, overall aesthetic, and overall usefulness.
Under the category of impact, I researched the reach and affiliations of the tools and projects, including the web traffic they received, the institutions they were affiliated with, and, in general, how their information has been accessed.
Under the category of authority, I questioned how rigorous the scholarship behind the projects was and how rooted in scholarship the tools were. Offline scholarly work must be rooted in rigorous scholarship, and our digital tools should be held to the same standard.
I chose these criteria because I thought they were among the most important standards for judging the individual projects. For the websites, design is the single most important aspect of a site. As with food, we first taste a website with our eyes, and we make many judgments based on that first look. Most academics are at least proficient web users, and many have come to expect certain amenities and aesthetics in an online resource. Basics like easy search tools and a clearly navigable path aren’t always included on a site, so we should evaluate each site against this most fundamental criterion. The final criterion for the website projects was the overall usability of the site for finding and sharing academic information – if you can’t find the information on the site, even if it’s there somewhere, the site isn’t very useful.
For the tools, the criteria had to be adjusted, but only slightly. The most important quality of any web or DH tool is its usability – for a tool to be useful, it must be used. The overall aesthetic was important, as it was for the project sites, because users have come to expect clean designs and will distrust tools that look unprofessional. Finally, overall usefulness was considered – will this tool help create new knowledge or a new way to share information?
What is fair to evaluate a project on?
Design and usefulness are the basic criteria on which it would be fair to judge any project. They go hand in hand: any project or tool should be aesthetically pleasing (or at least not an eyesore), and it should have a useful design or outcome. I also think the ease of extracting information should be evaluated – in other words, can I use this site or tool to do what I want or to get the information I need? The Salem Project’s lack of a comprehensive search tool is a good example of what I mean – the information is all there, but it takes more work than should be necessary to extract it. A more inclusive method of collecting the available data should be provided.
What should be off limits?
As the web ages, many sites are being left behind in design and functionality upgrades. I don’t think it’s fair to judge a site by contemporary standards, but all sites should retain some degree of usability, even the very old ones like the Salem Witch Archive project. It’s reasonable to assume that not all projects have the budget or resources to update, even once every few years. We should simply be glad the resources are still available!
What’s important or trivial?
If I had to choose a single criterion on which to judge any DH tool or project, it would be overall ease of use. Even bad design can be overcome by easy-to-use tools. If a tool is not useful, it’s a waste of resources and time for both its developers and its users. It’s also important to maintain a scholarly level of work, upholding scholarly standards for documentation and peer review.