Siva Vaidhyanathan on the value of public research

A great statement today in Slate by Siva Vaidhyanathan about the value of public research:

We Americans take these institutions for granted. We assume that private enterprise generates what is so casually called “innovation” all by itself. It does not. The Web browser you are using to read this essay was invented at the University of Illinois at Urbana-Champaign. The code that makes this page possible was invented at a publicly funded academic research center in Switzerland. That search engine you use many times a day, Google, was made possible by a grant from the National Science Foundation to support Stanford University. You didn’t get polio in your youth because of research done in the early 1950s at Case Western Reserve. California wine is better because of the University of California at Davis. Hollywood movies are better because of UCLA. And your milk was not spoiled this morning because of work done at the University of Wisconsin at Madison.

These things did not just happen because someone saw a market opportunity and investors and inventors rushed off to meet it. That’s what happens in business-school textbooks. In the real world, we roll along, healthy and strong, in the richest nation in the world because some very wise people decided decades ago to invest in institutions that serve no obvious short-term purpose. The results of the work we do can take decades to matter—if at all. Most of what we do fails. Some succeeds. The system is terribly inefficient. And it’s supposed to be that way.

Along the way, we share some time and energy with brilliant and ambitious young people from around the world.

It’s important to realise that this is a selective list. Other things generated in whole or in part by publicly funded researchers and institutions include Unicode and XML.

Can anybody think of others?


“There’s no Next about it”: Stanley Fish, William Pannapacker, and the Digital Humanities as paradiscipline

In a posting to his blog at the Chronicle of Higher Education, William Pannapacker identified the Digital Humanities as an emerging trend at the 2009 Modern Language Association Convention.

Amid all the doom and gloom of the 2009 MLA Convention, one field seems to be alive and well: the digital humanities. More than that: Among all the contending subfields, the digital humanities seem like the first “next big thing” in a long time, because the implications of digital technology affect every field.

I think we are now realizing that resistance is futile. One convention attendee complained that this MLA seems more like a conference on technology than one on literature. I saw the complaint on Twitter.

The following year, he was able to say the discipline had arrived.

The digital humanities are not some flashy new theory that might go out of fashion. At this point, the digital humanities are The Thing. There’s no Next about it. And it won’t be long until the digital humanities are, quite simply, “the humanities.”

As Pannapacker noted here and in yet another posting on the topic, these observations were met with some unease in the discipline. Some resented the perceived implication that the digital humanities were new; others were concerned about his observation that the field was beginning to take on the trappings of previous trendy topics, most notoriously the cliquishness and focus on exclusivity thought to be characteristic of “Big Theory.” Read the rest of this entry »


Fixing a problem with broken stylesheets in OJS 2.3.6

In recent days, we have encountered an issue at Digital Studies/Le champ numérique that has broken the display of a number of our articles.

The symptom is that the article breadcrumb and menu bar appear below rather than beside the right navigation bar, as illustrated below.

[Screen shot illustrating the layout problem: the article on the left shows the broken style; the article on the right has had the problem corrected.]

After some investigation, we narrowed the problem down to an issue with how OJS handles HTML-encoded articles. Read the rest of this entry »
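For anyone who hits the same symptom before reading the full entry, one quick thing to check (a hypothetical diagnostic sketch of my own, not the solution described in the rest of the post) is whether the HTML-encoded article itself contains unbalanced block-level tags; a single unclosed container in the article markup is enough to pull the breadcrumb and menu bar out of a float-based journal layout in exactly this way:

    # Hypothetical diagnostic, not the actual fix: check an HTML-encoded
    # article for unbalanced block-level tags, since one stray unclosed
    # <div> can break the surrounding journal layout.
    from html.parser import HTMLParser

    class TagBalanceChecker(HTMLParser):
        BLOCK_TAGS = {"div", "table", "ul", "ol"}

        def __init__(self):
            super().__init__()
            self.depth = {tag: 0 for tag in self.BLOCK_TAGS}

        def handle_starttag(self, tag, attrs):
            if tag in self.BLOCK_TAGS:
                self.depth[tag] += 1

        def handle_endtag(self, tag):
            if tag in self.BLOCK_TAGS:
                self.depth[tag] -= 1

    def unbalanced_tags(path):
        checker = TagBalanceChecker()
        with open(path, encoding="utf-8") as f:
            checker.feed(f.read())
        # Any non-zero count means an opening tag was never closed (or vice versa).
        return {tag: n for tag, n in checker.depth.items() if n != 0}

    if __name__ == "__main__":
        import sys
        print(unbalanced_tags(sys.argv[1]) or "All checked tags are balanced.")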


Chasing the (long) tail: Was the Readability subscription model really a failed experiment?

More on the changing business models of scholarly publishing (see my earlier entries, “Won’t get fooled again: Why is there no iTunes for scholarly publishing” and “Does Project Muse help or harm the scholarly community…”).

Readability is an app developer whose main product is software for improving the long-form online reading experience. I’ve not used it (yet), but it seems to involve a combination of applying an optimised style to existing content and suppressing the surrounding ads and navigation clutter. (Contrary to the comment feed on their blog, Readability doesn’t seem to extract and resell content without producers’ permission: it is more like a specialised browser plugin for viewing content you already have access to.)

The original business model appears to have involved collecting subscription money ($5/month) from users who wanted a better reading experience and then distributing that money (minus a commission, I imagine) to the publishers who registered with them. There are aspects of this you might quibble with: for example, how did they expect to reach the owners of every site their user base tried to read through the app? But on the whole it seems like an interesting and innovative idea: extracting some part of the capital required to produce content by selling a better experience of its consumption. And since I’d have thought they didn’t really need to offer to share the money with the publishers (given that they were only reformatting the content), this is a business model that actually seems to have been constructive rather than purely exploitative.

And apparently one that doesn’t work. Read the rest of this entry »
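For concreteness, the arithmetic of that model looks something like the sketch below. Only the $5/month figure comes from Readability; the 30% commission and the proportional split by reading share are my assumptions for illustration.

    # Illustrative sketch of the subscription model: each subscriber's
    # monthly fee, minus a commission, is divided among registered
    # publishers in proportion to their share of that subscriber's reading.
    # The commission rate and the proportional split are assumptions.
    MONTHLY_FEE = 5.00   # dollars per month, Readability's advertised price
    COMMISSION = 0.30    # assumed share kept by the service

    def monthly_payouts(reading_shares):
        """reading_shares maps publisher -> fraction of the subscriber's reading."""
        pool = MONTHLY_FEE * (1 - COMMISSION)
        return {pub: round(pool * share, 2)
                for pub, share in reading_shares.items()}

    # One subscriber who read three registered publishers this month:
    print(monthly_payouts({"longform_weekly": 0.5,
                           "science_blog": 0.3,
                           "lit_journal": 0.2}))
    # {'longform_weekly': 1.75, 'science_blog': 1.05, 'lit_journal': 0.7}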


Schools of Schools of “Humanities Computing”

When I went to Yale to begin my PhD in 1989, the English department—or perhaps just the graduate students, a group that tends to feel these things more strongly—was mourning the decline of the “Yale School”. New Historicism was the increasingly dominant critical approach at the time, and while it seemed that all the Deconstructionists had been at Yale, none of the major New Historicists were—Stephen Greenblatt got his PhD (and B.A. and M.A.) from Yale, but, like Michel Foucault, seems never to have held a faculty appointment there. I was thinking of this sense of “school” yesterday, while I was attending the University of Alberta’s Humanities Computing Graduate School conference. Read the rest of this entry »

Byte me: Technological Education and the Humanities

I recently had a discussion with the head of a humanities organisation who wanted to move a website. The website was written using ColdFusion, a proprietary suite of server-based software that developers use to write and publish interactive websites (Adobe n.d.). After some discussion of the pros and cons of moving the site, we turned to the question of the software.
Head of Humanities Organisation: We’d also like to change the software.
Me: I’m not sure that is wise unless you really have to: it will mean hiring somebody to port everything, and you are likely to introduce new problems.
Head of Humanities Organisation: But I don’t have ColdFusion on my computer.
Me: ColdFusion is software that runs on a server. You don’t need it on your computer; you just need it on the server. Your techies handle that.
Head of Humanities Organisation: Yes, but I use a Mac.
Read the rest of this entry »
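The confusion in that exchange is common enough to be worth spelling out: server-side software runs where the site is hosted and sends the visitor finished HTML, so the reader’s own machine, Mac or otherwise, never needs it. A minimal Python stand-in for what any server-side page generator (ColdFusion included) does:

    # Minimal stand-in for a server-side page generator: the page is
    # assembled on the server, and the visitor's browser receives only
    # finished HTML. Nothing here ever needs to be installed on the
    # reader's computer.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import date

    class PageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # All of the "dynamic" work happens here, on the server.
            html = f"<html><body><p>Page generated on {date.today()}</p></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(html.encode("utf-8"))

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), PageHandler).serve_forever()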

Digital Plagiarism

I have recently started using plagiarism detection software: not so much for the ability to detect plagiarism as for the essay submission- and grading-management capabilities it offered. But the system was, of course, originally designed to detect plagiarism, which means that I too can use it to check my students’ originality. To the extent that one semester’s data is a sufficient sample, my preliminary conclusions are that the problem of plagiarism, at least in my classes, seems to be more or less as insignificant as I thought it was when I graded by hand, and that my old method of discovering plagiarism (looking into things when a paper didn’t seem quite “right”) seemed to work.1 This past semester, I caught two people plagiarising. But neither of them had a particularly high unoriginality score: in both cases, I discovered the plagiarism after something in their essays seemed strange to me and caused me to go through the originality reports Turnitin provides on each essay more carefully. I then went through the reports for every essay submitted by that class (a total of almost 200) to see if I had missed any essays that Turnitin’s reports suggested might be plagiarised. None of the others showed the kind of suspicious content that had led me to suspect the two I caught. So for me, at least, the “sniff test” remains apparently reliable. Read the rest of this entry »
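The triage this experience leaves me with can be summarised in a few lines (the report format below is invented for illustration; Turnitin’s actual exports differ): treat the unoriginality score as one signal among several, and never let a low score clear an essay that fails the sniff test.

    # Sketch of the triage described above, with an invented report format
    # (Turnitin's actual exports differ). The point: a low unoriginality
    # score does not clear an essay, so manually flagged papers are always
    # reviewed alongside the high scorers.
    def essays_to_review(reports, score_threshold=40):
        """reports: list of {'student': str, 'score': int, 'flagged': bool}."""
        return [r for r in reports
                if r["flagged"] or r["score"] >= score_threshold]

    reports = [
        {"student": "A", "score": 12, "flagged": True},   # caught despite a low score
        {"student": "B", "score": 55, "flagged": False},  # high score: worth checking
        {"student": "C", "score": 8,  "flagged": False},  # passes the sniff test
    ]
    print(essays_to_review(reports))  # students A and B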

Back to the future: What digital editors can learn from print editorial practice.

The last decade or so has proven to be a heady time for editors of digital editions. With the maturation of the digital medium and its application to an ever-increasing variety of cultural objects, digital scholars have been led to consider their theory and practice in fundamental terms (for a recent collection of essays, see Burnard, O’Keeffe, and Unsworth 2006). The questions they have asked have ranged from the nature of the editorial enterprise to issues of academic economics and politics; from problems of textual theory to questions of mise-en-page and navigation: What is an Edition? What kinds of objects can it contain? How should it be used? Must it be critical? Must it have a reading text? How should it be organised and displayed? Can intellectual responsibility be shared among editors and users? Can it be shared across generations of editors and users? While some of these questions are clearly related to earlier debates in print theory and practice, others involve aspects of the production of editions not relevant to, or largely taken for granted by, previous generations of print-based editors. Read the rest of this entry »

If I were “You”: How Academics Can Stop Worrying and Learn to Love “the Encyclopedia that Anyone Can Edit”

The sense that the participatory web represents a storming of the informational Bastille is shared by many scholars in our dealings with the representative that most closely touches on our professional lives—the Wikipedia, “the encyclopedia that anyone can edit”. University instructors (and even whole departments) commonly forbid students from citing the Wikipedia in their work (Fung 2007). Praising it on an academic listserv is still a reliable way of provoking a fight. Wikipedia founder Jimmy Wales’s suggestion that college students should not cite encyclopedias, including his own, as a source in their work is gleefully misrepresented in academic trade magazines and blogs (e.g. Wired Campus 2006). Read the rest of this entry »

The Ghost in the Machine: Revisiting an Old Model for the Dynamic Generation of Digital Editions

In 1998, a few months into the preparation of my electronic edition of the Old English poem Cædmon’s Hymn (O’Donnell forthcoming), I published a brief prospectus on the “editorial method” I intended to follow in my future work (O’Donnell 1998). Less a true editorial method than a proposed workflow and list of specifications, the prospectus called for the development of an interactive edition-processor by which “users will […] be able to generate mediated (‘critical’) texts on the fly by choosing the editorial approach which best suits their individual research or study needs” (O’Donnell 1998, ¶ 1). Read the rest of this entry »
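The core idea is easy to sketch: the same variant apparatus can yield different reading texts depending on the editorial approach the user selects. The toy apparatus, witness sigla, and policy names below are my own illustration, not the specification from the 1998 prospectus:

    # Toy sketch of an edition-processor: one variant apparatus, several
    # reading texts, chosen on the fly by editorial policy. The apparatus,
    # witness sigla, and policy names are invented for illustration.
    from collections import Counter

    # position in the text -> {witness siglum: reading}
    APPARATUS = {
        1: {"M": "Nu", "L": "Nu", "T": "Nv"},
        2: {"M": "sculon", "L": "scylun", "T": "sculon"},
        3: {"M": "herigean", "L": "hergan", "T": "herigean"},
    }

    def generate_text(apparatus, approach, base_witness="M"):
        if approach == "diplomatic":
            # Follow a single witness exactly.
            readings = [site[base_witness] for site in apparatus.values()]
        elif approach == "majority":
            # Print the reading attested by the most witnesses.
            readings = [Counter(site.values()).most_common(1)[0][0]
                        for site in apparatus.values()]
        else:
            raise ValueError(f"unknown editorial approach: {approach}")
        return " ".join(readings)

    print(generate_text(APPARATUS, "diplomatic", base_witness="L"))  # Nu scylun hergan
    print(generate_text(APPARATUS, "majority"))                      # Nu sculon herigean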

Why should I write for your Wiki? Towards an economics of collaborative scholarship.

I’d like to begin today by telling you the story of how I came to write this paper. Ever since I was in high school, I have used a process called “constructive procrastination” to get things done. This system involves collecting a bunch of projects due at various times and then avoiding work on the one that is due right now by finishing something else instead. 


Read the rest of this entry »


O Captain! My Captain! Using Technology to Guide Readers Through an Electronic Edition

Most theoretical discussions of electronic editing attribute two main advantages to the digital medium over print: interactivity and the ability to transcend the physical limitations of the page. From a production standpoint, printed books are static, linearly organised, and physically limited. With a few expensive or unwieldy exceptions, their content is bound in a fixed, unchangeable order and required to fit on standard-sized, two-dimensional pages. Readers cannot customise the physical order in which information is presented to them, and authors are restricted to the kind of material that can be reproduced within the physical confines of the printed page. Read the rest of this entry »

The Doomsday Machine, or, “If you build it, will they still come ten years from now?”: What Medievalists working in digital media can do to ensure the longevity of their research

It is, perhaps, the first urban myth of humanities computing: the Case of the Unreadable Doomsday Machine. In 1986, in celebration of the 900th anniversary of William the Conqueror’s original survey of his British territories, the British Broadcasting Corporation (BBC) commissioned a mammoth £2.5 million electronic successor to the Domesday Book. Stored on two 12-inch video laser discs and containing thousands of photographs, maps, texts, and moving images, the Domesday Project was intended to provide a high-tech picture of life in late 20th-century Great Britain. The project’s content was reproduced in an innovative early virtual-reality environment and engineered using some of the most advanced technology of its day, including specially designed computers, software, and laser disc readers (Finney 1986). Read the rest of this entry »

Disciplinary impact and technological obsolescence in digital medieval studies

First posted December 15, 2006, at http://people.uleth.ca/~daniel.odonnell/Research/disciplinary-impact-and-technological-obsolescence-in-digital-medieval-studies. Published in A Companion to Digital Literary Studies, ed. Ray Siemens and Susan Schreibman (Blackwell, 2007).

In May 2004, I attended a lecture by Elizabeth Solopova at a workshop at the University of Calgary on the past and present of digital editions of medieval works.1 The lecture looked at various approaches to the digitisation of medieval literary texts and discussed a representative sample of the most significant digital editions of English medieval works then available: the Wife of Bath’s Prologue from the Canterbury Tales Project (Robinson and Blake 1996), Murray McGillivray’s Book of the Duchess (McGillivray 1997), Kevin Kiernan’s Electronic Beowulf (Kiernan 1999), and the first volume of the Piers Plowman Electronic Archive (Adams et al. 2000). Solopova herself is an experienced digital scholar, and the editions she was discussing had been produced by several of the most prominent digital editors then active. The result was a master class in humanities computing: an in-depth look at mark-up, imaging, navigation and interface design, and editorial practice in four exemplary editions.

Read the rest of this entry »

