The Pequod
Dr Alistair Brown | Associate lecturer in English Literature; researching video games and literature


New Essay

Through exploring the psychopathology of Capgras syndrome, in which a patient mistakes a loved one for an imposter, The Echo Maker offers a sustained meditation on the ways in which we project our own problems onto other people. As a reflection on the mysteries of consciousness, the novel offers some interesting if not especially new insights into the fuzzy boundaries between scientific and literary interpretations of the mind.

Reflections on English: Shared Futures 2017

Tuesday, July 11, 2017

In the Civic Centre in Newcastle, where English Shared Futures was based, there's an enormous model of the city. It's a utopian panorama of planning: logical lines of highways, smooth boxes of buildings, aspirational glass structures rising at the centre. At the level of the streets there is no traffic, no litter, no congested crowds. It's an adaptable analogy for this enormous, 600-delegate, 100-panel strong conference surveying the field of English literature, language and creative writing.

On the ground of our universities, life feels tough. We are mired in a traffic jam of overwork, while the radio plays a shock jock conspiracy theory about REF and TEF, the employability of arts and humanities graduates, and the returns on their tuition fees. Underlying it all is a permanent anxiety about the value of being paid to read books (if you're lucky enough to be a permanent academic), and the literal value of your next pay cheque (if you're one of the precariat).

English Shared Futures was a chance to rise above this day-to-day existence, and to take a panoramic overview of the discipline as a whole. From this perspective, the virtues of what we do are clearer; while there were certainly panels diagnosing the challenges facing early career scholars or the difficulties of convincing government of the virtues of English, the overwhelming feeling of the conference was one of celebration and confidence.

English Shared Futures made me reflect that although we may grumble when we're required to justify the value of our research and to show how the established discipline of English engages with the brave new digital and economic world around us, we've actually adapted very well to this new landscape - at least if the panels I went to (which were only a fraction of what was on offer, and admittedly skewed towards digital humanities stuff) were anything to go by. Among other things I discovered:

Our discipline goes under the label of English and has its origins in a nineteenth-century vision of the virtues of literature as a civilising force. But just as a city evolves beyond the planner's capacity to contain or model it, so in 2017 'English' has developed along all sorts of unexpected routes, absorbing other fields and disciplinary territories along the way. We do some remarkable and diverse things, and we continue to contribute to the society that houses and pays for us. 

Huge thanks to all the organisers, helpers, speakers and coffee-break conversationalists for helping me - and judging by Twitter many others as well - to recover this more utopian perspective.


Higher Education is a Market (Except When it Isn't)

Wednesday, July 05, 2017

This morning the Institute for Fiscal Studies launched a report looking at the impact of higher university tuition fees. The headline was that students will graduate with more than £50,000 of debt, but the Director of the IFS, Paul Johnson, also tweeted out the following 'highlight':

Note that when Johnson says 'increased subsidies' he doesn't necessarily mean subsidies from government in a direct sense, since the costs of many degrees are covered solely by tuition fees, which may be paid by the government in the first instance but, in an ideal world, are then repaid with interest.

There's something perverse going on here. This is a market think-tank concerned about the fact that one group of disciplines in higher education - which, remember, we're continually told is a marketplace with student-consumers shopping for degrees - has found a way to deliver one product (arts and humanities) at low cost and high profit margins, to consumers who want to buy it. And government has contrived a system whereby the profits from these cheap-to-deliver subjects can be creamed off by institutions to subsidize the more expensive and allegedly beneficial STEM ones. Cheers, arts undergrads.

Some churlish folk might complain that this canny system depends on duping the student-consumer. Since arts graduates are likely to earn less than their counterparts in vocational subjects, they get a double whammy: they spend money that pays for their science counterparts, while they receive a lower return as they enter typically less well-paid jobs (to be fair to the IFS, they have a valid complaint that the taxpayer will therefore end up 'subsidising' them when their loans are written off further down the line).

But hey, from an institutional point of view this too is a positive thing, since (to use a ballpark example without the figures to hand; feel free to supply) if arts and humanities students cost half as much to educate but are not paid half as badly once employed, there's an argument that the return on investment - again from the university perspective - from educating that particular student is reasonable. A cheap-to-educate student at least gets some employability bonus, even if not as much as the very-expensive-to-educate student.

And of course we as a society also need doctors and engineers. If universities can educate arts and humanities undergraduates cheaply to support the expensive medical, engineering and similar degrees, all the better.

Of course, none of these arguments really seem quite to satisfy, do they? They are attempts to justify the structurally unjustifiable. Which is precisely my point. When a free-market think tank complains that the system is broken because it's working too much like a market for the institutional supermarkets, it is almost as if - call me crazy - Higher Education should not be treated as a market at all in the first place.


When Publishers Own the (Dead) Author on Facebook

Tuesday, February 14, 2017

An interesting phenomenon I've just spotted on Facebook: major authors like Charlotte Brontë or Charles Dickens have their own verified pages - that is to say, pages confirmed by Facebook with the little blue tick as being "an authentic page for this public figure."

But who "owns" these pages? Follow the links from the About section, and you'll end up at Penguin Random House's own website, where naturally you can buy the author's books. Evidently these pages are managed not by some altruistic-minded eager reader, but by the publishing conglomerate.

The content of these pages seems generally good: there is lots of community discussion and informative link sharing. It's not just a stream of posts inviting you to buy the latest Random House edition. 

Nevertheless, these publications do feature heavily - though since many of them are by imprints such as Vintage, which are ultimately owned by Random House, it would be easy to miss that the page owner is solely promoting its own works. It's also questionable that pages such as the Jane Austen one are badged as being "maintained by Jane Austen's U.S. & U.K. publisher Vintage Books" when, of course, Austen has many US and UK publishers, and indeed her works are available free via the likes of Gutenberg.

The way in which Facebook presents such pages as being the authentic location - "authentic" carrying the whiff of objectivity - raises ethical questions. Is it right that a publisher can colonise the long-dead author, and piggyback on his or her identity as a sales route? If readers are landing on these pages as the top results on Facebook (which most would do, as these are the unique, verified accounts), are they missing word of interesting books released by competing publishers? How are the news feeds being steered so that what looks to be a fan site actually ties in with a wider publishing (and economic) agenda?

Of course, I've no objection to publishers using Facebook to promote their activities. Nor do I object to publishers hosting fan sites for authors. But hiding behind the persona of the author, curating his or her historical identity in the twenty-first century when the ultimate aim is presumably to sell more texts, makes me uneasy. Is anyone with me on this?


Can we imagine a Statcheck for the arts and humanities?

Wednesday, February 08, 2017

Here's a wondering for a Wednesday. Can we imagine having software tools in the arts and humanities that do some of the dirty work of fact and data checking ahead of peer review?

The inspiration for this comes from the stir that has been created recently in the sciences - especially experimental psychology - by a tool called Statcheck. Experimental psychology often depends upon applying p-value assessments to data, to determine whether findings are statistically significant or simply the result of experimental bias or background noise. Statcheck was a program devised at Tilburg University, which automatically scanned a massive set of 250,000 published papers, recalculated the p-values within them, and checked whether the researchers had made errors in their original calculations.
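
The core of such a check is straightforward to sketch. As a rough illustration (not Statcheck's actual code, and assuming the simplest case of a reported two-tailed z-test), the recalculation might look like this in Python:

```python
import math

def two_tailed_p(z: float) -> float:
    """Recompute the two-tailed p-value implied by a reported z statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def consistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Check whether a reported p-value matches the recomputed one,
    within a rounding tolerance (the tolerance here is my assumption)."""
    return abs(two_tailed_p(z) - reported_p) <= tol

# A paper reporting z = 1.96, p = .05 is internally consistent;
# the same z alongside a reported p = .01 would be flagged for review.
```

Run over a corpus of papers whose test statistics and p-values can be extracted from the text, even a check this simple surfaces the inconsistencies described below.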

The finding was that around half of all published papers have at least one calculation error within them. That's not to say that half of all published papers were fundamentally wrong, such that their findings have to be thrown out of the window entirely. Nevertheless, it does highlight significant deficiencies in the peer review and editorial process, where such errors should be picked up. And while one miscalculation in a series may not be in itself significant, a number of miscalculations might spur suspicion as to the credibility of the findings more generally. Miscalculation also offers a glimpse into the mindset of the paper's author(s) and the processes that went into its production: have calculations been produced by one author alone, or by two authors independently to cross-check? Were calculations done on statistical software or by hand? And, most seriously, do miscalculations point to attempts to manipulate data to support a preconceived outcome?

In a time-pressured academic world, peer reviewers often take shortcuts. Among the many reasons peer review is flawed as a gate-keeping mechanism for excellence, we know that even though reviews are technically blind, reviewers often look for an implicit feeling about the unknown author's overall trustworthiness rather than scrutinising every single feature of the individual article in detail. Beyond exposing problems with the articles themselves, this is a revelation about peer review that may emerge from Statcheck. In the arts and humanities, peer review should ideally be based on an assessment of the clarity and reliability with which an author advances his or her claims, rather than on whether we agree with the claims themselves. To make an analogy with philosophical logic, we're looking for validity, not soundness. One of the basic functions of peer review is to get a feel for the author's argument as being based on legitimate reasoning, even if the outcome of that argument is not one with which we concur. In assessing this, deficiencies in basic details may point to deeper structural or logical flaws in the author's thought processes.

The existence of Statcheck got me thinking about whether in the arts and humanities, and English in particular, our published papers depend upon similar basic mechanisms like the p-value test and, if they do, whether the author's accuracy in using those mechanisms could be checked automatically as a prelude to peer review. Of course, even in the age of the digital humanities, arts and humanities still don't tend to deal in statistical data but rather in 'soft' rhetoric and argumentation. Still, are there any rough equivalents? And if so, could we envisage software capable of running papers through pre-publication tests (just as Statcheck now does) to get a general sense of the care authors have paid to the 'data' on which their argument depends, which might then cue peer reviewers or editors to pay closer attention to some of the deeper assumptions and the article's overall credibility?

Here are some very hypothetical, testing-the-waters assumptions about the sorts of quantifiable signals it might be useful to pick up programmatically (all of which we would like to think peer reviewers would notice anyway - but the lesson of Statcheck in experimental psychology suggests otherwise):
  • Quotation forms the bedrock of argumentation in the arts and humanities. As I constantly tell my students, if you have not quoted a primary or secondary text with absolute precision, how am I supposed to trust your arguments that depend upon that quotation? If someone is trying to persuade me about their reading of the sprung rhythm of a Gerard Manley Hopkins poem, but they have mistyped a key word in such a way that the meter is 'broken' in the quotation, this hardly looks good. A software tool that automatically checks the accuracy of quotations within papers, and highlights errors, would in many ways be an inversion of plagiarism-testing software: here we would be actively looking for a match between the quotation and the source.
  • Similar to the above, spelling of titles of texts and authors' names.
  • Referencing and citation are clearly important, and checking whether references - even or especially in a first draft - have been accurately compiled may highlight flaws in the author's record keeping.
  • Historical dates may provide another clue as to the author's own processes for writing and his or her strictness in self-verifying. In presenting a date in a paper, we may often be making a case for literary lineage, tradition, or the links between a text and its contexts. It matters that we get dates precisely right. In not double-checking every date (for example, because they think they know it off the top of their head), authors have missed a key step in the process. Erroneous dates may be a clue to problems in arguments that depend upon historical contingency.
  • If we're looking at novels in particular, there are key markers of place and character, and relationality within these, which need to be rendered precisely. To describe Isabella Linton as mother of Cathy Linton in Wuthering Heights or to write Thrushcross Grange when meaning the Heights might be easy mistakes. But these may also be symptomatic of an issue with the author's close (re)reading of the text. It should in principle be possible to apply computational stylistics to verify that an author really means who or what they refer to in the context of their writing.
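
The first of these checks is the easiest to imagine concretely. As a purely hypothetical sketch (the fuzzy-matching approach and the 0.95 threshold are my assumptions, using Python's standard difflib), a quotation checker might flag quotations that do not appear near-verbatim in a machine-readable source text:

```python
import difflib

def quotation_matches(quotation: str, source_text: str,
                      threshold: float = 0.95) -> bool:
    """Check that a quotation appears (near-)verbatim in the source text.

    An inversion of plagiarism detection: here a high match is the
    desired outcome, and a low match is what gets flagged.
    """
    matcher = difflib.SequenceMatcher(None, quotation, source_text,
                                      autojunk=False)
    match = matcher.find_longest_match(0, len(quotation), 0, len(source_text))
    # Proportion of the quotation found as one contiguous run in the source
    return match.size / len(quotation) >= threshold

# A verbatim quotation passes; a mistyped key word shortens the matched run
hopkins = "The world is charged with the grandeur of God."
quotation_matches("charged with the grandeur of God", hopkins)   # accurate
quotation_matches("charged with the grandure of God", hopkins)   # flagged
```

A real tool would of course need to locate the right edition of the source in the first place, which is a far harder problem than the string comparison itself.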
I'm sure that there are more possibilities to add to this list - but I'm not sure that even if (and it's a big if for a host of technical reasons) we could devise programs to automatically parse papers for accuracy in areas like this it would be ultimately beneficial. Nevertheless, if peer review is a legacy mechanism for a pre-digital age, what harm in a little futuristic speculation now and again? 

And, since I'm feeling cheeky, imagine if we could do a Statcheck on a whole mass of Arts and Humanities articles. Wouldn't it be deliciously gossipy to see just how many big name scholars make basic errors?


The content of this website is Copyright © 2009 under a Creative Commons Licence. One term of this copyright policy is that plagiarism is theft. If using information from this website in your own work, please ensure that you use the correct citation.
