The Ostwald Test: Historiography for Dummies
These ideas aren’t particularly original to me, but I’ll pretend like they are by giving them a name.
How good (how rigorous, how scholarly, how systematic, how analytical, how serious, how developed…) is the historiography on any particular topic? The sophistication of a research field isn’t necessarily the same thing as being able to tell whether any given historiographical argument is true or not, but there’s a pretty high correlation between the two.
There are so many ways to test this idea of historiographical sophistication – I’ll assume everyone knows to ask whether any non-published, i.e. archival, sources are used (unless you’re only looking at the media or the public sphere). But here are a few really simple and easy rules of thumb that I rely upon. An individual work should, ideally, pass at least two (probably three) of the following criteria. It’s much more damning, however, to find multiple works within a field that fail these tests. That’s a sign of some systemic problems within the field.
Step 1: Look at the subject of a work (a book, an article…) on the topic of interest.
Step 2: Look at the other sources that secondary source uses (e.g. in the footnotes and bibliography).
Step 3: Ask the following four questions:
I. Citations’ Age
Do the sources incorporate recent thinking on the subject? How old is your historiography, i.e. how old are your citations? Is it past its Best Before date? Examples that have jumped out at me over my career:
- Scholars in the 1990s or later still basing their understanding of Vaubanian siegecraft upon Guerlac’s analysis from the 1940s, or Blomfield’s 1930s-era analysis. Not a good sign.
- Scholars in the 1990s or later relying upon hagiographical treatments of the Duke of Marlborough that are 70+ years old. Not a good sign.
- Scholars in the 1990s or later relying upon Wright’s 1930s-era discussion of the customs of siege warfare. Not a good sign.
In short, if no significant reassessment of a field has occurred in 70+ years, that’s a bad sign.
The diagnosticity of the first question can be improved by asking the next question as well:
II. Pages-to-Coverage Ratio
Is the coverage adequately deep or pathetically shallow? For the sources cited in a secondary source (and for the secondary source itself), what is the ratio of page length to number of years/countries/topics covered? Is it 100 pages on one decade of a single country’s ‘life’, or 100 pages covering an entire continent’s 1000-year ‘age’? If the basis for a discussion in the modern literature comes purely from topical “art of war” works that claim to describe a continent’s worth of conduct across three centuries in 25 pages, that’s a bad sign.
Not every source needs to be a monograph, but if there are almost no recent monographs cited on a subject, that’s a bad sign. Either nobody has looked at it closely, or somebody has but the author you’re reading hasn’t noticed. Either way, bad sign.
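The ratio in question II lends itself to a back-of-the-envelope calculation. Here is a minimal sketch in Python; the function name and the example figures are my own illustration of the post’s two scenarios, not an established metric:

```python
# Illustrative sketch of the pages-to-coverage ratio (question II).
# The function name and thresholds are my own assumptions.

def pages_to_coverage_ratio(pages: int, years: int, countries: int = 1) -> float:
    """Pages of analysis per (year x country) of claimed coverage."""
    return pages / (years * countries)

# 100 pages on one decade of a single country's 'life':
deep = pages_to_coverage_ratio(pages=100, years=10)

# 25 pages claiming to describe, say, ten countries across three centuries:
shallow = pages_to_coverage_ratio(pages=25, years=300, countries=10)

print(f"{deep:.1f} vs {shallow:.3f} pages per year-country")
```

The two cases differ by three orders of magnitude, which is the point: the ratio makes “pathetically shallow” visible at a glance.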
We can add a further corollary:
III. Argument Coverage-to-Sources Coverage Ratio
Does the argument match the sources? Compare the geographical and chronological coverage claimed in an author’s argument with what their sources actually cover. You can probably cut their title a little slack. Does someone covering western European siege warfare in the War of the Spanish Succession claim that this equally applies to siegecraft in 1500 Poland? If so, that’s a bad sign.
One final, should-be-obvious, rule of thumb:
IV. Language Correspondence
Is the author bothering to see what his subjects actually thought? Does the language of the citations match the language of the people under study? Do the languages used in the author’s sources coincide with the languages spoken where the events took place? If, for example, you’re trying to explain why the Dutch did something, it would help if you could point to Dutch sources.
Four simple questions that will tell you a lot about the state of your historiography. In active (which I think we can use as a reasonable proxy for rigorous) subfields, a majority of academic works will almost always pass these four tests. I’d really love to see them used (as summary statistics) in book reviews – it would tell us a lot.
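If a reviewer did want these as summary statistics, a minimal tally might look like the sketch below. The data structure, field names, and pass thresholds are all my own assumptions, chosen only to make the four questions mechanical:

```python
# Hypothetical scorecard for the four tests; every field name and
# threshold here is an illustrative assumption, not a published standard.
from dataclasses import dataclass
from statistics import median

@dataclass
class Work:
    year_published: int
    citation_years: list[int]      # publication years of sources cited
    pages: int
    years_covered: int             # chronological span of the argument
    claimed_span: tuple[int, int]  # (start, end) years the argument claims
    sources_span: tuple[int, int]  # (start, end) years the sources actually cover
    subject_languages: set[str]    # languages of the people under study
    source_languages: set[str]     # languages of the sources cited

def ostwald_score(w: Work) -> dict[str, bool]:
    citation_age = w.year_published - median(w.citation_years)
    return {
        "I_citation_age": citation_age < 70,                   # no 70+-year-old core literature
        "II_depth": w.pages / max(w.years_covered, 1) >= 1.0,  # roughly a page per year, minimum
        "III_match": (w.sources_span[0] <= w.claimed_span[0]
                      and w.sources_span[1] >= w.claimed_span[1]),
        "IV_language": w.subject_languages <= w.source_languages,
    }

# A made-up 2015 "art of war" survey resting on 1930s literature:
w = Work(year_published=2015, citation_years=[1930, 1938, 1944],
         pages=25, years_covered=300, claimed_span=(1500, 1800),
         sources_span=(1660, 1715), subject_languages={"dutch", "french"},
         source_languages={"english"})
print(sum(ostwald_score(w).values()))  # prints 0 – fails all four
```

A work should pass at least two, probably three, of the four; the interesting reviewer statistic would be how many works in a subfield score zero or one.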
So try it for yourself. And feel free to recommend any additional tests in the comments.
The Ostwald Test. Kinda like the Turing Test, but with no chance of research funding.