Archive | May 2014

Bouchain back from the past

Has the 1711 siege of Bouchain faded from view? The town itself certainly hasn’t stood the test of time well, if Google Ngram Viewer is to be believed.

[Google Ngram Viewer chart for “Bouchain”]

This should be surprising, since the August attack on Bouchain was one of the more distinctive sieges in the Low Countries. Though the town itself was weak, its investment required the Allied troops to ferret the French out of the boggy trenches sheltering the town before they could carry on with their own trench attacks.

Attacks on lower and upper town of Bouchain, 1711

This impressive maneuver would later be commemorated in a well-known portrait of Marlborough and his engineer/quartermaster John Armstrong:

[Portrait of John Churchill, 1st Duke of Marlborough, with John Armstrong, from the National Portrait Gallery]

Bouchain 1711 was also distinctive among the Flanders sieges because the Allied and French commanders disputed whether the garrison had surrendered honorably or as prisoners of war after its capture – spoiler alert: the garrison ended up as prisoners. Then, after the town was in Allied hands, Marlborough’s army was forced to idle nearby for almost a month while its fortifications were repaired. To top it off, the siege was also the last major military operation conducted by the Duke of Marlborough. Attempts by his chaplain (Francis Hare) to describe Bouchain as a masterful siege failed to prevent Churchill’s ouster at the end of the year.

Observation and Relief armies, Bouchain 1711

The town would be recaptured by resurgent French forces under Marshal Villars a year later, in half the time.

If the 1711 attack has faded from view, perhaps that is due to the faded view of the most famous representations of the siege, the three tapestries at Blenheim Palace commemorating the victory at Bouchain. Yet perhaps there’s hope. For as the linked Daily Mail news story (with lots of photos) indicates, one of these faded views of the siege has been cleaned and restored, with its brethren to follow.

While this is good news, even a newly-restored Bouchain Tapestry gives us minimal insight into the siege. The tapestry, like all of the Victory tapestries, provides little more than a stock representation of Marlborough and his entourage on horseback in the standard wooded foreground, with an ornamental border composed of vines and captured arms, and the countryside receding into the distance. Hopefully the restoration will make the background, the actual siege itself, a bit more visible. Now if we could only get close-up photographs of those newly-laundered threads.


Military medicine again

Another entry in the early modern military medicine roundup:

Craig, Stephen. “Sir John Pringle MD, Early Scottish Enlightenment Thought and the Origins of Modern Military Medicine.” Journal for Eighteenth-Century Studies, 2014.
Abstract:
Sir John Pringle published Observations on the Diseases of the Army in April 1752. Over the next two decades it was proclaimed by French, British and German authors as the premier volume on the new subject of military medicine. Although Pringle’s experiences in the War of the Austrian Succession furnished clinical substance, his education at St Andrews and Edinburgh universities, and medical instruction under Hermann Boerhaave, provided the enduring ethical foundation for the theory and practice of the art by medical and line officers.

Automatically parse your inventories

Historians owe a debt of gratitude to those turn-of-the-century archivists, whose nationalistic yearnings led to the creation of dozens of volumes of archive inventories and catalogs. If you do much work on French military history, you likely know the Inventaire sommaire des archives historiques: archives de guerre. This multi-volume inventory provides short summaries of each volume in the A1 correspondence series, more than 3,500 volumes up to the year 1722. That’s a lot to keep track of, as I estimated a while back. So much, in fact, that you’ll likely be going back to that particular well again and again. If so, it might be worth your while to include those details in your note-taking system. Here’s how I did it in DTPO (DEVONthink Pro Office).

The first step in any digitization process is to scan the printed pages and convert the page images into text. In ye olden days you had to do the scanning yourself and then run the images through OCR software. Nowadays it’s more likely that you can download an .epub version from Google Books and then convert it to .txt with the free Calibre software. Worst case, download the PDF version and OCR it yourself.

[Screenshot: the AG A1 inventaire, PDF version]
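If you’d rather script that conversion, Calibre ships with a command-line tool, ebook-convert, which handles the epub-to-txt step. Here’s a minimal Python sketch; the file names are hypothetical, and it assumes ebook-convert is on your PATH:

    import subprocess

    # Convert the downloaded .epub to plain text with Calibre's ebook-convert tool.
    # File names are hypothetical; point them at your own download.
    subprocess.run(
        ["ebook-convert", "inventaire_A1.epub", "inventaire_A1.txt"],
        check=True,  # raise an error if the conversion fails
    )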

Now you find yourself with a text document full of historical minutiae. Import the text (and the PDF, just to be safe) into DTPO. Next, add some delimiters to indicate where the one big file should be separated into many little files. But do it the smart way, with automation. Open the text document in Word (right-click it in DTPO or find it in the Finder), and then start your mass find-replace iterations, assuming there’s a pattern you can use to insert a delimiter between each volume. Maybe each volume is separated by two paragraph marks in a row, in which case you would add a delimiter like ##### at the end of ^p^p. You’ll end up with something like this:

[Screenshot: the AG A1 inventaire text file with ##### delimiters added]

As you can see, the results are a bit on the dirty side – I’ll see if I can get a student worker to clean up the volume numbers since they’re kinda important, but the main text is good enough to yield search results.
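If Word isn’t handy, the same delimiter pass can be scripted. Here’s a rough Python equivalent of the ^p^p find-replace described above, assuming blank lines (i.e. two consecutive paragraph marks) really do separate the volume entries; the file names are hypothetical:

    # Read the converted text (hypothetical file name; assumes Unix-style newlines).
    with open("inventaire_A1.txt", encoding="utf-8") as f:
        text = f.read()

    # Equivalent of replacing ^p^p with ^p^p##### in Word:
    # insert a ##### delimiter at the start of each new volume entry.
    delimited = text.replace("\n\n", "\n\n#####")

    with open("inventaire_A1_delimited.txt", "w", encoding="utf-8") as f:
        f.write(delimited)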

Once you’ve saved the doc in Word and returned to DTPO, you can use the DT forum’s Explode with Delimiter script. Check the resulting hundreds of records – if there are more than a few errors, erase the newly-created documents, fix the problematic delimiters in the original, and reparse. You’ll want to search not only for false positives, i.e. delimiters added where they shouldn’t have been, but also for false negatives, volumes that should have delimiters but were missed. For example, search for ^p^# in Word to check for any new paragraphs starting with a number (assuming the inventory starts each paragraph with the volume number).
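That check can also be scripted rather than eyeballed in Word. Here’s a hedged Python sketch of the ^p^# search, flagging any paragraph in the delimited file that still starts with a bare number and so probably missed its delimiter (the file name is hypothetical):

    # Flag paragraphs that begin with a digit but lack a ##### delimiter:
    # these are likely volume entries the delimiter pass missed.
    with open("inventaire_A1_delimited.txt", encoding="utf-8") as f:
        lines = f.read().splitlines()

    for i, line in enumerate(lines, start=1):
        # A "new paragraph" here means a line preceded by a blank line.
        starts_paragraph = i == 1 or not lines[i - 2].strip()
        if starts_paragraph and line.lstrip()[:1].isdigit():
            print(f"Possible missed volume at line {i}: {line.strip()[:60]}")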

But wait, there’s more. Once you’ve parsed those, you can take it a step further. The summaries aren’t usually at the document level, but there is enough detail that it’s worth parsing the descriptions within each volume. After converting all the parsed txt files to rtf, move each volume’s document to the appropriate provenance tag/group, and then run another parse on that volume’s record, with the semicolon (;) as the delimiter. In the case above, you might also want to parse by the French opening quotation mark, or find-replace the « with ;«. Parsing this volume summary gives you a separate record for each topic within the volume, or at least most of them. With all these new parsed records still selected, convert them to rtf and add the provenance info to the Spotlight Comments. Now you’re ready to assign each parsed topic document to whichever topical groups you want.
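The within-volume parse follows the same pattern. Here’s a rough Python illustration of the idea (the input file is hypothetical; in DTPO the actual splitting is done by the Explode with Delimiter script):

    # Split one volume's summary into per-topic snippets, mirroring the
    # second parse described above: ";" is the delimiter, and the French
    # opening quotation mark « is first turned into a break point as well.
    with open("volume_summary.txt", encoding="utf-8") as f:  # hypothetical file
        summary = f.read()

    normalized = summary.replace("«", ";«")  # the find-replace step
    topics = [t.strip() for t in normalized.split(";") if t.strip()]

    for n, topic in enumerate(topics, start=1):
        print(f"Topic {n}: {topic[:80]}")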

[Screenshot: the parsed AG A1 inventaire records in DTPO]


It’s not perfect, but it’s pretty darn good considering how little effort it requires; maybe an hour or so gets you 500+ volume inventories in separate records. Now you’ve got all those proper nouns in short little documents, ready to search, (auto-)group and sort.

Worth a(nother) look

Yes, I know I spend way too much time thinking about note-taking. What of it?

While reading some online discussions of software-based textual analysis, I came across a link to this excellent article summarizing the weaknesses of full-text search: Jeffrey Beall, “The Weaknesses of Full-Text Searching,” The Journal of Academic Librarianship 34, no. 5 (September 2008): 438-444. Abstract:

This paper provides a theoretical critique of the deficiencies of full-text searching in academic library databases. Because full-text searching relies on matching words in a search query with words in online resources, it is an inefficient method of finding information in a database. This matching fails to retrieve synonyms, and it also retrieves unwanted homonyms. Numerous other problems also make full-text searching an ineffective information retrieval tool. Academic libraries purchase and subscribe to numerous proprietary databases, many of which rely on full-text searching for access and discovery. An understanding of the weaknesses of full-text searching is needed to evaluate the search and discovery capabilities of academic library databases.

If you ever need to explain to your students why keywords and subject headings and indices (indexes) are useful tools, this article is a good place to start.

Full-text search is certainly better than nothing – particularly if you can use fuzzy searching, wildcards, and proximity – but I sometimes wonder if a keyword-only database (a digital index) would still be more helpful than a full-text database, everything else being equal.

Repeat after me: full-text searching must be combined with meta-data in order to search subsets and sort results.