Today’s first Voynich quote was overheard yesterday by Bill Tozier in an Ann Arbor restaurant (I presume?):-

I’m gonna find some fascist architecture!

Hmmm… might this have been that rarest of things, a Cipher Mysteries reader caught in the wild? Better still, might it have been a CM reader happy to ‘fess up? The comments section is ready and waiting for you here. 🙂

Of course, there has already been a theory linking the Voynich Manuscript with Michigan (for the simple reason that there is a theory linking everything with the VMs, if you’re bothered to look long enough). Specifically, Jan Hurych emailed me back in May about the f116v michiton oladabas page, saying:-

I got it – “mich::” stands for Michigan, apparently where the first sunflower came from 🙂

Today’s last Voynich quote (again from an unknown author) popped up on Google just over a month ago, courtesy of Wisam Mohammed:

Discoveries may excite our blood but mysteries sustain our soul. When we’re strong and arrogant, mysteries remind us how little we know of God’s world. And when we are weak and desperate, they only encourage us to believe that anything is possible.

So… who wrote that, then? 🙂

I recently blogged here about the difference between skepticism (which has at its heart both a guarded optimism and a realistic take on the practical difficulties involved in gaining knowledge) and cynicism (which, by contrast, is a denialist position: it says it is safer to believe nothing than to get hurt by believing something that turns out to be incorrect). What I didn’t really go on to say was that I think there is currently rather more cynicism at play in the Voynich research world than is properly healthy – and that perhaps the Wikipedia article simply reflects this critical imbalance.

So here’s my small wish for the day: that Voynich experts should try to use their insightful brains and creative historical imaginations not to construct yet more reasons why existing theories are wrong (which is, let’s face it, about as hard as machine-gunning fish in a barrel), but instead to construct questions they would really like to see answered. By doing this, we can start to map out the edges of our collective knowledge, and get some kind of frontier research mentality going again – perhaps it is simply this which has been most conspicuous by its absence of late.

In this spirit but putting the codicological and palaeographical frontiers to one side (because the Beinecke doesn’t seem to be at all interested, and I suspect it will start to become clear over the next few months why this is so), here’s my proposal for an entirely new research front to open right up: Rome 1465-1467.

* * * * * *

The central cryptographic paradox of the Voynich Manuscript is that it manages to combine the simplicity of 14th century monoalphabetic ciphers (language-like and with a restricted alphabet size) with the mathematical inscrutability of 16th century polyalphabetic ciphers, yet has a (claimed) radiocarbon dating that sits between the two. Similarly, it contains a cipher letter pair (‘4o’) which was in use around Milan between 1440 and 1460, yet its cipher system is tangibly more sophisticated than anything found in the cipher ledgers of the day.

I’m going to put the radiocarbon dating on one side for the moment, and run with 1465-1467 – this was specifically when Leon Battista Alberti started researching in Rome not only how to break ciphers, but also how to make unbreakable ciphers. In fact, this precise time and place marked the birth of polyalphabetic ciphers, and arguably of modern cryptographic (and cryptologic) practice.

So far so well documented. But there’s a crucial element missing from this – the company Alberti kept in Rome while he was doing this. One of the only things I learnt from Gavin Menzies’ dismal “1434” (which I can’t even bring myself to review) was that while Regiomontanus was in Rome between 1461 and 1465, he often met up with Alberti and Paolo Toscanelli at Nicholas of Cusa’s house, though the mystery is what he was doing between 1465 and 1467, when “he seems to have disappeared”. [p.143] Of course, Nicholas of Cusa died in 1464, and though Toscanelli was a good friend of Nicholas, he only rarely ventured out of Florence, so this is already something of a simplification.

Yet here we have a critical moment when four polymathic giants of the Renaissance did somewhat more than cross paths (and one might throw others such as Filelfo, George of Trebizond, and [dare I say it] Filarete into this same mix): one might even speculate whether combining Nicholas of Cusa’s interest in concave lenses (De Beryllo, 1441) with Regiomontanus’ astronomy and with Toscanelli’s cosmography did indeed provide the conceptual spark that was to grow into the telescope (and then the microscope) during the course of the following century (even if the raw technology to make such an object was not yet there).

Might this intellectually rich time and place in some way be the loamy bed in which the seed of the Voynich Manuscript grew to its full fruition? To my eyes, there’s something innately multidisciplinary about the VMs, that speaks of subtle collaboration – people contributing to make something more than merely the sum of its parts.

Hence the new research frontier I propose is based on a single question: what are the archival resources that historians have used to reconstruct these meetings (and this community) in Rome in 1465-1467? Perhaps if we now revisit these same resources, we might notice a fleeting mention of the VMs in conception, in construction, in motion, or in retrospect, who knows?

* * * * * *

So, what question would you like answered? What research frontier would you like opened up in 2010?

Some more thoughts on the curious “key” sequence in the Beale Papers

Back in 1980, Jim Gillogly applied the Declaration of Independence codebook for the second Beale Paper (“B2”) to the first Beale Paper (“B1”), and discovered a very unlikely sequence in the resulting text: ABFDEFGHIIJKLMMNOHPP. The chance of the middle section alone (“DEFGHIIJKLMMNO”) occurring at random is about one in a million million, and what is even spookier is that the two aberrant letters in the longer sequence (“F” near the beginning, and “H” near the end) are one entry off from correct letters in the codebook (195 = “F” while 194 = “C”, and 301 = “H” while 302 = “O”).

Gillogly attributed these to encoding slips: but given that I’m wondering whether this string is perhaps a code-sequence of some sort, could it be that the encoder used a slightly different transcription of the Declaration of Independence from the one he/she used for B2? This would yield systematic single-number shifts: so let’s look again at the key-sequence and the adjacent letters in the B2 codebook:

112 T R G A I
18  P B W H C
147 T A O T A
436 L B A P U
195 C F L A T  <-- Gillogly's first apparently offset code
320 I D O T E
37  A E S T W
122 P F S T W
113 R G A I A
6   O H E I B
140 I I T R O  <-- this code might possibly be offset too?!
8   E I B N F
120 T J P F T
305 P K O G B
42  T L O N A
58  O M R R T
461 H M H H D
44  O N A O N
106 P O H T T
301 T H O T P  <-- Gillogly's second apparently offset code
13  O P T D T
408 O P U T P
680 C A U B O
93  C W C U R

Today’s observation, then, is that if the errors in the Gillogly key sequence arose from having used a slightly different codebook transcription of the Declaration of Independence, and that the key string was intended to be ABCDEFGHIIJKLMMNOOPP, then we have two definite (and possibly three) places where the B1 codebook transcription may have slipped out of registration with the B2 codebook transcription: the code used for the first “I” (141) could equally well have been 140, because that also codes for “I”.
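This one-off hypothesis is easy to check mechanically once the codebook is in machine-readable form. A minimal Python sketch – the dict below holds only the four codebook entries confirmed above (194 = “C”, 195 = “F”, 301 = “H”, 302 = “O”), not a full Declaration transcription:

```python
# Fragment of the B2 codebook: just the entries discussed above.
CODEBOOK = {194: "C", 195: "F", 301: "H", 302: "O"}

def find_offset(code, expected, codebook=CODEBOOK):
    """Return the shift (0, -1 or +1) at which `code` yields the
    expected letter, or None if no near neighbour matches."""
    for delta in (0, -1, +1):
        if codebook.get(code + delta) == expected:
            return delta
    return None

print(find_offset(195, "C"))  # -> -1 (Gillogly's first aberrant code)
print(find_offset(301, "O"))  # -> 1  (his second)
```

Run over all twenty codes of the key sequence with a full codebook transcription, this would flag exactly which entries sit one position out of registration.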

Yet because the sequence is long enough to contain codes that seem correct either side of these errors, we have the possibility of determining the bounds of those stretches in the B1 transcription where the variations (in this scenario) would have occurred. Specifically:-

122 P F S T W
 ?? -1
140 I I T R O
 ?? +1
147 T A O T A
147 T A O T A
 ?? -1
195 C F L A T
 ?? +1
 ?? +1
301 T H O T P
 ?? -1
305 P K O G B

So, if this scenario is correct, it would imply that (relative to the B2 codebook) the B1 codebook transcription dropped a character somewhere between #147 and #195, gained two somewhere between #195 and #301, and then lost another one between #301 and #305. There’s also the possibility that a character was dropped between #122 and #140 and then regained between #140 and #147… not very likely, but worth keeping in mind.
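If this scenario holds, correcting B1’s codes is just a piecewise-constant remapping: each dropped or gained character shifts every subsequent code by ∓1. Here is a Python sketch – note that the event positions below (170, 250, 280, 303) are purely hypothetical placements inside the bounds inferred above, chosen only for illustration:

```python
# Hypothetical registration-slip events as (first affected code, shift).
# The positions are invented; only the containing ranges
# (#147-#195, #195-#301, #301-#305) are inferred in the text.
EVENTS = [(170, -1), (250, +1), (280, +1), (303, -1)]

def remap(code, events=EVENTS):
    """Map a B1 code onto the B2 codebook by accumulating every
    shift whose start position lies at or below the code."""
    return code + sum(delta for pos, delta in events if pos <= code)

print(remap(195))  # -> 194, which codes for "C" in the B2 codebook
print(remap(301))  # -> 302, which codes for "O"
print(remap(147))  # -> 147, unchanged outside the slipped stretches
```

Searching over candidate event positions (and re-decoding B1 each time) is then a small brute-force job.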

Between #147 and #195, the B1 code usage table looks like this (20 instances):-

148
150 150 150 150 – 154
160 – 162
170 – 172 – 176 176
181 – 184 – 189
191 – 193 – 194 194 194

Between #195 and #301, the B1 code usage table looks like this (64 instances):-

200 200 – 201 201 – 202 – 203 – 206 – 207 – 208 208
210 – 211 211 212 212 – 213 213 – 214 – 216 216 216 216 216 216 216 – 218 218 – 219 219 219 219
221 221 – 224 – 225 – 227
230 230 – 231 – 232 232 – 233 – 234 234 234 – 236
242 – 246 – 247
251
261 – 263 – 264
275 275
280 280 – 283 283 – 284 284 – 286
290 – 294

So, this proposed mechanism would offset up to 84 codes from B1, which may be sufficiently disruptive to have caused B1 to appear undecodable to cryptological luminaries such as Jim Gillogly. It is also entirely possible that (just as with the B2 codebook) there are other paired insertions and deletions to contend with here.

An interesting observation here: many of the transcription errors in the B2 codebook fell close to 10-character (line) boundaries. If this is also the case for some of these (putative) B1 codebook transcription errors, then we should be able to reduce the number of possible variations to check.

It seems as though penetrating public cryptographic analysis of the three Beale Papers (B1, B2, and B3) halted abruptly in 1980 when Jim Gillogly pointed out a problem with B1. If, as he pointed out, you apply to B1 the same dictionary code used for B2 (famously derived from the Declaration of Independence), you get a ciphertext with some distinctive properties:- 

SCS?E TFA?G CDOTT UCWOT WTAAI WDBII DTT?W TTAAB BPLAA ABWCT
LTFIF LKILP EAABP WCHOT OAPPP MORAL ANHAA BBCCA CDDEA OSDSF
HNTFT ATPOC ACBCD DLBER IFEBT HIFOE HUUBT TTTTI HPAOA ASATA
ATTOM TAPOA AAROM PJDRA ??TSB COBDA AACPN RBABF DEFGH IIJKL
MMNOH PPAWT ACMOB LSOES SOAVI SPFTA OTBTF THFOA OGHWT ENALC
AASAA TTARD SLTAW GFESA UWAOL TTAHH TTASO TTEAF AASCS TAIFR
CABTO TLHHD TNHWT STEAI EOAAS TWTTS OITSS TAAOP IWCPC WSOTT
IOIES ITTDA TTPIU FSFRF ABPTC COAIT NATTO STSTF ??ATD ATWTA
TTOCW TOMPA TSOTE CATTO TBSOG CWCDR OLITI BHPWA AE?BT STAFA
EWCI? CBOWL TPOAC TEWTA FOAIT HTTTT OSHRI STEOO ECUSC ?RAIH
RLWST RASNI TPCBF AEFTB

Here you can see not only tripled letters (AAA, PPP), quadrupled letters (TTTT) and even quintupled letters (TTTTT), but also (and this is the part that ignited Gillogly’s cryptographic curiosity) the sequence ABFDEFGHIIJKLMMNOHPP. Even if you restrict your view to the DEFGH IIJKL MMNO monotonically increasing sub-sequence in the middle, the chances of that appearing at random would be (he calculates) about one in a million million. Making it even more improbable is the fact that the aberrant “F” near the start has code 195 where code 194 is “C”, and the aberrant “H” near the end has code 301 where code 302 is “O”, which makes it look a great deal as though these were simply encoding slips. And if these were intended to be C and O respectively, the unlikeliness of the sequence vastly increases again. 
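As a reminder of the mechanics: a Beale-style dictionary code replaces each number with the first letter of the correspondingly-numbered word of the key text. A minimal Python sketch (the key text here is a made-up fragment, not the actual Declaration transcription used for B2):

```python
def book_decode(codes, key_text):
    """Decode a Beale-style number sequence: code n becomes the
    first letter of the n-th word (1-indexed) of the key text."""
    words = key_text.split()
    return "".join(words[n - 1][0].upper() for n in codes)

# Toy example only -- the real cipher indexes into the
# Declaration of Independence:
print(book_decode([1, 3, 2], "when in the course"))  # -> "WTI"
```

Applying this with the B2 key text to B1’s numbers is exactly the experiment that produced the ciphertext above.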

Yet as far as the multiple letter groups go, we can do some simple probability calculations based on the 1321 characters Gillogly lists for the B2 codebook. From frequency analysis – T 255, A 167, O 145, H 80, I 69, S 62, F 62, P 59, W 59, C 53, B 48, R 41, D 37, E 36, L 35, M 30, U 28, G 19, N 19, J 10, K 4, V 2, Y 1, X 1, Q 1, Z 0 – you can see that T, A, and P occur 19.3%, 13.5%, and 4.46% (respectively) of the time in the codebook. So, if the text letters were picked at random (as would pretty much be the case if B2’s codebook was completely the wrong codebook for B1), the chances of these patterns occurring randomly at least once in a 520-character sample would be something like this:-

  • prob(TTTTT) = 1 – (1 – 0.193^5)^(520-(5-1)) = 12.9%
  • prob(TTTT) = 1 – (1 – 0.193^4)^(520-(4-1)) = 51.2%
  • prob(AAA) = 1 – (1 – 0.135^3)^(520-(3-1)) = 72.1%
  • prob(PPP) = 1 – (1 – 0.0446^3)^(520-(3-1)) = 4.5%

You would also expect to see a copious amount of TT and AA pairs scattered through the text, which is in fact exactly what we see (13 x TT and 10 x AA, quite apart from the TTTTT, TTTT and AAA listed above). 
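The run probabilities above all come from the same “at least one run of length k in n symbols” approximation, which is easy to sketch and check in Python (treating each letter draw as independent, as the random-text assumption requires):

```python
def prob_run(p, k, n):
    """Approximate probability that a text of n independent symbols
    contains at least one run of k consecutive copies of a symbol
    whose frequency is p (one trial per possible run start)."""
    return 1 - (1 - p ** k) ** (n - k + 1)

# Reproducing the figures quoted above for a 520-character sample:
print(round(prob_run(0.193, 5, 520), 3))   # TTTTT -> 0.129
print(round(prob_run(0.135, 3, 520), 3))   # AAA   -> 0.721
print(round(prob_run(0.0446, 3, 520), 3))  # PPP   -> 0.045
```

(The exponent n − k + 1 is just the number of positions at which a length-k run can start, i.e. the 520 − (k − 1) used above.)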

And therein lies the basic Beale Papers paradox: though the distribution and clustering seem to imply that B2’s codebook was not B1’s codebook, the ‘Gillogly sequence’ seems to imply that the two are linked in some way. So, what’s it to be? Damoclean swords aside, how can we unpick this cryptologic knot? 

My observation here is that if there is also some kind of monoalphabetic substitution going on (i.e. in addition to the Declaration of Independence codebook), then it’s quite possible that the Gillogly sequence represents the keyword or keystring used to generate that substitution alphabet. This might well explain the doubled letters within the keystring (i.e. the II MM and PP): if so, we would be looking for a keystring with four doubled letters but where none of the vowels repeat. 

ABCDEFGHIIJKLMMNOOPP 

Hmmm… there can’t be many English words ending with two adjacent doubled letters: in fact, the only two I can think of are coffee and toffee (please let me know if you can think of any others!). ‘Toffee’ doesn’t sound very promising, so could it be ‘coffee’? The previous word would then need to end with “C” to make a doubled letter… not hugely promising, but perhaps it’s a start:-

ABCDEFGHIIJKLM MNOOPP
xxxxxxxxxxxxxC COFFEE
xxxxxxxxxxxxxT TOFFEE

Alternatively, it might be a three letter word, like “TOO” or “OFF”. Had Eric Sams considered this, doubtless he would have happily constructed all kinds of valid key phrases that fit these constraints, such as:-

ABCDEF GHIIJ KLMMNO OPP
CLUNKY SPEED RABBIT TOO

OK, it’s true that the key phrase to the Beale Papers is not going to be “CLUNKY SPEED RABBIT TOO”, but maybe (just maybe) it’s a step in the right direction. 🙂 
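Screening candidate key phrases like this can be automated: a phrase fits the Gillogly string exactly when the two line up as a consistent one-to-one letter substitution (so the II, MM, OO and PP doublets must land on doubled phrase letters, and no two distinct phrase letters may share a substitute). A quick Python sketch:

```python
def consistent_key(keystring, phrase):
    """True if `phrase` could be the plaintext of `keystring` under
    some monoalphabetic (one-to-one) letter substitution."""
    if len(keystring) != len(phrase):
        return False
    fwd, back = {}, {}
    for k, p in zip(keystring, phrase):
        # Reject if k already maps to a different letter, or if p is
        # already the image of a different key letter.
        if fwd.setdefault(k, p) != p or back.setdefault(p, k) != k:
            return False
    return True

print(consistent_key("ABCDEFGHIIJKLMMNOOPP", "CLUNKYSPEEDRABBITTOO"))  # True
print(consistent_key("ABCDEFGHIIJKLMMNOOPP", "CLUNKYSPEEDRABBITTWO"))  # False
```

Run against a wordlist or phrase corpus, this would quickly narrow the field of “CLUNKY SPEED RABBIT TOO”-style candidates.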

Incidentally… the Wikipedia Beale Papers page notes that “In 1940, the famous cryptanalyst, Dr. White of Yale University, came close to solving the Beale ciphers after tracking down the suspected key hidden by Beale in St. Louis—he never spoke of his findings.” Though I did a bit of Internet sleuthing to try to work out who this Dr White was, I didn’t really get anywhere – I don’t think he was the Maurice Seal White (b.1888) who wrote the 1938 book “Secret writing : how to write and solve messages in cipher and code” (which I found listed in Lou Kruh’s bibliography and Worldcat) and who was a Columbia alumnus in 1920 (see p.212 here), but it’s hard to tell. Please let me know if you find out!

At the start of my own VMs research path, I thought it was important to consider everyone’s observations and interpretations (however, errrm, ‘fruity’) as each one may just possibly contain that single mythical seed of truth which could be nurtured and grown into a substantial tree of knowledge. Sadly, however, it has become progressively clearer to me as time has passed that any resemblance between most Voynich researchers’ interpretations (i.e. not you, dear reader) and what the VMs actually contains is likely to be purely coincidental.

Why is this so? It’s not because Voynich researchers are any less perceptive or any more credulous than ‘mainstream’ historians (who are indeed just as able to make fools of themselves when the evidence gets murky, as Voynich evidence most certainly is). Rather, I think it is because there are some ghosts in our path – illusory notions that mislead and hinder us as we try to move forward.

So: in a brave (but probably futile) bid to exorcise these haunted souls, here is my field guide to what I consider the four main ghosts who gently steer people off the (already difficult) road into the vast tracts of quagmire just beside it…

Ghost #1:  “the marginalia must be enciphered, and so it is a waste of time to try to read them”

I’ve heard this from plenty of people, and recently even from a top-tier palaeographer (though it wasn’t David Ganz, if you’re asking). I’d fully agree that…

  • The Voynich Manuscript’s marginalia are in a mess
  • To be precise, they are in a near-unreadable state
  • They appear to be composed of fragments of different languages
  • There’s not a lot of them to work with, yet…
  • There is a high chance that these were written by the author or by someone remarkably close to the author’s project

As with most non-trick coins, there are two quite different ways you can spin all this: either as (a) good reasons to run away at high speed, or as (b) heralds calling us to great adventure. But all the same, running away should be for properly rational reasons: simply dismissing the marginalia as fragments of an eternally-unreadable ciphertext seems more like an alibi for not rising to their challenge – the smattering of Voynichese embedded in them aside, there is no good reason to think that they are written in cipher.

Furthermore, the awkward question is this: given that the VMs’ author was able to construct such a sophisticated cipher alphabet and sustain it over several hundred pages of clearly readable text, why add a quite different (but hugely obscure) one on the back page in such unreadable text?

(My preferred explanation is that later owners emended the marginalia to try to salvage its (already noticeably faded) text: but for all their good intentions, they left it in a worse mess than the one they inherited. And this is a hypothesis that can be tested directly with multispectral and/or Raman scanning.)

Ghost #2: “the current page order was the original page order, or at least was the direct intention of the original author”

As evidence for this, you could point out that the quire numbers and folio numbers are basically in order, and that pretty much all the obvious paint transfers between pages occurred in the present binding order (i.e. the gathering and nesting order): so why should the bifolio order be wrong?

Actually, there are several good reasons: for one, Q13 (“Quire 13”) has a drawing that was originally rendered across the central fold of a bifolio nested at the inside of the quire – yet that bifolio is not now innermost. Also, a few long downstrokes on some early Herbal quires reappear in the wrong quire completely. And the (presumably later) rebinding of Q9 has made the quire numbering subtly inconsistent with the folio numbering. Also, the way that Herbal A and Herbal B pages are mixed up, and the way that the handwriting on adjacent pages often changes style dramatically, would seem to indicate that some kind of scrambling has taken place right through the herbal quires. Finally, it seems highly likely that the original second-innermost bifolio in Q13 was Q13’s current outer bifolio (but inside out!), which would imply that at least some bifolio scrambling took place even before the quire numbers were added.

Yet some smart people (most notably Glen Claston) continue to argue that this ghost is a reality: and why would GC be wrong about this when he is so meticulous about other things? I suspect that the silent partner to his argument here is Leonell Strong’s claimed decipherment: and that some aspect of that decipherment requires that the page order we now see can only be the original. It, of course, would be wonderful if this were true: but given that I remain unconvinced that Strong’s “(0)135797531474” offset key is correct (or even historically plausible for the mid-15th century, particularly when combined with a putative set of orthographic rules that the encipherer is deemed to be trying to follow), I have yet to accept this as de facto codicological evidence.

To be fair, GC now asserts that the original author consciously reordered the pages according to some unknown guiding principle, deliberately reversing bifolios, swapping them round and inserting extra bifolios so that their content would follow some organizational plan we currently have no real idea about. Though this is a pretty sophisticated attempt at a save, I’m just not convinced: I’m pretty sure (for example) that Q9 and the pharma quires were rebound for handling convenience – in Q9’s case, this involved rebinding it along a different fold to make it less lopsided, while in the pharma quires’ case, I suspect that all the wide bifolios from the herbal section were simply stitched together for convenience.

Ghost #3: “Voynichese is a single language that remained static during the writing process”

If you stand at the foot of a cliff and de-focus your gaze to take in the whole vertical face in one go, you’d never be able to climb it: you’d be overawed by the entire vast assembly. No: the way to make such an ascent is to strategize an overall approach and tackle it one hand- and foot-hold at a time. Similarly, I think many Voynich researchers seem to stand agog at the vastness of the overall ciphertext challenge they face: whereas in fact, with the right set of ideas (and a good methodology) it should really be possible to crack it one page (or one paragraph, line, word, or perhaps even letter) at a time.

Yet the problem is that many researchers rely on aggregating statistics calculated over the entire manuscript, when common sense shows that different parts have very different profiles – not just Currier A and Currier B, but also labels, radial lines, circular fragments, etc. I also think it extraordinarily likely that a number of “space insertion ciphers” have been used in various places to break up long words and repeating patterns (both of which are key cryptographic tells). Therefore, I would caution all Voynich researchers relying on statistical evidence for their observations that they should be extremely careful about selecting pragmatic subsets of the VMs when trying to draw conclusions.

Happily, some people (most notably Marke Fincher and Rene Zandbergen) have come round to the idea that the Voynichese system evolved over the course of the writing process – but even they don’t yet seem comfortable with taking this idea right to its limit. Which is this: that if we properly understood the dynamics by which the Voynichese system evolved, we would be able to re-sequence the pages into their original order of construction (which should be hugely revealing in its own right), and then start to reach towards an understanding of the reasons for that evolution – specifically, what type of cipher “tells” the author was trying to avoid presenting.

For example: “early” pages neither have word-initial “l-” nor do we see the word “qol” appear, yet this is very common later. If we compare the Markov states for early and late pages, could we identify what early-page structure that late-page “l-” is standing in for? If we can do this, then I think we would get a very different perspective on the stats – and on the nature of the ‘language’ itself. And similarly for other tokens such as “cXh” (strikethrough gallows), etc.
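Claims like this are checkable with nothing more than per-page-group token counting over a transcription. A minimal sketch (the word lists in the example are invented stand-ins, not real EVA transcription data):

```python
from collections import Counter

def initial_letter_profile(words):
    """Count word-initial characters in a list of transcribed words."""
    return Counter(w[0] for w in words if w)

def compare_profiles(early_words, late_words):
    """(early, late) counts for every word-initial letter seen."""
    early = initial_letter_profile(early_words)
    late = initial_letter_profile(late_words)
    return {c: (early[c], late[c]) for c in sorted(set(early) | set(late))}

# Invented mini-example: "l-" words appear only in the late group.
print(compare_profiles(["daiin", "qokedy"], ["lchedy", "qol", "daiin"]))
# -> {'d': (1, 1), 'l': (0, 1), 'q': (1, 1)}
```

The same pattern extends naturally to bigram (Markov) transition counts, which is what comparing early-page and late-page “states” would actually require.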

Ghost #4: “the text and paints we see have remained essentially unchanged over time”

It is easy to just take the entire artefact as a fait accompli – something presented to our modern eyes as a perfect expression of an unknown intention (this is usually supported by arguments about the apparently low number of corrections). If you do, the trap you can then fall headlong into is trying to rationalize every feature as deliberate. But is that necessarily so?

Jorge Stolfi has pointed out a number of places where it looks as though corrections and emendations have been made, both to the text and to the drawings, with perhaps the most notorious “layerer” of all being his putative “heavy painter” – someone who appears to have come in at a late stage (say, late 16th century) to beautify the mostly-unadorned drawings with a fairly slapdash paint job.

Many pages also make me wonder about the assumption of perfection, and possibly none more so than f55r. This is the herbal page with the red lead lines still in the flowers which I gently parodied here: it also (unusually) has two EVA ‘x’ characters on line 8. There’s also an unusual word-terminal “-ai” on line 10 (qokar or arai o ar odaiiin) [one of only three in the VMs?], a standalone “dl” word on line 12 [sure, dl appears 70+ times, but it still looks odd to me], and a good number of ambiguous o/a characters. To my eye, there’s something unfinished and imperfectly corrected about both the text and the pictures here that I can’t shake off, as if the author had fallen ill while composing it, and tidied it up in a state of distress or discomfort: it just doesn’t feel as slick as most pages.

I have also had a stab at assessing likely error rates in the VMs (though I can’t now find the post, must have noted it down wrong) and concluded that the VMs is, just as Tony Gaffney points out with printed ciphers, probably riddled with copying errors.

No: unlike Paul McCartney’s portable Buddha statue, the Voynich Manuscript’s inscrutability neither implies inner perfection nor gives us a glimmer of peace. Rather, it shouts “Mu!” and forces us to microscopically focus on its imperfections so that we can move past its numerous paradoxes – all of which arguably makes the VMs the biggest koan ever constructed. Just so you know! 🙂

You may well have heard of the furore surrounding King’s College London’s recent decision to get rid of roughly 10% of its academic staff, including (perhaps most controversially) its Chair of Palaeography, currently held by David Ganz. I’ve been trying for months to raise a big enough head of Daily Mail-esque columnist steam to vent some anger about this downsizing… but I just can’t do it. I’m angry, but probably not for the reasons you might expect.

If Kings were instead talking about getting rid of a Chair of Etymology (perhaps sponsored by the authors of all those annoying books about banal words that seem to have taken over bookshop tills?), a Chair of Phrenology, or indeed a chair in any other of those useless Victorian sub-steam-punk nonsensical technical subjects, nobody would bat an eyelid. All the same, palaeography is arguably an exception because raw historical text is almost a magical thing: ideas written down have a slow life far beyond their authors’ own, making palaeography the art of keeping written ideas alive.

Yet one of the things muddying the waters here is that there are two quite distinct palaeographies at play: firstly, there’s the classic Victorian handwriting collectoriana side of Palaeography, by which vast collections of hands were amassed and (as I understand it) spuriously positivistic developmental trees constructed; while secondly, there is a modern technical, forensic side to the subject more to do with ductus, and closely allied with codicology. What the two sides share is that practitioners are good at reading stuff, and like to help people to read stuff they want to. Yet to my eyes, the dirty little secret is that the ductus / forensic side of the subject is rarely integrated with the craft knowledge / practitioner side of the subject.

Yet historians will always need to read texts: and the number of manuscripts scanned and available on the web must be at least doubling each year. So at a time when accessible texts are proliferating, why is palaeography itself in decline?

For me, the root problem lies in history itself. When I was at school, History was taught as The Grand Accumulation Of Facts About Grand Men In History (which, though a nonsensical approach, was at least a long-standing nonsensical approach): while nowadays, the ascendancy of Burkeian social history has turned vast swathes of the subject instead into a wayward empathy fest – Feeling How It Felt To Feel Like An Unprivileged Pleb Just Like You (but without a plasma screen and iPhone) In The Very Olden Days. No less nonsensical, no less useless.

Actually, my firm belief is that taught History should not be a recital of that-which-has-happened, but should instead be the process of teaching people of all ages how to find out what happened in the past for themselves. When I look at contemporary events and documents (dodgy dossiers, Dr David Kelly, Jean Charles de Menezes, etc), I interpret our shabby public response as a collective failure of history teaching. We are not taught how to think critically about documentary evidence, even though this is a skill utterly central to active citizenship.

And so I think History-as-taught-at-schools should be about primary evidence, about reliability of sources, about practising exercising judgment. Really, I think it should start neither with Kings & Queens nor with plebs, but instead with codicology and palaeography: if you believe in the primacy of evidence, then you should teach that as the starting point. It’s everything else about history that is basically bunk!

Personally, I would re-label codicology as “material forensics” and palaeography as “textual forensics” (I’m not sure how serious people are about wanting to rename the latter ‘diachronic decoding’, but that’s almost too ghastly a Dan-Brown-ism to consider), and would build the first year of historical curricula in schools around the nature and limits of Evidence – basically, the epistemology of pragmatic history. To me, the fact that palaeography only kicks in as a postgraduate module is what we should be ashamed of.

So, who signed Palaeography’s death warrant? Not King’s College’s vastly overpaid administrators, then, but instead all those historians who have chosen not only to back away from primary evidence but also to teach others to do the same. David Ganz should be teaching school teachers how to inspire children around evidence: and it is our own fault that palaeography has become so stupidly marginalized in mainstream historical practice that the King’s College administrators’ desire to get rid of it can seem so reasonable.

Trying to pin blame on King’s College is, I would say, missing the point: which is that we collectively killed palaeography already. If the overall project was to get rid of romantic, delusional, denialist History (and much social history as practised has just as romantic a central narrative thread as the Big Man history it aimed to supplant), fair enough: but rather than leave a conceptual vacuum in its wake, it should be replaced with skeptical, pragmatic History (based on solid forensic thinking and an appreciation of the internal agendas behind texts). I believe that this would yield good critical thinking skills as well as exactly the kind of good citizenship politicians so often say is missing.

But… what are the chances of that, eh?

I used to quite like Peanuts as a kid, though looking back I’d be hard-pressed to say which of the characters I particularly identified with. Perhaps identifying with characters is more of an adult way of relating to cultural objets d’art: I think I just liked the jokes.

Of course, nothing in the following badly-hacked Peanuts cartoon is anything to do with you, you understand, dear Cipher Mysteries reader: as with the hacked Garfield-does-Voynich strip, it’s just a bunch of asemic words arranged in speech bubbles beside someone else’s copyrighted images. So feel completely free to make of it precisely what you will. Enjoy!

For the most part, constructing plausible explanations for the drawings in the Voynich Manuscript is a fairly straightforward exercise. Even its apparently-weird botany could well be subtly rational (for example, if plants on opposite pages swapped their roots over in the original binding, in a kind of visual anagram), as could the astronomy, the astrology, and the water / balneology quires (if all perhaps somewhat obfuscated). Yet this house of oh-so-sensible cards gets blown away by the hurricane of oddness that is the Voynich Manuscript’s nine-rosette page.

If you’re not intrigued by this, you really do have a heart of granite, because of all the VMs’ pages, this is arguably the most outright alien & Codex Seraphinianus-like. Given the strange rotating designs (machines?), truncated pipes, islands, and odd causeways, it’s hard to see (at first, second and third glances) how this could be anything but irrational. Yet even so, those who (like me) are convinced that the VMs is a ‘hyperrational’ artefact are forced to wonder what method there could be to this jumbled visual madness. So: what’s the deal with this page? How should we even begin to try to ‘read’ it?

People have pondered these questions for years: for example, Robert Brumbaugh thought that the shape in the bottom left was a “clock” with “a short hour and long minute hand”. However, now that we have proper reproductions to work with, his claim seems somewhat spurious, for the simple reason that the two “hands” are almost exactly the same length. Mary D’Imperio (1977) also thought the resemblance “superficial”, noting instead that “an exactly similar triangular symbol with three balls strung on it occurs frequently amongst the star spells of Picatrix, and was used by alchemists to mean arsenic, orpiment, or potash (Gessman 1922, Tables IV, XXXIII, XXXXV)” (3.3.6, p.21).

Back in 2008, Joel Stevens suggested that the rosettes might represent a map, with the top-left and bottom-right rosettes (which have ‘sun’ images attached to them) representing East and West respectively, and with Brumbaugh’s “clock” at the bottom-left cunningly representing a compass in the form of the point of an arrow pointing towards Magnetic North. You know, I actually rather like Joel’s idea, because it at least explains why the two “hands” are the same length: and given that I suspect that there’s a hidden arrow on the “bee” page and that many of the water nymphs may be embellished diagrammatic arrows, one more hidden arrow would fit in pretty well with the author’s apparent construction style.

This same idea (but without Joel’s ‘hidden compass’ nuance) was proposed by John Grove on the VMs mailing list back in 2002. He also noted that many of “the words appear to be written as though the reader is walking clockwise around the map. The words inside the roadway (when there are some) also appear to be written this way (except the northeast rosette by the castle).” I’ve underlined many of the ‘causeway labels’ in red above, because I think that John’s “clockwise-ness” is a non-obvious piece of evidence which any theory about this page would probably need to explain. And yes, there are indeed plenty of theories about this page!

In 2006, I proposed that the top-right castle (with its Ghibelline swallowtail merlons, ravelins, accentuated front gate, spiralling text, circular canals, etc) was Milan; that the three towers just below it represented Pavia (specifically, the Carthusian Monastery there); and that the central rosette represented Venice (specifically, an obfuscated version of St Mark’s Basilica as seen from the top of the Campanile). Of course, even though this is (I think) remarkably specific, it still falls well short of a “smoking gun” scientific proof: so, it’s just an art history suggestion, to be safely ignored as you wish.

In 2009, Patrick Lockerby proposed that the central rosette might well be depicting Baghdad (which, along with Milan and Jerusalem, was one of the few medieval cities consistently depicted as being circular). Alternatively, one of his commenters also suggested that it might be Masjid al-Haram in Mecca (but that’s another story).

Also in 2009, P. Han proposed a link between this page and Tycho Brahe’s “work and observatories”, with the interesting suggestion that the castle in the top-right rosette represents Kronborg Slot (which you may not know was the castle appropriated by Shakespeare for Hamlet), with the centre of that rosette’s text spiral representing the island of Hven, where Brahe famously had his ‘Uraniborg’ observatory. Kronborg Slot was extensively remodelled in 1585, burnt down in 1629 and was then rebuilt: but I wonder whether it had swallowtail merlons when it was built in the 1420s? Han also suggests that other features on the page represent Hven in different ways (for example, the three towers marked ‘PAVIA?’ above); that the pipes and tall structures in the bottom-right rosette represent Tycho’s ‘sighting tubes’ (a kind of non-optical precursor to telescopes); that one or more of the mill-like spoked structures represent(s) Hven’s papermill’s waterwheel; and that the central rosette represents the buildings of Uraniborg (for which we have good visual reference material). Han’s central hypothesis (on which more another day!) is that the VMs visually encodes information about various supernovae: the suggestion here is that the ‘hands’ of Brumbaugh’s clock are in fact part of the ‘W-shape’ of Cassiopeia, which sits close in the sky to SN 1572. Admittedly, Han’s portolan-like ‘Markers’ section at the end of the page goes well beyond what I’d call accessible, but there’s no shortage of interesting ideas here.

Intriguingly, Han also points out the strong visual similarity between the central rosette’s ‘towers’ and the pharma section’s ‘jars’: D’Imperio also thought these resembled “six pharmaceutical ‘jars'”. I’d agree that the resemblance seems far too strong to be merely a coincidence, but what can it possibly mean?

Finally (and also in 2009), Rich SantaColoma put together a speculative 3D tour of the nine-rosette page (including a 3D flythrough on YouTube), based on his opinion that the VMs’ originator “was clearly representing 3D terrain and structures”. All very visually arresting: however, the main problem is that the nine-rosette page seems to incorporate information on a number of quite different levels (symbolic, structural, physical, abstract, notional, planned, referential, diagrammatic, etc), and reducing them all to 3D runs the risk of overlooking what may be a single straightforward clue that would help unlock the page’s mysteries.

All in all, I suspect that the nine-rosette page will continue to stimulate theories and debate for some time yet! Enjoy! 🙂

I have to say that I’m pretty humbled by the hit stats for the Wikipedia Voynich page: when the xkcd webcomic spike happened in June 2009, the Voynich page got a quite shocking 77k hits in a single day. In fact, its daily traffic has risen from 500–1000 hits at the start of 2009 to 1000–2000 hits as of now (May 2010), albeit interspersed with occasional 10k days.

And so I wondered if it was just about time for Jim Davis’ Garfield to take on the Voynich Manuscript: if he did, might it look something like this?

For any passing lawyers: I’m neither passing this off as an original Garfield strip by Jim Davis nor trying to make money from his intellectual property. It’s just one of his strips unconvincingly hacked to show what my idea of a Voynich-themed Garfield strip would look like, for the benefit of my Cipher Mysteries readers. And yes, I’ll happily take it down if you ask me to. 🙂