
Text Analysis - Transcription of the Text


The main mystery of the Voynich MS is clearly its unknown writing. This topic is addressed from three different aspects, on three (sets of) pages:

This page addresses the second part, the transcription of the text. It summarises historical transcription efforts and describes the present state of things. This page is very relevant for those who are interested in the interpretation of the Voynich MS text, but it can be skipped by those who are not.

A section at the bottom of this page explains where one may find the main transcription files.


The term transcription is used here to describe the conversion of the handwritten text of the Voynich MS into a computer-readable format (file). The purpose of this is to allow computer software to analyse the text, for example in order to derive statistics or to aid the interpretation and ideally translation of the text. The first such transcription was made by William Friedman after WWII, for processing by so-called 'tabulating machines'.

Many years before that, in the 1930's, Fr. Th. Petersen of the Catholic University of America already made a complete hand transcription (i.e. a hand-written copy) of the MS, working from a complete photocopy and, for some difficult passages, from the MS itself. This document, in which he added many interesting annotations, is still preserved in the William F. Friedman collection of the Marshall Library in Lexington (Va) (1).

A transcription should be clearly distinguished from a proposed translation of the text. The only purpose of a transcription is to represent each handwritten character in the MS by a symbol in a computer-readable file in a consistent manner. It doesn't really matter which symbol is used for which character. Ever since the 1940's, different people have used different conventions, or transcription alphabets.



The earliest well-known transcription of large parts of the Voynich MS was made in the 1940's by the so-called First Study Group (FSG) of William Friedman, which was briefly described here. The group was working from photostatic copies of the pages of the MS, and they had defined a transcription alphabet agreed by all members of the team. Because of the desire for secrecy by Friedman, for a very long time nobody outside his team was aware of this transcription exercise or the alphabet they used.

The Friedman transcription was never meant to be a complete transcription of all text in the MS. They decided to concentrate on the linear text written in paragraphs, and not bother with the complicated diagrams with labels and text written in circles. This is very clear from handwritten annotations made on the source copies saying "don't punch this" or "punch only this".

The FSG transcription alphabet uses capital letters and numbers. It has a rather unusual method (by later standards) for transcribing the 'intruding gallows', namely by using a special symbol (Z) for the pedestal (ch).

The FSG transcription effort is described in detail in Reeds (1995) (2). A printout of this transcription was discovered by Jim Reeds in the above-mentioned Marshall library, and together with Jacques Guy he entered it in computer readable form. The resulting file is >> still available for downloading. This transcription is far from complete for the reasons mentioned above.


In 1976 Prof. William R. Bennett of Yale University published a book on the use of the computer to analyse certain properties of language and text (3). In this book, he dedicated a chapter to an analysis of the Voynich MS text, and for this purpose he needed a transcription alphabet. Bennett's alphabet has not been used outside of this publication. The transcription was made from photographs and stored on paper tape. As reported by Brumbaugh (4), Bennett was assisted in his Voynich MS analysis by Jonina Duker, who was then a sophomore student (5).


Roughly at the same time, the cryptanalyst Prescott Currier started a major transcription effort, using an alphabet he designed independently. This alphabet uses the capital letters A-Z and the numbers 0-9 (i.e. all 36). It does not represent some characters which the FSG alphabet does, and it uses single characters for what appear to be composite characters in the MS. When Currier presented his findings at the 1976 Voynich MS symposium, Mary D'Imperio suggested that it would be important that all researchers use a unified alphabet. She had also started transcribing parts of the MS using her own alphabet, which she abandoned in favour of Currier's, and the two files were merged. The combined transcription of Currier and D'Imperio, and the transcription alphabet designed by Currier, have been used well into the 1990's, also in the earliest years of the internet. In the following, the transcription may be referred to by the abbreviation "C-D".

The following table shows the three above-mentioned alphabets together.

[Table: each Voynich character (rendered as a glyph image in the original page) together with its Bennett, FSG and Currier equivalents. The glyph column did not survive text extraction, so the individual mappings cannot be reliably reproduced here.]

(*) Note: Tiltman used the FSG alphabet, but instead of N and M wrote IL and IIL.

One thing that immediately appears from this table is that the various transcribers did not agree on what constitutes a single character in the Voynich MS text, and this turns out to be one of the most difficult points for transcription.

Later, Jim Reeds showed that many characters in the Voynich MS cannot be represented exactly by any of the existing alphabets. There are some 'rare' characters, and in addition there are what appear to be ligatures of several characters. These characters have traditionally been called 'weirdoes', though here I will use the term 'rare characters'. Jim Reeds produced a document with an overview of these rare characters, calling them 'X1' to 'X128'. An excerpt of this little known document is shown below.

Transcription in the age of the WWW

With the emergence of the "World Wide Web" and the start of the Voynich mailing list, transcription of the parts that were still missing from Currier's and D'Imperio's efforts was continued using Currier's alphabet. A number of characters using lower case were added to this alphabet. These are already included (in parentheses) in the above table. As part of this new transcription, which will be referred to as "Vnow", a file format for this transcription file was adopted, about which more will be said further below.

In parallel to this, a completely new type of transcription alphabet was designed:


The 'Frogguy' transcription alphabet was designed by Jacques Guy in 1991. It uses lower-case characters, numbers and diacritical marks. Rather than representing complete characters, it represents the 'strokes' of the script of the Voynich MS. As a result of this approach, the transcribed text resembles the script more closely than with any other alphabet. At the same time, some characters in the MS that may be assumed to be single characters are represented by several symbols in this alphabet. By its nature, it is the first alphabet that can properly represent many of the ligatured characters mentioned above. Jacques Guy also introduced the so-called 'capitalisation rule', which means that a character that is connected to the following one should be represented by a capital letter. The Frogguy transcription alphabet and its use are explained in more detail at a preserved copy of >> one of Jim Reeds' web pages.

An additional important contribution from Jacques Guy was a tool that allowed translation of transcription files from one transcription alphabet to another. This tool, called BITRANS, was written in Pascal and ran in DOS (it still would). It could handle complex translation rules, such as those needed for the Currier alphabet, where, for example, the translation of the character i depends on context.
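The core mechanism of such a tool, ordered rewrite rules applied longest-match-first, can be sketched in a few lines of Python. The rule fragment below is purely illustrative (it is not Currier's actual published table, and the mappings shown are assumptions for the example):

```python
def transliterate(text, rules):
    """Rewrite `text` using the longest matching rule at each position,
    so a multi-character source like 'iin' wins over 'in' or 'i'."""
    # Try sources longest-first: this is what makes the result context-dependent.
    sources = sorted(rules, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for src in sources:
            if text.startswith(src, i):
                out.append(rules[src])
                i += len(src)
                break
        else:
            out.append(text[i])   # no rule matches: copy the character through
            i += 1
    return "".join(out)

# Illustrative fragment of an EVA -> Currier rule set (NOT the full table)
EVA_TO_CURRIER = {"iin": "M", "in": "N", "a": "A", "o": "O", "d": "8", "y": "9"}

print(transliterate("daiin", EVA_TO_CURRIER))  # -> 8AM
```

The longest-match rule is what resolves the ambiguity of strings of i characters: "iin" is consumed as one unit before "in" or a bare "i" is even considered.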

The Frogguy alphabet is the closest in appearance to the Voynich MS text, and it solves the main inconsistency of the Currier alphabet, namely that Currier synthesised strings of consecutive i characters into single characters, while this was not done for strings of consecutive e characters (6). However, it has a distinct disadvantage in that it is not very well suited for the type of numerical analysis mentioned at the top of this page, primarily because of its frequent use of the apostrophe, and to a lesser extent the mixture of letters and numbers. No significant transcriptions have been made using this alphabet.

When Gabriel Landini and myself embarked on a new transcription of the MS, a project that we named "EVMT" at the time, we decided that a new alphabet, but one based on the same principle as Frogguy, would be needed.


EVA originally stood for European Voynich Alphabet, but later became 'Extensible Voynich Alphabet'. It was designed to be similar to Frogguy, but to use only alphabetical characters. It was designed by Gabriel Landini and myself with important contributions and suggestions from Jacques Guy. Its design was part of a larger scheme which included:

'Basic Eva' is the set of lower case characters that was identified, and 'Extended Eva' includes the representation of all types of rare characters. The basic Eva alphabet was chosen such that the transcribed text is almost pronounceable. The point of this excellent idea of Gabriel's was not to be able to 'speak Voynichese', but it makes it very easy for the human brain to recognise and remember transcribed words.

The rare characters can be classified into four categories:

  1. Unusual or rare single characters
  2. Ligatures of only 'Basic Eva' characters
  3. Ligatures including rare single characters
  4. Anything else

These were all identified in the course of the transcription exercise, which was based on two documents: the Petersen hand transcription of the MS already mentioned above (see note 1), and the Yale "copyflo" (9). As it turned out, there were three single characters that might be called unusual because they did not appear in any of the previously defined transcription alphabets, but which occurred more than 10 times in the MS. These three characters were assigned their own 'Basic Eva' letters: b, u and z. The table of Basic Eva is included here:

Individual rare characters, or rare parts of ligatures, were assigned a 'high ASCII' code, and the way to include them into transcription files was defined as follows: &185; for ascii code 185. The following image shows all such rare characters (10).
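Handling these escape sequences in software is straightforward. The following sketch (my own, not part of the original EVMT tools) decodes the '&nnn;' notation and lists the codes present on a line:

```python
import re

# Matches '&nnn;' escapes such as '&185;', used for the 'high ASCII'
# codes assigned to rare characters in EVMT-era transcription files.
HIGH_ASCII = re.compile(r"&(\d+);")

def decode_high_ascii(line):
    """Replace each '&nnn;' escape with the single character chr(nnn)."""
    return HIGH_ASCII.sub(lambda m: chr(int(m.group(1))), line)

def list_rare(line):
    """Return the numeric codes of the rare characters found on a line."""
    return [int(c) for c in HIGH_ASCII.findall(line)]

print(list_rare("okchor&185;y"))  # -> [185]
```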

Ligatures of characters may be represented in two ways. The first is the previously mentioned capitalisation rule introduced already by Jacques Guy. As the table of basic Eva above shows, this method allows an accurate representation of the text using the Eva True Type font. A capitalised character always connects to the next one. In addition, there are the following special cases:

Often, the characters Sh, cTh, cKh, cPh and cFh are simply written as sh, cth, ckh, cph and cfh respectively, with the result that their rendition in the True Type font is not optimal.

The second way to indicate ligatures in transcription files, which is more intuitive for the human reader, is to enclose the connected characters in parentheses. The following table shows examples of this. The script representation in this table uses the capitalisation rule.

EVA Capitalised EVA Using TTF
(ao) Ao Ao
(ee) Ee Ee
(cthh)ey cTHhey cTHhey
(ith) ITh ITh
(oy) Oy Oy
sh 1 Sh  Sh
(yk) Yk Yk

1 Example of standard though not optimal use.
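The capitalisation of parenthesised ligature groups can be mechanised. The sketch below (my own) applies the simple rule "capitalise every character of the group except the last", which matches most rows of the table above; note that benched gallows such as (cthh) → cTHh keep their leading c in lower case, a special case this sketch does not handle:

```python
import re

# A ligature group is a run of lower-case letters in parentheses, e.g. '(ao)'.
LIG = re.compile(r"\(([a-z]+)\)")

def apply_capitalisation(text):
    """Rewrite '(ao)' style ligature groups with the capitalisation rule:
    every character that connects to the next one becomes upper case,
    i.e. all but the last character of the group."""
    def cap(m):
        g = m.group(1)
        return g[:-1].upper() + g[-1]
    return LIG.sub(cap, text)

print(apply_capitalisation("(ao)r"))   # -> Aor
print(apply_capitalisation("(ith)"))   # -> ITh
```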

Further significant efforts

The Eva alphabet, primarily in its basic form, was very well received in the community, and it was used by Takeshi Takahashi of Japan to produce a new transcription, also based on the Yale "copyflo" (see note 9), which was the first that could be considered almost complete. In addition, Gabriel Landini collected the important older transcriptions mentioned above into a single interlinear file, meaning that they were presented together line by line. They were all converted into Eva, since this conversion is consistent and reversible, as mentioned above.

The Brazilian Jorge Stolfi took this interlinear file and put very significant effort into improving it, adding his own transcriptions and partial transcriptions from several others, for example John Grove. As a result, this interlinear file has become the de facto source for transcription data. In the following, it will be referred to as the Landini-Stolfi Interlinear file, with abbreviation "LSI".


A few years later, Glen Claston embarked on a complete transcription of the Voynich MS and devised a transcription alphabet which he called Voynich 101 (here v101 for short). The v101 alphabet was designed to keep stroke combinations that appeared to him to be single signs as single characters. Furthermore, it distinguishes between several variants of characters that were considered to be one and the same in all previous alphabets. He transcribed the entire text of the Voynich MS using this alphabet. Following is his definition of the alphabet, and allocation to ASCII values.

A True Type font has also been designed for the v101 alphabet, and like the Eva font it allows high-quality rendition of the Voynich MS text in electronic documents (Word, Excel). This font, too, can be downloaded at the site map. In the following, this transcription will be indicated by the abbreviation "GC".

Transcription file format

For the analysis of the various Voynich MS text transcriptions, it is highly desirable that these transcriptions are contained in files with a well-defined (standardised) format. Such files should be 'annotated', meaning that they include so-called metadata, which is not the text itself, but information about the text. This metadata should indicate for each particular piece of text on which folio or page it is located, and where it is on the page. Additional relevant information about the pages or the text may be included. Analysis software should be able to either interpret this information or to ignore it.

In the earliest days of the internet, when the transcription by Currier was being extended, a format was agreed for this new file. This format was extended in the course of the transcription by Gabriel Landini and myself. It was further modified by Jorge Stolfi, and this final version is used in the main resource: the Landini-Stolfi Interlinear file.

Unfortunately, the differences between these file formats are quite significant, and the v101 transcription file uses yet another representation. Only a few general conventions can be given here.

Symbol(s) Meaning Comment
# (hash) at start of line The entire line is a comment. Not used in "GC".
{ ... } All text in curly brackets is equally a comment. This may appear anywhere in a line. Not used in "GC".
< ... > This is a locus indicator, and appears at the start of each text segment. It explains where this text is to be found. If it does not include a period (.), it is the start of a new page instead
[ ... ] Used for alternative or uncertain reading. [ao] means it could be an a or an o, but the transcriber is not certain. The options may be separated by a | which is optional in case there are only two, but obligatory when there are more, e.g. [r|s|d]. If possible, the most likely option should be the first one. Mainly used in "Vnow" and "LZ".
. A word space (word separator). Used by all
, An uncertain word space. Used by the majority
- A drawing intrusion in the text. In some transcriptions also used to mark the end of a line
= at end of line. End of a paragraph. Used by all
* A single illegible character. "GC" uses a ?

The structure of the locus types varies so much between files, and even inside the "LSI" file, that no summary can be provided. The "GC" file identifies loci by a line number, or a more descriptive identification in other cases.
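As an illustration, the general conventions tabulated above can be applied with a short script that reduces a transcription line to bare text. This is only a sketch (the sample line and function name are mine, invented for the example), not a complete pre-processing tool:

```python
import re

def clean_line(line):
    """Reduce one old-format transcription line to bare text,
    following the conventions in the table above (a sketch only)."""
    if line.startswith("#"):                  # whole-line comment
        return ""
    line = re.sub(r"\{[^}]*\}", "", line)     # in-line comments
    line = re.sub(r"<[^>]*>", "", line)       # locus indicators
    # Keep the most likely (first) option of an alternative reading:
    # '[r|s|d]' -> 'r', and '[ao]' (no separator, two options) -> 'a'.
    line = re.sub(r"\[([^]]*)\]",
                  lambda m: m.group(1).split("|")[0] if "|" in m.group(1)
                  else m.group(1)[:1], line)
    line = line.replace(",", ".")             # uncertain word spaces
    line = line.rstrip().rstrip("=-")         # paragraph / line-end markers
    return re.sub(r"\s+", "", line)           # '.' marks word breaks, not blanks

print(clean_line("<f1r.1;H> fachys.ykal,ar.ataiin {weirdo}="))
# -> fachys.ykal.ar.ataiin
```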

Jorge Stolfi also introduced the useful feature in the "LSI" file where a single locus may be represented several times, as transcribed by different people. In this case the locus indicator is post-fixed by an "author ID", which is a single upper case character. It is inside the < > brackets and preceded by a semicolon.

To (pre-)process a transcription file for statistical analysis, I developed a simple tool called VTT (Voynich Transcription Tool). This is based on the transcription file format of the "LZ" file, and it allows several useful operations on the text, primarily the removal of the various types of annotations, and the selection of pages depending on their properties.

This 'page selection option' relies on the availability of specific comments in the page headers defining 'page variables'. These are used consistently in the "LSI" and "LZ" files. Without going into all detail, the following line in a transcription file:

<f1r> {$L=A $H=1 $I=H}

sets the three page variables L, H and I to the values A, 1 and H respectively. (These mean respectively: Currier language = A, Currier hand = 1, Illustration type = herbal). The tool VTT can be instructed to include or exclude pages with any combination of variables set to certain values. This useful feature has also been included in the "Voynich Information Browser" of Elias Schwerdtfeger, for which see below.
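A minimal sketch of how such a page header can be parsed and used for page selection (the function names are mine, not VTT's; the parser requires the metadata comment in curly brackets to be present):

```python
import re

# A page header is '<fNN>' followed by a comment holding '$X=value' pairs.
PAGE_HEADER = re.compile(r"^<(f\w+)>\s*\{([^}]*)\}")

def parse_page_header(line):
    """Return (page_name, {variable: value}) for a line like
    '<f1r> {$L=A $H=1 $I=H}', or None if it is not such a header."""
    m = PAGE_HEADER.match(line)
    if not m:
        return None
    variables = dict(re.findall(r"\$(\w)=(\w+)", m.group(2)))
    return m.group(1), variables

def selected(page_vars, wanted):
    """True if the page matches every requested variable setting."""
    return all(page_vars.get(k) == v for k, v in wanted.items())

name, pv = parse_page_header("<f1r> {$L=A $H=1 $I=H}")
print(name, pv)                            # -> f1r {'L': 'A', 'H': '1', 'I': 'H'}
print(selected(pv, {"L": "A", "I": "H"}))  # -> True
```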

The state of affairs in 2017

To summarise the status of available transcription resources by the year 2017, we have significant achievements, but also very significant problem areas.


Several independent transcriptions of the Voynich MS are available. There are three that are almost complete:

A more complete list of transcription efforts is given below.

Problem areas

Transcription of the Voynich MS is particularly difficult, because one cannot read the text, and so has no help from context. The biggest problem is to decide whether two almost-similar-looking characters are two different characters, or two representations of the same one, where the variation is just caused by handwriting. While transcribing, one is continually forced to make such decisions, and it is unavoidable that these are subjective. A similar problem exists in deciding about word spaces. Sometimes, characters are offset just a bit and it is hard to decide whether there is a space or not. Even the introduction of the comma to represent an uncertain space doesn't completely resolve this problem. The only way out of both these problems is to have these decisions made more objectively, i.e. by a piece of software (OCR), but we are far from being able to achieve this.

A second general problem is the lack of standards. As mentioned above, the transcription files all have rather significant differences in format. There is no place where they are all collected together. Of the interlinear file there are also several versions.

It is presently impossible to present the "GC" transcription in any of the standard formats, because many of the symbols that have a special meaning in the "LSI" (and the "LZ") file are representations of Voynich MS text characters in the "GC" transcription. A solution to this problem needs to be found.

It is almost always impossible to repeat analyses done by others since one doesn't know which data they used, and there isn't really any clear way how they could have specified it. The file format(s) used nowadays are far removed from modern representation standards. Already more than a decade ago, Rafał Prinke suggested the use of TEI (Text Encoding Initiative) but it never took off. There are a few more comments about this general topic on the final page of this site, and I have prepared a separate page going into these problems in much more detail.

Next-generation transcription of the text

New initiative

Inventory of the transcribed text

As mentioned above, three nearly complete transcriptions of the text exist, but how complete are they, really? To answer this question, I have compared all three of them, and the digital images of the MS. This showed that there are some minor inconsistencies in counting the different loci, which could be resolved. It also showed that none of the transcriptions was really complete, and some loci were not included in any of them.

The text loci in the MS can be subdivided into four types (11):

Type Meaning Count
P Normal running text in paragraphs 4130
L Labels and dislocated words or characters 1032
C Writing along circles 84
R Writing along the radii of circles 142
Total   5388

An on-going analysis shows that the "LZ" transcription, which benefited strongly from the work of Theodore Petersen, is most complete. It lacks only:

The "LSI" transcription lacks about one third of the text in the Rosettes page. The completeness of the "GC" transcription is more difficult to assess, since most items that are not linear text have been grouped together. The total number of characters in the "GC" transcription, not counting spaces and annotation characters, is 158,959. This is the closest estimate we have of the number of characters in the MS. It is based on the definition of the v101 transcription alphabet (12).

New transcription file format

In order to be able to represent all existing transcriptions into a single file format, using their own original transcription alphabet, a number of issues need to be resolved, primarily related to the character set of the v101 alphabet, which was shown above. The most serious problems exist with the following characters:

Char Clash Solution
( This is a character in v101, so it cannot be used as a ligature marker Use { } for ligature markers, and <! > for in-line comments
& This is a character in v101, so it cannot be used to represent high-ascii codes. Use @ for high-ascii codes.
* This is a character in v101, so it cannot be used to mark an illegible character. Use ? for illegible characters

The following table explains the conventions used for the new format (called IVTFF).

The complete format description may be found here.

# (hash) at start of line The entire line is a comment.
<f ... > With the < character in the first position of the line. This is a locus indicator, and appears at the start of each text segment. It explains where this text is to be found. The format of locus indicators is explained further below.
<! ... > With the < *NOT* in the first position of a line. All text between the delimiters is a comment. This may appear anywhere in a line.
. A word space (word separator).
, An uncertain word space.
[ ... ] Used for alternative or uncertain reading. [ao] means it could be an a or an o, but the transcriber is not certain. The options may be separated by a : which is optional in case there are only two, but obligatory when there are more, e.g. [r:s:d]. If possible, the most likely option should be the first one.
{ ... } A ligature of standard characters. Only used with the Eva and Frogguy alphabets
@nnn; A high-ascii code. nnn must be in the range 128 to 255
<-> A drawing intrusion in the text.
<$> at end of line. End of a paragraph.
/ at end of line This locus is continued on the next line in the file. That line must also have an / in the first position.
? A single illegible character.
??? An uncertain number of illegible characters.
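A sketch of how these IVTFF conventions can be applied to reduce a text line to bare transcription characters. The sample line is invented, and this is a minimal illustration, not a validator for the full format:

```python
import re

def clean_ivtff_line(line):
    """Reduce one IVTFF text line to plain transcription characters,
    following the conventions tabulated above (a sketch only)."""
    if line.startswith("#"):                  # whole-line comment
        return ""
    line = re.sub(r"<![^>]*>", "", line)      # in-line comments
    line = re.sub(r"<[-$]>", "", line)        # drawing intrusions, paragraph ends
    line = re.sub(r"^<[^>]*>", "", line)      # leading locus indicator
    # Keep the most likely (first) option of an alternative reading:
    # '[r:s:d]' -> 'r', and '[ao]' (no separator, two options) -> 'a'.
    line = re.sub(r"\[([^]]*)\]",
                  lambda m: m.group(1).split(":")[0] if ":" in m.group(1)
                  else m.group(1)[:1], line)
    line = re.sub(r"@(\d+);",                 # high-ascii codes
                  lambda m: chr(int(m.group(1))), line)
    line = line.replace("{", "").replace("}", "")  # ligature markers
    line = line.replace(",", ".")             # uncertain word spaces
    return line.strip("/ ")                   # continuation marks, blanks

print(clean_ivtff_line("<f17r.1,@P0;H> fchor.{sh}ey<!gap>da[i:y]in<$>"))
# -> fchor.sheydaiin
```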

There are two types of locus indicators:

<f17r> A locus indicator without a period means the start of a new page, in this case f17r. There will be no Voynich MS text following this, but there may be a comment, with metadata referring to the entire page.
<f17r.N,@Ab> This is the start of a piece of text of locus type 'Ab'. The value of N must increase monotonically, starting at 1, for each page. The meaning of @Ab is explained in the detailed format description. A is one of the four options shown under the heading 'Inventory of the transcribed text', above.
<f17r.N,@Ab;T> Same as above, but identifying the transcriber by the character T.
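The surface syntax of these locus indicators can be parsed with a simple regular expression. The pattern below is an assumption based only on the examples shown here, not on the detailed format description:

```python
import re

# Assumed surface syntax: <page.number,@TypeSubtype[;Transcriber]>
LOCUS = re.compile(
    r"<(?P<page>f\w+)"             # page, e.g. f17r
    r"\.(?P<num>\d+)"              # locus number within the page
    r",@(?P<type>\w\w)"            # locus type code, e.g. P0 or Ab
    r"(?:;(?P<transcriber>\w))?"   # optional single-character transcriber ID
    r">")

def parse_locus(indicator):
    """Return the parts of a locus indicator, or None for a page header
    like '<f17r>' (which has no period and no locus number)."""
    m = LOCUS.match(indicator)
    return m.groupdict() if m else None

print(parse_locus("<f17r.2,@Ab;T>"))
# -> {'page': 'f17r', 'num': '2', 'type': 'Ab', 'transcriber': 'T'}
```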

Location of main transcription resources

Following are links to the original transcription resources that are known to me. They are presented roughly in chronological order.

Code Description Original Copy in IVTFF
FSG A copy of the FSG transcription, created by Friedman's First Study Group, in the format prepared by Jim Reeds and Jacques Guy. >>Link beta version
C-D The original transcription of D'Imperio and Currier >>Link beta version
Vnow The updated version of this file, made during the earliest years of the Voynich MS mailing list. >>Link beta version
TT The original transcription by Takeshi Takahashi. At a >>set of separate pages at his web site. -
LSI The Landini-Stolfi Interlinear file. At a >>page at Stolfi's site with links to this file in various compressed formats. beta version
GC The v101 transcription file. Local copy. -
ZL The "Zandbergen" part of the LZ transcription effort, updated to include all 5388 loci. - beta version

The >>Voynich Information Browser by Elias Schwerdtfeger allows extraction of individual transcriptions from the "LSI" file, with many useful options.


(1) It is contained in items 1615.1 and 1615.2 of the William F. Friedman collection. For more about the Marshall Library, see here.
(2) See Reeds (1995).
(3) See Bennett (1976).
(4) See Brumbaugh (1978).
(5) I am grateful to Jonina Duker for her recollections of these events of 40 years ago.
(6) Of course, we have no way of knowing whether either assumption is correct or not.
(7) On the other hand, it should be clear that not all features of a transcription made directly in Eva can be converted into the older transcription alphabets in a reversible manner.
(8) Such a font has been designed by Gabriel Landini, and is used extensively at this web site. See the site map.
(9) A black-and-white printout of a microfilm made in the 1970's, see also here.
(10) Gabriel Landini added all of them to the True Type font.
(11) See also the page about the writing system.
(12) The question marks that appear in the GC transcription file have been counted as characters. They have been assumed to represent illegible characters.


Copyright René Zandbergen, 2017
Latest update: 17/08/2017