Distant reading, driven by the development of digital technology in the humanities, has emerged as one of the most prolific approaches to literary texts. Maps, graphs and trees, in Moretti’s (2005) words, allow us to reread famous works in a new way, or to examine large numbers of texts that have long been forgotten. Often, however, approaches to distant reading disregard the acquisition of the data to be observed: Where do the data come from? How are they created?

Our training school proposes a return to the crucial stage of data acquisition, focusing on the details of the production chain of literary data. During the two-day course, we will start with OCR (optical character recognition), which makes it possible to transform an image into machine-readable text, and address the difficulties introduced by variation in graphic systems and the materiality of old artifacts. The second – decisive – step is encoding in XML-TEI, which transforms the text into a usable database and allows more information (e.g., author, gender, period) to be added to the text for ensuing analysis. The third and final step is analysis with R, which allows hypotheses to be tested and patterns to be explored by analysing and visualising the data.
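To give a flavour of the second step, here is a minimal sketch (in Python, using the standard-library xml.etree module) of how an OCRed text and some basic metadata might be wrapped in a simplified TEI structure. The title, author and date are purely illustrative, and a real TEI header would be considerably richer (e.g. publicationStmt and sourceDesc, which this sketch omits for brevity):

```python
import xml.etree.ElementTree as ET

# TEI namespace; registering it keeps the serialized output free of prefixes.
TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)

def tag(name):
    """Qualify a tag name with the TEI namespace."""
    return f"{{{TEI_NS}}}{name}"

def make_tei(title, author, date, body_text):
    """Wrap a plain text and basic metadata in a simplified TEI skeleton."""
    tei = ET.Element(tag("TEI"))
    header = ET.SubElement(tei, tag("teiHeader"))
    file_desc = ET.SubElement(header, tag("fileDesc"))
    title_stmt = ET.SubElement(file_desc, tag("titleStmt"))
    ET.SubElement(title_stmt, tag("title")).text = title
    ET.SubElement(title_stmt, tag("author")).text = author
    profile_desc = ET.SubElement(header, tag("profileDesc"))
    creation = ET.SubElement(profile_desc, tag("creation"))
    ET.SubElement(creation, tag("date")).text = date
    text = ET.SubElement(tei, tag("text"))
    body = ET.SubElement(text, tag("body"))
    ET.SubElement(body, tag("p")).text = body_text
    return tei

# Illustrative values only, not taken from the actual corpus.
doc = make_tei("Nouvelles genevoises", "Rodolphe Töpffer", "1841",
               "Text recognised by OCR would go here.")
xml_string = ET.tostring(doc, encoding="unicode")
```

Once texts are encoded this way, the metadata in the header can be extracted programmatically, which is what makes the third step – filtering, comparing and visualising the corpus in R – possible.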

Basel Public and University Library, venue for the Training School

With a strong emphasis on practical experience, this training school is geared towards building the framework for a first multilingual Swiss literary corpus (French, Italian and German). The tasks involved in its construction during the training school will provide an opportunity to discuss pertinent issues.

This course is part of a collective effort carried out within the European COST Action “Distant Reading for European Literary History”, for which the organizers are the Swiss representatives: https://www.distant-reading.net/

The working language of the training school is English; knowledge of at least one of the three languages of the literary data (French, Italian, German) is also required.

All information, including the full training school programme, can be found in French, German, and Italian here and here.

Registration process: the target group of the training school is doctoral students affiliated with the universities of Basel, Bern, Fribourg, Geneva, Neuchâtel and Lausanne, as well as EPFL. Postdoctoral researchers can apply via a short email, pending the registration of doctoral students, who have priority.

Please register by sending an email to alexandre.camus@unil.ch. Participation is free of charge for doctoral students. All travel and accommodation expenses are covered by the doctoral program.

Practical information:

  • Course title: “Distant Reading – Tools and Methods”
  • Instructors: Simon Gabay, Berenike Herrmann, Simone Rebora, Elias Kreyenbühl
  • Date: 12 and 13 December 2019
  • Location: Basel Public and University Library (UB)
  • Schedule: 9 am–5.30 pm

The First Workshop on Distant Reading in Portuguese will take place on 27-28 October 2019 at the University of Oslo, and will feature a presentation on our COST Action on Distant Reading for European Literary History by Isabel Araújo Branco, Diana Santos, Paulo Silva Pereira and Raquel Amaro.

Leitura distante em Português (Distant Reading in Portuguese)
Image created by Oriel Wandrass, Universidade Estadual do Maranhão (UEMA)

At the conference, which features additional presentations by members of our Action, participants will illustrate the state of the art, discuss research questions for the medium and long term, and take a position on several Portuguese-related matters within the sphere of Distant Reading.

Further details, including the programme of the workshop and abstracts, are available here: Portuguese | English (via Google Translate).

This guest post, written by Monika Barget, project manager at the Centre for Digital Humanities, Maynooth University, Ireland, reports on one of the two recent Distant Reading Training Schools that were held at the inaugural European Association for Digital Humanities Conference in Galway, Ireland.

Two of my colleagues from Maynooth University and I attended the COST Action Training Schools at the EADH 2018 conference in Galway. While one of us signed up for the theory sessions, discussing different approaches to ‘style’ in the digital humanities, an early career researcher in Early Irish and I (currently working as project manager in Digital Humanities) attended the sessions on the methods and tools of ‘Distant Reading’. Our workshop group was very international and brought together people from various disciplines. Most of us had only recently started to use digital tools extensively for data/text analysis and had limited programming experience, which made the step-by-step introductions to different technology-supported methods of topic modelling, stylometry, and data visualisation a perfect fit. We were introduced to downloadable software with elaborate graphical user interfaces (e.g. TXM and Gephi), as well as to portable software still in development (Dariah Topics Explorer) and a stylometry tool based on R libraries, which required working on the command line.

Participants at the Methods & Tools Training School, Galway (5-7 December 2018)

The corpora chosen by the workshop facilitators were mainly selections of British fiction, the North American Brown Corpus and some smaller fiction corpora in other European languages (French, Italian, Hungarian, Slovene). For me, as a historian specialising in the visual cultures and politics of the early modern period, these were uncommon sources, but at the end of most workshop sessions I had some time to apply each method and tool to my own corpora (e.g. a collection of political letters from Ireland). The workshop facilitators made sure that all participants were able to keep pace and competently answered our questions. However, as not all of the methods and tools presented to us will be equally relevant to our future research, additional ‘experimental time’ to work with just one of the methods/tools in smaller groups on the last workshop day would have been even more beneficial. There was a lot to take in, and a longer supervised ‘lab’ session focusing on a chosen method and my own material would also have helped me to process and practise what I had learned. In this way, the instructors, too, could have received more in-depth feedback, especially in those cases where their tools were still being updated and improved.

Nonetheless, the overall timing of the workshop suited me very well as we had the opportunity to connect with other participants during coffee breaks, lunch, and in the evenings. It was interesting to hear how other scholars at a similar career level were going to use topic modelling, stylometry, or network analysis in their projects, and I learned a lot about the institutional frameworks and digital cultures in other universities. Finally, the vivid keynote lecture delivered by Prof. Christof Schöch was a convenient occasion to sit back and reflect on some of the overarching challenges behind digital literary analysis. I am very grateful for the opportunity to attend the COST Action Training School and will recommend it to my peers.

Following our recent Action meetings in Antwerp, WG2 member and Chief Content Architect at Wolters Kluwer Germany, Christian Dirschl, offered the following thoughts on our project from his perspective as an Information Scientist working in an industrial setting.

At the beginning of October, I participated in the meeting of Working Groups 2 and 3 in Antwerp. I am an Information Scientist who usually works on legal information rather than literary texts, so I considered myself an outsider to this group. Still, I joined WG2 and was very curious about how the digital humanities is dealing with the specific challenges it faces.

I felt very welcome! That was thanks both to the people at the meeting and to the discussions going on, which sounded quite familiar to me.

There were discussions about the balancing act of enriching documents by human experts versus automatically by machines. Another angle was about offering basic technological infrastructure or aiming at sophisticated and complex algorithms, which might not reach the maturity level that would be required in an operational environment. And then, there were open questions: whether to head for a single technology that serves all languages, or whether dedicated mono-lingual tools would be superior in the end—with the drawback that the results would hardly be comparable across the whole corpus.

Members of our Action bask in the Antwerp sunshine after three days of meetings last week.

My own experience with these technologies is very similar and obviously, there is no right or wrong answer. A complex challenge requires a complex solution—or a magic wand!

Although Deep Learning sometimes appears to be this wand, it was clear from the start that its application area in this Action is important, but limited. So other solution streams also need to be investigated. I am very much looking forward to seeing what the final decision will be.

The Action has an interesting and ambitious goal and there were enough dedicated experts around the table to make sure that quite a lot will be achieved within the limited available resources.

What I have learned in the last five years or so is that technical progress needs to be aligned with customer needs, or rather, in this case, researchers’ requirements. And I have the impression that academia in general is still very much on an exploratory path. Most of the time, this leads to more knowledge, but less applicability. So my advice is to check regularly whether the intermediate results show progress on current (!) research requirements, not only in general, and then to adapt to this feedback so that an optimal practical solution is finally achieved. This may sound odd to some researchers, but in my experience it is the most efficient way forward.

I really enjoyed the two days in Antwerp and I am looking forward to further collaboration in the future. All the best to the Action and its participants!

Over the last few months, the Distant Reading COST Action has been present at several Digital Humanities conferences with a poster providing some basic information about our Action.

  • In May, the poster was presented at the DH Budapest conference by Jessie Labov. The conference was organized by Gabor Pálko and his colleagues at the Centre for Digital Humanities of Eötvös Loránd University.
  • In early June, Mike Kestemont presented our poster at the DH Benelux conference 2018 in Amsterdam.
  • And in late June, Maciej Eder, Christof Schöch and Mike Kestemont presented the poster at the Digital Humanities Conference 2018 in Mexico City (DH2018).

The poster presentations were a good occasion to spread the word about our COST Action. The conversations show that many people really like the idea of the multilingual ELTeC! They also show that many people are still unaware of what a COST Action is and are really impressed when they hear that we are already a network of researchers from 30 countries across Europe and beyond.

The next poster presentation of our Action will be at the conference on Language Technologies and Digital Humanities on 20 and 21 September 2018 in Ljubljana (LTDH 2018).

For more information, please have a look at the poster, which is available from Zenodo. An abstract with slightly more text is also available from the DH2018 abstracts page.