This guest post, written by Monika Barget, project manager at the Centre for Digital Humanities, Maynooth University, Ireland, reports on one of the two recent Distant Reading Training Schools that were held at the inaugural European Association for Digital Humanities Conference in Galway, Ireland.

Two of my colleagues from Maynooth University and I attended the COST Action Training Schools at the EADH 2018 conference in Galway. While one of us signed up for the theory sessions, which discussed different approaches to ‘style’ in the digital humanities, an early career researcher in Early Irish and I (currently working as project manager in Digital Humanities) attended the sessions on methods and tools of ‘Distant Reading’. Our workshop group was very international and brought together people from various disciplines. Most of us had only recently started to use digital tools extensively for data and text analysis and had limited programming experience. That was why the step-by-step introductions to different technology-supported methods of topic modelling, stylometry, and data visualisation were a perfect fit. We were introduced to downloadable software with elaborate graphical user interfaces (e.g. TXM and Gephi), to portable software still in development (the DARIAH Topics Explorer), and to an R-based stylometry tool that required working at the command line.
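For readers who have not tried topic modelling before, the short sketch below illustrates the general workflow such step-by-step introductions cover: turn a small corpus into a bag-of-words matrix, fit a topic model, and inspect the most heavily weighted words per topic. It is only a minimal illustration in Python using scikit-learn’s LatentDirichletAllocation, not one of the tools we actually used at the Training School, and the tiny example ‘corpus’ is invented.

```python
# Minimal topic-modelling sketch (illustrative only; not the workshop's tools).
# Assumes scikit-learn is installed: pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A tiny invented corpus standing in for, e.g., a collection of political letters.
documents = [
    "the parliament debated the petition from the county freeholders",
    "the harvest failed and grain prices rose across the province",
    "the petition demanded reform of parliament and wider suffrage",
    "merchants complained about grain exports and rising prices",
]

# Turn the documents into a bag-of-words matrix, dropping English stop words.
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(documents)

# Fit a small LDA model; real corpora need far more documents and topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the most heavily weighted words for each topic.
vocabulary = vectorizer.get_feature_names_out()
for topic_idx, topic_weights in enumerate(lda.components_):
    top_words = [vocabulary[i] for i in topic_weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_words)}")
```

On a corpus of only four sentences the ‘topics’ are of course meaningless; the point is merely to show how few steps stand between a plain-text collection and a first, exploratory topic model.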

Participants at the Methods & Tools Training School, Galway (5-7 December 2018)

The corpora chosen by the workshop facilitators were mainly selections of British fiction, the North American Brown Corpus, and some smaller fiction corpora in other European languages (French, Italian, Hungarian, Slovene). For me, as a historian specialising in the visual cultures and politics of the early modern period, these were uncommon sources, but at the end of most workshop sessions I had some time to apply each method and tool to my own corpora (e.g. a collection of political letters from Ireland). The workshop facilitators made sure that all participants were able to keep pace and competently answered our questions. However, as not all of the methods and tools presented will be equally relevant to our future research, additional ‘experimental time’ on the last workshop day, spent working with just one method or tool in smaller groups, would have been even more beneficial. There was a lot to take in, and a longer supervised ‘lab’ session focusing on a chosen method and my own material would also have helped me to process and practise what I had learned. In this way, the instructors, too, could have received more in-depth feedback, especially in those cases where their tools were still being updated and improved.

Nonetheless, the overall timing of the workshop suited me very well, as we had the opportunity to connect with other participants during coffee breaks, over lunch, and in the evenings. It was interesting to hear how other scholars at a similar career stage were going to use topic modelling, stylometry, or network analysis in their projects, and I learned a lot about the institutional frameworks and digital cultures of other universities. Finally, the lively keynote lecture delivered by Prof. Christof Schöch was a welcome occasion to sit back and reflect on some of the overarching challenges behind digital literary analysis. I am very grateful for the opportunity to attend the COST Action Training School and will recommend it to my peers.

Following our recent Action meetings in Antwerp, WG2 member and Chief Content Architect at Wolters Kluwer Germany, Christian Dirschl, offered the following thoughts on our project from his perspective as an Information Scientist working in an industrial setting.

At the beginning of October, I participated in the meeting of Working Groups 2 and 3 in Antwerp. I am an Information Scientist who usually works on legal information rather than literary texts, so I considered myself an outsider to this group. Still, I joined WG2 and was very curious about how the digital humanities is dealing with the specific challenges it faces.

I felt very welcome! That was thanks both to the people at the meeting and to the discussions that were going on, which sounded quite familiar to me.

There were discussions about the balancing act between having documents enriched by human experts and enriching them automatically by machine. Another debate concerned whether to offer basic technological infrastructure or to aim at sophisticated, complex algorithms that might not reach the maturity level required in an operational environment. And then there were open questions: should we head for a single technology that serves all languages, or would dedicated monolingual tools be superior in the end, with the drawback that the results would hardly be comparable across the whole corpus?

Members of our Action bask in the Antwerp sunshine after three days of meetings last week.

My own experience with these technologies is very similar, and obviously there is no right or wrong answer. A complex challenge requires a complex solution, or a magic wand!

Although Deep Learning sometimes appears to be this wand, it was clear from the start that its application area in this Action is important but limited. So other solution streams also need to be investigated. I am very much looking forward to seeing what the final decision will be.

The Action has an interesting and ambitious goal and there were enough dedicated experts around the table to make sure that quite a lot will be achieved within the limited available resources.

What I have learned in the last five years or so is that technical progress needs to be aligned with customer needs, or rather, in this case, researchers’ requirements. And I have the impression that academia in general is still very much on an exploratory path. Most of the time, this will lead to more knowledge, but less applicability. So my advice is to check regularly whether the intermediate results show progress against current (!) research requirements, and not only in general, and then to adapt to this feedback so that an optimal practical solution is finally achieved. This may sound odd to some researchers, but in my experience it is the most efficient way forward.

I really enjoyed the two days in Antwerp and I am looking forward to further collaboration in the future. All the best to the Action and its participants!

Over the last few months, the Distant Reading COST Action has been present at several Digital Humanities conferences with a poster providing some basic information about our Action.

  • In May, the poster was presented at the DH Budapest conference by Jessie Labov. The conference was organized by Gábor Palkó and his colleagues at the Centre for Digital Humanities of the Eötvös Loránd University.
  • In early June, Mike Kestemont presented our poster at the DH Benelux conference 2018 in Amsterdam.
  • And in late June, Maciej Eder, Christof Schöch and Mike Kestemont presented the poster at the Digital Humanities Conference 2018 in Mexico City (DH2018).

The poster presentations were a good occasion to spread the word about our COST Action. Our conversations showed that many people really like the idea of the multilingual ELTeC! They also made clear that many people are still unaware of what a COST Action is and are really impressed when they hear that we are already a network of researchers from 30 countries across Europe and beyond.

The next poster presentation of our Action will be at the conference on Language Technologies and Digital Humanities on 20 and 21 September 2018 in Ljubljana (LTDH 2018).

For more information, please have a look at the poster, which is available from Zenodo. An abstract with slightly more text is also available from the DH2018 abstracts page.