ICMI

ICMI 2020 is going virtual. With the help of our Virtual Conference chairs, the ICMI organisers are preparing the online, interactive program.

The workshop "Speech, Voice, Text, and Meaning", on Oral History and Technology, will take place on 29 October.

Proposal (PDF)

Short paper (PDF)

Aim of the Workshop

When considering research processes that involve interview data, we observe a variety of scholarly approaches that are typically not shared across disciplines. Scholars hold on to ingrained research practices drawn from specific research paradigms and seldom venture outside their comfort zone. This inability to ‘reach across’ methods and tools arises from tight disciplinary boundaries, where terminology and literature may not overlap, or from the different priorities placed on digital skills in research. We believe that offering accessible, customized information on how to appreciate and use technology can help to bridge these gaps.

This workshop aims to break down some of these barriers by offering scholars who work with interview data the opportunity to apply, experiment with, and exchange tools and methods developed in the realm of Digital Humanities.

Previous work

As a multidisciplinary group of European scholars and tool and data professionals, spanning the fields of speech technology, social sciences, human-computer interaction, oral history and linguistics, we are interested in strengthening the position of interview data in Digital Humanities. Since 2016 we have organized a series of workshops on this topic, supported by CLARIN (see elsewhere on this website).

Our first concrete output was the T-Chain, a tool that supports transcription and alignment of audio and text in multiple languages. Second, we developed a format for experimenting with a variety of annotation, text-analysis and emotion-recognition tools as they apply to interview data.

The workshop

This half-day workshop offers a cross-disciplinary knowledge-exchange session.
It will:

  1. Show how to convert your AV material into a suitable format and run automatic speech recognition (ASR) via the OH portal
  2. Demonstrate how to correct the ASR results and annotate the resulting text
  3. Demonstrate text analysis with Voyant and how to produce clear graphics
  4. Demonstrate emotion extraction with openSMILE
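To give a flavour of step 3, the sketch below shows the kind of word-frequency analysis a tool like Voyant performs on a corrected transcript. This is a minimal illustration only, not part of the workshop materials: the sample transcript and stop-word list are invented for the example.

```python
# Illustrative word-frequency count over an interview transcript,
# analogous to the term-frequency view in a tool such as Voyant.
# The transcript and stop words below are made up for this sketch.
import re
from collections import Counter

def word_frequencies(text, stop_words=frozenset()):
    """Tokenise a transcript into lowercase words and count the content words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stop_words)

if __name__ == "__main__":
    transcript = (
        "We moved to the city in 1962. The city was very different then, "
        "and we moved again two years later."
    )
    stop = {"the", "to", "in", "was", "and", "we", "very"}
    for word, count in word_frequencies(transcript, stop).most_common(3):
        print(word, count)
```

In practice a tool like Voyant adds much more (trends over document segments, concordances, visualisations), but the core idea of counting content words after filtering stop words is the same.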