Web Scraping and Digital Archives: A Program for the Retrieval of the Transcripts of the International Criminal Tribunal for Former Yugoslavia
By Katarina Ristić and Nikola Ristić
Digital archives have created a number of opportunities for researchers, from accessing files and collections irrespective of an archive's location to searching and obtaining large amounts of material in a short time. At the same time, digital collections can be overwhelming, amounting to hundreds or even thousands of files of potential interest for a given research question. In this blog article we present a method for the automated collection of web data – web scraping – applied to the specific problem of collecting transcripts from the website of the International Criminal Tribunal for the former Yugoslavia (ICTY). An open-source program that can be used in further research is available on GitHub.
The digitization of material held by libraries, museums, and archives worldwide is creating an ever-growing collection of digital sources available for research and teaching. In addition, parliaments, courts, media, and international organizations offer valuable collections of digitized or born-digital material on their websites.[1] The same is true for higher education: Yale University and Harvard University, to take two forerunners as examples, are creating important digital archives, such as Yale's Avalon Project and Harvard Law School Library's ongoing digitization of the Nuremberg Trials. In Europe, larger networks such as Archives Portal Europe and Europeana collaborate on preserving European cultural heritage, offering millions of documents, images, and videos, along with thematic exhibitions and access to primary sources.
This impressive availability of digital material and its potential contribution to the democratization of knowledge, whether in education or in research, should not be mistaken for digital equality and global accessibility of such archives and documents. The creation of postcolonial archives and their digitization faces additional epistemological, structural, and financial challenges, as is evident in the Aluka project in southern Africa.[2]
Scholarly literature on existing digital collections and their usage in the field of digital history is at once enthusiastic about their potential and critical, sometimes even suspicious, of their benefits. One of the pioneers of digital history, Roy Rosenzweig, laments that historians are facing a “fundamental paradigm shift from a culture of scarcity to a culture of abundance”.[3] Digital documents are easily created, distributed, and stored, but also potentially falsifiable, making their usage an additional challenge. At the same time, the preservation of digital records, especially of born-digital material, still requires legal regulation. And while Rosenzweig warns about the sheer amount of material, Terry Kuny observes the opposite process, in which digital material disappears without a trace, spurring fear of a “digital dark ages”.[4]
Those focusing on the positive side of digital archives stress the faster access to material, the diversity of multimedia documents, and the interactivity of use. For example, big data allows for the creation of new connections among material that might be inaccessible to human reading and therefore could not have been identified without these technical facilities.[5] At the same time, the accessibility of the records might conceal the selection process behind the establishment of the digital archive, while search results might be determined by algorithms operating on parameters unknown to the researcher.[6] The necessity of critically addressing the sources, the logic of the archive, and the digitization process does not differ much from the critical examination of traditional archives.[7]
Scholars also reflect on their work with concrete collections: James Mussell has written about the nineteenth-century press in the digital age, and Hugo Bonin has worked with digitized British parliamentary debates from the late eighteenth and nineteenth centuries.[8] Court records have long been a classical source for historians, and over the last three decades a number of international criminal tribunals have been established, providing researchers with rich collections of documents, audio-visual material, and evidence.[9] In the following section we look at one such tribunal, the International Criminal Tribunal for the former Yugoslavia, and its vast archival collection, which has been used in research as a source of material – judgments, victims’ testimonies, historians’ expert reports, collected evidence – and even studied as an institution that itself writes history.
ICTY Archives
The United Nations Security Council (UNSC) established the Tribunal in 1993, in the middle of the wars in Bosnia and Herzegovina and Croatia. Over the following 24 years, the ICTY conducted hundreds of war crimes trials, until it was closed in 2017 and succeeded by the International Residual Mechanism for Criminal Tribunals. The trial documentation includes a massive amount of both digitized and born-digital material – the trial transcripts alone amount to 2.5 million pages. According to Bridget Sisk, the chief of the UN Archives and Records Management Section, each of these archives contains more than 1 ZB of information.[10] The existence of these archives is by no means accidental – they are the outcome of policies and archival strategies, formulated by the UNSC and the Court administration, that see archives as repositories of knowledge production and future reconciliation.[11] As Mirko Klarin notes, the trials did not reach local communities as sources of truth-telling and dealing with the past, as they were ignored by politicians, journalists, and communities in the region. Scholars therefore increasingly see the ICTY archive, which might yet be discovered by future generations, as the primary legacy of the Tribunal.[12]
The ICTY Archive is divided into two parts – the public archive (JDB), which contains court records and material presented in trials, and the prosecution records (EDB), which contain material from closed sessions, undisclosed governmental documents, and other documents that were not used in the trials.[13] The public archive, officially the ICTY Court Records Database, can be accessed online via http://icr.icty.org/. It contains different kinds of documents, from video recordings of trial proceedings and transcripts to witness and victim statements, expert reports, medical documents, media reports from the wars, artifacts from crime scenes, military diaries, maps, and legal documents such as indictments and judgements.
Some parts of this database – such as trial transcripts, indictments, and judgements – are available on the ICTY website. Each trial case contains several hundred documents with hundreds of thousands of pages, and their retrieval into an operable collection for analysis can be a time-consuming process. For example, the first case before the ICTY, the trial of Dušan Tadić, contains 123 transcripts, one for each day of the trial. In addition, there are indictments, judgments, evidence, and various exhibits in the court records. Even without the surrounding administrative and outreach documentation, which includes chamber decisions, orders, and press releases, there are hundreds of files and thousands of pages that need to be downloaded. Downloading each transcript from a trial, day by day, is a repetitive and time-consuming process that can be automated. For this purpose, we created a Python script which uses web scraping to automatically download all transcripts from one case and merge them into a single HTML document, which can then be imported into any program for further analysis. This allows researchers to create their own corpus of documents (for example, by selecting the relevant cases) and import all documents into software for analysis (e.g., MAXQDA, Atlas.ti, or NVivo).
Web Scraping
Web scraping is a method by which a program automatically extracts data from web pages.[14] For example, search engines use web scraping to find relevant web pages for search queries. In the context of digital humanities, web scraping can be used to retrieve digital information for further research. This can be useful for various reasons: since content on websites is not permanent, scraping can be used to preserve the state of a website at a given point in time. It can also be used to automate the download of the required information. One could go even further and use web scraping to find information in the first place: by following links and searching for keywords, researchers could build their own “search engines”, made for specific purposes, to avoid the algorithms of general search engines.
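To illustrate the basic idea, here is a minimal Python sketch that fetches a page and collects the hyperlinks whose link text contains a keyword. It uses the widely available requests and BeautifulSoup libraries; the URL in the usage example is a placeholder, not a real ICTY address.

```python
# Minimal web scraping sketch: fetch a page and collect links whose
# visible text contains a keyword. The example URL is a placeholder.
import requests
from bs4 import BeautifulSoup


def find_links(url: str, keyword: str) -> list[str]:
    """Return the href values of links whose text contains the keyword."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        a["href"]
        for a in soup.find_all("a", href=True)
        if keyword.lower() in a.get_text().lower()
    ]


if __name__ == "__main__":
    # Placeholder URL for illustration only.
    print(find_links("https://example.com", "transcript"))
```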
Web scraping is not the only method to extract web content automatically. Many larger websites, such as most social media platforms, provide application programming interfaces (APIs): protocols and tools that enable programs to safely access information from those sites. A big advantage of APIs is that the provider designs them specifically with developers in mind, for example by offering a built-in limit on the number of requests a program can make within a certain time frame. A provider can also specify which content programs may access and which they may not. In the future, digital archives may provide their own APIs if digital methods become more prevalent in the field.
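To show how such an interface differs from scraping, the following sketch queries an imaginary archive API. The endpoint, parameters, and API key are invented purely for illustration – no such API exists for the ICTY – but the pattern (structured queries, authentication, machine-readable responses) is typical of real archive APIs.

```python
# Hypothetical illustration of querying an archive API.
# The endpoint and parameters below are invented for this example.
import requests

API_URL = "https://archive.example.org/api/v1/documents"  # hypothetical endpoint


def search_documents(query: str, api_key: str) -> dict:
    """Ask the (hypothetical) API for documents matching a query."""
    params = {"q": query, "type": "transcript"}
    headers = {"Authorization": f"Bearer {api_key}"}
    response = requests.get(API_URL, params=params, headers=headers, timeout=30)
    # A provider would typically reject requests that exceed its rate limit.
    response.raise_for_status()
    # The API returns structured data (JSON) instead of raw HTML.
    return response.json()
```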
As the ICTY website does not offer its own API, we have to use web scraping. Graham et al. list several useful tools, like OutWit Hub and Import.io, that offer web scraping services, but these are not free.[15]
We created a program specifically for web scraping of the ICTY website. The code, together with detailed instructions on how to install it, is available in this GitHub repository: https://github.com/nikolarist/transcripts. The program is written in Python and runs on a Windows computer with Chrome installed. Depending on the size of the trial, it might take several minutes to download all transcripts from the website. It generates a new file which contains all transcripts of one trial, ordered by date.
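As a rough illustration of what such a script does – this is a simplified sketch, not the program itself, which requires Chrome and is documented in the repository above – the snippet below downloads a hand-written list of transcript pages and concatenates them into a single HTML file. The URLs and the output file name are placeholders.

```python
# Simplified sketch: download a list of transcript pages and merge them
# into one HTML file for import into analysis software.
# The URLs below are placeholders, not real ICTY addresses.
import time
import requests

TRANSCRIPT_URLS = [
    "https://www.example.org/cases/example/trans/en/day01.htm",  # placeholder
    "https://www.example.org/cases/example/trans/en/day02.htm",  # placeholder
]

CRAWL_DELAY = 10  # seconds between requests, as requested by the site's robots.txt


def download_and_merge(urls: list[str], out_path: str = "case_transcripts.html") -> None:
    """Fetch each transcript page and write them into one HTML document."""
    pages = []
    for url in urls:
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        pages.append(response.text)
        time.sleep(CRAWL_DELAY)  # be polite: at most one request every 10 seconds
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("<html><body>\n" + "\n<hr>\n".join(pages) + "\n</body></html>")


if __name__ == "__main__":
    download_and_merge(TRANSCRIPT_URLS)
```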
There are a few things to consider when using web scraping methods. Programmatic requests to websites are made much faster than manual requests by humans. Sending a large number of requests to a single website can overwhelm it and, in the worst case, crash it. Developers must therefore be careful with the frequency of requests to a single website. Many websites provide a robots.txt file stored at the base URL of the site (e.g., http://example.com/robots.txt). This file specifies information relevant to programs that automatically access the website. Before interacting programmatically with a website, you should look at the robots.txt file (if it exists) and make sure to follow the rules set by the people running the site. In the case of the ICTY website, there was a crawl-delay of 10 seconds, meaning that programs should not send more than one request every 10 seconds. Our program respects this rule, which leads to somewhat longer waiting times.
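The following sketch shows one way to read a site's robots.txt with Python's standard library and to respect a crawl-delay before fetching pages. The page URL is a placeholder, and the fallback delay of one second is our own assumption for sites that do not declare one.

```python
# Sketch: consult robots.txt before scraping and respect its crawl-delay.
# The page URL below is a placeholder.
import time
import urllib.robotparser

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://www.icty.org/robots.txt")
robots.read()

USER_AGENT = "*"
page = "https://www.example.org/cases/example/trans/en/day01.htm"  # placeholder

if robots.can_fetch(USER_AGENT, page):
    # Use the declared crawl-delay; fall back to 1 second if none is given (our assumption).
    delay = robots.crawl_delay(USER_AGENT) or 1
    # ... fetch and process the page here ...
    time.sleep(delay)  # wait before sending the next request
else:
    print("robots.txt disallows fetching this page")
```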
Another concern with web scraping is the downloading of copyrighted material. Developers must make sure that they have permission to use the data they gather with web scrapers.
In this example, we used web scraping to drastically reduce the time needed to collect material from the ICTY website. More generally, we want to point out that noticing research steps that could be automated can open the door to computational methods and ultimately simplify the research process.
References
[1] For example, the US National Archives include the Presidential Libraries, with public speeches of presidents, as well as records of the CIA. Accessed 16.11.2021.
[2] Isaacman, Allen, Premesh Lalu, and Thomas Nygren. “Digitization, History, and the Making of a Postcolonial Archive of Southern African Liberation Struggles: The Aluka Project”. Africa Today 52, no. 2 (2005): 55–77.
[3] Rosenzweig, Roy. Clio Wired: The Future of the Past in the Digital Age. New York: Columbia University Press, 2011.
[4] Kuny, Terry. “A Digital Dark Ages? Challenges in the Preservation of Electronic Information”. International Preservation News 17 (1997).
[5] Graham, Shawn, Ian Milligan, and Scott Weingart. Exploring Big Historical Data: The Historian’s Macroscope. London: Imperial College Press, 2016.
[6] Škodrić, Ljubinka, and Vladimir Petrović. “Digitalna istorija: geneza, oblici i perspektive”. Istorija 20. veka 38, no. 2/2020 (1 August 2020): 9–38.
[7] King, Michelle T. “Working with/in Archives”. In: Simon Gunn and Lucy Faire (eds.), Research Methods for History. Edinburgh: Edinburgh University Press, 2016.
[8] Mussell, James. The Nineteenth-Century Press in the Digital Age. London: Palgrave Macmillan UK, 2012. Bonin, Hugo. “From Antagonist to Protagonist: ‘Democracy’ and ‘People’ in British Parliamentary Debates, 1775–1885”. Digital Scholarship in the Humanities 35, no. 4 (1 December 2020): 759–75.
[9] Very similar digital archives are available for other international and hybrid criminal tribunals, e.g., for the International Criminal Tribunal for Rwanda (ICTR), the International Criminal Court (ICC), or hybrid courts like Extraordinary Chambers in the Courts of Cambodia (ECCC). Accessed 16.11.2021.
[10] Sisk, Bridget. “The Role of the UN Archives in the Long-Term Legacy of the ICTY”. In: Richard H. Steinberg (ed.), Assessing the Legacy of the ICTY. Leiden; Boston: Martinus Nijhoff Publishers, 2011.
[11] Emmerson, Elizabeth. “How Best to Preserve the Records of the ICTY”. In: Richard H. Steinberg, (ed.), Assessing the Legacy of the ICTY. Leiden; Boston: Martinus Nijhoff Publishers, 2011.
[12] For archival legacy of the ICTY see: Steinberg, Richard H. (ed.). Assessing the Legacy of the ICTY. Leiden; Boston: Martinus Nijhoff Publishers, 2011. Kaye, David. “Archiving Justice: Conceptualizing the Archives of the United Nations International Criminal Tribunal for the Former Yugoslavia”. Archival Science 14, no. 3–4 (October 2014): 381–96.
[13] Vukušić, Iva. “The Archives of the International Criminal Tribunal for the Former Yugoslavia”. History 98, no. 332 (2013): 623–35.
[14] Mitchell, Ryan. Web Scraping with Python: Collecting Data from the Modern Web. 1st. ed. Beijing: O’Reilly, 2015.
[15] Graham, Shawn, Ian Milligan, and Scott Weingart. Exploring Big Historical Data: The Historian’s Macroscope. London: Imperial College Press, 2016.
Katarina Ristić is a researcher at the Global and European Studies Institute, Leipzig University, where she has been head of the digitalization teaching unit since April 2020. She was a research associate at the Helmut-Schmidt University in Hamburg and at Magdeburg University. She obtained a PhD from Leipzig University, and her first monograph Imaginary Trials: War Crime Trials and Memory in Former Yugoslavia deals with the silencing of victims’ narratives and contested mediatization of trials in Croatia, Bosnia, and Serbia. Her recent publications include “Online Transnational Memory Activism”, with Orli Fridman (2020), published in Agency in Transnational Memory Politics, edited by Jenny Wuestenberg and Aline Sierp with Berghahn Books, and the forthcoming special issue of the journal Comparative Southeast European Studies on the NATO Kosovo Intervention, co-edited with Elisa Satjukow.
Nikola Ristić studies Biochemistry at the Ruprecht Karl University of Heidelberg. He works as a student assistant for Leipzig University at the Institute of Medical Physics and Biophysics on the development of bioinformatical web servers. His work on Voronoia, a web server for the analysis and visualisation of biomolecules, conducted in collaboration with René Staritzbichler, Andrean Goede, Robert Preissner, and Peter W. Hildebrand, was published in the paper “Voronoia 4-ever” in 2021 by Oxford University Press.
Further articles in the Digital Humanities Interface series on TRAFO:
Elton Barker, Semantic Geo-Annotation for Ancient History and Beyond, 22 June 2022
Diana Roig-Sanz, Towards a Truly Global Digital Humanities, 20 April 2022
Susan Grunewald, Ruth Mostern, and Karl Grossner, The World Historical Gazetteer: A Digital Humanities Interface for Transregional Research, 21 February 2022
Thorben Pelzer, Retracing Professional Mobility: Historical Network Analysis through CERD, 11 November 2021
Ninja Steinbach-Hüther and Thomas Efer, The Digital Humanities Interface – An Introduction, 28 September 2021
Citation: Katarina Ristić and Nikola Ristić, Web Scraping and Digital Archives: A Program for the Retrieval of the Transcripts of the International Criminal Tribunal for Former Yugoslavia, in: TRAFO – Blog for Transregional Research, 14.09.2022, https://trafo.hypotheses.org/40678