AI-assisted bibliographic revision

This project aims to test an AI-assisted copy preparation system. The proposed experiment focuses on a specific aspect: the revision of bibliographies, whose structuring and formatting are prone to numerous errors despite the existence of tools like Zotero. The goal is to assess the needs of editors, test different tools and methodologies, and document the experience to ensure its reproducibility.

This project is envisioned as the first step in a long-term experiment designed to evaluate the potential of AI in editorial preparation work, while proposing technical and methodological solutions that respect the work of authors and editors. The ecological impact of the proposed solutions will also need to be considered.

Issues

While any editor can appreciate the automation of certain tasks, the introduction of AI into the revision and copy preparation process must be carefully considered to assess its impact on editorial work. Mastery of language and, more broadly, knowledge and control of various editorial standards (orthographic, bibliographic) emerge as key competencies of the editor-reviewer, establishing themselves as a marker of the profession. The experience gained by editors, who often work within the same journal or group of journals for many years, represents significant added value. This, along with the quality of the author-editor relationship, could be disrupted by the widespread automation of editorial tasks.

On a larger scale, the epistemological impact of automating proofreading and revision must also be examined. At a time when large industrial companies (like Microsoft) are already deploying their own AI systems to automate certain tasks (e.g., automatic figure transcription), it is important to determine how to adapt our specific requirements—journal recommendations, language, disciplinary field, etc.—to an open model.

Therefore, we aim to explore the possibility of "reusability." Given that references may recur from one article to another—especially within journals in specific fields—the development and utilization of a thematically organized and correctly formatted bibliographic database could save time and improve quality.
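As a sketch of this "reusability" idea, the following minimal Python example stores already-revised references under a normalized key, so that a reference recurring in another article of the same journal can be reused rather than reformatted from scratch. The key scheme (surname + year + first title word) is a hypothetical choice for illustration, not a scheme prescribed by the project.

```python
import re

def ref_key(author: str, year: int, title: str) -> str:
    """Build a normalized deduplication key (hypothetical scheme):
    first author's surname + year + first word of the title."""
    surname = author.split(",")[0].strip().lower()
    first_word = re.sub(r"[^a-z ]", "", title.lower()).split()[0]
    return f"{surname}{year}{first_word}"

class BibStore:
    """Minimal in-memory store of formatted references, keyed for
    reuse across articles of the same journal or field."""

    def __init__(self):
        self.entries = {}  # normalized key -> formatted reference string

    def add(self, author: str, year: int, title: str, formatted: str) -> str:
        key = ref_key(author, year, title)
        # Keep the first (already revised) version; a recurring
        # reference is looked up instead of being revised again.
        self.entries.setdefault(key, formatted)
        return key

    def lookup(self, author: str, year: int, title: str):
        return self.entries.get(ref_key(author, year, title))
```

A store of this kind could be organized thematically (one store per journal or disciplinary field) and filled progressively as articles are revised.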

Technical challenges

Research activities

  1. Interviews with the editor of the HN journal: What difficulties are encountered, and what common errors occur?
  2. Overview of the various existing tools and selection of tools to be tested
  3. Implementation of the tools
  4. Documented experimentation on two articles from the HN journal

Deliverables

Findings and partial conclusions

Our experience has brought to light the challenges of automating scholarly editing work. While the use of generative AI appears to reduce working time, it nevertheless requires systematic verification of modifications, as the tool sometimes makes unprompted changes (such as deleting DOIs or altering punctuation). Reproducibility and reliability also present issues, with results deteriorating over successive iterations.
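The systematic verification mentioned above can itself be partly automated. The sketch below (an illustration, not part of the project's tooling) compares the fields of a BibTeX entry before and after an AI revision pass and flags exactly the kind of silent changes we observed: a deleted DOI or an altered value. The field parser is deliberately simplified and assumes no nested braces.

```python
import re

def bib_fields(entry: str) -> dict:
    """Extract field = {value} pairs from a single BibTeX entry
    (simplified: assumes no nested braces inside values)."""
    return {k.lower(): v for k, v in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry)}

def check_revision(before: str, after: str) -> list:
    """Flag silent changes a generative pass may introduce:
    deleted fields (e.g. a dropped DOI) or altered values
    (e.g. changed punctuation or capitalization)."""
    old, new = bib_fields(before), bib_fields(after)
    issues = []
    for field, value in old.items():
        if field not in new:
            issues.append(f"deleted field: {field}")
        elif new[field] != value:
            issues.append(f"changed field: {field}")
    return issues
```

Such a check does not remove the need for human review, but it narrows it to the entries actually flagged.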

In light of these limitations, and specifically in the case of bibliographic revision, converting references into BibTeX using specialized models and tools (such as Pleias’s Bibtexer or AnyStyle) seems to offer a sounder solution (a return to structured data), though these tools remain somewhat complex to use.
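To make the idea of this conversion concrete, here is a toy stand-in for what AnyStyle or Bibtexer do with trained models: a single regular expression that handles one common reference shape ("Surname, G. (Year). Title. Journal, ...") and turns it into a BibTeX entry. Real tools recognize many shapes; this sketch handles exactly one and returns None otherwise, leaving the reference for manual revision.

```python
import re

# Hypothetical pattern for one common reference shape:
#   "Surname, G. (Year). Title. Journal, Volume(Issue), Pages."
PATTERN = re.compile(
    r"^(?P<author>[^.]+)\.\s+\((?P<year>\d{4})\)\.\s+"
    r"(?P<title>[^.]+)\.\s+(?P<journal>[^,]+)"
)

def to_bibtex(reference: str, key: str):
    """Convert a plain-text reference back into structured BibTeX.
    Returns None for any shape the single pattern does not cover."""
    m = PATTERN.match(reference.strip())
    if not m:
        return None  # unrecognized shape: leave it for manual revision
    return (
        f"@article{{{key},\n"
        f"  author  = {{{m['author'].strip()}}},\n"
        f"  year    = {{{m['year']}}},\n"
        f"  title   = {{{m['title'].strip()}}},\n"
        f"  journal = {{{m['journal'].strip()}}}\n"
        f"}}"
    )
```

The point of the sketch is the direction of the transformation: from a fragile formatted string back to structured data that can then be validated and reformatted deterministically.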

The experiment has ultimately highlighted structural problems: authors’ lack of interest in the quality of their bibliographies, the adoption of suboptimal tools for the sake of convenience, and the way the subtleties of editorial work are rendered invisible. The “last-mile barrier” remains a critical space where a journal’s distinctiveness, its intellectual value, the editor’s responsibility, and the human qualities involved in collaborating with authors all come into play. That said, one could envision this barrier less as an obstacle than as a space of invention: one capable of restoring symbolic value to the creation and structuring of bibliographies.

People

Partners

Documents