From 15 June 2026 to 19 June 2026
  • Research events

Published on 7 May 2026. Updated on 7 May 2026.

Chatbots for Internet Research?

Location: CY Cergy Paris University (Paris region), RER A station Neuville Université.

Summer school coordinated by Luciana Radut-Gaghi of the LT2D laboratory

Chatbots have become 'part of the pipeline' in a number of research methodologies in the social sciences and humanities, contributing to formatting, summarising, annotating, labelling and generating synthetic data. One question is how to go about using chatbots for such research tasks in the first place, drawing on the many best-practice guides that have been shared across the research landscape. These guides set out steps for prompting and interacting with chatbots properly as researchers, but they also advise that the chatbots explain themselves and that researchers validate their outputs. How does one gain confidence in how chatbots work for researchers? And what should one do when the machine's findings and the researcher's findings misalign?
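The validation step mentioned above — checking chatbot outputs against a researcher's own judgments — can be sketched in a few lines. This is a minimal illustration, not part of the school's programme; the labels are invented, and Cohen's kappa is just one common choice of chance-corrected agreement measure.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators
    (e.g. a researcher and a chatbot) on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: share of items labelled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Invented example: six items annotated by a researcher and a chatbot.
researcher = ["pos", "neg", "pos", "neu", "neg", "pos"]
chatbot    = ["pos", "neg", "neu", "neu", "neg", "pos"]
print(round(cohens_kappa(researcher, chatbot), 2))  # 0.75
```

A low kappa on a sample of items is one concrete signal that the chatbot's outputs and the researcher's findings misalign and need closer inspection.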

When gaining confidence in chatbot output, one could consider how to ground chatbot findings. Such moments raise a series of questions, such as when to undertake a manual check and/or a comparison across multiple chatbots. But one should also ask: how does the medium or the platform affect the data and the findings? This is where guardrail auditing comes into the picture. How can one detect the guardrails that chatbots have put up so that they can interact with users without causing offence? And how do these guardrails affect the quality of the data and the findings? The goal here is to identify medium and platform issues when using chatbots for research.
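The two techniques named above — comparing multiple chatbots and probing for guardrails — can each be sketched simply. This is a hedged illustration under invented assumptions: the model names, labels and refusal markers below are hypothetical, and real guardrail auditing would need far richer probes than a keyword heuristic.

```python
def flag_disagreements(annotations):
    """annotations: {model_name: [label, ...]}, aligned item by item.
    Returns the indices where the models disagree -- candidates
    for manual review or further grounding."""
    models = list(annotations)
    n = len(annotations[models[0]])
    return [i for i in range(n)
            if len({annotations[m][i] for m in models}) > 1]

# Crude guardrail probe: does a reply look like a policy refusal?
# (Markers are invented examples, not any vendor's actual wording.)
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "as an ai")

def looks_like_refusal(reply):
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

# Invented example: the same four items labelled by three chatbots.
runs = {
    "model_a": ["pos", "neg", "neu", "pos"],
    "model_b": ["pos", "neg", "pos", "pos"],
    "model_c": ["pos", "neu", "neu", "pos"],
}
print(flag_disagreements(runs))                       # [1, 2]
print(looks_like_refusal("I can't help with that."))  # True
```

Items where models disagree, or where one model refuses while another answers, are exactly the points where the medium or platform may be shaping the data.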