POLICIES


Center for Digital Humanities (CDH) Use Policy

The CDH may be used by current Southern Miss students, staff, and faculty. All users should abide by the following policies:

Center for Digital Humanities (CDH) Acknowledgements Policy

One of the primary purposes of the Center for Digital Humanities at The University of Southern Mississippi is to promote and support the practice and use of digital humanities, not only among faculty, staff, and students at USM, but across the state of Mississippi. This requires a close working relationship between CDH faculty, staff, and student workers and individual and institutional researchers.

In instances where the CDH provides extensive assistance on a project, it is best practice for the individual or institution to credit the CDH and its members’ contributions. “Extensive assistance” may include, but is not limited to:

  • Extensive (four or more) research consultations, whether in person, by phone, and/or by email;

  • A significant amount of time spent working on a project;

  • CDH involvement throughout several steps of the project;

  • CDH-created visualizations and/or text for a project;

  • Financial support or in-kind support.

The CDH recommends that those it supports name the Center and display the Center’s logo on their site. In addition, we suggest:

  • Listing the faculty, staff, and/or student assistants as authors on the specific page or visualization they created;

  • Listing the faculty, staff, and/or student assistants in a contributors list or credits section.

Additionally, the CDH will list the project on its projects page, along with the name of the individual or institution, a logo or associated image, the project name, and a short description of the project.

Standards for the Use of AI at the Center for Digital Humanities

PREAMBLE: New tools to assist with preserving, mapping, and analyzing data, images, and texts (the foundation of digital humanities research) are being developed daily, and many employ artificial intelligence (AI). These tools make the power of the digital humanities more accessible to many, but the rapidly evolving landscape carries risks: data may be corrupted, privacy violated, and research results disconnected from the underlying data and the stories that data can tell. The Center for Digital Humanities at Southern Miss recommends these guidelines for the use of AI in digital humanities research while recognizing that the field is evolving and that these guidelines will have to evolve as well.

  • Transparency: All digital humanities undertakings should acknowledge the tools that were used and, when possible, make the data sets analyzed freely available to researchers.

    1. Citation and Copyright: Researchers using AI have an affirmative responsibility to acknowledge the sources they use for their research. If the AI tool does not provide access to that information, so that fully citing sources or respecting copyright is not possible, the tool should not be used.

    2. Acknowledgement: When using AI digital humanities tools, extra care should be taken to describe how the tool works and any external inputs (including data sets used to train the AI tool) that have contributed to the research results.

    3. Sustainability: Care should be taken to select AI tools that will remain available (and affordable) so that future researchers will have the opportunity to verify results.

  • Intentionality: The researcher’s engagement with their sources is often the most important part of the research process. It is in this engagement that unexpected flaws in the data are often revealed, critical insights are developed, and the scholarly narrative takes shape. The digital humanities are as much a method of thinking and research as a means of producing work products. When considering whether to incorporate AI into their work, researchers should think carefully about what may be lost and whether AI is the best analytical tool or just the easiest.

  • Privacy: Researchers using AI should take every precaution to ensure that privileged data is not shared with AI tools and that the privacy of individuals, both contemporary and historical, is respected. Users of AI should be aware that some AI tools collect data and retain the right to use that data.

  • Limitations: Statistical analysis, whether generated by AI or by more traditional methods, extrapolates results from data. Researchers have an affirmative responsibility to acknowledge the degree of certainty of the results and the limits of any statistical model to describe the past or predict the future.

  • Labor: AI can make humans more productive, but it was created through, and continues to exploit, human labor. Researchers should develop policies that reward this human labor, including employment policies that reward human ingenuity and do not impose financial penalties for increased productivity.

  • Images: Like AI-generated text, AI-generated images raise ethical dilemmas, including the lack of remuneration for artists whose art served as models, documented issues with bias and stereotyping in AI-generated images, and the possibility of falsified images (e.g., deepfakes) being interpreted as real. Researchers should foreground the ethical concerns of artists and audiences in any creation, development, and/or use of AI-generated images.

[Updated January 29, 2025]