KADIR HAS UNIVERSITY INFORMATION CENTER
GENERATIVE ARTIFICIAL INTELLIGENCE POLICY AND STRATEGIC GOVERNANCE FRAMEWORK

Document Version: 2.0
Effective Date: 19 November 2025
Approved By: KHAS Information Center Directorate
Scope: All academic/administrative units, students, library services, and licensed electronic resources.

1. VISION AND PURPOSE

This policy regulates the integration of Generative Artificial Intelligence (Generative AI) into the service processes, collection management, and information ecosystem of the Kadir Has University Information Center (KHAS Information Center).
Our vision embraces the innovative discovery and efficiency opportunities that AI offers to research, while positioning technology not as a substitute for human intelligence but as a complement to it (Augmented Intelligence). In the age of algorithmic information, the KHAS Information Center commits to acting as a “Verification Hub” where the reliability of synthetic content is scrutinized, and to serving as the institutional guardian of academic integrity.

2. CORE ETHICAL PRINCIPLES

The Information Center adheres to the following five core principles in AI management and service delivery:

2.1. Transparency
When AI algorithms are used in the digital assistants, catalog searches, or discovery services provided by the Information Center, this is explicitly disclosed to users. Likewise, users are expected to transparently declare any use of AI in the methodology sections of their academic outputs.

2.2. Data Sovereignty & Privacy
Users’ research history, borrowing data, and personal information are strictly protected. For institutionally licensed tools, a contractual “No Training on User Data” guarantee is required from suppliers. Sensitive institutional data must not be shared with public models trained on open data.

2.3. Algorithmic Fairness
Algorithms used for information access are monitored to ensure they are free from bias. Access to sources that encompass local and cultural diversity, beyond Western-centric datasets, is prioritized.

2.4. Human-in-the-Loop
Artificial Intelligence is a “co-pilot.” In critical reference services, literature reviews, and bibliometric analyses, AI outputs are not presented as “final information” without the supervision and verification of expert librarians.

2.5. Accessibility and Inclusivity
The selection of AI tools and interfaces is conducted with strict adherence to international accessibility standards (WCAG) for users with disabilities. Technology is not permitted to create a new barrier of inequality.

3. INSTITUTIONAL STRATEGIC COMMITMENTS

3.1. Collection Management and Vendor Auditing

The Information Center audits the AI features within subscribed databases and discovery services from both technical and ethical perspectives. An “AI Transparency Statement” is demanded from vendors (Publishers, Data Providers).

  • Risk Management: Providers that fail to submit a transparency statement, or that do not meet data security standards (GDPR/KVKK compliance), are classified as a “Risk / Cautionary Resource” within library systems, and users are warned about potential biases in their outputs.

3.2. AI Literacy

The traditional Information Literacy curriculum has been updated for the AI era. The goal is to equip users with the following competencies:

  • Effective and ethical Prompt Engineering,
  • Detection and verification of AI hallucinations (fabricated or false information),
  • Distinguishing between synthetic content and peer-reviewed/editorial content.

3.3. Provision of Trusted Content

Unlike models trained on uncurated data scraped from the internet, the Information Center continues to provide “Trusted Content” through peer-reviewed journals, books, and verified databases, prioritizing transparent models that support Open Science principles.

4. USAGE GUIDELINES FOR USERS

Researchers and students using Information Center resources and AI tools are subject to the following guidelines:

  • Verification Responsibility: Users are obliged to verify the accuracy of information, citations, and data obtained from AI tools against original sources.
  • Academic Integrity: AI may be used as a tool for idea generation, language correction, and summarization; however, generating an entire text via AI and presenting it as one’s own work without proper citation is considered plagiarism.
  • Copyright: The use of copyright-protected materials for training AI models without the permission of the rights holders is prohibited.

5. ENFORCEMENT AND REVIEW

Due to the rapid pace of change in AI technologies and the emergence of new ethical risks, this policy document is reviewed and updated every six months by the Information Center Management.

Kadir Has University Information Center Directorate

Email: [email protected] | Web: bilgimerkezi.khas.edu.tr