Purpose
The purpose of this policy is to provide clear guidelines for the use of generative artificial intelligence (AI) tools within the NIACC community. This policy aligns with NIACC’s commitment to academic integrity, data privacy, and ethical AI use. It applies to faculty, staff, students, and affiliates who engage with generative AI tools in their academic and administrative activities. The policy is not intended to restrict legal and ethical use of AI where confidential information is not at risk.
Generative AI refers to tools that can produce new content or ideas, including text, code, images, and other digital outputs, often based on user input. Examples include OpenAI’s ChatGPT, Google Gemini, and Microsoft Copilot.
Guiding Principles
Ethical Use of AI: All members of the NIACC community are expected to use AI technologies responsibly, ensuring that AI-generated content is used ethically and does not violate academic, legal, or professional standards.
Commitment to Academic Integrity: The use of generative AI in academic submissions must be transparent and properly acknowledged. Misuse of AI tools, such as presenting AI-generated content as one's own work, will be considered a violation of academic integrity.
Privacy and Data Security Considerations: The use of confidential or sensitive data in generative AI tools, especially publicly available ones, is strictly prohibited unless there has been a prior security and privacy review. This includes FERPA-protected student information, HIPAA-protected health data, and unreleased institutional data.
Compliance with Institutional Policies and External Regulations: All AI use must comply with NIACC policies, as well as state, federal, and international laws, including data privacy laws (e.g., FERPA, HIPAA), intellectual property rights, and export controls.
Administrative Use of Generative AI Tools
NIACC permits the administrative use of specific AI tools that have undergone security and privacy reviews.
- Approved Tools: Tools such as Zoom AI Companion and Microsoft Copilot, which have been vetted for compliance with NIACC data security policies, are approved for use. Additional tools may be approved by the IT department following review.
- Authorized Use Cases: AI may be used to augment productivity in administrative tasks provided that the content generated is verified for accuracy.
- Security and Privacy Reviews: Any new generative AI tool acquired or used by NIACC employees must undergo a security and privacy review when personal or confidential information is accessible by the tool.
Prohibited and Restricted Use of Generative AI
Prohibited Data: Generative AI tools must not be used to process or store sensitive information such as:
- FERPA-protected student data
- HIPAA-protected health information
- Confidential employee data
- Intellectual property that has not been released for public use
Copyright and Intellectual Property Concerns: AI tools may inadvertently generate content that violates copyright. Users must ensure that AI-generated content does not infringe on others' intellectual property rights. Use of copyrighted material in AI tools without proper permissions is prohibited.
Prohibited Actions: Users must not engage AI tools in activities that violate NIACC policies or legal standards, including:
- Plagiarism or academic dishonesty
- Generating or distributing misinformation
- Enabling harassment, threats, or defamation
- Facilitating illegal activities or violating data use agreements
- Recording, transcribing, summarizing, and/or distributing meeting notes when you are not the meeting host or a host designee.
Academic Integrity and Generative AI
Faculty Guidelines: Faculty members will choose one of the approved Academic Affairs AI statements for their syllabi, which provide guidance on the acceptable use of AI in coursework, including citation expectations.
Student Guidelines: When AI use is permitted within a course (refer to the course syllabus), students may not present AI-generated work as their own without proper acknowledgment or citation. Failure to do so will be considered plagiarism, subject to NIACC's Student Code of Conduct.
Procurement and Acquisition of Generative AI Tools
Approval Process: Before acquiring or using generative AI tools, particularly those that process confidential data, a formal request must be submitted for review. This process ensures that the tool meets security, privacy, and compliance standards.
Security and Privacy Review: Technology Services must conduct a security review of any AI software before it is implemented. This includes both paid and free tools that will be used in conjunction with NIACC data.
Generative AI Tool Request Form: https://niacc.teamdynamix.com/TDClient/2830/Portal/Requests/ServiceDet?ID=55523
Incident Reporting and Accountability
Breach of Data Security: Any suspected breach of data security must be reported immediately to the Technology Services department.
Plagiarism and Academic Misconduct: Plagiarism related to the misuse of AI tools must be reported to the appropriate academic office.
Reporting Mechanisms: NIACC provides online reporting forms for data breaches and improper use of AI tools. All reports will be handled according to NIACC policies on data security and academic integrity.
Security Incident Response and Investigation Form: https://niacc.teamdynamix.com/TDClient/2830/Portal/Requests/ServiceDet?ID=49326
Enforcement and Sanctions
Violations of this policy may result in disciplinary actions, including suspension or termination for employees, and academic sanctions, up to and including expulsion, for students.
Resources and Contacts
Training and Educational Resources:
Faculty, staff, and students are encouraged to complete self-paced training on the ethical and secure use of AI tools.
Contact Information: For questions or concerns related to this policy, please contact the following:
- Chief Information Officer: CIO@niacc.edu
- Chief Security Officer: ITSecurity@niacc.edu
Version History
| Version | Modified Date | Approved Date | Approved By | Reason/Comments |
| --- | --- | --- | --- | --- |
| 1.0.1 | February 2025 | | | Reviewed by AI Advisory Group |
| 1.0.2 | April 2025 | | | Reviewed by President’s Council |
| 1.0.3 | April 2025 | | | Reviewed by PAC |
| 1.0.3 | May 2025 | 05/06/2025 | College Senate | |