1.0 Purpose
The following AI Usage Guidelines establish a framework for responsible, ethical, and strategic use of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) across Oakland University's academic, research, clinical, and business operations.
2.0 Guiding Principles
- Responsible Innovation: Responsibly pursuing cutting-edge AI while maintaining a critical and measured approach to its implementation.
- Ethical Deployment: Ensuring AI technologies are implemented with a fundamental respect for human values, individual rights, and societal well-being.
- Data Protection: Pursuing a comprehensive strategy to safeguard Oakland University's sensitive data from unauthorized access, misuse, or unintended disclosure throughout the AI ecosystem.
- Academic Integrity: Preserving the fundamental values of intellectual honesty, original thought, and scholarly rigor.
- Transparency: Demanding complete and honest communication about the use, capabilities, and limitations of AI technologies across all University activities.
- Risk Mitigation: Proactively identifying, assessing, and systematically reducing potential negative consequences of AI technology implementation.
- Accessibility & Equity: Ensuring AI tools are inclusive and accessible, avoiding bias and disproportionate impacts on any community or population.
- Human Oversight: Requiring active human involvement and accountability in all AI-assisted decision-making processes.
3.0 Guidelines
3.1 Data Protection
- Data used and generated by AI and LLM systems must be handled according to the University's Data Classification Standard and applicable external legal and regulatory frameworks.
- Confidential and Internal data must be safeguarded from unauthorized access, upload, and misuse as described in University Policy 860.
- Entering Confidential or Internal data into an unauthorized AI system constitutes a public disclosure and must be handled according to the steps defined in Oakland University's Cyber Incident Response Program (CIRP) per the Incident Response Process. Note: If you intend to enter Confidential data into an AI tool, please confer with UTS first.
- All third-party AI tools must comply with applicable data privacy laws (e.g., FERPA, HIPAA, GDPR), and procurement must verify vendor compliance.
3.2 Authorized Use
- GenAI and LLMs must not be used as the sole basis for personnel, award, or disciplinary decisions.
- Decision-making may not be delegated to GenAI systems.
- The University maintains a rigorous review process for AI technology (the Software, Applications, and Website Purchasing Checklist Request), ensuring all AI tools undergo thorough evaluation by cross-functional experts before implementation.
- All approved AI systems are subject to re-evaluation upon major updates or model changes.
3.3 Academic Affairs Use
Research Applications
Teaching, Learning, and Academic Integrity
3.4 Research Considerations
Research Data Management & Collaborative Research
3.5 University Office Use
Finance and Administration
- AI may be used to enhance institutional efficiency, including resource optimization, workflow improvements, predictive maintenance, and strategic planning.
- AI-generated recommendations must be reviewed and approved by appropriate human decision-makers.
Planning
- AI may support strategic planning through data modeling and predictive analytics but must not replace human judgment.
- Ongoing performance monitoring is required to detect bias, errors, or deviations from institutional values.
Human Resources
- AI may support initial candidate screening or workload analysis but may not make personnel decisions (e.g., hiring, promotion, termination).
- All HR-related uses of AI require human oversight to prevent bias and protect employee rights.
Student Affairs
- AI may enhance administrative functions such as registrar processes, financial aid communication, and service efficiency.
- AI is prohibited from making independent enrollment or disciplinary decisions, which require human review and authority.
4.0 AI Governance and Oversight
- Oakland University will establish an AI Governance Council, including representatives from Academic Affairs, IT, Legal, Ethics, Research, and Student Affairs, to oversee policy implementation, review incidents, and evaluate new technologies.
- These guidelines will be reviewed annually to remain current with evolving legal, technological, and educational developments.
5.0 Reporting and Compliance
- Any misuse, unintended consequences, or ethical concerns regarding AI must be reported via the AI Incident Reporting Process or through the University's CIRP.
- Violations of these guidelines may result in disciplinary or administrative action, as outlined in applicable university policies.