Responsible AI and AI Governance: Identifying & Mitigating Risks in Design, Development and Operation of AI Solutions
Course Number: 17416, 17716, 19416, 19716
As AI and machine learning systems become integral to products and services across industries, it is critical to identify and mitigate the risks associated with their design, deployment, and operation. This course examines the evolving landscape of AI governance, exploring both technical and organizational strategies for developing trustworthy and responsible AI systems. In 2025, the course expands to cover responsible development and governance of Agentic
AI—systems capable of autonomous reasoning, planning, and collaboration. Students explore governance strategies across the AI lifecycle, including model alignment (RLHF, RLAIF), fairness, differential privacy, explainability, interpretability, and AI red teaming. The course integrates evolving policy and regulatory frameworks such as the EU AI Act, NIST’s AI Risk Management Framework, ISO/IEC 42001, and OECD guidelines. Case studies examine
responsible AI practices in foundation models, generative systems, and agent-based ecosystems.
The course combines technical, policy, and management perspectives to equip students with the
tools and frameworks needed to assess and mitigate AI-related risks.
Academic Year: 2025-2026
Semester(s): Spring
Required/Elective: Elective
Units: 6, 9, 12
Prerequisite(s): No deep technical knowledge of AI or machine learning is required. A basic understanding of probability and statistics is expected. The course is designed to accommodate students from diverse technical and non-technical backgrounds, including engineering, computer science, policy, design, and management. Students interested in AI engineering, product management, law and policy, design, or risk management will find the course particularly relevant. Selected sessions provide intuitive introductions to modern AI safety techniques—such as RLHF, adversarial testing, and explainability methods—and their governance implications.
Location(s): Pittsburgh, Remote
Format: Lecture

Learning Objectives
This course is designed for advanced undergraduates and graduate students preparing to design, develop, deploy, or oversee AI-based systems. It introduces key principles, methodologies, technologies, and best practices for responsible AI and AI risk mitigation.
Students will:
- Understand the governance implications of next-generation AI systems, including multi-agent and Agentic AI architectures.
- Learn technical and organizational approaches for ensuring transparency, accountability, privacy, fairness, robustness, and safety.
- Gain hands-on experience analyzing governance frameworks and applying responsible AI techniques and tools such as red teaming, differential privacy, and interpretability audits.
- Examine regulatory, ethical, and policy issues shaping AI practice across sectors.
Lecture Topics (Spring 2025)
| Lecture | Topic | Focus |
|---------|-------|-------|
| 1 | Introduction: The Expanding AI Ecosystem | Overview of AI lifecycle risks, governance frameworks, and Agentic AI |
| 2 | Ethical Principles and Foundations of Responsible AI | Global ethical frameworks, values alignment, and trustworthiness |
| 3 | Governance, Data, and Privacy | Differential privacy, data governance, and regulatory compliance |
| 4 | Fairness and Bias in AI Systems | Bias auditing, algorithmic fairness, and equitable model design |
| 5 | Transparency, Explainability, and Interpretability | SHAP, LIME, causal interpretability, and documentation frameworks |
| 6 | Model Alignment and Oversight | Reinforcement Learning with Human and AI Feedback (RLHF, RLAIF); human-in-the-loop governance |
| 7 | AI Red Teaming and Adversarial Evaluation | Probing model behavior, safety testing, and governance integration |
| 8 | AI Security and Robustness | Adversarial attacks, model extraction, and resilience strategies |
| 9 | Legal and Regulatory Landscape | EU AI Act 2025, U.S. Executive Orders, ISO/IEC 42001, and international harmonization |
| 10 | Organizational Governance and Compliance Practices | Risk management systems, accountability structures, and assurance tools (incl. NIST AI RMF) |
| 11 | Governance of Agentic and Multi-Agent AI Systems | Oversight of autonomous systems and emergent behavior |
| 12 | AI Safety and Societal Risks | Applications in critical systems: autonomous driving, healthcare, defense |
| 13 | Copyright, Intellectual Property, and Model Use Policies | Generative AI, content provenance, and copyright compliance |
| 14 | Project Poster Fair | Award certificate(s) for best project(s) |