Jointly developed by AEGIX, Clarion AI Partners, and Kirton McConkie Law Firm, this immersive training brings together practitioners from a leading AI governance institute, a leading AI advisory firm, and a leading AI law firm. The program moves from foundational AI literacy to applied lifecycle risk and advanced governance strategy, equipping participants to operate as strategic AI advisors capable of navigating emerging systems and global regulatory complexity.
Featured Speakers
Medlir Mema
Julie Slater Crane
Bennett Borden
Caryn Lusinchi
Jonathan Bench
Gloria Shkurti Özdemir
A Training for Future Leaders in AI Law and Governance
The AI Law and Governance Training is a three-day immersive program designed for graduate students, government officials, lawyers, and professionals seeking practical, cross-jurisdictional fluency in the law and governance of artificial intelligence.
The program develops clear, non-technical understanding of how AI systems function across the lifecycle, strengthens familiarity with global AI governance frameworks, and prepares participants to move beyond reactive compliance and position themselves as strategic AI advisors within their institutions.
Target Audience
Graduate students, government officials, lawyers, and mid-career professionals seeking to advance their expertise in AI law, ethics, and governance across diverse global contexts.
Aims of the Program
The AI Law and Governance Training develops a clear, non-technical understanding of how AI systems function across the lifecycle, from data collection and training to deployment and outputs. It strengthens participants' familiarity with global AI governance frameworks across the EU, United States, Middle East, APAC, and Global South, while examining how these systems interact, conflict, and evolve.
The program approaches AI as a global sociotechnical system shaped by power, political economy, institutional design, and societal values. Through a comparative and interdisciplinary lens, participants examine how legal risks, governance challenges, and regulatory responses vary across sectors, regions, and levels of authority.
Ultimately, the program prepares participants to move beyond reactive compliance and position themselves as strategic AI advisors within their institutions. By fostering a shared vocabulary and applied analytical framework, the program enables participants to navigate regulatory uncertainty, anticipate emerging risks, and contribute meaningfully to the development of responsible AI governance worldwide.
What to Expect
Clear, Applied AI Literacy
A structured, non-technical explanation of how AI systems function across the lifecycle, including foundation models, generative systems, and multi-agent architectures, with a focus on where legal risk actually arises.
Cross-Jurisdictional Governance Insight
Comparative analysis of AI regulation and enforcement across the EU, United States, Middle East, APAC, and Global South, with attention to extraterritorial reach, regulatory friction, and practical compliance realities.
Lifecycle Risk & Enterprise Scenarios
Hands-on engagement with real-world scenarios involving intellectual property, discovery, data provenance, supply chains, liability allocation, and documentation requirements across complex AI stacks.
Strategic Lawyering & Future Positioning
Preparation to advise institutions and enterprises on AI deployment, anticipate enforcement trends, navigate gray areas, and transition from compliance responder to strategic governance architect.
Learning Objectives
By the end of the program, participants will be able to:
Explain the AI system lifecycle, including the distinctions between training and inference, generative and predictive systems, and single-agent versus multi-agent architectures.
Identify where legal risk arises across data sourcing, model development, deployment, outputs, and post-deployment updates.
Analyze intellectual property, copyright, authorship, and attribution issues in the context of AI training and outputs across multiple jurisdictions.
Compare key elements of AI governance frameworks in the EU, United States, Middle East, APAC, and Global South, including enforcement mechanisms and regulatory gaps.
Advise on cross-border compliance challenges, including extraterritorial regulation, localization laws, and conflicting legal obligations.
Skills Acquired
Participants will develop practical competencies essential for navigating the complex legal landscape of AI governance:
Translating technical AI concepts, system architectures, and lifecycle stages into legally relevant risk assessments and clear advisory guidance that can be communicated effectively to clients, regulators, and internal stakeholders.
Conducting comprehensive AI lifecycle risk analyses that evaluate exposure across data sourcing, model training, fine-tuning, deployment, outputs, monitoring, and system updates, with attention to jurisdictional variation and evidentiary implications.
Evaluating intellectual property, copyright, authorship, attribution, and data provenance questions arising from AI training and generative outputs, while comparing how these issues are treated across different legal systems.
Advising institutions and enterprises on cross-border regulatory exposure, extraterritorial enforcement, data localization requirements, and conflicting compliance obligations under emerging global AI governance regimes.
Framing AI governance not solely as a compliance function, but as a strategic exercise in institutional design, risk management, and long-term organizational resilience in an evolving technological environment.
Participants are responsible for arranging their own travel and accommodation. AEGIX can provide official invitation and support letters to assist with administrative requirements.
Medlir Mema
Director of the AI Ethics and Governance Institute (AEGIX)
Medlir Mema is the Director of the AI Ethics and Governance Institute (AEGIX), a global platform for ethical foresight, governance innovation, and AI harm mitigation. A leading expert in AI governance and policy development, he previously founded and directed the AI Governance Programme at the Global Governance Institute in Brussels. Medlir is also Executive Director of Organized Intelligence, an organization committed to building a future where artificial intelligence serves—not replaces—human purpose and human flourishing.
A Professor of International Relations and International Law at Brigham Young University-Idaho, Medlir was previously an Associate Professor at Tokyo International University in Japan and a Senior Associate Researcher at the Institute for European Studies - VUB in Brussels, Belgium. From 2010 to 2011, he was a guest scholar at the International Law Center of the Swedish National Defense College.
Medlir holds a Ph.D. in Political Science from George Washington University and an MA in European Studies and International Economics from the Johns Hopkins University School of Advanced International Studies (SAIS). During his studies, he served as Editor-in-Chief of the Bologna Center Journal of International Affairs and worked as a Researcher at the Center for Transatlantic Relations in Washington, D.C.
A co-founder and co-host of the Age of AI podcast, Medlir explores how AI technologies are fundamentally reshaping policy, ethical considerations, and governance structures worldwide.
Julie Slater Crane
Shareholder at Kirton McConkie Law Firm
Julie Crane is a shareholder at Kirton McConkie Law Firm with over a decade of experience advising organizations on global data privacy, AI regulation, and cross-border legal and regulatory matters. She leads a team that helps clients navigate complex privacy and technology challenges, including artificial intelligence, automated decision-making, and the implementation of practical, enterprise-wide compliance programs.
Ms. Crane's practice focuses on advising multinational companies, nonprofits, and foreign entities on global privacy and data protection compliance, international data transfers, AI governance, cybersecurity incident response, and risk management across more than 140 jurisdictions. She also assists companies expanding internationally with foreign regulatory compliance, international contracts, entity formation, and engagement with foreign government authorities.
Ms. Crane regularly helps clients build scalable privacy and AI governance programs under the GDPR, U.S. privacy laws, the EU AI Act, and emerging global regulatory frameworks, with an emphasis on practical, business-aligned solutions. She has led a global AI governance and compliance practice focused on regulatory compliance, risk management, and responsible AI deployment.
Ms. Crane advises clients on AI risk mitigation throughout the product and solution lifecycle, including governance frameworks, internal controls, and acceptable use policies. She also counsels technology companies and enterprise adopters on regulatory exposure related to automated systems, machine learning, and predictive analytics, and develops policies and governance structures for ethical AI use, third-party technology risk management, and emerging regulatory obligations.
Ms. Crane earned her JD from the Brigham Young University J. Reuben Clark Law School, where she served as Editor-in-Chief of the Law Review. Before joining Kirton McConkie, she served as a law clerk to the Honorable Jay S. Bybee of the U.S. Court of Appeals for the Ninth Circuit.
Bennett Borden
Founder and CEO of Clarion AI Partners
Bennett Borden is the Founder and CEO of Clarion AI Partners, where he helps organizations navigate high-stakes AI adoption across strategy, technology, and law. His work focuses on enabling leaders to identify real AI value, design durable systems, and deploy AI with confidence in performance, governance, and accountability.
Prior to founding Clarion, Bennett served as a Partner, AI Practice Group Lead, and Chief Data Scientist at one of the world's largest law firms. In that role, he advised Fortune 500 companies and leading AI developers on the use of artificial intelligence to drive strategic outcomes while managing legal, regulatory, and operational risk. His experience spans AI governance, algorithmic accountability, and the deployment of advanced data-driven systems in highly regulated environments.
Bennett brings a rare combination of legal, technical, and policy experience. His background includes service in U.S. Intelligence and deep involvement in the development of federal and state AI legislation. He has advised regulators, legislators, and global organizations on emerging AI risk frameworks and compliance models, helping shape how AI systems are evaluated and governed in practice.
Known for his ability to translate complexity into clarity, Bennett is trusted by executives for his practical approach to AI. He helps organizations move beyond pilots and principles to build systems that work in the real world—aligning innovation with enduring legal standards, organizational values, and public trust. His work is grounded in a commitment to human dignity, constitutional principles, and ensuring that powerful technologies are deployed responsibly and at scale.
Caryn Lusinchi
Director of the AI Academy at the AI Ethics and Governance Institute (AEGIX)
Caryn Lusinchi is Director of AEGIX's AI Academy. With a career spanning Meta, Google, startups, and the US federal government, she has built global strategies for responsible AI, algorithmic transparency, and generative AI guardrails. Having led initiatives at the forefront of generative AI, cybersecurity, and algorithmic transparency, Caryn brings a multidisciplinary perspective that empowers organizations to unlock AI's potential while safeguarding trust and accountability in AI systems.
Caryn has previously supported the US federal government, in line with the President's Executive Order 14110, OMB Memorandum M-24-10, and the NIST AI Risk Management Framework 1.0 and 2.0, in fostering safe and trustworthy AI development in the US. A globally recognized speaker and thought leader, Caryn is also a ForHumanity Certified AI Auditor (FHCA) under the EU AI Act, GDPR, and NYC Bias Law, with a specialized certificate in the Foundations of Independent Audit of AI Systems (FIAAIS).
As a global leader in AI compliance, governance, and risk management, Caryn helps organizations navigate the complex intersection of emerging technologies and regulatory frameworks. Drawing on expertise in the EU AI Act, GDPR, EU Data Act, CRA, GPSR, ISO 42001, and the NIST AI RMF, she drives executable strategy that balances innovation with responsibility.
Her career includes pivotal roles in responsible AI education and advisory consulting for global enterprises and startups, covering corporate board governance, charter and policy development, and AI/ML project lifecycle guardrails. During the COVID-19 pandemic, Caryn led the go-to-market strategy for Arthur AI, an MLOps platform for the safe deployment, management, and monitoring of traditional and generative AI models.
With over 15 years of experience in B2C and B2B technology at tech giants such as Google and Meta, Caryn has helped C-suites and governments implement robust AI governance boards, mitigate high-risk AI use cases, and integrate agile compliance guardrails across AI/ML-embedded products, lifecycles, and enterprise GPAI platforms. Her market expertise spans launching and scaling responsible conversational AI, IoT, and software applications across North America, EMEA, LATAM, and APAC.
Jonathan Bench
Shareholder at Kirton McConkie Law Firm
Jonathan Bench is a shareholder at Kirton McConkie Law Firm and a sought-after speaker and writer for national and global organizations on U.S. and international business development. With particular interest and expertise in emerging technologies, such as generative artificial intelligence (AI) and blockchain, Jonathan represents founders who are developing AI tools for a range of industries and use cases across consulting, marketing, education, customer service, and middleware to automate AI processes. His blockchain experience encompasses layer-2 blockchain developers, international DAO communities, NFT artists and studios, metaverse companies, and celebrity brand influencers.
Jonathan assists companies, entrepreneurs, and investment funds in international and domestic commercial transactions. He has extensive experience guiding clients to creative and pragmatic solutions in their mergers and acquisitions, joint ventures, financings, and foreign direct investments. His global clientele spans Asia, Europe, the Middle East, Africa, and the Americas. Jonathan has worked and consulted in the U.S., Asia, and South America; he is fluent in Mandarin Chinese and Cantonese and is working toward fluency in Spanish.
Jonathan is passionate about diving deep into cutting-edge industries and difficult markets to make sense of the chaos in those fast-paced and often murky legal environments. He represents a variety of manufacturers, brokers, and retailers in their global supply chain structuring and restructuring projects, typically involving China or nearshoring away from China to Southeast Asia and Latin America.
A graduate of The George Washington University Law School, Jonathan has been recognized as a top author on Lexology and has received multiple accolades from industry groups as a prominent lawyer.
Gloria Shkurti Özdemir
Director of Research at the AI Ethics and Governance Institute (AEGIX)
Gloria Shkurti Özdemir is Director of Research at AEGIX. She is a scholar of international relations specializing in emerging technologies, artificial intelligence, U.S. foreign policy, and drone warfare. Her research explores the strategic, political, and security implications of AI in global power competition, with particular emphasis on U.S.-China technological rivalry and the transformative impact of AI on middle-power states.
She is the author of the book Artificial Intelligence 'Arms Dynamics': The Case of the U.S. and China Rivalry, and the editor of Different Dimensions of Environmental Security in Türkiye and Beyond and Türkiye'nin İstiklali: Milli Teknoloji Hamlesi (Türkiye's Independence: The National Technology Initiative). Her work broadly examines military innovation, technological competition, and evolving geopolitical strategy.
Dr. Shkurti Özdemir is currently a Researcher in the Foreign Policy Directorate at the SETA Foundation and serves as Assistant Editor of Insight Turkey journal. She is also a Lecturer and the Director of the Emerging Technologies and Artificial Intelligence (ETAI) Research Center at Khazar University in Baku, Azerbaijan.
In addition to her academic and institutional roles, she contributes op-eds, policy analyses, and expert commentaries across various platforms, engaging public debates on AI governance, defense innovation, and global security.