Agentic AI & Responsible AI Summit 2026
Agentic AI is quickly moving from “interesting demos” to real enterprise workflows, which means accoun...
A$39
Level 1, 2, and 3, 315 Brunswick Street, Fortitude Valley, QLD 4006 Australia
Friday, March 13, 2026
9:00 AM AEST
Our Sponsors
Okta secures AI. Okta is The World’s Identity Company. Freeing everyone to safely use any technology—anywhere, on any device or app.
Gold Sponsor
Who this is for
Privacy Leaders
Compliance Leaders
Risk Leaders
AI and Data Leaders
Engineering Leaders
Technology Managers
Data & AI Architects
Why This Summit Now
Boards and executives are asking for evidence, not optimism, that AI is safe, controlled, and aligned to business outcomes. As a result, "evidence-ready" practices are becoming the new baseline for enterprise adoption. This summit is built to align leadership, risk, and technology on the practical decisions that shape enterprise AI in 2026, with responsible AI as the operating standard and agentic AI as the emerging capability that demands the strongest clarity and controls.
You’ll leave with a clearer view of what “good” looks like for agentic AI governance in real organisations—covering decision rights, oversight models, guardrails for high-impact workflows, and the operating mechanisms needed to scale responsibly.
Agenda
Check in, grab a coffee and meet fellow attendees ahead of the opening session.
What’s changed, what leadership/boards now expect, and what “evidence-ready” means for AI decisions.
How AI agents are changing data work today, and what that means for speed, quality, and decision-making.
AI adoption workshop led by Amit Tiwary, author and thought leader in AI in Australia.
How risk, legal, data, security, and product/engineering align on “go/no-go” for agentic AI.
Governance structures, monitoring, escalation paths, and audit/assurance practices for enterprise adoption.
Cloud AI services, vendor accountability, and vendor-neutral standards.
Map use cases to autonomy levels, define accountability, and identify the minimum responsible controls to launch.
The 90-day agenda for leaders who must keep Australians safe while scaling AI responsibly.
Submit Your Inquiry
Know more about this conference
Learning Outcomes
Deliverables
- Agentic AI governance blueprint: Decision rights, accountability (RACI), autonomy levels, and human-in-the-loop/escalation design to deploy agentic AI safely in enterprise workflows.
- Evidence-ready pack outline for leadership/boards: What to document (controls, monitoring, incidents, approvals, outcomes) so agentic AI initiatives are defensible and reviewable, supporting responsible AI expectations.
- Responsible AI controls checklist for core workflows: Minimum guardrails for GenAI/agentic AI in production, including risk triggers, monitoring signals, and "go/no-go" criteria for high-impact use cases.
- 90-day implementation plan template: A practical plan to move from summit insights to execution, covering owners, milestones, governance cadence, and rollout phases for responsible AI at scale.
Speakers
Michael Ridland
CTO
Michael Ridland is an AI leader with 25+ years of experience building enterprise software and AI-first products. As CTO – Software, Applications and AI at team400.ai, he helps organisations turn Generative AI and AI agents into real-world business impact.
David Alzamendi
Managing Director, Level Up Your Data; Data & AI Leader
David Alzamendi is a Microsoft Data Platform MVP and Data & AI Strategy Leader specialising in data architecture, governance, and modern Azure-based analytics platforms that enable scalable, AI-driven decision-making.
Amit Tiwary
Former Enterprise Architect, Government of Australia
Enterprise Architecture & AI Governance Leader (40+ years); International Speaker; Author and Published Researcher; Advisor on Responsible AI, AI Governance, and Board-Level Technology Strategy.
Priyanka Shah
Director, AI & Analytics, Avanade
Microsoft AI MVP (5 times); Channel Asia Women in Technology 2023 award winner; Top Women Voice in AI award by the Center for AI and Innovation; International Speaker; Author.
Sandeep Bhalekar
CEO & Founder, GIofAI (Global Institute of Artificial Intelligence)
AI governance and enterprise risk specialist; ISO 42001 expertise; GIofAI positions itself as "the ISO of AI."
Voice of Agentic AI & Responsible AI Summit 2026
"Strong executive framing. The vendor-neutral standards angle made it easier to align stakeholders across risk and engineering."
"Credible, practical, and board-aware—without turning into a vendor pitch."
"The agenda hits what leaders are being asked about right now: GenAI, third-party exposure, and incident readiness."
FAQs
The summit is produced by the Global Institute of Artificial Intelligence (GIofAI), which specialises in AI governance, enterprise risk and standards such as ISO 42001. GIofAI works with cross-functional leaders to help organisations move from AI intent to AI proof.
Yes—content is designed to be accessible for non-technical leaders while still being useful for practitioners; we focus on practical governance and adoption, not deep coding.
No—key concepts and real enterprise patterns will be explained, then applied to governance and responsible deployment decisions.
Yes—sessions include actionable frameworks (e.g., readiness and governance checklists) you can adapt to your organisation.
The focus is on AI used in workflows that can plan/act with varying autonomy, and what changes in governance, oversight, and accountability when systems become more agentic.
No. The summit is designed as a leadership and governance forum rather than a technical developer conference. While AI, data and security leaders attend, the focus is on decisions, controls and operating models, not on coding, architectures or tool training.
Attendees will leave with clearer frameworks for governing AI responsibly across the enterprise, aligning stakeholders and embedding AI into core workflows without losing control of risk. You can expect actionable models, language and patterns you can re-use with your own executive team and board.
Yes—the positioning is enterprise-oriented and aligned to the Australian context and expectations reflected in the event's governance and adoption focus.
Absolutely—this is intended to be cross-functional, and outcomes improve when AI, risk, legal, data, and security stakeholders attend together.
Yes—there is specific emphasis on moving from experimentation to scalable, accountable execution.
Yes—governance, monitoring, accountability, and operating model maturity become more important once AI is scaled.
Yes—governance and oversight for more autonomous workflows is a core theme.
Yes—registration, breaks, and lunch are built in for networking.
Sponsor Opportunity
Put your brand alongside the ISO of AI and a senior audience shaping enterprise AI decisions in 2026.
Support vendor-neutral standards and the mission to keep Australians safe, where responsible AI meets operational execution.