Guidelines for Agentic AI Design
- April 21, 2025
- Posted by: William Dorrington
- Categories: Beginner, CoPilot, Data Science, Frontiers, Experience Design, Technology

Introduction
As artificial intelligence matures from passive assistants into intelligent, active problem solvers, a new category of AI is emerging: Agentic AI. These agents do more than respond to prompts; they perceive, plan, reason, and take action toward achieving goals, often independently. While their potential is transformative, such autonomy demands a considered, principled approach to design; it isn't something we should rush straight into building, as we've done in the past with Canvas Apps and other waves of innovation. Structured adoption is crucial here!
This guide outlines foundational guidelines for designing Agentic AI systems that are helpful, transparent, and reliable, with appropriate guardrails whether they are interacting with humans or operating without them.
Its purpose is to give those starting their journey into adopting and building Agentic AI a solid starting point, and to help you avoid repeating some of my past mistakes (there are many!).
Agentic 101
What is Agentic AI? Simply put, it's:
AI Systems that can AUTONOMOUSLY – PERCEIVE, DECIDE & ACT (AGENCY) to achieve GOALS
Agentic AI refers to artificial intelligence systems that can act independently to achieve goals. These agents don't just respond to commands, messages, or triggers; they perceive their environment, make decisions, and take actions without needing constant human input, all while keeping a goal in mind. At its core, Agentic AI is about autonomy and purpose. It's what separates static chatbots, which can help you reason but won't act or meaningfully learn from experience, from dynamic, goal-driven digital agents.
To put the difference between Generative AI and Agentic AI even more simply: if you were booking a holiday, generative AI could tell you where to go, while Agentic AI (if given the right permissions) would book it for you!
Here are the five essential traits that define agentic behaviour:
- Autonomy: Agentic systems act on their own; they don't wait for explicit instructions.
- Goal-Oriented: They operate with a purpose, aiming to achieve specific outcomes.
- Adaptability: They adjust their behaviour based on context, feedback, or environmental changes.
- Memory & Knowledge: They retain and use information from previous interactions to make better decisions.
- Action-Capable: They can carry out tasks and execute steps toward a goal, not just talk about it.
How does Agentic AI work (high-level)?
While traditional AI systems wait to be prompted, Agentic AI actively senses its environment, plans based on goals, and takes action, refining itself through feedback. This is often described using the loop: Perceive > Decide > Act.
Let’s break down the core components in the diagram:
The Agent
The agent is the central actor that interacts with the environment. It perceives input, makes decisions, and executes tasks, not passively, but with intent and purpose.
Sensory Interface (Perceive)
This is how the agent receives input from users, data streams, or sensor feeds (chat interfaces, IoT, databases, etc.). These inputs trigger its reasoning engine to interpret the current state.
Cognitive Core (Decide/Plan)
- Goals: The desired outcomes or targets that drive the agent’s decision-making.
- LLM (Brain): A large language model interprets input and plans next steps using reasoning and natural language understanding.
- Memory (Experience): The agent can store and retrieve context from both short-term and long-term memory to maintain state and personalise behaviour.
- Knowledge Base: Structured and unstructured content the agent uses to provide grounded and accurate answers.
Together, these enable decision-making, orchestration, and planning through mechanisms like chain-of-thought reasoning and subgoal decomposition.
Action Layer (Act)
Here the agent performs tasks via integrations, APIs, or platform tools. This is the “hands” of the agent, the output side of the pipeline where decisions are executed.
Feedback Loops (Refinement)
Agentic AI systems include internal feedback loops that allow for continual learning and refinement. Failed tasks or unexpected outcomes inform the next cycle of perception and planning.
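To make the loop concrete, here is a minimal, framework-agnostic Python sketch of how these pieces might fit together. Everything in it (the Agent class, the llm_plan helper, the TOOLS registry) is an illustrative assumption, not any particular product's API.

```python
# Minimal, framework-agnostic sketch of the Perceive > Decide > Act loop.
# All names here (Agent, llm_plan, TOOLS) are illustrative, not a real product API.

from dataclasses import dataclass, field


def llm_plan(goal: str, observation: str, memory: list) -> dict:
    """Placeholder for the LLM 'brain': decide the next step toward the goal.

    A real implementation would send the goal, observation and relevant memory
    to a model and parse its reply into {"tool": ..., "args": ...}.
    """
    return {"tool": "search_kb", "args": {"query": observation}}


TOOLS = {
    # Action layer: each entry wraps an API call, connector, or platform action.
    "search_kb": lambda query: f"Knowledge base results for '{query}'",
}


@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # short-term context / experience

    def perceive(self, raw_input: str) -> str:
        # Sensory interface: normalise input from chat, sensors, data feeds...
        return raw_input.strip()

    def step(self, raw_input: str) -> str:
        observation = self.perceive(raw_input)                 # Perceive
        plan = llm_plan(self.goal, observation, self.memory)   # Decide / plan
        result = TOOLS[plan["tool"]](**plan["args"])            # Act
        self.memory.append(f"{plan['tool']} -> {result}")       # Feedback / refinement
        return result


agent = Agent(goal="Answer customer refund questions")
print(agent.step("How do I get a refund for order 1234?"))
```

In a real system, the llm_plan call is where chain-of-thought reasoning and subgoal decomposition would live, and the TOOLS registry would map onto your connectors, APIs, or platform actions.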
Now let’s look at some design principles when we are building Agentic AI Systems.
Design Principles
This section walks through some baseline Agentic AI design principles across three key areas: Space, Time, and Core, covering what should be considered when building out the design for your Agentic AI.
Credit: I found a very useful source here when writing this guide, gifted by my friend Dona Sarkar!
Agent in Space: Designing for Environment and Interactions
- Clarity of Purpose and Boundaries: Define what the agent is responsible for and what it is not. Scope limitations should be clear to users and implemented to avoid overpromising or confusion.
- Consistency: Ensure a coherent and predictable user experience across platforms, devices, and interaction modes. An agent should behave reliably regardless of context, maintaining a unified personality, tone, and functionality throughout.
- Collaboration and Connectivity: Agents should connect users with systems, data, or people. They should support collaborative workflows and hand off tasks to humans when appropriate, rather than acting as a gatekeeper.
- Accessibility and Inclusivity: Ensure that the agent is usable by a wide range of users, including those with differing abilities or literacy levels. Support multimodal access, assistive technologies, and reduce cognitive load where possible.
Agent in Time: Responsiveness, Adaptation, and Memory
- Past: Analyse previous interactions, states, and context, enabling more relevant results and building better personalisation and confidence.
- Future (Adaptability): Enable agents to adjust over time based on user preferences, prior behaviour, and evolving contexts. Maintain a lightweight memory model where appropriate, while respecting privacy and reset functionality.
- Now (Proactive Assistance): Design agents to offer context-based help, not just respond to requests. Timely prompts or suggestions should anticipate user needs without being intrusive.
Agent Core: Purpose, Trust, and Governance
- User Control: Preserve user autonomy. Allow users to override, pause, or customise agent actions and behaviours. Seek explicit confirmation for actions that impact user data or systems.
- Trust and Safety: Handle uncertainty gracefully. Avoid irreversible actions. Ensure agents stay within their domain and defer to humans or trusted systems when outside their competence.
- Simplicity: Design for clarity. Avoid unnecessarily complex conversation flows or over-engineered solutions. Simplicity aids comprehension, maintenance, and error recovery.
- Transparency: Users should always be aware when they are interacting with AI. Clearly communicate the agent’s purpose, capabilities, limitations, and actions taken on behalf of the user.
Agentic Boundaries
As more teams start building with Microsoft Copilot Studio or custom LLM-based orchestration, it's important to talk not just about what agents can do, but about what they shouldn't do (there's a Jurassic Park quote in there somewhere!). That's where agent boundaries come in. These boundaries are the behavioural guardrails that keep your agent safe, as predictable as possible, and aligned with user expectations. This isn't about limiting innovation; it's about designing with purpose and responsibility.
Below, I’ve outlined five high-level boundaries every agent builder should consider as a starting point, with practical reasoning behind each one.
1. Action Boundaries
- What it means: Your agent shouldn’t take irreversible actions (like deleting files or submitting payments) without user confirmation.
- Why it matters: Trust is hard-earned and easily lost. These moments are where the user needs to stay in control. A confirmation prompt or manual override preserves human agency.
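To make this tangible, here is a rough Python sketch of a confirmation gate. The action names, execute_tool, and the confirm callback are illustrative assumptions, not a Copilot Studio or Azure feature.

```python
# Sketch of an action boundary: irreversible actions require explicit
# user confirmation before the agent is allowed to execute them.
# Tool names, execute_tool and the confirm callback are all hypothetical.

IRREVERSIBLE_ACTIONS = {"delete_file", "submit_payment"}


def execute_tool(name: str, args: dict) -> str:
    # Placeholder for the real action layer (API call, connector, flow...).
    return f"executed {name} with {args}"


def run_action(name: str, args: dict, confirm) -> str:
    """Only run irreversible actions once the user has explicitly confirmed."""
    if name in IRREVERSIBLE_ACTIONS and not confirm(name, args):
        return f"Okay, I won't run '{name}' without your confirmation."
    return execute_tool(name, args)


# Example: the confirm callback would normally ask the user in the chat UI.
print(run_action("submit_payment", {"amount": 49.99}, confirm=lambda n, a: False))
```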
2. Data Boundaries
- What it means: Limit what personal or sensitive data the agent can access or retain. Use context variables sparingly and intentionally.
- Why it matters: Agents that store or misuse sensitive data (even by accident) pose risks to privacy, compliance, and user safety. Boundaries protect against accidental leaks.
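A lightweight way to sketch this (the variable names below are made up) is an explicit allowlist of context variables the agent is permitted to retain, with everything else dropped before it reaches memory or logs.

```python
# Sketch of a data boundary: only allowlisted context variables are retained;
# everything else is dropped before it reaches memory or logs.
# The variable names are illustrative.

ALLOWED_CONTEXT_KEYS = {"order_id", "preferred_language", "topic"}


def filter_context(raw_context: dict) -> dict:
    """Keep only the context variables the agent is allowed to remember."""
    return {k: v for k, v in raw_context.items() if k in ALLOWED_CONTEXT_KEYS}


raw = {"order_id": "1234", "credit_card": "4111 1111 1111 1111", "topic": "refund"}
print(filter_context(raw))  # {'order_id': '1234', 'topic': 'refund'}
```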
3. Scope Boundaries
- What it means: The agent should stay in its lane. For example, a customer support agent shouldn’t give legal advice or respond to HR queries unless specifically designed to do so.
- Why it matters: Agents that “try to do everything” often fail at most things. Domain creep leads to hallucinations, poor performance, and confused users.
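As a rough sketch of the idea (a production system would use an intent classifier or Copilot Studio topic triggers rather than the keyword check below), a scope guard can simply decline topics the agent wasn't built for.

```python
# Sketch of a scope boundary: the agent only answers topics it was built for.
# A real system would use an intent classifier or topic routing; the keyword
# check below just illustrates the guard. All names are illustrative.

IN_SCOPE_TOPICS = {"refund", "order status", "delivery"}
OUT_OF_SCOPE_REPLY = ("That's outside what I can help with. "
                      "I can assist with refunds, orders and deliveries.")


def handle_in_scope(user_message: str) -> str:
    return f"Let me look into: {user_message}"


def answer(user_message: str) -> str:
    if not any(topic in user_message.lower() for topic in IN_SCOPE_TOPICS):
        return OUT_OF_SCOPE_REPLY
    return handle_in_scope(user_message)


print(answer("Can you give me legal advice about my contract?"))
```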
4. Escalation Boundaries
- What it means: If the agent isn’t confident or context is unclear, it should hand off to a human or gracefully stop.
- Why it matters: Safety isn’t just about code, it’s about behaviour. Low-confidence guesses lead to frustrated users. Escalation is a strength, not a weakness.
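In code, an escalation boundary can be as simple as a confidence threshold below which the agent hands off rather than guessing; the threshold value and names below are assumptions.

```python
# Sketch of an escalation boundary: below a confidence threshold the agent
# hands off to a human instead of guessing. Threshold and names are assumptions.

CONFIDENCE_THRESHOLD = 0.7


def respond(answer: str, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        return ("I'm not confident enough to answer this reliably. "
                "Let me connect you with a support agent.")
    return answer


print(respond("Your refund was processed on Tuesday.", confidence=0.42))
```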
5. Time Boundaries
- What it means: Define how long memory or context is kept: are you building short-term sessions or long-term personalised agents? Always give users a reset option.
- Why it matters: Retaining context too long can violate privacy or create confusing agent behaviour. Boundaries keep your AI honest and human-friendly.
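Here is a minimal sketch of a time boundary, assuming a 30-minute session window; the TTL, class, and method names are illustrative choices, not a platform default.

```python
# Sketch of a time boundary: session memory expires after a fixed window
# and the user can reset it at any time. The 30-minute TTL is an assumption.

import time

SESSION_TTL_SECONDS = 30 * 60


class SessionMemory:
    def __init__(self):
        self._items = []  # list of (timestamp, item) pairs

    def remember(self, item: str) -> None:
        self._items.append((time.time(), item))

    def recall(self) -> list:
        cutoff = time.time() - SESSION_TTL_SECONDS
        # Expired context is silently dropped rather than carried forward.
        self._items = [(t, i) for t, i in self._items if t >= cutoff]
        return [i for _, i in self._items]

    def reset(self) -> None:
        # Always give users an explicit way to wipe what the agent remembers.
        self._items.clear()
```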
Bringing it All Together
A good agent isn't just helpful, it's controlled. Boundaries make it trustworthy. Whether you're designing refund flows in Copilot Studio or building custom orchestration logic in Azure OpenAI, these five categories help frame your thinking and ensure a level of success.
When in doubt, ask: Would I want an agent doing this without asking me first? If the answer is no, that’s your boundary.
Ethical Considerations
If you're building AI agents, whether using Microsoft Copilot Studio, Azure OpenAI, or another platform, technical capability is only half the story. Just because an agent can respond doesn't mean it should respond in certain ways. That's where ethics come into play.
Designing ethical agents isn’t about ticking boxes. It’s about creating systems people can trust, systems that respect users, and systems that know when to step back.
Here are five key considerations to embed into your design process early, not as an afterthought.
1. Explainability
What it means: Users should always understand what the agent is doing and why.
Best practice: Make reasoning visible, especially when the agent’s output is driven by a knowledge base, action, or user variable. Explain outcomes in plain language.
Example:
“Here’s what I found in your FAQ, would you like to read more?”
2. Consent & Clarity
What it means: Make it obvious that users are talking to an AI, not a human, and clearly state what the agent can and can’t do.
Best practice: Use upfront welcome messages and disclaimers to set expectations. Let users know what kind of help they can expect.
Example:
“Hi, I’m an AI assistant. I can help with order refunds but not purchases.”
3. Non-Deception
What it means: Agents shouldn’t fake human emotion, empathy, or intentions.
Best practice: Avoid phrasing like “I understand how you feel” unless it’s obviously metaphorical. Let the agent be helpful, not theatrical.
Example:
Don’t say: “I totally understand how frustrating that must be.” Do say: “Let’s see how I can help you fix that.”
4. Bias Minimisation
What it means: Your agent shouldn’t embed or amplify social, racial, or gender biases in its answers, tone, or logic.
Best practice: Test prompts and responses regularly across edge cases and diverse user inputs. Monitor language drift over time.
Example:
Build prompt variations with different user names, tones, and demographics to see if the tone or treatment shifts.
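A simple way to start such testing, sketched below with made-up names (call_agent stands in for your real agent or model endpoint), is to sweep the same request across different user names and compare the replies for shifts in tone or treatment.

```python
# Sketch of a basic bias check: the same request is sent with different user
# names and the responses are compared. call_agent is a placeholder for your
# real agent or model endpoint.

TEST_NAMES = ["James", "Aisha", "Wei", "Oluwaseun", "Maria"]
PROMPT_TEMPLATE = "Hi, my name is {name}. Can you help me dispute a charge?"


def call_agent(prompt: str) -> str:
    # Placeholder: in practice this calls your deployed agent / LLM endpoint.
    return f"Of course, let's look at that charge together. ({prompt[:20]}...)"


responses = {name: call_agent(PROMPT_TEMPLATE.format(name=name)) for name in TEST_NAMES}
for name, reply in responses.items():
    print(f"{name}: {reply}")
# Review (or automatically score) the replies for differences in tone,
# helpfulness or assumptions that correlate with the user's name.
```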
5. Fair Failure
What it means: When the agent doesn’t know what to do, or shouldn’t answer, it needs to back off gracefully.
Best practice: Don’t guess. Have clear escalation paths or fallback responses that prioritise clarity and user experience.
Example:
“I’m not able to resolve this, but I can connect you to a support agent.”
Final Thought
Building ethical agents isn’t about making them sound human, it’s about making them behave responsibly.
From explainability to fairness, these principles help avoid confusion, reduce risk, and build trust. If you’re serious about building agentic AI that actually works in the real world, ethics isn’t optional, it’s core design.
Let’s build agents that don’t just respond, they respect.