My AI Ethics Charter
I used GenAI to help me draft this AI Ethics Charter, which reflects my personal commitments around GenAI use.
Opening Statement
I am a project manager who uses AI tools in my professional practice. This charter defines the personal commitments I make to ensure my use of AI reflects my professional standards, protects those I work with, and preserves my own judgment and accountability. I created this document through deliberate reflection on the risks and responsibilities of AI use in my specific context.
My Commitments
Pride — Verification and Oversight
I will review all AI-generated outputs before they are used to inform project decisions, plans, or documentation.
I will subject AI-generated framework elements — including goals, purpose, outcomes, success measures, and verification methods — to stakeholder validation before they are treated as confirmed project design.
I take responsibility for the accuracy and fitness of any AI-assisted output that carries my name or enters my project record.
Greed — Data and Privacy
I will not input personally identifiable information (PII) or business identifiable information (BII) into AI tools without explicit consideration of the privacy and data sovereignty implications.
I take personal responsibility for data handling decisions in my AI use, recognising that the absence of organisational guidelines does not reduce my professional obligation to protect client and stakeholder information.
I will establish and maintain my own working standard for what categories of project information are and are not appropriate to input into AI tools, and I will apply that standard consistently.
Lust — Dependency and Ownership
I will maintain a varied approach to AI use — sometimes leading with my own thinking, sometimes using AI as a starting point — and I will not allow AI to become the default first and only move in my analytical process.
For significant AI-assisted outputs, I will allow adequate reflection time before acting on or committing to what AI has produced, ensuring my own judgment has genuinely engaged with the material.
I retain personal ownership of all professional judgments that carry my name, regardless of the degree to which AI contributed to the thinking behind them.
Envy — Integrity and Attribution
I will disclose AI involvement in my work to stakeholders, clients, and colleagues where it is relevant to their understanding of how an output was produced.
I will not present AI-assisted work as purely my own where the AI contribution was substantive.
If production pressure increases in future, I commit to maintaining my current transparency standards rather than allowing pace demands to erode honest attribution practice.
Gluttony — Purpose and Restraint
I will define a clear purpose for each AI generation task before I begin, and I will stop generating when that purpose has been met rather than continuing to accumulate options or variations.
I will use deliberate pauses in my AI workflow — stepping away from output before acting on it — as a structured practice to avoid self-generated overload and confused decision-making.
I will share only the volume of AI-assisted material that a stakeholder group can meaningfully engage with, curating output before it reaches others rather than passing the burden of filtering to them.
Wrath — Responsibility and Harm Prevention
I accept full personal responsibility for all AI-assisted outputs that carry my name or are acted upon in my projects — I will not attribute poor outcomes to AI as a means of diffusing my own accountability.
I will ensure that significant stakeholder-facing content produced with AI assistance is reviewed by another person before it is published or distributed.
I will not use AI to generate communications or content intended to manipulate, pressure, or misrepresent information to stakeholders or clients.
Sloth — Judgment and Cognitive Ownership
I will continue to engage with books, articles, and external professional thinking as a deliberate practice to maintain and develop my independent judgment, separate from AI-assisted work.
I will periodically test my ability to work through complex PM tasks — risk analysis, framework development, research synthesis — without AI assistance, to ensure my independent analytical capability remains intact.
I will treat AI as a thinking partner that augments my judgment, not a replacement for the professional reasoning I am accountable for developing and maintaining.
Declaration
I created this charter through deliberate reflection on my own AI usage. I take personal accountability for upholding these commitments in my daily practice. I will review and update this charter as my use of AI evolves.
I created this charter using a prompt written for this purpose.
Accessing the Tool
I offer the tool in two ways.
First, as a Claude Artifact.
Second, through Poe.com.
If you access it through Claude, you will need a Claude account.
If you access it through Poe, you will need a Poe account.
What is Poe, you ask? It is an online portal that gives people access to multiple LLMs at their fingertips. One account, multiple LLMs.
One other note: I don't see your conversation with the tool. The tool is served via either Claude or Poe, and the conversation remains within your account.
Here are the links:
Claude:
https://claude.ai/public/artifacts/96dcf7f1-7e34-4a68-9bb1-3207ee15238b
Poe:
Personal AI Ethics Charter - Poe.com
I would really appreciate your feedback on this tool. Please contact me with your thoughts.