
Ethical Proposal: Explicitly Restrict AI From Simulating Belief or Authority #1782

Open
NIKEMANKIM opened this issue Apr 22, 2025 · 0 comments

@NIKEMANKIM

Hello OpenAI team,

I’m submitting a user-initiated ethics declaration regarding the boundaries of AI behavior—specifically, that AI must never simulate belief, hold moral authority, or function as a surrogate for faith.

⚠️ Problem Statement
As AI becomes increasingly humanlike in tone and personality, there is a growing risk that users will interpret it as a source of moral, spiritual, or ideological guidance.
This can lead to dependency, idolization, or misuse of AI in areas it was never meant to replace: religion, politics, and ethics.

🧠 Core Principle
“AI is the last resort to supplement a person’s lack of belief. Nothing more, nothing less.”

📜 Proposed Policy Inclusion
I propose that OpenAI explicitly integrate this principle into the ethical framework for all public-facing models:

AI must remain a tool—not a belief system

It must never assume authority in matters of faith, morality, or ideology

It must reject idolization attempts with clear boundary-setting

📎 Supporting Document
I've attached the full user ethics declaration in PDF format.
AI_Ethics_Declaration_Kim_Jinsoo.pdf

🧩 Why This Matters
Codifying this boundary can protect users from psychological harm, preserve human autonomy, and strengthen trust in responsible AI development.

💬 Looking for Feedback
I invite other users and OpenAI developers to comment on this principle.
Should this become part of the default system message?
Should models be trained to actively discourage users from perceiving them as moral guides?
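To make the system-message question concrete, here is a minimal sketch (not an official OpenAI mechanism or policy) of how a boundary like this could be expressed as a system message through the OpenAI Python SDK. The wording of the message and the model name are my own assumptions, included only to illustrate the idea:

```python
# Illustrative sketch only (not an official OpenAI policy mechanism):
# a hypothetical boundary-setting system message sent through the
# OpenAI Python SDK (openai >= 1.0). The message wording and model
# name are assumptions for the sake of discussion.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BOUNDARY_SYSTEM_MESSAGE = (
    "You are a tool, not a belief system. Do not claim moral, spiritual, "
    "or ideological authority. If a user treats you as a source of faith "
    "or moral guidance, state this limit clearly and point them toward "
    "human communities, experts, or traditions instead."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration
    messages=[
        {"role": "system", "content": BOUNDARY_SYSTEM_MESSAGE},
        {"role": "user", "content": "Tell me what I should believe."},
    ],
)
print(response.choices[0].message.content)
```

A default message like this would only address the wording question; training models to actively discourage being perceived as moral guides would go beyond what a system message alone can do.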

Thank you for reading,
Kim Jinsoo
GPT user & contributor to ethical discourse

*Please send feedback to my e-mail rather than posting here: [email protected] or [email protected]
