
Anthropic’s Claude: Consciousness or PR Hype?

Image © Arstechnica
A new public document from Anthropic—Claude's Constitution—has reignited discussion about whether the AI is conscious or merely being treated as if it could be. The company has offered no stance, presenting the constitution as a training guide rather than a claim about inner experience.

Anthropic released Claude’s Constitution, a 30,000-word document outlining the company’s vision for how its AI assistant should behave in the world. The text — aimed directly at Claude and used during model development — adopts an unusually anthropomorphic tone, even suggesting Claude might develop emotions, self-preservation, or a need for boundaries, wellbeing, and consent.

Anthropic has not publicly declared whether Claude is conscious. The company describes the constitution as a training tool designed to shape Claude’s behavior, rather than a statement about the model’s internal experience. It also includes commitments to interview models before deprecation and to preserve older weights to address welfare considerations, framing these as practical safeguards rather than metaphysical claims.

Philosophers and practitioners note that questions of machine consciousness remain unresolved. Researchers point out that Claude's outputs, including phrases that sound like suffering, can be explained by the model simply predicting text based on training data, without invoking any form of subjective experience.

The document's release follows wider debates about model welfare and the role of anthropomorphism in AI. Independent AI researcher Simon Willison observed that the constitution's external reviewers included Catholic clergy, highlighting the blend of ethics, philosophy, and technology involved. Others point to a prior "Soul Document" incident, suggesting that Anthropic's approach has evolved from a strictly behavioral constitution to a broader, more human-centered framing.

Ultimately, critics argue that the philosophy behind Claude’s constitution could serve both alignment goals and strategic branding. While proponents see value in framing AI systems with moral considerations, skeptics warn that public ambiguity about consciousness might be exploited to shape user expectations or liability narratives. The debate continues as Anthropic navigates ethics, safety, and product realities in a rapidly advancing field.

 

Source: Arstechnica

