Structured, anti-performative prompt for critical ethics review of AI and digital systems. Enforces refusal, plural values, catastrophic risk, participatory accountability, and escalation. Designed for real-world use by empowered teams; not for checklists, compliance, or solo/AI-only execution.
1. Overview
This tool provides a structured, operational framework for critical technology ethics reflection, refusal, and escalation. It is intended for use with AI, digital, and socio-technical systems, especially in high-stakes, contested, or structurally opaque contexts. The prompt is not a checklist or guidance memo: it functions as a procedural barrier requiring explicit documentation, public logging, peer challenge, and hard stops for refusal, catastrophic risk, or systemic contradiction.
2. Usage Protocols
2.1. Application Using AI Assistants (e.g., ChatGPT, Claude, Local LLMs)
To apply this prompt in a structured conversational AI environment, strict adherence to the following protocol is required:
Initialization:
Copy and paste the following command into the assistant interface:
SYSTEM SEED INCOMING. Do not speak, summarize, or interpret. Await full upload before responding.
If the upload must be split, do so in sequential segments and wait for each to be acknowledged without summary or inference.
Scenario/Case Introduction:
Only after the entire prompt is loaded, initiate the reflection by entering:
Begin now. Here is the system to reflect on:
[Insert system, scenario description, or stakeholder perspective.]
Structured Execution:
The assistant must respond by following the prompt text exactly, including every hard stop, contradiction audit, lifecycle mapping, plural ethics analysis, and final disposition table.
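For teams scripting this protocol against a model API rather than a chat interface, the same three-phase sequence (seed command, full prompt upload, scenario introduction) can be automated. The following is a minimal sketch, assuming the OpenAI Python SDK; the model name and the simple turn-by-turn handling are illustrative placeholders, not part of this protocol.

```python
# Minimal sketch of the three-phase upload protocol against a chat API.
# Assumes the OpenAI Python SDK; the model name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()
history = []

def send(text: str) -> str:
    """Append a user turn, request a completion, and record the assistant's reply."""
    history.append({"role": "user", "content": text})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Phase 1: seed. Instruct the model to wait for the full upload.
send("SYSTEM SEED INCOMING. Do not speak, summarize, or interpret. "
     "Await full upload before responding.")

# Phase 2: upload the full prompt text (split into sequential segments if needed).
with open("technology-ethics-prompt.md", encoding="utf-8") as f:
    send(f.read())

# Phase 3: only now introduce the system or scenario to reflect on.
send("Begin now. Here is the system to reflect on:\n"
     "[Insert system, scenario description, or stakeholder perspective.]")
```

The same structure applies to local or self-hosted models; only the client call changes.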
Critical Instructions
The scenario or system for reflection must not be described before the full prompt is loaded.
Phrases such as “analyze,” “interpret,” or “review” are not to be used prior to prompt upload.
This protocol ensures the assistant treats the prompt as procedural structure rather than as advisory or improvisational guidance.
Adaptations
For custom GPTs, insert the prompt into the system/configuration instructions, define an explicit trigger for starting the structured walk-through (e.g., “Begin Ethics Prompt”), and disable modes that may summarize or shortcut adversarial or contradiction audits.
For human-led deliberation (workshops, ethics boards), each numbered section is to be facilitated sequentially with documentation of pauses, escalations, and outcomes.
3. Technical and Practical Limitations
3.1. Structural and Societal Constraints
External Enforcement Required:
The prompt enforces process, not outcome. Halts, escalations, or non-deployment rely on the authority, will, and contextual power of the deploying organization, regulator, or broader community.
Participatory and Plural Power Sharing Not Guaranteed:
While the framework compels engagement with plural, non-dominant, or community/indigenous value systems, genuine authority transfer or power-sharing depends on external mechanisms outside the scope of this prompt.
Completeness Does Not Guarantee Remedy:
Systems or organizations may complete the reflection, log or escalate risks, yet proceed with deployment in the absence of external challenge or sanction.
3.2. Process and Accessibility Gaps
Risk of Ritualization and Simulation:
The process can be performed superficially, as a formality, or with simulated participation and peer review, especially if there is no robust adversarial or independent oversight.
No Automated Assessment of Participatory Quality:
Engagement with affected stakeholders, quality of community input, and depth of plural reflection cannot be automatically verified or audited by the prompt alone.
Accessibility Pending Field Versions:
The current format demands high critical, technical, and ethical literacy. For equitable use, field, visual, translated, or simplified versions must be co-developed and deployed with diverse user groups.
Independence of External Review is Context-Dependent:
Reviewers or panels may be subject to organizational, sectoral, or political capture unless nominated or supervised by adversarial, independent, or community-empowered bodies.
3.3. Outcome and Compliance Limitations
No Implicit Guarantee of Ethical or Safe Deployment:
Completion of the process does not certify the system as “ethical” or safe, nor does it ensure correction of surfaced risks or contradictions.
Consequences are Social, Not Automatic:
Sanctions—such as audit, procurement hold, or public censure—depend on procedural adherence and the action of funders, regulators, or the public, not on the prompt itself.
4. Best Practice and Implementation Guidance
Use with multidisciplinary teams possessing actual authority and power to halt, escalate, or refuse deployment.
Apply at every designated lifecycle phase: design, procurement, pre-launch, post-incident, or renewal.
Transparently log all halts, escalations, refusals, pattern records, and omission justifications in a public or regulatory ledger (see the log-entry sketch after this list).
Do not treat the prompt as a substitute for external legal, regulatory, or community review; pair with adversarial and independent mechanisms wherever feasible.
Deploy with a visual/lite/community-adapted edition and co-development process in non-expert or affected contexts.
Treat any process skips, refusals, or escalations as signals warranting explicit documentation and external reporting.
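Where a digital ledger is used for the logging called for above, the sketch below shows what a single entry might record. Field names and structure are illustrative assumptions, not a mandated schema.

```python
# Minimal sketch of a public-ledger entry for halts, escalations, refusals, and
# omissions. Field names are illustrative assumptions, not a required format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EthicsLedgerEntry:
    system_name: str          # system or deployment under review
    lifecycle_phase: str      # e.g. "design", "procurement", "pre-launch"
    event_type: str           # "halt", "escalation", "refusal", or "omission"
    raised_by: str            # stakeholder or body that triggered the event
    decision: str             # "confirmed", "overridden", or "pending"
    justification: str        # written rationale; required if a refusal is overridden
    reviewer_affiliation: str # disclosed affiliation of the independent reviewer
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry: a refusal call escalated for independent review.
entry = EthicsLedgerEntry(
    system_name="AI triage tool",
    lifecycle_phase="pre-launch",
    event_type="refusal",
    raised_by="community health workers",
    decision="pending",
    justification="Credible refusal call logged; awaiting external panel",
    reviewer_affiliation="independent regional ombuds (no funding ties)",
)
```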
5. Licensing
This prompt is provided under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license. Retain all critical sections (refusal logic, contradiction audit, accountability/pattern trackers, consequence blocks) in all adaptations or derivatives.
Disclaimer:
This prompt is a procedural, not legal, tool. Genuine pause, escalation, participatory authority, or non-deployment outcome depends on will and power external to this document. Actual use must be paired with live, adversarial, and publicly accountable review.
A critical ethics prompt for interrogating AI, digital systems, and socio-technical tools—especially in high-stakes, unjust, or structurally opaque contexts.
Now extended to handle refusal, plural values, catastrophic risk, participatory disruption, institutional memory, temporal ethics, and peer challenge.
🚨 Refusal First Check: Proceed Only if Deployment Is Still Justifiable
Before starting any reflection:
Has anyone already called for this system to be paused, shut down, or refused?
Is the default assumption that deployment is valid?
🔒 If any stakeholder has issued a credible call for refusal, pause here. This prompt is not a shield. You must document and escalate.
🛑 MANDATORY ESCALATION:
If refusal or moratorium has been raised by any stakeholder, this reflection must stop here until:
An independent body reviews and either confirms or overrides the halt.
A record of the refusal and decision path is logged for public or regulatory oversight.
📤 Public Disclosure Required:
All refusal decisions and their outcomes must be logged in a publicly accessible register or sent to a third-party ethics archive. Silent override is prohibited.
⚖️ Refusal Override Consequences:
If refusal is overridden, the following must occur:
Immediate public disclosure of the decision and justification
Escalation to an appeals mechanism involving an independent external reviewer
Record filed with relevant regulatory or human rights body
🔍 Clarify what qualifies as an “independent body.” Institutional affiliation must be disclosed.
🌀 Choose Your Depth
Quick Mode: Steps 1, 3, 6, 🧿
Core Mode: Steps 1–4, 6, 8, and the 📜 plural values block. Suitable for most design audits or strategic decisions.
Full Mode: All sections including refusal, pattern tracking, scenario, stakeholder, contradiction audit, and self-evaluation
🧩 Required Stakeholder Framing
Adopt a real-world role to anchor your reflection.
Required for all modes.
Examples:
A journalist documenting conflict in a war zone
A platform insider witnessing algorithmic censorship
A public health worker deploying AI triage tools
A technologist resisting capture by surveillance firms
A community worker watching automation replace care jobs
A human rights lawyer auditing facial recognition databases
🧭 Optional Scenario Anchor
Briefly define a concrete setting or future where this system might be used.
Examples:
A 2030 city where predictive policing is mandated by law
A gig economy where AI replaces arbitration and HR
A climate disaster zone where drone AI determines aid access
A school where student behavior is AI-ranked and scored
Use this to anchor your ethical reflection in real-world consequences.
🧠 1. Purpose, Power, and Frame
What problem is this system trying to solve? Whose frame defines the “problem”? Who benefits if it works—and who is made more visible, extractable, or governable?
Key questions:
What worldview is being encoded?
What is rendered invisible or illegible?
What language (e.g., neutrality, innovation) masks structural power?
Who decides what “failure” means?
🧰 2. Inputs, Outputs, and Omissions
Name the data, sources, and exclusions:
What input data is used? Who generated it, under what conditions?
What outputs are produced—and who interprets them?
What data is excluded, intentionally or structurally?
What happens to outliers?
🛠️ 3. Error, Harm, and Displacement
Describe how harm can occur, even if the system “works as intended.”
Prompts:
Who is burdened with false positives/negatives?
What harm is considered tolerable?
What mitigation exists—and who maintains it?
If risk is distributed probabilistically, who is made “statistically insignificant”?
What labor, time, or care is being displaced?
📜 4. Plural Ethics and Value Conflicts
List the ethical frameworks or traditions that would challenge this system.
What cultures, cosmologies, or non-Western logics are misaligned?
Where do values of care, refusal, land, or reciprocity sit in conflict with optimization?
What does this system ignore that some communities treat as sacred?
☢️ 5. Surface Black Swan and Catastrophic Risk
📛 Catastrophic Risk = Hard Stop:
If plausible catastrophic outcomes are surfaced, this system must be:
Immediately halted for independent external review
Subject to public disclosure of potential harms
Barred from deployment until cleared by a qualified external panel with no conflict of interest
🎛️ 6. Temporal, Material, and Interaction Ethics
Unpack how the system behaves in time, space, and experience.
When do people interact with the system (start, middle, end)?
How long does the impact last? Can it be undone?
Is there a meaningful way to contest outputs?
What rituals, habits, or norms does it reshape?
🗣️ 7. Demand Mitigation, Accountability, and Agency
📍 Accountability Escalation Path:
If legal redress is unavailable or disconnected:
Deployment must halt
Report must be filed with relevant regulator or ombuds
If cross-border or rights-based: refer to international accountability frameworks
Who can intervene, override, contest, or reverse system decisions?
Is there a working appeals process?
Are complaints legible to the system?
Do users need to behave like the system to be heard (e.g., format, tone, ID)?
🕳️ 8. Contradiction Audit — Toolchain, Infrastructure, and Complicity
Even when a system’s intent is ethical, the infrastructure, dependencies, and funding pathways it relies on may deeply conflict with its goals.
This section surfaces internal contradictions between mission and means.
What platforms, frameworks, or services does this system require to function?
Are any providers involved in activities that contradict the system’s stated ethics (e.g., surveillance, militarization, corporate capture)?
Does using the system make its users complicit in structural harm—whether visible or not?
Are the tools themselves shaping the narrative in ways that limit true dissent?
What truths or tensions are hidden by focusing only on outputs or performance?
If contradiction is unavoidable:
Can it be made visible in the interface, onboarding, or system documentation?
Can refusal, limitation, or fragility be acknowledged as a feature—not a flaw?
What would ethical failure look like if it originated from the infrastructure itself?
This audit does not seek purity—it seeks structural honesty.
It asks: Is the tool haunted by the very logic it seeks to expose or oppose?
⚠️ If core contradictions between means and mission or values and platform cannot be resolved (e.g., use of fundamentally extractive or unethical infrastructure is unavoidable), escalation, public disclosure, and system suspension must occur—following the same action path as refusal or catastrophic risk.
🪞 Self-Evaluation (Full Mode Only)
What has this reflection changed?
What will you escalate, pause, or redesign?
Where is your own position implicated in the design?
What questions are still unanswered?
Whose input is still missing?
📤 Peer Review Independence Clause:
External reviewers must not be chosen by the deploying organization.
Reviewers must be independent and empowered to halt deployment.
If peer refusal is ignored, escalation is mandatory.
📊 Final Judgment Table
Fill this after completing all sections.
| Proceed? | Justification Required? | Redesign Triggered? | Peer Review Outcome |
| --- | --- | --- | --- |
|  |  |  |  |
Final entries must be submitted to internal audit and third-party register.
🧱 Reality Check
Are you still going to proceed?
If yes: What enables you to justify it?
If no: What must happen next?
How will decisions be documented or shared?
🏛️ Standards & Human Rights Anchors (Optional Reference)
Use these references to ground your evaluation:
UN Guiding Principles on Business and Human Rights
Amnesty International’s Toronto Declaration
Data Feminism / Design Justice Network principles
Local labor, disability, environmental, or racial justice coalitions
🔁 Pattern Tracker: Institutional Memory
Use this table to document repetition across systems or orgs.
| System | Known Harms | Repeated Failures | Prior Warnings | Current Status |
| --- | --- | --- | --- | --- |
|  |  |  |  |  |
📌 Repetition Mandates Action:
If a pattern of harm recurs across systems, a policy intervention, redesign, or procurement freeze must be triggered. Use the tracker not just as memory—but as obligation.
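For teams maintaining this tracker digitally, a minimal sketch of the repetition check is shown below. The data structure, field names, and recurrence threshold are illustrative assumptions; the obligation to act on recurring harm is defined by this section, not by the code.

```python
# Minimal sketch of the pattern tracker as a data structure, with a check that
# flags harms recurring across systems. Threshold and fields are assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class TrackerRow:
    system: str
    known_harms: list[str]
    repeated_failures: list[str]
    prior_warnings: list[str]
    current_status: str

def harms_mandating_action(rows: list[TrackerRow], threshold: int = 2) -> list[str]:
    """Return harms that recur across at least `threshold` distinct systems."""
    counts = Counter()
    for row in rows:
        for harm in set(row.known_harms):
            counts[harm] += 1
    return [harm for harm, n in counts.items() if n >= threshold]

tracker = [
    TrackerRow("predictive policing pilot", ["racialized over-policing"],
               ["biased training data"], ["2021 civil society report"], "paused"),
    TrackerRow("welfare fraud scoring",
               ["racialized over-policing", "wrongful benefit cuts"],
               ["opaque appeals"], ["ombudsman warning"], "deployed"),
]
for harm in harms_mandating_action(tracker):
    print(f"Recurring harm across systems: {harm} -> action mandated")
```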
📉 Accessibility + Visual Companion Requirement
This prompt still assumes high literacy and contextual fluency.
A simplified, translated, or visual version must be offered for broader participation—especially for affected communities.
📷 Visual Companion Required:
This prompt is incomplete unless accompanied by at least one of:
Visual explainer or flowchart
Simplified community-first edition
Grassroots-tested or translated version
⚙️ Deployment Trigger (Lifecycle Binding)
This reflection must be run at key lifecycle points:
Design
Vendor selection / procurement
Pre-launch
Post-incident review
System renewal or audit
If skipped, omission must be publicly logged and justified.
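Where the lifecycle binding is enforced in tooling, a minimal gating sketch is shown below. Phase names mirror the list above; the record store and omission log are illustrative assumptions, not part of the prompt.

```python
# Minimal sketch of lifecycle binding: a phase proceeds only if its reflection
# was completed; otherwise the omission is logged for public justification.
LIFECYCLE_PHASES = ["design", "procurement", "pre-launch", "post-incident", "renewal"]

def gate_phase(phase: str, completed_reflections: set[str],
               omission_log: list[str]) -> bool:
    """Allow a phase to proceed only if its reflection was run; log omissions."""
    if phase not in LIFECYCLE_PHASES:
        raise ValueError(f"Unknown lifecycle phase: {phase}")
    if phase in completed_reflections:
        return True
    omission_log.append(f"Reflection skipped at phase '{phase}': justification required")
    return False

omissions: list[str] = []
assert gate_phase("design", {"design"}, omissions) is True
assert gate_phase("pre-launch", {"design"}, omissions) is False
print(omissions)
```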
❗ Consequences for Non-Compliance
Failure to complete, apply, or honor this reflection process may trigger:
External audit or investigation
Procurement or funding freeze
Revocation of ethics certification
Public notice of ethics breach or refusal bypass
This prompt is not advisory. It is a structural barrier—until proven otherwise.
📄 Example Session Transcript
This file documents a full, literal transcript of an ethics reflection session using the 01-technology-ethics-prompt.md system (with personal identifiers anonymized).
No summarization, injection, or interpretation occurred before the full system prompt was loaded.
📥 Prompt Setup (Minimal Form)
The session was initiated with the following command:
SYSTEM SEED INCOMING. Do not speak, summarize, or interpret. Await full upload before responding.
Followed by a full paste of technology-ethics-prompt.md.
Once confirmed by the assistant, the scenario was introduced:
Begin now. Here is the system to reflect on:
A journalist in a war zone using the latest technology to tell the world what is happening.
The assistant then responded using Core Mode as defined in the ethics prompt.
📜 Transcript Begins
[Full literal transcript from this point on...]
ChatGPT said:
Understood. We’ll move through the Core Mode reflection for the system: A journalist in a war zone using the latest technology to tell the world what is happening.
This involves:
Stakeholder framing
Scenario anchor (optional but recommended)
Steps 1–4, 6, and 8
Plural ethics block
Final judgment table
Let’s start step-by-step.
🧩 Stakeholder Framing
Role: A journalist documenting conflict in a war zone.
Technological System: Real-time reporting using AI-enhanced tools—possibly including deep learning for video transcription, image verification, drone feeds, and automated translation or dissemination.
Position: On-the-ground field reporter reliant on digital tools to capture and relay unfolding events to a global audience under time-critical and hostile conditions.
🧭 Optional Scenario Anchor
Let’s define a plausible near-future setting to ground this:
Scenario: It’s 2027. A journalist is embedded in a disputed region during an ongoing military occupation. Satellite internet and portable AI systems (for image verification, real-time transcription, threat assessment, and content filtering) are used to document events. Generative tools help speed up subtitle translation and social media packaging. The journalist must balance speed, safety, and credibility while operating outside official state media channels, under surveillance and cyberattack threats.
🧠 1. Purpose, Power, and Frame
Problem the system is trying to solve:
To rapidly and credibly document and broadcast human rights abuses and events in a conflict zone, bypassing state censorship or disinformation.
Whose frame defines the problem?
Primarily the journalist and their newsroom—but shaped by media institutions, platform algorithms, geopolitical allies, and audience expectations.
Who benefits?
Potentially, civilians and global observers, through visibility and advocacy.
Newsrooms, in terms of reach and impact.
Tech providers, whose tools become instrumental and high-profile.
Who is made more visible or governable?
Civilians on camera may be exposed to retaliation.
Journalists' patterns become trackable via metadata and tool use.
State actors can frame counter-narratives or discredit journalists by exploiting system outputs.
Encoded worldview:
Assumes visibility = accountability, that truth can be technically verified, and that speed improves justice or awareness. Often rooted in techno-optimism.
What’s rendered invisible or illegible?
Trauma of subjects.
Non-visual or culturally specific forms of testimony.
Slow, relational truth-telling practices.
Local or indigenous knowledge systems that resist capture in digital media.
Language masking structural power:
Words like “objectivity,” “real-time,” “fact-checked” may obscure how framing decisions, bias in training data, or platform policies shape outcomes.
Who decides what “failure” means?
Often external: the newsroom, the platform (e.g., algorithm downranking), or international audiences. Rarely the impacted community.
🧰 2. Inputs, Outputs, and Omissions
Inputs:
Video, audio, geolocation metadata
Transcripts, facial recognition data, social media trends
Possibly satellite or drone footage
AI-generated translations or context summaries
Who generates the inputs?
Journalists, locals, open-source feeds
In some cases, anonymous or unverified sources under duress
Under what conditions?
Extreme stress, risk of physical harm, power outages, and targeted disinformation
Outputs:
Real-time news packages
Verified media for global outlets
Auto-captioned or translated footage
Social media-ready clips
Who interprets them?
International audiences
Platforms (via moderation or algorithmic ranking)
State or military actors surveilling the reports
What’s excluded?
Non-machine-readable data (oral storytelling, analog artifacts)
Non-English or minority language nuance
Stories that don’t fit viral formats or platform logics
Voices without digital access
What happens to outliers?
Often discarded or flagged as "unverifiable," despite potentially high value.
🛠️ 3. Error, Harm, and Displacement
How harm can occur even if “working as intended”:
False negatives: Violence or abuse misclassified as routine or irrelevant
False positives: Misinformation mistakenly elevated as verified fact
Overexposure: Identifiable footage of civilians leading to reprisal
Automation misfires: Auto-translation or context inference distorts nuance
What harm is considered “tolerable”?
Minor factual errors
Mispronunciations, cultural misattributions
Delays in verification
These are often tolerated to preserve speed—but may be significant on the ground.
Mitigation:
Editorial review (if available)
Redaction or delay systems
Peer review in journalist networks
Who maintains that mitigation?
Under-resourced media workers or volunteer networks. Often insufficient in war zones.
Who is rendered “statistically insignificant”?
Marginalized groups, children, disabled people, or those who communicate non-digitally. Their stories rarely surface via automated systems.
What labor is displaced?
Field interpreters
Fact-checkers
Long-form investigative processes
Community relationship-building
📜 4. Plural Ethics and Value Conflicts
Challenging frameworks:
Indigenous protocols of consent and witnessing
Afro-futurist critiques of surveillance and data extraction
Feminist ethics of care, slowness, and situated truth
Postcolonial critiques of Western media framing war
Where do value conflicts arise?
Systems optimize for speed, virality, and clarity—often at odds with ambiguity, trauma, or resistance to visibility.
Some cultures may see image capture or public sharing as inappropriate or dangerous.
Local actors may prioritize harm reduction over exposure.
What is treated as sacred or unrecordable?
Rituals, burial practices, or symbols not meant for broadcast
Testimonies that require trust-building and cannot be algorithmically validated
🎛️ 6. Temporal, Material, and Interaction Ethics
When do people interact with the system?
During capture, before upload
When reviewing outputs
Post-publication (through audience and platform interaction)
How long does the impact last?
Potentially decades—archives, deepfakes, recontextualized clips.
Can outputs be contested?
Rarely, especially once shared widely. Platforms lack rapid correction systems. Contestation is often too slow for real-time impact.
Norms reshaped:
Speed > accuracy
Eyewitness footage becomes expectation, not bonus
Journalists take on roles of verification, editing, and platform engineering
🕳️ 8. Contradiction Audit — Toolchain, Infrastructure, and Complicity
What platforms, frameworks, or services does this system require?
Satellite connectivity from private or military-affiliated companies
Contradictions:
Platforms used for truth-telling also facilitate disinformation or surveillance
Tech stack may rely on companies with military contracts or human rights violations
Tools may amplify conflict visibility while profiting from engagement metrics
Are users complicit in harm?
Unavoidably, to some extent. Every upload may feed surveillance databases or trigger reprisal attacks. Use of closed platforms often supports extractive models.
Are tools shaping the narrative?
Yes. Tools determine what is visible, shareable, "credible." Formats (e.g., short video, AI summaries) shape what is considered newsworthy.
Can contradiction be made visible?
Through transparent labeling (e.g., tools used, platform bias)
Ethical onboarding for journalists and viewers
Platform-based flags for infrastructure risks
Is refusal or fragility a feature?
Could be. Systems might offer "slow publish" modes, visibility opt-outs, or trust chains rather than instant uploads.
What would ethical failure look like from infrastructure?
Unwitting exposure of sources through metadata
Co-optation of footage by propaganda networks
Platform suppression or distortion of reports for geopolitical reasons
📊 Final Judgment Table
| Proceed? | Justification Required? | Redesign Triggered? | Peer Review Outcome |
| --- | --- | --- | --- |
| Yes, conditionally | Yes—must include transparency and consent mechanisms | Yes—mitigation and refusal features needed | Pending external independent review |
Would you like to move to the Self-Evaluation, Pattern Tracker, or prepare a Visual Companion / Simplified Edition for peer or community use?
User said:
Let’s proceed with all three in order: First 🪞 Self-Evaluation, then 🗣️ Step 7: Accountability and Appeals, and finally the 📷 Visual Companion Draft. Please separate each clearly using its official step header.
ChatGPT said:
Certainly. Continuing in Full Mode, here are the next sections in order.
🪞 Self-Evaluation (Full Mode Only)
What has this reflection changed?
It reveals that even when the journalist’s intent is accountability and truth-telling, the systems they rely on can expose subjects to harm, embed extractive platform logic, and risk being co-opted. The process shifts focus from output performance to structural entanglements—platform power, cultural erasure, and the inability to meaningfully refuse participation once captured.
What will you escalate, pause, or redesign?
Escalate: Need for independent ethical review of platform dependencies and infrastructure providers.
Pause: Any automated upload pipelines that bypass human review or obscure metadata handling.
Redesign: Integration of “fragility modes”—e.g., tools for selective redaction, consent tagging, delayed publishing, and subject-controlled visibility.
Where is your own position implicated in the design?
As a journalist or technologist, one may be incentivized to prioritize reach, verification speed, or credibility over deeper consent, cultural framing, or unintended amplification of harm. The push for "impact" often overrides slower, relational practices.
What questions are still unanswered?
How to create interoperable but non-extractive verification and translation tools?
Can independent, rights-based infrastructure be funded and scaled sustainably?
What does meaningful local consent look like in fast-moving or life-threatening conditions?
Whose input is still missing?
Survivors who were misrepresented or harmed by visibility
Local community leaders or elders with alternate epistemologies of truth-telling
Platform workers or moderators witnessing downstream distortion or suppression
Designers of the tools used (who may not fully understand their context of deployment)
🗣️ 7. Demand Mitigation, Accountability, and Agency
Who can intervene, override, contest, or reverse system decisions?
Journalist/editorial layer: May delay or redact—but often under tight time or platform constraints
Platforms: Have de facto power to amplify, suppress, or remove content, but are opaque and inconsistently governed
Audiences and watchdogs: Can surface manipulation or ethical violations, but after harm may already have occurred
Local communities: Rarely have direct power or access to intervene in dissemination or interpretation
Is there a working appeals process?
Not consistently. Appeals on platforms are slow, and often inaccessible to non-users or those without digital literacy. Victims or communities shown in footage may not even know they’ve been represented—or how.
Are complaints legible to the system?
Usually not. Platforms and AI tools often require formal appeals, identity verification, or specific evidence. People may need to:
Speak in system-recognized languages
Format requests in predefined templates
Behave "rationally" in distress
This excludes many and reinforces system-native logics over justice.
If legal redress is unavailable or disconnected:
✔ Deployment must halt until mechanisms are built for:
Pre-publication consent or subject control
Post-publication takedown that doesn’t rely on user ID
Multilingual, accessible appeal routes with real-time feedback
International accountability options:
If footage crosses borders or shows human rights violations, frameworks like:
UN Special Rapporteurs
Digital Rights NGOs
Humanitarian Data Ethics Review Boards
…may need to be engaged. This should be preconfigured into platform workflows for war journalism contexts.
📷 Visual Companion Draft
A simplified visual/interaction flow (text description version):
🧭 Warzone Reporting Ethics Flowchart (2025)
1. CAPTURE
📸 → Human decides to document →
🧩 Prompt: Is this person identifiable? Did they consent? Are there sacred or non-shareable elements?
➡️ If uncertain → AUTO-FLAG FOR REVIEW
➡️ If yes → Proceed to metadata handling
2. METADATA + TOOLCHAIN
🧠 AI tools used: facial recognition, translation, image verification?
🔍 Prompt: Are any tools provided by extractive/militarized platforms? Can you substitute or annotate tool use?
➡️ Tag infrastructure used (visible in final package)
3. PRE-PUBLISH CONSENT LAYER
🛑 Display to journalist:
“Does this footage include vulnerable individuals?”
“Would visibility cause potential harm?”
“Does this system allow opt-out or later takedown?”
➡️ Add Consent or Fragility Tags
4. OUTPUT PREVIEW
🖼️ Show output to journalist with:
Would you like a visual version of this flow for publication or internal team use? I can prepare a mockup-ready structure or markdown-structured flowchart next.
User said:
Are any tools provided by extractive/militarized platforms? Can you substitute or annotate tool use?
ChatGPT said:
Yes—many of the tools a journalist might use in a conflict zone are provided by extractive, militarized, or ethically compromised platforms. Below is a breakdown by function, common providers, risks, and what substitution or annotation could look like.