AI Transparency Statement
Our Commitment
This Statement is provided in the spirit of Article 50 of the EU AI Act, FTC guidance on truthful AI claims, and our own belief that trust is built on transparency.
1. What Our AI Does
The HumanDeploy AI System performs these functions:
- Receives and interprets work requests submitted through your Slack workspace.
- Gathers context from your Business Context Graph, connected tools, and prior Deliverables.
- Generates initial drafts of marketing content, strategy documents, campaign plans, sales materials, and analyses.
- Routes tasks to the appropriate human Specialists based on discipline, Specialist capacity, and the system's confidence in its own draft.
- Surfaces insights, summaries, and recommendations from your data.
- Monitors quality, detects errors, and escalates uncertain outputs for human review.
2. What Our AI Does Not Do
- It does not deliver work to you without human Specialist review.
- It does not make automated decisions that produce legal or similarly significant effects concerning you.
- It does not train foundation or system-wide models on your Customer Data.
- It does not share your Customer Data across customer accounts.
- It does not impersonate humans or misrepresent itself as human.
3. Humans in the Loop
Every Deliverable passes through a senior human Specialist before it reaches you. Specialists have at least seven years of domain expertise and are responsible for the strategic judgment, quality, and accuracy of what we ship. The AI drafts; humans decide. When a Specialist materially rewrites a draft, that rewrite becomes authoritative.
You may request, at any time, information about the degree of AI involvement in any specific Deliverable.
4. Models We Use
HumanDeploy uses a mix of proprietary orchestration software and third-party large language models, including models from OpenAI and Anthropic. These providers are engaged as sub-processors under written data protection agreements and are contractually prohibited from using your Customer Data to train their models. We use enterprise API endpoints with zero-retention or short-retention configurations where available.
We periodically evaluate models for quality, safety, cost, and compliance. The current list of AI sub-processors is available at humandeploy.ai/sub-processors.
5. Data and Learning
5.1 Per-Customer Learning
The HumanDeploy AI System learns from your interactions, feedback, and content to improve its service to you. This learning is confined to your Business Context Graph, which is unique to your account. We do not pool your data with other customers' data for model training.
5.2 No Foundational Training
We do not use your Customer Data to train foundation or system-wide AI models, whether our own or our sub-processors'. This is a contractual commitment, not a best-effort promise.
5.3 Prompt and Output Logging
We log prompts, outputs, and Specialist reviews for quality assurance, debugging, security, and audit purposes. Logs are retained according to our Privacy Policy and protected by the same security controls as your other Customer Data.
6. Known Limitations
Large language models, including the ones we use, can produce inaccurate, outdated, biased, or otherwise imperfect output. Known limitations include:
- Hallucinations: confident-sounding statements not grounded in fact.
- Temporal cutoffs: lack of knowledge about events after a model's training cutoff.
- Bias: reflection of biases in training data.
- Context window limits: inability to process arbitrarily long inputs in a single turn.
- Inconsistency: different outputs for the same prompt across runs.
We mitigate these through Specialist review, retrieval-augmented generation, source-grounded citations for factual claims, and quality monitoring. We do not claim perfection and invite you to tell us when we get something wrong.
7. Our Guardrails
- Mandatory human Specialist review before delivery.
- Per-Customer data isolation.
- Contractual no-training clauses with all AI sub-processors.
- Enterprise endpoints with zero or short retention.
- Logging and audit trails for all AI interactions.
- Incident response procedures for model errors, harmful outputs, or data issues.
- Competitive Conflict Protocol to prevent information bleeding across competing customers.
- Regular internal red-teaming and quality audits.
8. Your Intellectual Property
Upon payment, you own the Deliverables we produce, including AI-assisted components. Under current U.S. copyright law (Thaler v. Perlmutter, 2023), works generated entirely by AI without human authorship are not copyrightable. Every Deliverable therefore incorporates meaningful human authorship via Specialist review, which supports copyrightability. We can provide attestations of human authorship for specific Deliverables on request.
9. Reporting Issues
If you believe the HumanDeploy AI System has produced inaccurate, harmful, biased, or otherwise problematic output, please let us know immediately through your Slack workspace or by emailing ai-safety@humandeploy.ai. We investigate every report and update our systems where appropriate.
10. Changes to This Statement
As AI technology and regulation evolve, we will update this Statement to reflect new capabilities, safeguards, and disclosures. The "Last Updated" date indicates when it was last revised.
11. Contact
Questions, concerns, and feedback about our use of AI:
Email: ai-safety@humandeploy.ai
Privacy questions: privacy@humandeploy.ai
Legal: legal@humandeploy.ai