
Contact

OpusAI is a trading name of Sensus InVista Ltd.
Bridge Grove
Southport
PR8 5AA

contact@opusai.uk

OpusAI | Ethical, Effective, Equitable.

Ethical.

We build artificial intelligence for real-world use — and that means making ethics a non-negotiable foundation.

Our products are designed to be secure, human-centred, transparent, and impact-aware from day one. We proactively design systems that anticipate and mitigate ethical, regulatory, and operational challenges.

1. Privacy First by Design.
Data protection and user consent are woven into every OpusAI deployment.

We don’t just build for privacy — we make it default.

  • Secure by Design: All systems use encrypted protocols and comply with data protection laws, including GDPR, from the ground up.
  • Custom Deployments: We prioritise custom deployment of all OpusAI systems, including our own ObsidianAI models, to silo data and provide granular control over compliance measures.
  • No Retained PII: Identifiable data is never logged or stored. Our systems replace user identifiers with hashed placeholders — even in observability or error logs.
  • Guardrail Flexibility: Clients can apply custom privacy or data-source constraints (e.g. restrict model input/output to known domains or data categories).
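The hashed-placeholder approach described under "No Retained PII" can be sketched roughly as follows. This is a minimal illustration only; the function name, salt handling, and placeholder format are assumptions for the example, not OpusAI's actual implementation:

```python
import hashlib

def pseudonymise(user_id: str, salt: str = "per-deployment-salt") -> str:
    """Replace a raw identifier with a stable, non-reversible placeholder
    so observability and error logs never carry PII (hypothetical sketch)."""
    digest = hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()
    return f"user-{digest[:12]}"

# A log record carries only the placeholder, never the raw identifier:
log_entry = {"event": "model_query", "user": pseudonymise("alice@example.com")}
```

Because the hash is stable, the same user maps to the same placeholder across log lines, preserving debuggability without storing identifiable data.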
2. Proactive Compliance & Governance.

We monitor and adapt to the evolving regulatory landscape so you don’t have to.

  • Our infrastructure supports continuous compliance updates across GDPR, AI Act developments, and localised standards.
  • Monthly stewardship reports provide visibility into:
    • Environmental footprint
    • Uptime metrics
    • Error rates and guardrail trigger logs
  • SLAs and Operational Metrics: Each deployment includes an agreed Service Level Agreement, covering uptime, reliability, and risk response windows.
3. Clear Ethical Positioning & Use Case Integrity.

We only create systems that are aligned with their intended purpose — and that purpose must pass ethical review.

  • Defined Purpose Clauses: All services come with a clear scope of acceptable use, written into contract and licensing terms.
  • Restricted Categories: We refuse to develop or support services for harassment, manipulation, disinformation, or pressure tactics (e.g. cold-call automation or coercive advertising).
  • Contractual Recourse: Misuse or deviation from agreed use may trigger revocation rights or protective action.
4. Priority of Accuracy, Observability & Source Integrity.

Accuracy isn’t a feature — it’s a responsibility.

  • Real-Time Knowledge: Obsidian-powered models are web-augmented by default, using Retrieval-Augmented Generation (RAG) with source citation and academic-style referencing.
  • Trust-Based Sources: Data pipelines can be scoped to use only verified sources, such as open data or client-controlled knowledge bases.
  • Reasoning Transparency: Models include an observability layer showing how outputs were derived, from source citation and reading process through to synthesis.
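The retrieval-with-citation pattern above can be sketched in miniature. This toy uses word overlap for ranking purely for illustration; production RAG systems (including, presumably, Obsidian-powered models) use embedding-based retrieval, and the function names and document shape here are assumptions:

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive term overlap with the query (toy scorer)."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: -len(terms & set(d["text"].lower().split())),
    )
    return scored[:k]

def answer_with_citations(query, documents, k=2):
    """Assemble retrieved snippets with academic-style source references."""
    sources = retrieve(query, documents, k)
    refs = "; ".join(f"[{i + 1}] {d['source']}" for i, d in enumerate(sources))
    return f"{' '.join(d['text'] for d in sources)}\nSources: {refs}"
```

The key design point is that every generated answer is traceable back to the documents it drew on, which is what makes source citation and referencing possible at all.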
5. Commitment to IP, Copyright & Data Control.

We protect both original creators and your business IP — before it becomes a problem.

  • Bring Your Own Data (BYOD): Clients are encouraged to integrate their own knowledge assets to avoid copyright infringement and mitigate AI hallucination risk.
  • Client IP Integrity: All outputs, workflows, and derivative models remain under your ownership. No model is ever retrained on your data unless explicitly agreed, and always within a custom environment.
  • Model Safeguards: Our models are selected or fine-tuned to avoid generating outputs likely to breach known copyrighted material.
6. Assured Human-Centricity.

AI is built to support humans, not replace them.

  • No Staff Replacement Tools: We don’t build automation to eliminate jobs — only to enhance productivity, scalability, and value.
  • Cognitive Ease: Products are designed for non-technical users — especially in client-facing or BYOD scenarios — making AI integration easy, explainable, and manageable.
  • Accessibility through Simplicity: Whether you’re a startup founder or a social worker, our UI and workflows prioritise frictionless use, not complexity.
7. Transparent Reporting & Feedback Mechanisms Baked In.

Accountability matters — to both the organisation and the end user.

  • Red-Button Feedback: If an AI output is harmful, false, or misaligned, users (not just clients) can instantly flag it for review via a “report response” mechanism.
  • Model Response Mitigation: Flagged issues can trigger immediate model-side mitigation — with an audit trail to review changes or retrain strategies where applicable.
  • Client Monitoring Tools: Every Obsidian dashboard includes visibility into trigger logs, system status, and response exceptions — minus the complexity of model config.
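The "report response" mechanism with an audit trail might look something like this in outline. The field names, statuses, and function signature are hypothetical, chosen only to illustrate the flag-then-audit flow described above:

```python
from datetime import datetime, timezone

audit_trail: list[dict] = []

def report_response(response_id: str, reason: str) -> dict:
    """Flag an AI output for review and append an audit-trail entry
    (hypothetical field names; not OpusAI's actual API)."""
    entry = {
        "response_id": response_id,
        "reason": reason,
        "flagged_at": datetime.now(timezone.utc).isoformat(),
        "status": "pending_review",
    }
    audit_trail.append(entry)
    return entry
```

Keeping every flag as an immutable audit-trail entry is what lets later mitigation or retraining decisions be reviewed against the original complaint.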
8. Environmental Metrics & Impact Reporting as Standard.

Sustainability is more than a checkbox.

  • Model Efficiency: Where performance is equal, we select smaller, more efficient models to reduce energy usage.
  • No Continuous Retraining: Custom deployments support scheduled retraining, so that priorities align with environmental and cost impact.
  • Monthly Environmental Reports: Clients receive an energy impact statement aligned with their billing cycle, showing resource consumption by model and service.
Summary:

For OpusAI, “responsible AI” is not a service add-on — it’s an essential part of our overall operating position.

From contract to dashboard, every decision is backed by enforceable policy, scalable safety, and tools designed to empower people — not just impress them.