Trust & Safety

AI Boundaries and Intended Use

Cardana uses advanced AI systems to support learning, exploration, and creation. These systems are powerful, but they are neither autonomous agents nor substitutes for human judgment.

We believe clarity about boundaries is essential for trust.

Designed For

  • Supporting understanding and mastery
  • Helping users organise knowledge and work
  • Providing explanations, structure, and feedback
  • Assisting with exploration and ideation
  • Enabling creation within defined contexts

Not Designed For

  • Replacing human expertise or decision-making
  • Generating deceptive or misleading academic work
  • Simulating emotional relationships or companionship
  • Encouraging dependency or over-reliance
  • Optimising for engagement over learning outcomes

Academic Integrity

Cardana is built to support learning, not to shortcut it.

We intentionally design learning environments that encourage comprehension, testing, and reflection rather than passive answer consumption. Users remain responsible for how outputs are used, particularly in academic and professional settings.

AI Behaviour

Cardana does not anthropomorphise AI systems.

Our systems do not present themselves as people, companions, or authorities. They are tools — designed to behave consistently, transparently, and predictably within their environment.

Responsibility

AI systems can accelerate thinking, but they do not remove responsibility.

Users remain accountable for decisions, interpretations, and outcomes. Cardana aims to support good judgment, not replace it.

We believe trust is earned through clarity, not disclaimers.