Effective Date: 15 February 2026
Entity: SOTAStack Pty Ltd (ABN 89 693 630 349)
SOTAStack is committed to developing and deploying AI systems that are safe, fair, transparent, and accountable. This charter articulates our commitments, structured around the 10 guardrails of the Australian Government's Voluntary AI Safety Standard, and aligned with AS ISO/IEC 42001:2023 (AI Management Systems), the NIST AI Risk Management Framework, and the OECD AI Principles.
Guardrail 1: Accountability & Governance
We establish clear accountability structures for AI systems throughout their lifecycle. Our governance framework includes:
- Designated AI governance roles with defined responsibilities
- Board-level oversight of AI strategy and risk
- Regular review and audit of AI systems and their impacts
- Clear escalation pathways for AI-related concerns
- Alignment with AS ISO/IEC 42001:2023 management system requirements
Guardrail 2: Risk Management
We identify, assess, and mitigate AI-related risks proportionate to the context and potential impact:
- AI risk assessments conducted before deployment and at regular intervals
- Risk categorisation aligned with the NIST AI RMF functions (Govern, Map, Measure, Manage)
- Consideration of risks to individuals, communities, organisations, and the environment
- Documented risk registers with assigned owners and mitigation plans (see the sketch after this list)
- Ongoing monitoring for emerging risks and model drift
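To illustrate, a risk-register entry can be kept as a structured, machine-readable record. The Python sketch below is illustrative only; the field names, the 1-4 scales, and the severity-times-likelihood triage score are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskEntry:
    """One row in an AI risk register (illustrative schema, not prescriptive)."""
    risk_id: str                  # e.g. "AI-2026-007" (hypothetical format)
    description: str              # what could go wrong, and for whom
    nist_function: str            # Govern, Map, Measure or Manage
    severity: Level
    likelihood: Level
    owner: str                    # an accountable person, not a team alias
    mitigations: list[str] = field(default_factory=list)
    next_review: date | None = None

    @property
    def rating(self) -> int:
        """Severity x likelihood score used to order the register for triage."""
        return self.severity.value * self.likelihood.value


entry = RiskEntry(
    risk_id="AI-2026-007",
    description="Model drift degrades accuracy for regional users",
    nist_function="Measure",
    severity=Level.HIGH,
    likelihood=Level.MEDIUM,
    owner="Head of Data Science",
)
print(entry.rating)  # 6 on a 1-16 scale; high ratings escalate per Guardrail 1
```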
Guardrail 3: Data Protection & Security
We handle data with the highest standards of care and compliance:
- Compliance with the Privacy Act 1988 (Cth) and Australian Privacy Principles
- Data minimisation — we collect and process only what is necessary (illustrated in the sketch after this list)
- Encryption, access controls, and secure storage for all datasets
- Data provenance tracking and lineage documentation
- Regular data quality assessments and bias audits
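As a minimal sketch of how minimisation and provenance tracking can be enforced in code, assuming records arrive as Python dictionaries: the allow-listed field names and the SHA-256 dataset fingerprint are hypothetical examples, not our production schema.

```python
import hashlib
import json

# Explicit allow-list: only the fields the stated purpose requires
# (hypothetical field names used for illustration).
ALLOWED_FIELDS = {"age_band", "postcode", "consent_flag"}


def minimise(record: dict) -> dict:
    """Drop every field that is not on the allow-list (data minimisation)."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


def dataset_fingerprint(records: list[dict]) -> str:
    """Stable SHA-256 digest of a dataset snapshot, logged for lineage."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


rows = [{"name": "A. Citizen", "age_band": "30-39",
         "postcode": "2000", "consent_flag": True}]
minimal = [minimise(r) for r in rows]     # "name" never enters the pipeline
print(dataset_fingerprint(minimal))       # recorded against the dataset version
```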
Guardrail 4: Testing & Performance
We rigorously test AI systems to ensure they perform as intended:
- Comprehensive testing across diverse scenarios and edge cases
- Performance benchmarking against defined metrics and acceptance criteria
- Bias and fairness testing across protected attributes (a worked check follows this list)
- Red-teaming and adversarial testing for high-risk systems
- Ongoing performance monitoring post-deployment
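As one worked example of such a fairness test, a demographic parity check compares positive-outcome rates across groups of a protected attribute. The helper below and its 0.05 tolerance are illustrative assumptions; real acceptance criteria are set per system.

```python
from collections import defaultdict


def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels for one protected attribute
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for y_hat, g in zip(predictions, groups):
        counts[g][0] += y_hat
        counts[g][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)


# Toy data: group "a" gets a positive outcome 2/3 of the time, group "b" 1/3.
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
if gap > 0.05:  # assumed tolerance for illustration
    print(f"Fairness gap {gap:.2f} exceeds tolerance; investigate before release")
```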
Guardrail 5: Human Control & Intervention
We design AI systems that keep humans in control:
- Human-in-the-loop or human-on-the-loop oversight for consequential decisions (see the routing sketch after this list)
- Clear mechanisms for human override and intervention
- Graceful degradation — systems fail safely when AI components are unavailable
- Users are informed when they are interacting with AI
- No fully autonomous decision-making on matters affecting individual rights
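One common way to implement that oversight is to route low-confidence or rights-affecting outputs to a person before anything is actioned. In the sketch below, the Decision type, the 0.9 confidence floor, and the review callback are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable

CONFIDENCE_FLOOR = 0.9  # assumed threshold; tuned per system in practice


@dataclass
class Decision:
    outcome: str
    confidence: float
    affects_rights: bool  # e.g. credit, employment, access to services


def route(decision: Decision, human_review: Callable[[Decision], str]) -> str:
    """Auto-apply only confident, low-stakes decisions; escalate the rest."""
    if decision.affects_rights or decision.confidence < CONFIDENCE_FLOOR:
        return human_review(decision)   # human-in-the-loop path
    return decision.outcome             # automated path, still logged for audit


# A rights-affecting decision is escalated regardless of model confidence.
print(route(Decision("decline", 0.97, affects_rights=True),
            human_review=lambda d: "queued for human review"))
```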
Guardrail 6: Transparency & Disclosure
We are open about how our AI systems work and their limitations:
- Clear disclosure when AI is used in interactions or decision-making
- Plain-language explanations of AI system capabilities and limitations
- Model cards for models and datasheets for datasets used in development (a minimal card sketch follows this list)
- Transparent reporting on system performance and known issues
- Publication of this charter and related governance documentation
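Transparency artefacts can themselves be kept machine-readable. The minimal model card below is a hypothetical sketch with placeholder names and figures, loosely in the spirit of published model-card practice rather than a mandated format.

```python
import json

# Hypothetical minimal model card; every value here is a placeholder.
model_card = {
    "model": "example-triage-classifier",
    "version": "1.4.2",
    "intended_use": "Priority triage of inbound support requests",
    "out_of_scope": ["medical, legal or financial advice"],
    "training_data": "De-identified support tickets (summary, date range)",
    "metrics": {"f1": 0.87, "fairness_gap": 0.03},
    "limitations": ["Lower accuracy on messages under 10 words"],
    "contact": "info@sotastack.com.au",
}

# Published alongside the system's plain-language documentation.
print(json.dumps(model_card, indent=2))
```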
Guardrail 7: External Feedback
We welcome and act on feedback from users, stakeholders, and the broader community:
- Accessible channels for reporting concerns or issues with AI systems
- Formal process for reviewing and responding to feedback
- Engagement with industry bodies, regulators, and standards organisations
- Participation in responsible AI communities and knowledge sharing
- Contact: info@sotastack.com.au
Guardrail 8: System Transparency & Supply Chain
We maintain transparency across our AI supply chain:
- Documentation of third-party AI components, models, and data sources
- Due diligence on AI vendors and partners for ethical practices
- Tracking of model provenance and version history
- Assessment of supply chain risks (e.g., data poisoning, model tampering), with integrity checks such as the sketch after this list
- Preference for open, auditable, and well-documented AI tools
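Checksum pinning is one simple defence against the model-tampering risk noted above. The manifest format and file name in this sketch are assumptions for illustration; the point is that no third-party artefact is loaded until its digest matches the one recorded at vendor intake.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest pinning third-party artefacts to known-good digests,
# captured when the component first passed vendor due diligence.
MANIFEST = {
    "vendor-encoder-v2.onnx": "sha256-digest-recorded-at-intake",
}


def verify_artifact(path: Path) -> bool:
    """Recompute SHA-256 and compare with the pinned digest before loading."""
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    return actual == MANIFEST.get(path.name)


model_file = Path("vendor-encoder-v2.onnx")
if model_file.exists() and not verify_artifact(model_file):
    raise RuntimeError(f"Digest mismatch for {model_file}; possible tampering")
```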
Guardrail 9: Documentation & Compliance
We maintain comprehensive records of AI system development and operation:
- Documentation of design decisions, training data, and model architecture
- Records of risk assessments, testing results, and audit findings
- Compliance with applicable Australian laws and regulations
- Alignment with AS ISO/IEC 42001:2023 documentation requirements
- Regular internal and external audits of AI governance practices
Guardrail 10: Stakeholder Engagement & Fairness
We design AI systems that are fair and inclusive:
- Proactive identification and mitigation of bias in data and models
- Engagement with diverse stakeholders in system design and evaluation
- Consideration of impacts on vulnerable and marginalised groups
- Equitable access to AI benefits across communities
- Alignment with the OECD AI Principles on inclusive growth and sustainable development
Standards Alignment
This charter aligns with:
- Australia's Voluntary AI Safety Standard — 10 guardrails for safe and responsible AI
- AS ISO/IEC 42001:2023 — Artificial Intelligence Management System standard
- NIST AI Risk Management Framework (AI RMF 1.0) — Govern, Map, Measure, Manage
- OECD AI Principles — Inclusive growth, human-centred values, transparency, robustness, accountability
Review and Updates
This charter is reviewed annually, or more frequently in response to significant changes in technology, regulation, or standards. We welcome feedback at info@sotastack.com.au.