Stackle requires all authorised users to exercise a duty of care in the operation and use of its information systems. Beyond information published for public consumption, access is expected to be formally authorised and handled with regard to the rights, sensitivities, and security obligations attached to institutional and learner data.
This overview brings together the practical security commitments reflected across Stackle's privacy, breach-response, and governance materials so institutions can review them in one place.
The emphasis is straightforward: protect access, encrypt stored and transmitted data, limit who can touch sensitive systems, retain evidence when incidents occur, and provide institutions with clear paths for privacy, security, and incident questions.
- Systems must be protected against unauthorised access.
- Systems are expected to be secured against theft and damage to a level that is proportionate and practical.
- Availability, backup, and recovery remain part of security, not separate from it.
- Third parties entrusted with Stackle data are expected to understand and uphold their security responsibilities.
Stackle's production environment is described in the privacy materials as hosted on Laravel Cloud using Amazon Web Services in the Asia-Pacific Sydney region. This keeps the primary production data location in Australia and aligns with the jurisdictional expectations of many higher education institutions operating locally.
The storage and delivery stack emphasises encrypted infrastructure and transport-layer protection rather than vague claims about enterprise readiness.
2.1 Core Platform Controls
| Area | Current Position |
|---|---|
| Application hosting | Laravel Cloud on AWS |
| Primary data region | Australia, Asia-Pacific Sydney (ap-southeast-2) |
| Database | AWS RDS MySQL with AES-256 encryption at rest |
| File storage | AWS S3 with encryption at rest |
| Content delivery | AWS CloudFront |
| Edge protection | AWS WAF for rate limiting and bot control |
| Transport security | HTTPS with TLS 1.2+ enforced on all connections |
Current public materials do not state certifications or regional options beyond the items listed here.
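The "TLS 1.2+ enforced" transport policy in the table can be mirrored on the client side. A minimal Python sketch, illustrative only — Stackle's actual enforcement sits at the AWS CloudFront edge, not in application code:

```python
import ssl

# Client-side mirror of the "TLS 1.2+ enforced" policy: refuse to
# negotiate anything older than TLS 1.2. Illustrative only; the
# platform's real enforcement happens at the CloudFront edge.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake that can only reach TLS 1.0 or 1.1 now fails with SSLError,
# while certificate verification and hostname checking stay enabled.
```

A context built this way can be passed to `http.client.HTTPSConnection` or `urllib.request` to verify that a given endpoint actually negotiates TLS 1.2 or newer.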
Access to personal data within Stackle is limited to authorised personnel who need that access to perform their role. The security posture described across the privacy and breach-response materials is layered: identity controls, session controls, rate limiting, bot protection, monitoring, and auditable operational response.
For institutions using Stackle inside an LMS, access also benefits from the platform's LTI-based operating model, which keeps activity inside the institutional learning environment rather than routing users through unnecessary external workflows.
3.1 Identity and Monitoring Controls
| Control | Detail |
|---|---|
| Staff authentication | Secure credentials and MFA required for staff |
| Administrator accounts | 2FA required for administrators |
| User 2FA options | TOTP app-based 2FA or email-based 2FA with time-limited codes |
| Secret protection | 2FA secrets encrypted at rest using application-level encryption |
| Rate limiting | Applied on login, 2FA, and API endpoints |
| Bot controls | reCAPTCHA Enterprise and AWS WAF bot protections |
| Fraud and abuse prevention | Cloudflare and Google reCAPTCHA process session and browser data for bot detection and DDoS protection |
| Application monitoring | Laravel Nightwatch, Discord alerts, and local log files for monitoring and forensic review |
Stackle's published privacy position is clear about who controls user data. The platform acts as a data processor on behalf of the institutions that deploy it, while Stackle acts as the controller for direct interactions with its own website.
Stackle's public AI position supports a straightforward principle: institutional content should not be quietly repurposed for model training, and any enabled provider should sit within explicit organisational control.
- AI-powered content summarisation is opt-in at the organisation level rather than switched on by default.
- Only content text from activities or packages is sent to enabled AI providers.
- Names, email addresses, and LTI identifiers are not included in those AI calls.
- The AI providers referenced in Stackle's privacy materials do not use that data to train their models.
- API keys for enabled providers are stored encrypted per organisation.
- User contributions, including answers and comments, are treated as the user's intellectual property and are not used in marketing materials.
- Stackle states that it does not sell personal data.
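The minimisation rule above — content text in, identifiers out — is naturally expressed as an allow-list at the point where the provider payload is built. The field names below are assumptions for illustration, not Stackle's actual schema:

```python
# Allow-list payload builder: only activity content text is forwarded to
# an enabled AI provider; identity fields never enter the payload.
# Field names here are hypothetical, not Stackle's actual schema.
ALLOWED_FIELDS = {"content_text"}
IDENTITY_FIELDS = {"name", "email", "lti_user_id"}

def build_summarisation_payload(activity: dict) -> dict:
    payload = {k: v for k, v in activity.items() if k in ALLOWED_FIELDS}
    # Defence in depth: fail loudly if an identity field ever slips through.
    if IDENTITY_FIELDS & payload.keys():
        raise ValueError("identity fields must not reach AI providers")
    return payload

activity = {
    "content_text": "Week 3 covers stacks, queues, and linked lists.",
    "name": "A. Learner",
    "email": "learner@example.edu",
    "lti_user_id": "lti-12345",
}
```

An allow-list is the safer shape here: a block-list silently leaks any field added later, whereas an allow-list excludes new fields by construction.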
4.1 AI Provider Model
| Provider Type | How Stackle Frames It |
|---|---|
| OpenAI (GPT) | Available for enabled workflows and also referenced for support conversations |
| Anthropic (Claude) | Optional, organisation-supplied key |
| Google (Gemini) | Optional, organisation-supplied key |
| DeepSeek | Optional, organisation-supplied key, with cross-border risk assessment left to the institution |
Where an organisation enables an AI provider, the privacy materials place responsibility on organisation administrators to assess cross-border transfer risks before enabling that provider.
Security is not just about keeping systems locked down. It is also about being able to detect problems early, contain them quickly, preserve evidence, restore service safely, and notify the right people when thresholds are met.
Stackle's data breach policy provides the strongest operational detail here. That document outlines a virtual incident team, severity classifications, containment windows, notification rules, a central breach register, and post-incident review expectations.
- Electronic data recovery is treated as an essential control, with backup and restore expectations tied to the importance of the system.
- Logging retention must remain justifiable and aligned with privacy and regulatory obligations.
- Any alert indicating potential unauthorised access to personal data is treated as a suspected data breach and reported immediately.
- Where an eligible breach is confirmed, OAIC and GDPR-related notification paths are already defined in the separate breach policy.
5.1 Incident Readiness Snapshot
| Area | Current Position |
|---|---|
| Severity model | S1 to S4 with target containment windows from 8 to 72 hours |
| Regulatory timing | Assessment target within 72 hours for suspected eligible breaches |
| Internal escalation | Head of Technology and Senior Management notified of all confirmed breaches |
| Evidence handling | Logs and system evidence preserved where legal or regulatory action may follow |
| Registering events | All confirmed breaches recorded in a central breach register |
| Post-incident review | Formal review required for S1 and S2 breaches and strongly recommended for S3 |
| Recovery posture | Restore to a clean, known-good state and monitor for recurrence after recovery |
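The timing rules in the snapshot reduce to simple deadline arithmetic from the moment of detection. The per-severity containment hours below are an assumed split of the published 8-to-72-hour range, not Stackle's actual schedule:

```python
from datetime import datetime, timedelta, timezone

# Assumed per-severity split of the published 8-to-72-hour containment
# range; Stackle's actual per-severity targets are not public.
CONTAINMENT_HOURS = {"S1": 8, "S2": 24, "S3": 48, "S4": 72}
ASSESSMENT_HOURS = 72  # target for assessing a suspected eligible breach

def incident_deadlines(severity: str, detected_at: datetime) -> dict:
    """Return containment and assessment deadlines for a detected incident."""
    return {
        "contain_by": detected_at + timedelta(hours=CONTAINMENT_HOURS[severity]),
        "assess_by": detected_at + timedelta(hours=ASSESSMENT_HOURS),
    }
```

Anchoring both clocks to the detection timestamp is the conservative reading: the 72-hour assessment target runs in parallel with containment rather than after it.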
Stackle's published materials already align with several of the expectations institutional buyers look for on a security page: Australian Privacy Act and APP-aware handling, GDPR-aware rights and notification processes, documented breach response, controlled cookies, encryption, and layered access protection.
Institutions often want a concise overview, clear statements about ownership and training use, and a direct path for detailed security questions. That is the role of this overview, backed by the underlying privacy, breach, and trust-centre materials.
- Australian Privacy Act and Australian Privacy Principles are reflected in Stackle's privacy and breach-notification materials.
- UK and EU GDPR considerations are addressed through privacy-rights handling and breach-notification procedures.
- The incident program references alignment with ISO 27001:2022 controls and SOC 2 trust-service criteria in the breach policy's compliance mapping.
- Privacy requests are directed to admin@stacklehq.com.
- Trust Centre materials are available at trust.stacklehq.com.
Institutional teams with deeper security questions can use the Trust Centre or contact Stackle directly for follow-up.
Deeper operational detail sits across Stackle's Privacy Policy, Data Breach Policy, and Trust Centre materials.