Security Policy

Security practices designed for institutional trust

This overview sets out Stackle's current security position across hosting, encryption, access control, AI handling, resilience, and incident response.

At a Glance

Security detail without the filler.

A concise view of the controls and governance commitments institutions usually need to review first.

Infrastructure

Data regions

Production data is hosted on AWS in the Asia-Pacific (Sydney) region, keeping the primary data location in Australia and aligned with the jurisdictional expectations of locally operating institutions. Published materials do not currently describe additional regional options; institutions with other residency requirements should confirm directly with Stackle.

Privacy

User control over data

Stackle's privacy materials provide routes for access, correction, consent handling, and other privacy-rights requests, with institutions remaining central to the control model for deployed workspaces.

AI

Use any large language model (LLM)

Stackle's published approach is organisation-controlled AI enablement with supported providers such as OpenAI, Anthropic, Google (Gemini), and DeepSeek, rather than always-on model usage.

Encryption

Data encrypted in transit and at rest

Database and file storage are described as encrypted at rest, and HTTPS with TLS 1.2+ is enforced for data in transit.

Access

Single Sign-On and Multi-Factor Authentication

Published Stackle materials explicitly document MFA and 2FA controls for staff, administrators, and users. LMS and LTI access are central to the platform model; institution-specific SSO requirements should be confirmed directly with Stackle.

Support

Security questions

Teams that need deeper detail can use the Trust Centre at trust.stacklehq.com or contact Stackle directly to continue the security and governance conversation.

1. Overview

Stackle requires all authorised users to exercise a duty of care in the operation and use of its information systems. Beyond information published for public consumption, access is expected to be formally authorised and handled with regard to the rights, sensitivities, and security obligations attached to institutional and learner data.

This overview brings together the practical security commitments reflected across Stackle's privacy, breach-response, and governance materials so institutions can review them in one place.

The emphasis is straightforward: protect access, encrypt stored and transmitted data, limit who can touch sensitive systems, retain evidence when incidents occur, and provide institutions with clear paths for privacy, security, and incident questions.

Systems must be protected against unauthorised access.

Systems are expected to be secured against theft and damage to a level that is proportionate and practical.

Availability, backup, and recovery remain part of security, not separate from it.

Third parties entrusted with Stackle data are expected to understand and uphold their security responsibilities.

2. Infrastructure and Encryption

Stackle's production environment is described in the privacy materials as hosted on Laravel Cloud using Amazon Web Services in the Asia-Pacific Sydney region. This keeps the primary production data location in Australia and aligns with the jurisdictional expectations of many higher education institutions operating locally.

The storage and delivery stack emphasises encrypted infrastructure and transport-layer protection rather than vague claims about enterprise readiness.

2.1 Core Platform Controls

Application hosting: Laravel Cloud on AWS
Primary data region: Australia, Asia-Pacific Sydney (ap-southeast-2)
Database: AWS RDS MySQL with AES-256 encryption at rest
File storage: AWS S3 with encryption at rest
Content delivery: AWS CloudFront
Edge protection: AWS WAF for rate limiting and bot control
Transport security: HTTPS with TLS 1.2+ enforced on all connections

Current public materials do not state certifications or regional options beyond the items listed here.
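The TLS 1.2+ floor described above is straightforward to mirror on the client side when integrating with the platform. A minimal Python sketch (illustrative only, not Stackle code):

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Build a client context that refuses connections below TLS 1.2,
    matching the transport policy stated above."""
    ctx = ssl.create_default_context()
    # Reject TLS 1.0/1.1 outright; 1.2 and 1.3 remain acceptable.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Pinning a minimum version on the client catches misconfigured intermediaries early, rather than silently negotiating down.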

3. Access Control, Authentication, and Monitoring

Access to personal data within Stackle is limited to authorised personnel who need that access to perform their role. The security posture described across the privacy and breach-response materials is layered: identity controls, session controls, rate limiting, bot protection, monitoring, and auditable operational response.

For institutions using Stackle inside an LMS, access also benefits from the platform's LTI-based operating model, which keeps activity inside the institutional learning environment rather than routing users through unnecessary external workflows.

3.1 Identity and Monitoring Controls

Staff authentication: Secure credentials and MFA required for staff
Administrator accounts: 2FA required for administrators
User 2FA options: TOTP app-based 2FA or email-based 2FA with time-limited codes
Secret protection: 2FA secrets encrypted at rest using application-level encryption
Rate limiting: Applied on login, 2FA, and API endpoints
Bot controls: reCAPTCHA Enterprise and AWS WAF bot protections
Fraud and abuse prevention: Cloudflare and Google reCAPTCHA process session and browser data for bot detection and DDoS protection
Application monitoring: Laravel Nightwatch, Discord alerts, and local log files for monitoring and forensic review

4. Data Handling, Ownership, and AI Use

Stackle's published privacy position is clear that organisations remain central to the control context around user data. The platform acts as a data processor on behalf of institutions that deploy it, while direct website interactions are handled by Stackle as controller.

Stackle's public AI position supports a straightforward principle: institutional content should not be quietly repurposed for model training, and any enabled provider should sit within explicit organisational control.

AI-powered content summarisation is opt-in at the organisation level rather than switched on by default.

Only content text from activities or packages is sent to enabled AI providers.

Names, email addresses, and LTI identifiers are not included in those AI calls.

The AI providers referenced in Stackle's privacy materials do not use that data to train their models.

API keys for enabled providers are stored encrypted per organisation.

User contributions, including answers and comments, are treated as the user's intellectual property and are not used in marketing materials.

Stackle states that it does not sell personal data.
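The exclusion rule above (content text goes to the provider; names, email addresses, and LTI identifiers do not) can be expressed as an allow-list applied where the payload is built. A hypothetical sketch, with field names that are illustrative rather than Stackle's actual schema:

```python
# Hypothetical field names; Stackle's internal schema is not published.
ALLOWED_FIELDS = {"content_text"}

def build_ai_payload(activity: dict) -> dict:
    """Allow-list filter: only content text reaches the AI provider;
    identity fields are dropped before the call is made."""
    return {k: v for k, v in activity.items() if k in ALLOWED_FIELDS}
```

An allow-list is the safer shape here: any identity field later added to the activity record is excluded by default, whereas a deny-list would have to be updated to keep pace.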

4.1 AI Provider Model

OpenAI (GPT): Available for enabled workflows and also referenced for support conversations
Anthropic (Claude): Optional, organisation-supplied key
Google (Gemini): Optional, organisation-supplied key
DeepSeek: Optional, organisation-supplied key, with cross-border risk assessment left to the institution

Where an organisation enables an AI provider, the privacy materials place responsibility on organisation administrators to assess cross-border transfer risks before enabling that provider.

5. Resilience, Logging, and Incident Response

Security is not just about keeping systems locked down. It is also about being able to detect problems early, contain them quickly, preserve evidence, restore service safely, and notify the right people when thresholds are met.

Stackle's data breach policy provides the strongest operational detail here. That document outlines a virtual incident team, severity classifications, containment windows, notification rules, a central breach register, and post-incident review expectations.

Electronic data recovery is treated as an essential control, with backup and restore expectations tied to the importance of the system.

Logging retention must remain justifiable and aligned with privacy and regulatory obligations.

Any alert indicating potential unauthorised access to personal data is treated as a suspected data breach and reported immediately.

Where an eligible breach is confirmed, OAIC and GDPR-related notification paths are already defined in the separate breach policy.

5.1 Incident Readiness Snapshot

Severity model: S1 to S4 with target containment windows from 8 to 72 hours
Regulatory timing: Assessment target within 72 hours for suspected eligible breaches
Internal escalation: Head of Technology and Senior Management notified of all confirmed breaches
Evidence handling: Logs and system evidence preserved where legal or regulatory action may follow
Registering events: All confirmed breaches recorded in a central breach register
Post-incident review: Formal review required for S1 and S2 breaches and strongly recommended for S3
Recovery posture: Restore to a clean, known-good state and monitor for recurrence after recovery
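Operationally, a severity model like this reduces to a lookup when scheduling containment. The per-severity hours below are assumed for illustration; the published material states only that windows range from 8 to 72 hours across S1 to S4:

```python
from datetime import datetime, timedelta

# Assumed per-severity values; only the overall 8-to-72-hour range is published.
CONTAINMENT_HOURS = {"S1": 8, "S2": 24, "S3": 48, "S4": 72}

def containment_deadline(severity: str, detected_at: datetime) -> datetime:
    """Deadline by which containment should be achieved for a given severity."""
    return detected_at + timedelta(hours=CONTAINMENT_HOURS[severity])
```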

6. Compliance Position and Security Questions

Stackle's published materials already align with several of the expectations institutional buyers look for on a security page: Australian Privacy Act and APP-aware handling, GDPR-aware rights and notification processes, documented breach response, controlled cookies, encryption, and layered access protection.

Institutions often want a concise overview, clear statements about ownership and training use, and a direct path for detailed security questions. That is the role of this overview, backed by the underlying privacy, breach, and trust-centre materials.

Australian Privacy Act and Australian Privacy Principles are reflected in Stackle's privacy and breach-notification materials.

UK and EU GDPR considerations are addressed through privacy-rights handling and breach-notification procedures.

The incident program references alignment with ISO 27001:2022 controls and SOC 2 trust-service criteria in the breach policy's compliance mapping.

Privacy requests are directed to admin@stacklehq.com.

Trust-centre materials are available at trust.stacklehq.com.

Institutional teams with deeper security questions can use the Trust Centre or contact Stackle directly for follow-up.

Deeper operational detail sits across Stackle's Privacy Policy, Data Breach Policy, and Trust Centre materials.