Security & Responsible Disclosure
Version 2.0 · Last updated 24 April 2026
Smartsy treats the security of our users, their data, and our infrastructure as a first-class engineering concern. We welcome reports from independent security researchers and believe coordinated disclosure is the right way to make software safer. This page sets out exactly how we work with researchers: what is in and out of scope, how to send us a report, how we triage and respond, what safe harbor protections you have, and how we credit those who help us.
Table of contents
- How to report a vulnerability
- Encrypted reports (PGP)
- What to include in a report
- Our response commitments
- In scope
- Out of scope & non-qualifying issues
- Testing rules & rules of engagement
- Safe harbor & legal
- Severity & prioritization
- Recognition & rewards
- Public disclosure policy
- AI-specific reports
- Mobile apps
- User privacy during testing
- Third-party services
- Our own security practices
- Hall of fame
- Policy history
1. How to report a vulnerability
The fastest and preferred channel is email to security@smartsy-ai.com. The mailbox is monitored during UK business hours, and reports that appear urgent are additionally reviewed daily, including at weekends.
Our machine-readable contact file is available at /.well-known/security.txt (RFC 9116) and lists current contact details, canonical policy URL, supported languages, and expiry date.
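For illustration, an RFC 9116 contact file of this shape might look like the following (the values here are examples, not a copy of our live file):

```text
Contact: mailto:security@smartsy-ai.com
Expires: 2027-04-24T00:00:00.000Z
Preferred-Languages: en
Canonical: https://smartsy-ai.com/.well-known/security.txt
Policy: https://smartsy-ai.com/security
```

Always fetch the live file for current contact details; the `Expires` field tells you when to re-check.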
If the mailbox is unreachable or the matter is time-critical (e.g. active data exposure in the wild), you may also open a private security advisory on our code-hosting provider. Please do not open public issues, discuss the finding on social media, or contact individual employees.
We accept reports in English. Reports in other languages are welcome but may take longer to triage.
2. Encrypted reports (PGP)
For sensitive reports we support PGP. Request the current public key by emailing security@smartsy-ai.com with the subject line PGP key please, and we will reply with an ASCII-armored key and its fingerprint. Please verify the fingerprint through an independent channel before sending sensitive material.
If you prefer an alternative encrypted channel (age, Signal, encrypted form submission), mention it in your first email and we will try to accommodate.
3. What to include in a report
Good reports are triaged and fixed faster. Please try to include:
- A clear one-line summary of the issue and its class (e.g. stored XSS, IDOR, SSRF, auth bypass).
- Affected endpoint(s), host(s), or app build(s), including version and platform.
- Step-by-step reproduction instructions a developer can follow unmodified.
- A minimal proof-of-concept (request/response pair, curl command, HTML page, video, or screenshots).
- The impact you observed: what an attacker can do, to whom, and under what pre-conditions.
- Any accounts, tokens, or artifacts you created; we will rotate them.
- The date and time of testing, so we can correlate logs.
- Whether you want public credit and under what name/handle.
- Whether the issue has been reported to any third party or is publicly known.
Please do not include real user data, mass-scraped content, or personal data of people other than yourself. Redact aggressively.
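A report following the checklist above might be structured like this skeleton. Every detail in it (the endpoint, payload, account, and handle) is invented for illustration:

```text
Subject: [Vulnerability] Stored XSS in project comments (example)

Summary:  Stored XSS via the comment field (example endpoint shown below)
Affected: smartsy-ai.com web app + API, tested 2026-04-20 14:00 UTC
Steps:
  1. Log in with your own test account.
  2. POST a comment whose body contains a <script> payload.
  3. Open the project page as a second test account; the script executes.
PoC:      one curl request/response pair attached (redacted).
Impact:   any viewer of the project runs attacker-controlled JavaScript.
Artifacts: test account researcher-demo@example.com (please rotate).
Disclosure: not reported elsewhere; credit as @your-handle, please.
```

Plain text is fine; what matters is that a developer can reproduce the issue without guessing.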
4. Our response commitments
We aim to meet the following targets, measured from the time your report is received at our security mailbox:
- Acknowledgement: within 2 business days.
- Initial triage & severity assessment: within 5 business days.
- Status updates: at least every 10 business days while the report is open.
- Remediation targets (from triage confirmation):
- Critical: mitigation within 72 hours, fix within 7 days where feasible.
- High: fix within 30 days.
- Medium: fix within 90 days.
- Low / informational: scheduled in the normal backlog; no SLA.
- Closure notice: you will be informed when the fix ships and, where appropriate, invited to verify.
Timelines may slip for complex issues or those that require upstream coordination. We will tell you if that happens and why.
5. In scope
The following assets and issue classes are in scope.
Assets
- smartsy-ai.com and all of its public subdomains.
- Our public REST and streaming APIs under smartsy-ai.com/api/*.
- The Smartsy mobile apps for Android and iOS that we publish under our own developer accounts (package/bundle ai.smartsy.app).
- Official assets under /.well-known/, our sitemap, robots, and feeds.
Issue classes
- Authentication, session handling, password reset, and MFA bypasses, including WebAuthn / passkey flaws.
- Account takeover, tenant isolation, horizontal and vertical privilege escalation, and IDOR.
- Remote code execution, server-side request forgery, server-side template injection, deserialization bugs.
- SQL, NoSQL, command, header, and log injection; XXE.
- Cross-site scripting (reflected, stored, DOM) with a realistic impact.
- Cross-site request forgery on state-changing endpoints.
- Path traversal, arbitrary file read/write, and unsafe redirects that lead to token theft.
- Business-logic bugs: billing bypass, usage-quota circumvention, coupon abuse, race conditions with measurable impact.
- Cryptographic weaknesses in our own code (not in well-reviewed upstream libraries, unless exploitable in our configuration).
- Leaking of credentials, API keys, or access tokens in responses, public artifacts, logs, or source maps.
- Cloud / infrastructure misconfigurations leading to data exposure (open object stores, public management endpoints, exposed databases).
- Supply-chain issues affecting our published packages, mobile builds, or Docker images.
- Vulnerabilities in the in-app billing / subscription flow that allow unpaid access, refund abuse, or receipt forgery.
- Mobile-specific: insecure data storage of credentials or tokens, intent / URL-scheme hijacking, misconfigured deep links, insecure WebViews.
6. Out of scope & non-qualifying issues
The following are out of scope or generally not eligible for recognition, unless you can demonstrate a concrete security impact beyond the default finding:
- Denial-of-service, volumetric, stress, brute-force, or rate-limit tests against production.
- Automated scanner output without a working proof-of-concept and clear impact.
- Missing or misconfigured best-practice headers (HSTS, CSP, Referrer-Policy, etc.) with no demonstrated exploitation.
- Reports based solely on outdated library versions without a working exploit path in our configuration.
- Reports based on self-XSS, tab-nabbing without sensitive data, clickjacking on pages without sensitive actions, or UI redressing in general.
- Missing SameSite, secure flags, or cookie attributes on non-sensitive cookies.
- Username / email enumeration on public endpoints that are designed to answer such queries (e.g. sign-up form feedback).
- Email issues: SPF, DKIM, DMARC, BIMI, spoofing of non-primary domains; we already publish strict DMARC on primary domains.
- SSL/TLS configuration findings (cipher suites, older protocols) on hosts that are not terminating user traffic.
- Rate-limit bypasses that do not result in a meaningful security or billing impact.
- Self-service actions an authenticated user can perform against their own account (e.g. "I can log myself out").
- Social engineering, phishing, or physical attacks against staff, users, or facilities.
- Attacks that require a malicious Android app, rooted/jailbroken device, or a user installing a debug build.
- Open redirects without demonstrable impact (token theft, credential leak).
- Reports about functionality that is still marked experimental or behind a feature flag.
- Vulnerabilities only reachable with access to a user's device, cleartext credentials, or backup of their keychain.
- Findings in third-party services we rely on but do not operate (see §15).
- Any issue that requires us to disable MFA, allow-list an attacker, or otherwise weaken our own protections to reproduce.
- Output-content issues from the AI model that are not a security vulnerability (hallucinations, biased answers, refusals). Please see §12 for AI-specific reports.
7. Testing rules & rules of engagement
When testing, you must:
- Only test with accounts you own, or accounts you have explicit written permission to test against. Do not target third-party users.
- Use your own user agent or append a string identifying yourself (e.g. x-researcher: your-handle) so we can distinguish your traffic.
- Stop immediately when you reach the point where you have evidence of a vulnerability; do not escalate, pivot, or enumerate further.
- Access the minimum data required to demonstrate impact. Do not download, copy, or retain user data. If you accidentally access personal data, stop, delete local copies, and tell us.
- Avoid data destruction, modification, or exfiltration; use test data where possible.
- Respect rate limits; do not launch high-volume automated scans against our production systems.
- Not use findings to gain further access or compromise other systems.
- Not use findings for extortion, public shaming, or to demand compensation as a condition of disclosure. We will not respond to such communications.
- Keep the report confidential until we agree on a public-disclosure timeline (see §11).
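The traffic-tagging rule above works from any HTTP client. A minimal Python sketch using only the standard library (the endpoint is illustrative; substitute whatever you are actually testing):

```python
import urllib.request

# Tag research traffic with an identifying header, per the rules of
# engagement, so Smartsy can distinguish it in server logs.
# The URL below is an illustrative placeholder, not a documented endpoint.
req = urllib.request.Request(
    "https://smartsy-ai.com/api/health",
    headers={"x-researcher": "your-handle"},
)

# urllib capitalises the first letter of header names on storage:
print(req.get_header("X-researcher"))  # your-handle

# urllib.request.urlopen(req) would then send the tagged request.
```

The same header can be added with curl via `-H "x-researcher: your-handle"`.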
You may:
- Use standard web security testing tools (proxy-based testing, manual browser tools, curl, HTTP clients, Burp Community, ZAP) at reasonable volumes.
- Create multiple test accounts (with distinct, realistic-looking but clearly non-personal data) for authorization testing.
- Reverse-engineer our publicly distributed mobile apps for the purposes of security research.
- Ask us, before testing, whether a particular action is within policy.
8. Safe harbor & legal
When research is conducted consistently with this policy, Smartsy:
- Considers it authorized under applicable anti-hacking laws, including the UK Computer Misuse Act 1990 and equivalent statutes, and will not bring or support a civil or criminal action against you.
- Considers it authorized under our Terms of Service and will not treat it as a breach of those Terms.
- Considers it authorized under applicable anti-circumvention law (e.g. DMCA §1201) for the purpose of good-faith security research.
- Will, if someone else (for example a hosting provider) takes action against you for research on Smartsy that followed this policy, tell them the activity was authorized and ask them to stand down.
Safe harbor does not extend to activity that violates the testing rules in §7, that intentionally harms users, that accesses data beyond the minimum needed to demonstrate the bug, or that violates laws other than the specific ones listed above (for example privacy, data-protection, or export-control laws). If in doubt, ask first.
This is a summary, not a contract; nothing here waives claims against actors who are not acting in good faith. We reserve the right to update the policy, with changes documented in §18.
9. Severity & prioritization
We use CVSS v3.1 as a starting point and then adjust for real-world impact in our environment: data sensitivity, exploit pre-conditions, mitigations already in place, affected user population, and reachability. We generally map severity as follows:
- Critical: unauthenticated RCE; large-scale data access; full account takeover of arbitrary users; payment bypass at scale.
- High: authenticated privilege escalation; single-user account takeover chains; sensitive data leak; bypass of paid features affecting revenue.
- Medium: stored XSS in authenticated views; CSRF on state-changing endpoints; logic bugs with moderate impact.
- Low: reflected XSS requiring unusual conditions; minor information disclosure; misconfigurations with limited impact.
- Informational: hardening suggestions with no direct exploitability.
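As a starting point before the contextual adjustments described above, the CVSS v3.1 qualitative scale maps base scores to bands like this (a sketch only; our final severity can move up or down from the band):

```python
def cvss_band(score: float) -> str:
    """Map a CVSS v3.1 base score to the standard qualitative rating.

    This is only the starting point; triage then adjusts for data
    sensitivity, pre-conditions, mitigations, and reachability.
    """
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "Informational"   # CVSS v3.1 calls this band "None"
    if score < 4.0:              # 0.1 - 3.9
        return "Low"
    if score < 7.0:              # 4.0 - 6.9
        return "Medium"
    if score < 9.0:              # 7.0 - 8.9
        return "High"
    return "Critical"            # 9.0 - 10.0
```

For example, an unauthenticated RCE scored 9.8 lands in Critical, while a reflected XSS scored 6.1 starts as Medium before adjustment.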
10. Recognition & rewards
Smartsy does not yet operate a formal paid bug-bounty program. We offer the following in return for reports that lead to an accepted fix:
- Public credit on the Hall of fame below, with the name or handle you choose.
- A written acknowledgement we can confirm to prospective employers or to CVE reviewers.
- Discretionary rewards for especially impactful reports (swag, subscription credits, or cash tokens of appreciation). These are ex-gratia and not guaranteed.
- Priority consideration when we launch a paid program in future.
We do not pay for duplicate reports, out-of-scope findings, or for reports that were already known to us.
11. Public disclosure policy
We follow coordinated disclosure. After a report is fixed, we are happy for researchers to publish write-ups. Please:
- Wait until we have confirmed the fix is deployed to production (we will tell you).
- Give us at least 30 calendar days from fix deployment before publication, to allow customers to update mobile apps.
- Share a draft with us before publication so we can check for factual errors or unintentional disclosure of other researchers' work.
- Credit Smartsy accurately; link to this page if you summarize our policy.
If we cannot agree on a timeline we will discuss it in good faith. In exceptional cases where a bug is actively being exploited, we reserve the right to publish first; you will be told before we do.
12. AI-specific reports
Smartsy is built on top of large language models. We accept security-relevant AI reports in scope when they demonstrate:
- Prompt or indirect-prompt-injection that makes the model reveal another user's data, bypass authentication, or execute unintended actions via tools we expose.
- Prompt injection that causes the model to fetch attacker-controlled URLs, leak tokens/cookies, or rewrite server-side state.
- Training- or retrieval-data exfiltration — for example, prompts that reliably reveal system prompts, server environment variables, or customer data.
- Model misuse that bypasses our paid-feature or safety gating at the API layer.
We do not treat the following as security vulnerabilities, even though we still want to hear about them through regular feedback channels:
- Model hallucinations or factual errors.
- "Jailbreaks" that only affect what the model will say to the user who asked, without crossing trust boundaries.
- Biased, offensive, or otherwise unpleasant model output that does not leak data or bypass a security control.
- Reproductions of known, published jailbreak techniques against third-party upstream models.
13. Mobile apps
The Smartsy Android and iOS apps that we publish under our own developer accounts are in scope. When reporting mobile issues please include:
- App version and build number (visible in Settings → About).
- Platform and OS version, and whether the device is rooted/jailbroken.
- Whether the issue reproduces against a release build from the store, or only against a sideloaded debug build (debug builds are out of scope).
- Any local files, logs, or clipboard content involved (please redact).
Findings that require a rooted/jailbroken device, a malicious sibling app, or an attacker in physical possession of an unlocked device are generally out of scope unless they demonstrate a server- or account-level impact.
14. User privacy during testing
We take user privacy very seriously. When you test:
- Treat any personal data of real users as radioactive. Stop on first sight, notify us, and do not retain, publish, or share it.
- Never attempt to social-engineer, contact, or identify real users whose data appeared in your testing.
- If you believe your testing triggered automated abuse protections, let us know so we can clear your account.
15. Third-party services
Smartsy relies on services we do not operate — cloud providers, app stores, payment processors, model providers, analytics, and email. Vulnerabilities in those platforms should be reported to them directly; we will not accept reports we cannot remediate.
However, if a third-party misconfiguration on our side (for example, an exposed bucket or an incorrectly scoped API key) is what actually leaks data, that is in scope.
16. Our own security practices
So that you have a realistic picture of the environment you are testing against, here is a non-exhaustive summary:
- All traffic to smartsy-ai.com is HTTPS-only with HSTS. We regularly review TLS configuration.
- Passwords are hashed with scrypt using per-user random salts; we never store plaintext.
- Sessions are signed with HMAC-SHA256 using a server-side secret, with short TTL and re-auth on sensitive actions.
- Two-factor authentication is supported for user accounts, and required for administrative access.
- Administrative endpoints are behind a separate authentication layer and are IP-restricted where possible.
- Database credentials, third-party API keys, and service-account keys are stored outside the web root and out of version control; rotation is supported.
- Dependencies are tracked and patched on a recurring schedule; critical CVEs are expedited.
- Backups are encrypted and access-controlled; we do periodic restore drills.
- We log authentication, billing, and administrative actions with enough detail to investigate incidents.
- Code changes go through review before production; production deploys are auditable.
- We publish DMARC, SPF, DKIM, and BIMI on primary mail domains.
- We maintain this policy and /.well-known/security.txt as our public security contact surface.
17. Hall of fame
We credit researchers whose reports lead to a shipped fix. This section is updated periodically.
No public entries yet — be the first.
18. Policy history
- 2.0 · 24 April 2026: Rewrote end-to-end with detailed scope, rules of engagement, SLA, severity guide, mobile-app section, AI-specific section, expanded safe harbor, disclosure timeline, and hall of fame.
- 1.0 · initial release: basic contact, scope, out-of-scope, and safe-harbor statement.
Thank you for helping keep Smartsy and our users safe.