Security, privacy, and your content
Disclaimer. This page is high-level guidance on how we talk about security at Ripenn; it is not a legal agreement. Your order form, DPA (if any), and the policies on ripenn.ai govern if there is ever a conflict.
Procurement for AI tools is no longer "shadow IT." Legal asks about training, subprocessors, and retention. Security asks about access controls and auditability.
Below is a practical checklist you can reuse for any vendor, along with how Ripenn answers these topics in plain language. It expands on our pricing FAQ; for contractual wording or enterprise terms, contact support@ripenn.ai or your account owner.
Questions your security team will ask
- Who can access customer content? Access should be authenticated, scoped to the account, and logged where possible. Ask whether vendor staff can read your workspace by default.
- Is content used to train foundation models? Ask for a clear yes/no and under what opt-in conditions. If the answer is fuzzy, assume the worst for planning.
- Which third parties touch the data? AI products route prompts through model providers and hosting. You need a list of subprocessors (or equivalent) for your vendor record.
- How long is data retained? Ask about database retention, logs, backups, and support exports.
- What happens when you delete a project? Deletion should be explicit in the product and reflected in backend policy—not "we will get to it eventually."
How Ripenn approaches this
- Security practices. We use industry-standard practices appropriate to a SaaS product handling customer content and strategy data.
- Training use. As stated in our FAQs, your content is not shared with third parties or used to train AI models without your explicit consent. If you need that sentence in a vendor questionnaire, quote the FAQ and follow up for the latest formal wording.
- Payments and billing. Card processing runs through Stripe; enterprise customers may use invoice or wire per plan.
- Support and enterprise. Higher tiers include priority support and, where applicable, strategist access—those humans see only what your permissions and support processes allow.
Operational habits that cost nothing
- Avoid pasting highly sensitive material into any cloud tool when your policy forbids it. Use redacted briefs or synthetic examples for strategy exercises.
- Use separate projects for brands or business units so permissions stay clean.
- Rotate API keys and passwords when teammates leave—obvious, but it fixes more incidents than any AI feature.
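The rotation habit above is easy to automate against whatever key inventory you already have. As a minimal sketch, assume you can export key metadata as a CSV with `name` and `created` columns (the column names, the 90-day window, and the export format are all hypothetical, not a Ripenn feature):

```python
import csv
from datetime import datetime, timedelta
from io import StringIO

MAX_AGE_DAYS = 90  # whatever rotation window your policy sets

def stale_keys(csv_text, today=None):
    """Return names of keys older than MAX_AGE_DAYS from a CSV export
    with 'name' and 'created' (YYYY-MM-DD) columns."""
    today = today or datetime.now()
    cutoff = today - timedelta(days=MAX_AGE_DAYS)
    reader = csv.DictReader(StringIO(csv_text))
    return [row["name"] for row in reader
            if datetime.strptime(row["created"], "%Y-%m-%d") < cutoff]

# Hypothetical export: one long-lived CI key, one recent key
export = "name,created\nci-bot,2023-01-15\nnew-hire,2025-01-01\n"
```

Run this on a schedule and flag anything it returns for rotation; the point is that a ten-line script catches forgotten credentials more reliably than memory does.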
For how we think about measuring brand presence in AI answers—models, run counts, and cost tradeoffs—see our write-up on the real cost of AI visibility tracking.