About these case studies: The following scenarios illustrate common AI governance challenges we see in e-commerce businesses and how a Safety Pinnacle audit addresses them. These are representative examples based on our methodology and typical findings, not specific client engagements. Company names are fictional.

FastGrowth Fashion Ltd

Fast Fashion E-Commerce
Risk Rating HIGH
35 Employees
£4.2M Annual Revenue
6 AI Systems
30% EU Sales

AI Systems Audited

ChatGPT (Free Tier), Klaviyo AI, Shopify Product Recommendations, Gorgias AI, Inventory Planner, Canva AI

Key Challenges Identified

Data leakage via ChatGPT: Staff using the free tier of ChatGPT were inputting customer names, order details, and complaint information — data that, under the free tier's default settings, OpenAI may use for model training.
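While tooling is migrated to a tier with data protection agreements, a lightweight interim control is to strip obvious identifiers before any prompt leaves the business. A minimal sketch, assuming plain-string prompts; the pattern set is illustrative and not exhaustive, and the `ORD-` order-ID format is a hypothetical example:

```python
import re

# Illustrative patterns only — extend for your own identifier formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ORDER_ID": re.compile(r"\bORD-\d{6,}\b"),  # hypothetical order-number format
    "UK_POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected identifiers with placeholders before any API call."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

A filter like this reduces, but does not eliminate, leakage risk — free-text complaint narratives can still identify individuals, which is why the policy and migration steps below remain the primary controls.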

No AI usage policy: No documented guidelines on which AI tools were approved or how they should be used with customer data.

Klaviyo segmentation concerns: AI-driven customer segmentation potentially drawing sensitive inferences without a valid lawful basis.

No DPIAs conducted: Despite processing EU customer data through AI systems, no Data Protection Impact Assessments had been completed.

Audit Findings

1 Critical
4 High Priority
3 Medium
6 Compliance Gaps

Recommendations Delivered

  • Immediate: Migrate from ChatGPT free tier to ChatGPT Team/Enterprise with data protection agreements
  • 30 days: Implement AI Usage Policy covering all approved tools and data handling requirements
  • 60 days: Complete DPIAs for Klaviyo and Gorgias AI processing EU customer data
  • 90 days: Update privacy policy with AI disclosure and establish vendor assessment process
Estimated Remediation Investment £4,000 – £8,000

HomeStyle Direct Ltd

Home Goods E-Commerce
Risk Rating HIGH
85 Employees
£12–15M Annual Revenue
10 AI Systems
25% EU Sales

AI Systems Audited

Signifyd (Fraud Detection), Custom CLV Model, Zendesk AI, HubSpot AI, Clerk.io, Nosto, ChatGPT Enterprise, Brightpearl, Canva AI, Returns Prediction Model

Key Challenges Identified

Discrimination allegation: A customer publicly alleged discrimination after their order was declined; the delivery address was on a council estate. Fifteen similar complaints had been received over six months.

Custom ML model bias: The in-house Customer Lifetime Value model showed postcode-correlated bias during testing — postcode is a known proxy for socioeconomic status.
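Postcode-correlated bias of this kind can be surfaced with a simple group-wise score comparison. A minimal sketch, assuming predicted CLV scores paired with a postcode-derived deprivation decile; the column names and sample records are illustrative, not the audited model:

```python
from statistics import mean

def score_gap_by_group(records, group_key="deprivation_decile", score_key="predicted_clv"):
    """Mean predicted score per group; a large spread between groups is a
    red flag that the model is proxying socioeconomic status via postcode."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r[score_key])
    means = {g: mean(v) for g, v in groups.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

# Illustrative records only — not client data.
sample = [
    {"deprivation_decile": 1, "predicted_clv": 420.0},
    {"deprivation_decile": 1, "predicted_clv": 380.0},
    {"deprivation_decile": 10, "predicted_clv": 190.0},
    {"deprivation_decile": 10, "predicted_clv": 210.0},
]
means, gap = score_gap_by_group(sample)
```

In practice a raw mean gap is only a screening signal — a full bias audit would also test statistical significance and agree acceptable thresholds with legal counsel before redeployment.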

No customer appeal process: No mechanism for customers to challenge Signifyd's automated transaction decisions — a breach of the GDPR Article 22 safeguards, which require a route to human intervention in solely automated decisions.

EU AI Act exposure: Signifyd potentially classified as high-risk under EU AI Act Annex III, with the August 2026 compliance deadline approaching.

Audit Findings

2 Critical
5 High Priority
5 Medium
8 Compliance Gaps

Recommendations Delivered

  • Immediate: Investigate Signifyd discrimination allegation — request fairness documentation, analyse complaint patterns, consider external bias audit
  • Immediate: Halt CLV model deployment until bias root cause analysis completed and remediation implemented
  • 30 days: Implement customer appeal process for declined transactions (GDPR Article 22 compliance)
  • 60 days: Complete DPIAs for all 8 production AI systems; address EU AI Act high-risk classification for Signifyd
Estimated Remediation Investment £8,000 – £15,000

What Would We Find in Your Business?

Every organisation using AI has blind spots. Book a free 25-minute assessment and we'll identify your biggest exposures — whether we work together or not.