Safeguarding intelligence for the age of generative AI

Independent research and OSINT-led analysis of AI-generated content, children's digital platforms, and emerging online risk.

View reports →

Products & Services

Professional Briefings

Briefings

Focused presentations covering specific themes in AI-generated content risks and platform safety. Easily tailored for ongoing DSL (designated safeguarding lead) training, staff awareness sessions, or safeguarding updates.

  • 45-90 minute format
  • Current platform risk landscape
  • Specific trend deep-dives
  • Adaptable to your context
  • Q&A and discussion included
Request briefing →
In-Depth Training

Workshops

Comprehensive sessions for schools, PGCE students, social work training, and local councils. Includes original research frameworks, practical protocols, and evidence-based approaches to digital safety ecosystems.

  • Half-day or full-day formats
  • Digital Safety Ecosystems framework
  • Original research findings
  • Hands-on safeguarding protocols
  • Professional development certified
Request workshop →
Bespoke Research

Intelligence Services

Custom OSINT-led research and analysis for organisations requiring specific online child safety intelligence. One-off or contracted service delivering detailed reports tailored to your requirements and timelines.

  • Platform-specific analysis
  • Content pattern tracking
  • Cross-platform migration studies
  • Threat landscape assessments
  • Evidence-based recommendations
Discuss requirements →

Who we help

For Parents

Your child's online world moves fast. We help you understand what they're actually seeing.

  • Plain-English explanations of trends and risks
  • Practical guidance you can use today
  • No tech jargon or complicated terms
  • Real examples of what's happening right now

For Teachers

Bring your school's digital safety knowledge up to date with current platform realities.

  • Workshops tailored for teaching staff
  • Age-appropriate conversation starters
  • Resources for assemblies and lessons
  • Understanding what pupils are discussing

For Safeguarders

Access detailed intelligence on emerging patterns that standard monitoring doesn't catch.

  • Technical briefings with evidence base
  • Cross-platform pattern analysis
  • Early warning on emerging risks
  • Methodology documentation included

What we monitor

YouTube Kids

AI-generated content networks, algorithmic recommendation patterns, and evolving content strategies targeting young audiences.

TikTok

Synthetic media proliferation, content mutation across accounts, and platform-specific AI content adaptation patterns.

Roblox

User-generated environments, cross-platform content migration, and emerging AI-assisted creation tools in gaming spaces.

Instagram Reels

AI content networks in short-form video, engagement optimisation patterns, and child-adjacent content strategies.

AI Companion Platforms

Chatbot accessibility to minors, age verification failures, safety guardrail bypasses, and emotional dependency risk patterns.

Emerging Platforms

New platforms and features where AI-generated content patterns establish an early presence before wider recognition.

Methodology

Public-source monitoring

Systematic collection and documentation of publicly available content across children's digital platforms using established OSINT techniques.

Pattern-led analysis

Identification of recurring content structures, cross-platform mutations, and network behaviours that suggest coordinated or algorithmic generation.

Safeguarding interpretation

Translation of technical findings into actionable intelligence for schools, local authorities, and safeguarding professionals.

All research is conducted using publicly available sources. No undisclosed monitoring, private data collection, or direct engagement with minors occurs in the course of this work.

Current findings

AI Companion Safety

No Guardrails: 59 platforms assessed for child safety

85% of AI companion chatbot platforms engaged with a self-disclosed 14-year-old. 91.5% were rated Poor or Critical for security infrastructure.

March 2026
Platform analysis

Poppy Playtime Chapter 5: scam and content tracking

Five-day monitoring operation documenting scam infrastructure deployment within 24 hours of game launch, with cross-platform analysis across YouTube and Roblox.

March 2026
Cross-platform study

Content mutation across TikTok and YouTube

Evidence of coordinated patterns where content tested on one platform migrates and adapts to recommendation systems on others.

Ongoing