Independent research and OSINT-led analysis of AI-generated content, children's digital platforms, and emerging online risk.
Focused presentations covering specific themes in AI-generated content risks and platform safety. Easily tailored for ongoing DSL (Designated Safeguarding Lead) training, staff awareness sessions, or safeguarding updates.
Comprehensive sessions for schools, PGCE students, social work trainees, and local councils. Includes original research frameworks, practical protocols, and evidence-based approaches to digital safety ecosystems.
Custom OSINT-led research and analysis for organisations requiring specific online child safety intelligence. One-off or contracted service delivering detailed reports tailored to your requirements and timelines.
Your child's online world moves fast. We help you understand what they're actually seeing.
Bring your school's digital safety knowledge up to date with current platform realities.
Access detailed intelligence on emerging patterns that standard monitoring doesn't catch.
AI-generated content networks, algorithmic recommendation patterns, and evolving content strategies targeting young audiences.
Synthetic media proliferation, content mutation across accounts, and platform-specific AI content adaptation patterns.
User-generated environments, cross-platform content migration, and emerging AI-assisted creation tools in gaming spaces.
AI content networks in short-form video, engagement optimisation patterns, and child-adjacent content strategies.
Chatbot accessibility to minors, age verification failures, safety guardrail bypasses, and emotional dependency risk patterns.
New platforms and features where AI-generated content patterns establish early presence before wider recognition.
Systematic collection and documentation of publicly available content across children's digital platforms using established OSINT techniques.
Identification of recurring content structures, cross-platform mutations, and network behaviours that suggest coordinated or algorithmic generation.
Translation of technical findings into actionable intelligence for schools, local authorities, and safeguarding professionals.
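The pattern-identification step above can be illustrated with a minimal sketch: comparing publicly collected content captions across platforms and flagging near-duplicates, a simple proxy for the cross-platform mutation patterns our analysis documents. The records, platform names, and similarity threshold here are purely illustrative assumptions, not our actual tooling.

```python
from difflib import SequenceMatcher

# Hypothetical (platform, caption) pairs collected from public pages.
records = [
    ("platform_a", "surprise eggs learning colors for kids"),
    ("platform_b", "surprise eggs learning colours for kids"),
    ("platform_a", "how to plant tomatoes at home"),
]

def similarity(a: str, b: str) -> float:
    """Text similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()

# Flag near-identical captions appearing on *different* platforms.
THRESHOLD = 0.8  # illustrative cut-off, tuned per investigation
flags = []
for i, (p1, c1) in enumerate(records):
    for p2, c2 in records[i + 1:]:
        if p1 != p2 and similarity(c1, c2) >= THRESHOLD:
            flags.append((p1, p2, round(similarity(c1, c2), 2)))

print(flags)
```

In practice, real investigations combine many such signals (posting cadence, account linkage, media fingerprints); this sketch shows only the basic near-duplicate check.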
85% of AI companion chatbot platforms engaged with a self-disclosed 14-year-old; 91.5% were rated Poor or Critical for security infrastructure.
Five-day monitoring operation documenting scam infrastructure deployment within 24 hours of game launch, with cross-platform analysis across YouTube and Roblox.
Evidence of coordinated patterns where content tested on one platform migrates and adapts to recommendation systems on others.