Plain Language AI Disclosure
- Clear labelling: "AI-generated summary" / "AI-detected cultural content"
- User-facing documentation explaining what each AI feature does
- No hidden algorithmic decision-making
Process Transparency
- Published documentation on how Claude AI processes video content
- Clear explanation of the AWS Bedrock infrastructure
- Open communication about AI model versions and capabilities
Data Collection Disclosure
- Privacy policy clearly states what data is processed
- Transparency about AWS/Supabase/Stripe data handling
- Clear retention policies for video, transcripts, and user data
Cultural Advisory Integration
- Mātauranga Māori detection system developed with input from Te Wānanga o Raukawa
- Traditional Knowledge Labels integration (Local Contexts)
- Ongoing consultation with iwi partners on AI behaviour
Te Tiriti Principles Application
- Partnership: co-design of cultural safeguards with Māori organisations
- Protection: AWS Guardrails prevent inappropriate AI handling of cultural content
- Participation: Māori organisations control their own data sovereignty settings
Te Reo Māori Integration
- Te Hiku Media partnership for accurate reo transcription
- Bilingual UI and content generation
- Cultural concept recognition across languages
User Persona-Specific Consultation
- Regular feedback loops with Cultural Knowledge Stewards
- Community Hub Connectors advisory group for feature development
- Pilot programmes with Te Matarau a Māui and Te Wānanga o Raukawa
Impacted Community Engagement
- Rangatahi feedback on AI-generated learning content
- Kaumātua consultation on cultural content handling
- Creator surveys on AI accuracy and usefulness
Transparent Development Process
- Roadmap shared publicly with "why this matters" explanations
- Beta testing with diverse user groups before features go live
- Open channels for algorithm concerns (Gleap support system)
Known Limitations Documentation
- AI transcription accuracy rates disclosed (typically 85-95% for clear audio)
- Cultural concept recognition limitations acknowledged
- Language-specific performance variations documented (te reo Māori vs English)
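The disclosed accuracy figures above are conventionally derived from word error rate (WER): the word-level edit distance between a human reference transcript and the AI transcript, divided by the reference length. A minimal sketch (the sample phrases are illustrative, not real evaluation data):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

wer = word_error_rate("kia ora koutou katoa", "kia ora koutou")
accuracy = 1 - wer  # 0.75 here; "clear audio" clips would sit in the disclosed 0.85-0.95 band
```

Computing WER separately for te reo Māori and English evaluation sets is what makes the language-specific performance variations above measurable rather than anecdotal.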
Bias Identification & Mitigation
- Training data bias: Claude is trained predominantly on Western/English content; we supplement with Te Hiku Media resources for te reo
- Cultural bias: mātauranga Māori detection system reviewed by cultural advisors
- User testing: diverse pilot groups prevent feature design bias
Data Quality Standards
- Video quality requirements for optimal AI processing
- Audio clarity standards for transcription accuracy
- Metadata validation for content categorisation
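Metadata validation of the kind listed above can be enforced with a simple schema check before content enters AI processing. The field names and allowed values below are hypothetical placeholders, not the platform's actual schema:

```python
# Illustrative schema only -- field names and codes are assumptions for this sketch.
REQUIRED_FIELDS = {"title", "language", "category"}
ALLOWED_LANGUAGES = {"mi", "en", "mi-en"}  # te reo Māori, English, bilingual

def validate_metadata(meta: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the metadata is valid."""
    errors = []
    missing = REQUIRED_FIELDS - meta.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if meta.get("language") not in ALLOWED_LANGUAGES:
        errors.append(f"unknown language: {meta.get('language')!r}")
    if not str(meta.get("title", "")).strip():
        errors.append("title is empty")
    return errors
```

Rejecting items with validation errors up front keeps categorisation quality problems out of the downstream AI pipeline.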
Regular Algorithm Review Process
- Quarterly cultural advisor review of mātauranga Māori detection accuracy
- Monthly analysis of AI-generated content quality metrics
- User feedback integration into AI prompt engineering
Unintended Consequences Monitoring
- Track: false positives in cultural content detection
- Monitor: AI generating culturally inappropriate learning materials
- Assess: over-reliance on AI reducing creator agency
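Tracking false positives implies comparing the detector's flags against advisor-reviewed ground truth. One minimal way to compute the monitored numbers (the sample data below is invented for illustration):

```python
def detection_metrics(flags: list[bool], reviewed: list[bool]) -> dict:
    """Compare detector output against cultural-advisor ground truth.

    flags[i]    -- detector flagged item i as containing mātauranga Māori content
    reviewed[i] -- advisors confirmed that it does
    """
    tp = sum(f and r for f, r in zip(flags, reviewed))          # correct flags
    fp = sum(f and not r for f, r in zip(flags, reviewed))      # false positives
    fn = sum(r and not f for f, r in zip(flags, reviewed))      # missed content
    return {
        "false_positives": fp,
        "precision": tp / (tp + fp) if tp + fp else 1.0,
        "recall": tp / (tp + fn) if tp + fn else 1.0,
    }
```

Precision falling over time would surface exactly the false-positive drift this section commits to tracking; recall falling would mean cultural content is slipping through unflagged.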
Ethical AI Framework
- No AI training on user content without explicit consent
- AWS Guardrails prevent harmful AI outputs
- Human-in-the-loop review for all cultural content decisions
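In Bedrock's Converse API, a guardrail is attached per request via `guardrailConfig`. The sketch below only builds the request payload; the guardrail identifier, version, and model ID are placeholders, not the platform's real configuration:

```python
def build_converse_request(prompt: str) -> dict:
    """Assemble a Bedrock Converse request with a guardrail attached.

    All identifiers below are hypothetical placeholders for illustration.
    """
    return {
        "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": "gr-placeholder-id",  # hypothetical guardrail ID
            "guardrailVersion": "1",
            "trace": "enabled",  # surfaces which policy intervened, useful for audits
        },
    }

# In production this payload would be passed to the bedrock-runtime client, e.g.:
#   bedrock = boto3.client("bedrock-runtime")
#   response = bedrock.converse(**build_converse_request("..."))
```

Enabling the trace keeps guardrail interventions inspectable, which supports the transparency commitments elsewhere in this document.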
Point of Contact
- Algorithm Oversight Contact: Pera Barrett (Founder), pera@kahacreate.com
- Public-facing AI transparency page on the website
- Clear escalation path for AI-related concerns
Challenge & Appeal Process
- Users can flag AI-generated content as inaccurate
- Mātauranga Māori false positives can be contested
- Content access restrictions are reviewable by human moderators
Clear Human Role Explanation
- AI role: generates drafts, suggests structures, detects patterns
- Human role: reviews, approves, adds context, applies tikanga
- Critical decisions: all cultural protocols determined by creators, not AI