I specialize in evaluating complex tasks, workflows, and logic structures to ensure clarity, consistency, and usability, especially for AI-related or technical projects. My background in psychology, strategy, and humanitarian operations allows me to analyze systems from both a human and a structural perspective.

What I can do:
- Review prompts, tasks, or workflows for ambiguity, inconsistencies, and logic gaps
- Test product flows and identify missing steps, edge cases, or failure points
- Evaluate user instructions, UX workflows, or policy logic for clarity
- Provide structured feedback on AI-generated outputs
- Analyze complex scenarios and highlight risks, contradictions, or unrealistic assumptions
- Support teams creating evaluation frameworks or QA documentation

Why work with me:
- Strong analytical and critical-thinking skills
- Ability to reason through complex systems without needing a coding background
- Clear, structured writing and documentation
- Experience with cross-functional, remote, asynchronous work
- Comfortable working with AI models, LLM outputs, and scenario-based evaluations

Ideal for:
- AI companies needing task reviewers or evaluation support
- Teams building logic-based or scenario-driven workflows
- Startups needing someone who thinks like a QA specialist without being overly technical
- Research or data annotation projects with complex instructions

Rate: $20–50/hr, depending on scope and complexity.

I enjoy problem-solving, finding hidden inconsistencies, and helping teams build better, clearer, more reliable systems.