Current Opportunity: AI Failure Evidence Project
Document failures, get verified, build your portfolio
Become a Verified AI Evaluator
Build Your Authority. Get Recognized. Shape the Future of AI Quality.
Join GrandJury's community of verified AI Jury members. Evaluate AI systems in your domain of expertise, build your public portfolio, and earn recognition for making AI safer.
What AI Jury Members Do
AI Jury members are verified domain experts who publicly evaluate AI systems. They document what works, what fails, and why it matters. Their evaluations help developers improve AI quality and help users make informed decisions.
Public Recognition
Your evaluations are publicly attributed to you with your name, credentials, and professional profile. Build your portfolio as an AI quality expert.
Self-Directed Work
Choose which AI projects to evaluate and work at your own pace. No assigned tasks, no deadlines, no stress. Flexible participation.
Authority Building
Earn verified status, get featured in reports, land media coverage, and access consulting work through our marketplace.
Why Join the AI Jury
Verified Expert Status
Earn a verified AI Jury badge after demonstrating quality contributions. Display it on your professional profiles.
Public Portfolio
Build a public portfolio of your evaluations with your name, credentials, and expert commentary. Showcase your expertise.
Featured in Reports
Top contributors get featured in quarterly "State of AI Failures" reports distributed to media, researchers, and AI developers.
Media Coverage
Opportunities for interviews, quotes in tech publications, and thought leadership positioning.
Consulting Opportunities
Access to our marketplace where AI developers hire verified evaluators. Earn premium rates for your expertise.
Make AI Safer
Contribute to public AI accountability. Your work helps developers fix failures and helps users stay informed.
Path to Verification
Start Evaluating
Join the current campaign or apply to evaluate existing projects. Install the Chrome extension and start submitting evaluations.
Demonstrate Quality
Submit high-quality evaluations showing:
- Deep insight and domain expertise
- Clear evidence and specific examples
- Consistent contributions over time
Get Verified
We review your contributions. Top contributors receive verified AI Jury status with official badge and marketplace access.
Verification typically takes 4-8 weeks of consistent, quality contributions. No set number of evaluations is required; quality matters more than quantity.
Ideal AI Jury Profiles
We welcome evaluators from all backgrounds. Domain expertise helps, but critical thinking and communication skills matter most.
AI Safety Researchers
Analyze AI alignment issues, safety failures, and ethical concerns. Already active on LessWrong, the AI Alignment Forum, or Twitter.
Medical Professionals
Doctors, nurses, healthcare researchers evaluating AI medical advice, diagnosis tools, health information.
Legal Professionals
Lawyers, paralegals, compliance experts evaluating AI legal analysis, contract review, regulatory compliance.
Senior Engineers
Experienced developers evaluating AI code generation for bugs, security vulnerabilities, and code quality issues.
Finance Professionals
Financial advisors, analysts, economists evaluating AI financial advice, risk analysis, market predictions.
Current AI Evaluators
Evaluators working at Outlier.ai, Scale AI, or similar platforms who want more autonomy, recognition, and self-directed work.
General AI Critics
Anyone already publicly criticizing AI on Twitter, Reddit, or forums. Get recognized for expertise you're already demonstrating.
Not Your Typical Evaluation Platform
What We Offer
- ✓ Public attribution (your name on evaluations)
- ✓ Self-directed work (choose what to evaluate)
- ✓ Portfolio building (public evaluation pages)
- ✓ Verified status (credibility signal)
- ✓ Recognition focus (featured in reports, media)
- ✓ Consulting opportunities (marketplace access)
What Others Offer (Outlier.ai, Scale AI)
- ✗ Anonymous work (no public credit)
- ✗ Assigned tasks (no choice)
- ✗ No portfolio (private work)
- ✗ No verification (generic "tasker")
- ✗ Cash only (no recognition)
- ✗ No long-term opportunities (gig work)
Why This Matters: If you want to build authority, launch a consulting career, or get recognized for your expertise, GrandJury is the platform. If you just want hourly pay for anonymous work, stick with Outlier/Scale.
Current Opportunities
Active campaigns and projects seeking AI Jury evaluation.
AI Failure Evidence Project
Document failures in GPT-4, Claude, Gemini across Medical, Legal, Safety, Code, Finance domains.
Join Competition →
Project-Specific Evaluations
Evaluate specific AI projects from developers in our marketplace.
Common Questions
How do I get started?
Join our current campaign (link above) or apply directly. Install the Chrome extension, start evaluating, and build your portfolio.
Do I need specific credentials?
Not necessarily. We value domain expertise (degrees, professional experience) but also recognize self-taught experts and critical thinkers. Quality of contributions matters most.
Is this paid work?
Initial contributions are recognition-focused (no payment). Once verified, you can earn premium rates through our marketplace when AI developers hire you for project evaluations.
How much time is required?
Completely flexible. Some AI Jury members evaluate for 1-2 hours per week; others dedicate more. Work at your own pace.
Can I evaluate multiple domains?
Yes! Some AI Jury members specialize (Medical only), while others are generalists who evaluate across domains. Your choice.
What if I disagree with other evaluators?
That's valuable! Different perspectives help. Your evaluation stands on its own with your name and reasoning attached.
How do I promote my verified status?
Once verified, you receive badge assets to display on LinkedIn, Twitter, personal website, email signature, etc.
Ready to Join the AI Jury?
Start building your authority as an AI quality expert. Contribute to safer AI. Get recognized.
Questions? Email hello@grandjury.xyz or connect on LinkedIn