Adversarial Prompt Expert — Handshake AI Fellowship
You’ll be part of a red-teaming project focused on probing large language models for failure modes and harmful outputs. Your work will involve crafting prompts and scenarios to test model guardrails, exploring creative ways to bypass restrictions, and systematically documenting outcomes. You’ll think like an adversary to uncover weaknesses while collaborating with engineers and safety researchers to share findings and improve system defenses.
Compensation: Up to $80/hr
Remote position, US only. Weekly payments (Mon–Sun PST cycle; payout on Wednesdays) via PayPal, Wise, or bank transfer.
Work on AI training and evaluation projects with leading AI companies including OpenAI and Anthropic.
Apply directly on Handshake AI to get started.