Requirements
- Review and assess Large Language Model (LLM) outputs across multiple dimensions, including accuracy, relevance, coherence, tone, and completeness.
- Evaluate text, audio, video, and image data according to defined guidelines and instructions.
- Ensure safety and ethical integrity:
  - Identify and flag content that may be harmful, biased, toxic, or inappropriate.
  - Ensure all model outputs uphold safety, fairness, and responsible AI standards.
- Document issues clearly and concisely.
- Provide structured feedback for developers to support iterative model improvements.
- Work closely with AI development teams to align with project objectives.
- Contribute to enhancements in evaluation guidelines.
- Stay current with emerging trends and advancements in LLM and AI technology.
