Job Description
We are looking for a seasoned AI/ML engineer with a deep passion for data and innovation, a proactive mindset, and strong ownership of solutions. You will play a pivotal role in shaping our generative AI initiatives, delivering impactful and scalable intelligence across business verticals within the Sustainability and Climate organization.
Role and Responsibilities
- Architect, build, and maintain scalable AI/ML and GenAI solutions for enterprise applications.
- Actively contribute to AI strategy and roadmap in collaboration with leadership and product teams.
- Ensure robustness, reliability, and optimization of production-grade data pipelines and ML systems.
- Drive end-to-end AI/ML lifecycle from ideation to deployment and monitoring.
Key Responsibilities
- Lead the design and implementation of machine learning, deep learning, and GenAI systems.
- Collaborate across engineering, product, domain experts, and senior leadership to identify and solve high-impact problems with AI.
- Architect clean, maintainable, and efficient ML and data pipelines using modern frameworks and cloud services.
- Develop and deploy AI models into production environments, ensuring resilience, scalability, and low latency.
- Stay ahead of the curve on GenAI advancements and apply them to real-world use cases.
- Champion best practices in model training, evaluation, interpretability, and governance.
- Guide and mentor junior team members, fostering innovation and excellence.
Your skills and experience that will help you excel
- Strong programming skills in Python, with hands-on experience in ML libraries such as TensorFlow, PyTorch, Hugging Face Transformers, and scikit-learn.
- Proven expertise in building and deploying AI/ML systems at scale (batch or real-time).
- Practical experience with GenAI techniques, including LLMs, fine-tuning, RAG, embeddings, and prompt engineering.
- Solid understanding of MLOps best practices and familiarity with tools like MLflow, Airflow, Kubeflow, and Docker/Kubernetes.
- Deep knowledge of data structures, algorithms, and software engineering principles.
- Strong grasp of data modeling, feature engineering, and large-scale data processing with Spark or similar technologies.
- Experience with cloud platforms (AWS, Azure, GCP) and their respective AI toolkits.
- Excellent communication skills and ability to translate technical concepts to non-technical stakeholders.
- Comfortable working in fast-paced environments and navigating ambiguity.
- Self-motivated and driven by impact, with a strong sense of ownership and initiative.