Test generative AI solutions on AWS, validating LLMs, RAG pipelines, prompts, and agentic workflows with n8n and Python libraries such as deepchecks and LangChain.

Key Requirements
1) Test LLM outputs on AWS Bedrock using boto3
2) Validate fine-tuned LLMs on AWS SageMaker with deepchecks
3) Verify LangChain prompts in AWS environments
4) Test RAG pipelines with AWS OpenSearch and LangChain
5) Validate AI agents (CrewAI, AutoGen) on AWS Lambda
6) Test n8n agentic workflows (e.g., n8n.io/workflows/6270)
7) Ensure deployment stability on Amazon ECS/EKS with Docker
8) Monitor performance with Amazon CloudWatch and wandb

Must-Have Skills
- 5 years of QA automation
- 2 years testing GenAI/LLMs using Python
- Expertise in AWS Bedrock, SageMaker, Lambda, and boto3
- Proficiency in deepchecks, LangChain, CrewAI, AutoGen, and wandb
- Experience testing n8n workflows, RAG, and prompts

Preferred Skills
- AWS certification (Machine Learning / Solutions Architect)
- Familiarity with LlamaIndex and n8n templates

Mandatory Skills: Agentic Framework, AI/Generative AI, Jenkins, User Acceptance Testing, Functional/System Testing, In-Sprint Testing, Regression Testing, SQL & Database Testing, RTM, Selenium (Java), SIT, Test Design and Execution, Test Reports and Dashboards
Job Title
Quality Engineering Specialist