
Job Title


Data Engineering Architect


Company : Prudent Technologies and Consulting, Inc.


Location : Udaipur, Rajasthan


Created : 2026-04-16


Job Type : Full Time


Job Description

We are seeking a Data Engineering Architect to lead enterprise data architecture and data modeling on Snowflake, using a Medallion approach (Bronze/Silver/Gold) to deliver curated, high-quality datasets. You will also help modernize and operationalize the data platform, including containerization and migration of workloads to Kubernetes, while ensuring production readiness through strong observability, security, and reliability practices.


Key Responsibilities

- Review and document the current architecture; produce clear technical designs and architecture diagrams.
- Own and evolve the target-state data architecture on Snowflake, including Medallion layer design (Bronze/Silver/Gold), data contracts, and domain-aligned data products.
- Define and implement data modeling standards (e.g., dimensional, normalized, and/or Data Vault patterns as appropriate), ensuring consistency, lineage, and reuse across teams.
- Enhance an existing ingestion and processing platform to support multiple runtime modes and robust configuration management.
- Design and implement containerization standards (Docker), including image optimization and security best practices.
- Plan and execute the migration from VM-based workloads to Kubernetes, partnering with DevOps/Platform teams.
- Develop and maintain Kubernetes deployment artifacts (manifests; Helm charts preferred).
- Establish production-grade observability (logging, metrics, alerting) and on-call readiness/operational runbooks.
- Build and optimize data ingestion patterns to SQL Server and Snowflake, leveraging dbt for modeling/transformations where applicable, including incremental loading/CDC and performance tuning.
- Drive security practices, including secrets management and encryption; contribute to disaster recovery/failover planning.
- Mentor engineers and drive knowledge transfer through documentation, code reviews, and best practices.


Required Qualifications

- 5+ years of experience in data engineering/software engineering, including hands-on ownership of production pipelines and services.
- Advanced Python development (3.10+) with strong software design fundamentals (patterns, testing, error handling, logging, debugging).
- Experience integrating APIs (REST; SOAP a plus) and building reliable, maintainable integrations.
- Strong Docker experience (multi-stage builds, image optimization, container security).
- Strong Kubernetes experience (deployments, services, config maps, networking/storage fundamentals).
- Strong SQL skills and experience with SQL Server (connectivity, optimization; CDC patterns a plus).
- Hands-on experience with dbt (development, modeling best practices, and CI/CD for dbt projects).
- Mandatory: hands-on experience with Snowflake (data loading patterns, performance/optimization, warehouse concepts, and security/governance basics).
- Demonstrated experience in data architecture and data modeling, including designing curated layers using the Medallion framework (Bronze/Silver/Gold).
- Hands-on experience with SnapLogic (or a similar iPaaS) for building and operating integrations and data flows.
- Strong quantitative/problem-solving skills (math/statistics), with the ability to translate business rules into data logic and validation checks.
- Ability to write clear technical documentation and drive architecture decisions with trade-off analysis.
- Overall 12+ years of experience.


Preferred Qualifications (Nice-to-Have)

- Helm charts for Kubernetes deployments.
- PySpark and distributed computing frameworks.
- Apache Airflow (or similar workflow orchestration tools).
- Monitoring stacks such as Grafana and/or Azure Monitor.
- Metadata-driven ingestion frameworks; SCD Type 2 and incremental loading patterns.
- Experience migrating workloads from VMs to Kubernetes with minimal downtime.


Experience & Professional Skills

- Typically 5–8 years in data engineering, with 2–3 years influencing architecture and technical direction.
- Proven ability to troubleshoot production issues, perform root-cause analysis, and implement durable fixes.
- Strong collaboration skills across engineering, DevOps/Infrastructure, and stakeholders.
- Self-directed and able to operate with minimal supervision in an ambiguous environment.