Recent Job Listings

💼 38 Data Engineer jobs found in the last 30 days

IntraEdge

Job Title: Sr. Data Engineer
Hiring Agency: ZipRecruiter
Published: Mar 31, 2026
Location: Charlotte, NC, United States
Division: Data & Analytics
Source Link: https://www.google.com
Job Description: We are seeking a highly experienced Senior Data Engineer with 11+ years of experience in designing, building, and optimizing scalable data platforms and pipelines across cloud and on-prem environments. This role will be responsible for end-to-end data engineering, including data ingestion, transformation, warehousing, and analytics enablement, while working closely with business stakeholders, architects, and cross-functional teams. The ideal candidate brings deep expertise in Big Data ecosystems, ETL frameworks, cloud platforms (AWS/Azure), and modern data warehousing technologies like Snowflake and Redshift.

Key Responsibilities

Data Engineering & Pipeline Development
- Design, develop, and maintain scalable ETL/ELT pipelines using tools such as Informatica, Talend, PySpark, and Python
- Build and optimize data pipelines in and out of data warehouses (Snowflake, Redshift)
- Develop batch and real-time data processing solutions using Spark, Hive, Hadoop, and related ecosystems

Data Warehousing & Modeling
- Design and implement data models including 3NF, Star, and Snowflake schemas
- Build and support enterprise data warehouses, data lakes, and ODS systems
- Optimize SQL queries and ETL workflows for performance and scalability

Cloud & Big Data Platforms
- Develop and deploy solutions on AWS (S3, EC2, EMR, RDS) and Azure (ADF, Data Lake, Databricks, Key Vault)
- Implement cloud-based data integration solutions using Informatica Cloud (IICS) and other tools
- Support data migration initiatives from on-prem to cloud platforms

Data Integration & Quality
- Perform data analysis, validation, cleansing, and reconciliation
- Ensure high data quality through data verification and anomaly detection
- Build automated data pipelines and workflows using tools like Apache NiFi and Airflow

DevOps & Automation
- Implement and manage CI/CD pipelines using tools such as Jenkins, Docker, and Ansible
- Automate workflows using Python/Shell scripting
- Ensure version control and code management using Git, SVN, and Bitbucket

Collaboration & Leadership
- Work closely with business stakeholders to gather requirements and translate them into technical solutions
- Mentor junior engineers and contribute to team development and best practices
- Participate in Agile ceremonies, sprint planning, and backlog grooming
- Provide production support and incident resolution

Experience
- 11+ years of experience in Data Engineering / ETL Development / Data Warehousing
- Proven experience in the end-to-end data lifecycle (design to production support)
- Strong experience with Agile development methodologies

Technical Skills

Programming & Querying
- Strong proficiency in Python, SQL, PL/SQL, and Shell scripting
- Experience with PySpark and SparkSQL

Cloud Platforms
- Strong experience with AWS (S3, EC2, EMR, RDS)

Education:
Employment Type: Contractor
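For candidates unfamiliar with the pattern, the extract-transform-load work described in this listing can be sketched in a minimal, standard-library-only Python example. This is purely illustrative and not part of the posting; the sample data, table name, and the drop-negative-amounts validation rule are hypothetical:

```python
import csv
import io
import sqlite3

# Extract: read raw records from CSV (an in-memory sample stands in for a file or feed).
RAW_CSV = """order_id,amount,region
1,120.50,NC
2,-5.00,SC
3,87.25,NC
"""

def extract(text):
    """Parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Validate and cleanse: drop rows with non-positive amounts,
    a stand-in for the 'data validation and cleansing' responsibility."""
    clean = []
    for row in rows:
        amount = float(row["amount"])
        if amount > 0:
            clean.append((int(row["order_id"]), amount, row["region"]))
    return clean

def load(rows, conn):
    """Write cleansed rows into a warehouse-style table (SQLite here)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # → (2, 207.75): the negative-amount row was filtered out
```

Production pipelines would swap the in-memory pieces for the tools the listing names (PySpark for transforms, Snowflake or Redshift for the load, Airflow for orchestration), but the extract/transform/load shape is the same.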