Big Data Engineer C2C jobs with GCP and Spark || Phoenix, AZ; NY; CA

Big Data Engineer jobs

Big Data Engineer with GCP and Spark experience

Phoenix, AZ; NY; CA (Day One Onsite)

Job Title: Big Data Engineer (GCP and Spark)


Job Summary:

We’re looking for an experienced Big Data Engineer with expertise in Google Cloud Platform (GCP) and Apache Spark to join our team. The successful candidate will design, develop, and maintain large-scale data pipelines, ensuring efficient data processing and analysis.

Key Responsibilities:

– Develop and maintain scalable ETL processes using PySpark on GCP
– Optimize and troubleshoot data processing jobs for performance and reliability
– Implement data transformations and create data pipelines to support analytical needs
– Collaborate with data scientists, analysts, and stakeholders to understand data requirements
– Ensure data quality and integrity across all data processes
– Monitor and maintain cloud infrastructure related to data processing on GCP
– Document technical solutions and provide support for data-related issues

 

Required Skills:

– 3+ years of experience in data engineering with PySpark and GCP
– Strong understanding of big data technologies and cloud services
– Experience with data modeling, ETL processes, and data warehousing concepts
– Excellent problem-solving skills and attention to detail
– Proficiency in:
    – Programming Languages: Python, SQL
    – Big Data Technologies: Apache Spark, GCP (BigQuery, Dataflow, Cloud Storage, Dataproc)
    – Data Engineering Tools: ETL, Data Modeling

Preferred Qualifications:

– Experience with agile development methodologies
– Familiarity with the Hadoop ecosystem and real-time data streaming platforms (e.g., Kafka)
– Knowledge of data science model development and production deployment
– Experience with other cloud platforms (AWS, Azure)

 

Thanks & Regards,

Mohammad Faisal

 

To apply for this job email your details to md.faisal@signinsol.com