Data Engineer

Gigster

United States only

Employee count: 51-200

Do you want to work on cutting-edge projects with the world’s best IT engineers? Do you wish you could control which projects to work on and choose your own pay rate? Are you interested in the future of work and how the cloud will form teams? If so, the Gigster Talent Network is for you.

Our clients rely on our Network in two main areas: Software Development and Cloud Services. In some cases they need help building great new products; in others, they want our expertise in migrating, maintaining, and optimizing their cloud solutions.

At Gigster, whether working with entrepreneurs to realize ‘the next great vision’ or with Fortune 500 companies to deliver a big product launch, we build really cool enterprise software on cutting-edge technology.

The Role

We are seeking an experienced Data Engineer with deep expertise in data transformation at scale, particularly in integrating and processing data from third-party public APIs. This role is critical to enhancing and maintaining data pipelines that feed into Natural Language Processing (NLP) models.

What you’ll do:

  • Design, build, and optimize scalable ETL/ELT data pipelines using Apache Spark, Apache Kafka, and orchestration tools such as Prefect or Airflow

  • Integrate external data sources and public APIs with internal data systems

  • Work with large-scale datasets to support NLP model training and inference

  • Analyze existing pipelines and recommend enhancements for performance, reliability, and scalability

  • Collaborate with cross-functional teams, including data scientists and ML engineers

  • Own the end-to-end engineering process—from planning and technical design to implementation

  • Regularly report progress and outcomes to client stakeholders

What we’re looking for:

  • Proficiency in Python and experience with data transformation and data engineering best practices

  • Strong experience with Apache Spark, Apache Kafka, and Google Cloud Platform (GCP)

  • Hands-on experience with workflow orchestration tools (e.g., Prefect, Airflow)

  • Demonstrated experience working with large datasets and real-time data processing

  • Experience building and maintaining ETL/ELT pipelines for analytical or machine learning use cases

  • Self-motivated, with excellent communication and project ownership skills

Preferred Qualifications:

  • Familiarity with financial services data or regulated data environments

  • Experience with Snowflake or Google BigQuery

  • Exposure to NLP workflows and data requirements for machine learning models

Logistics:

  • This is a part-time, short-term contract of 4 to 6 weeks
  • Preferred location: Remote US

About the job

Apply before:

Jul 16, 2025

Posted on:

May 18, 2025

Job type:

Full Time

Experience level:

Mid-level

Location requirements:

United States only

Skills:

Data Engineering, Data Transformation Tools, Data Pipelines, ETL, Apache Spark, Apache Kafka, Prefect, Airflow, Python, Google Cloud Platform (GCP), Machine Learning, Natural Language Processing (NLP), Snowflake

About the company

Gigster

Company size:

51-200

Founded in:

2003

Chief executive officer:

Andy Tryba

Markets:

Mobile, Internet, Marketplace, App, Software Development, Web Development, Freelancing
gigster.com