Sr Big Data Engineer - Remote
Concentrix Catalyst
CRI Heredia
2 days ago

Job Description

This is a fully remote opportunity!

We are looking for a Senior Big Data Engineer who will use their industry knowledge, technical skills, and passion for data to work closely with executive stakeholders and our data engineering team, developing solutions that support and optimize business operations.

We are solving a variety of Big Data challenges and modernizing legacy data loaders as well as exploring the benefits and tradeoffs of other cutting-edge tools.

In this role, you will work with Apache Spark / PySpark and Hadoop platform components, develop an understanding of our data, design and develop applications, and ensure the accuracy and consistency of data.

Requirements :

  • 3+ years of experience in data analysis and/or data management in an enterprise environment within finance, operations, or analytics.
  • Experience with Apache Spark, data warehouses, and other big data tools.
  • Experience with tools like MS SQL Server, Oracle, Postgres, Hadoop, Hive, Presto, etc.
  • Experience with at least one programming language (e.g., Python, C#, Scala, Java).
  • Experience with Linux / Unix.
  • Data streaming experience with Kafka / Kinesis or similar tools.
  • Experience with storage platforms like HDFS, S3.
  • Experience with NoSQL databases like HBase / Cassandra / MongoDB.
  • Experience with cloud platforms for data processing like Cloudera Public Cloud / Azure Databricks / AWS.
  • Proactive, hardworking team player with excellent written and oral communication skills.
  • Passion for data organization, quality, and reliability.
Other Skills :

  • Strong knowledge of statistics, including hands-on experience with Python, R, SAS, MATLAB, and machine learning / AI.
  • Knowledge of C#, object-oriented programming, and GraphQL APIs.
  • Knowledge of Microsoft SQL Server and stored procedures.
  • Experience working with large datasets (1B+ records).
  • Knowledge of Tableau / BI reporting tools.
Responsibilities :

  • Build Spark data pipelines using Python / PySpark / Hadoop / Hive / Presto.
  • Develop logical data models and processes to cleanse, transform, and normalize raw data into high-quality datasets aligned with our analytical requirements.
  • Develop and maintain comprehensive controls to ensure data quality and completeness.
  • Review your coworkers’ projects and mentor other members of the team.
  • Oversee and help architect ETL pipelines and data warehouse / data mart solutions.
  • Understand and be able to troubleshoot the various platforms and technologies that run our ETL processes.
  • Identify and onboard new data sources.
  • Collaborate with data vendors and internal stakeholders to define requirements and build interfaces.
  • Troubleshoot and resolve issues with data feeds.
Location :

    CRI Heredia - Building Corporativo Cariari

    Please Note : Job cannot be performed in the state of Colorado, USA.
