Big Data Engineer
San Jose
2 days ago

Our Company

Changing the world through digital experiences is what Adobe’s all about. We give everyone from emerging artists to global brands everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity.

We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The challenge

The Adobe Data Science Team is looking for a Big Data Engineer who will help build the web’s next generation of data science products.

These products help businesses understand and manage their customers' full lifecycle, from customer analytics to marketing media optimization.

By leveraging big data machine learning techniques, our team builds intelligence products and services that address challenging business problems, including real-time online media optimization, marketing mix modeling and planning, media attribution, lead scoring, sales operations analytics, customer conversion scoring and optimization, customer churn scoring and management, and social network analysis.

We are looking for a passionate big data engineer who will build advanced data ETL / wrangling pipelines, serve predictive models, and provide data intelligence as a service.

Ideal candidates will have strong knowledge of big data techniques and experience working with cloud technologies such as AWS and Azure.

What you'll do

  • Build big data pipelines and structures that power predictive models and intelligence services on large-scale datasets with high uptime and production level quality.
  • Implement and manage large scale ETL jobs on Hadoop / Spark clusters in Amazon AWS / Microsoft Azure
  • Interface with internal data science, engineering, and data consumer teams to understand their data needs
  • Own data quality throughout all stages of acquisition and processing, including data collection, ETL / wrangling, ground truth generation and normalization

What you need to succeed

  • Master's or Bachelor's degree in Computer Science or an equivalent technical field, or equivalent experience
  • 2+ years of experience working with large data sets using open-source technologies such as Spark, Hadoop, and Kafka on a major cloud platform such as AWS, Azure, or Google Cloud
  • Strong SQL (Postgres, Hive, MySQL, etc.) and NoSQL (MongoDB, HBase, etc.) skills, including writing complex queries and performance tuning
  • Good command of Python, Spark / Scala, and big data techniques (Hive / Pig, MapReduce, Hadoop streaming, Kafka)
  • Excellent communication and relationship skills; a strong team player

Preferred Qualifications

  • Big plus: experience developing and productizing real-world AI / ML applications such as prediction, personalization, recommendation, content understanding, and NLP
  • Experience with Kafka, Jenkins, and Docker
  • Solid grasp of distributed computing principles and experience with big data technologies, including performance tuning