Sr. Data Platform SRE
Adobe
San Jose
2 days ago

Our Company

Changing the world through digital experiences is what Adobe’s all about. We give everyone from emerging artists to global brands everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity.

We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Position Summary

Data has become a strategic asset of Adobe fueling all aspects of business decision-making. Real-time personalized customer experiences are driving the future of Adobe's business.

This position will be a key role within the IDS Data & Analytics Platform team to provide solutions interacting with data and powering insights needed to realize business objectives.

As a Site Reliability Engineer on our Data Platform team, you will be responsible for setting up, running, and supporting a production platform holding petabytes of data.

You will ensure the high availability of data and enable security at all times. This requires us to build a robust observability framework to identify and handle potential issues.

You will strive to constantly improve platform performance and reduce overall business operation costs. The winning candidate for this high-impact role must demonstrate deep knowledge of setting up, operating, and managing data platform services at scale.

At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists! We're fun, hard-working engineers who are capable of giving and receiving technical mentorship.

You will also be surrounded by colleagues who are committed to helping each other grow through our outstanding Check-In approach where ongoing feedback flows freely!

Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability, or veteran status.

What you will do:

  • Study Hadoop platform workloads, understand the data relationships, and prepare them for migration to Databricks on the Azure cloud
  • Manage a large, petabyte-scale Hadoop cluster
  • Develop tools, operational improvements, and automated solutions that enable self-service configuration changes, speed up deployments, and improve monitoring and alerting solutions such as Splunk and PagerDuty
  • Assure security compliance and implement Adobe Security & Compliance solutions to lock down data used by on-prem Hadoop
  • Implement onboarding automation for tenants on Databricks on Azure
  • Troubleshoot Linux production environments and collaborate with other infrastructure teams on security, infrastructure, networking, and storage topics
  • Build CI/CD tooling using Jenkins, Spinnaker, Salt, Azure DevOps, etc.
  • Work with third-party vendors for troubleshooting and support of Databricks on Azure
What you need to succeed:

  • Experience managing and working with Apache Hadoop and related technologies like YARN, Hive, Oozie, HDFS, etc.
  • 5+ years of experience working with large data sets using open-source technologies such as Spark, Hadoop, and Kafka on one of the major cloud vendors such as AWS, Azure, or Google Cloud
  • 2+ years of experience working with cloud platforms
  • 4+ years of experience working with database platforms such as Oracle, SAP HANA, or SQL Server
  • Strong understanding of core Java, the JVM, and Hadoop (MapReduce, Hive, Pig, HDFS, HCatalog, ZooKeeper, and Oozie)
  • Security-oriented mindset, always on the lookout for ways to secure large sets of data
  • Automation skills, eager to automate anything using any scripting language or tool
  • Understanding of both the business and technical needs behind a monitoring framework
  • Experience with on-prem Hadoop migration to Azure
  • Great communication and interpersonal skills to work with internal and external customers
  • Ability to work autonomously, be results-oriented, and be a strong team leader
Preferred Qualifications:

  • Bachelor’s or master’s degree in Computer Science or an equivalent technical field, or equivalent experience
  • 5+ years of commercial software development and/or technical operations experience, including experience handling large-scale cloud-based applications through standard software engineering methodologies