Cargill provides food, agriculture, financial and industrial products and services to the world. Together with farmers, customers, governments and communities, we help people thrive by applying our insights and over 150 years of experience.
We have 160,000 employees in 70 countries who are committed to feeding the world in a responsible way, reducing environmental impact and improving the communities where we live and work.
This role will work as part of a small team responsible for the design, build and maintenance of Cargill’s High Performance Computing (HPC) platform, which serves the broader Cargill Science and R&D community.
This role will operate with significant empowerment and freedom, with input into long-term strategies as well as day-to-day engineering and operations for Enterprise-scale distributed computing Platforms.
This role will also have the opportunity to work on other distributed environments, including Cargill’s Big Data solution.
This role will be part of a high-performing Platform Engineering group that operates in modern DevOps and agile ways with a high degree of empowerment.
Furthermore, CI/CD and automation are core values across all Platform teams.
This position will partner closely with members of other Platform teams (Cloud, Data, etc.), including Delivery and Engineering Leadership.
Key to the success of this position is an engineering background in HPC and / or other distributed systems, as well as a strong sense of responsibility, accountability and ownership.
75% Distributed Systems Engineering and Operations
Design and build capabilities across the comprehensive component set of Cargill’s High Performance Computing (HPC) platform and environment
Design and build HPC specific capabilities in domains including Computational Fluid Dynamics and Molecular Dynamics in support of Cargill Scientists / R&D needs.
Deliver coaching on the HPC capabilities and toolset to Engineering community members within Cargill
Partner with Delivery Management, Product Owners and other Platform Engineering Leads to ensure shared understanding of priorities, vision / strategy and architecture of various Platforms
Lead and participate in continual analysis and planning to ensure HPC toolsets and technologies are relevant, reliable and cost effective
Ensure adherence to development and architecture standards and best practices
Meet SLAs for Operational support of the HPC Platform environment
15% Business Relationship Management and Consulting
Assess and help drive adoption of HPC technologies and methods within the team and across Cargill
Contribute ideas and / or code that improves core software infrastructure, patterns and standards more broadly in the Enterprise
Support and contribute to an operating model that supports the current multi-location, multi-continent, multi-cultural operating framework of Cargill
20% Engineering and Operations
Develop additional technical capabilities
Provide necessary technical support through all phases of testing and incident handling after deployment.
Build prototypes to prove out concepts
Bachelor’s degree in a technical or business discipline, or equivalent experience
8 years of IT and business / industry work experience developing data or software applications, focused on data and integration technologies
Experience managing Red Hat, CentOS, and / or SUSE Linux environments
Scripting experience (Bash, Perl, Python)
Ability to troubleshoot server and storage hardware issues
Ability to build applications from source and troubleshoot compiling issues
Master’s Degree in technical or business discipline
Applied experience architecting HPC systems is a strong plus
Applied experience with Cloud providers including AWS
Applied experience deploying engineering applications in a Linux HPC environment, specifically MATLAB, ANSYS, CFD, and Siemens NX, is a plus
Applied experience with compilers such as GNU, Intel, and PGI
Applied experience with batch control systems (SLURM, SGE, MOAB, Torque / Maui, PBS Pro, etc.)
Applied experience with network license management
Applied experience installing and tuning programs
Applied experience with standard services (DNS, DHCP, TFTP, PXE, etc.)
Applied experience with systems automation tools such as Chef, Ansible, or Puppet.
Applied experience with identity management technology such as Active Directory, Kerberos, and LDAP
Applied experience with version control systems (Git, Mercurial)
Applied experience with Hadoop and other big data technologies
Ability to create clear and detailed technical diagrams and documentation