Job Description:
• Develop, experiment with, and test distributed HPC/AI workflows that enable interactive processing of large-scale telemetry datasets (terabytes to petabytes).
• Build solutions by composing existing open-source tools and applying distributed and parallel programming approaches to scale with data and simulation size.
• Actively participate in a collaborative, consensus-driven design process.
• Work in an Agile development environment.
• Create documentation, collaborate with users, and present progress in writing, slides, and verbally.
Requirements:
• 6-8 years of industry or comparable experience in software engineering.
• Proficiency in one or more programming languages such as C, C++, or Python.
• Exposure to high-performance computing (HPC) or scientific computing.
• Experience designing, building, or operating distributed large-scale systems in production environments.
• Experience with software engineering workflows, including version control, code reviews, automated testing, and CI/CD pipelines.
• Ability to convey technical concepts clearly and effectively through documentation, presentations, and design discussions.
• Strong analytical and problem-solving skills.
Benefits:
• Health & Wellbeing
• Personal & Professional Development
• Unconditional Inclusion
Apply Now