See yourself in our team
We are one of the largest and most advanced Data Engineering teams in the country, building state-of-the-art data solutions that power seamless experiences for millions of customers. As a Staff Engineer, you will work at the forefront of AWS Cloud, AI, and Data Warehouse technologies, building data-driven solutions that raise the standard of digital banking.
What You'll Do
- Design and deliver integrated data platforms providing a single source of truth for timely and accurate data
- Build and optimise enterprise-wide data ingestion, transformation, and integration pipelines for Cloud-based, Big Data, and Data Warehouse platforms
- Deliver outcomes for Everyday Banking Data Solutions, supporting the Group's strategy of trust, resilience, and capital generation
- Develop scalable and efficient data pipelines that support AI model training and inference
- Embed AI capabilities into CI/CD pipelines, secure coding practices, and developer tooling
- Drive high-quality outcomes to solve complex business problems and minimise risks
- Coach and mentor junior engineers, uplift software development practices, and lead cross-functional collaboration
What We're Looking For
- Proven experience in building and supporting Big Data pipelines using AWS Stack
- Strong background in designing and delivering robust data solutions
- Familiarity with the end-to-end data lifecycle, including ingestion, transformation, integration, and visualisation
- Experience in handling large-scale data processing and analytics
- Knowledge of data profiling, numerical statistics, and data quality calculations
- Ability to enforce code quality through peer programming, code reviews, and automated release management
- Experience with data governance, security, and management controls
Technical Skills
We use a broad range of tools, languages, and frameworks. Experience or exposure to the following (or equivalents) will set you up for success:
- Cloud & Data Platforms: AWS services (S3, RDS, Redshift, Glue, Lambda, EMR, Athena, SageMaker, Bedrock, Amazon Q, Kendra, Neptune), Apache Iceberg tables
- Programming & ETL: Python, SQL, ETL development (Ab Initio preferred but not mandatory)
- Databases: Oracle, Graph databases (Neo4j, Neptune)
- Big Data: Hadoop, Spark
- AI & Automation: GenAI, RAG systems (LangChain, LlamaIndex), Agentic AI systems (LangGraph, MCP, Pydantic, A2A), automation for DevSecOps
- Data Architecture: Data Vault 2.0
- DevSecOps: Understanding of secure development practices and CI/CD pipelines
- Certifications: AWS Associate or Professional certifications in Data Engineering, AI, or Machine Learning
- AI-Driven Engineering (preferred): designing and implementing AI-driven solutions within data pipelines; leveraging AI agents to automate engineering tasks; developing workflows for intelligent debugging and optimisation using AI-powered suggestions
Working with us
At CommBank, we value diversity, inclusion, and flexibility. We offer:
- Technology hub in Sydney
- Flexible work arrangements, including hybrid working, part-time options, and job share
- A supportive environment where you'll thrive, tackle challenges, and make a positive impact for customers and communities
If this sounds like you, apply now!