
Data Engineer

Engineering Team • Brisbane, Queensland 4000, Australia • Full-time

Description

The Role

We are seeking a skilled Data Engineer to join our data team and help design, build, and maintain our data infrastructure and analytics pipelines. You’ll work with modern data tools including Databricks, Amazon Quick Suite, and DBT to deliver high-quality data solutions that drive business insights and decision-making. 

Open to Brisbane and Sunshine Coast candidates.

About FlowLogic

At FlowLogic, we’re not just building software; we’re shaping the future of care through innovation. As a fast-growing company in the technology and services space, we build a leading cloud-based solution designed to transform how NDIS providers deliver support across ANZ. Our platform empowers disability organisations to streamline operations, boost productivity, and stay fully compliant, freeing them to focus on delivering life-changing services.

Behind our success is a close-knit, high-performing team with a strong culture of collaboration, curiosity, and continuous growth. You’ll work directly with senior leaders, have a clear impact from day one, and enjoy the support you need to grow your career in a business that’s scaling fast and going places.

Your Responsibilities

Reporting directly to the Head of Engineering, you will:

  • Design, build, and maintain scalable data pipelines using Databricks and DBT for data transformation and modelling.
  • Develop and optimise data warehouse architecture, ensuring data quality, reliability, and performance.
  • Create and maintain dashboards and reports using Amazon Quick Suite to deliver actionable insights to stakeholders.
  • Design and implement dimensional models, star schemas, and other data structures using DBT best practices.
  • Build efficient ETL/ELT workflows to integrate data from multiple sources into our data lake and warehouse.
  • Implement data validation, testing, and monitoring frameworks to ensure data accuracy and consistency.
  • Optimise queries, data models, and pipeline performance in Databricks and related systems.
  • Partner with data analysts, data scientists, and business teams to understand requirements and deliver data solutions.
  • Create and maintain comprehensive documentation for data models, pipelines, and processes.
  • Establish and promote data engineering best practices, coding standards, and version control workflows.
  • Debug and resolve data pipeline issues and performance bottlenecks.
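To give a flavour of the data-quality work described above, here is a deliberately minimal sketch in plain Python. All names are hypothetical and chosen for illustration only; in this role the equivalent checks would typically be expressed as DBT tests and run inside Databricks pipelines:

```python
# Illustrative sketch of a basic data-validation step (hypothetical names;
# real pipelines in this role would use DBT tests on Databricks instead).

def validate_records(records, required_fields):
    """Split rows into those passing basic quality checks and those rejected."""
    valid, rejected = [], []
    for row in records:
        # A row passes only if every required field is present and non-null.
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            rejected.append({"row": row, "missing": missing})
        else:
            valid.append(row)
    return valid, rejected

# Example input: two participant records, one with a missing name.
participants = [
    {"id": 1, "name": "Alice", "plan_start": "2024-01-01"},
    {"id": 2, "name": None, "plan_start": "2024-02-01"},
]
valid, rejected = validate_records(participants, ["id", "name", "plan_start"])
```

The same "pass rows on, quarantine failures with a reason" pattern underlies most of the validation and monitoring frameworks listed above, whatever tool implements it.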

Essential Criteria

  • 2+ years of hands-on experience with Databricks, including Spark, Delta Lake, and notebook development.
  • Strong proficiency in DBT for data transformation, including models, tests, documentation, and deployment.
  • Experience building interactive dashboards, visualisations, and reports using Amazon Quick Suite.
  • Solid experience with AWS services (S3, Athena, Redshift, Glue, Lambda, IAM).
  • Advanced SQL skills for complex queries, optimisation, and database design.
  • Proficiency in Python and/or Scala for data processing and automation.
  • Strong understanding of data warehouse concepts, dimensional modelling, and star/snowflake schemas.
  • Experience with Git and modern software development practices (CI/CD, code reviews, testing).
  • Familiarity with distributed computing concepts and large-scale data processing.
  • Experience working with various data formats (Parquet, JSON, CSV, Avro).

Advantageous

  • Experience with other BI tools (Tableau, Power BI, Looker).
  • Knowledge of Apache Airflow or other workflow orchestration tools.
  • Experience with data governance and data cataloguing tools.
  • Familiarity with machine learning workflows and MLOps.
  • Understanding of data security, compliance, and privacy regulations (GDPR, CCPA).
  • Experience with streaming data platforms (Kafka, Kinesis).
  • AWS certifications (Data Analytics, Solutions Architect, etc.).
  • Experience with Snowflake or other cloud data warehouses.
  • Knowledge of data quality tools (Great Expectations, Monte Carlo).

What We Offer

  • A supportive, inclusive, and engaged team environment.
  • The opportunity to do meaningful work at the intersection of care and technology.
  • A competitive salary package, with potential for growth and development.

How to apply

We appreciate all candidates who have taken the time to apply, but only shortlisted candidates will be contacted.

We value diversity, and we believe that our uniqueness makes us stronger. We encourage applicants from diverse and/or minority backgrounds to express their interest. We believe in creating a working environment where individuals are able to bring their full selves to work and are valued for their unique contributions.

Role Type

Permanent • Full-time • Mid-level to Senior