
AI Engineer

Sydney, New South Wales 2000, Australia • Casual

Description

About the role

Australia has the talent to build world-class AI infrastructure. We have always exported it. We are done with that. If you have built at the highest level and you are ready to build something sovereign, something ours, this is the moment. We are pre-seed funded, with a PoC ready to start. Post-PoC you will work with our sovereign cluster directly, not watch a monitor.

The platform we are building can run cancer research, scientific computing, and the AI workloads that actually matter for this country. We are not building another SaaS tool. We are building infrastructure that changes what Australia can do.

If that is the kind of work you have been waiting for, come talk to us.

We are an early-stage Australian AI infrastructure company building a distributed compute platform that routes workloads across cloud and decentralised compute environments using an economically governed scheduling engine. We are hiring an AI Engineer to own the model serving layer at the heart of our system.

This is a casual role, roughly 25 hours per week across a 12-week engagement, paying $91 to $93 per hour. You must be Sydney-based with full Australian working rights or permanent residency.

What you will work on

• Design and build containerised AI model serving infrastructure for two co-operating inference services

• Implement an encrypted inter-model messaging protocol connecting the inference services

• Build inference endpoints that consume structured mathematical inputs and produce structured JSON decision outputs

• Integrate AI operations tooling for diagnostic log processing and inference monitoring

• Ensure all containers are hardware-agnostic and migration-ready for GPU cluster deployment post-engagement
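To give a concrete sense of the third item above, here is a minimal, illustrative sketch of the input/output contract style involved: structured mathematical inputs in, a structured JSON decision out. All names, fields, and the selection logic are assumptions for illustration only, not the actual Omega AI HPC interfaces.

```python
import json

def decide(payload: dict) -> str:
    """Consume a structured mathematical input and emit a JSON decision.

    Hypothetical payload shape (illustrative only):
      scores: per-option suitability scores
      costs:  per-option costs
      budget: maximum acceptable cost
    """
    scores = payload["scores"]
    costs = payload["costs"]
    budget = payload["budget"]

    # Placeholder logic: pick the best-scoring option that fits the budget.
    candidates = [
        (i, s) for i, (s, c) in enumerate(zip(scores, costs)) if c <= budget
    ]
    if not candidates:
        decision = {"action": "reject", "reason": "over_budget"}
    else:
        best_index, best_score = max(candidates, key=lambda t: t[1])
        decision = {"action": "accept", "target": best_index, "score": best_score}
    return json.dumps(decision)

print(decide({"scores": [0.2, 0.9, 0.5], "costs": [1.0, 3.0, 2.0], "budget": 2.5}))
```

In a real deployment this function would sit behind an inference endpoint in a hardware-agnostic container, per the bullets above.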

What we are looking for

• Hands-on experience building and deploying AI model serving infrastructure

• Strong containerisation skills (Docker, Kubernetes or equivalent) for AI workloads

• Experience with open weights model serving frameworks (vLLM, Ollama, TGI, or similar)

• Ability to design encrypted inter-service communication

• Experience instrumenting AI services for operational telemetry

• GPU instance experience (AWS g4dn, p3, or similar) highly regarded

About us

Omega AI HPC is a pre-seed Australian technology company building a platform that federates heterogeneous compute resources into a unified, economically governed service. We have a filed patent, a strong technical advisory team, and a funded 12-week PoC underway. We are targeting a $5M seed round to build Australia’s first sovereign H200 compute cluster.

We are a small, senior team. You will have real ownership and real visibility from day one.

To apply

Apply with your CV and a short note (3 to 5 sentences) on what model serving infrastructure you have built and what it ran on. No cover letter required. No recruiters.

Role Type

On-site • Temporary • Casual • Mid-level to Senior

Pay Rate

AUD 91 – 93 per hour