
Head of Data Engineering


Permanent

Salary: £125-130k + Bonus + Stock options (Hybrid working)


My client, a global AI software builder, is looking for an enthusiastic, diligent and driven Head of Data Engineering to join the Data team in London.

You must be a strong, hands-on Python lead with good management skills and deep coding expertise. Experience in managing and developing an engineering team is beneficial.

The start-up specialises in building, maintaining and running bespoke software for their customers, making it faster, more accessible and less expensive for everyone to bring their ideas to life.

Last year they secured $29.5m of funding to scale their growth in Europe. They already have three offices, in LA, Delhi and London, and recently hit 250 employees globally. A key focus for them this year is to significantly grow their London operation.

About the role

They are a data-driven organisation, using Knowledge Graphs and AI powered by vast quantities of data to change the way that software and data platforms are built. They have ambitious plans to develop platforms and services that suit not only their own needs but also those of their customers.

This role reports to the Director of Data and Analytics and leads the Data Engineering function within the Data & Analytics team.

Your key responsibilities will be to:

  • Provide leadership to the data engineering domain

  • Coach, mentor and inspire members of your team

  • Lead the design and implementation of streaming, Big Data management and proprietary data systems

  • Provide thought leadership in data tooling, systems and techniques, including architectural patterns, working closely with Software Engineering and Data Science teams

  • Effectively prioritise and deliver against a clear roadmap

  • Communicate with stakeholders at a suitable level of detail to ensure requirements are well understood and expectations on dates and functionality are accurate.

Requirements

Modern technologies they are looking for candidates to be aware of, have used, or have tinkered with:

Main:

  • Pulsar (more than Kafka)

  • Vector DBs (Qdrant, neum.ai, etc.)

  • Delta.io or Iceberg

  • Pact.io / soda.io - data contracts

Other

  • Vault + Great Expectations (strong data pipelines)

  • Trino/Presto

  • GraphQL

Further skills required

  • Experience building a platform from scratch.

  • Experience in designing, building and launching highly available, distributed systems for extracting, transforming and loading datasets

  • Familiarity with DataOps methods and data quality improvement techniques

  • Proficiency in building custom data frameworks and platforms

  • Agile methodologies and continuous delivery

  • Expert knowledge of programming languages (Python etc.)

  • Proficiency with cloud services (ideally AWS)

  • Proficiency with large-scale streaming data (Kafka, Kinesis, etc.)

  • Proficiency with Big Data systems (Spark, Hadoop, Pig, etc.)

  • Proficiency with structured or custom ETL management (Airflow, Luigi, etc.)

  • Proficiency with SQL databases (Relational, Snowflake, etc.)

  • Excellent communication and leadership skills