Data Engineer

Overview
About us
Are you passionate about FinTech and ready to make a tangible impact in a dynamic company where your decisions shape the future? Altery could be the next chapter in your professional journey!
We are seeking an experienced Data Engineer to build and optimize scalable data pipelines across financial, compliance, and business systems. The role focuses on integrating diverse data sources, ensuring data quality and performance, and evolving our cloud-based analytics infrastructure using tools such as DLT, SQLMesh, and Postgres.
Responsibilities
Drive end-to-end development of data engineering initiatives across multiple systems including financial services, CRM platforms, and analytics tools, leveraging technologies such as DLT and SQLMesh
Oversee the implementation and optimization of scalable ETL pipelines with robust error handling, incremental loading, and schema evolution for destinations like PostgreSQL (a minimal pipeline sketch follows this list)
Manage cross-functional collaboration with product, engineering, compliance, and business intelligence teams to ensure timely and accurate data delivery
Develop and maintain data infrastructure roadmaps, resource plans, and risk mitigation strategies aligned with business and regulatory priorities
Lead the strategic evolution of our data stack to support real-time processing, advanced analytics, and compliance reporting in the financial services domain
Monitor and report on key data pipeline metrics including latency, throughput, and data quality benchmarks
Create and maintain detailed technical documentation, including pipeline designs, data contracts, and operational procedures
Coordinate with data governance and security teams to ensure adherence to GDPR, KYC/AML, and PCI-DSS standards
Collaborate with DevOps to maintain CI/CD pipelines and containerized environments, ensuring smooth deployment and system stability
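By way of illustration, an incremental load of the kind described above might look like the following dlt sketch. The endpoint, table, dataset, and cursor field are hypothetical; it assumes a REST source exposing an updated_at timestamp and a PostgreSQL destination configured through dlt's standard credentials.

import dlt
import requests

# Hypothetical REST source: only rows changed since the last run are fetched,
# using dlt's built-in incremental cursor; rows are merged into PostgreSQL by primary key.
@dlt.resource(name="transactions", write_disposition="merge", primary_key="id")
def transactions(
    updated_at=dlt.sources.incremental("updated_at", initial_value="2024-01-01T00:00:00Z"),
):
    response = requests.get(
        "https://api.example.com/transactions",           # placeholder endpoint
        params={"updated_since": updated_at.last_value},
        timeout=30,
    )
    response.raise_for_status()                           # basic error handling
    yield response.json()["data"]

# Pipeline, destination, and dataset names are assumptions; dlt evolves the
# target schema in PostgreSQL as new fields appear in the payload.
pipeline = dlt.pipeline(
    pipeline_name="finance_ingest",
    destination="postgres",
    dataset_name="raw_finance",
)

if __name__ == "__main__":
    print(pipeline.run(transactions))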
What You’ll Bring To Us
5+ years of hands-on data engineering experience designing and maintaining production-grade data pipelines
Advanced SQL for data analysis (window functions, performance tuning)
Proficient in ETL/ELT frameworks (e.g., DLT, Airflow, dbt/SQLMesh, Dagster) for ingesting, transforming, and loading data
Intermediate programming skills in Python (pandas, pyarrow, REST API integrations) and bash for scheduling and orchestration
Experience with data warehouses (PostgreSQL), including partitioning, clustering, and indexing
Practical experience with SQLMesh for data processing and transformation
Strong command of data validation and quality tooling, implementing checks, alerts, and retry logic for high-volume data flows (see the sketch after this list)
Hands-on with containerized deployments (Docker, Kubernetes) and CI/CD pipelines (GitLab/GitHub Actions) to automate testing and rollout of payment-data services
Skilled in real-time streaming and batch architectures, using Kafka or Pub/Sub
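As a rough sketch of the validation and retry expectations listed above, a minimal Python example might look like the following; the function names, thresholds, and counts are all hypothetical stand-ins for a real load step and alerting channel.

import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline_checks")

def with_retries(fn, attempts=3, base_delay=2.0):
    """Run fn(), retrying with exponential backoff on any exception."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            if attempt == attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            log.warning("attempt %d failed (%s); retrying in %.1fs", attempt, exc, delay)
            time.sleep(delay)

def check_row_counts(source_count, loaded_count, tolerance=0.01):
    """Data-quality check: loaded rows must stay within tolerance of the source count."""
    if source_count == 0 or abs(source_count - loaded_count) / source_count > tolerance:
        # In production this would trigger an alert (Slack, PagerDuty, etc.); here we raise.
        raise ValueError(f"row count drift: source={source_count}, loaded={loaded_count}")

if __name__ == "__main__":
    # load_batch is hypothetical; the lambda stands in for a real extraction/load step.
    loaded = with_retries(lambda: 9_980)
    check_row_counts(source_count=10_000, loaded_count=loaded)
    log.info("batch passed validation")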
What we offer
Team and our Product: We are team players, passionate about our product, and clear about what we aim to achieve and the impact it will make.
Growth Opportunities: You can influence and shape our story while advancing your career.
Flexibility: We always listen to our people and can be flexible with arrangements.
Hybrid or Remote Working: We don’t expect you to be in the office every day.
Local Market Perks: Enjoy insurance coverage, local perks, and beautiful offices.
Why join us
We may not be perfect, but our strength lies in our resilience. Facing challenges with expertise, a positive attitude, and a supportive environment where everyone relies on one another gives us confidence in what we do. We empower our people to make decisions, explore, and experiment - micromanagement isn't our style. We reward those who take on additional responsibilities and go the extra mile.
We are proud of how diverse and unique we are. We thrive on diverse views, love learning from one another, and believe that our differences fuel our curiosity.