Attercop

AI Engineer

Build production-ready AI systems for PE-backed firms

UK (Brighton, Hybrid) / Europe (Remote)
Full-Time, Permanent
2+ Years Experience

About Attercop

Attercop is a fast-growing specialist AI company. We don't just advise; we build. We partner with dynamic mid-size technology and technology-enabled companies to develop and implement transformative AI strategies. Our work involves creating sophisticated knowledge structures for Generative AI, building cutting-edge agentic frameworks, and delivering full-stack AI solutions that turn PoCs into production-grade systems.

We're a team of AI experts, data scientists, and strategists. Now, we need a brilliant engineer to help us build the future.

Role Overview

We are seeking a highly technical AI Engineer to design, develop, and integrate AI models into software applications and business processes. You will act as the crucial link between data science research and practical application. The ideal candidate brings a blend of software engineering, data science, and MLOps expertise to deliver robust, scalable, and production-ready AI solutions.

Core Responsibilities

1. Model Engineering and Orchestration

  • Implementation: Translate architectural designs and algorithms from data scientists into functional services.
  • Service Orchestration: Develop production-ready code to orchestrate data ingestion, model inference cycles, and output delivery mechanisms.
  • Performance Optimisation: Execute rigorous optimisation for inference latency, memory footprint, and computational efficiency, particularly within high-throughput environments.
  • Testing: Collaborate with data scientists to implement comprehensive testing and validation strategies to ensure model accuracy, reliability, and robustness through error analysis and bias detection.

2. Agentic Workflows Development

  • Multi-Agent Orchestration: Design and implement systems where multiple autonomous agents collaborate using frameworks such as LangChain, LangGraph, or Microsoft Agent Framework.
  • Reasoning Architectures: Implement advanced interaction protocols and reasoning loops (e.g., ReAct) to facilitate complex problem-solving.
  • Tool and Memory Integration: Equip agents with external tools (APIs, databases) and implement both short-term and long-term retrieval-based memory.
  • Safety Frameworks: Develop and enforce methodologies for evaluating agent reliability and safety within defined operational constraints.
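To give candidates a concrete flavour of this kind of work, here is a minimal, framework-free sketch of a ReAct-style reasoning loop. The `fake_llm` stub and the `calculator` tool are purely illustrative stand-ins, not part of our actual stack; in production the stub would be a real LLM API call behind a framework such as LangChain or LangGraph.

```python
# Minimal ReAct-style agent loop (illustrative sketch).
# `fake_llm` stands in for a real model call; the tool registry and
# prompt format are hypothetical.

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def fake_llm(transcript: str) -> str:
    # A real system would call an LLM API here; this stub just
    # demonstrates the Thought -> Action -> Observation protocol.
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculator[17 * 3]"
    return "Thought: I have the result.\nFinal Answer: 51"

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            # Parse "Action: tool[input]" and run the named tool.
            action = step.split("Action:", 1)[1].strip()
            tool, arg = action.split("[", 1)
            result = TOOLS[tool.strip()](arg.rstrip("]"))
            transcript += f"Observation: {result}\n"
    raise RuntimeError("Agent did not converge")

print(react_loop("What is 17 * 3?"))  # -> 51
```

The loop alternates model reasoning with tool execution and stops at a final answer or a step cap — the same shape, scaled up, as the multi-agent systems you would build here.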

3. Data Engineering and Infrastructure

  • Pipeline Architecture: Design and maintain scalable ETL/ELT pipelines for the processing of large-scale datasets intended for training and real-time inference.
  • Feature Engineering: Perform advanced data wrangling, normalisation, and feature transformation to maximise model performance.
  • Data Integration: Establish connectivity with diverse data architectures, including SQL/NoSQL databases, data lakes, warehouses, and real-time streaming APIs.
  • Data Governance and Compliance: Ensure that data handling practices comply with privacy regulations (like GDPR or CCPA) and internal data governance policies, maintaining data security and integrity.

4. Deployment and MLOps

  • Production Deployment: Deploy models into production environments with a heavy focus on Azure cloud infrastructure, often utilising REST APIs for service delivery.
  • Containerisation & Orchestration: Leverage Docker for application encapsulation and Kubernetes for the scaling and resilience of deployed AI services.
  • ML-Specific CI/CD: Build and manage automated pipelines for the iterative training, validation, and deployment of model versions.
  • Monitoring & Observability: Implement telemetry to monitor the performance of models in production, detecting issues such as concept drift, data drift, or performance degradation, with systems for logging, alerting, and automated retraining.
  • Infrastructure Management: Provision and manage the necessary cloud or on-premise infrastructure for AI workloads, using Infrastructure as Code (IaC) tools like Terraform.
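As a taste of the monitoring side of the role, here is a deliberately simple data-drift check: a production feature window is compared against a training-time baseline with a z-score on the mean. The threshold and the alerting hook are illustrative; real deployments would use richer statistics and wire the result into alerting and retraining pipelines.

```python
# Illustrative data-drift check: compare a production feature window
# against a training baseline using a z-score on the window mean.
# The threshold is a hypothetical default, not a recommendation.
import statistics

def drift_alert(baseline: list[float], window: list[float],
                z_threshold: float = 3.0) -> bool:
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(window) - mu) / (sigma or 1e-9)
    return z > z_threshold  # True -> raise an alert / trigger retraining

baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
print(drift_alert(baseline, [1.0, 0.98, 1.02]))  # False: within range
print(drift_alert(baseline, [5.0, 5.2, 4.9]))    # True: drifted
```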

Candidate Requirements

Professional Experience

Minimum of 2 years of professional experience as an AI Engineer, or as a Software Engineer with significant exposure to AI/ML projects.

Technical Proficiencies

Advanced Python & Backend Development

  • Expert-level Python skills, including a deep understanding of modern asynchronous programming (asyncio), type hinting, and data validation using Pydantic.
  • Proven experience with web frameworks such as FastAPI, Flask, or Django for building high-performance APIs.
  • Strong grasp of software design patterns, Clean Architecture, and writing testable code (unit testing).
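The kind of asynchronous, type-hinted Python we have in mind looks like this sketch: fanning out concurrent inference calls with `asyncio`. The `score` coroutine is a hypothetical stand-in for an awaitable call to a model-serving endpoint.

```python
# Sketch: concurrent inference fan-out with asyncio and type hints.
# `score` simulates a model call; in production it would be an
# awaitable HTTP request to a model-serving endpoint.
import asyncio

async def score(text: str) -> float:
    await asyncio.sleep(0)   # stand-in for network / inference latency
    return len(text) / 100   # dummy "model score"

async def score_batch(texts: list[str]) -> dict[str, float]:
    # gather() runs the awaitables concurrently rather than serially.
    results = await asyncio.gather(*(score(t) for t in texts))
    return dict(zip(texts, results))

scores = asyncio.run(score_batch(["hello", "world!"]))
print(scores)  # {'hello': 0.05, 'world!': 0.06}
```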

AI & Agentic Frameworks (desirable)

  • Familiarity with agentic frameworks such as LangChain, LangGraph, or Microsoft Agent Framework.
  • Experience with LLM API orchestration.
  • Familiarity with Prompt Engineering techniques and RAG (Retrieval-Augmented Generation) architectures.
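For candidates less familiar with RAG, the retrieval step boils down to ranking document chunks by similarity to the query and prepending the best matches to the LLM prompt. The sketch below uses toy bag-of-words "embeddings" and cosine similarity purely for illustration; a real system would use an embedding model and a vector database.

```python
# Illustrative RAG retrieval step with toy bag-of-words "embeddings"
# and cosine similarity. A real system would use a proper embedding
# model and a vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]  # these chunks would be prepended to the prompt

docs = ["invoices are due in 30 days", "the office cat is called Milo"]
print(retrieve("when are invoices due?", docs))
```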

Cloud & Infrastructure

  • Hands-on experience with major cloud providers (Azure preferred, or AWS/GCP), specifically with container orchestration (e.g., AKS, EKS), managed container services, and serverless functions.
  • Hands-on experience with Docker containerisation and Kubernetes orchestration.
  • Hands-on experience with Infrastructure as Code (IaC) using Terraform.

Data & Storage Systems

  • Demonstrated knowledge in designing and maintaining relational databases (PostgreSQL preferred). Experience with NoSQL is a plus.
  • Experience with Vector Databases for semantic search and retrieval is a plus.

DevOps & MLOps

  • Demonstrated ability to build and maintain CI/CD pipelines, preferably for ML (e.g., GitHub Actions or Azure DevOps).
  • Familiarity with application monitoring, logging, and observability practices for deployed services. Experience with ML-specific monitoring (e.g., drift detection) is a plus.

Strategic and Collaborative Competencies

  • Cross-Functional Collaboration: Ability to align with data scientists, product managers, and business stakeholders to define technical requirements and project scope.
  • Technical Communication: Capability to articulate complex technical behaviours, model limitations, and performance metrics to both technical peers and non-technical leadership.
  • Documentation: Commitment to maintaining clear, thorough documentation for architectures, data pipelines, and deployment protocols.

Important Information

English Language Requirement

All roles require excellent English. We work entirely in English for meetings, client calls, and business communications. This is non-negotiable.

No Recruitment Agencies

We do not work with recruitment agencies. Please do not contact us if you're representing candidates. We hire directly.

Ready to Apply?

Send us your CV and a brief note about what you do and why you're interested in joining Attercop.