Build data infrastructure for AI products at enterprise scale
Build the data backbone for AI applications used by millions. You'll design pipelines that ingest messy enterprise data, transform it into clean datasets, and serve it through performant APIs. Work spans the full stack: database schema design, ETL pipelines, API development, query optimization. You'll make architectural decisions that directly impact product performance and reliability.
Morning: debug a slow query in a client dashboard. Mid-morning: design review for a new pipeline architecture. Afternoon: implement an incremental ETL job for a real-time feature. Late afternoon: code review and a pair programming session.
Work on systems that matter. See your code run in production at Fortune 500 companies. Learn from senior engineers who've built infrastructure at scale. Ship fast without bureaucracy.
This role requires completing a technical challenge as part of the application process: a medium-difficulty exercise, Analytics Dashboard Backend.
Primary: Python, PostgreSQL, Docker, AWS. We adapt to client constraints but generally push for modern tooling. You'll have input on technology decisions.
Expect more client exposure than in typical engineering roles: you'll join architecture discussions and occasionally present technical designs to client teams.
Required. You'll complete a medium-difficulty data pipeline challenge (6-8 hours). We evaluate code quality, system design, and performance considerations.
Submit your application and we'll be in touch within one week if there's a potential fit.
Apply for this Role