About the Role
Alvys is revolutionizing the transportation logistics industry with a multi-tenant SaaS platform that streamlines freight operations. As a Senior Data Engineer, you will play a central role in designing and implementing our data architecture for 2025 and beyond. You’ll work closely with engineering, product, and leadership teams to build a modern, scalable data platform that supports both real-time and offline analytical workloads as well as machine learning initiatives.
This is a highly visible role that will directly shape the future of our data ecosystem, enabling advanced analytics and ML-driven insights across the organization.
Industry Insight
Transportation logistics, a complex and fragmented domain, is ripe for a technological transformation. You'll be at the forefront of automating and standardizing a sector that moves trillions of dollars' worth of goods annually, predominantly by truck, yet still lacks modern tools and solutions.
About Alvys
Alvys is on a mission to revolutionize transportation logistics. Combining hands-on industry experience with a world-class technical vision, we're building a multi-tenant SaaS platform that's becoming an essential tool for transportation companies.
Our Principles
- Engineering Excellence: We're committed to a principled approach, blending the best of practical and theoretical techniques to ensure superior code quality and architecture.
- End-to-End Ownership: Our collaborative environment ensures that if you build it, you run it.
- Blameless Culture: We focus on ownership and learning from mistakes in a supportive, finger-pointing-free environment.
- Core Values: Trust, transparency, and fairness are not just our company values—they're also the solution to the industry's underlying problems.
Tech Stack
Our cloud-native environment leverages Azure, .NET/C#, Azure Cosmos DB, Azure Cognitive Search, and a suite of other Azure services. Our front end utilizes JavaScript, TypeScript, Angular, Dart, and Flutter. Monitoring and alerting are handled by Azure Monitor and Application Insights, alongside numerous integrations with external services.
Responsibilities
- Data Architecture & Strategy: Define the vision and roadmap for Alvys’ data platform, including storage, processing, and integration strategies.
- Data Pipelines: Design and build reliable, performant pipelines for both batch and real-time (streaming) data ingestion, leveraging tools like Azure Data Factory, Kafka, or similar technologies.
- ML Integration: Collaborate with data scientists and ML engineers to operationalize machine learning workflows and ensure seamless data availability.
- Analytics Enablement: Develop data models and transformations for both online (real-time) and offline analytical workloads, facilitating business intelligence and predictive analytics.
- Performance & Optimization: Continuously monitor and tune data systems (SQL, NoSQL, data lakes, data warehouses) for scalability, reliability, and cost efficiency.
- Collaboration & Governance: Partner with cross-functional teams to implement best practices around data security, quality, governance, and compliance.
- Thought Leadership: Serve as the organization’s go-to expert for emerging data engineering trends, providing strategic direction and mentorship to junior team members.
Minimum Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of professional experience in Data Engineering or a related role, with a strong focus on designing and implementing data architecture.
- Proven track record building, scaling, and maintaining data pipelines and ETL/ELT processes in a production environment.
- Proficiency with SQL and one or more programming languages (Python, Scala, etc.).
- Hands-on experience with cloud-based data platforms (Azure, AWS, or GCP) and related services (e.g., Azure Data Lake, Azure Synapse Analytics, AWS Redshift, Google BigQuery).
- Understanding of big data frameworks such as Apache Spark or Hadoop.
- Experience working with real-time streaming solutions (Kafka, Kinesis, or Azure Event Hubs).
- Familiarity with machine learning pipelines, including data preparation, feature engineering, and model deployment.
Preferred Qualifications
- Experience with Kubernetes and container orchestration for data workloads.
- Knowledge of ML platforms (e.g., Vertex AI, Azure ML) and MLOps best practices.
- Exposure to CI/CD pipelines for automated data pipeline deployments.
- Strong grasp of data governance, including compliance and security requirements.
- Familiarity with analytics tools, such as Tableau, Power BI, or Looker.
- Background in logistics or transportation systems is a plus.