Casechek is on a mission to innovate the implant supply chain and bring greater transparency to the cost of patient care. From procurement to payment, Casechek automates workflows for vendor-supported surgical procedures. Our emerging Bill Only solution provides a comprehensive system of record for hospitals and health systems to manage the complexity of surgical cases involving implantable medical devices, expose hidden expenses, and surface revenue opportunities.
As Casechek’s Data Analytics Engineer, you will be responsible for building and optimizing data pipelines, developing robust data models, and enhancing our analytics capabilities. You will collaborate with cross-functional teams to ensure data accessibility, governance, and quality, leveraging modern cloud and database technologies. This is an exciting opportunity for a data-driven professional who is passionate about solving complex problems and advancing data strategy in a healthcare tech environment.
Casechek is a highly collaborative environment, but our team is also self-motivated and works independently. We strongly believe that great ideas are not created in isolation! Our team interacts with our healthcare customers to understand their needs and ultimately deliver the best user experience possible.
About Us
- Casechek is based out of Chicago’s Fulton Market, but we’re a hybrid team with both remote and in-person members.
- We are highly motivated and passionate about solving healthcare’s messiest problems with new technologies.
- We value diversity, curiosity, and a passion for learning. The team holds a number of advanced degrees and even includes a few professional musicians.
- Innovating the implant supply chain is a marathon, not a sprint – we support each other in long-term growth and value developing transferable skills.
The Perks
- Competitive salary and annual bonus
- Opportunity for long-term incentives (LTI)
- 401k match
- Work from home flexibility
- Full medical and dental benefits
- Brand-new pro-level laptop
- Annual education budget
About You
- You are a data-driven professional with a strong foundation in data architecture, ETL/ELT processes, and business intelligence tools.
- You excel at designing and implementing scalable data solutions that drive business insights and operational efficiency.
- You thrive in a collaborative environment, partnering with stakeholders to gather requirements and develop analytical models.
- You have a passion for writing clean, efficient, and supportable code, with strong proficiency in Python or Scala.
- You are experienced in working with modern cloud platforms, database systems, and data visualization tools.
- You prioritize data quality, governance, and accessibility, ensuring reliability and consistency in data pipelines.
- You are eager to contribute to a growing data team, mentoring others while advancing your own technical expertise.
Responsibilities
- Develop analytical data models that provide actionable insights across a range of business functions
- Partner with business teams to gather data requirements and design scalable solutions
- Implement end-to-end analytics solutions using a modern technical stack
- Facilitate personal and team growth: work with leadership to establish goals for your own development, and mentor and support other engineers to help scale out the organization
- Advocate for and advance modern, agile development practices and evangelize great engineering culture in an environment that values ambitious but sustainable growth; define and promote best practices and data standards, and adhere to data governance processes
- Build data architecture that enables reporting, analytics, data science, and data management while improving data accessibility, efficiency, governance, processing, and quality
Requirements
- Bachelor’s or master’s degree in a related field
- 4+ years of experience in data/software engineering
- 3+ years of experience with more than one database system, such as Azure Synapse, BigQuery, Oracle, SQL Server, MySQL, Snowflake, Postgres, Redshift, or Databricks
- 2+ years of experience designing and building complex ETL/ELT pipelines, data models, and schemas
- Advanced SQL expertise and advocacy for data quality
- Experience with data warehousing concepts and object-oriented programming languages (Java, Python, Ruby, etc.)
- Strong proficiency in Python or Scala and a passion for writing clean, supportable code
- High proficiency building BI solutions using at least one data visualization tool (Tableau, Power BI, Looker, Explo)
- Experience with at least one major cloud provider (AWS, Azure, GCP, etc.)
- Experience working with code collaboration and repository management tools (Bitbucket, GitHub, GitLab, etc.)
- Experience working with data transfer technologies (APIs, Pub/Sub, etc.)
- Deep experience implementing a variety of data warehousing concepts and methodologies, including snapshotting, incremental data loads, slowly changing dimensions (SCDs), and star schemas
- Experience with Databricks is preferred
- Familiarity with healthcare data is a plus
- Analytical aptitude for thinking through business problems and developing metrics