
Data Engineer - Remote

As a Data Engineer, you will play a key role in designing, building, and maintaining our data infrastructure. You will collaborate with cross-functional teams to ensure data availability, integrity, and usability across various business processes. The ideal candidate is passionate about building efficient data pipelines, optimizing database performance, and working with cloud-based technologies.

About the Role:

Location: Remote

Timings: US Shift, 6:30 PM - 3:30 AM IST

Responsibilities:

  • Data Pipeline Development:
    • Design, build, and maintain robust ETL/ELT pipelines to integrate data from multiple sources, including APIs, databases, and third-party integrations like Elation (Snowflake), Apero, and Zoom.
    • Optimize existing pipelines to improve performance and reliability.
  • Database Management:
    • Maintain and optimize PostgreSQL and Redis databases hosted on Aptible.
    • Implement best practices for database indexing, partitioning, and performance tuning.
  • Data Integration and Analytics:
    • Work with third-party integrations such as Elation’s ODBC connection to Snowflake, ensuring accurate and efficient data extraction and transformation.
    • Support analytics teams by ensuring timely and accurate delivery of data for dashboards and reporting.
  • Cloud and Infrastructure Support:
    • Leverage AWS services such as S3 and CloudFront for data storage and distribution.
    • Collaborate with the DevOps team to ensure data infrastructure aligns with application hosting and deployment processes.
  • Monitoring and Observability:
    • Utilize tools such as Datadog, Sentry, and Sumologic for monitoring ETL jobs, database performance, and data pipeline health.
    • Set up alerts and dashboards to proactively identify and resolve issues.
  • Collaboration and Documentation:
    • Work closely with development and operations teams to support new data-related requirements.
    • Document data models, pipeline designs, and operational workflows to ensure knowledge sharing and system maintainability.
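The pipeline work described above can be sketched as a minimal extract-transform-load flow in plain Python (in practice Apache Airflow or dbt would orchestrate something similar; the source fields, record shapes, and in-memory "warehouse" below are hypothetical stand-ins):

```python
from datetime import datetime

def extract(raw_rows):
    """Pull rows from a source system (stand-in for an API call or ODBC query)."""
    return [dict(row) for row in raw_rows]

def transform(rows):
    """Normalize fields and parse timestamps; drop rows missing a usable id."""
    out = []
    for row in rows:
        if not row.get("id"):
            continue  # skip records we cannot key on
        out.append({
            "id": row["id"],
            "seen_on": datetime.fromisoformat(row["ts"]).date(),
        })
    return out

def load(rows, target):
    """Append transformed rows to a target store (a list stands in for a table)."""
    target.extend(rows)
    return len(rows)

# Run the pipeline end to end on a tiny batch.
warehouse = []
batch = [
    {"id": "a1", "ts": "2025-01-27T10:00:00"},
    {"id": None, "ts": "2025-01-27T11:00:00"},  # filtered out in transform
]
loaded = load(transform(extract(batch)), warehouse)
print(loaded)  # 1 row survives the missing-id filter
```

The same three stages map onto Airflow tasks or dbt models; the value of keeping them separate is that each stage can be retried and tested in isolation.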

Qualifications:

  • Technical Skills:
• 0-2 years of experience in a similar data engineering role.
    • Strong experience with ETL/ELT pipelines, including tools like Apache Airflow, dbt, or similar.
• Proficiency in SQL, and experience optimizing Postgres and Redis databases.
    • Hands-on experience with cloud platforms (AWS preferred) and services like S3 and CloudFront.
    • Familiarity with ODBC adapters and APIs for integrating third-party systems.
  • Programming:
• Expertise in Python for data manipulation and pipeline orchestration.
    • Familiarity with Ruby is a plus but not required.
  • Monitoring and Debugging:
    • Experience with observability tools such as Datadog, Sentry, and Healthchecks.io.
    • Strong debugging and troubleshooting skills for data pipelines and integrations.
  • Soft Skills:
    • Excellent problem-solving skills and attention to detail.
    • Strong communication and collaboration abilities.
    • Eagerness to learn and adapt to new technologies.
• Prior experience in a customer-facing role.
• Good-to-have skills:
    • Experience working in a healthcare technology environment.
    • Familiarity with Snowflake or similar modern data warehouse platforms.
    • Knowledge of Docker and CI/CD pipelines (e.g., CircleCI).
    • Support experience in previous roles.

What do you get in return?

  • Contribute to impactful projects at the forefront of healthcare and life sciences innovation. 
  • Collaborate with industry-leading professionals and cutting-edge technologies. 
  • Competitive compensation, benefits, and opportunities for professional growth. 
  • Immediate start with high-visibility projects from day one. 

Timeline for Joining 

We are looking for candidates who can join at the earliest to ensure a smooth onboarding experience. If you are passionate about driving innovation in life sciences and ready to make a difference, we encourage you to apply today!

Average salary estimate

$80,000 / year (est.)
min: $70,000
max: $90,000

If an employer mentions a salary or salary range on their job, we display it as an "Employer Estimate". If a job has no salary data, Rise displays an estimate if available.

What You Should Know About Data Engineer - Remote, Reveal Health Tech

Are you ready to take your data engineering skills to the next level? As a Data Engineer at our company, you will be at the helm of designing, building, and maintaining our cutting-edge data infrastructure, all while working remotely! Your day-to-day will involve collaborating with some amazing cross-functional teams to make sure that our data is available, reliable, and super useful across various business processes. You’ll be diving into the world of data pipelines, optimizing database performance, and working with cloud-based technologies to ensure our data is top-notch. The role requires you to design and manage robust ETL/ELT pipelines that integrate data from an array of sources. You’ll have your hands full with tasks ranging from maintaining PostgreSQL and Redis databases to leveraging AWS services for data storage. We’re looking for someone with a passion for data and a keen eye for detail to monitor and debug our systems. If you have a knack for problem-solving, a background in data management, and a desire to learn and grow, we want to hear from you! With flexible remote work hours on a US schedule, you’ll have the perfect setup to contribute to innovative projects in the healthcare and life sciences space. Jump aboard, and let’s make a difference together!

Frequently Asked Questions (FAQs) for Data Engineer - Remote Role at Reveal Health Tech
What are the main responsibilities of a Data Engineer at our company?

As a Data Engineer at our company, your primary responsibilities will include designing, building, and maintaining efficient ETL/ELT pipelines to ensure data is integrated from various sources. You'll also manage PostgreSQL and Redis databases to maintain and optimize performance, collaborate with teams to provide timely and accurate data for analytics, and utilize AWS services for effective data storage solutions.

What qualifications are required for a Data Engineer position?

To qualify for the Data Engineer position at our company, candidates should have 0-2 years of experience in a similar role. Key skills include experience with ETL/ELT tools like Apache Airflow, proficiency in SQL for database optimization, familiarity with cloud platforms like AWS, and a solid understanding of data manipulation as well as debugging skills.

What tools will I be working with as a Data Engineer?

As a Data Engineer with our company, you will work with an array of tools including Apache Airflow for pipeline orchestration, Datadog and Sentry for monitoring, as well as AWS services like S3 for data storage. Familiarity with ODBC connections and integrations will also be beneficial in ensuring seamless data transactions.

Can I work remotely as a Data Engineer at your company?

Yes! The Data Engineer position is fully remote, allowing you to work from anywhere while collaborating with a diverse and dynamic team. The role requires you to follow US shift timings, which offers great flexibility for your work-life balance.

What kind of projects will I be involved in as a Data Engineer?

In the Data Engineer role, you will be involved in impactful projects that drive innovation in the healthcare and life sciences sectors. You will work on creating robust data pipelines, optimizing data infrastructure, and ensuring analytics teams have the timely data they need for reporting, all while using cutting-edge technologies.

Common Interview Questions for Data Engineer - Remote
What experience do you have with ETL/ELT processes?

In your answer, emphasize any tools you've used, such as Apache Airflow or dbt, and describe specific projects where you designed or optimized data pipelines. Illustrate your understanding of the data flow from raw sources to structured outputs.

How do you ensure data quality in your pipelines?

Talk about your approach to validating and cleansing data as it flows through your pipelines, perhaps mentioning any specific tools or techniques you use to handle errors or anomalies effectively.
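One way to make this concrete in an interview is a minimal row-level validation check; the required fields and amount bounds below are purely illustrative:

```python
def validate_row(row, required=("id", "amount"), amount_range=(0, 1_000_000)):
    """Return a list of issues found in one record; an empty list means the row is clean."""
    issues = []
    for field in required:
        if row.get(field) in (None, ""):
            issues.append(f"missing {field}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and not amount_range[0] <= amount <= amount_range[1]:
        issues.append("amount out of range")
    return issues

clean = validate_row({"id": "a1", "amount": 120.0})
dirty = validate_row({"id": "", "amount": -5})
print(clean)  # []
print(dirty)  # ['missing id', 'amount out of range']
```

In a real pipeline the same check would run per batch, with flagged rows routed to a quarantine table rather than silently dropped.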

Describe a challenging database optimization you completed.

Provide details on the database you optimized, the performance issues you identified, and the specific changes you implemented. Highlight the improvements in performance metrics as a result of your efforts.
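A useful demonstration is showing an index change the query plan. SQLite stands in for Postgres in this sketch (Postgres would use `EXPLAIN ANALYZE` and adds partial indexes and table partitioning on top of this); the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 50, f"2025-01-{i % 27 + 1:02d}") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 7"

# Without an index, the planner must scan every row.
before = conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchall()

# An index on the filter column lets the planner seek directly.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchall()

print(before[-1][-1])  # plan detail mentions a table scan
print(after[-1][-1])   # plan detail mentions idx_events_user
```

The interview payoff is quantifying the effect: the same before/after comparison with timings is what performance-metric claims should rest on.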

What is your experience with cloud technologies, particularly AWS?

Share your familiarity with AWS services relevant to data engineering, such as S3 for storage or CloudFront for distribution. Include examples of how you've leveraged these services in past projects.
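One S3 pattern worth being able to discuss is date-partitioned object keys, which keep prefix listings and downstream scans cheap. A sketch of such a key-naming helper (the `raw/` prefix and filename are hypothetical; actual uploads would go through boto3):

```python
from datetime import date

def s3_key(dataset, run_date, filename):
    """Build a Hive-style partitioned object key for a dataset and run date."""
    return (f"raw/{dataset}/year={run_date:%Y}/month={run_date:%m}/"
            f"day={run_date:%d}/{filename}")

key = s3_key("events", date(2025, 1, 27), "part-0.parquet")
print(key)  # raw/events/year=2025/month=01/day=27/part-0.parquet
```

Query engines that read from S3 can prune whole partitions from a scan when keys follow this `column=value` layout.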

How do you handle monitoring and alerting in data pipelines?

Discuss the tools you use for monitoring, such as Datadog or Sentry, and your strategy for setting up alerts to catch issues before they affect downstream processes. Give examples of how this has benefited your projects.
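A simple alert of this kind is a heartbeat freshness check: flag a job whose last success is older than its expected interval plus a grace period. Hosted tools like Healthchecks.io implement the same idea; the thresholds here are illustrative:

```python
from datetime import datetime, timedelta

def is_stale(last_success, expected_every, grace=timedelta(minutes=15), now=None):
    """True if a job has not succeeded within its expected interval plus a grace period."""
    now = now or datetime.utcnow()
    return now - last_success > expected_every + grace

now = datetime(2025, 1, 27, 12, 0)
fresh = is_stale(datetime(2025, 1, 27, 11, 30), timedelta(hours=1), now=now)
late = is_stale(datetime(2025, 1, 27, 9, 0), timedelta(hours=1), now=now)
print(fresh, late)  # False True
```

The grace period is the design choice worth defending: too short and retries page people needlessly, too long and a dead pipeline goes unnoticed.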

What strategies do you find effective for collaborating with cross-functional teams?

Mention your communication techniques, tools for collaboration like Jira or Slack, and provide examples of how you've successfully worked with developers and operations teams to meet shared goals.

Can you explain your experience with SQL and database management?

Detail your proficiency with SQL, including any complex queries or optimizations you’ve performed. Discuss your hands-on experience in managing databases like PostgreSQL or Redis, and any best practices you adhere to.
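A compact query to have ready is a window function, e.g. the latest event per user. The SQL below runs unchanged on Postgres; SQLite executes it here, and the schema is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id INTEGER, ts TEXT, action TEXT);
INSERT INTO events VALUES
  (1, '2025-01-26', 'login'), (1, '2025-01-27', 'purchase'),
  (2, '2025-01-25', 'login');
""")

# ROW_NUMBER over each user's events, newest first, keeps exactly one row per user.
latest = conn.execute("""
    SELECT user_id, ts, action FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()
print(latest)  # [(1, '2025-01-27', 'purchase'), (2, '2025-01-25', 'login')]
```

Explaining why this beats a correlated subquery (one pass over the table instead of one per user) is exactly the kind of optimization detail interviewers look for.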

What programming languages are you comfortable with for data manipulation?

Outline your expertise in programming languages relevant to data engineering, such as Python or Ruby, and provide examples of how you've used these skills to enhance data processing tasks.

Have you ever dealt with data privacy or compliance issues?

If applicable, describe a situation where you ensured data privacy in your projects, mentioning relevant frameworks or regulations like HIPAA or GDPR, and how you implemented them in your data practices.

What is your approach when learning new technologies?

Discuss your methods for keeping up with industry trends, whether through online courses, reading, or community engagement. Share any recent technologies you’ve learned and how you applied them.


We are on a mission to help reveal the transformative potential of technology in delivering healthcare. We understand that technology is an enabler, when applied in the right ways. Leverage technology in the smartest way possible to meet your pati...

Employment type: Full-time, remote
Date posted: January 27, 2025
