Description
Our client is a rapidly growing, automation-led service provider specializing in IT, business process outsourcing (BPO), and consulting services. With a strong focus on digital transformation, cloud solutions, and AI-driven automation, they help businesses optimize operations and enhance customer experiences. Backed by a global workforce of over 32,000 employees, our client fosters a culture of innovation, collaboration, and continuous learning, making it an exciting environment for professionals looking to advance their careers.
Committed to excellence, our client serves 31 Fortune 500 companies across industries such as financial services, healthcare, and manufacturing. Their approach is driven by the Automate Everything, Cloudify Everything, and Transform Customer Experiences strategy, ensuring they stay ahead in an evolving digital landscape.
As a company that values growth and professional development, our client offers global career opportunities, a dynamic work environment, and exposure to high-impact projects. With 54 offices worldwide and a presence in 39 delivery centers across 28 countries, employees benefit from an international network of expertise and innovation. Their commitment to a 'customer success, first and always' philosophy ensures a rewarding and forward-thinking workplace for driven professionals.
We are currently searching for a Databricks Data Engineer:
Responsibilities
- Design, develop, and maintain data pipelines using Databricks and Apache Spark.
- Integrate data from various sources into Databricks, ensuring quality and consistency.
- Optimize Spark jobs for performance and cost efficiency.
- Collaborate with data scientists, analysts, and stakeholders to understand requirements and deliver solutions.
- Create and maintain data models to support analytics and reporting.
- Monitor and troubleshoot data pipelines.
- Document processes, architectures, and workflows.
- Apply best practices in data engineering and ensure compliance with governance policies.
Requirements
- 6+ years of experience in Data Engineering, with a strong focus on Databricks and Apache Spark.
- Proficiency in Python, Scala, or Java.
- Experience with cloud platforms (AWS, Azure, or Google Cloud).
- Strong SQL skills for querying and data manipulation.
- Familiarity with data warehousing concepts and tools.
- Version control experience (Git).
- Strong communication skills for cross-functional collaboration.
Desired
- Experience with machine learning frameworks and libraries.
- Knowledge of data visualization tools.
- Familiarity with CI/CD practices for data pipelines.
Languages
- Advanced Oral English.
- Native Spanish.
Note:
- Fully remote.
If you meet these qualifications and are pursuing new challenges, start your application on our website to join an award-winning employer. Explore all our job openings | Sequoia Careers Page: https://www.sequoia-connect.com/careers/