AWS Databricks Developer
Cognizant
City: Mississauga, ON
Contract type: Full time

In This Role, You Will
- Develop and optimize data pipelines using Spark in Scala to ensure efficient data processing and analysis.
- Implement and manage Delta Sharing to facilitate secure and efficient data exchange across platforms.
- Administer Databricks Unity Catalog to maintain organized and accessible data assets.
- Utilize Databricks CLI for streamlined management of Databricks resources and workflows.
- Design and deploy Delta Live Tables pipelines to automate data processing and ensure data quality.
- Oversee structured streaming processes to enable real-time data analytics and insights (a minimal sketch follows this list).
- Apply risk management techniques to identify and mitigate potential data-related risks.
- Integrate Apache Airflow for orchestrating complex data workflows and ensuring seamless execution.
- Manage data storage and retrieval using Amazon S3 and Amazon Redshift to support scalable data solutions (a batch-load sketch also appears after this list).
- Develop Python scripts to automate data tasks and enhance data processing capabilities.
- Utilize Databricks SQL for querying and analyzing large datasets to derive actionable insights.
- Implement Databricks Delta Lake for efficient data storage and retrieval, ensuring data integrity.
- Coordinate Databricks Workflows to streamline data operations and enhance productivity.
- Collaborate with cross-functional teams to align data solutions with business objectives and drive innovation.
- Contribute to the company's mission by delivering data-driven solutions that positively impact society.
- Stay updated with the latest industry trends and technologies to continuously improve data processes.
- Provide technical guidance and mentorship to junior developers, fostering a collaborative team environment.
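
To make the streaming and Delta Lake duties above concrete, here is a minimal sketch of a Spark Structured Streaming job in Scala that ingests JSON events and writes them to a Delta table. This is an illustration only: the S3 paths, schema fields, and partitioning choice are all hypothetical, not part of the actual role.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._

object EventIngestPipeline {
  def main(args: Array[String]): Unit = {
    // On Databricks a SparkSession is provided; getOrCreate also covers local runs.
    val spark = SparkSession.builder().appName("EventIngestPipeline").getOrCreate()

    // Hypothetical event schema; a real pipeline would derive this from the source contract.
    val schema = StructType(Seq(
      StructField("event_id", StringType),
      StructField("user_id", StringType),
      StructField("event_time", TimestampType),
      StructField("payload", StringType)
    ))

    // Stream raw JSON files from a (hypothetical) S3 landing zone.
    val events = spark.readStream
      .schema(schema)
      .json("s3://example-bucket/landing/events/")

    // Light transformation: drop malformed rows and add an ingest date for partitioning.
    val cleaned = events
      .filter(col("event_id").isNotNull)
      .withColumn("ingest_date", to_date(col("event_time")))

    // Write to a Delta table; the checkpoint location is what lets the stream
    // recover its position and avoid duplicate writes after a restart.
    cleaned.writeStream
      .format("delta")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
      .partitionBy("ingest_date")
      .start("s3://example-bucket/delta/events/")
      .awaitTermination()
  }
}
```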
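
Likewise, a batch counterpart for the Amazon S3 and Amazon Redshift bullet might look like the sketch below, which loads curated Parquet data from S3 and appends it to Redshift through Spark's generic JDBC data source. The bucket, cluster URL, table name, and credential environment variables are placeholders, and the Redshift JDBC driver is assumed to be on the cluster classpath.

```scala
import org.apache.spark.sql.SparkSession

object S3ToRedshiftBatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("S3ToRedshiftBatch").getOrCreate()

    // Read curated Parquet data from a (hypothetical) S3 bucket.
    val daily = spark.read.parquet("s3://example-bucket/curated/daily_metrics/")

    // Append to Redshift via the generic JDBC data source; URL, table,
    // and credentials below are placeholders for illustration only.
    daily.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev")
      .option("dbtable", "analytics.daily_metrics")
      .option("user", sys.env("REDSHIFT_USER"))
      .option("password", sys.env("REDSHIFT_PASSWORD"))
      .mode("append")
      .save()
  }
}
```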
The working arrangements for this role are accurate as of the date of posting. This may change based on the project you're engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.
What You Need To Have To Be Considered
- Possess strong experience with Spark in Scala and Databricks, and a proven track record in data engineering.
- Demonstrate expertise in Delta Sharing and Databricks Unity Catalog administration for efficient data management.
- Have hands-on experience with the Databricks CLI and Delta Live Tables pipelines for streamlined data operations.
- Show proficiency in structured streaming and risk management to ensure data reliability and security.
- Exhibit knowledge of Apache Airflow, Amazon S3, and Amazon Redshift for robust data solutions.
- Be skilled in Python and Databricks SQL for advanced data processing and analysis.
- Understand Databricks Delta Lake and Workflows for optimized data storage and operations (see the upsert sketch after this list).
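
As a small example of the Delta Lake knowledge listed above, the sketch below performs an idempotent upsert with Delta Lake's Scala MERGE API. The catalog, schema, table, and key column names are hypothetical.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object CustomerUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CustomerUpsert").getOrCreate()

    // Incoming batch of updates (hypothetical staging table registered in Unity Catalog).
    val updates = spark.table("main.staging.customer_updates")

    // MERGE keeps the target table consistent: update existing rows, insert new ones.
    DeltaTable.forName(spark, "main.core.customers")
      .as("t")
      .merge(updates.as("s"), "t.customer_id = s.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}
```

MERGE is a natural fit for this kind of load because re-running the job with the same staging batch leaves the target table unchanged.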
How to apply
To apply for this job, you need to sign in on our website. If you don't have an account yet, please register.