Company Summary
DISH, an EchoStar company, has been reimagining the future of connectivity for more than 40 years. Our business reach spans satellite television service, live-streaming and on-demand programming, smart home installation services, and mobile plans and products; now we are building America's First Smart Network™.
Today, our brands include EchoStar, Hughes, DISH TV, Sling TV, Boost Mobile and Gen Mobile.
Department Summary
Our Technology teams challenge the status quo and reimagine capabilities across industries. Whether through research and development, technology innovation or solution engineering, our people play vital roles in connecting consumers with the products and platforms of tomorrow.
Job Duties and Responsibilities
- Develop enterprise-ready, secure and compliant data-oriented solutions leveraging Data Warehouse and Big Data frameworks
- Optimize data engineering pipelines
- Review architectural designs to ensure consistency and alignment with the defined target architecture and adherence to established architecture standards
- Support data and cloud transformation initiatives
- Contribute to our cloud strategy based on prior experience
- Stay current with the latest technologies in a rapidly evolving marketplace
- Work independently with stakeholders across the organization to deliver both point and strategic solutions
- Provide sufficient working-hours overlap with the onshore team and business stakeholders to gather requirements and own feature delivery
- Act as a Product Owner, owning end-to-end delivery of data pipelines
- Mentor junior team members
Skills, Experience and Requirements
- Engineering degree and 8+ years of experience as a data engineer, including at least 2 years in the wireless and/or telecom network space
- Experience with Python, Scala, and SQL
- Experience in both functional programming and Spark programming, processing terabytes of data; specifically, experience writing data engineering jobs for large-scale data integration in AWS (a minimal sketch follows this list)
- Experience in logical and physical table design in a Big Data environment to suit processing frameworks
- Experience writing Spark streaming jobs (producers/consumers) using Apache Kafka or AWS Kinesis (see the streaming sketch after this list)
- Knowledge of a variety of data platforms such as Redshift, S3, MySQL/PostgreSQL, and DynamoDB
- Experience with Airflow and AWS services such as EMR, Glue, S3, Athena, DynamoDB, IAM, Lambda, and CloudWatch
- Create and maintain automated ETL processes with a special focus on data flow, error recovery, exception handling, and reporting (the Airflow sketch after this list illustrates retries and failure reporting)
- Gather and understand data requirements, and work with the team to achieve high-quality data ingestion and to build systems that can process and transform the data
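
For illustration, a minimal sketch of the kind of AWS-based Spark integration job referenced above, assuming PySpark; the bucket names, paths, and columns are hypothetical placeholders:

```python
# Minimal PySpark batch job: large-scale data integration in AWS.
# Bucket names, paths, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usage-integration").getOrCreate()

# Read raw CSV events from S3 (path is illustrative)
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/usage/")

# Basic cleansing and enrichment
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)

# Write partitioned Parquet for downstream query engines (e.g. Athena)
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/usage/"))
```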
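A minimal Spark Structured Streaming consumer in the same vein, assuming PySpark with the spark-sql-kafka connector on the classpath; the broker address, topic, and S3 paths are hypothetical:

```python
# Minimal Spark Structured Streaming consumer reading from Kafka.
# Broker address, topic, and checkpoint/output paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

# Subscribe to a Kafka topic (requires the spark-sql-kafka package)
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "usage-events")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to string for parsing
parsed = events.select(F.col("value").cast("string").alias("json_payload"))

# Write to S3 with checkpointing so the job can recover from failures
query = (
    parsed.writeStream.format("parquet")
          .option("path", "s3://example-stream-bucket/usage/")
          .option("checkpointLocation", "s3://example-stream-bucket/_checkpoints/usage/")
          .start()
)
query.awaitTermination()
```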
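And a minimal Airflow DAG sketch showing the retry and failure-reporting wiring an automated ETL process typically needs, assuming Airflow 2.4+; the DAG id, task, and callback are hypothetical:

```python
# Minimal Airflow DAG with retries and a failure callback (Airflow 2.4+).
# The DAG id, task names, and alerting function are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # Placeholder: in practice this might post to Slack or page on-call
    print(f"Task {context['task_instance'].task_id} failed")


def extract_and_load():
    # Placeholder for the actual ETL step (e.g. pull from an API, load to S3)
    pass


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 3,                              # automatic error recovery
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,  # exception reporting
    },
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```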
Benefits
- Employee Stock Purchase
- Term Insurance
- Accident Insurance
- Health Insurance
- Training Reimbursement
- Gratuity
- Mobile and Internet Reimbursement
- Team Outings