About The Role:
- Create and maintain optimal data pipeline architecture.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift, Kinesis, DynamoDB, Lambda.
- Experience with object-oriented and functional programming languages: Python, Java, C++, Scala.
- Experience working on a Data Lake project or building a Machine Learning platform is highly regarded.