- Solid computer science fundamentals, excellent problem-solving skills, and a strong understanding of distributed computing principles.
- At least 3 years of experience in a similar role, with a proven track record of building scalable and performant data infrastructure.
- Expert SQL knowledge and deep experience working with relational and NoSQL databases.
- Advanced knowledge of Apache Kafka and demonstrated proficiency in Hadoop v2, HDFS, and MapReduce.
- Experience with stream-processing systems (e.g., Storm, Spark Streaming), big data querying tools (e.g., Pig, Hive, Spark), and data serialization frameworks (e.g., Protobuf, Thrift, Avro).
- Bachelor's or Master's degree in Computer Science or a related field from a top university.
Job Type: Full-time
Job Location: Hyderabad