Responsibilities:
- Design and build data transformations efficiently and reliably for different purposes (e.g., reporting, growth analysis, multi-dimensional analysis)
- Design and implement reliable, scalable, robust, and extensible big data systems that support core products and business
- Establish solid design and engineering best practices for engineers and non-technical stakeholders alike
Minimum qualifications:
- BS or MS degree in Computer Science or related technical field or equivalent practical experience
- Experience with Big Data technologies (e.g., Hadoop, MapReduce, Hive, Spark, Hive Metastore, Presto, Flume, Kafka, ClickHouse, Flink)
- Experience performing data analysis, data ingestion, and data integration
- Experience with ETL (Extract, Transform, Load) processes and architecting data systems
- Experience with schema design, data modeling, and SQL queries
- Passion and self-motivation for technologies in the Big Data space