Consulting role with immediate interview and start. W-2 or individually owned Corp-to-Corp only; no third-party companies, H-1Bs, etc.

Responsible for the design, architecture, and development of projects powered by Google big data services and the MapR Hadoop distribution.

Must Have Skills/Experience:
- Bachelor's degree required
- 2+ years of solution architecture on Hadoop
- Demonstrated experience in the architecture, engineering, and implementation of enterprise-grade production big data use cases
- Extensive hands-on experience with MapReduce, Hive, Java, HBase, and the following Hadoop ecosystem products: Sqoop, Flume, Oozie, Storm, Spark, and/or Kafka
- Extensive experience in shell scripting
- Solid understanding of different file formats and data serialization formats such as Protobuf, Avro, and JSON
- Hands-on delivery experience on popular Hadoop distribution platforms such as Cloudera, Hortonworks, or MapR (MapR preferred)
- Excellent communication skills

Nice to Have:
- Coordinating the movement of data from original data sources into NoSQL data lakes and cloud environments
- Hands-on experience with Talend used in conjunction with Hadoop MapReduce/Spark/Hive
- Experience with Google Cloud Platform (Google BigQuery)
- Source control (preferably GitHub)
- Knowledge of agile development methodologies
- Experience with notebook/IDE environments such as Hue, Jupyter, or Zeppelin
- Solid experience with ETL technologies and data warehouse concepts