
Posted: Tuesday, September 5, 2017 5:57 PM

The most exciting part is the enormous potential for personal and professional growth. We are always seeking new and better tools to help us meet challenges, such as adopting proven open-source technologies to make our data infrastructure more nimble, scalable, and robust. Some of the cutting-edge technologies we have recently implemented are Kafka, Spark Streaming, Docker, and Mesos.

What you'll be doing:
• Design, build, and maintain reliable, scalable, enterprise-level distributed transactional data processing systems that scale the existing business and support new business initiatives
• Optimize jobs to use Kafka, Hadoop, Vertica, Spark Streaming, and Mesos resources as efficiently as possible
• Monitor and provide transparency into data quality across systems (accuracy, consistency, completeness, etc.)
• Increase the accessibility and effectiveness of data (work with analysts, data scientists, and developers to build and deploy tools and datasets that fit their use cases)
• Collaborate within a small team with diverse technology backgrounds
• Provide mentorship and guidance to junior team members

Team Responsibilities:
• Installation, upkeep, maintenance, and monitoring of Kafka, Hadoop, Vertica, and RDBMS
• Ingest, validate, and process internal and third-party data
• Create, maintain, and monitor data flows in Hive, SQL, and Vertica for consistency, accuracy, and lag time
• Maintain and enhance the framework for jobs (primarily aggregate jobs in Hive)
• Create different consumers for data in Kafka, such as Flafka for Hadoop, Flume for Vertica, and Spark Streaming for near-real-time aggregation
• Train developers and analysts on tools to pull data
• Tool evaluation, selection, and implementation
• Backups, retention, high availability, and capacity planning
• Disaster recovery: we run all our core data services in a second data center for complete business continuity
• Review and approval of database DDL, Hive framework jobs, and Spark Streaming jobs to make sure they meet our standards
• 24x7 on-call rotation for production support

Technologies We Use:
• Chronos - job scheduling
• Docker - packaged container images with all dependencies
• Graphite/Beacon - monitoring data flows
• Hive - SQL data warehouse layer for data in HDFS
• Impala - faster SQL layer on top of Hive
• Kafka - distributed commit log storage
• Marathon - cluster-wide init for Docker containers
• Mesos - distributed cluster resource manager
• Spark Streaming - near-real-time aggregation
• SQL Server - reliable OLTP RDBMS
• Sqoop - import/export of data to and from RDBMS
• Vertica - fast parallel data warehouse

Required Skills:
• BA/BS degree in computer science or a related field
• 5+ years of software engineering experience
• Knowledge of and exposure to distributed production systems; Hadoop experience is a huge plus
• Proficiency in Linux
• Fluency in Python; experience in Scala/Java is a huge plus
• Strong understanding of RDBMS and SQL
• Passion for engineering and computer science around data
• Willingness to participate in a 24x7 on-call rotation
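The data-quality monitoring described above often starts as a simple reconciliation of record counts and key coverage between a source and a target system. A minimal sketch in plain Python (the sample rows are hypothetical stand-ins for, say, a Hive table and its Vertica copy; this is an illustration, not the team's actual tooling):

```python
# Minimal data-quality reconciliation sketch: compares a "source" dataset
# (e.g. rows landed in Hive) against a "target" copy (e.g. the same rows
# loaded into Vertica). All names and sample rows are hypothetical.

def reconcile(source, target, key="id"):
    """Return completeness and consistency metrics for target vs. source."""
    src = {row[key]: row for row in source}
    tgt = {row[key]: row for row in target}

    missing = sorted(set(src) - set(tgt))              # keys lost in transit
    matched = set(src) & set(tgt)
    mismatched = sorted(k for k in matched if src[k] != tgt[k])

    return {
        "completeness": len(matched) / len(src) if src else 1.0,
        "consistency": (len(matched) - len(mismatched)) / len(matched) if matched else 1.0,
        "missing_keys": missing,
        "mismatched_keys": mismatched,
    }

source = [{"id": 1, "clicks": 10}, {"id": 2, "clicks": 5}, {"id": 3, "clicks": 7}]
target = [{"id": 1, "clicks": 10}, {"id": 2, "clicks": 6}]  # id 3 missing, id 2 wrong

report = reconcile(source, target)
print(report)
```

In practice a check like this would run per data flow and push its metrics to a monitoring system such as Graphite, alerting when completeness or consistency drops below a threshold.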


• Location: Manhattan

• Post ID: 131571802