Job Opportunities:
Job Title: Senior Software Engineer
Location: Reston, VA
Duration: Year-End+
Job Description:
Experience in IBM Tivoli Netcool is required. The candidate performs high-complexity (i.e., system-level application) analysis, design, development, and unit testing of software applications from user requirements and design documents, and resolves defects encountered during the various testing cycles.
Requirements:
- Expert knowledge of IBM Tivoli Netcool DASH, Web GUI, and Impact
- Knowledge of monitored environments: SUSE Linux, Solaris, Oracle, DNS, web firewalls, load balancers, global load balancers, portals
- Microsoft Office tools (Project, PowerPoint, Excel, and Word)
- Demonstrated strong knowledge of Linux; Perl and/or JavaScript; shell scripting; Java
- Oracle database experience
- API integration experience (see the sketch below)
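As an illustration of the scripting and API-integration work this role involves, here is a minimal Python sketch that polls a REST endpoint and prints each event. The URL and response shape are hypothetical placeholders, not part of any Netcool product API.

    import json
    import urllib.request

    # Hypothetical endpoint; a real integration would target the system's actual API.
    URL = "https://monitoring.example.com/api/v1/events"

    def fetch_events(url: str):
        """Fetch a JSON list of events from a REST API (illustrative only)."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        for event in fetch_events(URL):
            print(event)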
Job Title: Big Data Administrator
Location: Reston, VA
Duration: 6 Months+
Job Description:
- Provide support for successful installation and configuration of new Big Data clusters across various environments
- Provide support for successful expansion of Big Data clusters across various environments
- Provide day-to-day Big Data administration support
- Work with the Big Data team, Infrastructure Architect, and Change Management team to support configuration and code migrations of Big Data deliverables
- Leverage reusable code modules to solve problems across the team, including data preparation and transformation, and data export and synchronization
- Act as a Big Data administration liaison with Infrastructure, Security, Application Development, and Project Management
- Keep current on the latest Big Data technologies and products, including hands-on evaluations and in-depth research
- Work with the Big Data lead/architect to perform detailed planning and risk/issue management
Requirements:
- 5+ years of administrator experience working with batch processing and tools in the Hadoop technical stack (e.g. MapReduce, YARN, Hive, HDFS, Oozie)
- 5+ years of administrator experience working with tools in the stream processing technical stack (e.g. Kudu, Spark, Kafka, Avro)
- Administrator experience with NoSQL stores (e.g. HBase)
- Expert scripting skills
- Expert knowledge of Active Directory/LDAP security integration with Big Data
- Hands-on experience with at least one major Hadoop distribution, such as Cloudera, Hortonworks, MapR, or IBM BigInsights (preferably Cloudera)
- Hands-on experience monitoring and reporting on Hadoop resource utilization (a minimal monitoring sketch follows this list)
- 5+ years of data-related benchmarking, performance analysis, and tuning
- Hands-on experience supporting code deployments (Spark, Hive, Ab Initio, etc.) into the Hadoop cluster
- 4+ years of experience with SQL and at least two major RDBMSs
- 6+ years as a systems integrator with Linux systems and shell scripting
- Bachelor’s degree in Computer Science, Information Systems, Information Technology or related field and 8+ years of software development/DW & BI experience
- Excellent verbal and written communication skills
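To make the resource-utilization requirement concrete, here is a minimal monitoring sketch in Python. It reads the YARN ResourceManager REST endpoint /ws/v1/cluster/metrics, which is standard in recent Hadoop releases; the host name is a placeholder, and the field names follow the documented clusterMetrics response.

    import json
    import urllib.request

    # Placeholder ResourceManager address; adjust host and port for your cluster.
    RM = "http://resourcemanager.example.com:8088"

    def cluster_metrics() -> dict:
        """Read utilization counters from the YARN ResourceManager REST API."""
        with urllib.request.urlopen(f"{RM}/ws/v1/cluster/metrics", timeout=10) as resp:
            return json.load(resp)["clusterMetrics"]

    if __name__ == "__main__":
        m = cluster_metrics()
        pct = 100.0 * m["allocatedMB"] / max(m["totalMB"], 1)
        print(f"Memory in use: {pct:.1f}% ({m['allocatedMB']} of {m['totalMB']} MB)")
        print(f"Active nodes: {m['activeNodes']}, running apps: {m['appsRunning']}")

A script like this can feed a cron job or a dashboard; in practice the same counters are also surfaced by Cloudera Manager or Ambari.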
Job Title: Systems Analyst Specialist
Location: Richmond, VA
Duration: 3 Years
Qualifications:
The candidate must have an expert understanding of Linux, the Hadoop ecosystem, and associated infrastructure.
- Knowledge of setting up and configuring Kerberos, Spark, RStudio, Kafka, Flume, Shiny, Ranger, Oozie, NiFi, etc. is a must
- Should have a solid understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
- Should be able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure it, and take backups (see the sketch after this list)
- Solid understanding of on-premise and cloud network architectures
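As a sketch of the node-management and high-availability tasks above (assuming exclude-file decommissioning and an HA NameNode pair; the exclude-file path, host name, and service id are placeholders):

    import subprocess

    # Path assumed to match the dfs.hosts.exclude setting on this cluster.
    EXCLUDE_FILE = "/etc/hadoop/conf/dfs.exclude"

    def decommission(host: str) -> None:
        """Add a DataNode to the exclude file and ask the NameNode to re-read it."""
        with open(EXCLUDE_FILE, "a") as f:
            f.write(host + "\n")
        subprocess.run(["hdfs", "dfsadmin", "-refreshNodes"], check=True)

    def namenode_state(service_id: str) -> str:
        """Report active/standby state for one NameNode in an HA pair."""
        result = subprocess.run(
            ["hdfs", "haadmin", "-getServiceState", service_id],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        decommission("worker-07.example.com")   # hypothetical host
        print("nn1 is", namenode_state("nn1"))  # "nn1" is a placeholder service id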
Responsibilities:
Will work fairly independently and perform complex development and support services with the IT Enterprise infrastructure teams, ensuring operability, capacity, and reliability for the Big Data system. Will assist in planning, design, support, implementation, and troubleshooting activities. Will work with developers and architects to support an optimal and reliable Big Data infrastructure. Will be on call and may need to work evenings/weekends as required.
- Responsible for implementation and ongoing administration of the Hadoop infrastructure; align with the Architect to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Set up new users in Linux: this includes creating Linux users, setting up Kerberos principals, and testing HDFS, Hive, HBase, and YARN access for the new users (sketched after this list)
- Cluster maintenance, including creation and removal of nodes using the appropriate administrative tools
- Performance tuning of Hadoop clusters and Spark processes
- Screen the Hadoop cluster for job performance and capacity planning
- Monitor Hadoop cluster connectivity and security; set up and monitor users of the system
- Manage and review Hadoop log files; file system management and monitoring
- Diligently team with developers to guarantee high data quality and availability
- Collaborate with application teams and users to perform Hadoop updates, patches, and version upgrades when required
- Work with vendor support teams on support tasks and troubleshoot system issues
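The user-onboarding duty above can be sketched as a short Python wrapper around the standard Linux, Kerberos, and HDFS command-line tools. The realm and user name are placeholders, and a production cluster would more likely use LDAP/SSSD than local accounts.

    import subprocess

    REALM = "EXAMPLE.COM"  # placeholder Kerberos realm

    def run(cmd: list) -> None:
        """Run a command, echoing it first, and fail loudly on error."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def onboard(user: str) -> None:
        # 1. Local Linux account (per node; LDAP/SSSD in most real clusters).
        run(["useradd", "-m", user])
        # 2. Kerberos principal with a random key (run on the KDC host).
        run(["kadmin.local", "-q", f"addprinc -randkey {user}@{REALM}"])
        # 3. HDFS home directory owned by the new user.
        run(["hdfs", "dfs", "-mkdir", "-p", f"/user/{user}"])
        run(["hdfs", "dfs", "-chown", f"{user}:{user}", f"/user/{user}"])
        # 4. Smoke test: the new home directory should list cleanly.
        run(["hdfs", "dfs", "-ls", f"/user/{user}"])

    if __name__ == "__main__":
        onboard("jdoe")  # hypothetical user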