
Requirements Engineer Jobs At Synectics for Management Decisions, Inc

- 2272 Jobs
  • ETL Engineer

    Synectics for Management Decisions Inc. (3.8 company rating)


    Job Description: We are seeking a seasoned ETL Engineer to support the design, development, and maintenance of robust data pipelines that drive large-scale federal data initiatives. This role supports remote work but requires attendance at periodic onsite meetings in Washington, D.C.

    Key Responsibilities:
    - Develop and maintain ETL pipelines to transform and load data into Databricks.
    - Provide long-term support and enhancement of data pipelines across both data domain stores and a universal data hub.
    - Convert existing PL/SQL stored procedures from platforms such as Greenplum and Oracle into Databricks-compatible code.
    - Collaborate with analysts, data engineers, and stakeholders to ensure clean, reliable, and scalable data flows.

    Required Qualifications:
    - 6+ years of hands-on experience in ETL development.
    - Demonstrated success supporting at least two large-scale ETL projects involving Databricks.
    - Strong PL/SQL experience, particularly with legacy systems like Greenplum, Oracle, or similar.
    - Experience working on data projects for the government, preferably for the IRS.
    - Must have an active MBI clearance.
    - Bachelor's degree in IT, Computer Science, Engineering, or a related field.

    Technical Environment You'll Work In: You will work within a complex hybrid data architecture that includes both cloud-based and on-premise technologies, such as:
    - Cloud & Databases: AWS RDS (Postgres), AWS Redshift, Databricks, MongoDB, DynamoDB; exploring AWS Aurora
    - ETL & Integration: On-premise Informatica, Databricks ETL, EFTU, Informatica Metadata (EDC, Axon)
    - Data Governance: Immuta
    - Analytics & BI Tools: Advanced Analytics Platform (AAP) with AI/ML services, Tableau, Business Objects (BOE), and Power BI integration
    - Modeling & Development: IBM Rational Suite, IBM Data Architect
    - Legacy Systems: JCL, COBOL for z/OS, DB2
    Familiarity with any of these technologies is a strong plus.

    Work Flexibility: Remote work option available. Must be able to attend in-person meetings in Washington, D.C., as required.

    Synectics is an Equal Opportunity Employer. We offer a competitive salary and an impressive full benefits package that includes medical and dental, 401k with company matching, company-paid life and short/long-term disability insurance, and paid leave. We also provide an environment that supports everyone's professional development and growth.
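The PL/SQL-to-Databricks conversion work this posting describes is largely manual, but expression-level rewrites can be scripted. A minimal Python sketch of that idea follows; the rewrite table and function name are illustrative assumptions, not part of the posting, and a real migration would also handle cursors, control flow, and procedure bodies:

```python
import re

# Illustrative rewrite table: a few Oracle/Greenplum idioms and their
# Databricks SQL equivalents. Real conversions need far more than
# pattern substitution; this only shows the expression-level cases.
REWRITES = [
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "coalesce("),
    (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "current_timestamp()"),
    (re.compile(r"\bFROM\s+DUAL\b", re.IGNORECASE), ""),  # Databricks needs no DUAL
]

def to_databricks_sql(stmt: str) -> str:
    """Apply each rewrite in order and trim leftover whitespace."""
    for pattern, replacement in REWRITES:
        stmt = pattern.sub(replacement, stmt)
    return stmt.strip()

print(to_databricks_sql("SELECT NVL(name, 'n/a'), SYSDATE FROM DUAL"))
```

A helper like this is best used to flag candidate statements for review rather than to rewrite production code unattended.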
    $74k-98k yearly est. 13d ago
  • Engineer

    Imagine One Technology & Management Ltd. (4.7 company rating)

    Portsmouth, VA Jobs

    Job Description: Imagine One is now interviewing for two positions supporting the US Navy's Aircraft Carrier program from its field office in Portsmouth, Virginia. The work requires a fully degreed Engineer (Mid-Level) who will perform analytical tasks under the direction of the government staff to support the activity's mission of ship life cycle management. The successful Engineer (Mid-Level) will provide technical support for the development and management of the aircraft carrier Class Maintenance Plan. This is accomplished by retrieving, organizing, and analyzing system and component performance data, material condition data, and cost data to develop findings and recommendations for further staff action.

    Duties and Qualifications (essential and marginal):
    - Provide technical support in the principles, techniques, and practices of mechanical or electrical/electronic engineering and/or naval architecture as they relate to aircraft carrier Hull, Mechanical and Electrical (HM&E) systems; aviation support systems; and electronic systems.
    - Retrieve data from local or corporate databases, or by requests to external activities, and use that data to perform analysis that will prove or disprove the adequacy of the Class Maintenance Plan, highlight problematic systems or components, and result in recommendations for improvements to maintenance requirements and practices.
    - Assist with development of new ship life cycle strategies.
    - Maintain ship maintenance and configuration records.
    - Coordinate with external activities including NAVSEA, Type Commander, Reactor Plant Planning Yard, Propulsion Plant Engineering Activity, Propulsion Plant Planning Yard, naval and private shipyards, and ship's force.
    - Develop technical papers; participate in meetings, conferences, or discussions; and conduct analyses of historical maintenance data, as tasked.

    Experience Requirements:
    - Five or more years of experience as an Engineer; experience gained in naval or commercial shipyards is particularly desired.
    - Proficiency with data management and analysis (Microsoft Excel).
    - Familiarity with ship maintenance philosophy and strategies (Condition-Based Maintenance, Reliability Centered Maintenance, etc.).

    Educational Requirements:
    - Bachelor of Science degree in Mechanical or General Engineering.
    - Certification in Reliability Centered Maintenance (RCM) is a plus.

    Security Requirements:
    - US citizenship
    - DoD Secret clearance

    Imagine One Technology & Management, Ltd., offers a full package of benefits and competitive salary: excellent group medical, vision, and dental programs; 401K savings plan; $4K annual tuition reimbursement ($5K if pursuing a master's degree); employee training, development, and education programs; profit sharing; advancement opportunities; and much more! ISO 9001:2015, ISO 20000-1:2018, ISO 27001:2013; CMMI Development and Services Maturity Level 3. An Employee-Owned Business. EEO/Veterans/Disabled
    $67k-87k yearly est. 18d ago
  • Splunk Engineer

    CACI International Inc. (4.4 company rating)

    Washington, DC Jobs

    The Opportunity: CACI is seeking an experienced Splunk Engineer to join our team supporting critical Navy operations. This role involves:
    - Supporting the Naval Information Warfare Center (NIWC) Atlantic under a critical contract for systems validation, operations, sustainment, and installations.
    - Serving as a Splunk Engineer providing advanced monitoring, automation, and alerting capabilities in support of Fleet readiness.
    - Collaborating with cross-functional teams using SAFe Agile methodologies in AWS Cloud-based environments.
    - Contributing to cybersecurity compliance, system integration, and modernization initiatives within both Ashore and Afloat environments.
    This work can be performed in Chesapeake, VA; Norfolk, VA; Washington Navy Yard (DC); Charleston, SC; or New Orleans, LA.

    Responsibilities:
    - Design core scripts to automate Splunk maintenance and alerting tasks.
    - Develop dashboards and reports to display business-critical information.
    - Engage with Product Owners to align platform capabilities with evolving business needs.
    - Create scalable, flexible security architectures using standards-based integrations.
    - Assist in developing policies for the secure operation of Splunk infrastructure.
    - Support cloud-based deployment and sustainment (AWS).
    - Conduct software integration testing and cybersecurity compliance tasks.
    - Manage project tools including JIRA and Confluence.
    - Automate processes and develop efficiencies alongside development and install teams.
    - Maintain infrastructure for integration, cyber compliance, and network administration.
    - Support both UNIX/Linux and Windows-based systems.
    - Document configurations, changes, and troubleshooting procedures.

    Qualifications:
    Required:
    - Active DoD SECRET security clearance
    - Bachelor's degree with 11 years, or Master's degree with 7 years, in Computerized Systems (e.g., Design, Development, T&E, Network Protocols)
    - CompTIA Security+ certification
    - Two (2) years' experience in cloud-based solutions, including networking and compute services
    - Demonstrated proficiency in Splunk architecture, dashboards, and scripting
    - Familiarity with Python and PowerShell scripting for automation
    - Experience with LAN administration and system-level support in both Windows and Linux environments
    - Knowledge of DoD cybersecurity compliance, including RMF and IA standards
    - Experience managing systems in AWS environments
    Desired:
    - Hands-on experience with ESS (HBSS), ACAS, and Tenable tools
    - Exposure to MSSQL/Sybase, RHEL, JIRA, Confluence, and IIS
    - Familiarity with Microsoft admin tools (e.g., AD, GPO, DNS)
    - Background in NIWC Modernization or ISEA support
    - Experience in Agile/SAFe development frameworks
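The alerting automation this role describes usually reduces to scripts that aggregate events and flag anything past a threshold. A minimal Python sketch under stated assumptions: the event shape and field names (`host`, `action`) are hypothetical, and a real script would pull results from Splunk's REST API or SDK rather than take a list:

```python
# Sketch of a threshold-based alerting check; event structure is an
# assumption standing in for results returned by a Splunk search.
def failed_login_alerts(events, threshold=5):
    """Return (sorted) hosts whose failed-login count meets the threshold."""
    counts = {}
    for event in events:
        if event.get("action") == "failure":
            host = event.get("host", "unknown")
            counts[host] = counts.get(host, 0) + 1
    return sorted(h for h, n in counts.items() if n >= threshold)

sample = [{"host": "web01", "action": "failure"}] * 6
print(failed_login_alerts(sample))  # hosts that crossed the threshold
```

In practice this logic would live in a scheduled search or alert action; the Python form just makes the aggregation explicit.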
    $68k-86k yearly est. 1d ago
  • MECM Engineer

    CACI International Inc. (4.4 company rating)

    Chantilly, VA Jobs

    The Opportunity: CACI is looking for a skilled Microsoft Endpoint Management (MEM) Engineer, specializing in Microsoft Endpoint Configuration Manager (MECM, formerly SCCM), to support the Department of the Air Force (DAF) Enterprise IT as a Service (EITaaS) program. The candidate will join a team of engineers responsible for engineering and implementing an enterprise Unified Endpoint Management (UEM) solution capable of managing all Windows endpoints. On the EITaaS program, CACI will deliver enhanced capabilities and services to implement and operate an enterprise ITSM solution, enterprise service desk, endpoint management and security solution, as well as CONUS/OCONUS field support and life cycle support for end user devices, enabling the DAF to shift focus from IT operations to mission operations. In this role, you'll serve as an experienced MECM engineer assisting with the design and implementation of the MECM solution for the DAF classified network.

    Responsibilities:
    - Architect and design scalable solutions to accommodate the evolving needs and growth of the Department of the Air Force.
    - Develop and implement industry best practices for both routine and zero-day patch management, ensuring comprehensive documentation is provided to the operations team for seamless sustainment.
    - Provide Tier 3 support to the operations team for deployment of applications and application updates based on defined criteria.
    - Provide Tier 2 support to the operations team to remediate issues related to failed patches or applications and conflicts related to patching.
    - Create documentation/SOPs for new procedures and manual steps to complete tasks, for soft hand-offs and cross-training of support personnel.
    - Operate and maintain the overall MECM infrastructure supporting the enterprise, including future third-party plugins/enhancements.
    - Track all relevant SLAs/SLOs and provide reports as required.
    - Collaborate with DAF enterprise and program security teams to ensure requirements are met within allotted timeframes.
    - Conduct research and present technologies that can better serve the Department of the Air Force (DAF).
    - Communicate effectively and collaboratively with customers, stakeholders, and peers.

    Qualifications:
    Required:
    - An active DoD Secret clearance
    - BA/BS + 5 years of related experience as a Microsoft Endpoint Configuration Manager Engineer (an additional 5+ years of recent relevant experience may be substituted for the degree, for a total of 10+ years of experience)
    - DoD 8140 compliance (e.g., Security+)
    - Experience working in a Department of Defense (DoD) classified environment
    - Proven experience in the installation, troubleshooting, and maintenance of Microsoft Endpoint Configuration Manager (MECM) and its required components
    - Experience managing MECM in a multi-forest environment
    - Comprehensive understanding of Active Directory and its seamless integration with MECM environments
    - In-depth understanding of Microsoft SQL Server, clustering technologies, and database replication
    - Thorough understanding of Preboot Execution Environment (PXE), including its interaction with layer 3 devices and DHCP
    - Proven experience in automation using Microsoft PowerShell and other scripting technologies
    - Proven experience creating comprehensive reports using SQL Server Reporting Services (SSRS) and Power BI
    - Proven experience creating dynamic task sequences to automate the installation and upgrade of operating systems
    - Extensive experience developing and deploying software packages using MSI, MSIX, EXE, and other scripting technologies
    - A thorough understanding of Microsoft Windows Server and Workstation operating systems
    - Experience documenting and implementing Security Technical Implementation Guide (STIG) requirements
    - Familiarity with the ITIL v4 (Information Technology Infrastructure Library) framework and practices
    - Familiarity with Agile Scrum methodologies
    - Proficiency in MS Office tools, including Excel, Word, Project, and Visio, as well as SharePoint
    - Excellent verbal and written communication skills
    Desired:
    - Experience with the Adaptiva OneSite platform, including OneSite Anywhere, OneSite Health, and OneSite Patch
    - Experience with the ServiceNow platform and its integration into MECM environments
    - Acts independently to expose and resolve problems
    $64k-81k yearly est. 1d ago
  • Cloud Engineer

    V Group Inc. (4.2 company rating)

    Denver, CO Jobs

    For more details, please connect with Shweta Patel via email at *********************

    Direct End Client: State of Colorado
    Job Title: GCP Cloud Engineer
    Duration: 12+ Months Contract
    Interview Type: In-person / WebCam
    Ceipal ID: SCO_GCP740_SP
    Requirement ID: 10999740
    Note: Only W2 candidates / independent-visa candidates are eligible for this role.

    Description: The GCP Cloud Engineer is responsible for implementing the designs and standards provided by the Cloud Architect and established by the Cloud Operations team as a whole, and for deploying application hosting environments using repeatable templates, tools, and processes, with a primary focus on the GCP cloud platform within a DevOps-structured team and methodology. Additionally, based on guidance from the Cloud Architect, the engineer establishes the configurations, automation, and tools that allow cloud services to be consumed efficiently. Ultimately, the Cloud Engineer is the primary resource assigned to projects targeted for cloud deployment. They are responsible for provisioning the application hosting environments and associated monitoring and reporting to accomplish project objectives while acting as the primary infrastructure point of contact on such projects. They are also responsible for crafting, publishing, socializing, and overseeing adherence to the standards they establish, and for modifying them to achieve the strategic goals of OIT and the tactical needs of the applications deployed to the cloud. In addition, they will drive continuous improvement within the CloudOps team, as well as lifecycle management and exploration into evolving cloud-based solutions.

    Key Responsibilities:
    - Configure and deploy VPC environments and cloud instances as designed by the Cloud Architect to meet the needs across all hosted workloads
    - Evaluate and collaborate with the NetSecOps team to establish security controls necessary to meet policy and standards from the CISO, ideally in a templated and automated manner
    - Analyze and recommend new GCP capabilities for consideration for adoption
    - Manage users/orgs/groups/access within the cloud platform, based on requirements defined by the IAM team
    - Create monitoring and reporting capabilities to meet management needs of the cloud platform
    - Define and document standards and procedures (SOPs) for consumption of IaaS (via IaC), PaaS, and SaaS on GCP
    - Communicate and advocate capabilities of the cloud platform to drive adoption
    - Act as a Tier-2 escalation point for on-call/break-fix efforts, to diagnose and resolve incidents and problems with cloud-based systems
    - Work with NetSecOps resources to ensure network security policy is established in a consistent, repeatable, and automated manner

    Required Qualifications:
    - At least five (5) years of infrastructure development experience, with hands-on experience with GCP foundation services related to computing, network, storage, content delivery, administration and security, deployment and management, and automation technologies
    - At least five (5) years of Infrastructure as Code (IaC) experience in Terraform or similar
    - Deep domain expertise in cloud infrastructure solutions (i.e., Windows and Linux IaaS, business continuity and disaster recovery, security, management, storage, networking, OSS, containers, and Infrastructure as Code technology), breadth of technical experience, and the technical aptitude to learn and adjust to new technologies and cloud trends
    - Experience with, and understanding of, large-scale infrastructure deployments in enterprise-wide environments
    - Experience with automated Continuous Integration/Delivery
    - Scripting skills in Python, Bash, and PowerShell
    - Google Foundational certification

    Preferred Skills:
    - Professional Cloud Architect certification
    - Experience with Terraform
    - Experience with Azure DevOps

    V Group Inc. is an IT services company that supplies IT staffing, project management, and delivery services in software, network, help desk, and all IT areas. Our primary focus is the public sector, including state and federal contracts. We have multiple awards/contracts with the following states: CA, CT, FL, GA, MD, MI, NC, NY, OH, OR, PA, SC, TN, VA, and WA. If you are considering applying for a position with V Group or partnering with us on a position, please feel free to contact me with any questions you may have regarding our services and the advantages we can offer you as a consultant.

    Website: *****************
    LinkedIn: *********************************
    Facebook: *************************
    Twitter: *************************
    $67k-95k yearly est. 6d ago
  • Conversational AI Engineer (AI Azure Cognitive Services/CLU) ---- LOCALS ONLY --- Hybrid Role ---- US citizens / GC Holders ---- NO THIRD PARTIES

    Zillion Technologies, Inc. (3.9 company rating)

    Vienna, VA Jobs

    *****BEST BILL RATES***** *****US Citizens / GC Holders ONLY*****
    This is a direct banking client requirement! Those authorized to work without sponsorship are encouraged to apply. Reach Shaily Sharma - ********** Email: ************************************* // **********

    Conversational AI Engineer (AI Azure Cognitive Services/CLU)
    Location: Remote, with onsite twice a week (Vienna, VA // Pensacola, FL // Winchester, VA)
    Duration: Long-term, ongoing with no end date

    Must-haves:
    - Experience with Azure Cognitive Services - CLU (Conversational Language Understanding)
    - Experience continuously improving chatbot performance via analysis to improve experience and model accuracy
    - Prior experience with the voice channel and transcription fed into a chatbot
    - Collaborative, with the ability to work in a team environment

    We are looking for a Conversational AI Engineer to design, build, and optimize AI-powered chatbots and voice assistants using Azure Bot Framework and Conversational Language Understanding (CLU). This role will focus on training, tuning, and analyzing AI models for voice-based interactions, ensuring seamless and intelligent user experiences. The primary focus of this role will be post-transcription; the voice-to-text side will be handled by another team. The ideal candidate has expertise in natural language processing (NLP), Azure Cognitive Services (CLU specifically), and tuning and training a model that receives requests initiated within a voice IVR-style platform. A Conversational AI Engineer will be responsible for designing, developing, and optimizing the models within Azure Cognitive Services. Their work focuses on natural language understanding (NLU) and system integration to ensure smooth and effective human-AI interactions.

    Key Responsibilities of a Conversational AI Engineer:
    - Conversational Model Development
      - Builds and fine-tunes Conversational Language Understanding (CLU) models in platforms like Azure Language Studio.
      - Trains intents, entities, and utterances for better chatbot comprehension.
      - Implements context handling to maintain conversation flow across multiple turns.
    - Speech & Voice AI Integration (for voice bots)
      - Integrates speech-to-text (STT) and text-to-speech (TTS) services for voice-based interactions. (This role is only responsible for the post-transcription side of tuning; experience on the voice side would be helpful but is not necessary.)
    - Optimization & Tuning
      - Continuously improves chatbot performance by analyzing user interactions and model accuracy.
    - Testing & Debugging
      - Conducts unit testing, regression testing, and A/B testing to validate bot performance.
      - Identifies and fixes misclassifications, intent overlaps, and response errors.
      - Uses analytics tools to track user behavior and refine interactions.
    - Security & Compliance
      - Ensures compliance with AI ethics policies.
    - Collaboration & Continuous Learning
      - Works with data scientists and software engineers to enhance chatbot capabilities.
      - Stays updated on NLP advancements, Azure AI updates, and emerging conversational AI trends.
      - Communicates chatbot performance insights.

    Please send qualified resumes directly to: ************************************* // **********

    Thanks,
    Shaily Sharma
    Zillion Technologies Inc.
    Asst. Director - Talent Acquisition
    ********** Email: ************************************* // **********
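The post-transcription tuning this role centers on ultimately feeds an intent decision: accept the model's top intent when confidence is high, otherwise fall back. A minimal Python sketch of that routing step; the function, threshold, and fallback name are hypothetical, and in the real system the (intent, confidence) pair would come from an Azure CLU prediction response:

```python
# Hypothetical post-transcription intent router. The prediction tuple
# stands in for a CLU endpoint's top intent and its confidence score.
def route_intent(prediction, threshold=0.7, fallback="escalate_to_agent"):
    intent, confidence = prediction
    if confidence >= threshold:
        return intent      # confident enough: route to the matched intent
    return fallback        # low confidence: hand off rather than guess

print(route_intent(("CheckBalance", 0.92)))
print(route_intent(("CheckBalance", 0.41)))
```

Tuning work like "improving model accuracy via analysis" often amounts to watching how many calls land in the fallback branch and retraining utterances for the intents that miss.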
    $71k-95k yearly est. 2d ago
  • Sterling FTP Engineer

    Betsol (4.0 company rating)

    Denver, CO Jobs

    We are looking for a skilled Sterling FTP Engineer to manage, support, and enhance IBM Sterling File Gateway and secure file transfer (SFTP/FTPS) environments. The ideal candidate will have hands-on experience with Sterling B2B Integrator, strong troubleshooting skills, and a solid understanding of secure file transfer protocols and enterprise integration patterns.

    Key Responsibilities:
    - Design, configure, and manage IBM Sterling File Gateway and B2B Integrator environments.
    - Support and maintain file transfer workflows, partner configurations, and secure file exchange protocols (SFTP, FTPS, AS2, etc.).
    - Monitor file transfer jobs and proactively resolve transmission failures or performance issues.
    - Collaborate with internal and external stakeholders to onboard trading partners and define integration requirements.
    - Develop and maintain system documentation, configurations, and standard operating procedures.
    - Troubleshoot and resolve issues related to file transfers, connectivity, encryption, and security.
    - Perform system upgrades, patching, and routine maintenance of Sterling platforms.
    - Ensure compliance with data security and regulatory requirements during file transfers.
    - Provide 24x7 support for critical file transfer operations (on-call rotation may be required).
    - Work closely with infrastructure, network, and security teams to maintain optimal system performance.

    Requirements:
    - 5+ years of experience with IBM Sterling File Gateway and/or B2B Integrator.
    - Strong knowledge of secure file transfer protocols (SFTP, FTPS, AS2, etc.).
    - Experience with scripting languages (e.g., Shell, Python) for automation and monitoring.
    - Familiarity with SSL certificates, PGP encryption, and key management.
    - Experience with onboarding trading partners and configuring business processes.
    - Good understanding of networking fundamentals, firewalls, and proxy configurations.
    - Strong analytical, troubleshooting, and communication skills.

    Qualifications:
    - Experience range: 5 to 8 years
    - B.E./B.Tech/MCA/BSc (Computer Science)
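Monitoring transfer jobs and "proactively resolving transmission failures," as this posting puts it, typically follows a retry-then-alert pattern. A minimal Python sketch under stated assumptions: the `transfer` callable stands in for an SFTP/FTPS client call, and a real environment would use Sterling's own monitoring or a protocol library rather than this helper:

```python
# Sketch of the retry-and-alert pattern for file-transfer monitoring.
# `transfer` is an assumed callable that performs one transfer attempt
# and raises ConnectionError on a transient failure.
def run_with_retries(transfer, attempts=3):
    errors = []
    for attempt in range(1, attempts + 1):
        try:
            return transfer(), errors  # success: result plus error log so far
        except ConnectionError as exc:
            errors.append(f"attempt {attempt}: {exc}")
    # all attempts failed: surface the history so on-call can act
    raise RuntimeError("transfer failed; alert on-call: " + "; ".join(errors))
```

Returning the accumulated error list even on success lets a monitoring dashboard distinguish "clean" transfers from ones that needed retries, which is often the early signal of a degrading partner endpoint.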
    $63k-93k yearly est. 5d ago
  • Azure Cloud Engineer

    Take2 Consulting, LLC (3.7 company rating)

    Washington, DC Jobs

    We are seeking an Azure Cloud Engineer to support the transformation and administration of resilient, secure cloud environments for the IRS. All work is performed on-site in Washington, DC. This position requires a Secret-level security clearance.

    Responsibilities:
    - Design, deploy, operate, and maintain resilient, secure Azure Cloud environments to enable development teams to deliver features in the most efficient way possible.
    - Operate and maintain cloud environments using Infrastructure as Code (IaC) and SRE principles.
    - Server Provisioning: Provision and manage Azure virtual servers, ensuring they meet the required specifications for various user groups and applications.
    - Deployment and Updates: Schedule and deploy updates to virtual desktops, applications, and infrastructure components while minimizing downtime and disruption to users.
    - User Support: Respond to and resolve user tickets in a timely manner, providing technical support for the VDI environment, including connectivity, performance, and usability issues.
    - Customer Service: Deliver excellent customer service by communicating effectively with end users, understanding their needs, and ensuring a positive user experience.
    - Security Compliance: Ensure that all deployments comply with the strict security protocols required in a classified environment.
    - Maintain and develop automation and continuous build/integration/deployment infrastructure for multiple environments; write, build, and deploy scripts.
    - Enable observability and resilience throughout multi-cloud landing zones and Dev/Test/Prod environments.
    - Support implementation of security policies, standards, guidelines, and governance.

    Requirements:
    - Bachelor's degree and 4 or more years of related professional experience
    - Must possess a Secret clearance
    - Must possess an active Security+ certification (or higher IAM/IAT certification)
    - Must possess an active OS-level certification (e.g., Azure, AWS, Microsoft, Linux)
    - Familiarity with standard concepts, practices, and procedures of cloud technology, including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS)
    - Ability to work in cloud environments, with experience and knowledge in any of the following areas: Azure Cloud administration and support; Azure Virtual Desktop; Agile collaboration tools such as Atlassian Jira and Confluence; Terraform; Ansible
    - Experience working in Windows and Linux environments
    - Ability to meet with customers to gather and understand business requirements
    - Ability to quickly adapt to new situations and changing priorities
    - Ability to work under pressure, meet deadlines, and handle multiple projects simultaneously
    - Must be punctual, reliable, and accountable, with the ability to follow tasks through to completion

    Desired Education & Experience:
    - 2+ years of progressive experience administering and working in public cloud environments
    - Azure cloud certifications desired
    - Experience working in an agile environment preferred
    $72k-90k yearly est. 1d ago
  • SWIFT Engineer

    Unisys (4.6 company rating)

    Reston, VA Jobs

    Skills:
    - Skilled in supporting SWIFT products (Alliance Cloud, Alliance Access, Alliance Gateway, Alliance Web Platform) and related components (HSM, Alliance Connect)
    - Strong understanding of agile methodologies
    - Determining causes of operating errors and taking corrective action
    - Programming, including coding, debugging, and using relevant programming languages
    - Skilled in SWIFT products and services
    - Skilled in using UNIX
    - Skilled in SQL
    - Skilled in Excel
    - Experience using APIs for developing or programming software
    #LI-CGTS TS-2652
    $70k-90k yearly est. 5d ago
  • GCP Cloud Engineer

    IMCS Group (3.9 company rating)

    Denver, CO Jobs

    Title: Cloud Engineer
    Duration: 12+ months
    Client: State of CO

    The GCP Cloud Engineer is responsible for implementing the designs and standards provided by the Cloud Architect and established by the Cloud Operations team as a whole, and for deploying application hosting environments using repeatable templates, tools, and processes, with a primary focus on the GCP cloud platform within a DevOps-structured team and methodology. Additionally, based on guidance from the Cloud Architect, the engineer establishes the configurations, automation, and tools that allow cloud services to be consumed efficiently. Ultimately, the Cloud Engineer is the primary resource assigned to projects targeted for cloud deployment. They are responsible for provisioning the application hosting environments and associated monitoring and reporting to accomplish project objectives while acting as the primary infrastructure point of contact on such projects. They are also responsible for crafting, publishing, socializing, and overseeing adherence to the standards they establish, and for modifying them to achieve the strategic goals of OIT and the tactical needs of the applications deployed to the cloud. In addition, they will drive continuous improvement within the CloudOps team, as well as lifecycle management and exploration into evolving cloud-based solutions. They handle the day-to-day interactions with partner vendors to help meet work-unit and organizational objectives. The engineer will be required to participate in a weekly after-hours on-call rotation; while on call, the engineer must be available to assist in troubleshooting or resolving any production break-fix issues after hours. Individuals selected for this position will also need to complete agency background checks, which may include but are not limited to Criminal Justice Information and state and federal background checks.

    Primary Duties and Responsibilities:
    - Configure and deploy VPC environments and cloud instances as designed by the Cloud Architect to meet the needs across all hosted workloads
    - Evaluate and collaborate with the NetSecOps team to establish security controls necessary to meet policy and standards from the CISO, ideally in a templated and automated manner
    - Analyze and recommend new GCP capabilities for consideration for adoption
    - Manage users/orgs/groups/access within the cloud platform, based on requirements defined by the IAM team
    - Create monitoring and reporting capabilities to meet management needs of the cloud platform
    - Define and document standards and procedures (SOPs) for consumption of IaaS (via IaC), PaaS, and SaaS on GCP
    - Communicate and advocate capabilities of the cloud platform to drive adoption
    - Act as a Tier-2 escalation point for on-call/break-fix efforts, to diagnose and resolve incidents and problems with cloud-based systems
    - Work with NetSecOps resources to ensure network security policy is established in a consistent, repeatable, and automated manner

    Minimum Qualifications:
    - At least five (5) years of infrastructure development experience, with hands-on experience with GCP foundation services related to computing, network, storage, content delivery, administration and security, deployment and management, and automation technologies
    - At least five (5) years of Infrastructure as Code (IaC) experience in Terraform or similar
    - Deep domain expertise in cloud infrastructure solutions (i.e., Windows and Linux IaaS, business continuity and disaster recovery, security, management, storage, networking, OSS, containers, and Infrastructure as Code technology), breadth of technical experience, and the technical aptitude to learn and adjust to new technologies and cloud trends
    - Experience with, and understanding of, large-scale infrastructure deployments in enterprise-wide environments
    - Experience with automated Continuous Integration/Delivery
    - Scripting skills in Python, Bash, and PowerShell
    - Google Foundational certification

    Preferred Qualifications:
    - Professional Cloud Architect certification
    - Experience with Terraform
    - Experience with Azure DevOps
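The standards-enforcement side of this role ("crafting, publishing, and overseeing adherence to the standards they establish") is often automated as policy checks run before deployment. A toy Python sketch of one such check; the required label names and the dict-shaped instance definition are assumptions for illustration, not taken from the posting:

```python
# Toy policy check: verify that an instance definition carries the
# labels a CloudOps standard might require before it is deployed.
# Label names and the dict shape are illustrative assumptions.
REQUIRED_LABELS = {"owner", "environment", "cost-center"}

def missing_labels(instance: dict) -> set:
    """Return the required labels this instance definition lacks."""
    return REQUIRED_LABELS - set(instance.get("labels", {}))

inst = {"name": "vm-1", "labels": {"owner": "oit", "environment": "dev"}}
print(missing_labels(inst))  # labels the standard still requires
```

In a Terraform-based workflow the same idea would usually be expressed as a policy-as-code rule evaluated in CI rather than a standalone script.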
    $66k-94k yearly est. 5d ago
  • SailPoint Engineer

    IO Associates-Us (4.2 company rating)

    Washington, DC Jobs

    SailPoint IdentityIQ / IdentityNow Engineer (IGA)

    Join an elite Identity team supporting Fortune 500 and U.S. Federal clients. As a SailPoint Engineer, you'll play a hands-on role building and optimizing SailPoint IdentityIQ / IdentityNow solutions, on-prem and in the cloud, within enterprise environments.

    What You'll Do:
    - Implement SailPoint IdentityIQ / IdentityNow configurations and customizations
    - Develop and maintain connectors, rules, workflows, reports, and provisioning logic
    - Create and maintain scripts and automation (BeanShell, Java, REST APIs, Python/PowerShell)
    - Integrate SailPoint with target systems (AD, Azure AD, HRIS, cloud apps, mainframes)
    - Conduct unit testing, support QA/UAT, and participate in code reviews
    - Support production environments: monitor, troubleshoot, and optimize performance
    - Develop and maintain technical documentation (HLD/LLD, knowledge articles)
    - Collaborate with architects, developers, admins, and client stakeholders
    - Support CI/CD processes and DevOps tooling for SailPoint deployments

    Technical Requirements (core expertise and depth):
    - SailPoint IdentityIQ / IdentityNow: 3+ years of hands-on implementation; SailPoint Certified IdentityIQ Engineer preferred
    - IAM / IGA Domain: 5+ years of experience in identity provisioning, RBAC/ABAC, and certifications
    - Languages & Scripting: Java, BeanShell, REST/JSON, SQL; plus Python, PowerShell, or Bash for automation
    - Protocols & Standards: SAML, OAuth2/OIDC, WS-Fed, SCIM
    - DevOps & Tooling: Git, Jenkins, Docker/K8s, Terraform, Jira/Confluence
    - Platforms: Linux & Windows server administration; directory services (AD/LDAP); relational DBs (Oracle, MSSQL, MySQL)

    Additional Skills:
    - Strong problem-solving and troubleshooting skills
    - Ability to work independently and in Agile team environments
    - Good written and verbal communication; able to document work clearly
    - Eager to learn from senior architects and contribute to best practices

    Certifications & Frameworks (a plus):
    - SailPoint Certified IdentityIQ Engineer
    - CISSP or equivalent security credential
Agile/Scrum or SAFe practitioner Familiarity with NIST 800-53, FedRAMP, ISO 27001 controls Eligibility Active Public Trust, Secret, or Top-Secret clearance (or ability to obtain clearance is required) US Citizen Desired Skills and Experience Join an elite Identity team supporting Fortune 500 and U.S. Federal clients. As a SailPoint Engineer, you'll play a hands-on role building and optimizing SailPoint IdentityIQ / IdentityNow solutions-on-prem and in the cloud-within enterprise environments. What You'll Do Implement SailPoint IdentityIQ / IdentityNow configurations and customizations Develop and maintain connectors, rules, workflows, reports, and provisioning logic Create and maintain scripts and automation (BeanShell, Java, REST APIs, Python/PowerShell) Integrate SailPoint with target systems (AD, Azure AD, HRIS, cloud apps, mainframes) Conduct unit testing, support QA/UAT, and participate in code reviews Support production environments: monitor, troubleshoot, and optimize performance Develop and maintain technical documentation (HLD/LLD, knowledge articles) Collaborate with architects, developers, admins, and client stakeholders Support CI/CD processes and DevOps tooling for SailPoint deployments Technical Requirements Core Expertise Depth SailPoint IdentityIQ / IdentityNow 3+ yrs hands-on implementation; SailPoint Certified IdentityIQ Engineer preferred IAM / IGA Domain 5+ yrs experience in identity provisioning, RBAC/ABAC, certifications Languages & Scripting Java, BeanShell, REST/JSON, SQL; plus Python, PowerShell, or Bash for automation Protocols & Standards SAML, OAuth2/OIDC, WS-Fed, SCIM DevOps & Tooling Git, Jenkins, Docker/K8s, Terraform, Jira/Confluence Platforms Linux & Windows server admin; directory services (AD / LDAP); relational DBs (Oracle, MSSQL, MySQL) Additional Skills Strong problem-solving and troubleshooting skills Ability to work independently and in Agile team environments Good written and verbal communication; able to document work 
clearly Eager to learn from senior architects and contribute to best practices Certifications & Frameworks (a plus) SailPoint Certified IdentityIQ Engineer CISSP or equivalent security credential Agile/Scrum or SAFe practitioner Familiarity with NIST 800-53, FedRAMP, ISO 27001 controls Eligibility Active Public Trust, Secret, or Top-Secret clearance (or ability to obtain clearance is required) US Citizen
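The protocols this role lists include SCIM, the standard provisioning interface that identity connectors often speak. As an illustrative, hedged sketch of that kind of integration work (the account values below are made up), this builds a minimal SCIM 2.0 user-creation payload per the RFC 7643 core schema:

```python
import json

def build_scim_user(user_name, given_name, family_name, email):
    """Build a minimal SCIM 2.0 user-creation payload (RFC 7643 core schema)."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given_name, "familyName": family_name},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

# A provisioning connector would POST this JSON to the target system's /Users endpoint.
payload = build_scim_user("jdoe", "Jane", "Doe", "jdoe@example.com")
body = json.dumps(payload)
```

In a real SailPoint deployment the connector framework generates and sends such payloads; this only shows the wire-format shape.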
    $74k-98k yearly est. 3d ago
  • Senior Data Engineer

    Unisys 4.6 company rating

    McLean, VA Jobs

    Job Title: Senior AWS Cloud/Data Engineer Duration: Long Term Interview Mode: In-Person Note: Minimum 14-16 years of overall experience. Job Description: Expert in development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project. New System: Python and PySpark, AWS Postgres DB, Cucumber for automation. Current System: Informatica, SAS, AutoSys, DB2. Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Write automated tests using Cucumber, based on the new microservices-based architecture. Promote top code quality and solve issues related to performance tuning and scalability. Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, plus experience with SonarQube and Fortify. Able to communicate and engage with business teams, analyze the current business requirements (BRS documents), and create necessary data mappings. Strong skills and experience in reporting applications development and data analysis preferred. Knowledge of Agile methodologies and technical documentation. Location: Hybrid (3 days in office and 2 days remote; expected to be online on Teams with good network quality and responsive during business hours). Nice to Have: Snowflake, AMQs, AWS, Kubernetes/Amazon EKS, Java, Spring Boot, Informatica, SAS, AutoSys, DB2 #LI-CGTS #TS-2942
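A core task in this role is turning BRS documents into data mappings that carry legacy columns into the new Python/PySpark system. A minimal illustrative sketch with hypothetical column names (in the real pipeline this would typically be a PySpark select or withColumnRenamed):

```python
# Hypothetical source-to-target column mapping, of the kind derived from a BRS document.
MAPPING = {
    "CUST_NM": "customer_name",
    "ACCT_NO": "account_number",
    "OPN_DT": "opened_date",
}

def apply_mapping(record, mapping):
    """Rename legacy columns to target-model names, dropping unmapped fields."""
    return {target: record[source] for source, target in mapping.items() if source in record}

legacy_row = {"CUST_NM": "ACME", "ACCT_NO": "42", "OPN_DT": "2020-01-01", "FILLER": ""}
modern_row = apply_mapping(legacy_row, MAPPING)
```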
    $77k-106k yearly est. 2d ago
  • Cloud Engineer

    Zillion Technologies, Inc. 3.9 company rating

    McLean, VA Jobs

    Must Have Qualifications: Cloud EPM developer/analyst to support configuration of Hyperion calc scripts, business rules, and data management/exchange rules to support changes required for the client's ERP solution. Must have recent Python experience. Qualifications: Apply understanding of data warehouse/database concepts, as well as experience in relational database structures, research methods, sampling techniques, and system testing. Good understanding and demonstrable experience with automation processes using ETL tools and Python scripts (experience with EPM Automate is a plus). Solid understanding of API calls and REST API processes. Solid experience in configuring application code for Oracle EPM solutions such as Calc Scripts, Business Rules, and Data Management/Data Exchange Rules, specifically in the EPBCS space. The work involves converting about 215 business rules/calc scripts and 66 input forms/business rules from the current G&A environment to the new one. Configure and customize the ARCS application to meet the client's specific reconciliation requirements, ensuring alignment with best practices and industry standards. Analyze existing reconciliation processes and identify opportunities to streamline and automate them using ARCS functionalities.
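The Oracle EPM automation work above leans on REST API calls driven from Python scripts. The sketch below, using only the standard library and a hypothetical endpoint and job name, constructs (without sending) an authenticated request of the general shape such automation uses:

```python
import base64
import urllib.request

def build_epm_job_request(base_url, user, password, job_name):
    """Construct (but do not send) a hypothetical REST call to launch an EPM job."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"{base_url}/jobs",
        data=f'{{"jobName": "{job_name}"}}'.encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The URL, credentials, and job name are illustrative placeholders.
req = build_epm_job_request("https://epm.example.com/api", "svc_user", "secret", "NightlyCalc")
```

A production script would send this with `urllib.request.urlopen` (or a client library) and handle status codes and retries.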
    $71k-95k yearly est. 1d ago
  • Cloud Engineer

    Covetus 3.8 company rating

    Richmond, VA Jobs

    • Participate in on-call rotations for 24/7 Azure Production support • Respond to and resolve production incidents and outages • Conduct root cause analysis for major incidents and implement preventive measures • Provide advanced technical support for Azure Databricks, including troubleshooting and resolving complex issues • Monitor system logs, detect issues, and resolve technical problems related to the Kubernetes infrastructure • Maintain and optimize Azure infrastructure for production environments • Monitor system health, performance, and security • Implement and manage backup and disaster recovery solutions • Perform capacity planning and cost optimization • Automate routine tasks and create self-healing systems • Implement and maintain security controls and compliance measures • Collaborate with development teams to improve application performance and scalability • Manage and optimize Azure resource utilization • Stay updated with new Azure features and best practices • Manage and optimize costs through FinOps practices • Assist in the design and implementation of new Azure-based solutions • Document processes, configurations, and system architectures • Strong understanding of Azure services, including Azure Kubernetes Service (AKS) • Create and maintain Bash and PowerShell scripts to maintain the infrastructure • Create and maintain Terraform scripts • Mentor junior team members and share knowledge • Azure Administrator Associate (AZ-104) & Azure Developer Associate (AZ-204) • Azure Network Engineer Associate (AZ-700)
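One responsibility above is monitoring system logs and detecting issues in the Kubernetes infrastructure. A toy sketch of that idea, with illustrative patterns and log lines (real monitoring would use Azure Monitor or a SIEM rather than string matching):

```python
def find_incidents(log_lines, patterns=("ERROR", "CrashLoopBackOff", "OOMKilled")):
    """Return log lines that match any incident pattern (patterns are illustrative)."""
    return [line for line in log_lines if any(p in line for p in patterns)]

# Hypothetical AKS log output.
logs = [
    "INFO  pod web-7f9 started",
    "ERROR pod worker-2 failed readiness probe",
    "WARN  high memory usage on node aks-np-1",
    "INFO  pod worker-2 restarted: CrashLoopBackOff",
]
incidents = find_incidents(logs)
```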
    $69k-86k yearly est. 5d ago
  • DevSecOps Engineer (Openshift/Kubernetes) - TS/SCI

    Maxar Technologies 4.7 company rating

    Washington, DC Jobs

    Please review the job details below. Unlock your future with Maxar! We are on the lookout for a savvy DevSecOps Engineer with Linux and OpenShift or Kubernetes expertise to lead the charge in migrating and managing our services. We've got an exciting new effort happening to modernize and shift our existing system to brand-new hardware, and we'd love some extra hands. Join our tight-knit ops, software, and analytics team, where your methodical approach and mission-focused mindset will play a pivotal role in crafting solutions to enable others. Dive into a project that not only challenges you but allows direct interaction with users and technologists who assist in national security operations. This is a full-time onsite position; hybrid/remote options are not available. Note: US citizenship and an active TS/SCI clearance are mandatory for this position and required for consideration. Must be open to acquiring a CI Poly. Location: Herndon, VA. Principal Responsibilities: Administer, troubleshoot, and tune new and existing OpenShift and/or Kubernetes deployments. Stand up automated pipelines and assist the team with deploying containerized code/applications into a Kubernetes cluster. Design and apply a hybrid strategy (cloud and local virtualized) for existing and future architecture needs. Troubleshoot and resolve operational network, pipeline, and infrastructure issues. Support compliance with ATO accreditation needs through tuning, patching, or documentation. Communicate with multidisciplinary SAFe Agile teams and articulate technical concepts and ideas effectively. Position Requirements: 8+ years of technical experience with DevSecOps and strong Linux experience required. 4 years of OpenShift and/or Kubernetes experience, including open-source and custom application deployments and maintenance. 
    Capable of working effectively with a geographically distributed ops & development team. Communicates effectively with customers and team in written and oral forums. Willingness to work 80% of the time, or as needed, in a SCIF environment. Degree or equivalent demonstrated experience in a technical field. Active TS/SCI clearance and US citizenship. Additional Skills Desired: Some familiarity in the following other areas would be fabulous: An adaptable and solution-centric mindset that embraces technology enablers. Experience with: hardware and software sustainment with IC and DoD networks; distributed processing methods and tools, such as REST APIs, microservices, and IaaS/PaaS services; developing and deploying web services; working with open-source resources in a government computing environment; maintaining backend GIS technologies; ICD 503; big data technologies such as Accumulo, Spark, Hive, Hadoop, or Elasticsearch. Familiarity with: hybrid cloud/on-prem architecture, AWS, C2S, and OpenStack; concepts such as data visualization, data management, data integration, user interfaces, and databases. CompTIA Security+ or comparable certification for privileged user access. Experience in or supporting military/intelligence work, or knowledge of some TCPED systems, is a plus! #cjpost #LI-RD In support of pay transparency at Maxar, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range. The base pay for this position within the Washington, DC metropolitan area is: $131,000.00 - $219,000.00 annually. 
For all other states, we use geographic cost of labor as an input to develop market-driven ranges for our roles, and as such, each location where we hire may have a different range. We offer a comprehensive package of benefits including paid time off, health and welfare insurance, and 401(k) to eligible employees. You can find more information on our benefits at: The application window is three days from the date the job is posted and will remain posted until a qualified candidate has been identified for hire. If the job is reposted regardless of reason, it will remain posted three days from the date the job is reposted and will remain reposted until a qualified candidate has been identified for hire. The date of posting can be found on Maxar's Career page at the top of each job posting. To apply, submit your application via Maxar's Career page. Maxar Technologies values diversity in the workplace and is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
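The role above centers on deploying containerized applications into OpenShift/Kubernetes clusters. As a hedged sketch, here is a minimal apps/v1 Deployment manifest built as a Python dict (the service name and image are hypothetical); in practice this would be YAML applied via a pipeline or kubectl:

```python
def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest (apps/v1) as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Name and registry/image are illustrative placeholders.
manifest = deployment_manifest("geo-api", "registry.example.com/geo-api:1.0")
```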
    $68k-84k yearly est. 5d ago
  • DevOps Engineer

    Mantech International 4.5 company rating

    Norfolk, VA Jobs

    Shape the future of defense with ManTech! Join a team dedicated to safeguarding our nation through advanced tech and innovative solutions. Since 1968, we've been a trusted partner to the Department of Defense, delivering cutting-edge projects that make a real impact. Dive into exciting opportunities in Cybersecurity, IT, Data Analytics and more. Propel your career forward and be part of something extraordinary. Your journey starts now-protect and innovate with ManTech! ManTech seeks a motivated, career and customer-oriented DevOps Engineer to join our team onsite in Norfolk, Virginia. This is an onsite position. This role will support the Navy Continuous Training Environment (NCTE) Program! The primary focus will be building and maintaining a software pipeline structure to automate the process of building, testing, and deploying software within the Navy Training Baseline (NTB). Responsibilities of this position include, but are not limited to: Perform a variety of systematic, disciplined, and quantifiable approaches to the development, operation, and maintenance of software systems supporting the Navy Continuous Training Environment (NCTE) Understand the DevOps lifecycle from infrastructure and building, to monitoring and operating a product or service. 
Implement testing schemes with build pipeline tools Manage quarterly releases Develop and provide government-approved release notes for each quarterly NTB release, and update all public-facing pages and documentation to reflect new releases Conduct large-scale software deployments, or monitoring and testing, such as continuous integration and continuous delivery (CI/CD) Write shell scripts to automate relevant tasks and implement advanced software development practices and agile development practices such as code reviews using source control Work with container security technologies, evaluating and mitigating or resolving vulnerability findings Guide software development and deployment and prepare system engineering management plans and system integration and test plans Work with the Development team to configure and deploy the CI/CD tools and ensure that the CI/CD tools are used effectively Implement Quality Assurance (QA) automation to improve the speed, efficacy, and output of testing methodologies Minimum Qualifications: High School Diploma and 5+ years of relevant DevOps experience. Experience with building and supporting DevOps tools and CI/CD pipelines. Experience with automation and configuration management tools such as Chef, Puppet, or Ansible. Experience with container orchestration tools such as Docker Swarm or Kubernetes. Experience using SonarQube for code analysis. Experience with CI/CD tools such as Jenkins. Experience with version control management using Git and GitLab is essential for efficient collaboration and code management. Experience developing custom applications and supporting them in a production environment. Must have a valid Security+ certification Up to 25% travel required or as needed Preferred Qualifications: Relevant work experience as a DevOps engineer within a U.S. Government environment, DoD strongly preferred. 
Knowledge of systems design/development lifecycle (SDLC), software systems theory and engineering principles, network/systems design and implementation, and virtualization. Clearance Requirements: US Citizenship and active Secret Security Clearance with the ability to obtain and maintain an Active Top Secret/ SCI security clearance
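The role above is about pipelines that build, test, and deploy software with fail-fast behavior. A minimal sketch of that control flow (stage names are illustrative; a real pipeline would be defined in a tool like Jenkins or GitLab CI):

```python
def run_pipeline(stages):
    """Run named stage callables in order; stop at the first failure (fail-fast)."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name  # stages that passed, plus the failed stage
        completed.append(name)
    return completed, None  # all stages passed

# Hypothetical stages; real ones would invoke build/test/deploy tooling.
results = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
])
```

The fail-fast shape is what keeps a broken build from ever reaching the deploy stage.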
    $80k-105k yearly est. 2d ago
  • CloudOps Engineer

    Decisions 4.2 company rating

    Virginia Beach, VA Jobs

    Job Description Decisions is a fast-growing, private-equity-backed technology company that provides an integrated workflow and rules platform for business process automation (BPA). Trusted by top Fortune 500 firms and SMBs worldwide, Decisions empowers diverse industries around the globe to streamline and improve their processes, enhancing efficiency and yielding results, regardless of technical expertise. This no-code automation platform seamlessly integrates AI tools, rules engines, and workflow management, enabling the transformation of customer experiences, modernization of legacy systems, and the achievement of automation goals three times faster than traditional software development. The CloudOps Engineer is responsible for designing, implementing, and maintaining the cloud infrastructure of a SOC 2-compliant software company. This role focuses on securing cloud environments, ensuring high availability, optimizing network performance, and enforcing compliance controls in alignment with SOC 2 security principles. This position is on-site at our HQ in Virginia Beach, VA. Key Responsibilities Network & Security Management Architect and maintain secure network configurations, including VPN gateways, firewalls, and zero-trust architectures. Configure and manage network segmentation, VPC peering, and load balancing to enhance security and efficiency. Implement cloud-native security controls such as Security Groups, IAM roles, and policy-based access controls. Monitor and enforce SOC 2 security frameworks, including encryption at rest and in transit, least privilege access, and secure authentication protocols. Conduct vulnerability assessments, apply patch management, and remediate security threats proactively. Inform and execute incident response playbooks and coordinate with compliance teams to handle security incidents. Cloud Infrastructure & Operations Deploy and optimize cloud-based workloads with automation tools like Terraform, Pulumi, or CloudFormation. 
Maintain high availability and fault tolerance, implementing disaster recovery strategies for SOC 2 compliance. Optimize network traffic and latency using CDNs, DNS configurations, and edge computing solutions. Ensure continuous monitoring with SIEM tools (e.g., Splunk, Elastic Security) for network security analysis. Manage cloud logging and audit trails (AWS CloudTrail, Azure Monitor, Google Cloud Logging) to ensure compliance. Automate configuration management and deployment pipelines with DevOps practices. Compliance & Risk Management Align cloud security policies with SOC 2 Trust Services Criteria: Security, Availability, Confidentiality, Processing Integrity, and Privacy. Conduct periodic compliance audits, risk assessments, and security awareness training for engineering teams. Implement role-based access controls (RBAC) and attribute-based access controls (ABAC). Maintain third-party vendor security assessments for cloud services used in the ecosystem. Work closely with compliance teams to document security controls, review policies, and ensure SOC 2 reporting standards are met. Key Performance Indicators (KPIs) Network Uptime – Maintain 99.9% availability across cloud infrastructure. Incident Response Time – Detect and mitigate security threats within defined SLAs. Compliance Adherence – Maintain 100% alignment with SOC 2 security controls and audit requirements. Automation Efficiency – Increase infrastructure automation to reduce manual intervention by X%. Security Posture – Reduce unauthorized access incidents and misconfigurations through proactive security audits. Cost Optimization – Optimize cloud costs while maintaining performance and compliance standards. Required Skills & Experience Strong expertise in cloud networking, security architecture, and SOC 2 governance. Hands-on experience with cloud platforms (AWS, Azure) and network security tools. Exposure to IaC (Terraform, CloudFormation) and CI/CD automation. 
Exposure to SIEM, SOC automation, and cloud-native security tools. Strong understanding of identity & access management (IAM) and zero-trust security models. Knowledge of container security (Kubernetes, Docker) and serverless security best practices. Nice to Have Certifications Network+ and Security+ AWS Certified Security – Specialty Microsoft Certified: Azure Security Engineer Associate Certified Cloud Security Professional (CCSP) Certified Information Systems Security Professional (CISSP) SOC 2 Compliance & Audit Certifications
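The posting above calls for implementing role-based access controls (RBAC) under least privilege. A toy sketch of the core check (the roles and permissions are hypothetical):

```python
# Illustrative role-to-permission grants; least privilege means a role
# gets only the actions it explicitly needs.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "restart"},
    "admin": {"read", "restart", "configure"},
}

def is_allowed(role, action):
    """Allow only actions explicitly granted to the role; deny by default."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Deny-by-default (unknown roles get an empty permission set) is the property SOC 2 access-control reviews typically look for.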
    $73k-97k yearly est. 7d ago
  • Sr. DevOps Engineer (Azure Services for AI/ML) - W2 and Locals Only

    SGS Technologie 3.5 company rating

    Denver, CO Jobs

    Only US Citizens and Green Card holders will be considered; W2 only, and local candidates only. We're seeking a skilled DevOps Engineer to support our AI team in deploying and managing cloud infrastructure on Microsoft Azure. The ideal candidate will have hands-on experience with Terraform, a solid grasp of cloud DevOps practices, and a proven ability to work collaboratively across teams in a regulated enterprise environment. Key Responsibilities Build and maintain infrastructure-as-code using Terraform for Azure services Support deployment of AI and ML applications in Azure (e.g., Azure APIM, OpenAI, and other GenAI services) Manage and optimize CI/CD pipelines Develop CI/CD processes for Python and Java enterprise services Collaborate closely with data scientists, ML engineers, and cloud architects Apply best practices for cloud security, compliance, and enterprise networking Skills: Required Qualifications 2+ years of experience in cloud DevOps roles (Azure preferred; AWS or GCP a plus) Production experience with Terraform Experience with CI/CD tools, Git, and infrastructure automation Strong understanding of cloud networking and security Experience building CI/CD pipelines for Python-based and Java enterprise applications Nice to Have Familiarity with Azure DevOps and deployment templates Exposure to AI/ML workflows or MLOps pipelines Experience working in regulated or enterprise environments Proficiency in Python or scripting for automation Education: Bachelor's degree in Computer Science or a related field, or equivalent work experience.
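The role above builds Azure infrastructure-as-code with Terraform, and pipelines commonly feed Terraform its variable values via *.tfvars.json files. A small sketch of generating one from pipeline parameters (the variable names are hypothetical):

```python
import json

def render_tfvars(env, location, tags):
    """Serialize pipeline parameters as a Terraform .tfvars.json document."""
    return json.dumps(
        {"environment": env, "location": location, "tags": tags},
        indent=2,
        sort_keys=True,
    )

# Illustrative values a CI/CD pipeline might pass before running `terraform plan`.
tfvars = render_tfvars("dev", "eastus2", {"owner": "ai-platform"})
```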
    $82k-105k yearly est. 1d ago
  • Data Engineer

    Synectics for Management Decisions Inc. 3.8 company rating

    Requirements Engineer Job At Synectics for Management Decisions, Inc

    Job Description We are looking for an experienced Data Engineer to design and implement scalable data solutions that support advanced analytics for federal agency projects. This role offers remote flexibility but requires occasional onsite meetings in Washington, D.C. Key Responsibilities: Design and implement robust data models for deployment in a data lake architecture. Build and maintain data access services and data pipelines across both domain-specific data stores and a universal data hub. Develop reusable data products to support advanced analytics and AI/ML use cases. Validate data pipelines using analytic models to ensure quality, accuracy, and performance. Collaborate closely with stakeholders, analysts, and engineering teams to deliver scalable, secure, and mission-aligned data solutions. Required Qualifications: Minimum 6 years of professional experience in data engineering. Proven success developing data models and pipelines in large-scale environments, including federal projects. Must be able to work independently and efficiently with the Databricks application. Strong understanding of data lakes, structured and unstructured data, and modern data architecture principles. Experience working on data projects for the government, preferably for the IRS. Must have an active MBI clearance. Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. Technical Environment You'll Work In: You'll be working in a modern hybrid infrastructure, blending cloud-native and legacy technologies. 
Our stack includes: Cloud Platforms & Databases: AWS RDS (Postgres), AWS Redshift, Databricks, MongoDB, DynamoDB, and AWS Aurora (in exploration) ETL & Data Integration: Databricks ETL, Informatica (on-prem), EFTU, Informatica Metadata (EDC, Axon) Governance & Access Control: Immuta Analytics & BI Tools: Advanced Analytics Platform (AAP) with AI/ML services, Tableau, Business Objects (BOE), and Power BI Integration Modeling & Development: IBM Rational Suite, IBM Data Architect Legacy Systems: JCL, COBOL for z/OS, DB2 Familiarity with these technologies is highly desirable. Work Flexibility: Remote work option available. Must be able to attend in-person meetings in Washington, D.C., as required. Synectics is an Equal Opportunity Employer. We offer a competitive salary and an impressive full benefits package that includes medical and dental, 401k w/company matching, company-paid life and short/long-term disability insurance, and paid leave. We also provide an environment that supports everyone's professional development and growth.
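One responsibility in this role is validating data pipelines to ensure quality and accuracy. A toy sketch of the row-level data-quality checks such validation might start from (the column names and rules are hypothetical):

```python
def validate_rows(rows, required=("id", "amount")):
    """Split pipeline output into valid rows and rejects with a reason."""
    valid, rejects = [], []
    for row in rows:
        missing = [col for col in required if row.get(col) in (None, "")]
        if missing:
            rejects.append((row, f"missing: {missing}"))
        elif row["amount"] < 0:
            rejects.append((row, "negative amount"))
        else:
            valid.append(row)
    return valid, rejects

# Illustrative pipeline output with one clean row and two defects.
valid, rejects = validate_rows([
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": -5.0},
    {"id": None, "amount": 3.0},
])
```

In the Databricks environment described above, equivalent checks would typically run as pipeline validation steps over DataFrames rather than Python dicts.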
    $75k-105k yearly est. 13d ago
  • Senior Data Engineer

    Zylo 4.1 company rating

    Denver, CO Jobs

    Zylo is the enterprise leader in SaaS Management, enabling companies to discover, manage, and optimize their SaaS applications. Zylo helps companies reduce costs and minimize risk by centralizing SaaS inventory, license, and renewal management. Trusted by industry leaders, Zylo's AI-powered platform provides unmatched visibility into SaaS usage and spend. Powered by the industry's most intelligent discovery engine, Zylo continuously uncovers hidden SaaS applications, giving companies greater control over their SaaS portfolio. With more than 30 million SaaS licenses and $34 billion in SaaS spend under management, Zylo delivers the deepest insights, backed by more data than any other provider. Overview Our Senior Data Engineer will be responsible for designing, implementing, and maintaining scalable data pipelines that efficiently collect, process, and store data for analysis. You will work closely with a data scientist, software engineers, analysts, and other stakeholders to ensure that the data infrastructure supports business intelligence, machine learning, and other data-driven solutions. What will you do Data Pipeline Development: Design, build, and maintain robust and scalable data pipelines to extract, transform, and load (ETL) data from various sources to data warehouses or data lakes. Data Integration: Integrate data from multiple internal and external sources into centralized storage systems while ensuring data quality and consistency. Database Management: Manage large datasets and databases, ensuring their security, performance, and scalability. Data Modeling: Create and optimize data models to support analytics, reporting, and machine learning workloads. Optimization and Performance: Continuously monitor and improve the performance, reliability, and efficiency of the data pipeline infrastructure. Data Quality: Implement measures to maintain data integrity, cleanliness, and consistency across all systems. 
Automation: Automate manual data tasks to streamline data workflows and reduce manual intervention. Documentation: Document data processes, pipeline configurations, and data flow designs for team collaboration and future reference. Requirements 5+ years of experience as a Data Engineer, Software Engineer with a data focus, or a similar role with a proven track record of designing and building data pipelines at scale. Strong experience in ETL processes, data modeling, and managing large datasets in cloud environments. Proficiency with AWS services (e.g., EC2, S3, Athena, Glue, SageMaker, Redshift) and a solid understanding of cloud data architecture. Expertise in Python and SQL, with a focus on developing scalable, maintainable code to support data transformation and processing tasks. Experience with orchestration tools (e.g., Fivetran, Apache Airflow) to automate and schedule ETL/ELT workflows. Solid understanding of data warehouses (Redshift, Snowflake, BigQuery) and data lakes (e.g., AWS S3) for large-scale data storage and retrieval. Experience with streaming data tools like Kafka and Apache Flink to handle real-time data flows. Experience with distributed data/computing tools such as Hadoop, Hive, or Spark. Familiarity with data governance principles, including data retention, RBAC, and security best practices. Soft Skills: Strong problem-solving skills with an eye for detail and data-driven decision-making. Excellent communication and collaboration skills, with the ability to work seamlessly across teams (Data Scientists, Engineers, Analysts). Self-motivated with the ability to prioritize and manage multiple tasks in a fast-paced, ever-changing environment. Comfortable with both independent work and contributing as part of a cross-functional team. 
Nice to have Exposure to SaaS Management or Software Asset Management environments Data visualization experience (e.g., Tableau, PowerBI, Looker) to communicate complex data insights Familiarity with machine learning frameworks like TensorFlow, Keras, or PyTorch. Familiarity with Azure or other cloud platforms What it's like to work with us At Zylo, we're committed to Growing Stronger Together by fostering a diverse and inclusive workplace. We believe that a variety of perspectives not only fuels innovation, but also allows us to better serve our diverse customer base. If you meet the essential qualifications, we encourage you to apply and join us on this journey. Still growing in your career? Connect with our talent community-we're always looking for future Zylos who share our passion for continuous learning.
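The extract-transform-load shape described above can be sketched end to end with in-memory stand-ins for the source and warehouse (all names and data here are illustrative):

```python
def extract():
    # Stand-in for reading from a source system (e.g., an API or an S3 object).
    return [{"app": "CRM", "spend": "1200"}, {"app": "Chat", "spend": "300"}]

def transform(records):
    # Normalize types so downstream analytics can aggregate spend.
    return [{"app": r["app"], "spend": float(r["spend"])} for r in records]

def load(records, sink):
    # Stand-in for writing to a warehouse table; returns the row count loaded.
    sink.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

A production version of the same shape would swap the stand-ins for connectors (e.g., Airflow tasks writing to Redshift or Snowflake), but the E-T-L separation stays the same.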
    $77k-108k yearly est. 7d ago

Learn More About Synectics for Management Decisions, Inc Jobs
