Dream job? Let's make it happen

Platform Engineering - III [T500-20076]

Global-Talent-Exchange

Location: Hyderabad
Experience: 8 - 15 Years
Employment type: Full time

About the Role:

The Data Platform Observability Lead is responsible for establishing and advancing observability, monitoring, and reliability practices across the Enterprise Data & Analytics (EDAA) Platforms landscape. This role ensures end-to-end visibility into platform performance, data pipeline health, system availability, and operational SLAs.

Responsibilities:

  • Design and implement comprehensive observability frameworks.
  • Define and track key SLIs, SLOs, and KPIs.
  • Lead the integration of monitoring, logging, tracing, and alerting tools.
  • Collaborate with platform engineering, SRE, and product teams.
  • Drive the adoption of best practices in telemetry collection and visualization.
  • Oversee incident management processes and post-mortem practices.
  • Provide leadership in tool evaluation and deployment.
  • Partner with security, compliance, and data governance teams.
  • Lead operational reviews and reporting.
  • Mentor and coach engineers and analysts.
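As an illustration of the SLI/SLO work in the responsibilities above, a minimal sketch of an availability SLI and its error-budget calculation might look like the following. The function names, event counts, and the 99.9% target are illustrative assumptions, not part of the role description.

```python
# Hypothetical sketch: track an availability SLI against an SLO target
# and compute the remaining error budget (all names are illustrative).

def availability_sli(good_events: int, total_events: int) -> float:
    """SLI: fraction of successful events in the measurement window."""
    return good_events / total_events if total_events else 1.0

def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Fraction of the error budget still unspent (1.0 = untouched, < 0 = SLO breached)."""
    allowed_failure = 1.0 - slo_target
    actual_failure = 1.0 - sli
    return 1.0 - (actual_failure / allowed_failure) if allowed_failure else 0.0

sli = availability_sli(good_events=999_500, total_events=1_000_000)  # 0.9995
budget = error_budget_remaining(sli, slo_target=0.999)               # ~0.5 of budget left
```

In practice the event counts would come from the monitoring stack (metrics or logs) rather than literals, and the budget figure would feed alerting and release decisions.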

Qualifications:

  • 8+ years of experience in platform engineering, SRE, DevOps, or observability roles.
  • Proficient with ETL pipelines.
  • Strong expertise with observability tools and platforms.
  • Deep understanding of monitoring distributed data platforms and cloud-native architectures.
  • Experience setting and managing SLIs/SLOs/SLAs.
  • Strong background in incident response and performance tuning.
  • Proven ability to lead cross-functional collaboration.
  • Bachelor's degree in computer science, engineering, or related field; advanced degree or certifications preferred.

Work Location:

Hyderabad, India

Work Pattern:

  • Full-time role
  • Hybrid work mode

Skills: Platform Engineering, DevOps, + 31 more
Data Engineer

Global-Talent-Exchange

Location: NA
Experience: 5 - 8 Years
Employment type: Full time

Job Summary

We are seeking a skilled and detail-oriented Data Engineer with deep expertise in Azure, SQL Server, and Databricks to design, build, and manage scalable data pipelines and enterprise data solutions. This role will be critical in supporting our analytics, reporting, and data science initiatives by delivering high-quality, reliable, and performant data systems.

Key Responsibilities

  • Design, develop, and manage ETL/ELT pipelines using Azure Data Factory (ADF) and Databricks for batch and real-time data processing.
  • Integrate data from various structured and unstructured sources including SQL Server, Azure SQL Database, Azure Data Lake Storage (ADLS), and external APIs.
  • Build and maintain data models, data marts, and data warehouses using SQL Server and Azure Synapse Analytics.
  • Write efficient and optimized SQL queries, stored procedures, views, and triggers in SQL Server.
  • Use Databricks (Spark with Python/Scala) to process large datasets for transformation and analytics.
  • Ensure data quality, integrity, security, and compliance across the pipeline using data validation, monitoring, and auditing techniques.
  • Collaborate with data analysts, data scientists, and business stakeholders to define and deliver data solutions.
  • Implement CI/CD pipelines using Azure DevOps.
  • Monitor and optimize the performance of data pipelines and queries across Azure and SQL Server environments.
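The data-quality responsibility above can be sketched as a simple row-level validation pass run before loading into the warehouse. The column names (`id`, `amount`) and rules are illustrative assumptions, not taken from the posting.

```python
# Hypothetical sketch of a row-level data-quality gate for an ETL pipeline:
# rows failing basic integrity rules are quarantined instead of loaded.

def validate_rows(rows):
    """Split rows into (valid, rejected) based on simple integrity rules."""
    valid, rejected = [], []
    for row in rows:
        if row.get("id") is not None and row.get("amount", -1) >= 0:
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

rows = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},   # missing key -> rejected
    {"id": 2, "amount": -3.0},     # negative amount -> rejected
]
valid, rejected = validate_rows(rows)  # 1 valid, 2 rejected
```

In a Databricks or ADF pipeline the same idea would typically be expressed as Spark DataFrame filters or built-in data-quality checks, with rejected rows routed to an audit table.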

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field.
  • 5+ years of experience in data engineering or related roles.
  • Proven experience with Azure Data Services: Data Factory, ADLS, Azure SQL, Azure Synapse.
  • Experience with Databricks (Azure implementation), including experience with Spark (Python or Scala).
  • Microsoft SQL Server: Writing advanced SQL, stored procedures, and performance tuning.
  • Strong understanding of data warehousing concepts, ETL/ELT best practices, and data modelling (star/snowflake schema).
  • Familiarity with data governance, RBAC, and data security practices in Azure.
  • Experience with CI/CD tools like Azure DevOps and version control with Git.
  • Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.

Skills: Azure Data Factory, Databricks, + 25 more

Cyber Security Specialist

Global-Talent-Exchange

Location: Bangalore Rural
Experience: 5 - 8 Years
Employment type: Full time

Responsibilities:

  • Investigate, document, and report on information security issues and emerging threats.
  • Provide Incident Response (IR) support when analysis confirms an actionable incident.
  • Isolate affected systems, collect and analyze triage/logs, contain the incident, and provide remediation strategy.
  • Gather information from various threat intel sources and initiate remediation steps to neutralize risks.
  • Monitor and analyze logs and alerts from different technologies across multiple platforms to identify and triage security incidents.
  • Perform threat hunting and support incidents escalated from SOC.
  • Define and document playbooks, standard operating procedures, and processes.
  • Document results of cyber threat analysis and prepare comprehensive hand-off or escalation for the Incident Response process.
  • Utilize security tools and technologies to analyze potential threats to determine impact, scope, and recovery.
  • Collaborate with internal and external stakeholders.
  • Conduct detailed analysis of security-related events like Phishing, Malware, DoS/DDoS, Application-specific Attacks, Ransomware, etc.
  • Communicate with key business units for recommendations on mitigation and prevention techniques.
  • Research and explore the enrichment and correlation of existing data sets for deep threat analysis.
  • Contribute to special projects by providing expertise, guidance, and leadership.

Qualifications:

  • Technical know-how of the organization’s applications, systems, networks, and infrastructure.
  • Deep understanding of technologies and architecture in a highly scalable enterprise network.
  • Proficiency with logging mechanisms of Windows, Linux, and macOS platforms.
  • Proficiency with EDR, Anti-Virus, HIPS, NIDS/NIPS, Full Packet Capture, Network-Based Forensics, and Encryption.
  • Advanced certifications such as SANS GIAC / GCIA / GCIH, CISSP or CASP, and/or IR-specific training and certification are an added advantage.
  • At least 5 years of experience as a lead investigator and 2.5 years as a lead analyst in Incident Response.
  • Expertise in IRP (Incident Response Playbook) creation and execution.
  • Good communication skills to coordinate among various stakeholders.

Skills: Incident Response, Threat Hunting, + 28 more

Python + API Developer

Global-Talent-Exchange

Location: Bangalore Rural
Experience: 5 - 8 Years
Employment type: Full time

Responsibilities:

  • Develop and maintain applications using Python and web frameworks such as Django or Flask.
  • Design and implement APIs and web services.
  • Integrate data from various back-end services and databases.
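A framework-agnostic sketch of the API work described above might look like the following. The route shape, payload, and in-memory `db` are illustrative assumptions; in this role the handler would be wired into Django or Flask routing as noted in the requirements.

```python
# Hypothetical sketch of a JSON API handler for GET /users/<id>.
# The payload shape and in-memory "db" are illustrative.
import json

def get_user_handler(user_id: int, db: dict) -> tuple:
    """Return (status_code, json_body) for a user lookup."""
    user = db.get(user_id)
    if user is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps({"id": user_id, "name": user["name"]})

db = {1: {"name": "Ada"}}
status, body = get_user_handler(1, db)   # status == 200
```

Keeping the handler a pure function of its inputs makes it easy to unit-test before binding it to a framework route.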

Requirements:

  • Proficiency in Python.
  • Experience with at least one web framework (e.g., Django, Flask).
  • Strong understanding of APIs, web services, and data integration concepts.
  • Hands-on experience with SQL, NoSQL, and Vector databases.
  • Familiarity with front-end technologies, such as JavaScript and React.

Qualifications:

  • 5+ years of experience in software development.

Location:

Bangalore, India

Skills: Python, Django, + 15 more

Cloud Architect

Global-Talent-Exchange

Location: NA
Experience: 5 - 8 Years
Employment type: Full time

Responsibilities

  • Design, develop, implement, and improve cloud environments in AWS/Azure.
  • Perform engineering design evaluations for new environment builds.
  • Architect, implement, and improve automations for cloud environments.
  • Recommend alterations to improve quality of products and procedures.
  • Implement industry-standard security practices and maintain them.
  • Advise and engage with customer executives on cloud strategy and improvements.
  • Create business cases for transformation and modernization.
  • Analyze processes to identify technology-driven improvements.
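One small example of the automation and security-practice work above is an automated tag-policy check of the kind an architect might wire into CI before resources are provisioned. The required tag names are illustrative assumptions; in this role the equivalent guardrail would more likely live in Terraform validation or cloud policy tooling.

```python
# Hypothetical sketch: enforce a tagging standard across cloud resource
# definitions (tag names are illustrative).
REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def missing_tags(resource: dict) -> set:
    """Return the required tags a resource definition is missing."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

vm = {"name": "app-vm-01", "tags": {"owner": "platform", "environment": "prod"}}
missing_tags(vm)  # {'cost-center'}
```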

Requirements

  • Strong hands-on experience in Azure/AWS Cloud Infrastructure.
  • Excellent understanding of Azure/AWS services and components.
  • Strong Terraform scripting skills.
  • Experience in creating CI/CD pipelines using GitLab.
  • Proficiency in provisioning containers in Azure/AWS.
  • Good knowledge of software configuration management systems.
  • Strong business acumen and strategy skills.
  • Awareness of latest technologies and industry trends.
  • Proven experience in assessing clients' workloads for cloud suitability.
  • Ability to define new architectures and drive projects from an architectural standpoint.
  • Excellent verbal, written, and presentation skills.

Skills: Azure, AWS, + 18 more
