Today, there are more users and data outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security.
Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
The Performance QE team is responsible for the system-level performance of a wide range of Netskope products, exposing performance issues that do not manifest, or do not exist, at the component level. We are seeking a highly skilled and motivated Performance Test Engineer to join us. The ideal candidate will have deep expertise in performance testing methodologies, tools, and analysis to identify and resolve critical bottlenecks before they impact our customers.
Job Summary / What's in it for you
Your core mission is to guarantee the capacity, reliability, and low latency of our applications, maintaining high availability at massive scale. You will thrive here if you are dedicated to partnering with teams to ship high-quality software that is fast and scalable. You will approach QE by consistently balancing customer needs with a holistic view of the entire system architecture.
Key Responsibilities / What you will be doing
- Design and Strategy: Develop comprehensive performance test strategies and plans (including load, stress, volume, and endurance tests) by analyzing non-functional requirements and system architecture.
- Scripting and Execution: Design, develop, and maintain robust, reusable test scripts using industry-standard tools for various protocols (e.g., HTTP, Web Services, APIs, FTP/SFTP).
- Environment Management: Collaborate with DevOps and infrastructure teams to configure and maintain performance test environments that accurately replicate production conditions.
- Infrastructure Analysis: Monitor and analyze resource utilization (CPU, memory, network I/O) at the container (Docker) and orchestration (Kubernetes Pod/Node) level during load tests.
- Load Generation: Design and deploy containerized load generators (JMeter, k6, Locust containers) directly onto the Kubernetes cluster to simulate massive scale with minimal overhead.
- Capacity Planning: Provide data and insights to support Kubernetes Horizontal Pod Autoscaler (HPA) and cluster autoscaler configuration and validation.
- Monitoring and Analysis: Execute performance tests and conduct deep-dive analysis of results, leveraging monitoring tools to identify system bottlenecks in application code, database performance, network latency, and infrastructure (CPU, memory, I/O).
- Reporting and Collaboration: Document clear, concise reports summarizing test results, metrics, and actionable recommendations. Communicate findings to engineering and product stakeholders.
- Optimization: Work closely with software developers and architects to provide recommendations for performance tuning and code optimization.
- Process Improvement: Stay current with industry trends, tools, and best practices in performance engineering and contribute to the continuous improvement of the testing framework and processes.
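To give a flavor of the load-generation and analysis work described above: in practice you would use a tool such as JMeter, k6, or Locust, but the core loop can be sketched in plain Python. The snippet below is a minimal, hypothetical illustration; `fake_request` stands in for a real HTTP call against a system under test, and the user/request counts are arbitrary.

```python
"""Minimal sketch of a concurrent load generator with latency percentiles.

This is illustrative only; production load tests would use JMeter/k6/Locust.
"""
import concurrent.futures
import statistics
import time


def fake_request():
    # Placeholder for a real HTTP call (e.g. urllib.request.urlopen);
    # here we simulate ~1 ms of service time and return latency in ms.
    start = time.perf_counter()
    time.sleep(0.001)
    return (time.perf_counter() - start) * 1000


def run_load(virtual_users=8, requests_per_user=25):
    # Spawn concurrent "virtual users" and collect per-request latencies.
    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(fake_request)
                   for _ in range(virtual_users * requests_per_user)]
        latencies = [f.result() for f in concurrent.futures.as_completed(futures)]
    latencies.sort()
    # Report the percentiles a performance engineer would track.
    return {
        "count": len(latencies),
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
    }


if __name__ == "__main__":
    print(run_load())
```

In a Kubernetes setting, a script like this (or a k6/Locust equivalent) would be packaged into a container image and run as a Job or Deployment on the cluster, so load is generated close to the system under test with minimal overhead.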
Required Qualifications / What we are looking for
- Experience: 12+ years of experience in Performance Testing/Engineering roles with scalable, cloud-native, multi-tenant architectures.
- Tool Proficiency: Proven expertise with at least one major performance testing tool (e.g., JMeter, Locust, LoadRunner, Gatling, k6, BlazeMeter).
- Scripting/Coding: Strong proficiency in at least one scripting or programming language (e.g., Java, Python, Golang, JavaScript) for test script development and customization.
- Containerization: Hands-on experience with Docker for containerizing applications and understanding how container resource limits affect application performance.
- Orchestration: Experience working with Kubernetes (K8s), including familiarity with concepts like Pods, Deployments, Services, Ingress, and the ability to view/analyze logs and metrics from a cluster.
- System Knowledge: Solid understanding of system architecture components, including web servers, application servers, databases, and load balancers.
- Monitoring/AI Tools: Experience with modern APM/Monitoring tools that incorporate AI/ML capabilities (e.g., Dynatrace, New Relic, Prometheus, Grafana, DataDog, or custom AI solutions) for in-depth analysis.
- Database: Working knowledge of SQL/NoSQL databases and the ability to analyze and optimize database query performance.
- Data Analysis: Strong analytical skills with the ability to interpret large datasets and AI-generated outputs (e.g., log traces, metrics) to pinpoint root causes.
- Foundational Knowledge (Preferred): Basic understanding of Machine Learning concepts (e.g., supervised learning, predictive modeling) and their application in performance analysis is a significant advantage.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and testing applications deployed in a cloud-native environment.
- Familiarity with Continuous Integration/Continuous Delivery (CI/CD) pipelines and integrating performance tests into the release process.
- Experience with service virtualization or mocking tools.
- Knowledge of network protocols and L2/L3 networking concepts, routing protocols, VPN/IPsec, and WAN architecture and troubleshooting.
- Translate machine-generated data and patterns (from AI/ML models) into clear, actionable performance tuning recommendations for the development and architecture teams.
- Explore and integrate AI tools for automating repetitive tasks, such as test script maintenance (self-healing), realistic test data generation, and intelligent workload pattern creation.
- Stay current with emerging AI/ML applications in performance testing, and champion the adoption of new tools and methodologies to drive efficiency and accuracy.
#LI-VJ2
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate.
Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.