The holidays are looming, meaning many DevOps teams are about to have their apps take a beating as hundreds of holiday orders and new device users slam them all at the same time. Whether or not your systems are consumer-focused, there will eventually come a time when the overall load on your servers is pushed to the limit.
Load testing is a method, usually carried out with purpose-built software, of increasing the user count, transaction rate, and other load factors in order to monitor system performance as the load scales. The tool produces a report covering response times, CPU/RAM usage, bandwidth consumption, and other potential bottlenecks. The idea is to simulate both normal and peak production conditions in order to evaluate overall performance.
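The core mechanic described above can be sketched in a few lines of Python. This is a minimal illustration, not a production tool: `run_load_test` and its parameters are hypothetical names, and `request_fn` stands in for whatever operation you want to exercise (for example, an HTTP GET against your app).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, users=10, requests_per_user=5):
    """Drive request_fn concurrently from `users` workers and record latencies."""
    def worker(_):
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()  # the operation under test, e.g. an HTTP request
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=users) as pool:
        results = pool.map(worker, range(users))
    # Flatten per-worker latency lists into one sample set for reporting
    return [latency for batch in results for latency in batch]
```

A real load generator layers ramp-up schedules, think times, and richer metrics (CPU, RAM, bandwidth) on top of this basic loop, but the principle is the same: apply concurrent load, measure, and report.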
Load testing applications in the cloud allows development and testing staff to perform scale testing to see at what point virtual machines need to scale, when to add additional resources like storage or bandwidth, and when a failover solution might be necessary.
By performing load tests thoroughly throughout the DevOps process, your organization lowers costs over time and your team doesn’t have to scramble during a major event. Here are some best practices for cloud-based load testing.
Some load-generating tools work across a variety of cloud platforms, but you’ll want to check compatibility with your architecture before buying. Make sure to evaluate features like bandwidth simulation, since your users will not have the same fiber connection you enjoy from your data center or cloud provider. Upload and download speeds must be accurately represented for you to understand your true load limits.
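To make the bandwidth-simulation idea concrete, here is a rough sketch of how a slow client connection can be approximated by pacing reads: each chunk is delayed until the target byte rate allows it. This is a simplification for illustration only; real tools use proper traffic shaping rather than sleep-based pacing.

```python
import time

def throttled_read(stream, rate_bytes_per_sec, chunk_size=4096):
    """Yield chunks from `stream` no faster than the target byte rate,
    roughly simulating a bandwidth-limited client (a sketch, not a shaper)."""
    per_chunk = chunk_size / rate_bytes_per_sec  # seconds each chunk "should" take
    while True:
        start = time.perf_counter()
        chunk = stream.read(chunk_size)
        if not chunk:
            return
        yield chunk
        elapsed = time.perf_counter() - start
        if elapsed < per_chunk:
            time.sleep(per_chunk - elapsed)
```

Running simulated users through a pacer like this at DSL or mobile speeds, rather than at data-center speeds, gives a far more honest picture of real-world response times.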
Your chosen tool should also include analytics, custom reports, and scheduling features. This helps avoid overworking production systems while testing them, and it helps simplify your workflow. You can schedule ongoing load tests and have them send you regular reports so you have a clear picture of how your applications are performing at various times and states, getting a jump on any performance issues before they begin to cause problems.
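If your tool exposes a command-line runner, scheduling can be as simple as a cron entry that runs off-peak and writes a report. The script name, flags, and paths below are hypothetical placeholders for whatever your tool actually provides:

```shell
# Hypothetical crontab entry: run a one-hour load test nightly at 2 a.m.,
# off-peak, and write a summary report for later review.
0 2 * * * /opt/loadtest/run-load-test.sh --duration 1h --report /var/log/loadtest/nightly.html
```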
When application performance degrades, there are often several possible causes, and it can be hard to pinpoint the right one. Your load testing tool should be able to test both inside and outside your firewall so you can better determine where those dropped packets are coming from.
Once again, it can be difficult to isolate a single problem when load testing. Once you have identified a likely root cause you can remedy it, but other factors inherent in cloud servers may keep you from definitively showing an improvement, since the variability of the connection to your cloud makes an exact day-to-day comparison of your environment difficult. Testing on-premises over a local connection can therefore help you measure more precisely.
This almost goes without saying, since the point of load testing is to simulate a real user load on your apps. But merely raising the load until failure does not create a realistic scenario, nor does using only a single type of device, browser, bandwidth speed, or operating system. Your load testing software should be able to vary the test, keeping the user load at a base level while cycling through a variety of configurations.
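One simple way to vary configurations is to assign each simulated user a client profile drawn from a weighted mix that approximates your real traffic. The profiles and weights below are made-up examples; in practice they would come from your analytics data.

```python
import random

# Hypothetical client mix; weights should approximate your real traffic share.
PROFILES = [
    ({"device": "desktop", "browser": "chrome", "bandwidth_kbps": 50_000}, 0.5),
    ({"device": "mobile",  "browser": "safari", "bandwidth_kbps": 5_000},  0.3),
    ({"device": "mobile",  "browser": "chrome", "bandwidth_kbps": 1_000},  0.2),
]

def pick_profile(rng=random):
    """Choose a client profile for the next simulated user, weighted by traffic share."""
    profiles, weights = zip(*PROFILES)
    return rng.choices(profiles, weights=weights, k=1)[0]
```

Sampling a fresh profile for every simulated user keeps the aggregate load steady while the device, browser, and bandwidth mix stays realistic.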
Because load testing is inherently imprecise, you should not rely on an overall average of your results alone. Rather, take the 80th–90th percentile as well as the average. That way you know how 8 out of 10 users will experience your application under load, which is a much easier target to tune for than the outliers.
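The percentile math is straightforward; here is a minimal nearest-rank implementation (the function name is just illustrative, and standard libraries offer equivalents):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample such that at least
    pct percent of all samples are less than or equal to it."""
    ranked = sorted(samples)
    rank = math.ceil(pct / 100 * len(ranked))  # 1-based nearest rank
    return ranked[max(rank - 1, 0)]
```

This also shows why the average alone misleads: nine responses of 100 ms plus one 10,000 ms outlier give a mean of 1,090 ms, yet the 90th percentile is still 100 ms, which is what 9 out of 10 users actually saw.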
Load testing in the cloud can be a complex operation, but using the right tools and planning ahead will help you find any performance bottlenecks before they cause problems in a production environment. As part of an agile methodology of development and administration, load testing should be performed regularly throughout the development and operations processes.