While at Interop, I participated in a Tech Field Day event where Spirent discussed its new Axon product, as well as the possible use of Blitz.io. It was an interesting discussion, and it gave me some food for thought. As we move to cloud-scale apps built on platforms such as Pivotal (EMC World was just down the street), OpenShift, and others, we need a way to test those applications at scale. Spirent and Ixia provide these tools, but would they be used in this new model?

The new model of continuous integration and continuous deployment includes many of the testing stages you find within the standard SDLC, but these stages are often ignored as businesses push to get product out faster and faster. Since code cannot yet be written any faster, things get trimmed, and testing at scale is done in production (perhaps for a subset of systems or users, which is actually part of many continuous integration (CI) and continuous deployment (CD) strategies). However, for some customers, one mistake in code tested this way could mean hours, if not days, of repair work (which also implies that proper CI/CD strategies were most likely not employed).
For those who need cloud-scale testing and do not want to test in production, perhaps because the code is still in development and there are some doubts, tools from Spirent step in. Spirent Axon is a simple drop-in appliance that runs a version of KVM and several virtual workloads to generate load for various web-based applications. If you need something a bit larger, then perhaps you want to look at Blitz.io or other products from Spirent or Ixia. Or, if you would rather build your own and place virtual appliances within your network instead of using a drop-in physical appliance, both companies offer virtual form factors of their products.
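To make "generating load" concrete, here is a minimal sketch of the core idea behind any load-generation rig: many concurrent workers issuing requests against a target while recording per-request latency. This is purely illustrative and is not Spirent's or Blitz.io's implementation; the `request_fn` callable is a placeholder you would point at the application under test.

```python
# Minimal load-generation sketch (illustrative only, not a vendor's implementation):
# several threads repeatedly invoke a request function and record how long
# each invocation takes, returning the full list of latencies for analysis.
import time
from concurrent.futures import ThreadPoolExecutor


def generate_load(request_fn, workers=10, requests_per_worker=100):
    """Run request_fn repeatedly from `workers` threads; return latencies in seconds."""
    def worker(_):
        latencies = []
        for _ in range(requests_per_worker):
            start = time.perf_counter()
            request_fn()  # in a real rig, this would be one HTTP call to the app under test
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=workers) as pool:
        batches = pool.map(worker, range(workers))
    # Flatten per-worker latency lists into one list for percentile analysis.
    return [lat for batch in batches for lat in batch]
```

In use, `request_fn` might wrap `urllib.request.urlopen("http://app-under-test/")`; the returned latencies can then be sorted to report p50/p95/p99, which is the kind of result a commercial rig would summarize for you.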
However, it is not just about using the testing products; it is also about understanding what you are testing and how to report your tests. At the TFD event, only the results of the test were presented, not the details of how the environment was set up. If that is not known, how can someone duplicate your tests? Even more, how can the results be verified as realistic rather than tuned to favor the technology? Was the test representative of real-world scenarios or something entirely different? If this information is not reported, one must assume the test was staged rather than realistic. People want the details if they are to duplicate the tests, so it is best to provide them. They also want to know the overhead of running test rigs within a virtualized environment. How many times does your data hairpin out of the virtual or cloud environment and back in? In many ways that is reality, but it can also point to poor designs.
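One simple way to make results reproducible is to capture the environment alongside the numbers every time a test runs. The sketch below shows one possible approach; the field names and the `topology_notes` string are illustrative assumptions, not a standard schema used by any of the vendors mentioned.

```python
# Sketch of recording the test environment next to the results so that others
# can attempt to duplicate the run. Field names here are illustrative only.
import datetime
import json
import os
import platform


def record_run(results, topology_notes, outfile="run.json"):
    """Write a JSON report combining results with environment details."""
    report = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "generator_host": {
            # Captures where the load generator itself ran, since rig
            # placement (inside or outside the virtual environment) matters.
            "os": platform.platform(),
            "python": platform.python_version(),
            "cpu_count": os.cpu_count(),
        },
        # Free-form description of the test path, e.g. how many times traffic
        # hairpinned out of the virtual or cloud environment and back in.
        "topology": topology_notes,
        "results": results,
    }
    with open(outfile, "w") as f:
        json.dump(report, f, indent=2)
    return report
```

Even a lightweight report like this answers the questions raised above: what ran where, over what path, and when, so a reader can judge whether the numbers reflect a realistic scenario.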
There are other testing tools out there that target specific actions. One is LoginVSI, which could also be used to measure user experience against your cloud-scale application. You can optimize the internals, but if the user experience is slow or not up to par, then you face another problem altogether.
If you need a local test rig, Spirent Axon may fit the bill; it is simple and easy to use. If you need something larger, then Blitz.io, Spirent, or Ixia test rigs may be helpful. If you want to look at user experience, then perhaps LoginVSI will assist, or some combination of the above. The key is to test your code before making it live, ideally during actual development, so the code can be optimized before it ever reaches production.
by Edward Haletky