Does an evaluation for a virtualisation project need to be only an exercise in understanding whether X hosts will run on Y servers? Will you be able to virtualize every service you deliver? Are new applications required? What are your existing service levels and requirements across your application portfolio? In most enterprises today, IT is a cost centre, not a profit centre. Business units often want detailed involvement in implementation plans, asset purchases, and ownership: it is not unusual for requests for applications to arrive in terms of functionality, not in terms of service levels. With the release of Workspace iQ, Centrix Software appears to be unique in endeavouring to aggregate information that can help provide IT with improved costing data without relying on specific vendors' solutions being in place.
Both Infrastructure Performance Management and Applications Performance Management vendors targeting the virtualization and cloud markets have realized that new and unique data is needed in order to manage the performance of these new environments and the applications that run on them. This is a dramatic departure from the old physical world, where most vendors simply relied upon the data provided via standard OS APIs to infer systems and applications performance.
I participated in GestaltIT’s TechFieldDay which is a sort of inverse conference, where the bloggers and independent analysts go to the vendors and then discuss the information they have received. We visited the following virtualization vendors:
* vKernel where we were introduced to their Predictive Capacity Planning tools
* EMC where we discussed integration of storage into the virtualization management tools as well as other hypervisor integrations
* Cisco where CVN and CVE were discussed in detail.
Enterprises that are going to support business-critical and performance-critical applications on a virtual infrastructure should, at a minimum, address two needs. The first is to get a true and complete picture of infrastructure performance based upon Infrastructure Response Time. The second is to put in place the tools required to monitor these applications in production.
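As a minimal sketch of the first need, assume Infrastructure Response Time is measured as the latency the infrastructure adds to each request (for example, the storage I/O latency a VM observes). The VM names, latency samples, and the 20 ms threshold below are all hypothetical, purely for illustration:

```python
# Sketch: flag VMs whose infrastructure response time (modeled here as
# storage I/O latency in milliseconds) breaches a 95th-percentile threshold.
# All VM names, samples, and the 20 ms threshold are hypothetical.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def breaching_vms(latency_by_vm, threshold_ms, pct=95):
    """Return the VMs whose pct-th percentile latency exceeds threshold_ms."""
    return [vm for vm, samples in latency_by_vm.items()
            if percentile(samples, pct) > threshold_ms]

latencies = {
    "db-vm":  [4, 5, 6, 30, 45, 50, 48, 5, 6, 7],  # spiky: likely contention
    "web-vm": [2, 3, 2, 4, 3, 2, 3, 4, 2, 3],      # consistently low
}
print(breaching_vms(latencies, threshold_ms=20))
```

A percentile rather than an average is the point here: a VM can show a perfectly ordinary mean latency while its worst requests, the ones users notice, are an order of magnitude slower.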
I participated in GestaltIT’s TechFieldDay which is a sort of inverse conference, where the bloggers and independent analysts go to the vendors and then discuss the information they have received. We visited the following storage vendors:
* Data Robotics where we were introduced to the new Drobo FS
* EMC where we discussed stretched storage and other interesting futures
* HP where we were introduced to the IBRIX products
One thing I have learned in my time working in IT is that no software product will do everything you want it to do out of the box. This especially goes for VMware's vCenter Server. It is a great product, yet it still has its shortcomings. vCenter will perform many of the tasks we need and can report on the information we need to know about our virtual environments, but unfortunately not everything we need can be easily gathered in bulk across multiple servers.
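The kind of roll-up meant here can be sketched in a few lines. In practice the per-VM records would come from the vSphere API or a PowerCLI script; the inventory below is entirely hypothetical sample data, used only to show the bulk aggregation step:

```python
# Sketch: aggregate per-VM records into a per-host bulk report -- the sort of
# roll-up across many servers that is awkward to get out of vCenter directly.
# The inventory records and host/VM names are hypothetical.
from collections import defaultdict

def report_by_host(vms):
    """Group VM count and total configured memory (GB) per host."""
    summary = defaultdict(lambda: {"vm_count": 0, "mem_gb": 0})
    for vm in vms:
        entry = summary[vm["host"]]
        entry["vm_count"] += 1
        entry["mem_gb"] += vm["mem_gb"]
    return dict(summary)

inventory = [
    {"name": "app01", "host": "esx01", "mem_gb": 4},
    {"name": "app02", "host": "esx01", "mem_gb": 8},
    {"name": "db01",  "host": "esx02", "mem_gb": 16},
]
print(report_by_host(inventory))
```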
Since coming out with VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment, I have continued to consider aspects of Digital Forensics and how current methodologies would be impacted by the cloud. My use case for this is 40,000 VMs with 512 Servers and roughly 1000 tenants. What I would consider a medium size fully functioning cloud built upon virtualization technology where the environment is agile. The cloud would furthermore contain roughly 64TBs of disk across multiple storage technologies and 48TBs of memory. Now if you do not think this exists today, you were not at VMworld 2009, where such a monster was the datacenter for the entire show and existed just as you came down the escalators to the keynote session.
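To put that scale in perspective, the averages implied by those figures work out as follows (simple arithmetic on the numbers above, nothing more):

```python
# Back-of-the-envelope densities for the cloud described above:
# 40,000 VMs on 512 servers, 64 TB of disk, 48 TB of memory.
vms, servers = 40_000, 512
disk_tb, mem_tb = 64, 48

vms_per_server = vms / servers            # average consolidation ratio
disk_gb_per_vm = disk_tb * 1024 / vms     # average disk footprint per VM
mem_gb_per_vm = mem_tb * 1024 / vms       # average memory per VM

print(f"{vms_per_server:.1f} VMs/server, "
      f"{disk_gb_per_vm:.2f} GB disk/VM, {mem_gb_per_vm:.2f} GB memory/VM")
```

These are averages only; a real forensic exercise would care about the distribution across tenants and storage technologies, not the mean.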
Infrastructure Performance Management is the single most important performance and capacity management issue that owners of a virtual environment need to address. The reason is that now that the low-hanging fruit has been virtualized, what remains is business-critical and performance-critical applications in the hands of application owners and their business constituents. To convince these groups that the virtual infrastructure is performing acceptably in support of these important applications, operations groups in charge of virtual environments need to move beyond trying to infer infrastructure performance from resource utilization patterns.
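One way to see why utilization is a poor proxy for performance: response time grows nonlinearly with utilization. As an illustrative model only (a textbook M/M/1 queue, not anything a specific vendor uses), mean response time is R = S / (1 - U) for service time S and utilization U:

```python
# Illustrative M/M/1 queueing model: mean response time R = S / (1 - U).
# Shows why a utilization figure alone says little about what applications
# experience. The 5 ms service time and the utilizations are hypothetical.

def response_time(service_ms, utilization):
    """Mean response time of an M/M/1 queue with the given utilization."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_ms / (1 - utilization)

for u in (0.5, 0.8, 0.95):
    print(f"U={u:.0%}: R={response_time(5, u):.1f} ms")
```

Under this model, moving from 80% to 95% utilization quadruples response time even though the utilization chart still looks "green" -- which is exactly why infrastructure response time, not utilization, is the metric to watch.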