I have started the year 2011 by looking at some of the different monitoring solutions available to give us insight into the health and welfare of the systems that we support. In a typical monitoring solution, you install the monitoring server in your environment and either let the system discover all the devices in your infrastructure or, to control license counts, manually enter the devices that you want to monitor. Some of these monitoring solutions require a beefy box to begin with, and all of them need a great deal of “tweaking” to control the number of false positives, as well as time invested to be able to report on exactly what we care to be alerted about.
You have heard the buzzwords and drunk the Kool-Aid, and now you want to move to the cloud. How do you do this? This was a fairly interesting question on yesterday’s VMware Communities Podcast, when the vCloud team showed up to talk about the current reference architecture. Yet almost all the questions were about going to the cloud, not about the architecture. Does this mean people do not understand what is required to go to the cloud? I think so. The goal of this article is to take a few elements from the podcast and put them in writing: the simple steps to move to the cloud.
Wanova today announced general availability of Mirage 2.0, the newest release of its distributed desktop virtualization platform. Mirage 2.0 is a significant milestone for Wanova, extending the platform from a limited-scalability solution better suited to LAN-based deployments to a true enterprise-class platform capable of supporting multiple remote branch offices without requiring either high-capacity WAN links or WAN acceleration appliances.
Given the VNXe’s future expandability to include Fibre Channel cards, this storage looks very attractive to those SMBs who have previously invested in moving toward fibre. Making use of your existing infrastructure, whether fabric or Ethernet, would lower the cost of adoption for the low-end EMC product. The VNXe’s expandability is one of those items that makes it an attractive tool for other uses. What are those other uses with respect to security, DR, BC, and disaster avoidance?
If synthetic transactions are dead as an approach for determining availability and performance from the perspective of an application’s end users, then something has to take their place. The two candidates are approaches that analyze data on the IP network, and client-side agents. Both will likely rise in prominence as applications become more dynamic.
Chad Sakac mentions on his blog that VNXe “uses a completely homegrown EMC innovation (C4LX and CSX) to virtualize, encapsulate whole kernels and other multiple high performance storage services into a tight, integrated package.” This has gotten me thinking about other uses for VNXe. If EMC could manage to “refactor” or encapsulate a few more technologies, I think we would have the makings of a killer virtualization security appliance. Why would a storage appliance spur thinking about virtualization security?
When considering a virtual desktop design, a good architect needs to ask, “What is the best solution for this environment?” For many, once you have considered the needs of your users, the answer is a combination of desktop delivery models: some virtual, some physical. Ideally, the user is unaware of which model is being delivered to them; they simply consume the service on an appropriate device, at an appropriate time. RingCube is perhaps first to market with this type of solution, with its Workspace Virtualization Engine.
Monitoring the performance of the infrastructure, applications, and services in IT as a Service environments will require that monitoring solutions become multi-tenant, that they can be instantiated by ITaaS management tools without any further configuration, and that they automatically “find” their back-end management systems through whatever firewalls may be in place. These requirements will probably be the straw that breaks the camel’s back for the heavyweight, complex legacy tools that were in place prior to the onset of virtualization, the public cloud, and now IT as a Service. ITaaS is the tipping point that should cause most enterprises to ignore every monitoring tool that they have bought in the past and to start over with a clean sheet of paper.
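To make two of those requirements concrete, here is a minimal sketch, in Python, of what multi-tenant metric collection and firewall-friendly back-end discovery might look like. Everything here is hypothetical illustration (the class, the `backend_url`, the method names are my own inventions, not any vendor’s API): the key ideas are simply that every sample carries a tenant tag, and that the agent dials *out* to its back end rather than waiting to be discovered, so no inbound firewall ports are needed.

```python
from dataclasses import dataclass, field

@dataclass
class TenantScopedCollector:
    """Hypothetical collector: metrics are keyed by tenant so one back end
    can serve many customers without mixing their data."""
    # The agent dials out to this URL (outbound-only, firewall-friendly);
    # it is an illustrative placeholder, not a real endpoint.
    backend_url: str
    metrics: dict = field(default_factory=dict)

    def record(self, tenant_id: str, metric: str, value: float) -> None:
        # Every sample carries its tenant tag -- the core of multi-tenancy.
        self.metrics.setdefault(tenant_id, {})[metric] = value

    def view(self, tenant_id: str) -> dict:
        # A tenant can see only its own data, never another tenant's.
        return dict(self.metrics.get(tenant_id, {}))

collector = TenantScopedCollector(backend_url="https://monitor.example.com/ingest")
collector.record("tenant-a", "cpu_pct", 42.0)
collector.record("tenant-b", "cpu_pct", 7.5)
print(collector.view("tenant-a"))  # tenant-a sees only its own metrics
```

A real ITaaS-ready tool would of course add authentication, per-tenant retention, and zero-config provisioning on top, but the shape is the same: tenant isolation at the data model, and outbound registration at the network layer.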