Deciding when to implement security and data protection practices, or to change existing ones, comes down to timing, knowledge, and scope. Choosing what to implement at any given time requires knowing what needs to be fixed now, as well as what the future could hold. To do this properly, you need to pay close attention to the threats within your industry, understand their impact, and evaluate them based on risk. Where we obtain such knowledge is always changing, yet the scope to which we apply it seems static, failing to keep pace with the times.
Many network virtualization products appear to be aimed at the top 10,000 customers worldwide, which accounts for both their price and their published product direction. While this is a limited, myopic view, many claim it is for the best, reasoning that network virtualization is only really needed by very large networks. The more I think about this approach, the more I believe it is incorrect. Let us be frank: most networking today, across organizations of many different sizes, is a hodgepodge of technologies designed to solve the same problem over and over, namely how to get data quickly from point A to point B with minimal disruption to service.
When the VCE coalition first formed in late 2009, its product, the Vblock, was the industry's first serious attempt at delivering converged IT systems. The first models were the Vblock 0, 1, and 2, addressing the small, medium, and large enterprise IT use cases. Over time, these evolved into the Vblock 300 and Vblock 700, relatively high-end computing options. On February 21, 2013, VCE announced the re-addition of smaller Vblock models, the Vblock 100 and Vblock 200, once again allowing the product line to cover the small and medium-sized opportunities in the market. It has been a bit over a month since VCE announced these changes to its product line, and with the products becoming generally available, let's look at some of the technical details, then use those details to draw some conclusions about these products.
The 3/7 Virtualization Security Podcast featured Andi Mann, VP of Strategic Solutions at CA Technologies, and a discussion of the RSA Conference. The conversation was lively; I invited Andi Mann because of a tweet chat about cloud security the previous day. Lately, I have had several serendipitous conversations on cloud security, from tweet chats, to face-to-face discussions with @Qthrul, to meeting @MrsYisWhy in person. Each conversation has touched on cloud or virtualization security in some form. Let me delve into them a bit more.
VDI is expensive and complicated; at least, it used to be. Cost is no longer the issue it once was, with data center hardware falling from over $1,000 per desktop a couple of years ago to a fraction of the cost of a budget PC today. Complexity, however, has been rising as multiple third-party components have been integrated into the mix to bring the price down. As cost falls, VDI becomes more attractive, especially to budget-conscious SMB customers; at the same time, as complexity has increased, the willingness and ability of these new customers to successfully deploy and maintain VDI has fallen. This has proven to be a boon for DaaS providers, who can abstract the complexity of VDI behind a simple-to-consume service and are consequently seeing a significant increase in traction.
It has been just over two years since the Cisco Unified Computing System (UCS) was announced and released to the world. I want to give my feedback on the progress of the platform and how it fits into the cloud computing space.
When Cisco announced its Unified Computing System a couple of years ago, its thinking was not just to design servers and get into the server hardware business; Cisco's goal was to become the heart of the data center itself. This was a big move for Cisco, considering that it had a very good working relationship and partnership with HP, at least until the announcement that Cisco was getting into the server business.