I spent two days at PuppetConf 2013 in San Francisco this week, and the common themes were to automate everything, monitor everything, provide feedback early in the process, and focus on culture. All four of those themes align with the DevOps movement and its goal of faster, more reliable deliveries. Companies that can deliver software more frequently with fewer issues have a competitive advantage over those that can't.
I was fortunate to have the opportunity to attend PuppetConf 2013. When I walked into the first keynote session, I was shocked by the size of the audience: over 1,300 people were packed into the ballroom, and another 3,700 had signed up to watch the event streaming online. Last year there were 800 people at the conference, and only 300 the year before. Obviously, both Puppet and DevOps are hot topics these days.
VMworld 2013 is upon us, and one of our tasks is to figure out which vendors' booths to go see. With over 230 booths to choose from, this is a daunting task. If you are interested in finding creative new solutions to your management, monitoring, deployment, security, data protection, and desktop management problems, this list will help you.
The old way of delivering software was to bundle it up and ship it, sell it off the shelf, or let customers download and install it. In this "shipping model," it was the buyer's responsibility to install the software, manage uptime, patch, monitor, and manage capacity. The buyer might perform all of those tasks in-house or hire a third party to handle them. In either case, the buyer had total control over if and when the software was updated, and when a planned outage would occur to perform patches or upgrades.
What is the total cost of ownership (TCO) of the cloud? When we think of the cloud, we think of using applications in the cloud such as Salesforce, Box.net, and others. We may even consider using a security-as-a-service tool such as Zscaler. In some cases we also think of placing our own workloads in the cloud using Amazon and other providers. The real question is the TCO of the cloud not now, but long term.
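To make that "not now, but long term" question concrete, here is a minimal back-of-the-envelope sketch in Python. All of the figures and the model (on-premises as upfront capital plus yearly operations, cloud as a flat yearly subscription) are hypothetical and purely illustrative, not taken from any vendor's pricing.

```python
# Illustrative TCO comparison with made-up numbers.
# Assumes: on-premises = upfront capital + yearly operations,
#          cloud       = yearly subscription only.
def on_prem_tco(years, capex=100_000, opex_per_year=20_000):
    """Total cost of buying and running the system yourself (hypothetical figures)."""
    return capex + opex_per_year * years

def cloud_tco(years, subscription_per_year=45_000):
    """Total cost of consuming the equivalent service from a provider (hypothetical figures)."""
    return subscription_per_year * years

for years in (1, 3, 5, 7):
    print(f"{years} yr: on-prem ${on_prem_tco(years):,}  cloud ${cloud_tco(years):,}")
# 1 yr: on-prem $120,000  cloud $45,000
# 3 yr: on-prem $160,000  cloud $135,000
# 5 yr: on-prem $200,000  cloud $225,000
# 7 yr: on-prem $240,000  cloud $315,000
```

With these invented numbers the cloud wins early on, but the recurring subscription eventually crosses the on-premises total; the break-even point depends entirely on the inputs, which is exactly why the long-term TCO question matters.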
For many years, the focus in IT has been on building robust systems by investing heavily in avoiding failures. To accomplish this goal, methodical processes were implemented to guide IT through a list of known use cases so that systems could try to avoid failing and have a plan for recovery if a failure did…
At the recent Misti Big Data Security conference, many approaches to securing big data were discussed, from encrypting the entire big data pool to encrypting only the critical bits of data within it. Several of the talks included general discussion of securing Hadoop as well as access to the pool of data. These security measures include RBAC, encryption of data in motion between Hadoop nodes, and tokenization or encryption of data on ingest. What was missing was finer-grained control over who can access specific data once that data is in the pool. How could role-based access controls per datum be put into effect? And why would such advanced security be necessary?
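As a rough sketch of what "role-based access control per datum" could look like, here is a short Python example. The policy table, sensitivity tags, and function names are all hypothetical illustrations of the idea, not the API of any actual Hadoop security product: each record is tagged when it is ingested, and each role is mapped to the set of tags it may read.

```python
# Sketch of per-datum role-based access control (hypothetical policy and names).
from dataclasses import dataclass

# Role -> set of sensitivity tags that role may read (illustrative policy).
POLICY = {
    "analyst":    {"public", "internal"},
    "data_eng":   {"public", "internal", "pii_masked"},
    "compliance": {"public", "internal", "pii_masked", "pii_raw"},
}

@dataclass
class Record:
    key: str
    value: str
    tag: str  # sensitivity tag attached to the record on ingest

def can_read(role: str, record: Record) -> bool:
    """Return True if this role is allowed to read this individual record."""
    return record.tag in POLICY.get(role, set())

def filter_for_role(role: str, records):
    """Return only the records the role is entitled to see."""
    return [r for r in records if can_read(role, r)]

if __name__ == "__main__":
    pool = [
        Record("1001", "page view count", "public"),
        Record("1002", "hashed customer id", "pii_masked"),
        Record("1003", "customer SSN", "pii_raw"),
    ]
    print([r.key for r in filter_for_role("analyst", pool)])     # ['1001']
    print([r.key for r in filter_for_role("compliance", pool)])  # ['1001', '1002', '1003']
```

The point of the sketch is simply that the access decision is made per record rather than per table or per file, which is the gap the conference talks left open.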