Software-defined storage (SDS) within the container realm often ignores the underlying storage itself. In essence, the SDS platform assumes some chunk of storage is already mapped to a container host and takes over from there. SDS for containers is the orchestration through which persistent storage is mapped to a container, which gives it a unique ability: it presents a mount point the SDS layer itself controls, and with it a unique view of the world. SDS for containers bypasses traditional storage yet still provides retention, replication, and erasure coding, among other features, and it does not care what storage sits underneath the container host. This assumption could lead to issues down the road, but how does this work?
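The replication and erasure-coding features mentioned above are implemented by the SDS layer itself, independent of whatever storage sits below the container host. As a rough conceptual sketch (not any particular product's implementation), the simplest erasure code is a single XOR parity chunk, which lets a striped write survive the loss of any one chunk:

```python
# Minimal single-parity erasure coding (the simplest member of the
# erasure-code family SDS platforms use). Conceptual sketch only.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-sized chunks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Compute one parity chunk over equal-sized data chunks."""
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_blocks(parity, c)
    return parity

def recover(surviving, parity):
    """Rebuild the single missing chunk from the survivors plus parity."""
    missing = parity
    for c in surviving:
        missing = xor_blocks(missing, c)
    return missing

data = [b"AAAA", b"BBBB", b"CCCC"]   # three equal-sized stripes
parity = encode(data)
# Lose the middle chunk, then rebuild it from the rest plus parity:
rebuilt = recover([data[0], data[2]], parity)
assert rebuilt == b"BBBB"
```

Production erasure codes (Reed-Solomon and friends) tolerate multiple failures, but the principle is the same: the SDS layer adds redundancy in software, above whatever disks or cloud volumes it happens to be given.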
A while ago, I was listening to the GreyBeards on Storage podcast. One idea suggested by a guest is that application storage is going to end up in one of two forms. There will only be flash and cloud: specifically, flash in a mobile device and object storage in the cloud. I don't buy that as the end state, but I do see flash and cloud as the two growth areas of storage. I see flash as a performance tier close to the point of consumption, with a separate persistent capacity tier. I am also hearing a lot more about that persistent tier being farther away, across town or across the country. At recent Tech Field Day events, I saw a couple of companies that are putting flash storage close to users and persistent storage farther away: ClearSky provides a managed primary storage service, and Avere wants to alleviate your NAS performance problems.
Recently, I had the opportunity to talk about the shortest path bridging (SPB) protocol with Avaya while at Interop. This conversation was one of many with networking companies. While SPB is a very interesting protocol, my questions were about how deep into the virtual environment the protocol extends. While SPB and other networking protocols are considered by some to be network virtualization, I could not see this within the realm of the virtual network, and hence confusion reigned. Depending on who is talking to whom, the same words can mean many different things. What I still find amazing is that most people think networking ends at the physical NIC within the virtualization host, and that what is inside does not matter as much as what is outside.
While at Interop, I participated in a Tech Field Day event where Spirent was talking about their new Axon product, as well as the possible use of Blitz.io. It was an interesting discussion, but it gave me some food for thought. As we move to cloud-scale apps based on platforms such as Pivotal (EMC World was just down the street), OpenShift, and others, we need a way to test those applications at scale. Spirent and Ixia provide these tools, but would they be used in this new model?
The Virtualization Field Day delegates joined the Virtualization Security Podcast as guest panelists on 2/23, and the topic of the day was cloud security. There were questions about compliance, tenant security, administrator security, and legal issues. There were answers from Rodney Haywood (Rodos), a fellow Virtualization Field Day delegate and cloud architect, as well as from the podcast's standard panelists. So what did the questions boil down to?
While participating in the GestaltIT Virtualization Field Day #2, I asked Pure Storage whether SSD-based storage is throwing hardware at a problem that is better fixed by changing the code in question. What brought this thought to mind was the example used during the presentation, which was about database performance. This example tied into a current consulting problem, where fixing the database improved performance by 10x and alleviated the need for overall storage improvements. So the question remains: is using SSD throwing hardware at a basic coding problem?
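To make the "fix the database first" point concrete, here is a hypothetical sketch (the table and column names are my own invention, not from the consulting engagement) using SQLite's query planner: one missing index is the difference between a full table scan on every lookup and a direct index search, a class of fix that can dwarf what faster media buys you.

```python
# Hypothetical example: a one-line database fix vs. a full table scan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 1000}", i * 1.5) for i in range(10000)],
)

query = "SELECT * FROM orders WHERE customer = 'cust42'"

# Without an index, the planner must scan the whole table:
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before[0][-1])   # detail column mentions a SCAN of orders

# One line of "fixing the database" turns the scan into an index search:
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after[0][-1])    # detail column now mentions USING INDEX
```

Faster SSDs make the scan finish sooner; the index makes most of the work disappear. That is the trade-off behind the question above.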