HP announced the newest addition to the top of its thin client device line. The t820 series delivers HP’s highest-performing thin clients to date, targeting users who historically have not been able to use thin clients at all. In a press release on August 19th, HP stated: “There is new and growing demand in today’s market for quad-core processing and multimedia graphics on thin clients,” said Jeff Groudan, marketing director, Thin Clients, HP. “With the HP t820, we’ve delivered a more advanced thin client solution to give companies the speed and performance required for their most demanding applications.” Continue reading HP Releases New Thin Client amidst a Growing Market
Teradici, the developer of the PCoIP® protocol, has announced two updates to its hardware acceleration products, geared toward optimizing protocol bandwidth and improving the end-user experience. Continue reading News: Teradici Rolls Out Updates to Optimize PCoIP Experience
As you can probably tell from the title, Citrix is leveraging its biggest advantages in the mobility/BYOD race: its understanding of ALL client operating systems, multimedia in both SBC and VDI environments, and its established partnerships with hardware and OS vendors. In a conversation I had with Chris Fleck, VP Mobility & Alliances at Citrix (@chrisfleck), we spent an hour talking about the various methods Citrix has decided to use to manage mobile devices in both multi-user and multi-OS virtual environments, while extending their function from consumption to productivity. Oh, yeah, they have also changed their product and technology names to reflect their commitment to mobility; shocking, I know. Continue reading Mobility Bytes – Citrix’s Mobile Strategy Has Sharp Teeth & It’s Attacking from Every Side
Corporate data is floating around on PCs and laptops, sitting on cloud file-sharing platforms, and being transmitted over email. Laptops and mobile devices are sitting in the trunks of cars at the mall, being left in hotel rooms, or lost in the backs of taxis. Data has become as good as gold. Credit card numbers, Social Security numbers, architectural diagrams, marketing plans, and source code – each is a target for a particular thief. And just like fine art and jewelry, there is a huge black market of data buyers. Don’t think your competition wouldn’t want to get their hands on your customer accounts, price lists, or intellectual property if they could. There are too many cases in recent history of massive data loss to believe this problem can be fixed without changing the way employees access and use corporate data. Continue reading Rethinking Thin Clients from a Security Perspective
When we look at the secure hybrid cloud, the entry point is the end-user computing device, whether that device is a tablet, smartphone, desktop, laptop, Google Glass, watch, etc. We enter our hybrid cloud from this device. From there we spread out to other clouds within our control, clouds outside our control, or to data centers. How these devices authenticate and access the data held in these various places within the hybrid cloud is a matter of great importance, and it has been a focus for many companies. How we protect the data that ends up on the end-user computing device is equally important. Continue reading End User Computing within the Secure Hybrid Cloud
Right now is a particularly interesting time in the world of IT. Historically, IT has swung back and forth between centralization and decentralization, closed and open, tightly controlled and loosely controlled. Lately, though, a third option has cropped up: centralized control with decentralized workloads. In my opinion it’s a function of speed, implemented through bandwidth and processing capacity. We now have enough bandwidth between our devices to start treating the device in the next rack column like a slightly-less-local version of ourselves. We also have enough bandwidth that we’ve outstripped our need for separate storage and data networks, and can converge them onto a single wire, running a single set of protocols (most notably TCP and IP). On the processing side, each node is basically a datacenter unto itself: 16, 32, or 64 cores per server, and terabytes of RAM. The advent of SSD and PCIe flash rounds out the package, lessening the need for large monolithic collections of spindles (aka “traditional storage arrays”). The problem then becomes one of control. How do we take advantage of the performance and cost benefits that local processing brings, yet maintain all the control, redundancy, and management benefits we had with a monolithic solution, while keeping the complexity in check? And while we usually talk about doing this at great scale, can we do it on a small scale, too?