What is the future of big data? This is the question on my mind, and perhaps on the minds of others. We know it is being used heavily by industry, business, and government. One thing I want to know is: how do we make big data even faster? Eventually, even with Moore's Law, we will hit a roadblock. Granted, many of us are a far cry from hitting any limits. For some, however, certain queries still take months to answer. For security groups, months is too slow. So, where do we turn? How do we get faster?
Faster is not just using the latest tools (Hadoop, Google BigQuery, Elasticsearch, etc.). Faster is not just the latest algorithms. Faster is not employing faster forms of machine learning or even artificial intelligence. Faster is employing new technologies with entirely different semantic views. In essence, we will need to change our view of data. High-performance computing has already done this by adopting GPU-based systems. However, what works for numerical algorithms may not work for text-based data, unless we rethink what text-based data really is: nothing more than numerical data in a different form. There are now GPU-based clouds to help with these endeavors.
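The claim that text is just numerical data in a different form can be made concrete with a small sketch. This is a toy illustration (using NumPy as a stand-in for the kind of vectorized, GPU-style array operations HPC relies on), not any particular big data tool's API:

```python
# A minimal sketch of "text is just numbers in another form": encoding a
# string as an integer array makes it amenable to the same vectorized
# array operations that GPU-based numeric computing is built on.
import numpy as np

text = "big data"
codes = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)
print(codes.tolist())  # the text, viewed as plain numbers

# Once text is numeric, bulk transformations become array arithmetic,
# e.g. uppercasing ASCII letters by subtracting 32 from bytes 97..122:
upper = np.where((codes >= 97) & (codes <= 122), codes - 32, codes)
print(upper.tobytes().decode("utf-8"))
```

The same shift in perspective, from "characters to scan" to "arrays to transform", is what lets text workloads ride on numerically oriented hardware.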
Perhaps we need even more rethinking.
A number is just a number, but data can represent different states of an environment, or different characteristics of an item, action, or system. This leads us to look at big data, and at solving more interesting classes of problems, with even newer systems. Perhaps this is the real use of quantum computers. They are also a reality: an expensive reality, but real nonetheless. D-Wave Systems provides a quantum computer. Unlike traditional semiconductor-based systems, quantum systems have unique characteristics:
- The CPU is chilled to near absolute zero (literally colder than outer space in many cases)
- Results are nondeterministic: a solution set is returned, not a unique answer
- Problems must be thought about in an entirely new way
Any mathematician will tell you that a probabilistic system returns a set of answers. Humans, however, want deterministic results. With quantum computing, that is not viable. At the same time, since the real world is not very deterministic, quantum computing is quite good at resolving real-world problems across multiple data sets. The goal is to start thinking about relationships between data. Those interactions become the problem set for which we are searching for an answer.
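What a "set of answers" looks like can be sketched with a classical stand-in. The toy below mimics the shape of an annealing-style solver: many randomized reads over a tiny energy function, returning every low-energy solution found rather than one deterministic answer. The QUBO matrix and the `sample` helper are made up purely for illustration; this is not D-Wave's actual API:

```python
# A toy classical stand-in for a probabilistic (annealing-style) solver.
# Repeated random-restart greedy descent on a tiny QUBO returns a *set*
# of low-energy (solution, energy) pairs -- the same shape of result a
# quantum annealer hands back.
import random

# Illustrative energy function: -x0 - x1 + 2*x0*x1 over binary variables
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}

def energy(x):
    return sum(coef * x[i] * x[j] for (i, j), coef in Q.items())

def sample(num_reads=20, seed=42):
    rng = random.Random(seed)
    results = set()
    for _ in range(num_reads):
        x = [rng.randint(0, 1) for _ in range(2)]  # random start
        improved = True
        while improved:  # greedy single-bit flips (a crude "anneal")
            improved = False
            for i in range(2):
                y = x[:]
                y[i] ^= 1
                if energy(y) < energy(x):
                    x, improved = y, True
        results.add((tuple(x), energy(x)))
    return results

print(sample())  # a set of (solution, energy) pairs, not one answer
```

Here the two ground states, (1, 0) and (0, 1), both have energy −1, so the solver legitimately returns more than one "best" answer. That multiplicity is the feature, not a bug: each member of the result set is a different relationship in the data worth examining.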
Granted, the above is highly simplified, but perhaps it is now time for a quantum cloud for big data solutions, one that combines quantum systems with semiconductor-based systems to get the best probabilistic answers with a deterministic trend. Given the cost of quantum computers, clouds may be a great starting point. First, however, we need to change how we view our problems.
If the goal is to get faster answers, quantum computing is proving itself to many.
If the goal is to get a set of answers with new insights, then quantum computing is ready now. This is the real strength of quantum computing. The result sets can provide new insights into older or even newer problems, new avenues of research, new thoughts for the business. Perhaps the optimum solution is orthogonal to current thought.
Big data is going through a fundamental shift as we enter the age of IoT. What we once considered big will soon seem small, and we will still need to process any new data faster to get timely, usable results. At the moment, not many companies have this need, but some of the more forward-thinking ones are looking toward quantum computers to speed up processing and to speed up the resolution of results for business needs.
These are currently huge machines. Eventually the technology will improve, and they will get smaller and smaller. The general use case for quantum computing may be a new computational cloud service. Is this where Google may go next?