A logical progression toward solving (as yet) undefined problems
Let’s begin with a premise: There is a pattern developing in the way humans are solving problems with technology.
Computational power - energy requirements = profit. Profit could be learnings, project success, or pure dollars.
First, some history:
CPUs
1. CPUs (Central Processing Units) were the dominant force in number crunching and problem solving up until a few years ago. A CPU is the general brain of your computer, capable of running your operating system and your applications.
CPUs were (and are) good because they have a general instruction set designed to handle any kind of problem.
However, this general ability is a positive and a negative. CPUs are intended to handle any kind of problem, so only using a portion of their available capability is inefficient from an energy perspective.
CPU Output = 10 units of productivity per 1 unit of energy.
GPUs
2. GPUs (Graphics Processing Units) showed up on the scene and were originally intended for graphics, mainly video games. Their capability is now far broader.
GPUs are incredible devices, with hundreds if not thousands of smaller brains that all solve more specific kinds of problems. They are well suited to gaming because applications ask them to perform specific actions: Apply this texture to this wall in this game. Render this character in this Minecraft map.
GPUs are now used for a wide range of high-intensity number crunching, including deep learning and certain kinds of crypto mining.
GPU Output = 100 units of productivity (for a specific task) per 1 unit of energy.
ASICs
3. ASICs (Application-Specific Integrated Circuits) started taking over for GPUs in specific applications.
An ASIC machine is like a GPU but taken to an almost absurd level. They are highly efficient machines designed to solve an even more specific task than a GPU: do this exact math function, solve for this exact algorithm. They exist to serve only one purpose.
ASIC machines are incredibly efficient. Each ASIC machine does exactly one sort of thing, and it does it incredibly well. ASIC machines have taken over for GPUs in mining Bitcoin and other cryptocurrencies.
ASIC Output = 1000 units of productivity (for a specific task) per 1 unit of energy.
Where to go from here?
The economics are fairly incredible. When solving specific problems we get orders-of-magnitude increases in performance per unit of energy required.
CPU Output = 10 units of productivity per 1 unit of energy.
GPU Output = 100 units of productivity (for a specific task) per 1 unit of energy.
ASIC Output = 1000 units of productivity (for a specific task) per 1 unit of energy.
One application of this progression has been crypto mining. Mining has taken this path BECAUSE it has profit directly attached to it — in the case of mining, profit = output minus energy expenditure.
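The mining economics can be sketched in a few lines. This is a toy model: the per-device figures are the essay's stylized units, not real benchmarks, and the function and parameter names are illustrative.

```python
# Stylized units of productivity per unit of energy, from the essay.
PRODUCTIVITY_PER_ENERGY = {"CPU": 10, "GPU": 100, "ASIC": 1000}

def profit(device: str, energy_units: float,
           value_per_output_unit: float, cost_per_energy_unit: float) -> float:
    """Profit = value of output minus energy expenditure."""
    output = PRODUCTIVITY_PER_ENERGY[device] * energy_units
    return output * value_per_output_unit - energy_units * cost_per_energy_unit

# Same energy budget and prices for every device: the more specialized
# the machine, the larger the spread between output value and energy cost.
for device in ("CPU", "GPU", "ASIC"):
    print(device, profit(device, energy_units=1.0,
                         value_per_output_unit=1.0, cost_per_energy_unit=1.0))
```

With these toy numbers the ASIC's profit per unit of energy is two orders of magnitude above the CPU's, which is the whole economic argument for specialization.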
To overly simplify the task at hand: For problems where computational capability and energy usage are both variables, it's a matter of understanding what problem needs to be solved, and developing specific machines against that problem.
The question to answer:
To what other problems can we apply this progression?
Surely crypto mining is only one example of this quick progression of technology. What other problems exist that need to be solved in such an efficient manner?
And the question to answer: how do you identify those problems today?
-Andy