Industry in WA does not fully take advantage of supercomputing, a rapidly maturing IT segment.
PERTH'S mining industry abounds with challenges to which high-performance computing (HPC) can be productively applied.
Although using large computers to find resources isn't particularly new or novel, the emergence of low-cost supercomputing in the last decade has transformed the businesses of resource exploration and production.
Whether we're in a commodities bubble striving for maximum production, or a global depression focused on cost, computational analysis saves money.
Some savings are obvious. With the cost of a simple well topping $10 million, and complex wells many times that, the price of even typical failures is steep. Rarer failures, exemplified by events such as the Lake Peigneur disaster in the US state of Louisiana, are even more costly.
In 1980, due to an error in analysis, an oil rig drilling from the surface of Lake Peigneur penetrated a salt mine below the lake bed. An estimated 13 billion litres of water disappeared down the hole, along with two drilling platforms, 12 boats, and 263,000 square metres of land.
In the end, the three-metre deep freshwater lake was forever transformed into a saltwater lake with a 400-metre chasm. Adjusted for inflation, the drilling company paid almost $US115 million in compensation, and counted itself fortunate that no lives were lost.
Modern exploration companies, if they're honest with themselves, should now be at least as focused on advances in algorithms and computation as on geoscience and drill rigs. The question for Perth is, shouldn't more of those resources and that expertise be located here?
If policymakers are truly concerned about diversifying Perth away from a total focus on mining, high-performance computing is one area where pursuing that long-term goal could deliver immediate benefits.
And once in place, supercomputing expertise is attractive to an entire spectrum of modern industry, from aircraft design to pharmaceutical development, weather modelling to finance.
For businesses, Perth's geography also represents a practical argument for local action. A single petroleum exploration project comprises a staggering quantity of raw data. Our customers' data doesn't arrive on DVD, because a single job would require more than 5,000 discs. That album that took so long to download from iTunes is about 200,000 times smaller.
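For readers who like to check the arithmetic, here is a minimal back-of-the-envelope sketch. The DVD capacity (4.7 GB for a single-layer disc) and the ~100 MB album size are assumptions of mine, not figures from the article; with them, the numbers land in the same ballpark as the comparisons above.

```python
# Back-of-the-envelope check of the data volumes described above.
# Assumed figures (not from the article): a single-layer DVD holds
# about 4.7 GB, and a typical album download is roughly 100 MB.
DVD_BYTES = 4.7e9        # capacity of one single-layer DVD
ALBUM_BYTES = 100e6      # rough size of one album download
DISCS_PER_JOB = 5_000    # "more than 5,000 discs for a single job"

job_bytes = DISCS_PER_JOB * DVD_BYTES
print(f"One job: ~{job_bytes / 1e12:.1f} TB")               # ~23.5 TB
print(f"Albums per job: ~{job_bytes / ALBUM_BYTES:,.0f}")   # ~235,000
```

With these assumed sizes the ratio comes out around 235,000, the same order of magnitude as the article's "about 200,000 times smaller".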
It's an amount of data that is simply not practical to move over the internet anywhere, least of all in the most remote capital city on earth. So if you want to be involved in analysing that much information, you must be physically near it.
If you want to have local geoscientists imparting their local knowledge, that's where you'd like to have your supercomputing resources.
HPC, however, is in no way a one-size-fits-all problem. Even within a single industry such as petroleum exploration, software and procedures are far too diverse for a single cost-effective solution.
There is a vast corporate graveyard littered with the remains of technology companies that tried to force businesses to change to fit the computer, rather than the other way around.
Different applications tend to operate most efficiently on different kinds of supercomputers; some are focused on mathematical processing power, others require more data storage, or a very fast network.
Because the application landscape is so varied, companies trying to outsource supercomputing often face serious problems of either cost or productivity. The typical supercomputer-for-hire tries to be vast in every dimension, lest it risk being unsuitable for certain customers. As a result, this computer that excels at everything is available only at a steep price, one that a purpose-built machine dramatically undercuts.
When you consider the alternative of bringing it in-house, an HPC installation is a business unit in and of itself.
If you're not prepared to think of your company as partially a computing enterprise, you're probably not committed enough to make it a success.
For along with its benefits, supercomputing brings a unique set of complex challenges. During the design of the largest systems, for example, modelling the heat flows and the necessary cooling infrastructure can be a computational fluid dynamics problem that itself requires the services of a supercomputer.
The skills involved in designing, maintaining, and productively using these resources are not especially common, either.
University lectures in HPC design are rare, and I know of few hobbyists with a supercomputer in the basement.
This is a field in which the experts are in the trenches, and you learn by doing.
I know first-hand that iVEC, Western Australia's state supercomputing agency, is an excellent resource that Perth industry is lucky to have, but few really take advantage of. Through the Industry Uptake Program, iVEC offers advice and expertise to companies and government groups considering a role for high-performance computing. Those organisations would do well to avail themselves of iVEC's services before they make too many expensive decisions.
HPC is still a rapidly maturing segment of IT, and there are real opportunities for companies that are capable of leveraging it. Proper computational analysis can easily be the difference between tapping a 20-million-barrel oil field and drilling a $20 million dry well.
At the same time, there are opportunities for Perth to expand its high-tech portfolio, to encourage the development of an ecosystem of HPC expertise that will be of service to any number of modern industries.
Phil Schwan is a high-performance computing entrepreneur, now at Subiaco-based DownUnder GeoSolutions.