High-Performance Computing: How Bursts in the Cloud Can Triple Your Work Rate
A happy marriage between High-Performance Computing (HPC) and cloud computing gives geoscientists faster results and greater efficiency in reservoir simulation.
E&P companies face increasingly challenging requirements for their petro-technical workloads. These workloads typically involve large volumes of data, demanding graphics and processing requirements, and a need for scalability and flexibility. To handle them, E&P companies can rely on services that run geoscience-specific workloads on a secure, scalable, cloud-based high-performance platform.
What is High-Performance Computing (HPC)?
High-Performance Computing (HPC) refers to the use of “supercomputers” to solve computational problems that take too long to run on, or are simply too large for, standard computers. In other words, HPC uses specialized hardware, connected by a low-latency network, to reach much higher computational speeds than regular computers.
HPC typically comes in the form of a cluster: several ordinary computers connected in a network. These individual computers, or servers, are usually called nodes.
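As a rough illustration of the cluster idea, the sketch below splits a workload into slices the way a cluster splits it across nodes. This is a minimal Python toy: `simulate_slice` is a made-up placeholder computation, and a local thread pool stands in for the cluster's nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_slice(cells):
    # Placeholder for one simulation step on a slice of a reservoir grid.
    return sum(c * 2.0 for c in cells)

def run_on_cluster(grid, n_nodes):
    # Deal the grid cells out round-robin, one slice per "node".
    slices = [grid[i::n_nodes] for i in range(n_nodes)]
    # A thread pool stands in for the cluster here; a real cluster would
    # dispatch each slice to a separate node over the low-latency network.
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return sum(pool.map(simulate_slice, slices))

print(run_on_cluster([1.0] * 1000, n_nodes=4))  # → 2000.0
```

The key point is the partitioning: each node works on its own slice at the same time, so adding nodes shortens the wall-clock time of the whole job.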
The first supercomputer was built during the 1960s. Since then, however, HPC has reached a new level of maturity and is now capable of solving some of the most complex problems within science, engineering, and business. The University of North Carolina at Chapel Hill, for instance, relies on on-premise HPC clusters to support various research activities in science, engineering, and medical areas.
High-Performance Computing (HPC) in the E&P Industry
Within the oil and gas industry, HPC is primarily used for reservoir simulation, where it can reduce the time to oil and improve the foundation for time-critical decision-making. The technology helps companies accelerate ROI and minimize risk by enabling engineers and geoscientists to inform crucial project decisions when identifying and analyzing resources.
The American multinational energy corporation Chevron, for instance, has used HPC extensively in exploration. HPC was crucial in discovering a new field in the Gulf of Mexico deepwater that could yield 3 to 15 billion barrels of oil. Without HPC, it would have been impossible to process the massive data sets behind the discovery. More recently, Chevron has been looking to move its existing HPC seismic applications to the public cloud.
HPC enables geoscientists to offload their simulations from their workstations to an external cluster. This increases speed and efficiency and lets geoscientists keep working on their own machines while the simulations run. Smaller organizations in particular, where workstations may otherwise be tied up for days, benefit from offloading simulations to the cluster.
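The hand-off from workstation to cluster usually goes through a job scheduler rather than a manual copy of files. As a hedged sketch, assuming a Slurm-managed cluster and a hypothetical `reservoir_simulator` binary, a helper might prepare the batch script like this:

```python
def make_batch_script(job_name, nodes, hours, command):
    # Build a Slurm batch script; the resource numbers below are
    # illustrative, not a recommendation for any particular simulator.
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --nodes={nodes}",
        f"#SBATCH --time={hours}:00:00",
        command,
    ])

script = make_batch_script(
    "reservoir-sim", nodes=16, hours=12,
    command="srun ./reservoir_simulator case.dat",  # hypothetical binary
)
print(script)
```

Once submitted (with `sbatch`), the scheduler queues and runs the job on the cluster, and the workstation is immediately free for other work.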
High-Performance Computing (HPC) in the Cloud
Many oil and gas companies already rely on on-premise HPC clusters for reservoir simulation. On-premise HPC, however, usually generates significant capital expenditure and ongoing operational expenses, has limited capacity, and relies on infrastructure that quickly becomes outdated.
But HPC is not only available on-premise. The proliferation and rapid development of cloud computing have also made the cloud suitable for HPC. Advanced networking capabilities and powerful cloud servers enable the creation of ad-hoc clusters from an enormous number of servers. The cloud, then, can provide HPC – supercomputers – on demand.
Instead of spending days or months running reservoir simulations on an on-premise solution, E&P companies can now leverage HPC in the cloud to perform the same simulations in hours – while reducing up-front capital expenditure on on-premise infrastructure.
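The "triple your work rate" claim in the title comes down to simple throughput arithmetic. The numbers below are purely illustrative assumptions (a made-up 48 node-hour job and made-up cluster sizes), not figures from any vendor:

```python
# Assume one reservoir simulation costs 48 node-hours of compute (illustrative).
JOB_NODE_HOURS = 48

def jobs_per_day(nodes_available):
    # Node-hours available in 24 h, divided by the cost of one job.
    return nodes_available * 24 / JOB_NODE_HOURS

on_prem    = jobs_per_day(nodes_available=8)       # fixed on-premise cluster
with_burst = jobs_per_day(nodes_available=8 + 16)  # burst 16 extra cloud nodes

print(on_prem, with_burst, with_burst / on_prem)   # → 4.0 12.0 3.0
```

Bursting from 8 to 24 nodes triples the simulations completed per day – and because the extra nodes are rented only while needed, the capacity does not sit idle afterwards.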
Read more about Cetegra and Cetegra High Performance Computing >