Computing power and engine simulation

Tags: test-equipment
In a previous RET-Monitor, we looked at engine simulation software, specifically open source CFD packages. However, while having the ability to simulate what is going on in an engine is one thing, having access to the correct hardware to undertake such simulations is another. The level to which you can simulate complex flows such as those found in an engine’s inlet and combustion chamber depends to a great degree on the computing power available. For those wishing to undertake such simulations, there are a number of options open.
The first, and by far the cheapest, is simply to use a powerful desktop computer. The latest generation of high-end machines has an excellent cost-to-performance ratio. They feature multi-core processors, which – if configured correctly – can be harnessed to handle the requirements of basic CFD computations effectively. For more complex calculations, though, the very large number of computations required becomes a limiting factor: while they can be undertaken, the time taken to do so is considerable. When it comes to simulating engine flows, anything much more complex than basic steady-state flow analysis of simple geometries is beyond the capabilities of a regular PC.
The next step takes us into the world of high-performance computing (HPC), a term often bandied about in relation to CFD simulation and one that covers a wide range of computer types. There’s no fixed definition of how powerful a computer needs to be for it to be considered ‘high performance’. This is because processor performance has increased exponentially for many years, so any such definition is soon out of date. It’s more usual to consider a computer high performance if it uses multiple processors (tens, hundreds or even thousands) connected together by some kind of network to achieve performance well above that of a single processor. Using multiple processors in this way is also often referred to as parallel computing.
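The parallel-computing idea can be sketched in a few lines of code: split one large job into independent pieces and hand each piece to its own worker. The Python sketch below is purely illustrative – a real CFD solver would distribute mesh blocks across separate processors or machines (typically via MPI), and the ‘solver kernel’ here is just a crude smoothing step, not a flow calculation:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_block(block):
    """Stand-in for a flow-solver kernel: smooth one mesh block
    by replacing every cell with the block average."""
    avg = sum(block) / len(block)
    return [avg] * len(block)

# Split a one-dimensional 'mesh' of 16 cells into four blocks,
# one per worker, mimicking domain decomposition.
mesh = [float(i) for i in range(16)]
blocks = [mesh[i:i + 4] for i in range(0, len(mesh), 4)]

# Each worker handles its own block; results come back in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(solve_block, blocks))

solved = [cell for block in results for cell in block]
print(solved[:4])  # first block smoothed to its average: [1.5, 1.5, 1.5, 1.5]
```

The essential pattern – decompose the domain, solve the pieces concurrently, reassemble the result – is the same whether the workers are four cores in one box or thousands of processors on a network.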
When it comes to CFD calculations, the benefits of such systems are considerable: with many processors and processor cores handling calculations, more calculations can be undertaken in a shorter time. Even with a properly configured single-processor machine with, say, a quad-core processor, only four parts of a particular solution can be computed at any one time. Set, say, 20 quad-core processors to work on the same solution, though, and the processing time falls considerably.
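The gain from those 20 quad-core processors (80 cores in all) is large but not unlimited: any part of the solution that must run serially caps the speedup, a relationship known as Amdahl’s law. A worked sketch – the 95% parallel fraction is an illustrative assumption, not a figure from any real solver:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup over a single core when only
    'parallel_fraction' of the work can be spread across 'cores'."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

p = 0.95  # assumed: 95% of the solve parallelises cleanly
quad = amdahl_speedup(p, 4)      # one quad-core workstation
cluster = amdahl_speedup(p, 80)  # 20 quad-core processors
print(f"4 cores:  {quad:.1f}x")
print(f"80 cores: {cluster:.1f}x")
```

Under these assumptions the quad-core machine manages about 3.5 times single-core speed and the 80-core cluster about 16 times – a big step, but well short of the 20-fold gain that simply counting cores would suggest, which is why the serial portions of a solver matter so much at scale.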
At one end of the HPC spectrum you have the multi-million pound supercomputers used by the likes of Formula One teams, featuring thousands of processing cores and capable of very rapid and complex CFD solutions. With the best of these, the sky is (almost) the limit, allowing, for example, high-resolution simulation of entire engines. At the other end, and the one applicable to most who need a decent level of CFD functionality, are interlinked clusters of individual desktop machines. While each machine on its own is nothing special, considerable performance can be achieved when they are linked together correctly. The benefit of this type of system is that regular workstations can be used, provided they are connected by a sufficiently high-speed local area network. That means they can be used for other tasks when not needed for CFD work.
For most small-scale powertrain CFD users, this approach is the most useful, removing the need for a dedicated and expensive HPC cluster. At this point, and in order to avoid confusion, it should be noted that cluster computing can also refer to a cluster of processors housed in a single machine. The mode of operation is the same, but instead of consisting of a cluster of workstations, the processors are all centrally located.
The final option, which has only become viable in recent years, is the use of ‘cloud’ computing resources. This essentially means that computing tasks are outsourced, over the internet, to HPC facilities around the world. It is a nascent industry at the moment, but its obvious benefit is that it removes the need for in-house computing resources. For companies that may only have an infrequent need for simulation, the availability of such resources opens up the possibility of undertaking far more complex simulations than would have been feasible a few years ago.
Written by Lawrence Butcher