There is a lot more to designing and manufacturing modern crankshaft bearings than many people realise. Designed to run reliably over a wide range of operating conditions, and using lubricants with viscosities so low they would have made engineers laugh as recently as 15 years ago, modern bearings face growing demands from the high power ratings of today's engines.
But with bearing shapes nowhere near circular and clearances between shell and journal measured in microns these days, how do we determine whether a bearing is up to the task throughout its lifetime?
One way during development is to repeatedly strip, examine and rebuild the engine bottom end, looking for markings on the bearings that would indicate wear and perhaps something going wrong. This is obviously very time-consuming, not to mention expensive in terms of man-hours, but visual inspection is probably still the best method. Dismantling the engine also serves as a check on the assembly process and the 1001 other little things that can go wrong at this stage.
Another way to check for bearing wear is to use a technique that exploits radioactivity. It is very good for identifying the engine conditions under which wear is generated at a specific location in the bearing, and was developed in the past to understand the limits to viscosity in a particular application. At the time, and before the general ban on lead in automotive applications, the lead overlay in a bearing would be irradiated to produce the radioactive isotope bismuth-206, and bearing wear would be inferred by detecting the build-up of radioactivity in the oil filter or engine oil. The method doesn't account for the shape of the bearing surface during use or the smearing of the soft overlay across the bearing, but with a measurement resolution of nanometres per hour the technique was certainly very sensitive (if a bit expensive).
The easiest way to screen engines for bearing wear, however, is to analyse the lubricating oil for its elemental content. For little more than $15-20 (£10-15), a small 100 ml sample of oil can be analysed using a spectrometer of one form or another. In the laboratory this could be of the inductively coupled plasma type or, at the track, well-heeled teams might use rotating disc electrode systems; either will typically screen an oil sample for the 20 or so elements commonly used in engines or engine oils, down to levels of parts per million.
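As a rough illustration of how such a screening report might be checked, the short sketch below compares measured parts-per-million figures against alarm limits. The element list and the limits are purely illustrative assumptions, not real condemnation limits; genuine thresholds depend on the engine, the oil, the hours on the sample and the lab's own guidance.

```python
# Sketch of an oil-analysis screening check.
# NOTE: the ppm limits below are illustrative placeholders only.

# Hypothetical spectrometer result: element symbol -> concentration in ppm
sample_ppm = {"Fe": 12.0, "Cu": 4.5, "Al": 7.0, "Pb": 0.2, "Si": 3.0}

# Assumed screening limits in ppm (not real condemnation limits)
limits_ppm = {"Fe": 50.0, "Cu": 10.0, "Al": 15.0, "Pb": 5.0, "Si": 20.0}

def flag_elements(sample, limits):
    """Return the elements whose concentration meets or exceeds its limit."""
    return [el for el, ppm in sample.items()
            if ppm >= limits.get(el, float("inf"))]

print(flag_elements(sample_ppm, limits_ppm))  # nothing flagged in this sample
```

In practice it is the trend across successive samples, rather than any single reading, that matters most.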
Understanding the bearing design, and after reviewing the other metals likely to be sources of wear, one can pick out those materials attributable to bearing wear. The presence of copper, for instance, could mean con rod little-end bearing wear; it could also mean big-end or main bearing wear where copper-based shells are fitted. Aluminium, however, is a difficult one. It could be attributed to piston top ring land erosion or to contact between skirt and cylinder bore, or it could be down to the aluminium-tin bearing material.
Either way, it might be prudent to strip the engine for further examination, or to look for the presence of tin in the oil sample. One engine I know of used a 99% pure bismuth overlay, flashed onto an aluminium-tin bearing over a very thin layer of pure silver to assist adhesion. To screen for bearing wear I looked for small amounts of bismuth in the oil, but when silver started to appear during the test I knew the bismuth overlay was being breached. Not necessarily a bad thing, though.
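The attribution reasoning above can be summarised as a simple rule set. The sketch below is a simplification of the examples in the text (copper, aluminium with or without tin, and the bismuth-over-silver overlay); real diagnosis requires knowing exactly which materials are in the specific engine.

```python
# Sketch of element-to-source attribution for wear metals found in oil.
# The rules mirror the article's examples and are not a general diagnostic.

def attribute_wear(elements):
    """Map a set of detected element symbols to candidate wear sources."""
    findings = []
    if "Cu" in elements:
        findings.append("copper: little-end bearing, or big-end/main "
                        "bearings if copper-based shells are fitted")
    if "Al" in elements:
        if "Sn" in elements:
            findings.append("aluminium with tin: points to the "
                            "aluminium-tin bearing material")
        else:
            findings.append("aluminium alone: piston top ring land, "
                            "skirt-to-bore contact, or bearing material "
                            "-- ambiguous, consider stripping the engine")
    if "Bi" in elements and "Ag" in elements:
        # The overlay example: silver appearing means the bismuth
        # overlay has worn through to the silver adhesion layer.
        findings.append("bismuth plus silver: overlay breached down to "
                        "the silver adhesion layer")
    return findings
```

For example, `attribute_wear({"Al", "Sn"})` returns the aluminium-tin bearing finding, while aluminium on its own is flagged as ambiguous.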
But for only a few dollars (or pounds) per sample, it amazes me that race teams and enthusiasts don't analyse their oil more regularly.
Written by John Coxon