Ultrasonic fatigue

Ultrasonic fatigue testing delivers one billion mechanical stress cycles in less than 14 hours.

  • Stress frequency: 20 kHz
  • 10⁹ cycles in less than 14 h
  • Tests in an economically viable time
  • 15-minute installation on a conventional system
  • Ease of use
  • Ease of calibration

Download our brochure

All our fatigue machines are driven by our WinVHCF software.
Main characteristics:

  • Select the loading amplitude and the target number of cycles
  • Automatic execution of several loading blocks at different amplitudes (sketched below)
  • Automatic stop once the specimen breaks
  • Ease of use
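The block-loading behaviour above can be illustrated with a short sketch. The Python code below is purely hypothetical: WinVHCF's actual interface is not documented here, so every name (LoadBlock, run_sequence, the machine driver methods) is an assumption that merely mirrors the behaviour listed above.

```python
import time
from dataclasses import dataclass

@dataclass
class LoadBlock:
    amplitude_um: float  # displacement amplitude at the specimen (µm)
    target_cycles: int   # cycles to accumulate in this block

def run_sequence(machine, blocks):
    """Run successive loading blocks; stop as soon as the specimen breaks.

    `machine` is a hypothetical driver exposing:
      start(amplitude_um) -- begin 20 kHz excitation at a given amplitude
      cycles()            -- cycles accumulated since start()
      broken()            -- True once resonance is lost (specimen failure)
      stop()              -- end the excitation
    """
    total = 0
    for block in blocks:
        machine.start(block.amplitude_um)
        while machine.cycles() < block.target_cycles:
            if machine.broken():
                machine.stop()
                return total + machine.cycles(), "specimen failed"
            time.sleep(0.1)  # poll the driver ten times per second
        total += machine.cycles()
        machine.stop()
    return total, "run-out"

# Example: three blocks of 10^8 cycles at increasing amplitude
blocks = [LoadBlock(a, int(1e8)) for a in (8.0, 10.0, 12.0)]
# result, status = run_sequence(machine, blocks)  # with a real driver object
```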

Supplied equipment:

  • Steel structure
  • Signal generator
  • Converter

 

GF20-TC: turnkey machine

Tests in Tension-Compression

  • Tests at 20 kHz
  • Cylindrical test specimens
  • TC horn for amplitudes between 3.6 µm and 20 µm
  • TGD horn for amplitudes between 17 µm and 80 µm


GF20-KB: turnkey machine


Tests in Compression-Compression

  • Tests at 20 kHz
  • TC horns
  • Booster (amplifier)


GF20-KT


Tests in Tension-Tension or Tension-Compression

  • Kit suitable for conventional fatigue machines
  • Set up within 15 minutes
  • Stress ratio from 0 to 0.8


Reliable and robust, our ultrasonic fatigue machines test materials in Tension-Tension, Tension-Compression and Compression-Compression over more than one billion cycles in less than 14 hours.

Automotive

Wind Power

Aerospace

Rail

Research

 

References: NASA, Renault, Safran, Oxford, ENSAM, Karlstad

State of the art

When Wöhler proposed his fatigue endurance curve, the leading applications of the time were steam engines for railway locomotives and ships. These were slow machines, operating at a few tens of cycles per minute and with lifespans of between 10⁶ and 10⁷ cycles. It was perfectly justified in practical terms to consider a megacycle limit on fatigue, especially as the fatigue testing machines of the time could not exceed 10 Hertz or so. To give a few current-day examples, the rotation speed of today’s engines is measured in thousands of cycles per minute, the service life of an internal combustion engine in hundreds of millions of cycles and that of a turbine in billions of cycles. Nevertheless, fatigue tests exceeding 10⁷ cycles remain relatively rare due to the operating costs of conventional testing machines. It is also of note that accelerated fatigue tests using resonance testing machines have not been sufficiently successful to date. The main criticism of such machines has been the lack of control over test parameters. Computer-controlled machines and sensors with fast response times have rendered this criticism obsolete. Reliable fatigue testing machines are now in use that can perform 10¹⁰ cycles in less than a week, where conventional systems would have taken over three years to achieve the same number of cycles for a single test piece.
Accordingly, one may ask whether it is sufficient to apply the current standards (Fig. 1) to determine a safe fatigue limit beyond 10⁷ cycles using a statistical approach, or whether the S-N curve should be extended to 10¹⁰ cycles and beyond. To summarise the current situation, we accept that the concept of a fatigue limit is bound to the assumption that there is a horizontal asymptote on the S-N curve above 10⁶ or 10⁷ cycles. On this basis, a test piece that has not ruptured at 10⁷ cycles is considered to have an infinite service life, which may actually be a practical and economical approximation but is not particularly rigorous.

 

It is important to understand that the staircase method is popular today for determining an assumed fatigue limit only because of the convenience of this approximation. A fatigue limit determined by rupture of a test piece at 10⁷ cycles requires around 30 hours of testing on a machine operating at 100 Hertz. Extending the test to 10⁸ cycles would take 300 hours and increase the cost tenfold, which explains why the possibility of accelerated testing on a piezo-electric testing machine is so advantageous.
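The arithmetic behind these durations is easy to check. The following sketch simply reproduces the figures quoted above and in the introduction (a convenience calculation, not taken from any cited source):

```python
def test_duration_hours(n_cycles: float, freq_hz: float) -> float:
    """Time needed to accumulate n_cycles at a constant test frequency."""
    return n_cycles / freq_hz / 3600.0

# Conventional machine at 100 Hz:
print(test_duration_hours(1e7, 100))           # ≈ 27.8 h  ("around 30 hours")
print(test_duration_hours(1e8, 100))           # ≈ 278 h   ("300 hours")

# Piezo-electric machine at 20 kHz:
print(test_duration_hours(1e9, 20_000))        # ≈ 13.9 h  (10⁹ cycles in under 14 h)
print(test_duration_hours(1e10, 20_000) / 24)  # ≈ 5.8 days (10¹⁰ in under a week)
```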

Principle of piezo-electric fatigue testing

Numerous articles have been published on the overall principle of vibration fatigue; this article will therefore simply summarise the fundamental theory. A detailed explanation can be found in “Gigacycle Fatigue In Mechanical Practice” by C. Bathias and P.C. Paris, published by Dekker/CRC (2005).

The key principle of vibration fatigue testing machines (Fig. 2) is to produce stationary resonant vibrations in a test piece. This requires a converter capable of converting the sinusoidal signal supplied by the electrical generator into mechanical vibrations. In commercially-available units the converter and generator generally operate at a fixed frequency (20 kHz).


The vibration from the converter is in principle too weak to damage the test piece, so a horn is required to amplify the vibration displacement. If the vibrating system (converter, horn and specimen) has the same natural frequency (20 kHz), a high vibration amplitude can be achieved with a low energy input and a stationary wave in the system.

The following underlying assumptions apply to the theoretical analysis of fatigue vibration testing:
– The metal studied is uniform and isotropic.
– The material is elastic (the plastic domain is considered negligible compared to the elastic domain for fatigue at very long lifetimes).
– As the vibration is longitudinal, the theoretical analysis can be simplified to one dimension.

Under these conditions, piezo-electric fatigue testing machines can only give results after 10⁶ cycles in elastic operation; clearly they cannot replace hydraulic testing machines.
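As a concrete illustration of this one-dimensional elastic analysis, the specimen is sized so that a longitudinal standing wave of half a wavelength fits in the resonating length: for a uniform cylindrical rod, L = c/(2f) with c = √(E/ρ) the longitudinal wave speed. The sketch below evaluates this for a generic steel; E and ρ are textbook values chosen purely for illustration:

```python
import math

def resonant_half_wavelength_mm(E_pa: float, rho_kg_m3: float, f_hz: float) -> float:
    """Half-wavelength of a longitudinal standing wave: L = c / (2 f), c = sqrt(E / rho)."""
    c = math.sqrt(E_pa / rho_kg_m3)  # longitudinal wave speed (m/s)
    return c / (2.0 * f_hz) * 1e3    # half-wavelength (mm)

# Generic steel: E ≈ 210 GPa, ρ ≈ 7800 kg/m³, excited at 20 kHz
print(resonant_half_wavelength_mm(210e9, 7800, 20_000))  # ≈ 130 mm
```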

History of piezo-electric fatigue testing machines

Vibration fatigue testing at 33 Hz was first discussed in scientific publications by Hopkinson in 1911, then by Jenkin and Lehmann; the first machine to reach 20 kHz was developed by Mason in 1950. Below this frequency the wave is audible. Girard conducted tests in 1959 and Vidal in 1965 to increase the frequency to 92 and 199 kHz respectively. However, the computers of the day were not powerful enough to control the tests correctly and the results were not convincing. Successful numerical control of piezo-electric testing machines was not achieved until recently, by C. Bathias and his team.

This technique, at a de facto standard frequency of approximately 20 kHz, is used for fatigue testing of materials at very long lifetimes and for fracture mechanics studies.

Experimental vibration fatigue testing resources have been significantly improved since the 1970s, and new systems and more extensive test possibilities have been developed. In 1996, S. Stanzl summarized the development and the various aspects of ultrasound fatigue testing. The first international congress entitled “Fatigue Life In the Gigacycle Regime” was held in France in 1998, organized by Euromech. Three subsequent congresses at Vienna (2001), Kyoto (2004) and Ann Arbor (2007) have confirmed the increasing attention paid to very long cycle fatigue testing.

The high-tech vibration fatigue testing system marketed by LASUR was developed in C. Bathias’s laboratory.

Book:

Gigacycle Fatigue in Mechanical Practice, Claude BATHIAS & Paul C. PARIS. 2004

Return to basics

To measure a surface temperature we use the radiation flux of thermal origin that all bodies emit. The physical law that governs this phenomenon is Planck’s law, which describes the behavior of an ideal entity called the blackbody. The set of curves in Figure 1 shows the spectral distribution of this radiation for several temperatures representative of those commonly encountered, particularly in metallurgy.
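For reference, the standard textbook form of Planck’s law for the spectral exitance of a blackbody (stated here for completeness) is:

$$M_\lambda(\lambda, T) = \frac{2\pi h c^{2}}{\lambda^{5}}\,\frac{1}{e^{hc/(\lambda k T)} - 1}$$

where h is Planck’s constant, c the speed of light in vacuum and k Boltzmann’s constant.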


We see that the peaks of these curves are located in the infrared (λ > 0.8 µm) and that the hotter the body, the more the curve shifts toward short wavelengths.
All equipment currently on the market works in the infrared, i.e. close to the energy peak, which keeps fabrication costs down.
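Both observations follow directly from Planck’s law through Wien’s displacement law. A minimal check, using the standard displacement constant b ≈ 2898 µm·K:

```python
def wien_peak_um(T_kelvin: float) -> float:
    """Wavelength of maximum blackbody emission (Wien's displacement law)."""
    return 2898.0 / T_kelvin  # displacement constant b ≈ 2898 µm·K

for t_celsius in (800, 1000, 1500):
    T = t_celsius + 273.15
    print(f"{t_celsius} °C: peak at {wien_peak_um(T):.2f} µm")
# 800 °C → 2.70 µm, 1000 °C → 2.28 µm, 1500 °C → 1.63 µm:
# always in the infrared (λ > 0.8 µm), shifting to shorter wavelengths as T rises
```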

 

Emissivity gets in the way

 

Unfortunately the blackbody is a purely theoretical entity, for in nature no body is perfectly black or even grey. Planck’s law gives only the maximum radiant energy that a body can emit. This value must be weighted by a factor between 0 and 1 called emissivity, symbolized by the Greek letter ε. To complicate everything, this factor, which depends on the nature of the body to be measured, varies with the condition of the surface (oxidized or not), the wavelength and even the temperature!

If one wishes to control an industrial process by following the heating of a body, a constant and precise temperature measurement is necessary. However, as we have seen, the result obtained by measuring the flux also depends on the variation in emissivity. Up to now, there has been no reliable and exhaustive experimental data available that would allow an analytical representation of this factor, or even a reliable interpolation over all the parameters concerned. In the low-cost equipment found on today’s market there is a pre-defined emissivity value, usually 0.8. On more sophisticated equipment the user chooses an emissivity value that then remains unchanged until the end of the running process. As a direct consequence, the measurement result is false, and the error is very hard to estimate since we have no idea of the amplitude of variation of ε!
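To put a number on the problem, here is a back-of-the-envelope sketch (ours, with illustrative emissivity values) of the temperature error caused by a wrong emissivity setting. It uses the Wien approximation of Planck’s law, in which the detected signal is proportional to ε·exp(−C₂/(λT)):

```python
import math

C2 = 14388.0  # second radiation constant (µm·K)

def apparent_temperature(T_true: float, eps_true: float, eps_assumed: float,
                         lam_um: float) -> float:
    """Temperature a pyrometer reports when its emissivity setting is wrong.

    In the Wien approximation, signal ∝ eps · exp(-C2 / (lambda · T)), so
    equating assumed and true signals gives the apparent temperature.
    """
    inv_T = 1.0 / T_true + (lam_um / C2) * math.log(eps_assumed / eps_true)
    return 1.0 / inv_T

T = 1273.0  # true surface temperature (K), i.e. 1000 °C
for lam in (1.0, 0.3):  # an infrared and an ultraviolet working wavelength
    Ta = apparent_temperature(T, eps_true=0.5, eps_assumed=0.8, lam_um=lam)
    print(f"λ = {lam} µm: reads {Ta:.0f} K, error {Ta - T:+.0f} K")
# At 1 µm the mismatch costs about −50 K; at 0.3 µm only about −16 K,
# anticipating the ultraviolet argument of the next section.
```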

 

Signal variation 100 times greater in ultraviolet

Ultraviolet pyrometry and its generalization, ultraviolet thermography, provide the solution, because the measurement is made in a spectral range where the signal variation with temperature is so large that it swamps the consequences of an emissivity variation. Because of the linear scale used in Figure 1, the steep slope of the curves to the left of the energy peak is not apparent. In Figure 2, with a logarithmic scale, we zoom in on the short wavelengths for the two temperatures 800°C and 1000°C.


It can be seen that at 0.3 µm, in the UV, a 200°C variation results in a multiplication by 1000 of the energy signal, whereas it is only multiplied by 10 at 1 µm. The drawback is that the energy levels in the UV are 10¹⁰ times lower.
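These figures can be checked directly against Planck’s law; the sketch below (standard physical constants, same formula as stated earlier) reproduces them:

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def M(lam_um: float, T: float) -> float:
    """Blackbody spectral exitance at wavelength lam_um (µm), temperature T (K)."""
    lam = lam_um * 1e-6
    return (2 * math.pi * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * T))

T1, T2 = 800 + 273.15, 1000 + 273.15  # the two temperatures of Figure 2

for lam in (0.3, 1.0):
    print(f"{lam} µm: ×{M(lam, T2) / M(lam, T1):.0f}")
# 0.3 µm: roughly ×1000 for the 200 °C increase; 1.0 µm: only about ×10

for T in (T1, T2):
    print(f"UV/IR level ratio at {T:.0f} K: {M(0.3, T) / M(1.0, T):.0e}")
# between ~1e-11 and ~1e-9 over this range: the UV energy levels are
# some ten orders of magnitude lower, as stated above
```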

A reliable solution, highly sensitive and sturdy

Photomultipliers are detectors designed to deal with these extremely weak energy levels. We use photomultipliers in photon-counting mode: their intrinsic noise is only 10 photoelectrons per second, and a signal of 100 photoelectrons per second (that is to say about 10⁻¹⁶ joules) is enough to obtain a significant measurement. Thanks to the extreme sensitivity of our detectors we can work with a very narrow bandwidth, almost monochromatically. Since the signal doubles every 20°C, we obtain very precise results. Another advantage of this measurement process is that it is fully digital, from the sensor to the display of the result. We thus get rid of the drift problem that is the main flaw of analog devices, and we obtain excellent repeatability of results. Moreover, these photon-counting devices have proven to be very sturdy and can work continuously in a harsh environment for years.
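A rough shot-noise estimate shows why these count rates are sufficient. The sketch below is our own back-of-the-envelope calculation, not a manufacturer specification; it combines Poisson counting statistics with the stated sensitivity (signal doubling every 20°C):

```python
import math

def temperature_resolution_degC(signal_cps: float, dark_cps: float,
                                t_int_s: float = 1.0,
                                doubling_degC: float = 20.0) -> float:
    """Poisson-limited temperature resolution of a photon-counting pyrometer."""
    n_sig = signal_cps * t_int_s
    n_dark = dark_cps * t_int_s
    sigma_rel = math.sqrt(n_sig + n_dark) / n_sig  # shot noise on the counts
    sensitivity = math.log(2.0) / doubling_degC    # relative signal change per °C
    return sigma_rel / sensitivity

# Figures quoted above: 100 photoelectrons/s of signal, 10/s of intrinsic noise
print(f"{temperature_resolution_degC(100, 10):.1f} °C")  # ≈ 3 °C with 1 s counting
```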
