
## Brian Spalding, CHAM Ltd, London, England

A lecture delivered at The Isaac Newton Institute, Cambridge, England, on April 30 1999

### Abstract and contents

It is argued that:
1. probability-density functions (PDFs) are more useful than statistical averages;
2. the colliding-fluid-fragments model of Reynolds and Prandtl, when generalised, leads to the multi-fluid model (MFM) and thence to PDFs;
3. the "eddy-break-up model" of 1971 was a small step in the right direction;
4. the PDF-transport model of 1982 and the two-fluid model of 1987 were larger steps in two divergent nearly-right directions; now MFM can deliver the results sought by the former by using the mathematical techniques of the latter;
5. MFM differs from Monte Carlo PDF transport in several respects: in particular, it allows "population-grid refinement and adaptation".
6. So, Kolmogorov's introduction of transport equations for statistical averages was "a good idea at the time, but..."

7. During the lecture, applications of MFM will be made to the plane mixing layer, the stirred reactor and the gas-turbine combustor; whereafter possible future developments will be discussed.

8. References are provided

### 1. Probability-density functions are more useful than statistical averages

1.1 What PDFs are

• PDFs, when discretized, can be thought of as "fluid-population distributions", measuring what proportion of the fluid present at a given location, averaged over time, has a prescribed state.

• The state prescription could be:
• having temperature between 20 and 21 degrees, or
• having velocity between 0.15 and 0.16 m/s;
• or both.

• One-dimensional PDFs (discretized) look like this or this or this.

• From such PDFs, if the property in question is the velocity, we can deduce, if we wish, the average kinetic energy of the turbulent motion.

• A discretised two-dimensional PDF looks like this, or this.

Knowing the heights of the ordinates in each box, if their locations represent (say) two components of velocity, we can deduce the average of their product, i.e. the shear stress; but the reverse calculation is not possible.
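As a minimal sketch, the forward deduction can be written in a few lines. The box-centre velocities and the population fractions below are invented stand-ins, not data from any real flow:

```python
import numpy as np

# Hypothetical discretized 2-D PDF: p[i, j] is the fraction of the local
# fluid whose velocity fluctuations fall in box (i, j).  All numbers are
# invented purely for illustration.
u = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])   # box-centre u' values, m/s
v = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])   # box-centre v' values, m/s
p = np.random.default_rng(0).random((5, 5))
p /= p.sum()                                # fractions must sum to 1

# Average of the product u'v', i.e. the quantity proportional to the
# turbulent shear stress:
uv_mean = sum(p[i, j] * u[i] * v[j]
              for i in range(5) for j in range(5))
print(uv_mean)
```

The reverse step is impossible: infinitely many populations p share the same value of uv_mean.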

1.2 Why we need PDFs

• We need to know the PDFs for many engineering purposes, for example:
• when the variable of a one-dimensional PDF is temperature, and the radiative-heat emission is to be calculated (proportional to temperature**4);

• when the two variables of a 2D population are the concentrations of participants in a chemical reaction, and the volumetric average of the reaction rate is required (proportional to the product of the concentrations);

• when the two variables of a 2D population are the fuel-air ratio and the extent of reaction of a combusting mixture, and it is required to calculate the production rate of oxides of nitrogen (non-linearly dependent on concentrations and temperature).

• Some analysts who are aware of the need to know the PDFs satisfy their consciences by guessing their shapes (e.g. beta-function, or "clipped gaussian"); but none have ever been able to prove that their guesses are correct.

• PDFs calculated from a physically plausible hypothesis must be better than any guess.
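The radiative-emission example makes the point numerically. A sketch with an invented two-fluid population shows how badly the mean temperature alone predicts the mean of temperature**4:

```python
# A two-fluid population at one location: half the fluid at 300 K, half
# at 1500 K (invented values).  Emission varies as temperature**4, so the
# average must be formed over the PDF, not from the mean temperature.
fractions = [0.5, 0.5]
temps_K = [300.0, 1500.0]

mean_T = sum(f * T for f, T in zip(fractions, temps_K))        # 900 K
mean_of_T4 = sum(f * T**4 for f, T in zip(fractions, temps_K))
T4_of_mean = mean_T**4

print(mean_of_T4 / T4_of_mean)   # nearly a factor of four: the guess
                                 # from the mean T alone is far too low
```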

1.3 Why not get PDFs from "weighted averages"?

• It is possible, in principle, to reconstruct a curve from knowledge of a sufficient number of suitably weighted averages.

• Therefore, if:
• equations existed which enabled a sufficient number of such averages to be computed, and
• these equations could be solved with sufficient accuracy, and
• the equations had a sound physical basis,
then at least one-dimensional PDFs could be obtained.

• But:
• actual PDF shapes vary so widely that the equations would have to be very numerous, in order to express their essential features;
• the computational expense would therefore be horrendous;
• the physical basis of such equations as exist (e.g. for Reynolds stresses) is insecure.

• This approach therefore appears to be impracticable.
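The difficulty can be made concrete with a toy counter-example (invented numbers): two discretized populations sharing the same mean and variance, yet differing in higher moments, so that any nonlinearly-weighted average differs between them:

```python
# Two discretized one-dimensional PDFs with identical mean and variance
# but very different shapes; the values and weights are invented.
pop_a = {-1.0: 0.5, 1.0: 0.5}                    # two-spike population
pop_b = {-2.0: 0.125, 0.0: 0.75, 2.0: 0.125}     # peaked, with rare extremes

def moment(pop, n):
    """PDF-weighted average of x**n."""
    return sum(f * x**n for x, f in pop.items())

for pop in (pop_a, pop_b):
    print(moment(pop, 1), moment(pop, 2), moment(pop, 4))
# Both have mean 0 and variance 1, yet their fourth moments are 1 and 4:
# a few statistical averages cannot pin the PDF down.
```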

### 2. The colliding-fluid-fragments model of Reynolds and Prandtl

• Osborne Reynolds in 1874 explained observations about friction and heat transfer between fluid streams and solid walls by postulating that fragments of fluid collided with the walls and were brought thereby to kinetic and thermal equilibrium with them.
This is the conceptual basis of the so-called "Reynolds Analogy".

• Ludwig Prandtl in 1925 explained observations about shear stress and heat transfer within fluids by postulating a similar collision and equalization between fluid fragments emanating from locations of diverse average velocity and temperature.
This is the conceptual basis of the so-called "mixing-length hypothesis".

• The conceptual basis of the multi-fluid model is also the collision of fluid fragments; but, instead of fully merging into one another, they enjoy only a brief encounter; and when they separate, they leave offspring behind them which are intermediate in properties between those of the parents.

• The following picture illustrates this.

• These encounters change the population distribution in a calculable manner. Allowing for all possible encounters, i.e. evaluating a "collision integral", enables the PDFs to be computed.

• What happens in a single brief encounter is easy to calculate and display. The diagram shows contours of constant velocity difference on a distance-time plot. Time is vertical and distance, normal to the contact surface of the fluid fragments, is horizontal.

• The longer the duration of the encounter, the thicker becomes the boundary layer between the two fragments, and therefore the greater the amount of "offspring material".

• Arguments presented in a recent article have shown that, at high Reynolds numbers, the encounters are "brief", in the sense that the boundary layer is typically much smaller than the size of the colliding fragment, at the time when the inter-mingling is interrupted by the next collision.

• At low Reynolds numbers, on the other hand, the picture looks different, as shown here.

• It is of course possible to work out exactly the consequences of many kinds of encounter, for example that between hot burned gas and cold unburned combustible gas:
1. at high Reynolds number,
2. and at low.

• It is thus possible to work out a complete "encounter theory", and to predict how the collisions between fragments of all members of the fluid population affect the development of the fluid-population distributions, i.e. the PDFs.

• Fortunately, the brevity of the high-Reynolds-number encounters entails that only molecular-diffusion interactions have to be taken into account.
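A caricature of this encounter theory can be written in a few lines of code: colliding fragments of fluids i and j leave offspring in the box midway between them. The coupling rate, time step and midpoint rule below are illustrative assumptions, standing in for the full collision integral:

```python
import numpy as np

# Toy population grid: 11 fluids distinguished by an attribute c in [0, 1].
n_fluids = 11
c = np.linspace(0.0, 1.0, n_fluids)
m = np.zeros(n_fluids)
m[0], m[-1] = 0.5, 0.5            # start from a two-spike population

rate, dt = 1.0, 0.1               # invented coupling rate and time step
for _ in range(50):
    dm = np.zeros(n_fluids)
    for i in range(n_fluids):
        for j in range(n_fluids):
            coll = rate * m[i] * m[j] * dt   # encounter frequency ~ m_i * m_j
            k = (i + j) // 2                 # offspring: midpoint attribute
            dm[i] -= 0.5 * coll              # each parent donates half of
            dm[j] -= 0.5 * coll              # the offspring mass...
            dm[k] += coll                    # ...which appears in box k
    m += dm
print(m)   # the spikes erode and intermediate fluids appear;
           # the total mass is conserved exactly
```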

### 3. The "eddy-break-up" model; a small step in the right direction

• The "eddy-break-up" (EBU) model, first published in 1971, involved regarding a turbulent burning mixture as comprising inter-mingled fragments of just two gases, namely:
• a completely unburned mixture of fuel and oxidant; and
• the completely burned products of its combustion.

• It was recognised that, at the interfaces between the two sets of fragments, thin layers of gas in various stages of incomplete combustion must exist.

However, the volume occupied by these gases was regarded as small: so the only important consequence of their existence was the chemical transformation (fuel + air --> products) which took place in them.

• The rate of that transformation, per unit volume of the total space, was treated as being proportional to:

density * mbu * mub * epsilon / k

where:
• mbu represents the mass fraction of burned gas,
• mub represents the mass fraction of unburned gas,
• epsilon represents the volumetric rate of dissipation of the kinetic energy of turbulence, and
• k the kinetic energy itself.

• The EBU rate formula has been extensively used but little examined. Only in the recent article referred to above has density * epsilon / k been connected clearly with the rate at which interface material results from encounters between fragments, and mbu * mub been identified as the proportion of all collisions which involve the two fluids in question.

• Moreover, the fluid population envisaged by EBU was extremely crude, being composed of only two constituents; but it was not until 1995, it appears, that anyone thought of "refining the population grid" by postulating four fluids!

• However, that second small step was enough to clear the way to the multi-fluid model. Calculations with 100 or more fluids are now routine. Results will be shown below.
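For concreteness, the EBU rate expression above can be evaluated directly. The input numbers, and the unit model constant, are invented for illustration only:

```python
# Direct evaluation of the EBU volumetric reaction rate,
#   rate = density * mbu * mub * epsilon / k,
# with an explicit model constant c_ebu added (set here to 1.0; the
# published model carries an empirical constant of order unity).
def ebu_rate(density, m_burned, m_unburned, epsilon, k, c_ebu=1.0):
    return c_ebu * density * m_burned * m_unburned * epsilon / k

# Invented inputs: density 1.2 kg/m^3, 30% burned, 70% unburned,
# turbulence frequency epsilon/k = 100 per second.
print(ebu_rate(1.2, 0.3, 0.7, 100.0, 1.0))   # 25.2, in kg/(m^3 s)
```

Note that the rate vanishes when either fluid is absent, as it must: combustion requires encounters between burned and unburned fragments.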

### 4. The PDF-transport model of 1982 and the two-fluid model of 1987

• As early as 1974, Dopazo and O'Brien formulated differential equations whose solution would be the PDF field. However, it was left to Pope to provide the first numerical solutions, in 1982.

Unfortunately (in the author's opinion), he chose to employ the Monte Carlo method of solution, as have all his followers. The computational expense of the method has proved to be a serious deterrent to its widespread use.

• At around the same time, the author was exploring a different avenue, namely the development of a two-fluid model similar to those which were then being employed for two-phase flow. He thought of it as being:

"what Prandtl would have done, to further the colliding-fragments model, if he had possessed the computational tools."

This had some success, particularly in explaining and simulating the phenomenon of "unmixing". However, it involved the solution of two sets of Navier-Stokes equations; and it never "caught on".

• Both of these approaches possess merits: the former does at least aim at the right target, namely PDF calculation; and the latter, although its PDF is a crude "two-spike" one, can simulate real phenomena about which popular models such as k-epsilon have nothing to say.

• MFM can be regarded as a logical extension of the two-fluid model; alternatively it may be looked on as PDF-transport without Monte Carlo but with the additional merits to be described below.
It may therefore perhaps achieve greater popularity than either.

### 5. The Multi-Fluid Model (MFM), and how it differs from Monte Carlo PDF-transport (MCPT)

1. MFM focusses attention on discretized PDFs. It produces "battlement-shaped" histograms, whereas MCPT produces a cloud of points, through which one may be able to draw a curve.

2. The fineness of MFM discretization is chosen by the analyst, who may test its adequacy by grid-refinement. Sometimes an extremely coarse population grid will suffice, as, for example, in this reactor study.

3. MFM does not need to have the same number of fluids at all points in the domain of study. In a combustor simulation, a single fluid will often suffice over a large proportion of the volume. An algorithm can be devised for dynamically determining the number of fluids needed to provide a given accuracy.

4. Population grids can thus be "unstructured" and "self-adaptive", exploiting experience gained by CFD experts with space and time grids.

5. There appear to be no economising counterparts to points 2, 3 and 4 in MCPT.

6. Because the local mass fraction of each fluid is a calculated and accessible variable, MFM allows "micro-mixing hypotheses" to be investigated which are more sophisticated than any formulated by MCPT practitioners.

7. MFM distinguishes between (what the analyst chooses as) population-distinguishing attributes (PDAs) and continuously-varying attributes (CVAs), for example, in a combustor simulation:
• PDAs: (1) fuel/air ratio; (2) unburned-fuel mass fraction.
• CVAs: (1) temperature; (2) concentrations of chemical species; (3) velocity components.

MCPT appears to enjoy no such freedom.

8. MFM fits easily into conventional finite-volume-type solution algorithms, whereas MCPT requires, in addition, the Monte-Carlo apparatus and methodology.

9. MFM concepts can be described rather easily in words, whereas (it appears) MCPT demands a daunting display of mathematical symbols.

10. The computer expense associated with MFM is of the same order of magnitude as that associated with the hydrodynamics in a typical CFD application.
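The population-grid-refinement test of point 2 can be sketched numerically. The underlying triangular population and the toy reaction rate c*(1-c) are invented; the point is only that successive refinements change the nonlinear mean less and less:

```python
import numpy as np

def mean_rate(n_fluids):
    """PDF-weighted mean of a toy reaction rate c*(1-c) over a population
    of n_fluids, assuming an underlying triangular distribution."""
    c = (np.arange(n_fluids) + 0.5) / n_fluids   # box-centre attribute values
    w = np.minimum(c, 1.0 - c)                   # triangular population weights
    w /= w.sum()
    return float(np.sum(w * c * (1.0 - c)))

for n in (2, 4, 8, 16, 32):
    print(n, mean_rate(n))
# Each halving of the box width alters the answer less than the one
# before: that is population-grid independence.
```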

### 6. Kolmogorov's "bright idea"

Kolmogorov's 1942 paper said, in effect:
"Although we really want to know much more (eg the PDFs), perhaps we can get away with calculating a few statistical quantities"

The turbulence-modelling world has followed him.

Kolmogorov chose the energy, k, and the "frequency", epsilon/k, as his variables, as did Wilcox much later.

Particularly since the late 1960s, many other choices have been made, the most popular being k and epsilon; but all modellers have shared Kolmogorov's hope: that "a few statistical quantities" will suffice.

However, for reasons explained in section 1, they do not suffice, and never will. PDFs are what we must have; and MFM enables us to get them economically.

### 7. Applications of MFM

Extracts will now be presented from earlier lectures by the author.

### 7.1 The plane mixing layer

This concerns the first-ever simulation of a much-studied turbulent flow which does not employ one of the "classical" turbulence-model approaches.

### 7.2 The stirred reactor

This concerns a large three-dimensional transient flow simulation, to which introduction of the multi-fluid model added little computational expense but much valuable insight.

In this case, the k-epsilon turbulence model is used for the hydrodynamical part of the calculation, showing that MFM easily co-exists with conventional models.

### 7.3 The gas-turbine combustor

This recent lecture shows how the predicted smoke-generation rate in a three-dimensional steady-flow combustor differs considerably according to whether the concentration fluctuations are or are not taken into account.

Also reported are the computer times and how they vary with the number of fluids employed, together with a population-grid-independence study.

### 7.4 Future developments

Clicking here leads to the final section of a 1998 lecture on MFM.

This sets out what is, in essence, a multi-man-year program of research. This, it is argued, could beneficially transform the capabilities of engineers and applied scientists to simulate turbulent-flow phenomena realistically.

However, it recognises that a formidable obstacle stands in the way of such an enterprise, namely the strong psychological hold which Kolmogorov's "bright idea" of 1942 still exerts.

Loosening that hold is one intent of the present lecture.

Will The Isaac Newton Institute assist?

Or must the world wait for The Einstein Institute to take an interest in turbulence?

### 8. References

C Dopazo and EE O'Brien (1974)
Acta Astronautica vol 1, p1239
AN Kolmogorov (1942)
"Equations of motion of an incompressible turbulent fluid"; Izv Akad Nauk SSSR Ser Phys VI No 1-2, p56
SB Pope (1982)
Combustion Science and Technology vol 28, p131
O Reynolds (1874)
"On the extent and action of the heating surface of steam boilers"; Proc. Manchester Lit Phil Soc, vol 8
DB Spalding (1971)
"Mixing and chemical reaction in confined turbulent flames"; 13th International Symposium on Combustion, pp 649-657, The Combustion Institute
DB Spalding (1987)
"A turbulence model for buoyant and combusting flows"; Int. J. for Num. Meth. in Engg., vol 24, pp 1-23
DB Spalding (1995a)
"Models of turbulent combustion"; Proc. 2nd Colloquium on Process Simulation, pp 1-15, Helsinki University of Technology, Espoo, Finland
DB Spalding (1999)
"Connexions between the Multi-Fluid and Flamelet models of turbulent combustion"; www.cham.co.uk; shortcuts; MFM
DC Wilcox (1993)
"Turbulence modelling for CFD", DCW Industries, La Canada, California