x=10; X = 10+Log(10) = 11
x=11; X = 10+Log(11) = 11.0413926852
x=11.0413926852; X = 10+Log(11.0413926852) = 11.0430238558
and so on…
After a few more iterations I get 11.0430906367.
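That substitution loop is easy to sketch in Python (the thread works in VB/Excel and on a calculator, so this is purely illustrative; Log here is base 10, matching the numbers above):

```python
import math

# Keep feeding the output back in as the next guess: X = 10 + Log(X)
x = 10.0                     # first guess, as in the post
for _ in range(30):
    x = 10 + math.log10(x)
# x is now 11.0430906366728... to full double precision
```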

To add to my last post…
You can also eliminate a couple of guessing steps with some logic. Assuming we are dealing with positive real numbers here, we know that X is greater than 10 (because you add 10 to Log(X)). Log(10)=1 and Log(100)=2, so it should be somewhere between 10 and 100, but closer to 10. By the same logic I can see that it will be greater than 11 [since 10+Log(10)=11], and very close to 11.
Maybe my first guess should be 11 or slightly above.
Again, this will only eliminate the first couple of guessing steps, so it is up to you whether you use the deductive reasoning, or whether it is worth saving the brain power and just guessing.

There must also be a solution near to X = 0. Unfortunately, plugging that into the equation gives X = -∞.

We must have that Log(X) ≈ -10 => X ≈ 10^{-10} = 0.0000000001. If we try that (or any value less than the correct value), the next value is negative – so that’s hopeless. If we try any value larger than the correct value, we eventually get DP’s 11.043…

However, as is often the case when that happens, rearranging the equation to get X = 10^{-10} 10^{X} (say) and plugging in X = 0.0000000001 gives 0.00000000010000000002302585093259140589562 and plugging that in gives 0.00000000010000000002302585093789330400793. I end up with 0.00000000010000000002302585093789330400915
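The rearranged form is just as easy to iterate; a small Python sketch (illustrative only, the post used a calculator):

```python
# X = 10^-10 * 10^X homes in on the tiny root instead,
# starting from the guess 0.0000000001 used in the post.
x = 1e-10
for _ in range(10):
    x = 10**-10 * 10**x
# x is now about 1.0000000002302585e-10, matching the digits above
```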

Hi Ragknot. Neither of us has provided a mathematical solution. We have only provided high-accuracy numerical approximations.

There is a non-elementary function, the Lambert W function, that provides the exact solution(s). I used the info in example 1 on that page to provide an input to Wolframalpha. See exact solution.

Apparently, my “solution” is the principal value solution (LOL – I hadn’t expected that).

I expect that there are infinitely many solutions, but all the others will be complex (numbers).

I also plugged the original equation into Wolframalpha to get: info and pictures. I also note that it rearranged the equation in terms of the Lambert W function – I could have saved some sweat (above) if I’d noticed that before.

PS Wolframalpha uses Log(x) for the natural log of x, and Log(b,x) for the log of x base b.

I checked out the URL to “exact solution”, but the equation says LOG(10) not Log(X). So whatever the Log(X) is, 10 is added to get the X again. And the correct X must be positive for the Darcy Friction Factor.
So I get this, but to only 30 decimals.

But for the engineering design, the other variables can’t be known to more than 3 decimal places. Like the pipe distance might be 1000 meters, but to the nearest whole meter, not 1000.0002 meters.
So you might be right for Math (I did not see it), but for engineering design, “11.04” would be good enough for most designs.

Help me see your understanding…. I have a lot of confidence in you, Chris.

How did I get “11.043090636672833920160199501041”?

With Windows 7 Scientific calculator.
Step 1 enter any number
Step 2 click LOG
Step 3 Click Plus 10 then Equals

Doing steps 2 and 3 over and over until the Equals stops changing, you will have 30 decimals of “accuracy”, but 99.99% of the time, 11.04 would be good enough.

Hi ragknot. I did mention that, like VB, wolframalpha takes Log(x) to be base e. The LambertW function is based on e as well. The Log(10) (base e) is simply a conversion factor. It’s a shame that wolframalpha and VB break the usual convention that Log should imply base 10, and Ln should imply base e. But we’ve got to live with that.

I used the result in the second half of example 1 in the Wiki link to the LambertW function. It’s crucial that you read at least the introduction to the Wiki, and check out the last half of example 1.

Your equation may be rewritten as: 10^{x-10} = x. Comparing with Wiki’s
p^{ax+b} = cx+d we see p = 10, a = 1, b = -10, c = 1, d = 0. So we get
the exact principal solution is x = -W(-Ln(10) 10^{-10})/Ln(10) and of course I used Log rather than Ln to keep wolframalpha happy.
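If you want to check that LambertW formula without Wolframalpha, here is a quick Python sketch (the thread uses VB/Excel, so this is purely illustrative). The `lambert_w` below is a hand-rolled Newton iteration on w·e^w = z, not a library call; the 11.043… root comes out of the k = -1 branch, while the principal branch gives the tiny root near 10^-10:

```python
import math

def lambert_w(z, branch=0):
    # Hand-rolled Newton iteration for w*e^w = z (illustrative only).
    if branch == -1:                       # valid for -1/e < z < 0
        w = math.log(-z) - math.log(-math.log(-z))
    else:                                  # principal branch; fine for small |z|
        w = z if abs(z) < 1 else math.log(z)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < 1e-16 * max(1.0, abs(w)):
            break
    return w

ln10 = math.log(10)
z = -ln10 * 10**-10
big_root = -lambert_w(z, -1) / ln10    # the 11.0430906367... root
tiny_root = -lambert_w(z, 0) / ln10    # the tiny root near 10**-10
```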

The LambertW function is multivalued. It can also produce the 11.04… result. I’ve just worked out how to do that: the other exact real solution

PS. Thanks for the problem. It caused me to read up on the LambertW function. I hadn’t previously realised how important the function was. I wouldn’t be surprised if it started to appear on fancy scientific calculators.

One other thing is what I call the “funnel”. If you plot each result of the loop as Y, and the step number as X, then try a few other initial guesses and plot those on the same graph, you should see that no matter what you guessed, each loop takes you very quickly to a better solution. I did hundreds of tests and saw that each loop increases the accuracy by an average of 2 decimal places. It would be smart to add an “Abs()” because the answer must be positive.

DP?:
You worked some to figure out an initial step. That’s not bad, but it does not make much difference. Compare two initial guesses, 1 vs 1000. The results at loop 1 would be 10 vs 13, loop 2 gives 11 vs 11.11, and the third loop gives 11.041… vs 11.046…

And it was Chris who showed me that. I don’t disagree with the LambertW method, but the way I do it gets 15 decimals of accuracy in an average of 7 steps, and 15 decimals is much more than is required in almost all engineering designs for the Colebrook equation. I know that if Excel could do 100 decimal places my method would give 100 decimals.

Hi ragknot. My “initial” step was simply to convert your equation into a “standard” form. The next step was the final step.

However, I’ll admit to trickery. The LambertW function itself cannot be calculated exactly in a finite number of steps. But that applies to all the transcendental functions.

I think the distinction between a mathematical (i.e. exact) solution and an engineering (i.e. approximate) solution is subtle when dealing with such functions.

I guess the most famous (at least 3400 years old) recursive algorithm is the one for finding the square root of a number n: x_{1} = (x_{0} + n/x_{0})/2. E.g. if n = 2, try x_{0} = 1, then x_{1} = 1.5. Use that to get 1.41666…, then 1.41421…, then 1.414213… etc. You can do this on the Windows calculator as follows.
First enter [1] and press [MS] (memory store). Then repeat the following steps:
[+] [2] [/] [MR] [=] [/] [2] [=] [MS], where I’ve used [] to indicate a single keypress.
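The same recursion in Python, for anyone who wants to check the digits (illustrative; the post uses the Windows calculator):

```python
# Heron's recursion x1 = (x0 + n/x0)/2, for n = 2 starting from x0 = 1.
n, x = 2.0, 1.0
for _ in range(6):
    x = (x + n / x) / 2
# x is now sqrt(2) = 1.41421356237309... to full double precision
```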

Chris, I get that, but I wonder about parts. You talked about “square root” and I think I know what you mean. But what I would like to know is: in Excel, how accurate is the Log function? If Excel’s internal accuracy for Log is 15 decimals, then the recursive algorithm could not go further than 15.
Since I assume the Math version you used is more than 15 decimals, it could also be used to take the recursive algorithm further.

Normal engineering needs are much lower than 15 decimals, so Excel is OK, but to test whether the recursive algorithm is right, it would be good to go beyond 15.

Can you use that math type you used to test that recursive algorithm?

Hi ragknot. Wolframalpha (and Mathematica) can work to an arbitrary number of significant figures. I think you’re right in that Excel/VB only work to about 15 significant digits.

Mathematica can display Pi to 1 million places – it takes about 2 seconds to do it on my PC. It takes over 30 seconds to do it to 10 million places though. Wolframalpha will restrict the output on the freebie site.

Provided the numbers used don’t include decimal points (for input), Mathematica (and wolframalpha) try to give exact answers (as well as decimal approximations).

I don’t know how it does it, but for (intermediate) calculations e.g. LambertW(10) would be regarded as an exact value – as would Pi and e etc. It’s a fantastic piece of software engineering.

I know about lots of decimal places for Pi, and once figured out how it was done. But since Pi is only ONE set of decimals, I understand why it was done. But LOG cannot have a single set of decimals like Pi.

Some people have asked me how my method can be tested, and I always said that if the computation makes the left side of the equation equal the right side, then it must be true. But some guys say “it might only work for a few specific variables”, so I show how to use RANDOM values for the Re and Rr. I have tested over 10,000 different random values and it always works.

I posted my method on Wikipedia and Wikipedia removed it because I had to prove it by pointing them to a published source. But I have a logon with Wikipedia and I added my web page URL to a Wikipedia page talking about the Lambert method.

If you find any corrections on my Colebrook-White web page, I would be glad to hear from you.

Solving..
The Colebrook equation used to be solved numerically due to its implicit nature. Recently, the Lambert W function has been employed to obtain an explicit reformulation of the Colebrook equation.[4]

Hi slavy. Even though I don’t really understand its relevance, thanks for that link. In case anyone is curious, d(x,y) is the “distance” between the points x and y, but the notion of “distance” in Banach spaces is more general than that of the usual Euclidean space.

Hi ragknot. Long reply here. You’ll be most interested in the last half of it.

Of course Pi is a very important number, and so knowing its value to a very high precision (in decimal) is inevitable. Regardless of the base of the log, if x is an exact value, then so is log(x). More relevantly, if x is an exact value, then so is W(x), where W() is the LambertW function.

However, just because a number is exact, doesn’t mean we can calculate it exactly. Clearly no irrational number can be calculated exactly.

The values produced by DP and myself are not true/exact solutions; they are approximations only. In the case of (at least) one approximation of the Colebrook-White equation, it is possible to find an exact solution in terms of the W function. An exact solution, in terms of the LambertW function, exists for the general form:
x = a log(b + cx) {NB I accidentally wrote c/x originally}

Just for completeness, W() is defined by: z = W(z) e^{W(z)}, where z is a complex number and e = 2.71828… i.e. W() is only defined implicitly, but exactly. There is no known (or perhaps even possible) way to write W() in terms of (a finite number of) elementary functions.

The trick that DP and I employed is well-known to the mathematical community (I learnt it on a maths course), as is the behaviour of such algorithms.

—
As to the reliability of the method: crudely put, for a recursion x → f(x), where f is a real continuous monotonic function, if |f’(x)| < 1 near the fixed point, then the recursion will converge, where |f’(x)| is the absolute value of the derivative of f().

From Wiki, and using obvious substitutions for the variables, the Colebrook-White equation (CWE from now on) is:
1/√f = -2 log( ε/(3.7D) + 2.51/(R√f)), where the log is base 10.

Substitute x for 1/√f, a for 2/ln(10) ≈ 0.868590, b for ε/(3.7D) and c for 2.51/R (so that 2.51/(R√f) = cx), and the CWE becomes: x = – a ln(b + cx), where ln() is the natural log (i.e. base e) and we may take it that a,b,c,x > 0.

The derivative of the RHS is -ac/(b + cx). We need ac/(b + cx) < 1. The “-” sign simply means that the recursion will oscillate about the limit value, hence the funnel that you mention. If the derivative had a positive value, the method would still work, but there wouldn’t be a funnel.

I expect, that as a matter of luck, the sorts of values that a,b,c and x have in practice, are such that ac/(b + cx) < 1.

It happens that the well-known Newton-Raphson technique only converges if the derivative is < 1 also. So both methods will succeed or fail together.

In the worst case, x ≈ 0, so it is sufficient (but not necessary) that ac/b < 1 for the method(s) to work. If ac/b > 1, then we need both the initial guess and the final value of x to be such that ac/(b + cx) < 1.
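As a quick numeric sanity check of that contraction factor (a Python sketch using the Test1 values Rr = .0115, Re = 13614 that come up later in the thread; x = 4.8 is my rough value of 1/√f for those inputs, an assumption for illustration):

```python
import math

a = 2 / math.log(10)     # 0.868590...
b = 0.0115 / 3.7         # Rr/3.7
c = 2.51 / 13614         # 2.51/Re
x = 4.8                  # roughly 1/sqrt(f) for these inputs
factor = a * c / (b + c * x)
# factor comes out around 0.04, comfortably below 1,
# which is why the simple recursion converges so quickly here
```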

Rr= ε/D
2.51/(R√f)= 2.51/R * 1/√f
If we say X= 1/√f then
X=-2 log( Rr/3.7 + 2.51/R * X)
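That rearranged loop is straightforward to sketch in Python (the thread uses Excel/VB; this is just for checking). The test values Rr = .0115, Re = 13614 are the Test1 pair quoted later in the thread:

```python
import math

def colebrook_x(Rr, Re, x=5.0, tol=1e-14):
    # Repeated substitution on X = -2*Log10(Rr/3.7 + 2.51/Re * X)
    while True:
        x_new = -2 * math.log10(Rr / 3.7 + 2.51 / Re * x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

X = colebrook_x(0.0115, 13614)
f = 1 / X**2     # Darcy friction factor, since X = 1/sqrt(f)
```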

Engineering vs. Math
What is ε/D?
Rr= ε/D where ε is the actual roughness, D is the inside diameter of a pipe.
Example: The “roughness” is 0.005 of inch and the inside diameter is 10 inches
so ε is 0.005 inches and D is 10 inches, so the relative roughness is 0.005/10
so Rr is 0.0005.
But ε cannot be “exactly” 0.005 inches, because it depends on the material, the method of building the pipe, the age of the pipe, and the fluid flow. Does the use of the pipe make it smoother or rougher? But for the design of the friction loss it is an approximation that the relative roughness is 0.0005 (no units).
Also the diameter is not “exactly” 10 inches, because temperature, pressure and fluid flow will add or subtract a small amount of diameter.

What is R?
R is the Reynolds Number, which I call “Re”; it has no units. It is computed from several variables such as the temperature, viscosity, speed, and a few other things about the fluid. The Re will be somewhere between 2000 and maybe 1,000,000. And how exact will it be? Well, even if it was right to zero decimals at a specific time and date at a certain location, it will change as conditions change and the earth rotates.

So engineers should know that Rr and Re are not exact and will change as the world changes the many variables used. So “f” (the friction loss) will change, so should it be exact? Most engineering designers should know that the Darcy friction will change slightly because all the variables used will be changing also.

Specific kinds of designs should require the friction loss to be accurate to at least a determined approximation. Eight decimal places will be OK for the vast majority of designs.

Chris,
I am not disagreeing with you, I am only laughing. I see a large difference between Math and Engineering. Of course the Math needs to be true, but in Engineering, building things for REAL objects in the REAL world, Engineers must know that things change. For figuring out the friction factor of a fluid flowing through a pipeline there are lots of factors that will not be “exact”. The Rr and Re in the Colebrook-White equation will change, and for a good design they should be expressed as what they might be. Knowing that the other variables will change shows good engineers why DarcyF to 20 decimals is not necessary, but 8 decimals could help. I know some DarcyF approximations are good to only 3 decimals, and a couple are better than 10 decimals, but as the Rr and Re change, their DarcyF computation does not stay nearly as accurate as my method. Since you helped me, I can now evaluate the accuracy of each type of approximation.

Chris,
You said there could be no solution for this, which I copied:
x = a log(b + c/x)

Well, that is true, but I think you have a slight error.
This could mean you would compute an error for X, because X must be positive (the friction loss must be positive). If you add an Abs to the right side, it could probably be computed.

Another thing: to be like the Colebrook equation, the left side should have been 1/x.

You should have written it like this.

X = -2 * Log(Rr/3.7+2.51/Re*X)

And there are some other things. Rr will have to be less than 1/2, because the roughness cannot be more than 1/2 the diameter; if Rr were 1/2 that would leave no inside diameter. Also Re must be more than 2000, because if the Reynolds Number < 2000 then DarcyF = 64 / Reynolds Number, not the Colebrook equation.

Also, the section…

Log(Rr/3.7+2.51/Re*X)

must be negative because it will be multiplied by “-2”
to get a positive number. In case the initial guess was a negative number, I made my solution like this:

X = Abs(2 * Log(a + b * X ))

And if you convert the X to 1/√f you should see it like the Colebrook equation, where “a” would be Rr/3.7 and “b” would be 2.51/Re, where Rr and Re have limited values.

Hi ragknot. What baffles me is why people bother making approximations when the full equation is so easy to number crunch.

In view of the nature of the CW equation, I’d have thought that two or three significant figures would suffice.

In quantum physics, a single “constant” has been measured to about 15 significant figures, and that is considered to be a stunning achievement. It was done to test Quantum Electro-Dynamics (QED) which had been used to calculate the value. QED got it right.

I doubt that precision engineers work to as many as 5 or 6 significant figures, except in highly extraordinary circumstances.

I am shocked that, as you say on your blog, people don’t immediately understand the method, or that they have any doubts about it. What I put in the last half of my last post is stuff that I’d expect any mathematically aware engineer to know. I’m pretty sure that I learnt the technique as part of an A-level maths course – i.e. pre-university and at about the same time that I learnt about the Newton-Raphson method.

Hi ragknot. Posts crossed (i.e. I hadn’t seen your last one when I posted mine).

The c/x (in post 20) was a mistake – it should have been cx – I’ve corrected it.

You haven’t fully understood what I wrote. The Abs shouldn’t be necessary, but it might help if the initial guess was such that ac/(b + cx) > 1, even though near the solution ac/(b + cx) < 1.

I’d ditch the Abs and use a large value for the initial guess for x instead.

Stop press: Unless I’ve messed up big-time, the exact solution to the CW equation x = -a ln(b + cx) is:
x = (a c W(e^{b/(a c)}/(a c))-b)/c = a W(e^{b/(a c)}/(a c)) -b/c, where W() is the LambertW function.
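That exact-form solution can be checked numerically; a Python sketch (`lambert_w0` is a hand-rolled Newton iteration on w·e^w = z, not a library function), using the Test1 values Rr = .0115, Re = 13614 quoted later in the thread:

```python
import math

def lambert_w0(z):
    # Principal-branch Lambert W by Newton iteration (z > 0 here).
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e
    for _ in range(50):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < 1e-15 * max(1.0, abs(w)):
            break
    return w

a = 2 / math.log(10)      # from the earlier substitutions
b = 0.0115 / 3.7          # Rr/3.7
c = 2.51 / 13614          # 2.51/Re
x = a * lambert_w0(math.exp(b / (a * c)) / (a * c)) - b / c
f = 1 / x**2              # Darcy friction factor
```

The result agrees with the fixed-point iteration, which is what the formula promises.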

I really don’t get this Engineering vs. Math nonsense! I provided you a link with the general mathematical theory behind that and the theory is pretty simple and as Chris pointed out – well known even in High School! Whenever you have X = F(X) equation, and the function F is contractive (if differentiable, then first derivative less than 1, otherwise |F(x)-F(y)|<|x-y|) then you can apply the Fixed Point Theorem and use an iterative method to very well approximate the solution! In your case F(x)=10+log(x) with F'(x)=1/x, so we can apply the theory only on the interval (1,infinity). Good luck finding the root in (0,1)

Hi slavy. Maths vs Engineering. I agree that the distinction is esoteric. However, ragknot makes claims on his own (non-ToM) blog that the iterative method is the “TRUE” solution – I’m making a hash of saying, “no it isn’t”.

I had found the approximate (post 3) and exact (post 20) solutions in [0,1].

The literature on the CWE incorrectly led me to believe that there is no exact solution for the CWE. I now claim the literature is wrong, and that the exact solution is the one in post 25 above.

However, the exact solution can only be approximated numerically.

Math vs Engineering
Well, “Math” is not bound by the real Earth, but as long as Engineers are working on Earth, their computations are.

A few things, the Darcy friction must be positive.
The absolute roughness divided by the diameter gives the relative roughness; since the units are the same they cancel, and the result can be called Rr.

Let’s say the min Rr is 0.00001 and the max Rr is 0.25

The minimum Reynolds number is 2000, just because if it is less, the Colebrook equation should not be used. I have not studied what the max Reynolds Number might be, but I don’t think it should be over 1 million; let’s say it is 2 million.

Ok, next, let’s compute the minimum and maximum number for X, given the min and max for Rr and Re.

Chris, by “true” solution I mean my solution will give 15 correct decimals for the Colebrook equation if the Rr and Re were right. But real engineers should know that the Rr and Re are “approximate”, because for a safe design Rr and Re should be set high, and then the Darcy Factor will be high for a safe design.

If you think the Darcy Factor has to have thirty-decimal accuracy, then you must not be a good engineer, but maybe a good mathematician.
Almost all accurate pipe designs need 8 or fewer decimals.

Hi ragknot. You seem to be trying to turn this around.

I am (not so) simply saying that you talk about a TRUE solution. Until your last post, that was an undefined term. The way you have used it seemed to imply that you meant it in the mathematical sense. You certainly mean that other people do not have a TRUE solution – well, I say the Newton-Raphson method produces the same result, but possibly not so rapidly. Both methods could yield arbitrary precision. The 15 digits you mention (frequently) is simply the limit imposed by Excel/VB.

You are the one who is obsessed with large numbers of decimal places (yes I have played along with that). I have talked about an exact solution, that isn’t the same idea as unlimited numbers of decimal places.

Ever since you first introduced me to the CW equation, it was clear to me that 4 significant figures is more than adequate – none of the data can be that good, nor can the behaviour of real systems. I’ve stated that clearly above and in previous posts and in private emails.

I have mentioned above and previously, that 15 significant figures (it’s a coincidence that that’s what Excel/VB also limit you to) is at the extreme end of what mankind has been able to measure.

I believe that I have a very good sense of engineering. I also agree with my dad’s sentiments: an engineer is someone who can design a bottle top and a space-craft. I am definitely not a good mathematician.

PS In your last post and in post 23 you have written the equation incorrectly. It should be:
X = -2 * Log(Rr/3.7+2.51*X/Re), where X = 1/√F

You said this…
X = -2 * Log(Rr/3.7+2.51*X/Re), where X = 1/√F

And that is right, except I re-arranged it to put the X at the far left and far right.
You have “2.51*X/Re” and moved the X on purpose to help students … to “2.51/Re*X”. That make no difference except to have X at the far left and far right, just to show students. And once X is found, then “f = 1/X^2″

And if X is solved to be less than 2 or more than 9, then you should recheck to find your error.

Hi ragknot. I stand corrected (although I’m sitting). Excel and VB will interpret it correctly. I’d have written (2.51/Re)*X because 2.51/Re*X might have been interpreted (as it was by me, in a senior moment) as 2.51/(Re*X). As a rule, I add brackets to ensure there is no misinterpretation, or write it as I did, i.e. 2.51*X/Re, then the brackets aren’t necessary.

Test1
Rr=.0115
Re=13614
My solution is 0.0434482966750288
Now the very best public approximation is the Goudar Sonnad approximation.
Its solution is 0.0434482966749311

Which one is more accurate to the Colebrook Equation?
(Use each one as f and see how close the left side = right side)

I just ran almost 15,000 random solutions for each of these. Here is the data for the Goudar-Sonnad approximation.

Hi ragknot. By definition, the CWE is the only one (bar minor variations due to the property of logarithms) that precisely calculates the CWE results.

The real question is which one more accurately represents reality. Because they agree to more than 4 significant figures, I doubt that it is possible to determine.

The CWE is a best fit to empirical data. I’d be surprised if it was generally good to more than a few percent accuracy.

Hi ragknot. CWE is simply an abbreviated way of saying Colebrook-White Equation.

The Goudar–Sonnad equation (GSE from now on) is not a method. It is simply an approximation to the CWE. The reason for making it is that it is both explicit and elementary. From what you say in post 34, it is a very good approximation – easily good enough for engineering purposes.

The CWE is not a method either; it is simply an equation. Its problem is that it is implicit and non-elementary. It would therefore be quite a pain to consider e.g. its derivative in a reasonably simple, explicit and elementary fashion.

Try differentiating both equations. The GSE will be dead easy to do, and will yield an explicit and elementary result. The CWE will be harder to do, and the result will be implicit and non-elementary.

i.e. the GSE is user friendly. The CWE can be considered partially tamed because it is expressible in terms of the LambertW function, and that has been studied by the mathematicians.

Yes, I know Goudar–Sonnad is an approximation. I have plotted it and see that its answers go up and down around the true line of the real answers.
As Re and Rr move up, my solution explicitly gives the right answers, but like I said, Goudar–Sonnad goes up and down around the true values for the Re and Rr.

I thought that maybe CWE was one of those private solutions that a guy would have to buy to use.

Like I said, a private pipe design company in England agreed my solution was right, but they said they did not want it to become public because it would hurt sales of private computations.

Hi ragknot. I don’t understand why that company thought that the (numerical) method should be kept secret. The basic method is well known. I gather that the most common way of numerically solving the equation uses the Newton-Raphson method. The only problem with that method is that it is slightly more difficult to write the code – but it’s not hard to do. It’s certainly not a deal breaker. I could understand that company wanting to be the only one that knew the CWE.

When you first presented me with the CWE (a few years ago), it immediately occurred to me to try the simple method. The form of the CWE screams it out as the way to go.

Glossing over the details, if you plug a guessed value of the friction factor into the RHS of the CWE, the result must be either nearer to or further from the correct value. If it is nearer, then you have a recursion that will get you as close to the correct value as you desire – the only limitation being your computer.

Another take (as per slavy’s fixed point observation) is that if you have the value (of the Darcy friction factor), then plugging it into the RHS of the CWE, must produce the same value.

I’ve just seen that the Wiki now contains a link to your blog. It has the erroneous claim (by you, I deduce) that you have the “simple and true solution”. The claim should be “a (very) simple method” for finding the value of the Darcy friction factor using the CWE.

The only true (i.e. explicit and exact, but non-elementary) solution to the CWE is the one I gave in post 25 (in encoded form). On re-reading the Wiki, I strongly suspect that Brkić had also discovered it, but he went a step or two further to turn it into an elementary (function) approximation. I note that it is good to around 3%, so that’s probably good enough. But, it’s still too hard to do in your head, so I’d stick with the method you have to avoid unnecessary loss of precision.

My quick two cents (wall of text coming) on how I’d solve any of these types of problems quickly (and lazily) that require some arbitrary precision approximation of the solution:

1. Force the equations into the form f(x)=0
e.g. for x= 10 + log(x) we can write it as
f(x)=10+log(x)-x
2. Now we apply some root (zero) finding algorithm
-Super handy to have one or more of these programs in a calculator
A) First choice: Approximation of Newton’s Method
-I like to use a numerical approximation of Newton’s method
- x_new = x_old – f(x_old)/f’(x_old)
-Being lazy I approximate f’(x) with:
f’(x) ~ (f(x+dx)-f(x))/dx
I set dx to something small like 0.001
-keep finding x_new until x_new-x_old is within our tolerance level

-Pros:
-If it’s going to converge it typically does it quickly (if it takes more than 30s I can be fairly sure it’s not going to converge)
-I can usually select the initial x as any arbitrary value, and it will typically still converge if it is going to. (best to look at a graph to find a decent guess)
-Cons:
-Doesn’t always converge
-If there are multiple roots, it doesn’t always find the one you were looking for
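Recipe A, sketched in Python as a check (dx, the starting guess and the tolerance are the values suggested above; illustrative only):

```python
import math

def f(x):
    return 10 + math.log10(x) - x      # forced into the form f(x) = 0

def newton(x_old, dx=0.001, tol=1e-12):
    while True:
        fprime = (f(x_old + dx) - f(x_old)) / dx   # lazy numerical derivative
        x_new = x_old - f(x_old) / fprime
        if abs(x_new - x_old) < tol:
            return x_new
        x_old = x_new

root = newton(1.0)   # deliberately terrible first guess of x = 1
```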

B) Bisection method
Pick some x on the left side of the root
Pick some x on the right side of the root
Their f(x) values must have opposite signs (i.e. one must be + and one must be -)
call the x for f(x) > 0 x_pos
call the x for f(x) < 0 x_neg
x_new=(x_pos+x_neg)/2
if f(x_new) > 0 then it is the new x_pos
if f(x_new) < 0 then it is the new x_neg
Keep finding x_new until f(x_new) is within our tolerance level for 0

3. Results.

A) Newton’s method started at a relatively terrible guess of x=1
converges in 4 iterations to 11.043090636677208

B)Bisection method with dismal guesses of x=1 and x=100
converges in 25 iterations to 11.043090835213661

These methods are fairly standard methods for solving these problems. I learned both of these back in high school in a different century, but they still hold up as a couple of the most useful algorithms I've ever learned.

Those pesky less than signs messed up the Bisection method

Bisection method
Pick some x on the left side of the root
Pick some x on the right side of the root
Their f(x) values must have opposite signs(i.e. one must be + and one must be -)
call the x for f(x) > 0 x_pos
call the x for f(x) < 0 x_neg
x_new=(x_pos+x_neg)/2

if f(x_new) > 0 then it is the new x_pos
if f(x_new) < 0 then it is the new x_neg

Keep finding x_new until f(x) is within our tolerance level for 0

Pros:
-Always converges
-Each guess is better than the last one (easy to kill algorithm if we can see what value it is converging towards)

Cons:
-We need to choose two guesses where one x has f(x) > 0 and another x where f(x) < 0. (Not a big deal but we are being lazy here).
-Takes longer than Newton’s method
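The corrected recipe, sketched in Python with the same deliberately dismal brackets of 1 and 100 (illustrative only):

```python
import math

def f(x):
    return 10 + math.log10(x) - x

x_pos, x_neg = 1.0, 100.0          # f(x_pos) > 0, f(x_neg) < 0
while True:
    x_new = (x_pos + x_neg) / 2
    if abs(f(x_new)) < 1e-10:      # tolerance level for 0
        break
    if f(x_new) > 0:
        x_pos = x_new
    else:
        x_neg = x_new
root = x_new
```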

Hi Cam. I’ve fixed up the pesky < problems. It’s quite weird how the html translator messed up your comment.

Funnily enough, behind the scenes, ragknot and I have been working on the Colebrook White equation. I’ve given ragknot the Newton’s method code for doing it. It converges in 3 or 4 iterations (to 15 sig figs). On a good day, Newton’s method can double the number of sig figs in each iteration (as it converges quadratically).

A minor problem with calculating the derivative numerically is that it doubles the time taken for each iteration.

The family of Colebrook etc. equations can be written (using simple substitutions and converting log base 10 to ln base e) as:
x = k – a ln(b + cx). NB for Colebrook, a,b,c > 0 and k ≥ 0
Rewriting that as: p(x) = a ln(b + cx) + x – k = 0
Then p’(x) = ac/(b + cx) + 1

Now letting x be the current best value, the new best value is (says Newton):
x – p(x)/p’(x) = x – (a ln(b + cx) + x – k)/(ac/(b + cx) + 1). That can be made slightly tidier, i.e.: new x = x – (a ln(b + cx) + x – k)(b + cx)/(ac + b + cx)

I note that only one logarithm has to be found per iteration, and the rest is trivial +, -, * and /. Ragknot has done some testing and so far it looks like at most 4 loops are needed to get 15 sig figs. I believe that he is well happy about that, as it means that high precision is quite cheap in computer time.
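That update formula can be sketched in Python (the actual code exchanged in the thread was VB; this is just a check), using the Test1 values Rr = .0115, Re = 13614 quoted earlier:

```python
import math

# Newton update: new_x = x - (a*ln(b+cx) + x - k)*(b+cx)/(ac + b + cx)
a = 2 / math.log(10)    # 2/ln(10), from the substitutions above
b = 0.0115 / 3.7        # Rr/3.7
c = 2.51 / 13614        # 2.51/Re
k = 0.0                 # k = 0 for Colebrook itself

x = 5.0                 # rough initial guess for 1/sqrt(f)
for _ in range(10):     # converges in well under 10 loops
    y = b + c * x
    x = x - (a * math.log(y) + x - k) * y / (a * c + y)

f = 1 / x**2            # Darcy friction factor
```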

I’m droning on about Colebrook as it’s of real consequence to ragknot.

Unless you are repeating the algorithm thousands of times, the double evaluation of the function, or even doubling the iterations, should be insignificant (it should converge in less than 1 s).

On the other hand, if you are solving the equation thousands of times, you may want to try Halley’s method (just like Newton but it makes use of the 2nd derivative). At first glance it appears the second derivative will be far enough away from 0 to allow this to make some modest time improvements.

Additionally, if you are looking for a real performance gain but don’t want to sacrifice accuracy, use something like Serghides’s solution for the initial guess at f and then refine using the above methods. I would expect it to be so close to the root that you would need only one or two iterations of the above methods to nail the root within the desired tolerance.

Hi Cam. Thanks for the heads-up on Halley’s method. I had a 5 second glance at it, and it looks well worth trying out. So ragknot, you’d better get ready for some new code from me.

With systems of pipes it is possible that the routine might be called many hundreds of times. The whole system will need to be solved iteratively.

It turns out that there is almost no advantage in making a good initial guess – at least, not when using the range of values that occur in piping problems. The calculated initial value is equivalent to doing one iteration, and at best saves 2, but usually only saves 1, and then we’ve broken even.

If you really, really want to speed things up, you may want to consider a 2-D lookup table of precomputed values (using Re and e/d as indexes) for use as initial guesses.

In such a table, since Re covers such a wide range, you could index the Re values by powers of 2. Then you can right-shift the searched-for Re value until your value is < the Re in the table (this is a cheaper method than finding the log of Re).

You should be able to get a linearly interpolated guess in less time than it takes to perform one log operation. The guess in all likelihood would be so close to the solution as to require only 1-2 iterations of Newton’s or Halley’s.

Basically this is a time-memory trade-off to boost speed. The bigger the table, the better the initial guess.

Not sure if you want to go through all of that to get a performance boost though.
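A minimal sketch of that trade-off (Python; the grid choice, the single roughness slice, and the helper names are all my assumptions for illustration, not Cam's actual design). The table is filled once with converged values; at lookup time `bit_length()` stands in for the shift-based power-of-2 search, so no log is needed:

```python
import math

# Precompute x = 1/sqrt(f) on a grid indexed by powers of 2 of Re,
# then linearly interpolate to get an initial guess.
def colebrook_x(Rr, Re, x=5.0):
    # plain fixed-point iteration, used only to fill the table
    for _ in range(50):
        x = -2 * math.log10(Rr / 3.7 + 2.51 * x / Re)
    return x

RR = 0.0005                                  # one roughness slice of the table
EXPS = range(11, 25)                         # Re from 2^11 = 2048 up to 2^24
TABLE = {e: colebrook_x(RR, float(2 ** e)) for e in EXPS}

def initial_guess(Re):
    e = int(Re).bit_length() - 1             # floor(log2(Re)) via bit ops, no log
    lo, hi = TABLE[e], TABLE[e + 1]
    frac = (Re - 2 ** e) / 2 ** e            # position between the two grid points
    return lo + frac * (hi - lo)
```

With this coarse 14-entry slice the guess typically lands within a few hundredths of the root, which is the point: one or two Newton/Halley iterations then finish the job.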

Hi Cam. I’ve sent ragknot a fairly nice (but quick and dirty) bunch of code. I’ve provided both Newton and Halley’s methods so he can compare them. Sadly, Halley didn’t clearly beat Newton. No doubt ragknot will try both out with many values.

Unless I hear otherwise, I shall assume that both methods are considerably better than the cheap-trick way that I showed him a couple of years ago – i.e. the one DP used way back in post 1. I believe that method takes 6–20 loops with the Colebrook equation.

There is a pretty good explicit approximation to the Colebrook equation (due to Haaland). Unfortunately, it also uses a Log. But it may lend itself very nicely to being tabulated, and that table would be one dimensional. There may even be ready made solutions for that.

If ragknot really wants to shave 0.9 iterations off, I might give it a go. But I need data so that I know the range that the table needs to cover.

Hi Cam. Thank you very much for the mention of Halley’s method. I also tried a fourth order Householder method. Initially I thought that Householder was the better one, but after tweaking the initial guess value, Halley had a definite edge over Householder. I hadn’t heard of the Halley or Householder methods before.

I’d better add that the above is true in the context of the Colebrook equation with Rr (= E/Dh) in the range 0.00004 to 0.05 and Reynolds number in the range 2,500 to 10,000,000 (as hinted at in a Wiki).

Despite Halley being the winner, the fact that there are an infinite number of Householder methods (each of higher order) is quite amazing.

If I were writing a serious multi-pipe systems program, I suspect that after each pass I’d remember the results, and use them as initial values for subsequent passes, etc. Perhaps using Haaland for the first run might be a sensible compromise – it might reduce the likelihood of possible instabilities that are the bugbear of high-order iterative methods. I haven’t really thought about solving a system of pipes, but I wouldn’t be surprised if it took at least 20 iterations before getting close enough.

Here’s my code (slightly tweaked for publication here)

Function darcy_hm(ByVal b As Double, ByVal c As Double, _
                  Optional ByVal k As Double = 0) As Double
    ' Uses Halley's method
    Const lim As Double = 10 ^ -15  'I write it this way as it's easy to read
    Dim x_new As Double, x As Double  'x is the "old" value
    Static a As Double                'see comments
    Dim f As Double, fd As Double, fdd As Double  'working f(x), f'(x) and f"(x)
    Dim t As Double                   'useful intermediate value
    Dim nLoops As Integer             'for evaluation purposes
    ' Solve x = k - 2*Log10(b + c*x), where the log is base 10.
    ' Actually return 1/x^2.
    ' Re-write the equation as f(x) = a ln(b + c*x) + x - k = 0
    ' where a = 2/ln(10). NB in VB/Excel ln(x) = Log(x).
    ' For the Colebrook equation, k = 0, b = Rr/3.7 and c = 2.51/Re.
    ' Halley's method (x is the current best value). For f(x) = 0 then:
    '   x_new = x - 2*f(x)*f'(x) / (2*f'(x)^2 - f(x)*f"(x))
    ' where the ' and " mean first and second derivative respectively.
    ' x is the old best estimate so far, and x_new is the new best estimate.
    ' Let f(x) = f = a ln(b + c*x) + x - k = 0
    ' Let t = c/(b + c*x), then dt/dx = -t^2
    ' f'(x) = fd = a*c/(b + c*x) + 1 = a*t + 1
    ' f"(x) = fdd = -a*t^2 = -a*t*t
    ' Plugging that lot into Halley's iteration =>
    '   x_new = x - 2*f*fd / (2*fd*fd - f*fdd)
    nLoops = 0
    If a = 0 Then a = 2 / Log(10)  'a is Static, so this only needs evaluating once
    ' or you could use the literal constant 0.86858896380650365530225783783321
    x_new = 5  'initial value (found by trial and error)
    'Haaland estimate
    'x_new = -1.8 / Log(10) * Log(b ^ 1.11 + 2.75 * c)
    Do
        nLoops = nLoops + 1
        x = x_new
        f = a * Log(b + c * x) + x - k
        t = c / (b + c * x)
        fd = a * t + 1
        fdd = -a * t * t
        x_new = x - 2 * f * fd / (2 * fd * fd - f * fdd)
    Loop Until Abs(x_new - x) < lim * x_new
    darcy_hm = 1 / (x_new * x_new)
End Function

Generally, methods with an order higher than Halley’s aren’t practical. The higher order derivatives are typically very close to 0, and as such eat up computation time while having little to no effect on the result. You also end up with pesky instability issues.


January 22nd, 2013 at 7:37 am

You guess and plug in.

first guess: x=1;

X = 10+Log(1) = 10

x=10; X = 10+Log(10) = 11

x=11; X = 10+Log(11) = 11.0413926852

x=11.0413926852; X = 10+Log(11.0413926852) = 11.0430238558

and so on…

after a few more iterations I get 11.0430906367
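The guess-and-plug loop above is easy to automate. A minimal sketch (Python; the function name is mine):

```python
import math

# The guess-and-plug loop: keep feeding the output of 10 + log10(x)
# back in as the next x.
def funnel(x, loops=40):
    for _ in range(loops):
        x = 10 + math.log10(x)
    return x

x = funnel(1.0)   # first guess x = 1, as in the post
```

Any positive starting guess ends up at the same limit, which is the point made later in the thread about the "funnel".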

January 22nd, 2013 at 7:49 am

To add to my last post…

You can also take out a couple of guessing steps by some logic. Assuming we are dealing with positive real numbers here, we know that X is greater than 10 (because you add 10 to Log(X)). Log(10)=1 and Log(100)=2, so it should be somewhere between 10 and 100, but closer to 10. By logic I can see that it will be greater than 11 [since 10+Log(10)=11], and very close to 11.

Maybe my first guess should be 11 or slightly above.

Again, this will only eliminate the first couple of steps when guessing, so it is up to you whether you use the logical deductive reasoning, or whether it is worth saving the brain power and just guessing.

January 22nd, 2013 at 9:31 am

There must also be a solution near to X = 0. Unfortunately, plugging that into the equation gives X = -∞.

We must have that Log(X) ≈ -10 => X ≈ 10^-10 = 0.0000000001. If we try that (or any value less than the correct value), the next value is negative – so that’s hopeless. If we try any value larger than the correct value, we eventually get DP’s 11.043…

However, as is often the case when that happens, rearranging the equation to get X = 10^-10 * 10^X (say) and plugging in X = 0.0000000001 gives 0.00000000010000000002302585093259140589562, and plugging that in gives 0.00000000010000000002302585093789330400793. I end up with 0.00000000010000000002302585093789330400915

January 22nd, 2013 at 10:32 pm
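The rearranged recursion X = 10^-10 * 10^X can be sketched like this (Python; note that double precision only carries about 17 significant digits, so it cannot reproduce all the digits quoted above):

```python
# The rearranged recursion for the small root: X = 10^-10 * 10^X.
# The direct form X = 10 + log10(X) runs away from here, but this
# rearrangement is a contraction near X ~ 1e-10 and converges in a
# couple of steps.
x = 1e-10
for _ in range(5):
    x = 1e-10 * 10 ** x
```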

DP is exactly right. This is an example I gave of my solution on a web site.

Most math people think this is not right, but I found a way to solve it easily. I named this a funnel, because plotting a graph looks like a funnel.

But like a huge funnel, it pushes you to a specific point. Someone on ToM should have laughed and said it was easy.

January 22nd, 2013 at 11:03 pm

Who showed me? Chris!!!!

http://colebrook-white.blogspot.com

If you look here, you can see the funnel.

This example: X = 10 + Log(X), using a hand calculator.

Our security guard guessed “22″, but being close does not make much difference.

Step 1. Type 22 (or your own guess – 1000, maybe?)

Step 2. press LOG

Step 3. press plus 10 then EQUAL

Step 4. return to step 2 again.

Each loop of the funnel takes you much closer. With almost any number, each loop increases accuracy by about two more decimals. I got this…

X=11.043090636672833920160199501041

But you should get about 11.04 after 2 to 4 loops.

January 22nd, 2013 at 11:27 pm

Chris is right also. So in my VBA programming, since X has to be positive, I use it like this:

X = Abs(10+Log(X))

January 22nd, 2013 at 11:33 pm

Hi Ragknot. Neither of us has provided a mathematical solution. We have only provided high-accuracy numerical approximations.

There is a non-elementary function, the Lambert W function, that provides the exact solution(s). I used the info in example 1 on that page to provide an input to Wolframalpha. See exact solution.

Apparently, my “solution” is the principal value solution (LOL – I hadn’t expected that).

I expect that there are infinitely many solutions, but all the others will be complex (numbers).

I also plugged the original equation into Wolframalpha to get: info and pictures. I also note that it rearranged the equation in terms of the Lambert W function – I could have saved some sweat (above) if I’d noticed that before.

PS Wolframalpha uses Log(x) for the natural log of x, and Log(b,x) for the log of x base b.

January 23rd, 2013 at 10:32 am

Chris,

I checked out the URL to “exact solution”, but the equation says Log(10), not Log(X). So whatever the Log(X) is, 10 is added to get X again. And the correct X must be positive for the Darcy Friction Factor.

So I get this, but to only 30 decimals.

11.043090636672833920160199501041=10+Log(11.043090636672833920160199501041)

But for the engineering design, other variables can’t be more than 3 decimal places. Like the pipe distance might be 1000 meters, but to the nearest whole meter, not 1000.0002 meters.

So you might be right for the Math – I did not see it – but for engineering design, “11.04″ would be good enough for most designs.

Help me see your understanding…. I have a lot of confidence in you, Chris.

January 23rd, 2013 at 10:39 am

How did I get “11.043090636672833920160199501041″?

With Windows 7 Scientific calculator.

Step 1 enter any number

Step 2 click LOG

Step 3 Click Plus 10 then Equals

Doing steps 2 and 3 over and over till the result stops changing, you will have 30 decimals of “accuracy”, but 99.99% of the time, 11.04 would be good enough.

January 24th, 2013 at 2:19 am

Hi ragknot. I did mention that, like VB, wolframalpha takes Log(x) to be base e. The LambertW function is based on e as well. The Log(10) (base e) is simply a conversion factor. It’s a shame that wolframalpha and VB break the usual convention that Log should imply base 10, and Ln should imply base e. But we’ve got to live with that.

I used the result in the second half of example 1 in the Wiki link to the LambertW function. It’s crucial that you read at least the introduction to the Wiki, and check out the last half of example 1.

Your equation may be rewritten as: 10^(x-10) = x. Comparing with Wiki’s p^(ax+b) = cx + d, we see p = 10, a = 1, b = -10, c = 1, d = 0. So we get that the exact principal solution is x = -W(-Ln(10) 10^-10)/Ln(10), and of course I used Log rather than Ln to keep wolframalpha happy.

The LambertW function is multivalued. It can also produce the 11.04… result. I’ve just worked out how to do that:

the other exact real solution
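Both real solutions can be checked numerically. A sketch (Python): `scipy.special.lambertw` would do this directly; the tiny Halley-iteration stand-in below is my own and only handles the two real branches for -1/e < z < 0:

```python
import math

# Both real solutions of x = 10 + log10(x), via x = -W(-ln(10)*10^-10)/ln(10).
# Minimal stand-in for a Lambert W routine: solve w*e^w = z by Halley's
# iteration (real branches only, -1/e < z < 0).
def lambert_w(z, branch=0):
    if branch == -1:
        w = math.log(-z) - math.log(-math.log(-z))  # standard W_-1 starting point
    else:
        w = z                                       # W_0 ~ z for small |z|
    for _ in range(50):
        ew = math.exp(w)
        f = w * ew - z
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

L = math.log(10)
z = -L * 10 ** -10
x_small = -lambert_w(z, 0) / L    # the principal-value root near 1e-10
x_big = -lambert_w(z, -1) / L     # the other real branch gives 11.043...
```

The principal branch reproduces the tiny root found above, and the W_-1 branch reproduces DP's 11.043…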

January 24th, 2013 at 2:34 am

PS. Thanks for the problem. It caused me to read up on the LambertW function. I hadn’t previously realised how important the function was. I wouldn’t be surprised if it started to appear on fancy scientific calculators.

January 24th, 2013 at 7:14 am

One other thing is what I call the “funnel”. If you plot each result on a loop as Y, and the step number as X, then try a few other initial guesses and plot those on the same graph, you should see that no matter what you guessed, each loop takes you very quickly to a better solution. I did hundreds of tests and saw that each loop increases the accuracy by an average of 2 decimal places. It would be smart to add an “ABS()” because the answer must be positive.

DP?:

You worked some logic to figure out an initial step. That’s not bad, but it does not make much difference. Compare two initial numbers, 1 vs 1000. The results at loop 1 would be 10 vs 13; loop 2 gives 11 vs 11.11. The third loop gives 11.041… vs 11.046…

And it was Chris who showed me that. I don’t disagree with the LambertW method, but the way I do it gets 15-decimal accuracy in an average of 7 steps, and 15 decimals is much more than is required in almost all engineering designs for the Colebrook equation. And I know that if Excel could do 100 decimal places, my method would give 100 decimals.

Thanks for the fun guys.

January 24th, 2013 at 9:14 am

Hi ragknot. My “initial” step was simply to convert your equation into a “standard” form. The next step was the final step.

However, I’ll admit to trickery. The LambertW function itself cannot be calculated exactly in a finite number of steps. But that applies to all the transcendental functions.

I think the distinction between a mathematical (i.e. exact) solution and an engineering (i.e. approximate) solution is subtle when dealing with such functions.

I guess the most famous (3400 years old at least) recursive algorithm is the one for finding the square root of a number n: x_new = (x_old + n/x_old)/2. E.g. if n = 2, try x_old = 1; then x_new = 1.5. Use that to get 1.41666…, then 1.41421…, then 1.414213…, etc.

You can do this on the Windows calculator as follows. First enter [1] and press [MS] (memory store). Then repeat the following steps:

[+] [2] [/] [MR] [=] [/] [2] [=] [MS], where I’ve used [] to indicate a single keypress.
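The recursion itself is a one-liner in code. A minimal sketch (Python; the function name is mine):

```python
# The ancient recursion for sqrt(n): x_new = (x_old + n/x_old) / 2.
# Starting from x = 1 with n = 2 it gives 1.5, 1.41666..., 1.41421...,
# exactly the sequence in the post.
def old_sqrt(n, x=1.0, loops=30):
    for _ in range(loops):
        x = (x + n / x) / 2
    return x
```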

January 24th, 2013 at 7:10 pm

Chris, I get that, but I wonder about parts of it. You talked about “square root” and I think I know what you mean. But what I would like to know is: in Excel, how accurate is the Log function? If Excel’s internal accuracy for Log is 15 decimals, then the recursive algorithm could go no further than 15.

Since I assume the Math version you used has more than 15 decimals, it could also be used for the recursive algorithm to go further.

But since normal engineering needs are much lower than 15 decimals, Excel is OK; but to test whether the recursive algorithm is right, it would be good to go beyond 15.

Can you use that math type you used to test that recursive algorithm?

January 24th, 2013 at 7:16 pm

PS: I had to go back see the name of “Wolfram Alpha”.

Can you use the recursive algorithm in that, so that the LOG goes further than the Log in Excel, for a test?

January 24th, 2013 at 7:32 pm

Hi ragknot. Wolframalpha (and Mathematica) can work to an arbitrary number of significant figures. I think you’re right in that Excel/VB only work to about 15 significant digits.

Mathematica can display Pi to 1 million places – it takes about 2 seconds to do it on my PC. It takes over 30 seconds to do it to 10 million places though. Wolframalpha will restrict the output on the freebie site.

Provided the numbers used don’t include decimal points (for input), Mathematica (and wolframalpha) try to give exact answers (as well as decimal approximations).

I don’t know how it does it, but for (intermediate) calculations e.g. LambertW(10) would be regarded as an exact value – as would Pi and e etc. It’s a fantastic piece of software engineering.

January 24th, 2013 at 8:49 pm

I know about lots of decimal places for Pi, and once figured out how it was done. But since it is only ONE set of decimals, I understand why it was done. But LOG cannot have a single set of decimals like Pi.

Some people have asked me how my method can be tested, and I always said if the computation makes the left side of the equation equal the right side, then it must be true. But some guys say “it might work for a few specific variables”, so I show how to use RANDOM values for the Re and Rr. I have tested over 10,000 different random values and it always works.

I posted my method on Wikipedia, and Wikipedia kicked it off because I had to prove it was good by them reading it in a published place. But I have a logon with Wikipedia, and I added my web page URL in a Wikipedia page talking about the Lambert method.

If you find any corrections on my Colebrook-White web page, I would be glad to hear from you.

January 24th, 2013 at 8:55 pm

On Wikipedia,

http://en.wikipedia.org/wiki/Darcy_friction_factor_formulae

Solving..

The Colebrook equation used to be solved numerically due to its apparent implicit nature. Recently, the Lambert W function has been employed to obtained explicit reformulation of the Colebrook equation.[4]

A simple and true solution can be learned here. http://colebrook-white.blogspot.com

Someone else wrote the first part, I added the last sentence.

January 26th, 2013 at 3:41 am

http://en.wikipedia.org/wiki/Banach_fixed-point_theorem

January 26th, 2013 at 10:54 am

Hi slavy. Even though I don’t really understand its relevance, thanks for that link. In case anyone is curious, d(x,y) is the “distance” between the points x and y, but the notion of “distance” in Banach spaces is more general than that of the usual Euclidean space.

Hi ragknot. Long reply here. You’ll be most interested in the last half of it.

Of course Pi is a very important number, and so knowing its value to a very high precision (in decimal) is inevitable. Regardless of the base of the log, if x is an exact value, then so is log(x). More relevantly, if x is an exact value, then so is W(x), where W() is the LambertW function.

However, just because a number is exact, doesn’t mean we can calculate it exactly. Clearly no irrational number can be calculated exactly.

The values produced by DP and myself are not true/exact solutions, they are approximations only. In the case of (at least) one approximation of the Colebrook-White equation, it is possible to find an exact solution in terms of the W function. An exact solution, in terms of the LambertW function, exists for the general form:

x = a log(b + cx) {NB I accidentally wrote c/x originally}

Just for completeness, W() is defined by: z = W(z) e^W(z), where z is a complex number and e = 2.71828…, i.e. W() is only defined implicitly, but exactly. There is no known (or perhaps even possible) way to write W() in terms of (a finite number of) elementary functions.

The trick that DP and I employed is well-known to the mathematical community (I learnt it on a maths course), as is the behaviour of such algorithms.

—

As to the reliability of the method: crudely put, for any real continuous monotonic function f(x), if |f’(x)| < 1, then the recursion will converge, where |f’(x)| is the absolute value of the derivative of f().

From Wiki, and using obvious substitutions for the variables, the Colebrook-White equation (CWE from now on) is:

1/√f = -2 log( ε/(3.7D) + 2.51/(R√f)), where the log is base 10.

Substituting x for 1/√f, a for 2/ln(10) ≈ 0.868589, b for ε/(3.7D) and c for 2.51/R, the CWE becomes: x = – a ln(b + cx), where ln() is the natural log (i.e. base e) and we may take it that a, b, c, x > 0.

The derivative of the RHS is -ac/(b + cx). We need ac/(b + cx) < 1. The “-” sign simply means that the recursion will oscillate about the limit value, hence the funnel that you mention. If the derivative had a positive value, the method would still work, but there wouldn’t be a funnel.

I expect, that as a matter of luck, the sorts of values that a,b,c and x have in practice, are such that ac/(b + cx) < 1.

It happens that the well-known Newton-Raphson technique only converges if the derivative is < 1 also. So both methods will succeed or fail together.

In the worst case, x ≈ 0, so it is sufficient (but not necessary) that ac/b < 1 for the method(s) to work. If ac/b > 1, then we need that both the initial guess and the final value of x are such that ac/(b + cx) < 1.
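That sufficient condition is easy to check numerically over the sorts of Rr and Re ranges mentioned in this thread. A sketch (Python; the corner values below are illustrative, and the solver is just the simple recursion itself):

```python
import math

# Check the contraction condition a*c/(b + c*x) < 1 for the recursion
# x_new = -a*ln(b + c*x), over illustrative Rr/Re corner cases.
a = 2 / math.log(10)

def contraction_factor(Rr, Re, x):
    b, c = Rr / 3.7, 2.51 / Re
    return a * c / (b + c * x)

def solve(Rr, Re, x=5.0):
    # the simple fixed-point recursion, run to convergence
    for _ in range(200):
        x = -2 * math.log10(Rr / 3.7 + 2.51 * x / Re)
    return x

worst = 0.0
for Rr in (0.00004, 0.0005, 0.05):
    for Re in (2500.0, 1e5, 1e7):
        x = solve(Rr, Re)
        worst = max(worst, contraction_factor(Rr, Re, x))
```

For these ranges the factor stays well below 1 (around 0.2 at worst), which is consistent with the "matter of luck" remark above: the practical values happen to make the simple recursion a contraction.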

January 26th, 2013 at 12:21 pm

1/√f = -2 log( ε/(3.7D) + 2.51/(R√f))

Rr= ε/D

2.51/(R√f)= 2.51/R * 1/√f

If we say X= 1/√f then

X=-2 log( Rr/3.7 + 2.51/R * X)

Engineering vs. Math

What is ε/D?

Rr= ε/D where ε is the actual roughness, D is the inside diameter of a pipe.

Example: The “roughness” is 0.005 of inch and the inside diameter is 10 inches

so ε is 0.005 inches and D is 10 inches, so the relative roughness is 0.005/10

so Rr is 0.0005.

But ε cannot be “exactly” 0.005 inches, because it depends on the material and the way the pipe was built, and also the age of the pipe and the fluid flow. Does the use of the pipe make it smoother or rougher? But for the design of the friction loss, it is an approximation that the relative roughness is 0.0005 (no units).

Also, the diameter is not “exactly” 10 inches, because temperature and pressure and fluid flow will add or subtract a small amount of diameter.

What is R?

R is the Reynolds Number, which I call “Re”; it has no units. It is computed from several variables such as the temperature, viscosity, speed, and a few other things about the fluid. The Re will be somewhere between 2000 and maybe 1,000,000. And how exact will it be? Well, even if it was right to zero decimals at a specific time and date at a certain location, it will change as conditions change and the earth rotates.

So engineers should know that Rr and Re are not exact and will change as the world changes the variables used. So “f” (the friction factor) will change – so should it be exact? Most engineering designers should know that the Darcy friction factor will change slightly because all the variables used will be changing also.

Specific kinds of designs should require the friction loss to be accurate to at least a determined approximation. Eight decimal places will almost always be OK for a very high proportion of designs.

January 26th, 2013 at 5:24 pm

Chris,

I am not disagreeing with you, I am only laughing. I see a large difference between Math and Engineering. Of course the Math needs to be true, but in Engineering, building things for REAL objects in the REAL world, Engineers must know that things change. For figuring out the friction factor of a fluid flowing through a pipeline, there are lots of factors that will not be “exact”. The Rr and Re in the Colebrook-White equation will change, and for a good design they should be expressed as what they might be. Knowing that other variables will change shows good engineers why DarcyF to 20 decimals is not necessary, but 8 decimals could help. I know some DarcyF approximations are good to only 3 decimals, and a couple are better than 10 decimals, but as the Rr and Re change, their DarcyF computation does not stay nearly as accurate as my method. Since you helped me, I can now evaluate the accuracy of each type of approximation.

January 27th, 2013 at 1:13 am

Chris,

You said there could be no solution for this, which I copied.

x = a log(b + c/x)

Well, that is true, but I think you have a slight error.

This could mean you would compute an error for X, because X must be positive, because the friction loss must be positive. If you add an Abs to the right side, it could probably be computed.

Another thing is to be like the Colebrook equation, the left side should have been 1/x.

You should have written it like this.

X = -2 * Log(Rr/3.7+2.51/Re*X)

And there are some other things. Rr will have to be less than 1/2, because the roughness cannot be more than 1/2 the diameter – if Rr were 1/2, that would leave no inside diameter. Also, Re must be more than 2000, because if the Reynolds Number < 2000 then DarcyF = 64 / Reynolds Number, not the Colebrook equation.

Also, the section…

Log(Rr/3.7+2.51/Re*X)

must be negative, because it will be multiplied by “-2″ to get a positive number. But in case the initial guess was a negative number, I made my solution like this

X = Abs(2 * Log(a + b * X ))

And if you convert the X to 1/√f, you should see it is like the Colebrook equation, where “a” would be Rr/3.7 and “b” would be 2.51/Re, where Rr and Re have limited values.

January 27th, 2013 at 1:34 am

Hi ragknot. What baffles me is why people bother making approximations when the full equation is so easy to number crunch.

In view of the nature of the CW equation, I’d have thought that two or three significant figures would suffice.

In quantum physics, a single “constant” has been measured to about 15 significant figures, and that is considered to be a stunning achievement. It was done to test Quantum Electro-Dynamics (QED) which had been used to calculate the value. QED got it right.

I doubt that precision engineers work to as many as 5 or 6 significant figures, except in highly extraordinary circumstances.

I am shocked that on your blog that you say that people don’t immediately understand the method, or that they have any doubts about it. What I put in the last half of my last post is stuff that I’d expect any mathematically aware engineer to know. I’m pretty sure that I learnt the technique as part of an A-level maths course – i.e. pre-university and at about the same time that I learnt about the Newton-Raphson method.

January 27th, 2013 at 2:12 am

Hi ragknot. Posts crossed (i.e. I hadn’t seen your last one when I posted mine).

The c/x (in post 20) was a mistake – it should have been cx – I’ve corrected it.

You haven’t fully understood what I wrote. The Abs shouldn’t be necessary, but it might help if the initial guess was such that ac/(b + cx) > 1 even though, near the solution, ac/(b + cx) < 1.

I’d ditch the Abs and use a large value for the initial guess for x instead.

Stop press: Unless I’ve messed up big-time, the exact solution to the CW equation (x = -a ln(b + cx)) is:

x = (a c W(e^(b/(a c))/(a c)) - b)/c = a W(e^(b/(a c))/(a c)) - b/c, where W() is the LambertW function.

January 27th, 2013 at 3:32 am
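That formula can be sanity-checked against the simple recursion. A sketch (Python; the `lambert_w0` helper is my own minimal stand-in for e.g. `scipy.special.lambertw`, the Rr/Re inputs are illustrative, and note that the e^(b/(ac)) term can overflow for very small c):

```python
import math

# Check x = a*W(e^(b/(a c))/(a c)) - b/c against the recursion
# x_new = -a*ln(b + c*x), for one illustrative set of Colebrook inputs.
def lambert_w0(z):
    # principal branch of W via Halley's iteration on w*e^w = z
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z
    for _ in range(50):
        ew = math.exp(w)
        f = w * ew - z
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

a = 2 / math.log(10)
Rr, Re = 0.0005, 100000.0         # illustrative roughness / Reynolds number
b, c = Rr / 3.7, 2.51 / Re

x_exact = a * lambert_w0(math.exp(b / (a * c)) / (a * c)) - b / c

# the simple recursion, run to convergence
x_iter = 5.0
for _ in range(100):
    x_iter = -a * math.log(b + c * x_iter)
```

Both routes should land on the same root to machine precision, which is what makes the closed form a useful cross-check even though W itself still has to be computed iteratively.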

I really don’t get this Engineering vs. Math nonsense! I provided you a link with the general mathematical theory behind that and the theory is pretty simple and as Chris pointed out – well known even in High School! Whenever you have X = F(X) equation, and the function F is contractive (if differentiable, then first derivative less than 1, otherwise |F(x)-F(y)|<|x-y|) then you can apply the Fixed Point Theorem and use an iterative method to very well approximate the solution! In your case F(x)=10+log(x) with F'(x)=1/x, so we can apply the theory only on the interval (1,infinity). Good luck finding the root in (0,1)

January 27th, 2013 at 10:18 am

Hi slavy. Maths vs Engineering. I agree that the distinction is esoteric. However, ragknot makes claims on his own (non-ToM) blog that the iterative method is the “TRUE” solution – I’m making a hash of saying, “no it isn’t”.

I had found the approximate (post 3) and exact (post 20) solutions in [0,1].

The literature on the CWE incorrectly led me to believe that there is no exact solution for the CWE. I now claim the literature is wrong, and that the exact solution is the one in post 25 above.

However, the exact solution can only be approximated numerically.

January 27th, 2013 at 12:24 pm

Math vs Engineering

Well, “Math” is not bound by the real Earth, but as long as Engineers are working on Earth, their computations are.

A few things, the Darcy friction must be positive.

The absolute Roughness and Diameter gives the Relative Roughness with the units are the same as roughness/diameter which can then be called Rr.

Let’s say the min Rr is 0.00001 and the max Rr is 0.25

The minimum Reynolds number is 2000, but just because if it is less, the Colebrook equation should not be used. I have not studied what the max Reynolds Number might be, but I don’t think it should be over 1 million; but let’s say it is 2 million.

Ok, next, lets compute the minimum and maximum number for X, given the min and max for Rr and Re.

X = -2 * Log(Rr/3.7+2.51/Re*X)

January 27th, 2013 at 12:53 pm

Once you get the min and max for X, what would be the values for DarcyF?

January 27th, 2013 at 3:46 pm

Chris, by “true” solution I mean my solution will give 15 correct decimals for the Colebrook equation if the Rr and Re were right; but real engineers should know that the Rr and Re are “approximate”, because for a safe design Rr and Re should be set high for safety, and the Darcy Factor will be high for a safe design.

If you think the Darcy Factor has to have a thirty decimal accuracy, then you must not be a good engineer, but maybe a good mathematician.

Almost all accurate pipe designs need 8 or fewer decimals.

January 27th, 2013 at 4:38 pm

Hi ragknot. You seem to be trying to turn this around.

I am (not so) simply saying that you talk about a TRUE solution. OK, until your last post that was an undefined term. The way you have used it seemed to imply that you meant it in the mathematical sense. You certainly mean that other people do not have a TRUE solution – well, I say the Newton-Raphson method produces the same result, but possibly not so rapidly. Both methods could yield arbitrary precision. The 15 digits you mention (frequently) is simply the limit imposed by Excel/VB.

You are the one who is obsessed with large numbers of decimal places (yes I have played along with that). I have talked about an exact solution, that isn’t the same idea as unlimited numbers of decimal places.

Ever since you first introduced me to the CW equation, it was clear to me that 4 significant figures is more than adequate – none of the data can be that good, nor can the behaviour of real systems. I’ve stated that clearly above and in previous posts and in private emails.

I have mentioned above and previously, that 15 significant figures (it’s a coincidence that that’s what Excel/VB also limit you to) is at the extreme end of what mankind has been able to measure.

I believe that I have a very good sense of engineering. I also agree with my dad’s sentiments: an engineer is someone who can design a bottle top and a space-craft. I am definitely not a good mathematician.

PS In your last post and in post 23 you have written the equation incorrectly. It should be:

X = -2 * Log(Rr/3.7+2.51*X/Re), where X = 1/√F

January 27th, 2013 at 6:06 pm

You said this…

X = -2 * Log(Rr/3.7+2.51*X/Re), where X = 1/√F

And that is right, except I re-arranged it to put the X at the far left and far right.

You have “2.51*X/Re”; I moved the X on purpose, to help students, to “2.51/Re*X”. That makes no difference except to have X at the far left and far right. And once X is found, then “f = 1/X^2″

And if X is solved to be less than 2 or more than 9 then you should recheck to find your error.

January 27th, 2013 at 9:06 pm

Hi ragknot. I stand corrected (although I’m sitting). Excel and VB will interpret it correctly. I’d have written (2.51/Re)*X because 2.51/Re*X might have been interpreted (as it was by me, in a senior moment) as 2.51/(Re*X). As a rule, I add brackets to ensure there is no misinterpretation, or write it as I did, i.e. 2.51*X/Re, then the brackets aren’t necessary.

January 27th, 2013 at 9:12 pm

Test1

Rr=.0115

Re=13614

My solution is 0.0434482966750288

Now the very best public approximation is the Goudar Sonnad approximation.

It’s solution is 0.0434482966749311

Which one is more accurate to the Colebrook Equation?

(Use each one as f and see how close the left side = right side)

I just ran almost 15,000 random solutions for each of these. Here the data for the Goudar Sonnad approximation.

Correct decimals ….. How many?

8 ……. 1
9 ……. 2
10 ……. 6
11 ……. 9
12 ……. 11
13 ……. 45
14 ……. 290
15 ……. 14347

This shows 98% of the time, the Goudar Sonnad method gets 15 correct decimals. 2% of the time is it from 14 down to 8 correct decimals.

And of course, 100% of the time my solution gets 15 decimals.

There is one other public approximation that gets pretty good solutions, but most of its results get only 3 to 6 decimals right.

January 28th, 2013 at 7:55 am

Hi ragknot. By definition, the CWE is the only one (bar minor variations due to the properties of logarithms) that precisely calculates the CWE results.

The real question is, which one more accurately represents reality. Because they agree to more than 4 significant figures, I doubt that it is possible to determine.

The CWE is a best fit to empirical data. I’d be surprised if it was generally good to more than a few percent accuracy.

January 28th, 2013 at 4:27 pm

I don’t know about the CWE, but that sounds like the Colebrook-White Equation.

If CWE has a specific method, it should get my same answers.

Let me know, or email at Ragknot@gmail.com

Thanks

January 28th, 2013 at 11:07 pm

Hi ragknot. CWE is simply an abbreviated way of saying Colebrook-White Equation.

The Goudar–Sonnad equation (GSE from now) is not a method. It is simply an approximation to the CWE. The reason for making it is that it is both explicit and elementary. From what you say in post 34, it is a very good approximation – easily good enough for engineering purposes.

The CWE is not a method either, it is simply an equation. Its problem is that it is implicit and non-elementary. It would therefore be quite a pain to express, e.g., its derivative in a reasonably simple, explicit and elementary fashion.

Try differentiating both equations. The GSE will be dead easy to do, and will yield an explicit and elementary result. The CWE will be harder to do, and the result will be implicit and non-elementary.

i.e. the GSE is user friendly. The CWE can be considered partially tamed because it is expressible in terms of the Lambert W function, and that has been studied by mathematicians.

January 29th, 2013 at 9:13 pm

Yes, I know Goudar–Sonnad is an approximation. I have plotted it and can see that its answers go up and down around the true line of the real answers.

As Re and Rr move up, my solution gives the right answers, but like I said, Goudar–Sonnad goes up and down around the true values as Re and Rr change.

I thought that maybe CWE was one of those private solutions that a guy would have to buy to use.

Like I said, a private pipe design company in England agreed my solution was right, but they said they did not want it to become public because it would hurt sales of their private computations.

January 30th, 2013 at 6:33 am

Hi ragknot. I don’t understand why that company thought that the (numerical) method should be kept secret. The basic method is well known. I gather that the most common way of numerically solving the equation uses the Newton-Raphson method. The only problem with that method is that it is slightly more difficult to write the code – but it’s not hard to do. It’s certainly not a deal breaker. I could understand that company wanting to be the only one that knew the CWE.

When you first presented me with the CWE (a few years ago), it immediately occurred to me to try the simple method. The form of the CWE screams it out as the way to go.

Glossing over the details, if you plug a guessed value of the friction factor into the RHS of the CWE, the result must be either nearer to or further from the correct value. If it is nearer, then you have a recursion that will get you as close to the correct value as you desire – the only limitation being your computer.

Another take (as per slavy’s fixed point observation) is that if you have the value (of the Darcy friction factor), then plugging it into the RHS of the CWE, must produce the same value.
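The plug-it-into-the-RHS recursion described above can be sketched as follows. This is my own minimal Python (not ragknot's code), assuming the CWE in the form X = -2*Log(Rr/3.7 + 2.51*X/Re) with X = 1/√F, and reusing the test case quoted earlier (Rr = 0.0115, Re = 13614):

```python
import math

# guess X, plug it into the RHS of the CWE, and repeat until it stops changing
def colebrook_fixed_point(Rr, Re, X=8.0, tol=1e-14):
    while True:
        X_new = -2*math.log10(Rr/3.7 + 2.51*X/Re)
        if abs(X_new - X) < tol:
            return X_new
        X = X_new

X = colebrook_fixed_point(0.0115, 13614)
f = 1/X**2                                               # friction factor
res = abs(X + 2*math.log10(0.0115/3.7 + 2.51*X/13614))   # CWE residual
```

As slavy's fixed-point observation says, the correct X maps to itself, which is why the residual goes to zero at convergence.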

I’ve just seen that the Wiki now contains a link to your blog. It has the erroneous claim (by you, I deduce) that you have the “simple and true solution”. The claim should be “a (very) simple method” for finding the value of the Darcy friction factor using the CWE.

The only true (i.e. explicit and exact, but non-elementary) solution to the CWE is the one I gave in post 25 (in encoded form). On re-reading the Wiki, I strongly suspect that Brkić had also discovered it, but he went a step or two further to turn it into an elementary (function) approximation. I note that it is good to around 3%, so that’s probably good enough. But, it’s still too hard to do in your head, so I’d stick with the method you have to avoid unnecessary loss of precision.

February 2nd, 2013 at 5:23 pm

Which email address works? Both?

February 5th, 2013 at 11:57 am

It turns out that my instinct regarding convergence is wrong. So please ignore it.

February 6th, 2013 at 4:25 am

My quick two cents (wall of text coming) on how I’d quickly (and lazily) solve any of these types of problems that require an arbitrary-precision approximation of the solution:

1. Force the equations into the form f(x)=0

e.g. for x= 10 + log(x) we can write it as

f(x)=10+log(x)-x

2. Now we apply some root (zero) finding algorithm

-Super handy to have one or more of these programs in a calculator

A) First choice: Approximation of Newton’s Method

-I like to use a numerical approximation of Newton’s method

– x_new=x_old-f(x)/f’(x)

-Being lazy I approximate f’(x) with:

f’(x) ~ (f(x+dx)-f(x))/dx

I set dx to something small like 0.001

-keep finding x_new until x_new-x_old is within our tolerance level

-Pros:

-If it’s going to converge it typically does it quickly (if it takes more than 30s I can be fairly sure it’s not going to converge)

-I can usually select the initial x as any arbitrary value, and it will typically still converge if it is going to. (best to look at a graph to find a decent guess)

-Cons:

-Doesn’t always converge

-If there are multiple roots, it doesn’t always find the one you were looking for
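The steps in A can be sketched like this (my own Python, function names mine): a finite-difference Newton iteration applied to f(x) = 10 + log10(x) - x.

```python
import math

def f(x):
    return 10 + math.log10(x) - x     # step 1: force into f(x) = 0 form

def newton_numeric(func, x, dx=1e-3, tol=1e-12):
    # step 2A: Newton's method with a lazy forward-difference derivative
    while True:
        deriv = (func(x + dx) - func(x)) / dx   # f'(x) ~ (f(x+dx)-f(x))/dx
        x_new = x - func(x)/deriv
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

root = newton_numeric(f, 1.0)   # the "relatively terrible" guess x = 1
```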

B) Bisection method

Pick some x on the left side of the root

Pick some x on the right side of the root

Their f(x) values must have opposite signs(i.e. one must be + and one must be -)

call the x for f(x) > 0 x_pos

call the x for f(x) < 0 x_neg

x_new=(x_pos+x_neg)/2

if f(x_new) > 0 then it is the new x_pos

if f(x_new) < 0 then it is the new x_neg

Keep finding x_new until f(x_new) is within our tolerance level for 0

Pros:

-Always converges

-Each guess is better than the last one (easy to kill algorithm if we can see what value it is converging towards)

Cons:

-We need to choose two guesses where one x has f(x) > 0 and another x where f(x) < 0. (Not a big deal but we are being lazy here).

-Takes longer than Newton’s method

3. Results.

A) Newton’s method started at a relatively terrible guess of x=1

converges in 4 iterations to 11.043090636677208

B)Bisection method with dismal guesses of x=1 and x=100

converges in 25 iterations to 11.043090835213661

These methods are fairly standard methods to solve these problems. I learned both of these back in high school in a different century, but they still hold up as a couple of the most useful algorithms I've ever learned.

Anonymous Cam

February 6th, 2013 at 4:35 am

Those pesky less than signs messed up the Bisection method

Bisection method

Pick some x on the left side of the root

Pick some x on the right side of the root

Their f(x) values must have opposite signs(i.e. one must be + and one must be -)

call the x for f(x) > 0 x_pos

call the x for f(x) < 0 x_neg

x_new=(x_pos+x_neg)/2

if f(x_new) > 0 then it is the new x_pos

if f(x_new) < 0 then it is the new x_neg

Keep finding x_new until f(x_new) is within our tolerance level for 0

Pros:

-Always converges

-Each guess is better than the last one (easy to kill algorithm if we can see what value it is converging towards)

Cons:

-We need to choose two guesses where one x has f(x) > 0 and another x where f(x) < 0. (Not a big deal but we are being lazy here).

-Takes longer than Newton’s method
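The corrected recipe above can be sketched like this (my own Python), using the same f(x) = 10 + log10(x) - x and the "dismal" brackets x = 1 and x = 100:

```python
import math

def f(x):
    return 10 + math.log10(x) - x

def bisect(func, x_pos, x_neg, tol=1e-7):
    # x_pos is a guess where func(x) > 0, x_neg one where func(x) < 0
    while True:
        x_new = (x_pos + x_neg) / 2
        if abs(func(x_new)) < tol:     # stop when f(x_new) is close enough to 0
            return x_new
        if func(x_new) > 0:
            x_pos = x_new              # x_new replaces the positive end
        else:
            x_neg = x_new              # x_new replaces the negative end

root = bisect(f, 1.0, 100.0)   # f(1) = 9 > 0, f(100) = -88 < 0
```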

Anonymous Cam

February 7th, 2013 at 1:04 pm

Hi Cam. I’ve fixed up the pesky < problems. It’s quite weird how the html translator messed up your comment.

Funnily enough, behind the scenes, ragknot and I have been working on the Colebrook White equation. I’ve given ragknot the Newton’s method code for doing it. It converges in 3 or 4 iterations (to 15 sig figs). On a good day, Newton’s method can double the number of sig figs in each iteration (as it converges quadratically).

A minor problem with calculating the derivative numerically is that it doubles the time taken for each iteration.

The family of Colebrook etc. equations can be written (using simple substitutions and converting log, base 10, to ln, base e) as:

x = k – a ln(b + cx). NB for Colebrook, a,b,c > 0 and k ≥ 0

Rewriting that as: p(x) = a ln(b + cx) + x – k = 0

Then p’(x) = ac/(b + cx) + 1

Now letting x be the current best value, the new best value is (says Newton):

x – p(x)/p’(x) = x – (a ln(b + cx) + x – k)/(ac/(b + cx) + 1). That can be made slightly tidier, i.e.: new x = x – (a ln(b + cx) + x – k)(b + cx)/(ac + b + cx)

I note that only one logarithm has to be found per iteration, and the rest is trivial +, -, * and /. Ragknot has done some testing and so far it looks like at most 4 loops are needed to get 15 sig figs. I believe that he is well happy about that, as it means that high precision is quite cheap in computer time.
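For the record, the tidied update above drops straight into code. A sketch of mine (not the code sent to ragknot); the mapping a = 2/ln(10), b = Rr/3.7, c = 2.51/Re, k = 0 is my own reading of the substitutions, with x = 1/√f:

```python
import math

def colebrook_newton(Rr, Re, x=5.0, tol=1e-13):
    a, b, c, k = 2/math.log(10), Rr/3.7, 2.51/Re, 0.0
    while True:
        # new x = x - (a ln(b + cx) + x - k)(b + cx)/(ac + b + cx)
        x_new = x - (a*math.log(b + c*x) + x - k)*(b + c*x)/(a*c + b + c*x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

X = colebrook_newton(0.0115, 13614)
f = 1/X**2                                               # friction factor
res = abs(X + 2*math.log10(0.0115/3.7 + 2.51*X/13614))   # CWE residual
```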

I’m droning on about Colebrook as it’s of real consequence to ragknot.

February 7th, 2013 at 10:16 pm

Unless you are repeating the algorithm thousands of times, the double evaluation of the function, or even a doubling of the iterations, should be insignificant (it should converge in less than 1 s).

On the other hand, if you are solving the equation thousands of times you may want to try Halley’s method (just like Newton but it makes use of the 2nd derivative). At first glance it appears the second derivative will be far enough away from 0 to allow this to make some modest time improvements.
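Halley's update is cheap here because the second derivative of p(x) = a ln(b + cx) + x - k is simple. A hedged sketch (the Colebrook a, b, c mapping and the starting guess are my assumptions):

```python
import math

def colebrook_halley(Rr, Re, x=5.0, tol=1e-13):
    # assumed mapping: a = 2/ln(10), b = Rr/3.7, c = 2.51/Re, k = 0
    a, b, c = 2/math.log(10), Rr/3.7, 2.51/Re
    while True:
        u  = b + c*x
        p  = a*math.log(u) + x               # p(x)
        p1 = a*c/u + 1                       # p'(x)
        p2 = -a*c*c/(u*u)                    # p''(x)
        x_new = x - 2*p*p1/(2*p1*p1 - p*p2)  # Halley's update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

X = colebrook_halley(0.0115, 13614)
res = abs(X + 2*math.log10(0.0115/3.7 + 2.51*X/13614))   # CWE residual
```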

Additionally, if you are looking for a real performance gain but don’t want to sacrifice accuracy, use something like Serghides’s solution for the initial guess at f and then refine using the above methods. I would expect it to be so close to the root that you would need only one or two iterations of the above methods to nail the root within the desired tolerance.

Just some thoughts.

Anonymous Cam

February 7th, 2013 at 11:51 pm

Hi Cam. Thanks for the heads-up on Halley’s method. I had a 5 second glance at it, and it looks well worth trying out. So ragknot, you’d better get ready for some new code from me.

With systems of pipes it is possible that the routine might be called many hundreds of times. The whole system will need to be solved iteratively.

It turns out that there is almost no advantage in making a good initial guess – at least, not when using the range of values that occur in piping problem. The calculated initial value is equivalent to doing one iteration, and at best saves 2, but usually only saves 1 and then we’ve broken even.

February 8th, 2013 at 3:30 pm

If you really, really want to speed things up, you may want to consider a 2-D lookup table of precomputed values (using Re and e/d as indexes) for use as initial guesses.

In such a table, since Re covers such a wide range, you could index the Re values by powers of 2. Then you can right-shift the searched-for Re value until it is < the Re in the table (this is a cheaper method than finding the log of Re).

You should be able to get a linearly interpolated guess in less time than it takes to perform one log operation. The guess in all likelihood would be so close to the solution as to only require 1-2 iterations of Newton's or Halley's.

Basically this a time-memory trade off to boost speed. The bigger the table, the better the initial guess.

Not sure if you want to go through all of that to get a performance boost though.
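For what it's worth, here is a toy Python sketch of the idea (entirely my own invention, simplified to one fixed roughness so the table is 1-D; Python's bit_length() stands in for the right-shift trick):

```python
import math

def colebrook_X(Rr, Re, X=8.0):
    # fixed-point iteration; 60 loops is overkill but guarantees convergence
    for _ in range(60):
        X = -2*math.log10(Rr/3.7 + 2.51*X/Re)
    return X

RR = 0.0115   # toy simplification: one fixed e/d, so the table is 1-D
TABLE = {k: colebrook_X(RR, 2**k) for k in range(12, 25)}   # Re = 2^12..2^24

def initial_guess(Re):
    k = Re.bit_length() - 1        # floor(log2(Re)) via bit ops, no log call
    t = (Re - 2**k) / 2**k         # fractional position between 2^k and 2^(k+1)
    return TABLE[k] + t*(TABLE[k + 1] - TABLE[k])
```

The bigger the table (and the more roughness values it covers), the better the guess; this one is only good to a couple of decimal places, but that is close enough to hand to Newton's or Halley's.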

Anonymous Cam

February 8th, 2013 at 10:09 pm

Hi Cam. I’ve sent ragknot a fairly nice (but quick and dirty) bunch of code. I’ve provided both Newton and Halley’s methods so he can compare them. Sadly, Halley didn’t clearly beat Newton. No doubt ragknot will try both out with many values.

Unless I hear otherwise, I shall assume that both methods are considerably better than the cheap trick way that I showed him a couple of years ago – i.e. the one DP used way back in post 1. I believe that that method takes 6 – 20 loops with the Colebrook equation.

There is a pretty good explicit approximation to the Colebrook equation (due to Haaland). Unfortunately, it also uses a Log. But it may lend itself very nicely to being tabulated, and that table would be one dimensional. There may even be ready made solutions for that.

If ragknot really wants to shave 0.9 iterations off, I might give it a go. But I need data so that I know the range that the table needs to cover.

Thanks for the idea.

February 9th, 2013 at 1:23 am

…aaargh. I’ve just noticed that the Haaland equation involves a power. So two tables or a 2D table might be needed after all.

February 14th, 2013 at 8:05 am

Hi Cam. Thank you very much for the mention of Halley’s method. I also tried a fourth order Householder method. Initially I thought that Householder was the better one, but after tweaking the initial guess value, Halley had a definite edge over Householder. I hadn’t heard of the Halley or Householder methods before.

I’d better add that the above is true in the context of the Colebrook equation with Rr (= E/Dh) in the range 0.00004 to 0.05 and Reynolds number in the range 2,500 to 10,000,000 (as hinted at in a Wiki).

Despite Halley being the winner, the fact that there are an infinite number of Householder methods (each of higher order) is quite amazing.

If I were writing a serious multi-pipe systems program, I suspect that after each pass I’d remember the results, and use them as initial values for subsequent passes, etc. Perhaps using Haaland for the first run might be a sensible compromise – it might reduce the likelihood of possible instabilities that are the bugbear of high order iterative methods. I haven’t really thought about solving a system of pipes, but I wouldn’t be surprised if it took at least 20 iterations before getting close enough.

Here’s my code (slightly tweaked for publication here)

February 16th, 2013 at 12:46 am

Glad you found Halley’s method useful.

Generally, methods with an order higher than Halley’s aren’t practical. The higher order derivatives are typically very close to 0, and as such eat up computation time while having little to no effect on the result. You also end up with pesky instability issues.

Anonymous Cam

February 19th, 2013 at 7:28 pm

Hey Cam,

I emailed Chris and asked for info.

After reading the previous info I think you helped Chris.

I hope Chris emails you.

But see colebrook-white.blogspot.com and let me know something if you want to.

Raj told me he wanted to give info, but he has not yet.