Back in 1997, during my visit to beautiful Mar del Plata in Argentina, I was asked to solve a math problem that I soon realized was close to being unsolvable. The setting was the Banquet for the 38th International Math Olympiad. I was 17 and there was delicious, free food in front of me, so it was pretty impossible to get my attention. Still, the coach of the Romanian team decided to drop by the Greek table to challenge us with the following problem:

**Infinite Power:** If $x^{x^{x^{\cdots}}} = 2$, find $x$.

I was pretty hungry and a bit annoyed at the interruption (it is rude to eat and do math at the same time!), so I looked at the problem for a moment and then challenged him back with this one:

**Infiniter Power:** If $x^{x^{x^{\cdots}}} = 4$, find $x$.

After a few moments, he gave me a puzzled look. He knew I had solved his problem within a few seconds; he was annoyed that I had somehow done it in my head, and even more annoyed that I had challenged him back with a problem that confounded him.

Your mission, if you choose to accept it, is to figure out why the coach of the Romanian IMO team left our table alone for the rest of the competition.

Good luck!

PS: The rest of the Romanian team (the kids) were really cool. In fact, a few years later, I would hang out with a bunch of them at MIT’s Department of Mathematics, or as my older brother put it, *The Asylum*.

YW (January 23, 2013 at 6:30 pm): Interesting. It seems that, given {{x^x}^x}^x… = 2, we obtain $x^2 = 2$, from which we deduce that $x = \pm\sqrt{2}$. However, by the same reasoning, we could arrive at the conclusion that the solution for {{x^x}^x}^x… = 4 is also $x = \pm\sqrt{2}$, and therefore we end up with a contradiction.

spiros (January 23, 2013 at 7:24 pm): Indeed. So… what is the resolution of this contradiction?

Kernel (January 23, 2013 at 9:04 pm): *****************SPOILER*****************

Sqrt[2]^Sqrt[2] = {2^(1/2)}^Sqrt[2] = 2^(Sqrt[2]/2) = 2^(1/Sqrt[2])

So Sqrt[2]^Sqrt[2]^Sqrt[2] = {2^(1/Sqrt[2])}^Sqrt[2] = 2^1 = 2.

Notice x^Sqrt[2] > x for x>1. In particular, 2^Sqrt[2] > 2. So x such that x^x^x^… = 2 doesn’t actually exist, because if it did, it would have to be Sqrt[2] and it isn’t.

That Romanian was a jerk and your response was awesome 🙂

Kernel (January 23, 2013 at 9:10 pm): Wait, I see the issue. (x^x)^x != x^(x^x)

Kernel (January 23, 2013 at 9:29 pm): y = x^(x^(x^…)) = 4

y=x^y=x^4=4

x=Sqrt[2] ==><==

so I think y can't be 4 for any x…

Lubos Motl (January 24, 2013 at 12:53 am): First of all, I think that by general conventions, $x^{x^x} = x^{(x^x)}$, not $(x^x)^x$, which (the latter) could be written as $x^{x^2}$. So let me follow the general conventions about the meaning of the composite powers.

If the sequence of convoluted powers converges to 2 or 4 respectively, it must be that $x^{x^{x^{\cdots}}} = 2$ or 4 respectively. We may substitute 2 or 4 for the exponent $x^{x^{x^{\cdots}}}$ here to get $x^2 = 2$ or $x^4 = 4$, respectively. Both conditions are solved by $x = \sqrt{2}$, at least when we stay within real non-negative numbers that have a unique enough result for $x^y$.

So $x = \sqrt{2}$ is a necessary condition for $x$ to be a solution to both problems. We must now verify whether it is a sufficient condition for either problem. One may numerically calculate a dozen of steps to be sure that the composite power converges to 2 but not 4. One may easily prove the inequality that if $y \le 2$, then still $\sqrt{2}^y \le 2$, so one can’t get above two. It can’t ever get above two, i.e. towards four.

That’s why the problem with 2 has the solution $\sqrt{2}$ while the problem with 4 has no solutions.

The final answer? I think that the Romanian coach decided to avoid the Greek table because the Greek contestants had already eaten the free food and meat (as your description of your eating habits also suggests), and he had to choose other tables where there was some free food left to take, something they don’t have in Romania. 😉 Another, less important problem is that he or she didn’t quite understand the logic: the difference between sufficient and necessary conditions, and the possibility that some problems have no solutions.
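The numerical check Lubos describes (“a dozen of steps”) can be sketched in a few lines of Python; this is an illustration added here, not part of the original comment, and the iteration count of 100 is arbitrary:

```python
from math import sqrt

# Iterate the tower y_{k+1} = sqrt(2) ** y_k, starting from y_1 = sqrt(2).
x = sqrt(2)
y = x
for _ in range(100):
    y = x ** y

print(y)  # converges to 2 (and never anywhere near 4)
```

Each step multiplies the distance to 2 by roughly ln 2 ≈ 0.69, so a hundred iterations already reach machine precision.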

Lubos Motl (January 24, 2013 at 12:55 am): The LaTeX here doesn’t recognize backslash-lt and gt for “less than and greater than”. The missing sentence says

… if $y \le 2$, then $\sqrt{2}^y \le 2$, too.

Fortunately, the claim works with sharp as well as non-sharp inequalities. The proof of the implication above is simple: $\sqrt{2}^y$ is an increasing function of $y$, and for $y = 2$ the power just happens to give exactly 2, so for lower exponents it gives less than 2.

spiros (January 24, 2013 at 1:05 am): What is the largest number for which this equation has a solution? Elementary proof if possible.

Frank (January 24, 2013 at 2:04 am): The largest number is exp(1).

This can be shown as follows: let us consider the increasing sequence y_{k+1} = x^(y_k), with x the solution of x = y_inf^(1/y_inf). Obviously, y_k/y_inf converges quickly to 1, so z_k = 1 - y_k/y_inf gets very small for large k. We can then take the log of both sides of the recursion, expand to first order in z_k (as those quantities are very small), and get

z_{k+1}/z_{k} = log y_{inf}. A necessary condition for convergence is therefore log(y_inf)<=1. It can easily be checked that the boundary case exp(1) still converges.
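Frank’s boundary case can be probed numerically; the sketch below (an added illustration) iterates the tower at the critical base x = e^(1/e), where convergence toward e is very slow, hence the large iteration count:

```python
from math import e

x = e ** (1 / e)  # the critical base, about 1.44466786
y = x
for _ in range(100_000):
    y = x ** y

print(y)  # creeps up toward e = 2.71828..., never exceeding it
```

At the boundary the map is tangent to the diagonal at y = e, so the error shrinks only like 1/k rather than geometrically.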

Lubos Motl (January 24, 2013 at 6:52 am): Cool, just to be sure, I manually numerically verified Frank’s result. The maximum $x$ for which the infinite-composite-power converges is 1.44466786 or so, and indeed, the power is then $e = 2.71828\ldots$ or so. The number 1.44466786 may be seen to be nothing else than $e^{1/e}$.

spiros (January 24, 2013 at 7:18 am): It would be interesting to see how fast the tower with x = 1.44467 would surpass 1000 (i.e., after how many powers?). I wonder if there is a simple way to figure this out without calculators.

Lubos Motl (January 24, 2013 at 7:45 am): Dear Spiros, for your “overshot” constant, it’s about 2,212 steps. One exceeds 1,000 only about 20 iterations after the composite power exceeds 3. Almost exactly at one-half of the iterations, after 1,100 steps or so, the composite power surpasses e = 2.71828.

If you only want order-of-magnitude estimates, this behavior is probably describable theoretically, following methods like Frank’s. One may study the increase of the composite power gained by adding one more step “x to something” to the chain. It slowly decreases to roughly 0.000011 for your starting number; the minimum increment is achieved after approximately 1,100 steps (in the middle of the near-divergent series). One could probably interpolate the increment as a quadratic function of (i - 1100), with the coefficients calculable without a calculator.

In the end, it’s still a difficult problem, although the dependence on “i”, the iteration level, is pretty gradual, as if “i” were another continuous variable, so I will give up attempts to solve it fully.
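Lubos’s step count is easy to verify by brute force; in this added sketch, 1.44467 is the rounded constant from the exchange above, and the loop simply counts powers until the tower passes 1,000:

```python
x = 1.44467  # slightly above e**(1/e), so the tower eventually diverges
y = x
steps = 1    # y is already the one-story tower x
while y <= 1000:
    y = x ** y
    steps += 1

print(steps)  # roughly 2,200 steps, matching Lubos's count
```

Almost all of the steps are spent crawling through the “bottleneck” near y = e, where each iteration adds only about 0.00001.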

spiros (January 24, 2013 at 7:16 am): Nicely done Frank!

Alex (January 24, 2013 at 9:38 am): If you differentiate $x = y^{1/y}$,

then you get $\frac{dx}{dy} = y^{1/y}\,\frac{1 - \ln y}{y^2}$.

This derivative is positive if $y < e$.

Based on what Frank and Lubos said previously, I don’t think the equation makes any sense if $y$ is outside this interval.
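Alex’s sign claim can be sanity-checked with a crude central-difference sketch (added here; the sample points 2.0 and 4.0 are arbitrary, one on each side of e):

```python
from math import e

def x_of_y(y):
    """The relation x = y ** (1 / y) discussed in the thread."""
    return y ** (1 / y)

# Approximate dx/dy with a central difference on both sides of e.
h = 1e-6
slope = {y: (x_of_y(y + h) - x_of_y(y - h)) / (2 * h) for y in (2.0, 4.0)}

print(slope)  # positive at y = 2 (below e), negative at y = 4 (above e)
```

The sign flip at y = e is exactly why x = e^(1/e) is the largest attainable base.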

2 equals 4 at physics4me (January 27, 2013 at 2:59 am): […] Therefore 2 = 4?? ………………… read more HERE http://quantumfrontiers.com/2013/01/23/unsolvable/#comments […]

Z (January 28, 2013 at 7:28 pm): If Exp[1] is the boundary of the interval of convergence, what’s the radius of convergence … i.e. z^z^z^z^z… = h, where both z and h are complex?

spiros (January 28, 2013 at 11:59 pm): I would use Euler’s expression for z and the usual h = h_{re} + i h_{im} to solve the equation z^h = h.

Z (January 29, 2013 at 9:09 am): Ah ha, but there is a rather big ambiguity in which branch cuts one chooses, e.g. i^i = Exp[-pi/2] or Exp[-pi/2 + 2 pi k], k an integer.

spiros (January 29, 2013 at 9:21 am): Ambiguity is opportunity.

Frank (January 29, 2013 at 11:52 am): Let us first define what we mean by the equation x = y^(1/y), or equivalently y log(x) = log(y); using the convention log(a) = log(|a| exp(i theta_a)) = log|a| + i theta_a with -pi < theta_a < pi, we get the linear set of equations (written with a 2x2 matrix):

[ |x|     ]   [  cos(theta_inf)   sin(theta_inf) ] [ log|y_inf| / |y_inf| ]
[ theta_x ] = [ -sin(theta_inf)   cos(theta_inf) ] [ theta_inf  / |y_inf| ]

Just like in the real case, we now set up a recursion y_{k+1} = x^{y_k} using the same rules; defining |y_k|/|y_inf| = 1 + z_k and theta_k = theta_inf + xi_k, we get, up to first order in z_k and xi_k,

[ z_{k+1}  ]   [ log(|y_inf|)   -theta_inf    ] [ z_k  ]
[ xi_{k+1} ] = [ theta_inf       log(|y_inf|) ] [ xi_k ]

given that x obeys the above set of equations (this is how this consistency condition was derived).

These real recursion equations converge iff the eigenvalues of the corresponding 2x2 matrix are smaller than 1 in absolute value, which is equivalent to the condition (with y_inf = y)

|log(y_inf)| < 1

or equivalently

sqrt( (log|y|)^2 + theta_y^2 ) < 1

which indeed reduces to y < exp(1) in the real case. Note that we use the convention -pi <= theta <= pi, which is also the convention that e.g. MATLAB uses. So if |log(y)| <= 1, calculate x using the above set of equations, and do the recursion y_{k+1} = x^{y_k} for an arbitrary starting point y_0; the recursion y_k should then converge to y_inf.
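Frank’s recipe can be tested numerically; in the added sketch below, the target y_inf = 1 + 0.5i is an arbitrary point satisfying |log y| < 1, and all logarithms use Python’s principal branch, matching the convention above:

```python
import cmath

# An arbitrary target inside the claimed convergence region |log y| < 1.
y_inf = 1 + 0.5j
assert abs(cmath.log(y_inf)) < 1

# Solve x = y_inf ** (1 / y_inf) via the principal logarithm: log x = log(y_inf) / y_inf.
log_x = cmath.log(y_inf) / y_inf

# Iterate y_{k+1} = x ** y_k, i.e. exp(y_k * log x), from an arbitrary start.
y = 1 + 0j
for _ in range(500):
    y = cmath.exp(y * log_x)

print(y)  # converges to y_inf = 1 + 0.5i
```

Near the fixed point the error is multiplied by |log y_inf| ≈ 0.48 per step, so convergence is geometric, just as the 2x2 eigenvalue analysis predicts.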

spiros (January 29, 2013 at 11:38 pm): Frank wins.

Frank (January 30, 2013 at 2:26 am): Just for the record, |x| has to be replaced by log(|x|) in the above formula.

The “boundary” points of the set for which the function y=x^x^x^… is well defined can now easily be seen to be parameterized by one angle -pi<chi<pi :

y=exp(exp(i*chi))

and corresponding

x=exp(exp(i*chi-exp(i*chi)))

This expression for x hence determines "the radius of convergence" for the function f(x)= x^x^x…

Jaime (February 18, 2013 at 11:40 pm): Spiros, could you provide me with an email to be able to send an innovative non-binary/Boolean quantum algorithm to be considered at the Expert’s Corner of Quantum Frontiers?

Regards,

Jaime

PS. Off-topic contact comment not for posting.

Daniel Tung (@tytung2020) (April 14, 2013 at 11:20 pm): Let x^(x^(x^… )) = m.

So m^x = m and x^m = m

From the second equation, m cannot be 0.

If m is not 0, from the first equation, x=1, thus m=1.

So both his and your questions are invalid.

spiros (April 15, 2013 at 5:48 am): Dear Daniel,

How do you get the first equation?

Daniel Tung (@tytung2020) (April 17, 2013 at 3:05 am): Oops, it’s a mistake… forget about my solution! 🙂