Oy Veh! Power Series, Analytic Continuations, and Riemann Zeta

Jan 20 2014 · Published under complex analysis, Numbers

After the whole Plait fiasco with the sum of the infinite series of natural numbers, I decided it would be interesting to dig into the real math behind that mess. That means digging into the Riemann zeta function, and the concept of analytic continuation.

A couple of caveats before I start:

  1. this is the area of math where I'm at my worst. I am not good at analysis. I'm struggling to understand this stuff well enough to explain it. If I screw up, please let me know in the comments, and I'll do my best to update the main post promptly.
  2. This is way more complicated than most of the stuff I write on this blog. Please be patient, and try not to get bogged down. I'm doing my best to take something that requires a whole lot of specialized knowledge, and explain it as simply as I can.

What I'm trying to do here is to get rid of some of the mystery surrounding this kind of thing. When people think about math, they frequently get scared. They say things like "Math is hard, I can't hope to understand it.", or "Math produces weird results that make no sense, and there's no point in my trying to figure out what it means, because if I do, my brain will explode. Only a super-genius geek can hope to understand it!"

That's all rubbish.

Math is complicated, because it covers a whole lot of subjects. To understand the details of a particular branch of math takes a lot of work, because it takes a lot of special domain knowledge. But it's not fundamentally different from many other things.

I'm a professional software engineer. I did my PhD in computer science, specializing in programming languages and compiler design. Designing and building a compiler is hard. To be able to do it well and understand everything that it does takes years of study and work. But anyone should be able to understand the basic concepts of what it does, and what the problems are.

I've got friends who are obsessed with baseball. They talk about ERAs, DIERAs, DRSs, EQAs, PECOTAs, Pythagorean expectations, secondary averages, UZRs... To me, it's a huge pile of gobbledygook. It's complicated, and to understand what any of it means takes some kind of specialized knowledge. For example, I looked up one of the terms I saw in an article by a baseball fan: "Peripheral ERA is the expected earned run average taking into account park-adjusted hits, walks, strikeouts, and home runs allowed. Unlike Voros McCracken's DIPS, hits allowed are included." I have no idea what that means. But it seems like everyone who loves baseball - including people who think that they can't do their own income tax return because they don't understand how to compute percentages - understands that stuff. They care about it, and since it means something in a field that they care about, they learn it. It's not beyond their ability to understand - it just takes some background to be able to make sense of it. Without that background, someone like me feels lost and clueless.

That's the way that math is. When you go to look at a result from complex analysis without knowing what complex analysis is, it looks like terrifyingly complicated nonsensical garbage, like "A meromorphic function is a function on an open subset of the complex number plane which is holomorphic on its domain except at a set of isolated points where it must have a Laurent series".

And it's definitely not easy. But understanding, in a very rough sense, what's going on and what it means is not impossible, even if you're not a mathematician.


Anyway, what the heck is the Riemann zeta function?

It's not easy to give even the simplest answer to that in a meaningful way.

Basically, Riemann Zeta is a function which describes fundamental properties of the prime numbers, and therefore of our entire number system. You can use the Riemann Zeta to prove that there's no largest prime number; you can use it to talk about the expected frequency of prime numbers. It occurs in various forms all over the place, because it's fundamentally tied to the structure of the realm of numbers.

The starting point for defining it is a power series defined over the complex numbers (note that the parameter we use is s instead of a more conventional x: this is a way of highlighting the fact that this is a function over the complex numbers, not over the reals).

\zeta(s) = \sum_{n=1}^{\infty} n^{-s}

This function \zeta is not the Riemann function!

The Riemann function is something called the analytic continuation of \zeta. We'll get to that in a moment. But before we do: why the heck should we care? I said it talks about the structure of numbers and primes, but how?

The zeta function actually has a lot of meaning. It tells us something fundamental about the system of natural numbers - in particular, about the properties of prime numbers. Euler proved that zeta is deeply connected to the prime numbers, using what's now called the Euler product formula. It says that wherever the series converges - that is, for values of s whose real part is greater than 1:

\sum_{n=1}^{\infty} n^{-s} = \prod_{p \in \textbf{Primes}} \frac{1}{1-p^{-s}}

Which is a way of saying that the zeta function encodes information about the distribution of the prime numbers.
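To make the Euler product feel concrete, here's a quick numerical sanity check (a sketch of my own; the function names and truncation cutoffs are my choices, not from the post). It compares a truncated version of the sum with a truncated version of the product at s=2, where both sides converge to \pi^2/6:

```python
import math

def zeta_partial(s, terms=100000):
    """Partial sum of the series: sum of n^(-s) for n = 1..terms."""
    return sum(n ** -s for n in range(1, terms + 1))

def primes_up_to(limit):
    """All primes <= limit, by a simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def euler_product(s, limit=1000):
    """Truncated Euler product over all primes up to `limit`."""
    result = 1.0
    for p in primes_up_to(limit):
        result *= 1.0 / (1.0 - p ** -s)
    return result

# At s = 2, both sides approach pi^2 / 6 ≈ 1.644934:
print(zeta_partial(2))
print(euler_product(2))
print(math.pi ** 2 / 6)
```

Both truncations agree to several decimal places, which is the point: on the domain where the series converges, the sum over all naturals and the product over all primes are two descriptions of the same function.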


To really understand the Riemann Zeta, you need to know how to do analytic continuation. And to understand that, you need to learn a lot of number theory and a lot of math from the specialized field called complex analysis. But we can describe the basic concept without getting that far into the specialized stuff.

What is an analytic continuation? This is where things get really sticky. Basically, there are places where one way of solving a problem produces a diverging infinite series. When that happens, you say there's no solution: the point where you're trying to solve it isn't in the domain of the problem. But if you solve it in a different way, you can find a way of getting a solution that works. You're using an analytic process to extend the domain of the problem, and get a solution at a point where the traditional way of solving it wouldn't work.


A nice way to explain what I mean by that requires taking a diversion, and looking at a metaphor. What we're talking about here isn't analytical continuation; it's a different way of extending the domain of a function, this time in the realm of the real numbers. But as an example, it illustrates the concept of finding a way to get the value of a function in a place where it doesn't seem to be defined.

In math, we like to play with limits. One example of that is in differential calculus. What we do in differential calculus is look at continuous curves, and ask: at one specific location on the curve, what's the slope?

If you've got a line, the slope is easy to determine. Take any two points on the line: (x_1, y_1), (x_2, y_2), where x_1 < x_2. Then the slope is \frac{y_2 - y_1}{x_2 - x_1}. It's easy, because for a line, the slope never changes.

If you're looking at a curve more complex than a line, then slopes get harder, because they're constantly changing. If you're looking at y=x^2, and you zoom in and look at it very close to x=0, it looks like the slope is very close to 0. If you look at it close to x=1, it looks like it's around 2. If you look at it at x=10, it looks like a bit more than 20. But there are no two points where it's exactly the same!

So how can you talk about the slope at a particular point x=k? By using a limit. You pick a point really close to x=k, and call it x=k+\epsilon. Then an approximate value of the slope at k is:

\frac{(k+\epsilon)^2 - k^2}{k+\epsilon - k}

The smaller epsilon gets, the closer your approximation gets. But you can't actually get to \epsilon=0, because if you did, that slope equation would have 0 in the denominator, and it wouldn't be defined! But it is defined for all non-zero values of \epsilon. No matter how small, no matter how close to zero, the slope is defined. But at zero, it's no good: it's undefined.

So we take a limit. As \epsilon gets smaller and smaller, the slope gets closer and closer to some value. So we say that the slope at the point - at the exact place where the denominator of that fraction becomes zero - is defined as:

 \lim_{\epsilon \rightarrow 0}  \frac{(k+\epsilon)^2 - k^2}{k+\epsilon - k} =

 \lim_{\epsilon \rightarrow 0}  \frac{  k^2 + 2k\epsilon + \epsilon^2 - k^2}{\epsilon} =

(Note: the original version of the previous line had a missing "-". Thanks to commenter Thinkeye for catching it.)

 \lim_{\epsilon \rightarrow 0}  \frac{ 2k\epsilon + \epsilon^2}{\epsilon} =

Since \epsilon is getting closer and closer to zero, \epsilon^2 is getting smaller much faster; so we can treat it as zero:

 \lim_{\epsilon \rightarrow 0}  \frac{ 2k\epsilon}{\epsilon} = 2k

So at any point x=k, the slope of y=x^2 is 2k. Even though computing that involves dividing by zero, we've used an analytical method to come up with a meaningful and useful value at \epsilon=0. This doesn't mean that you can divide by zero. You cannot conclude that \frac{2*0}{0} = 2. But for this particular analytical setting, you can come up with a meaningful solution to a problem that involves, in some sense, dividing by zero.
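Here's the same limit trick done numerically (an illustrative sketch; the function name is mine, not from the post). Shrinking \epsilon drives the secant slope of y=x^2 at k=3 toward 2k=6:

```python
def approx_slope(k, eps):
    """Secant slope of y = x^2 between x = k and x = k + eps."""
    return ((k + eps) ** 2 - k ** 2) / ((k + eps) - k)

k = 3.0
for eps in (1.0, 0.1, 0.01, 0.001, 1e-6):
    print(eps, approx_slope(k, eps))
# As eps shrinks, the values close in on 2k = 6, even though
# setting eps = 0 directly would mean dividing by zero.
```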


The limit trick in differential calculus is not analytic continuation. But it's got a tiny bit of the flavor.

Moving on: the idea of analytic continuation comes from the field of complex analysis. Complex analysis studies a particular class of functions in the complex number plane. It's not one of the easier branches of mathematics, but it's extremely useful. Complex analytic functions show up all over the place in physics and engineering.

In complex analysis, people focus on a particular group of functions that are called analytic, holomorphic, and meromorphic. (Those three are closely related, but not synonymous.)

A holomorphic function is a function over complex variables which has one important property. The property is almost like a kind of abstract smoothness. In the simplest case, suppose that we have a complex function of a single variable, and the domain of this function is D. Then it's holomorphic if, and only if, for every point d \in D, the function is complex differentiable in some neighborhood of points around d.

(Differentiable means, roughly, that using a trick like the one we did above, we can take the slope (the derivative) around d. In the complex number system, "differentiable" is a much stronger condition than it would be in the reals. In the complex realm, if something is differentiable, then it is infinitely differentiable. In other words, given a complex function, if it's differentiable, that means that I can create a curve describing its slope. That curve, in turn, will also be differentiable, meaning that you can derive an equation for its slope. And that curve will be differentiable. Over and over, forever: the derivative of a differentiable curve in the complex number plane will always be differentiable.)

If you have a differentiable curve in the complex number plane, it's got one really interesting property: it's representable as a power series. (This property is what it means for a function to be called analytic; all holomorphic functions are analytic.) That is, a function f is analytic on a set S if, for all points s \in S, you can represent the value of the function as a power series for a disk of values around s:

 f(z) = \sum_{n=0}^{\infty} a_n(z-c)^n

In the simplest case, the constant c is 0, and it's just:

 f(z) = \sum_{n=0}^{\infty} a_n z^n

(Note: In the original version of this post, I miswrote the basic pattern of a power series, and put both z and s in the base. Thanks to John Armstrong for catching it.)
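For a concrete example of a power series with c=0, here's e^z = \sum_{n=0}^{\infty} z^n/n!, which converges for every complex z. (This example and its function names are my own illustration, not from the post.)

```python
import cmath

def exp_series(z, terms=40):
    """Evaluate exp(z) from its power series: sum of z^n / n!."""
    total = 0 + 0j
    term = 1 + 0j                # the n = 0 term: z^0 / 0!
    for n in range(terms):
        total += term
        term *= z / (n + 1)      # turn z^n/n! into z^(n+1)/(n+1)!
    return total

z = 1 + 2j
print(exp_series(z))
print(cmath.exp(z))   # the library value, for comparison
```

Forty terms already match the library value to machine precision for a point like 1+2j; that fast, robust convergence on disks around 0 is what "analytic" buys you.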

The function that we wrote, above, for the base of the zeta function is exactly this kind of power series. Zeta is an analytic function for a particular set of values. Not all values in the complex number plane; just for a specific subset.

If a function f is holomorphic, then its strong differentiability leads to another property. There's a unique extension of it that expands its domain. The extension always produces the same value for all points that are within the domain of f. It also has exactly the same differentiability properties. But it's also defined on a larger domain than f was. It's essentially what f would be if its domain weren't so limited. If D is the domain of f, then for any given connected domain D', where D \subset D', there's at most one function with domain D' that's an analytic continuation of f.

Computing analytic continuations is not easy. This is heavy enough already, without getting into the details. But the important thing to understand is that if we've got a function f with an interesting set of properties, we've got a method that might be able to give us a new function g that:

  1. Everywhere that f(s) was defined, f(s) = g(s).
  2. Everywhere that f(s) was differentiable, g(s) is also differentiable.
  3. Everywhere that f(s) could be computed as a sum of an infinite power series, g(s) can also be computed as a sum of an infinite power series.
  4. g(s) is defined in places where f(s) and the power series for f(s) are not.

So, getting back to the Riemann Zeta function: we don't have a proper closed form equation for zeta. What we have is the power series of the function that zeta is the analytic continuation of:

\zeta(s) = \sum_{n=1}^{\infty} n^{-s}

If s=-1, then the series for that function expands to:

\sum_{n=1}^{\infty} n^1 = 1 + 2 + 3 + 4 + 5 + ...

The power series is undefined at this point; the base function that we're using, the one that zeta is the analytic continuation of, is undefined at s=-1. The power series is an approximation of the zeta function, which works over some specific range of values. But it's a flawed approximation: it's wrong about what happens at s=-1. The approximation says that the value at s=-1 should be a non-converging infinite sum, and it's wrong about that. The Riemann zeta function is defined at that point, even though the power series is not. If we use a different method for computing the value of the zeta function at s=-1 - a method that doesn't produce an incorrect result! - the zeta function has the value -\frac{1}{12} at s=-1.

Note that this is a very different statement from saying that the sum of that power series is -\frac{1}{12} at s=-1. We're talking about fundamentally different functions! The Riemann zeta function at s=-1 does not expand to the power series that we used to approximate it.
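You can see the difference numerically (a quick sketch; the helper name is my own). For s=2 the partial sums settle toward a limit; for s=-1 they just keep growing:

```python
def partial_sum(s, terms):
    """Sum of n^(-s) for n = 1..terms."""
    return sum(n ** -s for n in range(1, terms + 1))

# For s = 2 the partial sums settle toward a limit (pi^2/6 ≈ 1.6449):
for t in (10, 100, 1000):
    print(t, partial_sum(2, t))

# For s = -1 each term is just n, so the partial sums grow without bound:
for t in (10, 100, 1000):
    print(t, partial_sum(-1, t))   # 55, 5050, 500500, ...
```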

In physics, if you're working with some kind of system that's described by a power series, you can come across the series that produces what looks like the sum of the natural numbers. If you do, and you're working in the complex number plane, in a domain where that series occurs, then what you're actually using isn't really the series - you're playing with the analytic zeta function, and the series is a flawed approximation of it. It works most of the time, but if you use it in the wrong place, where the approximation doesn't work, you'll see the sum of the natural numbers. In that case, you get rid of that sum, and replace it with the correct value of the actual analytic function - not with the incorrect result of applying the series where it won't work.

Ok, so that warning at the top of the post? Entirely justified. I screwed up a fair bit at the end. The series that defines the value of the zeta function for some values, the series for which the Riemann zeta is the analytical continuation? It's not a power series. It's a series alright, but not a power series, and not the particular kind of series that defines a holomorphic or analytical function.

The underlying point, though, is still the same. That series (not power series, but series) is a partial definition of the Riemann zeta function. It's got a limited domain, where the Riemann zeta's domain doesn't have the same limits. The series definition still doesn't work at s=-1. The series is still undefined at s=-1. At s=-1, the series expands to 1 + 2 + 3 + 4 + 5 + 6 + ..., which doesn't converge, and which doesn't add up to any finite value, -1/12 or otherwise. That series does not have a value at s=-1. No matter what you do, that equation - the definition of that series - does not work at s=-1. But the Riemann Zeta function is defined in places where that equation isn't. Riemann Zeta at s=-1 is defined, and its value is -1/12.

Despite my mistake, the important point is still that last sentence. The value of the Riemann zeta function at s=-1 is not the sum of the set of natural numbers. The equation that produces the sequence doesn't work at s=-1. The definition of the Riemann zeta function doesn't say that it should, or that the sum of the natural numbers is -1/12. It just says that the first approximation of the Riemann zeta function for some, but not all values, is given by a particular infinite sum. In the places where that sum works, it gives the value of zeta; in places where that sum doesn't work, it doesn't.
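For the curious: the standard route to the value \zeta(-1) = -1/12 goes through Riemann's functional equation, \zeta(s) = 2^s \pi^{s-1} \sin(\pi s/2) \Gamma(1-s) \zeta(1-s), which this post doesn't derive. Plugging in s=-1 and the known value \zeta(2) = \pi^2/6 takes only a few lines (a sketch of my own; variable names are illustrative):

```python
import math

# Riemann's functional equation, evaluated at s = -1, using the
# classical Basel-problem value zeta(2) = pi^2 / 6:
s = -1
zeta_2 = math.pi ** 2 / 6
zeta_at_minus_1 = (2 ** s) * math.pi ** (s - 1) \
    * math.sin(math.pi * s / 2) * math.gamma(1 - s) * zeta_2

print(zeta_at_minus_1)   # -0.0833333... = -1/12
```

Note that nothing here sums the natural numbers; the -1/12 comes out of sin, gamma, and \zeta(2), evaluated where everything converges.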


25 responses so far

  • Nitpick (I know you'll appreciate since we're going for rigor here): the expression you start with is not a power series. Power series have the function variable (s) in the base and the iteration variable (n) in the exponent.

    • MarkCC says:

      Thanks. I really do appreciate it!

      • Sophie Schmieg says:

        You still missed a bunch of instances in the last section. You can pretty much write series instead of power series everywhere except the bit where you define power series in the second to last section.

        I think the confusion stems from the fact that the series we are talking about is a Dirichlet series, but we are implicitly using the fact that a Dirichlet series produces a holomorphic function on its domain of convergence. Therefore we get a power series representation of zeta which gives us the meromorphic continuation. The power series is usually not written down explicitly, as one uses a functional equation and not the series itself for the continuation.

  • Thinkeye says:

    Missing minus sign in second row between epsilon^2 and k^2, while computing the first derivative of quadratic function.

  • David Ratnasabapathy says:

    Typos:

    Take any two points on the line: $(x_1,y+1), (x_2,y_2)$,...
    #You've got $y+1$ where you mean $y_1$.

    #The formula for the approximate value of the slope of $y = x^2$ at $x=k$ uses $x$. It should use $k$.

    "Computing analytic continuations is not easy. This is heavy ENOUG already... "

  • John Fringe says:

    I like very much the last paragraph. That's precisely the core of the problem.

  • MarkCC wrote (Jan 20 2014):
    > The Riemann function is something called the analytic continuation of [the power series.]
    > The Riemann zeta function at s=-1 does not expand to the power series
    > that we used to approximate it.

    What the Riemann zeta function at s = -1 consequently *is* can be expressed more
    concretely by referring to "Riemann's functional equation":

    \zeta(s) = 2^s \times \pi^{s - 1} \times \sin\left( \frac{s \pi}{2} \right) \times \Gamma( 1 - s ) \times \zeta( 1 - s ),

    which for s = -1 gives:

    \zeta( -1 ) = \frac{1}{2} \times \frac{1}{\pi^2} \times \sin\left( \frac{-\pi}{2} \right) \times \Gamma( 2 ) \times \zeta( 2 ),

    \zeta( -1 ) = \frac{1}{2} \times \frac{1}{\pi^2} \times (-1) \times 1 \times \frac{\pi^2}{6} = -\frac{1}{12},

    where the value of \zeta( 2 ) is of course interesting in its own right (but clearly more directly related to a converging power series, and in this sense "more obvious").

    • p.s.
      Re: "power series"

      In the series
      \sum_{n = 1}^{\infty} n^{-s}
      mentioned in the article all terms are obviously expressed as powers:
      the inverses of the various natural numbers $n \ge 1$ as bases, "raised to the power of" the same exponent value $s$.

      However, the phrase "power series" has another specific meaning (involving the same base value, but various exponent values);
      and the series under consideration is more correctly called (a form of) a "Dirichlet series".

      • MarkCC says:

        Yeah, I screwed up. I ended up deciding not to correct the text inline; that feels like it would be covering up for my mistake, so I added some text at the end that explains the error. Better?

        • MarkCC wrote (January 21, 2014 at 11:59 am):
          > [...] so I added some text at the end that explains the error. Better?

          I couldn't have done it better myself.
          And especially thanks for once again emphasizing the main point:

          > The value of the Riemann zeta function at s=-1 is not the sum of the set of natural numbers.

          Though I still have a quibble about your article:

          > The starting point for defining it [namely the Riemann zeta function] is [...]
          > \zeta(s) = \sum_{n = 1}^{\infty} n^{-s}
          >
          This function \zeta \, is not the Riemann function!

          Damn right, it isn't!
          But then -- why denote this "plainly untamed series" (namely \sum_{n = 1}^{\infty} n^{-s}) precisely by the same function symbol (namely $\zeta$; read "zeta") which is otherwise and elsewhere used and recognized for denoting the Riemann zeta function ?!?

          To me, that's at least an abuse of notation; you might consider perhaps writing instead:

          "The starting point is ...: \zeta_{\text{start}}(s) = \Sum_{n = 1}^{\infty} n^{-s}.
          This function \zeta_{\text{start}} \, is not the Riemann function!"



  • Christian says:

    Mark, the basic sum for the zeta function you give is not a power series, as John already stated. This error goes through the whole text. You corrected the definition of a power series, but further down you still write: "So, getting back to the Riemann Zeta function: we don't have a proper closed form equation for zeta. What we have is the power series of the function that zeta is the analytic continuation of:" which is wrong. I studied maths, but complex analysis was not my strong field (did functional analysis and numerical analysis mainly) so I don't know enough about the zeta function (only that the structure of its nontrivial zeros is still a major unsolved problem), but I suggest you proofread the whole text for references between power series and the standard representation.

    Cheers and many thanks for your efforts to bring high level mathematics to the public, Christian

  • Nice post!

    Just one remark: many of the algebraic manipulations of divergent series, such as relating 1+2+3+4+... to 1+1+1+1+... and 1-1+1-1+... have natural counterparts that are actually completely valid within the domain of convergence of the series, and the equalities obtained this way are preserved by analytic continuation. So somehow, computing "as if the series were convergent" ends up giving correct, or at least coherent results.

    Not to say that one should not be careful, but _some_ of the crazy divergent series manipulations are correct, and nothing but a shortcut in notation for what you are talking about here.

  • dr24hours says:

    The deeper math is cool and all, but there's a far easier way to show that Plait was wrong. Assume the series converges (to *anything*). Name it Q. And prove that Q is greater than any a priori selected value. Done.

    Sum (N) doesn't exist. And fancy (incorrect) notation doesn't change that.

    • I believe that you are mistaken about what the video is stating. Nobody is claiming that the series converges (it doesn't, for the reason that you are giving). The question is rather whether it is possible to give the formal expression Sum(n) a numerical value in a way that is reasonable. This way cannot be as the limit of the sequence of partial sums, indeed, but maybe something else would be possible? And analytic continuation does provide such a way.

      The important thing I believe is that addition is defined for 2 numbers, hence for finitely many, but as soon as you want to sum infinitely many, you have to do something else. E.g. taking limits and so on. But even the usual definition of Sum(1/n^2) still is "fancy notation"...

      • dr24hours says:

        Sums are defined. And limits are defined. Any expression which gives a numerical value to Sum(N) (where N is the natural numbers) is wrong. We're done.

    • Sorry, I am a noob at mathematics. How to prove that Q is bigger than any a priori selected value?

      • MarkCC says:

        Suppose that the infinite sequence of naturals sums to some value Q.

        Now, pick any value N. Let N' be the smallest integer larger than N.

        Since N' is an integer, it's one of the numbers that was part of the sum that added up to Q.

        We know that for any integer N, N+1 > N.

        The sequence of natural numbers includes both 1 and N'. So Q must be greater than N' + 1. Therefore Q > N.

      • dr24hours says:

        Suppose the sum converges to a value Q. Since all elements are positive, Q must be larger than any partial sum (monotonicity). Find the partial sum Sn = 1+2+...+n such that Sn>Q. Done.
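That search for a partial sum that passes Q can be sketched in a few lines (my own illustration; the function name is made up):

```python
def first_partial_sum_exceeding(q):
    """Smallest n with 1 + 2 + ... + n > q, plus that partial sum."""
    total, n = 0, 0
    while total <= q:
        n += 1
        total += n
    return n, total

# Whatever finite value Q someone proposes for the "sum", some
# partial sum of the naturals passes it:
for q in (10, 1000, 10 ** 6):
    print(q, first_partial_sum_exceeding(q))
```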

  • Dante says:

    I think the post needs pictures. Wikimedia has a few nice slope animations, for example.

    A few typos that I lost my concentration over: if it's domain; he domian of; heavy enoug already.

    • MarkCC says:

      Fixed up those typos. I didn't put a note about those corrections inline; I think that would be more disruptive than helpful.

      I probably should have put some figures in, but at this point, it is what it is - knowing my usual traffic patterns, the majority of people who are going to read it already have.

  • annyingram says:

    We're going to think about it pretty hard and learn how to promote math research very soon, or else we're going back to 1850 levels of math research, where everyone knew each other and stuff was done by letter.

  • […] (1/21/2014): Mark Carrol-Chu has posted a follow-up, and Evelyn Lamb over at “Roots of Unity” has chimed in as […]
