Last time on
Tricks of the Trade
(with Professor Glesser)
we introduced integration by parts as an analogue of the product rule. We start this post with an example to show why the method can become tedious: computing $\int x^4 e^x \, dx$.

As there is a product of functions, this seems ideal for integration by parts. A question we will take up in our next post is which term we should look to differentiate (i.e., be our $u$) and which we should antidifferentiate (i.e., be our $dv$). For now, I will give you that a sound choice is

$u = x^4 \qquad \text{and} \qquad dv = e^x \, dx.$
With this, we get

$du = 4x^3 \, dx \qquad \text{and} \qquad v = e^x.$

Using the integration by parts formula $\int u \, dv = uv - \int v \, du$:

$\int x^4 e^x \, dx = x^4 e^x - \int 4x^3 e^x \, dx.$

Using linearity, we reduce the question to solving $\int x^3 e^x \, dx$.
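To make the bookkeeping concrete, here is that single step carried out symbolically. This is a sketch using SymPy; the integrand $x^4 e^x$ follows the running example.

```python
import sympy as sp

x = sp.symbols('x')

# One step of integration by parts for the integral of x**4 * exp(x),
# with u = x**4 and dv = exp(x) dx.
u, dv = x**4, sp.exp(x)
du = sp.diff(u, x)        # du = 4*x**3
v = sp.integrate(dv, x)   # v = exp(x)

boundary = u * v          # the u*v term: x**4 * exp(x)
remaining = v * du        # the new integrand: 4*x**3 * exp(x)
```

The `remaining` integrand has a lower power of x than what we started with, which is the whole point of the step.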
Hold on, now. Is that really an improvement?
Yes, because the power of $x$ is smaller. But, I’ll grant you that life doesn’t seem much better. Essentially, we need to do integration by parts again. So, we rename things:

$u = x^3 \qquad dv = e^x \, dx \qquad du = 3x^2 \, dx \qquad v = e^x,$

and we get

$\int x^3 e^x \, dx = x^3 e^x - \int 3x^2 e^x \, dx,$

and after using linearity, we only need to compute $\int x^2 e^x \, dx$.
Before you get up and leave, notice that the power of $x$ is one less again.
Whoo-hoo. Yay, capitalism!
Seriously, each time we do this process, the exponent will decrease by one (since we are differentiating). So we “only” need to do it two more times.
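The “do it again” loop can be written down once and for all. Here is a sketch in SymPy of the reduction $\int x^n e^x \, dx = x^n e^x - n \int x^{n-1} e^x \, dx$, sticking with $e^x$ as the second factor (the function name is my own):

```python
import sympy as sp

x = sp.symbols('x')

def int_xn_ex(n):
    """Antidifferentiate x**n * exp(x) by repeated integration by
    parts: each pass lowers the exponent by one."""
    if n == 0:
        return sp.exp(x)  # base case: the integral of exp(x)
    # integral of x**n e**x  =  x**n e**x  -  n * (integral of x**(n-1) e**x)
    return x**n * sp.exp(x) - n * int_xn_ex(n - 1)
```

Differentiating `int_xn_ex(4)` recovers `x**4 * exp(x)`, confirming that the four rounds of parts really do the job.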
You suck, Professor Glesser
Agreed. This is why it is nice to automate the process. I first learned this by watching Stand and Deliver over and over while in high school. I am not much of a fan of Battlestar Galactica (nerd cred…plummeting), and the few times I watched, I thought Edward James Olmos’s portrayal of William Adama was really flat; I thought Olmos was mailing in the performance. The most likely reason for my feelings? His turn as math teacher Jaime Escalante. If you’ve never seen it, watch Stand and Deliver. Now that was a performance. Anyhow, here is the clip I watched incessantly.
I decided on a different notational scheme, but the method is the same. We make the following observation: when doing integration by parts repeatedly, the term that we differentiate will usually be differentiated again. That is, (abusing notation) the $du$ becomes our new $u$. If you like, the formula for integration by parts has us multiply diagonally left to right ($uv$) and then subtract the integral of the product left to right along the bottom ($\int v \, du$):

$\begin{array}{cc} u & dv \\ du & v \end{array}$
Essentially, this creates an alternating sum. In practice, it means we can set up the following chart where, going down, we differentiate on the left until we get $0$ and antidifferentiate on the right as many times as we differentiated:

$\begin{array}{cc} x^4 & e^x \\ 4x^3 & e^x \\ 12x^2 & e^x \\ 24x & e^x \\ 24 & e^x \\ 0 & e^x \end{array}$

so that

$\int x^4 e^x \, dx = x^4 e^x - 4x^3 e^x + 12x^2 e^x - 24x e^x + 24 e^x + C.$
Notice here that we are condensing quite a bit of notation with this method since we are no longer using the u, v, du, and dv notation. But, we are getting out precisely the same information. We draw diagonal left-to-right arrows to indicate which terms multiply and we superscript the arrows with alternating pluses and minuses to give the appropriate sign.
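The whole chart can be automated in a few lines. This is a sketch of the tabular scheme just described, in SymPy; the function name `tabular` is mine, and it assumes the left-column entry is a polynomial so that repeated differentiation eventually reaches 0.

```python
import sympy as sp

x = sp.symbols('x')

def tabular(poly, other):
    """Tabular integration by parts: differentiate `poly` down the
    left column until it hits 0, antidifferentiate `other` down the
    right column, then sum the diagonal products with alternating
    signs."""
    total, sign = sp.Integer(0), 1
    left, right = poly, other
    while left != 0:
        right = sp.integrate(right, x)  # next entry of the right column
        total += sign * left * right    # diagonal product, with its sign
        left = sp.diff(left, x)         # next entry of the left column
        sign = -sign
    return total
```

For example, `tabular(x**2, sp.cos(x))` produces `x**2*sin(x) + 2*x*cos(x) - 2*sin(x)`, whose derivative is `x**2*cos(x)`.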
Could you do that again?
Let’s try a different example, a little more complicated. Say we want to compute $\int x^3 \sin x \, dx$. We simply set up the chart where, going down, we differentiate on the left and antidifferentiate on the right:

$\begin{array}{cc} x^3 & \sin x \\ 3x^2 & -\cos x \\ 6x & -\sin x \\ 6 & \cos x \\ 0 & \sin x \end{array}$

Multiplying diagonally with alternating signs gives us

$-x^3 \cos x + 3x^2 \sin x + 6x \cos x - 6 \sin x + C$

as the antiderivative for $x^3 \sin x$.
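Whenever you read an antiderivative off a chart like this, differentiating it is a cheap sanity check. A SymPy sketch, using $x^3 \sin x$ as the integrand:

```python
import sympy as sp

x = sp.symbols('x')

# Candidate antiderivative read off the chart for x**3 * sin(x)
F = -x**3*sp.cos(x) + 3*x**2*sp.sin(x) + 6*x*sp.cos(x) - 6*sp.sin(x)

# Differentiating must return the original integrand
check = sp.simplify(sp.diff(F, x) - x**3*sp.sin(x))  # 0 when the chart is right
```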
Indeed. Next time we’ll take this a step further and show how to handle some situations where neither function is a polynomial. This will also bring up, again, the question of how to choose which function to differentiate and which to integrate.