Last time on

**Tricks of the Trade**

(with Professor Glesser)

we introduced integration by parts as an analogue to the product rule. We start this post with an example to show why the method can become tedious.

Consider

$\displaystyle\int x^4 \sin(x)\,dx.$

As there is a product of functions, this seems ideal for integration by parts. A question we will take up in our next post is which term we should look to differentiate (i.e., be our $u$) and which we should antidifferentiate (i.e., be our $dv$). For now, I will give you that a sound choice is

$u = x^4, \qquad dv = \sin(x)\,dx.$

With this, we get

$du = 4x^3\,dx, \qquad v = -\cos(x).$

Using the integration by parts formula:

$\displaystyle\int u\,dv = uv - \int v\,du,$

we get

$\displaystyle\int x^4\sin(x)\,dx = -x^4\cos(x) + \int 4x^3\cos(x)\,dx.$

Using linearity, we reduce the question to solving $\int x^3\cos(x)\,dx$.
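Before going further, a quick numerical sanity check may be reassuring. This is a minimal sketch of my own (not from the original post), assuming the choice $u = x^4$, $dv = \sin(x)\,dx$: rearranged, integration by parts says the integrand $x^4\sin(x)$ equals the derivative of the boundary term $uv$ plus the new, easier integrand $4x^3\cos(x)$, which we can test with a central difference.

```python
import math

def uv(x):
    # Boundary term u*v with u = x^4 and v = -cos(x).
    return -x**4 * math.cos(x)

def parts_residual(x, h=1e-6):
    # Integration by parts rearranged as an identity on integrands:
    #   x^4 sin(x) = d/dx(u*v) + 4x^3 cos(x),
    # with d/dx(u*v) approximated by a central difference.
    d_uv = (uv(x + h) - uv(x - h)) / (2 * h)
    return x**4 * math.sin(x) - (d_uv + 4 * x**3 * math.cos(x))

for pt in (0.3, 1.0, 2.5):
    assert abs(parts_residual(pt)) < 1e-5
print("one round of parts verified")
```

The residual vanishes (up to finite-difference error) at every test point, confirming the single parts step without computing the remaining integral.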

**Hold on, now. Is that really an improvement?**

Yes, because the power of $x$ is smaller. But I'll grant you that life doesn't seem much better. Essentially, we need to do integration by parts again. So, we rename things:

$u = x^3, \qquad dv = \cos(x)\,dx, \qquad du = 3x^2\,dx, \qquad v = \sin(x),$

and we get

$\displaystyle\int x^3\cos(x)\,dx = x^3\sin(x) - \int 3x^2\sin(x)\,dx,$

and after using linearity, we only need to compute $\int x^2\sin(x)\,dx$.

**Check please!**

Before you get up and leave, notice that the power of $x$ is one less again.

**Whoo-hoo. Yay, capitalism!**

Seriously, each time we do this process, the exponent will decrease by one (since we are differentiating). So we “only” need to do it two more times.

**You suck, Professor Glesser**

Agreed. This is why it is nice to automate the process. I first learned this by watching *Stand and Deliver* over and over while in high school. I am not much of a fan of *Battlestar Galactica* (nerd cred…plummeting) and the few times I watched, I thought Edward James Olmos’ portrayal of William Adama was really flat; I thought Olmos was mailing in the performance. The most likely reason for my feelings? If you’ve never seen it, watch *Stand and Deliver* and Olmos’ portrayal of math teacher Jaime Escalante. Now *that* was a performance. Anyhow, here is the clip I watched incessantly.

I decided on a different notational scheme, but the method is the same. We make the following observation: when doing integration by parts repeatedly, the term that we differentiate will usually be differentiated again. That is, (abusing notation) the $du$ becomes our new $u$. If you like, the formula for integration by parts has us multiply diagonally left to right ($uv$) and then subtract the integral of the product left to right along the bottom ($\int v\,du$):

$\displaystyle\int u\,dv = \underbrace{uv}_{\text{diagonal}} - \underbrace{\int v\,du}_{\text{along the bottom}}.$

The next iteration of integration by parts gives (writing $u'$, $u''$ for successive derivatives of $u$, and $v_1 = v$, $v_2$ for successive antiderivatives of $dv$):

$\displaystyle\int u\,dv = uv_1 - \int u'v_1\,dx = uv_1 - u'v_2 + \int u''v_2\,dx.$

Essentially, this creates an alternating sum. In practice, it means we can set up the following chart where, going down, we differentiate on the left until we get $0$ and antidifferentiate on the right as many times as we differentiated:

$\begin{array}{cc} x^4 & \sin(x) \\ 4x^3 & -\cos(x) \\ 12x^2 & -\sin(x) \\ 24x & \cos(x) \\ 24 & \sin(x) \\ 0 & -\cos(x) \end{array}$

Notice here that we are condensing quite a bit of notation with this method since we are no longer using the u, v, du, and dv notation. But, we are getting out precisely the same information. We draw diagonal left-to-right arrows to indicate which terms multiply and we superscript the arrows with alternating pluses and minuses to give the appropriate sign.

We don’t need to draw a horizontal arrow on the bottom since that would simply give us the antiderivative of $0$. Following the arrows and taking account of signs, our antiderivative is

$\displaystyle\int x^4\sin(x)\,dx = -x^4\cos(x) + 4x^3\sin(x) + 12x^2\cos(x) - 24x\sin(x) - 24\cos(x) + C.$
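The chart bookkeeping is mechanical enough to automate. Here is a small Python sketch of my own (an illustration, not part of the original post) that runs the tabular method for any integrand of the form $p(x)\sin(x)$, with $p$ given by its coefficient list, and then verifies the $p(x) = x^4$ case by differentiating the result numerically.

```python
import math

def tabular_antiderivative(coeffs):
    """Tabular integration by parts for p(x) * sin(x).

    coeffs[k] is the coefficient of x^k in p(x).  The left column of
    the chart holds p, p', p'', ... down to 0; the right column holds
    successive antiderivatives of sin(x), which cycle with period 4:
        -cos, -sin, cos, sin, -cos, ...
    Diagonal products carry alternating signs +, -, +, -, ...
    """
    derivs, current = [], list(coeffs)
    while any(c != 0 for c in current):
        derivs.append(current)
        # Differentiate: the coefficient of x^(k-1) becomes k * coeffs[k].
        current = [k * c for k, c in enumerate(current)][1:] or [0]

    antiderivs = [
        lambda x: -math.cos(x),  # 1st antiderivative of sin
        lambda x: -math.sin(x),  # 2nd
        lambda x: math.cos(x),   # 3rd
        lambda x: math.sin(x),   # 4th, then the cycle repeats
    ]

    def F(x):
        total = 0.0
        for i, row in enumerate(derivs):
            poly = sum(c * x**k for k, c in enumerate(row))
            total += (-1) ** i * poly * antiderivs[i % 4](x)
        return total

    return F

# p(x) = x^4: F should be an antiderivative of x^4 * sin(x),
# so its numerical derivative should match that integrand.
F = tabular_antiderivative([0, 0, 0, 0, 1])
h = 1e-6
for pt in (0.5, 1.7, 3.0):
    dF = (F(pt + h) - F(pt - h)) / (2 * h)
    assert abs(dF - pt**4 * math.sin(pt)) < 1e-4
print("tabular method verified")
```

The left column is exactly the chart's derivative column, the `antiderivs` cycle is the right column, and the `(-1) ** i` factor plays the role of the alternating signs on the diagonal arrows.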

**Could you do that again?**

Let’s try a different, slightly more complicated example. Again, we simply set up the chart where, going down, we differentiate on the left and antidifferentiate on the right:

Following the arrows and taking account of signs, we read off the antiderivative just as before.

Indeed. Next time we’ll take this a step further and show how to handle some situations where neither function is a polynomial. This will also bring up the question, again, about how to choose which function to differentiate and which to integrate.

As we’re working through this sort of example, but before I show them this method, I usually make a show of how sometimes the order does matter (and you may bring this up in part 3). That is, I’ll let u = x^4 and dv = sin(x) dx the first time and then go through it and say something like, “Well, that didn’t get us much of anywhere. What if we switch up our u and dv this time? Let’s let u = cos(x) and dv = x^3 dx.” Then when you work it through, everything cancels out and we’re back to the original problem.

I use that to show them that it usually makes sense to keep your sequence of choices for u consistent rather than bouncing back and forth. I guess it makes more sense with the non-polynomial functions you’ll show in the next part you’re writing about, but I usually do this before the method here to describe why we keep the same u in a single column.

Comment by CalcDave — September 27, 2010 @ 8:40 am |

[…] Integration by Parts 1 Integration by Parts 2 […]

Pingback by Integration by Parts 3 « GL(s,R) — October 1, 2010 @ 8:43 am |

Did you ever continue after your break? If so, where can I find the material? Thank you!

Comment by Evelyn Saravia — June 28, 2011 @ 6:29 pm |