GL(s,R)

July 7, 2012

The Usual Way Is Just Fine, Man.

Filed under: Tricks of the Trade — Adam Glesser @ 11:29 pm

Tricks of the Trade

(with Professor Glesser)

As I mentioned in this much-maligned post, “my all-time favorite differentiation technique is logarithmic differentiation.” In that post, I give examples of two types of problems where the technique proves useful. The second type—where a variable function is raised to a variable power—is handled with the SPEC rule (essentially the sum of the power rule and exponential rule, with the chain rule used as per normal). Here is the example I gave of a function of the first type.
y = \sqrt[3]{\dfrac{(3x-2)^2\sqrt{2x^3+1}}{x^4(x-1)}}
Typically, I show the students how to use logarithmic differentiation in order to compute the derivative of this type of function (see the post linked to above for the full derivation). However, this is not how I compute it myself!


Story Time

Like most everybody who takes calculus, I learned the quotient rule for differentiation:

\left(\dfrac{f}{g}\right)' = \dfrac{g \cdot f' - f \cdot g'}{g^2}

Or, in song form (sung to the tune of Old McDonald):

Low d-high less high d-low
E-I-E-I-O
And on the bottom, the square of low
E-I-E-I-O
[Note that when sung incorrectly as High d-low less low d-high, the rhyme will not work!]

At some point, I was given an exercise to show that
\left(\dfrac{f}{g}\right)' = \dfrac{f}{g}\left(\dfrac{f'}{f} - \dfrac{g'}{g}\right).
If you start from this reformulation, it is a simple matter of algebra to get to the usual formulation of the quotient rule. However, a couple of things caught my eye. First, the reformulation seemed much easier to remember: copy the function and then write down the derivative of each function over the function and subtract them; the order is the “natural” one where the numerator comes first.

Story Within A Story Time

Actually, there is a reasonably nice way to remember the order of the quotient rule, at least if you understand the meaning of the derivative. Assume that both the numerator and denominator are positive functions. If the derivative of the numerator is positive, then the numerator is increasing, and so the quotient is increasing as well, i.e., f' should have a positive sign in front of it. Similarly, if the derivative of the denominator is positive, then the denominator is increasing, which means the quotient is decreasing, i.e., g' should have a negative sign in front of it.

Secondly, the appearance of the original function in the answer screams: LOGARITHMIC DIFFERENTIATION. Let’s see why.

If y = \dfrac{f}{g}, then \ln(y) = \ln\left(\dfrac{f}{g}\right) = \ln(f) - \ln(g). Differentiating both sides using the chain rule yields
\dfrac{y'}{y} = \dfrac{f'}{f} - \dfrac{g'}{g},
and so the result follows by multiplying both sides by y. This is one of my favorite exercises to give first year calculus students—before and after teaching them logarithmic differentiation*.

*Don’t you think that giving out the same problem at different times during the course is an underutilized tactic?
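If you would like a machine to agree with the algebra, here is a quick sketch in sympy (the variable names are mine, not part of the exercise) confirming that the reformulation matches the usual quotient rule for arbitrary differentiable f and g:

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.Function('f')(x), sp.Function('g')(x)

# Left side: the derivative of f/g, as sympy computes it.
lhs = sp.diff(f / g, x)

# Right side: the reformulation (f/g) * (f'/f - g'/g).
rhs = (f / g) * (sp.diff(f, x) / f - sp.diff(g, x) / g)

# The difference simplifies to zero, so the two forms agree.
difference = sp.simplify(lhs - rhs)
```

This is exactly the "simple matter of algebra" mentioned above, outsourced to a computer algebra system.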

Being a good math nerd, I had to take this further. What if the numerator and denominator are, themselves, a product of functions? Assume that f = f_1 \cdot f_2 \cdots f_m and that g = g_1 \cdot g_2 \cdots g_n. Setting y = \dfrac{f}{g}, taking the natural logarithm of both sides, and applying log rules, we get:

\ln(y) =\ln(f_1) + \ln(f_2) + \cdots + \ln(f_m) -\ln(g_1) - \ln(g_2) - \cdots - \ln(g_n).

Differentiating (using the chain rule, as usual) gives:

\dfrac{y'}{y} = \dfrac{f'_1}{f_1} + \dfrac{f'_2}{f_2} + \cdots + \dfrac{f'_m}{f_m} - \dfrac{g'_1}{g_1} - \dfrac{g'_2}{g_2} - \cdots - \dfrac{g'_n}{g_n}.

Multiplying both sides by y now gives us the formula:

y' = \dfrac{f}{g}\left(\dfrac{f'_1}{f_1} + \dfrac{f'_2}{f_2} + \cdots + \dfrac{f'_m}{f_m} - \dfrac{g'_1}{g_1} - \dfrac{g'_2}{g_2} - \cdots -\dfrac{g'_n}{g_n}\right).

An immediate example of using this is as follows. Differentiate y = \dfrac{\sin(x)e^x}{(x+2)\ln(x)}. The usual way would involve the quotient rule mixed with two applications of the product rule. The alternative is simply to rewrite the function and work term by term, giving:

y' = \dfrac{\sin(x)e^x}{(x+2)\ln(x)}\left(\dfrac{\cos(x)}{\sin(x)} + \dfrac{e^x}{e^x} - \dfrac{1}{x+2} - \dfrac{1/x}{\ln(x)}\right),

which immediately reveals some rather easy simplifications.
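For those who like to automate, the formula translates into a few lines of sympy. This is just a sketch, and the helper name log_diff is my own invention, not anything standard:

```python
import sympy as sp

x = sp.symbols('x')

def log_diff(numer_factors, denom_factors):
    """Differentiate prod(numer_factors)/prod(denom_factors) using the
    logarithmic-differentiation formula:
    y' = y * (sum f_i'/f_i - sum g_j'/g_j)."""
    y = sp.Mul(*numer_factors) / sp.Mul(*denom_factors)
    return y * (sum(sp.diff(f, x) / f for f in numer_factors)
                - sum(sp.diff(g, x) / g for g in denom_factors))

# The example above: y = sin(x) e^x / ((x + 2) ln(x)).
yprime = log_diff([sp.sin(x), sp.exp(x)], [x + 2, sp.log(x)])

# Agrees with sympy's direct derivative (checked numerically at x = 2).
direct = sp.diff(sp.sin(x) * sp.exp(x) / ((x + 2) * sp.log(x)), x)
gap = sp.N((yprime - direct).subs(x, 2))
```

The numeric spot-check avoids relying on sympy's simplifier to cancel the two symbolic forms against each other.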

But we haven’t used all of the log rules yet! We haven’t used the power rule for logarithms, \ln(f^a) = a\ln(f). So, let’s assume that each of our f_i's and g_j's has an exponent, call them a_i and b_j, respectively. In this case, using logarithmic differentiation, we get:

\ln(y) = a_1\ln(f_1) + \cdots + a_m\ln(f_m) - b_1\ln(g_1) - \cdots - b_n\ln(g_n).

Differentiating and then multiplying both sides by y, we get almost the same formula as above, but with some extra coefficients:

y' = \dfrac{f}{g}\left(a_1\dfrac{f'_1}{f_1} + \cdots + a_m\dfrac{f'_m}{f_m} - b_1\dfrac{g'_1}{g_1} - \cdots - b_n\dfrac{g'_n}{g_n} \right).

Look back to the example near the top of the post. If we rewrite it with exponents instead of roots, we get:

y = \dfrac{(3x-2)^{2/3}(2x^3 + 1)^{1/6}}{x^{4/3}(x-1)^{1/3}}.

Taking the derivative is now completely straightforward.

y' = \dfrac{(3x-2)^{2/3}(2x^3 + 1)^{1/6}}{x^{4/3}(x-1)^{1/3}}\left(\dfrac{2}{3}\cdot\dfrac{3}{3x-2} + \dfrac{1}{6}\cdot\dfrac{6x^2}{2x^3+1} - \dfrac{4}{3}\cdot\dfrac{1}{x} - \dfrac{1}{3}\cdot \dfrac{1}{x-1}\right).

Again, there is some simplifying to be done.
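As a sanity check (a sympy sketch of my own, not part of the original derivation), we can compare the claimed derivative against a direct symbolic derivative at a point where all the factors are defined:

```python
import sympy as sp

x = sp.symbols('x')

# The rewritten function with fractional exponents.
y = ((3*x - 2)**sp.Rational(2, 3) * (2*x**3 + 1)**sp.Rational(1, 6)
     / (x**sp.Rational(4, 3) * (x - 1)**sp.Rational(1, 3)))

# The derivative as read off from the formula.
claimed = y * (sp.Rational(2, 3) * 3/(3*x - 2)
               + sp.Rational(1, 6) * 6*x**2/(2*x**3 + 1)
               - sp.Rational(4, 3) / x
               - sp.Rational(1, 3) / (x - 1))

# Compare with sympy's derivative at x = 2, safely inside the domain.
gap = sp.N((sp.diff(y, x) - claimed).subs(x, 2))
```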

An easier problem is one without a denominator! Let y = \tan(2x)x^{3/4}(3x-1)^3. Normally, one would use the product rule here, but why don’t we try our formula? It gives:

y' = \tan(2x)x^{3/4}(3x-1)^3\left(\dfrac{2\sec^2(2x)}{\tan(2x)} + \dfrac{3}{4}\cdot \dfrac{1}{x} + 3\dfrac{3}{3x-1}\right).

That was pretty painless, while the product rule becomes more tedious as the number of factors in the product increases.
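The same kind of spot-check works here. Again, this is my own sympy sketch, checking the term-by-term derivative against a direct computation:

```python
import sympy as sp

x = sp.symbols('x')

y = sp.tan(2*x) * x**sp.Rational(3, 4) * (3*x - 1)**3

# The derivative read off term by term, as above.
claimed = y * (2*sp.sec(2*x)**2 / sp.tan(2*x)
               + sp.Rational(3, 4) / x
               + 9 / (3*x - 1))

# Compare with sympy's derivative at x = 1/2, where tan(2x) is nonzero.
gap = sp.N((sp.diff(y, x) - claimed).subs(x, sp.Rational(1, 2)))
```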

Oh, and if you can’t imagine this being appropriate to teach to students, no less an authority than Richard Feynman encouraged his students to differentiate this way. At the very least, his support gives me the confidence to let you in on my little secret.


January 12, 2011

A Calculus List

Filed under: High Effort/Low Payoff Ideas, Standard Based Grading — Adam Glesser @ 4:19 pm

One of the advantages of my job is the incredible scheduling. We finished the fall semester the second week of December and my first class of the spring is next Tuesday! The downside is that this gives me way too much time to plot, scheme, doodle, dabble, think, rethink, and overthink. In the end, I usually settle on a plan that is far too ambitious, pedagogically impossible, philosophically suspect, and utterly indefensible. Thus, I bring you my plan for calculus this semester.

I read the wonderful article, Putting Differentials Back Into Calculus, which argues for using differentials in a way closer to their original creation than the way they are employed in modern textbooks. As a huge fan of Thompson’s Calculus Made Easy, this suggestion didn’t seem half bad. Considering that only a fifth of my students are math majors and that, for the rest, using calculus outside of their physics class is unlikely, why not make things as easy as possible? I’m going to push to teach this same group next fall in calculus II, so the only teacher I can hurt is myself, right? But then I started thinking: the reason the differential approach will work so well with these students is that they will always be using differentiable functions. What reason is there to mention limits and continuity? These are technical issues that won’t help them at all in understanding calculus or how to apply it in their field of inquiry.

Oh dear, so here I am with essentially two months of material (this includes learning to differentiate any elementary function and using this to solve the standard problems). What will I do for the last month and a half? I quickly remembered to add Taylor series because I love teaching that in calculus I. Then I added in the obligatory introduction to antiderivatives and integration. I even sprinkled in some partial differentiation at the end so that I could show the students the totally-awesome-implicit-differentiation-trick that would save them five minutes on the final exam. Grr…still two weeks left. These are precisely the two weeks that I usually spend on limits in the beginning. Now I remember why I always do this. It perfectly fills in the semester calendar. And then it hits me.

Any subject can be made repulsive by presenting it bristling with difficulties.
—Silvanus P. Thompson

Limits sure confuse the heck out of students. Why in the world are we leading calculus off with limits, especially to non-math majors? For the purpose of rigor?

You don’t forbid the use of a watch to every person who does not know how to make one? You don’t object to the musician playing on a violin that he has not himself constructed. You don’t teach the rules of syntax to children until they have already become fluent in the use of speech. It would be equally absurd to require rigid demonstrations to be expounded to beginners in the calculus.
—Silvanus P. Thompson

So I decide I just won’t do it.  No limits for us. We will just do some extra exploratory work. There is a great article on math in medicine we could read together. Okay, time for sleep.

But sleep does not come.

Toss.

Turn.

Toss.

Turn.

All right. All right. I’m up.

Why can’t the limits just die? Why do I feel the compulsion to put them back in? They’re like the tell-tale heart beating under my floor. What will become of my math majors if they don’t see limits? No, I can’t do that to them. What a cruel joke to play: send them to real analysis without having used limits. Back in they go. Of course, those biology majors are going to be completely turned off and once you lose them, they’re gone for good. Argh, out they go. On the other hand, if I get audited by the department, they are sure going to ask questions. With my review coming up, I can’t afford that kind of chatter. Put them back in…

…but later. Huh? Put them back in, but later. Yes, of course. Later. The course description says I need to cover limits, but it doesn’t say when! What if we introduced everything in a reasonable way and, only after the students know what is going on and why any of this is important, then showed them those funky limit do-hickies? Hmm. Interesting. And that is my explanation for the following calendar and skills list:

View this document on Scribd
View this document on Scribd

October 1, 2010

Integration by Parts 3

Filed under: Tricks of the Trade — Adam Glesser @ 8:43 am

Tricks of the Trade

(with Professor Glesser)

In the first two installments of this series

Integration by Parts 1
Integration by Parts 2

we introduced integration by parts as a way to compute antiderivatives of a product of functions and we saw how certain integration by parts problems are handled more efficiently with the so-called tabular method (or, in Stand and Deliver, the “tic-tac-toe” method). In this post, we will consider the following question: As integration by parts requires the making of a choice—which is your u and which is your dv—how can we make this choice so that the resulting integral is easier to compute?

From the Mailbag

Über-reader CalcDave wrote in the comments to the last post in this series that,

I usually make a show of how sometimes the order does matter…That is, I’ll let u = x^4 and dv = \sin(x)\ dx the first time and then go through it and say something like, “Well, that didn’t get us much of anywhere. What if we switch up our u and dv this time? Let’s let u = \cos(x) and dv = x^3.” Then when you work it through, everything cancels out and we’re back to the original problem.

Indeed, Dave. Let’s take a look at what happens if we switch it up.

Egad, Dave is right. Since the product of the terms in the last line of the table is what we will need to integrate, doing it this way just makes things worse. Ah, but what if we start with the cosine on the left and then switch it up? Oh, yeah, we’ll just get back what we started with. This suggests that we should always put a polynomial on the left so that it doesn’t go up in degree. It turns out that there are several examples where this is precisely the wrong thing to do. We implicitly saw this in the first post, but let me give you a couple of more explicit examples.

\int x\sin^{-1}(x)\ dx

If we split this up using our ‘rule’ to always put the polynomial on the left, then we are forced to integrate \sin^{-1}(x). Let’s say you just happen to know the antiderivative of \sin^{-1}(x) is x\sin^{-1}(x) + \sqrt{1 - x^2} + C (I didn’t, although I can use integration by parts to figure it out!). You would now get:
and be forced to integrate the monstrosity on the right. Not for me, thank you. However, if you put the \sin^{-1}(x) on the left, we get:
and at the very least we have gotten rid of the \sin^{-1}(x). In fact we have done more, but we’ll have to wait until the next post to resolve this.

Another example is \int x\ln(x)\ dx. Although we did integrate \ln(x) in our first post, it gave an answer of x\ln(x) - x + C and we don’t want to integrate that since we don’t know how to integrate x\ln(x) (in a future post, we will resolve this last problem directly). On the other hand, if we put the \ln(x) on the left, the derivative will return \frac{1}{x} and the natural logarithm is gone. So when does it pay to put the polynomial on the right? Whenever the derivative of the other function changes it into an algebraic function, it will be right to integrate the polynomial. Otherwise, you should differentiate the polynomial. If we also include trigonometric functions and exponential functions, the rule of thumb is:
Logarithms, Inverse Trig, Algebraic, Trig, Exponential

This list represents a good order in which to choose your u in the following sense: if you have two functions, whichever comes first in the above list should be your u. Some people enjoy a good mnemonic to memorize the order. I’ve heard the following:

LIATE rule (or alL I ATE rule)

Lions In Africa Tackle Elephants

Liberals In America Typify Elitists

Little Indians Are Tiny Engines

Lets Integrate All The Equations

This says, for example, that when confronted with \int \sin(x) e^x\ dx, differentiate the \sin(x) and integrate e^x.
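The LIATE choice is easy to test by machine. Here is a sympy sketch of my own (not the by-hand resolution, which comes in a later post) applying the rule to \int x\ln(x)\ dx, where L beats A, so u = \ln(x):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# LIATE on x*ln(x): 'L' comes before 'A', so u = ln(x) and dv = x dx.
u = sp.log(x)
v = sp.integrate(x, x)    # v = x**2 / 2
du = sp.diff(u, x)        # du = (1/x) dx

# Integration by parts: ∫ u dv = u*v - ∫ v du.
result = u*v - sp.integrate(v*du, x)
```

Differentiating `result` returns x·ln(x), so the choice dictated by the list really does finish the problem in one step.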

Next Time

In our next segment, we will introduce the box method for handling several of the integrals left unsolved in this post.


September 24, 2010

Integration by Parts 2

Filed under: Tricks of the Trade — Adam Glesser @ 8:28 am

Last time on

Tricks of the Trade

(with Professor Glesser)

we introduced integration by parts as an analogue to the product rule. We start this post with an example to show why the method can become tedious.

Consider

\int x^4\sin(x)\ dx

As there is a product of functions, this seems ideal for integration by parts. A question we will take up in our next post is which term we should look to differentiate (i.e., be our u) and which we should antidifferentiate (i.e., be our dv). For now, I will give you that a sound choice is

u = x^4 \qquad dv = \sin(x)\ dx

With this, we get

du = 4x^3\ dx \qquad v = -\cos(x).

Using the integration by parts formula:

\int u\ dv = uv - \int v\ du

we get

\int x^4\sin(x)\ dx =-x^4\cos(x)-\int 4x^3(-\cos(x))\ dx

Using linearity, we reduce the question to solving \int x^3\cos(x)\ dx.

Hold on, now. Is that really an improvement?

Yes, because the power of x is smaller. But, I’ll grant you that life doesn’t seem much better. Essentially, we need to do integration by parts again. So, we rename things:

u = x^3 \qquad dv = \cos(x)\ dx
du = 3x^2\ dx \qquad v = \sin(x)

and we get

\int x^3\cos(x)\ dx = x^3\sin(x) - \int 3x^2\sin(x)\ dx

and after using linearity, we only need to compute \int x^2\sin(x)\ dx.

Check please!

Before you get up and leave, notice that the power of x is one less again.

Whoo-hoo. Yay, capitalism!

Seriously, each time we do this process, the exponent will decrease by one (since we are differentiating). So we “only” need to do it two more times.

You suck, Professor Glesser

Agreed. This is why it is nice to automate the process. I first learned this by watching Stand and Deliver over and over while in high school. I am not much of a fan of Battlestar Galactica (nerd cred…plummeting) and the few times I watched, I thought Edward James Olmos’ portrayal of William Adama was really flat; I thought Olmos was mailing in the performance. The most likely reason for my feelings? If you’ve never seen it, watch Stand and Deliver and Olmos’ portrayal of math teacher Jaime Escalante. Now that was a performance. Anyhow, here is the clip I watched incessantly.

I decided on a different notational scheme, but the method is the same. We make the following observation: when doing integration by parts repeatedly, the term that we differentiate will usually be differentiated again. That is, (abusing notation) the du becomes our new u. If you like, the formula for integration by parts has us multiply diagonally left to right (uv) and then subtract the integral of the product left to right along the bottom (-\int v\ du):

The next iteration of integration by parts gives:

Essentially, this creates an alternating sum. In practice, it means we can set up the following chart where, going down, we differentiate on the left until we get 0 and antidifferentiate on the right as many times as we differentiated.

Notice here that we are condensing quite a bit of notation with this method since we are no longer using the u, v, du, and dv notation. But, we are getting out precisely the same information. We draw diagonal left-to-right arrows to indicate which terms multiply and we superscript the arrows with alternating pluses and minuses to give the appropriate sign.

We don’t need to draw a horizontal arrow on the bottom since that would simply give us the antiderivative of 0 \cdot (-\cos(x)) = 0. Following the arrows and taking account of signs, our antiderivative is

-x^4\cos(x) + 4x^3\sin(x)+ 12x^2\cos(x)- 24x\sin(x)- 24\cos(x)+C
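The whole chart fits in a short loop. This is a sketch of my own (the helper name tabular is mine), and it assumes the left column is a polynomial so the derivatives eventually hit 0:

```python
import sympy as sp

x = sp.symbols('x')

def tabular(poly, other):
    """Tabular integration by parts: differentiate `poly` down the left
    column until it reaches 0, antidifferentiate `other` down the right
    column, and multiply along the diagonals with alternating signs."""
    total, sign = sp.S(0), 1
    d = poly
    a = sp.integrate(other, x)
    while d != 0:
        total += sign * d * a
        d = sp.diff(d, x)
        a = sp.integrate(a, x)
        sign = -sign
    return total

antideriv = sp.expand(tabular(x**4, sp.sin(x)))
```

Differentiating `antideriv` recovers x⁴sin(x), matching the chart's answer above.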

Could you do that again?

Let’s try a different example, a little more complicated. Say we want to compute \int (2x^2- 3x + 4)\cos(3x)\ dx. We simply set up the chart where, going down, we differentiate on the left and antidifferentiate on the right:

and follow the arrows to get

\frac{1}{3}(2x^2 - 3x+4)\sin(3x)+ \frac{1}{9}(4x-3)\cos(3x)- \frac{4}{27}\sin(3x)+C
as the antiderivative for \int (2x^2- 3x + 4)\cos(3x)\ dx.
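One quick way to check the chart's output (a sympy sketch of my own, not part of the original post) is to differentiate the claimed antiderivative and compare it with the integrand:

```python
import sympy as sp

x = sp.symbols('x')

# The antiderivative read off from the chart.
claimed = (sp.Rational(1, 3)*(2*x**2 - 3*x + 4)*sp.sin(3*x)
           + sp.Rational(1, 9)*(4*x - 3)*sp.cos(3*x)
           - sp.Rational(4, 27)*sp.sin(3*x))

integrand = (2*x**2 - 3*x + 4)*sp.cos(3*x)

# Differentiating the claimed antiderivative should return the integrand.
check = sp.simplify(sp.diff(claimed, x) - integrand)
```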

I think I need a break

Indeed. Next time we’ll take this a step further and show how to handle some situations where neither function is a polynomial. This will also bring up the question, again, about how to choose which function to differentiate and which to integrate.

September 22, 2010

Integration By Parts 1

Filed under: Tricks of the Trade — Adam Glesser @ 7:13 am

This is the first in a series of posts on one of my favorite methods of antidifferentiation: integration by parts. I didn’t love it at first, but a little practice and a few tricks made me appreciate it. Teaching it, well, that is where the love affair begins.

Tricks of the Trade

(with Professor Glesser)

What are you talking about?

Let me assume that the reader is familiar with basic differentiation (including the product rule) and antidifferentiation of some basic elementary functions, i.e., the reader knows such facts as the power rule and how to antidifferentiate exponential functions as well as sine and cosine.

Integration by parts is an analogue to the product rule for derivatives (which tells you how to differentiate a product of functions). In the language of differentials, we have

d(uv) = u\ dv + v\ du

for functions u and v of some common variable, say x. Integrating both sides, we get

uv = \int d(uv) = \int u\ dv + \int v\ du.

The usual form of the integration by parts formula is now obtained by subtracting a term:

\int u\ dv = uv - \int v\ du.

Uh…What?

An example may be helpful. A canonical first example is \int x\sin(x)\ dx. The typical calculus student, fooled by the simplicity of the sum rule and not having the product rule in mind, will incorrectly assert \int x \sin(x)\ dx = (\int x\ dx)(\int \sin(x)\ dx) = (\frac{1}{2}x^2)(-\cos(x)) = -\frac{1}{2}x^2\cos(x). Of course, differentiating shows that this answer is wrong. Why? Well, because antidifferentiation is additive but isn’t multiplicative.

So let’s try the integration by parts formula. We start by noting that \sin(x) is the derivative of -\cos(x), i.e., d(-\cos(x)) = \sin(x)\ dx. Consequently, we could write

\int x \sin(x)\ dx = \int x\ d(-\cos(x)).

We may now apply the integration by parts formula where u = x and v = -\cos(x). This gives

\int x\ d(-\cos(x)) = -x\cos(x) - \int -\cos(x)\ dx = -x\cos(x) + \sin(x) + C.

Could I see one more?

Sure, here is a less obvious example. Consider \int \ln(x)\ dx.

Wait, there is no product of functions.

There is a product; it is just a bit silly. You see \ln(x) = \ln(x)\times 1. Yes, it is one of those kinds of tricks. Now, I know that 1 is the derivative of x and so I can use the integration by parts formula with u = \ln(x) and v = x. This gives:

\int \ln(x)\ dx = x\ln(x) - \int x\ d(\ln(x)) = x \ln(x) - \int x \frac{1}{x}\ dx = x\ln(x) - \int dx = x\ln(x) - x + C.
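If you want to confirm either example by machine, sympy reproduces both answers (up to the constant C). This is just a sanity check of my own, not part of the method:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

ex1 = sp.integrate(x*sp.sin(x), x)   # the canonical first example
ex2 = sp.integrate(sp.log(x), x)     # the "silly product" ln(x) * 1
```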

How do I keep everything straight?

A very common bookkeeping measure is to make a little table including u, v, du and dv. For our first example, you would start with:

u = x \qquad dv = \sin(x)\ dx
du = ? \qquad v = ?

You then compute du = 1\ dx = dx and v = -\cos(x) to complete the table:

u = x \qquad dv = \sin(x)\ dx
du = dx \qquad v = -\cos(x)

You can then simply plug everything into the integration by parts formula.

This isn’t so bad. Why do you need multiple posts?

For those who don’t know the punchline, I won’t spoil it here. It suffices to say that there are some harder problems out there and there are some really efficient ways of handling these difficulties. Stay tuned!
