For example, if I apply a constant force $\mathbf{F}$ to an object, then the work done by that force in moving the object a certain distance in a given direction (denote this shift by $\mathbf{d}$) is $W = \mathbf{F} \cdot \mathbf{d}$. If the force is not constant over some curve parametrized by $\mathbf{r}(t)$, then we compute the work by evaluating the integral $W = \int_C \mathbf{F} \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\,dt$ since, at any given point, our $\mathbf{d}$ from above is just the tangent vector to the curve at that point, i.e., $d\mathbf{r} = \mathbf{r}'(t)\,dt$.
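For the numerically inclined, here is a quick sanity check of the work integral in Python. The force field $\mathbf{F}(x,y) = (y, x)$ and the quarter-circle path $\mathbf{r}(t) = (\cos t, \sin t)$ are made-up illustrations of mine, not examples from the text.

```python
import numpy as np

# Work along a curve: W = integral of F(r(t)) . r'(t) dt.
# F(x, y) = (y, x) is conservative (F = grad(xy)), and xy takes the
# value 0 at both endpoints of the quarter circle, so W should be ~0.
def F(x, y):
    return np.array([y, x])

t = np.linspace(0.0, np.pi / 2, 10001)
r = np.array([np.cos(t), np.sin(t)])     # r(t)
dr = np.array([-np.sin(t), np.cos(t)])   # r'(t)

# dot product F(r(t)) . r'(t) at each sample point
integrand = np.sum(F(r[0], r[1]) * dr, axis=0)

# trapezoid-rule approximation of the integral
W = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print(W)  # ~ 0
```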

If you understand multivariable calculus, then you are probably laughing at me. “Duh. Why did it take you so long to figure that out?”

Here is my answer: We (or maybe just I) improperly motivate the dot product. This semester, I’m using Stewart for Multivariable Calculus*. He introduces vectors in a way that seems fairly standard for math texts.

**Definition:** The dot product of $\mathbf{a} = \langle a_1, a_2, a_3 \rangle$ and $\mathbf{b} = \langle b_1, b_2, b_3 \rangle$ is $\mathbf{a} \cdot \mathbf{b} = a_1b_1 + a_2b_2 + a_3b_3$.

The great thing about this definition is that it is bloody easy to compute and understand.

**Theorem:** If $\mathbf{a}$ and $\mathbf{b}$ are vectors with angle $\theta$ between them, then $\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos\theta$.

The beauty here is that you can use the dot product to help compute angles, and it is immediately obvious that the dot product of orthogonal vectors is $0$.
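To see the theorem earn its keep, here is a small Python check; the specific vectors are arbitrary choices of mine.

```python
import numpy as np

# Recover the angle between two vectors from the theorem
# a . b = |a||b| cos(theta).
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))
print(theta)  # ~ 45 degrees

# And the dot product of orthogonal vectors really is 0:
print(np.dot(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0
```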

**This wasn’t my choice, but rather the choice of my department. Oh, did I mention I got a new job? Indeed I finally gave up on east coast living and moved back to California. I am now in the mathematics department at California State University Fullerton.*

I’ve heard that in physics textbooks, they switch the order of the above, i.e., they define the dot product via the cosine formula and then prove the above definition as a theorem. As a mathematician, I always went with the first definition. Now, I am not so sure. What follows is the introduction to the dot product I plan to give to my students (until I come up with something better, anyway*).

**In the comments, please do set me straight about the real purpose of the dot product or how you think it best to introduce it in this context.*

Let’s start with two vectors, $\mathbf{a}$ and $\mathbf{b}$, joined at their tails.

I am interested in how far $\mathbf{b}$ extends along $\mathbf{a}$, so I drop a line perpendicular to $\mathbf{a}$ from the end of $\mathbf{b}$.

At this point, I’m already confused by what would happen if I had tried to see how far $\mathbf{a}$ goes along $\mathbf{b}$, but I decide that I could simply extend $\mathbf{b}$ and at least draw the following picture:

Awesome, I have a couple of right triangles. And, heck, since they are right triangles that share the angle (let’s call it $\theta$) between $\mathbf{a}$ and $\mathbf{b}$, they are similar triangles. Let’s give some names to the important sides: say $p$ is the length of the projection of $\mathbf{b}$ onto $\mathbf{a}$, and $q$ is the length of the projection of $\mathbf{a}$ onto the extension of $\mathbf{b}$.

The comment about similar triangles implies that $\frac{p}{|\mathbf{b}|} = \frac{q}{|\mathbf{a}|}$. Ugh, let’s clear denominators to get $|\mathbf{a}|\,p = |\mathbf{b}|\,q$. On the other hand, $\cos\theta = \frac{p}{|\mathbf{b}|}$, and so if we multiply by $|\mathbf{a}||\mathbf{b}|$, we get $|\mathbf{a}|\,p = |\mathbf{b}|\,q = |\mathbf{a}||\mathbf{b}|\cos\theta$.

The moral is that this important quantity—the length of one vector times the length of the other’s projection onto it—coming from projecting the vectors onto each other, has a very simple reformulation as $|\mathbf{a}||\mathbf{b}|\cos\theta$, which only relies on knowing the lengths of the original vectors and the angle between them. Since this projection property is so important to us physically, we give a short name to this expression: $\mathbf{a} \cdot \mathbf{b}$, and call it the **dot product** of $\mathbf{a}$ and $\mathbf{b}$.

If $\mathbf{a}$ is orthogonal to $\mathbf{b}$, then the projection should be $0$, which of course it is since the cosine of $90^\circ$ is $0$.
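The projection story is easy to check numerically as well; the vectors below are my own illustrative picks.

```python
import numpy as np

# Scalar projection of b onto a: |b| cos(theta) = (a . b) / |a|.
a = np.array([3.0, 0.0])
b = np.array([2.0, 2.0])

proj = np.dot(a, b) / np.linalg.norm(a)
print(proj)  # 2.0 -- b extends 2 units along a

# Orthogonal case: the projection vanishes.
c = np.array([0.0, 5.0])
print(np.dot(c, a) / np.linalg.norm(a))  # 0.0
```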

At this point one can go about proving that the dot product is obtained directly from the components, i.e., without knowing the angle between them. Of course, there is still the issue of when $\theta$ is obtuse, and it will probably be helpful to cover that case as well. Geometrically it will look a bit different, but the algebra and trig will be almost the same*.

**You do get to use the fact that the cosine of an angle equals the cosine of the supplementary angle!*

There is nothing really new here, but I think the ordering is important. Students’ first impression of the dot product should convey the purpose of the dot product, not just the easiest algorithm for computing it. As it stands, the projection of a vector onto another vector gets a somewhat token reference at the end of the dot product chapter. As ubiquitous as the idea is throughout the rest of the class, it deserves its time in the sun.

(with Professor Glesser)

As I mentioned in this much-maligned post, “my all-time favorite differentiation technique is logarithmic differentiation.” In that post, I give examples of two types of problems where the technique proves useful. The second type—where a variable function is raised to a variable power—is handled with the SPEC rule (essentially the sum of the power rule and exponential rule, with the chain rule used as per normal). Here is the example I gave of a function of the first type.

Typically, I show the students how to use logarithmic differentiation in order to compute the derivative of this type of function (see the post linked to above for the full derivation). However, this is not how I compute it myself!

Story Time

Like most everybody who takes calculus, I learned the quotient rule for differentiation:

$$\left(\frac{f}{g}\right)' = \frac{g\,f' - f\,g'}{g^2}$$

Or, in song form (sung to the tune of *Old MacDonald*):

*Low d-high less high d-low*
*E-I-E-I-O*

[Note that when sung incorrectly as *high d-low less low d-high*, the numerator comes out with the wrong sign.]

At some point, I was given an exercise to show that

$$\left(\frac{f}{g}\right)' = \frac{f}{g}\left(\frac{f'}{f} - \frac{g'}{g}\right).$$

If you start from this reformulation, it is a simple matter of algebra to get to the usual formulation of the quotient rule. However, a couple of things caught my eye. First, the reformulation seemed much easier to remember: copy the function and then write down the derivative of each function over the function and subtract them; the order is the “natural” one where the numerator comes first.
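For the skeptical, sympy will happily confirm that the two forms agree; this is just a symbolic sketch with generic functions $f$ and $g$ of my choosing (taken positive so the logarithms later in the post make sense).

```python
import sympy as sp

# Check that the reformulated quotient rule matches the usual one.
x = sp.symbols("x", positive=True)
f = sp.Function("f", positive=True)(x)
g = sp.Function("g", positive=True)(x)

usual = (g * sp.diff(f, x) - f * sp.diff(g, x)) / g**2   # low d-high less high d-low, over low squared
reform = (f / g) * (sp.diff(f, x) / f - sp.diff(g, x) / g)

print(sp.simplify(usual - reform))  # 0
```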

Actually, there *is* a reasonably nice way to remember the order of the quotient rule, at least if you understand the meaning of the derivative. Assume that both the numerator and denominator are positive functions. If the numerator is increasing (its derivative is positive), then the quotient is getting bigger, and so the derivative of the numerator should appear with a positive sign in front of it. Similarly, if the denominator is increasing, then the quotient is getting smaller, and so the derivative of the denominator should appear with a negative sign in front of it.

Second, the appearance of the original function in the answer screams: **LOGARITHMIC DIFFERENTIATION**. Let’s see why.

If $y = \frac{f}{g}$, then $\ln y = \ln f - \ln g$. Differentiating both sides using the chain rule yields

$$\frac{y'}{y} = \frac{f'}{f} - \frac{g'}{g},$$

and so the result follows by multiplying both sides by $y = \frac{f}{g}$. This is one of my favorite exercises to give first year calculus students—before and after teaching them logarithmic differentiation*.

**Don’t you think that giving out the same problem at different times during the course is an underutilized tactic?*

Being a good math nerd, I had to take this further. What if the numerator and denominator are, themselves, a product of functions? Assume that $f = f_1 f_2 \cdots f_m$ and that $g = g_1 g_2 \cdots g_n$. Setting $y = \frac{f}{g}$, taking the natural logarithm of both sides, and applying log rules, we get:

$$\ln y = \ln f_1 + \cdots + \ln f_m - \ln g_1 - \cdots - \ln g_n$$

Differentiating (using the chain rule, as usual) gives:

$$\frac{y'}{y} = \frac{f_1'}{f_1} + \cdots + \frac{f_m'}{f_m} - \frac{g_1'}{g_1} - \cdots - \frac{g_n'}{g_n}$$

Multiplying both sides by $y$ now gives us the formula:

$$y' = \frac{f_1 \cdots f_m}{g_1 \cdots g_n}\left(\frac{f_1'}{f_1} + \cdots + \frac{f_m'}{f_m} - \frac{g_1'}{g_1} - \cdots - \frac{g_n'}{g_n}\right)$$
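Here is a concrete sympy check of this formula on a quotient of products; the four factor functions are arbitrary choices of mine.

```python
import sympy as sp

# Verify y' = y * (sum of f_i'/f_i  -  sum of g_j'/g_j)
# on a quotient of two products.
x = sp.symbols("x", positive=True)
f1, f2 = sp.sin(x) + 2, x**2 + 1   # numerator factors
g1, g2 = sp.exp(x), x + 3          # denominator factors

y = (f1 * f2) / (g1 * g2)
formula = y * (sp.diff(f1, x) / f1 + sp.diff(f2, x) / f2
               - sp.diff(g1, x) / g1 - sp.diff(g2, x) / g2)

print(sp.simplify(formula - sp.diff(y, x)))  # 0
```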

An immediate example of using this is as follows. Differentiate . The usual way would involve the quotient rule mixed with two applications of the product rule. The alternative is to simply rewrite the function, and to work term by term giving:

which immediately reveals some rather easy simplifications.

But we haven’t used all of the log rules yet! We haven’t used the exponential law. So, let’s assume that each of our $f_i$ and $g_j$ has an exponent, call them $a_i$ and $b_j$, respectively. In this case, using logarithmic differentiation, we get:

$$\ln y = a_1 \ln f_1 + \cdots + a_m \ln f_m - b_1 \ln g_1 - \cdots - b_n \ln g_n.$$

Differentiating, we get almost the same formula as above, but with some extra coefficients:

$$y' = y\left(a_1\frac{f_1'}{f_1} + \cdots + a_m\frac{f_m'}{f_m} - b_1\frac{g_1'}{g_1} - \cdots - b_n\frac{g_n'}{g_n}\right)$$

Look back to the example near the top of the post. If we rewrite it with exponents instead of roots, we get:

.

Taking the derivative is now completely straightforward.

Again, there is some simplifying to be done.

An easier problem is one without a denominator! Let . Normally, one would use the product rule here, but why don’t we try our formula? It gives:

That was pretty painless, while the product rule becomes more tedious as the number of factors in the product increases.

Oh, and if you can’t imagine this being appropriate to teach to students, no less an authority than Richard Feynman encouraged his students to differentiate this way. At the very least, his support gives me the confidence to let you in on my little secret.

Let’s face it, anyone who disagrees with using shortcuts or mnemonics probably should add, “at least in addition to the shortcuts and mnemonics that are already in common use.” Our entire system of notation is designed (sometimes poorly) to make the meaning easier to remember and computations easier to perform, e.g., decimal notation versus Roman numerals. There is nothing wrong with observing that a long computation reveals a pattern or an invariant that allows for a more direct route to the answer; this is a process embraced by mathematicians (I’d love to know what percentage of math papers simply improve on the proof of a known result).

Am I wrong or is there a misconception that teaching a shortcut implies not teaching the reason behind the shortcut? When I was in ninth grade, I did a project on mental arithmetic. The teacher gave back my draft with a comment asking for justifications of the tricks I was using. I learned so much about algebra trying to complete that assignment, perhaps more than I would learn in an entire high school algebra course. *Make learning the inner workings a priority, and the shortcuts arise naturally.*

The June 2012 issue of the Notices of the AMS contains a provocative article by Frank Quinn. Amongst other things, he stresses that work on an abstract and symbolic level is important. Of course, there are lots of ways of incorporating abstraction into a class. Wouldn’t it be doubly beneficial if the result of that work was a faster way to perform a calculation?

Actually, the answer is usually none of the above. In fact, most of us probably know fairly successful people whose background and skill in mathematics were as minimal as they could get away with. I know several scientists (mostly in biology) who are extremely good at what they do, but who would be terrified to sit through even our freshman level math courses. No, it does not seem that an individual’s lack of mathematical background will necessarily cost that person anything substantial. Ah, but what of a nation or a world?

What happens if a society fails to learn mathematics? I don’t mean that all individuals fail to learn, just a number so overwhelming that they permeate the government, regulatory agencies, businesses, and schools. What if a sufficient fraction of the population learns to think intuitively, rather than critically? What if enough people agree that what feels right, is right? What if the system of checks and balances fails because the counterweights just can’t keep up? What if people start to believe that these rocks are what keep the tigers away?

I am writing this as I live the answer to these questions. To which circle of hell am I referring? Circle 9er: Airport security.

I know: It is such an easy target for scrutiny, and yet without a doubt the people who work in airport security are an honest group who are doing their job and who likely sympathize with many of the travelers inconvenienced by policies they never asked for. Am I annoyed that the security officer here at Heathrow just confiscated from me the half-full 4oz. bottle of contact lens solution—which passed through American security without notice—because 4oz. is 118mL and the British limit is 100mL?* No. I am annoyed that many people in power in both (all?) countries believe that such trivial differences matter.

**Technically, they should have confiscated it in the United States since they match Britain with a 3.4oz (roughly 100mL) limit. *

How should we decide the appropriate level of security at our airports? Should experts come up with a list of reasonable ways a terrorist might attempt to take over or destroy an airplane, and then enact sufficient security measures to make those avenues of destruction prohibitively difficult? It sounds pretty good. It feels like the right solution.

Of course, if you’re one of those anal mathematicians, then you might start questioning the definition of “reasonable ways” and “prohibitively difficult”. At that point, it might occur to you that probability and statistics are at play here, and that these are necessary to consider before deciding upon a course of action. But those of us without a Ph.D. in poindextery know that probability and statistics are just a smart person’s attempt to get around the immutable law that either the plane crashes or it doesn’t; you either stop the terrorist or you don’t. Never mind the regular reports of journalists sneaking weapons or TSA agents sneaking people through security. We’d all rather be alive with a little less liberty (and contact solution) than free and dead at the bottom of the Atlantic, right?

So, I guess this is my answer: The cost of a society failing to learn mathematics is giving up some of its liberty for, well, the appearance of security? But hey, at least those badges the TSA officers wear now are keeping the tigers away.

[Update: This post was retroactively inspired (that is, I read it after I wrote this) by an article of Keith Devlin.]

Happy Finals, everybody!

*Creating Significant Learning Experiences:
An Integrated Approach to Designing College Courses
L. Dee Fink
Copyright © 2003 by John Wiley & Sons, Inc.*

Reviewed by Adam Glesser

Fink argues that a new paradigm is emerging in college teaching, one that encourages a focus on activities that produce significant learning experiences, valuing the quality of learning over the quantity of content coverage. In order to frame the discussion, he defines a Taxonomy of Significant Learning consisting of three categories that essentially mimic Bloom’s taxonomy of educational objectives:

- Foundational Knowledge
- Application
- Integration

and three categories that go beyond Bloom:

- Human Dimension (“students learn something important about themselves or about others” (p. 31))
- Caring (about the subject, phenomena, ideas, their own self, others, the process of learning, etc. (p. 49))
- Learning How to Learn

Fairly little attention is given by faculty to the latter three in the course design process, although I suspect that when pressed, most professors would espouse these as goals of their courses. In the sciences, I see some of these categories as long-term goals, built up through the entire curriculum and difficult to foster in a single course. This suggests that we need a concerted effort to consider these values collectively, not merely in isolation.

The heart of this book is the two chapters on course design. My teaching mimics that of my own teachers and so, like them, I am a member of content-aholics anonymous: the group of professors ashamed that their courses are creatively designed to include as much content as possible. Conveniently, Fink offers up a 12-step plan for designing a course. Although few of his suggestions are innovative, many of them will make you say, “That makes so much sense! Why haven’t I been doing that?”

My only complaint is the relative lack of attention given to the grading system and its role in fostering significant learning. While the author accepts the need for “the development of a feedback and assessment system that goes beyond just grading and contributes to the learning process” (p. 142), he gives an example of a grading system that is “fair and educationally valid”, but which reduces the course, for many students, to the calculus of point grubbing.

The title sets the bar: the book is a failure if reading it is not itself a significant learning experience. Fortunately, the author succeeds in the ultimate accomplishment in pedagogical writing: he made me put the book down at times, frantic to work on designing one of my courses.


(with Professor Glesser)

My all-time favorite differentiation technique is logarithmic differentiation. The implementation is right there in the name: take a logarithm and then differentiate. If you are a pro with your log rules, you will understand why this would be useful.

There are two canonical types of functions where this technique is often used in standard calculus courses. The first is where you have a product and/or quotient of functions, potentially raised to a rational power. For example:

If you apply the natural log to both sides—we choose base $e$ so as to avoid unnecessary constants when differentiating (recall that when $u$ is a function of $x$, $\frac{d}{dx}\ln u = \frac{u'}{u}$)—then we can deconstruct the right-hand side using the log rules to get:

.

Differentiating both sides is now a snap:

.

Multiplying both sides by $y$—which is the original function—gives us the derivative.

The second example is one that gives students no trouble at all, but gives teachers fits. The simplest such example is

$$y = x^x.$$

Three-quarters of the class knows that you use the power rule to get

$$y' = x \cdot x^{x-1}.$$

The remaining group of students will point out that the power rule only works with a constant exponent, so instead you need to use the exponential rule which gives

$$y' = x^x \ln x.$$

Of course, the teacher is squirming right now because they know the exponential rule only works when you have a constant base. In fact, neither rule is correct! However, in a way, they are both half-right. Applying the natural log to our original equation gives

$$\ln y = x \ln x.$$

Differentiating—using the product rule on the right—gives

$$\frac{y'}{y} = \ln x + 1.$$

Multiplying both sides by $y = x^x$ now gives

$$y' = x^x(\ln x + 1) = x \cdot x^{x-1} + x^x \ln x,$$ the sum of the two incorrect answers.

Using logarithmic differentiation on functions of the form $f(x)^{g(x)}$, we can get a general rule which is not particularly well-known:

If $y = f(x)^{g(x)}$ is differentiable, then $y' = g\,f^{g-1}\,f' + f^g \ln(f)\,g'$, i.e., evaluate the derivative using the power and chain rules, then with the exponential and chain rules, and finally add the two incorrect answers together.
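A quick symbolic check of the rule on $y = x^x$ (sympy, with $x$ assumed positive so the powers behave):

```python
import sympy as sp

# SPEC-rule check on y = x^x: the power-rule attempt plus the
# exponential-rule attempt equals the true derivative.
x = sp.symbols("x", positive=True)
y = x**x

power_attempt = x * x**(x - 1)    # pretend the exponent is constant
exp_attempt = x**x * sp.ln(x)     # pretend the base is constant

print(sp.simplify(power_attempt + exp_attempt - sp.diff(y, x)))  # 0
```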

In short,

$$\frac{d}{dx}\,f^g = \underbrace{g\,f^{g-1}\,f'}_{\text{power rule}} + \underbrace{f^g \ln(f)\,g'}_{\text{exponential rule}}.$$

On my midterm, I asked the students to compute the derivative of . The SPEC rule makes this a piece of cake: the power rule gives and the exponential rule gives . Adding them together gives:

.

When coming up with this, I thought that the perfect title would be “Two Wrongs Make a Right.” Unfortunately, this was already taken by the authors of the paper of nearly the same name (which has to be the most obvious title for a paper—ever). They don’t give a name to this rule, so in honor of my anime-loving friends, I stick with “SPEC rule” for the moniker.

One of the faculty asked me if students will avoid logarithmic differentiation now. For the first type of problem: absolutely not—the SPEC rule doesn’t really apply. For the second type: I hope so—logarithmic differentiation is useful because it simplifies calculations; why not use a trick to simplify it even more?

Spring 2011 Multivariable Calculus Standards List

Let me admit something, here, in between two documents about teaching this course—where it is less likely to be read—now for the third time: I’m a fraud. That’s right, I’m a fake, a charlatan, an impostor. I’ve created a counterfeit course and hustle the students with a dash of hocus-pocus and a sprinkle of hoodwinking. It is only through mathematical guile that my misrepresentations, chicanery and flim-flam go unnoticed. In short, and in the passing Christmas spirit, I am a humbug. This is a physics course. It should be taught by someone proficient in physics, someone with honed intuition about the geometry of abstract mathematical notions like div, grad, curl and all that, someone who sees everything as an application of Stokes’ theorem and has strong feelings about whether it should be written Stokes’ theorem or Stokes’s theorem. About the only thing I bring to the table is that I can teach students to remember that:

$$\operatorname{curl} \mathbf{F} = \nabla \times \mathbf{F}$$

and

$$\operatorname{div} \mathbf{F} = \nabla \cdot \mathbf{F}$$

because curl and cross both start with c, while div and dot both start with d.
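(For anyone who wants to poke at the mnemonic, sympy’s vector module computes both; the field below is my own toy example.)

```python
from sympy.vector import CoordSys3D, curl, divergence

# curl F = del cross F, div F = del dot F, on the field F = (y, z, x).
N = CoordSys3D("N")
F = N.y * N.i + N.z * N.j + N.x * N.k

print(curl(F))        # the constant field (-1, -1, -1)
print(divergence(F))  # 0
```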

Here is the calendar for the course. After it, I’ll explain a little bit of what I’m trying.

Spring 2011 Multivariable Calculus Calendar

There are several big differences here from how I’ve taught this course in the past. First, I am going to try with all my might to get to Stokes’ theorem *before* the last week. Part of the way I plan to do this is, similar to my calculus class, to cut out most of the stuff on limits and continuity that I usually get bogged down on in the first couple of weeks—am I the only person who finds interesting the pathological examples that make Clairaut’s theorem necessary? I get to teach an extra hour a week to a subset of the class and that stuff will fit perfectly in there. For the science majors, I’m more interested in helping them figure out how to use this stuff and how to develop intuition. Second, I’m skipping Green’s theorem until the end. Yes, it changes the story I normally tell, one that progresses so nicely up the dimension chart, but the trade-off is that I get more time to show them Stokes’ theorem and more time to focus on the physical interpretation.

Speaking of interpretation, you will notice in the calendar eleven or so ‘Group Activities’. These are stolen from an excellent guide produced by Dray and Manogue at Oregon State as part of their Bridge Project. To work within their framework, I’ve made another structural change that I’d never considered given how I think about the subject. Immediately after finishing triple integration (which, essentially, finishes the first half of the course), we start with vectors (I never start with vectors as most calculus books do) and then I want to get to line integrals and surface integrals as fast as possible. Normally, I mess around with div and curl before getting to integration of vector fields. Instead, I’m going to push out the Divergence theorem—the theorem I always cover in the last 45 minutes of the course—and use this to motivate the definition of div. Then I’ll push out Stokes’ theorem and use this to help motivate the definition of curl. This ought to give me two solid weeks to explore the physical meaning of these theorems as well as to use them to prove some of the standard cool corollaries (like Green’s theorem).

This class will also be the first of my SBG courses to incorporate a final project. If anyone has good suggestions based on experience about how best to incorporate projects into the SBGrading scheme, I would live to hear them. My current system is quite simplistic. The standards for the course are given a 90% weighting for the overall grade—did I mention that midterms and finals now are simply extended assessments whose grades are treated like an arbitrary quiz, just with a *lot* more standards tested?—and 10% weighting for the project.