# As I was saying last time…

#### Wait, when was the last time you blogged?

Since my last post, I took a new position at California State University, Fullerton, got tenure, became vice-chair of the department, had to give up homeschooling my three boys, coached a tremendous amount of Little League, and ran my first marathon. I’ve also gone from being an enthusiastic (maybe even avid) gamer to a certified hobbyist (read “nutjob”). My gaming collection far exceeds 300 games, with some number of expansions in the triple digits. But you didn’t come here to read me bragging about my game collection—if you did, then you should come over and play some games with me. You came for the kind of insightful commentary you read over at dy/dan or Solvable by Radicals. Well, I might be a touch out of practice to reach those levels.

#### Why blog again?

I’m going to assume you aren’t asking that question as a way of passive-aggressively suggesting that you were quite happy with me not blogging and that you would prefer I had kept my keyboard shut. Instead, I am going to assume that you are genuinely curious as to why a successful (some would say amazing) professor and father, who has achieved all conceivable forms of academic and personal success, would feel the need to say anything to the math edu-blog-o-sphere (is that still a thing?). The answer is simple: a promise.

#### A little dramatic isn’t it?

Yeah, well, a big change requires something drastic, right? In this case, though, the something drastic was a not-so-chance meeting with Bret Benesh at the MAA MathFest in Cin City. For those of you who don’t follow the comment section of Bret’s blog, I’ve been a long-time reader and occasional responder to Solvable by Radicals. Over the years, we have communicated via blog comment section and even a few times by email, but we had never met in person. When I saw that Bret was planning to go to MathFest, I immediately commented on his blog that I would be going. Naturally, he didn’t see it before the conference, but amazingly he saw my name in the program and sought out my joint talk with colleague Matt Rathbun (I’ll describe the talk in my next post). After the talk he found me, and then I got to have a fun conversation over lunch with Bret and his super cool patient wife, along with Matt and another new friend and CSUF alum Michael Martinez. Recently, Bret started blogging again after a hiatus and the world is a much better place because of it. He encouraged me to start again, and, well, I promised that I would write a blog post on the airplane ride home. And you don’t break your promises to your heroes!

#### So, do you actually have anything to say?

Wouldn’t you like to know?

Hmmm…you called my bluff. Here’s the deal. I’ve done a lot over the past seven years and some of it might even be worth mentioning. Let’s keep this stuff compartmentalized, though. This blog post fulfills the promise. The next one will get into some real thoughts on—well, anything I find relevant. The good news for me is that I am clever enough to be able to connect my work to anything in which I’m interested, and not so clever as to realize when I’ve jumped the shark.

Fun fact: Students today are more likely to know the phrase “jump the shark” than to have any clue as to its etymology.

# Projecting onto Projections

The first time I saw the expression $\int_C \mathbf{F} \cdot \mathbf{n}\ ds$, I thought, “Why should that dot product be in there?” By the time I saw $\iint_S \mathbf{F} \cdot d\mathbf{S}$, I resigned myself to the fact that there was always a dot product in these seemingly random integrals. At some point, I decided that the dot products are in there to turn vectors (or vector fields) into scalar functions—which is something we know how to integrate. More recently, I’ve decided that the purpose of these dot products is to capture the projection of one vector onto the other.

For example, if I apply a force $\mathbf{F}$ to an object, then the work done by that force in moving the object a certain distance in a given direction (denote this shift by $\mathbf{v}$) is $\mathbf{F} \cdot \mathbf{v}$. If the force is not constant over some curve parametrized by $\mathbf{r}(t)$ ($a \leq t \leq b$), then we compute the work by evaluating the integral $\int_a^b \mathbf{F} \cdot \mathbf{r}'(t)\ dt$ since, at any given point, our $\mathbf{v}$ from above is just the tangent vector to the curve at that point, i.e., $\mathbf{r}'(t)$.
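For concreteness, here’s a quick sympy sanity check of that work integral; the force field and curve below are my own toy choices, not anything from a particular textbook problem:

```python
import sympy as sp

t = sp.symbols('t')

# Toy curve r(t) = <t, t^2> for 0 <= t <= 1, and force F(x, y) = <y, x>
r = sp.Matrix([t, t**2])
F = sp.Matrix([r[1], r[0]])  # F evaluated along the curve: <t^2, t>

# Work = integral from a to b of F(r(t)) . r'(t) dt
work = sp.integrate(F.dot(r.diff(t)), (t, 0, 1))
print(work)  # 1
```

Since $\mathbf{F} = \nabla(xy)$ here, the answer also matches $xy$ evaluated at the endpoints of the curve, which makes for a nice cross-check.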

If you understand multivariable calculus, then you are probably laughing at me. “Duh. Why did it take you so long to figure that out?”

Here is my answer: We (or maybe just I) improperly motivate the dot product. This semester, I’m using Stewart for Multivariable Calculus*. He introduces vectors in a way that seems fairly standard for math texts.

Definition: The dot product of $\langle x_1, \ldots, x_n \rangle$ and $\langle y_1, \ldots, y_n \rangle$ is $x_1y_1 + \cdots + x_ny_n$.

Theorem: If $\mathbf{a}$ and $\mathbf{b}$ are vectors with angle $\theta$ between them, then $\mathbf{a} \cdot \mathbf{b} = ||\mathbf{a}||\ ||\mathbf{b}||\cos(\theta)$.

The beauty here is that you can use the dot product to help compute angles and it is immediately obvious that the dot product of orthogonal vectors is $0$.
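To see the definition and the theorem agree numerically, here is a small check with two vectors chosen purely for convenience; the angle is computed from the vectors’ polar angles, so the dot product doesn’t sneak into its own verification:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([5.0, 12.0])

# Angle between a and b from their polar angles (no dot product used here)
theta = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])

componentwise = a @ b  # 3*5 + 4*12 = 63
geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)  # 5 * 13 * cos(theta)
print(componentwise, round(geometric, 10))  # both 63.0
```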

*This wasn’t my choice, but rather the choice of my department. Oh, did I mention I got a new job? Indeed I finally gave up on east coast living and moved back to California. I am now in the mathematics department at California State University Fullerton.

I’ve heard that in physics textbooks, they switch the order of the above, i.e., they define the dot product via the cosine formula and then prove the above definition as a theorem. As a mathematician, I always went with the first definition. Now, I am not so sure. What follows is the introduction to the dot product I plan to give to my students (until I come up with something better, anyway*).

*In the comments, please do set me straight about the real purpose of the dot product or how you think it best to introduce it in this context.

I am interested in how far $\mathbf{b}$ extends along $\mathbf{a}$, so I drop a line perpendicular to $\mathbf{a}$ from the end of $\mathbf{b}$.

At this point, I’m already confused by what would happen if I had tried to see how far $\mathbf{a}$ goes along $\mathbf{b}$, but I decide that I could simply extend $\mathbf{b}$ and at least draw the following picture:

Awesome, I have a couple of right triangles. And, heck, since they are right triangles that share the angle (let’s call it $\theta$) between $\mathbf{a}$ and $\mathbf{b}$, they are similar triangles. Let’s give some names to the important sides.

The comment about similar triangles implies that $\dfrac{h}{||\mathbf{b}||} = \dfrac{k}{||\mathbf{a}||}$. Ugh, let’s clear denominators to get $h||\mathbf{a}|| = k||\mathbf{b}||$. On the other hand, $\cos(\theta) = \dfrac{h}{||\mathbf{b}||}$, and so if we multiply by $||\mathbf{a}||\ ||\mathbf{b}||$, we get

$||\mathbf{a}||\ ||\mathbf{b}||\cos(\theta) = h||\mathbf{a}||$

The moral is that this important quantity—$h||\mathbf{a}|| = k||\mathbf{b}||$—coming from projecting the vectors onto each other, has a very simple reformulation as $||\mathbf{a}||\ ||\mathbf{b}||\cos(\theta)$ which only relies on knowing the original vectors and the angle between them. Since this projection property is so important to us physically, we give a short name to this expression: $\mathbf{a} \cdot \mathbf{b}$, and call it the dot product of $\mathbf{a}$ and $\mathbf{b}$.
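Here is a small numeric illustration of that moral; I’ve put $\mathbf{a}$ along the $x$-axis so that the shadow length $h$ can be read straight off the picture:

```python
import numpy as np

a = np.array([4.0, 0.0])  # along the x-axis, so projecting onto a is easy to see
b = np.array([3.0, 3.0])

# The perpendicular dropped from the tip of b hits the x-axis at x = 3,
# so the shadow of b along a has length h = 3 (read off geometrically)
h = b[0]

# The moral: h * ||a|| recovers the dot product a . b
print(h * np.linalg.norm(a), a @ b)  # both 12.0
```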

If $\mathbf{b}$ is orthogonal to $\mathbf{a}$, then the projection should be $0$, which of course it is since the cosine of $90^\circ$ is $0$.

At this point one can go about proving that the dot product is obtained directly from the components, i.e., without knowing the angle between the vectors. Of course, there is still the issue of when $\theta$ is obtuse, and it will probably be helpful to cover that case as well. Geometrically it will look a bit different, but the algebra and trig will be almost the same*.

*You do get to use the fact that the cosine of an angle equals the cosine of the supplementary angle!

There is nothing really new here, but I think the ordering is important. Students’ first impression of the dot product should convey its purpose, not just the easiest algorithm for computing it. As it stands, the projection of a vector onto another vector gets a somewhat token reference at the end of the dot product chapter. As ubiquitous as the idea is throughout the rest of the course, it deserves its time in the sun.

# The Usual Way Is Just Fine, Man.

(with Professor Glesser)

As I mentioned in this much-maligned post, “my all-time favorite differentiation technique is logarithmic differentiation.” In that post, I give examples of two types of problems where the technique proves useful. The second type—where a variable function is raised to a variable power—is handled with the SPEC rule (essentially the sum of the power rule and exponential rule, with the chain rule used as per normal). Here is the example I gave of a function of the first type.
$y = \sqrt[3]{\dfrac{(3x-2)^2\sqrt{2x^3+1}}{x^4(x-1)}}$
Typically, I show the students how to use logarithmic differentiation in order to compute the derivative of this type of function (see the post linked to above for the full derivation). However, this is not how I compute it myself!

# Story Time

Like most everybody who takes calculus, I learned the quotient rule for differentiation:

$\left(\dfrac{f}{g}\right)' = \dfrac{g \cdot f' - f \cdot g'}{g^2}$

Or, in song form (sung to the tune of Old McDonald):

Low d-high less high d-low
E-I-E-I-O
And on the bottom, the square of low
E-I-E-I-O
[Note that when sung incorrectly as High d-low less low d-high, the rhyme will not work!]

At some point, I was given an exercise to show that
$\left(\dfrac{f}{g}\right)' = \dfrac{f}{g}\left(\dfrac{f'}{f} - \dfrac{g'}{g}\right).$
If you start from this reformulation, it is a simple matter of algebra to get to the usual formulation of the quotient rule. However, a couple of things caught my eye. First, the reformulation seemed much easier to remember: copy the function and then write down the derivative of each function over the function and subtract them; the order is the “natural” one where the numerator comes first.

## Story Within A Story Time

Actually, there is a reasonably nice way to remember the order of the quotient rule, at least if you understand the meaning of the derivative. Assume that both the numerator and denominator are positive functions. If the numerator is increasing (i.e., $f' > 0$), then the quotient is getting bigger, so $f'$ should appear with a positive sign in front of it. Similarly, if the denominator is increasing (i.e., $g' > 0$), then the quotient is getting smaller, and so $g'$ should appear with a negative sign in front of it.

Secondly, the appearance of the original function in the answer screams: LOGARITHMIC DIFFERENTIATION. Let’s see why.

If $y = \dfrac{f}{g}$, then $\ln(y) = \ln\left(\dfrac{f}{g}\right) = \ln(f) - \ln(g)$. Differentiating both sides using the chain rule yields
$\dfrac{y'}{y} = \dfrac{f'}{f} - \dfrac{g'}{g},$
and so the result follows by multiplying both sides by $y$. This is one of my favorite exercises to give first year calculus students—before and after teaching them logarithmic differentiation*.

*Don’t you think that giving out the same problem at different times during the course is an underutilized tactic?
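If you would rather let a computer grind through the algebra, sympy confirms the reformulation; the particular $f$ and $g$ below are just sample choices:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)     # sample numerator
g = x**2 + 1      # sample (nonvanishing) denominator

usual = sp.diff(f / g, x)
reformulated = (f / g) * (sp.diff(f, x) / f - sp.diff(g, x) / g)

# The two expressions differ by zero
print(sp.simplify(usual - reformulated))  # 0
```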

Being a good math nerd, I had to take this further. What if the numerator and denominator are, themselves, a product of functions? Assume that $f = f_1 \cdot f_2 \cdots f_m$ and that $g = g_1 \cdot g_2 \cdots g_n$. Setting $y = \dfrac{f}{g}$, taking the natural logarithm of both sides, and applying log rules, we get:

$\ln(y) =\ln(f_1) + \ln(f_2) + \cdots + \ln(f_m) -\ln(g_1) - \ln(g_2) - \cdots - \ln(g_n).$

Differentiating (using the chain rule, as usual) gives:

$\dfrac{y'}{y} = \dfrac{f'_1}{f_1} + \dfrac{f'_2}{f_2} + \cdots + \dfrac{f'_m}{f_m} - \dfrac{g'_1}{g_1} - \dfrac{g'_2}{g_2} - \cdots - \dfrac{g'_n}{g_n}.$

Multiplying both sides by $y$ now gives us the formula:

$y' = \dfrac{f}{g}\left(\dfrac{f'_1}{f_1} + \dfrac{f'_2}{f_2} + \cdots + \dfrac{f'_m}{f_m} - \dfrac{g'_1}{g_1} - \dfrac{g'_2}{g_2} - \cdots -\dfrac{g'_n}{g_n}\right).$

An immediate example: differentiate $y = \dfrac{\sin(x)e^x}{(x+2)\ln(x)}$. The usual way would involve the quotient rule mixed with two applications of the product rule. The alternative is to simply apply the formula, working term by term to give:

$y' = \dfrac{\sin(x)e^x}{(x+2)\ln(x)}\left(\dfrac{\cos(x)}{\sin(x)} + \dfrac{e^x}{e^x} - \dfrac{1}{x+2} - \dfrac{1/x}{\ln(x)}\right),$

which immediately reveals some rather easy simplifications.
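As a sanity check, sympy agrees that the term-by-term answer matches its own derivative of this example:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.sin(x) * sp.exp(x) / ((x + 2) * sp.log(x))

# The term-by-term formula from the text (note exp(x)/exp(x) = 1)
formula = y * (sp.cos(x)/sp.sin(x) + 1 - 1/(x + 2) - (1/x)/sp.log(x))

print(sp.simplify(sp.diff(y, x) - formula))  # 0
```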

But we haven’t used all of the log rules yet! We haven’t used the power rule for logarithms, $\ln(x^a) = a\ln(x)$. So, let’s assume that each of our $f_i$’s and $g_j$’s has an exponent, call them $a_i$ and $b_j$, respectively. In this case, taking the natural logarithm of both sides and applying log rules, we get:

$\ln(y) = a_1\ln(f_1) + \cdots + a_m\ln(f_m) - b_1\ln(g_1) - \cdots - b_n\ln(g_n)$.

Differentiating, we get almost the same formula as above, but with some extra coefficients:

$y' = \dfrac{f}{g}\left(a_1\dfrac{f'_1}{f_1} + \cdots + a_m\dfrac{f'_m}{f_m} - b_1\dfrac{g'_1}{g_1} - \cdots - b_n\dfrac{g'_n}{g_n} \right).$

Look back to the example near the top of the post. If we rewrite it with exponents instead of roots, we get:

$y = \dfrac{(3x-2)^{2/3}(2x^3 + 1)^{1/6}}{x^{4/3}(x-1)^{1/3}}$.

Taking the derivative is now completely straightforward.

$y' = \dfrac{(3x-2)^{2/3}(2x^3 + 1)^{1/6}}{x^{4/3}(x-1)^{1/3}}\left(\dfrac{2}{3}\cdot\dfrac{3}{3x-2} + \dfrac{1}{6}\cdot\dfrac{6x^2}{2x^3+1} - \dfrac{4}{3}\cdot\dfrac{1}{x} - \dfrac{1}{3}\cdot \dfrac{1}{x-1}\right).$

Again, there is some simplifying to be done.
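A quick numeric spot check (at $x = 2$, where every base is positive) confirms the formula matches sympy’s derivative:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = (3*x - 2)**sp.Rational(2, 3) * (2*x**3 + 1)**sp.Rational(1, 6) \
    / (x**sp.Rational(4, 3) * (x - 1)**sp.Rational(1, 3))

formula = y * (sp.Rational(2, 3) * 3/(3*x - 2)
               + sp.Rational(1, 6) * 6*x**2/(2*x**3 + 1)
               - sp.Rational(4, 3) / x
               - sp.Rational(1, 3) / (x - 1))

# Compare against sympy's own derivative at a sample point
err = (sp.diff(y, x) - formula).subs(x, 2).evalf()
print(abs(err) < 1e-10)  # True
```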

An easier problem is one without a denominator! Let $y = \tan(2x)x^{3/4}(3x-1)^3$. Normally, one would use the product rule here, but why don’t we try our formula? It gives:

$y' = \tan(2x)x^{3/4}(3x-1)^3\left(\dfrac{2\sec^2(2x)}{\tan(2x)} + \dfrac{3}{4}\cdot \dfrac{1}{x} + 3\cdot\dfrac{3}{3x-1}\right).$

That was pretty painless, while the product rule becomes more tedious as the number of factors in the product increases.
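Same spot-check idea here; at $x = 1$ the formula and sympy’s product-rule derivative agree:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.tan(2*x) * x**sp.Rational(3, 4) * (3*x - 1)**3

# The term-by-term formula from the text: 3 * 3/(3x - 1) for the cube
formula = y * (2*sp.sec(2*x)**2/sp.tan(2*x)
               + sp.Rational(3, 4)/x
               + 3 * 3/(3*x - 1))

err = (sp.diff(y, x) - formula).subs(x, 1).evalf()
print(abs(err) < 1e-8)  # True
```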

Oh, and if you can’t imagine this being appropriate to teach to students, no less an authority than Richard Feynman encouraged his students to differentiate this way. At the very least, his support gives me the confidence to let you in on my little secret.

# In Defense of Shrtcts

Several times, I have written about handy shortcuts that bypass some of the tedium of calculation. The frequency with which readers derided these tricks or mnemonics surprised me. For instance, my post on a shortcut for logarithmic differentiation ended up being the topic of a Facebook debate on sound pedagogy. One of the participants claimed that he would never teach the SPEC rule since he would rather the students know how to use logarithmic differentiation. It seemed to me that this is equivalent to not teaching the power rule because you would rather the students know how to evaluate limits and to utilize the binomial theorem.

Let’s face it, anyone who disagrees with using shortcuts or mnemonics probably should add, “at least in addition to the shortcuts and mnemonics that are already in common use.” Our entire system of notation is designed (sometimes poorly) to make the meaning easier to remember and computations easier to perform, e.g., decimal notation versus Roman numerals. There is nothing wrong with observing that a long computation reveals a pattern or an invariant that allows for a more direct route to the answer; this is a process embraced by mathematicians (I’d love to know what percentage of math papers simply improve on the proof of a known result).

Am I wrong or is there a misconception that teaching a shortcut implies not teaching the reason behind the shortcut? When I was in ninth grade, I did a project on mental arithmetic. The teacher gave back my draft with a comment asking for justifications of the tricks I was using. I learned so much about algebra trying to complete that assignment, perhaps more than I would learn in an entire high school algebra course. Make learning the inner-workings a priority, and the shortcuts arise naturally.

The June 2012 issue of the Notices of the AMS contains a provocative article by Frank Quinn. Amongst other things, he stresses that work on an abstract and symbolic level is important. Of course, there are lots of ways of incorporating abstraction into a class. Wouldn’t it be doubly beneficial if the result of that work was a faster way to perform a calculation?

# 4 oz. > 100 mL

Most everyone is familiar with Santayana’s admonition that, “Those who do not learn from history are doomed to repeat it.” What is the analogous statement for mathematics? Are those who fail to learn mathematics doomed to work in food services? Doomed to play the lottery? Doomed to credit card debt and foreclosures on houses they can’t afford?

Actually, the answer is usually none of the above. In fact, most of us probably know fairly successful people whose background and skill in mathematics was as minimal as they could get away with. I know several scientists (mostly in biology) who are extremely good at what they do, but who would be terrified to sit through even our freshman level math courses. No, it does not seem that an individual’s lack of mathematical background will necessarily cost that person anything substantial. Ah, but what of a nation or a world?

What if a society fails to learn mathematics? I don’t mean that all individuals fail to learn, just a number so overwhelming that they permeate the government, regulatory agencies, businesses, and schools. What if a sufficient fraction of the population learns to think intuitively, rather than critically? What if enough people agree that what feels right, is right? What if the system of checks and balances fails because the counterweights just can’t keep up? What if people start to believe that these rocks are what keep the tigers away?

I am writing this as I live the answer to these questions. To which circle of hell am I referring? Circle 9er: Airport security.

I know: It is such an easy target for scrutiny, and yet without a doubt the people who work in airport security are an honest group who are doing their job and who likely sympathize with many of the travelers inconvenienced by policies they never asked for. Am I annoyed that the security officer here at Heathrow just confiscated from me the half-full 4 oz. bottle of contact lens solution—which passed through American security without notice—because 4 oz. is 118 mL and the British limit is 100 mL?* No. I am annoyed that many people in power in both (all?) countries believe that such trivial differences matter.

*Technically, they should have confiscated it in the United States, since the U.S. matches Britain with a 3.4 oz. (roughly 100 mL) limit.

How should we decide the appropriate level of security at our airports? Should experts come up with a list of reasonable ways a terrorist might attempt to take over or destroy an airplane, and then enact sufficient security measures to make those avenues of destruction prohibitively difficult? It sounds pretty good. It feels like the right solution.

Of course, if you’re one of those anal mathematicians, then you might start questioning the definitions of ‘reasonable ways’ and ‘prohibitively difficult’. At that point, it might occur to you that probability and statistics are at play here, and that these are necessary to consider before deciding upon a course of action. But those of us without a Ph.D. in pointdextery know that probability and statistics is just a smart person’s attempt to get around the immutable law that either the plane crashes or it doesn’t; you either stop the terrorist or you don’t. Never mind the regular reports of journalists sneaking weapons or TSA agents sneaking people through security. We’d all rather be alive with a little less liberty (and contact solution) than free and dead at the bottom of the Atlantic, right?

So, I guess this is my answer: The cost of a society failing to learn mathematics is giving up some of its liberty for, well, the appearance of security? But hey, at least those badges the TSA officers wear now are keeping the tigers away.

[Update: This post was retroactively inspired (that is, I read it after I wrote this) by an article of Keith Devlin.]

Final exams are coming, and my four-year-old son, Jonathan, has a message for my students.

Happy Finals, everybody!

# Hey, I wrote a book review!

The following is a book review written this summer for the Center for Teaching Excellence at Suffolk University. The shortness of the review is not a function of the content of the book, but rather the medium (a newsletter).

Creating Significant Learning Experiences:
An Integrated Approach to Designing College Courses
L. Dee Fink

Fink argues that a new paradigm is emerging in college teaching, one that encourages a focus on activities that produce significant learning experiences, valuing the quality of learning over the quantity of content coverage. In order to frame the discussion, he defines a Taxonomy of Significant Learning consisting of three categories that essentially mimic Bloom’s taxonomy of educational objectives:

• Foundational Knowledge
• Application
• Integration

and three categories that go beyond Bloom:

• Human Dimension (“students learn something important about themselves or about others” (p. 31))
• Caring (about the subject, phenomena, ideas, their own self, others, the process of learning, etc. (p. 49))
• Learning How to Learn

Fairly little attention is given by faculty to the latter three in the course design process, although I suspect that when pressed, most professors would espouse these as goals of their courses. In the sciences, I see some of these categories as long-term goals, built up through the entire curriculum and difficult to foster in a single course. This suggests that we need a concerted effort to consider these values collectively, not merely in isolation.

The heart of this book is the two chapters on course design. My teaching mimics that of my own teachers and so, like them, I am a member of content-aholics anonymous: the group of professors ashamed that their courses are creatively designed to include as much content as possible. Conveniently, Fink offers up a 12-step plan for designing a course. Although few of his suggestions are innovative, many of them will make you say, “That makes so much sense! Why haven’t I been doing that?”

My only complaint is the relative lack of attention given to the grading system and its role in fostering significant learning. While the author accepts the need for “the development of a feedback and assessment system that goes beyond just grading and contributes to the learning process” (p. 142), he gives an example of a grading system that is “fair and educationally valid”, but which reduces the course, for many students, to the calculus of point grubbing.

The title sets the bar: the book is a failure if reading it is not itself a significant learning experience. Fortunately, the author succeeds in the ultimate accomplishment in pedagogical writing: he made me put the book down at times, frantic to work on designing one of my courses.