# Tricks of the Trade

(with Professor Glesser)

Parentheses in mathematics never fail to impress me. Take, for instance, the *freshman dream*:

$$(x + y)^2 = x^2 + y^2.$$

**FAIL!**

Not only do the parentheses matter, but in a nontrivial way. Another of my favorite examples is the difference between $\sin^2(x)$ and $\sin(x^2)$. Just a teeny little difference that makes all the difference in the world. You see, the first function is always greater than or equal to $0$. Here are their graphs:

Outside of a small interval around $0$, they aren't even close. Of course, this suggests that if you integrate them, you expect to get wildly different answers (there are some exceptions to this for special choices of the upper limit). Ah, but there is a little problem when you try to integrate, isn't there? You can probably handle (possibly with a great deal of effort) finding an antiderivative for $\sin^2(x)$, but more about that in a bit. Rather oddly, there is no elementary antiderivative for $\sin(x^2)$. Integrating it from $0$ to $x$ gives an example of a Fresnel integral, but already this is beyond what most of my students want to hear in calculus. So, let's talk about what we can actually do.

**C’mon, Get to the Trick**

I've seen two reasonable ways to find the antiderivative of $\sin^2(x)$: integration by parts and trig identities. The former method is actually used twice along with a little trick (I'll get back to this later in the summer when I have a four-part series on integration by parts), while the latter requires you to remember how to convert products of sines into the cosine of a sum. I tend to use the former since I don't have to remember anything, but the latter is probably a bit easier.

Using either method, we get $\int \sin^2(x)\,dx = \frac{x}{2} - \frac{\sin(2x)}{4} + C$. Now, say that we want to compute $\int_0^{2\pi} \sin^2(x)\,dx$ (this is a rather common integral that seems to show up quite a bit in integral calculus and, especially, in multivariable calculus when you start doing coordinate changes). Using the fundamental theorem of calculus, we have:

$$\int_0^{2\pi} \sin^2(x)\,dx = \left[\frac{x}{2} - \frac{\sin(2x)}{4}\right]_0^{2\pi} = \frac{2\pi}{2} - \frac{\sin(4\pi)}{4} - \left(\frac{0}{2} - \frac{\sin(0)}{4}\right),$$

and after realizing all the terms but one are $0$, we see that the integral evaluates to $\pi$.
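For readers who like to double-check by machine, here is a quick sketch using sympy (assumed available, and not part of the original post); it recovers the antiderivative and the value $\pi$:

```python
# Symbolic check of the antiderivative of sin^2(x) and its definite
# integral over [0, 2*pi]; sympy is assumed to be installed.
import sympy as sp

x = sp.symbols('x')

# sympy returns the antiderivative as x/2 - sin(x)*cos(x)/2, which is
# equivalent to x/2 - sin(2x)/4 by the double-angle formula.
antiderivative = sp.integrate(sp.sin(x)**2, x)

# The definite integral via the fundamental theorem of calculus.
definite = sp.integrate(sp.sin(x)**2, (x, 0, 2*sp.pi))

print(antiderivative)
print(definite)  # pi
```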

**Some trick. I already knew how to do that.**

Here is the trick. Notice that $\sin(x)$ and $\cos(x)$ have nearly identical graphs on $[0, 2\pi]$; the only difference is a shift. This implies that if you integrate them on $[0, 2\pi]$, you should get the same answer, i.e., $\int_0^{2\pi} \sin(x)\,dx = \int_0^{2\pi} \cos(x)\,dx$. If we square both functions, the same result holds: $\int_0^{2\pi} \sin^2(x)\,dx = \int_0^{2\pi} \cos^2(x)\,dx$ (you had better convince yourself of this before moving on).

From here, we get

$$2\int_0^{2\pi} \sin^2(x)\,dx = \int_0^{2\pi} \sin^2(x)\,dx + \int_0^{2\pi} \cos^2(x)\,dx = \int_0^{2\pi} \left(\sin^2(x) + \cos^2(x)\right)dx.$$

Why did we clutter up our integrand? Because, of course, we didn't. The integrand is $1$ and hence the integral evaluates to the length of the interval, $2\pi$. In particular, $\int_0^{2\pi} \sin^2(x)\,dx = \pi$.
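The shift-and-square argument is easy to confirm symbolically. This little sympy sketch (sympy assumed available) checks that the two squared integrals agree and that their sum is the length of the interval:

```python
# Verify the symmetry trick: sin^2 and cos^2 integrate to the same value
# over [0, 2*pi], and their sum integrates to the interval length 2*pi.
import sympy as sp

x = sp.symbols('x')
I_sin2 = sp.integrate(sp.sin(x)**2, (x, 0, 2*sp.pi))
I_cos2 = sp.integrate(sp.cos(x)**2, (x, 0, 2*sp.pi))
I_sum = sp.integrate(sp.sin(x)**2 + sp.cos(x)**2, (x, 0, 2*sp.pi))

print(I_sin2, I_cos2)  # pi pi
print(I_sum)           # 2*pi
```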

Cool, no?

**But that is just one integral**

True, but it is an important one. False, because obviously we can use it to compute $\int_0^{2\pi} \cos^2(x)\,dx$. Okay, that is cheating a bit. But we can actually go a little further. First, we really didn't think hard enough about the last example. Consider the graph of $\sin^2(x)$ on $[0, 2\pi]$:

Notice that we get a full period of $\sin^2(x)$ on $[0, \pi]$. Therefore, the reason what we did above worked on $[0, 2\pi]$ is that it works on $[0, \pi]$ and we just repeated it. Instead of the observation about $\cos^2(x)$, we could also draw a rectangle with height $1$ and width $\pi$ and remark that the area under the graph makes up precisely half the area of the rectangle, i.e., $\int_0^{\pi} \sin^2(x)\,dx = \frac{\pi}{2}$. The graph makes it obvious that we could also look at only $[0, \pi/2]$.

Before you get too excited, though, this will not work on just any interval. We don't get $\int_0^1 \sin^2(x)\,dx = \frac{1}{2}$. Generally speaking, you want the endpoints of the interval to be integer multiples of $\frac{\pi}{2}$.
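A quick experiment (sympy assumed available) bears this out: upper limits at integer multiples of $\pi/2$ give exactly half the interval length, while the interval $[0, 1]$ does not:

```python
# The half-rectangle trick works when the endpoints are integer multiples
# of pi/2, but fails on an arbitrary interval such as [0, 1].
import sympy as sp

x = sp.symbols('x')
for k in range(1, 5):
    b = k * sp.pi / 2
    val = sp.integrate(sp.sin(x)**2, (x, 0, b))
    print(b, val)  # val is always exactly b/2

bad = sp.integrate(sp.sin(x)**2, (x, 0, 1))
print(bad)  # not 1/2: a leftover -sin(1)*cos(1)/2 term survives
```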

**Is that it?**

Not quite. Roger Nelsen, in a paper entitled *Symmetry and Integration*, described the following Putnam problem:

$$\text{Evaluate } \int_0^{\pi/2} \frac{dx}{1 + (\tan x)^{\sqrt{2}}}.$$

This problem is absolutely ridiculous. Wait—did I say ridiculous? I meant ridiculously easy!!! Consider the graph of $\frac{1}{1 + (\tan x)^{\sqrt{2}}}$:

It appears to have the same sort of symmetry as before. In fact, if we draw in a few lines and do some shading, we get:

With just a modicum of thought, we see that $\int_0^{\pi/2} \frac{dx}{1 + (\tan x)^{\sqrt{2}}} = \frac{\pi}{4}$.
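Since the integrand has no elementary antiderivative to check against, a numerical sanity check is reassuring. Here is a sketch using mpmath (bundled with sympy; assumed available), which also spot-checks the point symmetry about $x = \pi/4$:

```python
# Numerically evaluate the Putnam integral and compare with pi/4.
from mpmath import mp, mpf, quad, tan, sqrt, pi

mp.dps = 30  # work with 30 significant digits

f = lambda x: 1 / (1 + tan(x) ** sqrt(2))

value = quad(f, [0, pi / 2])
print(value)  # approximately 0.785398..., i.e. pi/4

# Spot-check the symmetry: values at pi/4 + t and pi/4 - t average to 1/2.
t = mpf('0.3')
print(f(pi / 4 + t) + f(pi / 4 - t))  # approximately 1.0
```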

**Whoa! What is going on here?**

Actually, a lot. But let me keep it simple (there are nice generalizations of what I'll write here); I'll give a heuristic argument for why these things work. The key is that the functions we've been dealing with are symmetric about a point. Without going into too much detail, let's just say that a function $f$ is symmetric about a point $(c, f(c))$, where $c$ is the midpoint of an interval $[a, b]$, if for any $t$ such that $c + t$ is still in the interval $[a, b]$, the average of $f(c+t)$ and $f(c-t)$ is $f(c)$, i.e., $\frac{f(c+t) + f(c-t)}{2} = f(c)$. Truly, then, the average value of the function on $[a, b]$ is $f(c)$. However, we also know that the average value of any continuous function $f$ on an interval $[a, b]$ is given by $\frac{1}{b-a}\int_a^b f(x)\,dx$. Therefore, $f(c) = \frac{1}{b-a}\int_a^b f(x)\,dx$ or, equivalently, $\int_a^b f(x)\,dx = (b-a)\,f(c)$.

In our case, $f(x) = \frac{1}{1 + (\tan x)^{\sqrt{2}}}$, $a = 0$, $b = \frac{\pi}{2}$, $c = \frac{\pi}{4}$, and we get $\int_0^{\pi/2} \frac{dx}{1 + (\tan x)^{\sqrt{2}}} = \frac{\pi}{2} \cdot f\!\left(\frac{\pi}{4}\right) = \frac{\pi}{2} \cdot \frac{1}{2} = \frac{\pi}{4}$. Well, except for one thing. We still need to show the averaging property. This is just a little bit of algebra, thankfully. First, I leave it as an exercise to show that $\tan\left(\frac{\pi}{4}+t\right)\tan\left(\frac{\pi}{4}-t\right) = 1$ (try converting things into sines and cosines and using the angle addition formulas). Now, for simplicity, we write $u = \left(\tan\left(\frac{\pi}{4}+t\right)\right)^{\sqrt{2}}$. The average is now given by

$$\frac{1}{2}\left(\frac{1}{1+u} + \frac{1}{1+u^{-1}}\right),$$ where the negative exponent comes from $\left(\tan\left(\frac{\pi}{4}-t\right)\right)^{\sqrt{2}} = \left(\tan\left(\frac{\pi}{4}+t\right)\right)^{-\sqrt{2}} = u^{-1}$.

Using our fraction addition trick, this becomes $\frac{1}{2}\left(\frac{1}{1+u} + \frac{u}{u+1}\right) = \frac{1}{2}$, which is the required value.
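The fraction arithmetic is easy to fumble in one's head, so here is a one-line sympy check (sympy assumed available) that the average really is $\frac{1}{2}$ for every positive $u$:

```python
# Check that (1/2) * (1/(1+u) + 1/(1+1/u)) simplifies to exactly 1/2.
import sympy as sp

u = sp.symbols('u', positive=True)
avg = sp.Rational(1, 2) * (1 / (1 + u) + 1 / (1 + 1 / u))
print(sp.simplify(avg))  # 1/2
```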
