I have been spending the past month reading Euler's Elements of Algebra. It's an interesting way of teaching and learning algebra because it's broken out into small paragraphs that introduce algebraic rules one at a time and relate them to each other in some logical sequence. It's a little odd sometimes, though, because right after learning about logarithms, you go back to subtraction.
One strange thing I learned about myself in college is that I am much better at math involving variables than I am at math involving numbers. I've always been slow adding 5 + 6 or taking 8 * 9. Most people, by the time they're out of 3rd grade, can't help but recite the answers. Not me - I still have to think about it every time. I find it strange, then, that I am much more comfortable reducing expressions like 3x^2 + 9x + 12. In fact, I get a kick out of it! Euler's book is geared much more toward people who like my kind of math.
One nice thing about a math book that's essentially a series of rules is that it screams for someone with a Computer Science background, my background, to implement those rules as a program. That's how we got programs like MATLAB and Wolfram's Mathematica. The trick is that both of those applications are commercial and are already doing the math for you... what fun is that? And don't get me wrong: these applications are beyond anything most people would ever actually need to write by hand. Without too much exaggeration, they represent the culmination of our current understanding of mathematics. They not only generate the right answer, but they most likely do so about as quickly as possible on modern computer hardware. For example, some methods of approximating square roots are so slow that they'd be useless in most real-world settings; however, for just playing around on a computer, it can be fun to implement the different, slow methods found on Wikipedia.
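As a taste of that, here's a minimal sketch of my own (slowSqrt is just a name I made up, not anything from Euler or a particular Wikipedia article), written in Scala, a language I'll come back to in a second. It uses one of the deliberately slow approaches: bisection, which just keeps halving an interval known to contain the root.

// Approximate sqrt(n) by bisection: slow but easy to reason about.
def slowSqrt(n: Double, tolerance: Double = 1e-10): Double = {
  var lo = 0.0
  var hi = math.max(1.0, n)            // sqrt(n) always lies in [0, max(1, n)]
  while (hi - lo > tolerance) {
    val mid = (lo + hi) / 2
    if (mid * mid < n) lo = mid else hi = mid
  }
  (lo + hi) / 2
}

slowSqrt(2.0)   // roughly 1.4142135623...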
Say you did take to writing an algebra problem-solving program. There are languages with constructs that would make recognizing and reducing expressions much easier; most functional programming languages are naturally more mathematical in nature, for example. The first language feature that comes to mind is Scala's match clause. In fact, in Martin Odersky's Programming in Scala, he uses simplifying expressions as his primary example for pattern matching. First he defines "case classes" for variables, numeric constants, expressions, etc.:
abstract class Expr
case class Var(name: String) extends Expr
case class Number(num: Double) extends Expr
case class UnOp(operator: String, arg: Expr) extends Expr
case class BinOp(operator: String, left: Expr, right: Expr) extends Expr
Then he goes on to use the match clause to detect and simplify patterns:
def simplifyTop(expr: Expr): Expr = expr match {
  case UnOp("-", UnOp("-", e))  => e  // Double negation
  case BinOp("+", e, Number(0)) => e  // Adding zero
  case BinOp("*", e, Number(1)) => e  // Multiplying by one
  case _ => expr
}
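Just to convince myself it does what it says (this quick check is mine, not from the book), double negation and adding zero collapse as you'd hope:

simplifyTop(UnOp("-", UnOp("-", Var("x"))))    // Var("x")
simplifyTop(BinOp("+", Number(7), Number(0)))  // Number(7.0)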
This sets my imagination ablaze. It also makes me wish case classes and match blocks existed in more languages. Today, they are mostly found in functional programming languages. Fortunately, I hear tell that many C-based languages are planning to extend switch statements to support similar pattern matching. For example, C# 7 started down that path, and we'll probably see more support for case-class-like concepts in future releases. It would be a real win to get this into a low-level language like C++, and I would also like to see JavaScript adopt this style of pattern matching natively. I'm betting C++ will have support before JavaScript at the current rate of things.
I'm approaching math from a different angle this time around. Growing up, school had me learn a formula and then apply it repeatedly 10, 20, 30+ times until it became almost mindless (but only the even problems, because the answers to the odd ones were in the back). Then, 5 minutes after the test, bliip, gone forever! Knowing that I became a computer programmer, I wish math at my schools had been set up completely differently. I'd have liked to spend a good deal of my time, early on, learning very basic computer programming. Only after getting comfortable with that would I start learning math by figuring out how to instruct the computer to do the math problems for me.
In college, I had a fantastic professor named Dr. Strayer who was kind enough (and forward-thinking enough) to let me write programs to solve his problem sheets. The only catch was that I had to hand in my source code, to show my work. "You mean I get to do what I love in order to automate solving your tedious problems? Deal!" This was Modern Algebra, so there was a lot of set theory, group theory, and other high-level topics. Counterintuitively, I think it might be easier to write programs for these sorts of problems than for simplifying algebraic expressions!
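To give a flavor of what I mean, here's a fresh sketch, not my actual homework (isGroup is just a name I made up): checking whether a finite set with a binary operation forms a group is mostly brute force over the axioms.

// Brute-force check of the group axioms for a finite set under op:
// closure, associativity, an identity element, and inverses.
def isGroup[A](elems: Set[A], op: (A, A) => A): Boolean = {
  val closed = elems.forall(a => elems.forall(b => elems.contains(op(a, b))))
  val associative = elems.forall(a => elems.forall(b => elems.forall(c =>
    op(op(a, b), c) == op(a, op(b, c)))))
  val identityOpt = elems.find(e => elems.forall(a => op(e, a) == a && op(a, e) == a))
  val hasInverses = identityOpt.exists(e =>
    elems.forall(a => elems.exists(b => op(a, b) == e && op(b, a) == e)))
  closed && associative && hasInverses
}

isGroup(Set(0, 1, 2, 3), (a: Int, b: Int) => (a + b) % 4)  // true: the integers mod 4 under addition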
I would be very interested to learn about any school taking this new approach to teaching mathematics. Obviously, there are some chicken-and-egg problems. Programming, even the most basic programming, involves a small amount of math and logic. I think the earliest you could begin is around the time you'd normally teach pre-algebra. I personally didn't understand if/else-type logic when I was in 7th grade: we covered if statements in Excel back then, and they were simply beyond my comprehension. A year earlier, in 6th grade, my teacher tried to get the class to explain the difference between a yield sign and a stop sign... the class was utterly stumped. If you have kids, I'd recommend experimenting with conditional logic -- it could be entertaining. I also had quite a bit of fun trying to come up with the data structures/algorithm for handling 4-way stop signs (it's more complicated than you might think).
I truly believe we're on the cusp of a fundamental shift in the way we operate. Too often I hear of scientists working in a lab, complaining that their awesome simulators are too hard to program a series of experiments into. In many shops, actually running experiments by hand would be too costly, too slow, and too error-prone to be practical. Specialized computer systems now exist that will simulate combining chemicals to form different proteins. Electrical engineers often rely on scripts to figure out how much wattage can be sent over a wire and at what cost. Will the wire ignite? How much electricity is converted to heat as wattage increases? And so on. Outside of science and engineering, businesses are running on crazy Excel spreadsheets, where your annual income seems to be directly tied to your use and understanding of pivot tables. What does this mean?
We're getting to a point where the average human needs to know exactly what a computer is capable of doing on their behalf. I don't want to add 5 + 6 anymore. No, using a calculator is not cheating! My only stipulation is that users should know how their calculators do addition, multiplication, logarithms, derivatives, etc. Better yet, they should be the ones programming their calculators in the first place. Remember, though, that computer software is composed of layers, abstraction built upon abstraction: you gradually isolate complexity, and you can start to forget the minutiae. Once I program a computer to add two numbers, I needn't care how it adds them in the future: it's a one-and-done-type deal. And computers are much better at remembering things than humans, too. Within reason, teaching a stupid computer how to correctly do something requires a great deal of understanding. Maybe once in a while you need to remind yourself of something, but then you just look at the source code and hope you left good comments.
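As a toy illustration of what I mean by one-and-done (this sketch is mine, loosely mimicking how a hardware adder carries bits, not anything a real calculator is obligated to do):

// Add two Ints using only bitwise operations: XOR gives the sum without
// carries, AND-then-shift produces the carries, repeat until no carry remains.
@annotation.tailrec
def add(a: Int, b: Int): Int =
  if (b == 0) a
  else add(a ^ b, (a & b) << 1)

add(5, 6)  // 11 -- and now I never have to think about it again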
Now imagine you started out in school writing the same programs as your classmates. Depending on what you want to pursue, you take different courses and acquire different formulas, a.k.a. different programs and libraries. Over time, you adjust your implementations based on the new things you learn and gradually build up a repertoire of software. Perhaps your acceptance into college is based on how many sample problems your software can correctly handle. Or maybe college is an opportunity to circle back on that old, crufty code and start anew, now with more focus. Perhaps this leads down a dark path, where your future prospects are limited because you simply haven't written enough code. Or someone cheats, copies an algorithm off the Internet, and gets an unfair advantage. The ethics surrounding such a new approach to education would need some serious attention.
Nonetheless, I really believe this is where we're heading. I think we need a standard programming language that everyone uses early in their education, perhaps with more specialized languages for people in a particular focus. I also think the fear of being replaced by AI or robots dramatically decreases when people start focusing on "what can I get the computer to do for me?" In that light, we're all scientists researching, experimenting, and tweaking, trying to push computers to do more and more and humans less and less. I think the idea that some people can program and others cannot is inaccurate. Instead, people simply range from "bad" to "great" when it comes to programming -- something that's immediately apparent to anyone who programs professionally!
I don't know about you, but I plan to start building up my own repertoire right now. Even if it's mostly oriented toward math, perhaps in a couple of years I'll have something to rival a 9th grader in the year 2030!