[ anders ]
When I first learned to program, it was BASIC on a Commodore PET. I could
just barely fit an entire program into my tiny head. Someone had asked me
why I didn't play lotto, so I made a practical example for myself: I went
to the local 7-Eleven and grabbed a lotto form along with a copy of the
rules. I sat down and wrote a BASIC program that played lotto a million
times. The program took two days to run, and in the end, of the million
dollars I "spent," I won back about 300,000, less than a third of it. A
few things became painfully obvious in the process.
Firstly, playing lotto is craziness. In all the games the computer played, it never hit the jackpot. Not even once. In fact, it never even won second prize! And winning second prize once wouldn't even have let me break even.
Secondly, I should have played all the money on one day rather than playing a million separate games. Distinct combinations entered in a single draw add their probabilities together exactly, while the same number of tickets spread across many draws gives a slightly lower chance of winning at least once. Too bad for all those people out there who play every day thinking they have good odds. They might as well save their dollars for a year and then play 365 different combinations of numbers all on one day: their chances would never be worse, and strictly speaking a little better.
Lastly, though, I learned the dire consequences of inefficient code that is to be executed millions of times. In my haste to solve the problem, I had called the random number generator one more time than I actually needed to inside the main loop. As it turns out, the random number generator on this particular computer takes forever (in computer time) to come back with a number. I didn't even think about efficiency until I ran the loop a million times! The time to get the program right was before the run, not halfway through it. Of course, I figured this out a day and a half into the execution, so it didn't make much sense to fix it at that point. I learned the lesson of efficiency the hard way, but to this day I have never spent a dollar on lotto!
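The bug itself is worth sketching. This is a Python stand-in for the original BASIC (the counter and function names are invented): one stray call to a slow routine inside the main loop doubles the total work, and over a million iterations that doubling is what cost the extra day.

```python
calls = {"n": 0}

def slow_random():
    """Stands in for the PET's slow random number generator.
    We only count how often it gets called."""
    calls["n"] += 1
    return 4  # the value doesn't matter for this demonstration

GAMES = 1000

# Buggy version: one extra RNG call per iteration whose result is never used.
calls["n"] = 0
for _ in range(GAMES):
    _unused = slow_random()   # the stray call
    draw = slow_random()
buggy_calls = calls["n"]

# Fixed version: only the call the loop actually needs.
calls["n"] = 0
for _ in range(GAMES):
    draw = slow_random()
fixed_calls = calls["n"]

print(buggy_calls, fixed_calls)  # 2000 1000
```

Twice the calls to the slowest routine in the program means roughly twice the runtime, and that multiplier applies to every one of the million iterations.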
Another big program I worked on was for a company that sold tapes of lectures. There were thousands of master tapes, and from those they needed to make varying numbers of copies depending on how many were sold. They had a fan-fold printer version of a sticky label onto which they typed the titles by hand on a typewriter. That had to go. I fooled with printer codes and a simple menu system until I had about 1,000 lines of code. By the end of my summer there, I had a complete program capable of printing multiple copies of labels, saving and retrieving label records from disk, and a nice menu system to control it all. To my utter horror, when I visited about five years later, they were still using my system!
I learned a critical lesson there too, and again I learned it the hard way. Before this project, I had never needed to write code that had to be re-used; the most reuse I got out of code was putting something inside a FOR...NEXT loop. But the menu system for the label-printing program was a clear case for code re-use. I started out by manually coding the same sorts of things over and over and over. I wised up a little and started to copy and paste, but at the end of the day I had a mountain of code that was fairly immovable whenever I wanted to change the general behavior of menu selection. As the menu code evolved, I had to make each change throughout the code (and it touched every piece, let me tell you!). It was a mountain of work, but I made it do what I wanted in the end.
This happened to be a summer job, so the next summer (my last) I spent completely rewriting the program from the ground up. I started by making the menu system modular; if there was ever a clearer case for modularity, I hadn't seen it. Suddenly I could change the look of the menu pointer in one place and have it change for every single menu in the entire system. The code came in at around 600 lines, the overall stability was greatly improved, and the feature set was much larger. The one lesson this didn't teach me was that in larger projects to come, I couldn't afford to just sit down and completely rewrite everything. I had an inkling about that, but the only way I really learn is by falling flat on my face.
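The payoff of that rewrite can be sketched in a few lines. This is a Python illustration, not the original BASIC, and the names are invented: one menu routine serves every menu in the program, so the pointer style lives in exactly one place.

```python
POINTER = "->"   # change this one constant and every menu changes with it

def render_menu(title, items, selected):
    """Render a titled menu with the pointer beside the selected item."""
    lines = [title]
    for i, item in enumerate(items):
        marker = POINTER if i == selected else "  "
        lines.append(f"{marker} {item}")
    return "\n".join(lines)

print(render_menu("Main", ["Print labels", "Edit record", "Quit"], 0))
```

In the copy-and-paste version, the equivalent of `POINTER` and the selection logic lived in every menu separately, which is exactly why each change had to be hunted down throughout the code.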
Modularity carried me for a long time. Perl was down-and-dirty modularity. I spoke the language of subroutines and gloated over each module that made things more efficient, reaping the benefits of "write once, use many times." Life was good.
Along the way, I picked up bits and pieces about why one codes in a particular fashion. Being a stubborn idiot who doesn't learn until I smash my head into a wall a few dozen times, I could never learn this by having someone tell me, particularly in school. If someone told me one thing or another, that was pretty much all the reason I needed not to believe it. Until they proved their legitimacy by demonstration, I wouldn't accept anything they said, and even after that it was hard. It took me longer to learn this way, but my knowledge is based on demonstrated fact, not "because someone said so."
But someone said that object orientation was a much better way to program. Up to this point I hadn't seen true object orientation, so of course I didn't believe it. Yes, I had written a few Java applets and fooled with object-oriented languages before, but there didn't seem to be any overwhelming need for it. For all practical intents and purposes, I thought it was just another way to program, no better or worse than any other. But as I was to find out, there are very good reasons why object-oriented techniques are better than any other approach for serious software construction. And I don't believe one can see that before working on a large-scale software project, because I was incapable of seeing it until that point. However, you might just humor me and continue reading, so that in a few years, after you too have seen the light, I can say, "See, I told you so!"
I was missing the point of object orientation because I didn't have a realistic sense of the various bottlenecks of large-scale software engineering. When I was young and the projects were relatively small, if I made a fundamental mistake in the design of the code, I could just turn around and recode the whole thing. In a large-scale setting you can't do that. Not only do you have to get the broad strokes right, but the implementation of all the little pieces has to be done correctly as well. If you screw up one or two of the small pieces, you might be able to recode them, but you can't redo a whole boatload of them, and you certainly can't re-tool the architecture of the system. So the value of taking the time to find the right way to do things grows by orders of magnitude; so much so that it should enter the equation as a design necessity, not merely something to keep in mind. Object orientation is really more an approach to the total problem of software construction than a type of language. It sets up rigid rules that force you to think through the architecture before diving in and building something. That is what I was missing: the real-world reasons one would want to pursue an object-oriented approach.
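Since the payoff is described only in the abstract above, here is a minimal Python sketch (class and method names are invented, borrowing the label-printing example) of the kind of rule object orientation enforces: callers depend only on a stable interface, so the internals can be re-tooled without rippling through the rest of the system.

```python
class LabelStore:
    """Callers see only save() and copies_for(); how records are
    actually kept is a private detail behind the interface."""

    def __init__(self):
        self._records = {}          # today a dict; later maybe a disk file

    def save(self, title, copies):
        self._records[title] = copies

    def copies_for(self, title):
        return self._records.get(title, 0)

store = LabelStore()
store.save("Lecture 12", 40)
print(store.copies_for("Lecture 12"))  # 40
```

Swapping the dictionary for disk storage would change only the class body; every caller of `save` and `copies_for` stays untouched, which is precisely the property a large project cannot live without.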
Before this, when looking at a language with structure, in this case Pascal, dealing with the constraints of the language felt tedious. You have to declare variables before you can use them, and you have to break things up into compartments and call them properly with the correct variable types. To me this was all tedium, because I could get the same effect faster in an unstructured language. Of course, my projects back then were quite simple and arguably didn't need compartmentalization, but Pascal is a teaching language, and this was supposed to teach me how to break things up. It wasn't teaching me why you break things up, because the tasks were usually too simple. Perhaps you could have told me why, but I wouldn't have understood it until there was a practical example. I saw the light the day recursion was demonstrated to me. With ten lines of code, one could solve the Towers of Hanoi quite elegantly. The code took more than a cursory read to understand, but once I saw how variable scoping lets each call keep its own data, effectively without limit, the simplicity of the solution blew me away. Suddenly I saw why you would want privately scoped variables rather than globals everywhere. Better yet, there was suddenly a need for the structure.
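For the curious, the classic recursive solution really does fit in about ten lines. This is a Python rendering (the original demo was Pascal); every recursive call gets its own private `n`, `src`, `dst`, and `via`, which is local scope doing the bookkeeping that global variables never could.

```python
def hanoi(n, src="A", dst="C", via="B", moves=None):
    """Move n disks from src to dst, recording each move."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)   # park n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, via, dst, src, moves)   # restack the n-1 disks on top
    return moves

print(len(hanoi(3)))  # 7 moves: 2**3 - 1
```

Each level of recursion quietly saves its own copy of the arguments on the call stack, so no explicit data structure is needed to remember where every disk belongs.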
... to be continued