Models in frameworks
One of my biggest frustrations with most MVC frameworks is that their approach to models is generally broken. Models should hide the data from the controllers. A controller shouldn’t need to know anything about the data, including what kind of storage is used. Most frameworks don’t take this approach.
Suppose, for example, that I have a model for Users. The model should provide the ability to add a User, remove a User, and update a User. We’ll probably also need some methods for listing and other ways of manipulating the user information. However, the model should be written in such a way that if we start with the user information in a database, and later move it to a remote LDAP server, some SOAP or REST service, or a text file, it won’t matter. All of the controllers should be able to continue to call the same model methods they were calling before, in the same way, and it shouldn’t matter to them where the data is coming from. Knowing where the data is stored and how to get it and work with it is the model’s job.
This means that the model should not extend some database class. The framework’s model structure should not assume that all models use databases. In general, most discussions of MVC frameworks try to isolate the actual database work from the controllers, but they still require the controller to know too much about the data repository. The effect is that you could probably move from a MySQL database to SQLite, but if you stop using a database entirely you are going to have some major work to do on your controllers that use that model. To me, a major measure of success for any MVC approach is how much your controllers have to know about the data. Less is better.
Zend Framework tries to accomplish this, but it basically does so by not giving you a start on models at all. Even a very abstract class would be nice to have as something to extend. At least that would give people a starting point better than extending one of the DB classes, which is what just about all of the tutorials and articles on ZF models recommend doing. ZF’s approach of leaving models out of the framework entirely means most instructions tell people to do it wrong.
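To make this concrete, here is a rough sketch of the kind of storage-agnostic model I mean. The class and method names are made up for illustration (ZF ships nothing like this); the point is that controllers only ever see the abstract methods, while the storage details live in the concrete class:

<?php
// Hypothetical base class: this is all the controllers ever see.
abstract class App_Model_Users
{
    abstract public function add(array $user);
    abstract public function update($id, array $user);
    abstract public function remove($id);
    abstract public function fetch($id);
    abstract public function fetchAll();
}

// One possible backend: a database table, wrapped with Zend_Db_Table.
class App_Model_Users_Db extends App_Model_Users
{
    private $table;

    public function __construct(Zend_Db_Table_Abstract $table)
    {
        $this->table = $table;
    }

    public function add(array $user)
    {
        return $this->table->insert($user);
    }

    public function update($id, array $user)
    {
        return $this->table->update($user, array('id = ?' => $id));
    }

    public function remove($id)
    {
        return $this->table->delete(array('id = ?' => $id));
    }

    public function fetch($id)
    {
        $row = $this->table->find($id)->current();
        return $row ? $row->toArray() : null;
    }

    public function fetchAll()
    {
        return $this->table->fetchAll()->toArray();
    }
}

// A later App_Model_Users_Ldap or App_Model_Users_Rest would implement
// the same five methods, and no controller would have to change.

The only thing that changes when the data moves is which concrete class gets instantiated, ideally in one place like a bootstrap or factory, rather than in every controller.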
I’m not a purist. I’m a pragmatist. I don’t care a great deal about patterns for their own sake, in any sort of Platonic sense. I just want to be able to be lazy when I find out later that change is indeed normal and I need to adapt an application to shifting requirements. Because, as predictably as the weather in Phoenix, just about the time an inventory application is a couple of iterations in and really getting useful, they are going to want to integrate it with some remote POS system, and oh yeah, can it get its user information from some place in the cloud rather than the local accounts we wanted to use last week? I’d much rather say “Sure, that’ll be ready the day after tomorrow” than say “DOH!” That requires the controllers to be ignorant of the data repository.
Moving to wordpress.com
After I don’t know how many years of keeping reseller accounts at various web hosting companies or running my own servers, I have decided to shut all of that down and move my remaining personal sites to various services. This blog, for example, has moved from a self-hosted WordPress site to wordpress.com. Other things, like email, are going to Google. Overall the move was very smooth.
The move comes with a few pros and cons. On the pro side, I don’t need to maintain the sites like I do when I run my own. Keeping sites upgraded to the latest version of Drupal or WordPress or Joomla or whatever is a pain. I generally just don’t, which means I’m almost always a couple of versions behind, and I still feel like I’m always needing to fix something. Using wordpress.com means I don’t have to worry about any of that. Additionally, using services like this is considerably cheaper than doing it myself. For just under $10 a year you can use your own domain and let them do all the work. That’s a bargain, not even counting the value of the time I save. Overall, the management tools at wordpress.com work very well, they are easy to use, and it’s a very well-done service (from all I can tell so far).
On the minus side, I lose some control. On my own sites, I had a modified theme, on which I had changed the templates and customized the CSS. On wordpress.com, you can customize the CSS if you pay a fairly nominal fee, but as far as I can tell you can’t use your own custom theme. On a service like this that probably makes sense. You also can’t load extra widgets, probably for very similar reasons. I had a few on my old site, and historically I have been in the habit of tweaking their code slightly. The biggest loss in this regard is code syntax highlighting, for which I was using a plugin on my old site. I lose that ability and give up some of the control of the sites. I could probably get code highlighting back by signing up for the custom CSS option, but I haven’t done that yet.
I also was not that impressed by the domain hosting component. Basically, wordpress.com wants to be the name server (NS) host for the domain, which means you point your domain registrar at the wordpress.com name servers. That gives up a lot of flexibility, like putting the web site in one place while pointing subdomains and other services on the same domain somewhere else. WordPress does support using Google for email, and will easily point your MX records there. Since I was moving all of my email to Google at the same time anyway, and am not doing a lot else with these domains, after some mumbling under my breath I went with it.
The move itself was very easy. I was already using WordPress, so I exported the posts, pages, and comments from my old site and imported them into the new site. The process of signing up and adding domain mapping, setting the theme, and all the other small things I needed to do was quite easy. I’d be hard pressed to think of ways they could make signing up and getting going much easier. I think just about anybody could have gone through the process smoothly, even if they were novices. That part of wordpress.com, at least, is well done.
The investment is quite small, and if it doesn’t work out I can move on with very little loss. My sites are predominantly personal with a small audience, so the risk of making changes is very low. On the whole, I am fairly pleased with what I get from wordpress.com, and the cost is very reasonable even if you add in the optional extras. For me, giving up some flexibility and control was well worth the time savings.
Archiving a site using httrack
Recently someone asked me to help them archive a site. They had lots of personal stories, pictures, scanned copies of kids’ school work, and so on that they wanted to preserve, but not necessarily on an active web site. Basically they wanted an electronic scrapbook they could keep for the future. The site was using Drupal, and almost all of it was protected by passwords, using a Drupal-style login form. The site was also not using rewrites on the URLs, so pages looked like index.php?q=node/125 and index.php?q=logout.
My first thought was to just wget the site using the mirror and preserve cookies options. The main problem with that approach is that one of the early links that wget followed was the Logout link on the main menu, so I’d get three or four pages into the site and then just get a bunch of “403 Forbidden” messages. wget’s exclusion arguments didn’t help, because they aren’t usable on the query string, and that’s where the logout part of the link was located.
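For the record, the wget attempt was along these lines, with a cookies.txt file holding the logged-in session (example.com is a placeholder, and the exact options varied as I experimented):

wget --mirror --convert-links --page-requisites --load-cookies cookies.txt --keep-session-cookies "http://example.com/"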
Fortunately, httrack did the trick. At first I had decided not to use httrack because the help says that to work with sites that have logins you need to go through a process that involves setting your browser’s proxy settings, and I found the interface annoying. Those complaints are not a huge deal, but I didn’t really want to bother. As it worked out, I didn’t need to.
I got what I needed using the command line version of httrack, and I didn’t use the proxy workaround at all. I logged into the web site in Firefox, copied the session ID from the site’s cookie, and put it in the cookies.txt file for httrack to use. The cookies.txt file format is documented here: http://httrack.kauler.com/help/Cookies
It’s the same layout wget uses for its cookie file, so I was actually able to reuse the file wget had created during my earlier attempt; all I had to do was change the session ID.
The relevant line in the cookies.txt file looked like this (the fields are tab-separated):
example.com FALSE / FALSE 1999999999 PHPSESSID 1f85edbfc2db8e20af20489f7fb7b417
Obviously the session ID for each session is going to be different. Although I used the file I already had, it would be easy to create this file by hand.
Then I ran httrack, omitting some of the links, and it grabbed the site nicely.
httrack "http://example.com" -O "./example.com" -v '-*logout*' '-*edit*' '-*user*'
That tells httrack to fetch http://example.com, place the resulting files in ./example.com, be verbose in its output, and omit any URLs that contain logout, edit, or user. What httrack does better than wget here is that it will omit any URL that matches the exclusions, even when the match is in the query string; wget only lets you exclude based on the directory, domain, or file name.
Overall, I found httrack did a very good job. The naming of the files that resulted was cleaner than wget produces, at least for my purposes. My only complaint about httrack was that although there’s plenty of documentation (if you count the information on httrack.kauler.com), it was hard to find what I needed.
Remote collaboration and teams
When teams are distributed, it is important to find ways to build constant communication and to capture the benefits that otherwise come from working together in one room. This can be challenging. As we move towards more mobile and nomadic work styles (see this interesting set of articles from the Economist), and as development and other teams are scattered around the world, it becomes increasingly important.
The primary purposes of having people sit together for development or other work include promoting the team mentality and relationships between team members, the constant communication that promotes frequent but typically short interaction, and the free flow and transparency of information. All of these can be achieved with remote workers, given the right tools and the right mental and cultural shifts.
One of the key elements is Presence. When you share space with someone, you can see if they are sitting at their desk, and you can see from their body language if they are intently focused on something. We need a way to convey this in situations in which people are not physically together. The status and mood statements provided by most Instant Messaging systems can do this fairly well, if they are reliable and are used actively. A good IM presence indicator should tell if a person is available, online but busy, online but should not be disturbed, or offline. If the person can add text to this, using a mood line, or in some systems just as custom text on the status, this can add additional information for others. A key element for all of this to work, though, is that the presence has to be reliable. Skype’s presence indicators, for example, aren’t reliable. You don’t really know from a status indicator in your contact list if that’s really the status of the person, or if it just hasn’t updated. The same result happens when people don’t update their status (a failing I have, actually). Effective remote team work takes the extra effort to use the tools to communicate in ways that would be inherent in physical proximity.
Another element lost when people are not physically together is the constant give and take, the chatter of people working together. This constant communication is a form of status, telling people what you are working on, how it is going, and whether or not you should be bothered at the moment. Another piece is a constant awareness of what is going on in the team. Periodic meetings, even daily ones, don’t really provide this. The constant chatter of co-workers in close proximity is also important for team building. Most of this can be obtained using micro-blogging services, which make the chatter easy, quick, and unobtrusive, yet available to everybody in real time even when it isn’t aimed at them. Services like Twitter or Jaiku can do this, but a service aimed at teams, like Yammer, is better.
I was skeptical of micro-blogging as an effective tool until I saw it in action. I invited a handful of coworkers to Yammer, thinking they would say “Cool, but why?” Instead, within a very short time the few people I had invited turned into around 50 people, many of them enthusiastically using the service. In short order I learned a couple of important things going on in the work of people I don’t talk to very much (I’m not in the same city as they are). I was amazed at how quickly the tool was picked up, and at how well it simulated hearing things in the hallway, or overhearing conversations between coworkers in an open office environment.
Of course, not all conversations should happen around the water cooler or in the hallway. Sometimes you need more in-depth conversations, or you need privacy. IM is still the way to go here. It is important that IM have a way to archive and search old conversations, that it be really easy to use, that it allow multiple people in the same conversation, and that it be a part of your working culture.
I should note that not all of the conversations that happen on micro-blogging or IM services are going to be directly work-related. In fact, you want a healthy amount of personal give and take, joking, and community-building activity going on in these channels. That’s the sort of thing that happens when a real team is together, and it is important to get the same kind of community buzz going remotely.
It would be helpful if these tools would converge. There are likely other ways to accomplish these goals. IRC and other IM type systems that let you post comments in a “room” that is available to everybody might be part of it. But you need to be able to subscribe to and follow multiple tags, people, and areas at the same time without switching contexts. It has to come naturally. The tools I mentioned are headed in that direction.
It would also be good to seamlessly tie voice and video into these mediums of communication. Sometimes text isn’t working, and it’s best to pick up the headset and go to a voice chat. Skype does that well. If Google would make Google Talk cross-platform, maybe that service would work for this as well. I am not a big fan of video (you don’t need or want to know what I’m wearing while I work), but for some people that would probably work best. Sometimes face-to-face meetings are required, and it’s best to hop on a plane (if necessary) and go sit down for a face-to-face talk. But for the vast majority of everyday work, there isn’t any reason that distributed teams can’t get the same results as teams sitting together.
Ubuntu’s nice, but I’m headed back to OpenSuse
I ran Ubuntu for about a month, and it is a very nice distribution. Everything worked very smoothly and I had no real problems with it. Even so, I’m not going to keep using it. There is nothing at all wrong with it; there just aren’t any real advantages over OpenSuse, so I’m going back to what’s more familiar. If any Windows users ask me what Linux flavor they should try, now that I have actually used it I am comfortable recommending Ubuntu as a good choice.
OpenSuse 11 is excellent. They have continued to develop a very good distribution, and here again everything pretty much just works. The installation is very smooth. The Network Manager for KDE is improved, as is the update manager.
One difference that threw me for a few minutes was configuring Xinerama across two screens on my Dell D620. In Ubuntu, the key is to install the nVidia drivers and use the nVidia tool to configure X. On OpenSuse, you also need the nVidia drivers, but you have to use SaX2 (accessible through YaST) to configure X. Once you do this, it’s very smooth and easy to get working. Just to be clear, the display works fine without the proprietary drivers, but to get the best use out of multiple screens you need the drivers from nVidia.
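For anyone who wants to skip the GUI, the command line equivalents are roughly these (the SaX2 switches are from the openSUSE nVidia instructions as best I recall them, so treat them as approximate):

# Ubuntu: the nVidia configuration tool, once the driver package is installed
nvidia-settings

# OpenSuse: reconfigure X to use the nvidia driver (run as root)
sax2 -r -m 0=nvidia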
Once or twice a week I go into the office for meetings. It’s not unusual for me to unplug the monitor, carry my laptop to a conference room, and plug it into a projector without changing the configuration at all. OpenSuse handles this without problems. Ubuntu just wouldn’t do it satisfactorily at all. I didn’t take the time to figure out how to get it to work, although I’d guess there’s a way to get the results I wanted; it just didn’t happen out of the box like it does with OpenSuse.
I initially ran KDE4 under OpenSuse 11. It looks very nice, but it is not ready for real use yet, at least not for me. It is still missing most of the options that make me really like KDE, and panel configuration is simplistic and just not there yet. Fortunately, OpenSuse has packaged things so you can easily install KDE3 beside KDE4 and choose which one to run. Installing KDE3 was as easy as loading the package manager and picking KDE3 from the package patterns list. Now I can check back on KDE4’s progress as it improves. KDE3 is just so good that they are going to have to do a lot more work on KDE4 before it comes close. Until then, I’ll choose flexibility and customizability over the extra coolness of KDE4. KDE4 has a lot of promise, and it is going to be beautiful when it’s ready.
Here are a few direct comparisons between OpenSuse running KDE and Ubuntu running Gnome (without trying to sort out which are distro features and which are desktop environment features). Ubuntu’s printer and network configuration is easier. I like Ubuntu’s package management tools better than YaST, but both work quite well and are fairly easy to use. The difference between the Ubuntu package manager on the normal menu and the one on the system configuration menu is odd, but both worked fine for me. I like the ease of YaST’s patterns, which, as I mentioned, made it easy to install KDE3. It’s been quite a while since either of these distros failed to resolve a dependency for me or sent me hunting for packages. Overall, I prefer YaST to the configuration tools in Ubuntu. It is nicely organized and consistent, and they keep adding more to it. The real reason for my preference might be familiarity: YaST has been improving steadily, but it has kept a consistent look and feel for several versions. The one thing I really don’t like using YaST for is configuring Apache.
The short answer to a comparison between Ubuntu and OpenSuse is that there is no clear winner. They are both excellent. They both install and run without a lot of headaches. They both perform well on my hardware, and they both found and configured all of my hardware without difficulty (including various USB devices I plug and unplug at whim). The few differences are minor, and for the most part (other than the issue with unplugging the monitor and plugging it into a different display, which Ubuntu didn’t like), it comes down to a matter of taste more than it does functionality.
Both of them beat Windows soundly on features, customization, and ease of installation and use. Linux just keeps getting better, and the gap is widening. Now if we could just get commercial vendors to distribute end-user software that is cross-platform… but that’s a post for a different day.