

Archive for November, 2009

Simple form submit with jQuery

November 19, 2009

In building Ajax-driven apps, one of the things you have to do is capture form submits, send them to the server, and then handle the results.  In many applications the results are data, passed as JSON or XML or some other format that the remote service provides.  In other apps we're dealing with snippets of HTML that then get placed somewhere in the current page.  I want to be able to handle all forms on the page without needing to know in advance how many forms there are, what they are called, what's in them, or what to do with the results.  I haven't found a good succinct description of how to do this, so here's mine using jQuery.  This isn't rocket science.  It's just pulling together some of jQuery's nice features.  There are other good ways to do this.

$(document).ready(function(){
    $("form").submit(function(){
        var thistarget = this.target;
        jQuery.ajax({
            data: $(this).serialize(),
            url: this.action,
            type: this.method,
            error: function() {
                $(thistarget).html("<span class='error'>Failed to submit form!</span>");
            },
            success: function(results) {
                $(thistarget).html(results);
            }
        });
        return false;
    });
});

This captures all submits on all forms on the page, uses jQuery's ajax call to post the form's fields to the URL in the form's action attribute, and then places the results in the element identified by the form's target attribute.  You could easily use jQuery.post instead of ajax; of the options I included above, all you would lose is the error handler.  jQuery.ajax has a bunch of options, which are documented nicely here:
http://docs.jquery.com/Ajax/jQuery.ajax#options
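As an aside, $(this).serialize() just turns the form's fields into a standard URL-encoded query string.  A rough sketch of that encoding in plain JavaScript (a simplified illustration, not jQuery's actual implementation, which among other things encodes spaces as '+' and handles multi-value fields):

```javascript
// Simplified sketch of URL-encoding name/value pairs the way a form post does.
// Not jQuery's real serialize(); just the basic idea.
function serializePairs(pairs) {
    return pairs
        .map(function (pair) {
            return encodeURIComponent(pair[0]) + "=" + encodeURIComponent(pair[1]);
        })
        .join("&");
}
```

So serializePairs([["firstname", "Ann"], ["city", "New York"]]) gives "firstname=Ann&city=New%20York", which is what ends up in the request body.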

The forms in the page look like normal forms, and don't need any special attributes, other than that we are abusing the target attribute a little: it holds a jQuery selector for the element that should receive the results.

<form name='myform' id='myform' action='targetURL' target='#target'>
First Name: <input type='text' name='firstname' id='firstname'>
<input type='submit' name='submit' value='Submit'>
</form>
<div id='target'></div>

If I submit this form, either using the Submit button or by hitting enter in the firstname field, the fields from the form (in this case just firstname) are posted to targetURL, and if the post works the results that come back are inserted into the #target div.  If the post fails for some reason (for example, a 404 error) then the "Failed to submit form!" message is inserted into the #target div instead.  Note that the target attribute has to hold a jQuery selector ('#target', not just 'target'), since the handler passes it straight to $().

Note that if JavaScript isn't available, the forms will still work.  We'll just get the results in the named target window instead of having them inserted into the page.

Categories: javascript

My linux distribution odyssey (so far)

November 3, 2009

Last year I wrote a post about trying out Ubuntu, but then moving back to openSUSE.  I was very happy with openSUSE, and wouldn't hesitate to recommend it to anybody.  However, when Ubuntu 8.10 was released I moved to it.  Some people just can't make up their minds, I guess.  Then I moved to Ubuntu 9.04, and a couple of weeks ago I upgraded that to 9.10.  All of these worked great.

I used to tell people that the linux distribution you choose is a matter of taste, and that there were several perfectly good choices, but I'm not sure I really meant it.  In my linux life I've used Red Hat, Gentoo, Ubuntu, Fedora, Debian, CentOS, Conectiva, and several small specialized distributions.  I ran KDE for years, and I was clearly superior to all those Gnome users.  Now (since the release of KDE4) I run Gnome.  I somewhat expect that at some point in the future I'll find myself back on KDE.  My longest chunk of time was on Red Hat, before they came up with the whole Fedora thing, and for me that was clearly the winning choice.  During that time (and for a while after) I wouldn't touch anything with Debian in its lineage.  Then I got a job in which I inherited some Debian servers, and I was pretty miserable for a bit, but I got over it.  I have a long history of being a distro snob of one kind or another.  At some point or other I have fervently disliked some of the best software out there.

For now, I'm pretty happy with Ubuntu.  It works really well, and long gone are my days of hacking at some perl script or trying to figure out why code distributed with the distro would not compile on my box.  For some time now, stuff just works.  I still occasionally find a use for dusting off my perl skills, but not because I couldn't find good tools elsewhere or because basic components of the software I need don't work.  In fact, it's pretty rare not to be able to just run a search in Synaptic, install something, and be on my way when I need a new application.  My hacking time is spent solving the problem I'm actually trying to solve, instead of on getting to the point where I can start working on it.

Things have really come together in the last couple of years for Linux.  It's amazingly good, and it works amazingly well on every piece of hardware I use.  My kids and my wife still use Windows; it's fine for games.  Ubuntu is easier to install.  It's easier to maintain.  It's easier to install and manage software on.  It's easier to patch.  It's even easier to use.  If I want to get stuff done, I never choose Windows.

And the message, I think, of my odyssey through all the different linux distributions is that it doesn’t matter.  I’ve used several linux distributions.  I’ve used KDE, Gnome, and Enlightenment (and probably something before KDE, but I don’t remember what).  The key to my computing world is choice.  Use what works for you for what you are doing now.

That’s not what life is about, but it is what computers are about.  They are tools.  Use what works.  Use what you like.

If you haven’t tried Ubuntu yet, you should.

Categories: linux, ubuntu

Clean Models

November 2, 2009

The flip side of what I said in my last post about Models in frameworks is that models shouldn’t know anything about the controller or view.  The data that the model is providing might eventually end up in JSON, or XML, or in a web page or HTML snippet, or in a CSV file.  None of that should matter to the model.

Generally, models should not return error messages or "no data" messages that are intended for the eventual user, unless they are generic enough to be meaningful regardless of context.  It should always be obvious to the controller whether a call succeeded, but the controller and view should decide how to display or send any messages, typically through a central error or messaging handler.  The formatting, and even the content, of those messages will vary quite a bit depending on how the results are eventually delivered.

The data might be destined for a JavaScript-heavy table, a CSV or Excel file, a PDF, a clean display on a light HTML page, or to pre-fill a form on a user's return visit.  All of these have very different formatting requirements, both for the data and for any messages that get displayed to the user.  The deliveries that download files might never download anything at all if there's no data, and might report that in some other way.
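To make that concrete, here's a minimal sketch (hypothetical names and data, in JavaScript purely for illustration) of keeping the "no data" wording out of the model and in the delivery layer:

```javascript
// Hypothetical sketch: the model reports data and success neutrally;
// each delivery layer decides its own formatting and its own "no data" wording.
function findOrders(rows, customerId) {
    var matches = rows.filter(function (row) { return row.customer === customerId; });
    return { ok: true, data: matches }; // no user-facing message in the model
}

function renderAsJson(result) {
    return JSON.stringify(result.data); // an empty array is fine for an API client
}

function renderAsHtml(result) {
    if (result.data.length === 0) {
        return "<p>No orders found.</p>"; // the wording lives in the view layer
    }
    return "<ul>" + result.data.map(function (row) {
        return "<li>" + row.item + "</li>";
    }).join("") + "</ul>";
}
```

The model stays identical whether the caller is building a web page, an API response, or a file download; only the render function changes.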

Not building clean models, either by requiring that the controller know too much about the model, or by embedding information about the eventual delivery of the information in the model, is a recipe for spending time rewriting later, or for ending up with lots of duplicate code.

Categories: development Tags: , ,

Models in frameworks

November 2, 2009

One of my biggest frustrations with most MVC frameworks is that their approach to models is generally broken.  Models should hide the data from the controllers.  A controller shouldn't need to know anything about the data, including what kind of storage is used.  Most frameworks don't take this approach.

Suppose, for example, that I have a model for Users.  The model should provide the ability to add a User, remove a User, and update a User.  We’ll also probably need some methods for listing and other ways of manipulating the user information.  However, the model should be written in such a way that if we start with the user information in a database, and later move it to a remote LDAP server, or some SOAP or REST service, or a text file it won’t matter.  All of the controllers should be able to continue to call the same model methods that they were calling before, in the same way, and it shouldn’t matter to them where the data is coming from.  Knowing where the data is stored and how to get it and work with it is the model’s job.
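As a sketch of what I mean (hypothetical names, and JavaScript rather than PHP purely for illustration), the controller only ever sees the model's methods; the storage behind them is swappable:

```javascript
// Hypothetical sketch: the controller sees only the model's methods;
// the storage behind them can be swapped without touching any controller.
function createUserModel(storage) {
    return {
        addUser: function (user) { storage.save(user.name, user); },
        getUser: function (name) { return storage.load(name); },
        removeUser: function (name) { storage.remove(name); }
    };
}

// An in-memory backend; a database, LDAP, or REST backend would just
// expose the same save/load/remove interface.
function memoryStorage() {
    var data = {};
    return {
        save: function (key, value) { data[key] = value; },
        load: function (key) { return data[key]; },
        remove: function (key) { delete data[key]; }
    };
}
```

Swapping the database out for LDAP means writing a new storage backend with the same three methods; every controller that calls addUser or getUser keeps working untouched.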

This means that the model should not extend some database class.  The framework’s model structure should not assume that all models use databases.  In general, most discussions of MVC frameworks try to isolate the actual database work from the controllers, but they still require the controller to know too much about the data repository.  The effect is that you could probably move from a MySQL database to SQLite, but if you stop using a database entirely you are going to have some major work to do on your controllers that use that model.  To me, a major measure of success for any MVC approach is how much your controllers have to know about the data.  Less is better.

Zend Framework tries to accomplish this, but it basically does so by not giving you a start at any models at all.  Even if it were only a very abstract class, it would be nice to have something to extend.  At least that would give people a better starting place than extending one of the DB classes, which is what just about all of the tutorials and articles on ZF models recommend.  ZF's choice to leave models out of the framework entirely means most instructions tell people to do it wrong.

I'm not a purist; I'm a pragmatist.  I don't care a great deal about patterns for their own sake, in any sort of Platonic sense.  I just want to be able to be lazy when I find out later that change is, indeed, normal and I need to adapt an application to shifting requirements.  Because, as predictably as the weather in Phoenix, just about the time an inventory application is through a couple of iterations and really getting useful, someone will want to integrate it with some remote POS system, and oh yeah, can it get its user information from some place in the cloud rather than the local accounts we wanted to use last week?  I'd much rather say "Sure, that'll be ready the day after tomorrow" than "DOH!"  That requires the controller to be ignorant about the data repository.

Categories: development, PHP

Moving to wordpress.com

November 1, 2009

After I don't know how many years of keeping reseller accounts at various web hosts or running my own servers, I have decided to shut all that down and move my remaining personal sites to various services.  This blog, for example, has moved from a self-hosted WordPress site to wordpress.com.  Other things, like email, are going to Google.  Overall the move was very smooth.

The move comes with a few pros and cons.  On the pro side, I don't need to maintain the sites like I do when I run my own.  Keeping sites upgraded to the latest version of Drupal or WordPress or Joomla or whatever is a pain.  I generally just don't, which means I'm almost always a couple of versions behind, and I still feel like I'm always needing to fix something.  Using wordpress.com means I don't have to worry about any of that.  Additionally, using services like this is considerably cheaper than doing it myself.  For just under $10 a year you can use your own domain and let them do all the work.  That's a bargain, not even counting the value of the time I save.  Overall, the management tools at wordpress.com work very well, they are easy to use, and it's a very well done service (from all I can tell so far).

On the minus side, I lose some control.  On my own sites I had a modified theme, on which I had changed the templates and customized the CSS.  On wordpress.com you can customize the CSS if you pay a fairly nominal fee, but as far as I can tell you can't use your own custom theme.  On a service like this that probably makes sense.  You also can't load extra widgets, probably for very similar reasons.  I had a few on my old site, and historically I have been in the habit of tweaking their code slightly.  One of the biggest losses in regard to the widgets is code syntax highlighting, for which I was using a plugin on my old site.  I lose that ability and give up some control of the sites.  I could probably get code highlighting back by signing up for the custom CSS option, but I haven't done that yet.

I also was not that impressed by the domain hosting component.  Basically, WordPress wants to be the NS host for the domain, meaning you point your domain registrar at wordpress.com's name servers.  That gives up a lot of the flexibility of putting the web site in one place while using subdomains and other services on the same domain.  WordPress does support using Google for email, and will happily point your MX records there.  Since I was moving all of my email to Google at the same time anyway, and am not doing a lot else with these domains, after some mumbling under my breath I went with it.

The move itself was very easy.  I was already using WordPress, so I exported the posts, pages, and comments from my old site and imported them into the new site.  The process of signing up and adding domain mapping, setting the theme, and all the other small things I needed to do was quite easy.  I’d be hard pressed to think of ways they could make signing up and getting going much easier.  I think just about anybody could have gone through the process smoothly, even if they were novices.  That part of wordpress.com, at least, is well done.

The investment is quite small, and if it doesn’t work out I can move on with very little loss.  My sites are predominantly personal with a small audience, so the risk of making changes is very low.  On the whole, I am fairly pleased with what I get from wordpress.com, and the cost is very reasonable even if you added in additional extras.  For me, giving up some flexibility and control was well worth the time savings.

Categories: blogging

Archiving a site using httrack

November 1, 2009

Recently someone asked me to help them archive a site.  They had lots of personal stories, pictures, scanned copies of kids' school work, and so on that they wanted to preserve, but not necessarily on an active web site.  Basically they wanted an electronic scrapbook they could keep for the future.  The site was using Drupal, and almost all of it was protected by passwords using a Drupal-style login form.  The site was also not using rewrites on the URLs, so pages looked like index.php?q=node/125 and index.php?q=logout.

My first thought was to just wget the site using the mirror and preserve cookies options.  The main problem with that approach is that one of the early links that wget followed was the Logout link on the main menu, so I’d get three or four pages into the site and then just get a bunch of “403 Forbidden” messages.  wget’s exclusion arguments didn’t help, because they aren’t usable on the query string, and that’s where the logout part of the link was located.

Fortunately, httrack did the trick.  At first I had decided not to use httrack because the help says that to work with sites that have logins you need to go through a process that involves setting your browser’s proxy settings, and I found the interface annoying.  Those complaints are not a huge deal, but I didn’t really want to bother.  As it worked out, I didn’t need to.

I got what I needed using the command line version of httrack, and I didn't use the proxy workaround at all.  I logged into the web site in Firefox, copied the session ID from the cookie in Firefox, and put it in the cookies.txt file for httrack to use.  The cookies.txt file is documented here: http://httrack.kauler.com/help/Cookies
It's the same layout as wget uses for the same file, so I was actually able to reuse the file wget had created when I tried it, and all I had to do was change the session ID.

The line in the cookies.txt file looked like this:

example.com        FALSE   /       FALSE   1999999999      PHPSESSID       1f85edbfc2db8e20af20489f7fb7b417

The tab-separated fields are the standard Netscape cookie format: domain, include-subdomains flag, path, secure flag, expiration time (a unix timestamp), cookie name, and cookie value.  Obviously the session ID for each session is going to be different.  Although I used the file I already had, it would be easy to create this file by hand.

Then I ran httrack, omitting some of the links, and it grabbed the site nicely.

httrack "http://example.com" -O "./example.com" -v '-*logout*' '-*edit*' '-*user*'

That tells httrack to fetch http://example.com, place the resulting files in ./example.com, be verbose in its output, and omit any URLs that include logout, edit, or user.  What this does better than wget is that it will omit any URL that matches the exclusions, even in the query string; wget only lets you exclude based on directory, domain, or file name.

Overall, I found httrack did a very good job.  The naming of the files that resulted was cleaner than wget produces, at least for my purposes.  My only complaint about httrack was that although there’s plenty of documentation (if you count the information on httrack.kauler.com), it was hard to find what I needed.

Categories: tools and resources