Setting styles in Google Apps Scripts

February 8, 2011

I recently wrote a post reviewing Google Apps Scripts. Buried in that post was a code snippet that showed my workaround for the lack of CSS or global styling in Google Apps Scripts.  The problem is that it is in the middle of my venting about some of this service’s limitations, and most people will likely overlook it.  So I’m editing that post and putting that piece of it here on its own.

Google Apps Scripts doesn’t allow you to work with CSS at all.  Instead, it provides a mechanism for setting inline styles as style attributes on your HTML elements.  This is done with the setStyleAttribute method, which is available on several of the classes.  The method takes two string arguments: a style name (“line-height”, “font-size”, and so on) and the value for that style setting (10px, 1.1em, 90%, ‘bold’, etc.).  These aren’t CSS styles, they are inline style attributes.  It’s been a while since I’ve worked with this attribute, and I may have forgotten some of what I once knew about using it.  I found that it didn’t work well to set a “font” attribute to “110% bold serif”; you need to set that with three separate setStyleAttribute calls.  You can chain these, which is still verbose but does make it somewhat easier.


var label = app.createLabel("Text for the label")
    .setStyleAttribute("font-size", "110%")
    .setStyleAttribute("font-weight", "bold");

That works fine, except that you end up with LOTS of setStyleAttribute calls throughout your script, and if you want to change something uniformly you’ll be copying and pasting all over the place.  That gets old fast.  So I wrote a little function as a workaround for this issue, and it seems to work reasonably well.

First, I created an object to hold my global styles:

var appStyles = {
  "tableheaders": {
    "font-weight": "bold",
    "font-size": "1.1em",
    "padding": "0px 5px 0px 5px",
    "text-decoration": "underline"
  } ,
  "listtable": {
    "border-top": "1px solid #CCC" ,
    "margin-top": "6px",
    "width": "99%"
  },
  "helptext": {
    "font-size": ".9em",
    "color": "#999"
  }
};

Actually, pasting that in I realized that you don’t need the quotes around the property names. My fingers have been typing too much JSON. It does work as it appears there, though, and it’s what I’ve tested, so I’ll leave it for now.

The next part is the function to apply the styles, uncreatively called applyStyles:

function applyStyles(element, style){
  if(!element.setStyleAttribute || !(style in appStyles)) return element;
  for(var attribute in appStyles[style]){
    element.setStyleAttribute(attribute, appStyles[style][attribute]);
  }
  return element;
}

Many of the classes in Google Apps Scripts support setStyleAttribute, but not all, so we check before we try to use it. Then we iterate over the attributes in the appStyles object and apply the styles that match the passed-in style name to the element. You can pass any element that supports setStyleAttribute to this function and it will be styled, and instead of styles copied all across your script you have one place to tweak the style definitions.
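Pulling the pieces together, here’s a runnable sketch of how this gets used. The mockLabel below just stands in for app.createLabel so the example works outside of Apps Scripts (real widgets chain the same way), and appStyles is trimmed to one entry:

```javascript
// Global style definitions, as in the post (trimmed to one style here).
var appStyles = {
  helptext: { "font-size": ".9em", "color": "#999" }
};

// applyStyles as defined above.
function applyStyles(element, style) {
  if (!element.setStyleAttribute || !(style in appStyles)) return element;
  for (var attribute in appStyles[style]) {
    element.setStyleAttribute(attribute, appStyles[style][attribute]);
  }
  return element;
}

// In Apps Scripts the element would come from app.createLabel(...);
// this mock just records the calls so the sketch runs anywhere.
function mockLabel(text) {
  return {
    text: text,
    styles: {},
    setStyleAttribute: function (name, value) {
      this.styles[name] = value;
      return this; // Apps Scripts widgets chain the same way
    }
  };
}

var label = applyStyles(mockLabel("Saving is automatic"), "helptext");
// label.styles now holds both helptext attributes
```

Anything without a setStyleAttribute method, or any unknown style name, just passes through untouched, which keeps the call sites simple.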

This isn’t an ideal solution, and if I was going to use it much I’d probably do more with it. However, it is a start at a way to turn unmanageable styles in Google Apps Scripts into something you can work with and maintain.

Categories: javascript

Google Apps Scripts

January 26, 2011

I was recently asked to review Google Apps Scripts as a potential platform for some simple business apps, tying spreadsheet data to Google sites.  The organization already uses Google Apps, and it seems like a short leap to using Google Apps scripting.  My conclusion from the review was to recommend looking for another platform for building apps, and integrating with other information in Google Apps in other ways.  It might be that Google Apps Scripts is just too young, and the feature set will grow to fulfill its promise.  At the moment, it’s just not there yet.

The first hurdle is that building apps in Google Apps Scripts is clunky, and doesn’t really feel like working with javascript.  Javascript is flexible and easy to work with, even if its quirks are at times hard to wrap your head around.  Working with Google Apps Scripts felt like going back a decade.  A large part of that is because the platform is built around GWT.  You don’t really work with HTML, CSS, or the DOM.  You can’t generate client side javascript or any kind of markup directly.  Everything has to pass through a series of overlaid panels with grids and widget objects that you put field objects, labels, and text into.  This works, and in the end you can make decent looking web pages out of it, but it’s very cumbersome.

There is no CSS that you can work with in Google Apps Scripts.  In my original version of this post I had my workaround here, but in the interest of letting people find it without having to read through the rest of this, I pulled it out into its own post.  The approach let me apply the whole set of styles to all “tableheaders” across the whole app, and if I wanted to change the styles I could change them in one place and have the change applied everywhere. That’s not particularly wonderful, but it sure saves a lot of setStyleAttribute calls. Without this approach, applying the four styles in my appStyles example above to the table headers would have taken four uses of setStyleAttribute for each column in each table that appeared in the app. This approach could be improved on significantly, but it solved my immediate headache.

What killed my ability to use Google Apps Scripts, though, was the lack of searches or queries for data in spreadsheets. When I started digging into this I assumed that Google would provide search capabilities, sort of like they do in the Spreadsheet data API. As far as I can tell this capability is completely missing, so far, from Google Apps Scripts. There’s a feature request for it, but it isn’t there yet.

This means that if you want to find something in a spreadsheet, you have to load the whole sheet (or range of cells) and iterate through the whole thing looking for the bit you actually need. This works, but it is slow. It would be fine if you had a few dozen rows, but it doesn’t take much data to get into a thousand or a few thousand records, and this isn’t an acceptable way to look up data.  I wasn’t expecting this to be an approach suited for hundreds of thousands of records or anything, but from my quick tests it’s too slow even at 1,000 rows to be practical.
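Here’s a sketch of what that lookup ends up looking like. The scan itself is plain javascript over the 2D array of cell values that a call like SpreadsheetApp.openById(id).getSheetByName(name).getDataRange().getValues() hands back; the findRows helper and the sample values are just illustrative:

```javascript
// In Apps Scripts, values would come from
//   SpreadsheetApp.openById(id).getSheetByName(name).getDataRange().getValues();
// which returns a 2D array of cell values like the sample below.
function findRows(values, column, wanted) {
  var matches = [];
  // No query API: every row has to be touched to find the ones you want.
  for (var i = 0; i < values.length; i++) {
    if (values[i][column] === wanted) matches.push(values[i]);
  }
  return matches;
}

var values = [
  ["id", "name", "status"],
  [1, "First report", "open"],
  [2, "Second report", "closed"],
  [3, "Third report", "open"]
];

findRows(values, 2, "open"); // the two "open" rows, found the slow way
```

With four rows this is instant; with a few thousand, plus the time to fetch the whole range in the first place, it stops being practical.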

On top of that, it isn’t really asynchronous as we expect things to be in a modern platform. If I was building the same type of thing with jQuery or something, and parts of it took more time than I really wanted to keep users waiting (which these days isn’t long), I’d slap out the basic HTML and fill in all the pieces as they became available. That would hide the slower data on a panel or tab or whatever that isn’t displayed on start up, allowing a delayed load. You can’t really do that with Google Apps Scripts, since you can’t write custom client side code beyond the relatively few event driven callbacks they provide. From what I can tell, you pretty much have to load all of your data at the start, including data you aren’t going to use until people start clicking. That loses you those seconds to get your data in order that might otherwise be available before the user looks behind the curtain. There isn’t any way to swap out content, other than simple things like text values. You can’t manipulate the DOM. That’s no way to build a web application.

Most of the problems I had using Google Apps Scripts were related to how young it is. The documentation is incomplete at best. If you want to know what most of the classes do, you have to dig into the GWT documentation. Most of Google’s code documentation is decent (I recently had occasion to work through their OpenId documentation as well, but that’s a different post for a different day), and I expect that before too long they’ll fix this documentation as well.

But if this platform is really going to be useful Google is going to have to take significant steps to embrace javascript much more than they are here. Obviously they can. They have other things based on javascript that are awesome. Unfortunately, this isn’t one of them. Yet.

Categories: development, javascript

couchdb bulk document transfer

July 17, 2010

I have a few test and development servers scattered around at various locations, some of which are there just because of convenience for the different people and locations I work with. I needed to copy some databases from one couchdb server to the other, but since the servers weren’t running the same version of couchdb I couldn’t just copy the files. The bulk document functionality almost gets me there, but it has extra junk besides just the bare documents. I also wanted to be able to remove the _rev tags, since in this case they are just an extra headache (I’m deleting the database and loading it fresh from the other site).

My solution was to write a little method in PHP to dump a database and write out the contents to a file. The approach I used was sparked by the specific project I was working on when I needed it, so it is written based on Zend Framework. I’m just using Zend_Http_Client for my CouchDB interaction, which works with CouchDB very nicely.

First, we need to get the data from CouchDB.


        $client = new Zend_Http_Client( );
        $resp = $client->setUri( $this->URI . '/' . $db . '/_all_docs?include_docs=true' )
                ->request( 'GET' );
        $body = $resp->getBody( );

        if( $resp->getStatus( ) != 200 ){
            echo '<div class="error">Status: ' . $resp->getStatus( ) . "<br>Did you spell the database name right?<br><pre>$body</pre></div>"; 
            exit;
        }

We’re using Zend_Http_Client to connect to CouchDB. We set the URI, which we have in our class constructor. I’m getting it from Zend_Config, and piecing together the login credentials if they are in the config, but that’s a different post. The $db is just the database name we’re dumping. If the status returned in the response isn’t 200, that almost always means (in this specific context) that we asked for a database that isn’t there ($db doesn’t exist in CouchDB), so since we can’t continue we show an error and exit.

So now we have a big glob of JSON in $body, but it isn’t formatted the way we need it to load, and we need a file to write it into. One side note here: I’m assuming the size of our database is fairly moderate. My use case was a couple thousand documents, and this approach worked well. You would need to handle it in pieces if you had more data than you could process at once.

So in the next part we convert the JSON to an array so we can iterate through it in PHP, and we drop the parts we don’t care about. We only need the part that’s in “rows.” CouchDB gives us extra information about the number of rows in the data and some other stuff we aren’t going to use here.

        $data = Zend_Json::decode( $body );
        $total_rows = $data['total_rows'];
        $data = $data['rows'];

I use Zend_Json::decode instead of just json_decode here because it converts to a nice array instead of to an object, and it’s just nicer to work with in this situation.

        $resource = fopen( $outpath, 'w' );

        $count = 0;
        $comma = "{\"docs\": [\n";
        foreach( $data as $key => $value ){
            if( !$preserve_rev ) unset( $value['doc']['_rev'] ); // we aren't going to use this to update, so we don't need the _rev
            $docs = $comma . Zend_Json::encode( $value['doc'] );
            fwrite( $resource, $docs );
            unset( $data[$key] );
            $comma = ",\n";
            $count++;
        }
        fwrite( $resource, "\n]}" );
        fclose( $resource );
        $filesize = filesize( $outpath );
        echo "<div>CouchDB reported $total_rows rows.  I wrote $count rows to $outpath, with a resulting file size of $filesize bytes.</div>";
    }

The first line here is opening a file so we can write to it. Make sure you check if the file exists or not and that you are safely handling the file name first.

We’re using $count and $total_rows (from that last section) to report on the results at the end.

We add the JSON opening to the front of our file (the first $comma setting does that), and iterate through the array, sticking each document from the data into the file, separating each document with a comma. We unset the documents from the array as we use them, just to tidy up as we go.

After we’ve gone through the whole array we close the JSON array and object, close the file, get the filesize to report, and output a summary of what we did.
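For anyone not on Zend Framework, the transform itself is small enough to sketch in plain javascript. The total_rows/rows[].doc shape is what _all_docs?include_docs=true actually returns; the function name is made up, and the file handling is left out:

```javascript
// Convert a CouchDB _all_docs?include_docs=true response body into a
// _bulk_docs payload, dropping each _rev unless asked to keep it.
function allDocsToBulk(body, preserveRev) {
  var data = JSON.parse(body);
  var docs = [];
  for (var i = 0; i < data.rows.length; i++) {
    var doc = data.rows[i].doc;
    if (!preserveRev) delete doc._rev; // fresh load, so _rev is just a headache
    docs.push(doc);
  }
  return JSON.stringify({ docs: docs });
}

// A miniature _all_docs response, shaped like CouchDB's real output.
var body = JSON.stringify({
  total_rows: 2,
  offset: 0,
  rows: [
    { id: "a", key: "a", value: { rev: "1-x" }, doc: { _id: "a", _rev: "1-x", n: 1 } },
    { id: "b", key: "b", value: { rev: "1-y" }, doc: { _id: "b", _rev: "1-y", n: 2 } }
  ]
});

allDocsToBulk(body); // '{"docs":[{"_id":"a","n":1},{"_id":"b","n":2}]}'
```

Same idea as the PHP above: keep only the docs, strip the _rev tags, and wrap the lot in a docs array ready to POST to _bulk_docs.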

This results in a file at $outpath that we can move to the destination server and just load in. Typically for my case it means I copy the file, delete and recreate the database, and push the file in as the new data. So far I’ve done that from the command line with curl, an example of why CouchDB is so easy to work with.

Categories: couchDB, Zend Framework

Simple form submit with jQuery

November 19, 2009

In building ajax driven apps, one of the things you have to do is capture form submits, send them to the server, and then handle the results.  In many applications the results are data, passed as JSON or XML or some other format that the remote service provides.  In other apps, we’re dealing with snippets of HTML that then get placed somewhere in the current page.   I want to be able to handle all forms on the pages without needing to know in advance how many forms there are, what they are called, what’s in them, or what to do with the results.  I haven’t found a good succinct description of how to do this, so here’s mine using jQuery.  This isn’t rocket science.  It’s just pulling together some of jQuery’s nice features.  There are other good ways to do this.

$(document).ready(function(){
    $("form").submit(function(){
        var thistarget = this.target;
        jQuery.ajax({
            data: $(this).serialize(),
            url: this.action,
            type: this.method,
            error: function() {
                $(thistarget).html("<span class='error'>Failed to submit form!</span>");
            },
            success: function(results) {
                $(thistarget).html(results);
            }
        });
        return false;
    });
});

This captures all submits on all forms on the page, uses jQuery’s ajax call to post the form’s fields to the URL in the form’s action attribute, and then places the results in the element identified by the form’s target attribute.   You could easily use jQuery.post instead of ajax, and all you would lose (at least among the options I included above) is the error message.  jQuery.ajax has a bunch of options, which are documented nicely here:
http://docs.jquery.com/Ajax/jQuery.ajax#options

The forms in the page look like normal forms, and don’t need any special attributes other than that we are abusing the target attribute a little to specify the target element for the results.

<form name='myform' id='myform' action='targetURL' target='target element'>
First Name: <input type='text' name='firstname' id='firstname'>
<input type='submit' name='submit' value='Submit'>
</form>
<div id='target element'></div>

If I submit this form, either using the Submit button or by hitting enter in the firstname field, the fields from the form (in this case just firstname) are posted to targetURL, and if the post works the results that I get back are inserted into the ‘target element’ div.  If the post fails for some reason (for example, if I get back a 404 error or something) then the “Failed to submit form!” message is inserted into the ‘target element’ div.

Note that if javascript isn’t available, the forms will still work.  We’ll just get the results in the named target window instead of having it inserted into the page.

Categories: javascript

My linux distribution odyssey (so far)

November 3, 2009

Last year I wrote a post about trying out Ubuntu, but then moving back to OpenSuse.   I was very happy with OpenSuse, and wouldn’t hesitate to recommend it to anybody.  However, when Ubuntu 8.10 was released I moved to it.  Some people just can’t make up their minds, I guess.  Then I moved to Ubuntu 9.04, and a couple of weeks ago I upgraded that to 9.10.  All of these worked great.

I used to tell people that the linux distribution they chose is a matter of taste, and there were several perfectly good choices, but I’m not sure I really meant it.  In my linux life, I’ve used Redhat, Gentoo, Ubuntu, Fedora, Debian, CentOS, Conectiva, and several small specialized distributions.   I ran KDE for years, and was clearly superior to all those Gnome users.  Now (since the release of KDE4) I run Gnome.  I somewhat expect that at some point in the future I’ll find myself back on KDE.  My longest chunk of time was on Redhat, before they came up with the whole Fedora thing, and for me that was clearly the winning choice.  During that time (and a while after) I wouldn’t touch anything with Debian in its lineage.  Then I got a job in which I inherited some Debian servers, and I was pretty miserable for a bit, but I got over it.  I have a long history of being a distro snob of one kind or another.  I have at some point or other fervently disliked some of the best software out there.

For now, I’m pretty happy with Ubuntu.  It works really well, and long gone are my days of hacking at some perl script or trying to figure out why code distributed with the distro would not compile on my box.  For some time now, stuff just works.  I still occasionally find a use for dusting off my perl skills, but not because I couldn’t find good tools from somewhere else or because basic components of the software I need don’t work.  In fact, it’s pretty rare not to be able to just run a search in Synaptic, install something, and be on my way when I need some new application.  My hacking time is spent solving the problem I’m actually trying to solve, instead of on getting to the point where I can start working on it.

Things have really come together in the last couple of years for Linux.  It’s amazingly good.  Things work amazingly well, on every piece of hardware I ever use.   My kids and my wife still use Windows.  It’s fine for games.  Ubuntu is easier to install.  It’s easier to maintain.  It’s easier to install and manage software on.  It’s easier to patch.  It’s even easier to use.  If I want to get stuff done, I never choose Windows.

And the message, I think, of my odyssey through all the different linux distributions is that it doesn’t matter.  I’ve used several linux distributions.  I’ve used KDE, Gnome, and Enlightenment (and probably something before KDE, but I don’t remember what).  The key to my computing world is choice.  Use what works for you for what you are doing now.

That’s not what life is about, but it is what computers are about.  They are tools.  Use what works.  Use what you like.

If you haven’t tried Ubuntu yet, you should.

Categories: linux, ubuntu

Clean Models

November 2, 2009

The flip side of what I said in my last post about Models in frameworks is that models shouldn’t know anything about the controller or view.  The data that the model is providing might eventually end up in JSON, or XML, or in a web page or HTML snippet, or in a CSV file.  None of that should matter to the model.

Generally models should not return error messages or “no data” messages that are intended for the eventual user, unless they are generic enough to be meaningful regardless of context.  It should always be obvious to the controller whether the call was successful, but the controller and view should handle how to display or send any messages.  Typically errors and other similar messages should be communicated through a central error or messaging handler.  The formatting, and even the content, of the messages will vary quite a bit depending on how the results will eventually be delivered.

The data might be destined for a javascript heavy table, or for a CSV or Excel file, or a PDF file, or for a clean display on a light HTML page, or to pre-fill a form on a user’s return visit.  All of these will have very different formatting requirements, both for the data and for any messages that might get displayed to the user.  The deliveries that download files might actually never download anything if there’s no data, and might display messages in some other way.

Not building clean models, either by requiring that the controller know too much about the model, or by embedding information about the eventual delivery of the information in the model, is a recipe for spending time rewriting later, or for ending up with lots of duplicate code.
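To make that concrete, here’s a small sketch (in javascript, with invented names, since the post isn’t tied to a language): the model returns data plus a machine-readable status, and each delivery formats the same result its own way.

```javascript
// The model returns data and a status flag - no HTML, no user-facing
// message, no knowledge of the eventual delivery format.
function listOrders(store, customerId) {
  var rows = store.filter(function (o) { return o.customer === customerId; });
  return { ok: true, data: rows };
}

// Each delivery decides how to present the same result.
function asCsv(result) {
  if (!result.ok || result.data.length === 0) return ""; // a download might just be empty
  return result.data.map(function (o) { return o.id + "," + o.total; }).join("\n");
}

function asHtml(result) {
  if (!result.ok || result.data.length === 0) return "<p>No orders found.</p>";
  return "<ul>" + result.data.map(function (o) {
    return "<li>Order " + o.id + ": " + o.total + "</li>";
  }).join("") + "</ul>";
}

var store = [
  { id: 1, customer: "c1", total: 10 },
  { id: 2, customer: "c2", total: 25 }
];

asCsv(listOrders(store, "c1"));  // "1,10"
asHtml(listOrders(store, "c9")); // "<p>No orders found.</p>"
```

Add a PDF or JSON delivery later and listOrders never changes; only a new formatter gets written.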

Categories: development

Models in frameworks

November 2, 2009

One of my biggest frustrations with most MVC frameworks is that their approach to models is generally broken.  Models should hide the data from the controllers.  A controller shouldn’t need to know anything about the data, including what kind of storage is used.  Most frameworks don’t use this approach.

Suppose, for example, that I have a model for Users.  The model should provide the ability to add a User, remove a User, and update a User.  We’ll also probably need some methods for listing and other ways of manipulating the user information.  However, the model should be written in such a way that if we start with the user information in a database, and later move it to a remote LDAP server, or some SOAP or REST service, or a text file, it won’t matter.  All of the controllers should be able to continue to call the same model methods that they were calling before, in the same way, and it shouldn’t matter to them where the data is coming from.  Knowing where the data is stored and how to get it and work with it is the model’s job.
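As a sketch (javascript, with hypothetical names): the controller only ever sees the model’s methods, and the storage behind them can be swapped without touching any controller.

```javascript
// The model owns the storage; controllers only call its methods.
function createUserModel(storage) {
  return {
    addUser: function (user) { return storage.save(user); },
    getUser: function (name) { return storage.find(name); }
  };
}

// Today: an in-memory store. Tomorrow: LDAP, a REST service, a text
// file - only this object changes, never the controllers.
function memoryStorage() {
  var users = {};
  return {
    save: function (user) { users[user.name] = user; return user; },
    find: function (name) { return users[name] || null; }
  };
}

// A controller written against the model works with any backend.
var users = createUserModel(memoryStorage());
users.addUser({ name: "ana", role: "admin" });
users.getUser("ana"); // the stored user
users.getUser("bob"); // null
```

Swapping backends means writing another object with save and find; every call site stays exactly as it was.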

This means that the model should not extend some database class.  The framework’s model structure should not assume that all models use databases.  In general, most discussions of MVC frameworks try to isolate the actual database work from the controllers, but they still require the controller to know too much about the data repository.  The effect is that you could probably move from a MySQL database to SQLite, but if you stop using a database entirely you are going to have some major work to do on your controllers that use that model.  To me, a major measure of success for any MVC approach is how much your controllers have to know about the data.  Less is better.

Zend Framework tries to accomplish this, but they basically do it by not giving you a start at any models at all.  Even if it were a very abstract class, it would be nice to have something to extend.  At least that would give people a place to start that’s better than extending one of the DB classes, which is what just about all of the tutorials and articles on ZF models recommend doing.  ZF’s approach of leaving models entirely out of the framework means most instructions tell people to do it wrong.

I’m not a purist.  I’m a pragmatist.  I don’t care a great deal about patterns for their own sake, in any sort of Platonic sense.  I just want to be able to be lazy when I find out later that indeed change is normal and I need to adapt an application to shifting requirements.  Because as predictable as the weather in Phoenix, just about the time that an inventory application is just about through a couple of iterations and really getting useful, they are going to want to integrate it with some remote POS system, and oh yeah can it get its user information from some place in the cloud rather than the local accounts we wanted to use last week.  I’d much rather say “Sure, that’ll be ready day after tomorrow” than say “DOH!”  That requires the controller to be ignorant about the data repository.

Categories: development, PHP