
Wednesday, July 24, 2013

A Guide to Writing Backbone Apps at Coursera

Preamble:

At Coursera, we made the choice to use the Backbone MVC framework for our frontends, and over the past year, we've evolved a set of best practices for how we use Backbone.

I wrote a guide for internal use that documents those best practices (much of it based on shorter blog posts here), and I've snapshotted it here on my blog to benefit other engineering teams using Backbone and to give potential Coursera engineers an idea of the current stack. This was snapshotted on July 24th, 2013, so please keep in mind that the Coursera frontend stack may change over time as the team figures out new and better ways to do things.

If you're interested in joining Coursera, check out the many job listings here. The frontend team is a really smart and fun bunch, and there are a lot of interesting technical and usability challenges in the future.

The Architecture

There are many different frontend architectures to choose from, and at Coursera, we have made the deliberate decision to opt for a very JavaScript-heavy, JavaScript-dependent approach to our frontend architecture:

We build up the entire DOM in JavaScript, loading the data via calls to RESTful JSON APIs, and handle state changes in the URL via the hash or HTML5 history API.

This approach has several advantages, at least as compared to a traditional data-rendered-into-HTML approach:

  • Usability: Our interfaces can easily be dynamic and real-time, enabling users to perform many interactions in a short period of time. This is particularly important for our administrative interfaces, where users want to be able to drag-and-drop, tick things on and off, and generally manipulate many little things that are present on one screen.
  • Developer Productivity: Since this architecture relies on the existence of APIs, it is easy for us to build new frontends for the same data, which encourages experimentation with new ways of viewing it. For example, after porting our forums to this architecture, I was able to create portable sidebar widgets based off the forums API in just a few hours.
  • Testability: The APIs and the frontends can both be tested separately and rigorously using the best suite of tools for the job.

It also has a few disadvantages:

  • Linkability: We have to go through a bit more work to make the JS-powered interfaces linkable, and previously simple things like internal anchors (page#section) are surprisingly difficult to implement.
  • Search/shareability: Since Facebook bots and search bots do not handle JS-rendered webpages as well, we have to go through more work to make our public pages indexable by them, which we've done through our Just-in-time renderer.
  • Testability: We have to write far more tests for our JS frontends, since the user can change state via sequences of interactions, and some bugs may not surface until a particular sequence occurs. We also now have state across URL routes when we use the HTML5 history API, and may have to test across multiple views.
  • Performance: We must be constantly monitoring our JavaScript to make sure we are not pushing the browser to do too much, as JavaScript can still be surprisingly slow at processing data and turning it into DOM.

However, given the usability benefits of the JS-rendered approach, we have elected to stick with it, and we will need to become experts in overcoming the disadvantages of the approach. At the same time, we can hope that the browsers and tools make those disadvantages slowly disappear, as this is an increasingly popular approach.

The APIs

We have APIs coming from Python/Django, PHP, and Scala/Play. We try to be consistent in the API design, and when possible, we opt for a RESTful JSON API.

For example, to retrieve information about a forum, we'd perform an HTTP GET to a RESTful URL and expect JSON to come back with an "id" attribute and other useful attributes.

Request:

HTTP GET /api/forums/1

Response:

{
    "id": 1,
    "parent_id": -1,
    "name": "Forums",
    "deleted": false,
   "created": "1369400797"
}

To create a new forum, we'd perform an HTTP POST to a RESTful URL with our JSON, and expect JSON to come back with the "id" filled in:

Request:

HTTP POST /api/forums/
{
    "parent_id": -1,
    "name": "Forums"
}

Response:

{
    "id": 1,
    "parent_id": -1,
    "name": "Forums",
    "deleted": false,
   "created": "1369400797"
}

To update an existing forum, we could do an HTTP PUT with the full JSON of the new properties, but when possible, we prefer to do an HTTP PATCH, only sending in the changed properties. That is a safer approach and means we are less likely to change attributes that we did not intend to change, and also makes our interfaces more usable by multiple people at once.

Request:

HTTP PATCH /api/forums/1
{
    "name": "Master Forums"
}

Response:

{
    "id": 1,
    "parent_id": -1,
    "name": "Master Forums",
    "deleted": false,
   "created": "1369400797"
}
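
For what it's worth, stock Backbone (1.0 and later) can issue this kind of PATCH from the client by passing the patch option to save; our api.js wrapper emulates PATCH where the browser or server needs help:

// Sends an HTTP PATCH containing only the attributes passed in.
forum.save({name: 'Master Forums'}, {patch: true});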

To delete a forum, we could do an HTTP DELETE, but we prefer instead to set a deleted flag on the object, and make sure that we respect the flag in all of our APIs. We often have users that accidentally delete things, and it is much easier to restore if the information is still in the database.

Request:

HTTP PATCH /api/forums/1
{
    "deleted": true
}

Response:

{
    "id": 1,
    "parent_id": -1,
    "name": "Master Forums",
    "deleted": true,
   "created": "1369400797"
}

If we are retrieving many resources, we may want a paginated API, to avoid sending too much information down to the user. Here's what that might look like:

Request:

HTTP GET /api/forum/search?start_page=1&page_size=20

Response:

{"start_page": 1,
  "page_size": 20,
  "total_pages": 40,
  "total_results": 800,
  "posts":  [ ... ]
}
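
On the client, a Backbone collection can consume an envelope like this by unwrapping it in parse. Here's a minimal sketch (the collection name and how the metadata gets used are assumptions, not our actual code):

// Minimal sketch: unwrap the pagination envelope in parse.
var Posts = Backbone.Collection.extend({
  url: '/api/forum/search',

  parse: function(response) {
    // stash the pagination metadata so a view can render pager controls
    this.totalPages = response.total_pages;
    this.totalResults = response.total_results;
    return response.posts;  // the array that becomes the models
  }
});

// fetch the first page of 20 posts
new Posts().fetch({data: {start_page: 1, page_size: 20}});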

The JavaScript

JavaScript is a powerful language, but it can easily become a jumbled mess of global variables and files that are thousands of lines long. To keep our JavaScript sane, reusable, and modularized, we chose to use an MVC framework. There are approximately a million* MVC frameworks to choose from, but we chose Backbone.JS as it has a large community of knowledge built up around it and it is lightweight enough to be used in many different ways.

Backbone

Backbone provides developers with a means of separating presentation and data, by defining Models and Collections for the data, Views for the presentation, and triggering a rich set of events for communication between the models and views. It also provides an optional Router object, which can be used to create a single page web app that triggers particular views based on the current URL route. For a general introduction to Backbone, see these slides.

There is a lot that Backbone does not provide, however, and it's up to the app developer to figure out what else the app needs, and how much of that will come from open-source libraries versus be written in-house. That's good because Backbone can lend itself to many different sorts of apps, with the right combination of add-ons, but it's bad because it takes longer to find those add-ons and get them working happily together. As a company, it is in our best interest to converge on a recommended set of add-ons and best practices, so that our code is more consistent across the codebase. At the same time, it's also in our best interest to continually challenge our best practices and make sure that we are using the right tool for the job. If we discover a particular add-on is too buggy or slow, we should phase it out of the codebase and document the reasons why.

There is a larger question, of course: Is Backbone the right framework for us, given how many new frameworks have come out recently that may entice us with promises of speed and flexibility? That is not a question that I have an answer for, but I do think that one can spend forever trying out new frameworks to find the perfect one, and time might be better spent building up best practices around a single framework. However, there may be a time at which we become sufficiently convinced that Backbone is no longer working for our codebase and it is worth the cognitive effort and engineering resources to invest in a new framework.

Here's an exploration of the add-ons and best practices that we use in our Backbone stack.

Backbone Models

A basic model might look like this:

define([
  'underscore',
  'backbone',
  "pages/forum/app",
  "js/lib/backbone.api"
],function(_, Backbone, Coursera, BackboneModelAPI) {

  var model = Backbone.Model.extend({
    api: Coursera.api,
    url: 'user/information'
  });

  _.extend(model.prototype, BackboneModelAPI);

  return model;
});

We start off by declaring the JS dependencies for the model:

  • underscore: This is a collection of generic utility functions for arrays, objects, and functions, and it's common to find yourself using them, so most models and views will include it.
  • backbone: This is necessary for extending Backbone.Model
  • pages/forum/app: Every model will depend on an "app.js", which defines a base URL for API calls and a few other details. It adds objects to the Coursera singleton variable, like Coursera.api, which is used by the Model.
  • js/lib/backbone.api: This is a Backbone-specific wrapper for api.js that overrides the sync method and adds create/update/read/delete methods. The api.js library is an AJAX API wrapper that takes care of emulating patch requests, triggering events, showing AJAX loading/loaded messages via asyncMessages.js, and creating CSRF tokens in the client.

Then we define an extension of the Backbone.Model object with api and url options that help Backbone figure out where and how to pull the data for the model, and it mixes in the BackboneModelAPI prototype at the end of the file.
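
As a rough illustration of the idea (this is not the actual js/lib/backbone.api code, and the method names on the api wrapper are assumptions), such a mixin boils down to overriding sync so that all CRUD calls go through the shared api object:

// Rough sketch only -- not the real BackboneModelAPI. The point is that
// sync is overridden so create/read/update/patch/delete all route through
// the shared Coursera.api wrapper instead of Backbone's default jQuery.ajax.
var BackboneModelAPI = {
  sync: function(method, model, options) {
    options = options || {};
    // 'method' is one of: create, read, update, patch, delete.
    // We assume here that the api wrapper exposes a matching method for each verb.
    return model.api[method](_.result(model, 'url'), {
      data: options.attrs || model.toJSON(),
      success: options.success,
      error: options.error
    });
  }
};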

Backbone Models: Relational Models

Out of the box, Backbone will take JSON from a RESTful API and automatically turn it into a Model or a Collection of Models. However, we have many APIs that return JSON that really represents multiple models (from multiple tables in our MySQL database), like courses with universities:

[{"name": "Game Theory",
 "id": 2,
 "universities": [{"name": "Stanford"}, {"name": "UBC"}
]

We quickly realized we needed a way to model that on the frontend, if we wanted to be able to use model-specific functionality on the nested models (which we often do).

Backbone-relational is an external library that makes it easier to deal with turning JSON into models/collections with sub collections inside of them, by specifying the relations like so:

var Course = Backbone.RelationalModel.extend({
  relations: [{
    type: Backbone.HasMany,
    key: 'universities',
    relatedModel: University,
    collectionType: Universities
  }]
});

We started using that for many of our Backbone apps, but we've had some performance and caching issues with it, so we've started stripping it out of our model-heavy apps and manually doing the conversion into nested models.

For example, here's how the Topic model turns a nested courses array into a Courses collection:

  var Topic = Backbone.Model.extend({
    defaults: {},

    idAttribute: 'short_name',

    initialize: function() {
      this.bind('change', this.updateComputed, this);
      this.updateComputed();
    },

    updateComputed: function() {
      var self = this;
      if (!this.get('courses') || !(this.get('courses') instanceof Courses)) {
        this.set('courses', new Courses(this.get('courses')), {silent: true});
        this.get('courses').each(function(course) {
          if (!course.get('topic') || !(course.get('topic') instanceof Topic)) {
            course.set('topic', self);
          }
        });
      }
    }
  });

For a trickier example, here's how the Course model sets a nested Topic model. It has to require the Topic file dynamically, to avoid a cyclic dependency in the initial requires, which would wreak all sorts of havoc:

  var course = Backbone.Model.extend({
    defaults: {},

    initialize: function() {
      this.bind('change', this.updateComputed, this);
      this.updateComputed();
    },

    updateComputed: function() {
      // We must require it here due to Topic requiring Courses
      var Topic = require("js/models/topic");
      if (this.get('topic') && !(this.get('topic') instanceof Topic)) {
        this.set('topic', new Topic(this.get('topic')), {silent: true});
      }
    }
  });

We could also look into using Backbone.nested, which seems like a more lightweight library than Backbone.Relational and may have fewer performance issues.

Backbone Views

Here's what a basic Backbone view might look like:

define([
  "jquery",
  "underscore",
  "backbone",
  "js/core/coursera",
  "pages/site-admin/views/NoteView.html"
  ],
function($, _, Backbone, Coursera, template) {
  var view = Backbone.View.extend({
    render: function() {
      var field = this.options.field;
      this.$el.html(template({
        config: Coursera.config,
        field: field
      }));
      return this;
    }
  });

  return view;
});

We start off by declaring the JS dependencies for the view:

  • jquery: We often use jQuery in our views for DOM manipulation, so we almost always include it.
  • underscore: Once again, underscore's utility functions are useful in views as well as models (in particular, debounce and throttle are great for improving performance of repeatedly called functions.)
  • backbone: We must include Backbone so that we can extend Backbone.View.
  • js/core/coursera: We include this so that we have a handle on the Coursera singleton variable, which contains useful information like "config" that includes the base URL of assets, which we often need in templates.
  • pages/site-admin/views/NoteView.html: This is a particular Jade template that's been auto-compiled into an *.html.js file, and we include it so we can render the template to the DOM. We try to keep all of our HTML and text in templates, out of our view JS.

Then we create the view and define the render function, which passes in Coursera.config and a view configuration option into a template, and renders that template into the DOM.

Backbone Views: Templating

Backbone requires Underscore as a dependency, and since Underscore includes a basic templating library, that's the one you'll see in the Backbone docs. However, we wanted a bit more out of our templating library.
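
For reference, Underscore's built-in templating is just string interpolation; you compile a template string and then call it with data:

// Underscore's built-in templating: compile a string, then render it with data.
var compiled = _.template('<h1><%= title %></h1>');
compiled({title: 'Game Theory'});  // => "<h1>Game Theory</h1>"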

Jade is a whitespace-significant, bracket-less HTML templating library. It's clean to look at because of the lack of brackets and the enforced indenting (like Python and Stylus), but one of its best features is that it auto-closes HTML tags. We've dealt with too many strange bugs from un-closed tags, and it's one more thing we don't have to worry about when using Jade. Here's an example:

div
    h1 #{book.get('title')}
    p
    each author in book.get('authors')
        a(href=author.get('url')) #{author.get('name')}
    if book.get('published')
        a.btn.btn-large(href="http://blog.pamelafox.org/buy") Buy now!

We could also consider using Handlebars, Mustache, or many other options.

Backbone Views: Referencing DOM

Inside a view, we find ourselves referencing DOM nodes from the templates repeatedly, to set up events, read off values, or make small manipulations. For example, here's what a view might look like:

var ReporterView = Backbone.View.extend({
  render: function() {
    this.$el.html(ReporterTemplate());
  },
  events: {
     'change .coursera-reporter-input': 'onInputChange',
     'click .coursera-reporter-submit': 'onSubmitClick'
  },
  onInputChange: function() {
    this.$('.coursera-reporter-submit').attr('disabled', null);
  },
  onSubmitClick: function() {
    this.model.set('title', this.$('.coursera-reporter-input').val());
    this.model.save();
  }
});

There are a few non-optimal aspects of the way that we reference DOM there:

  • We are repeating those class names in multiple places. That means that changing the class name means changing it in many places - not so DRY!
  • We are using CSS class names for events and manipulation. That means our designers can't refactor CSS safely without affecting functionality, and it also means that we must come up with very long, overly explicit class names to avoid clashing with other CSS names, since we bundle our CSS together.

To avoid repeating the class names, we can store them in a constant that is accessible anywhere in the view, and only access them via that constant. For example:

var ReporterView = Backbone.View.extend({
  dom: {
     SUBMIT_BUTTON: '.coursera-reporter-submit',
     INPUT_FIELD:   '.coursera-reporter-input'
  },
  render: function() {
    this.$el.html(ReporterTemplate());
  },
  events: function() {
    var events = {};
    events['change ' + this.dom.INPUT_FIELD]    = 'onInputChange';
    events['click ' +  this.dom.SUBMIT_BUTTON]  = 'onSubmitClick';
    return events;
  },
  onInputChange: function() {
    this.$(this.dom.SUBMIT_BUTTON).attr('disabled', null);
  },
  onSubmitClick: function() {
    this.model.set('title', this.$(this.dom.INPUT_FIELD).val());
    this.model.save();
  }
});

As a bonus, this technique gives us easier-to-maintain testing code:

it('enables the submit button on change', function() {
  chai.expect(view.$(view.dom.SUBMIT_BUTTON).attr('disabled'))
    .to.be.equal('disabled');
  view.$(view.dom.INPUT_FIELD).trigger('change');
  chai.expect(view.$(view.dom.SUBMIT_BUTTON).attr('disabled'))
    .to.be.equal(undefined);
});

As for class names themselves, we can avoid using them in JS at all by selecting on data attributes instead, perhaps prefixed with js- to indicate that they are JS hooks. We would still have CSS class names in the HTML templates, but only for styling reasons.

So then our dom map would look something like:

var ReporterView = Backbone.View.extend({
  dom: {
     SUBMIT_BUTTON: '[data-js-submit-button]',
     INPUT_FIELD:   '[data-js-input-field]'
  },
...
});

Note that selecting via data attributes has been shown to be less performant than selecting via classes, but for the vast majority of our views, that performance difference is insignificant.
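
For completeness, the corresponding Jade template might mark up the elements like this (illustrative only, not from our actual codebase): style hooks stay as classes, JS hooks become data attributes.

button.btn.btn-primary(data-js-submit-button="") Submit report
input(type="text", data-js-input-field="")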

Backbone Views: Data Binding

Backbone makes it easy for you to find out when attributes on your Model have changed, via the "change" event, and to query for all changed attributes since the last save via the changedAttributes method, but it does not officially offer any data ⟺ dom binding. If you are building an app where the user can change the data after it's been rendered, then you will find yourself wanting some sort of data binding to re-render that data when appropriate. We have many parts of Coursera where we need very little data-binding, like our course dashboard and course description pages, but we have other parts which are all data-binding, all-the-time, like our discussion forums and all of our admin editing interfaces.

Backbone.stickit is a lightweight data-binding library that we've started to use for a few of our admin interfaces. Here's a simple example from their docs:

Backbone.View.extend({
  bindings: {
    '#title': 'title',
    '#author': 'authorName'
  },
  render: function() {
    this.$el.html('<div id="title"/><input id="author">');
    this.stickit();
  }
});

We still do custom data-binding for many of our views (using the "change" event, changedAttributes(), and partial re-rendering), and I like that because it gives me the most control to decide exactly how a view should change, and I don't have to fight against a binding library's assumptions.
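
A minimal sketch of that hand-rolled approach (illustrative only; the view, template, and data attribute names are made up): listen for specific change events and re-render only the affected piece of DOM.

var ThreadTitleView = Backbone.View.extend({
  initialize: function() {
    // re-render just the title when it changes, instead of the whole view
    this.listenTo(this.model, 'change:title', this.renderTitle);
  },

  render: function() {
    this.$el.html(template(this.model.toJSON()));  // template: a compiled Jade template
    return this;
  },

  renderTitle: function() {
    this.$('[data-js-thread-title]').text(this.model.get('title'));
  }
});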

We could also consider using: Knockback.

Maintaining State: Single-Page-Apps vs. Widgets

After we've created a view for our frontend, we still have big decisions to make:

  • How will users get to that view?
  • What state of the view will be kept in the URL, i.e., what can the user press back on and what can they bookmark?
  • Will our view be used in multiple parts of the site or just one?

In our codebase, we have two main approaches to those questions: "single page apps" and "widgets".

Single-Page-Apps

Besides being the buzzword du jour, a single-page-app ("SPA") is what Backbone was originally designed for, via its Backbone.Router object. A SPA defines a set of routes, and each route is mapped to a function that renders a particular view into a part of the page. Backbone.history then takes care of figuring out which route is referred to by the current URL, and calling that function. It also takes care of changing the URL using the HTML5 History API (which makes it appear like a normal URL change) or window.location.hash in older browsers.

For example, we could have this routes file:

define([
  "jquery",
  "backbone",
  "pages/triage/app",
  "pages/triage/views/MainView"
],
function($, Backbone, Coursera, MainView) {

  var routes = {};
  var triageurl   = Coursera.config.dir.home.replace(/^\//, "triage");

  routes[triageurl + '/items'] = function() {
    new MainView({el: $('.coursera-body')});
  };

  Coursera.router.addRoutes(routes);
 
  $(document).ready(function() {
      Backbone.history.start({pushState: true});
  });
});

After declaring its dependencies, it defines a mapping of routes, adds those to our global Coursera.router (an extension of Backbone.Router) and then kicks off Backbone.history.start() on page load.

SPAs: Syncing Users

More typically, for our logged-in apps, we will attempt to log in the user before calling the routes, and our document.ready callback will look like this:

    (new User())
      .sync(function(err) {
        Coursera.user = this;
        if (!Backbone.history.start({
          pushState: true
        })) {
          Coursera.router.trigger("error", 404);
        }
      });

SPAs: Regions

Backbone lets you create views and render views into arbitrary parts of your DOM, but many developers soon run into the desire for standard "regions" or "layouts". We want to specify different parts of the page, and only swap out the views in those parts across routes - like the header, footer, and main area. That's a better user experience, since there's no unnecessary refreshing of unchanging DOM.

For that, we use origami.js, a custom library that lets us create regions associated with views, and then in a route, we'll specify which region we want to replace with a particular view file, plus additional options to pass to that view. In the view, we can bind to region events like "view:merged" or "view:appended" and take appropriate actions.

In our SPAs, we always render into the regions instead, so our routes code looks more like this. It is a bit of an unwieldy syntax, but it gets the job done:

routes[triageurl + '/items/:id'] = function(id) {
    Coursera.region.open({
      "pages/home/template/page": {
        regions: {
          body: {
            "pages/triage/views/MainView": {
              id: "MainView",
              initialize: {
                openItemId: id
              }
            }
          }
        }
      }
    });
  };

We could also consider using: Marionette.js or Chaplin.

SPAs: Dirty Models

In traditional web apps, it's common practice to warn a user before leaving a page that they have unsaved data, using the window.onbeforeunload event. However, we no longer have that event in Backbone SPAs, since what looks like a window unload is actually just a region swap in JS. So, we built a mechanism into origami.js that inspects a view for a "dirty model" before swapping a view, and it throws up a modal alert if it detects one.

To utilize this, a view needs to specify a hasUnsavedModel function and return true or false from that:

var view = Backbone.View.extend({
    // ...
    hasUnsavedModel: function() {
       return !this.$saveButton.is(':disabled');
    }
});

SPAs: Internal Links

In traditional web apps, it is easy to link to a part of a page using an internal anchor, like /terms#privacy. However, in a SPA, the hash cannot be used for internal anchors, since it is used as the fallback technology for the main URL in some browsers, and the URL would actually be /#terms#privacy. We have experimented with various alternative approaches to internal links, and the current favorite approach is to use a URL like /terms/privacy, define a route that understands that URL, pass the "section" into the view, and use JS to jump to that part of the view, post-rendering. For example:

In the routes file:

  routes[home + "about/terms/:section"] = function(section) {
    Coursera.region.open({
      "pages/home/template/page": {
        regions: {
          body: {
            "pages/home/about/tosBody": {
              initialize: {section: section}
            }
          }
        }
      }
    });
  };

In the view file:

var tosBody = body.extend({
    initialize: function() {
      var that = this;
      document.title = "Terms of Service | Coursera";

      that.bind("view:merged", function(options) {
        if(options && options.section)
          util.scrollToInternalLink(that.$el, options.section);
        else
          window.scrollTo(0,0);
      });
    },
    // ...
});

In the Jade template:

h2(data-section="privacy") Privacy Policy
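
A rough sketch of what that scrolling helper might do (this is not the actual util.js implementation), assuming sections are tagged with data-section as above:

// Rough sketch only: find the element tagged with the given data-section
// value inside the view's element and scroll the page to it.
function scrollToInternalLink($el, section) {
  var $target = $el.find('[data-section="' + section + '"]');
  if ($target.length) {
    $('html, body').scrollTop($target.offset().top);
  }
}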

Widgets

In some cases, we do not necessarily want our Backbone view to take full control over the URL, like when we want to easily have arbitrary, multiple Backbone views on the same page. We take that approach in our class platform, because that will ultimately make it easier for professors who want to compose views to their own liking (e.g., if they'd like to mix a forum thread and a wiki view on the same page, that should be easy for them).

To create a widget, we use a declarative HTML syntax, specifying data attributes that define the widget type and additional attributes to customize that instance of the widget:

<div data-coursera-reporter-widget
    data-coursera-reporter-title=""
    data-coursera-reporter-url="">
Just one moment while we load up our reporter wizard...
</div>

Then, we create a widgets.js file that will be included on that page, and knows how to turn DOM elements into Backbone views. Typically that file would know about multiple widgets, but we show one here to save space:

define([
  "jquery",
  "underscore",
  "backbone",
  'pages/forum/app',
  'pages/forum/views/ReporterView'
],
function($, _, Backbone, Coursera, ReporterView) {
  
  $(document).ready(function() {

    $('[data-coursera-reporter-widget]').each(function() {
      var title = $(this).attr('data-coursera-reporter-title');
      var url = $(this).attr('data-coursera-reporter-url');
      new ReporterView({el: $(this)[0], itemTitle: title, itemUrl: url}).render();
    });

  });

});

Widgets: Maintaining State

We still want to maintain state within those views and support the back button, however, without changing the main URL of the page.

jQuery BBQ is an external non-Backbone specific library for maintaining history in the hash, and as it turns out, it works pretty well with Backbone. You can read my blog post on it for a detailed explanation.
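
A minimal sketch of the pattern (the view and its methods are illustrative, not our actual code): push widget state into the hash with $.bbq.pushState, and re-render from the hash when it changes.

var ThreadListView = Backbone.View.extend({
  initialize: function() {
    // re-render whenever the hash changes (including back/forward)
    $(window).bind('hashchange', _.bind(this.renderFromHash, this));
    this.renderFromHash();
  },

  renderFromHash: function() {
    var page = parseInt($.bbq.getState('page'), 10) || 1;
    this.renderPage(page);
  },

  goToPage: function(page) {
    // merges {page: page} into the current hash, creating a history entry
    $.bbq.pushState({page: page});
  },

  renderPage: function(page) { /* fetch and render that page of threads */ }
});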

We could also consider using: Backbone.Widget.

Testing Architecture

First, let it be said: testing is important. We are building a complex product for many users that will pass through many engineers' hands, and the only way we can have a reasonable level of confidence in making changes to old code is if there are tests for it. We will still encounter bugs and users will still use the product in ways that we did not expect, but we can hope to avoid some of the more obvious bugs via our tests, and we can have a mechanism in place to test regressions. Traditionally, the frontend has been the least tested part of a webapp, since it was historically the "dumb" part of the stack, but now that we are putting so much logic and interactivity into our frontend, it needs to be just as thoroughly tested as the backend.

There are various levels of testing that we could do on our frontends: Unit testing, integration testing, visual regression testing, and QA (manual) testing. Of those, we currently only do unit testing and QA testing, but it's useful to keep the others in mind.

Unit Testing

When we call a function with particular parameters, does it do what we expect? When we instantiate a class with given options, do its methods do what we think they will? There are many popular JS unit testing frameworks now, like Jasmine, QUnit, and Mocha.

We do a form of unit testing on our Backbone models and views, using a suite of testing technologies:

  • Mocha: An open-source test runner library that gives you a way to define suites of tests with setup and teardown functions, and then run them via the command-line or browser. It also gives you a way to asynchronously signal a test completion. For example:
    
    describe('tests for the reporter library', function() {
      beforeEach(function() {
        // do some setup code
      });
      afterEach(function() {
        // do some cleanup code
      });
      it('renders the reporter template properly', function() {
        // test stuff
      });
      it('responds to the ajax request correctly', function(done) {
        // in some callback, call:
        done();
      });
    });
  • Chai: An open-source test assertion library that provides convenient functions for checking the state of a variable, using a surprisingly readable syntax. For example:
    
      chai.expect(2+2).to.be.equal(4);
      chai.expect(2+2).to.be.greaterThan(3);
    
  • JSDom: An open-source library that creates a fake DOM, including fake events. This enables us to test our views without actually opening a browser, which means that we can run quite a few tests in a small amount of time. For example, we can check that clicking changes some DOM:
    
          var view = new ReporterView().render();
          view.$el.find('input[value=quiz-wronggrade]').click();

          var $tips = view.$el.find('[data-problem=quiz-wronggrade]');
          chai.expect($tips.is(':visible'))
            .to.be.equal(true);
          chai.expect($tips.find('h5').eq(0).text())
            .to.be.equal('Tips');
    
  • SinonJS: An open-source library for creating stubs, spies, and mocks. We use it the most often for mocking out our server calls with sample data that we store with the tests, like so:
    
        var forumThreadsJSON = JSON.parse(fs.readFileSync(path.join(__filename, '../../data/forum.threads.firstposted.json')));
        server = sinon.fakeServer.create();
        server.respondWith("GET", getPath('/api/forum/forums/0/threads?sort=firstposted&page=1'),
          [200, {"Content-Type": "application/json"}, JSON.stringify(forumThreadsJSON)]);
        // We call this after we expect the AJAX request to have started
        server.respond();
    

    We can also use it for stubbing out functionality that does not work in JSDom, like functions involving window properties, or functionality that comes from 3rd party APIs:

    
          var util = browser.require('js/lib/util');
          sinon.stub(util, 'changeUrlParam', function(url, name, value) { return url + value;});
          var BadgevilleUtil = browser.require('js/lib/badgeville');
          sinon.stub(BadgevilleUtil, 'isEnabled', function() { return true;});
    

    Or we can use it to spy on methods, if we just want to check how often they're called. Sometimes this means making an anonymous function into a view method, for easier spy-ability:

    
        sinon.spy(view, 'redirectToThread');
        // do some stuff that causes the function to be called
        chai.expect(view.redirectToThread.calledOnce)
          .to.be.equal(true);
        view.redirectToThread.restore();
    

Besides those testing-specific libraries, we also use NodeJS to execute the tests, along with various Node modules:

  • require: Similar to how we use this in our Backbone models and views to declare dependencies, we use require in the tests to bring in whatever libraries we're testing.
  • path: A library that helps construct paths on the file system.
  • fs: A library that helps us read our test files.

Let's see what all of that looks like together in one test suite. These are a subset of the tests for our various about pages. The first test is a very simple one, for a basically interaction-less, AJAX-less page. The second test is for a page that does an AJAX call:


describe('about pages', function() {
  var chai = require('chai');
  var path = require('path');
  var env  = require(path.join(testDir, 'lib', 'environment'));
  var fs   = require('fs');

  var Coursera;
  var browser;
  var sinon;
  var server;
  var _;

  beforeEach(function() {
    browser = env.browser(staticDir);
    Coursera  = browser.require('pages/home/app');
    sinon = browser.require('js/lib/sinon');
    _ = browser.require('underscore');
  });

  describe('aboutBody', function() {

    it('about page content', function() {
      var aboutBody = browser.require('pages/home/about/aboutBody');
      var body      = new aboutBody();
      var view      = body.render();

      chai.expect(document.title).to.be.equal('About Us | Coursera');
      chai.expect(view.$el.find('p').size()).to.be.equal(6);
      chai.expect(view.$el.find('h2').size()).to.be.equal(3);
    });
  });


  describe('jobsBody and jobBody', function(){

    var jobs     = fs.readFileSync(path.join(__filename, '../../data/about/jobs.json'), 'utf-8');
    var jobsJSON = JSON.parse(jobs);

    beforeEach(function() {
      server = sinon.fakeServer.create();
      server.respondWith("GET", Coursera.config.url.api + "common/jobvite.xml", 
        [200, {"Content-Type":"application/json"}, jobs]);
    });

    it('job page content', function(done) {
      var jobBody = browser.require('pages/home/about/jobBody');
      var view      = new jobBody({jobId: jobsJSON[0].id});

      var renderJob = sinon.stub(view, 'renderJob', function() {
        renderJob.restore();
        view.renderJob.apply(view, arguments);
        chai.expect(view.$('.coursera-about-body h2').text())
          .to.be.equal(jobsJSON[0].title);
        done();
      });

      view.render();
      chai.expect(document.title).to.be.equal('Jobs | Coursera');
      server.respond();
    });

  });

});

Integration testing

Can a user go through the entire flow of sign up, enroll, watch a lecture, and take a quiz? This type of testing can be done via Selenium WebDriver, which opens up a remote-controlled browser on a virtual machine, executes commands, and checks expected DOM state. The same test can be run on multiple browsers, to make sure no regressions are introduced cross-browser. These tests can be slow to run, since they start up an entire browser, so it is common to use cloud services like SauceLabs to distribute tests across many servers and run them in parallel on multiple browsers.

There are client libraries for the Selenium WebDriver written in several languages, the most supported being Java and Python. For example, here is a test for our login flow that enters the user credentials and checks the expected DOM:


from selenium.webdriver.common.by import By
import BaseSitePage

class SigninPage(BaseSitePage.BaseSitePage):
    def __init__(self, driver, waiter):
        super(SigninPage, self).__init__(driver, waiter)
        self._verify_page()

    def valid_login(self, email, password):
        self.enter_text('#signin-email', email)
        self.enter_text('#signin-password', password)
        self.click('.coursera-signin-button')
        self.wait_for(lambda: \
                self.is_title_equal('Your Courses | Coursera') or \
                self.is_title_equal('Coursera'))

We do not currently run our Selenium tests, as they are slow and fragile, and we have not had the engineering resources to put time into making them more stable and easier to develop locally. We may outsource the writing and maintenance of these tests to our QA team one day, or hire a testing engineer who will improve them, or both.

Visual regression testing

If we took a screenshot of every part of the site before and after a change, do they line up? If there's a difference, is it on purpose, or should we be concerned? This would be most useful for checking the effects of CSS changes, which can range from subtle to fatal.

There are few apps doing this sort of testing, but there's a growing recognition of its utility and thus, we're seeing more libraries come out of the woodwork for it. Here's an example using Needle with Selenium:


from needle.cases import NeedleTestCase

class BBCNewsTest(NeedleTestCase):
    def test_masthead(self):
        self.driver.get('http://www.bbc.co.uk/news/')
        self.assertScreenshot('#blq-mast', 'bbc-masthead')

There's also Perceptual Diffs, PhantomCSS, CasperJS, and SlimerJS. For a more manual approach, there's the Firefox screenshot command with Kaleidoscope. Finally, there's dpxdt (pronounced depicted).

We do not do visual regression testing at this time, due to lack of resources, but I do think it would be a good addition in our testing toolbelt, and would catch issues that no other testing layers would find.

QA (manual) testing

If we ask a QA team to try a series of steps in multiple browsers, will they see what we expect? This testing is the slowest and least automatable, but it can be great for finding subtle usability bugs, accessibility issues, and cross-browser weirdness.

Typically, when we have a new feature and we've completed the frontend per whatever we've imagined, we'll create a worksheet in our QA testing spreadsheet that gives an overall description of the feature, a staging server to test it on, and then a series of pages or sequences of interactions to try. We'll also specify what browsers to test in (or "our usual" - Chrome, FF, IE, Safari, iPad), and anything in particular to look out for. QA takes about a night to complete most feature tests, and depending on the feedback, we can put a feature through multiple QA rounds.

Additional Reading

The following slides and talks may be useful as a supplement to this material (and some of them served as a basis for it):

Saturday, December 15, 2012

A Tale of Two Bootstraps: Lessons Learned in Maintainable CSS


The Pull Request

346 files changed, 100 commits, 6 authors.

The title of that pull request? "Port to Bootstrap2." Yes, that was how much effort it took to make a change as seemingly simple as upgrading a CSS framework in our codebase.


The Journey

It all started on a rainy train ride, my daily commute back to San Francisco on the Caltrain. I knew that we used Bootstrap 1 in our legacy codebase (which powers class.coursera.org, where students watch lectures and take quizzes) and we were using Bootstrap 2 in our "modern" codebase (the one behind www.coursera.org, where students find courses to enroll in), and that was making it hard for us to share resources across them, so I thought, hey, I know, I'll just go through and upgrade all our class names in the old codebase. I had upgraded personal projects to Bootstrap 2 in the past, and it hadn't been that hard, especially with the help of the upgrade guide.

As you can guess, it took a wee bit longer than that train ride. Why?

  • Our codebase size: This was only my second foray into this codebase, and so I had very little idea how it worked and how big it actually was. After a few greps for Bootstrap class names, I soon realized that the codebase was about 10x bigger than I had realized. I was only familiar with the student side of class.coursera.org, but of course, there's an instructor side, and as it turns out, we have a rather comprehensive admin interface. I also discovered that we had multiple versions of some of our interfaces and we're still supporting both of them, until all the admins have time to learn the new interfaces.
  • Our codebase architecture: Our legacy codebase is written in a hand-spun PHP framework, spitting out HTML with inline JS and linked JS files. I expected to find some CSS class names in the JS, used in selectors or DOM creation, but besides finding a lot of that, I also discovered CSS class names outputted by PHP, and the worst, CSS class names in JS outputted by PHP. Yech! It meant that I had to look in every nook and cranny for class names, from the back to the front.
  • The bootstrap class names: Bootstrap - both 1 and 2 - uses very succinct class names, like "alert", "label", "btn". Those short names are great when you're hand typing out your CSS, but oh wow, they are a *bitch* to search for in a codebase of thousands of files. Do you know how often developers use the word "label"? All the freaking time. Bootstrap 1 even had these short add-on class names for component states like "primary", "success", "info", which thankfully Bootstrap 2 changed into the more specific "btn-primary", "btn-success", "btn-info".
  • The bootstrap JS components: At its core, Bootstrap is a CSS framework, but it offers a small number of JS components on top, and we were using a few of them. After upgrading to the new JS components and making them RequireJS-friendly (so that we could use the same JS libraries in our Require-powered codebase as in this legacy one), I had to upgrade the relevant HTML and test that the interactivity still worked as expected. Now I was no longer making just a visual change; I was possibly affecting interactions as well.

All of those factors combined turned my train-ride of a change into a month-long change, where I roped in more of our engineers, held many "Bootstrapv2athons", and worked with a QA team to do a weeklong test of the branch in multiple browsers.

Finally, we were ready to deploy it. Or, more accurately, my colleague convinced me that we should just get the damn thing out and be ready to deal with anything that we missed. As it turns out, yes, we still missed a few things (a few big things, a few little things), but we responded as quickly as we could to minimize the damage. Now, finally, we have our entire codebase running off the same Bootstrap version and we can start to share our CSS and JS. Man, was that a journey of epic renaming proportions that I did not imagine.


The Lessons Learned

But I really don't think it should have to be that hard to upgrade a CSS framework. Here's what I want to try next time I work in a codebase that's based on Bootstrap:

  • Do not use the Bootstrap class names directly in our code. They are too short, and it's too hard to keep track of where they're being used.
  • Instead, use a CSS pre-processor like Less (which Bootstrap itself is based on) to extend the Bootstrap class names and define longer, more app-specific, more semantic class names. For example, we might extend btn into "coursera-generic-button" and we might extend btn/btn-success into "coursera-save-button" (see the sketch after this list).
  • We would then use those custom CSS class names everywhere instead of the Bootstrap class names, especially if we are using them in our JavaScript.
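
In Less, that kind of extension could look something like this (a minimal sketch, using Less's ability to mix existing classes into new ones; the class names are the hypothetical ones from above):

// Mix the Bootstrap rules into longer, app-specific, semantic class names.
.coursera-generic-button {
  .btn;
}

.coursera-save-button {
  .btn;
  .btn-success;
}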

For a longer article that also recommends this technique, for semantic reasons even more than maintenance reasons, read Stop Embedding Bootstrap Classes in your HTML.

What do you think? I'd love to hear ideas from those of you who rely on Bootstrap or similar frameworks in your codebase, and what you've done to avoid dependence and ease maintenance. I haven't yet had the chance to test out my strategy, so if there's a better way, I want to hear it.

Oh, on a related note, I hear that Bootstrap 3 is almost out... time to start my next epic branch?

Tuesday, October 9, 2012

Using Transloadit with Bootstrap

As I discussed in a previous post, one of my first projects at Coursera was implementing social profiles. A big part of our motivation for adding user profiles is to add a sense of community and intimacy to classes, particularly in the place where students interact with each other the most - our forums. The forums were just long streams of text before, and there was one little thing we could add that would break up those streams of text and instantly make them feel like a social experience: user photos!

Upload Strategies

So, in our new profile editor form, the photo upload area is at the very top, and it's the first part of the form that I worked on. Based on my experience from EatDifferent, I knew that photo upload (or file upload generally) wasn't an easy thing to get working across all the different browsers, and I decided immediately that I did not want to rewrite cross-browser file upload from scratch. I started playing around with the popular jQuery upload library, but then I realized that I also had to solve the second part of the upload equation: once I had the photos, where would I store them on the server, how would I resize them, and how would I get them back out? We store many of our assets in Amazon S3, so I started looking at how to use Python to do image manipulation and S3 upload.

Client-side Uploads with Transloadit

Before I got too far down that path, I remembered that there's a new startup that would do all of that for me: Transloadit. They take care of file manipulation (like image resizing and cropping) and file storage (like to S3 and Youtube), and best of all, they provide a jQuery plugin that will do all of that for you, with no server-side scripting on your side. (There's also a very similar startup FilePicker.io, but I happened to meet the co-founder of Transloadit while wandering the streets of Lyon, France, so of course, I was a bit biased in my selection.)

The Transloadit Plugin

Transloadit designed their plugin so that you attach it to a form, and then it listens to the submit, processes the file inputs, optionally throws up a progress indicator, and notifies you with JSON results when it's done. That wasn't quite the experience that I wanted for our profile form, however. I wanted users to select a file to upload and immediately see it uploaded and displayed, while they continue filling out the form. I changed the Transloadit plugin so that it attaches to any DOM node, and processes the file inputs inside whenever a change event is fired for them.

My Bootstrap Plugin

Both because I didn't like the modal nature of the Transloadit progress indicator and because I wanted a more cohesive look & feel to the upload, I came up with my own Bootstrap-based HTML and CSS for the uploader, which includes a progress bar and a hack for actually letting users click a nice pretty button instead of the default (ugly) file input control.

To tie the HTML, CSS, and JS together, I wrote a small Twitter Bootstrap plugin. In that file, I call the Transloadit plugin on the form, specifying options to wait for results, disable the modal, not submit the form after file upload, and also defining various event callbacks. When the user first picks a file, I change the button text and show the progress bar, and as the upload progresses, I increase the progress bar width. Finally, when the file has uploaded, I fade the progress bar and display the image for the user to see. If the user wants to replace the image, they re-click the button and the progress begins again. Behind the scenes, I store the generated S3 URLs (for 3 different resized versions plus the original) in hidden inputs in the form, and that's what goes into our database.
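
Here's an illustrative sketch of the kind of DOM updates the uploader makes at each stage (the helper names and selectors are made up for this sketch; they are not the actual plugin code, and in the real plugin these happen inside the callbacks the Transloadit plugin fires):

// Illustrative only: what happens at each stage of an upload.
function onUploadStart($widget) {
  $widget.find('.js-upload-button').text('Uploading...');
  $widget.find('.progress').show();
}

function onUploadProgress($widget, bytesReceived, bytesExpected) {
  var percent = Math.round((bytesReceived / bytesExpected) * 100);
  $widget.find('.progress .bar').css('width', percent + '%');
}

function onUploadDone($widget, photoUrl) {
  $widget.find('.progress').fadeOut();
  $widget.find('.js-upload-preview').attr('src', photoUrl);
  // store the generated S3 URL so it is submitted along with the profile form
  $widget.find('input[name="photo_url"]').val(photoUrl);
}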

You can see that flow here:



Improvements?

This isn't the perfect photo upload UI, of course. I'd love to give users the ability to crop the photo on the client (like Twitter has now implemented in their avatar upload UI), and I think I could make it more obvious how to change the photo. But, it has served us well enough so far for the profile form, so we're now using the plugin in our admin interface as well, for accepting all the course, university, and instructor media assets.

Fork It!

To make it easier for others to use Transloadit together with Bootstrap like I have, I've put all the code into a git repository along with examples, which show how to use it for both photos and files.

Monday, May 28, 2012

Using Grunt.js with CSS

For most of my apps, I've been using either a Makefile, shell scripts, Python scripts, or some combination of those as my build tool — taking care of tasks like concatenating files, minifying files, and linting my code. A few months ago, Ben Alman introduced Grunt, a build tool written in JS and designed for JS. I quite liked the idea of using JS for task automation, since it's a language I already know (bash scripting makes me want to bash my head against the wall) and it's the language that I write my apps in (at least the front ends). I finally got the opportunity to properly try it out last week, while porting my HTML5 slide editing app over from Closure to Bootstrap.

Grunt is designed primarily for JavaScript projects, like Node.JS or jQuery plugins, so it comes out of the box with tasks for concatenating files, linting JS files (via JSHint), compressing JS files (via UglifyJS), running a Node server, and running JS unit tests — no CSS-related tasks. For this web app, I decided to write the CSS using Less, a CSS pre-processor. While debugging locally, I include the .less file on the page along with the less.js pre-processor, but for performance reasons on production, I wanted to include just a minified CSS file. So I needed 2 new tasks: one for processing the .less, and another for minifying the .css. Grunt has a formal way for developers to define and share new tasks via extensions, and luckily I found that developers had already made grunt-less and grunt-css extensions.

To use the Grunt extensions, I first had to install them as node modules. I could have installed them simply by running npm install grunt-less inside my project folder, but at the suggestion of Ben, I instead created a package.json file with the modules listed as dependencies. Then I just ran npm install and it grabbed everything necessary. Now, since my code is open-source, I don't have to tell people what they need to install; they just have to run npm install. Here's what my file looks like:

{
  "name": "SlideEditor",
  "version": "0.0.0",
  "dependencies" : {
    "grunt-less":  ">0.0.0",
    "grunt-css":   ">0.0.0"
  }
}

Once I did that, I just loaded the task definitions in my grunt.js file by sticking these lines in:

  grunt.loadNpmTasks('grunt-less');
  grunt.loadNpmTasks('grunt-css');

And now I could use the tasks! However, before I set them up, I spent a while figuring out the best directory structure for both my JS and CSS. When you're using any sort of build tool, it helps to have a sensible directory structure, like separate folders for the debug files versus the build files. For this app, I decided to put all the original files in a src folder with child folders for css and js, and for the build files, I just put them in css and js folders under the root. In the src/css folder, I have child folders less (for the .less file), app (for the processed less file), and libs (for any CSS files from 3rd party libraries). Here's what that looks like:

  • src
    • css
      • less
      • app
      • libs
    • js
      • app
      • libs
  • js
  • css

Okay, so finally, I put the Grunt extensions to work, setting up the less and cssmin tasks. Here's what all the CSS-related tasks look like in my grunt file:

  var SRC_CSS   = 'src/css/';
  var BUILD_CSS = 'css/';
  
  grunt.initConfig({
    // ...
    less: {
      css: {
        src: [SRC_CSS + 'less/base.less'],
        dest: SRC_CSS + 'app/base.css',
      }
    },
    concat: {
      css: {
        src: [SRC_CSS + 'libs/*.css',
              SRC_CSS + 'app/*.css'],
        dest: BUILD_CSS + 'css/all.css'
      }
    },
    cssmin: {
      css: {
        src: '',
        dest: BUILD_CSS + 'css/all-min.css'
      }
    },
    // ...
  });
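
For reference, wiring those tasks into a single build command looks something like this with grunt 0.3-era syntax (the exact task list here is an assumption, not a copy of my real grunt.js):

  // A sketch of tying the tasks together; run with "grunt css" or just "grunt".
  grunt.registerTask('css', 'less concat:css cssmin');
  grunt.registerTask('default', 'lint css min');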

You can check out my full grunt.js file here, to see how I use the CSS tasks and JS tasks together. Like any tool, Grunt takes a bit of time to learn, but at least for me, I find it much more approachable than the world of bash scripting. Try it out for your next project!

Thursday, April 12, 2012

Theming Tumblr with Twitter Bootstrap

I started a blog recently for EatDifferent, and I decided to use Tumblr as the blogging platform, as it has more of a community than other platforms. I wanted the blog to share some of the look & feel of the main site, for consistency's sake, so I chose to make my own custom Tumblr theme instead of using a pre-set theme from the gallery. I also wanted the blog to share some of the look & feel of the Stripe blog, because I think it's just so pretty — the author photos, the outset photos, the ample whitespace. After a few hours of hardcore copying, pasting, and tweaking from the various stylesheets, I achieved my goal. You can see what I came up with on the live EatDifferent blog or in the screenshot below:

Since I figure other people might also want to use Twitter Bootstrap in their Tumblr theme (like if you're already using it for your main site), I spent a few more minutes making a generic version of the theme. You can see it on the demo blog or in the screenshot below:

If you want to use it as a base for your theme, just grab it from this gist and modify away.

Saturday, March 24, 2012

Working around Android Webkit

I use PhoneGap to output the Android app for EatDifferent, and that means that my app runs inside an embedded Android browser. As I've discovered and re-discovered every time I work on a new version of the app, I am not the biggest fan of the Android browser. And that's an understatement.

Sure, the Android browser is Webkit-based, so it technically supports modern HTML5 elements and CSS3 rules, but in practice, the browser can sometimes struggle with rendering the new shiny CSS stuff, especially when some user interaction causes it to repaint the DOM. It's not just that the browser slows down — it actually fails to re-paint DOM nodes (or as I like to describe it, it "whites out" those nodes). When the whited-out nodes are my navigation menu or form buttons, then my app is rendered basically unusable. That's a shitty user experience, of course, so as the developer, I want to do whatever I can to make sure that a user doesn't have to experience that.

Unfortunately, these white-outs are difficult to debug. When I run into one, I try to replicate it a few times (so I know what user interactions caused it), and then I start stripping out CSS rules until I can't reliably replicate it anymore. Since the glitches only happen on the devices themselves (and not in Chrome, where I usually test CSS changes), I have to re-deploy the app every time I test a change. Needless to say, it's a slow process. The white-outs are also impossible to programmatically test for, as far as I know, so there's nothing I can add to my test suite to guarantee that changes in my code haven't brought any back.

So, yeah, they suck. But they suck less when you know what to look for and what to change, so here are some of the changes I made to stamp out the white-outs.

But first... detecting Android

I use the same HTML, CSS, and JS codebase for both the Android and iOS versions of my app, and for most of the changes, I only wanted to make them for Android. To do that, I have a function that detects if we're on Android or if we're testing Android mode on the desktop. I can use the results of that function in my initialization code to add an "android" class to the body tag, which I can reference in my CSS.

My Android detection function checks by looking at the user agent (which isn't as simple as just looking for "Android", thanks to HTC and Kindle) and as a backup, looking at the device information served by the PhoneGap API.

Note that my Android detection function only checks if we're on an Android operating system, not if we're specifically in the built-in Android WebKit browser. My app only needs to check if it's on an Android OS, since it's wrapped inside PhoneGap and not accessed from arbitrary mobile browsers. If you're writing a website accessible from a URL and want to employ these workarounds only on the built-in Android Webkit browser, then you need a check that looks for the Android OS and a non-Chrome Webkit browser. Thanks to Brendan Eich for pointing that out in the comments.

  function inUserAgent(str) {
    return (new RegExp(str)).test(navigator.userAgent.toLowerCase());
  }

  function isAndroid() {

    function isAndroidOS() {
      return inUserAgent('android') || inUserAgent('htc_') || inUserAgent('silk/');
    }

    function isAndroidPG() {
      return (window.device && window.device.platform && window.device.platform == 'Android') || false;
    }

    return isAndroidOS() || isAndroidPG();
  }

  if (isAndroid() || getUrlParam('os') == 'android') {
    $('body').addClass('android');
  }

The case of the shiny modals

I use the modals from Twitter Bootstrap for dialogs in the mobile app, and I noticed white-outs and other visual oddities would happen when a modal rendered. To work around that, I overrode the modal CSS rules to reset various CSS3 properties to their defaults, so that the browser processes them as never having been set at all.

.android {
  .modal {
    @include box-shadow(none);
    @include background-clip(border-box);
    @include border-radius(0px);
    border: 1px solid black;
  }
}

After making that change, I decided to go ahead and strip down the Bootstrap buttons, nav bar, and tabs as well — not because I necessarily knew their CSS3 rules were causing issues, but because I'd rather have a usable app than a perfectly rounded, shadowed app. I later realized that the stripped-down CSS conveniently matches the new Android design guidelines quite well, so my changes are actually both a performance and a usability gain. You can see all my Android CSS overrides in this gist, and see the visual effect of the overrides in the screenshot below.


The case of too many nodes

When my app loads the stream, it appends many DOM nodes for each of the updates, and those DOM nodes can include text, links, buttons, and images. Android was frequently whiting out while trying to render the stream, understandably. I made a few changes to improve the performance of the stream rendering:

  • Stripping the CSS3 rules from the buttons (as described above) plus a few other classes.
  • Implementing delayed image loading. I had already implemented that for the web version of the app, since it is pretty silly from a performance and bandwidth perspective to load images that your users may not scroll down to see, and I discussed it in detail in this post. (A rough sketch of the idea follows this list.)
  • Pre-compiling my Handlebars templates. This is a change I actually made for the iPhone, after discovering super slow compile times on iOS 5, but it helps a bit on Android as well.
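
For reference, here is a rough sketch of what I mean by delayed image loading. This isn't the exact code from the app (the data-src attribute and the 200px look-ahead are just illustrative), but it shows the general idea: render placeholders with a data-src attribute, and only fill in the real src for images near the viewport.

  // Sketch only: swap data-src into src for images close to the viewport.
  function loadVisibleImages() {
    var viewportBottom = $(window).scrollTop() + $(window).height();
    $('img[data-src]').each(function() {
      var $img = $(this);
      if ($img.offset().top < viewportBottom + 200) {
        $img.attr('src', $img.attr('data-src')).removeAttr('data-src');
      }
    });
  }

  $(window).scroll(loadVisibleImages);
  loadVisibleImages();  // load whatever is visible on the initial render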

The case of the resizing textarea

My app includes a textarea for the user to enter notes, which defaults to a few rows. Sometimes users get wordy, though, and type beyond the textarea, so I wanted the textarea to resize as they typed. I got this plugin working, but I kept seeing my app's header and footer white out when the textarea resized. I eventually figured out that Android didn't like repainting the header and footer because they were position:fixed (it didn't white out when I made them absolute), but I couldn't figure out how to stop Android from whiting them out. So I opted to just make sure the code never resized the textarea while the user was typing, by adding a resizeOnChange option to the plugin and setting it to true for Android. It's not ideal, but well, that's life.
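
In code, the workaround boils down to something like this sketch (the selector is hypothetical, and this isn't the plugin's actual internals): resize on every keystroke everywhere except Android, where we wait for the change event so nothing has to repaint while the user is typing.

  // Sketch only, not the plugin code: pick the resize trigger per platform.
  var $notes = $('#notes-textarea');  // hypothetical id
  var resizeEvent = isAndroid() ? 'change' : 'keyup';
  $notes.bind(resizeEvent, function() {
    this.style.height = 'auto';
    this.style.height = this.scrollHeight + 'px';
  });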

A better future?

As recently announced, there's now a Chrome for Android, which is significantly better than the built-in WebKit browser that ships with the OS, and I'm hopeful that Android apps will be able to use Chrome for their embedded WebView in the future (see my issue filed on it here). I look forward to making app design decisions based on making a better user experience, and not on preventing a horrible one. :)

Tuesday, February 7, 2012

Theming Twitter Bootstrap (Without Less)

As I've mentioned in previous posts, I use Twitter Bootstrap as the base for my CSS for EatDifferent, both on the web and mobile versions. I like it because it's pretty by default — the way I think the web should be, but the web started off ugly and has to stay backwards compatible, so it's too late for that dream.

I also like it because it's easy to customize. The Bootstrap CSS is built using LESS, a stylesheet language similar to SASS, so if you use LESS, you can modify the variables in the LESS files themselves and re-compile the CSS. However, I use SASS instead of LESS, so for me to customize the Bootstrap CSS, I have to override its generated rules. I started off by tinkering with the CSS in the Chrome DOM inspector, but then I realized I could more effectively and cleanly override the rules by basing my overrides on the rules in the original LESS files. I thought I'd share my overrides here in case others are going down the same path as me.

Colors

Much of my Bootstrap customization is just changing the colors and background colors of elements. Bootstrap defaults to a fairly generic color scheme: white background, dark grey text, blue links, and a black nav bar. For EatDifferent, I wanted a slightly brighter color scheme and a different blue. Since I use SASS, I set up various color variables at the top of my .scss file that I use throughout my CSS overrides.

$white: #ffffff;
$black: #000000;
$lightblue: #5bc0de; // assumed value for the lighter gradient color (not shown in the original snippet)
$darkblue: #339bb9;
$darkestblue: #257085;
$blue-gradient: linear-gradient(top, $lightblue, $darkblue);

Links & Buttons (buttons.less)

The standard .btn button fits fine into my scheme — it's just a muted grey — but the .btn-primary class is blue, so I needed to change it over to my scheme's blue.

The SASS:

a {
  color: $darkblue;
  &:active, &:link, &:hover {
    color: $darkestblue;
  }
  &.btn-primary {
    color: $white;
  }
}

.btn-primary {
  @include background-image($blue-gradient);
  background-color: $darkblue;
  border-color: $darkblue;
  &:hover, &:active, &.active, &.disabled, &[disabled] {
    background-color: $darkblue;
  }
}

The result:


Navigation (navs.less)

Bootstrap offers two navigation elements, pills and tabs. The tabs don't require customization as they're just links with a grey border, but the pills have a hover and active background that must be overridden.

The SASS:

 
.nav-pills {
  .active > a, .active > a:hover {
    background: $darkblue;
  }
}

The result:


Tooltips (tooltip.less)

Bootstrap tooltips default to a black background with white text, so I changed the background to my blue instead.

The SASS:

.tooltip {
  font-size: 13px;
  @include opacity(1.0);

  .tooltip-inner {
    background: $darkblue;
    color: white;
  }
}

The result:


Header (navbar.less)

Bootstrap provides a navigation bar (.navbar) for creating the typical top bar with site navigation, and this navigation bar is probably what most people recognize when they see a Bootstrap site. It defaults to a black background with white/grey links, and that was just way too morbid for my tastes. I changed the backgrounds to my blue, and added a logo and login area. There's a bit more to override here because the navbar also includes a dropdown menu.

The HTML:

  <div id="header" class="navbar navbar-fixed-top">
    <div class="navbar-inner">
      <div class="container">
        <a id="header-logo" class="pull-left">      
        </a>
          <ul id="header-login" class="nav pull-right">
            <li class="dropdown">
              <a href="#" class="dropdown-toggle" data-toggle="dropdown">
                <img src="http://blog.pamelafox.org/pamelafox.png">
                <span id="header-login-name">Pamela Fox</span>
                <b class="caret"></b>
              </a>

              <ul class="dropdown-menu">
                <li><a href="http://blog.pamelafox.org/%7B%7B%20url_for('settings') }}">Settings</a></li>
                <li><a href="#" id="logout-button">Logout</a></li>
              </ul>
            </li>
          </ul>
      </div>
    </div>
  </div>

The SASS:

.navbar {
  .navbar-inner {
    background-color: $darkblue;
    @include background-image($blue-gradient);
  }
  div > ul a, .nav a {
    font-size: 14px;
    font-weight: bold;
    color: $white;
    &:hover {
      background-color: $darkblue;
    }
  }
  div > ul .active a, .nav .active a {
    background-color: white;
    color: $darkblue;
  }
  .dropdown-menu {
    &:before {
      border-bottom-color: #ccc;
    }
    &:after {
      border-bottom-color: $darkblue;
    }
  }
  div > ul .menu-dropdown, .nav .menu-dropdown, .topbar div > ul .dropdown-menu, .nav .dropdown-menu {
    background-color: $darkblue;
    li {
      a {
        background: $darkblue !important;
        color: white;
        &:hover {
          background-color: $darkblue;
        }
      }
    }
  }
}

#header {
  #header-logo {
    background-image: url('http://blog.pamelafox.org/images/logo_long_trans.png');
    width: 138px;
    height: 40px;
  }
  #header-login {
    img {
      width: 16px;
      height: 16px;
      vertical-align: text-bottom;
      border: 1px solid #CCCCCC;
    }
  }
}

The result:

Wednesday, October 26, 2011

Logging JS Errors on iOS with PhoneGap

I've spent the last few days getting the EatDifferent PhoneGap app working on an iPhone (an app which previously worked on Android). The hardest part has been learning to debug in the iOS browser, so I thought I'd post my findings:

  • To view the output of console.log, you must open the Xcode console. The iOS browser "Debug console" that most iOS debugging articles mention is only displayed in the standalone Safari browser, not in the WebView (where the PhoneGap HTML lives).
  • There seem to be times when console.log does not log the output (perhaps during loading?) - in that case, alert() always seems to work.
  • If you log a JS object using console.log, it will just print "Object" by default. You must JSON stringify it to be useful.
  • You can also use debug.phonegap.com (hosted weinre) to view the DOM and the JS console logs.
  • The WebView browser silently fails on JS errors - it stops running the JS code and does not report the error. To see the error, you must wrap the offending code in a try/catch block.

Given all of those learnings, here is my log() wrapper function that I use across my webapp:

    // Wrapper around console.log that handles the iOS/Android quirks above.
    function log(something) {
      if (window.console) {
        if (something instanceof Date) {
          something = something.toDateString();
        }
        // In the mobile WebViews, logging an object just prints "Object",
        // so stringify objects there to get readable output.
        if ((isIOS() || isAndroid()) && typeof something == 'object') {
          something = JSON.stringify(something);
        }
        console.log(something);
      }
    }

And I wrap various code blocks in try/catch, like the callback function for AJAX requests:

    try { 
      onSuccess(processJSON(responseJSON)); 
    } catch(e) { 
      log(e); 
    } 

I posted my observations in the PhoneGap group, and the developers there made several recommendations: 1) use Ripple, a Chrome extension for mobile emulation, and 2) monkey-patch JS functions to always try-catch, as done in this library. I've taken a break from iOS debugging for a few days, but I'll probably revisit debugging soon and try out their ideas.
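
For reference, the monkey-patching idea is roughly this (a hand-written sketch, not the library's actual code):

    // Wrap a function so that any error it throws gets logged instead of
    // silently stopping the rest of the JS.
    function wrapWithTryCatch(fn) {
      return function() {
        try {
          return fn.apply(this, arguments);
        } catch (e) {
          log(e);
        }
      };
    }

    // For example, wrapping an AJAX callback ('/api/logs' is hypothetical):
    $.get('/api/logs', wrapWithTryCatch(function(response) {
      log(response);
    }));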

Saturday, October 15, 2011

JS & CSS Compiling, Compression & Cache-Busting

Every time I deploy a new version of the CSS and JavaScript for EatDifferent to production, I run it through a series of steps to ensure code quality and performance:

  • Code quality: I use JSHint to check for JavaScript code quality issues. Sometimes it's a matter of style, but other times it actually finds issues that can become runtime bugs.
  • Concatenation: I use cat to combine my JS files and CSS files into one file each, so that the browser can issue fewer HTTP requests when loading the page.
  • Compression: I use Closure Compiler to minify my JS and YUI Compressor to minify my CSS, so that those HTTP requests are smaller.
  • Cache bust: I append the current timestamp as a query parameter to the JS and CSS URLs in my base template HTML. I serve the files as static files off App Engine, which would normally result in browsers caching them forever, but by appending a new query parameter on each deploy, I force the browsers to re-download the files whenever they've changed.

I do all of this in a Makefile, including downloading the necessary tools. You can see the relevant bits in this gist:

Thursday, September 8, 2011

Switching from jQuery Mobile to Twitter Bootstrap

I've been using jQuery Mobile (jQM) to make a mobile-enhanced version of EatDifferent for the past few weeks, and though I have much respect for the team behind it, I've grown increasingly frustrated with it. Why?

  1. It typically enhances elements by adding additional DOM nodes around the original elements - which can help in making them easier to use on a mobile device, but it means that whenever you want to tweak the styles of an enhanced element, you need to dig into the DOM and apply CSS overrides at various levels. If the DOM structure or inner CSS names change in later jQM versions, you may have to update those CSS overrides.
  2. If you dynamically update an enhanced DOM element (like a form input, for example), you often have to tell jQM to force-refresh that element, since it needs to redo the added DOM. That means you need jQM-specific JavaScript calls in your code (see the sketch after this list).
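
To make that second point concrete, here is the kind of jQM-specific call I mean (the element ids are hypothetical, but selectmenu('refresh') and listview('refresh') are real jQuery Mobile methods):

    // After changing an enhanced element in JS, jQM has to be told to
    // re-render the extra DOM it generated for that element.
    $('#meal-select').val('lunch').selectmenu('refresh');
    $('#log-list').append('<li>New entry</li>').listview('refresh');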

I was fine with those issues when I used jQM for an earlier project, but for the EatDifferent app, they are becoming bigger issues because: 1) I'm doing more customization, and 2) I'm doing more dynamic form creation and updating.

And most importantly, I'm trying to re-use my HTML/CSS/JS across both the desktop web version and the mobile app, and I want minimal differences between them - both so users have a consistent experience across them, and so that I can write the code just once.

So today I scrapped jQM and decided to use just plain HTML/CSS. And since I suck at making pretty CSS, I'm using Twitter's Bootstrap library. It's pure CSS (no JS!), it's simple to use, and it's pretty. It's not specifically designed for mobile, but it works well enough on it and is easy to customize when it doesn't quite work.

Here's what the app looked like using jQuery mobile and the default black theme:

Here's what it looks like using Twitter bootstrap and some basic customization:

As you can see, jQuery Mobile does create more mobile-optimized form input interfaces (larger click areas, for example), but Twitter Bootstrap gives me clean CSS that I can easily add my own mobile-optimized CSS on top of when necessary. I'm happy I made the switch.

Sunday, August 28, 2011

Spriting with Compass

I use little icons in various places on EatDifferent, like to show food bonuses in the stream:

The browser can take a while to load lots of images, since it has to make a request for each one, so I decided to implement icon spriting - baking all the icons into one image and using background-position in CSS to make each one appear to be a standalone icon.

Thankfully, I was already using SASS and Compass for my CSS, and Compass comes with built-in spriting.

After putting my icons in one folder (and sizing them all to be 16x16 pixels), I added this to the top of my .scss file:

    @import "http://blog.pamelafox.org/icon/*.png";
    @include all-icon-sprites;

I also specified sizing and display properties for the special "icon-sprite" class in my .scss file:

    .icon-sprite {
      width: 16px;
      height: 16px;
      display: inline-block;
      margin-right: 2px;
    }

Compass then auto-generated CSS rules for my icons - one combined rule to specify the background image for all of them, and a rule per icon to specify the background-position. It also applies any ".icon-sprite" rules it finds to all of the generated icon classes. Here's a snippet of the auto-generated rules:

    .icon-sprite, .icon-activity, .icon-android, .icon-bodylog, .icon-buddies, .icon-camera, .icon-comment, .icon-edit, .icon-female, .icon-foodlog, .icon-grassfedmeat, .icon-highfive, .icon-home, .icon-homecooked, .icon-localfood, .icon-logs, .icon-organicveg, .icon-profile, .icon-reminder, .icon-settings, .icon-settings2, .icon-stats, .icon-stats2, .icon-sustseafood, .icon-tip {
      background: url('http://blog.pamelafox.org/img/icon-s97f5308db7.png') no-repeat;
    }

    .icon-activity {
      background-position: 0 0;
    }

    .icon-android {
      background-position: 0 -27px;
    }

    /* line 99, ../sass/_common.scss */
    .icon-sprite, .icon-activity, .icon-android, .icon-bodylog, .icon-buddies, .icon-camera, .icon-comment, .icon-edit, .icon-female, .icon-foodlog, .icon-grassfedmeat, .icon-highfive, .icon-home, .icon-homecooked, .icon-localfood, .icon-logs, .icon-organicveg, .icon-profile, .icon-reminder, .icon-settings, .icon-settings2, .icon-stats, .icon-stats2, .icon-sustseafood, .icon-tip {
      width: 16px;
      height: 16px;
      display: inline-block;
      margin-right: 2px;
    }

I measured the loading performance of my site before and after spriting, using the HAR Viewer, and these are the results:

Before: 28 requests, 2.61s (onload: 1.92s, DOMContentLoaded: 1.64s)
After: 15 requests, 1.09s (onload: 817ms, DOMContentLoaded: 600ms)

As you can see, spriting had a significant effect on performance. I definitely recommend spriting (and Compass) for sites that display multiple images on page load.