Binding Isolated Scopes in AngularJS

In AngularJS an isolated scope does not inherit properties from its parent scope, unlike a child scope. However, a property of an isolated scope in a directive can be bound to an attribute, which in turn can be bound to a property on the controller scope, making it possible for the isolated scope to access specified properties on the parent scope.

Type     Prefix   Description
Text     @        The property on the isolated scope takes its value (always a string) from the interpolated value of the specified attribute.
One-way  &        The isolated scope is given a function which evaluates the specified expression on the controller scope.
Two-way  =        The specified property on the controller scope can be manipulated by changing the value of the property on the isolated scope.


    <div ng-app="myApp" ng-controller="myController">
        <div my-directive
             text="Hello {{ parentProperty2 }}!"
             attribute-for-one-way-bind="parentProperty1"
             attribute-for-two-way-bind="parentProperty2">
        </div>
    </div>

    angular.module("myApp", [])
        .directive("myDirective", function () {
            return {
                restrict: "A",
                scope: {
                    text: "@",                                       // Attribute name is optional if attribute has same
                    oneWayBoundProperty: "&attributeForOneWayBind",  // name as property on isolated scope.  (If you do
                    twoWayBoundProperty: "=attributeForTwoWayBind"   // specify the name, remember to camelCase it.)
                },
                link: function (scope) {
                    var tmp = scope.oneWayBoundProperty(); // A one-way bound property has to be called as a function
                    scope.twoWayBoundProperty = "Jenny";   // This will also change parentProperty2
                },
                template: 'My text is "{{ text }}" <br> My oneWayBoundProperty is "{{ oneWayBoundProperty() }}" <br> My twoWayBoundProperty is "{{ twoWayBoundProperty }}"'
            };
        })
        .controller("myController", function ($scope) {
            $scope.parentProperty1 = "World";
            $scope.parentProperty2 = "User";
        });


My text is "Hello Jenny!" 
My oneWayBoundProperty is "World" 
My twoWayBoundProperty is "Jenny"

AngularJS documentation on What are Scopes?
AngularJS documentation on Creating Custom Directives
AngularJS Directives, Using Isolated Scope with Attributes


Posted in AngularJS

Working with master and project branches in Git

Bringing project work into the main branch
$ git checkout master
$ git merge projectbranchfrommaster

If there are merge conflicts, you will need to resolve them and commit the changes.
    x     <- merge commit
    |  \
    x   o <- commit on main branch and conflicting commit on project branch
    |  /
    x     <- common ancestor
If there aren’t any conflicts, you’ll get a ‘fast-forward’ merge, which simply moves the branch pointer forward over the new commits without creating a merge commit.
But it’s nice to create merge commits when a project branch is merged into the main branch, to show how the commits relate to each other. This can be achieved using
$ git merge --no-ff projectbranchfrommaster
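Here’s a quick sketch in a throwaway repository (the branch and commit names are made up for the example) showing the effect:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo 1 > file; git add file; git commit -qm "base"
git branch -M master
git checkout -qb projectbranchfrommaster
echo 2 >> file; git commit -qam "project work"
git checkout -q master
# A plain merge here would fast-forward; --no-ff forces a merge commit:
git merge --no-ff -m "Merge project branch" projectbranchfrommaster
git log --oneline --merges   # the merge commit now appears in the history
```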

Bringing a project branch up to date
$ git checkout projectbranchfrommaster
$ git rebase master

Given a project branch taken from a main branch which has since moved on, this makes your history look as if the project branch was taken from the most recent point of the main branch. If you want to rebase onto an earlier point of your main branch instead, to change
   x2    o            x2
    |   /              |
   x1          into   x1    o'
                       |   /
you can use
$ git rebase --onto x1 master [projectbranchfrommaster]
where ‘x1’ is the commit ID of the commit you want to branch from. You’re specifying the new base, then the ref whose commits should stay put (in effect, the parent of the first of the commits you want to move), and, optionally, the branch containing the commits to be moved – it defaults to the branch currently checked out.
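As a sketch, the following builds a throwaway repository in which commits ‘A’ and ‘B’ play the roles of x1 and x2, then moves the project branch back onto x1 (all names here are made up for the example):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo 1 > file; git add file; git commit -qm "A"   # this commit plays the role of x1
git branch -M master
X1=$(git rev-parse HEAD)
echo 2 >> file; git commit -qam "B"               # this commit plays the role of x2
git checkout -qb projectbranchfrommaster          # project branch taken from x2
echo p > project-file; git add project-file; git commit -qm "P"
# Move the project commits so they sit on x1 instead of x2:
git rebase --onto "$X1" master projectbranchfrommaster
git log --format=%s   # P, then A - commit B is no longer in the branch history
```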

If you encounter conflicts during a rebase and your merge tool talks about ‘theirs’ and ‘mine’, remember that ‘mine’ refers to the commit you are rebasing onto and ‘theirs’ refers to the commits that are being rebased onto it. (The terminology confused me initially, since from my point of view the rebased commits were ‘mine’ and the new base was ‘theirs’.) Unlike merge conflicts, you don’t need to commit your conflict resolution when rebasing – Git will include the changes in the reapplied commit when you do
$ git rebase --continue

Coping with upstream rebases
The difference between rebasing and merging is that rebasing reapplies your commits. When the rebase starts you will see a message
First, rewinding head to replay your work on top of it...
and once you have rebased your commits will all have new commit IDs. So if you have any branches which have been taken from the rebased branch you will need to rebase them onto the new version of the branch. (Hey, I’m not saying rebasing upstream branches is a good idea. I’m just making a note of how to cope if it happens…)

If nothing changed in the rebased commits, then Git is apparently clever enough that you can just do
$ git checkout projectsubbranch
$ git rebase projectbranchfrommaster

(Disclaimer: I have never tried applying this version and so can’t vouch for whether Git really is that clever.)

However, if there were any changes during the rebase – such as a conflict resolution – you’ll need to use --onto. As before, you specify the new base and the parent of the first of the commits you want to move. If you have 3 commits in the branch you want to move, you can refer to the parent of the first commit in it as projectsubbranch~3, so you can use:
$ git rebase --onto projectbranchfrommaster projectsubbranch~3

And finally…
And if everything goes horribly wrong, remember that you can undo a rebase using the instructions here: look up the head commit of the branch prior to the rebase in the reflog and do a hard reset to it (see my post on git workflow solutions on how to do this).

git-rebase(1) Manual Page
Recovering from an upstream rebase
Examples of getting rid of unwanted commits with a rebase

Posted in Version Control

Favourite Quotes

‘when you have eliminated the impossible, whatever remains, however improbable, must be the truth’
– Arthur Conan Doyle, The Sign of the Four, 1890

In other words: When you’ve checked everything you think likely without success, you need to start investigating the things you dismissed as unlikely.

‘A journey of a thousand miles begins with a single step’

In other words: If you have to implement an ‘epic’, you need to think about it in terms of smaller units of functionality.

‘Il semble que la perfection soit atteinte non quand il n’y a plus rien à ajouter, mais quand il n’y a plus rien à retrancher.’
Antoine de Saint Exupéry, Terre des Hommes, 1939, p. 60.
(‘…perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away’
Lewis Galantière)

In other words: Don’t check in dead-ends that didn’t actually solve the problem.

‘The point is, it’s not productive to ask questions like “Do most people like pulldown menus?” The right kind of question to ask is “Does this pulldown, with these items and this wording in this context on this page create a good experience for most people who are likely to use this site?”’
– Steve Krug, Don’t Make Me Think! A Common Sense Approach to Web Usability, Second Edition, 2006, p. 129.

In other words: Speaks for itself!

Posted in Uncategorized

Working with Dates in JavaScript

Given the quite limited built-in functionality for working with dates in JavaScript, here are some bits and pieces I have found useful.

Comparing two dates by value:
oDateOne - oDateTwo === 0

Checking if a string converts to a valid date:
var d = new Date("foo");
var isValid = Object.prototype.toString.call(d) === "[object Date]" && !isNaN(d.getTime());
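Wrapped up as a helper, the check looks like this (the function name is my own invention):

```javascript
// Sketch: checks whether a value converts to a valid Date.
function isValidDate(value) {
    var d = new Date(value);
    // An invalid date is still "[object Date]", but its time value is NaN
    return Object.prototype.toString.call(d) === "[object Date]" && !isNaN(d.getTime());
}

console.log(isValidDate("2007-01-26")); // true
console.log(isValidDate("foo"));        // false
```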

Formatting dates is easy if you’re using jQuery UI:
$.datepicker.formatDate('yy-mm-dd', new Date(2007, 1 - 1, 26));
Alternatively, the date formatting capabilities of Moment.js are very useful.
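If neither library is to hand, a simple case can be covered in a few lines of plain JavaScript (a sketch; the helper name is my own invention):

```javascript
// Sketch: minimal yyyy-mm-dd formatting without a library.
function formatIsoDate(date) {
    function pad(n) { return (n < 10 ? "0" : "") + n; }
    // getMonth() is zero-based, hence the + 1
    return date.getFullYear() + "-" + pad(date.getMonth() + 1) + "-" + pad(date.getDate());
}

console.log(formatIsoDate(new Date(2007, 1 - 1, 26))); // "2007-01-26"
```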

Posted in JavaScript

Integrating Pivotal Tracker and GitHub

I’ve been a Pivotal Tracker user for some time now and I’ve recently moved my code to GitHub, so I was interested when I found John Metta’s article which talks about automatically updating the progress of PT stories using a post-commit hook.

To get it set up you need to:

  1. Get an API token from Pivotal Tracker
    1. Scroll to the ‘API TOKEN’ section at the bottom of your profile page.
    2. If you don’t already have a token, click the ‘Create New Token’ button.
  2. Add the token to the service hooks on the GitHub repository
    1. Click on the settings tab for the repository
    2. Go to the Service Hooks section
    3. Scroll down the list of available service hooks and click on PivotalTracker
    4. Paste in your token
    5. Click ‘Update settings’

Now you’re ready to update your stories by including the special syntax in the commit message, specifying the story ID and (optionally) an action:
[fixes|completes|finishes|delivers #123456] My commit message
(Pivotal Tracker API Help Page)

Posted in Version Control

Converting an SVN Repository to Git

I’ve come to prefer working with git to working with SVN, so I set about converting one of my small projects. GitHub says ‘We highly recommend using the svn2git tool to convert an SVN repo to a git repo. …For more detail on how to do the conversion using svn2git, check out the svn2git README.’ The README gives a number of simple-looking examples, so I got hold of svn2git and tried to run it. Luckily, I was using the -v option to get more informative logging messages…

Error message number 1:
Couldn't open a repository: Unable to open an ra_local session to URL: Unable to open repository 'file://C:/path/MyRepository': Can't open file '//C:/path/MyRepository/format': No such host or network path at /usr/lib/perl5/site_perl/Git/ line 310
The solution was to run svnserve and access the repository using the svn protocol. I followed the instructions here so I could access my repository as svn://localhost/MyRepository. I tried again and got…

Error message number 2:
Expected FS format between '1' and '3'; found format '4'
I eventually found an explanation of the message here: it indicates that the repository was created by a newer version of SVN than the one being used by the client trying to access it. TortoiseSVN, which the repository was created with, uses its own version of Subversion, but svn2git was using my older installed version. So I tracked down an installer for the version TortoiseSVN was using and set that up on my system.

I tried again and got…success! (But it did take a couple of hours to convert even my small repository.)

Posted in Version Control

Github Glitch

I was not amused this afternoon when my attempt to push to GitHub for the first time from a new (well, old) machine was met with the message

Failed to erase credential: Element not found

fatal: Authentication failed

A quick Google revealed that other Vista 32-bit users have had a similar experience. The solution (as described by gregfiske) was not to input my credentials into the Windows pop-up box, but to cancel it and input them at the subsequent command line prompts instead.

Posted in Version Control

Writing Maintainable Code

Much of my life as a developer has been spent reading through someone else’s code to try to fix a defect or add some new behaviour. As a result I have some heartfelt pleas to make to my fellow developers to make life easier for those who follow them.

Give tests names that describe what they test

The worst test methods I’ve come across in this regard were called “Test1”, “Test2” and “Test3”. Together with the way the tests were written it made it very difficult to work out what behaviour was being tested. Not only would more descriptive names have made it easier to fix the tests when they broke, but the tests could have provided documentation of the expected behaviour of the subject class. The Growing Object Oriented Software book suggests the use of TestDox, which generates documentation from tests using the test method names. Another way of providing documentation from tests is suggested by Phil Haack in his post on Structuring Unit Tests. By having a test class for each individual method on the class being tested you can read the collapsed methods like a spec and also see which methods are under test from the class dropdown in Visual Studio.

Show how to use the class

The best tests I have seen in this respect are the SimpleTests for the MarkdownSharp project, which are sufficiently clear to provide a demonstration of the formatting to apply in order to achieve the desired effect. For classes requiring lots of setup I’ve found it useful when the setup code has been extracted into separate (clearly-named!) methods.

Give methods descriptive names

And remember to update them if you change the code! One of the worst pieces of code I’ve had the misfortune to set eyes on was an 800-odd-line monster method called ‘Make[X]’. Although the method itself did eventually produce X, there were dozens of nested calls to other methods also called ‘Make[X]’ which, of course, just performed steps in the process (one of them actually generated the filepath to a control). I suspect that in many cases like this the name was originally descriptive of what the method did, but code was gradually added or removed until the name no longer described the behaviour. My suggestion is that in the first instance you describe everything that the method does in its name. If you end up with a name with lots of ‘and’, ‘or’ or ‘buts’ you may find you want to split the method up.

Write comments – and make them useful

I used to work with a guy who found it terribly frustrating that the rest of us, in his eyes, didn’t comment our code properly – he thought it essential to prefix every if statement with the comment ‘check if [variable name] is true’. Sadly none of his comments ever explained the relevance of the check to the behaviour. Of course, careful naming of variables goes a long way towards expressing their role (don’t use comments instead of sensible names), but it can’t always make things completely clear. (See Jeff Atwood’s post Code Tells You How, Comments Tell You Why.) If you have an instance variable, put a comment on it explaining what it’s used for, and likewise remember summaries on your methods explaining what the parameters are for. Comments and regions can also provide useful signposts when you’re skimming through a large file of unfamiliar code. Of course, remember to update the comments if you’re changing the code.

Posted in Testing, Uncategorized

Solutions for Git Workflow Problems

I want to rename a branch
$ git branch -m <currentname> <newname>
Source: blog

I’ve forgotten which branch my branch was created from
$ git log -g <branchname>
$ git reflog show <branchname>
and look for the first entry (assumes branch was created recently).
Source: Stack Overflow

Careful! The following hints assume you haven’t pushed your changes.

I’ve just committed my changes to the wrong branch
1. Do $ git reset --soft HEAD^ – this will undo the commit but leave the changes staged.
2. Checkout the branch you actually wanted to commit to.
3. Commit your changes.
Source: Stack Overflow

I’ve just done a hard reset and need to undo it
1. Do $ git reflog and look up the sha1 of the commit you want to go back to
2. Do $ git reset --hard <sha1 to return to>
Source: Stack Overflow

I missed out a file a couple of commits ago
1. Commit the missing file(s) as a new commit
2. Do $ git rebase -i HEAD~<number of commits to go back>
3. Your text editor will bring up the list of commits. Re-arrange the rows into the desired order (oldest first) and change ‘pick’ to ‘squash’ on the new commit so that it is combined with the commit above it.
4. After saving that file, the text editor will ask you to specify a message for the combined commit.
Source: git-scm

Posted in Version Control

Reaching a Solution with TDD

I’m lucky enough to work for a company where there’s a strong culture of promoting professional development for the software engineers. One of the forms this takes is training days where we all get together to do practical exercises to try recommended programming practices.

One of the practices we’ve been trying is TDD. While this has shown itself to have the benefit of improving code coverage, it did seem to have some drawbacks. Back in January, and again this month, one of the exercises we looked at was converting numbers to their representation in Roman numerals e.g. 1 = I, 2 = II. My pairings didn’t manage to complete the exercise in either session. It seemed that the application of the principle of writing the simplest possible code to make the test pass resulted in very clumsy implementations – pretty much hard-coding one case after another, rather than actually coming to an algorithm. We tried testing different numbers in an attempt to make the tests “smaller”, but with no success.

Having had more success with a different problem – an OCR parser for converting representations of numbers in underscores, spaces and pipes to the corresponding number – I realised what the trouble was. In trying to write “small” or “simple” tests for the Roman Numerals exercise, we had been getting bogged down in writing tests for test cases, instead of writing tests for the small or simple parts of the problem. We had been thinking along the lines of “1 returns I” or “6 returns VI” instead of testing that, for example, the method returned a string representation of the number. In the OCR exercise, we sat down and discussed the problem before we touched the keyboard, and we didn’t focus on tests for the test cases but tests for the component parts of the problem (representing a digit, converting a digit representation to a number, etc.) – and we ended up with working code in the time available.

There’s a clue to the need for the latter approach in Uncle Bob’s blog post The Craftsman 62, The Dark Path, which quotes from The Moon is a Harsh Mistress:

when faced with a problem you do not understand, do any part of it you do understand, then look at it again.

In fact, when I re-did the Roman Numerals exercise taking the small-problem approach, it turned out that a lot of time could be saved by identifying and testing the key features of the problem. After testing that the method returned a representation of a digit which mapped to a single Roman numeral, I next tried testing that it returned a representation of a digit which mapped to a repetition of a single Roman numeral. In fact, my next test turned out to be a better approach to the problem: I tested that the method returned the correct result for a digit which mapped to “addition” of Roman numerals, which covered the previous case as well. With this approach a solution presented itself surprisingly quickly.
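For illustration, here is a minimal sketch of the kind of “addition”-based implementation that approach can drive out (my own code, not the one from the training day):

```javascript
// Sketch: convert a number to Roman numerals by repeated "addition" -
// append the largest numeral that fits, subtract its value, repeat.
function toRoman(n) {
    var numerals = [
        [1000, "M"], [900, "CM"], [500, "D"], [400, "CD"],
        [100, "C"],  [90, "XC"],  [50, "L"],  [40, "XL"],
        [10, "X"],   [9, "IX"],   [5, "V"],   [4, "IV"],
        [1, "I"]
    ];
    var result = "";
    numerals.forEach(function (pair) {
        while (n >= pair[0]) {
            result += pair[1];
            n -= pair[0];
        }
    });
    return result;
}

console.log(toRoman(6));    // "VI"
console.log(toRoman(1990)); // "MCMXC"
```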

So TDD doesn’t have to lead to clumsy implementations – by properly understanding the structure of the problem it’s possible to write small tests and still come up with a solution. I look forward to trying it out on our next training day.

Posted in Testing