Wednesday, December 25, 2013

JavaScript Pearl: Function Properties

Functions in JavaScript are Objects. This we know.
A lesser spotted JavaScript function with properties

We pass these objects around as arguments to other functions, we assign them to variables, set them as methods of other objects, and, ultimately, invoke them.
What we tend to forget (at least I do) is that as Objects JavaScript functions can also be the bearers of properties. 
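As a minimal illustration (my own toy example, not one from the book), a function can carry state around on itself just like any other object:

```javascript
// A plain function is also an object, so we can hang properties off it.
function greet(name) {
  // keep a call counter as a property on the function itself
  greet.call_count = (greet.call_count || 0) + 1;
  return 'Hello, ' + name;
}

greet('Ada');
greet('Grace');
console.log(greet.call_count); // 2
```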
There are a few ways we can use this fact to our advantage and a very cool example is given in Resig and Bibeault's "Secrets of the JavaScript Ninja".

The question they pose is this - if we're storing functions in an array, how can we prevent inserting the same function twice?
As Resig and Bibeault point out, one possibility is to simply iterate over the array and test each item to see if it's already part of the collection. There is a more interesting way to accomplish this, though. Directly below I provide a function that returns what looks like a simple array, but one which refuses to add the same object more than once and does so without iterating over the entire array. 

1:  function buildCache() {  
2:   var cache = [];  
3:   var cache_current_id = 0;  
4:   var old_push = cache.push;  
5:   cache.push = function() {  
6:    //since we can't actually run a forEach across the "arguments" object (it looks like an array, but it isn't)  
7:    Array.prototype.forEach.call(arguments,function(element) {  
8:     //first we check whether the element has an id - if it does, we skip adding it  
9:     if(element.cache_id === undefined) {  
10:      element.cache_id = cache_current_id;  
11:      logToDiv('adding a new function with the id of ' + cache_current_id);  
12:      cache_current_id++;  
13:      old_push.call(cache,element);  
14:     } else {   
15:      logToDiv('Function already exists with id ' + element.cache_id);  
16:     }  
17:    });  
18:   };  
19:   return cache;  
20:  }  

There are a couple of things going on here.
On line 2 I set up a simple array object - this will act as our function cache.
We then declare (line 3) a variable that will be used to both keep track of, and assign ids to, items that we will be adding to our cache.
On line 4 we save the standard push function attached to the cache array - we then (line 5) overwrite the push method so that for every element we test (line 9) whether it has already been assigned a cache_id. If the cache_id is unassigned, we add an id as a property to the element and use the original push function (which we saved in "old_push") to push it on to our cache array. 
If we find that the element already has a "cache_id" property, then we know we've already added this element and we simply skip over it.   
No need to iterate. Pretty great. You can see this code running in a JS Bin here.
Note the log output. In the JS Bin I create two functions, and add them both to the cache twice. With both functions the first time I add them to the cache they are assigned an id and pushed on to the array, but the second time we attempt to push them on to the cache we're informed they're already there.

As a bonus, I've used function properties in my logToDiv function to deal with line numbering. I leave figuring it out as an exercise for the reader.

 //simple logger to show what's happening  
 function logToDiv(message) {   
  if(logToDiv.line_no === undefined) {  
   logToDiv.line_no = 1;   
  }  
  var message_div = document.createElement('div');  
  message_div.innerHTML = logToDiv.line_no++ + ': ' + message;  
  var log_div = document.getElementById('content');  
  log_div.appendChild(message_div);  
 }  


Wednesday, December 18, 2013

JavaScript Pearl: Self-defining Functions

Here's a neat trick that I learned from Stoyan Stefanov's great book JavaScript Patterns.

If you have a function that is called several times during program execution, but that has some prep work that needs to be done only once, we can have our function do whatever upfront work it needs to do and then redefine itself.

Here's a simple example - the prep work that the function will do is to generate and return some random number. Subsequent runs should return the same number as the first run.

var foo = function() {
  var i = Math.floor(Math.random()*100) + 1; //generate some number between 1-100
  foo = function() { return i; }; //here we redefine foo() and fix i in our closure, which is still available in subsequent runs
  return i;
}; 


Here is example output from running the code

> foo(); //should return some random number
   21

> foo(); //should return the same number as the first call
   21
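Beyond the random-number toy, the same trick is handy whenever a function has genuinely expensive one-time setup. Here's a sketch (my own hypothetical example, not one from Stefanov's book):

```javascript
// Hypothetical: build a config object once, then serve the cached copy forever.
var getConfig = function() {
  // pretend this prep work is costly - parsing, feature detection, etc.
  var config = { retries: 3, timeout_ms: 5000 };
  // redefine ourselves so subsequent calls skip the setup entirely
  getConfig = function() { return config; };
  return config;
};

var first = getConfig();  // pays the setup cost
var second = getConfig(); // hits the redefined version
console.log(first === second); // true - same object, setup ran once
```

One caveat worth knowing: if the original function has been assigned to another variable (or attached as a method) before the redefinition fires, that other reference still points at the old body, and the setup will run again.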



Thursday, December 12, 2013

Working through the common currency talk in Shizgal and Conover (1996)

As I mentioned in my last post, I'm writing up a paper with Prof David Spurrett on the common currency argument. I'm woefully behind on my writing - so I'm going to use this series of blog posts to get my thinking down on paper, as it were, rather than sitting looking at a blinking cursor in LyX.

In this post I want to try and lay out the structure of the common currency argument as it's exemplified in Shizgal and Conover's 1996 paper "The Neural Computation of Utility" (it was published in Current Directions in Psychological Science, but you can grab a preprint of it here).

Prof Spurrett has actually written about this paper online before, so this will be - in some sense - a rehash, but it's useful for me. I'm also not as interested in neuroscience as he is, so I'll say less about the actual experiments than he does.

Right, so we're focusing on an argument that S&C make about how the mechanisms by which choices are made must be structured in order for agents to make certain kinds of choices.

Let's look at what they say
for orderly choice to be possible, the utility of all competing resources must be represented on a single, common dimension
There's a lot in this line, so let's try and unpack it a little.

I think it helps to point out that the words "orderly choice"  can (to keep us thinking straight) be replaced with "certain kinds of choices" - so, "for certain kinds of choices to be possible, the utility of all competing resources must be represented on a single, common dimension".
I think this helps because the way that it's phrased in the paper makes it seem that "orderly" is a property of all choices for some agent (at least it seemed that way to me).
This isn't necessarily the case - so I like rephrasing it to highlight the fact that an agent actually needs all choices available to it to be represented in some scalar value (the common currency) that's able to be compared and ranked.

Shizgal and Conover themselves point out that only certain kinds of choices need to be represented on a common scale - in particular, you don't need a common scale of evaluation if choices are made by using some kind of categorical rule such as "if offered a choice between cheese and ham, always choose the cheese, no matter how much ham you're offered".
Following this rule I will always choose cheese: even if I'm offered one tiny crumb of Camembert versus enough ham to make me the king of the world, I'll choose the cheese, no matter what. Note that there need not be any kind of weighing of options in this case. We see the cheese, we choose the cheese. Period.
No common currency needed.
Making choices like this seems irrational in the case of ham and cheese, but in some cases it might make perfect sense. More on this later.

So, if not all choices require a common currency, which do?

Shizgal and Conover suggest at least two kinds

First, a common currency seems to be required when the choice is between different kinds of things.
If I'm presented with a choice between 5 slices of ham on the one hand, and 10 slices of ham on the other - then, assuming that more is better (an assumption that S&C make explicitly), the choice is simple - mo ham fo' sure. But if it's between 5 slices of ham and two slices of cheese (assuming we're not choosing according to our categorical rule above), then the choice is more difficult. Cheese and ham differ from each other in many ways and, ultimately, when we make our choice between them we'll have to compare them along some common dimension (say, which one is saltier, or which has the higher energy yield) or reduce a number of those dimensions to some single comparable number.

Second, a common currency seems to be required when the options themselves are "complex". For instance, suppose we were to choose between a ham and cheese on rye bread and a chicken and mayo toasted panini. Each of the elements that make up the two sandwiches contributes to the overall level of enjoyment or aversiveness (we can speak in terms of utility here if we like; for now we'll keep it loose). Not only are we comparing between two different kinds, but we're faced with options made up of different kinds and then having to choose between them. This requires not only a way of reducing different kinds to a comparable scale but also a way of summing the values of the items that make up an option.

In both of these choice situations it's rational (in an economic sense) to always choose the higher ranked of the two options. This, it seems to me, is what S&C mean when they speak about ordered choice - the choosing of the higher valued option in a choice situation that includes (potentially) different kinds of goods and where (potentially) the choices available are complex in the sense sketched above.

So, let's quickly recap. Although common currencies are not necessary for choices based on a categorical rule, S&C assert that they are necessary for ordered choices.

So what's the big deal about ordered choice? Well, two things (at least).

Firstly, certain kinds of animals (like rats, and humans) seem to be able to make ordered choices, and if S&C are right there would need to be some place in the brain where the reduction to, and comparison of, the common currency / unidimensional representation of value actually happens. In their paper they lay out experiments showing that there is good reason to believe this is happening in the mid-forebrain of rats.

Second, and more generally, if the common currency represents or tracks a choice option's contribution to the chooser's fitness, then any agent whose goal is to maximize fitness will need the kind of computational machinery that is able to perform - at the very least - the following operations

  1. reducing multidimensional representations of objects to a single value (reduction of options to a common currency)
  2. ranking options according to this common currency (in order to pick the highest ranked)
  3. adding up the values of the several items that make up an option in a complex choice situation.
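To make these three operations concrete, here's a toy sketch in JavaScript (the weights and sandwich "dimensions" are entirely invented for illustration - nothing here comes from S&C):

```javascript
// invented weights for reducing two dimensions to one scalar "utility"
var weights = { saltiness: 0.3, energy: 0.7 };

// (1) reduce a multidimensional representation of an item to a single value
function utilityOf(item) {
  return item.saltiness * weights.saltiness + item.energy * weights.energy;
}

// (3) a complex option's value is the sum of the values of its parts
function optionValue(option) {
  return option.reduce(function(total, item) {
    return total + utilityOf(item);
  }, 0);
}

// (2) rank the options on the common scale and pick the highest valued
function choose(options) {
  return options.slice().sort(function(a, b) {
    return optionValue(b) - optionValue(a);
  })[0];
}

var hamAndCheese = [{ saltiness: 8, energy: 5 }, { saltiness: 6, energy: 7 }];
var chickenMayo  = [{ saltiness: 4, energy: 6 }, { saltiness: 3, energy: 8 }];
console.log(choose([hamAndCheese, chickenMayo]) === hamAndCheese); // true
```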
So where to from here?
Well, for me there are two interesting questions. 
Firstly, and this I really don't know, I wonder if there's a proof that these kinds of choices require a common scale of representation. Can we find a counterexample to S&C's claim? 

Secondly, could an agent that couldn't do the kinds of subtle calculations required for ordered choice still behave or choose (near?) optimally?

My gut tells me that the answer to the first question will probably be "yes, you need a common scale of representation for ordered choice", but also, to the second, that "some things can get along just fine - and close to optimally - without having to make these kinds of choices and, therefore, have no need for the kind of computational machinery required to make ordered choices".

Thursday, October 31, 2013

Common Currencies - PSSA abstract

So I have the extreme good fortune to be working with Prof. David Spurrett on a paper for next year's Philosophical Society of South Africa annual conference. It's a small piece of a much longer and larger project on how unified motivation is (Prof Spurrett is documenting this project over at his common currencies blog).

Without further ado, here is the abstract - the content isn't entirely fixed just yet, but this is a reasonable outline of what we're trying to achieve.

Distributed control systems and order in action selection.

Blaize Kaye and David Spurrett (UKZN)


There is a commonly held view in the cognitive and behavioural sciences that orderly action selection is best explained by the existence of a determinate psychological system that represents potential or actual outcomes of action along a single dimension of value.

In this paper, we begin by sketching a general version of the argument for a “common currency” for decision making after which we present a potential challenge to this family of arguments posed by work in behaviour-based robotics. Specifically, we look at Rodney Brooks' work on “subsumption architectures”, an approach that has been especially influential within 4EA (embodied, extended, embedded, enactive, affective) approaches to cognitive science and the philosophy of mind.

With the subsumption architecture, Brooks eschews explicit representations and centralised planning in favour of a set of modules, organized into a hierarchy of layers, each of which is more or less independently responsible for implementing one of the agent's goals. Crucially, while these layers do communicate, communication is restricted to extremely simple signalling – for instance, disabling or activating another layer or module.

Robots controlled by subsumption architectures are able to engage in simple, but fairly robust patterns of behaviours. As such, any theory of motivation that posits a unidimensional representation of value will need to address the fact that there exist agents that demonstrate ordered patterns in action selection but whose internal control system is both distributed and anti-representational.

Tuesday, October 29, 2013

A lesson on character and exposition by Saramago

To leap ahead, by bold suppositions, or by dangerous deductions, or, worse still, by ill-considered guesswork, to what their thoughts were would not, in principle, if we consider how promptly and impudently the heart's secrets are often violated in stories of this kind, would not, as we were saying, be an impossible task, but, since those thoughts will, sooner or later, be expressed in actions, or in words that lead to actions, it seems to us preferable to move on and wait quietly for the actions and words to make those thoughts manifest
--  José Saramago, The Cave

Wednesday, October 2, 2013

"Testable JavaScript" by Mark Ethan Trostler

While there are a number of JavaScript programming manuals that teach the basics of the language, there is a real need for texts aimed at working JavaScript programmers who would like to take their practice in a more professional direction. "Testable JavaScript" by Mark Ethan Trostler does a fine job of addressing this particular concern. 
The book presents a really nice spread of topics that range from how one's code should be written to maximise readability and maintainability through to automating your workflow.

At the time of writing this review I can heartily recommend the book. The one concern that I have is that some of the specific technologies (for testing, automation, etc.) that Trostler chooses to cover may not age as well as the material that deals with best practices for code composition. 
If the book gets semi-regular updates, this will not be a problem (and this is really a concern with any technology book, but I mention it because some of the information in the book really does seem timeless and it would be a waste if the book wasn't purchased in the future because of the more dated material).

Trostler's book should become a go-to guide for professional JavaScript development.


I review for the O'Reilly Reader Review Program

Wednesday, July 17, 2013

"Functional Javascript" by Michael Fogus; O'Reilly Media;

In his new book, Fogus attempts the twofold task of introducing his audience to functional programming in general, and demonstrating how one can achieve a functional style using Javascript and the underscore.js library in particular.

Reading this book was my first sustained investigation into functional programming proper. I had heard it mentioned in various contexts through the years, but as far as real reading into the topic, I doubt that I had done more than simply skimmed the functional programming wikipedia page.

I had, naively, expected to be faced with something entirely foreign when I initially opened the book. What I found, though, is probably best compared to the first time you listen to jazz music after years of listening to rock. All the parts are the same, the musicians use the same instruments, making the same sounds, but use them in ways that are both familiar but, in some sense, radically different at the same time.
The example code will be comprehensible to anyone familiar with plain old imperative Javascript coding, though one might be struck by the way things are done. The techniques may seem unnatural at first, but once one starts to get a feel for the functional style it becomes clear how functional programming makes it easier to reason clearly about your code, something that (it is obvious to me now) is much more difficult in the good old fashioned OOP or imperative programming paradigms.

I have no real gripes about the book. I picked up a few minor typos, but these are to be expected in a first printing and all seem to have already been reported. All in all I think it's a fantastic addition to the JS literature and should be read by any JS programmer who is serious about writing extensible, scalable code.

I see myself returning to this book again and again - I can think of no higher compliment for technical writing.

You can purchase the book directly from O'Reilly at the "Functional Javascript" product page and support DRM free publishing.

  I review for the O'Reilly Blogger Review Program

Tuesday, June 25, 2013

Note: Reducing nondeterminism in Machines to branching deterministic processes

"The key to simulating an NFA on a deterministic computer is to find a way to explore all possible executions of the machine. This brute-force approach eliminates the spooky foresight that would be required to simulate only one possible execution by somehow making all the right decisions along the way. When an NFA reads a character, there are only ever a finite number of possibilities for what it can do next, so we can simulate the nondeterminism by somehow trying all of them and seeing whether any of them ultimately allows it to reach an accept state." (Stuart, 2013)
 In my (admittedly shallow) foray into theoretical computer science I've become interested in the way in which nondeterministic machines (and, here, a machine designates anything with a particular formal structure), such as the nondeterministic pushdown automata, differ from purely deterministic machines / versions of themselves.

Daniel I.A. Cohen in his fun (but sometimes too relaxed) "Introduction to computer theory" points out in his discussion of Transition Graphs (the first nondeterministic machine he presents) that with the introduction of nondeterminism we have "changed something more fundamental than just the way the edges are labeled", namely, a nondeterministic machine "makes us exercise some choice in its running" (Cohen, 1997).

Cohen's talk about choice is misleading, because it suggests, as the Stuart quote above notes, that in order for our machine to do its work, it (or the person "running" it) somehow has to know precisely how it should read/interpret/act on the input we've provided. This won't fly, because what we're interested in is precisely how we can do this mechanically. There's no room for human interpretation/choice/whatever. So how do we get rid of these notions?

There are two steps.
The first is to note that nondeterminism in these machines doesn't require that the machines themselves control which paths they take when they get to a fork in the road. Cohen's loose talk suggested it was "us" determining the pathway through the machine - but this was really just shorthand for saying that in order for a particular machine to be seen as accepting some input, there need only be at least one possible path through the machine that accepts it.
The second step is to note that we can mechanize this process of checking all possible paths by running our input string through our machine and, at each point where we are required to "choose" a possible path through the machine, simply choosing all of the paths. Each nondeterministic action in our machine can then be seen as a deterministic step that creates two or more branches (or copies of the machine), each of which evolves according to the choice that was made at the point of branching. If any of these new branches leads to a nondeterministic state, the execution branches again, and so on until either (1) the string is rejected by all the branches, or (2) at least one of the branches accepts the string.
This is a nice strategy because it reduces these "spooky" nondeterministic choice points into an entirely deterministic process.
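The branching strategy can be sketched directly in code. Below is a toy simulation (my own example, not Stuart's implementation) that tracks the set of all states a small NFA could be in after each input character - every "choice point" simply follows all the edges at once:

```javascript
// Simulate an NFA by tracking the set of states the machine "could" be in.
// The machine below (states, transitions, accept set) is my own toy example.
function acceptsNFA(transitions, start, acceptStates, input) {
  var current = [start];
  input.split('').forEach(function(symbol) {
    var next = [];
    current.forEach(function(state) {
      var targets = (transitions[state] && transitions[state][symbol]) || [];
      targets.forEach(function(t) {
        // each branch is an ordinary deterministic step; we just keep them all
        if (next.indexOf(t) === -1) { next.push(t); }
      });
    });
    current = next;
  });
  // accept if at least one branch ended in an accept state
  return current.some(function(state) {
    return acceptStates.indexOf(state) !== -1;
  });
}

// NFA accepting strings over {a,b} that end in "ab"
var transitions = {
  s0: { a: ['s0', 's1'], b: ['s0'] }, // on 'a' we branch: stay put, or guess the end is near
  s1: { b: ['s2'] }
};
console.log(acceptsNFA(transitions, 's0', ['s2'], 'aab')); // true
console.log(acceptsNFA(transitions, 's0', ['s2'], 'aba')); // false
```

For simplicity this ignores ε-moves (free transitions); handling those would require computing the ε-closure of the state set after each step.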
The next step for this investigation is to find out whether this kind of branching process will suffice for mechanizing at least a large subset of nondeterministic (finite) machines. Further, I need to clear up my terminology etc. and start defining the machines more precisely.

*EDIT: note that in order to show that a nondeterministic machine accepts a particular string, one has to use a meta-process. In some cases this meta-process/machine can be made of the same "stuff" - one can reduce a Nondeterministic FA to a simple deterministic FA - although I don't believe this is the case for all classes of machine*

References:

Cohen, D. 1997. Introduction to Computer Theory (2nd edition). John Wiley & Sons, Inc.
Stuart, T. 2013. Understanding Computation. O'Reilly Media, Inc.

Saturday, January 5, 2013

X-phi - experimenter bias in vignette composition.

**Note: This post is essentially a posting of a half cooked idea - I'm trying to articulate a particular issue I have with one of the suggestions made by the paper under discussion. If anyone reads this and wants to help develop the argument - or explain why it doesn't matter - I'd be deeply appreciative **

Brent Strickland and Aysu Suben have a recent paper out in which they identify (and try to demonstrate experimentally) an important source of potential experimenter bias in the survey style experiments that comprise the bulk of the work in X-phi.

The problem is, simply, that knowledge of the experimental hypothesis that is being tested may affect the design of the experimental stimuli - which, in X-phi, are typically short vignettes. Subjects are exposed to these vignettes and their immediate, intuitive responses recorded for analysis.

 Strickland and Suben attempt to demonstrate how knowledge of the hypothesis under investigation might introduce this kind of bias by doing a replication-with-a-twist of an earlier experimental result of Knobe and Prinz's.

I was going to recap the entire argument and experiment, but in this case a stellar summary is already up over at the experimental philosophy blog. The comments section on this entry is of a characteristically high quality (seriously, read the comments - they really push the debate forward).
Further, the original paper is only 11 pages, and really straightforward - so read that if the X-phi blog summary wasn't enough.

Another problem?


Now that you're back and know all about the experiment ...

One of the suggestions that Strickland and Suben make in their paper about how to avoid this kind of experimenter bias in X-phi is to have "blind stimulus creation", in which people who have not been exposed to the hypothesis under investigation are used to draw up the vignettes to be used in the eventual experiments.

I don't think that anyone has yet pointed out just how problematic this may be, given that the aim of many of the experiments in X-phi is specifically to test our natural intuitions about broadly philosophical issues.
My (not yet fully developed) worry is that if people do in fact have a set of intuitions about the issues that our experimenters are investigating, then those intuitions would potentially introduce a bias into the stimuli anyway.

Let's take the idea of freedom of will as an example. Let's assume that people are naturally incompatibilist - that is, let's assume that people untainted by years of reading philosophy intuitively hold that freedom of will is not possible in a fully deterministic universe.
Suppose, further, that we want to test this hypothesis with vignettes and surveys but are afraid that our exposure to our hypothesis will introduce a bias into our stimuli creation - I fail to see how getting someone who, we've hypothesized, holds some pre-theoretical intuition about determinism and free will to draw up the stimuli will help. It seems possible, even likely, that the stimuli they create will be influenced by this pre-theoretical (or, if they are philosophers, their pet-theoretical) understanding of the issue at hand.

A step in the right direction to a much more productive - and interesting - approach is suggested in a comment by Strickland:

I'm most interested in comparing the effectiveness of different possible solutions. One simple starting point would be to have three groups of on-line experimenters : (1) receives hypothesis A (2) receives hypothesis B (3) receives no hypothesis. Then you give all m-turkers clear instructions on the types of sentences they need to build (e.g. all sentences must have a group as a grammatical subject and must contain the verb "desire". then the experimenter can choose the tense and any complements)....

This is much more thorough - unlike the Strickland and Suben experiment, which only had groups (1) and (2) from the quote above - the inclusion of the "hypothesis neutral" group (3) would give us the opportunity not only to test people's responses to the given vignettes, but also to see how the actual vignettes produced by hypothesis-naive stimulus creators compare to those created by subjects who had been exposed to hypotheses.