Showing only posts by William Morgan [.rss for this author]. See all posts.

Trollop 1.11 has been released. This is a minor release with only one new feature: when an option <opt> is actually given on the commandline, a new key <opt>_given is inserted into the return hash (in addition to <opt> being set to the actual argument(s) specified).

This allows you to detect which options were actually specified on the commandline. This is necessary for situations where you want one option to override or somehow influence another. For example, configure’s --exec-prefix and --bindir flags: if --exec-prefix is specified, you want it to override the default value for --bindir, unless --bindir is also given. If neither is given, you want to use the default values.
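To make that override concrete, here’s a sketch of the logic, using a plain Ruby hash to stand in for Trollop’s return value (the option values and paths are made up for illustration):

```ruby
# Simulated Trollop return hash: --exec-prefix was given on the
# commandline, --bindir was not (so no :bindir_given key is present).
opts = {
  :exec_prefix       => "/opt/myapp",
  :exec_prefix_given => true,
  :bindir            => "/usr/local/bin",  # the default value
}

# Override the default bindir, but only if --bindir wasn't itself given.
unless opts[:bindir_given]
  opts[:bindir] = File.join(opts[:exec_prefix], "bin")
end

puts opts[:bindir]  # => /opt/myapp/bin
```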

This should be a backwards-compatible release, except for namespace issues if you actually had options called <something>_given.

William Morgan, January 30, 2009.

One of the fundamental questions in VM design for OO languages is how you represent your object handles internally. Regardless of what the language itself exposes in terms of fundamental types, boxing, and the like, the VM still has to shuffle objects around on the stack frame, pass them between methods/functions, etc.

The traditional way to do this is the “tagged union”, where you use a two-element struct consisting of a type field and a union of values for each possible type. One of these types is probably an object pointer; the other types let you represent unboxed fundamental types like ints and floats. This is the approach used by Rubinius, by Lua, and probably by many others.

The Neko VM instead uses a single pointer-sized word for everything, and fixes the lowest bit as the integer bit. If this bit is on, the value represents a 31-bit integer; if it’s off, the value is a pointer. Of course this means that Neko objects can only live at even addresses in memory. (I’m not sure what happens on 64-bit machines; either ints stay at 31 bits or they grow to 63-bit longs; the pointers certainly grow to 64 bits.)
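The tagging arithmetic described above can be sketched in a few lines of Ruby (the helper names here are mine, not Neko’s — this just shows the bit manipulation):

```ruby
# Neko-style value tagging: the lowest bit of the word is the integer
# bit. A 31-bit integer n is stored shifted left with the bit set;
# pointers (always even addresses) have the bit clear.
def tag_int(n)
  (n << 1) | 1
end

def int?(word)
  (word & 1) == 1
end

def untag_int(word)
  word >> 1
end

word = tag_int(21)
puts int?(word)        # true  -- low bit set, so it's an integer
puts untag_int(word)   # 21
puts int?(0x1000)      # false -- an even address, so it's a pointer
```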

The result is that Neko object handles are the size of pointers, hence small, but Neko loses the ability to handle unboxed floats. All float operations will require lots of heap allocation and dereferencing. On the other hand, Lua object handles are much larger, but Lua can do float arithmetic on the stack. (The VM stack, not the system stack.)

The Neko folks claim that their representation is better, because it’s smaller, and faster when you’re copying things around. But what value do you really get by sacrificing floats? And what about when we take into account different architectures?

Comparing sizes is easy. On a 32-bit machine, Lua objects take up 12 bytes: a double is 8 bytes, and the tag grows the struct to 12. So Lua object handles are three times the size of Neko handles. On a 64-bit machine, Lua objects take up 16 bytes, and Neko objects take up 8. Note that Lua handles are now only twice as big as Neko handles.

Comparing speed is a little more interesting. How much is lost, exactly, by copying around those extra 8 bytes on each architecture? I did some simple experiments where I copied objects of various sizes around 10 million times, picking a random start and end point for each copy within a block of allocated memory on the heap, and measured how long everything took.

On my 32-bit machine, taking 10m random numbers took 2.874 seconds; copying 12-byte objects from one location to the other each time took an additional 91ms. Copying 4-byte objects took only an extra 77ms. That works out to a 15.3% slowdown for Lua.

On my 64-bit machine, taking 10m random numbers took 517ms; copying 16-byte objects each time took an additional 1.85 seconds; copying 8-byte objects took an additional 1.81 seconds. That works out to a 2.2% slowdown for tagged unions.

Personally, I find the 32-bit case maybe arguable, but the 64-bit case doesn’t seem that compelling. Copying object handles around is just one of the very many things the VM spends its time on, so the overall slowdown will be much less than 2.2%. I don’t know that sacrificing float performance, and half of your integer space, is really worth it.

If you want to run these experiments for yourself, the code is here. Please let me know if I’m doing something wrong!

William Morgan, January 15, 2009.

I swear to god that two weeks ago I started writing a VM for a classless OO language with scoped mixins. And now you go and release Potion.

William Morgan, January 9, 2009.

There’s a good discussion with lots of interesting details on a recent patch submission for adding indirect threading to the Python VM. (And by “discussion” I mean a single, un-threaded sequence of comments where you have to manually figure out who’s replying to what, which apparently is what everyone in the world is happy with nowadays except for me. Email clients have had threading since 1975, bitches, so get with the fucking program. [Hence, Whisper—ed.]) Pointed to by programming.reddit.com, which remains surprisingly useful, as long as you cut yourself off once the comment thread devolves (as it invariably does) into meta-argumentation.

Indirect threading is a vaguely-neat trick that I first learned about around the time I was getting into the Rubinius code. The idea is that, in the inner loop of your VM, which is going through and interpreting the opcodes one at a time (dispatching each to a block of handler code), instead of jumping back to the top of the loop at the end of each handler’s code section, you jump directly to the location of the handler code for the next opcode. The big benefit is not so much that you save a jump per opcode (which maybe is optimized out for you anyways), but that the CPU can do branch prediction on a per-opcode basis. So common opcode sequences will all be pipelined together.

But the discussion shows that this kind of thing is very compiler- and architecture-dependent, and you have to spend a lot of time making sure that GCC is optimizing the “right way” for your particular architecture, is not overly-optimizing by collapsing the jumps together, etc. OTOH, the submitter is reporting a 20% speedup, and this is the very heart of the VM, so it could very well be worth spending time on such trickery.


William Morgan, January 3, 2009.

One of the “fun” things about living in MA (besides the obvious fun of “the weather” and “the people”) is that you can’t get wine shipped directly to your house anywhere in the state. Until 2005 it was illegal; until very recently it was effectively illegal; and now, thanks to a district court decision overturning a state law, it’s merely uncertain.

But uncertainty is a positive step in this state. If you read the “factual background” section of the text of the decision itself, you’ll get a fun overview of how, in typical Massachusetts fashion, the current situation is the result of a culture of cronyism and old-boys-club-ism, with wine wholesalers in the state controlling the legislative process and protecting their own monopoly at the expense of both wineries and consumers, typically while justifying their actions by appealing to the state’s deep-seated Puritan anti-alcohol sentiment.

The specifics of the legislation that was just overturned should give you an idea of the crass, absolutely unsubtle gerrymandering the state legislature is willing to stoop to, in this case to circumvent another state law that was overturned as unconstitutional in 2005 (by the US Supreme court, no less!):

The detailed account sheds light on a fact that we’d known all along—that the 30,000 gallon capacity cap was set conveniently above the production capacity of the largest winery in Massachusetts (24,000 gallons). This cap was designed to allow the Massachusetts wineries to ship directly to consumers, while simultaneously protecting Massachusetts wholesalers by prohibiting out-of-state medium and large wineries from doing the same.

Of course, we’re still a ways away from being able to join the Screaming Eagle wine-of-the-month club: MA still has a host of other regulations that make delivery services like FedEx and UPS either unable or unwilling to deliver wine, like requiring a special permit for each vehicle that might have wine on it. But maybe we’re getting closer. If nothing else, we can hope the increased attention will have a “sunlight is the best disinfectant” kind of effect on the issue.

William Morgan, January 2, 2009.

Found a good, old post on a Scheme mailing list which explains the historical context behind the very confusing terms “closure”, “downwards funargs problem”, and “upwards funargs problem”: Max Hailperin in 2001.

The reason is that [the term] “closure” only makes sense in a particular historical context, where procedures had previously been left “open”, that is with free variables not associated with any particular binding. This was the case in various pre-Scheme Lisps, and led to what was known as the “funarg problem,” short for “functional argument”, though it also was manifested when procedures were used in other first-class ways than as arguments, for example, as return values, where it was confusingly called the “upward funarg problem” (by contrast to the “downward funarg problem,” where the arg was genuinely an arg). The “funarg problem” is what logicians had been calling “capture of free variables,” which occurs in the lambda calculus if you do plain substitution, without renaming, in place of proper beta-reduction.

So anyhow, an evolutionary milestone in the development of the Lisp family of languages was the realization that procedures should be “closed”, that is, converted into a form where all the variables are bound rather than free. (The way this is normally done, as others have written in this thread, is by associating the procedure text, which still has free variables, with a binding environment that provides the bindings.)

Because this was such a big deal in terms of the languages’ evolution, Lisp hackers took to using the word “closure” rather than just “procedure” to emphasize that they were talking about this great new lexically scoped kind of procedure.
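A quick Ruby illustration of the “upward funarg” case the quote describes: a procedure returned as a value keeps its binding environment alive, so the free variable n in the lambda stays correctly bound even after make_adder has returned (the names here are mine, for illustration):

```ruby
# make_adder's parameter n is a free variable inside the lambda; a
# lexically scoped closure carries the binding of n along with the
# procedure text, so it survives the call that created it.
def make_adder(n)
  lambda { |x| x + n }
end

add5  = make_adder(5)
add10 = make_adder(10)

puts add5.call(3)   # 8  -- each closure remembers its own n
puts add10.call(3)  # 13
```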

William Morgan, November 29, 2008.

Some git-fu I’ve been finding particularly useful recently:

1. Untangling concurrent changes into multiple commits: git add -p is the greatest thing since sliced bread. But did you know it features an ‘s’ command which allows you to split a hunk into smaller hunks? Now you can untangle pretty much anything.
2. Splitting a previous commit into multiple commits: I’ve been finding this one useful for quite a while. Start with a git rebase -i, mark the commit(s) as edit, and once you get there, do a git reset HEAD^. All the changes in that commit will be moved out of the staging area, and you can git add/git commit to your heart’s content. Finish with a quick git rebase --continue to the throat.
3. Fixing your email address in previous commits: I often make a new repo and forget to change my email address. (For historical, and now silly, reasons, I like to commit to different projects from different addresses, and I often screw it up.) Here’s how to do a mass change: git filter-branch --env-filter "export GIT_AUTHOR_EMAIL=your.new.email.address" commit..HEAD, where commit is the first commit to be affected. Of course, changing the email address of a commit changes its id (and the id of all subsequent commits), so be careful if you’ve published them. (Also note that using --env-filter=... won’t work. No equal sign technology.)
4. A git log that includes a list of files modified by each commit: git log --stat, which also gives you a nice colorized histogram of additions/deletions for each file. This is a nice middle ground between git log and git log -p.
5. Speaking of git log -p, here’s how to make it sane in the presence of moves or renames: git log -p -C -M. Otherwise it doesn’t check for moves or copies, and happily gives you the full patch. (These should be on by default.)
6. Comparing two branches: you can use git log --pretty=oneline one..two for changes in one direction (commits that ‘two’ has that ‘one’ doesn’t); and two..one for the opposite direction. You can also use the triple-dot operator to merge those two lists into one, but typically I find it useful to separate the two. Or you can check out git-wtf, which does this for you.
7. Preview during commit message: git commit -v will paste the diff into your editor so you can review it while composing the commit message. (It won’t be included in the final message, of course.)
8. gitk: don’t use it. You’ll get obsessive about merge commits, rebasing, etc., and it just doesn’t matter in the end. It took me about 4 months to recover from the bad mindset that gitk put me into.
William Morgan, October 28, 2008.

Just read a great Steven Pinker article about morality that appeared in the NY Times earlier this year. Being the curmudgeonly contrarian that I am, I most enjoyed the identification and dissection of the moralization so prevalent but so rarely recognized in my peer group:

[W]ith the discovery of the harmful effects of secondhand smoke, smoking is now treated as immoral. Smokers are ostracized; images of people smoking are censored; and entities touched by smoke are felt to be contaminated (so hotels have not only nonsmoking rooms but nonsmoking floors). The desire for retribution has been visited on tobacco companies, who have been slapped with staggering “punitive damages.”

And:

[W]hether an activity flips our mental switches to the “moral” setting isn’t just a matter of how much harm it does. We don’t show contempt to the man who fails to change the batteries in his smoke alarms or takes his family on a driving vacation, both of which multiply the risk they will die in an accident. Driving a gas-guzzling Hummer is reprehensible, but driving a gas-guzzling old Volvo is not; eating a Big Mac is unconscionable, but not imported cheese or crème brûlée. The reason for these double standards is obvious: people tend to align their moralization with their own lifestyles.

There’s also the compelling idea that we’re not actually less moral than we were in the past (a claim that old people have been making since time immemorial), but rather, our morality has simply shifted to other things:

This wave of amoralization has led the cultural right to lament that morality itself is under assault, as we see in the group that anointed itself the Moral Majority. In fact there seems to be a Law of Conservation of Moralization, so that as old behaviors are taken out of the moralized column, new ones are added to it. Dozens of things that past generations treated as practical matters are now ethical battlegrounds, including disposable diapers, I.Q. tests, poultry farms, Barbie dolls and research on breast cancer.

I’m reminded of one of my favorite Paul Graham essays, “What You Can’t Say”, the thesis of which is that the powerful ideas that define the modern age are often ideas that were completely verboten in earlier times (e.g. Copernicus’s claim that the earth revolves around the sun); thus, if we want to identify powerful ideas that will shape the future, we should look to things that are taboos today.

William Morgan, October 22, 2008.

I released a new version of Trollop with a couple minor but cool updates.

The best part is the new :io argument type, which uses open-uri to handle filenames and URIs on the commandline. So you can do something like this:

require 'trollop'
opts = Trollop::options do
  opt :source, "Source file (or URI) to print",
      :type => :io,
      :required => true
end
opts[:source].each { |l| puts "> #{l.chomp}" }


Also, when trying to detect the terminal size, Trollop now tries running stty size before loading curses. This gives better results when running under screen (for some reason curses clears the terminal when initializing under screen).

I’ve also cleaned up the documentation quite a bit, expanding the examples on the main page, fixing up the RDoc comments, and generating the RDoc documentation with a modern RDoc, so that things like constants actually get documented.

If you’re still using OptParse, you should really give Trollop a try. I guarantee you’ll write far fewer lines of argument-parsing code, and you’ll get all sorts of nifty features like help page terminal size detection.

William Morgan, October 22, 2008.

On the topic of numeric paradoxes, here’s another one that drove a lot of work in economic and decision theory: the St. Petersburg paradox.

Here’s the deal. You’re offered a chance to play a game wherein you repeatedly flip a coin until it comes up heads, at which point the game is over. If the coin comes up heads the first time, you win a dollar. If it takes two flips to come up heads, you win two dollars. The third time, four dollars. The fourth time, eight dollars. And so on; the rule is, if you first see heads on the nth flip, you win 2^(n-1) dollars.

How much would you pay to play this game?

The paradox is: the expected value of this game is infinity, so according to all your pretty formulas, you should immediately pay all your life savings for a single chance at this game. (Each possible outcome has an expected value of 50 cents, and there are an infinite number of them, and expectation distributes over summation, so the expected value is an infinite sum of 50 cents, which works out to be a little thing I like to call infinity dollars.)

Of course that’s a paradox because it’s crazy talk to bet more than a few bucks on such a game. The paradox highlights at least two problems with blithely using positive EV as the reward you’ll get if you play the game:

1. It assumes that the host of the game actually has infinite funds. The Wikipedia article has a very striking breakdown of what happens to the St. Petersburg paradox when you have finite funds. It turns out that even if your backer has access to the entire GDP of the world in 2007, the expected value is only $23.77, which is quite a bit short of infinity dollars.
2. It assumes you play the game an infinite number of times. That’s the only way you’ll get the expected value in your pocket. And the St. Petersburg paradox is a great example of just how quickly your actual take-home degenerates when subject to real-world constraints like finite repetitions. It turns out that if you want to make $10, you’ll have to play the game one million times; if you’re satisfied with $5, you’ll still have to play a thousand times.
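Point 1 is easy to check directly. Here’s a sketch that computes the expected value when every payout is capped at the backer’s bankroll (the function name is mine, and I’m assuming 2007 world GDP was roughly $54.3 trillion, which reproduces Wikipedia’s $23.77 figure):

```ruby
# Expected value of the St. Petersburg game when the backer's bankroll
# is finite: payouts larger than the bankroll are truncated to it.
def capped_ev(bankroll)
  ev = 0.0
  n = 1
  # Untruncated terms: P(heads first on flip n) = 2^-n, payout 2^(n-1),
  # so each term contributes exactly $0.50.
  while 2.0**(n - 1) < bankroll
    ev += 2.0**-n * 2.0**(n - 1)
    n += 1
  end
  # All remaining outcomes pay out the whole bankroll; their total
  # probability is 2^(1-n).
  ev + bankroll * 2.0**(1 - n)
end

puts capped_ev(54.3e12).round(2)  # ~23.77, a bit short of infinity dollars
```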

The classical answer to the paradox has been to talk about utility, marginal utility and things like that; i.e., people with lots of money value more money less than people without very much money. And recent answers to the paradox, e.g. cumulative prospect theory, are along the lines of modeling how humans perceive risk, which (unsurprisingly) is not really in line with the actual probabilities.

But it seems to me that these solutions all involve modeling human behavior and explaining why a human wouldn’t pay a lot of money to play the game, either because money means less as it gets bigger or because they mis-value risks. But the actual paradox is not about human behavior or psychology. It’s the fact that the expected value of a game is not a good estimate of the real-world value of a game, because expected value can make assumptions about infinite funds and infinite plays, and we don’t have those.

So my solution to the St. Petersburg paradox is this: drop all events that have a probability less than some small epsilon, or a value more than some large, um, inverse epsilon. That neatly solves both of the infinity assumptions. (In this particular case one bound would do, because the probabilities drop exponentially as the values rise exponentially, but not in general.) I’ll call this the REV: the realistically expected value.

In this case, if you set the lower probability bound to be .01, and the upper value bound to be one million, then the REV of the St. Petersburg paradox is just about three bucks. (The upper value bound doesn’t even come into play.) And that’s about what I’d pay to play it.
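Here’s a sketch of that REV computation (the function name and default bounds are mine; it just drops every term whose probability is below epsilon or whose payout exceeds the value cap):

```ruby
# Realistically expected value of the St. Petersburg game: sum the
# expected-value contributions, but skip outcomes that are too unlikely
# (prob < epsilon) or pay out implausibly much (payout > value_cap).
def rev(epsilon = 0.01, value_cap = 1_000_000)
  total = 0.0
  n = 1
  loop do
    prob   = 2.0**-n        # heads first appears on flip n
    payout = 2.0**(n - 1)   # 1, 2, 4, 8, ... dollars
    break if prob < epsilon || payout > value_cap
    total += prob * payout  # each surviving term adds $0.50
    n += 1
  end
  total
end

puts rev  # 3.0 -- terms n = 1..6 survive (2^-7 < 0.01), worth 50 cents each
```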

So there you go. Fixed economics for ya.

William Morgan, October 21, 2008.