Monoid means Appendable

Imagine an interface called Appendable. An Appendable is just something that can have other things (of the same type) appended to it. String would be one example.

In order to be a proper Appendable, you need to define two things:

  1. An EMPTY value. (For String, this would be the empty string.)

  2. An append function, which takes two things (of your type) and returns the result of appending one to the other. (For String, this function would just concatenate the given strings.)

You also need to follow two rules:

  1. If you append anything to EMPTY, or append EMPTY to anything, you get back the same value you put in.

  2. The append function is associative, meaning you get the same result from calling append(append(foo, bar), baz) as you do from calling append(foo, append(bar, baz)).

List would be another example of an Appendable; its EMPTY value is just an empty list. Number can be an Appendable—the EMPTY value is 0, and the append function simply adds two numbers.
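Here’s a rough sketch in JavaScript of what a couple of Appendables might look like (the names StringAppendable and SumAppendable are just made up for illustration):

// Two Appendables: one for strings, one for numbers under addition.
var StringAppendable = {
  EMPTY: "",
  append: function (a, b) { return a + b; }
};

var SumAppendable = {
  EMPTY: 0,
  append: function (a, b) { return a + b; }
};

// Rule 1: appending EMPTY changes nothing.
console.log(StringAppendable.append("foo", StringAppendable.EMPTY)); // "foo"

// Rule 2: grouping doesn't matter.
console.log(SumAppendable.append(SumAppendable.append(1, 2), 3)); // 6
console.log(SumAppendable.append(1, SumAppendable.append(2, 3))); // 6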

For pedantic reasons, Appendable is often referred to in programming by the less intuitive mathematical term Monoid. An Appendable without EMPTY would fit the definition of a Semigroup.

Requesting Whiteboard Coding is Doing it Wrong

Recently Vivek Haldar responded to a tweet of mine in which I disparaged the practice of having programming candidates write code on whiteboards. Vivek’s point was to advise candidates, but here I’d like to advise interviewers. Although a concise explanation can fit in 140 characters, I think the subject warrants elaboration.

As in Vivek’s example, suppose you have asked a candidate to code a sorting algorithm on a whiteboard.

Suppose instead you had them describe the solution aloud. Is that useful? Absolutely—it’s a test of their ability to communicate in English about programming. This happens all the time on teams: someone will say “I was thinking about doing it this way,” and others will propose alternative solutions. A productive discussion will yield a better solution, on average, than the original, so this is an important skill to evaluate in a candidate.

Of course, if you handle all the interview exercises this way, you will never get to see the candidate solve a problem with actual code. I have personally hired people who turned out to be much better at discussing programming concepts than writing programs, and it went badly for the team. I learned the hard way to carefully evaluate a candidate’s ability to describe programming solutions both in English and in code.

However, whiteboard coding only tells you how good someone is at coding on whiteboards. It does not tell you how good they are at coding on computers. How different are these things? Let’s compare:

1. Web Searches

In real coding, Google is an extension of your brain. You can’t remember the last day when you didn’t use it to solve a programming problem. You’d be extremely skeptical of any coworker who didn’t make regular use of Web searches.

In whiteboard coding, Web searches are banned outright. You may write 2012 code using only 1970 tools.

2. Performance Pressure

In real coding, you rarely (if ever) have several observers breathing down your neck as you code something critical. If you did, it would be understood that this was a bad situation to put you in. You are used to dealing with pressure from deadlines, not from performances.

In whiteboard coding, every interviewer is breathing down your neck. They are scrutinizing your every pause, and you know it. Your performance on this live coding exercise will likely determine whether you get this job. You aren’t used to this because it never comes up in your job.

3. Tools

In real coding, you have a monitor and a keyboard. A familiar text editor or IDE. In short, the right tools for the job.

In whiteboard coding you have markers. No find-and-replace, automatic indentation, tab completion, or any other modern programming convenience. Poor penmanship can distract the interviewer. You might skip things you’d normally have an editor do because they’re too time-consuming to do with a marker.

Each area where these diverge is an opportunity to disqualify a good candidate, or to qualify a bad candidate.

If someone makes a mistake because they are nervous about being put on the spot—in a way they will never again be in the course of their job—is that a good reason to disqualify them? Of course not. If someone writes sloppy code and you brush it off because you’re thinking “obviously they’d refactor this if they had find/replace” and it turns out they don’t in practice, wouldn’t you rather have known that before hiring them? Of course.

So if whiteboard coding risks hiring bad candidates and passing on good ones, why not have the candidates do computer coding?

The usual reason is time. It takes longer…doesn’t it?

Not for the interviewer—not if you’re doing it right. It takes seconds to email a piece of toy code with the instructions “add these features, fix any bugs you spot, and refactor the result into something you’d be comfortable checking in.” In return you’ll receive a wealth of information about the code the candidate produces given a keyboard, editor, the Web, and all the rest. You can scan this for deal-breakers in the time you’d have spent waiting for the candidate to handwrite a solution on a whiteboard.

Although whiteboard coding can’t accomplish much more than checking for deal-breakers (you learn their proposed solution from what they say out loud, and you rightly wouldn’t consider whiteboard code indicative of their coding style or cleanliness), with real code you can dig deeper.

Having real code in advance gives you time to prepare questions for the in-person interview like “Why did you implement this using Library A instead of Library B?” or “If this one requirement changed, how would you change this?” In an interview you’re on the spot too, and the most useful questions are often the ones that don’t come to you right away. A planned follow-up conversation about code they’ve written teaches you about their discussion abilities while verifying that they are conversant about the code they wrote—so you can be sure a friend didn’t write it.

Some candidates may balk at the idea of spending an hour or two (let alone upwards of 40—whoa!) on an interview exercise, and some may even ask to bill for that time. Naturally, it’s up to you whether to accommodate them based on what you’ve seen so far. Depending on their disposition, this can be an actionable data point; if they consider this an unacceptable inconvenience, you have learned something about their likely reaction to the frustrating red tape that every organization inevitably encounters from time to time.

Likewise, some interviewers might balk at the idea of having to come up with a toy code exercise. In my experience it only takes about an hour or two to develop an appropriate one, and once you have it, you can reuse it for as many different candidates as you want to screen.

All in all, comparing whiteboard coding to the “toy code” approach, we see that it:

  1. Takes longer for you, the interviewer, to identify deal-breakers
  2. Is more error-prone, introducing new ways to mistakenly qualify or mistakenly disqualify candidates
  3. Yields less data on which to dig deeper with follow-up questions

Whiteboard coding is a lose-lose-lose. If you’ve ever done this, or still do this, please reconsider. It’s better on several levels to evaluate candidates based on real code.

Nonsense Accusations of Spaghetti Code Considered Harmful

I have no beef with callbacks. However primitive they may be, I appreciate their simplicity, and I consider them a technique worth learning and using.

A number of people I respect have recently voiced their support for the article Escape from Callback Hell. As it happens, not only do I like many of the people who have endorsed it, I also like the Elm programming language (whose author presumably posted it).

However, I also like honesty. And in the spirit of honesty, I feel obliged to point out that this article is high-octane nonsense.

If you have worked with AJAX or node.js or any other callback heavy framework, you have probably been to Callback Hell. Your whole application ends up being passed around as callbacks, making the code extremely difficult to read and maintain. The resulting tangled mess of code is often pejoratively called spaghetti code, a term borrowed from the days of goto.

In case you don’t know what this is referring to, here’s an example of some synchronous (“normal”) code in CoffeeScript:

someFunction "argument", 1, 2
anotherFunction "another argument", 5, 6

In contrast, here is the dreaded callback version:

someFunction "argument", 1, 2, ->
  anotherFunction "another argument", 5, 6

As you can see, the callback makes this code “extremely difficult to read and maintain.” Whereas the first example is perfectly readable, the second example is clearly a “tangled mess of code” unfit for human consumption.

Er.

Okay, so unless you have a pathological aversion to indentation, the objection clearly doesn’t apply with this trivial two-line example…but let’s read on to see if the article raises the more common callback complaint: the infamous “callback pyramid”, where nested callbacks lead to so much indenting that you can make out an entire implied triangle protruding from the left margin.

Just like goto, these callbacks force you to jump around your codebase in a way that is really hard to understand. You basically have to read the whole program to understand what any individual function does.

No mention of the pyramid yet, but hang on—you have to do what with the what now?

Let’s assume that it’s good practice to organize your program into simple functions that have as few side effects as possible. If that’s the case, then if you “have to read the whole program to understand what an individual function does,” the diagnosis is that your functions are poorly written.

There is no law against writing simple, concise callbacks that defer to well-organized, side-effect-free functions for complex processing. In fact, I’d recommend using them in precisely that way. And once you’ve done that, the above paragraph no longer describes your callbacks.
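As a rough sketch of what I mean (fetchResults and the rest are hypothetical names), the callback itself can stay one line long while the actual logic lives in an ordinary, side-effect-free function:

// Pure helper: easy to read, test, and reuse anywhere.
function summarize(results) {
  return results.map(function (r) { return r.name; }).join(", ");
}

// The callback just hands its data to the helper.
fetchResults("some query", function (results) {
  console.log(summarize(results));
});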

This claim is plainly false. Callbacks in no way “force you” to engage in any of that badness.

And good luck if you want to add something to your code. A change in one function may break functions that appear to be unrelated (the functions may never even appear together in the entire codebase, connected only by deeply nested callbacks). You’ll usually find yourself carefully tracing through the entire sequence of callbacks to find out what your change will really do.

If this bit describes something smelly to you, what you are smelling is side effects.

It is indeed a downer when “a change in one function may break functions that appear to be unrelated.” That’s why it’s good practice to minimize side effects in your functions.

But callbacks are only one of many ways to sequence side effects. Why apply such a broad concern only to callbacks in particular? This is like saying “poorly organized Haskell code sucks, therefore Haskell sucks.” No, poorly organized code sucks. If that code happens to be written in Haskell, don’t shoot the messenger.

If you are not convinced that callbacks and goto are equally harmful, read Edsger Dijkstra’s famous “Go To Statement Considered Harmful” and replace the mentions of goto with mentions of callbacks.

Doing this makes poor Dijkstra out to be a lunatic raving about the importance of eliminating callbacks (which employ procedure calls) and instead employing…procedure calls.

And let’s not overlook the dishonesty of suggesting “take this article decrying a bad thing, substitute what I’m opposing for the bad thing, and watch how bad that makes it look!”

Yay, we solved the problem of the site freezing, but now our code is a mess of deeply nested functions that is annoying to write and unpleasant to read and maintain. This is a glimpse into Callback Hell.

Aha! Our first glimpse into Callback Hell.

Let’s have a look at The Code of the Beast:

function getPhoto(tag, handlerCallback) {
  asyncGet(requestTag(tag), function(photoList) {
    asyncGet(requestOneFrom(photoList), function(photoSizes) {
      handlerCallback(sizesToPhoto(photoSizes));
    });
  });
}

getPhoto('tokyo', drawOnScreen);

Since this is an article about callbacks, not JavaScript—JS is only used for examples, except where it’s briefly mentioned in the Conclusion—allow me to rewrite this in CoffeeScript, so that the comparison to Elm (which, like CoffeeScript, is syntactically much cleaner than JavaScript) is at least fair.

getPhoto = (tag, handlerCallback) ->
  asyncGet requestTag(tag), (photoList) ->
    asyncGet requestOneFrom(photoList), (photoSizes) ->
      handlerCallback sizesToPhoto(photoSizes)

getPhoto "tokyo", drawOnScreen

As the article notes, “now our code is a mess of deeply nested functions that is annoying to write and unpleasant to read and maintain. This is a glimpse into Callback Hell.”

Indeed: to gaze upon this four-line function with three indentations is to peer tremblingly into the gaping maw of the Abyss itself. I recommend a seatbelt when dealing with code this Hellacious.

The article then goes on to describe Functional Reactive Programming (which actually is nice stuff, the surrounding fire and brimstone about callbacks notwithstanding), and cites the following example as a much cleaner alternative to Callback Hell:

getPhotos tags =
  let photoList = send (lift requestTag tags) in
  let photoSizes = send (lift requestOneFrom photoList) in
    lift sizesToPhoto photoSizes

As anyone can plainly see, whereas the previous four-line function with three indentations was about enough to curdle Cerberus’s nose hairs, the above four-line function with only two indentations is really quite a thing of beauty.

Are these claims for real?

Look, plenty of people can’t stand callbacks—just like plenty of people can’t stand monads. If that’s your personal preference, then cool, use something else. But “callbacks are the modern goto”?

Come on.

Scattering interdependent functions across a code base should be avoided for its own sake, and nothing about callbacks requires that you use them that way. Trying to conflate the two in order to scare people away from learning a perfectly usable technique does more harm than good to our discipline. 

Are there more enticing ways to sequence asynchronous effects than callbacks? I think so, yeah. And that’s the only good reason I’ve heard to avoid them: a preference for something else.

Regardless, they’re definitely worth learning, and there are plenty of cases where they’re the right tool for the job. Don’t let “Escape from Callback Hell” dissuade you.

-@rtfeldman

PS: Separately, Elm actually is a very cool project. Check it out.

Are references antiquated?

It used to be the case that one of the major constraints for programmers was a program’s memory usage. You had to be a careful conservationist when allocating and freeing memory, to keep performance reasonable.

Over the years, hardware advances have changed this. Whereas there used to be a real debate about the everyday performance considerations of garbage collection versus direct memory management, today garbage collection is ubiquitous except for use cases where performance must be strictly controlled (like games with bleeding-edge graphics and physics).

Nowadays when your program starts to gobble up memory to the point where it impacts performance, the most common culprit is an algorithm run amok or some sort of large I/O operation—like you tried to load a humongous file into memory, or way too many rows from a database. It’s rare to find a program whose poor performance is alleviated by better memory allocation micromanagement as opposed to a higher-level change like algorithm choice or I/O usage patterns.

However, while common garbage collected languages have long since dropped direct pointer manipulation, they still retain a memory management construct that has arguably outlived its usefulness: the reference.

Let’s look at some JavaScript code:

var original = "this is a string";
var alternate = original;

original = "this is a different string now";

console.log(original); // logs "this is a different string now"
console.log(alternate); // logs "this is a string"

In this code, the = operator is assigning by value in all cases. You write foo = bar and foo takes on whatever value bar had. This is simple and consistent.

Now consider this extremely similar JavaScript code:

var original = { val: "this is a string" };
var alternate = original;

original.val = "this is a different string now";

console.log(original.val); // logs "this is a different string now"
console.log(alternate.val); // logs "this is a different string now"

It’s basically the same logic except that now we are working with strings wrapped in objects instead of just strings. However, in JavaScript, just working with objects instead of strings is enough to completely change the code’s behavior.

This is because objects in JavaScript are all about holding references, not values. In this second code snippet, the = operator has two different meanings; sometimes it assigns by value, as it consistently did before, and sometimes it assigns by reference.

But why? Why not just have it work the same way, regardless of whether you are dealing with objects or strings?

It comes down to memory. Since objects tend to take up more space in memory than other types, back when hardware was a more serious constraint, it was preferable to pass around references to objects rather than to make entire copies of them.

However, in most use cases this concern is becoming obsolete. Copying objects—or portions of them—is so cheap as to be insignificant to performance, especially when compilers can optimize behind-the-scenes data structures to allocate more memory only for portions of objects that have changed.
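To make that concrete, here’s a sketch of how you can restore value semantics today by copying explicitly (deepCopy is a stand-in helper; the JSON round-trip shown only works for plain data):

function deepCopy(obj) {
  // Crude but sufficient for plain data; a real implementation
  // (or a compiler) could share unchanged portions instead.
  return JSON.parse(JSON.stringify(obj));
}

var original = { val: "this is a string" };
var alternate = deepCopy(original); // copies the value, not the reference

original.val = "this is a different string now";

console.log(original.val);  // logs "this is a different string now"
console.log(alternate.val); // logs "this is a string"

In a language without references, ordinary assignment would simply behave the way deepCopy does here.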

Assuming a modern programming language could succeed without references, why bother disallowing them? Why take away the capability, remove a tool from the toolbox?

Because a less-cluttered toolbox is easier to work with.

Consider pointer arithmetic. This is a useful tool for languages that support direct access to memory addresses; it lets you do tricks like adding fixed offsets to an address in order to step through portions of an array efficiently, for example.

However, pointer arithmetic has a high learning curve—it takes a strong understanding of how memory is structured to use it properly—and is highly error-prone. And while it is surely indispensable in certain use cases, for the vast majority of programs it’s more of a liability than a benefit.

The case for a language without references is essentially the same case as for a language without pointer arithmetic. Both capabilities add significant complexity to a language, require a substantial learning curve to use properly, and offer a primary benefit of a performance increase that has been largely obviated by advancements in hardware and compilers.

Without references, assignment could always do the same thing. Objects would have fewer dependencies on one another to worry about. Tracking down how an object came to have a certain value would require considering fewer possibilities.

In short, coding in a language without references would be simpler.

Certainly references are not entirely obsolete—no more so than direct memory management and pointer arithmetic. There are plenty of cases where these features are just the right tool for the job.

However, it seems that in the vast majority of use cases where garbage collection is acceptable, on today’s hardware it would be just as acceptable to omit references. The cost would be minimal, and the benefit substantial: a noticeably simpler and less error-prone language.

Type Safety and Magnificently Self-Documenting Code

I’ve used several languages in a variety of production environments over the past decade:

  1. C++ (games)
  2. Java (web app back end, native app front end)
  3. Groovy (web app back end)
  4. Perl (web app back end)
  5. JavaScript (web app front end)
  6. CoffeeScript (Dreamwriter front end and back end)

I’ve sorted these not chronologically, but by preference. CoffeeScript’s clean, simple syntax and trivial JavaScript interoperability are fantastic!

Although I love writing CoffeeScript, the one thing I really miss from other languages I’ve used is type safety. Suppose I’ve got this function:

logEmails = (emails) ->
    for email in emails
        console.log "Email: #{email}"

The function works as expected if you pass it an array of strings…

logEmails ["foo@bar.com", "bar@baz.com"]
## Email: foo@bar.com
## Email: bar@baz.com

…but what if you accidentally just pass it a single email?

logEmails "foo@bar.com"
## Email: f
## Email: o
## Email: o
## Email: @
## Email: b
## Email: a
## Email: r
## Email: .
## Email: c
## Email: o
## Email: m

Gah! That’s horrible!

A Bit of Safety

Wouldn’t it be better if logEmails just threw an exception instead of spewing gibberish? That way it would fail fast, and I’d have immediate, descriptive feedback on what went wrong and where.

Let me add a quick check for that…

logEmails = (emails) ->
    unless emails instanceof Array
        throw "logEmails() takes an array, not a(n) #{typeof emails}"

    for email in emails
        console.log "Email: #{email}"

…and try passing it a single email again.

logEmails "foo@bar.com"
## ERROR: logEmails() takes an array, not a(n) string

Nice! Now what happens if we pass it undefined?

logEmails()
## ERROR: logEmails() takes an array, not a(n) undefined

Cool.

But there’s another annoyance: what if the function gets an array containing something other than strings? What if its elements are, say, objects?

logEmails [ { email: "foo@bar.com" }, { email: "bar@baz.com" } ]
## Email: [object Object]
## Email: [object Object]

Great…just the output I was hoping for.

Down the Rabbit-Hole

At this point I could go a couple of different directions. One would be to add another check inside the loop, so I’d get an exception if any of its elements were not strings. However, that would mean I’d partially execute the loop before dying if something went wrong, which has the potential to make a real mess when you’re manipulating data instead of just logging to the console.

I could also loop over the entire array first, checking every element for type safety, before moving on to the “real” loop below. That’s very clunky, but it would work.
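For what it’s worth, that clunky up-front version would look something like this (sketched in plain JavaScript; the CoffeeScript would be analogous):

function logEmails(emails) {
  if (!(emails instanceof Array)) {
    throw "logEmails() takes an array, not a(n) " + typeof emails;
  }

  // Validate every element before doing any "real" work.
  emails.forEach(function (email) {
    if (typeof email !== "string") {
      throw "logEmails() takes an array of strings, not " + typeof email;
    }
  });

  emails.forEach(function (email) {
    console.log("Email: " + email);
  });
}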

The most tempting option is just to leave it alone. After all, this is just a simple example, and it would be easy to debug even if I got bizarre outputs instead of helpful error messages. This whole type checking thing is becoming more of a hassle than it’s worth!

Of course, when you have values zipping around through increasingly deep call stacks, encountering surprises at the end of the line (like “I thought this was going to be an integer here!” or “why is this undefined all of a sudden?”) can lead you on time-consuming quests to figure out where the type went off the rails.

We’ve all been there.

So what to do? Even if you wrap the exception-raising logic in a function, you’re going to be calling it every time you want to check this stuff, which is going to bulk up your functions with all sorts of calls unrelated to the main logic you’re trying to execute.

Yet you genuinely want those enhanced debugging capabilities…there’s got to be some better way.

Some Better Way

When a language lets you add type safety quickly, even if it’s just runtime checks—something less boilerplate heavy than the “throw an exception if this isn’t what I expect” approach I’ve had to resort to here—you can save yourself a lot of debugging time with fairly minimal extra effort.

Java handles the above case with ease:

void logEmails (String[] emails) {
    for (String email : emails) {
        System.out.println("Email: " + email);
    }
}

True, the code is littered with Java syntax cruft, but there’s a pile of debugging assistance wrapped up in that one little String[] definition.

Even better, this type declaration is very self-documenting. Compare these examples:

logEmails = (emails) ->


# Logs each email in an array of email strings
logEmails = (emails) ->


void logEmails (String[] emails) {

The first example leaves me with unanswered questions. How are the emails supposed to be formatted? The safest assumption is the correct one (as an array of strings), but you can’t really know without walking through the code in the function’s body.

The second example is clearer, but also much more verbose. You’d think a function like “logEmails” would be pretty self-documenting, right? It’s going to log some emails! Yet here I am reading more English than I am CoffeeScript, just to learn how to format the function’s arguments.

Java is the clear winner here. If the type is specified right there in the definition, and the developer has done a good job choosing intuitive names for things, I don’t even need a comment. Now instead of reading both a lengthy English comment and the code, I can get the same information just by reading the code!

Even better, I can be sure the type information—if nothing else—is up to date. People often forget to update comments when logic changes, but code is code. If that argument is declared as a String[], then there is no doubt in my mind I should be passing it an array of strings.

Magnificent Type Systems

Although Java is better than CoffeeScript at concisely specifying types, it’s nowhere near the top dog.

For one thing, it offers no way to specify that an argument cannot be null. How infuriating! That’s probably the most common type error that would be caught by a type system capable of detecting it.

Separately, Java has no way to specify what ranges of values are acceptable.

In the above example, although we are accepting an array of strings, we may get ourselves into trouble down the line if we presume a string like “Hello World!” is a valid email address.  (Like if, for example, we tried to send an email to it.) But even in Java, to be sure that the string we’re getting is indeed a valid email address, we’ll have to resort to CoffeeScript-esque inline checking.
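In either language, that kind of inline value check boils down to something like this (the regex here is a deliberately naive placeholder, not a real email validator):

function assertValidEmail(email) {
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw "Not a valid email address: " + email;
  }
}

assertValidEmail("foo@bar.com");  // fine
assertValidEmail("Hello World!"); // throws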

Functional programming languages tend to give you more control over your types. In Haskell, for example, there’s no equivalent of a NullPointerException because the language has no null at all; a value that might be absent has to say so in its type (via Maybe). Functional languages are also often statically typed, which means they can catch type errors at compile time. A researcher recently experimented with this, doing line-by-line conversions of Python programs with good unit test coverage into Haskell, and managed to uncover several type errors that the Python unit tests missed, but which Haskell quickly discovered at compile time.

Granted, I’ve written vanishingly little production code in any functional languages, but what I’ve read about them has definitely impressed me from a type safety perspective. They let you pack a lot of self-documenting debugging assistance into a very small amount of code.

(Amusingly, many of their most vocal adherents seem to prefer the least self-documenting names in the world—single-character argument names at every turn—making their function bodies miserable to read despite the magnificent type safety. Maybe it’s a mathematician thing, but I’m never happy to encounter a logEmails(e) in someone else’s code.)

Despite all this, CoffeeScript remains my favorite language to work with. While I do value type safety, I value clean, concise syntax and interoperability with well-supported languages like JavaScript more.

My desire to have the best of both worlds is exactly what’s got me so excited about Roy. It not only has a robust type system, but also the static type checking that enables catching type problems before the program is even run—a rarity among languages that compile to JavaScript. Roy isn’t ready for prime time yet, but I can’t wait to see what it can do when it is!

In the meantime, no matter what language you’re using, keeping these concepts in mind can make your debugging life a lot easier. If your language supports type safety, put it to work as a documentation tool. Choose self-documenting names that, when coupled with the type information you’ve defined, help other developers intuit what your code does without getting sidetracked into reading boilerplate English.

And if your language doesn’t offer much in the way of type safety, remember that you can always add it yourself (as I’ve done in the CoffeeScript examples above) to make sure your most important functions are failing fast with helpful error messages.

Happy coding!

Why iOS Will Completely Replace OS X

There is no chance that OS X will not be completely replaced by iOS.
When? I don’t know, but I bet if you’d had the chance to ask Steve Jobs, he’d have said “as soon as possible.” Why? Allow me to explain.

Since Steve Jobs regained control of Apple, the company has been about making simple, elegant products. Plenty of MP3 players preceded the iPod, and plenty of smartphones preceded the iPhone. The reason Apple’s takes on those products were so iconic, so relentlessly successful, is not that they were more powerful than the competition, not that they had a longer feature list, and certainly not that they were cheaper.
Apple’s products competed on simplicity and elegance.

By 2011 standards, OS X is neither simple nor elegant. You could make the case on several fronts that Windows and even Ubuntu - freaking Linux - win out on either metric. If Steve Jobs had been faced with no alternative but to continue maintaining OS X, here is what he would have said:
  • Forcing every app to have a menu bar is stupid. Menu bars are relics of the past. Just get rid of the thing.
  • Apps shouldn’t keep running in the foreground after you close them. Why should people have to learn the difference between closing and quitting? Just don’t let there be a difference.
  • Finder is antiquated; users don’t need to mess with the guts of their systems. And it certainly doesn’t need to be presented as an app that’s constantly running. Just get rid of it.
  • There are a million ways to launch apps: from the desktop, from Finder, from the dock, from the Launchpad…just pick the best way and stick with that.
  • The operating system should handle installing and uninstalling apps through the App Store. You shouldn’t have to mess with any other way.
Maybe you like these features in OS X. Maybe you think they’re great.

But Steve’s philosophy was “our customers don’t know what they want until we’ve built it,” not “let’s keep around successful relics like the Apple II because people still like them.” The continued existence of these weak design elements in OS X releases, at a time when Jobs was overseeing the creation of an operating system as simple and elegant as iOS, makes no sense - it is nonsense - except in the context of the explanation that OS X is being phased out.
Now compare iOS to the incumbent OS X.

Is it more powerful? More feature-tastic? No.
Is it simpler and more elegant? Holy hell, yes.

The desktop launches apps. The App Store controls everything from getting the app to your system to paying for it to uninstalling it when you’re done. That’s all there is to learn - now go enjoy your apps.
OS X is exactly the type of product Steve Jobs would set his sights on to blow out of the water, and iOS is exactly the type of product he’d come up with to do it.

Can you imagine iOS introducing Finder? A menu bar? A dock bar? Apps that appear to be running in the foreground, but which have no window when you switch to them?
What would that last one even mean in iOS’s simplified interface?

One might hope that OS X and iOS could continue to coexist. But remember that one of the first things Jobs did when he returned to Apple was to cut the product lines down to just a few products. Simple. Concurrently maintaining two vastly different operating systems when he could replace the relic with the future? Which path sounds more like Steve Jobs to you?
The transition has already begun. Lion is, first and foremost, a socialization product.

It brings the App Store to the Mac so you can start getting used to installing apps on your laptop using the Way Of The Future. This is a necessary transition step, as in a few years there won’t be any other way for average users to get apps onto and off of their new MacBooks.
It also introduces fullscreen, since that’s how iOS apps run. Mac Developers need to get used to making their apps support fullscreen so that they’ll still run in iOS after the change-over happens. (There’s a chance that iOS will add support for windowing, but I wouldn’t hold my breath. I suspect Jobs came to view windowing as bloated and overcomplicated even though he was the first one to bring it to the mass market.)

Lion reverses the direction of the swipe gestures to match iOS. Why maintain two different sets of rules? Just go with the one The Future uses.
And of course it introduces Launchpad, a simple grid of icons from which you can launch apps with one touch - err, click. Given that the dock bar and Finder already exist, in what universe would Steve Jobs permit adding such complexity in order to do little more than expand the dock bar?

Replacing the dock bar with the Launchpad, sure - I could see that. But can you imagine Jobs saying “nah, let’s just have both”? That’s the type of compromise he was famous for screaming at people about. The only realistic explanation is that in the future, Launchpad is going to be all there is, and this “dock bar plus Launchpad” transition is nothing more than a stepping stone.
And that’s what Lion as a whole is: a stepping stone to an iOS future. There might be another release or two of OS X to ease the transition, but the writing is clearly on the wall.

Any bets on whether they brand the first release as iOS X?

Installing and running NodeJS and Jessie from scratch

If you’re like me, you have one or more projects in which you’d like to use Jasmine for unit testing JavaScript, and NodeJS sounds like a fantastic way to bootstrap it.

Also if you’re like me, you tried getting a Node-based Jasmine setup up and running (in my case, Jessie) and promptly found yourself frustrated. As with so many new libraries, it’s super easy to get it set up…until it’s not.

Here’s what worked for me.

  1. Ran the instructions under Building and Installing Node.js to install NodeJS itself.
  2. Verified that it worked by opening a terminal and running node --version (output was v0.4.11)
  3. Ran the instructions for installing NPM at http://npmjs.org/
  4. Verified that it worked by opening a terminal and running npm --version (output was 1.0.30)
  5. Installed Jessie by running npm install jessie - output was:
    jessie@0.3.7 ../node_modules/jessie 
    └── cli@0.3.8
  6. Tried to verify that it worked by running jessie --version - and this was where the wheels came off. No jessie found. Long story short, I discovered that npm bin ls tells you the directory where the executables npm installs end up - including jessie. I added that directory to my PATH environment variable and could then execute jessie as expected.
  7. I made a directory called specs and put a test spec in there, which I named sometest_spec.js. Note that specs must end in “_spec.js” - if they end in something else, like .spec.js or just .js, they will be ignored. (A minimal example spec is sketched just after this list.)
  8. I ran jessie specs and lo and behold, the tests in the specs directory were run!
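A minimal Jasmine-style spec (just a hypothetical sanity check, since Jessie runs Jasmine-flavored specs with the usual describe/it/expect) would look something like this:

// specs/sometest_spec.js
describe("jessie setup", function () {
  it("runs a trivial expectation", function () {
    expect(1 + 1).toBe(2);
  });
});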

Hopefully this helps someone else who was befuddled as to why the npm installer did not do the PATH addition automatically and/or why the jessie scripts did not seem to run.

Now off to try some unit tests…

Prediction: New Microsoft CEO by 2015

"There’s no chance that the iPhone is going to get any significant market share. No chance…I’d prefer to have our software in 60% or 70% or 80% of them, than I would to have 2% or 3%, which is what Apple might get…My 85-year-old uncle probably will never own an iPod, and I hope we’ll get him to own a Zune.

 -Microsoft CEO Steve Ballmer, 2007 interview

I give Steve Ballmer five more years as Microsoft CEO, tops. He hasn’t run the organization into the ground - yet - but I don’t see how the tech company can expect to grow under the leadership of a captain who has the market sense of an ostrich.

Microsoft still makes a lot of money, but all of their biggest revenue centers are in shrinking categories. Windows and Office, their two most profitable products by a mile, run almost exclusively on exactly two types of devices: desktop PCs and laptop PCs. How’s that market looking?

The ascendant Tablet category cannibalizes sales from laptops and desktops, and Microsoft has a weak offering: Windows 7, optimized for keyboard and mouse, except just use your fingers instead. People are doing more of their computing on phones, where Microsoft also has a weak offering. (Less market share than Apple, in fact, who charges more.) Computing on the television is snowballing towards reality and even though Microsoft has been connecting televisions to the Internet for years through the Xbox 360, they’ve made so little of that opportunity they’re barely considered a contender.

What about competition within the desktop/laptop operating system market? Google is about to release Chrome OS - probably the most credible threat to Microsoft’s operating system market share in two decades, particularly given the rate at which the Chrome browser has eaten away at Internet Explorer’s market share - and I wouldn’t be surprised to see Android or iOS laptops cropping up in the next few years; their user interfaces have proven wildly popular.

This is to say nothing of the fact that Office faces competition from the increasingly sophisticated and convenient Google Docs.

A larger problem is that, as with GM, the majority of Microsoft’s value proposition boils down to familiarity. The Japanese made better cars, and eventually the familiarity wasn’t enough to keep Americans buying GM. “I’ve always driven a Chevy” only works for so long. Today Apple makes more popular high-end products and Google makes cheaper products that are generally better-built to boot. Internet Explorer, which once dominated the browser wars, has dropped to the role of minority competitor. The familiarity game won’t last forever.

Microsoft seems to be headed in the same direction as AOL, except they don’t seem to be addressing the problem. AOL likewise still has a cash cow - their dialup business - but they recognize that it’s on its way out and are refocusing the company to be able to survive without it. Even Google, which is still on the rise, is frantically investing R&D dollars in the search for alternative revenue streams (self-driving cars much?) because they know search advertising won’t bring home the bacon forever.

What’s Microsoft doing? Their video game business seems to be about the only category in which they’re moving in the right direction. Windows and Office play in a shrinking playground. Windows Mobile is a huge underdog at best. Zune is doing horribly. Silverlight is a joke.

And the 2007 quote from Ballmer highlights exactly what the problem is. Microsoft doesn’t understand that Windows and Office continue to be profitable not because they’re still the best, but because the value-through-familiarity well has not yet run dry. Ballmer talked like he was expecting most smartphones to be running Windows Mobile in short order, simply because most PCs run Windows. The reality of the smartphone world is that BlackBerry is having its lunch eaten by Android and iOS, and Windows Mobile 7 is a day late and a dollar short. People are already getting familiar with Android and iOS.

In fact, BlackBerry is looking to be a Microsoft Microcosm. They got in early, took hold where others had tried and failed, and are now seeing their once-dominant position fall apart because of the assumption that people would settle for incremental improvements forever. Ballmer seems to be letting Microsoft coast on the strategy that worked well throughout the 90s but which shows few signs of working out in the next decade. It remains to be seen if he lets it coast all the way to the ground, or if he turns it around before he’s looking for a new job.

My money’s on a fresh Microsoft CEO taking the reins by 2015.

Step-By-Step: SSDifying the new laptop

My new laptop arrived in the mail today, and the first thing I did (before even booting it into Windows 7) was take out the old hard drive and replace it with the solid state drive I’ve been enjoying in my desktop for a while now.

I took pictures, ‘cause I was excited, so here’s a blow-by-blow of the whole gory proceedings.

Step 1. Savor the box

Mmm…tasty new computer…

Step 2. I turn around, and now there’s a cat sitting on the box

This always happens. You bring a new thing into the house, and a cat has to sit on it ASAP. I don’t know what their deal is with that.


Step 3. Surgically remove Solid State Drive from desktop computer

This wasn’t especially hard. Just power down the machine and unplug stuff.

Note the blue plate on the back of the drive itself. It’s just an expander; the drive is sized for laptops, and they give you a 3.5” piece of metal with screw holes in it so if you’re inclined to put it in a desktop (like I was) you can do so easily. I’ll be removing it shortly to get the drive back to laptop size.

Step 4. Lay everything out

Left to right: new laptop, battery, extracted SSD, screwdriver set.

Step 5. Figure out where the hard drive goes

I’ve never actually installed a laptop HD before, so the first thing I did was not to look at the instructions, get on Google, or ask someone knowledgeable, but to flip it over and look for a panel that appeared to be about the right size.

Found it!

Step 6. Remove the old hard disk

This was a simple matter of unscrewing it and pulling it out. There was a little casing around it just like in desktops, which helped secure it in the drive bay, so I took it out as well. Then, because Legos taught me well in my youth, I compared the thing I’d removed to the thing I was about to put in.

Yup, looks like power and data transfer ports on the left, power and data transfer ports on the right. This ought to work!

Fin

And sure enough, work it did!

I had to start Mint in recovery mode to get its display drivers acclimated to the laptop, but after that and adding some wireless and battery level indicators to my toolbar, I was ready to roll. I’m writing this post from said laptop.

How’s it run?

Sweet as hell. On my 3GHz dual core desktop, I used to build our services module for work in 35ish seconds before the SSD took it down to the mid twenties. On this laptop I’m in the low 30s, even with a Core 2 Duo laptop processor that’s probably clocked in the 1GHz range.

Programs are snappy as hell, and it booted from the grave in 13 seconds thanks to this Express Gate (which is apparently Asus’s rebranding of something open source) speedy startup thing that makes vanilla BIOS look like a three-toed sloth on a hot summer’s afternoon.

Good times! Now I finally have a laptop that’s not only adequately speedy, it also doesn’t blue screen every other millisecond.

Powered By Asus

Back in the day, many of my friends and I were into CounterStrike. Some of them played competitively in leagues and tournaments, and a recurring theme - especially in those circles - was the issue of cheaters.

People would use all sorts of shenanigans to cheat at that game. There were “aim bots”, scripts that would run in the background of your game and snap your crosshairs right onto the enemy at the press of a key. There were “speed hacks”, which would send packets to the server at an inhuman rate, making you zip around the map at Roadrunner-esque speeds, firing a million shots per second. There were “wall hacks” which let you see other players through walls and get the drop on them.

And then there was Asus.

When Asus entered the graphics card market, there were whispers that they had created a card that would allow its wielder to see through walls: a wall hack in a box. If memory serves, they offered some sort of global transparency setting for any game; possibly innocently, probably not.

Thus whenever someone appeared to be wall hacking, Asus became the immediate scapegoat. After being shot through a wall (you could do that in CounterStrike), you’d howl that the enemy “Asus’d” you. “Help, I am getting Asus’d” was a brief meme, and one of my friends took up the moniker “HelpAsus” for a time.

Particularly amusing was the fact that Asus shipped these little “Powered by Asus” decal stickers with their hardware. That was the only positive spin I can ever recall associating with the brand. “Dude, he’s Powered by Asus” became a meme that escaped the game to mean something akin to “he’s so awesome, it’s like he’s cheating.” You could say someone was Powered by Asus at any game you liked, or even a non-game. At school, at work, whatever.

I’m reminded of all this because UPS tells me that on Monday my first Asus product (well, excluding motherboards) will arrive in the mail.

On that day, I will be more Powered by Asus than ever before. We’ll see if it lets me see through walls.