Innovation: Applying “Inspect & Adapt”

Innovation: Applying “Inspect & Adapt” To The Agile Manifesto http://ow.ly/abwnn

Support for Independent Contractors

A friend of mine asked me about getting set up as an independent contractor. I had looked at Solo W-2 in the past: http://ow.ly/4NqGU

Nerd Kerfuffle: Hudson to Move to Eclipse! What is Jenkins to Do?

Breaking: Oracle Plans to Transfer Hudson IP to Eclipse | Javalobby.

Hudson was (is) a great continuous integration platform: super easy to install and use, with lots of plugins. Then big bad Oracle claimed that while the source was open, the name was not, and that Oracle owned a trademark on Hudson. The original author of Hudson, Kohsuke Kawaguchi, said, “Fine, I’m taking my ball and going home,” meaning that he forked the project, called it “Jenkins,” and said to Oracle, “There, now the whole thing is open.” This all happened quite recently, so Jenkins and Hudson are still nearly the same product.

Everyone on both sides seems to agree that the tensions started because of miscommunication, but that wasn’t the whole story. The real issue was the trademark. Oracle claims that they just needed more time to resolve an issue and didn’t want to exercise any unusual control over the open source project.

This is another example of why I think Oracle is going to kill Java.  Don’t get me wrong: they don’t want to kill it.  They simply don’t know how to manage it without killing it.  Never assign malevolence to that which can be explained by incompetence.

W. Edwards Deming quote

“Experience alone, without theory, teaches management nothing about what to do to improve quality and competitive position, nor how to do it.”

Sharing workspaces between computers

Here’s my problem: I have a MacBook Pro that is nice and fast and portable and I have an iMac that is nice and fast and big with lots of screen real estate. When I’m at home, I want to work on the iMac and when I’m camping in someone’s conference room, the MacBook works great.

I would be happy to dock my MacBook Pro into the iMac if I could reuse the mouse and keyboard (and ideally the speakers, mic, and camera), but apparently I am crazy to want this because it does not seem to be possible. I can plug into the display, but if I’m on Skype or GoToMeeting or something and my laptop is closed, the mic is covered, so that doesn’t work. (I don’t care as much about the video.) Also, when I plug into the display, I have to disconnect my mouse and keyboard from the iMac and plug them into my laptop. I haven’t figured out how to do this with the Magic Mouse and wireless keyboard that came with it, so I got a third-party (Kensington) wireless mouse and keyboard that work with a USB dongle thingy that I can move from one place to another. So that almost works, except for online meetings, which I have every morning, so it’s still pretty inconvenient.

So, the next thing to do to solve my problem is just to use the iMac as an iMac, not a docking station, and somehow use the files from my MacBook. The only thing I need to keep in sync is my Eclipse (actually STS) workspace, so I looked into online sync. I tried just mapping a network drive, but that only works as long as the MacBook is plugged in and either open or connected to an external display, so that was a fail.

The next solution for sharing the workspace was online cloud syncing. I looked at several services and rejected them because they didn’t really support syncing between multiple computers – they were more about backup (Carbonite, Mozy, etc.). I was already a DropBox user, but they have a pretty chintzy 2 GB free tier, and I would have to keep my workspace under the DropBox special directory. SugarSync seemed to be a better fit.

I installed SugarSync pretty easily and shared my workspace, which, with all of my projects, worked out to about 800 MB (well below the free threshold). So far so good, but then the performance problems started.

For most users, this wouldn’t be a problem, but a development workspace (whether Eclipse or NetBeans or Visual Studio) has a very different usage profile than someone’s photos and work documents. If you are very busy, you might add a few hundred photos per day and make several dozen edits in, say, Photoshop. For a development environment, though, when you do a “mvn clean install” or equivalent, you are deleting and recreating thousands of mostly small files in the course of a few minutes or even less. Also, with your files under version control (they are, right?), when you switch to a different branch or just pull in a lot of changes from the repository, you are changing not just those files but lots of hidden files used by your version control client.

You might do these kinds of things several times an hour, but the frequency really isn’t the issue. The issue is how long you have to wait after you’ve done a clean build for your files to be in sync (so you can close your clamshell and go home). In the case of SugarSync, it was a very long time – hours even. Not acceptable.

So, I tried DropBox. Again, I don’t like putting my workspace under their special directory, but the performance was great. After a clean build, all the files were updated within a couple of minutes. Not instantaneous, but if a clean build isn’t the last thing you do before you close your laptop, you probably don’t have to wait at all. Furthermore, since the performance is good enough, you can be fairly confident that the changes you really care about (uncommitted source files) are not queued up behind a long list of .class files, so you can work pretty much the way you are used to.

I’ve just started using it, so I’ll update this post if I run into a bunch of problems, but so far, so good. I think it works fine for Mac to Mac and probably for Windows to Windows, but I don’t think it would work as well for switching between platforms, because too many special files, .settings changes, and the like wind up being specific to the platform.

If this is helpful to anyone (or a wild goose chase) let me know.

UPDATE: This turned out to all be a wild goose chase.  DropBox didn’t really work for me either.  If you don’t share the whole disk, then you don’t share your particular Eclipse/STS installation and the installation/configuration of your local development Servers.  It’s too hard to keep everything in sync.  My ultimate solution: Keep things in sync as much as you can, use branches and patches to share changes between the two machines, and just put up with this inconvenience.

Maybe Java isn’t dead after all!

Maybe Java isn’t dead after all! Twitter Engineering: Twitter Search is Now 3x Faster http://ow.ly/4yCLc

RE: The “Optimal” Fallacy

In The “Optimal” Fallacy, Jurgen Appelo states that “You cannot ‘optimize the whole’. The best you can do is sub-optimize, cooperate, and iterate.”

Read his whole post for more context, but I’ll do my best to summarize.

  1. There exists a Lean principle called “Optimize the Whole.”
  2. The assumption is that the customer’s view of a business is one that includes the whole system. This is not true.
  3. No observer of a system can claim to have an objective view of the whole.
  4. All the parts try to optimize for themselves, and through interdependencies between the parts, the whole system tends to evolve toward an optimal situation.

Later, in the comments, he states, “I am referring to complex systems, which are not designed by a single authority,” as opposed to designed systems.  If that’s really what he meant, it’s a straw man that is not very interesting at all.  So emergent systems cannot be optimized as a whole? Duh! Who is the actor in that passive voice sentence? I mean, who would even do the optimization if it were possible? Mr. Emerge?

Is Toyota’s production system not complex? Or does he claim that they don’t really optimize the system at Toyota? If we are talking about the theoretical optimum, I would agree: “optimal” means “the most desirable possible under given restrictions.”

However, to “optimize,” at least in the most common usage among people like me, means “to make more desirable.” When a compiler optimizes your code, we do not assume that the result is as fast as it can possibly be and as memory efficient as it can possibly be, only that it is faster, and perhaps more memory efficient, than it would otherwise be without the optimization.

When I think of “optimizing the whole,” “optimize” does not mean mathematically optimal, and “the whole” does not mean a boundless system of independent actors. Discussions of Arrow’s Impossibility Theorem don’t enter into it: you get a product manager who does the best job he can of ranking things, treat that ranking as gospel, and then do the best you can to improve your throughput on that list.

Take, for example, Kanban systems for software development. A major benefit of Kanban is the help it provides in optimizing the whole system. If you have a process where product/feature conception feeds into high-level design, which feeds into development, which feeds into QA, which feeds into deployment, and you can see that QA always has more work in progress than it should, it won’t do you any good to get more efficient at development or high-level design or feature conception. You don’t need more ideas for features; you need QA to move faster (or you could save money by having less development capacity). Speeding up your development process without speeding up your QA process won’t get features to your customers any faster (or get value to your shareholders any faster). That’s just one example – the bottleneck could be anywhere: deployment, product conception, development.
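The arithmetic behind that example can be sketched in a few lines. The stage names and rates below are invented for illustration, but they show why the slowest stage alone determines overall flow:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PipelineThroughput {

	// Overall flow of a sequential pipeline is limited by its slowest stage.
	static double throughput(Map<String, Double> stageRatesPerWeek) {
		return stageRatesPerWeek.values().stream()
				.mapToDouble(Double::doubleValue)
				.min()
				.orElse(0.0);
	}

	public static void main(String[] args) {
		Map<String, Double> rates = new LinkedHashMap<>();
		rates.put("conception", 12.0);
		rates.put("design", 10.0);
		rates.put("development", 8.0);
		rates.put("qa", 3.0);        // the bottleneck
		rates.put("deployment", 9.0);

		System.out.println(throughput(rates)); // prints 3.0

		rates.put("development", 16.0);        // double development capacity...
		System.out.println(throughput(rates)); // ...still prints 3.0
	}
}
```

Doubling development capacity changes nothing, because QA still caps the flow at 3 features per week.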

Taking measures to improve the efficiency of the whole value chain is all that is meant by “optimizing the whole.” To claim this is futile is just silly.

Still talking about Spring Roo sucking

Back in June 2010, I wrote a provocative (apparently) post called Spring Roo Sucks! It was a bit of a rant, so I backed off on the tone, but I’m still defending the overall point.

Somebody commented that I had unfair expectations of this young project. I started to reply in the comments, but it got a little long-winded for a comment, so I’m putting it here.

Joseph wrote:

It’s a great project, your expectations are unreasonable because Spring roo doesn’t have release 2 or 3, but 1 … It’s a quite new project. And you expect it to be something that exist 4+ years and that has been extremely refactored in time…

If I were programming as a hobby, I might agree. But I need my tools to increase my productivity, no matter what the revision.

There are plenty of smaller, less ambitious open source projects in version 0.9 that I have found to be extremely helpful. The issue is not the revision (Spring can bring a lot of resources to bear, and they are working closely with Google on the GWT stuff). To me, the issue is the scope. This tool builds a lot of my application, but it builds it exactly the way it wants to, with sensible defaults. But almost all interesting applications have something besides a CRUD view on a bunch of entities, and as soon as you start looking in the corners, you get lost. I don’t think it’s Roo’s fault exactly. It’s just the way code generation works.

To me, there are similar issues with GUI drawing tools. Those are great and deliver a lot of productivity. Just lay out your screen the way you want it and presto! That works until you say, “Actually, for these 5 fields, I only want to show each of them if the database says they should be visible, and I want to adjust the size of the dialog accordingly, so I don’t have overflow and I don’t have a bunch of blank space.” So then you go look at all that code that got generated for you, and you can pretty much understand most of it (or even all of it), but you start modifying that code to do more precisely what you want. So far, it’s still a productivity saver, but then someone says, “I want to change the theme” or “I want to add this button” and you say, “OK, that’s a few hours,” thinking you’ll impress them with your speed. Then they come back with “A day?!? How could it possibly take that long? You just open up the GUI builder and add the button!” But that doesn’t work anymore, because the code isn’t code that the GUI builder understands anymore.

So, you’ve got code now that you basically understand, even if it’s not structured as modularly as you would like, making future changes more expensive. You’ve also got some code that your mouse-jockey tool doesn’t recognize anymore, so your productivity boost from that was brief. However, I will concede that it was also substantial. You got a nice bootstrap to your project, so who’s complaining?

Not me, if it’s limited to the GUI – the outermost layer of your application on which nothing else is dependent. I don’t think it’s necessarily wise, though, to treat the foundation of your application (the domain model) and the glue (your controller layer or servlet layer or dialog layer or whatever you want to call it) the same way.

One other way Roo’s strategy is not as good a fit as GUI builders’ is that building a user interface is inherently visual, and so is the tool. Moving stuff around to get it just so is what it’s all about. It’s a very visual exercise, so a visual builder is just a good fit, even if you only get to use it for the first draft. Your domain model, however, doesn’t work quite that way. While you are building it at first, it is going to change several times as you realize this dependency would create a cycle, or that field name doesn’t fit the naming convention that you really want, or whatever. I find the Roo command line to be very unfriendly to this kind of refactoring work. That kind of work calls for an editor, not a command line.

As I’m writing this, I’m wondering whether, if you took the Roo command line away, I would like the tool better. You have to learn all the annotations, sure, but then you’ve learned them, so you understand what you’re doing and how to change it. And you really would save yourself a ton of typing if you got good at it. So maybe in the near future I’ll come around on the annotations. We’ll see.
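For what it’s worth, working annotation-first would look roughly like this. This is a sketch from memory of Roo 1.x, so treat the package names, the exact annotation set, and the field names as approximations rather than gospel:

```java
// Roo watches these annotations and (re)generates the matching
// *_Roo_*.aj aspect files with getters, setters, toString, and
// persistence plumbing, so the source file stays this small.
import org.springframework.roo.addon.entity.RooEntity;
import org.springframework.roo.addon.javabean.RooJavaBean;
import org.springframework.roo.addon.tostring.RooToString;

@RooEntity
@RooJavaBean
@RooToString
public class Party {

	private String firstName;

	private String lastName;
}
```

Editing a class like this in your IDE, rather than issuing Roo shell commands, is exactly the refactoring-friendly workflow I was wishing for above.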

Mise en Place

Mise en place is a culinary term referring to getting everything ready to cook before you start cooking. If you are going to barbecue, you make sure you have all the fuel you need (charcoal and wood chips), all the tools you need (tongs, thermometer, brushes, knives, gloves) and all the ingredients you need (the meat, seasonings, mop sauces, butters in bowls). The point of all this is that when you actually fire up the grill and get started, you want to concentrate on the task at hand, not running around trying to find your extra charcoal or looking for the pepper.

So what does this have to do with programming? It’s the way I think about all of the tools that you want to have to do your job. You have source control, IDE, automated tests and a CI server that runs those tests every time you check something in. Those are some of the tools, but it goes further.

Say you are building some web services. You need to write integration tests to test those services. If you have a maven build that loads some test data and starts up an application server, then you just need to write the tests (using something handy like HtmlUnit) and you’re good to go. If you don’t, then you probably wind up inserting some test data manually and just testing it with curl or something. That’s a lot of work. It’s enough work that you certainly aren’t going to regression test. Even worse, it may be enough work that you don’t even do it the first time and just throw it over the wall to the QA department.
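A sketch of what such a test might look like with HtmlUnit. The URL and the seeded name are assumptions for illustration; the real values would come from whatever your maven build deploys and loads:

```java
import com.gargoylesoftware.htmlunit.Page;
import com.gargoylesoftware.htmlunit.WebClient;

public class PartyServiceIT {

	public static void main(String[] args) throws Exception {
		WebClient client = new WebClient();
		// Hit the service the maven build just deployed (URL is made up).
		Page page = client.getPage("http://localhost:8080/app/parties");
		String json = page.getWebResponse().getContentAsString();
		// The build loaded known test data, so we know what to expect.
		if (!json.contains("Winston Abbott")) {
			throw new AssertionError("expected seeded party, got: " + json);
		}
		client.closeAllWindows();
	}
}
```

Because the data load and server startup are part of the build, this runs unattended on every checkin, which is the whole point.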

It is not a perfect analogy. The mise en place is set up to get you through a recipe, but then everything is either consumed or put away. When you’re done coding, though, everything is still there; nothing is consumed per se. So, for example, if you go through a lot of effort to set up your integration testing automation, you don’t need to do it again the next time. I think of it this way: every time I finish a coding task, I’m also setting the table for the next one. If I have to spend extra effort on something that wasn’t there (e.g., automated tests), I make sure to spend that effort in a way that will serve as the mise en place for the next task. If, during the next task, I find that my automated test execution scheme came up short, I enhance it so that it has a better chance of being sufficient the time after that. Eventually, you wind up with an environment where you can spend almost all of your time concentrating on the task at hand and very little time distracted by having to “put everything in place.”

Share my pain!

Here’s a painful puzzle I ran into using Jersey 1.4 with Spring.

I have two kinds of entities in my system. (Well, more than that, but for this example, we’ll just look at two.) First is RoleImpl:

@Entity(name = "Role")
@Table(name = "Role")
@XmlRootElement(name = "role")
@XmlType(name = "role")
public class RoleImpl implements HasLongId, HasName, HasZone, Role {
...
}

Second is PartyEntity:

@Entity(name = "Party")
@XmlType(name = "party")
@XmlRootElement(name = "party")
public abstract class PartyEntity implements Party {
...
}

I have a resource for each and a @GET method that returns a List<RoleImpl> and a List<PartyEntity> respectively. When they produce JSON, the parties are rendered as an array like this:
[{"id":45,"name":"","firstName":"","lastName":""},{"id":48,"name":"Winston Abbott","firstName":"Winston","lastName":"Abbott"}]

This is the “correct” behavior as far as I’m concerned. However, the roles are rendered as an object with an array in the “role” field like this:
{"role":[{"id":"1","name":"superuser","permissions":"*"},{"id":"2","name":"admin","permissions":["*:*:ownZone"]}]}
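For context, each resource is an ordinary Jersey resource along these lines. The path is my own illustration, and the real method queries the database rather than returning an empty list:

```java
import java.util.Collections;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/parties")
public class PartyResource {

	@GET
	@Produces("application/json")
	public List<PartyEntity> getParties() {
		// elided: the real implementation loads parties via JPA
		return Collections.emptyList();
	}
}
```

The RoleResource is the same shape, returning List&lt;RoleImpl&gt;, yet the two lists serialize differently.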

After much fiddling around, I found the solution. See if you can tell me the functional difference between these two classes:

package com.factorlab.ws.security;

import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.ext.Provider;
import javax.xml.bind.JAXBException;

import com.factorlab.security.RoleImpl;
import com.factorlab.security.UserEntity;
import com.factorlab.ws.AbstractJAXBContextResolver;

public class SecurityJAXBContextResolver extends AbstractJAXBContextResolver {

	public SecurityJAXBContextResolver() throws JAXBException {
		super();
	}

	@Override
	protected Set<Class> getTypes() {
		Set<Class> classSet = new HashSet<Class>();
		classSet.add(UserEntity.class);
		classSet.add(RoleImpl.class);
		return classSet;
	}

}

and

package com.factorlab.ws.security;

import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.ext.Provider;
import javax.xml.bind.JAXBException;

import com.factorlab.security.PartyEntity;
import com.factorlab.security.PersonEntity;
import com.factorlab.ws.AbstractJAXBContextResolver;

@Provider
public class SecurityJAXBContextResolver extends AbstractJAXBContextResolver {

	public SecurityJAXBContextResolver() throws JAXBException {
		super();
	}

	@Override
	protected Set<Class> getTypes() {
		Set<Class> classSet = new HashSet<Class>();
		classSet.add(PartyEntity.class);
		classSet.add(PersonEntity.class);
		return classSet;
	}

}

The answer came to me after variously staring at the code and stepping through a bunch of Jersey code in my debugger for only about 5 hours. Of course, the actual solution, once known, was trivial to implement (and I do mean trivial).

Mouse over me for the answer: the second class has the @Provider annotation; the first does not.

Now the question is: why did it even work as well as it did? I leave that as an exercise for the reader (because I don’t have an answer). I’d love to hear theories in the comments.
