Monday, December 08, 2008

My Online Eyeballs

I'm a huge fan of Fred Wilson. Have been from the first time I ever read his words.

Lately, he's written a little something on an area that I generally ignore but that had been catching my eye. Specifically, the post was about the challenges and opportunities of online advertising.

Most of the time, I tend to put my energy into learning about and understanding other areas of technology or trends, but lately I've become more interested. This is mostly because I have friends that work in various segments of advertising and those specializing in the online space have had interesting things to say.

One of the great things about Fred is that no matter whether I agree or not, he brings up the salient and pivotal points of the subject matter with precision and elegance. This is how I would want to write if I could write the way I want. Witness:

Analog and digital, it turns out, are polar opposites. Analog has physical costs which lead to scarcity driven business models. Digital has zero marginal cost (or near zero) which leads to ubiquity driven business models.
- from Trading Analog Dollars For Digital Pennies by Fred Wilson

The remainder of the article is a succinct primer proving once again that smart people are in fact out in the world, kicking ass and taking names. And then there are geniuses who take the time to write about it for the rest of us.

What struck me in this discussion is not the conclusion (in any logical progression that's typically the boring part); it was how subtly he separates two behemoths that are often deemed inseparable and treated as such. Too often, I find myself in a discussion with an educated person who is waving their hands about the effort it will take to "move more of our advertising to the web". Or some such nonsense. They fail to understand that the basic economics, motivators, and operating principles for digital consumers are completely different. When I listen to a "marketing person" talk about how a platform will have "captive eyeballs" or "impressions", I realize immediately I am talking to someone who doesn't get it.

This was made real in a conversation with a buddy of mine recently, some time after Wii Music hit the stores. His introductory comment:
I own a Wii. I love music. How did I not know about this?
Hmm, let's think about this. When I consider his lifestyle, it is like that of many of my friends. He uses Firefox with ad-blocking software, watches TV only online (usually just via Netflix streaming), listens to Pandora or other online music, works from home and coffee shops, and doesn't drive. No wonder it slipped under his radar.

If you want to get his attention (or mine, or most of my friends') you need to leverage the things we do care about and the media we do invest in. Simply running ads on blogs won't find a growing horde of us. Put that information IN that same blog and then we'll be all over it.

Monday, December 01, 2008

And...I'm Safe.

It's good to know that according to the big analyst firms, the key areas I'm focused on are going to continue to be the top spending areas in this down-turned economy.

InfoWorld reported in an article on the findings of several big IT analysis firms about what are supposed to be the top 5 spending priorities for the next year. Not surprisingly, cloud computing and business optimization were up there.

When the economy isn't doing well, I often get the question of how my business is doing. The great thing about helping companies do business better is that tough times are a great motivator in a way that years of plenty are not. Simply put, if companies are making lots of money anyway, it's hard to get people to focus on the costs involved. When they aren't making money so easily, all of a sudden they are very interested in what things cost.

You can apply this to your own life too. Are you in feast or famine? Should you be making hay while the sun shines, or playing frisbee? Having billable work is no excuse to put off training, writing, and exercise.

Friday, November 07, 2008

Preparing For Multi-site Engagements

I've spent quite a bit of time in the last several years living and learning with the challenges of doing multi-site projects. From offshore development where I managed teams operating between the US and India, to near-shore development with teams on both coasts of America, to smaller engagements with 6 people in 3 west coast cities. I've had successes and not a few glorious failures. It's the pitfalls, missteps, and screw ups that helped me learn the most.

Recently, I was asked to give some feedback on the things that have the most impact on new multi-site engagements. This is a pretty common request these days, and I've since written down my thoughts so I can present them more consistently. This post is a very succinct summation of the big ticket items.

Naturally, much of this advice would be tailored to the specific circumstances and players, but the items discussed in the post are all generically applicable. If your situation involves working with teams in India or Manila or Slovenia (as examples), you'd want to get some specific tips on dealing with those cultures. This is no different than if you will be working with any of the Native American nations or in Silicon Valley where there are special considerations. Always get the heads up so you can be sensitive to the culture in which you'll be working. But those are for other posts, on with the show.

My personal guide is an ordered list I cheekily refer to as the Four T's. In order of impact they are Trust, Time, Transparency, and Talking. I'll discuss each a little to explain what I mean.

Building and maintaining trust is the single most critical thing that can impact your engagement. It is extremely easy to take this for granted, especially since it seems so obvious. In reality, we rarely address issues of trust head-on in the corporate world, but they become very important when you don't have face-time to rely on in the relationship. Nothing builds trust as fast as personal bonding time; nothing destroys it faster than a lack of transparency. Once trust becomes compromised, every other facet becomes harder and more risky. Without trust, communication becomes suspect and morphs from a tool into a weapon.

If you want to earn trust, get some face-time. Obviously, in person is best, but lacking that, get on a video chat. You have to be able to read and see body language. You have to find a way to bond and see each other as people, not resources. You have to allow everyone to take the measure of everyone else and to fill in the mental picture that they'll remember and substitute into every other conversation, regardless of the medium.

Recognize up front that coordination among parties in different physical locations is inherently going to take more time. It takes more iterations to verify directional correctness, ensure quality, and declare accomplishments. Everything just takes more time. Stop trying to pretend this isn't the case, and stop trying to minimize the plan as if it won't happen. Just prepare and plan for the impact, embrace the reality when it occurs, and don't get cocky when things are going smoothly and you think you can tighten up.

When it comes to operating transparently, make sure you are delivering messages properly. Any directional messages should be disseminated with the whole team at the same time. When possible, do all messaging to the entire team at once. Then reserve your smaller leadership group for issue resolution and check-pointing. This can feel unwieldy at first, but the dividends it pays in trust will more than make up for it. If you are suffering from any trust issues, this can be the only chance at repairing the damage and pulling the team back together.

Lastly, don't under-estimate the power of Talking. Emails do not carry tone or intent very well. They aren't very transparent, so they wear down trust and are the easiest road to miscommunication. This is not to imply you shouldn't write things down. On the contrary, always follow up your conversations with a written summary of the talking points, decisions, action items, etc. But as much as possible, carry your meaningful conversations over voice. If you can add the element of your face and body language via video, all the better.

These are important things to consider as you move forward, but the attitude with which you apply each of them is also critical. Remember that you aren't planning for efficiency; you plan so that you'll recognize the problems that WILL occur and fix them without a total collapse. This is similar to how automobiles are designed. Years ago, some automobile companies made amazing and powerful engines, but you had to take the whole thing apart to change the oil. Other companies came along with designs that weren't as efficient to run and were a little more expensive to design and build, but the oil change was a simple thing anyone could do for a few dollars. Ultimately, the more successful designs were those that made it easiest to deal with the challenges that were certain to arise (changing the oil) instead of trying to optimize for performance or build cost.

The last tip I'll offer in this post is about quality and standards. When doing multi-site engagements that involve parallel efforts or multiple work streams, consider separating the oversight for quality or standards enforcement into an isolated group. Leaving the oversight functions within the different operating units allows disparities in enforcement or standards to creep in. This is often construed as unfair advantage, favoritism, or something equally nefarious. Left within each group, this disparity is pretty unavoidable; it is a very subtle disease that can do a lot of damage when left unchecked. Making a distinct group responsible for these functions will help avoid this issue and its variants. It can also serve as a single neck to choke when trouble does arise.

Don't forget that tips like these aren't any use if you don't treat people with respect and act with integrity.

Wednesday, July 30, 2008

Master Data Management

Recently I had the pleasure of discussing with a prospective client the bulky subject of Master Data Management (MDM). In this situation the client was considering a variety of MDM solutions and wanted some specific direction on which technology/vendor to choose.

Now it is no secret that I tend to favor Microsoft products but in this case the platform already in place was Oracle so naturally, I wanted to give them more general advice instead of just pushing MS MDM.

There are quite a few upfront activities that must be done regardless of which products you are going to use. I've listed some introductory efforts that need to be done early in the process and that can be completed before product selection.

  • Governance
    To get the ball rolling, you really need to figure out what data you are talking about, where it lives, who thinks they own it now, and what organization you need to address this effort. Here are some suggestions on how to make this go smoothly:

    1. Identify External Data Ownership
      Often there will be multiple owners identified for the same data. Being able to identify these different uses in their various applications is a significant lever used throughout the rest of the process. Without a clear understanding of where the data is currently 'owned' and its criticality to various applications, mis-matched expectations will cause problems.
    2. Formalize Ownership in MDM
      This often requires significant negotiation and compromise from the various stakeholders. In organizations where the inter-organization trust level is high, this is more straight-forward. In some environments, only a top-down directive will accomplish this. Whichever tactic is used, establishing formal ownership is a prerequisite for success.
    3. Identify Data Domains
      This sounds a lot easier than it ends up being. Attributes utilized by disparate systems often have subtleties in their definitions which must be accommodated. Being able to provide synonym mapping and transition approaches is imperative to keeping comfort levels high between organizations.
    4. Formalize Domain Administration Process
      Having an established (and hopefully standardized!) set of processes to perform CRUD operations on domains can take the stress out of relationships where trust levels are sub par between applications.
    5. Establish Organizational Governance
      This is often rolled up into Change Management or a similar phase, but in reality needs to be an ongoing activity. These organizational groupings will allow for escalation procedures, conflict resolution, and increased visibility into the data lifecycles.

  • Once you have the basic framework to talk about data (who owns it, how it gets maintained, etc.), you can then delve into the specifics of the data.

  • Model Dimensions
    If governance is properly addressed, this should be a fairly straightforward exercise for data architects. They'll go through some steps similar to the following:

    1. Identify Dimensions
      What data are you really talking about? You need to pick a single name for each domain. Sometimes the same type of data is called different things in different places. You have to standardize on a common taxonomy.
    2. Identify Consuming Applications
      There may be downstream consumers beyond the currently perceived data owners. Make sure you understand the data lifecycle and flows so you can estimate impacts properly.
    3. Identify Entities Per Dimension
      Make sure you flesh out the taxonomy down to the most granular level necessary. The more detail you include now, the more hours saved later.
    4. Identify Entity Attributes
      This is a critical step that is often treated as a second-class citizen. In reality, it is a key driver. Without precision about attributes, the potential values and rules (which come next) can't be properly defined. It also means your estimates will be incorrect.
    5. Quantify Attribute Values
      This is only as important as the complexity of the data rules (which come next) and the accuracy of your rule implementation estimates need to be. If your sources have this well defined, it should be straightforward to ensure this is comprehensive.
    6. Identify Data Rules
      This is sometimes referred to, ambiguously, as Business Rules, which is very imprecise. Here we are speaking not of rules governing operations or activities (hence the term Business) but of the states, lifecycles, and value cohorts for entities and domains.

As you can well imagine this is a significant undertaking for any organization, and can be done in a technology agnostic way. Once you've identified and quantified this information, you can begin to look at transition plans, data flow planning and infrastructure. These are very technology and environmentally specific.
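As a rough illustration, the modeling concepts above can be sketched in code. This is a toy model of my own, not any particular MDM product's API; the `Dimension`/`Attribute` classes and the 'Customer' example are invented for the sake of the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    allowed_values: set = field(default_factory=set)  # empty set = unconstrained

@dataclass
class Dimension:
    canonical_name: str
    synonyms: set = field(default_factory=set)   # names used by consuming apps
    attributes: dict = field(default_factory=dict)

    def resolve(self, name):
        """True if 'name' refers to this dimension under any known synonym."""
        return name == self.canonical_name or name in self.synonyms

    def validate(self, record):
        """Return a list of rule violations for a candidate master record."""
        errors = []
        for attr_name, attr in self.attributes.items():
            value = record.get(attr_name)
            if attr.allowed_values and value not in attr.allowed_values:
                errors.append(f"{attr_name}: {value!r} not in allowed values")
        return errors

# The 'Customer' dimension, known as 'Account' in the CRM and 'Party' in billing.
customer = Dimension(
    canonical_name="Customer",
    synonyms={"Account", "Party"},
    attributes={"status": Attribute("status", {"active", "inactive", "pending"})},
)
```

Even a trivial sketch like this captures the essentials: synonym mapping between systems, a single canonical name, and data rules enforced in one place.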

As the proverb states: "Measure twice, cut once."

Monday, June 23, 2008

I Just Want My Media on My TV

The last couple of weeks have been ridiculously frustrating. All I want is to get my media, on my television, playing the sound through my stereo. This should be a no brainer, right?

Well it turns out that once again I must be just an outlying data point in the world of consumer electronics. What a frustrating adventure this is turning out to be. I've tried lots of different combinations with a mixed bag of success and no clear winners. To start with, here's a quick run-down of my basic requirements.
  • My media is on two big hard-drives:
    • 1TB external hard-drives
    • accessible via USB
    • willing to share them over the net with an always on pc, but prefer to just plug them in
  • My media is very big and mixed:
    • about 100K songs, from 10K artists, all in MP3, with cover art and ratings embedded in the tags (about 600 GBs)
    • about 10k high-quality pictures (about 100GB)
    • about 3K videos, mostly DIVX and XVID, organized in folders by type, series, season, etc. (about 400 GB)
  • I need to navigate music by playlists, artists, and genre. I've listed them in the order of importance to me. Playlists need to be 'smart' so they'll respect the ratings, genre, etc. that have been painstakingly maintained. For example, a playlist would be 'Songs in Genre = Folk, Subgenre = Acoustic, Rating > 3'
  • Displaying cover art from the file is a must. Displaying other tag attributes is a nice-to-have.
  • Displaying a picture slide-show while playing songs is a must.
    • Being able to choose a specific directory is a nice-to-have.
    • Being able to pick a set of specific directories is a nice-to-have.
    • Assigning a set of directories to a playlist is a nice-to-have.
  • Being able to navigate videos by type, show, season is important.
  • Output via HDMI or DVI is important.
  • I have no interest in using a tv tuner, scheduling recordings or any of that other TV nonsense. So not having it in my way is a good thing.
  • Being able to edit ratings on screen is important.
  • Being able to create playlists on the fly is important.
  • Being able to shuffle by playlist, album, artist is important.
  • Media will get added routinely, so being able to monitor the folders on the drives is important.
  • Start-up time for spinning up music and videos is important.
  • Being able to navigate the very large amount of media quickly and by different attributes is important.
  • Being able to access this media from more than one television is important.
  • Being able to do all this without a dedicated, always-on pc, is only a nice-to-have.
  • Being able to get all this with commercial off-the-shelf stuff is important. I don’t crack boxes, and most soft-mods are off the table too.
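To make the 'smart playlist' requirement concrete, here's roughly the kind of saved query I mean, sketched in Python. The track data and function names are hypothetical stand-ins for what a device would read from MP3 tags:

```python
# A 'smart' playlist is just a saved query over tag metadata.
library = [
    {"title": "Banjo Sunrise", "genre": "Folk", "subgenre": "Acoustic", "rating": 4},
    {"title": "Mall Punk Anthem", "genre": "Punk", "subgenre": "Pop", "rating": 5},
    {"title": "Dust Bowl Waltz", "genre": "Folk", "subgenre": "Acoustic", "rating": 2},
]

def smart_playlist(tracks, genre=None, subgenre=None, rating_above=0):
    """Select tracks matching the saved criteria, e.g. Genre=Folk, Rating > 3."""
    return [
        t for t in tracks
        if (genre is None or t["genre"] == genre)
        and (subgenre is None or t["subgenre"] == subgenre)
        and t["rating"] > rating_above
    ]

acoustic_folk = smart_playlist(library, genre="Folk", subgenre="Acoustic", rating_above=3)
```

Any device that can run a query like this over my tags, and keep it live as media gets added, meets the bar. Most of the boxes I've tried can't.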

When it comes to home electronics, I'm no slouch, and I've been trying a whole host of devices and combinations. The Netgear EVA8000 Digital Entertainer HD and the Helios X5000 Network Media Player are two examples I've tried. They both really suck. Not only are they slow and unable to do half of what I need, they are also expensive and crash often.

I do have a Windows Media Center PC (from Windows Vista Ultimate), which is a pretty good interface. And we have two Xbox 360s. There are multiple ways to connect these bad boys, but the really obvious one (as a Media Extender) doesn't seem to work out of the box. Being pretty good at figuring this crap out, if I can't get it to work after a couple long sessions of fiddling, it's just not meant to be (for me). Tips anyone?

One option I'm considering is the Apple TV. The interface is one of the slickest I've ever seen and the speed appears acceptable. But it won't play my videos without jumping through mod hoops and I'm not a fan of that. Unless someone has an alternative?

So now you know my predicament. What else should I try? Ideas? Anyone? Bueller? Bueller?

Monday, June 02, 2008

It's Been A Circus

Sorry for the delay guys!

Lately it has been a circus around here. Lots of new projects and learning going on these days. Who's in a recession? Not the tech industry!

So I've been working on Facebook Apps, MOSS customizations, writing tons of Silverlight widgets, reams of XAML, and boatloads of WPF.

I'll bring you up to date on the latest shenanigans once things calm down a bit. Feel free to post questions if you want to spur my creative juices!

Friday, April 25, 2008

Leadership and Agile

This week I had several opportunities to speak with people about Agile.

As I listened to yet another discussion about Agile there were several things about the conversation that just really stuck out to me. It started with one strong idea of which I just couldn't seem to let go:
Agile is great under the umbrella of strong leadership. If the direction, metrics or criteria for success aren't well understood and broadly accepted across the organization, then returns on investment and efficiencies will likely not be realized.

In high-performing companies the gestalt is achieved when employees are empowered with self-direction and a feeling of ownership. Even in these companies, that sense of empowerment and value is only as powerful as the degree to which individual choices and decisions can be aligned to the bigger vision of the organization as a whole. When the vision or mandates of the organization aren't clearly articulated, or fail to be fully accepted within the ranks, organizations experience churn and confusion. Without alignment to a bigger mission, the agendas from the various levels of management tend to conflict and drift as individual interpretations of value and success criteria are formed. This churn due to conflicting expectations makes it easier for issues to hide and for misalignment to go unchecked for increasing periods of time.

Even in cases where misalignment is uncovered, the process needed to achieve consensus can then take longer and without strong leadership won't necessarily drive the business impact needed for growth or even sustainment.

This then is the crux. Leadership is essential for achieving business value. You can use Agile processes to seek business value, but without strong leadership you will find the criteria for success shifting too often and hiding behind the easy process of consensus building.

To bring this back to the Agile conversations, I was feeling frustration with the reactive nature of those who subscribe to the Agile approaches. They espouse an easy-going attitude of reaction in which direction, strategy, and even the definition of business value are left completely to the client. This zen-like state, in which the Agile practitioner responds fluidly to the changing and ethereal nature of client demands, is unnatural and impractical. Business value is unlocked by making the hard choices, by understanding the elements of the business value and capability chain better than everyone else, and by leveraging that unfair advantage.
The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself.
Therefore, all progress depends on the unreasonable man.
-- George Bernard Shaw

Simply put, it is called work because it IS hard. It's painful and requires discipline. If you are choosing to avoid the hard choices you are inevitably giving up the business advantages, wasting opportunities, and releasing accountability. Live in a reactive mode long enough and you remove the incentive to estimate well, to live up to commitments, and to deliver beyond expectations.

As everyone in technology knows, regardless of how long a task should take, it always takes at least as long as the plan said it should. We don't finish early, we just fill up the extra time with more stuff and junk. We test a little more, refine a little more, increase scope if necessary. If you aren't pushing the boundary and requiring unreasonable attainment, then you'll never get it either.

When it comes to Agile, I embrace the incremental nature. I covet the consensus of reactive expectations. I do not respect the reduced commitment to discipline, the minimization of leadership, or the dismissal of accountability afforded a facilitator. As Harry would say, "If you are going to come, come correct." Harry can be very wise.

Tuesday, April 01, 2008

Free Code

Lately, I've been really paying attention to how much functionality you can reuse for free from major players. For example, you can check out the various Google APIs or the Yahoo! User Interface Library. These libraries are free, released under Open Source licenses, and offer ridiculous amounts of functionality.

As a developer, I love the idea of being able to leverage the work of other people. Most of the time, this is hard to do because so much of the code that everyone else writes is crap. ;-) But seriously, there is even a name for this, we call it Not Invented Here (NIH). The premise being most developers like to know exactly what their code is going to do, so using someone else's code can be hard.

In the case of the big boys like Yahoo! and Google, it becomes a lot easier to trust that the code is going to deliver expected results and not be malicious or buggy. After all, when it is provided (and used) by a company with billions of dollars of online reputation to uphold, you can pretty much bet they've tested it.

Which isn't to say that just because they are all that, you don't need your own bag of chips; they have their issues too. Dependency and version issues, occasional bugs, and warped programming models are abundant. But in the end, you are definitely getting your money's worth.

The chart at the top of this post is generated using the super cool (and free!) Google Chart API, which means I won't be paying for Dundas or ChartFX licenses again. You can find the details at
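For the curious, using the Chart API amounts to nothing more than building a URL and dropping it into an img tag. Here's a rough sketch in Python; the parameter values are illustrative, so check the API docs for the full set:

```python
from urllib.parse import urlencode  # Python 3 spelling; urllib in Python 2

# Build a Google Chart API request: the chart is just an image served
# from a URL whose query string describes the whole chart.
params = {
    "cht": "p3",           # chart type: 3D pie
    "chs": "250x100",      # size in pixels
    "chd": "t:60,40",      # data, simple text encoding
    "chl": "Spent|Saved",  # slice labels
}
chart_url = "http://chart.apis.google.com/chart?" + urlencode(params)
```

No license, no server-side charting component, no install. Hard to beat.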

Monday, March 10, 2008

Why Unit Test?

Recently, I had a down and dirty discussion with a team who is gearing up for a new engagement. This team has been doing Scrum for a while but was struggling with how to fold testing more tightly into their development processes.

The heart of the matter ended up being differences in the purpose and perception of Unit Tests specifically. One camp saw the role of Unit Test as "to find bugs". That's it. They tell you where the bugs are. If they are just about finding bugs, they have more value to new developers than established developers. After all, great developers are much less likely to allow the kind of bugs found by Unit Tests in their code. As a great developer I'm going to catch bad parameters and logic flaws before checking in my code. Integration and functional bugs are probably not going to be found in Unit Tests anyway.

The other camp (read: my opinion) was resolute that Unit Tests fill a much larger purpose. To my eyes, Unit Tests go way beyond just finding bugs. Following are four big ways Unit Tests can add significant value over and above finding bugs.

Proving Integration
Today's applications are more distributed and componentized than ever. The number of moving parts has sky-rocketed with the use of frameworks and new API's, the prevalence of SOA, and more component-oriented designs. Personally I see goodness here, but regardless of whether you like this direction or not, it's happening and is relevant to address. With this fragmentation comes a need to understand when the integration between components is complete. Simple Unit Tests can tell you quickly if the work being delegated downstream of a component is completed and ready. As you bring on multiple layers of components, the pass rates of the Unit Tests at the various layers can quickly show you the level of completeness for the entire integrated system.
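As a small illustration of the point (the component names here are invented, not from any real system), a unit test that proves a layer's downstream delegation is wired up might look like:

```python
import unittest

class InventoryClient:
    """Hypothetical downstream component."""
    def reserve(self, sku, qty):
        return {"sku": sku, "qty": qty, "reserved": True}

class OrderService:
    """Upstream component that delegates reservation downstream."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, sku, qty):
        result = self.inventory.reserve(sku, qty)
        return "confirmed" if result["reserved"] else "backordered"

class OrderIntegrationTest(unittest.TestCase):
    # A passing run here tells you the downstream delegation is complete
    # and ready, not just that the upstream logic is bug-free.
    def test_order_delegates_to_inventory(self):
        service = OrderService(InventoryClient())
        self.assertEqual(service.place_order("WIDGET-1", 2), "confirmed")
```

Roll the pass rates of tests like this up across your layers and you have a cheap, continuous readiness gauge for the whole integrated system.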

Sample Code
One of the difficult parts of delivering a complete software product comes when the product is a framework or an API, or when the product is extensible or supports programmability. In these cases specifically, Unit Tests can serve as great examples of how to utilize the API or exercise the programming model. They can often be directly consumed as documentation and sample code for SDKs or in knowledge transfers throughout the life of the product. Building this material outside the team, without relying on the author of the component, is extremely costly and difficult, yet the returns are typically the most valuable.

Proving Design
Designing and building components is easy with the current toolsets. The downside is that we can create classes and objects so easily that we don't always think about the bigger picture. Taking the time to write code that exercises a new component or interface acts as a sanity check to ensure that the component is easy to use and meets the programmability goals. Quite often I see a single API that requires two or three different programming or data interaction models. Some quick Unit Tests would have shown the designer how difficult and inconsistent the usage of the API had become.

Change Management
As a great engineer, it might be totally natural for you to write code that has few bugs. Are the engineers who take up your legacy as experienced? Will the engineer who will open the code in six months to add some features or fix an integration issue be as comfortable knowing how to change things without breaking anything? Would you? Being able to run regressions over time adds a wonderful safety net in the identification of the impact of changes upstream, downstream, or inside your code. It doesn't mean you will (or can) catch everything with Unit Tests but it can sure give you a head start and a level of confidence.
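To make the safety-net idea concrete, here's a toy regression example; the pricing rule and all the names are invented for illustration:

```python
# A pricing rule someone will inevitably 'improve' in six months.
def discounted_price(price, loyalty_years):
    """5% off per loyalty year, capped at a 25% total discount."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(price * (1 - discount), 2)

# Regression tests pin down today's intended behavior, so a future change
# that silently breaks the cap or the rounding fails fast and visibly.
def test_discount_cap():
    assert discounted_price(100.0, 10) == 75.0   # capped at 25%

def test_no_loyalty():
    assert discounted_price(100.0, 0) == 100.0
```

The engineer who opens this code in six months doesn't have to reverse-engineer the cap from the arithmetic; the tests state the contract and enforce it on every run.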

So perhaps this is more about semantics or expectation setting, but I think it helps to keep in mind the variety of purposes Unit Testing can serve. When you expand the contributions made by Unit Tests hopefully you will have an incentive to get even your experienced engineers to spend time writing them.

Tuesday, March 04, 2008

Are You Sure?

It is a pretty normal day for me when a friend says they want to accomplish something and yet they have no idea how to go about it. They want to start a business, but aren't sure what kind of business. They want to make more money, but aren't sure how. They want to get certified/learn new technology/become famous/[insert random goal here]. In all cases, they have an ambiguous idea but not a specific agenda.

A man with a watch knows what time it is. A man with two watches is never sure. -- Segal's Law

In our era of The Paradox of Choice, it is the amazing potential we all have to stand on the shoulders of giants and achieve greatness that is most often our downfall.

We have to choose every day as engineers which technologies to support, which languages to learn, which certifications to pursue. We are bombarded with choices of which social networks to participate in, which email system to rely on, and should we use a Mac or a PC? Is it iPhone or Windows Mobile?

As an architect, the most valuable thing you can do is be deliberate in your choices. The hard part is being deliberate quickly: digesting massive amounts of ambiguity at a breathtaking pace to synthesize answers and clarity as and when they are needed, usually well before the obvious answers have become apparent. If being an architect were just about picking the obvious when it became obvious, then everyone could do it. (Which probably explains why everyone thinks they can do it.)

If you want to get out there and do something, first narrow down what that something might be. Perhaps start by writing down what it isn't. Then start writing down the attributes or identifying characteristics for what it is. If you can describe what the world looks like when you've achieved your goal, you'll be a good way towards deciding what road will get you there.

Or don't.

It's your choice.

Wednesday, February 20, 2008

Offshoring for Rank and File

I've been doing offshore work as long as the next guy, if the next guy has been doing it for almost a decade. There are successes and failures and lots of learning along the way. It's hardly the only thing I've done, but it's a good share of the pot.

Recently, I was asked to speak with some gents about how their offshore effort was going. Which is usually code for "We don't think it's going well". And by all accounts from the team, it wasn't going all that well. It helps when everyone at least starts from the level of dissatisfaction.

Whenever we start to talk about the concept of managing work efforts in more than one location, everyone spits out the same mantras. Communicate, be sensitive to cultural differences, etc. The thing is there aren't all that many that have moved past the theory into practical application. I should say demonstrably successful application. A big part of this shortcoming, from my perspective, has been that while we can have concepts in our heads like cultural differences, and detailed communication plans, we miss what can be the subtle point of these guidelines.

Let's borrow from a recent experience at Big Redmond Software Company. Companies (like Microsoft or Google) don't really run tight ships. Not in the way the unfamiliar would think. At the foundation, they expect individuals to be very self-sufficient and to work with minimal communication. Contributors are expected to bridge gaps in their own style. These companies are impactful because they have leveraged the ability to walk down the hall and say "Build this" while pointing to the contents of a Visio diagram or the corner of a whiteboard. The resources are expected to be accountable for the applicability and context of their solutions, not just the code or outputs they produce. The business value driving the work should naturally be taken into account by the self-reliant resources who are tasked with unlocking that capability.

When you start moving that work to different geographies and timezones, suddenly that level of communication isn't enough. The expectation on all parties is to minimize communication, since that is a cost both parties share. But how they each approach that goal is often different. Remember that the sender is expecting to point and grunt and have accountability and context magically shift. They'll be available for questions, and they expect to review the progress and deliverables and throw stones along the way. But if it takes more than [insert arbitrary threshold here] in energy and time commitment, then it just isn't working out, and it would have been easier to do it in-house.

Meanwhile, the receiver of the work expects that the work will be fully formed and coherent, and that they will not face significant decision-making requirements. Give me enough information up front to be self-sufficient, and a yardstick with which to measure success. I'll come back when the output meets the success criteria. My accountability extends only to the output according to the specifications and acceptance criteria. If those have been ill-defined, that is the sender's responsibility.

Obviously, I oversimplify a little, and as always, YMMV. But in practice, understanding the different accountability and communication expectations we place on resources on different sides of a pond (or a client-facing fence) will help you be more successful at leveraging offshore resources.

Saturday, February 02, 2008

How "Open" Can It Be?

Something that has bothered me from the beginning of the Open Source movement (and I use the word very loosely) is the seeming hypocrisy of these commercial companies that have huge revenue streams based around the idea of Open Source software.

Not to go off on a rant here, but how open can they really be when they've got hundreds of people paying money for them to support their specific releases? To my mind, the idea of Open Source was always that the source could and would be extended by the community at large. However, if you are going to start asking people to pay for the specific versions of extensions that you are providing, you've essentially closed the system. At this point, it seems to me, you aren't an Open Source contributor any more. You are someone who took an Open Source offering, tweaked it, and are now asking people to pay for your tweaks, thereby closing it. If you did this once, perhaps a point could be argued, but when you've been building on YOUR specific tweaks for release after release, and your support policies are version-specific, then you are just like any other software company. Okay, perhaps there is a little more transparency into the code, but still.
"SANTA CLARA, CA, January 16, 2008: Sun Microsystems, Inc. (NASDAQ: JAVA) today announced it has entered into a definitive agreement to acquire MySQL AB, an open source icon and developer of one of the world's fastest growing open source databases, for approximately $1 billion in total consideration. The acquisition accelerates Sun's position in enterprise IT to now include the $15 billion database market. Today's announcement reaffirms Sun's position as the leading provider of platforms for the Web economy and its role as the largest commercial open source contributor."

Exactly how open to the inputs of Joe Developer are they really going to be when they have to sustain a support business around a $15 billion market? Do they really expect us to believe they are going to leave the evolution of these products to the average engineer? To the community at large? Hell no. They have product teams and planners and release strategists just like everyone else. They just don't have the huge upfront investment in research and development; they essentially jumpstarted their code from the masses. And they won't have a huge testing effort; they'll just rest on the backs of the industry at large. Which does make for good stability.

I hate to see such blatant parasitical behavior cowering behind the beauty that could be Open Source. Greedy bastards.

Wednesday, January 16, 2008

Running The Gauntlet

This week I had the chance to discuss using Team Foundation Server in a large team setting, specifically as it concerns Build and Deployment.

As I was organizing my thoughts to answer a question about check-in policies, merging and branching, and so forth, I kept coming back to my guiding principles:
  • The Build is the build. There is only one Build.
  • Build anywhere you like, as much as you like, but it only counts if it is in the Build.
  • If it doesn't work in the Build, it doesn't work.
Now I am certainly familiar with having to manage simultaneous release schedules, juggling branches, and so forth, but that doesn't mean I like it. I absolutely support the mainline concept. Much like the Highlander, when it comes to the build, there can be only one.

We could argue that there are many techniques to allow these fancy behaviors but that doesn't make it okay. Just because you can do something, doesn't mean you should. Experience tells us that if you have a big team, you can't afford to have anyone interrupt the flow of the system. This includes breaking the build, causing failures of critical tests, etc. When more people or organizations are involved, the cost increases.

In the agile communities there is a push towards Continuous Integration (CI), which is really an attempt to implement the Only One Build concept as an ideal. When you can do it, you should. When the amount of code grows or the number of loose connections increases, this can become cost-prohibitive, primarily in time. Compiling 1.8 million lines of code is going to take a little while regardless of how big the build machine might be. And when it comes to running Build Verification Tests, these problems are compounded, especially if you are using disparate services that are loosely coupled.
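To make the CI idea concrete, here's a minimal sketch of one integration cycle. It's nothing TFS-specific, just a Python illustration of the gauntlet: compile the mainline, then run the Build Verification Tests, stopping at the first failure so the team can be told immediately. The build and test commands are hypothetical placeholders you'd swap for your own.

```python
import subprocess

def run_ci_cycle(build_cmd, test_cmds):
    """One continuous-integration cycle: compile, then run the
    Build Verification Tests in order, stopping at the first failure.

    Returns (succeeded, failed_step) so the team can be notified
    the moment the mainline breaks."""
    steps = [("build", build_cmd)]
    steps += [("bvt-%d" % i, cmd) for i, cmd in enumerate(test_cmds, 1)]
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return False, name  # broken build: tell everyone now
    return True, None
```

The cost argument from the paragraph above lives in that loop: every step runs on every integration, so as the build and the BVT suite grow, so does the length of each cycle.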

The idea behind CI is to bring compilation and integration issues to the attention of the team as soon as possible, thereby minimizing disruption. Going beyond that are tools that actually test code entering the system before check-ins occur, and accept or reject them based on the results. With TFS policies this is fairly straightforward to set up, and an open source effort has begun to bring this to life. You can find out more about this effort at

If you prefer to wait until this functionality is built-in, you'll be glad to know that the next product release most likely will include a similar feature with a working title of "Gated Checkin".
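The gated check-in idea can be sketched in a few lines. This is a conceptual illustration, not the TFS implementation: the candidate change is merged into a private copy of the mainline and must survive the build-and-test gauntlet before the real mainline ever sees it. The `build_and_test` callable here is a hypothetical stand-in for your compile plus verification tests.

```python
def gated_checkin(candidate_change, mainline, build_and_test):
    """Sketch of a gated check-in: validate the candidate change
    against a private merge of the mainline; only commit it for
    real if the build-and-test gauntlet passes."""
    trial = mainline + [candidate_change]  # private merge; mainline untouched
    if build_and_test(trial):
        mainline.append(candidate_change)  # accepted: commit for real
        return True
    return False  # rejected: the mainline never saw the breakage
```

The design point is the one the guiding principles above insist on: the Build that counts is never allowed to break, because anything that would break it is turned away at the door.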