Solution mode

After a few years of pushing into enterprise architecture I have taken a break and gone back to a solution/delivery-focussed role. This was a move made with some trepidation, but 5-6 weeks in I have to say it is good to be back in solution mode, working to create a practical rendition of someone else’s enterprise architecture vision. What better way to inject some reality into the PowerPoint-coloured glasses 🙂 Interestingly, even though I am officially “just” a delivery guy now, quite a few of the things I am having to do will have enterprise reach if successful. So watch this space for insights from “the other side”. Topics may include SOA in a COTS environment, data governance and possibly even some stuff on the middleware of the day (see my tag cloud for hints) if I manage to get close enough to it.

On the EA identity crisis ..

The Open Group have an interesting post on Enterprise Architecture’s Quest for its Identity.

I think it’s time for EITAs to give up the EA label in favour of something that clearly reflects their technology focus *and* also emphasises *why they are there*.

One idea I have started to toss around at my day job is re-badging our IT architecture group (which is really EITA) as “technology effectiveness”, to focus on our key result area: making sure the business has the right technology to achieve its goals (strategic/planning focus), and that we in IT run that technology as efficiently and effectively as possible (operational/doing focus). Hopefully this is tweetable enough for me to get across the next time I meet a CxO in the elevator 🙂

On IT EAs and Business Architects

Notes from a recent EA roundtable that I attended:

  • Success factors for business architecture:
    • business architecture focusses on the what, and why rather than the how
    • initiative-driven, i.e. do it in the context of an initiative
  • All participants worked in organisations where business architecture is separate from IT architecture (even if the “EA” sits in the IT architecture space). This was interesting because there has been quite a bit of noise in the blogosphere suggesting the two should be together. I think I was the only one who was more of an IT architect than a business architect.
  • Some EA goals and/or measures:
    • reusability
    • avoidance of project blowouts or gaps
    • complexity reduction (both people and systems)
    • eliminating single points of failure
    • simply avoiding building a Winchester House!

I think the positioning of EA, including whether there is a single business/IT architecture team, depends a lot on organisational dynamics. If architecture is only spoken of in the context of IT, then that’s where it will sit, although interestingly, over time you end up with business people, not necessarily with the title ‘architect’, doing enterprise architecture-type things. A few of the other participants came from organisations with groups bearing names such as ‘business strategy and delivery’. I believe the trick to aligning IT and business architecture is to encourage IT architects to stop being such techies. Drag them into a central team and give them a few problems to solve where technology is clearly not the best solution. Not for everyone, but I think it could be a useful career builder for people whose future isn’t necessarily all in the nuts and bolts of the technology.

More later ..

Convergence of EAI /ESB tools and DI tools

In recent times I have encountered three or four debates (both at work and on the web) about whether you need an ETL tool when you already have an ESB (or EAI tool). The reason this comes up is that if you look only at the connectivity and transformation capabilities, it is nigh impossible to tell them apart. (Update – there is a discussion on LinkedIn on this very topic.)

To my mind the key point of difference is the volume of data they are designed for. ETL tools tend toward high-volume, batch-oriented capabilities such as job scheduling and management, as well as the ability to split jobs into parallel streams by configuration (rather than coding this into your ESB). They also have native intelligence to use the bulk update abilities of the databases where they are often used (again, you’d likely have to code this into your ESB). Processes in the ETL space are often time-critical, but in the range of minutes to hours rather than seconds (there was a slide on this at the recent Informatica 9 world tour – todo:add link).
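To make the bulk-versus-message distinction concrete, here is a minimal sketch using Python’s built-in sqlite3 module. The table and data are invented for illustration, and neither path reflects any particular product’s API: the first loop mimics an ESB processing one message (and one transaction) at a time, while the batched load approximates what an ETL tool generates natively.

```python
import sqlite3
import time

# Made-up data set standing in for a nightly feed.
rows = [(i, f"customer-{i}") for i in range(20_000)]

def fresh_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    return conn

# ESB style: each record arrives as its own message, so each insert is
# its own statement and its own commit.
conn = fresh_db()
start = time.perf_counter()
for row in rows:
    conn.execute("INSERT INTO customers VALUES (?, ?)", row)
    conn.commit()
esb_style = time.perf_counter() - start

# ETL style: one set-oriented bulk load inside a single transaction.
conn = fresh_db()
start = time.perf_counter()
conn.executemany("INSERT INTO customers VALUES (?, ?)", rows)
conn.commit()
etl_style = time.perf_counter() - start

print(f"row-at-a-time: {esb_style:.3f}s, bulk load: {etl_style:.3f}s")
```

Even on a toy in-memory database the per-message commit overhead dominates; against a real warehouse over a network, the gap is what dedicated bulk-load paths exist to close.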

There are probably a few more reasons but the above should suffice for the purpose of this discussion.

Interestingly, in recent months there have been a few announcements of data integration / ETL-type vendors adding real-time integration capabilities to their portfolios. Informatica with 29West, Oracle with GoldenGate, SAS with DataFlux and so on.

This leaves me wondering – what differentiates them from your garden-variety ESB? Why would I buy yet another tool for real-time integration just because it has the word ‘data’ rather than ‘application’ or ‘service’ in its name?

Hmmm …

But wait, just when you thought it was confusing enough, Informatica are heavily touting the concept of “SOA-based data services” (complete with lots of white papers and webinars by/with David Linthicum for true SOA cred) that allow you to surface information from your warehouse directly into your operational systems, without the operational systems needing to know where the data comes from. Oracle’s Data Service Integrator (formerly BEA Liquid Data) is similar.
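The essence of the data-services idea can be sketched in a few lines. Everything here is hypothetical (the class name, the dictionary-backed “stores”, the field names) and not any vendor’s API; the point is that consumers call one service contract and never learn which physical source each field came from.

```python
class CustomerDataService:
    """Hypothetical facade: one logical view over separate physical sources."""

    def __init__(self, warehouse, operational):
        self._warehouse = warehouse      # e.g. historical/analytical attributes
        self._operational = operational  # e.g. live transactional attributes

    def customer_profile(self, customer_id):
        # The federation happens behind the service contract; callers see
        # a single shape regardless of where each field actually lives.
        profile = dict(self._warehouse.get(customer_id, {}))
        profile.update(self._operational.get(customer_id, {}))
        return profile

# Toy in-memory stores standing in for a warehouse and an operational system.
warehouse = {42: {"name": "Acme Pty Ltd", "segment": "SMB"}}
operational = {42: {"open_orders": 3}}

svc = CustomerDataService(warehouse, operational)
print(svc.customer_profile(42))
```

Swapping a source (warehouse for cache, flat file for federation engine) changes only the service’s internals, which is exactly the location transparency the vendors are selling.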

The Ujuzi take? I haven’t figured this one out yet, but it does feel that three or so years from now we will probably see tools that can be applied to all of the above scenarios – the uber-integrator that can do service mediation, policy enforcement, transformation, maybe a bit of orchestration if you’re that way inclined, some ETL, some data services, some real-time data shuffling and so on. There is just too much commonality between these for it to make sense to have 4-5 different products that do very similar things. I want one modular product, with pluggable engines you can bring to bear as required. One skillset to develop on it. One skillset to operate it.

What do you think?

Some open-source gotchas

The increasing use of open source is a reality in our environments. Regardless of what you have defined as your SOE, your technical teams are constantly playing with new tools and coming up with new ideas – and hopefully you encourage this. Every commercial vendor worth their salt embraces open source in some form or other, whether it is just a rubber-stamp exercise or part of their portfolio offering. IBM use Geronimo to lower the barrier to entry to the WebSphere brand, and Oracle look set to do the same with GlassFish as an entry point to the WebLogic brand (yay!).

This is all fine for “commodity” capabilities such as app servers, operating systems (!) and suchlike, but what about the more specialised niches where standards are not so mature – ESBs, data integration tools and the like? Open source is a great way to get familiar with the sorts of capabilities you should be looking for when you do get to commercial vendor selection, but equally, the open source offerings have matured beyond being a mere stepping stone to the real thing. There are quite a few decent lightweight ESBs (Mule, WSO2 etc.) and data integration tools (e.g. Pentaho Kettle) out there that will frankly address 80% of the functional use cases you might throw at them, and hopefully the non-functional ones too.

Here are a few things to look out for as you boldly embrace open source, either as an architectural stepping stone or a core element of your IT environment:

  1. Pick a popular tool – there must be a good reason that people like to use it (and it won’t be because it has been mandated from on high to ensure that the maximum return is squeezed out of an over-priced commercial offering)
  2. Don’t try to support yourself – even though you have the source, rely on the community to fix bugs, and avoid deploying fixes you have built yourself: if your fixes don’t make it into the mainstream, you will effectively have your own fork of the code. Maintaining this discipline will also allow you to take up commercial support when the tool is a roaring success and becomes mission-critical.
  3. Don’t hack the tool – Just because you have the source code doesn’t mean you can customise the tool to do things it wasn’t designed to.  Why?  Because when the tool fails because of a defect in your code, the baby will be thrown out with the bathwater.  On the other hand, if you are particularly keen on a commercial offering, by all means do take an open source tool, hack it to make it break and then use this as an excuse to get management to fork out for a commercial offering because open source sucks!
  4. Skills – unfortunately, in the increasingly outsourced world, where projects can no longer be delivered fully using in-house capability, open source presents another challenge. The outsource vendors tend to invest in the mainstream. This is natural, because that is where the volume of work is guaranteed to be. It can be a bit of a blocker for open source, though: the more niche the technology you want to use, the less likely the outsourcer will be able to provide resources to build with it, and if you contract them on a fixed price to build it, you’d better have a good reference architecture to govern their deliverable! This isn’t only a problem with outsourcing; niche technologies also raise the bar on internal hiring.
  5. Non-functionals – if you manage to steer through the above issues (and others I haven’t thought of today) your project will probably be a success, which means it will quickly be promoted to being a mission-critical part of the enterprise. Follow your basic functional proof of concept with a few non-functional scenarios to test out how it handles failover, load balancing and suchlike.
  6. Familiarise yourself with the product roadmap – particularly for major releases. In the open source world, decisions are sometimes made that favour architectural purity (or some other greater good) over basic functionality. Case in point: GlassFish 2.1 has clustering support but 3.0 doesn’t! (This is a personal annoyance for me at present – luckily it is still at the exploratory stage.)

Finally, don’t forget that there will be use cases where a commercially supported product is better than “plain old” open source. Depending on your need, this could be open source with commercial support, or a fully commercial product – just keep an open mind and choose based on what you really need.

NICTA survey on SOA projects

NICTA are conducting a survey on SOA projects.

“We are conducting a survey of SOA implementation projects to determine cost and effort factors associated with such implementations. We would be grateful if you would complete the survey or pass it to the appropriate person within your organisation. We are seeking as much input from different sources as possible, so if there are others in your organisation or beyond who you feel could complete the survey, we would be grateful if you could forward this email to them.
The survey can be found at:
http://zlix056.srvr.cse.unsw.edu.au/Questionnaire/qs/create
It should take no more than about 15 minutes to complete the survey.
Your answers are completely confidential, and can also be anonymous. If you would like a copy of the results of the survey please include your email.”

Once the survey closes I’ll post my responses and comments on the results (assuming they’ll be publicly available).