Backward compatibility and Transformation Avoidance

A recent post on LinkedIn got me thinking about my use of namespaces to signify that two XML structures are compatible, and how I manage schema compatibility in general.

A common practice (and my current preferred practice) is to use a namespace to denote messages that are structurally compatible. The key driver for this approach was Thomas Erl’s ‘Standardised Service Contract’ principle of service design, and a consequence of applying it successfully is that little or no message transformation happens in your solution. Versions of the schema that share the same major version are compatible: document instances for version 2.8 of the service will validate against the version 2.1 schemas – and vice versa. Because consumers and providers certified to a given major version all understand the messages, there is no work for integration developers, so they move on to greener pastures where ..

.. a different line of thought prevails. Major and minor versioning is still used, however in this case the major version denotes semantic compatibility. In this world, we allow schemas to be refactored over time as our understanding of the domain grows, and judiciously use the power of integration tools to isolate users of older compatible versions from the effects of change.
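
To make the structural-compatibility claim concrete, here is a minimal sketch using the standard javax.xml.validation API – the file names, schema versions and shared target namespace are illustrative assumptions, not anything from a real project:

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class CompatibilityCheck {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);

        // Hypothetical schemas: 2.1 and 2.8 share one major version and
        // therefore one target namespace (e.g. http://example.com/orders/v2).
        Schema v21 = factory.newSchema(new File("orders-v2.1.xsd"));
        Schema v28 = factory.newSchema(new File("orders-v2.8.xsd"));

        // Structural compatibility cuts both ways: a document produced
        // against 2.8 should validate against 2.1, and vice versa.
        Validator validator = v21.newValidator();
        validator.validate(new StreamSource(new File("order-v2.8-instance.xml")));
        System.out.println("2.8 instance validates against the 2.1 schemas");
    }
}
```

If checks like this pass in both directions, every consumer and provider certified to the major version really can exchange messages without transformation.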

So how to choose?

  • Go for structural compatibility + transformation avoidance if you have clever schema designers and a small or tightly-controlled IT application portfolio that can accommodate the occasional need to roll out wholesale changes as you fine-tune your schemas for more backward compatibility and less transformation! If you are part of an IT shop that runs a well-oiled release train, this could be the choice for you – particularly if there is plenty of automated testing that can identify the impact of any wholesale schema change in minutes.
  • Go for semantic compatibility + contained transformation if you are in an environment where there is a lot more formality and cost around getting changes made to the applications that provide the services. In this case, you may choose to set up a dedicated integration team or, better still, just add the transformation/integration tools to the toolkit of the developers who inherit the apps when the vendors leave, and provide them with guidance on when to use them – or not.

As organisations grow, funding shrinks and the need to make localised changes quickly increases, so they tend to gravitate towards the latter approach. It’s probably best to plan to end up here from the outset, even if you initially mandate a “no transformation” rule.

Heresy? Anarchy? Job creation? Or just plain pragmatic? The jury is still out on this one….

 

May 15, 2012 colink

the who, why, what, how and where of architecture

Have you ever been in a meeting where you ask ‘how’ something works and you only get an answer about what it does? Or .. you get an answer that is so detailed that you make a mental note never to invite the respondent to another meeting because they are “too low level”. Or .. you ask why a component works a certain way and the answer is ‘ask the architect’ (hmm .. I thought you were the architect?)

It almost seems that you get a different answer depending on WHO you ask – even within the same team! This is particularly bad in the IT architecture space because – well – everyone wants to be an architect. I personally remember the feeling of absolute glee when, after approx 5 years of waiting and 2 job changes, I officially landed a role where I was ‘the architect’. You would not believe what sort of other titles people will come up with just to avoid calling you an architect (at least that’s what it felt like at the time!).

Ok – back to the point of the post. I thought it might be an idea to try and express the different types of architect roles (WHO) in your typical big-IT environment and how their work interrelates in a complex web of why, what and how. If nothing else, it’ll help me remember what sort of answer to expect when I ask the business architect ‘how’ a given component we are building affects the business user..

  • The business architect helps the business to define a suitable business process to achieve a given outcome (BUSINESS WHY) and, out of this, what technology needs to do in order to support the business. At this stage, all we have is a ‘BUSINESS HOW’ that may have some technology components.
  • If a process involves technology, the solution architect is given the business process definition and any related material as input/justification (the SOLUTION WHY) and asked to come up with a solution. They start from the technology touchpoints identified in the BUSINESS HOW and, in conjunction with the business architect and/or business analysts, gather further details around how the business process will really interact with technology to achieve the outcome. At this stage, we have progressed our understanding to a ‘SOLUTION WHAT’ (i.e. what the solution is really meant to do, precisely defined in reasonably measurable, tech-friendly terms).
  • next the solution architect works with various specialists (ok, “technical architects”) to devise a workable end-to-end solution to meet the objective. At this stage they have defined a ‘SOLUTION HOW’ at minimum, and if they have really taken the time to talk to the people responsible for all the components, they may also have specified what each component is responsible for – i.e. the ‘COMPONENT WHAT’s. Interaction definitions within the solution design provide justification for these capabilities, so the solution design also serves the role of the ‘COMPONENT WHY’.
  • finally, in order to actually build out the components, the specialists need to work out ‘HOW’ they are going to make their component do what the solution architect has come up with. This gets captured in detailed specs, which are the ‘COMPONENT HOW’s. These of course form the builder’s WHY, based on which they will BUILD WHAT THEY ARE MEANT TO!
  • across all of this discussion, we haven’t even touched on the data. Data is the 4th dimension to all of this – ‘WHERE’. At solution level we talk about where data is mastered and where it is moved to and from as part of interactions. At component level we talk about where a data item can be found in transit (integration) or at rest (database, filesystem etc).
  • WHEN is always yesterday – no need to discuss this point.

Note that in smaller organisations, some architects will wear multiple if not all of these hats. It is however important to ensure that the different perspectives of a solution are understood and addressed adequately.

For the non-architects out there who have to interact with IT’s most trusted profession, hopefully this gives you a view of the different perspectives that people labelled as “architects” might be viewing things from – if nothing else it will help you understand why they say they are in sync but are constantly arguing!

PS – having written this I think I’ll take another look at the Zachman framework – maybe the penny has finally dropped on what he’s on about :-)

April 27, 2012 colink

Solution mode

After a few years of pushing into enterprise architecture I have taken a break and gone back to a solution/delivery focussed role. This was a move made with some trepidation but 5-6 weeks in, I have to say that it is good to be back in solution mode, working to create a practical rendition of someone else’s enterprise architecture vision.  What better way to inject some reality into the Powerpoint-coloured glasses :-) Interestingly, even though I am officially “just” a delivery guy now, quite a few of the things I am having to do will have enterprise reach if successful. So watch this space for insights from “the other side”. Topics may include SOA in a COTS environment, data governance and possibly even some stuff on the middleware of the day (see my tag cloud for hints) if I manage to get close enough to it.

1 Comment September 21, 2011 colink

On the EA identity crisis ..

The Open Group have an interesting post on Enterprise Architecture’s Quest for its Identity

I think it’s time for EITAs (enterprise IT architects) to give up the EA label in favour of something that clearly reflects their technology focus *and* also emphasises *why they are there*.

One idea I have started to toss around at my day job is re-badging our IT architecture group (which is really EITA) as “technology effectiveness”, to focus on our key result area: making sure that the business has the right technology to achieve its goals (strategic / planning focus), and that we as IT run that technology as efficiently and effectively as possible (operational / doing focus). Hopefully this is tweetable enough for me to get across the next time I meet a CxO in the elevator :-)

March 15, 2011 colink

Avatar – made with Glassfish?

I’m very excited about the l-o-n-g-awaited launch of Glassfish 3.1, as it seems to have a very similar “shape” to WebLogic which is an app server I’ve been using for a long time.  (“shape” = Admin server, managed servers, node manager, scriptable setup, good high availability, nice admin console etc etc).

Looking around on Glassfish.java.net I found this rather interesting white paper comparing it with Tomcat (hopefully it will be updated to reflect 3.1 capabilities such as clustering). One point of comparison was their “CGI”[1] capabilities – apparently this is still an important feature for application servers! Check it out below:

[Screenshot from the Oracle white paper: feature comparison table for Glassfish and Tomcat, including a row for “CGI” support.]

I wonder which application server they used to make Avatar?

Seriously though, I will be checking out Glassfish 3.1 in detail albeit for more pedestrian goals like providing a robust and cost-effective “full service” appserver for my mission-critical apps. Sure I can embed ActiveMQ in Tomcat, but do I want to? Watch this space for my findings :-)
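
For the record, embedding a broker really is only a few lines of code. Here’s a hedged sketch using ActiveMQ’s BrokerService API – the broker name, persistence setting and connector URL are placeholder choices of mine:

```java
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        // An in-JVM broker, e.g. started from a Tomcat lifecycle listener.
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");             // placeholder name
        broker.setPersistent(false);                  // no message store for this demo
        broker.addConnector("tcp://localhost:61616"); // assumed port
        broker.start();

        // ... application runs; remember to call broker.stop() on shutdown.
    }
}
```

So the question is less “can I?” than “do I want to own the clustering, monitoring and failover story myself?” – which is exactly where a “full service” appserver earns its keep.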

[1] Note to Oracle – you might want to get your tech writers up to speed on this CGI: http://en.wikipedia.org/wiki/Common_Gateway_Interface

 

March 3, 2011 colink

On IT EAs and Business Architects

Notes from a recent EA roundtable that I attended:

  • Success factors for business architecture:
    • business architecture focusses on the what and the why, rather than the how
    • initiative-driven, i.e. do it in the context of an initiative
  • All participants worked in organisations where business architecture is separate from IT architecture (even if the “EA” sits in the IT architecture space). This was interesting for me because there’s been quite a bit of noise in the blogosphere suggesting that the two should sit together. I think I was the only one who was more of an IT architect than a business architect.
  • Some EA goals and/or measures:
    • reusability
    • avoidance of project blowouts or gaps
    • complexity reduction (both people and systems)
    • eliminating single points of failure
    • simply avoiding building a Winchester House!

I think the positioning of EA, including whether there is a single business/IT architecture team, depends a lot on organisational dynamics. If architecture is only spoken of in the context of IT, then that’s where it will sit, although interestingly, over time what happens is that you have business people, not necessarily with the title ‘architect’, doing enterprise architecture-type things. A few of the other participants came from organisations that have groups with names such as ‘business strategy and delivery’. I believe the trick to aligning IT and business architecture is to encourage IT architects to stop being such techies. Drag them into a central team and give them a few problems to solve where technology is clearly not the best solution. Not for everyone, but I think it could be a useful career builder for people whose future isn’t necessarily all in the nuts and bolts of the technology.

More later ..

October 24, 2010 colink

Convergence of EAI/ESB tools and DI tools

In recent times I have encountered 3 or 4 debates (both at work and on the web) on whether you need an ETL tool when you already have an ESB (or EAI tool?).  The reason this comes up is that if you just look at the connectivity and transformation capabilities it is nigh impossible to tell them apart.   (Update – there is a discussion on LinkedIn about this very topic).

To my mind the key point of difference is the volume of data they are designed for. ETL tools tend toward high-volume, batch-oriented capabilities such as job scheduling and management, as well as the ability to split jobs into parallel streams by configuration (rather than coding in your ESB). They also have native intelligence to use the bulk update abilities of the databases where they are often used – again, something you’d likely have to hand-code in your ESB, as in the sketch below. Processes in the ETL space are often time-critical, but in the range of minutes to hours rather than seconds (there was a slide on this at the recent Informatica 9 world tour – todo:add link).
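
As an illustration, here’s roughly what that hand-coding looks like – a sketch only, with a hypothetical staging table, column list and connection URL; an ETL tool would give you the equivalent (and usually a faster, database-native bulk path) through configuration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class BulkLoad {

    // Chunked JDBC batch insert into a hypothetical staging table.
    static void load(List<String[]> rows) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dwhost:1521/DW");   // assumed URL
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO STG_CUSTOMER (ID, NAME) VALUES (?, ?)")) {
            int count = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++count % 1000 == 0) {
                    ps.executeBatch();   // flush in chunks to bound memory
                }
            }
            ps.executeBatch();           // flush the remainder
        }
    }
}
```

Multiply this by every table, then add restartability, scheduling and parallel streams, and the configuration-driven approach of the ETL tools starts to look attractive.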

There are probably a few more reasons but the above should suffice for the purpose of this discussion.

Interestingly, in recent months there have been a few announcements of data integration / ETL-type vendors adding real-time integration capabilities to their portfolios. Informatica with 29West, Oracle with GoldenGate, SAS with DataFlux and so on.

This leaves me wondering – what differentiates them from your garden-variety ESB? Why would I buy yet another tool for realtime integration just because it has the word ‘data’ rather than ‘application’ or ‘service’?

Hmmm …

But wait, just when you thought it was confusing enough, Informatica are heavily touting the concept of “SOA-based data services” (complete with lots of white papers & webinars by/with David Linthicum for true SOA cred) that allow you to surface information from your warehouse directly into your operational systems without the operational systems needing to know where the data comes from. Oracle’s data service integrator (formerly BEA Liquid Data) is similar.

The Ujuzi take? I haven’t figured this one out yet, but it does feel that approx 3 years from now, we will probably see tools that can be applied to all of the above scenarios – the uber-integrator that can do service mediation, policy enforcement, transformation, maybe a bit of orchestration if you’re that way inclined, some ETL, some data services, some real-time data shuffling etc. There is just too much commonality between these for it to make sense to have 4-5 different products that do very similar things. I want one modular product, with pluggable engines that you can bring to bear as required. One skillset to develop on it. One skillset to operate it.

What do you think?

June 4, 2010 colink

Some open-source gotchas

The increasing use of open source is a reality in our environments. Regardless of what you have defined as your SOE, your technical teams are constantly playing with new tools and coming up with new ideas – and hopefully you encourage this. Every commercial vendor worth their salt embraces open source in some form or other, whether it is just a rubber-stamp exercise or part of their portfolio offering. IBM use Geronimo to lower the barrier to entry to the WebSphere brand, and Oracle look set to do the same with Glassfish as an entry point to the WebLogic brand (yay!).

This is all fine for “commodity” capabilities such as app servers, operating systems (!) and suchlike, but what about the more specialised niches where standards are not so mature – ESBs, data integration tools and the like? Well, open source is a great way to get familiar with the sorts of capabilities you should be looking for when you do get to commercial vendor selection, but equally, the open source offerings have matured beyond the point where they are just a stepping stone to the real thing. There are quite a few decent lightweight ESBs (Mule, WSO2 etc) and data integration tools (e.g. Pentaho Kettle) out there that quite frankly will address 80% of the functional use cases you might throw at them, and hopefully the non-functional ones too.

Here are a few things to look out for as you boldly embrace open source, either as an architectural stepping stone or as a core element of your IT environment:

  1. Pick a popular tool – there must be a good reason that people like to use it (and it won’t be because it has been mandated from on high to ensure that the maximum return is squeezed out of an over-priced commercial offering)
  2. Don’t try to support yourself – Even if the tool is open-source, rely on the community to fix bugs, and avoid deploying fixes you have built yourself as you will effectively have your own fork of the code if your fixes don’t make it into the mainstream.  Maintaining this discipline will also allow you to take up commercial support when the tool is a roaring success and becomes mission-critical.
  3. Don’t hack the tool – Just because you have the source code doesn’t mean you can customise the tool to do things it wasn’t designed to.  Why?  Because when the tool fails because of a defect in your code, the baby will be thrown out with the bathwater.  On the other hand, if you are particularly keen on a commercial offering, by all means do take an open source tool, hack it to make it break and then use this as an excuse to get management to fork out for a commercial offering because open source sucks!
  4. Skills – Unfortunately, in the increasingly outsourced world, where projects can no longer be delivered fully using in-house capability, open source presents another challenge. The outsource vendors tend to invest in the mainstream. This is natural, because that is where one is guaranteed to get the volume of work. Unfortunately this can be a bit of a blocker for open source, because the more niche the technology you want to use, the less likely the outsourcer will be able to provide resources to build with it, and if you contract them on a fixed price to build it, you’d better have a good reference architecture to govern their deliverable! This isn’t only a problem with outsourcing. Niche technologies also raise the bar on internal hiring.
  5. Non-functionals – if you manage to steer through the above issues (and others I haven’t thought of today), your project will probably be a success, which means it will quickly be promoted to being a mission-critical part of the enterprise. Follow your basic functional proof of concept with a few non-functional scenarios to test how it handles failover, load balancing and suchlike.
  6. Familiarise yourself with the product roadmap – particularly for major releases. In the open source world, sometimes decisions are made that favour architectural purity (or some other greater good) over basic functionality. Case in point – Glassfish 2.1 has clustering support but 3.0 doesn’t! (This is a personal annoyance for me at present – luckily it is still at the exploratory stage.)

Finally, don’t forget that there will be use cases where a commercially supported product is better than “plain old” open source. Depending on your need, this could be open source with a commercial support offering, or a fully commercial product – just keep an open mind and choose based on what you really need.

2 Comments May 11, 2010 colink

YASS (Yet Another SOA Survey)

Some of the questions in this one made me think about gaps in our governance.  Take it here – eBizq’s SOA survey.

March 31, 2010 colink

tweetability

Jacki Johnson, CEO of the Buzz (IAG’s new online-only insurer), spoke at the FST financial services conference about engaging with your customers online 24×7.

I liked her comment about how the 140-character limit on Twitter forces you to make your messages succinct (unlike this post, I guess).

Exercise for me – increase the tweetability of my communications at work :-)

March 27, 2010 colink
