Tag Archives: soa

Backward compatibility and Transformation Avoidance

A recent post on LinkedIn got me thinking about my use of namespaces to signify that two XML structures are compatible, and how I manage schema compatibility in general.

A common practice (and my current preferred practice) is to use a namespace to denote messages that are structurally compatible. The key driver for this approach was Thomas Erl’s ‘Standardised Service Contract’ principle of service design, and a consequence of applying it successfully is that you will have little or no message transformation happening in your solution. Versions of the schema that share the same major version are compatible: document instances for version 2.8 of the service will validate against the version 2.1 schemas – and vice versa. Because consumers and providers that are certified to a given major version all understand the messages, there is no work for integration developers, so they move on to greener pastures where ..

.. a different line of thought prevails. Major and minor versioning is still used; however, in this case the major version denotes semantic compatibility. In this world, we allow schemas to be refactored over time as our understanding of the domain grows, and judiciously use the power of integration tools to isolate users of older compatible versions from the effects of change.
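To make the first approach concrete, here is a minimal sketch (in Python, using lxml, with hypothetical schema and instance file names and a namespace URI that carries only the major version) of the kind of check that demonstrates structural compatibility: an instance produced against the 2.8 schemas should still validate against the 2.1 schemas, and vice versa.

    from lxml import etree

    # Hypothetical artefacts - both minor versions share the same
    # major-version namespace, e.g. http://example.com/order/v2
    OLD_SCHEMA = "order_v2_1.xsd"
    NEW_SCHEMA = "order_v2_8.xsd"
    INSTANCE = "order_from_a_v2_8_provider.xml"

    doc = etree.parse(INSTANCE)
    for xsd in (OLD_SCHEMA, NEW_SCHEMA):
        schema = etree.XMLSchema(etree.parse(xsd))
        print(xsd, "valid" if schema.validate(doc) else schema.error_log)

Run over a corpus of sample instances, checks like this are also what make it feasible to identify the impact of a wholesale schema change in minutes rather than weeks.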

So how to choose?

  • Go for structural compatibility + transformation avoidance if you have clever schema designers and a small or tightly-controlled IT application portfolio that will accommodate the occasional need to roll out wholesale changes as you fine-tune your schemas for more backward compatibility and less transformation!  If you are part of an IT shop that runs a well-oiled release train this could be the choice for you – particularly if there is lots of automated testing that can be used to identify the impact of any wholesale schema changes in minutes.
  • Go for semantic compatibility + contained transformation if you are in an environment where there is a lot more formality and cost around getting changes made to the applications that provide the services. In this case, you may choose to set up a dedicated integration team or, better still, just add the transformation/integration tools to the toolkit of the developers who inherit the apps when the vendors leave, and provide them with guidance on when to use them – or not.

As organisations grow, funding reduces, and the need to make localised changes quickly increases, they may gravitate towards the latter approach, so it’s probably best to plan to end up there from the outset, even if you initially mandate a “no transformation” rule.

Heresy?  Anarchy?  Job creation?  or just plain pragmatic?  The jury is still out on this one….

 

Solution mode

After a few years of pushing into enterprise architecture I have taken a break and gone back to a solution/delivery-focussed role. This was a move made with some trepidation, but 5-6 weeks in, I have to say that it is good to be back in solution mode, working to create a practical rendition of someone else’s enterprise architecture vision.  What better way to inject some reality into the Powerpoint-coloured glasses? 🙂 Interestingly, even though I am officially “just” a delivery guy now, quite a few of the things I am having to do will have enterprise reach if successful. So watch this space for insights from “the other side”. Topics may include SOA in a COTS environment, data governance and possibly even some stuff on the middleware of the day (see my tag cloud for hints) if I manage to get close enough to it.

Convergence of EAI /ESB tools and DI tools

In recent times I have encountered 3 or 4 debates (both at work and on the web) on whether you need an ETL tool when you already have an ESB (or EAI tool?).  The reason this comes up is that if you just look at the connectivity and transformation capabilities, it is nigh impossible to tell them apart.  (Update – there is a discussion on LinkedIn about this very topic.)

To my mind the key point of difference is the volume of data they are designed for.  ETL tools tend toward high-volume, batch-oriented capabilities such as job scheduling and management, as well as the ability to split jobs into parallel streams by configuration (rather than coding it in your ESB). They also have native intelligence to use the bulk update abilities of the databases where they are often used (again, you’d likely have to code this into your ESB).  Processes in the ETL space are often time-critical, but in the range of minutes to hours rather than seconds (there was a slide on this at the recent Informatica 9 world tour – todo: add link).
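For a rough feel of the difference, here is a hypothetical Python sketch (invented table, data and transform) of the plumbing you would end up hand-coding in an ESB or general-purpose code: partitioning the job into parallel streams and then bulk-loading the result, which a typical ETL tool gives you largely through configuration.

    import sqlite3
    from concurrent.futures import ThreadPoolExecutor

    # Invented source data and transform, standing in for whatever the job reads and reshapes.
    source_rows = [(i, f"customer-{i}") for i in range(100_000)]

    def transform(row):
        rid, name = row
        return (rid, name.upper())

    # Split the job into parallel streams by hand (an ETL tool does this via configuration).
    chunks = [source_rows[i:i + 10_000] for i in range(0, len(source_rows), 10_000)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = pool.map(lambda chunk: [transform(r) for r in chunk], chunks)
        transformed = [row for chunk in results for row in chunk]

    # Load in large batches rather than one round trip per row - a crude stand-in
    # for the native bulk-load paths that ETL tools know how to exploit.
    conn = sqlite3.connect("warehouse.db")
    conn.execute("CREATE TABLE IF NOT EXISTS customer (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO customer VALUES (?, ?)", transformed)
    conn.commit()
    conn.close()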

There are probably a few more reasons but the above should suffice for the purpose of this discussion.

Interestingly, in recent months there have been a few announcements of data integration / ETL-type vendors adding real-time integration capabilities to their portfolios: Informatica with 29West, Oracle with GoldenGate, SAS with DataFlux, and so on.

This leaves me wondering – what differentiates them from your garden-variety ESB? Why would I buy yet another tool for real-time integration just because it has the word ‘data’ rather than ‘application’ or ‘service’ in its name?

Hmmm …

But wait, just when you thought it was confusing enough, Informatica are heavily touting the concept of “SOA-based data services” (complete with lots of white papers & webinars by/with David Linthicum for true SOA cred) that allow you to surface information from your warehouse directly into your operational systems, without the operational systems needing to know where the data comes from.  Oracle’s data service integrator (formerly BEA Liquid Data) is similar.

The Ujuzi take?  I haven’t figured this one out yet, but it does feel that, three years or so from now, we will probably see tools that can be applied to all of the above scenarios – the uber-integrator that can do service mediation, policy enforcement, transformation, maybe a bit of orchestration if you’re that way inclined, some ETL, some data services, some real-time data shuffling, etc.  There is just too much commonality between these for it to make sense to have 4-5 different products that do very similar things.  I want one modular product, with pluggable engines that you can bring to bear as required.  One skillset to develop on it.  One skillset to operate it.

What do you think?

NICTA survey on SOA projects

NICTA are conducting a survey on SOA projects.

“We are conducting a survey of SOA implementation projects to determine cost and effort factors associated with such implementations. We would be grateful if you would complete the survey or pass it to the appropriate person within your organisation. We are seeking as much input from different sources as possible, so if there are others in your organisation or beyond who you feel could complete the survey, we would be grateful if you could forward this email to them.
The survey can be found at:
http://zlix056.srvr.cse.unsw.edu.au/Questionnaire/qs/create
It should take no more than about 15 minutes to complete the survey.
Your answers are completely confidential, and can also be anonymous. If you would like a copy of the results of the survey please include your email.”

Once the survey closes I’ll post my responses and comments on the results (assuming they’ll be publicly available).

Reference architectures

ZapThink remind us of the value of reference architectures.

The Ujuzi take (with apologies):

  1. build on a pre-existing reference architecture where possible (this is the standing on the shoulders of giants bit)
  2. if you are committed to a vendor’s product, decide how deeply you are prepared to entrench it in your architecture and then build on their best practices for the chosen bits
  3. maintain a register of lessons learned (and solutions) as you go through projects
  4. document antipatterns and how to avoid them
  5. document new patterns and how to apply them
  6. document what doesn’t work as advertised – whether it is a vendor product capability or just an architectural approach
  7. PoC, PoC, PoC

Big-S services

Over the last year or so, I have been leading an SOA initiative which, by all accounts, has gone relatively well, with one downside. All the developers and architects will give it a qualified thumbs up (hopefully not just because it’s good resume fodder), but the business analysts and business owners of the applications that are enabled by our beautiful service catalog have no appreciation of what the services are and how they affect them. The only exception to this is the services that we expose to our customers as web services.

So how do we ‘take SOA to the business’ – and more importantly, why should we?

The recent release of TOGAF 9, which includes a chapter on SOA, has made it clearer. Basically, we have been doing developer-led SOA (or, for the sake of my ego, architect-led SOA), building little-S services that are important building blocks for the techies, but that the business couldn’t care less about.  Ironically, the highly reusable services that the developers love are probably the ones that are least meaningful to the business (future post on this).  In order to ‘take SOA to the business’, we need to think of the entire business as a set of capabilities that we provide to customers through a combination of people, process and technology.  These capabilities are big-S services, and are recognisable elements of the business value chain (e.g. assessing a credit card application).  Their work has a quantifiable real-world effect on the bottom line (as opposed to a technical real-world effect such as a database table being updated), and similarly their SLAs on performance, availability etc. are traceable to a commitment to a real customer or other business stakeholder.

But wait, there’s more.

ITIL v3 gives us a service-centric framework for managing the lifecycle of IT services, i.e. the capabilities provided by IT as the technology contribution to big-S services, and is already well established at the ‘lights-on’ end of the service lifecycle. In my opinion there is no reason why we couldn’t use ITIL as an overarching framework to manage the lifecycle of all big-S services, and then overlay whatever else we need at the lower levels to manage the lifecycle of the little-S services that contribute to a big-S service. As an example, this is where we would see the intersection between the service registry/repository and the CMDB.
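To make the distinction a little more tangible, here is a toy Python sketch (all names invented) of how a big-S business service might be modelled with references to the little-S technical services that realise it – roughly the point where the service registry/repository and the CMDB would meet.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TechnicalService:
        # little-S: the kind of entry you would find in a service registry/repository
        name: str
        endpoint: str

    @dataclass
    class BusinessService:
        # big-S: a recognisable step in the business value chain, with SLAs
        # traceable to a commitment made to a real customer or stakeholder
        name: str
        sla_availability: float
        realised_by: List[TechnicalService] = field(default_factory=list)

    assess_application = BusinessService(
        name="Assess credit card application",
        sla_availability=0.999,
        realised_by=[
            TechnicalService("CreditScoreService", "https://example.internal/credit-score"),
            TechnicalService("CustomerLookupService", "https://example.internal/customers"),
        ],
    )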

So Bill (are there people in my service), I think I might finally be getting it.  Now I’m off to re-read the debate on B-SOA vs T-SOA 🙂

Eureka

PS – if you have a southern drawl, replace big-S service with ‘Business service’ and little-S service with ‘Technical service’.

Welcome to blog.ujuzi.com

Welcome – this blog is about sharing and growing our ujuzi:

  • our experience – over 30 years in total in the domains of business process improvement and using technology for business advantage
  • our knowledge and expertise – accumulated over time, and ever growing
  • our skill and technique – how to put what we know into action

We don’t claim to know it all – far from it – this blog is more of an opportunity to get some good old 360-degree review of what we think we do know!

Enjoy.