Tag Archives: oracle

Solution mode

After a few years of pushing into enterprise architecture I have taken a break and gone back to a solution/delivery-focused role. This was a move made with some trepidation, but 5-6 weeks in, I have to say that it is good to be back in solution mode, working to create a practical rendition of someone else’s enterprise architecture vision.  What better way to inject some reality into the PowerPoint-coloured glasses 🙂 Interestingly, even though I am officially “just” a delivery guy now, quite a few of the things I am having to do will have enterprise reach if successful. So watch this space for insights from “the other side”. Topics may include SOA in a COTS environment, data governance and possibly even some stuff on the middleware of the day (see my tag cloud for hints) if I manage to get close enough to it.

Avatar – made with GlassFish?

I’m very excited about the l-o-n-g-awaited launch of GlassFish 3.1, as it seems to have a very similar “shape” to WebLogic, an app server I’ve been using for a long time (“shape” = admin server, managed servers, node manager, scriptable setup, good high availability, a nice admin console, etc.).

Looking around on Glassfish.java.net I found this rather interesting white paper comparing it with Tomcat (hopefully it will be updated to reflect 3.1 capabilities such as clustering). One point of comparison was their “CGI”[1] capabilities – apparently this is still an important feature for application servers! Check it out below:

[Screenshot: Oracle white paper comparing GlassFish and Tomcat features – the “CGI” row]

I wonder which application server they used to make Avatar?

Seriously though, I will be checking out GlassFish 3.1 in detail, albeit for more pedestrian goals like providing a robust and cost-effective “full service” app server for my mission-critical apps. Sure, I can embed ActiveMQ in Tomcat (see the sketch below), but do I want to? Watch this space for my findings 🙂
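For what it’s worth, the embedding itself is not hard. Here is a minimal sketch using ActiveMQ’s BrokerService API from a webapp lifecycle listener – the listener class name is mine, you’d still need to register it in web.xml and bundle the ActiveMQ jars, and the real question is whether you want to own the broker’s tuning, persistence and failover story yourself:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    import org.apache.activemq.broker.BrokerService;

    // Hypothetical listener (name is mine) – register it in web.xml so the
    // broker starts and stops with the webapp.
    public class EmbeddedBrokerListener implements ServletContextListener {

        private BrokerService broker;

        public void contextInitialized(ServletContextEvent sce) {
            try {
                broker = new BrokerService();
                broker.setBrokerName("embedded");
                broker.setPersistent(false);                  // in-memory only for this sketch
                broker.addConnector("tcp://localhost:61616"); // expose JMS to external clients
                broker.start();
            } catch (Exception e) {
                throw new RuntimeException("Embedded ActiveMQ broker failed to start", e);
            }
        }

        public void contextDestroyed(ServletContextEvent sce) {
            try {
                if (broker != null) {
                    broker.stop();
                }
            } catch (Exception e) {
                // best effort on shutdown
            }
        }
    }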

[1] Note to Oracle – you might want to get your tech writers up to speed on this CGI – http://en.wikipedia.org/wiki/Common_Gateway_Interface


Convergence of EAI/ESB tools and DI tools

In recent times I have encountered 3 or 4 debates (both at work and on the web) on whether you need an ETL tool when you already have an ESB (or EAI tool?).  The reason this comes up is that if you just look at the connectivity and transformation capabilities it is nigh impossible to tell them apart.   (Update – there is a discussion on LinkedIn about this very topic).

To my mind the key point of difference is the volume of data they are designed for. ETL tools tend toward high-volume, batch-oriented capabilities such as job scheduling and management, as well as the ability to split jobs into parallel streams by configuration (rather than by coding in your ESB). They also have the native intelligence to use the bulk-update abilities of the databases they typically sit in front of – again, you’d likely have to hand-code this into your ESB, along the lines of the JDBC sketch below. Processes in the ETL space are often time-critical, but in the range of minutes to hours rather than seconds (there was a slide on this at the recent Informatica 9 world tour – todo: add link).
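To make that last point concrete, below is roughly the kind of plumbing you end up hand-writing in an ESB (or in plain Java) to approximate what an ETL engine gives you through configuration: plain JDBC batching against a staging table. The connection URL, table name and Row type are invented for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    // Illustrative only – the URL, table and Row type are invented.
    public class BulkLoader {

        private static final int BATCH_SIZE = 1000;

        public void load(List<Row> rows) throws SQLException {
            Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dwhost:1521/DW");
            try {
                con.setAutoCommit(false); // commit per batch, not per row
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO stg_customer (id, name) VALUES (?, ?)");
                int count = 0;
                for (Row row : rows) {
                    ps.setLong(1, row.id);
                    ps.setString(2, row.name);
                    ps.addBatch();             // queue locally instead of executing
                    if (++count % BATCH_SIZE == 0) {
                        ps.executeBatch();     // one round trip per 1000 rows
                        con.commit();
                    }
                }
                ps.executeBatch();             // flush the remainder
                con.commit();
                ps.close();
            } finally {
                con.close();
            }
        }

        public static class Row {
            public long id;
            public String name;
        }
    }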

There are probably a few more reasons but the above should suffice for the purpose of this discussion.

Interestingly, in recent months there have been a few announcements of data integration / ETL-type vendors adding real-time integration capabilities to their portfolios: Informatica with 29West, Oracle with GoldenGate, SAS with DataFlux and so on.

This leaves me wondering – what differentiates them from your garden-variety ESB? Why would I buy yet another tool for real-time integration just because it has the word ‘data’ rather than ‘application’ or ‘service’ in its name?

Hmmm …

But wait – just when you thought it was confusing enough, Informatica are heavily touting the concept of “SOA-based data services” (complete with lots of white papers & webinars by/with David Linthicum for true SOA cred) that let you surface information from your warehouse directly into your operational systems, without the operational systems needing to know where the data comes from.  Oracle’s Data Service Integrator (formerly BEA Liquid Data) is similar.
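A deliberately naive sketch of what such a data service contract might look like – all names and types here are hypothetical, and the real products generate and manage this layer for you:

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // Hypothetical contract – the operational caller binds to this service and
    // never learns whether the answer came from the warehouse, an ODS or a cache.
    @WebService
    public interface CustomerSpendService {

        @WebMethod
        SpendSummary last12MonthsSpend(String customerId);
    }

    // Plain value object carried over the wire; the fields are illustrative.
    class SpendSummary {
        public String customerId;
        public double totalSpend;
        public int transactionCount;
    }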

The Ujuzi take?  I haven’t figured this one out yet, but it does feel that roughly three years from now we will see tools that can be applied to all of the above scenarios – the uber-integrator that can do service mediation, policy enforcement, transformation, maybe a bit of orchestration if you’re that way inclined, some ETL, some data services, some real-time data shuffling and so on.  There is just too much commonality between these for it to make sense to have 4-5 different products that do very similar things.  I want one modular product with pluggable engines that you can bring to bear as required (sketched below).  One skillset to develop on it.  One skillset to operate it.
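In code terms, the wish amounts to something like the following – a purely hypothetical sketch of the uber-integrator idea, not any vendor’s actual API:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    // Purely hypothetical – one modular host, pluggable engines
    // brought to bear as required.
    interface IntegrationEngine {
        String style();                         // "mediation", "etl", "data-services", ...
        void start(Map<String, String> config); // one config model, one skillset
        void stop();
    }

    class UberIntegrator {
        private final List<IntegrationEngine> engines;

        UberIntegrator(IntegrationEngine... engines) {
            this.engines = Arrays.asList(engines);
        }

        void start(Map<String, String> config) {
            for (IntegrationEngine engine : engines) {
                System.out.println("Starting " + engine.style() + " engine");
                engine.start(config);
            }
        }

        void stop() {
            for (IntegrationEngine engine : engines) {
                engine.stop();
            }
        }
    }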

What do you think?

Some open-source gotchas

The increasing use of open source is a reality in our environments.  Regardless of what you have defined as your standard operating environment (SOE), your technical teams are constantly playing with new tools and coming up with new ideas – and hopefully you encourage this.  Every commercial vendor worth their salt embraces open source in some form or other, whether as a rubber-stamp exercise or as part of their portfolio offering.  IBM use Geronimo to lower the barrier to entry to the WebSphere brand, and Oracle look set to do the same with GlassFish as an entry point to the WebLogic brand (yay!).

This is all fine for “commodity” capabilities such as app servers and operating systems (!), but what about the more specialised niches where standards are not so mature – ESBs, data integration tools and the like?  Open source is a great way to get familiar with the sorts of capabilities you should be looking for when you do get to a commercial vendor selection, but equally the open source offerings have matured beyond the point where they are just a stepping stone to the real thing.  There are quite a few decent lightweight ESBs (Mule, WSO2, etc.) and data integration tools (e.g. Pentaho Kettle) out there that, quite frankly, will address 80% of the functional use cases you might throw at them – and hopefully the non-functional ones too.

Here are a few things to look out for as you boldly embrace open source, either as an architectural stepping stone or as a core element of your IT environment:

  1. Pick a popular tool – there must be a good reason that people like to use it (and it won’t be because it has been mandated from on high to ensure that the maximum return is squeezed out of an over-priced commercial offering)
  2. Don’t try to support yourself – Even though you have the source code, rely on the community to fix bugs, and avoid deploying fixes you have built yourself: if your fixes don’t make it upstream, you will effectively be maintaining your own fork of the code.  Maintaining this discipline will also allow you to take up commercial support when the tool is a roaring success and becomes mission-critical.
  3. Don’t hack the tool – Just because you have the source code doesn’t mean you should customise the tool to do things it wasn’t designed to do.  Why?  Because when the tool fails due to a defect in your code, the baby will be thrown out with the bathwater.  On the other hand, if you are particularly keen on a commercial offering, by all means take an open source tool, hack it until it breaks, and then use this as an excuse to get management to fork out for the commercial offering because open source sucks!
  4. Skills – Unfortunately, in an increasingly outsourced world where projects can no longer be delivered entirely with in-house capability, open source presents another challenge.  The outsourcing vendors tend to invest in the mainstream – naturally, because that is where they are guaranteed a volume of work.  The more niche the technology you want to use, the less likely the outsourcer will be able to provide resources to build on it, and if you contract them at a fixed price to build it, you’d better have a good reference architecture to govern their deliverable!  This isn’t only a problem with outsourcing; niche technologies also raise the bar on internal hiring.
  5. Non-functionals – If you manage to steer through the above issues (and others I haven’t thought of today), your project will probably be a success, which means it will quickly be promoted to being a mission-critical part of the enterprise.  Follow your basic functional proof of concept with a few non-functional scenarios to test how it handles failover, load balancing and the like.
  6. Familiarise yourself with the product roadmap – particularly for major releases.  In the open source world, decisions are sometimes made that favour architectural purity (or some other greater good) over basic functionality.  Case in point: GlassFish 2.1 has clustering support but 3.0 doesn’t! (This is a personal annoyance for me at present – luckily we are still at the exploratory stage.)

Finally, don’t forget that there will be use cases where a commercially supported product is better than “plain old” open source.  Depending on your needs, that could mean open source with a commercial support offering, or a fully commercial product – just keep an open mind and choose based on what you really need.