In-Depth
Richard Soley, OMG: On modelers, code generation, Web services and CORBA
- By Jack Vaughan
- October 31, 2002
OMG head Richard Soley has always been enthusiastic in extolling technologies
he cares about. He had a big role in promoting CORBA, and has continued to lead
the OMG as it now seeks to infuse the development world with the Model Driven
Architecture (MDA). Soley spoke recently with ADT's Jack Vaughan over
coffee at a cafeteria between sessions at a trade show held on a Boston wharf.
Not transcribed here are the spirited shouts of 'macaroni and cheese' and
'Salisbury steak' heard from a hardworking cafeteria counter crew. These we must
leave you to simply imagine.
There is a great emphasis now at the OMG on modeling and MDA. Yet
isn't it true that models and code generation can be controversial in
organizations?
The problem with modeling the way it's done today is
that it's not part of the development process. It is sort of a pre-development
process. You put your best people over in the corner because they supposedly
understand the business better. They develop a model of how the business works,
print up the document, bind it nicely and hand it to the developers. The
developers then come out of their cubes just long enough to have a model-burning
party and develop what they're going to develop, because the model bears no
direct relation to any development artifact.
The difference with a model-driven architecture is that modeling artifacts
are part of the development process. You can actually take models and use them
as development artifacts to automatically generate the data on the wire,
protocols, data formats, data transformation, transactional integrity,
persistent storage and event management -- all those services that are difficult
to write by hand can be generated directly from the model.
Don't programmers say "I can generate better code than any machine
ever built"?
Absolutely. Go back to 1959, and that was what John
Backus had to get over when he did the first FORTRAN compiler. Every programmer
said "I can write better code than a FORTRAN compiler," and of course they couldn't.
But even if a programmer could do better code for a particular machine, the
problem is that the infrastructure keeps changing. Then you have to look at the
code, figure out what the model was that's implicit in the code, and regenerate
code for some other infrastructure. And that is a complete waste of time. It
would be more effective to start with one description of the application as a
model and generate code for different infrastructures. I'm still going to have
to write code, right? The business logic has to be described somehow. But the
hard parts that link with the infrastructure, that do persistent storage and
that do the data transformations, those parts can be generated from the model.
This is not something in the future. This is what companies like Lockheed Martin
and Wells Fargo, [as well as] smaller companies, are doing today. It's not hard
to do, but it requires you to understand a bit more about modeling and how it
becomes part of the development process.
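To make that split concrete, here is a minimal sketch in Java with hypothetical names (AccountService and so on): the business logic is the part a developer still writes by hand, while the infrastructure-facing binding stands in for the kind of code an MDA tool would generate from the model and regenerate whenever the target platform changes.

    // Hand-written: the platform-independent contract and the business logic.
    interface AccountService {
        void transfer(String from, String to, long amountCents);
    }

    class AccountServiceImpl implements AccountService {
        public void transfer(String from, String to, long amountCents) {
            // domain rules live here -- this is the code you still write
        }
    }

    // Generated (sketch): a platform-specific binding the modeling tool would
    // emit -- transaction demarcation, wire formats, persistence -- and simply
    // regenerate from the same model when the infrastructure changes.
    class AccountServiceBinding implements AccountService {
        private final AccountService delegate = new AccountServiceImpl();

        public void transfer(String from, String to, long amountCents) {
            // begin transaction, marshal arguments, persist state, commit ...
            delegate.transfer(from, to, amountCents);
        }
    }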
Do managers have a hard time selling this idea to their teams?
Absolutely. Because people like to use the tools they know how to use. People
like to use the IDEs that they have been using, and the older IDEs don't support
modeling directly. I think one of the most exciting things for me, over just the
last year or two, is that not only do you have newer IDEs from companies like IO
Software, Kabira and Kennedy Carter -- tools that allow you to model and
generate code -- but now tools from IBM, Sun, Hewlett-Packard and Microsoft
allow you to draw a UML model, to import and export XMI descriptions of those
models and automatically generate code. I just saw Microsoft's Visual Studio
.NET Enterprise Architect, [which lets you] draw a UML model in the top pane
while Visual Basic code is generated automatically below it. You can move the
cursor down, edit the Visual Basic code, and the model above changes accordingly.
Are you seeing situations where there is a place
for fully executable UML as described by Steve Mellor and others?
There are situations where you
will not have to write any code at all; that's what Steve's talking about. Those
situations today are primarily limited to real-time embedded systems. Eventually
we'll see much more automatic code generation. But today I'm not promising 100%
code generation. In most IT infrastructure you're going to see 50%, 75%, maybe
80% [code generation], but not more. What Steve's talking about when he says
executable UML are situations where the application is 100% characterized and
where the infrastructure is also 100% characterized as models ... then you can
automatically generate all the code. He's talking primarily about embedded,
real-time control systems, things like factory floors. The Lockheed Martin case
is a flight management computer for the F-16 fighter jet. There
is a model that describes the avionics, a model that describes the box that it
runs on and so on. They automatically generate all the code for the F-16 flight
system. That's quite an impressive story. That's not one of Steve Mellor's,
that's one of Alan Kennedy's, but they're the same story basically, and both of
them have quite a few of those success stories.
This is an interesting approach, but it seems that Web services is the
next big thing or the new approach.
I like the way you say Web services is
the "next big thing." Do we believe Web services are the last "next big thing"?
No, we don't. At least I don't. Web services is just another color for the pipe
that connects systems, and it continues to ignore the more difficult problems of
defining the semantics of applications. I do think Web services is a great step
forward because so many companies have adopted the architecture and standards.
But it's not the last interoperability architecture we'll see.
We've seen quite a few [interoperability architectures] go by: DCE, COM,
CORBA, sockets, Web services. There will be something else in a year or two, and
then we'll be back to developing applications for that new infrastructure. Let's
take the opportunity, now that we're developing Web services, to abstract the
architecture and automatically generate the infrastructure; then, when the
infrastructure changes again in the future, we can generate for whatever the new
infrastructure is. It's a pretty simple, straightforward idea; it's no different
than compiling high-level languages.
Oftentimes new technologies come along and
they're a comment on the previous ones, and to some extent, Web services have
been positioned as an answer to CORBA and even, I think, to EJB.
I have read two articles that
address that directly, one called "Web services: CORBA done wrong" and another
called "Web services: CORBA done right." And they both ignore the issue,
which is: We have another infrastructure. It's better for some things;
specifically, it's better designed for dealing with the real Internet, and the
firewalls that people have put in place. It's not as well designed for dealing
with other things. For example, compared to CORBA, COM or even DCE, the protocol
is verbose, slow and heavyweight, and doesn't have persistence, transactions, or
security and authentication beyond SSL. The reality is that people will use both.
A lot of people are still choosing CORBA -- major organizations like Sabre
and Target Stores -- because it's still the only interoperability infrastructure
that is language-independent, platform-independent and vendor-independent. In
the case of Web services, you don't even have a standardized API; you only have
standard packet formats -- SOAP over HTTP -- so you've actually given up quite a
lot there. That said, people are going to use Web services, CORBA and other
things; and even though Microsoft has deprecated COM, people are still using
it.
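Soley's point about packet formats versus APIs is easy to see in code. A minimal sketch in Java, with a hypothetical endpoint and operation name: with plain SOAP over HTTP there is no standard language binding, so the caller assembles the envelope and the HTTP POST itself, where a CORBA ORB would hand it a typed stub generated from IDL.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SoapCallSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical service endpoint and operation, for illustration only.
            String envelope =
                "<?xml version=\"1.0\"?>" +
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "  <soap:Body>" +
                "    <getQuote xmlns=\"urn:example:quotes\"><symbol>HPQ</symbol></getQuote>" +
                "  </soap:Body>" +
                "</soap:Envelope>";

            // The only thing standardized is the packet on the wire: an HTTP POST
            // carrying a SOAP envelope. The API used to send it is up to the caller.
            HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/quoteService").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "urn:example:quotes#getQuote");
            OutputStream out = conn.getOutputStream();
            out.write(envelope.getBytes("UTF-8"));
            out.close();
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }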
I think we need to step up a level, code the application using a model and
automatically generate whatever infrastructure comes along. We can already
automatically generate Web services infrastructure, CORBA infrastructure, COM
infrastructure and pure Java J2EE infrastructure from a UML model according to
an OMG standard.
Which is ... ?
Those are the MDA modified profiles.
All the things lacking in Web services may be added as time moves
along.
Our experience in this industry over the last 20 years is that all
the services Web services now lacks will be added over the next
five to 10 years; but one or two more new infrastructures will appear over the
next two to three years as well.
Web services won't be done, and we'll have some new, even more immature
technology come along afterwards.
That said, we have an opportunity to take existing transaction and security
services that have been done for other infrastructures, including CORBA and
J2EE, and abstract them and apply them to Web services. That's one of the things
the OMG is doing today. We're taking our transaction infrastructure and
abstracting it so that it can be used in Web services, as is. In fact, the
transaction infrastructure for CORBA is the same API as for Java. It's the same
infrastructure. And it's very close in model even to Microsoft Transaction
Server, MTS. So having it be effectively the same for Web services is a good
thing.
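That shared model is visible in code: the Java Transaction API exposes the same begin/commit/rollback demarcation that the CORBA Object Transaction Service defines. A minimal sketch in Java, assuming a J2EE container that publishes UserTransaction in JNDI under the standard name:

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class TransferWithJta {
        public void transfer() throws Exception {
            // Standard J2EE lookup; in a CORBA application the equivalent
            // begin/commit calls go through the Object Transaction Service.
            UserTransaction tx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
            tx.begin();
            try {
                // business work that must be atomic goes here
                tx.commit();
            } catch (Exception e) {
                tx.rollback();
                throw e;
            }
        }
    }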
You mention a couple of design wins for CORBA: Sabre and Target. Can you
say in a little more detail how they're using it?
Those particular ones
I was just hearing about; they're actually Hewlett-Packard wins. Both
companies needed extremely high transaction rates. Currently HP is the winner
there with a CORBA system that does 135,000 transactions per second on TPC-C.
Sabre was maxing out -- had already maxed out its existing infrastructure at, I
think, 9,000 transactions a second.
The front ends were CORBA-based. Now the entire back end will be CORBA-based
too, because it's the only thing that can run the kind of transaction rates they
need. Target does similar things, and HP has another customer that they're not
willing to name that has even higher transaction rates.
Well, anything new you've seen lately of interest besides Sabre and
Target?
Actually, the most fun I've had lately is in real-time and
embedded, because that's a world that understands exactly what it needs and how
it needs it, as well as what kind of performance it needs.
Do you think Linux has a chance in the embedded space?
There is a
lot of Linux activity now. Linux is growing really fast in that area because you
can see the source. Embedded people love to see the source. They know they can
tweak it and make it better. It's a much more fragmented market than the desktop
or the enterprise. There are dozens of operating systems.