In-Depth

The Business Perception of MDA

In an interview with Java Pro editors, Alan Brown, a Distinguished Engineer at IBM Rational Software, discussed patterns, modeling, and focus areas for software development in 2005.

Alan Brown, a Distinguished Engineer at IBM Rational Software, maps out future product strategy for IBM Rational's design and construction products. His responsibilities include defining technical strategy and evangelizing product direction with customers who need to improve software development efficiency through visual modeling, code generation from abstract models, and reuse. Brown recently participated in an exclusive interview with Java Pro editors to talk about the five major areas for software development in 2005 that he identified in a recent blog entry, as well as his position on patterns, modeling, and interoperability.

Java Pro: You've laid out in your blog five areas that you think will be important in software development this year and how Rational/IBM will address them. These areas include a focus on the business/IT gap, design and modeling for services and SOAs, MDA and UML2, domain-specific languages, and modeling and visualization of software and systems. Based on these observations, how do you think MDA will help close the business/IT gap?

Alan Brown: What we've been thinking about, in terms of some of the issues that customers face today, is that they typically are trying to work at multiple levels at the same time. Typically, there's a lot of confusion between the business needs that business analysts and the executives of the organization have for the business goals, and what the IT infrastructure folks can actually deliver. So we've been looking at how business folks can use the knowledge they have, the business understanding they have, and the kinds of business processes they're looking at implementing, and use that description as a driver for the IT systems that they're going to use to meet that need.

So we're thinking about how you can create what you might call a business contract between the business folks and the IT folks, one that uses those initial designs, those business models, as the driver for the IT organization. These models, created in the spirit of model-driven development and model-driven architecture, are essentially a computation-independent model of how the business wants to operate. And that's mapped into the IT infrastructure in terms of how you're going to implement those solutions.

What's happening right now with our tools and customers is that they're using technologies such as the WebSphere Business Integration modeling technology to model their business processes, and then the IT organization uses those same models and transforms them into designs for the systems that will realize those business needs. In that way we think that's an important and key direction for folks to use model-driven architecture: not just at the level of going from a design to code and generating an implementation in some programming language, but also thinking about model-driven architecture from the point of view of how the business perceives it, in terms of business problems, and how we transform those into the IT architectural solutions.

JP: What are the advantages of using UML as a modeling language over a domain-specific language in better mapping business needs to IT implementations?

Brown: That sounds like an oblique reference to Microsoft. We've got no problems with the idea of domain-specific languages. We support the notion of organizations with specific domain needs taking a notation such as the Unified Modeling Language and customizing it for their domain, creating what we might call a domain-specific language. The question of UML versus domain-specific languages isn't something that we focus on at all. We focus on: how can we use UML as the basis for domain-specific languages in the key domains that our customers are interested in?

Let me give you an example. A lot of our customers are interested in real-time embedded systems for cell phones, telecommunications situations, or command-and-control systems for events organizations. What they do is take UML, the plain vanilla UML specification and our tools that support it, and customize it for their particular domain. They may add very particular ways that modeling is used in the telecommunications space or in some real-time embedded area, where there are particular things they're interested in, such as the timing of systems, that aren't supported directly in UML per se, and they extend the notation to support those things, creating what we'll call a domain-specific language for their needs.

We've always supported that approach; we think it's an absolutely vital way of thinking about modeling tools, including IBM's modeling tools. So we support that, and we think that UML has outstanding features for supporting domain-specific uses of modeling. We have tooling that supports plain UML, and we have customization approaches based on that which allow you to create domain-specific languages for these areas. And we work with customers every day who are taking that approach.
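To make the idea concrete, here is a minimal, hypothetical Java sketch of the kind of domain-specific extension Brown describes. A custom annotation stands in for a timing stereotype applied to a model element; it is only an analogy for illustration, since real UML profiles are defined in modeling tools rather than in Java code, and the Deadline annotation and CallSetup class are invented for this example.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation playing the role a <<deadline>> stereotype would
// play in a UML model customized for a real-time telecommunications domain.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Deadline {
    long millis(); // the domain-specific concern: a hard deadline for the operation
}

public class CallSetup {
    // The annotation marks a domain-specific constraint on this operation,
    // much as an extended UML notation would mark the model element.
    @Deadline(millis = 200)
    public void routeCall() {
        // implementation written (or generated) against the model
    }
}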

JP: MDA seems to be making greater inroads among the Java community than among .Net-only shops and developers. Do you have any insights as to why this might be the case?

Brown: I couldn't say I have data that could justify that comment, but in the context of code generation, one reason you might be seeing it more is that there are certainly more vendors in the Java space providing code generation from models for Java-based solutions, typically around some sort of Java-based architectural framework like Struts or JSF. We're also seeing some interesting open source efforts around this, too; AndroMDA is one such effort. Some of the other MDA tooling extends the open source modeling tools or IBM's own modeling tools with code generation capabilities, and there are also solutions from larger organizations, such as Compuware's solutions, for example, and IBM Rational's solutions, that have code generation built in and have extensible capabilities for code generation.

I think what you're seeing is a number of vendors offering code-generation capabilities in the Java space and a number of organizations beginning to apply those on top of those Java-based frameworks, asking how they can extend those frameworks more easily than doing it by hand. I think in the .Net area, of course, most of the innovation is in the Microsoft technology itself, so they talk quite a bit about some of the code-generation capabilities that will be coming out in their next-generation Visual Studio. We've seen some basis for that, but of course until that technology really appears, we're not sure what we're going to see.
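As a rough illustration of the model-to-code generation Brown describes in the Java space, here is a minimal, hypothetical sketch. The "model" is just a class name and a list of typed attributes held in a map; real MDA tools such as the ones mentioned above work from UML or similar models and far richer templates. The SimpleGenerator class and its names are invented for this example.

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleGenerator {

    // Emits Java source for a simple value class from the model description:
    // each map entry is an attribute name and its Java type.
    static String generate(String className, Map<String, String> attributes) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (Map.Entry<String, String> a : attributes.entrySet()) {
            src.append("    private ").append(a.getValue()).append(' ')
               .append(a.getKey()).append(";\n");
        }
        for (Map.Entry<String, String> a : attributes.entrySet()) {
            String cap = Character.toUpperCase(a.getKey().charAt(0)) + a.getKey().substring(1);
            src.append("    public ").append(a.getValue()).append(" get").append(cap)
               .append("() { return ").append(a.getKey()).append("; }\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        // "Customer" stands in for a modeled business entity.
        Map<String, String> model = new LinkedHashMap<>();
        model.put("name", "String");
        model.put("accountId", "long");
        System.out.println(generate("Customer", model));
    }
}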

JP: Pattern-based development has been gaining a lot of traction recently. It seems to have largely supplanted the role of objects as the elemental component of software reuse. Is there a role for defining and using patterns in the implementation of models? If so, how would you go about defining patterns that can be used in conjunction with models?

Brown: There are several dimensions to that. The implication is that people are moving away from object-based reuse, or that it's being supplanted by pattern-based reuse. I'm not sure I would necessarily agree with that, in the sense that I think people are still looking at object- and component-level reuse, if we can broaden it just slightly. There are lots of organizations that we deal with that are very interested in it from an implementation point of view: how they can be more efficient and reuse parts of their implementations, whether they're objects or components or other sorts of implementation fragments. We're still seeing that being a key part of how organizations want to improve.

I think what we're seeing is that many organizations see that as necessary but not sufficient, as it were; there are opportunities for reuse at all levels, and they provide different kinds of value. There might not be more value; perhaps it's just a different level of value to the organization. In particular, strategies for solving problems are where pattern-based reuse seems to fit in more effectively. They see recurring ways of addressing a problem, and it might be implementation-level solutions or it might be design, or even at the business level, and it could be conceptual, or it could be more physical deployment kinds of solutions. I see them in all of these phases, and just as they got advantages from reusing code, they want to reuse at a more strategic level, which tends to be where they're looking to describe patterns.

The way in which they describe those patterns tends to be in some form of model. UML of course is an ideal language for doing that, because it includes a set of capabilities that make it easy to describe the strategy and easy to stub out the parts of it to make them reusable, through some of the templating mechanisms and other approaches inside UML. We're seeing a lot of people saying, "we want to apply that kind of approach at the modeling level and to use it inside your tooling." Of course, we provide quite sophisticated support for that today. One of the things you'll see in our tooling right now is support for packaging and reuse of these sorts of models, or other sorts of reusable assets, using what we call the Reusable Asset Specification [RAS], which is an OMG-approved specification for packaging and delivery of reusable artifacts in all of their forms.

One of the key things we find people doing is taking models, snipping them out of a larger context, describing them in some form, perhaps adding some variability points to those models, and then packaging them as a reusable asset in the Reusable Asset Specification standard. That then becomes a true asset in the organization that can be applied from one situation to the next. We are seeing this being an active area of work with our customers right now, and this whole area of what we might call pattern-based development, or maybe more broadly, asset-based development, is something we're seeing a lot of interest in from our customers. We're seeing greater innovation in our tools to support that, and we'll see more and more patterns delivered both in the box with our tools and also outside of the delivery context of our products, through other channels like developerWorks and other areas.
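As a loose illustration of the "variability points" Brown mentions, here is a minimal, hypothetical Java sketch. The fixed workflow is the reusable part of the asset, and the abstract methods play the role of the variability points an adopting team fills in; the ApprovalWorkflow class is invented for this example and is an analogy, not the RAS packaging format itself.

// Fixed, reusable part of a packaged pattern with explicit variability points.
public abstract class ApprovalWorkflow {

    // The overall flow is reused as-is from one project to the next.
    public final void process(String request) {
        if (validate(request)) {
            record(request);
            notifyApprover(request);
        }
    }

    // Variability points: each adopter supplies domain-specific behavior here.
    protected abstract boolean validate(String request);
    protected abstract void record(String request);
    protected abstract void notifyApprover(String request);
}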

JP: In Visual Studio 2005, Microsoft has come up with a way of mapping application designs to computing infrastructures, to show whether or not a design fits into the constraints defined by IT for that infrastructure. Is this an approach being considered by the WebSphere/Rational tools? Does this sort of constraining of the application design make sense?

Brown: I think what you're referring to, and correct me if I'm wrong, is that Microsoft is beginning to look at how it takes these designs of services and components and maps them, graphically or in some other way, to a deployment topology, so they can say this thing will run on this node, and then maybe there are some constraints about what's able to run there, and they can check those at deployment time?

JP: Yes, exactly.

Brown: Okay. That approach is something we've spent quite a bit of time working through. We do have some customers already working on doing some level of mapping between design time and deployment time: relationships between what they intended to build for an application and how it got deployed at run time. In fact, we're working very closely with our colleagues in the application management space, particularly in the Tivoli brand, who have a great interest in how you can describe and manage deployed applications, both from a physical point of view (what services you have, which server farm, and how many nodes) and from a more modular point of view of what capabilities you have available to you when you deploy applications and how you track, log, and manage them. There's also more focus on the design side, where we're interested in how you conceived of this application, what components and services you broke it up into, and then how you packaged and delivered those for a target architecture such as Tivoli might manage.

We're working together to look at that relationship and create joint solutions so that you can do this sort of closed-loop management among design, test, deploy, debug, and then back to design. In fact, the first aspect of that shift in tooling is what we released toward the end of last year, which was specifically around the whole test and deploy area, where we thought the most obvious, low-hanging fruit was for those of our customers who want to be able to very easily deploy, test, and manage the applications that they build. We released the joint technology between ourselves and Tivoli that addressed [that], and you'll see continuing improvements and innovation in how we connect our technology in that space. And this is a clear advantage, I think, for IT architects and deployment managers who want to have support in that area but also want to have guidance: "What's the best way to deploy it?" Not just, "If I try to put this on this node, will it work or not?" But, "Given that I need to be able to deploy applications in this complex technology space, what variations of packaging and delivery do I have, and what are the advantages of each? Which would you recommend for certain kinds of usage patterns that I might see once the application is deployed?"

We've been spending quite a bit of time looking at that angle: rather than just what can be deployed where, what is the best guidance we can give to help people do that? Again, this is an area where patterns and best practices fit in. If we have experience with deployments and certain topology mixes, we can capture that knowledge in our tools and recommend strategies for deployment to our customers that will help them be more successful. That's the kind of approach we're investigating right now.

JP: Will we see a day where the model becomes the application? In other words, will we ever get to the point where we see a UML compiler that can take a model that accurately represents the application and simply put it on a server and put it into production? Would that be a good idea, and what are the technical hurdles in doing so?

Brown: Let's take the first part of your question: "Do you see a day where the model becomes the application?" To a large extent you can do that today in various well-defined domains. If you take many fourth-generation languages, or you take many domain-specific languages, if we take a broader view of what that term means, a collective definition, then we do see this in practice today. Let me give you an example. If you look at some of our customers in the real-time domain who have been using tools like Rational Rose RealTime for some time, to a large degree they take models of the system they are building, generate the application from those models, and run those generated applications. Typically they have no need to go down to the code level to look at what was generated, to tweak the application, or to performance-tune it at that level. They always work at the model level, and we've seen this for a number of years in certain domains.

For example, let's say you were a customer and you came to me and said, "I need to build a sort of dashboard-style application that takes information from several of my existing databases and maybe some data transactions, does some correlation of that against some business rules, and displays that information on the screen, and I'm going to make some real-time operational decisions based on that." It could be in lots of domains, from shop-floor processing to other sorts of process applications where you want to display information; real-time scheduling of air crews is something that's fairly common. You can build those applications without writing a line of Java, or C#, or anything else, because these are portal-style applications. We have portal-based tooling that can help you do WYSIWYG screen design for what you are going to see on the glass. We can help you automate the writing of the business logic itself that connects to the back-end database services, so that you can drag and drop the data sources onto a palette and connect them to the portal windows on your dashboard display.

We can easily write snippets of business logic in a higher-level language to describe how you connect those, and even do some of the mediation between those things if there's a difference between exactly what you wanted and what comes out of the database. We can do all that without you writing a line of code, and then generate the application and go deploy it. We can do it for the portal server and other runtimes. I know you can do that with other technologies; Microsoft can do a certain amount of that, BEA can do a certain amount of that. So we are in a world where there are a lot of people building very useful, highly performant applications from models. It just depends on your view of what you mean by a model and how transparent the application is.

That's where I think we've still got some way to go. In most situations the model of what you're building isn't as explicit as you might like, or isn't as configurable as many organizations are looking for. If you take a fairly narrow slice of what you're trying to build, and narrow it down to constrain the kinds of applications you want to build, we can automate large parts of that because we can build those assumptions into our tooling. If you then fall outside the boundaries of those assumptions, it becomes quite a bit more awkward for builders of applications to use that tooling. What you might be referring to is: can we more generically, just arbitrarily, build any sort of UML diagram or UML model, and then say, "Now I want to implement this," and somehow press a button? Of course there's a lot of business logic and a set of what you might call "usage" models or "execution assumptions" built into what you are implying in those models. And again, you can simulate some of that, and you see quite interesting UML-based simulation that people then go and implement by hand.

JP: Is that like iterative modeling?

Brown: There's a level of iteration here. Often the way people want to try to do this is to initially sketch out some part of what they are trying to build to see, "How am I doing so far?" And one way to see how you are doing so far is: can I run that thing, see what it does, and then show it to a customer? And they say, "Well, that's kind of nice, but it isn't quite what I meant." You learn from that and go add some more to the model. To some degree you can use these tools in that mode and gradually and incrementally build up the model.

Obviously, if you do a high-level class diagram and a few sequence diagrams and then say, "Okay, I want to run that," it's much more complicated, because what you need is some deeper semantics underneath the models that connect them to what the execution is really going to be, and there are a number of activities ongoing in that area. One part of your question was: what are the technology hurdles here? There are a number of ongoing activities, both in the Object Management Group and elsewhere, looking at the underlying semantics of UML and how you do more well-defined and standard transformations between UML and the underlying languages. You see a number of activities going on in the UML world based on that which will overcome some of the barriers here, but I think it's always going to be an interesting path for us to be able to take these high-level models and execute them directly.

What we've seen in the industry over the last 20 years is that we've moved from machine code to assembly language to fairly low-level third-generation languages to much higher-level third-generation languages. In a sense, any programming language is a model, and we've been improving those models up to the point where they describe real systems that we execute directly, relying on the assembler or compiler technology to map to the underlying run-time systems that are going to execute them. We've seen improvement in that over the last 20 years, and we'll continue to see it. Will it ever get to the point where we can, in general and with consistency and fidelity, execute high-level UML models? I'm not sure that's a path we'll get to any time soon.

JP: Based on what you have written about the five major areas of software development for this year—focus on business/IT gap, design and modeling for services and SOAs, MDA and UML2, domain-specific languages, and the modeling and visualization of software and systems—do you still see those as the five primary areas of software development and is there anything else you'd like to add or expand on that we haven't covered here?

Brown: You bring up one that immediately strikes a chord, and that is the patterns area. I've been spending a lot of time looking at patterns in our tooling lately: the idea of how we codify knowledge more effectively and reapply it consistently from one domain to the next. That's important because how people are going to be successful in service-oriented solutions, for example, will be based on several things, and one of those will be successful examples of how people have built service-oriented solutions. We'll try to extract some of the key ideas behind those successes and then make them available to others in usable forms.

Of course, the whole idea of patterns fits in very nicely here. What we've been looking at is: can we describe the key elements of these solutions as patterns, and also how we plug these patterns together to solve broader problems? You can use various names for these things; one name might be a "recipe," for example, which is a collection of interesting ingredients that you put together to solve a broader problem. We're looking at those ideas of patterns and collections of patterns, which you might informally call recipes, and how they can solve these problems, particularly in the space of developing service-oriented architectures.