- By Stephen Swoyer
The Big Idea
- Even the most elaborate problem-solving systems can be reduced to a rules repository (or knowledge base) and an inferencing engine.
- Many of the problem-solving techniques that power adaptive learning and rule-based inferencing systems have gone mainstream.
- There are a couple of vexing impediments to the development of smart apps in most enterprises—namely, a lack of expertise and time.
Some of the techniques pioneered by AI researchers
(and commercialized by the expert systems pioneers
of the 1970s and 1980s) have been used to produce highly
adaptive applications or systems. Many expert systems—
which can range from the simple (true/false logic) to the complex
(true/false logic in tandem with fuzzy logic)—might be called
smart, because they're able to make reasonable inferences about
the behaviors and preferences of their interactive users and adjust
their application presentation or strategic guidance accordingly.
More to the point, the problem-solving capabilities that power
the expert systems of today are explicitly portable, because
they aren't based on domain-specific expertise. With a few exceptions,
even the most elaborate of expert or problem-solving
systems can be reduced to two components: a rules repository
(or knowledge base) and an inferencing engine.
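That two-component architecture can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's engine; the fact and rule names are invented. The rules repository is plain data, and the inferencing engine forward-chains over it until no new conclusions appear.

```python
# The rules repository (knowledge base): each rule maps a set of
# required facts to the fact it concludes. All names are invented.
RULES = [
    ({"customer_is_returning", "cart_over_100"}, "offer_discount"),
    ({"offer_discount", "customer_opted_in"}, "send_coupon_email"),
]

def infer(facts, rules):
    """The inferencing engine: fire rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = infer({"customer_is_returning", "cart_over_100", "customer_opted_in"}, RULES)
print(facts)  # includes the inferred "offer_discount" and "send_coupon_email"
```

Because the rules live in the repository rather than in the engine, the same engine can serve any domain, which is exactly the portability the expert systems vendors traded on.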
AI and expert systems concepts and methods aren't just the
province of academics or highly specialized commercial ISVs anymore.
Many of the problem-solving techniques that power artificial
intelligences or expert systems (adaptive learning and rule-based
inferencing capabilities) have gone mainstream. Most
codejockeys are at least familiar with pathfinding, decision trees
and rule-based inferencing, if only as vague holdovers from otherwise
long-forgotten CompSci AI seminars.
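Of the techniques just named, pathfinding is perhaps the easiest to dust off. A breadth-first search over a small grid (the grid here is invented for illustration) finds a shortest route from start to goal:

```python
from collections import deque

# '#' is a wall, 'S' the start, 'G' the goal.
GRID = [
    "S.#.",
    ".#..",
    "...G",
]

def shortest_path_length(grid):
    """Breadth-first search: explore outward one step at a time."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == "G":
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no route exists

print(shortest_path_length(GRID))  # → 5
```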
But some, like Steve Berczuk, a programming consultant and
co-author, with Brad Appleton, of Software Configuration Management
Patterns, are using these techniques in their ongoing work. "I once worked on an application that used genetic algorithms
for scheduling—or rather for optimizing schedules. I'd classify it
as intelligent in the sense that it came up with answers comparable to—or better than—those a person might have," Berczuk explains.
"On the other hand, the way that genetic programming works is
by trying different things and picking the best options in an efficient
way. So it was intelligent, but achieved it by doing lots of
small simple things, and correcting course."
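Berczuk's description—lots of small simple things, plus course correction—can be sketched as a toy genetic algorithm. The jobs, fitness function and parameters below are invented for illustration; a real scheduler would score real constraints.

```python
import random

JOBS = [3, 7, 2, 8, 5, 4]  # job durations to split across two machines

def fitness(assignment):
    """Lower is better: the finishing time of the busier machine."""
    loads = [0, 0]
    for machine, duration in zip(assignment, JOBS):
        loads[machine] += duration
    return max(loads)

def mutate(assignment):
    """One 'small simple thing': move a random job to the other machine."""
    child = list(assignment)
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

# Try different things, keep the best options, correct course.
population = [[random.randint(0, 1) for _ in JOBS] for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = min(population, key=fitness)
print(best, fitness(best))
```

Nothing in the loop "understands" scheduling; the intelligence, such as it is, emerges from selection pressure applied to many small variations.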
Smarter app dev tools
Smart apps will require smarter development tools. And there are a lot of ways in which app dev tools can get smarter. Consider the emergence of domain-specific languages (DSLs), which offer a way to encode esoteric domain expertise in (often declarative) programming idioms. Although DSLs haven't yet gone mainstream, some programmers have used them in tandem with other techniques (such as rules) to develop smarter or quasi-intelligent applications. There are typically caveats here, too.
"I like using DSLs to implement rules in a concise way, but I've never managed to find a client who would not require something that lay outside the scope of those," says Stefan Schmiedl, a programmer with risk management specialist Approximity.
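The kind of concise internal rules DSL Schmiedl describes can be sketched in plain Python: ordinary code, but reading close to the business vocabulary. The rule names, thresholds and record fields below are invented for illustration.

```python
class Rule:
    """A tiny fluent 'DSL': Rule(name).when(condition).then(outcome)."""
    def __init__(self, name):
        self.name = name
        self.test = None
        self.outcome = None

    def when(self, predicate):
        self.test = predicate
        return self

    def then(self, outcome):
        self.outcome = outcome
        return self

def evaluate(rules, record):
    """Return the outcome of every rule whose condition holds."""
    return [r.outcome for r in rules if r.test(record)]

rules = [
    Rule("large exposure").when(lambda r: r["exposure"] > 1_000_000).then("escalate"),
    Rule("stale data").when(lambda r: r["days_since_update"] > 30).then("refresh"),
]

print(evaluate(rules, {"exposure": 2_500_000, "days_since_update": 3}))
# → ['escalate']
```

The caveat Schmiedl raises shows up immediately in practice: the moment a client needs a condition the little vocabulary can't express, you're back to general-purpose code.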
More to the point, experts say, development tools are going to get smarter. The DSL models championed by Intentional Software, JetBrains and Microsoft will grow in popularity. Model-driven tools will become even more sophisticated and incorporate even more innate "smarts." Over time, Borland, IBM, Microsoft, the Eclipse Foundation and other players will make it easier to build smart apps by incorporating explicit support for rules and other smart programming techniques into their tools. And one consequence of improved app dev smarts might be the diminishment of conventional (imperative) software development.
"A lot of coding is still done in the imperative, one-line-after-another model. We've gone to extremes to make sure that it's easy and efficient to produce software that's essentially one chunk of imperative code after another," says Dan Massey, chief technical architect with Borland Software. "The next step will be more levels of abstraction. There's UML, there's other modeling ideas like Alloy, where—given enough information about your design—for some finite number of nodes, they can tell you whether you're going to violate the constraints you set for them."
This is going to take time. As experts concede, most IT orgs are still developing software the old-fashioned way. So the big tools vendors aren't in any hurry. In the interim, then, small players will have to do much of the heavy lifting.
"Right now, I think [Borland, IBM and Microsoft] are doing the right thing: there's not enough of a market and not enough commonality of tools and approaches. Probably some small company will do something good," suggests Ron Jeffries, a senior consultant with The Cutter Consortium's agile software development and project management advisory service. "A small company looking to get something going might do well to integrate with Eclipse or [Visual Studio], depending on whether they were going after a Java or .NET market."
In addition, Jeffries speculates, there are a lot of unspectacular ways in which app dev tools can be improved, if not made smarter. "There are lots of things where more intelligent IDEs might help. Even in conventional programming, tools like 'lint' and refactoring tools can be of great assistance," he comments, noting that smarter source code diff tools would be a good start.
As for the emergence of programming expert systems that are able to provide meaningful guidance to novice developers, Jeffries isn't holding his breath. "I could imagine tools that would ask 'Are you trying to do X?' and help out. But so far, even the kind of thing that Word can do usually guesses wrong and is hard to configure. In programming, intelligent help is just beginning."
Others, such as Approximity's Stefan Schmiedl, are
starting to experiment with
smart applications in production environments for the first time. "We actually have [used] adaptive techniques to improve prediction
in a prototype currently under heavy development," he confirms.
100 years of progress in 25 years
If Ray Kurzweil is correct, technological singularity is inevitable. And by "inevitable," Kurzweil doesn't just mean the stuff of some distant or oblique future. In his new book, The Singularity Is Near: When Humans Transcend Biology, Kurzweil says that what he calls Singularity is only a few decades away.
Singularity is a term Kurzweil and other trans-human futurists use to describe what they believe will happen when the development of artificial (or superhuman) intelligence fuels the exponential acceleration of technological progress and human cultural evolution. There's a lot at stake. In Kurzweil's view, the Singularity will trigger a "profound and disruptive transformation" in human capabilities. In other words, the Singularity will fundamentally alter what it is to be human, both biologically and experientially.
Kurzweil is a technology polymath who's played instrumental roles in the development of optical character recognition and text-to-speech synthesis technologies. He believes Singularity will occur by 2045. More to the point, he argues, we're already in the home stretch. The rate of technological progress is doubling every decade, and Kurzweil expects it will accelerate even more as advances in supercomputing power bring Singularity ever closer.
For Kurzweil, it mostly boils down to an issue of computing horsepower. By 2013, we'll have enough processing power to support functional simulation of the human brain. And by 2025, we'll be able to simulate the brain's neural activity, which will let researchers simulate consciousness in the machine realm.
Key to Kurzweil's argument is that the rate of technological progress is increasing. (He has said that at today's rates, we'll achieve 100 years of progress in just 25 years.) In a certain sense, what Kurzweil really means by "technological progress" is computing power. And the geometric growth of computing horsepower (Moore's Law) isn't simply a function of AI research. There's a profit motive at stake, especially insofar as many of the world's most prominent supercomputer manufacturers also have thriving enterprise server or consumer PC businesses. In other words, businesses and consumers are the driving forces behind technological innovation. What this means is that if the rate of technological progress accelerates—as we verge ever closer to Singularity—businesses and consumers will be the primary beneficiaries.
And if this happens, it could radically transform what is today called enterprise application development—long before the advent of Kurzweil's Singularity.
At the very least, applications and app dev tools will almost certainly get smarter. With so much processing power at their behest, and with the mainstreaming or trickling down of innovations derived from AI research, robotics, nanotechnology and other domains, it'll amount to a fait accompli of sorts.
But the lesson of today's smart applications is twofold: Yes, the pieces are in place, and, yes, the technology is in many ways compelling. But the idea of smart applications (to say nothing of machine awareness and super-intelligence) is a disquieting one, at least for most consumers. And until Kurzweil's highly disruptive Singularity utterly explodes the issue, the qualms of potential consumers could act as a brake on the development of truly smart applications.
Not a trend, yet
Dan Massey, chief technical architect with Borland Software, says
smart application development isn't a widespread trend—yet. "I
haven't seen that much of it coming in from customers in terms of requirements [for app dev tooling]," he notes.
"They're doing a lot of imperative
coding, and some of them are kind of
finding new ways to solve traditional problems,
but I've actually had to force rules
[inferencing] on people," he says.
There are impediments to the development
of smart apps in most enterprises—a lack of expertise and a lack of
time. Codejockeys aren't AI researchers,
nor do most rank-and-file programmers
have much experience with AI or expert
systems concepts and methods.
And even if they did, Massey points
out, most projects don't give codejockeys
enough time to build the infrastructure
services—such as a robust inferencing
engine—that are needed to
enable smart apps in the first place.
"The amount of time to do it is more
than most developers are given to devote
to a project," he points out. "I think
a lot of developers could embrace this
stuff and go forward with it, but implementing
it in the real world, they're not
given the time to go do that."
One emerging model for building potentially
smart apps is the business rules approach,
which to some extent is an offshoot
of expert systems research. At a
basic level, BRA describes a model in
which (often declarative) rules determine
outcomes; in their more sophisticated incarnations,
business rules management
systems (BRMS) use inferencing engines
to decide which rules are called for and
why. In this respect, organizations can
configure their BRMSes to automatically
trigger decisions that might otherwise
have to be made by a human being—such as a business analyst. A host of vendors
market BRMSes, and at least one has
shrink-wrapped a rules engine in a commodity
product (BizTalk Server).
The upshot, says BRA proponent Ron
Ross, co-principal of consultancy Business
Rules Solutions and executive editor of
BRCommunity.com, is that there's a market for
off-the-shelf BRMSes. And while there
is still a host of issues associated with these
products—the codification and management
of rules, the rapid rate of change (and
accompanying evolution of rules) in many
organizations, the human reluctance to
divulge privileged knowledge—they can
eliminate a lot of the complex and time-consuming
coding that would otherwise
preclude the construction of smart apps in the first place.
"There have been, probably out of necessity,
a lot of do-it-yourselfers in the
past, but...the momentum to acquire off-the-shelf capabilities has grown tremendously,
and very few people are undertaking
to develop their own [BRMSes]
these days," said Ross in an interview last
year. "At the very least, there are a couple
of open-source places that people go
so they won't code their inference engine
or rules engine from scratch."
The BRA describes a very specific paradigm—
in this case, using declarative rules
designed to appeal to business users—but
the concepts on which it's based (rules and
inferencing algorithms) have much broader
applicability. As a result, even skeptics
say it's possible to use a combination of
rules (declarative or otherwise) and inferencing
algorithms to construct highly
adaptive, quasi-intelligent apps.
And given the availability of commercial,
off-the-shelf and open-source rules
management systems, it's easier than
ever, although many question the wisdom
of doing so. "It's definitely possible,"
concedes Ron Jeffries, a senior consultant
with The Cutter Consortium's agile
software development and project management
advisory service. "I think that
for most purposes, an inferencing engine
is probably overkill. It's yet another instance
of a framework looking for something
to do. I'm sure there are exceptions.
But I don't run into them."
In a certain sense, it comes down to a
fundamental human need, argues Borland's
Massey. "People...want some [application
intelligence], but they also
want it to be deterministic," Massey says.
"Consider Bayesian stuff. In theory, you
could test for it, you could trend, expecting
it to pass or fail. But even though
you know it is deterministic somewhere
down in the guts of it, it could still seem
like magic. That unnerves many people.
They want to be the ones who are in control."
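The "Bayesian stuff" Massey mentions is, in fact, fully deterministic: the same evidence always yields the same posterior, however magical the answer feels. A minimal worked example of Bayes' rule, with invented probabilities:

```python
def posterior(prior, likelihood, false_alarm):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# e.g., how likely is the build really broken, given a failure from a
# test that fires on 90% of real breaks but also 20% of healthy builds?
p = posterior(prior=0.10, likelihood=0.90, false_alarm=0.20)
print(round(p, 3))  # → 0.333
```

Run it twice with the same inputs and you get the same 0.333; the unease Massey describes comes from the opacity of the arithmetic, not from any actual nondeterminism.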
Programmer Schmiedl says he has firsthand
experience with this phenomenon.
His company built adaptive learning
into one of its risk-management
prototypes to sharpen the software's
predictive capabilities, Schmiedl says. At the same
time, improved predictive insight does entail
a cost of sorts—at least from the perspective
of some potential users.
"I demo'ed some of the techniques to
a client about a year ago, who replied
that he really felt uncomfortable
with a system that unpredictable. He
valued reproducibility over evolution," he recalls.
Schmiedl's experience is seconded by
Mary Crissey, a marketing director with
data mining and statistical analysis powerhouse
SAS Institute. To some extent,
Crissey concedes, it might be possible to
use rules and inferencing algorithms to
automate the actionable insights that are
unearthed via data mining or information
analysis. After all, she says, data mining
and information analysis are undergoing a
renaissance of sorts, thanks to the ability
to process unprecedented amounts of
data. But empowering the application to
make decisions of this kind isn't a direction
in which SAS is going, she says.
"It's more of an alert [model] right
now. It's showing you things that you
might not have seen because of all of the
possible combinations. That's where
we are," she explains. But couldn't SAS
automate information analysis via rules,
inferencing or genetic algorithms, if only
on an industry-specific basis?
Certainly, she says, SAS can hardcode
trending information to custom-tailor
data mining or text mining solutions for
specific customer environments. As for automation
via quasi-intelligence, Crissey is
more reserved. "I would really want a human
being [involved]. I think most of our
customers expect that [the output of]
analysis will be interpreted by a human being.
The point is that the machines are not
running by themselves. I am strongly in favor
of the human participant being a valuable
part of the decision-making process."
Unsure smart apps will be welcome
In many cases, of course, artificial decision-
making can be a good thing. E-commerce
applications use rules and inferencing
to trigger discounts, suggest
additional or alternative items, and provide
incentives (free shipping, anyone?)
based on a customer's online buying patterns.
Healthcare, insurance and financial
services firms—or any company that's
subject to increased regulatory oversight—can use rules to more easily manage
changing regulations (rules aren't
hardcoded into applications but exist as
declarative statements in a repository),
ensure compliance with requirements or mitigate exposure to potential litigation.
(A rule-based, automated loan approval
process is by definition impartial.) As for
HAL Corp.'s permitting its Executive
Information System to trigger a merger
with XYZ Corp., that won't happen.
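The regulatory advantage described above comes from rules living as declarative statements in a repository rather than hardcoded logic: when a threshold changes, you edit data, not the application. A sketch with invented thresholds and field names:

```python
# The "repository": each rule is (description, field, bound type, limit).
# A regulator's new DTI ceiling is a one-line data change here.
LOAN_RULES = [
    ("debt-to-income ratio too high", "dti", "max", 0.43),
    ("credit score too low", "credit_score", "min", 620),
]

def approve(applicant, rules):
    """Apply every rule; return (approved, reasons for denial)."""
    reasons = []
    for description, field, bound, limit in rules:
        value = applicant[field]
        if (bound == "max" and value > limit) or (bound == "min" and value < limit):
            reasons.append(description)
    return (not reasons, reasons)

print(approve({"dti": 0.5, "credit_score": 700}, LOAN_RULES))
# → (False, ['debt-to-income ratio too high'])
```

Impartiality falls out of the structure: every applicant is measured against the same repository, and every denial carries its reasons, which also helps with the compliance audits the article mentions.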
At some point, suggests Borland's
Massey, applications might become smart
enough to quell most human concerns.
He uses the example of a project management
system that's able to custom-tailor—or custom-market—its guidance
based on information it has collected
about specific users. Even so, he says, this
isn't so much a function of application
awareness as of programmer ingenuity.
"There doesn't even have to be any awareness
for something like this. [The computer
is] going through its rule base and
everything it knows, and it looks at the
options it's given you and which ones
you've picked. Then some algorithms
kick in, and it has a library of different
ways to show you its data. It goes through
different ways of prioritizing them;
maybe it starts to learn that you always
pick the second option. It learns through
trial and error, and from there, you start
moving toward teaching it what marketing
is," he concludes. "If humans don't
want machines thinking for them, maybe
you might want to focus on putting in the
guts so [the machines] can market their
decisions to you."
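Massey's point that no "awareness" is needed can be made concrete: a few counters are enough for a system to learn, by trial and error, which option a user keeps picking. The option names are invented for illustration.

```python
from collections import Counter

class AdaptiveMenu:
    """Reorders its options by how often the user has picked each one."""
    def __init__(self, options):
        self.options = options
        self.picks = Counter()  # never-picked options default to zero

    def present(self):
        # Most-picked first; ties keep the original order (stable sort).
        return sorted(self.options, key=lambda o: -self.picks[o])

    def record_choice(self, option):
        self.picks[option] += 1

menu = AdaptiveMenu(["gantt view", "task list", "burndown chart"])
for _ in range(3):
    menu.record_choice("burndown chart")  # the user keeps picking option 3

print(menu.present())  # → ['burndown chart', 'gantt view', 'task list']
```

There is no rule base or model of the user here at all, just tallying and re-sorting; yet from the outside it looks like the application "learned" a preference.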
ILLUSTRATION BY RYAN ETTER