Self-Service Applications Put Business Users in the Driver’s Seat

What’s the hottest trend in BI? Several industry causes célèbres come to mind–data integration, anyone?–but a strong case can be made for the emerging push toward user “self-service” applications.

These are BI apps (often reporting front ends) that provide some support for end-user customization and design. Ideally, a self-service application lets business users construct their own views of data–including, increasingly, information culled from a variety of data sources–while also giving them some leeway to build and run their own reports.

This week, data quality market leader DataFlux Corp., a SAS Institute Inc. company, trumpeted a textbook case of the user self-service wave in BI. The company unveiled its DataFlux 7.0 data quality suite, which ships with a substantially revamped design environment, DataFlux dfPower Studio 7.0, intended primarily for business analysts rather than code jockeys.

“The interface that we have is primarily a business user interface. The end-user interface is targeted toward business users, [because] the business users know what the data is supposed to look like,” says DataFlux president and general manager Tony Fisher. “It allows the business user to go in and define the workflow, define the rules that are required to maintain the data that you need.”

According to Fisher, data quality–which describes the process of profiling, monitoring, enriching and cleansing data that’s culled from CRM, ERP, MRP or other information repositories–is too important an issue to be left to IT.
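
To put those terms in concrete form, the sketch below shows, in Python, the sort of column profiling such a tool performs: tallying missing values, distinct values and format patterns for a single field pulled from a source system. The sample records, the field name and the output layout are illustrative assumptions, not DataFlux’s actual interface.

    # Minimal column-profiling sketch. The sample records and the "upc"
    # field name are hypothetical; real profiling runs against source data.
    import re
    from collections import Counter

    records = [
        {"upc": "036000291452"},
        {"upc": "03600029145"},   # one digit short
        {"upc": None},            # missing value
        {"upc": "036000291452"},
    ]

    def profile(rows, field):
        values = [r.get(field) for r in rows]
        non_null = [v for v in values if v is not None]
        # Collapse every digit to "9" so "036000291452" becomes "999999999999",
        # which makes odd-length or malformed codes stand out as rare patterns.
        patterns = Counter(re.sub(r"\d", "9", v) for v in non_null)
        return {
            "null_rate": 1 - len(non_null) / len(values),
            "distinct_values": len(set(non_null)),
            "top_patterns": patterns.most_common(3),
        }

    print(profile(records, "upc"))
    # {'null_rate': 0.25, 'distinct_values': 2,
    #  'top_patterns': [('999999999999', 2), ('99999999999', 1)]}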

He tells the story of the IT department at one “big book retailer” that wrestled with a data quality implementation for several months before finally calling in a business analyst to help make sense of it.

“The business analyst was called in and looked at the UPC code and said, ‘That’s not our data. We don’t have any UPC codes like that.’ This was true for 20 percent or more of the UPC codes that were in their systems,” he says. “The fact of the matter is that it was their data, and it was in there because of a number of anomalies in their source system that were primarily introduced by IT unbeknownst to the business analysts.”
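
Structural checks on a field like a UPC can surface many such surprises early. As one example (illustrative only, and not DataFlux’s implementation), the standard UPC-A check-digit test fits in a few lines of Python:

    # UPC-A validity sketch: a 12-digit code is valid when its last digit
    # matches the mod-10 check digit computed from the first eleven.
    def is_valid_upc_a(code: str) -> bool:
        if len(code) != 12 or not code.isdigit():
            return False
        digits = [int(c) for c in code]
        odd = sum(digits[0:11:2])    # 1st, 3rd, ..., 11th digits
        even = sum(digits[1:10:2])   # 2nd, 4th, ..., 10th digits
        return digits[11] == (10 - (3 * odd + even) % 10) % 10

    print(is_valid_upc_a("036000291452"))  # True: check digit matches
    print(is_valid_upc_a("036000291453"))  # False: check digit is wrong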

In this respect, then, DataFlux 7.0 more or less turns the typical enterprise application deployment cycle on its head: the suite goes first to business analysts, who define the data quality rules (e.g., what data should look like, how it must be cleansed, what should be added to it, what should be audited) that are processed by the DataFlux Integration Server rules engine. “Once the business analyst has designed these, the thing is to get these deployed out into the organization. That’s when [the business] hands it off to IT so IT can deploy it in all of these different integration environments,” Fisher explains.
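
As a rough illustration of that hand-off, the sketch below expresses analyst-authored checks as declarative rules that a server-side process can later apply to every record it sees. The rule format, the engine and the field names are hypothetical; they are not the DataFlux Integration Server’s actual interface.

    # Hypothetical analyst/IT split: analysts declare the rules as data,
    # IT embeds the engine wherever records flow (batch jobs, services, etc.).
    from typing import Callable

    # Each rule: (name, field, test the field's value must pass).
    RULES: list[tuple[str, str, Callable[[object], bool]]] = [
        ("upc present",      "upc",     lambda v: v is not None),
        ("upc is 12 digits", "upc",     lambda v: isinstance(v, str) and len(v) == 12 and v.isdigit()),
        ("country known",    "country", lambda v: v in {"US", "CA", "MX"}),
    ]

    def audit(record: dict) -> list[str]:
        """Return the names of the rules this record violates."""
        return [name for name, field, test in RULES if not test(record.get(field))]

    print(audit({"upc": "036000291452", "country": "US"}))  # []
    print(audit({"upc": "03600029145", "country": "FR"}))   # two rules violated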

The user self-service model has obvious ramifications for the code jockeys who earn a living developing reports, designing data quality workflows or working with other full-service BI technologies.

But Fisher, for his part, says there’s still plenty of work for programmers. He points to DataFlux 7.0’s service-oriented underpinnings and notes that someone has to write the J2EE, .NET or Web services code that facilitates its connectivity to heterogeneous data sources.

“We almost always have IT participation early in the process, because even though the business analyst understands the data, at the end of the day, IT controls the data, and it’s IT that knows how to tie all of [these data sources] together,” he concludes.

About the Author

Stephen Swoyer is a contributing editor. He can be reached at [email protected].