News

New Framework for Java Devs Building Big Data Apps

Continuuity is set to give attendees at this week's Hadoop Summit in San Jose a peek at the latest version of its Developer Suite. Version 1.7, which will be released in July, is a framework for Java developers building big data apps on Apache Hadoop. This version integrates MapReduce into the platform and allows developers to run real-time and batch workloads on a single cluster over shared datasets.

The Palo Alto, Calif.-based Java toolmaker also plans to include an expanded set of example applications in this release in the form of templates. The company is including these templates to "accelerate application development by providing developers with end-to-end implementations of common Big Data applications," Continuuity said in a press release. Examples include streaming real-time analytics (OLAP), targeting and personalization, and anomaly detection.

By providing Java developers with the ability to easily build and run batch and real-time workloads on a single Hadoop cluster with a unified API, the new tool suite opens up "possibilities for building a new class of applications," insisted Continuuity CEO Jonathan Gray. "Instead of being limited to traditional batch workloads, developers can add real-time streaming capabilities to have the most current data reflected in their apps," he said in a statement.

The new developer suite allows Java developers to establish a baseline of information with batch processing that can be augmented continually and served up in real time. The value of this feature, the company claims, is that it lets developers build apps that always reflect the latest data and behaviors. "Developers will be able to easily create apps for building and serving user profiles from Web logs, spotting and acting on anomalies in streams of sensor data, and providing dynamic dashboards and analytics over batches and streams of log data," the company said.
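
To make that batch-plus-real-time pattern concrete, here is a minimal, purely illustrative Java sketch. The ProfileStore class and its methods are hypothetical and are not part of Continuuity's API: a batch job seeds a baseline, a streaming path folds in new events, and reads are served from the merged, up-to-date view.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of the pattern described above; none of these
    // names come from Continuuity's API.
    public class ProfileStore {

        private final Map<String, Long> pageViews = new ConcurrentHashMap<>();

        // Batch path: load the baseline computed by a periodic MapReduce job.
        public void loadBaseline(Map<String, Long> batchCounts) {
            pageViews.putAll(batchCounts);
        }

        // Real-time path: fold each incoming event into the same dataset.
        public void onPageView(String userId) {
            pageViews.merge(userId, 1L, Long::sum);
        }

        // Serving path: reads reflect the baseline plus the latest events.
        public long getPageViews(String userId) {
            return pageViews.getOrDefault(userId, 0L);
        }
    }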

Hadoop, of course, is the open source framework for running applications on large data clusters built on commodity hardware. It combines an implementation of Google's MapReduce with the Hadoop Distributed File System (HDFS). MapReduce is a programming model for processing large data sets that supports parallel computation on so-called unreliable clusters; HDFS is the storage component, designed to scale to petabytes and to run on top of the file systems of the underlying operating systems.
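
For readers who haven't written against the model, the canonical word-count job below uses the standard Hadoop Java MapReduce API. This is generic Hadoop code, not Continuuity's framework: the mapper emits (word, 1) pairs in parallel across the cluster, and the reducer sums the counts for each word.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: tokenize each input line and emit (word, 1) pairs.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer: sum the counts emitted for each word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            // Input and output paths live in HDFS, the storage half of Hadoop.
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }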

Continuuity recently released its new Weave framework, which gives developers of distributed applications the interfaces they need for managing resources, nodes, and jobs within those apps through an abstraction layer built on YARN (sometimes called MapReduce 2.0).
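
That description suggests a programming model in which a distributed task is written as an ordinary runnable and handed to a runner service that deploys and manages it on YARN. The sketch below is illustrative only: the package and class names (AbstractWeaveRunnable, YarnWeaveRunnerService and so on) are assumptions modeled on that description, not verified Weave API, and should be checked against the Weave documentation.

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    // NOTE: the Weave package and class names below are assumptions and
    // may not match the library exactly.
    import com.continuuity.weave.api.AbstractWeaveRunnable;
    import com.continuuity.weave.api.WeaveController;
    import com.continuuity.weave.api.WeaveRunnerService;
    import com.continuuity.weave.yarn.YarnWeaveRunnerService;

    public class EchoJob {

        // The distributed task itself is plain Java; container requests,
        // node placement, and restarts are delegated to the abstraction layer.
        static class EchoRunnable extends AbstractWeaveRunnable {
            @Override
            public void run() {
                System.out.println("Running inside a YARN container");
            }
        }

        public static void main(String[] args) throws Exception {
            // Connect to the cluster (YARN configuration plus a ZooKeeper
            // quorum), deploy the runnable, and keep a controller handle
            // for monitoring and stopping the job.
            WeaveRunnerService runner =
                new YarnWeaveRunnerService(new YarnConfiguration(), "zkhost:2181");
            runner.startAndWait();

            WeaveController controller = runner.prepare(new EchoRunnable()).start();

            // ... later, shut the job down via the controller.
            controller.stopAndWait();
        }
    }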

The company's flagship product, Continuuity Reactor, is one of the first scale-out application server and development suites for Hadoop.

 

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].