Java 9 Deep Dive at EclipseCon 2015

The Java community is still rolling around in the awesomeness of the long-awaited Java 8 release, with its support for lambda expressions, virtual extension methods and streams, compact profiles, the new date/time API and so much more (but mostly that stuff). It was the largest-ever upgrade to the programming model, and by some accounts, it has been the most rapidly adopted update in the history of the platform.

But, you ask, what about Java 9?

Mark Reinhold, chief architect of Oracle's Java Platform Group, offered attendees at EclipseCon 2015, which wrapped on Thursday, a deep dive into the Even Cooler Java update, coming sometime next year.

The big change in Java 9, as everyone knows, is modularity, as laid out in Project Jigsaw, the oft-deferred capability that aims to make it possible for Java developers to create apps that don't need to lug around the entire environment. A Jigsaw module is a collection of Java classes, native libraries and other resources, along with metadata.
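
For a rough sense of what that metadata might look like, here's a minimal sketch of a module declaration of the kind JSR 376 was proposing; the module and package names are invented for illustration, and the exact syntax was still subject to change at the time.

    // Hypothetical module descriptor (module-info.java) for an application module.
    // Names are placeholders; the declaration syntax was still being finalized under JSR 376.
    module com.example.orders {
        // Depend only on the platform modules the application actually uses...
        requires java.sql;

        // ...and expose only the packages other modules are meant to see.
        exports com.example.orders.api;
    }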

"From the beginning, the Java SE platform has been this huge monolithic thing," Reinhold said. "Even if you wanted to use just a small part of it, you had to install all of it." It has been difficult to run Java SE on small devices, he observed, but it has also been a pain on large devices and in some cloud environments. "What we want is a box of Lego parts [which are] modular that we can assemble as needed," he said.

The introduction of compact profiles in Java 8 was a "baby step" toward relieving some of that pain, Reinhold said, but Java 9 "will be composed of a set of finer-grained modules and will include tools to enable developers to identify and isolate only those modules needed for their application."

Project Jigsaw comprises three JEPs and a JSR. JEPs (JDK Enhancement Proposals) allow Oracle to develop small, targeted features for the Java language and virtual machine outside the Java Community Process (JCP), which requires full Java Specification Requests (JSRs).

  • JEP 200: The Modular JDK defines the modular structure of the JDK. Reinhold described it as an "umbrella for all the rest of them."
  • JEP 201: Modular Source Code reorganizes the JDK source code into modules.
  • JEP 220: Modular Run-Time Images restructures the JDK and JRE run-time images to accommodate modules.
  • JSR 376: The Java Platform Module System, the central component of Project Jigsaw, defines the module system for the Java platform itself.

Among other things, the modularization of Java will lead to the removal of the rt.jar file (the runtime JAR), which Reinhold referred to as "a constant source of pain," in favor of modular run-time images and a major reduction of the JVM footprint. (JAR files themselves, he added, would be with us "until the heat death of the universe.") While the full JRE clocks in at 55MB on 32-bit ARM Linux, Reinhold noted, compact1, the smallest profile, comes in at 11MB; compact2 is 17MB, and compact3 is 30MB. The modularization project will also lead to the elimination of the extension classpath and some reorganization of the lib directory, he said.

Modularity will also improve security, he said. After "a bit of a rough period," Java security is now much improved, and Java 9 will strengthen it further by making it possible to enforce strong modular boundaries -- defining what's internal to a module and what's exposed outside it. Java 9 will also introduce a tool called jlink, the Java linker, which will make it possible to link a set of modules into a single runtime.

"Vanilla Java is a dynamically linked environment," Reinhold said. "But when you are assembling modules into a one pile of bits that is custom JRE, you need a linker."

Of course, there's already a modular architecture for Java. The OSGi Alliance currently provides a set of specifications that define a dynamic component system for the platform. Reinhold responded to the inevitable question about whether Java 9's new module system will be compatible with OSGi's modular architecture.

"We intend to explore ways of making standard Java modules available to other module systems," he said, but added, "We don't see how to achieve all of the goals we have for the module system if one of those goals is also to be completely compatible with OSGi."

Reinhold also looked into his crystal ball and speculated a bit on developments beyond Java 9. He touched on current efforts to improve Java's type system and make the language more efficient at handling situations requiring identity-less types via Project Valhalla, announced in August. He also pointed to Project Panama, which aims to improve the connections between the JVM and "foreign" non-Java APIs.

 

Posted by John K. Waters on March 13, 2015


2 Open Source Eclipse IoT Projects Released Ahead of EclipseCon 2015

The San Francisco edition of the Eclipse Foundation's user conference, EclipseCon 2015, gets under way next week (March 9-12). I'm looking forward to catching some sessions and keynotes on a range of topics, but I'm particularly intrigued by the foundation's activities around the Internet of Things (IoT). The Eclipse IoT momentum just keeps building. In fact, two open-source projects that are part of that effort, Eclipse Paho and Eclipse Mosquitto, announced new releases this week.

The two releases -- Paho 1.1 and Mosquitto 1.4 -- implement, respectively, the client and the broker for the OASIS Message Queuing Telemetry Transport (MQTT) protocol. The MQTT protocol is designed to connect physical-world devices and networks with applications and middleware. It has been widely adopted by IoT solution providers, largely because of its small footprint, minimal bandwidth requirements for messages, and its ability to adapt to unreliable network connections -- all essential qualities for an IoT protocol.

Ian Skerrett, the Foundation's vice president of marketing, who has been leading the Eclipse effort to foster an open-source community around IoT, told me that providing open-source implementations of MQTT has been something of a project focus. Interest in these two projects in particular has been high in the community, Skerrett said in an email, and the Foundation considers their release to mark "a pretty big milestone."

The Eclipse IoT Project aims to establish an open platform for IoT and machine-to-machine (M2M) communication that combines a set of services and frameworks, open-source implementations of standard protocols, and an Eclipse-based IDE for IoT/M2M development.

The Paho Project provides scalable open-source client implementations of open and standard messaging protocols for IoT/M2M apps. New in this release: support for .NET, WinRT, and Android clients; C and C++ libraries for embedded clients; updated versions of the Java, Python, and JavaScript clients to conform to the MQTT 3.1.1 standard. The new version is available for download now.
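
For readers who haven't tried MQTT, here's a minimal sketch of publishing a message with the Paho Java client; the broker address, client ID, and topic are placeholders, and the snippet assumes the Paho MQTT v3 client library is on the classpath.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class TemperaturePublisher {
        public static void main(String[] args) throws MqttException {
            // Placeholder broker address and client ID -- point these at your own broker.
            MqttClient client = new MqttClient("tcp://broker.example.org:1883", "sensor-42");
            client.connect();

            // Publish a small payload at QoS 1 (at-least-once delivery);
            // MQTT's compact wire format is part of what makes it IoT-friendly.
            MqttMessage message = new MqttMessage("21.5".getBytes());
            message.setQos(1);
            client.publish("sensors/livingroom/temperature", message);

            client.disconnect();
        }
    }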

The Mosquitto project provides an open-source implementation of an MQTT broker. New in this release: easier integration with Web sites through support for WebSockets; more flexible support for TLS v1.2, 1.1 and 1.0 for enhanced security, plus support for ECDHE-ECDSA family ciphers; improved interoperability between MQTT brokers via better bridge support, including wildcard TLS certificates and conformance to MQTT 3.1.1. The new version is also available now for download.

"In the last year we have seen tremendous interest in the Eclipse IoT community, and in particular Paho and Mosquitto," said the Foundation's executive director Mike Milinkovich, in a statement. "Forty developers contributed to the new Paho and Mosquitto releases, demonstrating incredible interest for these projects and MQTT in general."

The Eclipse IoT project has evolved fairly quickly into a full-fledged community that is currently 15 projects strong. In addition to the MQTT protocol, those projects implement Lightweight M2M and CoAP, as well as several IoT-friendly frameworks.

A complete list of Eclipse IoT projects is available on the Foundation's Web site.

Posted by John K. Waters on March 6, 2015


Report: Oracle's Click-to-Play Feature Greatly Improves Java Security

During last October's JavaOne conference, I attended the post-keynotes Java panel, where leaders of the various Java organizations within Oracle, along with JCP chairman Patrick Curran, lined up at one end of the press room to answer reporters' questions. It's a traditional part of the event, this panel, and I've been to more than a few of them, so you'd think I would have noticed immediately the dearth of questions about Java security, a topic that had kicked off the Q&A for the last few years. But it was Henrik Stahl, vice president of product management in Oracle's Java Platform Group, who observed at the end of the discussion that there had been no security questions at all.

I mentioned this later to Mike Milinkovich, the executive director of the Eclipse Foundation, who was on hand that day to lead a session. He was not surprised. "That's what happens when you have a squeaky clean year," he said.

I'm not sure I'd call 2014 "squeaky clean," but Java-based breaches -- not to mention headlines -- were down last year. In fact, there were no major zero-day Java vulnerabilities discovered and exploited in the wild. Why? A new report released this week by HP Security Research offers at least part of the answer. The authors of the "HP Cyber Risk Report 2015" (PDF) credited Oracle's click-to-play feature, introduced in 2014, for the improved security.

"Oracle introduced click-to-play as a security measure making the execution of unsigned Java more difficult," the report's authors wrote. "As a result we did not encounter any serious Java zero days in the malware space. Many Java vulnerabilities were logical or permission-based issues with a nearly 100 percent success rate. In 2014, even without Java vulnerabilities, we still saw high success rate exploits in other areas."

Click-to-play is the browser feature that blocks Java content by default. The Web page displays a blank space until the user clicks the box to enable that content. This seems to have mitigated the vulnerability of Java in the browser, which was largely the result of the way Oracle has bundled the Java browser extension with the Java Runtime Environment (JRE).

Among the exploits listed in the report's Top 10, none targeted Java, which had been one of the most commonly exploited targets in the previous few years. "This may indicate that the security push, which caused delay in the release of Java 8, is getting some results," the researchers wrote, "although it may be too early to tell. It may also be a consequence of browser vendors blocking outdated Java plugins by default, making the platform a less attractive target for attackers."

The success of the click-to-play feature at thwarting Java attacks was "the one exception" in an "inherently vulnerable" environment in which systems are built on decades-old code, and patches are inadequately deployed, the researchers concluded. And that success may be responsible for shifting attacker focus to vulnerabilities in Microsoft's Internet Explorer and Adobe Flash.

"Attackers continue to leverage well-known techniques to successfully compromise systems and networks," the researchers wrote. "Many client and server app vulnerabilities exploited in 2014 took advantage of codes written many years back -- some are even decades old."

The most common exploit the researchers saw last year was CVE-2010-2568 (CVE: "Common Vulnerabilities and Exposures"), which accounted for just over a third of all discovered exploits. According to the CVE site, this vulnerability affects the Windows Shell in XP SP3, Server 2003 SP2, Vista SP1 and SP2, Server 2008 SP2 and R2, and Windows 7. It allows local users or remote attackers to execute arbitrary code via a crafted .LNK or a .PIF shortcut file, which is not properly handled during icon display in Windows Explorer. Six Java exploits were listed, accounting for a total of 28 percent.

There's much more in this report -- things like a deep-dive into highly successful vulnerabilities, an awesome glossary, and a lot of revealing statistics. The report is free for download. I also recommend the HP Security Research Blog.

Posted by John K. Waters on February 24, 2015


Bosch ProSyst Acquisition Good News for Java and OSGi

German Internet of Things (IoT) platform provider Bosch Software Innovations (BSI) is acquiring ProSyst, a Java- and OSGi-based software vendor specializing in middleware for the IoT, the two companies announced this week. BSI, a subsidiary of the Bosch Group, specializes in the development of gateway software and middleware for IoT.

ProSyst is a provider of middleware for managing connected devices and implementing Machine-to-Machine (M2M) cloud-based applications. The company's roots are in Java and the Open Service Gateway initiative (OSGi) specification, and it has focused mainly on open, modular, and neutral software platforms that services providers and device manufacturers can use to deploy apps and services.

ProSyst products serve as a link between devices and the cloud, and that link is essential for interconnecting buildings, vehicles and machines, said BSI president Rainer Kallenbach, in a statement.

"[T]he ProSyst software will enable our customers to launch new applications on the Internet of Things more quickly and be one of the first to tap into new areas of business," Kallenbach said. "The ProSyst software is highly compatible with the Bosch IoT Suite, our platform for the Internet of Things. Above all, it complements our device management component by supporting a large number of different device protocols. This will allow us to achieve an even better market position than before."

BSI will be acquiring, among other assets, the ProSyst device runtime stacks, tools, SDKs and remote device management/provisioning platforms. Bosch also takes on the company's approximately 110 Java/OSGi engineers.

"ProSyst has been the leading provider of OSGi implementations for embedded systems for many years," Mike Milinkovich, executive director of the Eclipse Foundation, told ADTmag in an e-mail. "A quick look at their customer reference page shows a pretty amazing list of accounts, including Bosch. And those are just the ones that [the company is] allowed to talk about. There are other, very significant players who embed the ProSyst OSGi technology, but prefer anonymity."

The ProSyst customer list includes, among others, Intel, Cisco, AT&T and Deutsche Telekom.

Milinkovich believes that the acquisition signals the intention of Bosch to become a significant player in the IoT, with a particular focus on industrial applications.

"To me, [the Bosch] acquisition of ProSyst means that Java and OSGi will be an important part of [the company's] strategy," Milinkovich said. "That is great news for both Java and OSGi. In particular, I see this as significantly increasing the likelihood that Java and OSGi will be fundamental technologies in the Industrial Internet."

Posted by John K. Waters on February 19, 2015


Understanding Service (not Server) Virtualization

"What's in a name?" Shakespeare's Juliet asked. Quite a lot, actually. Take it from me: the other John Waters. Another example: service virtualization. The name is so close to the most well-known and widely implemented type of virtualization -- server virtualization -- that it's gumming up the conversation about using virtualization in the pre-production portion of the software development lifecycle.

Industry analyst Theresa Lanowitz has been doing her part for a while now to clarify terms. It matters, she says, because service virtualization could have as big an impact on application development as server virtualization had on the datacenter.

"When many people hear the word 'virtualization,' the first thing that pops into their heads is serv-er virtualization, and of course, VMware," Lanowitz told me. "Which is understandable. Server virtualization is incredible technology. It allows enterprises to make better use of their hardware and to decrease their overall energy costs. It allows them to do a lot more with underutilized resources. Serv-ice virtualization is almost the antithesis, in that it allows you to do more with resources that are in high demand."

To be clear, as Lanowitz defines it, service virtualization "allows development and test teams to statefully simulate and model the dependencies of unavailable or limited services and data that cannot be easily virtualized by conventional server or hardware virtualization means." Lanowitz stresses "stateful simulation" in her definition, she said, because some people argue that service virtualization is the same as mocking and stubbing. But service virtualization is an architected solution, while mocking and stubbing are workarounds.
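
To make that distinction concrete, here's a purely illustrative Java sketch (the class and method names are mine, not from any vendor's product): a stub returns the same canned answer every time, while a stateful simulation of a dependency such as a payment service remembers earlier interactions and answers accordingly, which is what lets a test team exercise realistic multi-step scenarios.

    import java.util.HashMap;
    import java.util.Map;

    // A stub: every call gets the same canned response, regardless of history.
    class StubPaymentService {
        String authorize(String orderId, double amount) {
            return "APPROVED";
        }
    }

    // A (highly simplified) stateful simulation: responses depend on what the
    // virtual service has already seen, so multi-step test scenarios behave
    // the way the real dependency would.
    class SimulatedPaymentService {
        private final Map<String, Double> authorized = new HashMap<>();

        String authorize(String orderId, double amount) {
            if (authorized.containsKey(orderId)) {
                return "DUPLICATE"; // the same order can't be authorized twice
            }
            authorized.put(orderId, amount);
            return "APPROVED";
        }

        String capture(String orderId) {
            // Capture succeeds only if an authorization was recorded earlier.
            return authorized.containsKey(orderId) ? "CAPTURED" : "NO_AUTH";
        }
    }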

"Lifecycle Virtualization" is voke's umbrella term for the use of virtualization in pre-production. The full menu of technologies for LV includes service virtualization, virtual and cloud-based labs, and network virtualization solutions. The current list of vendors providing service virtualization solutions includes CA, HP, IBM, Parasoft, and Tricentis.

Lanowitz and her voke, inc. co-founder, Lisa Donzek, take a closer look at recent developments in the service virtualization market in a new report, "Market Snapshot Report: Service Virtualization." The report looks at what 505 enterprise decision makers reported about their experiences with service virtualization between August 2014 and October 2014, and the results they got from their efforts. It also does an excellent job of defining terms and identifying the products and vendors involved.

This is voke's second market report on service virtualization. Their first survey was conducted in 2012 and included 198 participants. "In 2012, the market had just been legitimized," Lanowitz said. "We still had a long way to go."

What jumped out at me in this report was the change in the number of dependent elements to which the dev/test teams of the surveyed organizations required access. In 2012, participants reported needing access to 33 elements for development or testing, but had unrestricted access to only 18. In 2014, they reported needing access to 52 elements but had unrestricted access to only 23. Sixty-seven percent of the 2014 participants reported unrestricted access to ten or fewer dependent elements.

"It's clear to us that if you're not using virtualization in your application life cycle, you're going to have some severe problems, whether it's meeting time-to-market demands, quality issues, or cost overruns," Lanowitz said. "Service virtualization helps you remove the constrains and wait times frequently experienced by development and test teams needing to access components, architectures, databases, mainframes, mobile platforms, and so on. Using service virtualization will lead to fewer defects, reduced software cycles, and ultimately, increased customer satisfaction."

There's a lot more in the report. It's a must read.

A rose by any other name might smell as sweet, but it's not going to jack up your productivity.

Posted by John K. Waters on February 9, 2015


One Solution for Developer Fatigue

When you hear the words "developer fatigue," what images come to mind? Your team leader talking about yet another project with an impossible deadline? Bleary-eyed teammates on all-night coding sessions? Too much java (and Java)? Or maybe you see a more profound enervation brought on by "the constant and increasing flood of new languages, libraries, frameworks, platforms, and programming models that are garnering popular attention in the developer community." Those are JNBridge CTO Wayne Citrin's words, soon to appear in a company blog post.

Citrin and other industry watchers have noted the growing frustration among developers faced with a constant need to learn and adjust to new languages and technologies while remaining productive. Citrin believes he has at least a partial solution to this problem, and I talked with him about it recently.

"It's great that there's so much innovation going on," he said, "but the half-life of many of these innovations seems to be decreasing. It used to be that something would come out, and once it got adopted, you'd expect it to be the big paradigm for at least a few years. That's just not the case anymore. It can be a genuine hassle to keep up with all this stuff, and it takes away from the time people need to be productive. At some point, people just start throwing up their hands."

JNBridge is a provider of interoperability tools for connecting Java and .NET frameworks. The company's flagship product, JNBridgePro, is a general-purpose Java/.NET interoperability tool designed to bridge anything Java to .NET, and vice versa. The tool allows developers to access the full API of either platform from the other.

Citrin's solution, not surprisingly, lies in tools like JNBridgePro.

"It's a little self-serving on our part, I admit, but it's also true that interop tools can really help," he said. "They make it possible for you to introduce yourself to new technologies gradually, focusing on the things in those technology that can help you solve your immediate problems. If you're a developer who uses our stuff and you're deep into both Java and .NET and you need to make use of, say, Python or Groovy, you can reach out for features from those technologies at your own pace."

By taking it slowly and integrating features from the new tech into their projects as they need them, developers can leverage their existing skills, reduce the risk of making a bad bet, and reduce their stress -- which ultimately reduces developer fatigue, he said.

"As long as the implementation is Java- or .NET-based, we can help developers integrate any part of it into their existing project, when they're ready, and without having to throw all the existing stuff away, and completely learning the new thing, re-implementing it from scratch, and having to find and fix new bugs," he said. "It beats the heck out of the alternative of 'warehousing,' which is essentially jumping in and learning a new technology for its own sake."

Citrin isn't the only one noticing this phenomenon, of course. A lively back-and-forth on the consequences of these seemingly never-ending demands on developers was sparked last July when front-end and server-side developer Ed Finkler (aka funkatron) wrote a blog post entitled "The Developer's Dystopian Future." In that post he confessed that his "tolerance for learning curves" was growing smaller every day.

"New technologies, once exciting for the sake of newness, now seem like hassles," he wrote. "I'm less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity." He also expressed what is almost certainly a widespread fear in this increasingly polyglot world of simply being left behind.

Finkler's post struck a nerve in more than a few developers and fellow bloggers. Tim Bray, one of the creators of the XML specification, talked about it in his "Discouraged Developer" blog.

"[T]here is a real cost to this continuous widening of the base of knowledge a developer has to have to remain relevant," he wrote. One of today's buzzwords is "full-stack developer." Which sounds good, but there's a little guy in the back of my mind screaming 'You mean I have to know Gradle internals and ListView failure modes and NSManagedObject quirks and Ember containers and the Actor model and what interface{} means in Go and Docker support variation in Cloud providers?' Color me suspicious."

Programmer/podcaster Marco Arment took up Finkler's commentary in his blog.

"I feel the same way," he wrote, "and it's one of the reasons I've lost almost all interest in being a web developer. The client-side app world is much more stable, favoring deep knowledge of infrequent changes over the constant barrage of new, not necessarily better but at least different technologies, libraries, frameworks, techniques, and methodologies that burden professional web development."

Author Matt Gemmell commented on Arment's and Finkler's posts on his "Confessions of an ex-developer" blog. "There's a chill wind blowing, isn't there? I know we don't talk about it much, and that you're crossing your fingers and knocking on wood right now, but you do know what I mean," he wrote.

Redmonk analyst Stephen O'Grady noticed Arment's, Bray's, Finkler's, and Gemmell's posts and wrote about them on his "tecosystems" blog.

"Developers have historically had an insatiable appetite for new technology," he wrote, "but it could be that we're approaching the too-much-of-a-good-thing stage. In which case, the logical outcome will be a gradual slowing of fragmentation followed by gradual consolidation." (Be sure to check out his new O'Reilly book, The New Kingmakers: How Developers Conquered the World.)

If you haven't already, it's worth reading these connected blogs. (I'd start with Finkler's.) And keep an eye out for Citrin's upcoming post on the JNBridge Web site.

Posted by John K. Waters on February 9, 2015


Following 'Whirlwind' Year, Docker Changes Operational Structure

The open source Docker project experienced "unprecedented growth" last year, its maintainers say, with project contributors quadrupling and pull requests reaching 5,000.

To cope with the surge of this "whirlwind year," Docker, Inc., the chief commercial supporter of the project, has modified its organizational structure, spreading out the responsibilities that had been handled by Docker's founder and CTO, Solomon Hykes, into three new leadership roles.

The new leadership roles are Chief Architect, Chief Maintainer, and Chief Operator. The new operational structure also defines the day-to-day work of individual contributors working in each of these areas. All three positions were filled by new or existing employees of Docker, Inc.

"This is the natural progression of any successful company or open source project," Docker's new Chief Operator, Steve Francia, told ADTmag. "As your popularity grows, you eventually have to spread the load, and that's what this new structure is doing."

Since the release of Docker 1.0 last June, the project has attracted more than 740 contributors, and fostered more than 20,000 projects and 85,000 "Dockerized" applications.

Hykes will take on the role of Chief Architect, which Francia called "the visionary role." The change trims Hykes' responsibilities to overseeing the architecture, operations, and technical maintenance of the project. He will also be responsible for steering the general direction of the project, defining its design principles, and "preserving the integrity of the overall architecture as the platform grows and matures."

The role of Chief Maintainer has been assigned to Michael Crosby, who began working with the project as a community member in 2013 and has been a core project maintainer. He will be responsible for "all aspects of quality for the project, including code reviews, usability, stability, security, performance, and more." "He was appointed to the position because he was already so good at supporting the other maintainers," Francia said. "It's a role that, in some ways, he's already been playing." Crosby is described in the Docker announcement as "one of its most active, impactful contributors."

As Chief Operator, Francia will be responsible for the day-to-day operations of the project, managing and measuring its overall success, and ensuring that it is governed properly and working "in concert" with the Docker Governance Advisory Board (DGAB). For the past three years Francia had served in a similar capacity as chief developer advocate at MongoDB, where he "created the strongest community in the NoSQL database world," the announcement declares.

"When I joined MongoDB, I'd been around long enough to realize that companies that transform the industry come along maybe once in a decade," Francia said, "and I knew how lucky I was to be a part of that. At Docker I get to be part of another transformation, one that is going to change the way development happens, forever. You always hope that lightening will strike twice, but I sure didn't expect it to happen so soon."

Francia introduced himself to the Docker community in a Q&A session today on IRC chat in #docker. He also posted his first blog as Chief Operator.

The Docker reorganization itself went through the same process as a proposed feature, and was documented in a pull request (PR #9137). It was commented on, modified, and merged into the project. The changes are intended to make the project more open, accessible, and scalable in an incremental way, without unnecessary refactoring.

Docker and containerization seem to be on everybody's mind these days as microservice architectures gain traction in the enterprise. Over the past few years, Netflix, eBay, and Amazon (among others) have changed their application architectures to microservice architectures. ThoughtWorks gurus Martin Fowler and James Lewis defined the microservice architectural style as "an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms." Containers are emerging as a popular means to this end.

"The level of ecosystem support Docker has gained is stunning, and it speaks to the need for this kind of technology in the market and the value it provides," said IDC analyst Al Hilwa said in an earlier interview.

Posted by John K. Waters on January 28, 2015


Can Containers Fix Java's Legacy Security Vulnerabilities?

I reported last week on Oracle's latest Critical Patch Update, which included 169 new security vulnerability fixes across the company's product lines, including 19 for Java. The folks at Java security provider Waratek pointed out to me that 16 of those Java fixes addressed new sandbox bypass vulnerabilities that affect both legacy and current versions of the platform. That heads-up prompted a conversation with Waratek CTO and founder John Matthew Holt and Waratek's security strategist Jonathan Gohstand about their container-based approach to one of the most persistent data center security vulnerabilities: outdated Java code.

Holt reminded me that the amount of Java legacy code in the enterprise is about to experience a kind of growth spurt, as Oracle stops posting updates of Java SE 7 to its public download sites in April.

"When you walk into virtually any large enterprise and you ask them which version of Java they're running, the answer almost always is, every version but the current one," Holt said. "That situation is not getting better."

Outdated Java code with well-documented security vulnerabilities persists in most data centers, Gohstand said, where it is often a target during attacks. The reasons that legacy Java persists, in spite of its security risks (and the widespread knowledge that it's there), are up for debate. But Waratek's unconventional approach to solving that problem (and what Holt calls "the continued and persistent insecurity of Java applications at any level of the Java software stack") is a specialized version of a very hot trend.

Containers are not new, of course, but they're part of a trend that appears to have legs (thanks largely, let's face it, to Docker). Containers are lightweight, in that they carry no operating system; apps within a container start up immediately, almost as fast as apps running on an OS; they are fully isolated; they consume fewer physical resources; and there's little of the performance overhead associated with virtualization -- no "virtualization tax."

Waratek's containerization technology, called Java Virtual Containers, is a lightweight, quarantined environment that runs inside the JVM. It was developed in response to a legacy from the primordial Java environment of the 1990s, Holt said.

"It was a trendy idea at the time to have a security class sitting side-by-side with a malicious class inside the same namespace in the JVM," he said. "Sun engineers believed that the security manager would be able to differentiate between the classes that belonged to malicious code and those that belonged to the security enforcement code. But that led to a very complicated programming model that is maintained by state. And states are difficult to maintain. When we looked at the security models that have succeeded historically, we saw right away that they were based on separation of privileges."

Waratek began as a research project based in Dublin in 2010, an effort to "retrofit this kind of privilege and domain separation" into the JVM, Holt said. That research led to the company's Java virtual container technology. "Suddenly you have parts of the JVM that you know are safe, because they are in a different code space," he said.

Holt pointed out that containerization is a technique, not a technology, and he argued that that is a good thing.

"It means that it doesn't matter what containerization technology you use," he said. "People are starting to wake up to the value of putting applications into containers—which are really locked boxes. But the choice of one container doesn't exclude the use of another. You can nest them together. This is really important, because it means that people can assume that containers are going to be part of their roadmap going forward. Then the conversation turns to what added value can I get for this locked box."

Holt and company went on to build a new type of security capability into their containers, called Runtime Application Self-Protection (RASP), producing in the process a product called Locker. Gartner has defined RASP as "a security technology built in or linked to an application or app runtime environment, and capable of controlling app execution and detecting and preventing real-time attacks." In other words, it's tech that makes it possible for apps to protect themselves.

"We see this as an opportunity to insert security in a place where security is going to be more operationally viable and scalable," Gohstand said.

Gohstand is set to give a presentation today (Wednesday) on this very topic at the AppSec Conference in Santa Monica.

Posted by John K. Waters on January 28, 2015


2015 Enterprise Dev Predictions, Part 3: Digital Transformation and Lifecycle Virtualization

And finally... Okay, this one isn't so much a set of predictions as observations on some trends enterprise developers should be aware of at the dawn of 2015.

Industry analyst and author Jason Bloomberg is president of Intellyx, an analysis and training firm he founded last June. He's probably best known as a longtime ZapThink analyst (and president, before he went out on his own). He has also written several books; I'm a fan of The Agile Architecture Revolution (John Wiley & Sons, 2013).

I recently caught up with Bloomberg shortly before he headed to Las Vegas for the annual CES gizmogasm. He pointed to two trends that he believes will have a profound effect on enterprise developers this year. First, what he called "the digital transformation."

"Customer preferences and behavior are now driving enterprise technology decisions more than they ever have before," he said. "That includes B-to-B and B-to-C. They're driving this combination of digital touch points and ongoing innovation at the user interface, and the enterprise is upping the ante on performance, resilience, and the user experience. But it all has to connect, end-to-end. All the pieces have to fit together."

DevOps, which connects development and operations, is now being extended to the business, to the customer experience, Bloomberg said. (He called it "DevOps on steroids.") This trend also includes things like continuous integration, continuous delivery, and established Agile methodologies that now have to connect to the customer at increasing levels.

These changes could be especially challenging for enterprise developers, Bloomberg said, because the shift is organizational, which is very different from the technology changes they're used to. If companies get this right, he said, server-side developers and user-facing developers will be working together in a new way, focused on delivering technology value to customers.

"Developers are going to be called upon to expand past their boundaries, in terms of how they can contribute and provide value to the companies they work for," he said. "This shakes some traditional developers to their core, but it's also very exciting to a lot of people, especially the twentysomethings, who are becoming the go-to players for digital technology. This is what they live and breath."

These shifts are already showing up in retail and media (Google, Netflix, Spotify), but Bloomberg expects them to spread quickly to virtually every industry. "I think it's going into overdrive in 2015," he said.

Trend No. 2: the Moore's-Law-like progress of the Internet of Things, driven by exponential improvements in things like batteries -- which are shrinking even as they become more powerful -- and burgeoning memory capacity.

"People tend to think linearly," Bloomberg said. "They expect things to get twice as good every year. But things are going to explode. The question will quickly become, how do we take advantage of so many different improvements in the technology? What can I do with a battery that is a thousandth of the size of current batteries, with processors that are a thousand times more powerful, with terabytes of memory?"

Developers in the trenches who just need to get their jobs done will have a hard time finding solid ground amid all of these changes, he said.

"All of this stuff is changing so fast, it's hard to know what's real and what's hype," Bloomberg said. "You could argue that it's always been this way, but developers are facing a range of changes that are going to be disruptive and quite challenging in the coming year."

Theresa Lanowitz is another industry watcher who went out on her own. The former Gartner analyst founded Voke, Inc. in 2006 to cover "the edge of innovation driven by technology, innovation, disruption, and emerging market trends." The white papers she publishes are not to be missed.

Among other things, Lanowitz has been tracking the enterprise adoption of the practice of applying virtualization to the pre-production portion of the application lifecycle, which she has dubbed Lifecycle Virtualization. A number of technologies support this practice, including most prominently service virtualization (provided by vendors such as CA, Parasoft, and HP), but also virtual and cloud-based labs (Skytap), and network virtualization (HP with its Shunra acquisition).

"We're starting to see more and more organizations saying, okay, we recognize this need for parity among dev, QA, and operations," she said. "We also understand that we need to support our line of business. How do we do that? We move virtualization to the portion of the application lifecycle where it really helps to control the business outcome."

Lanowitz expects this shift to take off in 2015, she said, because the tools are getting much better. "It makes a huge difference," she said.

Service virtualization in particular is gaining traction in the enterprise, Lanowitz said. She defines it as the process of enabling dev and test teams to statefully simulate and model the dependencies of unavailable or limited services and data that cannot be easily virtualized by conventional server or hardware virtualization means. She stresses stateful simulation in her definition, "because many organizations will say service virtualization is the same as mocking and stubbing. Service virtualization is an architected solution; mocking and stubbing are workarounds."

The bottom line: Service virtualization allows for testing much earlier in the application lifecycle, which ultimately makes it possible to deliver better business value and outcomes. It's that value proposition that's going to cause Lifecycle Virtualization to show up on a growing number of developers' radar in the coming year.

"If you really believe in the potential of a collaborative environment that includes dev, QA, and operations, then a solution like service virtualization is a defining technology," she said. "The team that benefits from it most directly is the test team, of course. They can test more frequently, they understand their meantime to defect discovery, they can increase their test execution, and they can increase their test coverage. But it is the development team that has to implement service virtualization to make that happen."

Lanowitz is at work on a new white paper updating her 2012 Lifecycle Virtualization stats. I'll let you know when it's published.

Posted by John K. Waters on January 16, 2015