Report: Oracle's Click-to-Play Feature Greatly Improves Java Security

During last October's JavaOne conference, I attended the post-keynotes Java panel, where leaders of the various Java organizations within Oracle, along with JCP chairman Patrick Curran, lined up at one end of the press room to answer reporters' questions. This panel is a traditional part of the event, and I've been to more than a few of them, so you'd think I would have noticed immediately the dearth of questions about the security of Java, a topic that had kicked off the Q&A for the last few years. But it was Henrik Stahl, vice president of product management in Oracle's Platform Group, who observed at the end of the discussion that there had been no security questions at all.

I mentioned this later to Mike Milinkovich, the executive director of the Eclipse Foundation, who was on hand that day to lead a session. He was not surprised. "That's what happens when you have a squeaky clean year," he said.

I'm not sure I'd call 2014 "squeaky clean," but Java-based breaches -- not to mention headlines -- were down last year. In fact, no major zero-day Java vulnerabilities were discovered and exploited in the wild. Why? A new report released this week by HP Security Research offers at least part of the answer. The authors of the "HP Cyber Risk Report 2015" (PDF) credited Oracle's click-to-play feature, introduced in 2014, for the improved security.

"Oracle introduced click-to-play as a security measure making the execution of unsigned Java more difficult," the report's authors wrote. "As a result we did not encounter any serious Java zero days in the malware space. Many Java vulnerabilities were logical or permission-based issues with a nearly 100 percent success rate. In 2014, even without Java vulnerabilities, we still saw high success rate exploits in other areas."

Click-to-play is the browser feature that blocks Java content by default. The Web page displays a blank space until the user clicks the box to enable that content. This seems to have mitigated the vulnerability of Java in the browser, which was largely the result of the way Oracle bundles the Java browser extension with the Java Runtime Environment (JRE).

Among the exploits listed in the report's Top 10, none targeted Java, which had been one of the most commonly exploited targets in the previous few years. "This may indicate that the security push, which caused delay in the release of Java 8, is getting some results," the researchers wrote, "although it may be too early to tell. It may also be a consequence of browser vendors blocking outdated Java plugins by default, making the platform a less attractive target for attackers."

The success of the click-to-play feature at thwarting Java attacks was "the one exception" in an "inherently vulnerable" environment in which systems are built on decades-old code, and patches are inadequately deployed, the researchers concluded. And that success may be responsible for shifting attacker focus to vulnerabilities in Microsoft's Internet Explorer and Adobe Flash.

"Attackers continue to leverage well-known techniques to successfully compromise systems and networks," the researchers wrote. "Many client and server app vulnerabilities exploited in 2014 took advantage of codes written many years back -- some are even decades old."

The most common exploit the researchers saw last year was CVE-2010-2568 (CVE stands for "Common Vulnerabilities and Exposures"), which accounted for just over a third of all discovered exploits. According to the CVE site, this vulnerability affects the Windows Shell in XP SP3, Server 2003 SP2, Vista SP1 and SP2, Server 2008 SP2 and R2, and Windows 7. It allows local users or remote attackers to execute arbitrary code via a crafted .LNK or .PIF shortcut file, which is not properly handled during icon display in Windows Explorer. Six Java exploits were listed, accounting for a total of 28 percent.

There's much more in this report -- things like a deep-dive into highly successful vulnerabilities, an awesome glossary, and a lot of revealing statistics. The report is free for download. I also recommend the HP Security Research Blog.

Posted by John K. Waters on February 24, 2015


Bosch ProSyst Acquisition Good News for Java and OSGi

German Internet of Things (IoT) platform provider Bosch Software Innovations (BSI) is acquiring ProSyst, a Java- and OSGi-based software vendor specializing in middleware for the IoT, the two companies announced this week. BSI, a subsidiary of the Bosch Group, specializes in the development of gateway software and middleware for IoT.

ProSyst is a provider of middleware for managing connected devices and implementing Machine-to-Machine (M2M) cloud-based applications. The company's roots are in Java and the Open Service Gateway initiative (OSGi) specification, and it has focused mainly on open, modular, and neutral software platforms that services providers and device manufacturers can use to deploy apps and services.

ProSyst products serve as a link between devices and the cloud, and that link is essential for interconnecting buildings, vehicles and machines, said BSI president Rainer Kallenbach, in a statement.

"[T]he ProSyst software will enable our customers to launch new applications on the Internet of Things more quickly and be one of the first to tap into new areas of business," Kallenbach said. "The ProSyst software is highly compatible with the Bosch IoT Suite, our platform for the Internet of Things. Above all, it complements our device management component by supporting a large number of different device protocols. This will allow us to achieve an even better market position than before."

BSI will be acquiring, among other assets, the ProSyst device runtime stacks, tools, SDKs and remote device management/provisioning platforms. Bosch also takes on the company's approximately 110 Java/OSGi engineers.

"ProSyst has been the leading provider of OSGi implementations for embedded systems for many years," Mike Milinkovich, executive director of the Eclipse Foundation, told ADTmag in an e-mail. "A quick look at their customer reference page shows a pretty amazing list of accounts, including Bosch. And those are just the ones that [the company is] allowed to talk about. There are other, very significant players who embed the ProSyst OSGi technology, but prefer anonymity."

The ProSyst customer list includes, among others, Intel, Cisco, AT&T and Deutsche Telekom.

Milinkovich believes that the acquisition signals the intention of Bosch to become a significant player in the IoT, with a particular focus on industrial applications.

"To me, [the Bosch] acquisition of ProSyst means that Java and OSGi will be an important part of [the company's] strategy," Milinkovich said. "That is great news for both Java and OSGi. In particular, I see this as significantly increasing the likelihood that Java and OSGi will be fundamental technologies in the Industrial Internet."

Posted by John K. Waters on February 19, 2015


Understanding Service (not Server) Virtualization

"What's in a name?" Shakespeare's Juliet asked. Quite a lot, actually. Take it from me: the other John Waters. Another example: service virtualization. The name is so close to the most well-known and widely implemented type of virtualization -- server virtualization -- that it's gumming up the conversation about using virtualization in the pre-production portion of the software development lifecycle.

Industry analyst Theresa Lanowitz has been doing her part for a while now to clarify terms. It matters, she says, because service virtualization could have as big an impact on application development as server virtualization had on the datacenter.

"When many people hear the word 'virtualization,' the first thing that pops into their heads is server virtualization, and of course, VMware," Lanowitz told me. "Which is understandable. Server virtualization is incredible technology. It allows enterprises to make better use of their hardware and to decrease their overall energy costs. It allows them to do a lot more with underutilized resources. Service virtualization is almost the antithesis, in that it allows you to do more with resources that are in high demand."

To be clear, as Lanowitz defines it, service virtualization "allows development and test teams to statefully simulate and model the dependencies of unavailable or limited services and data that cannot be easily virtualized by conventional server or hardware virtualization means." Lanowitz stresses "stateful simulation" in her definition, she said, because some people argue that service virtualization is the same as mocking and stubbing. But service virtualization is an architected solution, while mocking and stubbing are workarounds.
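Lanowitz's distinction can be sketched in a few lines of Java. This is a minimal illustration only, not any vendor's product, and the interface and class names are hypothetical: the stub always returns the same canned answer, while the stateful simulation remembers what happened on earlier calls.

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical dependency that a dev/test team cannot freely access
// in a shared pre-production environment.
interface InventoryService {
    int reserve(String sku, int quantity); // returns remaining stock
}

// A stub: a workaround that returns a canned answer, with no memory
// of prior calls.
class InventoryStub implements InventoryService {
    public int reserve(String sku, int quantity) {
        return 100; // always the same, regardless of call history
    }
}

// A stateful simulation: a sketch of what service virtualization
// provides -- a model whose answers depend on the history of calls.
class SimulatedInventory implements InventoryService {
    private final Map<String, Integer> stock = new HashMap<>();

    SimulatedInventory(String sku, int initial) {
        stock.put(sku, initial);
    }

    public int reserve(String sku, int quantity) {
        int remaining = stock.getOrDefault(sku, 0) - quantity;
        stock.put(sku, remaining);
        return remaining; // reflects everything reserved so far
    }
}
```

Calling the stub twice yields the same number both times; calling the simulation twice yields a declining balance, which is the "stateful" behavior that lets tests exercise realistic multi-step scenarios.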

"Lifecycle Virtualization" is voke's umbrella term for the use of virtualization in pre-production. The full menu of technologies for LV includes service virtualization, virtual and cloud-based labs, and network virtualization solutions. The current list of vendors providing service virtualization solutions includes CA, HP, IBM, Parasoft, and Tricentis.

Lanowitz and her voke, inc. co-founder, Lisa Donzek, take a closer look at recent developments in the service virtualization market in a new report, "Market Snapshot Report: Service Virtualization." The report looks at what 505 enterprise decision makers reported about their experiences with service virtualization between August 2014 and October 2014, and the results they got from their efforts. It also does an excellent job of defining terms and identifying the products and vendors involved.

This is voke's second market report on service virtualization. Their first survey was conducted in 2012 and included 198 participants. "In 2012, the market had just been legitimized," Lanowitz said. "We still had a long way to go."

What jumped out at me in this report was the change in the number of dependent elements to which the dev/test teams of the surveyed organizations required access. In 2012, participants reported needing access to 33 elements for development or testing, but had unrestricted access to only 18. In 2014, they reported needing access to 52 elements and had unrestricted access to only 23. Sixty-seven percent of the 2014 participants reported unrestricted access to ten or fewer dependent elements.

"It's clear to us that if you're not using virtualization in your application life cycle, you're going to have some severe problems, whether it's meeting time-to-market demands, quality issues, or cost overruns," Lanowitz said. "Service virtualization helps you remove the constraints and wait times frequently experienced by development and test teams needing to access components, architectures, databases, mainframes, mobile platforms, and so on. Using service virtualization will lead to fewer defects, reduced software cycles, and ultimately, increased customer satisfaction."

There's a lot more in the report. It's a must read.

A rose by any other name might smell as sweet, but it's not going to jack up your productivity.

Posted by John K. Waters on February 9, 2015


One Solution for Developer Fatigue

When you hear the words "developer fatigue," what images come to mind? Your team leader talking about yet another project with an impossible deadline? Bleary-eyed teammates on all-night coding sessions? Too much java (and Java)? Or maybe you see a more profound enervation brought on by "the constant and increasing flood of new languages, libraries, frameworks, platforms, and programming models that are garnering popular attention in the developer community." Those are JNBridge CTO Wayne Citrin's words, soon to appear in a company blog post.

Citrin and other industry watchers have noted the growing frustration among developers faced with a constant need to learn and adjust to new languages and technologies while remaining productive. Citrin believes he has at least a partial solution to this problem, and I talked with him about it recently.

"It's great that there's so much innovation going on," he said, "but the half-life of many of these innovations seems to be decreasing. It used to be that something would come out, and once it got adopted, you'd expect it to be the big paradigm for at least a few years. That's just not the case anymore. It can be a genuine hassle to keep up with all this stuff, and it takes away from the time people need to be productive. At some point, people just start throwing up their hands."

JNBridge is a provider of interoperability tools for connecting Java and .NET frameworks. The company's flagship product, JNBridgePro, is a general purpose Java/.NET interoperability tool designed to bridge anything Java to .NET, and vice versa. The tool allows developers to access the entire API from either platform.

Citrin's solution, not surprisingly, lies in tools like JNBridgePro.

"It's a little self-serving on our part, I admit, but it's also true that interop tools can really help," he said. "They make it possible for you to introduce yourself to new technologies gradually, focusing on the things in those technologies that can help you solve your immediate problems. If you're a developer who uses our stuff and you're deep into both Java and .NET and you need to make use of, say, Python or Groovy, you can reach out for features from those technologies at your own pace."

By taking it slowly and integrating features from the new tech into their projects as they need them, developers can leverage their existing skills, reduce the risk of making a bad bet, and reduce their stress -- which ultimately reduces developer fatigue, he said.

"As long as the implementation is Java- or .NET-based, we can help developers integrate any part of it into their existing project, when they're ready, and without having to throw all the existing stuff away, and completely learning the new thing, re-implementing it from scratch, and having to find and fix new bugs," he said. "It beats the heck out of the alternative of 'warehousing,' which is essentially jumping in and learning a new technology for its own sake."

Citrin isn't the only one noticing this phenomenon, of course. A lively back-and-forth on the consequences of these seemingly never ending demands on developers was sparked last July when front-end and server-side developer Ed Finkler (aka funkatron) wrote a blog post entitled "The Developer's Dystopian Future." In that post he confessed that his "tolerance for learning curves" was growing smaller every day.

"New technologies, once exciting for the sake of newness, now seem like hassles," he wrote. "I'm less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity." He also expressed what is almost certainly a widespread fear in this increasingly polyglot world of simply being left behind.

Finkler's post struck a nerve in more than a few developers and fellow bloggers. Tim Bray, one of the creators of the XML specification, talked about it in his "Discouraged Developer" blog.

"[T]here is a real cost to this continuous widening of the base of knowledge a developer has to have to remain relevant," he wrote. "One of today's buzzwords is 'full-stack developer.' Which sounds good, but there's a little guy in the back of my mind screaming 'You mean I have to know Gradle internals and ListView failure modes and NSManagedObject quirks and Ember containers and the Actor model and what interface{} means in Go and Docker support variation in Cloud providers?' Color me suspicious."

Programmer/podcaster Marco Arment took up Finkler's commentary in his blog.

"I feel the same way," he wrote, "and it's one of the reasons I've lost almost all interest in being a web developer. The client-side app world is much more stable, favoring deep knowledge of infrequent changes over the constant barrage of new, not necessarily better but at least different technologies, libraries, frameworks, techniques, and methodologies that burden professional web development."

Author Matt Gemmell commented on Arment and Finkler's posts on his "Confessions of an ex-developer" blog. "There's a chill wind blowing, isn't there? I know we don't talk about it much, and that you're crossing your fingers and knocking on wood right now, but you do know what I mean," he wrote.

Redmonk analyst Stephen O'Grady noticed Arment's, Bray's, Finkler's, and Gemmell's posts and wrote about them on his "tecosystems" blog.

"Developers have historically had an insatiable appetite for new technology," he wrote, "but it could be that we're approaching the too-much-of-a-good-thing stage. In which case, the logical outcome will be a gradual slowing of fragmentation followed by gradual consolidation." (Be sure to check out his new O'Reilly book, The New Kingmakers: How Developers Conquered the World.)

If you haven't already, it's worth reading these connected blogs. (I'd start with Finkler's.) And keep an eye out for Citrin's upcoming post on the JNBridge Web site.

Posted by John K. Waters on February 9, 2015


Following 'Whirlwind' Year, Docker Changes Operational Structure

The open source Docker project experienced "unprecedented growth" last year, its maintainers say, with project contributors quadrupling and pull requests reaching 5,000.

To cope with the surge of this "whirlwind year," Docker, Inc., the chief commercial supporter of the project, has modified its organizational structure, spreading out the responsibilities that had been handled by Docker's founder and CTO, Solomon Hykes, into three new leadership roles.

The new leadership roles are Chief Architect, Chief Maintainer, and Chief Operator. The new operational structure also defines the day-to-day work of individual contributors working in each of these areas. All three positions were filled by new or existing employees of the company.

"This is the natural progression of any successful company or open source project," Docker's new Chief Operator, Steve Francia, told ADTmag. "As your popularity grows, you eventually have to spread the load, and that's what this new structure is doing."

Since the release of Docker 1.0 last June, the project has attracted more than 740 contributors, and fostered more than 20,000 projects and 85,000 "Dockerized" applications.

Hykes will take on the role of Chief Architect, which Francia called "the visionary role." It trims Hykes' responsibilities to overseeing architecture, operations, and technical maintenance of the project. He will also be responsible for steering the general direction of the project, defining its design principles, and "preserving the integrity of the overall architecture as the platform grows and matures."

The role of Chief Maintainer has been assigned to Michael Crosby, who began working with the project in 2013 as a community member, joined the Docker team, and has been a core project maintainer. He will be responsible for "all aspects of quality for the project, including code reviews, usability, stability, security, performance, and more." "He was appointed to the position because he was already so good at supporting the other maintainers," Francia said. "It's a role that, in some ways, he's already been playing." Crosby is described in the Docker announcement as "one of its most active, impactful contributors."

As Chief Operator, Francia will be responsible for the day-to-day operations of the project, managing and measuring its overall success, and ensuring that it is governed properly and working "in concert" with the Docker Governance Advisory Board (DGAB). For the past three years Francia had served in a similar capacity as chief developer advocate at MongoDB, where he "created the strongest community in the NoSQL database world," the announcement declares.

"When I joined MongoDB, I'd been around long enough to realize that companies that transform the industry come along maybe once in a decade," Francia said, "and I knew how lucky I was to be a part of that. At Docker I get to be part of another transformation, one that is going to change the way development happens, forever. You always hope that lightning will strike twice, but I sure didn't expect it to happen so soon."

Francia introduced himself to the Docker community in a Q&A session today on IRC chat in #docker. He also posted his first blog post as Chief Operator.

The Docker reorganization itself went through the same process as a proposed feature, and was documented in a pull request (PR #9137). It was commented on, modified, and merged into the project. The changes are intended to make the project more open, accessible, and scalable, and in an incremental way, without unnecessary refactoring.

Docker and containerization seem to be on everybody's mind these days as microservice architectures gain traction in the enterprise. Over the past few years, Netflix, eBay, and Amazon (among others) have changed their application architectures to microservice architectures. ThoughtWorks gurus Martin Fowler and James Lewis defined the microservice architectural style as "an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms." Containers are emerging as a popular means to this end.

"The level of ecosystem support Docker has gained is stunning, and it speaks to the need for this kind of technology in the market and the value it provides," IDC analyst Al Hilwa said in an earlier interview.

Posted by John K. Waters on January 28, 2015


Can Containers Fix Java's Legacy Security Vulnerabilities?

I reported last week on Oracle's latest Critical Patch Update, which included 169 new security vulnerability fixes across the company's product lines, including 19 for Java. The folks at Java security provider Waratek pointed out to me that 16 of those Java fixes addressed new sandbox bypass vulnerabilities that affect both legacy and current versions of the platform. That heads-up prompted a conversation with Waratek CTO and founder John Matthew Holt and Waratek's security strategist Jonathan Gohstand about their container-based approach to one of the most persistent data center security vulnerabilities: outdated Java code.

Holt reminded me that the amount of Java legacy code in the enterprise is about to experience a kind of growth spurt, as Oracle stops posting updates of Java SE 7 to its public download sites in April.

"When you walk into virtually any large enterprise and you ask them which version of Java they're running, the answer almost always is, every version but the current one," Holt said. "That situation is not getting better."

Outdated Java code with well-documented security vulnerabilities persists in most data centers, Gohstand said, where it is often the target of attacks. The reasons that legacy Java persists, in spite of its security risks (and the widespread knowledge that it's there), are up for debate. But Waratek's unconventional approach to solving that problem (and what Holt calls "the continued and persistent insecurity of Java applications at any level of the Java software stack") is a specialized version of a very hot trend.

Containers are not new, of course, but they're part of a trend that appears to have legs (thanks largely, let's face it, to Docker). Containers are lightweight, in that they carry no operating system; apps within a container start up immediately, almost as fast as apps running on an OS; they are fully isolated; they consume fewer physical resources; and there's little of the performance overhead associated with virtualization -- no "virtualization tax."

Waratek's containerization technology, called Java Virtual Containers, is a lightweight, quarantined environment that runs inside the JVM. It was developed in response to a legacy from the primordial Java environment of the 1990s, Holt said.

"It was a trendy idea at the time to have a security class sitting side-by-side with a malicious class inside the same namespace in the JVM," he said. "Sun engineers believed that the security manager would be able to differentiate between the classes that belonged to malicious code and those that belonged to the security enforcement code. But that led to a very complicated programming model that is maintained by state. And states are difficult to maintain. When we looked at the security models that have succeeded historically, we saw right away that they were based on separation of privileges."

Waratek began as a research project based in Dublin in 2010, an effort to "retrofit this kind of privilege and domain separation" into the JVM, Holt said. That research led to the company's Java virtual container technology. "Suddenly you have parts of the JVM that you know are safe, because they are in a different code space," he said.

Holt pointed out that containerization is a technique, not a technology, and he argued that that is a good thing.

"It means that it doesn't matter what containerization technology you use," he said. "People are starting to wake up to the value of putting applications into containers—which are really locked boxes. But the choice of one container doesn't exclude the use of another. You can nest them together. This is really important, because it means that people can assume that containers are going to be part of their roadmap going forward. Then the conversation turns to what added value can I get for this locked box."

Holt and company went on to build a new type of security capability into their containers, called Runtime Application Self-Protection (RASP), producing in the process a product called Locker. Gartner has defined RASP as "a security technology built in or linked to an application or app runtime environment, and capable of controlling app execution and detecting and preventing real-time attacks." In other words, it's tech that makes it possible for apps to protect themselves.
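To make the RASP idea concrete, here is a conceptual sketch in Java. To be clear, this is my illustration, not Waratek's implementation: the class, method, and checks are hypothetical, and a real RASP product works inside the runtime itself rather than in application-level code. The point is simply that the protective layer inspects an operation in flight and blocks it when it looks like an attack, even if the application code is old and unpatched.

```java
// Hypothetical sketch of runtime self-protection: a layer that sits in
// front of query construction and refuses input that would break out of
// a single-quoted SQL literal.
class SelfProtectingQueryRunner {

    // Crude attack heuristic for illustration only.
    static boolean looksMalicious(String userInput) {
        return userInput.contains("'")
            || userInput.contains(";")
            || userInput.contains("--");
    }

    static String run(String userInput) {
        if (looksMalicious(userInput)) {
            // The runtime layer blocks the attack, instead of relying on
            // the (possibly outdated) application to defend itself.
            return "BLOCKED";
        }
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }
}
```

A benign name passes through and builds the query; a classic injection payload such as `x' OR '1'='1` is stopped before it reaches the database.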

"We see this as an opportunity to insert security in a place where security is going to be more operationally viable and scalable," Gohstand said.

Gohstand is set to give a presentation today (Wednesday) on this very topic at the AppSec Conference in Santa Monica.

Posted by John K. Waters on January 28, 2015


2015 Enterprise Dev Predictions, Part 3: Digital Transformation and Lifecycle Virtualization

And finally... okay, this one isn't so much a set of predictions as observations on some trends enterprise developers should be aware of at the dawn of 2015.

Industry analyst and author Jason Bloomberg is president of Intellyx, an analysis and training firm he founded last June. He's probably best known as a longtime ZapThink analyst (and president, before he went out on his own). He has also written several books; I'm a fan of The Agile Architecture Revolution (John Wiley & Sons, 2013).

I recently caught up with Bloomberg shortly before he headed to Las Vegas for the annual CES gizmogasm. He pointed to two trends that he believes will have a profound effect on enterprise developers this year. First, what he called "the digital transformation."

"Customer preferences and behavior are now driving enterprise technology decisions more than they ever have before," he said. "That includes B-to-B and B-to-C. They're driving this combination of digital touch points and ongoing innovation at the user interface, and the enterprise is upping the ante on performance, resilience, and the user experience. But it all has to connect, end-to-end. All the pieces have to fit together."

DevOps, which connects development and operations, is now being extended to the business, to the customer experience, Bloomberg said. (He called it "DevOps on steroids.") This trend also includes things like continuous integration, continuous delivery, and established Agile methodologies that now have to connect to the customer at increasing levels.

These changes could be especially challenging for enterprise developers, Bloomberg said, because the shift is organizational, which is very different from the technology changes they're used to. If companies get this right, he said, server-side developers and user-facing developers will be working together in a new way, focused on delivering technology value to customers.

"Developers are going to be called upon to expand past their boundaries, in terms of how they can contribute and provide value to the companies they work for," he said. "This shakes some traditional developers to their core, but it's also very exciting to a lot of people, especially the twentysomethings, who are becoming the go-to players for digital technology. This is what they live and breathe."

These shifts are already showing up in retail and media (Google, Netflix, Spotify), but Bloomberg expects them to spread quickly to virtually every industry. "I think it's going into overdrive in 2015," he said.

Trend No. 2: the Moore's-Law-like progress of the Internet of Things, which is being driven by exponential improvements in things like batteries, which are shrinking even as they become more powerful, and burgeoning memory capacity.

"People tend to think linearly," Bloomberg said. "They expect things to get twice as good every year. But things are going to explode. The question will quickly become, how do we take advantage of so many different improvements in the technology? What can I do with a battery that is a thousandth of the size of current batteries, with processors that are a thousand times more powerful, with terabytes of memory?"

Developers in the trenches who just need to get their jobs done will have a hard time finding solid ground amid all of these changes, he said.

"All of this stuff is changing so fast, it's hard to know what's real and what's hype," Bloomberg said. "You could argue that it's always been this way, but developers are facing a range of changes that are going to be disruptive and quite challenging in the coming year."

Theresa Lanowitz is another industry watcher who went out on her own. The former Gartner analyst founded Voke, Inc. in 2006 to cover "the edge of innovation driven by technology, innovation, disruption, and emerging market trends." The white papers she publishes are not to be missed.

Among other things, Lanowitz has been tracking the enterprise adoption of the practice of applying virtualization to the pre-production portion of the application lifecycle, which she has dubbed Lifecycle Virtualization. A number of technologies support this practice, including most prominently service virtualization (provided by vendors such as CA, Parasoft, and HP), but also virtual and cloud-based labs (Skytap), and network virtualization (HP with its Shunra acquisition).

"We're starting to see more and more organizations saying, okay, we recognize this need for parity among dev, QA, and operations," she said. "We also understand that we need to support our line of business. How do we do that? We move virtualization to the portion of the application lifecycle where it really helps to control the business outcome."

Lanowitz expects this shift to take off in 2015, she said, because the tools are getting much better. "It makes a huge difference," she said.

Service virtualization in particular is gaining traction in the enterprise, Lanowitz said. She defines it as the process of enabling dev and test teams to statefully simulate and model their dependencies of unavailable or limited services and data that cannot be easily virtualized by conventional server or hardware virtualization means. She stresses stateful simulation in her definition, "because many organizations will say service virtualization is the same as mocking and stubbing. Service virtualization is an architected solution; mocking and stubbing are workarounds."

The bottom line: Service virtualization allows for testing much earlier in the application lifecycle, which ultimately makes it possible to deliver better business value and outcomes. It's that value proposition that's going to cause Lifecycle Virtualization to show up on a growing number of developers' radar in the coming year.

"If you really believe in the potential of a collaborative environment that includes dev, QA, and operations, then a solution like service virtualization is a defining technology," she said. "The team that benefits from it most directly is the test team, of course. They can test more frequently, they understand their meantime to defect discovery, they can increase their test execution, and they can increase their test coverage. But it is the development team that has to implement service virtualization to make that happen."

Lanowitz is at work on a new white paper updating her 2012 Lifecycle Virtualization stats. I'll let you know when it's published.

Posted by John K. Waters on January 16, 2015


Java in 2015: Predictions and More

I've been looking ahead with analysts and industry watchers at what 2015 might have in store for enterprise software developers in general, but I also reached out for some predictions for Java Jocks about the future of their favorite language and platform.

Al Hilwa, program director in IDC's Software Development Research group, sees the continued adoption of Java 8 as a preoccupying enterprise trend in 2015, though "absorbing major new language releases is typically a slow process." He also expects to see growing interest in functional programming as developers begin putting lambdas and the Stream API into "serious applications."
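For readers who haven't yet tried the functional style Hilwa mentions, here is a minimal Java 8 sketch combining a lambda with a Stream pipeline; the class name and data are invented for the example:

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    // Count the names longer than four characters using a lambda and the Stream API
    static long countLong(List<String> names) {
        return names.stream()
                .filter(name -> name.length() > 4) // lambda expression as the predicate
                .count();
    }

    public static void main(String[] args) {
        List<String> languages = Arrays.asList("Java", "Scala", "Kotlin", "Clojure");
        System.out.println(countLong(languages)); // prints 3
    }
}
```

The same filter-and-count written against Java 7 collections would take an explicit loop and a mutable counter; the declarative pipeline is what Hilwa expects to show up in "serious applications."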

2015 could also be the year that the seemingly interminable court case between Oracle and Google (which may get heard by the Supreme Court) gets resolved, he said, which would mean that "programmers can go back to programming with minimal change to their lives."

Another trend: The enormous popularity of Android, which has been a great win for the Java ecosystem, because of the huge number of new developers it has brought to the Java fold, will continue in 2015, he said.

He also admonished the Executive Committee of the Java Community Process to maintain the release momentum the community has achieved since the Oracle takeover of the stewardship of Java, and to fulfill some recent promises.

"What the Java Jedi council has to do now is keep the releases moving on schedule and implement the complex modularity they promised," Hilwa said. "Java is already in a variety of form-factors, but aligning the pieces and the frameworks is going to be crucial if Java is going to have a nice chunk of the IoT pie."

John R. Rymer, Principal Analyst at Forrester Research, who covers, among other things, Java application servers, expects Oracle to put some real effort into making Java "friendlier to the cloud."

"The runtime in particular needs to be brought into the cloud era," he said. "Things need to be way more modular and lightweight than they are today, where that's appropriate, and I think we're going to see Oracle buckling down for the hard work of doing that in 2015."

Modularization will also get a lot of attention in 2015, Rymer said, both from Oracle and the wider Java community. He and his colleagues are keeping an eye on the Java-native module system known as Project Jigsaw, which Oracle's Java Platform Group has promised to include in Java 9.
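As a rough illustration of what Jigsaw-style modularity looks like, here is a hypothetical module descriptor using the syntax proposed for Java 9; the module and package names are invented for the example:

```java
// module-info.java -- hypothetical descriptor; names are invented
module com.example.orders {
    requires java.sql;              // declare a dependency on a platform module
    exports com.example.orders.api; // only this package is visible to other modules
}
```

The promise Rymer alludes to is that explicit dependencies and exports like these let the runtime ship as composable pieces rather than one monolithic JRE.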

"As I see it, Oracle has put down a solid foundation that now allows them to pursue this work," he said.

Rymer also talked about the "huge gravitational pull" of JavaScript just getting stronger in 2015. His advice for Java developers: add JavaScript to your skill set.

"Server-side JavaScript is going to present a lot of opportunity for Java developers this year," he said. "And for .NET/C# people. There's sort of a lot of bleeding away in this direction. There's a lot of JavaScript on the client, of course, but most of the Java people I talk to are working on the server side. I think there's going to be a lot of demand for people who can bring knowledge of server-side architectures to bear using JavaScript. It's going to be a good skill to have."

I also caught up with Mike Milinkovich, executive director of the Eclipse Foundation. These days, Eclipse is about much more than Java, of course; it's a multi-language development environment. But that environment is written in Java (mostly), and Milinkovich keeps his eye on that technology and community. 

"All the numbers I have seen point to Java 8 being one of the most rapidly adopted new Java releases ever," he told me in an email. "So I view Java 8 as largely last year's news. What I am expecting in 2015 primarily is the work that will be done marching towards Java 9. In particular, I think that there will be a lot of effort and discussion around modularity, and how that impacts the Java platform going forward."

A second significant area of growth for Java in 2015, Milinkovich said, will be the Internet of Things. That's not surprising: the Eclipse Foundation unveiled an IoT Stack for Java at JavaOne last year, and Ian Skerrett, the Foundation's VP of marketing, has been leading an Eclipse IoT initiative to build an open-source community around IoT.

Milinkovich pointed to several Foundation projects focused on IoT and based on Java, including Kura, a complete and mature device gateway services framework; Smarthome, a residential home automation system that supports the integration of devices from many manufacturers, using many protocols; and Concierge, a lightweight, embeddable OSGi framework implementation. He also pointed out that Oracle is investing heavily in Java ME with an eye toward making it a serious player in the IoT space.

Milinkovich shared Rymer's opinion that Java would continue to move into the cloud "in a very big way" in 2015. "Cloud Foundry is the leading PaaS platform, and it is based on Java," he said. "IBM is using that platform for BlueMix [an implementation of IBM's Open Cloud Architecture], and I think we are going to be hearing a lot about cloud and BlueMix from IBM in 2015. Oracle has now caught the cloud religion, and I will be expecting to hear a steady drum beat of Java in the cloud from them as well."

Posted by John K. Waters on January 14, 2015


2015 Enterprise Dev Predictions, Part 2: Convergence, Security, Automation and Analytics

The coming year looks to be a lively one for hard-working enterprise developers, most of whom will find themselves facing new and mutating challenges spawned by rampant mobility, Big (and Fast and Mean) Data, the oozing Internet of Everything and the Cloud. But those who pay attention to a few key trends and heed the advice of some smart industry watchers will survive and thrive in 2015.

Industry analyst Dana Gardner, for example, expects 2015 to be the year that Platform-as-a-Service (PaaS) goes from tactical to strategic. In other words, the decisions about how to use PaaS in an organization and which version to standardize on will become more than simply a decision about developer tools and productivity. It will involve more strategic levels, he said, and concerns around concepts such as "cloud-first" and "mobile-first," as well as DevOps.

"We've seen cloud-first, mobile-first, and DevOps mentalities, but they've always been separate," he said. "The requirements and decision making around them have been distinct and tactical. I think this is the year that changes. This is the year that all three become part of the same, strategic decision process."

Gardner, who is principal analyst at Interarbor Solutions, said he believes that these once mostly separate spheres cannot remain in their own orbits. "They will have to be considered together," he said. "And this will be very hard to do, because we're talking about a lot of moving parts that impact each other, and a process that crosses organizational boundaries and threatens entrenched cultures."

What this means for enterprise developers, Gardner said, is that they must contribute to the decision making process at the architectural level to ensure that developer requirements don't get short shrift. "They will have to advocate for themselves in a wider environment of decision making," he said, "so that concerns about things like security and deployment flexibility in the hybrid cloud don't obviate the needs and concerns of developers. They need to learn to explain their past decisions and current needs in such a way that they are respected in the larger picture."

Which is not to say that you should go charging into the CIO's office with a list of demands. Gardner says that development organizations would be well advised to anoint advocates to speak for them.

"They need to create a point person to be a liaison with the higher-level decision process," he said. "This should be someone who can speak for them, but at that level -- someone who can provide evidence and metrics, use cases and scenarios, in a way that an architect or a bean counter will understand. In other words, development organizations need to get a little more political and create channels of communication that go up -- and down -- the org chart."

And the process must include security considerations.

"We all saw what happened at Sony Entertainment, and how devastating a cyber attack can be," Gardner said. "Not all enterprises have the wherewithal to implement a high level of security. So now part of your decision making around your most important applications and data has to involve questions like, are we more secure on our own systems and networks, or are we better off partnering with a cloud provider that has security as one of their most important skill sets?"

Application security is a subject near and dear to Gary McGraw's heart. McGraw, who is CTO of Cigital and the author of several now classic books on application security, sees two security trends that should be on every enterprise developer's radar in 2015: the growing importance of application design to app security, and the security challenge posed by the increasing popularity of dynamic languages.

"Even if we got rid of all of the bug problems and all the coding errors, we would still only be solving half the app security problems, which lie in design flaws," he said. "We've learned that we need to emphasize design as much as coding, and we've been getting past the bug parade. I think we'll move even further in that direction in 2015."

Last year, McGraw led a group of security mavens and the IEEE Computer Society to form the Center for Secure Design (CSD), which is seeking to address this "Achilles' heel of security engineering."

The increasing use of dynamic languages, such as JavaScript, is also creating an app security problem that is likely to get worse in 2015, McGraw said. "There are lots of people coding in JavaScript these days, in one way or another," he said. "JavaScript and other dynamic languages present a real challenge for software security, because you don't really have the code to check until it's assembled, which doesn't happen until the very last minute. The static analysis systems that we tend to rely on are not well suited for that."

This will continue to grow in 2015, he said, and the software security industry must "deal with it head on, without retreating into old, broken ideas that still won't work."

Several of the industry watchers I tapped this year (or pestered, depending on their view of persistent reporters) mentioned micro-service architectures, so I was glad to hear back from Scott Johnston, senior vice president of product at Docker.

"The distributed app or microservice-based architecture does require careful thinking-through of the APIs, or the contracts between the microservices, early in the design process," Johnston said via e-mail. "To really get the full benefit ('this one goes to eleven') of a Dockerized distributed app, enterprise development teams will want to invest, if they haven't already, in end-to-end automation for their dev-build-test pipeline. Such continuous integration and delivery combined with Docker can compress development-to-deploy cycles from months to minutes. GitHub, Atlassian Bamboo, Jenkins, TravisCI, and IBM JazzHub are good examples of tools that can really help."

He also mentioned the so-called developer self-service: "To give enterprise developers less of a reason to fire-up Amazon EC2 instances on their own credit cards -- the 'shadow IT' dreaded by CIOs and CFOs alike -- DevOps and release engineering teams are rolling-out self-service portals that allow developers to provision IT-supported environments on-demand," he said. "This level of automation pulls together a number of technologies, including integration of build pipeline tools like GitHub, Atlassian Bamboo, and Jenkins; VMs like VMware and OpenStack; and configuration management tools like Puppet and Chef."

He also put in a plug for Dockerized apps, which allow DevOps teams to set-up "a self-service nirvana for their enterprise development teams." "On the front-end, the developer can self-service provision any environment for any stack in any language, and on the back-end ops can route the deployment of that app to any infrastructure -- VMware machines in the data center, public Amazon EC2 instances, a private OpenStack cloud, you name it. Now layer on top of that automated deployment routing decisions based on real-time spot prices, capacity management, and compliance policies. The result: a complete and awesome transformation of the enterprise app delivery pipeline."

I also heard from the ever succinct Forrester analyst Mike Gualtieri, who pointed to two trends that "will see a lot more beef behind their buzz" in the coming year: IoT and the Apache Spark engine for large-scale data processing.

"The Internet Of Things must become the Internet Of Analytics," Gualtieri said. "The IoT is nothing but a fifty-billion-dazzling-star field with no business value unless firms build the requisite in-motion and at-rest analytical capabilities that are essential to building IoT or IoT-informed applications. Apache Spark makes Hadoop stronger. Hadoop was designed for volume. Apache Spark was designed for speed. If you believe that opposites attract then Hadoop and Spark belong together. They are both cluster computing platforms that share nodes."

(I tried to keep our annual enterprise developer prediction series to two parts this year, but there's too much good stuff -- I'll share more thoughts from industry watchers on the coming year next week in Part 3.)

Posted by John K. Waters on January 10, 2015