Java Security: It's a Multilayer Problem

Things have quieted down quite a bit on the Java security front during the last year or so. Rare these days are the heart-stopping revelations of zero-day vulnerabilities; and fewer are the grumbling editorials about the lack of end-user update hygiene. (Although, as far as I'm concerned, that issue is still quite grumble-worthy.) Oracle's click-to-play feature was at least partly responsible for a 2014 in which there were no major zero-day Java vulnerabilities discovered and exploited in the wild.

Which is great, but not the end of the Java security story. As long as Java's enormous popularity in the enterprise continues, it's going to be an alluring target, Java security expert John Matthew Holt reminded me recently.

Holt is the CTO of Waratek, a company specializing in Java security, so you could argue that he has a vested interest in Java insecurity. But he's right to point out that the Java stack has more than one layer. Even if you manage to keep up with Oracle's patch schedule for the Java platform layer, you still have to deal with the app server layer, the libraries and the business logic. And update schedules vary. For example: Oracle releases Java security fixes on the Tuesday closest to the 17th day of January, April, July and October; Apache releases Struts patches every 72 days.

"I give great credit to Oracle for addressing the vulnerabilities in the Java Platform layer," Holt said. "That's kind of a never-ending battle. Even if an organization manages to keep up with the Java security fixes, the vulnerabilities shift to somewhere else in the software stack."

For example: By my count, there have been 10 Struts vulnerabilities reported over the past two years with a CVSS rating of 9 or 10, the range that marks them as critical.

Holt is an enthusiastic proponent of Runtime Application Self Protection, or RASP, which Gartner has defined as "a security technology built in or linked to an application or app runtime environment, and capable of controlling app execution and detecting and preventing real-time attacks." Holt's company makes a containerized RASP product, called Locker, which provides security monitoring, policy enforcement, and attack blocking from within the Java Virtual Machine (JVM).

"RASP is something very different," he said. "We've never had a tool that lives inside the runtime and has the benefit of real, accurate, actionable intelligence about what the application is doing."

Holt's Dublin-based company also recently unveiled a new security technology worth mentioning: the Taint Detection Engine, which is designed to detect and block SQL Injection attacks without generating false positives or relying on heuristics. The Taint Detection Engine (pipe down, you snickering fifth graders!) is part of the company's AppSecurity for Java product.

As I'm sure you know, a SQL Injection attack involves inserting malicious SQL statements into an entry field for execution. A successful attack can, among other things, read and modify sensitive data and execute administration operations on the database. Depending on which analyst you pester until he or she emails you back just to shut you up, SQL Injection is responsible for upwards of 80 percent of the records stolen in hacking incidents. It's often at the top of the most-wanted lists at OWASP and the SANS Institute. (OWASP has published a "Cheat Sheet" on SQL Injection that's worth reading.)
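
If it's been a while since you've looked at one up close, here's a minimal JDBC sketch of the problem and the standard fix; the table, class, and variable names are made up for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InjectionDemo {

        // VULNERABLE: user input is concatenated straight into the SQL text.
        // An input of  ' OR '1'='1  rewrites the query and returns every row.
        static ResultSet findUserUnsafe(Connection conn, String userInput) throws Exception {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                "SELECT * FROM users WHERE username = '" + userInput + "'");
        }

        // SAFER: a PreparedStatement treats the input strictly as a value,
        // so it can never change the structure of the query.
        static ResultSet findUserSafe(Connection conn, String userInput) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE username = ?");
            ps.setString(1, userInput);
            return ps.executeQuery();
        }
    }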

"It's insidious," Holt said. "Developers can download these kinds of libraries easily, and incorporate them into their applications. Their managers are happy because they delivered the product on time, but they've got all this code that the organization didn't write, didn't put up to a static analysis tool, didn't get results from, and hasn't been reviewed."

The AppSecurity for Java product performs transparent taint detection and validation of each character in a SQL query in real time within the JVM. It's a cool product and worth investigating. Waratek went to SaaS and software security consultancy BCC Risk Advisory to have the above claims independently verified. Here's a link.
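
To make the character-level idea concrete, here's a rough conceptual sketch of taint tracking. To be clear, this is my own toy illustration of the general technique, not Waratek's engine: remember which characters of a query came from untrusted input, then refuse to run the query if any tainted character could change its structure.

    // Toy taint tracker for illustration only; the class and method names are invented.
    public final class TaintedQuery {
        private final StringBuilder sql = new StringBuilder();
        private final java.util.List<Boolean> taint = new java.util.ArrayList<>();

        public TaintedQuery appendTrusted(String s)   { return append(s, false); }
        public TaintedQuery appendUntrusted(String s) { return append(s, true);  }

        private TaintedQuery append(String s, boolean tainted) {
            sql.append(s);
            for (int i = 0; i < s.length(); i++) taint.add(tainted);
            return this;
        }

        // Tainted characters are acceptable only inside a quoted literal, and a
        // tainted quote character is never acceptable (the classic injection).
        public boolean isStructurallySafe() {
            boolean inLiteral = false;
            for (int i = 0; i < sql.length(); i++) {
                boolean tainted = taint.get(i);
                if (sql.charAt(i) == '\'') {
                    if (tainted) return false;
                    inLiteral = !inLiteral;
                } else if (tainted && !inLiteral) {
                    return false;
                }
            }
            return true;
        }
    }

Build the query by appending the trusted SQL text and the user's input separately: a benign value passes, while a payload like ' OR '1'='1 fails because the attacker's quote characters are tainted.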

Posted by John K. Waters on April 8, 2015


JFrog Adds Docker Support for its DaaS Platform

JFrog has joined the ever-expanding Docker ecosystem with new support for the container technology in its Bintray distribution-as-a-service (DaaS) platform. Developers use the popular platform to publish, download, store, promote, and share open source software packages.

I think it's fair to call Bintray "popular," because it won a Duke's Choice Award at JavaOne, and it's currently serving 125,690 packages in 39,981 repositories. Then there's the sexy customer list, which includes Apple, Netflix, Twitter, and Oracle.

The France-, US-, and Israel-based JFrog bills Bintray as a self-service platform that gives developers full control over their published software and how it's distributed. Fred Simon, JFrog's cofounder and chief architect, described Bintray as a "seasoned cloud platform," when I Skyped with him earlier this month. "Thousands of developers and DevOps teams use Bintray," he said.

The added Docker support in the new version makes it possible for organizations to create an unlimited number of private Docker repositories, Simon explained. The platform uses the Akamai content delivery network to decrease the download time of large Docker repositories, which speeds up DevOps efforts, he said.

Bintray works hand-in-glove with the company's flagship product, the cloud-based Artifactory binary repository manager (another Duke's Choice winner). Artifactory was one of the first binary repository management solutions. It integrates with the open-source Jenkins continuous integration (CI) server, Atlassian's Bamboo CI, JetBrains' TeamCity build and CI server, the Gradle and Apache Maven project automation tools, and the NuGet package manager for .NET, among others.

JFrog announced support for private Docker Registries in Artifactory last November. The Bintray support was an inevitable next step, Simon told me. "Artifactory is there to aggregate and manage the containers that you are creating, managing, or using; Bintray is really the place to publish and distribute those containers," he said. "You now have an end-to-end solution for many binary or package types."

The company's CEO, Shlomi Ben Haim, called support for Docker "a natural progression of JFrog's mission to provide agnostic, enterprise-grade support for every stage and aspect of code development and deployment."

JFrog launched a new commercial version of Bintray last year. Bintray Premium supports "premium repositories," with unlimited storage and downloads, full download stats, access control, and download tracking, among other features.

JFrog is just the latest toolmaker to join in the warp-speed expansion of the Docker ecosystem. Containerization and microservice architectures are gaining serious traction in the enterprise, because container-based infrastructures continue to make life easier for the developers who adopt them. As the ever-insightful IDC analyst Al Hilwa puts it: "The level of ecosystem support Docker has gained is stunning, and it speaks to the need for this kind of technology in the market and the value it provides."

Posted by John K. Waters on March 25, 2015


EclipseCon 2015 Wrap-Up

The San Francisco EclipseCon saw some interesting product/project announcements. From the Foundation itself came the milestone releases of two key IoT projects: Paho 1.1 and Mosquitto 1.4. They were actually released ahead of the conference, and I reported on them here. I wanted to highlight some other announcements to come out of the conference.

The Xtext project released version 2.8 of its open source framework for developing programming languages and domain-specific languages (DSLs) at the show. Xtext combines a generic DSL infrastructure with an editor and a code generator written in Xtend, a Java dialect that compiles to Java 5-compatible source code, which means it can use existing Java libraries. Xtend is now a stand-alone Eclipse project.

The latest release of Xtext, which will be part of the Mars release train in June, comes with 180 bug fixes and big performance improvements, and a bunch of cool new features. It's a long list that includes new support for whitespace-aware languages, such as Python; grammar editor enhancements; new options for language code generation, including the ability to specify annotations to be added to each generated Java class; support for a new version of the Xbase compiler that allows developers to configure the Java version of the generated code; a new Java-to-Xtend converter; and a new formatter API.

The complete list of changes in Xtext 2.8 is available in the release notes.

Java toolmaker ZeroTurnaround released its Optimizer for Eclipse at the show. The free Eclipse plugin is designed to detect and fix common performance hiccups and configuration problems associated with the Eclipse IDE. The company is addressing what it sees as a common problem for Java developers, most of whom use the Eclipse dev tool.

"What Java developer hasn't, at some point in time, thought 'Wow, my Eclipse is really slow today?' " asked Jevgeni Kabanov, founder and CEO of ZeroTurnaround, in a statement. "We wanted to make coding in Eclipse more enjoyable by taking away the developer frustrations of a slow environment. We like to think of Optimizer for Eclipse as a jetpack for your Eclipse environment."

The plugin performs checks on configuration issues that negatively affect "the IDE user experience" -- everything from insufficient memory allocation to class verification overhead, excessive indexes and history to lengthy build and redeploy times. Users can set the plugin to fix these types of problems automatically to speed up the performance of the IDE. It can also suss out a slow JDK and let users know if their IDE is out of date.

Codetrails announced the alpha release of its very cool Codecity for Eclipse at the show. This is an Eclipse plugin that calculates source code metrics and then provides a visualization of those metrics in the form of a navigable 3D map of a city block. It's a striking representation of data that emerged from the Codecity Project, which was developed at the Università della Svizzera italiana until 2010. These images communicate a ton of information instantly -- which, of course, is the purpose of these kinds of visualizations.

It works from within the IDE, providing users with a "Show in >> Codecity" option in the context menu. The metrics are computed in the background and then displayed in a browser window. The list of metrics supported by the plugin includes: number of declared methods, number of declared fields, number of problem markers, and number of commits. This last metric requires projects to be connected with an Eclipse team provider, the company says.

Codecity is a work in progress, but well worth checking out. It's available from the Eclipse Marketplace.

Posted by John K. Waters on March 16, 2015


Java 9 Deep Dive at EclipseCon 2015

The Java community is still rolling around in the awesomeness of the long-awaited Java 8 release, with its support for lambda expressions, virtual extension methods and streams, compact profiles, the new date/time API and so much more (but mostly that stuff). It was the largest-ever upgrade to the programming model, and by some accounts, it has been the most rapidly adopted update in the history of the platform.
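
For anyone who hasn't taken those features for a spin yet, here's a tiny, self-contained taste (the class name and sample data are mine):

    import java.time.LocalDate;
    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class Java8Taste {
        public static void main(String[] args) {
            List<String> features = Arrays.asList("lambdas", "streams", "date/time API", "compact profiles");

            // A lambda passed into a Streams pipeline: filter and transform a collection declaratively.
            List<String> favorites = features.stream()
                    .filter(f -> f.startsWith("l") || f.startsWith("s"))
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());

            // The new java.time API, a long-overdue replacement for Date and Calendar.
            LocalDate ga = LocalDate.of(2014, 3, 18); // Java 8's general-availability date
            System.out.println(favorites + " arrived on " + ga);
        }
    }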

But, you ask, what about Java 9?

Mark Reinhold, chief architect of Oracle's Java Platform Group, offered attendees at EclipseCon 2015, which wrapped on Thursday, a deep dive into the Even Cooler Java update, coming sometime next year.

The big change in Java 9, as everyone knows, is modularity, as laid out in Project Jigsaw, the oft-deferred capability that aims to make it possible for Java developers to create apps that don't need to lug around the entire environment. A Jigsaw module is a collection of Java classes, native libraries and other resources, along with metadata.
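
That metadata lives in a module declaration. As a rough sketch of the shape a Jigsaw module takes (the declaration syntax was still being worked out under JSR 376 at the time of the talk, and the module and package names below are invented), it looks something like this:

    // module-info.java: a hypothetical module declaration, with illustrative names
    module com.example.orders {
        requires java.sql;               // depend on the platform's SQL module
        exports com.example.orders.api;  // only this package is visible to other modules
    }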

"From the beginning, the Java SE platform has been this huge monolithic thing," Reinhold said. "Even if you wanted to use just a small part of it, you had to install all of it." It has been difficult to run Java SE on small devices, he observed, but it has also been a pain on large devices and in some cloud environments. "What we want is a box of Lego parts [which are] modular that we can assemble as needed," he said.

The introduction of compact profiles in Java 8 was a "baby step" toward relieving some of that pain, Reinhold said, but Java 9 "will be composed of a set of finer-grained modules and will include tools to enable developers to identify and isolate only those modules needed for their application," he said.

Project Jigsaw comprises three JEPs and a JSR. JEPs (JDK Enhancement Proposals) allow Oracle to develop small, targeted features for the Java language and virtual machine outside the Java Community Process (JCP), which requires full Java Specification Requests (JSRs).

  • JEP 200: The Modular JDK defines a modular structure for the JDK. Reinhold described it as an "umbrella for all the rest of them."
  • JEP 201: Modular Source Code reorganizes the JDK source code into modules.
  • JEP 220: Modular Run-Time Images restructures the JDK and JRE run-time images to accommodate modules.
  • JSR 376: Java Platform Module System, the central component of Project Jigsaw, which defines the module system for the Java Platform.

Among other things, the modularization of Java will lead to the removal of the rt.jar file (the runtime JAR), which Reinhold referred to as "a constant source of pain," in favor of compact profiles for a major reduction of the JVM footprint. (JAR files, he added, would be with us "until the heat death of the universe.") While the full JRE clocks in at 55Mb on 32-bit Linux ARM, Reinhold noted, compact1, the smallest profile, clocks in at 11Mb; compact2 is 17Mb, and compact3 is 30Mb. The modularization project will also lead to the elimination of the extension classpath and some reorganization of the lib path, he said.

Modularity will also improve security, he said. After "a bit of a rough period," Java security is now much better, but Java 9 will make it even better by making it possible to enforce strong modular boundaries -- defining what's internal to the module and what's external. Java 9 will also introduce a tool called jlink, the Java linker, which will make it possible to link a set of modules into a single runtime image.

"Vanilla Java is a dynamically linked environment," Reinhold said. "But when you are assembling modules into one pile of bits that is a custom JRE, you need a linker."

Of course, there's already a modular architecture for Java. The OSGi Alliance currently provides a set of specifications that define a dynamic component system for the platform. Reinhold responded to the inevitable question about whether Java 9's new module system will be compatible with OSGi's modular architecture.

"We intend to explore ways of making standard Java modules available to other module systems," he said, but added, "We don't see how to achieve all of the goals we have for the module system if one of those goals is also to be completely compatible with OSGi."

Reinhold also looked into his crystal ball and speculated a bit about developments beyond Java 9. He touched on current efforts to improve Java's type system and make the language more efficient at handling situations requiring identity-less types via Project Valhalla, announced in August. He also pointed to Project Panama, which aims to improve the connections between the JVM and "foreign" non-Java APIs.

Posted by John K. Waters on March 13, 2015


2 Open Source Eclipse IoT Projects Released Ahead of EclipseCon 2015

The San Francisco edition of the Eclipse Foundation's user conference, EclipseCon 2015, gets under way next week (March 9-12). I'm looking forward to catching some sessions and keynotes on a range of topics, but I'm particularly intrigued by the foundation's activities around the Internet of Things (IoT). The Eclipse IoT momentum just keeps building. In fact, two open-source projects that are part of that effort, Eclipse Paho and Eclipse Mosquitto, announced new releases this week.

Both projects -- Paho 1.1 and Mosquitto 1.4 -- implement the client and broker for the OASIS Message Queuing Telemetry Transport (MQTT) protocol. The MQTT protocol is designed to connect physical world devices and networks with applications and middleware. It has been widely adopted by IoT solution providers, largely because of its small footprint, minimal bandwidth requirement for messages, and its ability to adapt to unreliable network connections -- all essential qualities for an IoT protocol.

Ian Skerrett, the Foundation's vice president of marketing, who has been leading the Eclipse effort to foster an open-source community around IoT, told me that providing open-source implementations of MQTT has been something of a project focus. Interest in these two projects in particular has been high in the community, Skerrett said in an email, and the Foundation considers their release to mark "a pretty big milestone."

The Eclipse IoT Project aims to establish an open platform for IoT and machine-to-machine (M2M) communication that combines a set of services and frameworks, open-source implementations of standard protocols, and an Eclipse-based IDE for IoT/M2M development.

The Paho Project provides scalable open-source client implementations of open and standard messaging protocols for IoT/M2M apps. New in this release: support for .NET, WinRT, and Android clients; C and C++ libraries for embedded clients; updated versions of the Java, Python, and JavaScript clients to conform to the MQTT 3.1.1 standard. The new version is available for download now.
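
To give a sense of how small the client-side footprint is, here's a minimal sketch of publishing a sensor reading with the Paho Java client; the broker URL, client ID, and topic are placeholders:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class TemperaturePublisher {
        public static void main(String[] args) throws MqttException {
            // A local Mosquitto broker listens on port 1883 by default.
            MqttClient client = new MqttClient("tcp://localhost:1883", "sensor-42");
            client.connect();

            MqttMessage message = new MqttMessage("21.5".getBytes());
            message.setQos(1); // at-least-once delivery

            client.publish("building/floor2/temperature", message);
            client.disconnect();
        }
    }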

The Mosquitto project provides an open-source implementation of an MQTT broker. New in this release: easier integration with web sites through support for WebSockets; more flexible support for TLS v1.2, 1.1 and 1.0 for enhanced security, plus support for ECDHE-ECDSA family ciphers; improved interoperability between MQTT brokers via better bridge support, including wildcard TLS certificates and conformance to MQTT 3.1.1. The new version is also available now for download.

"In the last year we have seen tremendous interest in the Eclipse IoT community, and in particular Paho and Mosquitto," said the Foundation's executive director Mike Milinkovich, in a statement. "Forty developers contributed to the new Paho and Mosquitto releases, demonstrating incredible interest for these projects and MQTT in general."

The Eclipse IoT project has evolved fairly quickly into a full-fledged community that is currently 15 projects strong. In addition to the MQTT protocol, those projects implement Lightweight M2M and CoAP, as well as several IoT-friendly frameworks.

A complete list of Eclipse IoT projects is available on the Foundation Web site here.

Posted by John K. Waters on March 6, 2015


Report: Oracle's Click-to-Play Feature Greatly Improves Java Security

During last October's JavaOne conference, I attended the post-keynotes Java panel, where leaders of the various Java organizations within Oracle, along with JCP chairman Patrick Curran, lined up at one end of the press room to answer reporters' questions. It's a traditional part of the event, this panel, and I've been to more than a few of them, so you'd think I would have noticed immediately the dearth of questions about the security of Java, which had kicked off the Q&A for the last few years. But it was Henrik Stahl, vice president of product management in Oracle's Platform Group, who observed at the end of the discussion that there had been no security questions at all.

I mentioned this later to Mike Milinkovich, the executive director of the Eclipse Foundation, who was on hand that day to lead a session. He was not surprised. "That's what happens when you have a squeaky clean year," he said.

I'm not sure I'd call 2014 "squeaky clean," but Java-based breaches -- not to mention headlines -- were down last year. In fact, there were no major zero-day Java vulnerabilities discovered and exploited in the wild. Why? A new report released this week by HP Security Research offers at least part of the answer. The authors of the "HP Cyber Risk Report 2015" (PDF) credited Oracle's click-to-play feature, introduced in 2014, for the improved security.

"Oracle introduced click-to-play as a security measure making the execution of unsigned Java more difficult," the report's authors wrote. "As a result we did not encounter any serious Java zero days in the malware space. Many Java vulnerabilities were logical or permission-based issues with a nearly 100 percent success rate. In 2014, even without Java vulnerabilities, we still saw high success rate exploits in other areas."

Click-to-play is the browser feature that blocks Java content by default. The Web page displays a blank space until the user clicks the box to enable that content. This seems to have mitigated the vulnerability of Java in the browser, which was largely the result of the way Oracle bundles the Java browser extension with the Java Runtime Environment (JRE).

Among the exploits listed in the report's Top 10, none targeted Java, which had been one of the most commonly exploited targets in the previous few years. "This may indicate that the security push, which caused delay in the release of Java 8, is getting some results," the researchers wrote, "although it may be too early to tell. It may also be a consequence of browser vendors blocking outdated Java plugins by default, making the platform a less attractive target for attackers."

The success of the click-to-play feature at thwarting Java attacks was "the one exception" in an "inherently vulnerable" environment in which systems are built on decades-old code, and patches are inadequately deployed, the researchers concluded. And that success may be responsible for shifting attacker focus to vulnerabilities in Microsoft's Internet Explorer and Adobe Flash.

"Attackers continue to leverage well-known techniques to successfully compromise systems and networks," the researchers wrote. "Many client and server app vulnerabilities exploited in 2014 took advantage of codes written many years back -- some are even decades old."

The most common exploit the researchers saw last year was CVE-2010-2568 (CVE: "Common Vulnerabilities and Exposures"), which accounted for just over a third of all discovered exploits. According to the CVE site, this vulnerability affects the Windows Shell in XP SP3, Server 2003 SP2, Vista SP1 and SP2, Server 2008 SP2 and R2, and Windows 7. It allows local users or remote attackers to execute arbitrary code via a crafted .LNK or a .PIF shortcut file, which is not properly handled during icon display in Windows Explorer. Six Java exploits were listed, accounting for a total of 28 percent.

There's much more in this report -- things like a deep-dive into highly successful vulnerabilities, an awesome glossary, and a lot of revealing statistics. The report is free for download. I also recommend the HP Security Research Blog.

Posted by John K. Waters on February 24, 2015


Bosch ProSyst Acquisition Good News for Java and OSGi

German Internet of Things (IoT) platform provider Bosch Software Innovations (BSI) is acquiring ProSyst, a Java- and OSGi-based software vendor specializing in middleware for the IoT, the two companies announced this week. BSI, a subsidiary of the Bosch Group, specializes in the development of gateway software and middleware for IoT.

ProSyst is a provider of middleware for managing connected devices and implementing Machine-to-Machine (M2M) cloud-based applications. The company's roots are in Java and the Open Services Gateway initiative (OSGi) specification, and it has focused mainly on open, modular, and neutral software platforms that service providers and device manufacturers can use to deploy apps and services.

ProSyst products serve as a link between devices and the cloud, and that link is essential for interconnecting buildings, vehicles and machines, said BSI president Rainer Kallenbach, in a statement.

"[T]he ProSyst software will enable our customers to launch new applications on the Internet of Things more quickly and be one of the first to tap into new areas of business," Kallenbach said. "The ProSyst software is highly compatible with the Bosch IoT Suite, our platform for the Internet of Things. Above all, it complements our device management component by supporting a large number of different device protocols. This will allow us to achieve an even better market position than before."

BSI will be acquiring, among other assets, the ProSyst device runtime stacks, tools, SDKs and remote device management/provisioning platforms. Bosch also takes on the company's approximately 110 Java/OSGi engineers.

"ProSyst has been the leading provider of OSGi implementations for embedded systems for many years," Mike Milinkovich, executive director of the Eclipse Foundation, told ADTmag in an e-mail. "A quick look at their customer reference page shows a pretty amazing list of accounts, including Bosch. And those are just the ones that [the company is] allowed to talk about. There are other, very significant players who embed the ProSyst OSGi technology, but prefer anonymity."

The ProSyst customer list includes, among others, Intel, Cisco, AT&T and Deutsche Telekom.

Milinkovich believes that the acquisition signals the intention of Bosch to become a significant player in the IoT, with a particular focus on industrial applications.

"To me, [the Bosch] acquisition of ProSyst means that Java and OSGi will be an important part of [the company's] strategy," Milinkovich said. "That is great news for both Java and OSGi. In particular, I see this as significantly increasing the likelihood that Java and OSGi will be fundamental technologies in the Industrial Internet."

Posted by John K. Waters on February 19, 2015


Understanding Service (not Server) Virtualization

"What's in a name?" Shakespeare's Juliet asked. Quite a lot, actually. Take it from me: the other John Waters. Another example: service virtualization. The name is so close to the most well-known and widely implemented type of virtualization -- server virtualization -- that it's gumming up the conversation about using virtualization in the pre-production portion of the software development lifecycle.

Industry analyst Theresa Lanowitz has been doing her part for a while now to clarify terms. It matters, she says, because service virtualization could have as big an impact on application development as server virtualization had on the datacenter.

"When many people hear the word 'virtualization,' the first thing that pops into their heads is server virtualization, and of course, VMware," Lanowitz told me. "Which is understandable. Server virtualization is incredible technology. It allows enterprises to make better use of their hardware and to decrease their overall energy costs. It allows them to do a lot more with underutilized resources. Service virtualization is almost the antithesis, in that it allows you to do more with resources that are in high demand."

To be clear, as Lanowitz defines it, service virtualization "allows development and test teams to statefully simulate and model the dependencies of unavailable or limited services and data that cannot be easily virtualized by conventional server or hardware virtualization means." Lanowitz stresses "stateful simulation" in her definition, she said, because some people argue that service virtualization is the same as mocking and stubbing. But service virtualization is an architected solution, while mocking and stubbing are workarounds.
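
To see why she draws that line, here's what the workaround side looks like: a quick Mockito stub of a scarce dependency (the PaymentGateway interface is invented for illustration). The stub returns a canned answer for one hard-coded call, but it carries no state and models none of the real service's behavior, which is the gap stateful service virtualization is meant to fill.

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    public class CheckoutTest {

        // Hypothetical dependency that is unavailable or in high demand in the test environment.
        interface PaymentGateway {
            boolean authorize(String account, double amount);
        }

        public void checkoutAgainstAStub() {
            // One canned answer; no declines, no timeouts, no call sequences, no state.
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.authorize("ACCT-123", 99.95)).thenReturn(true);

            assert gateway.authorize("ACCT-123", 99.95); // true, but only for this exact input
        }
    }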

"Lifecycle Virtualization" is voke's umbrella term for the use of virtualization in pre-production. The full menu of technologies for LV includes service virtualization, virtual and cloud-based labs, and network virtualization solutions. The current list of vendors providing service virtualization solutions includes CA, HP, IBM, Parasoft, and Tricentis.

Lanowitz and her voke, inc., co-founder, Lisa Donzek, take a closer look at recent developments in the service virtualization market in a new report, ("Market Snapshot Report: Service Virtualization"). The report looks at what 505 enterprise decision makers reported about their experiences with service virtualization between August 2014 and October 2014, and the results they got from their efforts. It also does an excellent job of defining terms and identifying the products and vendors involved.

This is voke's second market report on service virtualization. Their first survey was conducted in 2012 and included 198 participants. "In 2012, the market had just been legitimized," Lanowitz said. "We still had a long way to go."

What jumped out at me in this report was the change in the number of dependent elements to which the dev/test teams of the surveyed organizations required access. In 2012, participants reported needing access to 33 elements for development or testing, but had unrestricted access to only 18. In 2014, they reported needing access to 52 elements and had unrestricted access to only 23. Sixty-seven percent of the 2014 participants reported unrestricted access to ten or fewer dependent elements.

"It's clear to us that if you're not using virtualization in your application life cycle, you're going to have some severe problems, whether it's meeting time-to-market demands, quality issues, or cost overruns," Lanowitz said. "Service virtualization helps you remove the constraints and wait times frequently experienced by development and test teams needing to access components, architectures, databases, mainframes, mobile platforms, and so on. Using service virtualization will lead to fewer defects, reduced software cycles, and ultimately, increased customer satisfaction."

There's a lot more in the report. It's a must read.

A rose by any other name might smell as sweet, but it's not going to jack up your productivity.

Posted by John K. Waters on February 9, 2015


One Solution for Developer Fatigue

When you hear the words "developer fatigue," what images come to mind? Your team leader talking about yet another project with an impossible deadline? Bleary-eyed teammates on all-night coding sessions? Too much java (and Java)? Or maybe you see a more profound enervation brought on by "the constant and increasing flood of new languages, libraries, frameworks, platforms, and programming models that are garnering popular attention in the developer community." Those are JNBridge CTO Wayne Citrin's words, soon to appear in a company blog post.

Citrin and other industry watchers have noted the growing frustration among developers faced with a constant need to learn and adjust to new languages and technologies while remaining productive. Citrin believes he has at least a partial solution to this problem, and I talked with him about it recently.

"It's great that there's so much innovation going on," he said, "but the half-life of many of these innovations seems to be decreasing. It used to be that something would come out, and once it got adopted, you'd expect it to be the big paradigm for at least a few years. That's just not the case anymore. It can be a genuine hassle to keep up with all this stuff, and it takes away from the time people need to be productive. At some point, people just start throwing up their hands."

JNBridge is a provider of interoperability tools for connecting Java and .NET frameworks. The company's flagship product, JNBridgePro, is a general purpose Java/.NET interoperability tool designed to bridge anything Java to .NET, and vice versa. The tool allows developers to access the entire API from either platform.

Citrin's solution, not surprisingly, lies in tools like JNBridgePro.

"It's a little self-serving on our part, I admit, but it's also true that interop tools can really help," he said. "They make it possible for you to introduce yourself to new technologies gradually, focusing on the things in those technologies that can help you solve your immediate problems. If you're a developer who uses our stuff and you're deep into both Java and .NET and you need to make use of, say, Python or Groovy, you can reach out for features from those technologies at your own pace."

By taking it slowly and integrating features from the new tech into their projects as they need them, developers can leverage their existing skills, reduce the risk of making a bad bet, and reduce their stress -- which ultimately reduces developer fatigue, he said.

"As long as the implementation is Java- or .NET-based, we can help developers integrate any part of it into their existing project, when they're ready, and without having to throw all the existing stuff away, and completely learning the new thing, re-implementing it from scratch, and having to find and fix new bugs," he said. "It beats the heck out of the alternative of 'warehousing,' which is essentially jumping in and learning a new technology for its own sake."

Citrin isn't the only one noticing this phenomenon, of course. A lively back-and-forth on the consequences of these seemingly never ending demands on developers was sparked last July when front-end and server-side developer Ed Finkler (aka funkatron) wrote a blog post entitled "The Developer's Dystopian Future." In that post he confessed that his "tolerance for learning curves" was growing smaller every day.

"New technologies, once exciting for the sake of newness, now seem like hassles," he wrote. "I'm less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity." He also expressed what is almost certainly a widespread fear in this increasingly polyglot world of simply being left behind.

Finkler's post struck a nerve in more than a few developers and fellow bloggers. Tim Bray, one of the creators of the XML specification, talked about it in his "Discouraged Developer" blog.

"[T]here is a real cost to this continuous widening of the base of knowledge a developer has to have to remain relevant," he wrote. "One of today's buzzwords is 'full-stack developer.' Which sounds good, but there's a little guy in the back of my mind screaming 'You mean I have to know Gradle internals and ListView failure modes and NSManagedObject quirks and Ember containers and the Actor model and what interface{} means in Go and Docker support variation in Cloud providers?' Color me suspicious."

Programmer/podcaster Marco Arment took up Finkler's commentary in his blog.

"I feel the same way," he wrote, "and it's one of the reasons I've lost almost all interest in being a web developer. The client-side app world is much more stable, favoring deep knowledge of infrequent changes over the constant barrage of new, not necessarily better but at least different technologies, libraries, frameworks, techniques, and methodologies that burden professional web development."

Author Matt Gemmell commented on Arment's and Finkler's posts in his "Confessions of an ex-developer" blog post. "There's a chill wind blowing, isn't there? I know we don't talk about it much, and that you're crossing your fingers and knocking on wood right now, but you do know what I mean," he wrote.

Redmonk analyst Stephen O'Grady noticed Arment's, Bray's, Finkler's, and Gemmell's posts and wrote about them on his "tecosystems" blog.

"Developers have historically had an insatiable appetite for new technology," he wrote, "but it could be that we're approaching the too-much-of-a-good-thing stage. In which case, the logical outcome will be a gradual slowing of fragmentation followed by gradual consolidation." (Be sure to check out his new O'Reilly book, The New Kingmakers: How Developers Conquered the World.)

If you haven't already, it's worth reading these connected blogs. (I'd start with Finkler's.) And keep an eye out for Citrin's upcoming post on the JNBridge Web site.

Posted by John K. Waters on February 9, 2015