Java 9 & Jigsaw: Reinhold on 'the State of the Module System'

The first early access builds of JDK 9 with Project Jigsaw, the initiative that's bringing modularity to the Java platform, are now available for download. Before you jump in, you should definitely read Mark Reinhold's rich and readable "The State of the Module System," which he published online earlier this month. The chief architect of Oracle's Java Platform Group calls it "an informal overview of enhancements to the Java SE Platform prototyped in Project Jigsaw and proposed as the starting point for JSR 376."

JSR 376 is, of course, the Java Specification Request that aims to define "an approachable yet scalable module system for the Java Platform." But Project Jigsaw actually comprises JSR 376 and four JEPs (JDK Enhancement Proposals), which are sort of like JSRs that allow Oracle to develop small, targeted features for the Java language and virtual machine outside the Java Community Process (JCP). (The JCP requires full JSRs.) JEP 200: The Modular JDK defines a modular structure for the JDK; Reinhold has described it as an "umbrella for all the rest of them." JEP 201: Modular Source Code reorganizes the JDK source code into modules. JEP 220: Modular Run-Time Images restructures the JDK and JRE run-time images to accommodate modules. And the recently proposed JEP 260: Encapsulate Most Internal APIs (which I wrote about earlier) aims to encapsulate unsupported, internal APIs, including sun.misc.Unsafe, within the modules that define and use them.

The early access builds of Java 9 with Jigsaw include the latest prototype implementation of JSR 376 and the JDK-specific APIs and tools described in JEP 261, which will actually implement the changes and extensions to the Java programming language, JVM and standard Java APIs proposed by the JSR.

In his "state of" report, Reinhold provides a nuts-and-bolts breakdown of modularization, from the essential goals of the JSR, to detailed descriptions of modularization in the context of Java -- everything from "modules," "module artifacts" and "module descriptors" to the concepts of "readability," "accessibility" and "reflection."

In his conclusion, he writes:

"The module system described here has many facets, but most developers will only need to use some of them on a regular basis. We expect the basic concepts of module declarations, modular JAR files, module graphs, module paths, and unnamed modules to become reasonably familiar to most Java developers in the coming years. The more advanced features of qualified exports, increasing readability, and layers will, by contrast, be needed by relatively few."

He labeled the post "Initial Edition," so I'm expecting updates. I'd keep an eye out.

The long-awaited, much-delayed modularization of Java is going to be the biggest change to the platform since Java 8's support for lambdas. In a recent ADTmag post ("Is Oracle Dumping Its Java Evangelists?"), I followed up on a tweet by Gartner analyst Eric Knipp, who called Java a "dead platform." He made the case to me that Java is no longer the default choice for greenfield applications, and that the change augurs its eventual demise. But I ran into Forrester analyst John R. Rymer at the recent Dreamforce event, and he posed a question: given all the recent and coming changes to Java -- especially modularization -- is it really the same language?

Posted by John K. Waters on September 23, 2015


Nginx Adds Support for HTTP/2

Nginx Inc., the commercial provider of one of the most popular open-source Web servers, has released a new version of its namesake product with a fully supported implementation of the new HTTP/2 standard. Nginx Plus R7, available now, comes with the promise of an easier transition to the new standard, along with new performance and security enhancements.

This release actually comes with a bunch of improvements -- things like support for thread pools and asynchronous I/O; support for socket sharding optimizations to increase performance on multicore servers; new access controls and connection limits for TCP services; and a new "live activity monitoring" dashboard. But it's the HTTP/2 support that'll be getting the most attention.

HTTP/2 is the second major version of the Hypertext Transfer Protocol Web standard -- the mechanism used by Web browsers to request information from a server and display Web pages on a screen -- and the first since HTTP/1.1 was approved in 1999. It's based on Google's SPDY (pronounced "speedy") open networking protocol. Nginx has been involved in the effort to develop the updated standard over the past two years, and Google has said that it will deprecate SPDY in favor of HTTP/2 in its Chrome browser this year.

HTTP/2 is going to provide big performance and security improvements with things like multiplexing, header compression, prioritization, and protocol negotiation, but it's still a challenge for many Web sites to support the standard. What the company has created with Nginx Plus R7 is a kind of front-end HTTP/2 gateway and accelerator for new and existing Web services, Owen Garrett, the head of products, told me.

"This is a way you can deploy Nginx Plus in front of your applications and publish those applications using HTTP/2," he said. "It's an easy and powerful way to adopt this new Web standard."

This is all about making it possible for Web sites to operate at scale, Garrett said. "You can deploy a Web site on standard hardware and software and it can handle a handful of users, but in order for that Web site to be successful, it's got to be able to handle hundreds, thousands, even millions of users. And that's what we're doing. We transform a simple but rich Web site into something that can handle phenomenal amounts of traffic."

The popularity of Nginx has exploded in recent years. There's a reason Garrett and his colleague, Peter Guagenti, vice president of marketing, called it "the heart of the modern Web." The Apache Web server has been around since the mid-'90s, and it's probably more widely deployed than Nginx. But in a recent breakdown of Web server usage by analysts at W3Techs, Apache was used by 56.5 percent of all Web sites (with a known server) and 26.8 percent of the 1,000 most heavily trafficked sites, while Nginx was used by 25.4 percent of all Web sites and 44.4 percent of the top 1,000. (W3Techs uses data from Web traffic tracker Alexa for its Web site ranking.)

"It all depends on how you slice the data," Guagenti said. "Nginx has been the only Web server on the market that has been growing in the last two years. We've been gaining a basis point every week or so, and we expect to surpass Apache in the top 100,000 sometime this week."

The modern Web is all about performance, Guagenti said, and that's why Nginx has become so popular.

"There has never been a focus on performance like we've seen in the past few years," he said. "Performance is everything. Milliseconds of latency costs thousands to millions of dollars in e-commerce. Milliseconds of latency mean you use one app over another on your phone, that you switch to a different media site to read the same article."

Guagenti also argued that the rise of Nginx has been driven, at least in part, by the DevOps movement. "People now expect a certain level of control and configurability over their entire stack," he said, "which they get from Nginx, but not so much from other tool chains."

The list of Web sites currently using Nginx offers a peek into Nginx's future: Airbnb, Box, Instagram, Netflix, Pinterest, SoundCloud and Zappos, among others.

"We call ourselves 'the secret heart of the modern Web,'" said Garrett, "but we're not a secret to developers. We're growing this fast because of the grass roots movement among our open source and commercial users."

Nginx is sponsoring a three-day conference in San Francisco this week. Looks like a lot of hands-on training, strategic sessions, and some rock-star speakers.

Posted by John K. Waters on September 21, 2015


Is Oracle Dumping Its Java Evangelists?

The rumors are flying about the fate of some of Oracle's top Java evangelists, thanks to a tweet and a Reddit thread picked up by the press last week. These rumors follow hot on the heels of the departure last month of Cameron Purdy, who served as senior vice president of Oracle's Cloud Application Foundation and Java EE group.

The Reddit discussion grew from a comment citing a Facebook post by Simon Ritter, evangelist on Oracle's Cloud Development team, which read:

"I've heard it said that you should try something new every day. Yesterday I thought I'd see what it was like to be made redundant. One month of 'consultation' and then I'll be joining the ranks of the unemployed claiming my job seekers allowance. To be fair, I was expecting this, but feel bad for the numerous other people on my team whom I don't think saw this coming...."

A number of names of the newly departed or soon-to-be-departing emerged during the Reddit discussion. I wasn't able to talk with them -- and Oracle isn't commenting -- so I won't post their names here. (But you can see them in the thread.) I was, however, able to connect with jClarity co-founder and CTO Kirk Pepperdine, who posted the tweet that got the speculation rolling.

I caught up with Pepperdine via e-mail. "I only stated what was pretty much public knowledge at [the time] it was tweeted," he told me. "I'm a little surprised that it's taken off as it has."

Pepperdine said he caught a hint that something was up in July at his company's annual jCrete conference. jCrete is an invitation-only, think-tank event that typically draws about 75 people. One of the sessions was about the end of the Java evangelism team and the direction Oracle might be taking. "My understanding was Java evangelism was to become cloud evangelism," he said. "I didn't expect that people would be let go. My guess is that they were on a round of cutbacks, and evangelism is a soft target."

Pepperdine believes that Oracle has been good for Java in general, but at moments like this, it's clear that its interests don't always coincide with the interests of the Java community. "Oracle is a top-down, command-and-control organization that is very much focused on the bottom line," he said. "The reality is, making money from core Java is plain difficult. Supporting core Java is very expensive. Making moves without properly priming the community has always been a problem in that it inevitably turns out to be a PR disaster. And that is a shame, because on the whole, Oracle has been a great steward of Java ...."

"This move away from evangelism appears to be an attempt to refocus the business people," he added. "However, Java didn't become a pervasive technology because of business people, it became the platform of choice because of developers."

Pepperdine's tweet generated a lively conversation about the health of Java. Among the many comments was this one from Gartner Inc. analyst Eric Knipp:

"This one actually makes sense. Why promote a dead platform?"

I asked Knipp what he meant by that. "I look at it like this," he explained in an e-mail. "The platforms that dominate greenfield application development today will be the dominant platforms of tomorrow. The majority of application development occurs in the creation of packaged software (and then the technologies from the software 'as a product' world move into the enterprise). Packaged software is in transition from COTS [commercial off-the-shelf] products to SaaS [Software as a Service]. This transition will take some time, but I don't think anyone can argue that it isn't happening. For many years, the default choice for new packaged software was the Java platform. Java is no longer the default choice, and hasn't been for at least five years. In fact, I'd argue that today Java isn't even the dominant choice -- that mantle is moving to other runtimes more suitable for massively distributed cloud-native architectures, like Node.js, Go, Erlang and so on.

"So if you come back to my original point -- platforms that dominate greenfield today will be the vibrant 'winning platforms' of tomorrow -- it ought to be concerning to Oracle (and Java enthusiasts in general) that its platform is no longer dominant. That portends the death of the platform in terms of relevancy to enterprise IT. Would it be more accurate to say 'Java is dying a slow death' or 'Java is the new COBOL?' Maybe, but the gist is the same."

Pepperdine's partner, Martijn Verburg, CEO of jClarity and co-leader of the London Java Users Group, argues that evangelists still play an important role in the Java ecosystem. He listed his reasons, which included, among others:

  • Shifting customers that run on Java enterprise solutions in-house to Oracle Cloud means getting Java developers on board. No evangelists? Can't do that as easily.
  • Oracle cloud middleware, and so on, has a strong Java core and customers need to understand the how, what, when and why of that.
  • Java, despite being the No. 1 or 2 language (depending on who you ask) today, is facing serious competition in the enterprise, thanks to server-side JavaScript (Node.js), as well as .NET being open sourced and made available on Linux.
  • Emerging markets have millions of developers who can be influenced to go down a certain ecosystem. Oracle potentially will lose out on having any good will with the millions of new developers arriving in China, India, South America, Africa and so on.
  • This undoes a lot of the good work Oracle has done with the existing Java community, many of whom are paying customers. It was a long slog to get the two sides to see eye to eye and work together, and this move brings back old fears and doubts.

For what it's worth, this looks like cost-cutting to me. Oracle hasn't exactly been killing it lately, and as Pepperdine said, evangelists are a soft target. And maybe Java no longer needs an army of preachers spreading the gospel.

Posted by John K. Waters on September 9, 2015


Java Interop Tool Now Supports Windows 10, Adds 'Proxy By Name'

The Java and Microsoft .NET Framework interoperability mavens at JNBridge have upgraded their flagship JNBridgePro tool to support both Windows 10 and Visual Studio 2015. That was to be expected from the guys who have been helping to build bridges between "anything Java and anything .NET" since 2001. What stood out in this release for me was the new "Proxy By Name" feature, which was much requested by JNBridge users, company CTO Wayne Citrin told me.

"Our users like the fact that they can use proxies in Visual Studio and Eclipse, etc., but don't like the parameter placeholder names they get when IntelliSense pops up," Citrin said. "They really wanted to see the names of the original parameters, which are generally in the metadata of the underlying binaries."

Simple, right? Except traditionally that metadata hasn't been so easily extracted from Java. Enter Java 8 and the Java Reflection API, which allows for the extraction of that parameter info. "It seemed like the time was right to add this very often requested feature," Citrin said.

As the Oracle doc page describes it, the Reflection API "enables Java code to discover information about the fields, methods and constructors of loaded classes, and to use reflected fields, methods, and constructors to operate on their underlying counterparts, within security restrictions. The API accommodates applications that need access to either the public members of a target object (based on its runtime class) or the members declared by a given class. It also allows programs to suppress default reflective access control."

Proxy By Name maps the names of the underlying parameters of methods when generating proxies so that the parameters of the proxied methods have the same names as the parameters in the underlying methods. The result: Developers can better understand how the proxied methods should be used.
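For the curious, here's a minimal, self-contained sketch -- my own example, not JNBridge code -- of the Java 8 reflection facility that makes this possible. Note that real parameter names are only present when the underlying classes are compiled with the javac -parameters flag:

    import java.lang.reflect.Method;
    import java.lang.reflect.Parameter;

    public class ParameterNames {
        // A hypothetical method whose parameter names we want to recover.
        public static void transfer(String accountId, long amountCents) { }

        public static void main(String[] args) throws Exception {
            Method m = ParameterNames.class.getMethod("transfer", String.class, long.class);
            for (Parameter p : m.getParameters()) {
                // getName() returns the source-level name only if the class was
                // compiled with -parameters; otherwise it falls back to arg0, arg1, ...
                System.out.println(p.getType().getSimpleName() + " " + p.getName());
            }
        }
    }

Compiled with -parameters, this prints "String accountId" and "long amountCents" rather than the anonymous placeholders.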

"We're kinda proud of this one," Citrin said, "It's always fun to finally cross off a feature request that has been on the customer request list for a number of years."

JNBridgePro is a general purpose Java/.NET interoperability tool designed to allow developers to access the entire API from either platform. As Citrin explained it to me once, the tool "connects Java and .NET Framework-based components and applications with simple-to-use Visual Studio and Eclipse plug-ins that remove the complexities of cross-platform interoperability."

The Boulder, Colo.-based company is a member of Microsoft's Visual Studio Industry Partner (VSIP) program, and Citrin, of course, keeps a close eye on developments in Redmond. At a recent VSIP event, he got to spend time digging into Visual Studio 2015.

"A lot of the cool stuff in the new release isn't something we deal with directly at the company just yet," Citrin said. "But I have to say that I'm very impressed with the Universal Windows Platform. The idea of having a single binary that should work on your phone, your tablet, your PC, your Xbox, your HoloLens, is great. I think Microsoft is going in an interesting direction."

As I've mentioned before in this space, JNBridge publishes a series of interoperability scenarios called "Labs." The company calls them "cutting-edge scenarios that showcase the myriad possibilities available to developers when bridging Java and .NET frameworks." The description is a bit hyperbolic, but the labs, which are free kits that include documentation and source code, have gotten good reviews from users. One example of a Lab: "Create a .NET-based Visual Monitoring System for Hadoop," to visually monitor the status of all the nodes in a Hadoop cluster in real time. Another: "Using a Java SSH Library to Build a BizTalk Adapter," which shows how to use Java Secure Shell (SSH) to enable BizTalk Server to manipulate remote files securely. If you use JNBridge, the Web site is worth checking out.

Posted by John K. Waters on August 21, 2015


Oracle Offers Solution for sun.misc.Unsafe in Java 9

What to do with sun.misc.Unsafe in Java 9? One side says it's simply an awful hack from the bad old days that should be gotten rid of; the other side says its heavy use is responsible for the rise of Java in the infrastructure space and popular tools still need it. The problem is, both sides are right. This week, Mark Reinhold, chief architect of Oracle's Java Platform Group, offered a solution.

Writing on the OpenJDK mailing list, Reinhold proposed encapsulating unsupported, internal APIs, including sun.misc.Unsafe, within the modules that define and use them. That proposal is now a formal Java Enhancement Proposal (JEP). Posted this week, JEP 260 ("Encapsulate Most Internal APIs") aims to "make most of the JDK's internal APIs inaccessible by default, but leave a few critical, widely used internal APIs accessible, until supported replacements exist for all or most of their functionality." JEPs are similar to Java Specification Requests (JSRs), which are submitted to the Java Community Process (JCP).

"It's well-known that some popular libraries make use of a few of these internal APIs, such as sun.misc.Unsafe, to invoke methods that would be difficult, if not impossible, to implement outside of the JDK," Reinhold wrote, adding that the encapsulation scheme will, in the long run "reduce the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these non-standard, unstable and unsupported internal APIs."

When word got around a few months ago that sun.misc.Unsafe might be removed or hidden in Java 9, howls of protest echoed across the public network. The plan was "an absolute disaster in the making," declared one blogger. Cooler heads organized a working group to develop a document to raise awareness of the problems ditching sun.misc.Unsafe would create. Although still a draft document, "What to do about sun.misc.Unsafe?" is well worth reading. It includes a clear explanation of the uses to which sun.misc.Unsafe has been put over the years, suggestions for what should be done about it now, and a surprisingly (to me, anyway) long list of products that use it (JRuby, Grails, Scala, Akka, Hazelcast, Neo4j, Apache Spark and XRebel, to name a few).

Greg Luck, CTO at Hazelcast and co-author of the JCache spec (and JCP Executive Committee member), is a member of the working group. He learned in June that Oracle was considering removing or hiding sun.misc.Unsafe in Java 9. So-called unsafe code is sometimes required for low-level Java programming, Luck explained, where developers need to modify platform functionality for a specific purpose. Open source projects in particular use sun.misc.Unsafe as a Java Native Interface (JNI) workaround.

"It's not meant to be a standard part of Java, and yet it's built into every JDK, and everybody uses it," he said. "It's a genie that got out of the bottle."

Martijn Verburg, CEO of jClarity and co-leader of the London Java Users Group, is another member of the working group. The reason sun.misc.Unsafe can't simply be dumped, he told me in an e-mail, is that it provides a number of functionalities that aren't available through any of the standard classes in OpenJDK.

"[sun.misc.Unsafe] should be cleaned up and the safe parts should get standardized," Verburg said. "The rest should be removed! If you want to perform dangerous manual memory allocations, there are other languages for that."

Both Verburg and Luck praised Oracle's proposal to encapsulate unsupported, internal APIs, including sun.misc.Unsafe. Verburg called JEP 260 "a fantastic pragmatic compromise" that "clearly shows [that the] OpenJDK leadership and Oracle are willing to listen to the needs of the ecosystem."

The community seems to be heading toward a solution to the sun.misc.Unsafe problem, and I'm sure it's due, at least in part, to the efforts of Verburg, Luck, and their colleagues in the working group. But this internecine dustup also raises a question that has been lurking in the background since the formation of OpenJDK: Who really makes the decisions about the future of Java? OpenJDK is an open-source community, but unlike the JCP (and organizations like the Apache and Eclipse foundations), it's not vendor neutral. The main goal of the JEP Process, according to the OpenJDK Web site, is "to produce a regularly updated list of proposals to serve as the long-term Roadmap for JDK Release Projects and related efforts." The JEPs allow Oracle to develop small, targeted features for the Java language and virtual machine outside the JCP.

"Who's in charge of Java? That's a very complex [question]," said Verburg. "The reality is that Oracle has the loudest voice, but it's a heavy collaboration .... For the parts of OpenJDK that make up the Reference Implementation of Java, the JCP still has to approve."

The internal APIs Oracle has proposed to keep accessible in JDK 9 are listed on the JEP 260 page. Oracle is welcoming suggested additions to the list "justified by real-world use cases and estimates of developer and end-user impact."

BTW: Another great source for understanding sun.misc.Unsafe is Rafael Winterhalter's January 2014 blog post, "Understanding sun.misc.Unsafe."

Posted by John K. Waters on August 7, 2015


Open Container Initiative Moving Fast

It's been almost exactly a month since a coalition of industry leaders and users joined forces to create the Open Container Project to establish common standards for software containers. Now known as the Open Container Initiative (OCI) (renamed to avoid confusion with another Linux Foundation project), the group has announced the availability for public scrutiny of a draft charter for the nascent organization and the addition of 14 new members.

You can tell the OCI has the potential to become a true standards body by the broad range of organizations it has brought together, not to mention the number of out-and-out rivals who've gotten onboard. The list of founding members includes Docker, CoreOS, Amazon Web Services, Apcera, Cisco, EMC, Fujitsu, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, The Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat and VMware. The new membership roster includes AT&T, ClusterHQ, Datera, Kismatic, Kyup, Midokura, Nutanix, Oracle, Polyverse, Resin.io, Sysdig, SUSE, Twitter and Verizon.

The OCI was established under the auspices of The Linux Foundation, which also this week announced the formation of the Cloud Native Computing Foundation. Both groups are "collaborative projects," which means they are Linux Foundation sponsored, but independently supported.

The hopes of the backers of the OCI are summarized in the mission statement of the draft charter:

"The Open Container Initiative provides an open source, technical community, within which industry participants may easily contribute to building a vendor-neutral, portable and open specification and runtime that deliver on the promise of containers as a source of application portability backed by a certification program."

Just as interesting, I think, is what the OCI says it will not be doing:

"The Open Container Initiative does not seek to be a marketing organization, define a full stack or solution requirements, and shall strive to avoid standardizing technical areas undergoing signification innovation and debate."

The initiative was unveiled in June at DockerCon, and the latest news was announced this week at OSCON. Docker is making a big upfront donation to the OCI: a draft specification for the base format and runtime and the code associated with a reference implementation of that spec. The company is donating the entire contents of its libcontainer project and all modifications needed to make it run independently of Docker.

I had a chance to talk with two Docker Dudes (Dockeroids? Dockerettes? Dockerers?) about the new organization and its initial momentum.

"The number of members just about doubled in 30 days," said David Messina, Docker's vice president of marketing. "That's serious velocity, which I think speaks to the widespread interest in having a single, open container specification. But also notice the diversity of that membership. We have large software vendors, smaller software vendors, large Web-scale users and large enterprise players. Everybody in the industry wants a universal standard."

The OCI is making fast moves on the technical side, too. Patrick Chanezon, a member of the technical staff at Docker who has been working on the OCI, said we can expect a draft spec in just a few weeks.

"I've been involved in several standards projects over the years at Sun, Google, and Microsoft," Chanezon said, "and I've never seen an industry standard being elaborated so fast. In just six weeks [from the launch] we'll have a first draft of a spec for something that will be the basis for container based computing. To me that is a testament to the fact that a standard like this was needed to be able to innovate faster at the higher level, like orchestration and things like that."

High demand is one reason the draft spec is coming along so quickly, but it didn't hurt that the OCI launch was followed two days later by the Docker Contributor Summit. Many of the maintainers of libcontainer, which provides a standard interface for making containers inside an operating system, attended that event, as did members of the OCI working group. "We spent the whole day working together, with the result that the spec is in pretty good shape," Chanezon said.

The OCI working group's rapid progress also shows how effective a model that emphasizes lightweight governance and a focus on a discrete set of technologies, and nothing else, can be, Messina said. "This is what can happen when the organization gets out of the way of the maintainers," he said.

There are around 10 maintainers on the project right now, many of whom are coming from the libcontainer project, Chanezon said. "What we did was move the libcontainer from the Docker GitHub organization to Open Container organization, and all the maintainers came with it," he said. But that group also includes people from Docker, Google, Red Hat, CoreOS and a few independents.

It's worth noting that libcontainer represents 5 percent of the Docker code base. Chanezon called it "the heart of Docker." runC is the reference implementation of the OCI spec, and Docker plans to use it as plumbing for creating its own containers.

Jim Zemlin, executive director of The Linux Foundation, has said that containers are revolutionizing the computing industry. Docker claims that containers based on Docker's image format have been downloaded more than 500 million times in the past year alone, and there are now more than 40,000 public projects based on the Docker format.

I asked Chanezon why we're seeing such intense interest in, and furious activity around, containers.

"When I give talks, I like to quote William Gibson, who said, 'The future is already here, it's just not evenly distributed,'" he said. "Right now, that future is getting evenly distributed, and that means that every organization on the planet is starting to build distributed applications. Docker arrived just at the right time to let them do that."

You gotta love a guy who can work in a quote from the author of the great cyberpunk novel, Neuromancer, and coiner of the term "cyberspace." (Not to mention one of my favorite writers.)

Posted by John K. Waters on July 24, 2015


Pivotal and Cloud Native Java

O'Reilly's annual Open Source Convention, better known as OSCON, is in full swing this week in Portland. Among the more joyful attendees at this year's event is James Watters, vice president of Product, Marketing and Ecosystem for Cloud Foundry at Pivotal. How do I know Watters is a happy camper? His latest blog post, in which, among other things, he enthuses about the dramatic uptick in conference sessions on microservices -- 30 this year, up from just one last year (a session Pivotal presented).

"People are talking about writing apps in a new way," Watters said when I caught up with him on the phone. "And they're talking about using microservices and Spring Cloud to do it. I haven't seen that kind of excitement in the Java community to restructure these kinds of things in the enterprise space, maybe ever. So yeah, I'm kind of excited.

Pivotal recently released the beta of its Spring Cloud Services (1.0.0), which integrates the Cloud Foundry-based NetflixOSS microservices framework with Pivotal's Java-based Spring programming tools. The company plans to make Spring Cloud generally available in the fall. Between now and then, Pivotal will be adding distributed tracing into the framework via something called Spring Sleuth, Watters said.
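For readers who haven't seen the programming model, here's a minimal sketch -- my own hypothetical example, not Pivotal's code -- of a Spring Boot microservice that registers itself with a NetflixOSS Eureka registry through Spring Cloud's discovery abstraction:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication    // auto-configuration plus component scanning
    @EnableDiscoveryClient    // register with a discovery server (e.g. Eureka) at startup
    @RestController
    public class GreetingService {

        @RequestMapping("/greeting")
        public String greeting() {
            return "Hello from a Spring Cloud microservice";
        }

        public static void main(String[] args) {
            SpringApplication.run(GreetingService.class, args);
        }
    }

With the right Spring Cloud starters on the classpath and a Eureka server URL configured, the service announces itself to the registry, and other services can then find it by name instead of by hard-coded address.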

At the SpringOne2GX conference in Washington this fall, Netflix is expected to talk about how it has begun to adopt Spring Cloud, Watters said. "Instead of configuring their apps in a complicated way, they're like, 'okay great, you wrote a wrapper for us? Cool, we'll just use your wrapper.' There's a virtuous feedback loop between the Spring team and Netflix team right now."

Watters describes himself as a lifetime enterprise Java guy (he's been working with it since high school), who worked at Sun Microsystems for about eight years. In his post, he claims (I think rightly) that Pivotal has been at the "intersection of microservices, continuous delivery and multi-cloud portability since being founded in 2013."

"There are two camps today," he told me, "people who are interested in different flavors of containers, and people like us, who are interested in building and running microservices apps. We have large companies asking us to come in and do two-day workshops on that. That's really where the excitement is right now."

Without a microservices architecture, container technologies aren't nearly as useful, Watters argued. "You can't run legacy monolithic Java, like Oracle and WebSphere, in that environment."

We're seeing a new wave, he wrote, that "fundamentally alters application architectures and workflows for developers and operators building the next generation of data-hungry, digital experiences." He's talking, of course, about what some people are calling the cloud native revolution. The Cloud Foundry open Platform-as-a-Service (PaaS) environment, of which Pivotal is the commercial maintainer, is a key enabling technology of that revolution.

"After our Spring Cloud product manager, Matt Stine, published "Migrating to Cloud-Native Application Architectures" for O'Reilly, we were just overwhelmed with requests from enterprises for workshops," Watters said. "We can't keep Matt off the road."

Cloud Native Java is especially appealing to enterprises that are looking to modernize their architectures, Watters said, because it leverages existing skill sets and allows for integration with legacy apps. From their perspective, he said, it's an "evolutionary approach."

In his post, Watters points to some telling successes of Pivotal's Cloud Native enterprise products. The company showcased 10 Fortune 500 companies at the recent Cloud Foundry Summit, all of which worked with the company on projects based on Cloud Foundry and Spring technologies. And downloads of Pivotal's Spring Boot rapid application development framework have gone through the roof. (More than 1.4 million downloads per month over the last year, he said. There's a graph.)

There's a lot more in Watters' post, which is well worth reading.

Posted by John K. Waters on July 22, 2015


Oracle v. Google: Now the Fair Use Argument for Java APIs

Now that the Supreme Court has decided not to review Oracle America Inc. v. Google Inc., the long-running lawsuit returns to federal district court in San Francisco, where Google will have a chance to argue that its use of 37 Java APIs -- now considered copyrightable because of the Supreme Court's pass -- in its Android operating system falls under the doctrine of fair use.

Oracle has won a significant argument here, but not the lawsuit. You could say that Google has a Plan B. But what exactly is "fair use," and how do you prove it in court?

The U.S. Copyright Office defines fair use as "a legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances." Federal courts decide fair use issues using four criteria:

  • the purpose and character of the use (is it commercial, nonprofit, educational, etc.)
  • the nature of the copyrighted work (is it a novel, movie, song, technical article, news item)
  • the amount and "substantiality" of the portion used (how much of it was used and was that the "heart" of the work)
  • the effect of the use upon the potential market value of the work

There's also the question of whether the use was "transformative." Transformative uses, the Copyright Office says, "are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work."

"Fair use is a fact-specific inquiry," explained attorney Case Collard via e-mail. "It depends on what the item is that is copyrighted and how the entity claiming fair use is using it."

I reached out to Collard, a partner at Dorsey & Whitney, who specializes in intellectual property disputes and developing strategies for safeguarding intellectual property rights, to get his take on the latest development in the Big O versus Big G saga. He said the Federal Circuit's decision, which will now stand, laid out something of a road map for how Google might apply a fair use argument.

"In my opinion, the biggest problem for Google is the commercial nature of its use [of the APIs]," he said. "That is generally a strike against finding fair use. Its best argument is probably interoperability -- in other words, it should be fair use because Google must use the APIs in order to make its products interoperable."

Both the Federal Circuit and the White House recognized that Google was entitled to a fair-use defense. At the high court's request, the U.S. Solicitor General actually weighed in with an amicus curiae brief.

"Petitioner argues that its copying of respondent's code promoted innovation by enabling programmers to switch more easily to another platform," he wrote. "But it is the function of the fair-use doctrine... to identify circumstances in which the unauthorized use of copyrighted material will promote rather than disserve the purposes of the copyright laws." And he concluded: "Although petitioner has raised important concerns about the effects that enforcing respondent's copyright could have on software development, those concerns are better addressed through petitioner's fair-use defense…"

But the legal eagles at the Electronic Frontier Foundation (EFF), a California-based international nonprofit that advocates for digital rights, argue that fair use should not be the only defense against API copyright claims.

"Fair use is a complex and potentially expensive defense to develop and litigate," EFF legal director Corynne McSherry and special counsel Michael Barclay wrote in a blog post. "While Google has the financial resources to take that defense to trial, few start-ups have the ability to do so. The Federal Circuit's decision thus could deter new companies from competing with a large, litigious competitor by using the latter's APIs..."

The EFF is one of the staunchest opponents of API copyright. In an amicus brief filed in support of Google last year on behalf of 77 computer scientists, the organization articulated some widely held fears about the consequences of the appeals court's decision that the APIs are protected under U.S. copyright law. "The Federal Circuit's decision poses a significant threat to the technology sector and to the public," the brief stated. "If it is allowed to stand, Oracle and others will have an unprecedented and dangerous power over the future of innovation. API creators would have veto rights over any developer who wants to create a compatible program -- regardless of whether she copies any literal code from the original API implementation. That, in turn, would upset the settled business practices that have enabled the American computer industry to flourish, and choke off many of the system's benefits to consumers."

IDC analyst Al Hilwa is less apprehensive about the potential impact of API copyright.

"The impact will be felt in various ways," Hilwa told me. "APIs are likely to be more explicitly associated with terms of use, for example, and potentially with more lawsuits relating to interoperability. But it also means that developers wanting to bring alternative implementations of a system may choose to be less imitative of the behavior of the system, and more innovative by creating entirely different competing systems. I think we just have to wait and see how it plays out."

"In the end, it may not matter to developers much whether APIs are copyrightable, if (big if) they can be used under the fair use doctrine," Collard said. "In other words, after this is all said and done, if the fair use doctrine allows developers to use APIs without fear of a lawsuit, then it would have a very similar practical effect."

"Fair use" is codified in the U.S. in section 107 of the Copyright Act of 1976.

Posted by John K. Waters on July 8, 2015


VMware: Making the Developer a First-Class Datacenter User

Among the more interesting vendor announcements at last week's DockerCon was VMware's preview of two new products: AppCatalyst and Project Bonneville. Both are emblematic of VMware's newly amped-up effort to, as Kit Colbert, vice president and CTO of VMware's Cloud-Native Applications group, put it, "make the developer a first-class user of the datacenter through our cloud-native applications."

Colbert gave me a preview of the previews before the show, and explained why the server virtualization giant is pulling out all the stops to create developer-friendly tools.

"We all know that all companies are a becoming more like software companies, in the sense that software is the means by which they engage with users," he said. "IT is now less about minimizing costs and more about driving innovation and differentiation. Consequently, there has been this renewed focus on developers within enterprises and how to empower them, which will drive that business agility and velocity companies are looking for."

VMware responded to that trend with the launch of its Cloud-Native Applications group back in April, along with Project Photon, a lightweight Linux distro optimized for cloud-native apps, and Project Lightwave, an open source identity and access management solution for containers.

The group showcased its two latest projects at the Docker event in San Francisco. AppCatalyst is a desktop hypervisor aimed specifically at developers. Driven by a REST API and a Command Line Interface (CLI), it's designed for Linux container development (Docker is fundamentally a Linux technology) by devs working on Macs. It supports Docker Machine, integrates with HashiCorp Vagrant, and ships with Photon.

"We wanted to provide developers with an easy-to-use engine to run their applications, but also to optimize it so they can speed up the local build/test/run/debug cycle," Colbert said. "It's like a datacenter on their laptops."

Project Bonneville is a nascent native container solution for VMware's hypervisor. It's a Docker runtime that will allow users to create containers directly on VMware's ESXi bare-metal hypervisor via the Docker API. The project aims to enable the seamless integration of Docker containers into the vSphere server virtualization platform -- to, as the company says, "bring the VMware ecosystem to Docker containers."

"Developers are flocking to Docker," Colbert said. "It has a lot of momentum. The question for us is, how do we get the ease, speed, and flexibility of the Docker API mapped onto vSphere and give those containers the same level of management and monitoring that the VM infrastructure has today."

Ben Corrie, principal investigator on Project Bonneville, offers a great explanation of the project's approach in a company blog post: "... The pure approach Bonneville takes is that the container is a VM, and the VM is a container. There is no distinction, no encapsulation, and no in-guest virtualization. All of the necessary container infrastructure is outside of the VM in the container host. The container is an x86 hardware virtualized VM -- nothing more, nothing less."

"What this means to a developer," Colbert said, "is that ESX will look like a Docker host, indistinguishable from any other Docker host."

Bonneville takes advantage of Instant Clone, a new feature in vSphere 6 that clones a running VM, making it possible to get a new VM booted and running in less than a second, Colbert said.

Although the focus in the next-gen-app world is around Linux, Bonneville is being designed to run Docker containers on any OS. During a recent internal hackathon, Colbert said, some creative VMwarians used a vanilla Docker client to pull an image of the old school Lemmings game and run it on MS DOS 6.22.

"They were just having fun with it, but I think it's a great proof point of the generalization of the technology," Colbert said.

AppCatalyst was released as a technology preview at DockerCon, and it's available for download here. VMware expects to make it generally available later this year. The company is currently distributing Project Bonneville internally and expects to begin private beta testing in the third quarter of this year.

Posted by John K. Waters on July 6, 2015