Is Oracle Dumping Its Java Evangelists?

The rumors are flying about the fate of some of Oracle's top Java evangelists, thanks to a tweet and a Reddit thread picked up by the press last week. These rumors follow hot on the heels of the departure last month of Cameron Purdy, who served as senior vice president of Oracle's Cloud Application Foundation and Java EE group.

The Reddit discussion grew from a comment citing a Facebook post by Simon Ritter, evangelist on Oracle's Cloud Development team, which read:

"I've heard it said that you should try something new every day. Yesterday I thought I'd see what it was like to be made redundant. One month of 'consultation' and then I'll be joining the ranks of the unemployed claiming my job seekers allowance. To be fair, I was expecting this, but feel bad for the numerous other people on my team whom I don't think saw this coming...."

A number of names of the newly departed or soon-to-be-departing emerged during the Reddit discussion. I wasn't able to talk with them -- and Oracle isn't commenting -- so I won't post their names here. (But you can see them in the thread.) I was, however, able to connect with jClarity co-founder and CTO Kirk Pepperdine, who posted the tweet that set the discussion in motion.

I caught up with Pepperdine via e-mail. "I only stated what was pretty much public knowledge at [the time] it was tweeted," he told me. "I'm a little surprised that it's taken off as it has."

Pepperdine said he caught a hint that something was up in July at his company's annual jCrete conference. jCrete is an invitation-only, think-tank event that typically draws about 75 people. One of the sessions was on the end of the Java evangelism team and some thoughts on what direction Oracle is taking. "My understanding was Java evangelism was to become cloud evangelism," he said. "I didn't expect that people would be let go. My guess is that they were on a round of cutbacks, and evangelism is a soft target."

Pepperdine believes that Oracle has been good for Java in general, but at moments like this, it's clear that its interests don't always coincide with the interests of the Java community. "Oracle is a top-down CCC organization that is very much focused on the bottom line," he said. "The reality is, making money from core Java is plain difficult. Supporting core Java is very expensive. Making moves without properly priming the community has always been a problem in that it inevitably turns out to be a PR disaster. And that is a shame, because on the whole, Oracle has been a great steward of Java ...."

"This move away from evangelism appears to be an attempt to refocus the business people," he added. "However, Java didn't become a pervasive technology because of business people, it became the platform of choice because of developers."

Pepperdine's tweet generated a lively conversation about the health of Java. Among the many comments was this one from Gartner Inc. analyst Eric Knipp:

"This one actually makes sense. Why promote a dead platform?"

I asked Knipp what he meant by that. "I look at it like this," he explained in an e-mail. "The platforms that dominate greenfield application development today will be the dominant platforms of tomorrow. The majority of application development occurs in the creation of packaged software (and then the technologies from the software 'as a product' world move into the enterprise). Packaged software is in transition from COTS [commercial off-the-shelf] products to SaaS [Software as a Service]. This transition will take some time, but I don't think anyone can argue that it isn't happening. For many years, the default choice for new packaged software was the Java platform. Java is no longer the default choice, and hasn't been for at least five years. In fact, I'd argue that today Java isn't even the dominant choice -- that mantle is moving to other runtimes more suitable for massively distributed cloud-native architectures, like Node.js, Go, Erlang and so on.

"So if you come back to my original point -- platforms that dominate greenfield today will be the vibrant 'winning platforms' of tomorrow -- it ought to be concerning to Oracle (and Java enthusiasts in general) that its platform is no longer dominant. That portends the death of the platform in terms of relevancy to enterprise IT. Would it be more accurate to say 'Java is dying a slow death' or 'Java is the new COBOL?' Maybe, but the gist is the same."

Pepperdine's partner, Martijn Verburg, CEO of jClarity and co-leader of the London Java Users Group, argues that evangelists still play an important role in the Java ecosystem. He listed his reasons, which included, among others:

  • Shifting customers that run on Java enterprise solutions in-house to Oracle Cloud means getting Java developers on board. No evangelists? Can't do that as easily.
  • Oracle cloud middleware, and so on, has a strong Java core and customers need to understand the how, what, when and why of that.
  • Java, despite being the No. 1 or 2 language (depending on who you ask) today, is under serious competition in the enterprise, thanks to server-side JavaScript (Node.js), as well as .NET being open sourced and made available on Linux.
  • Emerging markets have millions of developers who can be influenced to go down a certain ecosystem. Oracle potentially will lose out on having any good will with the millions of new developers arriving in China, India, South America, Africa and so on.
  • It undoes a lot of the good work Oracle had done with the existing Java community, many of whom are paying customers. It was a long slog to get the two sides to see eye to eye and work together; this move brings back old fears and doubts.

For what it's worth, this looks like cost-cutting to me. Oracle hasn't exactly been killing it lately, and as Pepperdine said, evangelists are a soft target. And maybe Java no longer needs an army of preachers spreading the gospel.

Posted by John K. Waters on September 9, 2015


Java Interop Tool Now Supports Windows 10, Adds 'Proxy By Name'

The Java and Microsoft .NET Framework interoperability mavens at JNBridge have upgraded their flagship JNBridgePro tool to support both Windows 10 and Visual Studio 2015. That was to be expected from the guys who have been helping to build bridges between "anything Java and anything .NET" since 2001. What stood out in this release for me was the new "Proxy By Name" feature, which was much requested by JNBridge users, company CTO Wayne Citrin told me.

"Our users like the fact that they can use proxies in Visual Studio and Eclipse, etc., but don't like the parameter placeholder names they get when IntelliSense pops up," Citrin said. "They really wanted to see the names of the original parameters, which are generally in the metadata of the underlying binaries."

Simple, right? Except traditionally that metadata hasn't been so easily extracted from Java. Enter Java 8, whose updated Reflection API finally allows that parameter info to be extracted. "It seemed like the time was right to add this very often requested feature," Citrin said.

As the Oracle doc page describes it, the Reflection API "enables Java code to discover information about the fields, methods and constructors of loaded classes, and to use reflected fields, methods, and constructors to operate on their underlying counterparts, within security restrictions. The API accommodates applications that need access to either the public members of a target object (based on its runtime class) or the members declared by a given class. It also allows programs to suppress default reflective access control."

Proxy By Name maps the names of the underlying parameters of methods when generating proxies so that the parameters of the proxied methods have the same names as the parameters in the underlying methods. The result: Developers can better understand how the proxied methods should be used.
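
To make that concrete, here's a sketch of the Java 8 mechanism JNBridge appears to be drawing on (my illustration, not JNBridge's code; the class and method names are hypothetical). The java.lang.reflect.Parameter API only reports real names if the binaries carry that metadata, which javac records when you compile with the -parameters flag:

    import java.lang.reflect.Method;
    import java.lang.reflect.Parameter;

    public class ParameterNamesDemo {
        // A hypothetical method whose parameter names we want to recover.
        public void transfer(String accountId, double amount) { }

        public static void main(String[] args) throws Exception {
            Method m = ParameterNamesDemo.class.getMethod(
                    "transfer", String.class, double.class);
            for (Parameter p : m.getParameters()) {
                // isNamePresent() returns true only if the class was
                // compiled with "javac -parameters"; otherwise getName()
                // falls back to synthetic placeholders like "arg0".
                System.out.printf("%s (real name present: %b)%n",
                        p.getName(), p.isNamePresent());
            }
        }
    }

Compiled with -parameters, that prints accountId and amount; without the flag, you're back to the arg0-style placeholders that had JNBridge's users grumbling at IntelliSense.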

"We're kinda proud of this one," Citrin said, "It's always fun to finally cross off a feature request that has been on the customer request list for a number of years."

JNBridgePro is a general purpose Java/.NET interoperability tool designed to allow developers to access the entire API from either platform. As Citrin explained it to me once, the tool "connects Java and .NET Framework-based components and applications with simple-to-use Visual Studio and Eclipse plug-ins that remove the complexities of cross-platform interoperability."

The Boulder, Colo.-based company is a member of Microsoft's Visual Studio Industry Partner (VSIP) Program, and Citrin, of course, keeps a close eye on developments in Redmond. At a recent VSIP event, he got to spend time digging into Visual Studio 2015.

"A lot of the cool stuff in the new release isn't something we deal with directly at the company just yet," Citrin said. "But I have to say that I'm very impressed with the Universal Windows Platform. The idea of having a single binary that should work on your phone, your tablet, your PC, your Xbox, your HoloLens, is great. I think Microsoft is going in an interesting direction."

As I've mentioned before in this space, JNBridge publishes a series of interoperability scenarios called "Labs." The company calls them "cutting-edge scenarios that showcase the myriad possibilities available to developers when bridging Java and .NET frameworks." The description is a bit hyperbolic, but the labs, which are free kits that include documentation and source code, have gotten good reviews from users. One example of a Lab: "Create a .NET-based Visual Monitoring System for Hadoop," to visually monitor the status of all the nodes in a Hadoop cluster in real time. Another: "Using a Java SSH Library to Build a BizTalk Adapter," which shows how to use Java Secure Shell (SSH) to enable BizTalk Server to manipulate remote files securely. If you use JNBridge, the Web site is worth checking out.

Posted by John K. Waters on August 21, 2015


Oracle Offers Solution for sun.misc.Unsafe in Java 9

What to do with sun.misc.Unsafe in Java 9? One side says it's simply an awful hack from the bad old days that should be gotten rid of; the other side says its heavy use is responsible for the rise of Java in the infrastructure space and popular tools still need it. The problem is, both sides are right. This week, Mark Reinhold, chief architect of Oracle's Java Platform Group, offered a solution.

Writing on the OpenJDK mailing list, Reinhold proposed encapsulating unsupported, internal APIs, including sun.misc.Unsafe, within modules that define and use them. That proposal is now a formal JDK Enhancement Proposal (JEP). Posted this week, JEP 260 ("Encapsulate Most Internal APIs") aims to "make most of the JDK's internal APIs inaccessible by default, but leave a few critical, widely used internal APIs accessible, until supported replacements exist for all or most of their functionality." JEPs are similar to Java Specification Requests (JSRs), which are submitted to the Java Community Process (JCP).

"It's well-known that some popular libraries make use of a few of these internal APIs, such as sun.misc.Unsafe, to invoke methods that would be difficult, if not impossible, to implement outside of the JDK," Reinhold wrote, adding that the encapsulation scheme will, in the long run "reduce the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these non-standard, unstable and unsupported internal APIs."

When word got around a few months ago that sun.misc.Unsafe might be removed or hidden in Java 9, howls of protest echoed across the public network. The plan was "an absolute disaster in the making," declared one blogger. Cooler heads organized a working group to develop a document to raise awareness of the problems ditching sun.misc.Unsafe would create. Although still a draft document, "What to do about sun.misc.Unsafe?" is well worth reading. It includes a clear explanation of the uses to which sun.misc.Unsafe has been put over the years, suggestions for what should be done about it now, and a surprisingly (to me, anyway) long list of products that use it (JRuby, Grails, Scala, Akka, Hazelcast, Neo4j, Apache Spark and XRebel, to name a few).

Greg Luck, CTO at Hazelcast and co-author of the JCache spec (and JCP Executive Committee member), is a member of the working group. He learned in June that Oracle was considering removing or hiding sun.misc.Unsafe in Java 9. So-called unsafe code is sometimes required for low-level Java programming, Luck explained, where developers need to modify platform functionality for a specific purpose. Open source projects in particular use sun.misc.Unsafe as a Java Native Interface (JNI) workaround.

"It's not meant to be a standard part of Java, and yet it's built into every JDK, and everybody uses it," he said. "It's a genie that got out of the bottle."

Martijn Verburg, CEO of jClarity and co-leader of the London Java Users Group, is another member of the working group. The reason sun.misc.Unsafe can't simply be dumped, he told me in an e-mail, is that it provides a number of functionalities that aren't available through any of the standard classes in OpenJDK.

"[sun.misc.Unsafe] should be cleaned up and the safe parts should get standardized," Verburg said. "The rest should be removed! If you want to perform dangerous manual memory allocations, there are other languages for that."

Both Verburg and Luck praised Oracle's proposal to encapsulate unsupported, internal APIs, including sun.misc.Unsafe. Verburg called JEP 260 "a fantastic pragmatic compromise" that "clearly shows [that the] OpenJDK leadership and Oracle are willing to listen to the needs of the ecosystem."

The community seems to be heading toward a solution to the sun.misc.Unsafe problem, and I'm sure it's due, at least in part, to the efforts of Verburg, Luck, and their colleagues in the working group. But this internecine dustup also raises a question that has been lurking in the background since the formation of OpenJDK: Who really makes the decisions about the future of Java? OpenJDK is an open-source community, but unlike the JCP (and organizations like the Apache and Eclipse foundations), it's not vendor neutral. The main goal of the JEP Process, according to the OpenJDK Web site, is "to produce a regularly updated list of proposals to serve as the long-term Roadmap for JDK Release Projects and related efforts." The JEPs allow Oracle to develop small, targeted features for the Java language and virtual machine outside the JCP.

"Who's in charge of Java? That's a very complex [question]," said Verburg. "The reality is that Oracle has the loudest voice, but it's a heavy collaboration .... For the parts of OpenJDK that make up the Reference Implementation of Java, the JCP still has to approve."

The internal APIs Oracle has proposed to keep accessible in JDK 9 are listed on the JEP 260 page. Oracle welcomes suggested additions to the list "justified by real-world use cases and estimates of developer and end-user impact."

BTW: Another great source for understanding sun.misc.Unsafe is Rafael Winterhalter's January 2014 blog post, "Understanding sun.misc.Unsafe."

Posted by John K. Waters on August 7, 2015


Open Container Initiative Moving Fast

It's been almost exactly a month since a coalition of industry leaders and users joined forces to create the Open Container Project to establish common standards for software containers. Now known as the Open Container Initiative (OCI) (renamed to avoid confusion with another Linux Foundation project), the group has announced the availability for public scrutiny of a draft charter for the nascent organization and the addition of 14 new members.

You can tell the OCI has the potential to become a true standards body by the broad range of organizations it has brought together, not to mention the number of out-and-out rivals who've gotten on board. The list of founding members includes Docker, CoreOS, Amazon Web Services, Apcera, Cisco, EMC, Fujitsu, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, The Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat and VMware. The new membership roster includes AT&T, ClusterHQ, Datera, Kismatic, Kyup, Midokura, Nutanix, Oracle, Polyverse, Resin.io, Sysdig, SUSE, Twitter and Verizon.

The OCI was established under the auspices of The Linux Foundation, which also this week announced the formation of the Cloud Native Computing Foundation. Both groups are "collaborative projects," which means they are Linux Foundation sponsored, but independently supported.

The hopes of the backers of the OCI are summarized in the mission statement of the draft charter:

"The Open Container Initiative provides an open source, technical community, within which industry participants may easily contribute to building a vendor-neutral, portable and open specification and runtime that deliver on the promise of containers as a source of application portability backed by a certification program."

Just as interesting, I think, is what the OCI says it will not be doing:

"The Open Container Initiative does not seek to be a marketing organization, define a full stack or solution requirements, and shall strive to avoid standardizing technical areas undergoing signification innovation and debate."

The initiative was unveiled in June at DockerCon, and the latest news was announced this week at OSCON. Docker is making a big upfront donation to the OCI: a draft specification for the base format and runtime and the code associated with a reference implementation of that spec. The company is donating the entire contents of its libcontainer project and all modifications needed to make it run independently of Docker.

I had a chance to talk with two Docker Dudes (Dockeroids? Dockerettes? Dockerers?) about the new organization and its initial momentum.

"The number of members just about doubled in 30 days," said David Messina, Docker's vice president of marketing. "That's serious velocity, which I think speaks to the widespread interest in having a single, open container specification. But also notice the diversity of that membership. We have large software vendors, smaller software vendors, large Web-scale users and large enterprise players. Everybody in the industry wants a universal standard."

The OCI is making fast moves on the technical side, too. Patrick Chanezon, a member of the technical staff at Docker who has been working on the OCI, said we can expect a draft spec in just a few weeks.

"I've been involved in several standards projects over the years at Sun, Google, and Microsoft," Chanezon said, "and I've never seen an industry standard being elaborated so fast. In just six weeks [from the launch] we'll have a first draft of a spec for something that will be the basis for container based computing. To me that is a testament to the fact that a standard like this was needed to be able to innovate faster at the higher level, like orchestration and things like that."

High demand is one reason the draft spec is coming along so quickly, but it didn't hurt that the OCI launch was followed two days later by the Docker Contributor Summit. Many of the maintainers of libcontainer, which provides a standard interface for making containers inside an operating system, attended that event, as did members of the OCI working group. "We spent the whole day working together, with the result that the spec is in pretty good shape," Chanezon said.

The OCI working group's rapid progress also shows how effective a model that emphasizes lightweight governance and a focus on a discrete set of technologies, and nothing else, can be, Messina said. "This is what can happen when the organization gets out of the way of the maintainers," he said.

There are around 10 maintainers on the project right now, many of whom came from the libcontainer project, Chanezon said. "What we did was move libcontainer from the Docker GitHub organization to the Open Container organization, and all the maintainers came with it," he said. That group also includes people from Docker, Google, Red Hat, CoreOS and a few independents.

It's worth noting that libcontainer represents 5 percent of the Docker code base; Chanezon called it "the heart of Docker." runC is the reference implementation of the OCI spec, and Docker plans to use it as plumbing for creating its own containers.

Jim Zemlin, executive director of The Linux Foundation, has said that containers are revolutionizing the computing industry. Docker claims that containers based on Docker's image format have been downloaded more than 500 million times in the past year alone, and there are now more than 40,000 public projects based on the Docker format.

I asked Chanezon why we're seeing such intense interest in, and furious activity around, containers.

"When I give talks, I like to quote William Gibson, who said, 'The future is already here, it's just not evenly distributed,'" he said. "Right now, that future is getting evenly distributed, and that means that every organization on the planet is starting to build distributed applications. Docker arrived just at the right time to let them do that."

You gotta love a guy who can work in a quote from the author of the great cyberpunk novel, Neuromancer, and coiner of the term "cyberspace." (Not to mention one of my favorite writers.)

Posted by John K. Waters on July 24, 2015


Pivotal and Cloud Native Java

O'Reilly's annual Open Source Convention, better known as OSCON, is in full swing this week in Portland. Among the more joyful attendees at this year's event is James Watters, vice president of Product, Marketing and Ecosystem for Cloud Foundry at Pivotal. How do I know Watters is a happy camper? His latest blog post, in which, among other things, he enthuses about the dramatic uptick in conference sessions on microservices -- 30 this year, up from only one last year, which Pivotal presented.

"People are talking about writing apps in a new way," Watters said when I caught up with him on the phone. "And they're talking about using microservices and Spring Cloud to do it. I haven't seen that kind of excitement in the Java community to restructure these kinds of things in the enterprise space, maybe ever. So yeah, I'm kind of excited.

Pivotal recently released the beta of its Spring Cloud Services (1.0.0), which integrates the Cloud Foundry-based NetflixOSS microservices framework with Pivotal's Java-based Spring programming tools. The company plans to make Spring Cloud generally available in the fall. Between now and then, Pivotal will be adding distributed tracing to the framework via something called Spring Sleuth, Watters said.
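
For context, the unit all of this tooling revolves around is the Spring Boot microservice, which takes remarkably little code to stand up. Here's a minimal, hypothetical example (the class name and endpoint are mine, and it assumes the spring-boot-starter-web dependency is on the classpath):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    // A hypothetical single-endpoint microservice: @SpringBootApplication
    // turns on auto-configuration, and main() starts the embedded server.
    @SpringBootApplication
    @RestController
    public class GreetingService {

        @RequestMapping("/greeting")
        public String greeting() {
            return "Hello from a Spring Boot microservice";
        }

        public static void main(String[] args) {
            SpringApplication.run(GreetingService.class, args);
        }
    }

Spring Cloud's job is to layer the NetflixOSS patterns -- service discovery, circuit breakers, client-side load balancing -- on top of services like this one, which is what Pivotal is packaging up in Spring Cloud Services.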

At the SpringOne2GX conference in Washington this fall, Netflix is expected to talk about how it has begun to adopt Spring Cloud, Watters said. "Instead of configuring their apps in a complicated way, they're like, 'okay great, you wrote a wrapper for us? Cool, we'll just use your wrapper.' There's a virtuous feedback loop between the Spring team and Netflix team right now."

Watters describes himself as a lifetime enterprise Java guy (he's been working with it since high school) who worked at Sun Microsystems for about eight years. In his post, he claims (I think rightly) that Pivotal has been at the "intersection of microservices, continuous delivery and multi-cloud portability since being founded in 2013."

"There are two camps today," he told me, "people who are interested in different flavors of containers, and people like us, who are interested in building and running microservices apps. We have large companies asking us to come in and do two-day workshops on that. That's really where the excitement is right now."

Without a microservices architecture, container technologies aren't nearly as useful, Watters argued. "You can't run legacy monolithic Java, like Oracle and WebSphere, in that environment."

We're seeing a new wave, he wrote, that "fundamentally alters application architectures and workflows for developers and operators building the next generation of data-hungry, digital experiences." He's talking, of course, about what some people are calling the cloud native revolution. The Cloud Foundry open Platform-as-a-Service (PaaS) environment, of which Pivotal is the commercial maintainer, is a key enabling technology of that revolution.

"After our Spring Cloud product manager, Matt Stine, published "Migrating to Cloud-Native Application Architectures" for O'Reilly, we were just overwhelmed with requests from enterprises for workshops," Watters said. "We can't keep Matt off the road."

Cloud Native Java is especially appealing to enterprises that are looking to modernize their architectures, Watters said, because it leverages existing skill sets and allows for integration with legacy apps. From their perspective, he said, it's an "evolutionary approach."

In his post, Watters points to some telling successes of Pivotal's Cloud Native enterprise products. The company showcased 10 Fortune 500 companies at the recent Cloud Foundry Summit, all of which worked with the company on projects based on Cloud Foundry and Spring technologies. And downloads of Pivotal's Spring Boot rapid application development framework have gone through the roof. (More than 1.4 million downloads per month over the last year, he said. There's a graph.)

There's a lot more in Watters' post, which is well worth reading.

Posted by John K. Waters on July 22, 2015


Oracle v. Google: Now the Fair Use Argument for Java APIs

Now that the Supreme Court has decided not to review Oracle America Inc. v. Google Inc., the long-running lawsuit returns to the U.S. District Court for the Northern District of California in San Francisco, where Google will have a chance to argue that its use of 37 Java APIs -- now considered copyrightable because of the Supreme Court's pass -- in its Android operating system falls under the doctrine of fair use.

Oracle has won a significant argument here, but not the lawsuit. You could say that Google has a Plan B. But what exactly is "fair use," and how do you prove it in court?

The U.S. Copyright Office defines fair use as "a legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances." Federal courts decide fair use issues using four criteria:

  • the purpose and character of the use (is it commercial, nonprofit, educational, etc.)
  • the nature of the copyrighted work (is it a novel, movie, song, technical article, news item)
  • the amount and "substantiality" of the portion used (how much of it was used and was that the "heart" of the work)
  • the effect of the use upon the potential market value of the work

There's also the question of whether the use was "transformative." Transformative uses, the Copyright Office says, "are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work."

"Fair use is a fact-specific inquiry," explained attorney Case Collard via e-mail. "It depends on what the item is that is copyrighted and how the entity claiming fair use is using it."

I reached out to Collard, a partner at Dorsey & Whitney, who specializes in intellectual property disputes and developing strategies for safeguarding intellectual property rights, to get his take on the latest development in the Big O versus Big G saga. He said the Federal Circuit's decision, which will now stand, laid out something of a road map for how Google might apply a fair use argument.

"In my opinion, the biggest problem for Google is the commercial nature of its use [of the APIs]," he said. "That is generally a strike against finding fair use. Its best argument is probably interoperability -- in other words, it should be fair use because Google must use the APIs in order to make its products interoperable."

Both the Federal Circuit and the White House recognized that Google was entitled to a fair-use defense. At the high court's request, the U.S. Solicitor General actually weighed in with an amicus curiae brief.

"Petitioner argues that its copying of respondent's code promoted innovation by enabling programmers to switch more easily to another platform," he wrote. "But it is the function of the fair-use doctrine... to identify circumstances in which the unauthorized use of copyrighted material will promote rather than disserve the purposes of the copyright laws." And he concluded: "Although petitioner has raised important concerns about the effects that enforcing respondent's copyright could have on software development, those concerns are better addressed through petitioner's fair-use defense…"

But the legal eagles at the Electronic Frontier Foundation (EFF), a California-based international nonprofit that advocates for digital rights, argue that fair use should not be the only defense against API copyright claims.

"Fair use is a complex and potentially expensive defense to develop and litigate," EFF legal director Corynne McSherry and special counsel Michael Barclay wrote in a blog post. "While Google has the financial resources to take that defense to trial, few start-ups have the ability to do so. The Federal Circuit's decision thus could deter new companies from competing with a large, litigious competitor by using the latter's APIs..."

The EFF is one of the staunchest opponents of API copyright. In an amicus brief filed in support of Google last year on behalf of 77 computer scientists, the organization articulated some widely held fears about the consequences of the appeals court's decision that the APIs are protected under U.S. copyright law. "The Federal Circuit's decision poses a significant threat to the technology sector and to the public," the brief stated. "If it is allowed to stand, Oracle and others will have an unprecedented and dangerous power over the future of innovation. API creators would have veto rights over any developer who wants to create a compatible program -- regardless of whether she copies any literal code from the original API implementation. That, in turn, would upset the settled business practices that have enabled the American computer industry to flourish, and choke off many of the system's benefits to consumers."

IDC analyst Al Hilwa is less apprehensive about the potential impact of API copyright.

"The impact will be felt in various ways," Hilwa told me. "APIs are likely to be more explicitly associated with terms of use, for example, and potentially with more lawsuits relating to interoperability. But it also means that developers wanting to bring alternative implementations of a system may choose to be less imitative of the behavior of the system, and more innovative by creating entirely different competing systems. I think we just have to wait and see how it plays out."

"In the end, it may not matter to developers much whether APIs are copyrightable, if (big if) they can be used under the fair use doctrine," Collard said. "In other words, after this is all said and done, if the fair use doctrine allows developers to use APIs without fear of a lawsuit, then it would have a very similar practical effect."

"Fair use" is codified in the U.S. in section 107 of the Copyright Act of 1976.

Posted by John K. Waters on July 8, 2015


VMware: Making the Developer a First-Class Datacenter User

Among the more interesting vendor announcements at last week's DockerCon was VMware's preview of two new products: AppCatalyst and Project Bonneville. Both are emblematic of VMware's newly amped-up effort to, as Kit Colbert, vice president and CTO of VMware's Cloud-Native Applications group, put it, "make the developer a first-class user of the datacenter through our cloud-native applications."

Colbert gave me a preview of the previews before the show, and explained why the server virtualization giant is pulling out all the stops to create developer-friendly tools.

"We all know that all companies are a becoming more like software companies, in the sense that software is the means by which they engage with users," he said. "IT is now less about minimizing costs and more about driving innovation and differentiation. Consequently, there has been this renewed focus on developers within enterprises and how to empower them, which will drive that business agility and velocity companies are looking for."

VMware responded to that trend with the launch of its Cloud-Native Applications group back in April, along with Project Photon, a lightweight Linux distro optimized for cloud-native apps, and Project Lightwave, an open source identity and access management solution for containers.

The group showcased its two latest projects at the Docker event in San Francisco. AppCatalyst is a desktop hypervisor aimed specifically at developers. Driven by a REST API and a Command Line Interface (CLI), it's designed for Linux container development (Docker is fundamentally a Linux technology) by devs working on Macs. It supports Docker Machine, integrates with HashiCorp Vagrant, and ships with Photon.

"We wanted to provide developers with an easy-to-use engine to run their applications, but also to optimize it so they can speed up the local build/test/run/debug cycle," Colbert said. "It's like a datacenter on their laptops."

Project Bonneville is a nascent native container solution for VMware's hypervisor. It's a Docker runtime that will allow users to create containers directly on VMware's ESXi bare-metal hypervisor via the Docker API. The project aims to enable the seamless integration of Docker containers into the vSphere server virtualization platform -- to, as the company says, "bring the VMware ecosystem to Docker containers."

"Developers are flocking to Docker," Colbert said. "It has a lot of momentum. The question for us is, how do we get the ease, speed, and flexibility of the Docker API mapped onto vSphere and give those containers the same level of management and monitoring that the VM infrastructure has today."

Ben Corrie, principal investigator on Project Bonneville, offers a great explanation of the project's approach in a company blog post: "... The pure approach Bonneville takes is that the container is a VM, and the VM is a container. There is no distinction, no encapsulation, and no in-guest virtualization. All of the necessary container infrastructure is outside of the VM in the container host. The container is an x86 hardware virtualized VM -- nothing more, nothing less."

"What this means to a developer," Colbert said, "is that ESX will look like a Docker host, indistinguishable from any other Docker host."

Bonneville relies on Instant Clone, a new feature in vSphere 6 that clones a running VM, making it possible to get a new VM booted and running in less than a second, Colbert said.

Although the focus in the next-gen-app world is around Linux, Bonneville is being designed to run Docker containers on any OS. During a recent internal hackathon, Colbert said, some creative VMwarians used a vanilla Docker client to pull an image of the old-school Lemmings game and run it on MS-DOS 6.22.

"They were just having fun with it, but I think it's a great proof point of the generalization of the technology," Colbert said.

AppCatalyst was released as a technology preview at DockerCon, and it's available for download from VMware's Web site. VMware expects to make it generally available later this year. The company is currently distributing Project Bonneville internally and expects to begin private beta testing in the third quarter of this year.

Posted by John K. Waters on July 6, 2015


GitHub Announces Atom 1.0

It took 18 months, 155 releases, and the efforts of hundreds of contributors to get here, but version 1.0 of GitHub's Atom text editor is now available. First released to open source in May 2014, Atom is a customizable, cross-platform text editor built with HTML, JavaScript, CSS and Node.js integration. It runs on the Electron framework, and it works on OS X, Windows or Linux.

It's an understatement to say that this "hackable text editor for the 21st century" has proved to be popular. Since it was released last year, Atom has been downloaded 1.3 million times, GitHub says, and it now has 350,000 monthly active users. That sizeable community has to date created 660 themes for the editor and 2,090 packages. And some big names have added Atom to their enterprise tool belts, including Facebook, which based its new, open source Nuclide IDE on Atom.

What makes Atom such a great innovation for developers? Let's start with the "hackable" part.

"Your dream editor and my dream editor are not the same thing," GitHub senior engineer Ben Ogle, a core engineer on the Atom project, told me. "I like a dark theme; you like a light theme. I write front-end code for websites; you write system code. We should not have to use the same editor. What we want at GitHub, and what Atom gives you, is total control over the editor so you can make it your dream editor."

In other words, developers can tweak Atom's look and add features that suit their individual needs.

"We want you to feel empowered to dig in," Ogle said. "That's why we built Atom on familiar technologies. You won't need to learn something new like you would if you were to, say, extend Emacs. "With Atom, you can use the knowledge you already have."

And the "21st century" part?

I think that was best explained to me last year by Nathan Sobo, a founding member of the Atom team.

"Now that we're in this polyglot world, you'll notice that whenever a new programming language starts to emerge, the first tools available for it are always Emacs and Vim," Sobo said. "It always starts with this very general purpose editor that someone has extended to make themselves more productive in this environment. So we developed Atom is to provide a tool that accelerates that process. A new language comes along and very quickly people can build fantastic tooling around it without having to wait for some business to get started that needs a guaranteed capital flow to build a customized product around that language."

Atom is the brainchild of GitHub founder Chris Wanstrath, who, the story goes, began experimenting with a desktop editor based on Web technologies back in 2008. He called it "Atomicity," and worked on it as a side project, until it was shelved in 2009 while he focused on the launch of GitHub.com. Wanstrath later revived the project, which evolved into Atom.

GitHub looks at Atom 1.0 as a foundational release that will support a burgeoning community, Ogle said. "We focused on the core editing experience and modularity [in Atom 1.0]," he said. "Now we have this giant community around us, with tons of core contributors. Lots of them have push access, but don't work at GitHub. It's getting to the point where we're really just shepherding the community."

How does Atom fit into GitHub's overall social coding mission?

"It's called social coding, but what that means is that our mission is to help people work better together," Ogle said.

"Atom is part of that mission, long term," Ogle said. "We're defining the base with this release, but down the road we will be asking, what does it mean to have social coding in your editor? Editors are, historically, very individual things with no social component. What we're thinking about is how we might bring the social ideas from GitHub into your editor."

And in case you're thinking that the release of a new text editor, no matter how "hackable" and "21st century" it might be, is small potatoes, consider this insight from my interview with Sobo: "There is no more personal relationship that a programmer has to anything in his or her career than to their text editor," he said. "It's literally in the muscles of your hands! Even as you're crossing programming languages, the text editor is the one thing that can go with a developer for their entire career."

Ogle put together plenty of details about the Atom 1.0 release, including lots of links and a more complete history, in a great post on the GitHub blog.

Posted by John K. Waters on June 26, 2015


Onno Kluyt on Java at 20

I knocked on quite a few doors last month, looking for Java mavens to talk with about the language on its 20th birthday. Lots of people got back to me (I think they got tired of the banging), and I heard some great stories. But I was surprised that, to my first question -- "What has been the most significant change in the Java language and/or platform in the past 20 years?" -- no one answered, "Open sourcing Java." It's probably the way I phrased the question, but I remember Java jocks clamoring back in the day for Sun to release their beloved language under an open source license.

Onno Kluyt, who chaired the Java Community Process (JCP) from 2002 until he stepped down in July 2006, helped build the OpenJDK community and, as he puts it, "held the JCP together while we were doing the open sourcing and building that other community." I asked him the question I should have asked: "How important was the open sourcing of Java?"

"Looking back on it, it was too little too late," Kluyt told me via e-mail. "Linux was already very well established, and Android had happened. If Sun had open sourced Java two or three years earlier, some of the history might have played out differently. But the Microsoft lawsuits made that timing impossible."

You might remember that Sun sued Microsoft for $35 million in 1997, claiming that Microsoft breached its contract by trying to extend Java so it would work differently -- and, MSFT argued, perform better -- on Windows computers. They didn't settle until 2003. Three years later, Sun released quite a bit of Java under the GNU General Public License (GPL), and a year after that, finished the job.

You might also remember that Kluyt took some heat in 2003 for asking the community, "What do you think [the open sourcing of Java] does that people can't do today?"

"There were a lot of misconceptions about what Sun's license for Java before the open sourcing (SCSL) allowed or didn't allowed you to do," Kluyt explained. "It was a lot of more open and lenient than most developers were aware of. And so I asked that question a few times during developer events to get a discussion going about what developers felt they needed to do with the code base, what they wanted to do, and of those things what they believed they couldn't do now. To some extent Sun's open sourcing of Java was a symbolic act. It didn't really mean a change of heart about code contributions from the outside [or] a loosening of its grip on the core APIs. Put the code base under a well-known free and open source license and move on."

So, what was the most significant change?

"Over this time span that is a little difficult to answer," Kluyt said. "I would probably pick the HotSpot VM technology and the concurrency APIs, which together gave Java near-native performance and enabled large-scale, real-world deployments. But there are so many others: generics, closures, the added byte code making it much easier for other programming languages (Scala for example) to run on top of the JVM, servlets, JSPs.

What is it about the JCP that has allowed it to continue supporting Java in all its forms?

"Internally we often paraphrased Churchill: it's the worst kind of governance except for all the alternatives," he said. "Java has one inventor and one owner: first Sun and now Oracle. But it has interest from companies large and small beyond that one actor. And there were and are many great opinions and expertise outside Sun/Oracle on how to evolve it. In the JCP, Sun found a tolerable way to allow that outside influence, while keeping its seat at the head of the table. Sun could not, and now Oracle cannot, push through Java changes without some decent support from its competitors, and conversely, those competitors cannot push through significant change without some buy-in from Oracle. So both sides need each other. It came close to blowing up about two or three times but in the end: see Churchill's quote."

What is it about Java, the language, that has allowed it to evolve and thrive all these years?

"One half is the language; the other half is the platform, the virtual machine," he said. "Java was the first well-adopted language that had security and networking built in, that had a memory management model that shielded developers. Its syntax was easy to learn for C/C++ developers, its OOP concepts were easier to grasp than Smalltalk, and it made supporting multiple platforms significantly better than anything else around. Sun was also luckily with the adoption of Java in that its timing was great; the World Wide Web was just emerging and Java's characteristics happened to lend itself very well for that."

How important was the development of the Java platform?

"Maybe I'll answer it this way," he said. "No Java, no Android."

Posted by John K. Waters on June 9, 2015