It's been seven years since a group of software security mavens set out to create a "fact-based" set of best practices for developing and growing an enterprise-wide software security program. That set of practices, known today as the Building Security In Maturity Model (BSIMM), was the first maturity model for security initiatives created entirely from real-world data.
"Our goal was to build an empirical model for software security based on real, observed practices," Gary McGraw, CTO of app security firm Cigital and co-author of the original BSIMM, told me at the time. "We believe that the time has come to put away the bug-parade boogey man, the top-twenty-five tea leaves, the black-box web-app goat sacrifice, and the occult reading of pen-testing entrails. This is an entirely data-driven model. If we didn't observe an activity, it didn't get into the model."
This week, McGraw and co-authors Sammy Migues, principal at Cigital, and Jacob West, chief architect at cloud-based business management software provider NetSuite, released BSIMM6. The latest edition was based on the real-world security initiatives reported by 78 companies, including Adobe Systems, Bank of America, Box, EMC, LinkedIn, PayPal, Salesforce, The Home Depot and VMware. The number of participating companies has grown every year since the first edition, based on studies of nine software security initiatives, was published in 2008.
More companies actually participated this year, but the authors chose to focus on firms whose data were no more than 42 months old, McGraw told me. This year's model also includes data from two new verticals: healthcare and consumer electronics. (The other two are financial services and independent software vendors.) The inclusion of data from healthcare organizations was especially timely, following the recent Anthem and UCLA Health data breaches.
"It was interesting to expand the study into health care," McGraw said. "And it's pretty clear that vertical as a whole has some work to do, though there are some very good outliers in the population." McGraw points to managed health care company Aetna and its chief information security officer, Jim Routh, as an example of such an outlier.
"BSIMM continues to be the authoritative source of observed practices and activities from the most mature software security programs across industries," Routh said in a statement, "and BSIMM6 offers excellent trend analysis compared with past data points indicating the evolution of software security maturity." Routh also serves as chairman of NH-ISAC, which is a nonprofit organization responsible for cyber security in the healthcare sector.
A "maturity model" describes the capability of an organization's processes in a range of areas, from software engineering to personnel management. The Capability Maturity Model (CMM) is a well-known example from software engineering. The BSIMM (pronounced "bee-simm") serves as a kind of measuring stick, its authors say, which is best used "to compare and contrast your own initiative with the data about what other organizations are doing contained in the model."
The BSIMM is organized into a software security framework that comprises a set of 112 activities grouped under four domains:
- Governance, which includes practices that help organize, manage and measure a software security initiative. Staff development is also a central governance practice.
- Intelligence, which includes practices that result in collections of corporate knowledge used in carrying out software security activities throughout the organization. Collections include both proactive security guidance and organizational threat modeling.
- SSDL Touchpoints, which includes Software Security Development Lifecycle practices associated with analysis and assurance of particular software development artifacts and processes. All software security methodologies include these practices.
- Deployment, which includes practices that interface with traditional network security and software maintenance organizations. Software configuration, maintenance and other environment issues have direct impact on software security.
The data in this and other BSIMM releases shows that highly mature initiatives are "well-rounded" and carry out these 12 core activities:
- Identifying gate locations and gathering necessary artifacts.
- Identifying PII (personally identifiable information) obligations.
- Providing awareness training.
- Creating a data classification scheme and inventory.
- Building and publishing security features.
- Creating security standards.
- Performing security feature review.
- Using automated tools along with manual review.
- Driving tests with security requirements and security features.
- Using external penetration testers to find problems.
- Ensuring host and network security basics are in place.
- Identifying software bugs found in operations monitoring and feeding them back to development.
"We're getting to the stage in the model where, now that we have 29 times more data than we started with, we understand what firms should be doing," McGraw said. "That's the good news; the bad news is, not everybody is doing it yet. We've measured 104 firms with the BSIMM, but there are a lot more companies out there than that."
The BSIMM is a useful reflection of the current state of software security initiatives in the enterprise, and, given how hard it can be to get any organization to communicate honestly about its security practices, something of a miracle. As McGraw likes to say, it was a science experiment that escaped the test tube to become a de facto standard.
"That's very gratifying, personally," McGraw said, "but the important thing is the emphasis here of real data, and the use of facts in computer security. I think we've finally moved past the witchdoctor days in software security."
There's much, much more in the free report, which I consider a must-read. Also, Cigital has scheduled a BSIMM6 webinar for Tuesday, Nov. 10, which organizers say will cover how companies can apply the BSIMM information to their security programs. The event will be led by Cigital's Paco Hope.
Posted by John K. Waters on 10/20/2015 at 1:44 PM
GitHub last week announced a new partnership with Yubico to expand its authentication system, unveiled a new directory of integrated applications, and made an extension for large binary files available on all repositories on GitHub.com.
CEO and co-founder Chris Wanstrath made the Yubico partnership announcement at his company's GitHub Universe event in San Francisco on Thursday. Yubico is a co-creator (with Google) of the Universal 2nd Factor (U2F) open authentication standard hosted by the FIDO (Fast IDentity Online) Alliance. U2F relies on USB-like tokens that generate login codes unique to the users and the applications being accessed. Yubico makes the tokens, and the company's CEO and founder, Stina Ehrensvard, was on hand at the event to give away about 1,000 of them to attendees.
GitHub isn't the first site to support this kind of two-factor authentication. Both Google and Dropbox have partnered with Yubico, and GitLab announced in May the addition of two-factor authentication to its open-source GitHub alternative. GitHub has, in fact, been using YubiKey internally since December 2014, said GitHub app security engineer Ben Toews in an e-mail, and began using FIDO-approved U2F YubiKeys internally in June 2015.
I talked with Sam Lambert, GitHub's director of systems, during the GitHub Universe event about the Yubico partnership. The decision by Dropbox, Google, and others to adopt FIDO U2F publicly, he said, helped to validate the standard. He described the partnership as "hugely important" and explained how GitHub is now encouraging developers who use the popular social coding site to build FIDO U2F support into their own applications.
GitHub's expansion of its authentication system is likely to do two things: further the adoption of the FIDO U2F standard among developers and make GitHub look more enterprise-friendly. The social coding site claims 11 million registered users and more than 36 million unique visitors every month, so its potential influence is clear. But providing U2F security also means that more enterprise dev teams will be able to convince management that GitHub is a safe place.
"The truth is, we're making great strides in the enterprise," Lambert said. "Look around [the conference] and you'll see GE, John Deere, Ford Motor Company, Pixar, Etsy -- even NASA uses GitHub. Thousands of developers are working on single GitHub Enterprise instances. And there's more to come for the enterprise next year."
GitHub's decision to support the integration of large binary files into Git workflows is another enterprise-friendly move, and one that addresses a well-known weakness. "Distributed version control systems like Git have enabled new and powerful workflows," writes the mysterious Technoweenie on the company's blog, "but they haven't always been practical for versioning large files." Git LFS (Large File Storage) is an open source extension that replaces large files (video, big datasets, graphics) with text pointers within Git; the contents of the files are stored on remote servers, such as GitHub.com or GitHub Enterprise. Git LFS was actually released to early adopters in April; version 1.0 is now generally available.
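For a sense of what that replacement looks like, here's the shape of a Git LFS pointer file as laid out in the LFS specification: a few lines of plain text that get versioned in the repository in place of the real file (the oid and size values here are purely illustrative).

```
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345
```

Git itself only ever sees this small pointer; the LFS extension intercepts checkouts and pushes to swap the actual content in and out of the remote store.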
GitHub also unveiled its new Integrations Directory, which showcases a bunch of developer tools that work with GitHub. It's not as enterprise-oriented as the other two announcements, but it's a very cool resource.
"The Integrations Directory exposes a nicely curated set of applications that are really well integrated with GitHub -- some of them with literally one-click functionality added straight through into your organization's application," Lambert said.
As the less mysterious Kyle Daigle writes in another company blog post, the directory is a collection of developer tools that integrate with GitHub and "let you build sophisticated ChatOps workflows, allow you to deploy software directly from your GitHub repositories, and make it easy to track analytics, customer feedback, performance issues, and runtime errors back to a line of code and the context for code changes."
Since it was launched in 2008, GitHub, which is based on the Git distributed version-control system developed by Linux kernel creator Linus Torvalds, has become one of the world's most popular social coding sites. The service has enjoyed endorsements from the likes of the Eclipse Foundation, which allows its projects to be hosted on GitHub as a way to attract new and maturing projects.
Posted by John K. Waters on 10/06/2015 at 5:44 PM
The first early access builds of JDK 9 with Project Jigsaw, the initiative that's bringing modularity to the Java platform, are now available for download. Before you jump in, you should definitely read Mark Reinhold's rich and readable "The State of the Module System," which he published online earlier this month. The chief architect of Oracle's Java Platform Group calls it "an informal overview of enhancements to the Java SE Platform prototyped in Project Jigsaw and proposed as the starting point for JSR 376."
JSR 376 is, of course, the Java Specification Request that aims to define "an approachable yet scalable module system for the Java Platform." But Project Jigsaw actually comprises JSR 376 and four JEPs (JDK Enhancement Proposals), which are sort of like JSRs that allow Oracle to develop small, targeted features for the Java language and virtual machine outside the Java Community Process (JCP). (The JCP requires full JSRs.)
- JEP 200: The Modular JDK defines a modular structure for the JDK. Reinhold has described it as an "umbrella for all the rest of them."
- JEP 201: Modular Source Code reorganizes the JDK source code into modules.
- JEP 220: Modular Run-Time Images restructures the JDK and JRE run-time images to accommodate modules.
- JEP 260: Encapsulate Most Internal APIs, recently proposed by Reinhold (and one I wrote about earlier), would encapsulate unsupported, internal APIs, including sun.misc.Unsafe, within the modules that define and use them.
The early access builds of Java 9 with Jigsaw include the latest prototype implementation of JSR 376 and the JDK-specific APIs and tools described in JEP 261, which will actually implement the changes and extensions to the Java programming language, JVM and standard Java APIs proposed by the JSR.
In his "state of" report, Reinhold provides a nuts-and-bolts breakdown of modularization, from the essential goals of the JSR, to detailed descriptions of modularization in the context of Java -- everything from "modules," "module artifacts" and "module descriptors" to the concepts of "readability," "accessibility" and "reflection."
In his conclusion, he writes:
"The module system described here has many facets, but most developers will only need to use some of them on a regular basis. We expect the basic concepts of module declarations, modular JAR files, module graphs, module paths, and unnamed modules to become reasonably familiar to most Java developers in the coming years. The more advanced features of qualified exports, increasing readability, and layers will, by contrast, be needed by relatively few."
He labeled the post "Initial Edition," so I'm expecting updates. I'd keep an eye out.
The long-awaited, much-delayed modularization of Java is going to be the biggest change since Java 8's support of lambdas. In a recent ADTmag post ("Is Oracle Dumping Its Java Evangelists?"), I followed up on a tweet by Gartner analyst Eric Knipp, who called Java a "dead platform." He made the case to me that Java is no longer the default choice for greenfield applications, and that change augurs its eventual demise. But I ran into Forrester analyst John R. Rymer at the recent Dreamforce event, and he posed the question, given all the recent and coming changes to Java -- especially modularization -- is it really the same language?
Posted by John K. Waters on 09/23/2015 at 9:21 AM
Nginx Inc., the commercial provider of one of the most popular open-source Web servers, has released a new version of its namesake product with a fully supported implementation of the new HTTP/2 standard. Nginx Plus R7, available now, comes with the promise of an easier transition to the new standard, along with new performance and security enhancements.
This release actually comes with a bunch of improvements -- things like support for thread pools and asynchronous I/O; support for socket sharding optimizations to increase performance on multicore servers; new access controls and connection limits for TCP services; and a new "live activity monitoring" dashboard. But it's the HTTP/2 support that'll be getting the most attention.
HTTP/2 is the second major version of the Hypertext Transfer Protocol Web standard -- the mechanism used by Web browsers to request information from a server and display Web pages on a screen -- and the first since HTTP/1.1 was approved in 1999. It's based on Google's SPDY (pronounced "speedy") open networking protocol. Nginx has been involved in the effort to develop the updated standard over the past two years, and Google has said that it will deprecate SPDY in favor of HTTP/2 in its Chrome browser this year.
HTTP/2 is going to provide big performance and security improvements with things like multiplexing, header compression, prioritization, and protocol negotiation, but it's still a challenge for many Web sites to support the standard. What the company has created with Nginx Plus R7 is a kind of front-end HTTP/2 gateway and accelerator for new and existing Web services, Owen Garrett, the head of products, told me.
"This is a way you can deploy Nginx Plus in front of your applications and publish those applications using HTTP/2," he said. "It's an easy and powerful way to adopt this new Web standard."
This is all about making it possible for Web sites to operate at scale, Garrett said. "You can deploy a Web site on standard hardware and software and it can handle a handful of users, but in order for that Web site to be successful, it's got to be able to handle hundreds, thousands, even millions of users. And that's what we're doing. We transform a simple but rich Web site into something that can handle phenomenal amounts of traffic."
The popularity of Nginx has exploded in recent years. There's a reason Garrett and his colleague, Peter Guagenti, vice president of marketing, called it "the heart of the modern Web." The Apache Web server has been around since the mid-90s and it's probably more widely deployed than Nginx. But in a recent breakdown of Web server usage by analysts at W3Techs, Apache was used by 56.5 percent of all Web sites (with a known server) and 26.8 percent of the most heavily trafficked 1,000 sites; while Nginx was used in 25.4 percent of all Web sites and in 44.4 percent of the top 1,000. (W3Techs uses data from Web traffic tracker Alexa for its Web site ranking.)
"It all depends on how you slice the data," Guagenti said. "Nginx has been the only Web server on the market that has been growing in the last two years. We've been gaining a basis point every week or so, and we expect to surpass Apache in the top 100,000 sometime this week."
The modern Web is all about performance, Guagenti said, and that's why Nginx has become so popular.
"There has never been a focus on performance like we've seen in the past few years," he said. "Performance is everything. Milliseconds of latency costs thousands to millions of dollars in e-commerce. Milliseconds of latency mean you use one app over another on your phone, that you switch to a different media site to read the same article."
Guagenti also argued that the rise of Nginx has been driven, at least in part, by the DevOps movement. "People now expect a certain level of control and configurability over their entire stack," he said, "which they get from Nginx, but not so much from other tool chains."
The list of Web sites currently using Nginx offers a peek into Nginx's future: Airbnb, Box, Instagram, Netflix, Pinterest, SoundCloud and Zappos, among others.
"We call ourselves 'the secret heart of the modern Web,'" said Garrett, "but we're not a secret to developers. We're growing this fast because of the grass roots movement among our open source and commercial users."
Nginx is sponsoring a three-day conference in San Francisco this week. Looks like a lot of hands-on training, strategic sessions, and some rock-star speakers.
Posted by John K. Waters on 09/21/2015 at 10:40 AM
The rumors are flying about the fate of some of Oracle's top Java evangelists, thanks to a tweet and a Reddit thread picked up by the press last week. These rumors follow hot on the heels of the departure last month of Cameron Purdy, who served as senior vice president of Oracle's Cloud Application Foundation and Java EE group.
The Reddit discussion grew from a comment citing a Facebook post by Simon Ritter, evangelist on Oracle's Cloud Development team, which read:
"I've heard it said that you should try something new every day. Yesterday I thought I'd see what it was like to be made redundant. One month of 'consultation' and then I'll be joining the ranks of the unemployed claiming my job seekers allowance. To be fair, I was expecting this, but feel bad for the numerous other people on my team whom I don't think saw this coming...."
A number of names of the newly departed or soon-to-be-departing emerged during the Reddit discussion. I wasn't able to talk with them -- and Oracle isn't commenting -- so I won't post their names here. (But you can see them in the thread.) I was, however, able to connect with jClarity co-founder and CTO Kirk Pepperdine, who posted the tweet that set all this off.
I caught up with Pepperdine via e-mail. "I only stated what was pretty much public knowledge at [the time] it was tweeted," he told me. "I'm a little surprised that it's taken off as it has."
Pepperdine said he caught a hint that something was up in July at his company's annual jCrete conference. jCrete is an invitation-only, think-tank event that typically draws about 75 people. One of the sessions was on the end of the Java evangelism team and some thoughts on what direction Oracle is taking. "My understanding was Java evangelism was to become cloud evangelism," he said. "I didn't expect that people would be let go. My guess is that they were on a round of cutbacks, and evangelism is a soft target."
Pepperdine believes that Oracle has been good for Java in general, but at moments like this, it's clear that its interests don't always coincide with the interests of the Java community. "Oracle is a top-down CCC organization that is very much focused on the bottom line," he said. "The reality is, making money from core Java is plain difficult. Supporting core Java is very expensive. Making moves without properly priming the community has always been a problem in that it inevitably turns out to be a PR disaster. And that is a shame, because on the whole, Oracle has been a great steward of Java ...."
"This move away from evangelism appears to be an attempt to refocus the business people," he added. "However, Java didn't become a pervasive technology because of business people, it became the platform of choice because of developers."
Pepperdine's tweet generated a lively conversation about the health of Java. Among the many comments was this one from Gartner Inc. analyst Eric Knipp:
"This one actually makes sense. Why promote a dead platform?"
I asked Knipp what he meant by that. "I look at it like this," he explained in an e-mail. "The platforms that dominate greenfield application today, will be the dominant platforms of tomorrow. The majority of application development occurs in the creation of packaged software (and then the technologies from the software 'as a product' world move into the enterprise). Packaged software is in transition from COTS [commercial off-the-shelf] products to SaaS [Software as a Service]. This transition will take some time, but I don't think anyone can argue that it isn't happening. For many years, the default choice for new packaged software was the Java platform. Java is no longer the default choice, and hasn't been for at least five years. In fact, I'd argue that today Java isn't even the dominant choice -- that mantle is moving to other runtimes more suitable for massively distributed cloud-native architectures, like Node.js, Go, Erlang and so on.
"So if you come back to my original point -- platforms that dominate greenfield today will be the vibrant 'winning platforms' of tomorrow -- it ought to be concerning to Oracle (and Java enthusiasts in general) that its platform is no longer dominant. That portends the death of the platform in terms of relevancy to enterprise IT. Would it be more accurate to say 'Java is dying a slow death' or 'Java is the new COBOL?' Maybe, but the gist is the same."
Pepperdine's partner, Martijn Verburg, CEO of jClarity and co-leader of the London Java Users Group, argues that evangelists still play an important role in the Java ecosystem. He listed his reasons, which included, among others:
- Shifting customers that run on Java enterprise solutions in-house to Oracle Cloud means getting Java developers on board. No evangelists? Can't do that as easily.
- Oracle cloud middleware, and so on, has a strong Java core and customers need to understand the how, what, when and why of that.
- Emerging markets have millions of developers who can be influenced to go down a certain ecosystem. Oracle potentially will lose out on having any good will with the millions of new developers arriving in China, India, South America, Africa and so on.
- This undoes a lot of the good work Oracle had done with the existing Java community, many of whom are paying customers. It was a long slog to get the two sides to see eye to eye and work together, and this move brings back old fears and doubts.
For what it's worth, this looks like cost-cutting to me. Oracle hasn't exactly been killing it lately, and as Pepperdine said, evangelists are a soft target. And maybe Java no longer needs an army of preachers spreading the gospel.
Posted by John K. Waters on 09/09/2015 at 3:04 PM
The Java and Microsoft .NET Framework interoperability mavens at JNBridge have upgraded their flagship JNBridgePro tool to support both Windows 10 and Visual Studio 2015. That was to be expected from the guys who have been helping to build bridges between "anything Java and anything .NET" since 2001. What stood out in this release for me was the new "Proxy By Name" feature, which was much requested by JNBridge users, company CTO Wayne Citrin told me.
"Our users like the fact that they can use proxies in Visual Studio and Eclipse, etc., but don't like the parameter placeholder names they get when IntelliSense pops up," Citrin said. "They really wanted to see the names of the original parameters, which are generally in the metadata of the underlying binaries."
Simple, right? Except traditionally that metadata hasn't been so easily extracted from Java. Enter Java 8 and the Java Reflection API, which allows for the extraction of that parameter info. "It seemed like the time was right to add this very often requested feature," Citrin said.
As the Oracle doc page describes it, the Reflection API "enables Java code to discover information about the fields, methods and constructors of loaded classes, and to use reflected fields, methods, and constructors to operate on their underlying counterparts, within security restrictions. The API accommodates applications that need access to either the public members of a target object (based on its runtime class) or the members declared by a given class. It also allows programs to suppress default reflective access control."
Proxy By Name maps the names of the underlying parameters of methods when generating proxies so that the parameters of the proxied methods have the same names as the parameters in the underlying methods. The result: Developers can better understand how the proxied methods should be used.
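Here's a minimal sketch of the Java 8 mechanism that makes this kind of name recovery possible: the java.lang.reflect.Parameter API, which reports real parameter names when the target code was compiled with the -parameters flag (the class and method here are hypothetical, and this illustrates the underlying API rather than JNBridgePro itself):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParameterNamesDemo {

    // A hypothetical method whose parameter names we want to recover.
    public static void transfer(String accountId, long amountCents) { }

    public static void main(String[] args) throws Exception {
        Method m = ParameterNamesDemo.class.getMethod(
                "transfer", String.class, long.class);
        for (Parameter p : m.getParameters()) {
            // isNamePresent() returns true only if the class was
            // compiled with: javac -parameters ParameterNamesDemo.java
            System.out.printf("%s %s (real name present: %b)%n",
                    p.getType().getSimpleName(), p.getName(), p.isNamePresent());
        }
    }
}
```

Without the -parameters flag, getName() falls back to synthetic placeholders like arg0 and arg1 -- exactly the names JNBridge users were complaining about.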
"We're kinda proud of this one," Citrin said, "It's always fun to finally cross off a feature request that has been on the customer request list for a number of years."
JNBridgePro is a general-purpose Java/.NET interoperability tool designed to let developers on either platform access the full API of the other. As Citrin explained it to me once, the tool "connects Java and .NET Framework-based components and applications with simple-to-use Visual Studio and Eclipse plug-ins that remove the complexities of cross-platform interoperability."
The Boulder, Colo.-based company is a member of Microsoft's Visual Studio Industry Partner (VSIP) Program, and Citrin, of course, keeps a close eye on developments in Redmond. At a recent VSIP event, he got to spend time digging into Visual Studio 2015.
"A lot of the cool stuff in the new release isn't something we deal with directly at the company just yet," Citrin said. "But I have to say that I'm very impressed with the Universal Windows Platform. The idea of having a single binary that should work on your phone, your tablet, your PC, your Xbox, your HoloLens, is great. I think Microsoft is going in an interesting direction."
As I've mentioned before in this space, JNBridge publishes a series of interoperability scenarios called "Labs." The company calls them "cutting-edge scenarios that showcase the myriad possibilities available to developers when bridging Java and .NET frameworks." The description is a bit hyperbolic, but the labs, which are free kits that include documentation and source code, have gotten good reviews from users. One example of a Lab: "Create a .NET-based Visual Monitoring System for Hadoop," to visually monitor the status of all the nodes in a Hadoop cluster in real time. Another: "Using a Java SSH Library to Build a BizTalk Adapter," which shows how to use Java Secure Shell (SSH) to enable BizTalk Server to manipulate remote files securely. If you use JNBridge, the Web site is worth checking out.
Posted by John K. Waters on 08/21/2015 at 6:22 AM
What to do with sun.misc.Unsafe in Java 9? One side says it's simply an awful hack from the bad old days that should be gotten rid of; the other side says its heavy use is responsible for the rise of Java in the infrastructure space and popular tools still need it. The problem is, both sides are right. This week, Mark Reinhold, chief architect of Oracle's Java Platform Group, offered a solution.
Writing on the OpenJDK mailing list, Reinhold proposed encapsulating unsupported, internal APIs, including sun.misc.Unsafe, within modules that define and use them. That proposal is now a formal Java Enhancement Proposal (JEP). Posted this week, JEP 260 ("Encapsulate Most Internal APIs") aims to "make most of the JDK's internal APIs inaccessible by default, but leave a few critical, widely used internal APIs accessible, until supported replacements exist for all or most of their functionality." JEPs are similar to Java Specification Requests (JSRs), which are submitted to the Java Community Process (JCP).
"It's well-known that some popular libraries make use of a few of these internal APIs, such as sun.misc.Unsafe, to invoke methods that would be difficult, if not impossible, to implement outside of the JDK," Reinhold wrote, adding that the encapsulation scheme will, in the long run "reduce the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these non-standard, unstable and unsupported internal APIs."
When word got around a few months ago that sun.misc.Unsafe might be removed or hidden in Java 9, howls of protest echoed across the public network. The plan was "an absolute disaster in the making," declared one blogger. Cooler heads organized a working group to develop a document to raise awareness of the problems ditching sun.misc.Unsafe would create. Although still a draft document, "What to do about sun.misc.Unsafe?" is well worth reading. It includes a clear explanation of the uses to which sun.misc.Unsafe has been put over the years, suggestions for what should be done about it now, and a surprisingly (to me, anyway) long list of products that use it (JRuby, Grails, Scala, Akka, Hazelcast, Neo4j, Apache Spark and XRebel, to name a few).
Greg Luck, CTO at Hazelcast and co-author of the JCache spec (and JCP Executive Committee member), is a member of the working group. In June, he learned that Oracle was considering removing or hiding sun.misc.Unsafe in Java 9. So-called unsafe code is sometimes required for low-level Java programming, Luck explained, where developers need to modify platform functionality for a specific purpose. Open source projects in particular use sun.misc.Unsafe as a Java Native Interface (JNI) workaround.
"It's not meant to be a standard part of Java, and yet it's built into every JDK, and everybody uses it," he said. "It's a genie that got out of the bottle."
Martijn Verburg, CEO of jClarity and co-leader of the London Java Users Group, is another member of the working group. The reason sun.misc.Unsafe can't simply be dumped, he told me in an e-mail, is that it provides a number of functionalities that aren't available through any of the standard classes in OpenJDK.
"[sun.misc.Unsafe] should be cleaned up and the safe parts should get standardized," Verburg said. "The rest should be removed! If you want to perform dangerous manual memory allocations, there are other languages for that."
Both Verburg and Luck praised Oracle's proposal to encapsulate unsupported, internal APIs, including sun.misc.Unsafe. Verburg called JEP 260 "a fantastic pragmatic compromise" that "clearly shows [that the] OpenJDK leadership and Oracle are willing to listen to the needs of the ecosystem."
The community seems to be heading toward a solution to the sun.misc.Unsafe problem, and I'm sure it's due, at least in part, to the efforts of Verburg, Luck, and their colleagues in the working group. But this internecine dustup also raises a question that has been lurking in the background since the formation of OpenJDK: Who really makes the decisions about the future of Java? OpenJDK is an open-source community, but unlike the JCP (and organizations like the Apache and Eclipse foundations), it's not vendor neutral. The main goal of the JEP Process, according to the OpenJDK Web site, is "to produce a regularly updated list of proposals to serve as the long-term Roadmap for JDK Release Projects and related efforts." The JEPs allow Oracle to develop small, targeted features for the Java language and virtual machine outside the JCP.
"Who's in charge of Java? That's a very complex [question]," said Verburg. "The reality is that Oracle has the loudest voice, but it's a heavy collaboration .... For the parts of OpenJDK that make up the Reference Implementation of Java, the JCP still has to approve."
The internal APIs that Oracle proposes should remain accessible in JDK 9 are listed on the JEP 260 page. Oracle is welcoming suggested additions to the list "justified by real-world use cases and estimates of developer and end-user impact."
BTW: Another great source for understanding sun.misc.Unsafe is Rafael Winterhalter's January 2014 blog post, "Understanding sun.misc.Unsafe."
Posted by John K. Waters on 08/07/2015 at 6:32 AM
It's been almost exactly a month since a coalition of industry leaders and users joined forces to create the Open Container Project to establish common standards for software containers. Now known as the Open Container Initiative (OCI) (renamed to avoid confusion with another Linux Foundation project), the group has announced the availability for public scrutiny of a draft charter for the nascent organization and the addition of 14 new members.
You can tell the OCI has the potential to become a true standards body by the broad range of organizations it has brought together, not to mention the number of out-and-out rivals who've gotten onboard. The list of founding members includes Docker, CoreOS, Amazon Web Services, Apcera, Cisco, EMC, Fujitsu, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, The Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat and VMware. The new membership roster includes AT&T, ClusterHQ, Datera, Kismatic, Kyup, Midokura, Nutanix, Oracle, Polyverse, Resin.io, Sysdig, SUSE, Twitter and Verizon.
The OCI was established under the auspices of The Linux Foundation, which also this week announced the formation of the Cloud Native Computing Foundation. Both groups are "collaborative projects," which means they are Linux Foundation sponsored, but independently supported.
The hopes of the backers of the OCI are summarized in the mission statement of the draft charter:
"The Open Container Initiative provides an open source, technical community, within which industry participants may easily contribute to building a vendor-neutral, portable and open specification and runtime that deliver on the promise of containers as a source of application portability backed by a certification program."
Just as interesting, I think, is what the OCI says it will not be doing:
"The Open Container Initiative does not seek to be a marketing organization, define a full stack or solution requirements, and shall strive to avoid standardizing technical areas undergoing signification innovation and debate."
The initiative was unveiled in June at DockerCon, and the latest news was announced this week at OSCON. Docker is making a big upfront donation to the OCI: a draft specification for the base format and runtime and the code associated with a reference implementation of that spec. The company is donating the entire contents of its libcontainer project and all modifications needed to make it run independently of Docker.
I had a chance to talk with two Docker Dudes (Dockeroids? Dockerettes? Dockerers?) about the new organization and its initial momentum.
"The number of members just about doubled in 30 days," said David Messina, Docker's vice president of marketing. "That's serious velocity, which I think speaks to the widespread interest in having a single, open container specification. But also notice the diversity of that membership. We have large software vendors, smaller software vendors, large Web-scale users and large enterprise players. Everybody in the industry wants a universal standard."
The OCI is making fast moves on the technical side, too. Patrick Chanezon, a member of the technical staff at Docker who has been working on the OCI, said we can expect a draft spec in just a few weeks.
"I've been involved in several standards projects over the years at Sun, Google, and Microsoft," Chanezon said, "and I've never seen an industry standard being elaborated so fast. In just six weeks [from the launch] we'll have a first draft of a spec for something that will be the basis for container based computing. To me that is a testament to the fact that a standard like this was needed to be able to innovate faster at the higher level, like orchestration and things like that."
High demand is one reason the draft spec is coming along so quickly, but it didn't hurt that the OCI launch was followed two days later by the Docker Contributor Summit. Many of the maintainers of libcontainer, which provides a standard interface for making containers inside an operating system, attended that event, as did members of the OCI working group. "We spent the whole day working together, with the result that the spec is in pretty good shape," Chanezon said.
The OCI working group's rapid progress also shows how effective a model that emphasizes lightweight governance and a focus on a discrete set of technologies, and nothing else, can be, Messina said. "This is what can happen when the organization gets out of the way of the maintainers," he said.
There are around 10 maintainers on the project right now, many of whom are coming from the libcontainer project, Chanezon said. "What we did was move the libcontainer from the Docker GitHub organization to Open Container organization, and all the maintainers came with it," he said. But that group also includes people from Docker, Google, Red Hat, CoreOS and a few independents.
It's worth noting that libcontainer represents 5 percent of the Docker code base. Chanezon called it "the heart of Docker." runC is the reference implementation in the OCI spec, and Docker plans to use it as plumbing for creating its own containers.
Jim Zemlin, executive director of The Linux Foundation, has said that containers are revolutionizing the computing industry. Docker claims that containers based on Docker's image format have been downloaded more than 500 million times in the past year alone, and there are now more than 40,000 public projects based on the Docker format.
I asked Chanezon why we're seeing such intense interest in, and furious activity around, containers.
"When I give talks, I like to quote William Gibson, who said, 'The future is already here, it's just not evenly distributed,'" he said. "Right now, that future is getting evenly distributed, and that means that every organization on the planet is starting to build distributed applications. Docker arrived just at the right time to let them do that."
You gotta love a guy who can work in a quote from the author of the great cyberpunk novel, Neuromancer, and coiner of the term "cyberspace." (Not to mention one of my favorite writers.)
Posted by John K. Waters on 07/24/2015 at 2:16 PM
O'Reilly's annual Open Source Convention, better known as OSCON, is in full swing this week in Portland. Among the more joyful attendees at this year's event is James Watters, vice president of Product, Marketing and Ecosystem for Cloud Foundry at Pivotal. How do I know Watters is a happy camper? His latest blog post, in which, among other things, he enthuses about the dramatic uptick in conference sessions on microservices -- 30 this year, up from only one last year, which Pivotal presented.
"People are talking about writing apps in a new way," Watters said when I caught up with him on the phone. "And they're talking about using microservices and Spring Cloud to do it. I haven't seen that kind of excitement in the Java community to restructure these kinds of things in the enterprise space, maybe ever. So yeah, I'm kind of excited.
Pivotal recently released the beta of its Spring Cloud Services (1.0.0), which integrates the NetflixOSS microservices framework with Pivotal's Java-based Spring programming tools on Cloud Foundry. The company plans to make Spring Cloud generally available in the fall. Between now and then, Pivotal will be adding distributed tracing to the framework via something called Spring Sleuth, Watters said.
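To give a sense of the programming model, here's a minimal sketch of a Spring Boot microservice that registers itself with a NetflixOSS Eureka server through Spring Cloud's discovery-client abstraction (the class name and endpoint are hypothetical, and a running Eureka server is assumed):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@EnableDiscoveryClient  // registers this service with a Eureka server at startup
@RestController
public class GreetingService {

    @RequestMapping("/greeting")
    public String greeting() {
        return "hello from a microservice";
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingService.class, args);
    }
}
```

The point is that the NetflixOSS machinery -- service registration, discovery and the rest -- hides behind a couple of annotations, which is a big part of Spring Cloud's appeal.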
At the SpringOne2GX conference in Washington this fall, Netflix is expected to talk about how it has begun to adopt Spring Cloud, Watters said. "Instead of configuring their apps in a complicated way, they're like, 'okay great, you wrote a wrapper for us? Cool, we'll just use your wrapper.' There's a virtuous feedback loop between the Spring team and Netflix team right now."
Watters describes himself as a lifetime enterprise Java guy (he's been working with it since high school), who worked at Sun Microsystems for about eight years. In his post, he claims (I think rightly) that Pivotal has been at the "intersection of microservices, continuous delivery and multi-cloud portability since being founded in 2013."
"There are two camps today," he told me, "people who are interested in different flavors of containers, and people like us, who are interested in building and running microservices apps. We have large companies asking us to come in and do two-day workshops on that. That's really where the excitement is right now."
Without a microservices architecture, container technologies aren't nearly as useful, Watters argued. "You can't run legacy monolithic Java, like Oracle and WebSphere, in that environment."
We're seeing a new wave, he wrote, that "fundamentally alters application architectures and workflows for developers and operators building the next generation of data-hungry, digital experiences." He's talking, of course, about what some people are calling the cloud native revolution. The Cloud Foundry open Platform-as-a-Service (PaaS) environment, of which Pivotal is the commercial maintainer, is a key enabling technology of that revolution.
"After our Spring Cloud product manager, Matt Stine, published "Migrating to Cloud-Native Application Architectures" for O'Reilly, we were just overwhelmed with requests from enterprises for workshops," Watters said. "We can't keep Matt off the road."
Cloud Native Java is especially appealing to enterprises that are looking to modernize their architectures, Watters said, because it leverages existing skill sets and allows for integration with legacy apps. From their perspective, he said, it's an "evolutionary approach."
In his post, Watters points to some telling successes of Pivotal's Cloud Native enterprise products. The company showcased 10 Fortune 500 companies at the recent Cloud Foundry Summit, all of which worked with the company on projects based on Cloud Foundry and Spring technologies. And downloads of Pivotal's Spring Boot rapid application development framework have gone through the roof. (More than 1.4 million downloads per month over the last year, he said. There's a graph.)
There's a lot more in Watters' post, which is well worth reading.
Posted by John K. Waters on 07/22/2015 at 5:12 AM