It's official: Jenkins 2.0 has arrived. Available today, this is the first major release of the open source continuous integration (CI) server in 10 years, and the excitement surrounding it is palpable. (Sorry, that was my Apple Watch telling me to stand up -- again.)
But seriously, this truly is an event. After 655 weekly releases, the once-controversial CI server has evolved into a continuous integration/continuous delivery (CI/CD) system that provides a flexible way to model, orchestrate and visualize the entire software delivery pipeline.
"Pipeline" is the key word here -- or rather "pipeline-as-code." With this release comes the first officially supported implementation of a domain-specific language (DSL) for the coding of pipelines for continuous delivery. The new Pipeline plug-in is designed to help Jenkins users model their software delivery pipelines as code that can be checked in and version-controlled along with the rest of their project's source code.
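To make the idea concrete, here's a minimal sketch of what such a checked-in "Jenkinsfile" might look like in the scripted Pipeline DSL of the Jenkins 2.0 era. The stage names, the Maven build and the deploy script are illustrative assumptions, not taken from the release itself:

```groovy
// Jenkinsfile (scripted Pipeline syntax) -- a hypothetical three-stage delivery pipeline
node {
    stage 'Checkout'
    checkout scm                  // pull the same repository this file is versioned in

    stage 'Build'
    sh 'mvn -B clean package'     // assumes a Maven project; swap in your build tool

    stage 'Deploy'
    sh './deploy.sh staging'      // illustrative deployment script
}
```

Because this file lives in the project's repository, a change to the delivery process gets reviewed, versioned and rolled back exactly like any other code change.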
"Probably one of the most important changes in this release is that Jenkins can now understand and model a delivery pipeline, natively, which it couldn't do before," Jenkins community leader Tyler Croy told me last week. "For me as a practitioner and Jenkins administrator, the out-of-the-box defaults we've changed in this release are also important, especially for new users. New users of Jenkins are going to find it much easier to get set up with the tools they need, and Jenkins will be more secure out of the box."
Croy is a Jenkins evangelist and community manager at CloudBees (the chief commercial supporter of Jenkins), which means he gets to work on Jenkins full time. He's been a leader in this community since before the fork from Hudson, so he's seen most of its evolution. He told me that, among other things, Jenkins 2.0 represents some fundamental rethinking at CloudBees -- and by Jenkins creator Kohsuke Kawaguchi -- about how the CI/CD server is going to be developed going forward.
"This was the first time we've had a parallel branch of development -- literally, on Git -- to work on big features," Croy told me. "Kohsuke has been a big proponent of the agile development concept and getting changes out to users as quickly as possible and getting feedback. But it's been hard to step outside that release process and think about bigger initiatives we could tackle to improve Jenkins over the next five to 10 years. Incorporating some of the UI changes and improving the getting-started setup to make the out-of-the-box experience better, for example, were things we developed over the past five or six months in parallel with the weekly release cycle. We have the ability now to experiment, try new things, and take on bigger initiatives that are challenging and time consuming in that separated little space of Jenkins development."
Storage pluggability improvements are at the top of the community's to-do list, Croy said. There are no firm commitments yet, but this is the type of project that can now be accommodated by the new parallel development process for bigger Jenkins releases.
Continuous integration has almost become a default in the standard software development stack, Croy observed, and continuous delivery is heading in that direction, too.
"There was a time when source control management wasn't part of the stack," he said. "Now it's standard to use Git or Subversion, and if you don't, people look at you like you're crazy. The same is true now for CI, and the direction the industry is moving tells me that CD is going to become a standard part of that stack, too. People are saying, great, you can build and test your software, but how do you deliver it to your customers? It's just a matter of time before CD is standard in the stack, too."
"My personal goal," Croy added, "is to make Jenkins vastly more usable and much better documented and easier to adopt by 2016. This is something that we've started to do in Jenkins 2.0, and I plan to continue focusing on that comprehensive user experience."
A detailed overview of the Jenkins 2.0 release is available on the Jenkins Web site. And there's some great documentation ("Getting Started with Pipeline") online.
Posted by John K. Waters on 04/26/2016 at 3:59 PM
The big news in the evolving container ecosystem this past week was Mesosphere's announcement that it will be open sourcing its flagship Data Center Operating System, rebranded as DC/OS. (The company will also offer a commercial Enterprise version.) Among the long list of companies partnering with Mesosphere on the DC/OS beta is a startup called Avi Networks, which got my attention at the Container World 2016 conference.
Started by former Cisco execs, the Sunnyvale, Calif.-based company emerged from stealth mode in December 2014 with the aim of delivering an application services platform fully implemented in software. The Avi Vantage Platform provides an alternative to legacy application delivery controllers (ADCs). It comprises the Avi Controller and distributed "elastic micro-ADCs" called Avi Service Engines, which are distributed across the environment to deliver services close to the applications. The platform's virtual engines perform Layer 4-7 functions (transport, session, presentation, application), which puts the company in competition with such established providers as F5 Networks, Citrix Systems, A10 Networks and Radware. Avi execs announced the integration of the Vantage Platform with Mesosphere's then-commercial-only DCOS at the Container World conference.
I had a chance to talk with Avi's CEO, Amit Pandey, about his company and its efforts to modernize application services. Pandey, who has been working around the Layer 4-7 space since 1998, said he has seen very little change in the architecture of the upper layers of the OSI model.
"It sits so close to the application that it's really surprising changes haven't happened," Pandey said, "because the fundamental way applications are built and deployed has changed dramatically. We've gone through a couple of generations of change, in fact. We had three-tier applications, then the onset of virtualization led to significant changes, especially in the way apps were deployed, and more recently we have microservices, which fundamentally changes the architecture and the tools for building and deploying applications. And through all this change, the ADC has just sort of chugged along. It's a very sleepy industry."
Avi's founders, Umesh Mahajan, Murali Basavaiah and Ranga Rajagopalan, decided to wake up that sleepy industry with an architecture that mimics software-defined networking (SDN), Pandey said, resulting in what amounts to software-defined ADCs and load balancing. But the founders also recognized that, to accommodate microservices, containers and public clouds, the architecture would have to be distributed, with a centralized control plane and the ability to learn about deployments and traffic patterns. That learning feeds back into the system in a loop that enables elastic scaleout, remediation against attacks, and microservice troubleshooting for developers.
"In this brave new world of modern applications, we need the ability to control and manage highly distributed architectures; to manage multiple clouds; and to give analytics, visibility, and feedback for a high level of elasticity," Pandey said.
In a nutshell, Avi is providing appdev teams with a software-defined system that delivers discovery, security, and load balancing services elastically, with centralized control and monitoring. And Avi's layer of infrastructure is so application-like it can be put into a container and deployed almost anywhere. "Our customers often tell us that we are more like an application than infrastructure," Pandey said. "And that's really what our vision of networking infrastructure is. It should be part of your app, because then the level of visibility and flexibility you have is phenomenal. Why should your network services be a distinct and inflexible hardware box when you can actually containerize it?"
Posted by John K. Waters on 04/25/2016 at 10:46 AM
To describe the 2.0 release of the Jenkins continuous integration (CI) server as long-awaited would be the understatement of the decade -- which is literally how long Jenkins has been a 1.0 release. Really. Ten years.
"I don't know if it's the longest 1.0 release in history," CloudBees CEO Sacha Labourey told me during a recent visit to Silicon Valley, "but it's got to be close."
Technically, Jenkins has been around since it was forked from the Hudson CI server back in 2011. Hudson was launched by Sun Microsystems in 2004. After Oracle acquired Sun, the company announced that it would be migrating the project to its java.net infrastructure and trademarking the Hudson name. The community objected to this move, voted to rename the project and moved the code to GitHub. Shortly thereafter, Oracle surprised the community by contributing the Hudson code, domain name and trademark to the Eclipse Foundation.
Kohsuke Kawaguchi, who created Hudson and instigated the Jenkins fork, became an elite developer and architect at CloudBees, and he's been a part of the community throughout the evolution of this technology. The open source, Java-based Jenkins CI server has been updated almost on a weekly basis for many years. Along the way, it has been evolving from a pure CI server to provide continuous delivery, as well.
"A lot of people still think of Jenkins as CI only," Labourey said. "But the teams have done a lot of work around Docker, pipeline support, usability -- so much work has happened in the last 12 to 18 months, in particular, that it's important to signal that this is not the good old Jenkins you knew five years ago."
CloudBees now refers to the Jenkins CI/CD server as an automation server, and many of the changes in version 2.0, the alpha build of which was recently released, reflect this evolving identity.
"People ask me, what is the competition for Jenkins," Labourey said. "The real competition for Jenkins is companies still doing everything manually. It's a tough culture to change."
Among other things, this release will bring the concept of "Pipeline as code" to Jenkins. The new Pipeline plug-in introduces a domain-specific language (DSL) designed to help Jenkins users model their software delivery pipelines as code, which can be checked in and version-controlled along with the rest of their project's source code. Users will be able to define simple and complex pipelines through the DSL and easily share pipelines among teams by storing common "steps" in shared repositories. (There's a great description with diagrams on the Web site.)
Jenkins 2.0 also comes with a buffed-up setup experience and UI improvements. And Labourey emphasized that Jenkins 2.0 will be 100 percent backward-compatible with existing Jenkins installations.
"There will be no reason for people not to upgrade," he said.
CloudBees, which since its founding in 2010 had been known primarily as one of the few providers of a Java-based PaaS, refocused on Jenkins in 2014. The company was an early supporter of the CI server and continues to be its leading commercial supporter.
Earlier this year the company rolled out the first-ever Jenkins-based CD-as-a-Service (CDaaS) platform. Last year the company combined its Jenkins Enterprise and Operations Center products into a single platform.
Details about the Jenkins 2.0 release are available now on the company Web site.
Posted by John K. Waters on 04/12/2016 at 10:25 AM
The list of Java evangelists exiting Oracle got a little longer this month when Reza Rahman announced that he would be leaving the company. But Rahman is not going quietly. In a personal blog post, he stated that he left because of his growing skepticism about Oracle's stewardship of enterprise Java, which he said was "independently shared by the ever vigilant Java EE community outside Oracle."
"As an evangelist, your entire existence depends on trust from the community," Rahman told me in an interview, "but I found I could not give the community straight answers about what Oracle is doing. In the end, I could not reconcile this."
And he couldn't leave it alone, either: Rahman and members of the community he served have joined forces to form the Java EE Guardians, a group of volunteers committed to supporting enterprise Java where they believe Oracle is falling down on the job.
The group has been meeting informally behind the scenes for a while, Rahman said, but is now formalizing the organization and going public with a Google Group (Java EE Guardians) and a Twitter handle (@javaee_guardian). It's still early days, but Rahman said to expect vision and mission statements, soon.
About 100 people have joined the Java EE Guardians as of this writing. The group's immediate plan is to assemble evidence to support their assertion that Java EE is, in fact, critical technology that needs more attention from Oracle. We can expect "complete metrics," Rahman said. Once that's established, they'll go on to substantiate their assertion that Oracle's investment in server-side Java is not where it should be. They also intend to raise awareness of their concerns among Oracle's customers.
"Java EE is very much key to the overall server-side Java ecosystem, and maybe even the health of Java itself," Rahman said. "Without core investments from Oracle into Java EE, there's a very large part of the ecosystem that will be severely weakened. I simply do not believe that Oracle is doing enough with the Java EE specs, and I do not think they are fulfilling their commitment to the community. And I am very much not alone in this belief."
Ultimately, he said the group wants to secure the evolution of Java EE, and they're volunteering their own time to do it -- to "fill the resources gap" they believe now exists. They're even prepared to take over some Java Specification Requests (JSRs), Rahman said.
"The reality is, the Java EE community is the most vocal and the most passionate, and yet they are the people Oracle is not leveling with today," Rahman said. "It's a huge problem."
Rahman was an independent consultant before joining Oracle, and he's gone back to that work with CapTech Consulting. He has served on the Java EE, EJB, and JMS expert groups for the JCP. He implemented the EJB container for the Resin open source Java EE application server. And he co-wrote EJB 3 in Action (with Debu Panda and Derek Lane). To say that he's passionate about Java EE would be an understatement.
He saw the writing on the wall at Oracle, he said, when the head of the company's Java EE group, senior vice president Cameron Purdy, left last year amid rumors that Oracle was thinning its Java evangelist ranks. Rahman described Purdy in his personal blog as "a gem in the executive ranks of our industry," and one of the reasons he signed on to evangelize Java EE at Oracle.
Rahman says the greatest challenge the Java EE Guardians face is getting Oracle to accept their work, to essentially acknowledge that there is a problem. "The bottom line is, if Oracle is not committed to server-side Java and not committed to supporting the EE space, then fundamentally, someone else needs to step in."
I contacted Oracle for comment, but the company had not gotten back to me at press time.
This is an unfolding story, so stay tuned.
Posted by John K. Waters on 03/22/2016 at 2:54 PM
You know how I'm always banging on about how we need a technology-agnostic conference focused on the challenges facing the makers and maintainers of the purpose-designed software that drives organizations in virtually every industry in the world -- in other words, the readers of Application Development Trends? Well, it turns out, the organizers of the enormously popular Live! 360 conference agree with me. (Or maybe they just got tired of the noise.)
App Dev Trends 2016 is the newest addition to the Live! 360 Orlando conference, scheduled for Dec. 5-9. This multi-conference combines several co-located events, including Visual Studio Live!, SQL Server Live!, Office/SharePoint Live!, Modern Apps Live! and TechMentor. Live! 360 always attracts a big crowd of attendees ranging from down-in-the-trenches developers to team leaders and decision-makers at just about every level.
We're organizing App Dev Trends 2016 to add another dimension to the larger conference. We want to throw a spotlight on the unique challenges faced by enterprise software professionals, whatever their preferred platforms. Our event is about cutting-edge intelligence on a wide range of trends, tools and best practices that our readers need to keep up with the ever-evolving demands of their organizations. It's about knowing what's next and preparing for it; acquiring new skills and adapting existing skill sets; and boosting organizational efficiency, productivity, and competitiveness. It's also about rubbing elbows with peers and pros facing the same challenges.
Earlier this week we issued the official App Dev Trends 2016 Call for Presentations (CFP). We are currently looking for presenters on the following topics:
- Agile: Real World Practices in the Enterprise
- Cloud Application Development
- Big Data Analytics
- Mobility in the Enterprise
- DevOps: Changing Roles
- Internet-of-Things at Enterprise Scale
- Continuous Integration/Continuous Delivery
- Virtual Reality/Augmented Reality
- Java 8: Lambdas
- Java 9: Jigsaw
- Functional Programming
- JVM Languages
The Web site for submitting presentation proposals is up and running now, and the deadline for submissions is April 1, 2016.
The conference is a ways off, but that CFP submission deadline is just around the corner, so please send us your proposals soon. We're expecting a lot of them, and I'm looking forward to reading every one. (Did I mention that I'm the Conference Chair?)
Posted by John K. Waters on 03/18/2016 at 1:29 PM
There are few trickier tasks in business -- any business -- than changing a company's name. There are branding issues, legal complications, marketing considerations. Just ask the folks at Lightbend, formerly Typesafe, who have been going through the process for months. The company announced its new moniker today.
"It was an interesting process," the company's president and CEO, Mark Brewer, told me. "But I wouldn't want to go through it again."
Why change what many considered to be a pretty cool name?
"We've broadened our product portfolio so much that we felt we needed a name change so that the market would recognize us for more than just Scala," Brewer said. "The Scala community continues to grow and adoption continues at an accelerating rate, but we, as a company, are about more than that. We are about providing enterprise developers with a platform for building this next generation of apps on the JVM. The Reactive Platform, from its inception, has supported both Scala and Java."
More than half the company's current customer base is Java developers, Brewer said. And virtually every new Lightbend customer starts out using Java.
The original name, "Typesafe," comes from the concept of type safety, the property of some languages that prevents unwanted behavior caused by discrepancies among differing data types. Scala is a type-safe language for the Java Virtual Machine (JVM), so the name made sense when the organization was new and focused on Scala. But the company evolved. It's now the force behind Akka, an open source, asynchronous, event-driven middleware implemented in Scala; the Play Web app framework, which is written in Scala and Java; and the open source Apache Spark Big Data processing framework, among other products. And today the company announced a new framework for Java developers creating microservices.
The company's Reactive Platform combines a number of its products to support the development of reactive applications on the JVM in both Scala and Java. Conceptualized in the "Reactive Manifesto," which was co-authored by CTO and co-founder Jonas Bonér, reactive applications are apps that better meet the "contemporary challenges of software development," in a world in which applications are deployed to everything from mobile devices to cloud-based clusters running thousands of multicore processors.
So there was a reasonable argument for the name change. I also understand that the old name was frequently misspelled (Typeface, Typespace and so on), which must have been frustrating. (I never misspelled it myself. Just sayin'.)
Typesafe enlisted Lexicon Branding, a branding firm with a good rep in open source circles, to help with the process, Brewer said. The company also included its customers and the Scala community in the name-change process, starting last May, seeking feedback and suggestions via the Typesafe blog. Judging from the initial comments, many weren't immediately on board, but they seem to have embraced the idea eventually.
"The name was intended to evoke a sense of something interesting and cool and next-gen, but nothing specific to any technology," Brewer said. "It's also easy to spell and easy to remember."
BTW: Brewer has been through this process before, for the change from Interface 21 to SpringSource. That was a good move, and I think this one will prove to be a good move, too, once we all get used to it.
Posted by John K. Waters on 02/23/2016 at 10:14 AM
I know, it's February, but I reached out to a lot of smart people last month for their thoughts on what lies ahead for enterprise developers in 2016 -- more than I could squeeze into parts 1 and 2. In addition to industry analysts, I connected with some of the thoughtful execs I spoke with last year.
Martijn Verburg, for example, sent me an intriguing list of predictions. Verburg is the CEO of jClarity, which focuses on automating optimization for Java and JVM-related technologies, and he serves as co-leader of the London Java Users Group. Last year he predicted that developers would begin to feel a new pressure to fully understand DevOps and virtualization, "or be left scratching their heads when their apps fail miserably in their private or public cloud deployments." This year he added containerization to the trends that will stress developers in 2016.
"Docker has now moved beyond the mass adoption curve and is more or less here to stay," he said, "although many SA's won't touch it for production. Interestingly some of the DevOps tooling (Chef, Puppet, Ansible) will become less important as developers cobble together containers running services instead. For example, we used to Chef up a Host with Java, MongoDB, and a host of other software. Now we just deploy a set of Docker containers that all contain one major service each (MongoDB, or Java or Foobar)."
"The wise developer who really starts to utilize Docker in test will be the productive developer," he added. "No more relying on QA infrastructure, just bring up and down containers as you need! Extremely wise developers will not rely on the results of performance testing on container based architectures, unless they mimic production."
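The one-major-service-per-container pattern Verburg describes is easy to sketch in a Compose file. Everything here is illustrative -- the image tags, the hypothetical app service and the deploy layout are my assumptions, not Verburg's setup:

```yaml
# docker-compose.yml -- a hypothetical replacement for a Chef-provisioned host:
# each major service gets its own container instead of sharing one machine image.
version: "2"
services:
  mongo:
    image: mongo:3.2            # the database runs in its own container
    ports:
      - "27017:27017"
  app:
    image: java:8               # base image for a hypothetical Java service
    volumes:
      - ./app.jar:/app.jar
    command: java -jar /app.jar
    depends_on:
      - mongo
```

Bringing a disposable test environment up and down then becomes `docker-compose up` and `docker-compose down`, rather than waiting on shared QA infrastructure.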
He also believes that microservices architectures "will continue to be much hyped, seriously abused, and difficult to co-ordinate and maintain." But we should also expect to see some good tooling emerge by the end of the year.
Jonas Bonér also sent me a list of prognostications. Bonér is the CTO and co-founder of Typesafe, the company behind Scala, the general purpose, multi-paradigm language that runs on the Java Virtual Machine (JVM), and Akka, the open-source run-time toolkit for concurrency and scalability on the JVM. Last year he said that he expected container-based infrastructures to continue to make life easier for the developers who adopt them. He pointed to several technologies driving this trend, including Docker, Apache Mesos, Google Kubernetes, and CoreOS. He also said that the game-changing impact of Java 8 would continue throughout 2015 as adoption spread -- which would have the interesting side effect of allowing more functional style programming.
In the coming year, Bonér expects microservices to "graduate" from an early adopter tool to "the first wave" of real mainstream adoption. The reason: The technical constraints that held microservices back -- things like single machines running single core processors, slow networks, expensive disks, expensive RAM, and organizations structured as monoliths -- are gone.
"Networks are fast, disks are cheap (and a lot faster), RAM is cheap, multi-core processors are cheap, and cloud architectures are revolutionizing how we design and deploy systems," he said in an e-mail. "In 2016 we have a much more refined foundation for isolation of services, using virtualization, Linux Containers (LXC), Docker, Unikernels, and Reactive runtimes like Akka. This has made it possible to treat isolation as a first class concern -- a necessity for resilience, scalability, continuous delivery and operations efficiency -- and has paved the way for the rising interest in microservices-based architectures, allowing you to slice up the monolith and develop, deploy, run, scale and manage the services independently of each other."
"The need for building Reactive applications is driving people towards microservices," he added, "and Reactive power users in particular already claim a very high incidence of building microservices." To support that claim, he points to a recent survey, the results of which are posted on the company's Web site. (It's well worth reading.)
Bonér also expects a growing number of "Fast Data" tools and libraries to embrace Reactive Streams, an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. He pointed to such Reactive Streams-compatible products as Akka Streams, upcoming Java 9 Flow and Spark Streaming, Gearpump, Jetty, Vert.x and Cassandra.
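The "non-blocking back pressure" at the heart of Reactive Streams is simple to demonstrate with the Flow API that landed in Java 9 (`java.util.concurrent.Flow`). This is a minimal sketch, not any vendor's implementation: the subscriber requests one item at a time, so the publisher never outruns it.

```java
// A minimal Reactive Streams sketch using Java 9's java.util.concurrent.Flow.
// The class and method names here are illustrative, not from any product above.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackPressureDemo {
    // Collects `count` items from a publisher, signaling demand for one
    // item at a time -- the back pressure the spec standardizes.
    static List<Integer> collect(int count) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new Flow.Subscriber<Integer>() {
            private Flow.Subscription subscription;
            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);              // ask for a single item up front
            }
            @Override public void onNext(Integer item) {
                received.add(item);
                subscription.request(1);   // request the next item only when ready
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });
        for (int i = 1; i <= count; i++) publisher.submit(i);
        publisher.close();                 // triggers onComplete after delivery
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(3));    // prints [1, 2, 3]
    }
}
```

Because `submit` respects the subscriber's requests, a slow consumer throttles a fast producer without blocking threads or dropping data.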
Verburg also sees Reactive/Dataflow style applications becoming increasingly popular in the coming year, but also increasingly misunderstood, because of a lack of visual tooling support. He also recommended the Vert.x tool for Java JVM developers.
I also received an interesting list of predictions from Docker, and I had a chance to talk about them with Scott Johnston, the company's SVP of Product Management. His company sees containers becoming "the prevalent mechanism for application development and deployment" in the coming year, thanks to advancements in 2015 around container security, manageability, storage and networking, which are driving rapid growth. He pointed to an O'Reilly report ("The State of Containers and the Docker Ecosystem 2015"), which shows that 40 percent of organizations using Docker have it in production currently, and that those numbers are expected to rise sharply in the coming year. (Another report you should read.) He also pointed out that downloads of Docker images have risen from 67 million in December 2014 to 1.2 billion last year.
Perhaps Johnston's most interesting prediction: the rise of Container-as-a-Service (CaaS) architectures, which will facilitate Ops-originated application delivery.
"Think of Containers-as-a-Service as a platform for development teams to get the agility they need to build, ship, and deploy containers, while giving Ops the control they need to adhere to governing and regulatory standards and uptime SLAs in the datacenter," Johnston told me. "It's the notion of Ops standing up infrastructure that is container-ready and aware, while providing a self-service interface for the development teams to use to pick up the containers they've made and deploy them and manage them in production. We're trying to balance the agility needs of the Dev organization with the control needs of the Ops org."
"CaaS will succeed without requiring organizational changes as seen with the rise of DevOps," the company stated, "eliminating the need to retool and re-skill by refocusing on what Ops can do for Dev through integration of core and container technologies, thereby creating a more circular pattern of collaboration."
"At the end of the day, whether you're in Dev or Ops, the business of IT is shipping apps that deliver value to the business," Johnston said. "So it helps for devs to carry pagers, to make sure the apps they're shipping maintain the quality they need, and thereby get a little taste of the Ops world. And it makes sense for Ops to push toward the Dev side, to get their hands a little dirty with code so they understand how the code is working, so they can make sure they create environments that will stand it up."
Johnston said we should expect to see the balance between Dev and Ops -- between agility and control -- improve in the coming year as container-based services become more "Ops-led" and less of a Dev-only model. Dev and Ops will share the development lifecycle, his company predicts, with Ops setting up development environments in which everything, from security to management, is baked into the platform. And containers will move beyond Dev and Test to become a production mainstay, thanks to the integration of enabling technologies from the container ecosystem, as well as accelerated innovation from de-facto container leaders.
"Apps are increasingly where the value is," Johnston said, "and lower-in-the-stack layers are increasingly becoming commodities. In some ways that's just a continuation of that trend we've seen with hardware and operating systems."
More 2016 enterprise development predictions:
Posted by John K. Waters on 02/16/2016 at 1:41 PM
News that Oracle Corp. plans to deprecate the Java browser plug-in in JDK 9 prompted a rousing chorus of "Ding Dong the Witch is Dead" from the Internet last week. But the news came as no surprise. A growing number of browser vendors have either stopped supporting the plug-in or announced plans to do so. (Flash and Silverlight, too.)
Dalibor Topic, a member of Oracle's Java Strategy Team, posted the news on the Java Platform Group blog. "With modern browser vendors working to restrict and reduce plug-in support in their products," he wrote, "developers of applications that rely on the Java browser plug-in need to consider alternative options such as migrating from Java Applets (which rely on a browser plug-in) to the plug-in-free Java Web Start technology."
Java Web Start is a framework designed to allow users to download and run Java apps from the Web. It has been included in the Java Runtime Environment (JRE) since the Java 5.0 release.
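A Web Start app is launched from a JNLP descriptor rather than an applet tag. Here's a minimal, hypothetical example (the codebase URL, jar name and main class are placeholders for illustration):

```xml
<!-- hello.jnlp: a minimal, hypothetical Java Web Start descriptor -->
<jnlp spec="1.0+" codebase="https://example.com/app" href="hello.jnlp">
  <information>
    <title>Hello App</title>
    <vendor>Example Inc.</vendor>
  </information>
  <resources>
    <j2se version="1.8+"/>
    <jar href="hello.jar" main="true"/>
  </resources>
  <application-desc main-class="com.example.Hello"/>
</jnlp>
```

The JRE downloads the listed jars and runs the app in its own process, which is why no browser plug-in is involved.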
The vulnerability of Java in the browser, which was largely the result of the way Oracle bundled the extension with the JRE, has been a thorn in Oracle's side for a while now. Back in 2013, when the plug-in was the target of some high-profile breaches, Oracle's senior product security manager, Milton Smith, told Java User Group (JUG) leaders during a conference call that the company's chief security concern was Java plug-ins running applets on the browser. "A lot of the attacks that we've seen, and the security fixes that apply to them, have been [about] Java in the browser," he said. "It's the biggest target now."
"Browsers are powerful gateways, and when they're used as platforms for extensions from other vendors (for example, Java from Oracle or Flash from Adobe) the picture of management and accountability for security becomes complicated," Smith added. "This is why the industry is shifting to HTML5 for browser applications, so that the browser vendors own the security of the platform end-to-end."
IDC analyst Al Hilwa agrees. "The browser plug-in has been problematic," he said in an e-mail, "but more importantly, in the face of trends in client-side software development, it makes great sense to clean things up now. The world is shifting to HTML5, and while there are legacy apps that use Java and Flash, they are likely slotted to be rewritten to operate without a plug-in. For Java this is a positive step to reduce its complexity and surface area, and focus it on staying current."
Martijn Verburg, CEO of jClarity, a start-up focused on automating optimization for Java and JVM-related technologies, and co-leader of the London Java Users Group, said that deprecating the plug-in has been on Oracle's to-do list for a long time.
"This has been a long-term strategy of Oracle for a very long time," he said via e-mail. "I suspect they just needed to get enough of their customers comfortable with it before they could officially decide on a time frame. It's a good thing for the world, although this decision won't have any practical impact for a number of years yet. Some businesses/organizations will complain, but this is a reality of doing business in the modern information age. IT infrastructure is a core part of any business, and companies/organizations that ignore this fact will continue to get caught out. It's another wake-up call for senior management who still have outdated thinking on this."
Oracle plans to make JDK 9 generally available in early 2017. Early access releases are available now for download and testing. It's safe to say few will complain about the absence of the plug-in.
Posted by John K. Waters on 02/01/2016 at 10:48 AM
We're only a month into 2016 and it's already shaping up to be another lively year for enterprise developers. Mobile, cloud, DevOps, IoT, microservices, the API economy, cognitive computing, virtual reality -- all are reshaping organizations in fundamental ways, and it looks like devs are going to have a large role to play in that change.
Analyst Clive Howard, who keeps an eye on the mobile and IoT space for UK-based Creative Intellect Consulting, expects 2016 to be the year most companies figure out what all this stuff actually means for their businesses. Projects that have hovered mostly around the edges will find their way into the heart of the enterprise, and a few -- IoT, cognitive computing and cloud -- will mature as a "progressive few" organizations "begin to shape them into exciting new products that will start to appear in 2016 but really emerge in 2017+," he said.
Howard also expects IoT to continue to pull in developers, both consumer and enterprise, with the action heating up on the enterprise side. Mobile B2B and B2E (Business to Employee) will grow significantly, he said, which means more and more developers within organizations will be involved in mobile, and lots of people are going to have to buff up their skill sets.
Meanwhile, non-developers are likely to be more involved in enterprise development, Howard predicts, via so-called low-code tools and services. Given the current skills shortage, it's likely that companies will create strategies that embrace these non-developers, he said.
The coming year will not, however, see the hype around connected cars and wearables live up to the reality, he said. "I think there will be little activity [around connected cars] outside of those already involved in the car industry," he said. "Cars are probably heading to the top of the hype bubble. And wearables will go nowhere in 2016, certainly in terms of the consumer. Industrial use cases may see some interesting developments, but not at significant scale."
Another UK-based analyst, Gartner's Gary Olliffe, sees the growing interest in microservice architectures as something of a harbinger, signaling a rediscovery of the value of service orientation. "Enterprise developers are waking up to good old fashioned architecture principles that aren't new," he said. "Microservices are a kind of beacon, showing them the benefits of those principles and how they can apply them to their own work.
"Developers are excited about microservices, because they allow them to simplify their development stack, choose optimal technology, and not have bloated middleware forced upon them," he added.
Microservices have gained enough mindshare that even though most organizations are not trying to, say, replicate a Netflix-style microservice architecture, they're learning from that example and feeding that knowledge back into the business, Olliffe said. In 2016, microservices will have an impact on most enterprises delivering anything that needs to be exposed or managed as a service, he said.
One bump on this particular road: there is no true microservice platform yet. "The tools to help you get your feet wet are easily accessible, and have become more so in the past 18 months," he said. "But the vendors have yet to step up and really provide a true microservice platform for developers. We're not seeing what you might call the next generation of app servers, the platforms onto which I deploy my units and which handle all of the complexities of the outer architecture."
Microsoft's Azure Service Fabric comes closer to providing rapid developer productivity in that environment, Olliffe said. But actually managing and operating tens or hundreds of instances of multiple microservices in production is a whole different game, he said. The app lifecycle teams that increasingly include coders, integrators, architects, operations, and QA have yet to really get their arms around these environments, he said.
Microservices are poised on the edge of the phase of Gartner's hype cycle known as "The Trough of Disillusionment," Olliffe said, which follows "The Peak of Inflated Expectations." During this phase, people focus on a technology's shortcomings and limitations, and a few products fail. But he expects developers to dig in and figure out what really works, prompting speedy progress to the "Slope of Enlightenment," followed by the "Plateau of Productivity."
You can read more about microservices in Part 1 of this series ("2016 Dev Predictions, Part 1: DevOps, APIs, Microservices, More"), and I'll soon share additional observations about the year ahead in Part 3 ("2016 Dev Predictions, Part 3: Mainstream Microservices, Reactive Streams and Containers-as-a-Service.")
Posted by John K. Waters on 01/29/2016 at 1:54 PM