Our First Podcast Features "Hope Speech" Researcher Ashique KhudaBukhsh

We finally dipped our quarantined toes into the ever-widening podcast ocean last week, because we just didn't have enough to do around here. But seriously, after more than two decades on this beat, it really seemed like the right time to start sharing some of the amazing conversations I get to have on a daily basis with the brilliant and inventive people driving high tech.

We were lucky to have as our first guest Ashique KhudaBukhsh, a project scientist in the School of Computer Science at Carnegie Mellon University's Language Technologies Institute (LTI). I met Ashique in January, when he was still a post-doctoral researcher. I stumbled upon one of his team's published papers, and I called him to talk about what they were up to. That conversation led to two stories in ADTmag's sister publication, Pure AI.

Ashique and his team are engaged in a unique and compelling line of research. He and his colleagues are using artificial intelligence (AI) to analyze online comments in social media and pick out those that defend or are sympathetic to disenfranchised groups. That research led to the development of machine learning classifiers that effectively sort the "hopeful" and "helpful" from the hateful on social media.

The LTI researchers focused initially on finding supporting content about the Rohingya people, who began fleeing Myanmar in 2017 to avoid ethnic cleansing. Ashique explains why that group was chosen, and how his team used the fastText text representation and classification library with polyglot embedding. He also explains how they developed an original strategy they call "active sampling," which used the nearest neighbors in the comment-embedding space to construct a classifier able to detect comments defending the Rohingyas among larger numbers of disparaging and neutral comments.

He also talks about how Facebook, Twitter, YouTube, and other social media platforms, which are employing strategies to identify hate speech and misinformation on their platforms, could use his team's machine learning classifiers to complement that effort.

Ashique is teaching now, but his research continues, and he came to the podcast ready to share his story. It's a great story. You should check it out.

The WatersWorks Podcast will be available soon on iTunes and other podcast apps. But you can listen to it now on the Pure AI website. While you're there, feel free to read the two stories about Ashique's team's work ("Carnegie Mellon Uses AI To Counter Hate Speech with 'Hope Speech'" and "Carnegie Mellon Continues its Research on 'Hostility-Diffusing, Peace-Seeking Hope Speech'"). They include links to his group's research papers, which you also might want to read.

We'll be podcasting twice a month. I'll let you know when we finish the next one.

Posted by John K. Waters on September 3, 2020


GitHub's Ruby 2.7 Upgrade Journey

GitHub's upgrade this year to Ruby 2.7 was a massive, months-long undertaking that required a serious investment in engineering resources and time. The team maintaining the popular Microsoft-owned code-hosting-and-collaboration platform recently shared some of the details of that transition, which, among other things, required that they fix more than 11,000 warnings.

"Fixing that many warnings, some of which were coming from external libraries, takes a lot of coordination and teamwork," observed Eileen M. Uchitelle, a staff software engineer at GitHub and core Rails team member, in a blog post. "In order to be successful we needed a solid strategy for sharing the work."

Ruby 2.7 was released last December; the GitHub team completed the upgrade this summer and deployed to production in July. The team completed a major Rails upgrade almost exactly two years ago.

"Upgrading Rails on an application as large and as trafficked as GitHub is no small task," Uchitelle wrote in an earlier post. "It takes careful planning, good organization, and patience. The upgrade started out as kind of a hobby; engineers would work on it when they had free time. There was no dedicated team. As we made progress and gained traction it became not only something we hoped we could do, but a priority."

The team learned a lot from that Rails upgrade, Uchitelle said, and they used that knowledge on the Ruby upgrade, which was a somewhat more focused undertaking. They set up the application to be dual-bootable in both Ruby 2.6 and Ruby 2.7 by using an environment variable, she said. "This made it easy for us to make backwards compatible changes, merge those to the main branch, and avoid maintaining a long running branch for our upgrade," she said. "It also made it easier for other engineering teams who needed to make changes to get their system running with the new Ruby version."

GitHub was built with Ruby on Rails and launched in February 2008, and it's now one of the largest source code hosting services in the world, with an estimated 40 million users and more than 100 million repositories. The app itself is huge: more than 400,000 lines of code. And it gets hundreds of pull requests daily.

One of the key goals of the upgrade was to make it possible to run both Ruby and Rails deprecation-free and to keep pace with a modern upgrade cadence, the code hoster has said. Ruby 2.7 deprecates passing an options hash where a method expects keyword arguments; future versions of Ruby will no longer accept it. "At GitHub, we're committed to running deprecation-free on both Ruby and Rails to prevent falling behind on future upgrades," Uchitelle said.

Uchitelle left no doubt that the team feels the latest upgrade was worth all this effort, if only for the performance improvements. "The Ruby Core team is well on their way to fulfilling the promise of Ruby 3.0 being 3x faster," she said. She also pointed to improvements in application boot time in production mode (down from about 90 seconds to about 70 seconds) and a decrease in object allocations, which went from about 780k to about 668k. Object allocations affect available memory, she noted, so it's important to lower those numbers whenever possible.

"For any companies that are wondering if this upgrade is worth it, the answer is: 100%," she said. "Even without the performance improvements, falling behind on Ruby upgrades has drastic negative effects on the stability of your codebase. Upgrading Ruby supports your application health, improves performance, fixes language and framework bugs, and guides the future of the language!"

Both of Uchitelle's blog posts are well worth reading: "Upgrading GitHub to Ruby 2.7," and "Upgrading GitHub from Rails 3.2 to 5.2."

Posted by John K. Waters on September 1, 2020


Preview of Java Message Service 2.0 over AMQP on Azure Service Bus

Microsoft wants to empower its customers to lift and shift their Java and Spring workloads to Azure, while also helping them to modernize their application stack with best-in-class enterprise messaging in the cloud. Toward that end, Redmond recently announced preview support for Java Message Service (JMS) 2.0 over AMQP in the Azure Service Bus premium tier.

The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for passing business messages among apps or organizations. It comprises an efficient wire protocol that separates the network transport from broker architectures and management. AMQP version 1.0 supports a range of broker architectures that may be used to receive, queue, route, and deliver messages, or used peer-to-peer.

Microsoft's Azure Service Bus is a fully managed enterprise integration message broker that can decouple applications and services. It's used to connect applications, devices, and services running in the cloud, and often acts as a messaging backbone for cloud-based apps.

Microsoft program manager Ashish Chhabria announced the support in a blog post.

"The enterprise messaging ecosystem has been largely fragmented compared to the data ecosystem until the recent AMQP 1.0 protocol standardization in 2011 that drove consistent behavior across all enterprise message brokers guaranteed by the protocol implementation," Chhabria wrote. "However, this still did not lead to a standardized API contract, perpetuating the fragmentation in the enterprise messaging space.

"The Java Enterprise community (and by extension, Spring) has made some forward strides with the Java Message Service (JMS 1.1 and 2.0) specification to standardize the API utilized by producer and consumer applications when interacting with an enterprise messaging broker. The Apache QPID community furthered this by its implementation of the JMS API specification over AMQP. QPID-JMS, whether standalone or as part of the Spring JMS package, is the de-facto JMS implementation for most enterprise customers working with a variety of message brokers."

In this preview, Azure Service Bus supports all JMS API contracts, Chhabria said, enabling customers to bring their existing apps to Azure without rewriting them. The list of JMS features supported today includes:

  • Queues.
  • Topics.
  • Temporary queues.
  • Temporary topics.
  • Subscriptions.
    • Shared durable subscriptions.
    • Shared non-durable subscriptions.
    • Unshared durable subscriptions.
    • Unshared non-durable subscriptions.
  • QueueBrowser.
  • TopicBrowser.
  • Auto-creation of all the above entities (if they don't already exist).
  • Message selectors.
  • Sending messages with delivery delay (scheduled messages).

To connect an existing JMS-based application with Azure Service Bus, Chhabria explained, simply add the Azure Service Bus JMS Maven package or the Azure Service Bus starter for Spring Boot to the application's pom.xml and add the Azure Service Bus connection string to the configuration parameters.
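
Once those dependencies are in place, the application code itself can stay on the standard JMS 2.0 API. Here's a minimal sketch of what sending and receiving a message might look like; the factory helper, queue name, and timeout below are illustrative assumptions rather than Azure API names, so consult Microsoft's documentation for the actual ConnectionFactory class.

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class ServiceBusJmsSketch {
        public static void main(String[] args) {
            // The concrete ConnectionFactory comes from the Azure Service Bus JMS
            // dependency declared in pom.xml; this helper is a placeholder, not a
            // confirmed Azure class or method name.
            String connectionString = System.getenv("SERVICEBUS_CONNECTION_STRING");
            ConnectionFactory factory = createAzureConnectionFactory(connectionString);

            // From here on it's plain JMS 2.0: JMSContext is AutoCloseable and
            // replaces the older Connection/Session pairing from JMS 1.1.
            try (JMSContext context = factory.createContext()) {
                Queue queue = context.createQueue("orders"); // auto-created if it doesn't exist
                context.createProducer().send(queue, "hello over AMQP");
                String body = context.createConsumer(queue).receiveBody(String.class, 5_000);
                System.out.println("received: " + body);
            }
        }

        private static ConnectionFactory createAzureConnectionFactory(String connectionString) {
            // Placeholder: build the vendor ConnectionFactory per the Azure Service Bus JMS docs.
            throw new UnsupportedOperationException("wire up the Azure Service Bus JMS factory here");
        }
    }

The queue auto-creation noted in the comment relies on the "auto-creation of all the above entities" item in the feature list above.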

Posted by John K. Waters on August 26, 2020


Google's Jib Gaining Traction in the Broader Java Dev Ecosystem

Google introduced the beta version of its open-source Jib tool for containerizing Java applications in July 2018 with relatively little fanfare. Two years later, the tool has put on some serious muscle in the form of new features and plug-ins, and quietly become a developer favorite.

Jib is an open-source Java tool maintained by Google for building Docker images of Java applications. Jib 1.0.0, released to general availability last year, was designed to eliminate the need for deep Docker mastery. It effectively circumvented the need to install Docker, run a Docker daemon, and/or write a Dockerfile.

Jib accomplishes this by separating the Java application into multiple layers for more granular incremental builds. (Traditionally, a Java app is built as a single image layer with the application JAR.) "When you change your code, only your changes are rebuilt, not your entire application," the GitHub page explains. "These layers, by default, are layered on top of a distro-less base image."

"Jib has come a long way since it went GA," wrote Google software engineers Chanseok Oh and Appu Goundan in a blog post, "and now has a sizable community around it. The core Jib team has been working hard to expand the ecosystem, and we're confident that the community will only grow larger."

For example, Google publishes Jib as both a Maven and a Gradle plugin. The GitHub repository of Jib extensions to those plugins--the Jib Extension Framework, published in June--enables users to easily extend and tailor the Jib plugins' behavior. Jib extensions are supported from Jib Maven 2.3.0 and Jib Gradle 2.4.0.

"We think that the extension framework opens up a lot of possibilities, from fine-tuning image layers to containerizing GraalVM native images for fast startup or jlink images for small footprint," Oh and Goundan, said. 

Google published first-party Jib Maven and Gradle extensions to cover the Quarkus framework's "special containerization needs." (It was already possible to direct Quarkus to create an optimized image with the core Jib engine without applying the Jib build plugin.) Using the Jib build plugins enables finer-grained control over how to build and configure an image compared with Quarkus' built-in Jib engine-powered containerization.

Google has also put some effort into supporting the implementation of first-party integration for Spring Boot in Jib. For example, Jib's packaged containerizing-mode now works out of the box for Spring Boot, containerizing the original thin JAR rather than the fat Spring Boot JAR that's unsuitable for containerization.

Finally, Google has made sure that Jib works out of the box with Skaffold File Sync. Skaffold is a command line tool that facilitates continuous development for Kubernetes-native applications. Using the keyword auto, developers can take advantage of remote file synchronization to a running container with zero sync configuration.

Posted by John K. Waters on August 25, 2020


A Roundup of Red Hat Revelations from KubeCon+CloudNativeCon

So much Red Hat news has been coming out of the KubeCon + CloudNativeCon EU 2020 Virtual event this week that it has been hard to keep up. We reported earlier on the spotlight announcements around its dev tools for Kubernetes. But that was just the tip of the iceberg. The IBM subsidiary has had a busy week!

Here's a roundup of Red Hat's other big revelations from the show:

OpenShift 4.5 Gets Virtualization Platform
Red Hat's enormously popular packaged distribution of the open-source Kubernetes container management and orchestration system gets an upgrade that includes a new virtualization platform.

OpenShift 4.5, announced this week, includes OpenShift Virtualization, a new platform feature that enables IT organizations to bring standard VM-based workloads to Kubernetes, helping eliminate the workflow and development silos that typically exist between traditional and cloud-native application stacks. Virtual machines can now run side by side with cloud-native services and containers on Kubernetes, either to be rebuilt as container images or simply to make existing workflows more efficient, Red Hat said in a statement. OpenShift 4.5 also introduces full-stack automation for VMware vSphere deployments, making it "push-button" easy to deploy OpenShift on top of all currently supported vSphere environments.

Advanced Cluster Management for Kubernetes Goes GA
The advanced cluster management capability, now generally available, was designed to help organizations more effectively scale OpenShift deployments via unified Kubernetes management.

Built specifically for a cloud-native world, the cluster management toolset supports containerized application deployments across multiple clusters, whether an organization is just beginning to explore cloud-native computing or already running next-generation workloads in production, Red Hat says. Advanced Cluster Management for Kubernetes "meets organizations where they are on their containerization journey," from container proofs-of-concept to containerized production deployments, with tools to more effectively manage multiple Kubernetes clusters and enforce security policies and governance controls. The toolset also provides a single control plane, which aims to eliminate the fragmented tools that can be required to manage Kubernetes across the hybrid cloud.

New Edge-Computing Support
Red Hat also announced the addition of new capabilities and technologies to its hybrid cloud portfolio designed to support enterprise-grade edge computing, starting with OpenShift.

Red Hat OpenShift now supports three-node clusters, scaling down the size of Kubernetes deployments without compromising on capabilities and making the platform better suited for space-constrained edge sites. The Advanced Cluster Management for Kubernetes toolset provides management for thousands of edge sites along with core locations via a single, consistent view across the hybrid cloud, which makes managing scaled-out edge architectures as straightforward as managing traditional datacenters, Red Hat says.

Red Hat Joins Intuit on Argo Project
The two companies announced that they will collaborate on Argo CD, a declarative continuous delivery tool for Kubernetes deployments. If it works as planned, the tool will make it easier to manage configurations, definitions, and environments for both Kubernetes itself and the applications it hosts, using Git as the source of truth.

Argo CD, which was open sourced by Intuit in January 2018, is also an incubation-level project within the Cloud Native Computing Foundation (CNCF) and is currently deployed in production by many companies, including Electronic Arts, Major League Baseball, Tesla, and Ticketmaster.

Red Hat, a long-time leader in the open-source community, will help to drive the contributor base and engage with a broader open source ecosystem, the companies said. Red Hat also intends to work to integrate Argo's GitOps capabilities into future versions of OpenShift, which would provide a more developer-centric way of controlling Kubernetes infrastructure and applications.

Posted by John K. Waters on August 20, 2020


The Facial Rec Tech Wreck

Facial recognition technology has been taking it on the chin lately (pardon the pun). Earlier this week, the BBC reported that a UK court ruled the use of the technology by British police violated human rights and data protection laws in that country. A week before that, a team of researchers at the University of Chicago unveiled Fawkes, an algorithm and software tool that makes pixel-level changes to your image that are invisible to the human eye, but effectively mask you from the current crop of facial recognition applications. And back in July, Amazon, Google, and Microsoft were sued over claims they used photos of individuals to train their facial recognition software without getting prior consent, allegedly in violation of an Illinois biometric privacy statute. (Facebook had already settled a class-action claim that it also violated that law.)

Remember when we were all thrilled that we could open our phones with our faces?

One of the most important things to keep in mind as you're thinking about the future of the facial recognition technology industry, Gartner analyst Nick Ingelbrecht told me, is that it's not a single monolithic market today, and it never has been.

"It's made up of lots of different segments and use cases," he said, "ranging from 1:1 verification of customers' identities to border control screening; mobile payment verification to password replacement; one-to-many face matching for building access control to the many-to-many systems the police use."

Ingelbrecht is a research director with Gartner's Technology and Service Provider Research group. He focuses on computer vision, emerging trends and technologies, video and image analytics, and physical security. I asked him about recent developments in the facial recognition technology market, and announcements by Amazon, Microsoft, and IBM that they would be curtailing their efforts around this technology.

"The large US technology companies have been backing away from marketing facial recognition products for some years now," he said. "The recent announcements are just the latest in a series of retrenchments by large US tech firms to avoid the reputational risks and potential legal exposures associated with misapplication of biometric technologies. These decisions tend to slow commercial adoption generally, as well as the investment return on research. That said, facial recognition is not going away. Technologies will continue to mature, and research continues outside the US, especially in the People's Republic of China, where there is a very large internal market for facial recognition products."

Amazon has said it will stop selling facial recognition technology to police forces for a year. Microsoft, which doesn't currently sell facial rec tech to U.S. law enforcement, said it won't do so until the federal government passes a law regulating its use. And in a letter sent to Congress in June, IBM's CEO Arvind Krishna said his company has sunset its general-purpose facial recognition and analysis software products, and he called on Congress to regulate the use of the controversial technology by police.

Ironically, the most recent lawsuit against the three tech giants cited their use of IBM's Diversity in Faces Dataset, which the company developed to reduce racial and gender inaccuracies and biases in the technology. Big Blue released the dataset in January 2019 to the global research community "to advance the study of fairness and accuracy in facial recognition technology."

That effort was a response, at least in part, to conclusions by researchers from MIT and Stanford University in 2018, who found that commercially available facial-analysis programs from major technology companies demonstrated both skin-type and gender bias. Facial recognition is a type of image recognition technology that detects faces in captured images, and then quantifies the features of each image to match against templates stored in a database. In the researchers' experiments, facial recognition algorithm errors in three leading solutions were 49 times more likely for dark-skinned women than white men. These results raised serious questions about how neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated.
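
As a rough, purely conceptual illustration of that matching step (no real face-detection or embedding library, and made-up numbers throughout), a recognizer reduces each captured face to a feature vector and compares it against stored templates, reporting the closest match above some threshold:

    import java.util.Map;

    public class TemplateMatcher {

        // Cosine similarity between two feature vectors: 1.0 means identical direction.
        static double cosineSimilarity(double[] a, double[] b) {
            double dot = 0, na = 0, nb = 0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na += a[i] * a[i];
                nb += b[i] * b[i];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        // Returns the best-matching identity above the threshold, or null for "no match."
        static String bestMatch(double[] probe, Map<String, double[]> templates, double threshold) {
            String best = null;
            double bestScore = threshold;
            for (Map.Entry<String, double[]> e : templates.entrySet()) {
                double score = cosineSimilarity(probe, e.getValue());
                if (score > bestScore) {
                    bestScore = score;
                    best = e.getKey();
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Tiny illustrative "database" of enrolled templates (vectors are made up).
            Map<String, double[]> templates = Map.of(
                    "alice", new double[]{0.9, 0.1, 0.3},
                    "bob",   new double[]{0.2, 0.8, 0.5});
            double[] probe = {0.88, 0.12, 0.28}; // feature vector of a captured face
            System.out.println(bestMatch(probe, templates, 0.8)); // prints "alice"
        }
    }

The accuracy of the whole pipeline hinges on how those feature vectors are produced, which is exactly where the training-data biases the MIT and Stanford researchers identified come into play.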

Clearly, facial recognition technology has come a long way since Woodrow Wilson Bledsoe began his pioneering work in a field he all but created back in the 1960s. It's now part of a class of biometric tech in increasingly widespread use, primarily in security applications, that includes fingerprint, iris, speech, and gait recognition. It's also worth a lot of money--billions, according to industry watchers. The global facial recognition market was valued at $3.4 billion in 2019 (according to a Grand View Research report) and is anticipated to expand at a CAGR of 14.5% from 2020 to 2027.

Biometrics continue to be used extensively across a range of security applications--primarily access control and attendance tracking. And recent headlines notwithstanding, the technology also continues to improve, evolve, and expand at an explosive rate. Advancements in artificial intelligence and machine learning have been applied to biometric technology, leading to increased accuracy and accessibility.

Despite recent controversies and what could be called growing pains, facial recognition technology isn't going away any time soon, Ingelbrecht said. However, the market is already changing.

"The current arguments about facial recognition in the US will put further pressure on small vendors and accelerate consolidation, aggregation, or exit at a time when the industry is suffering from  a slump in buying activity, severe cash constraints, and supply chain difficulties," he said. "Outside the US, we expect facial recognition technologies to evolve at a rapid pace. It is ironic that in Europe, the GDPR [General Data Protection Regulation] has made commercial deployments of facial recognition very difficult, while governments and law enforcement there enjoy exemptions. In the US, there is extensive commercial use of facial recognition, especially in the retail and hospitality sector, but constraints are largely targeted at law enforcement."

Ingelbrecht's advice to the facial rec tech vendors during this volatile time: "They need to be very focused on their target market segments and differentiate themselves clearly via the business value they deliver to customers."

Posted by John K. Waters on August 13, 2020


JetBrains Kicks Off Product Release Binge with New IntelliJ IDEA IDE

Software development toolmaker JetBrains has been on a bit of a product-release binge that started on July 28 with the release of IntelliJ IDEA 2020.2, which was followed by the releases of the IntelliJ Scala Plugin 2020.2, PyCharm 2020.2, CLion 2020.2, PhpStorm 2020.2, the EduTools Plugin 3.9, GoLand 2020.2, IntelliJ Rust 2020.2, the Space Beta, and TeamCity 2020.1.3.

(Whew!)

Leading this pack of product promulgations, of course, is the venerable code-centric Java IDE, IntelliJ IDEA. The company released version 2020.1, the first major update of the year, in April with support for the latest Java 14 release, as well as new features for several Web and test frameworks, an upgrade of the debugger with dataflow analysis assistance, and a new LightEdit mode. The company's newest product is Space, an all-in-one team collaboration environment.

IntelliJ IDEA 2020.2 comes with numerous updates, including the ability to review and merge GitHub pull requests from inside the IDE, navigate between warnings and errors in a file with the Inspections widget, view the full list of issues in a current file with the Problems tool window, and get notified if code changes would break other files. This release also provides new features for Jakarta EE, Quarkus, Micronaut, Amazon SQS API, and OpenAPI.

But the marquee feature in this release is probably support for Java 15, which is due in September. IntelliJ IDEA 2020.2 is fully ready for that release, said Zlata Kalyuzhnaya, JetBrains marketing manager, in a blog post. "We've updated our support for the Records feature, which is now in its second preview, added basic support for Sealed Classes, and provided full support for Text Blocks, which are a full-fledged feature in Java 15," she wrote.
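
For anyone who hasn't tried those language features yet, here is a minimal sketch of what they look like together. The shapes example is illustrative and not taken from the JetBrains post; records and sealed classes are preview features in Java 15, so it compiles only with --enable-preview, while text blocks are final.

    public class Java15Demo {

        // Sealed interface (preview in Java 15): only the listed types may implement it.
        sealed interface Shape permits Circle, Square {}

        // Records (second preview in Java 15): concise, immutable data carriers.
        record Circle(double radius) implements Shape {}
        record Square(double side) implements Shape {}

        public static void main(String[] args) {
            // Text block (final in Java 15): a multi-line string literal.
            String json = """
                    {
                      "shape": "circle",
                      "radius": 2.0
                    }
                    """;
            Shape s = new Circle(2.0);
            System.out.println(json);
            System.out.println(s); // prints "Circle[radius=2.0]" via the generated toString
        }
    }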

The list of features Kalyuzhnaya highlighted in this release includes:

  • Inlay hint: If changes you make to a Java method or field will cause errors in other files, the IDE will notify you about it with an inlay hint. Click on the hint and the IDE will provide a list of the errors to fix.
  • Pinpointing runtime exception causes: In this release, JetBrains has supplemented exception stack trace analysis with dataflow analysis. Clicking on the stack trace takes you to the exact place in the code where the exception appears.
  • Improved autocompletion for Stream API methods: This release of the IDE is designed to work better with the Stream API. It allows you to start typing the stream method name within the collection itself, which triggers IntelliJ IDEA to insert 'stream()' automatically. Also, the IDE now suggests chained calls of the expected type in autocompletion.
  • New Variable refactorings: This release lets you replace occurrences of a variable in intermediate scopes, as opposed to only replacing one or all occurrences.
  • Regrouped Java Live Templates: This release groups the Java live templates under the Java node in Settings/Preferences to make it easier for developers to locate them among the live templates for all the other languages.

To learn more, visit the Java section of the JetBrains what's new page.

Posted by John K. Waters on August 12, 2020


Oracle's March Madness-Style Java Bracket

Oracle's Java Platform Group created a March Madness-style bracket to mark Java's 25th anniversary, substituting JEPs for the college basketball teams and using Twitter polls to determine the winners of the matchups.

The "Best of the JDK Feature Face-Off" concluded last week, with JDK Mission Control edging Records in the final round.

"I'm not saying that all developers like sports, but this thing really resonated," Sharat Chander, senior director of Oracle's Java Product Management and Developer Relations group, told me.

A total of 28,321 votes were cast in the mock tournament, with 8,969 votes cast in the finals alone. The tournament was followed on Twitter by a range of Java community leaders, including Java Champion Venkat Subramaniam and Oracle's language rockstar Brian Goetz, who offered comments and congratulations. It even inspired a kind of Twitter poetry slam, with exchanges of bits of doggerel like this one from Java architect Erik Costlow (@costlow):

To think I could ever see
A tool so lovely: JMC
A tool that streams events all day
Yet still performs without delay

The purpose of this project, Chander said, was to give visibility to Java features that were on releases starting with Java 9, with a couple of features in Java 8 added to round out the brackets. But the lighthearted exercise also yielded at least one unexpected insight, he said.

"This is a huge ecosystem of users, and we really saw where their passions lie," he said. "In the finals, 60 percent of the voters selected a tool over significant language enhancement. I think this is proof that open-sourcing JMC was the right decision. I don't think it would have advanced if it had been a gated feature."

JDK Mission Control (JMC) is a profiling and diagnostics tool suite for the Java Virtual Machine (JVM) used by developers to gather detailed low-level information about how the JVM and the Java application are behaving. Records (JEP 359) is a kind of type declaration in the Java language.
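
For context, a record collapses what would otherwise be a small class full of boilerplate into a single header line, with the compiler generating the constructor, accessors, equals, hashCode, and toString. The example below is a generic illustration, not one of the bracket entries.

    // A record declaration (a preview feature under JEP 359): one line of "type declaration."
    record Point(int x, int y) {}

    class RecordDemo {
        public static void main(String[] args) {
            Point p = new Point(3, 4);
            System.out.println(p.x() + ", " + p.y()); // generated accessors
            System.out.println(p);                    // prints "Point[x=3, y=4]"
        }
    }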

Designing the brackets to provide a meaningful competition was tricky, Chander said. They settled on four categories (language, libraries, tooling and runtime), and they used a randomizer to set the matchups. "There was no science involved," he said, "like saying feature x will go up against feature y for these specific reasons. We thought leaving it to chance would get us closer to a level playing field. Then it's about what you're really passionate about."

And there were a couple of deliberate omissions. "People hit us with questions right away about why we left out xyz feature," Chander said. "Where's RMI? Where's serialization? And we did omit some features outright. But we had to make some choices. If we tried to encompass everything, the bracket structure would simply be too large and everything would have gone sideways. And let's face it, if we had included lambdas, there'd have been no competition at all."

In case you missed it, you can see the entire bracket and final tournament results on @java.

BTW: Chander posted his own bracket on Twitter.

Posted by John K. Waters on July 16, 2020


Eclipse Foundation Releases 2020 Jakarta/Java Survey Findings

The Eclipse Foundation has been busy since the organization announced its move to Belgium last month. It announced the first milestone release of Jakarta EE 9, published a white paper about open source in Europe, and posted the results of its 2020 Jakarta EE Developer Survey.

Based on the responses of several thousand enterprise developers, the survey provides a fascinating look at the growth of open source enterprise Java, as well as some detail on developer interest in areas such as microservices and platforms.

"Since the release of Jakarta EE 8 in September 2019, we have witnessed meteoric growth for Jakarta EE, both in its use by developers and the certification of compatible products based on its specifications," said Mike Milinkovich, executive director of the Eclipse Foundation, in a statement. "Jakarta EE 8 has seen more certifications of Full Platform Compatible Products in 8 months than Java EE 8 had in over 2 years. With Jakarta EE 9 on track for release in fall of this year, the real work on innovation and the transition to cloud native Java and microservices support can begin."

The 2020 Jakarta EE Developer Survey received 19% more responses than last year's survey.

The survey results in their entirety can be found here.

The results show significantly increased growth in the use of Jakarta EE 8 and interest in cloud-native Java overall.

Key findings from this survey include:

  • Java/Jakarta EE 8 hits the mainstream with 55% adoption among the developers surveyed.
  • Spring/Spring Boot continues to be the leading framework for building cloud native applications, but its share declined 13 points (from 57% in 2019 to 44% in 2020).
  • With the delivery of Jakarta EE 8 in September 2019, Jakarta EE starts to fulfill its promise of accelerating business application development for the cloud, emerging as the second place cloud native framework with 35% usage in this year's survey.
  • Since its announcement early in 2019, the adoption of Red Hat's Quarkus has skyrocketed with 16% of developers now using the framework.
  • The overall usage of the microservices architecture for implementing Java systems in the cloud declined slightly since last year (39% in 2020 vs. 43% in 2019). This could be due to implementers realizing that microservices are not a "one size fits all" solution, a reading borne out by the use of the monolithic architecture approach doubling since last year, to 25% adoption in 2020.
  • Even with the generally flat use of microservices year-over-year, there is still continued interest from the Jakarta EE community for better support for microservices in the platform. Combined with the decline in adoption of Spring Boot and the rise of Jakarta EE, the takeaway here may be that developers are looking past single vendor microservices frameworks in favor of vendor-neutral standards for building Java microservices.
  • The adoption of Eclipse Che, an open source, Java-based developer workspace server and cloud Integrated Development Environment (IDE) for creating cloud native, enterprise applications on Kubernetes, has surged with reported usage growing from 4% in 2019 to 11% in 2020.
  • Java 11 use has surged to 28% (from 20% in 2019). Enterprises are also adopting Java 14, which sits at 11% usage; that uptake may reflect cloud providers looking to stay on the latest and greatest releases.
  • Java 8 adoption has decreased to 64% (from 84% in 2019), an indicator that developers are finally moving away from Java 8 and that Java 11 is replacing it as the default Java.

Posted by John K. Waters on June 25, 2020