Bruno Souza on Eclipse Jakarta EE 9 and the Future of Java

One of the biggest events in the Java universe last year was the official release of the Eclipse Jakarta EE 9 Platform, Web Profile specifications, and related TCKs. It moved enterprise Java fully from the javax.* namespace to the jakarta.* namespace. That's about all it did, actually, but it was an extremely consequential change.

I talked with lots of people about the latest shift in the evolution of enterprise Java, and one of the guys I was most excited to connect with on this topic was Bruno Souza, founder and leader of the Brazil-based SouJava, the largest Java User Group (JUG) in the world. Souza was one of the initiators of the Apache Harmony project to create a non-proprietary Java virtual machine, he serves on the Executive Committee of the Java Community Process (JCP), and he's on the board of advisors at Java-based Platform-as-a-Service (PaaS) provider Jelastic.

"I was very excited about Jakarta EE, and we [at SouJava] were onboard from the very beginning," Souza told me. "I think it was a very courageous move for Oracle to get all this awesome and extremely valuable IP and donate it to the Eclipse Foundation.… [When] you have this big player that does everything, it's very hard for anyone to come in and help. Oracle was this big guy doing everything, and so, everyone was just kind of doing small things around design… The only way to get a Java EE [community] that was more open and participatory was for Oracle to reduce its participation. And they did it!"

Souza offered kudos to the Eclipse Foundation and its executive director, Mike Milinkovich, even though Oracle refused to give up the javax.* namespace, forcing the foundation to adopt jakarta.*, and he believes the change will be better for the enterprise Java community in the long run. Souza said he initially favored a gradual transition and was not a supporter of the so-called Big Bang, the plan for a complete change-over from javax.* to jakarta.* that the foundation ultimately employed.

"Mike was a superb negotiator in all this," he said, "I don't think people realize what a huge thing this was…. The Java trademark is very valuable and it impacts a lot of different things, so I do understand [Oracle's position]…. And I wasn't a fan of the Big Bang, but honestly, I got convinced, and when the decision was made, I got behind it. Instead of us having this process that's going to take years and years and years, let's do it once, 'ripping off the band aid,' so everyone can get mad this one time, and we move on from there."

The release of Jakarta EE 9 might not seem like a big deal, Souza said, because it was primarily a shift of name spaces with very little in the way of upgrades. But now that that herculean task is complete, the community has the ability to innovate free of potential constraints from a dominant commercial player.

"My expectation is that now people will feel that this change was slow," he said, "but if you look at the long history of Java EE, it's relatively fast, and the process is going to get a lot faster…. And I think people want an ecosystem they can trust."

I asked Souza about what seems to be the background role played by the JCP in this exceedingly consequential change in the Java world.

"The JCP is the standards organization, and the truth is, innovation does not happen within the JCP," he said. "Innovation has always happened outside the JCP, in the field, where you can experiment, go as fast as you want, and break things. The standards process is not the place to innovate. But I will say that the JCP was very clear and very much at peace with that idea. And the elephant in the room is that the JCP is an Oracle entity. It's very open, and we've made it even more open in the last few years. And at the same time I think that there's always a barrier to how open you can be when you're inside a company. With Jakarta EE, we broke from this barrier and now we have an independent organization, which Oracle and IBM and others are part of, that can run a standards process. So, I don't see the JCP as reducing in size, but Java as growing."

You can hear the rest of my conversation with Bruno Souza, in which he reflects on the future of Java, on The WatersWorks Podcast. It's available on Apple Podcasts, Spotify, and most other providers.

Posted by John K. Waters on January 7, 2021 at 4:00 PM

New App Intelligence Platform Gets Control of 'Chaos'

Bionic, a company offering a new application intelligence platform, emerged from stealth recently and caught my eye. The platform was designed to provide enterprises with the ability to understand and control the "chaos" created by the "onslaught" of application changes pushed to production every day.

"Basically, what we realized is that, with everybody moving to cloud containers, Kubernetes, Agile, and DevOps, developers are empowered like never before," Idan Ninyo, co-founder and CEO, told me. "And they're releasing more and more changes into production every day. The result is developers are so empowered and independent it creates this chaos, because every developer can kind of do whatever he wants."

The Palo Alto, Calif.-based company was founded in 2019 by Ninyo and CTO Eyal Mamo to manage this chaos, Ninyo said. Both co-founders spent over five years in Unit 8200, the Israeli Intelligence Corps unit of the Israel Defense Forces responsible for collecting intelligence and code decryption.

Bionic's application intelligence platform automatically reverse engineers applications to deliver a comprehensive inventory with architecture and dataflows, monitor critical changes in production, and enable developer "guardrails" that enforce architecture. Bionic is agentless and designed to work across all environments, from on-premises monolithic apps to hosted cloud-native microservices.

Ninyo emphasized that Bionic's offering is not an observability solution.

"You can think about this technology as a kind of automated reverse engineering engine that deciphers the architecture from whatever the developers put out--the binary, the script, whatever it is. And by doing so, we can create a full map of the production environment within minutes. Our current customers include developers, DevOps platform engineers, security--basically everybody needs this data."

The current pandemic could have thrown a wrench into the company's plans, but because it turned out to be something of an accelerator of digital transformation initiatives, Ninyo said, the platform has seen rapid adoption. (He and his partner ran the launch from Tel Aviv.) Bionic's app intelligence platform is currently in use by IT, operations, and security teams at pharmaceutical, financial services, and technology companies.

"The pandemic accelerated digital transformation efforts across almost all organizations, especially since employees are working from home and enterprises are becoming more reliant on their digital offerings," Ninyo said in a follow-up email. "That has made the issue of application chaos ever more acute for enterprise IT teams. All these organizations realize that they must maintain compliance, reduce risk, and improve resiliency without slowing down the rate of development."

Posted by John K. Waters on December 17, 2020 at 4:01 PM

Intel Distro of OpenVINO Helps Devs with Inferencing Beyond Computer Vision

Developers continue to adapt to the advent of artificial intelligence (AI)--or, more precisely, machine learning (ML) and deep learning (DL)--in nearly everything. And the tools are evolving along with them and this growing demand.

One example is the OpenVINO toolkit. OpenVINO stands for Open Visual Inference and Neural Network Optimization. It's a toolkit developed and open sourced by Intel to facilitate faster inferencing in deep learning models on Intel hardware. It's optimized for convolutional neural networks (CNNs), deep learning models designed for working with two-dimensional image data, although it can also be used with one-dimensional and three-dimensional data. The open-source version is licensed under the Apache License v2.0.

I talked with Bill Pearson, VP of Intel's IoT group, about his company's distro of OpenVINO.

"I suppose the key thing this tool does is to simplify the process of helping developers with their high performance inferencing needs," Pearson told me. "We're seeing a number of different use cases and requirements for doing inference--which silicon do I need, and what programming models on top of it. This can be quite confusing, so with OpenVINO, we created an API-based programming model that allows you to write once and deploy anywhere. What that means practically is, we take all the complexity of writing for FPGA, GPU, CPU, or whatever. We hide that complexity, so the developers have a consistent interface that lets them literally write once and then deploy to any of those different architecture types, depending on their needs and their requirements."

The first set of use cases Intel focused on were all computer vision related, Pearson said, because that's where most developers needed help with inferencing. But with the most recent release of the toolkit, Intel is looking at inference beyond computer vision.

OpenVINO 2021.1, announced in October, is designed to enable end-to-end capabilities that leverage the toolkit for workloads beyond computer vision. These capabilities include audio, speech, language, and recommendation with new pretrained models; support for public models, code samples, and demos; and support for non-vision workloads in the DL Streamer component.

This release also introduces official support for models trained in the TensorFlow 2.2.x framework; support for 11th generation Intel Core processors (formerly code named Tiger Lake); and new inference performance enhancements with Intel Iris Xe graphics, Intel Deep Learning Boost (Intel DL Boost) instructions, and Intel Gaussian & Neural Accelerator 2.0 for low-power speech processing acceleration.

It comes with the OpenVINO model server, a scalable microservice that's an add-on to the Intel distro. "This add-on provides a gRPC or HTTP/REST endpoint for inference, making it easier to deploy models in cloud or edge server environments," the company said in a statement. Also, it's now implemented in C++, which enables a reduced container footprint (for example, less than 500 MB) and delivers higher throughput and lower latency.

There's also a beta release due this quarter that integrates the Deep Learning Workbench with the Intel DevCloud for the Edge. The result: developers can now graphically analyze models using the Deep Learning Workbench on Intel DevCloud for the Edge, instead of being limited to a local machine, to compare, visualize, and fine-tune a solution against multiple remote hardware configurations.

"We have to deliver features, of course, but we also need to deliver on performance," Pearson said. "With this release, that's what we've done. Applying CNN is helping us to take advantage of modern AI, to be able to get models that do a much better job at delivering what we'd expect from inference, and to be able to detect that defect or notice that object."

Pearson offered a customer example: "There are some great customer examples where we've seen this be true. The earliest things we've seen are in finding manufacturing defects. There's an aluminum engine provider who was doing inspections the old way--by hand. They had to wait for these objects to cool so that a person could turn it and look at it. You can imagine that doing that with a computer and a camera is going to be much more accurate and much quicker. And we've just seen this expand way beyond that to use cases in health care for medical screenings, things like sewer pipe inspections--which are basic, but essential--and in businesses like retail, where we're going through these frictionless checkout systems and trying to be able to see what objects are going through the scanner and tell what's being processed."

Intel OpenVINO is available today on its DevCloud.

Posted by John K. Waters on December 15, 2020 at 4:02 PM

Jakarta EE 9 Released

The Eclipse Foundation's Jakarta EE Working Group today announced the release of the Jakarta EE 9 Platform, Web Profile specifications, and related TCKs. The Foundation made the announcement during its JakartaOne Livestream event, currently underway online.

This is the release that moves enterprise Java fully from the javax.* namespace to the jakarta.* namespace. It "provides a new baseline for the evolution and innovation of enterprise Java technologies under an open, vendor-neutral, community-driven process," the Foundation said in a statement. The fact that it doesn't do much more than that is the key virtue of this release, says the Foundation's executive director, Mike Milinkovich.

"It's important to understand that announcing a release in which the only thing we did was change the namespace was very much by design," Milinkovich told me. "When you're talking about a 20-year-old, multibillion-dollar ecosystem, and moving it forward, it's really important that you do it in a way that makes it as easy as possible for the ecosystem to come along with you."
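Mechanically, the rename is simple enough that tooling (the Eclipse Transformer project, for example) can automate most of it. The sketch below is a toy illustration of that kind of source rewrite, not any real tool's implementation; the `NamespaceMigrator` class and its abbreviated prefix list are hypothetical. The subtlety it captures is that only the Java EE packages moved to jakarta.*, while javax.* packages belonging to Java SE (javax.sql, javax.crypto, and so on) stay where they are.

```java
import java.util.List;

// Toy sketch of the javax.* -> jakarta.* source rewrite that migration
// tools automate. Only EE packages move; Java SE javax.* packages do not.
public class NamespaceMigrator {

    // Hypothetical, abbreviated list of EE package prefixes that moved.
    private static final List<String> EE_PREFIXES = List.of(
        "javax.servlet", "javax.persistence", "javax.ws.rs",
        "javax.enterprise", "javax.json", "javax.faces");

    public static String migrateSource(String line) {
        for (String prefix : EE_PREFIXES) {
            if (line.contains(prefix)) {
                // javax.servlet -> jakarta.servlet, etc.
                return line.replace(prefix,
                    "jakarta." + prefix.substring("javax.".length()));
            }
        }
        return line; // Java SE javax.* usage is left untouched
    }

    public static void main(String[] args) {
        System.out.println(migrateSource("import javax.servlet.http.HttpServlet;"));
        System.out.println(migrateSource("import javax.sql.DataSource;"));
    }
}
```

Real migrations also have to handle binary dependencies, deployment descriptors, and string constants that reference the old package names, which is part of why the Big Bang was as disruptive as it was.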

The Foundation's Jakarta Working Group was established in March 2018, so they've been at this for a while. And the group has faced a few headwinds along the way--perhaps most notably Oracle's refusal to give up the javax.* namespace. The plan for a complete change-over from javax.* to jakarta.* was, of course, controversial. The move even had a nickname: "The Big Bang."

"To be fair to Oracle, it's a two-decades-old platform they acquired from Sun Microsystems that had lots of legal constraints on it," Milinkovich said, "agreements that go back many decades. At the end of the day, I don't think there was any ill will or anything like that [from Oracle], and I even have to acknowledge that the engineering teams that we worked with from Oracle--who have made all this possible--were fantastic. It's unfortunate that we weren't able to just carry the javax.* namespace forward, because that would have been easier for everybody. But it just turned out to be an unsolvable set of constraints."

This namespace change firmly establishes Jakarta EE 9 as a foundation on which cloud-era innovations can be built for future Java infrastructure, the Foundation says. It also enables enterprise end users and enterprise software vendors to migrate from previous versions to newer cloud-native tools, products, and platforms.

"Just changing the namespace is going to have a big impact," he said. "It allows the vendors who sell application servers--like WebLogic, WebSphere, JBoss, Open Liberty, Payara, etc.--the tooling ecosystem--JetBrains' IntelliJ IDEA, Apache NetBeans, our own Eclipse IDE--and the other Java runtimes--Spring Boot, Micronaut, Quarkus, and the like--to migrate forward with the least possible disruption. We are now free to innovate in our own namespace."

Approximately 90 percent of the Fortune 500 are running enterprise Java apps in production, the Foundation has said, and the Jakarta EE 9 specifications "give new life to this massive installed base."

The enterprise Java ecosystem is generating more interest from vendors than it has in years, Milinkovich said, which is something of a validation of the Foundation's approach.

"On the vendor side, it had been whittled down to IBM, Red Hat, Payara, Tomitribe, and Fujitsu," he said. "But now, we're getting a lot more vendor engagement, participation, and support. All good things."

With this release the Eclipse Foundation is also announcing the certification of Eclipse GlassFish 6.0.0, as well as several solutions working on compliance for 2021, including:

● Apusic AAS
● Fujitsu Software Enterprise Platform
● IBM WebSphere Liberty
● JBoss Enterprise Application Platform
● Open Liberty
● Payara Platform
● Piranha Micro
● Primeton AppServer
● TMax JEUS
● WildFly

It's worth keeping in mind that specification approval was fresh territory for the Eclipse Foundation, and it had to put together a brand new specification process. Jakarta EE is being developed and maintained under the Jakarta EE Specification Process, which replaces the Java Community Process (JCP) for Java EE.

Accepting the stewardship of enterprise Java and shepherding its successful journey to Jakarta EE is a real feather in the Foundation's cap, Milinkovich said.

"I think a lot of other organizations might have given up along the way," he said, "but the persistence, experience, and intellectual property sophistication we have at the Foundation led us to find a path that got us to where we are."

The Jakarta EE roadmap for 2021 includes at least one more (huge) baby step in Jakarta 9.1: the move of the base platform from Java SE 8 to Java SE 11. Efforts are already under way for that release, though Milinkovich didn't offer a release date.

"I don't know when it's coming, but we're turning the crank as fast as possible," he said.

Moving the base Java platform from Java SE 8 to Java SE 11 is a logical next step in this process. Java 8 is still the most widely used version of Java, and Java 11 is next in popularity.
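To make the jump concrete, here is a small illustrative snippet (my example, not anything from the Jakarta roadmap) that compiles on a Java SE 11 baseline but not on SE 8: local-variable type inference with `var` arrived in Java 10, and the `String` convenience methods shown arrived in Java 11.

```java
import java.util.List;

public class Java11Baseline {
    public static void main(String[] args) {
        // Local-variable type inference (Java 10)
        var versions = List.of("8", "11");

        // String convenience methods (Java 11)
        System.out.println("  ".isBlank());       // true
        System.out.println(" jakarta ".strip());  // "jakarta"

        System.out.println(String.join(" -> ", versions));
    }
}
```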

"By moving this Java platform forward we're helping to modernize the Java ecosystem," Milinkovich said.

Posted by John K. Waters on December 8, 2020 at 4:02 PM

JavaScript Turns 25: Pluralsight Gurus Weigh In

On Friday, December 4, JavaScript turns 25. The venerable client-side scripting language had a wobbly start when Brendan Eich (later a Mozilla co-founder) created it at Netscape in 1995, but today it runs on every Web browser, cell phone, Internet-enabled TV, and smart dishwasher.

When Java turned 25 in May of this year, online course provider Pluralsight shared the insights of its Java course authors on the impact of that juggernaut of a programming language and platform with ADTmag readers. The technology workforce development company turned to its popular JavaScript course authors on the occasion of the senior scripter's silver anniversary "to reflect on its impact and continued influence on the world, as well as their own personal journey with the programming language."

In addition to the wisdom of its teachers, Pluralsight is offering free access to 25 of its most popular JavaScript courses throughout December (five free courses a week). Also, check out the relaunched site, which includes new resources "designed to help JavaScript developers of all abilities."

How important is JavaScript today compared to when it first launched?

Cory House, @housecor: When JavaScript first launched, it was unclear if it would take off. It was written in a few days, and initially only offered in a single browser. Microsoft's first browser shipped with their own flavor of JavaScript, called JScript. Today, JavaScript makes the world go 'round. It runs on every computer. Every phone. TVs. Even some appliances. A huge portion of humanity relies on JavaScript every day without realizing it.

Jonathan Mills, @jonathanfmills: When JavaScript first launched, it was just there to help a webpage be interactive. JS is no longer contained to the browser. Now JavaScript has grown into a massive ecosystem that has impact in every area of software development. As a JS developer, I can write applications on the backend, frontend, mobile device, and IoT devices.

Nate Taylor, @taylonr: The easy answer is to talk about how JavaScript is used today across the entire spectrum of software development. From web applications, mobile applications, servers and even as stored functions in databases. And while that's true, I think it neglects the importance of JavaScript when it first launched. Prior to JavaScript's introduction, the web was not much more than static hypertext delivered in a browser. Without JavaScript, we likely don't have the web that we do today, but we didn't necessarily understand that when it was first released.

What makes JavaScript such a timeless programming language?

House: JavaScript is timeless because it's approachable, multiparadigm, and ubiquitous. There are multiple ways to accomplish a given task. You can code in an object-oriented or functional style. And since JavaScript has a C-like syntax, it feels familiar to people who have worked in other C-like languages. JavaScript remains "timeless" by continually embracing good ideas from other languages.

Mills: Honestly, I think it's a combination of simplicity and flexibility. The learning curve of JavaScript is much lower than the typical enterprise languages of C# and Java, so it is easy to pick up. But its flexibility in running everywhere and its very lightweight nature make it easy to get things done everywhere. The combination of those two things make JavaScript an easy tool to reach for given any job.

Taylor: I think the number one thing is the community. It's driven by countless engineers who are constantly exploring and trying out new things. Because of the community, we now have NodeJS, so that we can run JavaScript on the server. We have libraries like RamdaJS, which brings in concepts from functional programming languages and makes them accessible to JavaScript developers. We even have TypeScript as a super-set of JavaScript. And through all of that, the language has grown and adapted. In some ways, the fluidity of the language that causes so many of us problems when we first learn it, is part of what keeps it going even today.

What would the web or e-commerce look like if we didn't have JavaScript?

House: Without JavaScript, the web would be similar to the late 90's. Simpler and lighter-weight, but also less feature-rich. We'd have to post back to the server on every request, leading to a clunkier user experience.

Mills: While it's almost impossible to say what it would look like without JavaScript, I will say it would be fundamentally different.

Taylor: It would be slower and more frustrating. Imagine signing up for a service. The only way to know if your username was available would be to submit the entire form to the server and have it tell you if that was available. If the name was taken, you'd have to fill out the entire form again and resubmit. Eventually you would either find a unique name, or you'd give up. But with JavaScript, we're able to do this behind the scenes. While you're filling out the form--sometimes while you're typing the username--you can receive instant feedback if that name is available.

Additional problems would exist for e-commerce, as well. A common situation today is to see something in your cart and decide to change the quantity, or possibly even save it for later in a wish list. Those are relatively straightforward JavaScript calls. Without that, you would again be forced to resubmit the entire form until you were ready to proceed.

When did you first learn JavaScript? What impact has it had on you personally?

House: I learned JavaScript in the late 90s. It was awful. The debugging experience was horrendous. I often couldn't tell clearly what had failed. It ran significantly differently on Internet Explorer than Netscape. It was so painful early on that I embraced Flash and expected it to overtake HTML and JavaScript in popularity. Clearly, I was wrong! As JavaScript matured, so did related libraries and browsers. Today, coding in JavaScript is a wonderful, rapid feedback experience.

Mills: For the vast majority of my career I have been a backend developer in the .NET and Java space. But as the ecosystems grew and the sheer weight of projects increased, I found myself looking for alternatives that would let me solve business problems faster. I made the transition to node and AngularJS a while ago and have never looked back. The speed and reliability of the tooling is something I really enjoy.

Taylor: Sometime around 2009 was when I first started learning JavaScript. I didn't care for it, because I liked working on thick client applications in C#. That said, I did see its usefulness, particularly on one side project, where I was able to use jQuery for a grid that was displaying data. Experimenting with that bit of JavaScript helped open several doors for me. It allowed me to interview for a web developer position that ended up changing the course of my career.

In addition to helping me land jobs for my 9-to-5 work, JavaScript also indirectly led me to more teaching as I advanced in my career. I found that JavaScript offered different ways of solving problems than I was used to. And in explaining those ideas to other developers I realized that I enjoyed helping others learn and grow. It was exciting to see developers grasp new ideas.

What does the future look like for JavaScript? What's coming next year, 2-3 years from now, etc.?

House: For around 10 years, JavaScript didn't change at all. Thankfully, today new JavaScript releases occur every June. In the short term, I expect to continue to see mostly minor enhancements that implement good ideas from competing languages. Longer-term, I expect to see JavaScript increasingly used as a compile target: people will increasingly use languages that compile to JavaScript. Today, TypeScript is a popular example, but we may see other, higher-level alternatives grow more popular in the future. And while WebAssembly is likely to grow increasingly popular in the coming years, it will continue to interface with JavaScript to get things done.

Mills: One of the primary complaints I have heard about JavaScript is that the massive open-source ecosystem is so hard to navigate and new frameworks pop up every day. I find that is less the case now than it was a year ago, and that trend will continue. I find most developers are using one of a couple of frameworks on the frontend (React and Vue), and almost everyone I know is using Express on the backend, and I see that trend continuing. Improvements will be made and features added, but for the most part, I think the ecosystem has solidified to a point that you can reliably pick up a tool and know that it will be around for a while.

Taylor: I think we've finally moved past the phase of JavaScript where everyone was making jokes about how fast a new library came out, and now we're to the point where we're trying to use it to provide real value to our users and clients. As a result, I think we'll continue to see JavaScript maturing. It will continue to get new features that help ease development in JavaScript. We'll continue to see more and more uses in areas that we don't immediately expect. It wasn't that long ago that writing a mobile app required either Java or Swift, but now, with frameworks like React Native, it's possible to use the same JavaScript skills that developers already have to create mobile apps.

Posted by John K. Waters on December 1, 2020 at 4:02 PM

CNCF Survey 'Takes the Pulse' of the Global Cloud Native Community

This week's KubeCon + CloudNativeCon online event, wrapping up today, dominated our headlines this week, and for good reason. The flagship conference of the Cloud Native Computing Foundation (CNCF) was chock-a-block with vendor announcements and Kubernetes community news.

Among the noteworthy news from the CNCF itself was the publication of the results of its 2020 survey. Based on the responses of 1,324 members of the global cloud native community, the survey "takes the pulse" of that community to provide some clarity on where and how cloud native technologies are being adopted.

My list of key takeaways from this survey includes:

  • The use of containers in production has increased to 92%, up from 84% last year, and up 300% from the CNCF's first survey in 2016.
  • Kubernetes use in production has increased to 83%, up from 78% last year.
  • There has been a 50% increase in the use of all CNCF projects since last year's survey.
  • Usage of cloud native tools:
    • 82% of respondents use CI/CD pipelines in production.
    • 30% of respondents use serverless technologies in production.
    • 27% of respondents use a service mesh in production, a 50% increase over last year.
    • 55% of respondents use stateful applications in containers in production.

Public cloud continued to be the most popular data center approach in this year's survey (that's three years in a row). It increased slightly in usage from last year (64%, up from 62%). Private cloud or on-prem usage had the most significant increase (52%, up from 45%). Hybrid decreased slightly (36% down from 38% in 2019). Multi-cloud usage, a new survey option this year, accounted for 26%.

"For the purpose of this analysis, hybrid cloud refers to the use of a combination of on-premises and public cloud," the report explains. "Multi-cloud means using workloads across different clouds based on the type of cloud that fits the workload best. The portability that Kubernetes and cloud native tools provide makes it much simpler to switch from one public cloud vendor to another. The addition of multi-cloud as an option this year does not necessarily explain the drop in hybrid unless respondents use a different definition."

The survey compiled responses from the community gathered between May and June 2020. Of those responding, 54% indicated their organization is part of the CNCF End User Community, which comprises more than 140 companies and startups "committed to accelerating cloud native technologies and improving the deployment experience." Many of the respondents were based in Europe and North America, but it was a worldwide survey: 38% were from Europe; 33% from North America; 23% from Asia; and 6% from South and Central America, Africa, Australia, and Oceania. Two-thirds of respondents were from organizations with more than 100 employees, and 30% were from organizations with more than 5,000 employees, showing a strong enterprise representation.

This is a thoughtful survey with more stats on release cycles, normalized use of containers, Kubernetes environments, "container challenges," and more.

Posted by John K. Waters on November 19, 2020 at 10:15 AM

Java and Python Top List of Languages People Most Want to Teach Themselves

Here's a report for the times: Specops Software sifted data from its Google and YouTube search analytics tool to surface a list of the programming languages people most want to teach themselves. Python and Java topped that list of most "self-mastered" coding languages, not surprisingly. And YouTube was the primary tutor.

Specops found the most commonly searched-for programming languages on Google and YouTube within the last month, and then teased out the 13 languages with the most global searches, relying on phrases like "Learn Python" and "Learn Java." The search was further refined and the results merged to find the most searched-for language overall around the world.

The researchers then investigated search volumes in the United States, United Kingdom, Canada, and Australia, to see which programming language these countries have been searching for the most on Google and YouTube.

Python had the most global searches on Google (182,000 monthly searches) and YouTube (53,000 monthly searches), for a combined volume of 235,000 each month.

"On a global scale, Python is the most searched for programming language to learn," the report states. "As one of the most versatile coding languages today, it should come as no surprise that this is one of the most popular programming languages for those wanting to learn how to code--particularly beginners. What's more, our recent study found that it is one of the most sought-after programming languages by employers around the world too."

Coming in second was Java, with 64,000 Google searches and 20,000 YouTube searches, for a total volume of 84,000 monthly. "'Learn Java' was the second most popular keyword search for those wanting to learn how to code," the report states.

C++ came in third, with 56,000 total searches per month. SQL, PHP, and R placed fourth, fifth, and sixth, respectively, with combined Google and YouTube searches of 45,000, 31,400, and 14,000.

The least in-demand programming language in this report was Rust, with only 2,150 total searches monthly. Next to last on the list was JavaScript, with only 1,900 searches.

Among the US, the UK, Canada, and Australia, the US had the highest volume of collective searches across the 13 languages, totaling 182,150.

"As the employment market becomes more competitive, self-taught skills and experience have become increasingly valuable across the globe, and programming languages are no exception," the report states.

Posted by John K. Waters on November 12, 2020 at 12:02 PM

Jonas Bonér and the Reactive Manifesto II

It's been about seven years since Jonas Bonér, co-founder and CTO of Lightbend and creator of the Akka project, first published "The Reactive Manifesto" with contributions from Dave Farley, Roland Kuhn, and Martin Thompson. He and his colleagues used that document to provide an accessible and succinct definition of reactive systems--software developed using message-driven and event-driven approaches to achieve the resiliency, scalability, and responsiveness required for cloud-native applications.

"We needed a way to explain what we were talking about that wasn't full of geeky buzzwords and ended up just being confusing," Bonér told me at the time. "The manifesto distills things down to the essence of these new applications, which are being built right now, and provided a vocabulary that would allow developers to talk about these things."

This week, under the auspices of the Linux Foundation and the newly formed Reactive Foundation, Bonér and a veritable crowd of collaborators published an updated and expanded version of that document, entitled "The Reactive Principles." The press announcement characterized the new manifesto as a complement to the original that "incorporates the ideas, techniques, and patterns from both Reactive Programming and Reactive Systems into a set of practical principles, to apply Reactive to cloud native applications to realize the efficiencies of building for and running on the cloud."

"One of the problems with reactive is that it has been a little bit diluted over the years," Bonér explained during a recent Zoom interview. "People slapped 'reactive' on almost anything. Some things are actually reactive and some are variations. And some things called reactive aren't really living up to what we think it is. And that's why I felt it was important to get together with a lot of people, not just me, to define what reactive means and sort of breathe some new life into it."

The new document is the product of a collaboration among leading minds in the Reactive and broader distributed computing communities. Along with Bonér, the list of collaborators includes Roland Kuhn, Ben Christensen, Sergey Bykov, Clement Escoffier, Peter Vlugter, Josh Long, Ben Hindman, Vaughn Vernon, James Roper, Michael Behrendt, Kresten Thorup, Colin Breck, Allard Buijze, Derek Collison, Viktor Klang, Ben Hale, Steve Gury, Tyler Jewell, Ryland Degnan, James Ward, and Stephan Ewen.

The original manifesto was intentionally short and designed to be easily digestible ("Even CIOs read it," Bonér said.) The new "Principles" document is as rich as the original was lean. Among other things, it lays out the eight principles an application must embrace in its design, its architecture, and even its programming model to be considered Reactive:

  • Stay Responsive -- always respond in a timely manner
  • Accept Uncertainty -- build reliability despite unreliable foundations
  • Embrace Failure -- expect things to go wrong and build for resilience
  • Assert Autonomy -- design components that act independently and interact collaboratively
  • Tailor Consistency -- individualize consistency per component to balance availability and performance
  • Decouple Time -- process asynchronously to avoid coordination and waiting
  • Decouple Space -- create flexibility by embracing the network
  • Handle Dynamics -- continuously adapt to varying demand and resources

"The Reactive Principles" also offers sets of design principles for cloud-native and edge-native applications, as well as patterns that can help codify and apply the Reactive Principles to applications and systems.
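Several of these principles lend themselves to small code sketches. As a toy illustration of "Decouple Time" -- processing asynchronously instead of blocking while waiting on a result -- here is a minimal Java sketch using the standard CompletableFuture API (the fetchPrice service is hypothetical, standing in for a real network call):

```java
import java.util.concurrent.CompletableFuture;

public class DecoupleTimeDemo {
    // Hypothetical remote lookup: returns a future immediately
    // rather than blocking the calling thread.
    static CompletableFuture<Integer> fetchPrice(String sku) {
        return CompletableFuture.supplyAsync(() -> sku.length() * 10); // stand-in for I/O
    }

    public static void main(String[] args) {
        // The caller attaches a continuation and moves on; no thread
        // sits idle waiting for the lookup to complete.
        CompletableFuture<String> quote = fetchPrice("ABC-123")
                .thenApply(price -> "Total: " + price);

        // join() here only so the demo prints before the JVM exits.
        System.out.println(quote.join());
    }
}
```

The caller never blocks on the lookup itself; it composes further work onto the eventual result, which is the essence of decoupling components in time.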

The Reactive Foundation, launched last year with founding members Alibaba Cloud, Facebook, Lightbend, VMware, and VLINGO, is a non-profit organization established to provide a formal open governance model and neutral ecosystem for creating open-source Reactive projects. The group is a top-level project within the Linux Foundation that is "dedicated to being a catalyst for advancing a new landscape of technologies, standards, and vendors."

Bonér was set to unveil "The Reactive Principles" today during his keynote presentation at the Reactive Summit 2020 virtual event.

"The cloud needs a programming model that brings the same reliability, predictability, and scalability at the application layer that Kubernetes has brought to the infrastructure layer," Bonér said in a statement.

You can find an early edition of "The Reactive Manifesto" online. At least you could as of this writing. It's worth a look before digging into the new document, which, though much longer, is just as accessible.

The Reactive Foundation also announced that two open-source projects, R2DBC and Reactive Streams, have joined the foundation, and that a newly formed Technical Oversight Committee is evaluating additional open-source project candidates. The R2DBC project brings Reactive programming APIs to relational databases in an effort to provide a better alternative to JDBC and the "blocking" issues it creates for SQL databases in Reactive Systems. Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure, encompassing runtime environments (JVM and JavaScript) as well as network protocols.
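For readers who haven't worked with the standard, the Reactive Streams interfaces (Publisher, Subscriber, Subscription, Processor) were adopted into the JDK itself as java.util.concurrent.Flow in Java 9. A minimal sketch, assuming only the standard library, of publishing items to a subscriber while the library mediates demand on the caller's behalf:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static List<Integer> publishAndCollect() throws Exception {
        List<Integer> received = new CopyOnWriteArrayList<>();
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            // consume() installs a subscriber; the returned future
            // completes when the publisher is closed and fully drained.
            CompletableFuture<Void> done = publisher.consume(received::add);
            for (int i = 1; i <= 5; i++) {
                publisher.submit(i); // blocks only if the subscriber's buffer saturates
            }
            publisher.close();
            done.get(); // wait for the subscriber to receive everything
        }
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(publishAndCollect()); // prints [1, 2, 3, 4, 5]
    }
}
```

SubmissionPublisher translates the subscriber's request(n) demand signals into buffering, throttling a fast producer to the consumer's pace; that demand-signaling contract is what "non-blocking back pressure" refers to.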

The first project of the foundation, RSocket, is an implementation of Reactive Streams that provides a message-driven binary protocol for use on byte stream transports, such as TCP and WebSockets.


Posted by John K. Waters on November 10, 2020 at 12:02 PM

'Nature vs. Nurture' in Application Security Testing

It'll surprise no one in the software-making business to hear an app security vendor claim that the majority of applications contain at least one security flaw. (Really? Only one?) But a new report from Application Security Testing (AST) solutions provider Veracode serves as a cogent reminder that it often takes months to fix those flaws.

The report, "State of Software Security," available as a free download, analyzes 130,000 applications. The report's authors determined that it takes about six months for teams to close half the security flaws they find. The report also outlines some best practices to significantly improve those deplorable fix rates.

Veracode's researchers found that there are some factors teams tend to have a lot of control over, and others over which they have very little. The report's authors went with "nature vs. nurture" categories for these factors. Within the "nature" category, Veracode considered factors such as the size of the application and the organization, as well as security debt; the "nurture" side accounts for such actions as scanning frequency, cadence, and scanning via APIs.

Again, not surprisingly, addressing issues with modern DevSecOps practices results in higher flaw remediation rates, they found. Some examples: Using multiple application security scan types, working within smaller or more modern apps, and embedding security testing into the pipeline via an API. They all make a difference in reducing time to fix security defects, the report's authors found, even in apps with a less than ideal "nature." 

"The goal of software security isn't to write applications perfectly the first time, but to find and fix the flaws in a comprehensive and timely manner," said Chris Eng, Chief Research Officer at Veracode, in a statement. "Even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application with the right training and tools."

This is Veracode's 11th annual report on secure application development. A partial list of some other key findings includes:

  • Flawed applications are the norm: 76% of applications have at least one security flaw, but only 24% have high-severity flaws. This is a good sign that most applications do not have critical issues that pose serious risks to the application. Frequent scanning can reduce the time it takes to close half of observed findings by more than three weeks.
  • Open source flaws on the rise: while 70% of applications inherit at least one security flaw from their open source libraries, SOSS 11 also found that 30% of applications have more flaws in their open source libraries than in the code written in-house. The key lesson is that software security comes from getting the whole picture, which includes identifying and tracking the third-party code used in applications.
  • Multiple scan types prove efficacy of DevSecOps: teams using a combination of scan types including static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA) improve fix rates. Those using SAST and DAST together fix half of flaws 24 days faster.
  • Automation matters: those who automate security testing in the SDLC address half of the flaws 17.5 days faster than those that scan in a less automated fashion.
  • Paying down security debt is critical: the link between frequently scanning applications and faster remediation times has been established in Veracode's prior State of Software Security research. This year's report also found that reducing security debt – fixing the backlog of known flaws – lowers overall risk. SOSS 11 found that older applications with high flaw density experience much slower remediation times, adding an average of 63 days to close half of flaws.

Veracode's native SaaS solution is designed to enable companies to move AppSec to the cloud securely, and it supports cloud-native applications "while empowering developers to fix, not just find, flaws," the company says. Veracode has helped customers fix more than 10.5 million security defects in their software via analysis of more than 7.8 trillion lines of code between Jan. 1, 2020, and Oct. 5, 2020, the company says.

Posted by John K. Waters on November 5, 2020 at 12:01 PM