Here's a report for the times: Specops Software sifted Google and YouTube search data using Ahrefs.com's analytics tool to surface a list of the programming languages people most want to teach themselves. Python and Java topped that list of the most "self-mastered" coding languages, not surprisingly. And YouTube was the primary tutor.
Specops first identified the most commonly searched-for programming languages on Google and YouTube over the previous month, and then, using Ahrefs.com, teased out the 13 languages with the most global searches, relying on phrases like "Learn Python" and "Learn Java." The search was then refined and the results merged to find the most searched-for language overall worldwide.
The researchers then investigated search volumes in the United States, United Kingdom, Canada, and Australia, to see which programming language these countries have been searching for the most on Google and YouTube.
Python had the most global searches on Google (182,000 monthly searches) and YouTube (53,000 monthly searches), for a combined volume of 235,000 each month.
"On a global scale, Python is the most searched for programming language to learn," the report states…. As one of the most versatile coding languages today, it should come as no surprise that this is one of the most popular programming languages for those wanting to learn how to code – particularly beginners. What's more, our recent study found that it is one of the most sought-after programming languages by employers around the world too."
Coming in second was Java, with 64,000 Google searches and 20,000 YouTube searches, for a total volume of 84,000 monthly. "'Learn Java' was the second most popular keyword search for those wanting to learn how to code," the report states.
C++ came in third, with 56,000 total searches per month. SQL, PHP, and R placed fourth, fifth, and sixth, respectively, with combined Google and YouTube searches of 45,000, 31,400, and 14,000. The least in-demand programming language in this report was Rust, with only 2,150 total searches monthly. Next to last on the list was JavaScript, with only 1,900 searches.
Among the US, UK, Canada, and Australia, the US had the highest volume of collective searches across the 13 languages, totaling 182,150.
"As the employment market becomes more competitive, self-taught skills and experience have become increasingly valuable across the globe, and programming languages are no exception," the report states.
Posted by John K. Waters on November 12, 2020
It's been about seven years since Jonas Bonér, co-founder and CTO of Lightbend and creator of the Akka project, first published "The Reactive Manifesto" with contributions from Dave Farley, Roland Kuhn, and Martin Thompson. He and his colleagues used that document to provide an accessible and succinct definition of reactive systems--software developed using message-driven and event-driven approaches to achieve the resiliency, scalability, and responsiveness required for cloud-native applications.
"We needed a way to explain what we we're talking about that wasn't full of geeky buzzwords and ended up just being confusing," Bonér told me at the time. "The manifesto distills things down to the essence of these new applications, which are being built right now, and provided a vocabulary that would allow developers to talk about these things."
This week, under the auspices of the Linux Foundation and the newly formed Reactive Foundation, Bonér and a veritable crowd of collaborators published an updated and expanded version of that document, entitled "The Reactive Principles." The press announcement characterized the new manifesto as a complement to the original that "incorporates the ideas, techniques, and patterns from both Reactive Programming and Reactive Systems into a set of practical principles, to apply Reactive to cloud native applications to realize the efficiencies of building for and running on the cloud."
"One of the problems with reactive is that it has been a little bit diluted over the years," Bonér explained during a recent Zoom interview. "People slapped 'reactive' on almost anything. Some things are actually reactive and some are variations. And some things called reactive aren't really living up to what we think it is. And that's why I felt it was important to get together with a lot of people, not just me, to define what reactive means and sort of breathe some new life into it."
The new document is the product of a collaboration among leading minds in the Reactive and broader distributed computing communities. Along with Bonér, the list of collaborators includes Roland Kuhn, Ben Christensen, Sergey Bykov, Clement Escoffier, Peter Vlugter, Josh Long, Ben Hindman, Vaughn Vernon, James Roper, Michael Behrendt, Kresten Thorup, Colin Breck, Allard Buijze, Derek Collison, Viktor Klang, Ben Hale, Steve Gury, Tyler Jewell, Ryland Degnan, James Ward, and Stephan Ewen.
The original manifesto was intentionally short and designed to be easily digestible ("Even CIOs read it," Bonér said.) The new "Principles" document is as rich as the original was lean. Among other things, it lays out the eight principles an application must embrace in its design, its architecture, and even its programming model to be considered Reactive:
- Stay Responsive -- always respond in a timely manner
- Accept Uncertainty -- build reliability despite unreliable foundations
- Embrace Failure -- expect things to go wrong and build for resilience
- Assert Autonomy -- design components that act independently and interact collaboratively
- Tailor Consistency -- individualize consistency per component to balance availability and performance
- Decouple Time -- process asynchronously to avoid coordination and waiting
- Decouple Space -- create flexibility by embracing the network
- Handle Dynamics -- continuously adapt to varying demand and resources
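The principles are deliberately technology-agnostic, but a small sketch can make a couple of them concrete. The following Java snippet is purely illustrative (the QuoteService class and its methods are hypothetical, not drawn from the document): it shows "Decouple Time" plus "Stay Responsive" and "Embrace Failure" using nothing more exotic than the JDK's CompletableFuture, where the caller never blocks, the wait is bounded, and failure degrades to a cached answer instead of an error.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical price-lookup component used only for illustration.
public class QuoteService {

    // "Decouple Time": the caller gets a CompletableFuture immediately and
    // never blocks waiting for the remote lookup to finish.
    public CompletableFuture<Double> latestPrice(String symbol) {
        return CompletableFuture.supplyAsync(() -> fetchFromRemote(symbol))
                // "Stay Responsive" / "Embrace Failure": bound the wait and
                // fall back to a cached value instead of hanging or throwing.
                .orTimeout(200, TimeUnit.MILLISECONDS)
                .exceptionally(err -> cachedPrice(symbol));
    }

    private double fetchFromRemote(String symbol) {
        // Stand-in for a slow network call.
        return 42.0;
    }

    private double cachedPrice(String symbol) {
        return 41.5;
    }

    public static void main(String[] args) {
        double price = new QuoteService().latestPrice("ACME").join();
        System.out.println("price = " + price);
    }
}
```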
"The Reactive Principles" also offers sets of design principles for cloud-native and edge-native applications, as well as patterns that can help codify and apply the Reactive Principles to applications and systems.
The Reactive Foundation, launched last year with founding members Alibaba Cloud, Facebook, Lightbend, VMware, and VLINGO, is a non-profit organization established to provide a formal open governance model and neutral ecosystem for creating open-source Reactive projects. The group is a top-level project within the Linux Foundation that is "dedicated to being a catalyst for advancing a new landscape of technologies, standards, and vendors."
Bonér was set to unveil "The Reactive Principles" today during his keynote presentation at the Reactive Summit 2020 virtual event.
"The cloud needs a programming model that brings the same reliability, predictability, and scalability at the application layer that Kubernetes has brought to the infrastructure layer," Bonér said in a statement.
You can find an early edition of "The Reactive Manifesto" online. At least you could as of this writing. It's worth a look before digging into the new document, which, though much longer, is just as accessible.
The Reactive Foundation also announced that two open-source projects, R2DBC and Reactive Streams, have joined the foundation, and that a newly formed Technical Oversight Committee is evaluating additional open-source project candidates. The R2DBC project brings Reactive programming APIs to relational databases in an effort to provide a better alternative to JDBC and the "blocking" issues it creates for SQL databases in Reactive Systems. Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure, encompassing runtime environments (JVM and JavaScript) as well as network protocols.
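For readers who haven't worked with it, a minimal sketch of the back-pressure idea may help. The example below is my own illustration, not taken from either project; it uses the JDK's java.util.concurrent.Flow API, which adopted the Reactive Streams interfaces in Java 9, and shows a subscriber that pulls one element at a time so a fast publisher can never overwhelm it.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // A subscriber that signals demand for one element at a time, the core of
    // Reactive Streams' non-blocking back pressure.
    static final class OneAtATime implements Flow.Subscriber<String> {
        private Flow.Subscription subscription;

        @Override public void onSubscribe(Flow.Subscription s) {
            this.subscription = s;
            s.request(1);                 // ask for the first element only
        }

        @Override public void onNext(String item) {
            System.out.println("got " + item);
            subscription.request(1);      // pull the next element when ready
        }

        @Override public void onError(Throwable t) { t.printStackTrace(); }
        @Override public void onComplete()         { System.out.println("done"); }
    }

    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new OneAtATime());
            publisher.submit("a");
            publisher.submit("b");
            publisher.submit("c");
        }                                  // close() signals onComplete
        Thread.sleep(500);                 // give the async delivery time to finish
    }
}
```

That request(n) handshake is the same contract R2DBC drivers rely on to keep database producers from overwhelming their consumers.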
The first project of the foundation, RSocket, is an implementation of Reactive Streams that provides a message-driven binary protocol for use on byte stream transports, such as TCP and WebSockets.
Posted by John K. Waters on November 10, 2020
It'll surprise no one in the software-making business to hear an app security vendor claim that the majority of applications contain at least one security flaw. (Really? Only one?) But a new report from Application Security Testing (AST) solutions provider Veracode serves as a cogent reminder that it often takes months to fix those flaws.
The report, "State of Software Security," available as a free download, analyzes 130,000 applications. The report's authors determined that it takes about six months for teams to close half the security flaws they find. The report also outlines some best practices to significantly improve those deplorable fix rates.
Veracode's researchers found that there are some factors teams tend to have a lot of control over, and others over which they often have very little control. The report's authors went with "nature vs. nurture" categories for these factors. Within the "nature" category, Veracode considered factors such as the size of the application and the organization, as well as security debt; the "nurture" side accounts for such actions as scanning frequency, cadence, and scanning via APIs.
Again, not surprisingly, addressing issues with modern DevSecOps practices results in higher flaw remediation rates, they found. Some examples: Using multiple application security scan types, working within smaller or more modern apps, and embedding security testing into the pipeline via an API. They all make a difference in reducing time to fix security defects, the report's authors found, even in apps with a less than ideal "nature."
"The goal of software security isn't to write applications perfectly the first time, but to find and fix the flaws in a comprehensive and timely manner," said Chris Eng, Chief Research Officer at Veracode, in a statement. "Even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application with the right training and tools."
This is Veracode's 11th annual report on secure application development. Among the report's other key findings:
- Flawed applications are the norm: 76% of applications have at least one security flaw, but only 24% have high-severity flaws. This is a good sign that most applications do not have critical issues that pose serious risks to the application. Frequent scanning can reduce the time it takes to close half of observed findings by more than three weeks.
- Open source flaws on the rise: while 70% of applications inherit at least one security flaw from their open source libraries, SOSS 11 also found that 30% of applications have more flaws in their open source libraries than in the code written in-house. The key lesson is that software security comes from getting the whole picture, which includes identifying and tracking the third-party code used in applications.
- Multiple scan types prove efficacy of DevSecOps: teams using a combination of scan types including static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA) improve fix rates. Those using SAST and DAST together fix half of flaws 24 days faster.
- Automation matters: those who automate security testing in the SDLC address half of the flaws 17.5 days faster than those that scan in a less automated fashion.
- Paying down security debt is critical: the link between frequently scanning applications and faster remediation times has been established in Veracode's prior State of Software Security research. This year's report also found that reducing security debt – fixing the backlog of known flaws – lowers overall risk. SOSS 11 found that older applications with high flaw density experience much slower remediation times, adding an average of 63 days to close half of flaws.
Veracode's native SaaS solution is designed to enable companies to move AppSec to the cloud securely, and it supports cloud-native applications "while empowering developers to fix, not just find, flaws," the company says. Veracode has helped customers fix more than 10.5 million security defects in their software via analysis of more than 7.8 trillion lines of code between Jan. 1, 2020, and Oct. 5, 2020, the company says.
Posted by John K. Waters on November 5, 2020
The Cloud Security Alliance (CSA) published a report in late September that I just got around to reading. I guess it was the Halloween season that drew me to the title, "Top Threats to Cloud Computing: Egregious 11 Deep Dive." It provides case study analyses that apply last year's The Egregious 11: Top Threats to Cloud Computing to nine recent cybersecurity attacks and breaches. (Both reports featured a scary octopus on their covers.)
All kidding aside, the deep dive is well worth a look, and it's free. The so-called Egregious 11, you'll recall, were culled from a survey of 241 industry experts on security issues in the cloud. The respondents rated 11 "salient threats, risks, and vulnerabilities" in their cloud environments. The Top Threats Working Group used the survey results, along with its own expertise, to create the final 2019 report.
The new report looks at nine actual attacks and breaches, including "a major financial services company, a leading enterprise video communications firm, and a multinational grocery chain," for its foundation. The report "connects the dots between the CSA Top Threats in terms of security analysis," Jon-Michael C. Brook, chair of the Top Threats Working Group, wrote in a foreword to the report. And I think it does so quite effectively.
The list of organizations whose breaches were analyzed is a sexy one. It includes Capital One, Disney+, Dow Jones, GitHub, Imperva, Ring, Tesco, Tesla, and Zoom.
Each of the nine examples is presented in the form of a reference chart and a detailed narrative. The reference chart's format provides an attack-style synopsis of the actor, spanning from threats and vulnerabilities to end controls and mitigations.
Here's one example of the narrative portion of the Capital One breach analysis:
Actor: Former engineer of AWS with insider knowledge on platform vulnerabilities gained credentials from a misconfigured web application to extract sensitive information from protected cloud folders.
Attack: Open-source anonymity network (Tor) and VPN services (iPredator) hides attacker. Misconfigured ModSecurity WAF used by Capital One with their AWS cloud operations relayed AWS cloud metadata services including credentials to cloud instances. Over privileged access given to the WAF allowed the attacker to gain access to protected cloud storage (AWS S3 buckets) with the ability to read data sync and exfiltrate sensitive information.
Vulnerabilities: A Server Side Request Forgery (SSRF) vulnerability on the platform was exposed in which a server (e.g. Capital One's WAF) was tricked into requests from an attacker to access cloud server configurations (e.g. EC2 metadata service) including credentials to whatever the server had access to.
Data Breach: A web application was compromised for IAM credentials to access multiple cloud folders. The cloud folders accessed had read rights to 106 million records of customer information that were exfiltrated.
Data Loss: The data extracted were credit card applications and credit card customer status reports between 2005-2019. Personally Identifiable Information (PII) from the applications included applicant names, addresses, zip codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income. The credit card customer PII and financial records extracted included credit scores, credit limits, balances, payment history, contact information, social security numbers, and linked bank accounts. Approximately 140,000 Social Security numbers and 80,000 linked bank account numbers of secured credit card customers were exfiltrated.
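The report's reference charts go on to map controls and mitigations for each case, which aren't reproduced here. As a purely illustrative sketch of one such control (my own example, not code from the CSA report), a service that fetches caller-supplied URLs can refuse anything that resolves to the cloud instance metadata endpoint or other internal addresses, which blunts the SSRF path described above:

```java
import java.net.InetAddress;
import java.net.URI;

public class SsrfGuard {

    // Reject URLs that resolve to link-local addresses such as the cloud
    // instance metadata endpoint (169.254.169.254 on AWS EC2), as well as
    // loopback and internal RFC 1918 ranges.
    public static boolean isAllowed(String url) {
        try {
            String host = URI.create(url).getHost();
            if (host == null) return false;
            InetAddress addr = InetAddress.getByName(host);
            return !addr.isLinkLocalAddress()
                    && !addr.isLoopbackAddress()
                    && !addr.isSiteLocalAddress();
        } catch (Exception e) {
            return false;   // fail closed on anything we cannot parse or resolve
        }
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("http://169.254.169.254/latest/meta-data/")); // false
        System.out.println(isAllowed("https://example.com/"));                     // true
    }
}
```

In practice a check like this sits alongside the controls the narrative itself points to, such as scoping down the over-privileged role granted to the WAF.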
I think these narratives read like mystery/thrillers, and the companies are name brands for the most part. Even if you're not into this kind of thing, this is an accessible report with useful insights that you should definitely read, developers and IT pros alike.
Both reports were prepared by the CSA's Top Threats Working Group, which, the CSA says, aims to provide organizations with "an up-to-date, expert-informed understanding of cloud security risks, threats, and vulnerabilities in order to make educated risk-management decisions regarding cloud adoption strategies."
When the CSA first hit my radar in 2012, it described itself as a not-for-profit coalition of companies, individuals, organizations, and "key stakeholders" with an interest in promoting secure cloud computing. Its mission hasn't changed, and the website features a nice history and list of milestones. The group also issues the Certificate of Cloud Auditing Knowledge (CCAK) certification, currently the only credential for industry professionals who demonstrate expertise in the essential principles of auditing cloud computing systems. The CSA developed the most widely adopted cloud security audit criteria and organizational certification, which makes the group uniquely positioned to lead industry efforts to make sure that industry professionals have the requisite skill set for auditing the cloud environment.
Posted by John K. Waters on November 2, 2020
It's been almost exactly one year since Docker Inc. sold its enterprise platform business to Mirantis, a commercial distributor of OpenStack, to focus on the needs of enterprise application development teams. Since then, the company behind the leading containerization platform has concentrated on refining its dev tools and building an ecosystem of partners to support "code-to-cloud" automation for developers.
Docker CEO Scott Johnston talked with a group of reporters this week about the progress of that strategy and laid out the company's path going forward.
The sale to Mirantis was a burn-the-ships commitment to a massive restructuring of the company. "We sold off three quarters of our employee base, all the enterprise customers, and all the enterprise customer revenue," Johnston said. "All in the spirit of restarting the company with a new mission."
That new mission would see Docker spending the next year with a "laser-like focus" on app development teams, embracing partners "in a first-class-citizen type way," and "building a sustainable community, sustainable code, and a sustainable company around the restructured entity."
The result? 11.3 million monthly active users sharing applications from 7.9 million images on Docker Hub repositories with 13.6 billion code pulls per month--up 70% over last year, Johnston said.
Johnston emphasized Docker's commitment to embracing a community to provide for the needs of enterprise developers. "We walk the talk," he said, citing the open sourcing of the Compose Specification in April on GitHub with open governance. Compose is a developer-focused standard for defining cloud and platform agnostic container-based applications.
He pointed to some key partnerships, including a deal with Microsoft that integrates the Azure public cloud and the Visual Studio Code editor with Docker Desktop. The company also recently announced a partnership with open-source security platform provider Snyk to deliver a native vulnerability scanning service for container images. And he pointed to the recent agreement with Amazon Web Services (AWS) to create a simplified workflow for developers using Docker Compose to build apps for Amazon's Elastic Container Service (ECS) and Amazon ECS on AWS Fargate. Docker has also partnered with Atlassian and the Microsoft-owned GitHub to make Docker Hub something of a nexus for integrating, configuring, and managing application components.
Former industry analyst Donnie Berkholz, who joined the company as VP of product just a few weeks ago, was on hand for the briefing. All of these integrations are about helping developers get from code to cloud quickly, he said. "And that's not just about things we deliver, but these partnerships," he said. "Because developers are building, sharing, and deploying all over the internet. We can't just have a single point solution to solve their problems. We have to meet developers where they are and where they're going. And so really the partnership ecosystem that we're forming around Docker is the core to doing that."
Johnston also addressed some pricing changes that went into effect over the past year. The company added per-seat pricing for subscriptions, and then followed up with an annual purchasing option that offers discounts for longer-term commitments. The company's free plan, which gave developers unlimited public repositories and one private repository, proved not to be economically sustainable "when we have tens of millions of developers today and tens of millions more coming tomorrow," Johnston said. The adjustments are intended to make sure a small subset of "overconsuming" users doesn't negatively impact the rest of the users.
"In order to build a sustainable community and sustainable code, we have to build a sustainable company around the new restructured entity," he said. "So we put limits on the upper bounds of the all-you-can-eat buffet, so we're able to scale to tens of millions more developers and continue to offer free services, while still having a viable business that can sustain all the investments required in order to do that."
Posted by John K. Waters on October 29, 2020
Open-source Java platform provider Azul Systems today unveiled a new series of migration tools and services designed to help enterprise and public sector IT teams transition from proprietary Oracle Java SE to its Zulu builds of OpenJDK. These tools and services include inventory and usage auditing, testing, and certification, "to help organizations move their entire Java estate quickly, easily, and securely from Oracle to Azul's OpenJDK platform," the company said in a statement.
"Oracle's new Java licensing and commercial support pricing changes--its subscription model--is definitely not for everyone," Azul president and CEO Scott Sellers told ADTmag. "Lots of users are looking for cost-effective, open source alternatives. And the truth is, for most organizations, migration to Azul Zulu builds of OpenJDK is fairly easy. It's just a straightforward drop-in replacement for Oracle Java SE, because it's based on the same underlying source code developed in the OpenJDK project. Oracle Java and Azul's Java products are effectively identical with regard to Java specification compliance and performance."
But some Java-based organizations face more complex migration scenarios, Sellers explained--situations in which the developers of legacy systems are long gone, or there's simply a lack of the necessary resources in-house to manage such a project themselves. For those types of customers, Azul and its certified partner ecosystem now provide advisory support and project management, plus turnkey migration and application modernization.
Azul is offering two levels of its migration services: Migration and Modernization.
- Migration: A typical scenario involves an organization that wishes to migrate directly from Oracle Java to Azul Zulu builds of OpenJDK. In this case, Azul partners work alongside the organization's technical teams to expedite a complete turnkey migration, from inventory and usage auditing through testing and certification. The process typically takes a few weeks from planning to completion: it creates an inventory of the Java estate by vendor, Java version, security patch level, and which Java runtimes are currently in use, then defines the timetable and executes the migration through final testing and go-live.
- Modernization: This service is ideal for customers wishing to modernize their applications from older Java versions to more current releases, for example, applications based on Java 6 or 7 updated to run on Java 8 or 11. Modernization initiatives result in Java deployments that are inherently more secure and maintainable.
Azul is partnering with EPAM Systems, a global provider of digital platform engineering and software development services, to deliver the new migration services. EPAM's end-to-end solutions (from strategic consulting to engineering at scale) help customers quickly migrate and modernize legacy Java systems with minimal disruption and risk.
"We're still a relatively small company," Sellers said. "What we've done is to develop the tools and services, and we're partnering with others to deliver them.
"Migrating from Oracle Java SE to an open source OpenJDK distributions, like Azul Zulu, in complex legacy systems or across an enterprise is an undertaking that requires thorough planning and implementation, and a technology partner experienced in Java and open source as well as complex enterprise landscapes," said Eli Feldman, CTO in EPAM's Advanced Technology group, in a statement. "We're pleased to be working with Azul in offering this new migration service and look forward to using our depth and breadth of experience to provide a seamless process to those interested in successfully completing the switch."
Sunnyvale, Calif.-based Azul bills itself as the only vendor focused exclusively on Java and the Java Virtual Machine (JVM). The Zing JVM is based on Oracle's HotSpot, a core component of Java SE. Zing is a "no-pause" JVM designed to eliminate Garbage Collection (GC) pauses, a long-standing challenge for Java developers. This pauselessness, which Azul calls "generational pauseless garbage collection" (GPGC), enables Java app instances to scale dynamically and reliably. Sellers has called GC "the Achilles heel of Java."
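Azul's GPGC itself can't be shown in a few lines, but as a small, hypothetical sketch of the problem it targets, the JDK's standard management beans let any application report the cumulative time its collectors have spent, which is exactly the overhead a "no-pause" collector aims to keep off the critical path:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcReport {
    public static void main(String[] args) {
        // Cumulative collection counts and times for each collector in this JVM.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-30s collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```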
Posted by John K. Waters on October 22, 2020
When IBM and the organizers of the Call for Code Global Challenge announced the grand prize winner last week (our coverage here) of its third annual international tech-for-good competition, they also unveiled a new Call for Code initiative: Call for Code for Racial Justice, which IBM is describing as "a vital initiative that brings together technology and a powerful ecosystem to combat one of the greatest challenges of our time: racial injustice."
Just as the original Call for Code Challenge urged developers around the world to use their skills to address climate change, and then both climate change and the COVID-19 pandemic, the Call for Code for Racial Justice expands that call further, urging the international community of hundreds of thousands of developers to contribute solutions that confront racial inequalities.
Call for Code for Racial Justice encourages the adoption and innovation of open source projects to drive progress in three key areas: Police and Judicial Reform and Accountability; Diverse Representation; and Policy and Legislation Reform.
The new initiative emerged from an internal IBM program called the Call for Code Emb(race) Challenge. It was started by Black IBMers who, along with Red Hatters and IBM allies, applied their ingenuity and expertise to design and develop technology solutions to address the problem of systemic racism. These solutions are now being opened up to the world as open source projects through the Call for Code tech-for-good platform.
The organizers are partnering with a number of organizations, including: Black Girls Code, Collab Capital, Dream Corps, The United Way Worldwide, American Airlines, Cloud Native Computing Foundation, and Red Hat.
"Black Girls Code was created to introduce programming and technology to a new generation of coders," said Anesha Grant, director of alumnae and educational programs at Black Girls Code, in a statement, "and we believe that a new generation of coders will shape our futures. We're excited to participate in Call for Code for Racial Justice and to spark meaningful change."
The Call for Code for Racial Justice launched officially this week at the virtual All Things Open conference.
The IBM Call for Code for Racial Justice team kicked off the competition by contributing "solution starters" to the open source community. These projects were built using technologies such as Red Hat OpenShift, IBM Cloud, IBM Watson, Blockchain ledger, Node.js, Vue.js, Docker, Kubernetes and Tekton, said Evaristus Mainsah, General Manager, IBM Hybrid Cloud and Edge Ecosystem and co-chair of IBM's Black Executive Council, and Willie Tejada, General Manager, IBM Developer Ecosystems Group and Chief Developer Advocate, in a joint blog post.
"We're asking developers and ecosystem partners to join us in combatting racial injustice by testing, extending and implementing these open source solutions, and contributing their own diverse perspectives and expertise to make them even stronger," they said.
The list of solution starters includes:
- Five Fifths Voter: This web application empowers Black people and other minorities to ensure their voices are heard by exercising their right to vote. It is a virtual one-stop-shop to help determine optimal voting strategies for each individual and limit the impact of previous suppression issues.
- Legit-info: Local legislation and policies can have significant impact on areas as far-reaching as jobs, the environment and safety. Legit-info helps individuals understand in their own language the legislation that shapes their lives.
- Incident Accuracy Reporting System: This platform for police incident reporting allows witnesses and victims to corroborate evidence from multiple sources and assess against an official police report. The system creates a more reliable record of all accounts of the incident.
- Open Sentencing: To help public defenders better serve their clients, Open Sentencing identifies racial bias in data such as demographics that can help make a stronger case.
- Truth Loop: This app helps communities simply understand the policies, regulations and legislation that will impact them the most.
"Each year I'm amazed by how this global community of developers comes together to help solve some of the world's most pressing issues, and this year is no different," said Call for Code creator David Clark, in a statement. "Through the support of UN Human Rights, IBM, The Linux Foundation, the Call for Code ecosystem, world leaders, tech icons, celebrities, and the amazing developers that drive innovation, Call for Code has become the defining tech for good platform the world turns to for results."
Posted by John K. Waters on October 20, 2020
The decade-long court battle between Google and Oracle over 37 Java APIs Google used without Oracle's permission in its Android mobile operating system is finally coming to an end. (Really this time… probably.) Oral arguments before the Supreme Court of the United States (SCOTUS) ended on Friday.
The case has been pending at the High Court for almost two years. It was set originally for oral argument in March, but was rescheduled to this fall when the coronavirus pandemic scrambled the spring argument sessions. (My earlier report includes a summary of the long history of this case, which started when Oracle sued Google in 2010.)
Google is asking the court to reverse a federal circuit court's finding that the structure, sequence, and organization (SSO) of Oracle's Java API package was copyrightable, and also that Google's use of that SSO was not a "fair use" under copyright law.
There's a lot at stake in this case--and not just the $8.8 billion in damages Oracle is seeking from Google, which is an Alphabet subsidiary. It has the potential to be one of the most important copyright cases of the decade.
In January, several small companies and tech organizations joined the Mozilla software community in a friend of the court brief, urging the High Court to reverse the federal circuit court's decision.
The brief makes its argument from the perspective of small, medium, and open-source tech organizations, said Abigail Phillips, head of the Mozilla Foundation's legal department, in a blog post. "Mozilla believes that software reimplementation [the process of writing new software to perform certain functions of a legacy product] and the interoperability it facilitates are fundamental to the competition and innovation at the core of a flourishing software development ecosystem."
The list of organizations on the Mozilla amici curiae includes Medium, Cloudera, Creative Commons, Shopify, Etsy, Reddit, the Open Source Initiative, Mapbox, Patreon, the Wikimedia Foundation, and the Software Freedom Conservancy.
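For readers outside the Java world, a simplified, hypothetical example (not the actual Java SE code at issue) may help show the distinction the case keeps circling: the "declaring code" that callers depend on versus the independent implementation a reimplementer writes behind it.

```java
// Hypothetical example only; not the Java SE declarations at issue in the case.
public final class MathUtil {

    // The "declaring code": class name, method name, parameter types, and
    // return type, i.e. the interface existing callers are written against.
    public static int max(int a, int b) {
        // The implementation: written from scratch by a reimplementer, yet any
        // program calling MathUtil.max(x, y) keeps working unchanged.
        return (a >= b) ? a : b;
    }
}
```

Google wrote its own implementations for Android but reused the Java declarations so that existing developers' code and skills would carry over; whether those declarations are copyrightable, and whether reusing them was fair use, are the questions before the Court.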
"The case has potentially huge implications on copyright protection for software, fair use, and the sanctity of jury verdicts," attorney J. Michael Keyes told ADTmag in an email. "Both parties' counsel were peppered with questions that focused primarily on copyright protection for software, the idea/expression merger, and whether Google's use was 'fair.'"
Keyes is an intellectual property attorney and a partner at the international law firm Dorsey & Whitney. He listened to the argument last week, and offered a rundown:
"Several of the justices' questions seemed skeptical of Google's position and troubled by Google's use of Oracle's software code," Keys said. "Justice Roberts said [that] just because you 'crack the safe' doesn't give you the right to take the money. Justice Alito was worried that if the Court adopted Google's position, it would effectively end copyright protection for software. Justice Sotomayor appeared to express her skepticism of Google's position in pretty blunt terms: 'What gives you the right to use their original work?' Justices Gorsuch and Kagan seemed troubled that other tech giants like Apple and Microsoft have created successful mobile platforms without copying Java—why should the Court give Google a pass?
Oracle's counsel was also met with questions about whether its "declaring code" was copyrightable, Keyes said.
"Surprisingly, there wasn't a big emphasis or focus on the federal circuit's disruption of the jury verdict on fair use," Keyes added. "Justice Alito did seem to question whether the lower court applied the wrong standard, s did Justice Gorsuch. Given that there isn't a single instance of a federal appeals court overturning a jury verdict on fair use, it seemed like this issue would have received more 'air time.'
Bill Frankel, shareholder at Brinks, Gilson & Lione, and chair of the firm's copyright group, also reached out with an email.
"The Justice's grappled with Google's argument that software interfaces are purely functional lines of code and not the kind of creative expression that copyright exists to protect," Frankel said. "Where do you draw the line between copyrightable code and uncopyrightable code? But they deeply probed into the issue of the merger doctrine and whether there has been a merger of the expression in Oracle's declaring code and the functional purposes of that code. Were the Court to adopt Google's argument of merger, the holding presumably would be limited to the Java declaring code at issue, and would leave open, if not uncertain, the scope of copyright protection for APIs in future disputes."
Frankel noted (as did many delighted reporters) the number of analogies the Justices came up with during arguments to get their arms around some technical concepts.
"The Justices came up with a number of analogies to suggest the possible functionality of Oracle's APIs," he said, "including mechanical devices like QWERTY keyboards and telephone switchboards and the like. But these analogies seemed inapposite. Chief Justice Roberts' analogy to a menu was closer to the mark. But even a menu divided into appetizers, entrees, and desserts can be written a myriad of ways. At the end of the day, Oracle's code was original, creative, and properly the subject of copyright. The questions to be resolved are what the scope of that copyright should be and how the fair use factors should properly be weighed in the context of software copyrights."
Posted by John K. Waters on October 14, 2020
There's a lot going on this week in the Kotlin community. JetBrains, the Prague-based maker of the venerable code-centric Java IDE, IntelliJ IDEA, and creator of Kotlin, is hosting an online event focused on the programming language.
Kotlin 1.4 (named, obviously, for the latest release) is a three-day event, underway now (Oct. 12-14), that brings together Kotlin experts to share insider insights with the global developer community.
Most of the speakers are JetBrains people, but fans of the company's must-read blogs will recognize many of the names on the speaker roster, including: Kotlin lead Andrey Breslav; Stanislav Erokhin, Kotlin's head of development; Egor Tolstoy, Kotlin product manager; Roman Elizarov, team lead in Kotlin language research; and Hadi Hariri, the company's developer advocate. Also speaking: Florina Muntenescu, Android developer advocate at Google, and Sébastien Deleuze, Spring Framework committer at Pivotal.
There's still time to log on.
Meanwhile, JetBrains officially announced a new release cadence for Kotlin and the IntelliJ Kotlin plugin. According to JetBrains' Kotlin community manager Alina Dolgikh, users can expect new releases of Kotlin 1.x every six months. These releases will be date-driven, not feature-driven, Dolgikh said in a blog post, which brings the language into what has emerged as something of a standard release cycle for software development tools over the past few years.
"Since Kotlin 1.0 came out in 2016, we've built our release schedule around new key features in the language," Dolgikh said. "This meant that, until big language features were ready, we would not release anything at all. As a consequence, we delivered changes and improvements once a year or sometimes even less frequently. The language has evolved more slowly than we would like, and release dates have been somewhat unpredictable for the users. The main goal of the new date-driven release cadence is to accelerate the delivery of important language updates."
There are three types of Kotlin releases: feature, incremental, and bug-fix. The new cadence will mostly impact the feature releases. Major IDE features will arrive in releases synchronized with IntelliJ IDEA, she said. Dolgikh provides a useful diagram of the new Kotlin release cadence in her post.
The Kotlin IDE plugin will be released simultaneously with the Kotlin language release, she said, and every time IntelliJ IDEA is released. Why?
"Nowadays, major changes in the Kotlin IDE plugin depend on the IntelliJ Platform more than on the Kotlin compiler," she explained. "So, from now on, new versions of the Kotlin plugin will ship with every release of the IntelliJ Platform, as well as with new versions of Kotlin."
"Kotlin is evolving quickly and we're keen to remove any obstacles preventing the team and the community from achieving their goals," Dolgikh added. "Today we've introduced two major process improvements, and we believe they will speed up the language progress even more."
Kotlin ranked 13th among the most popular languages for professional developers in the StackOverflow Developer Survey 2020, and it cracked the Top 20 in the most recent RedMonk Programming Language Rankings. The Kotlin developer community claims that more than 30,000 members are exchanging "knowledge and support" on Slack and Reddit, and the official Twitter account has more than 90,000 followers.
Created by Prague-based JetBrains, Kotlin is a statically typed language that compiles to both JVM bytecode and JavaScript. JetBrains has claimed that Kotlin is more stable at runtime than Java, because it can statically check weak points and supports things like variable type inference, closures, extension functions, and mix-ins. It's also less verbose than Java, which means devs can write less code with a more readable syntax.
JetBrains unveiled Kotlin at the 2011 JVM Language Summit in Santa Clara, CA, and later released it for distribution under the Apache 2 Open Source License.
Posted by John K. Waters on October 14, 2020