I've been thinking about California Governor Gavin Newsom’s veto of Senate Bill 1047 (SB 1047), the proposed regulation aimed at safeguarding against the misuse of artificial intelligence. I was surprised by the veto, given the bill's popularity in the legislature, but I probably shouldn't have been. Newsom has long positioned himself as a champion of innovation and economic growth, especially in California’s tech sector. And he has historically favored policies that foster innovation and economic competitiveness over early-stage regulatory interventions.
Opinions about the impact of the veto abound, and what I've tried to do in this post is boil them down to provide some context around what this veto might mean for software developers.
The Pros: Why Newsom’s Veto Benefits Developers
Preserves Innovation and Flexibility: To state the obvious, AI is still rapidly evolving, and SB 1047 would have introduced regulations that might have restricted the pace of innovation. The veto means that AI development can continue without requiring adherence to potentially cumbersome bureaucratic guidelines. As a developer, you always want the flexibility to experiment, iterate, and build without having to comply with rigid regulatory frameworks, especially while the technology is still in its early stages. Newsom’s decision allows devs to continue working in an environment that encourages exploration and creativity, without the constraints of audits and impact assessments for every new tool or application.
Avoids Burdensome Compliance Requirements: SB 1047 would have introduced mandatory audits, bias assessments, and regular impact reports for AI systems. Although these measures are intended to ensure ethical AI deployment, they also present a heavy administrative burden, particularly for smaller companies and startups. Meeting these compliance requirements could slow down the development process and drain resources, especially for teams with limited bandwidth. The veto alleviates this pressure, allowing developers to focus on building products and scaling their operations without dedicating time and resources to regulatory compliance.
Maintains California’s Competitive Edge: California is a global leader in AI development, and Newsom’s veto helps maintain its competitive edge by avoiding early, overly restrictive regulations that could push developers and companies to more lenient states or countries. For software developers, this means that the state will remain an attractive hub for innovation and talent. With fewer regulatory hurdles, developers in California can continue to be at the forefront of AI research, attracting investment and top talent from around the world.
Encourages Industry Self-Regulation: Newsom’s veto can be seen as a nod to the idea of industry self-regulation. Many developers and tech companies are already working to establish internal guidelines for ethical AI development, addressing issues such as bias, transparency, and accountability on their own terms. Without top-down regulation, developers have the opportunity to take the lead in defining what responsible AI looks like, potentially shaping industry standards that are both ethical and flexible enough to accommodate innovation.
The Cons: What Developers Lose with the Veto
Missed Opportunity for Ethical Frameworks: Although industry self-regulation is a nice idea, it lacks the enforceability of legislation. SB 1047 was intended to create a standardized ethical framework for AI development, ensuring that developers across the board would be held to a common set of principles regarding fairness, transparency, and accountability. Without these legal requirements, developers may face increased pressure from their employers to prioritize speed and profit over safety and ethics. This could lead to a race to the bottom, where developers feel compelled to cut corners on AI safety and fairness in order to compete.
Unregulated AI Can Lead to Harm: The risks associated with unregulated AI development are not hypothetical. Bias in AI systems has led to real-world harm, from biased hiring algorithms to discriminatory loan approval systems. SB 1047 sought to address these issues by requiring developers to assess and mitigate bias in AI models. Without these legal safeguards, developers may find themselves unintentionally building systems that perpetuate inequality or violate privacy. For developers who care about the societal impact of their work, this veto represents a missed opportunity to put guardrails in place to prevent such harm.
Uncertainty in Future AI Regulation: By vetoing SB 1047, Newsom has delayed the establishment of a regulatory framework, but it is likely that AI regulation will come eventually. This leaves developers in a state of uncertainty, not knowing when or what form future regulation might take. SB 1047, though not perfect, provided clear guidelines on what was expected of developers. Now, they must continue to operate in a regulatory gray area, where the future of AI oversight is unclear. This uncertainty can make it difficult for developers to plan long-term projects or strategies, as they may need to adjust their work to comply with future regulations.
Potential for Unequal Playing Field: The lack of standardized regulations can lead to an unequal playing field for developers. Larger companies with vast resources may choose to implement their own rigorous AI ethics protocols, building public trust in their systems. Meanwhile, smaller developers and startups may lack the resources to independently conduct bias audits and impact assessments, potentially leading to less responsible AI systems. Without a level playing field, some developers may find themselves at a disadvantage, particularly in an industry where public perception of ethical AI is becoming increasingly important.
Conclusion: Developers Must Be Proactive
Clearly, Newsom’s veto of SB 1047 presents both opportunities and challenges for developers. On one hand, the decision allows for continued innovation without the immediate burden of regulatory compliance, offering developers the freedom to experiment and iterate quickly. On the other hand, it leaves the AI landscape largely unregulated, opening the door to ethical risks and leaving developers without clear guidelines to ensure responsible AI development.
For developers, the key takeaway from this veto is the need for proactive involvement in shaping the future of AI regulation. Although Newsom’s decision gives the industry more time to grow and innovate, it also places the responsibility on developers to self-regulate and ensure that their work is ethical and aligned with societal values. Developers should see this as an opportunity to lead the conversation on responsible AI, helping to craft future regulations that balance innovation with safety and fairness.
Ultimately, the veto is a reminder that AI is still in its early stages, and its regulation will require thoughtful, collaborative efforts between developers, policymakers, and the public. For now, the absence of SB 1047 gives developers the freedom to innovate—but with that freedom comes the responsibility to ensure that AI is built for the good of all.
Posted by John K. Waters on October 9, 2024
Oracle announced the general availability of JDK 23 this week, a feature release with enhanced capabilities focused on cloud-native applications, enterprise performance, and the growing demands of AI.
Available now, this release of the Java dev kit includes 12 new JDK enhancement proposals (JEPs), as well as thousands of smaller performance, stability, and security updates. It builds on its predecessors, of course, with updates in areas such as concurrency, memory management, and language simplicity, ensuring that Java—now approaching the three-decade mark—remains one of the most popular and widely used programming languages in the world.
As part of Oracle's ongoing six-month release cycle, JDK 23 introduces a variety of key features aimed at improving both developer productivity and application performance. Highlights include the adoption of a new garbage collector (GC), enhancements for AI and machine learning integration, and improved security measures to safeguard applications against emerging threats.
Generational ZGC: Tackling Massive Data Sets
A standout feature of JDK 23 is that the Z Garbage Collector (ZGC) now defaults to its generational mode. ZGC, designed to handle extremely large datasets, offers sub-millisecond pause times even when processing heaps that can stretch into terabytes of data. The generational mode further optimizes performance by separating young and old objects during garbage collection, which reduces the overall memory footprint and enhances application responsiveness.
"ZGC has become a highly reliable and predictable garbage collection mechanism for managing huge datasets with minimal latency," Chad Arimura, Oracle's VP of Developer Relations, told me on a Zoom call. "By introducing a generational mode, we've improved Java's efficiency for handling modern workloads like AI and large-scale data analytics."
This development is crucial as industries reliant on big data processing—such as AI, machine learning, and financial services—face growing performance demands. JDK 23's advancements in memory management make it an ideal solution for these environments, providing faster and more predictable performance.
AI and Modern Workloads: Project Loom and Panama
JDK 23 is also well-positioned to support the rising demand for AI and machine learning workloads, a priority for Oracle, Arimura said, not to mention the broader tech industry. The release builds on the work done under Project Loom, which introduces virtual threads to simplify concurrency and improve the performance of highly parallelized applications.
Project Loom enables Java developers to efficiently manage millions of virtual threads, making it easier to reason about complex concurrent applications. Structured concurrency, introduced as part of this project, allows developers to handle large-scale thread operations with greater predictability. This feature is especially relevant for AI workloads that demand high concurrency without compromising reliability.
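To make that concrete, here's a minimal sketch (my own example, not Oracle's) of fanning out thousands of blocking tasks on virtual threads using the standard `Executors.newVirtualThreadPerTaskExecutor()` API, which has been final since JDK 21:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Launches `count` virtual threads, each simulating a short blocking task.
    // Virtual threads make this cheap; the same code with platform threads
    // would strain OS resources long before reaching high task counts.
    static int runTasks(int count) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                executor.submit(() -> {
                    try {
                        // Blocking is fine: the underlying carrier thread is released
                        Thread.sleep(Duration.ofMillis(1));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10_000));
    }
}
```

The structured-concurrency API mentioned above (`StructuredTaskScope`) is still a preview feature in JDK 23, so this sketch sticks to the finalized executor form.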
Also, Project Panama continues to expand Java's ability to interoperate with foreign memory and foreign code. The Foreign Function and Memory API simplifies interactions with native languages like C, facilitating easier integration of machine learning frameworks such as TensorFlow. Panama also includes a vector API, designed to enhance performance in tasks such as numerical computation, which are critical to AI applications.
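Here's a small example of what a downcall through the Foreign Function & Memory API looks like. This is my own sketch, calling the C library's `strlen`; it assumes a JDK with the finalized API (22 or later) and a 64-bit platform where C's `size_t` maps to `JAVA_LONG`:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    // Looks up the C standard library's strlen and calls it through the
    // Foreign Function & Memory API, with no JNI glue code required.
    static long strlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copies the Java string into native memory as a null-terminated C string
            MemorySegment cString = arena.allocateFrom(s);
            return (long) strlen.invoke(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(strlen("hello")); // 5
    }
}
```

The confined `Arena` frees the native memory deterministically when the try block exits, which is exactly the kind of controlled native interop the ML-framework integrations depend on.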
"Java has always been a platform for building large-scale, enterprise-grade applications," Arimura said, "but with these enhancements, it's also now one of the best platforms for AI and machine learning."
Language Improvements: Project Amber and Simpler Code
JDK 23 introduces new features under Project Amber, which focuses on making the language more concise and easier to use. As applications increasingly become data-driven, developers are looking for ways to write less verbose code without sacrificing performance or functionality.
One of the key improvements is the enhancement of records and pattern matching, which simplifies how developers handle and manipulate data in Java. These features allow developers to write more concise and readable code, making Java a viable option for smaller, data-focused applications as well as large enterprise projects.
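As a quick illustration (again, my own sketch rather than anything from the release notes), records, sealed types, and record patterns in `switch` let you check a type and destructure it in one step:

```java
public class ShapeDemo {
    // A sealed hierarchy: the compiler knows these are the only two shapes,
    // so the switch below needs no default branch.
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    static double area(Shape s) {
        return switch (s) {
            // Record patterns match the type and bind its components at once
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // 12.0
    }
}
```

That exhaustive, destructuring switch is the kind of concise, data-oriented code Amber is aiming for.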
Project Amber is also helping Java compete with newer, "trendier" languages, such as Python and Go, which are often chosen for their simplicity in AI and cloud-native environments. The improvements brought by Amber give Java developers more flexibility in writing quick scripts and "glue code," allowing them to remain within the Java ecosystem without resorting to other languages.
"Many developers still think of Java as a language for large, heavy applications, but the truth is, Java has evolved tremendously over the past decade," Donald Smith, Senior Director of Product Management at Oracle, said during that Zoom call. "Project Amber's focus on language simplicity makes Java more competitive in areas where Python and Go have traditionally dominated, especially in AI and cloud-native development."
Security and Platform Integrity
Security continues to be a major focus in JDK 23, with Oracle making significant progress in platform integrity. One key initiative is the gradual removal of access to internal APIs that have long been used by developers but were never meant for public use. By restricting access to these APIs, Oracle aims to create a more secure platform and encourage developers to use newer, safer alternatives.
JDK 23 includes enhanced support for the Panama memory segment API, which allows developers to interact with native memory in a secure and controlled manner, reducing the risk of security vulnerabilities such as buffer overflows. Oracle has also emphasized the continued development of its module system, ensuring that public APIs are clearly defined, and internal APIs are protected.
Platform integrity remains a crucial area for Oracle as it seeks to maintain Java's position as a secure, stable, and scalable platform. "With Java 23, we've made significant progress in ensuring platform integrity," said Bernard Traversat, VP of Software Development at Oracle, who was also on the Zoom call, "which is essential as we evolve the platform without breaking compatibility with existing applications."
New Tools for Developers: Graal JIT and Visual Studio Code Integration
In a notable shift, JDK 23 introduces the Graal Just-In-Time (JIT) compiler as an option within the Oracle JDK. The Graal JIT offers faster startup and warm-up times, especially for applications that generate large amounts of garbage early in their execution. This addition gives developers more flexibility in optimizing their applications based on workload requirements.
Previously, developers had to download a separate version of the Graal JDK to take advantage of the Graal JIT. Now, the JIT can be enabled with a simple command-line flag, streamlining the development process and making it easier for developers to experiment with different configurations.
JDK 23 also sees continued investment in making the language more accessible to new developers. Oracle's Visual Studio Code plugin, which has recently surged in popularity, now supports each JDK release as soon as it ships, giving developers immediate access to new features. That support has helped bridge the gap between experienced Java developers and newer entrants to the field, ensuring that Java remains a popular choice among learners and hobbyists.
A Platform Built for the Future
As Java approaches its fourth decade, Oracle's focus remains on balancing innovation with stability. The company's "tip and tail" model ensures that developers who want the latest features can access them through the regular six-month release cadence, while those who prioritize long-term stability can continue to rely on Long-Term Support (LTS) releases. The next LTS version, Java 25, is expected next year.
"Java is not the same language it was 10 years ago," Arimura said. "We've spent the past several years refining and modernizing the platform to ensure it meets the needs of today's developers. Whether you're building AI applications, working with big data, or just maintaining a legacy system, Java 23 has something for you."
In Additional Awesome News
Oracle will host a series of events and webinars over the coming months to help developers familiarize themselves with Java 23's new features. A JavaOne conference is also scheduled for March 2025 in California (yea!), marking the first independent event since JavaOne was folded into Oracle's CloudWorld conference. More info on the homecoming of that venerable devcon here.
Posted by John K. Waters on September 18, 2024
Open-source Java platform provider Azul just announced the winners of its first-ever Azul Java Hero Awards, which recognize exceptional achievements in Java deployments worldwide. The company named 17 organizations and individuals who "achieved innovative world-class results with Java to help their businesses become more cost-effective, successful and efficient."
As far as I can tell (and I've looked), Azul is the only company out there right now that is 100% focused on Java. So, even though these awards were given to users of Azul's Platform Prime (formerly known as "Zing"), a Java virtual machine and runtime platform built on OpenJDK, and not Java generally, I think it's worth giving the winners a spotlight. Lest we forget, a great big chunk of the world still runs on Java.
Among the winners highlighted in Azul's announcement was Workday, which snagged the award for "Application Performance in Operations & Efficiency" by leveraging Azul Platform Prime to dramatically reduce application pause times and improve scalability, resulting in a 95% increase in operational efficiency. This transformation not only saved the company millions of dollars, Azul noted in the announcement, it also freed developers to focus on more productive tasks.
In the public sector, Newcastle City Council (NCC) was recognized for its "Best Industry Use Case." NCC, the local government authority providing services to more than 300,000 residents of Newcastle, the largest city in northeast England, addressed critical security vulnerabilities by transitioning from Oracle Java to Azul Platform Core, thereby ensuring the resilience and compliance of its Java-based systems used by more than 1,100 employees. This move significantly reduced security risks across 5,000 desktops, earning high praise from NCC’s head of ICT & Digital, Jenny Nelson.
"Through our strategic partnership with Azul, we significantly reduced our security risk level with our Java applications and Java-based infrastructure, which certainly helps me sleep better at night," Nelson said in a statement.
Travelport, a key player in the global travel technology sector, won the award for "Application Performance in Customer Experience." After implementing Azul Platform Prime, Travelport saw a substantial improvement in application response times and a significant reduction in server usage, enhancing its ability to serve travel content to customers efficiently.
SGX FX, a leading provider of foreign exchange trading technology, was honored for "Best Industry Use Case in Trading Exchanges." The company’s transition to Azul Platform Prime enabled it to handle an enormous volume of transactions with minimal latency, setting new benchmarks for speed and reliability in the industry.
Jagex, the British gaming company behind the popular MMORPG RuneScape, received the award for "Best Industry Use Case in Gaming." By utilizing Azul Platform Prime, Jagex eliminated detectable pauses during gameplay, improving performance by 20% and significantly enhancing the player experience.
Other winners include LMAX Group, recognized for its remarkable performance in high-frequency trading, and Taboola, which received the "Best Industry Use Case in AdTech" award for its success in improving the efficiency and environmental impact of its server infrastructure.
The awards also highlighted individual achievements, such as those of Jeff Korpa from Teledyne Controls, who earned the "Java Migration Trailblazer" award for his strategic move away from Oracle Java SE, resulting in significant cost savings and enhanced security for his company.
"I’m inspired to see the remarkable ways in which companies around the world are leveraging Java to drive innovation, deliver exceptional performance, and optimize costs," said Scott Sellers, co-founder and CEO at Azul, in a statement. "The winners of our inaugural Azul Java Hero Awards embody the best of what can be achieved with Java, showcasing its enduring power and versatility in enterprise software. Their achievements highlight why Java remains the go-to language for businesses striving for excellence and efficiency."
The Java Hero Awards span multiple categories, recognizing excellence in Java migration, cloud cost savings, application performance, and specific industry use cases. These awards underscore the critical role Java continues to play in driving innovation and efficiency in various sectors.
For a full list of winners and categories, visit Azul's website.
Posted by John K. Waters on September 3, 2024
Low-code/no-code AI platform provider Sway AI announced this week the integration of its namesake offering with Microsoft Azure, giving Azure customers a unique way to build and deploy secure AI and machine learning (ML) applications directly within the Azure ecosystem.
Sway AI is a fascinating application of low-code/no-code (LCNC) development. It's not a traditional development environment used for writing and managing code. Instead, it serves as a dev platform specifically for creating AI-driven solutions. It's not about coding, debugging, or version control, but about developing AI models, performing data analysis, and deploying AI applications. Users can develop AI models by selecting pre-built algorithms, configuring them, and training them on their data, all without writing code.
Sway AI also allows users to automate complex workflows involving data processing, model training, and deployment. And the platform supports the integration of AI models into existing systems and their deployment in production environments, all handled through a no-code interface.
It's an LCNC tool that makes AI accessible to businesses and individuals across various industries—those citizen developers my colleague Howard M. Cohen writes about in his column.
The drag-and-drop interface is key, of course. It's designed to enable users to create AI workflows by simply connecting pre-built modules. This visual approach allows users to define data inputs, apply machine learning algorithms, and configure outputs without needing to write any code. The platform also comes with pre-built templates for common AI tasks, such as data analysis, image recognition, and natural language processing. Users can select and customize these templates according to their specific needs.

The platform also automates many of the complex steps involved in AI model training and deployment. Users can upload their data, select an appropriate algorithm, and let the platform handle the training process. Once the model is trained, deployment is simplified to just a few clicks. And the platform supports integration with a range of data sources and external applications through APIs, which makes it possible for users to connect their AI models to existing systems without needing to write integration code.
The integration of Sway AI with Azure's cloud infrastructure makes it possible for businesses to create, test, and deploy AI models without requiring extensive programming or data science expertise. This collaboration is expected to streamline AI workflows, reduce development time, and lower costs, making AI more accessible to a wider range of industries.
By integrating with Azure, Sway AI offers seamless compatibility with a bunch of Azure services, such as Azure Kubernetes Service (AKS), Key Vault, and Virtual Networks, which allows organizations to leverage their existing data and infrastructure while benefiting from Sway AI's simplified AI development process.
I think it's fair to call Sway AI a cutting-edge platform, and its integration with Azure a significant step towards making AI development more accessible and secure for enterprises.
Posted by John K. Waters on August 20, 2024
It has been said that coding can sometimes feel like trying to explain quantum physics to a cat. If any dev tool maker understands this, it's JetBrains, which is no doubt why they’ve supercharged their AI Assistant in the 2024.2 updates for their suite of IDEs. The latest version introduces advanced and faster code completion for Java, Kotlin, and Python, alongside a smarter AI chat powered by GPT-4o.
The AI Assistant integrated into the JetBrains IDEs can generate code, suggest fixes, refactor functions, and even help you come up with cool names for your variables. It’s like having a genius coder living in your computer, minus the annoying habits. Plus, it’s integrated with GitLab, so it can help you manage your code repositories without breaking a sweat.
JetBrains has trained its own large language models to improve code completion for Java, Kotlin, and Python, but more languages are on the way, the company says. The AI chat is now smarter, multilingual, and can even understand complex questions, thanks to the GPT-4o upgrade. It also boasts new tricks like AI-assisted VCS conflict resolution and customizable prompts for documentation and unit tests.
The updated IDEs also come with a shiny new user interface designed to be easier on the eyes and the brain. The new UI reduces visual clutter, making it simpler to find what you need, while still keeping the old-school UI available. The Search Everywhere feature has been enhanced to let you preview the elements you’re hunting for, and the IDEs now auto-detect and use your system’s proxy settings, so you don’t have to fiddle with them yourself.
Each IDE in JetBrains’ lineup has received individual love and attention. For instance, IntelliJ IDEA 2024.2 Ultimate can now run Spring Data JPA methods directly for instant repository query verification. It also features advanced cron expression autocompletion and a souped-up HTTP Client using the GraalJS execution engine.
PyCharm 2024.2 has revamped its Jupyter notebooks and introduced new AI cells for quicker data analysis. It also improved support for Hugging Face models and added the ability to connect to Databricks clusters. PhpStorm now has command auto-completion for Laravel, Symfony, WordPress, and Composer, and GoLand supports the latest Go features and method refactoring.
CLion’s updates include new features with the ReSharper C++ language engine, remote development via SSH, and collaborative development tools. WebStorm now lets you run and debug TypeScript files directly and supports frameworks like Next.js.
DataGrip users will find that the AI Assistant can help improve SQL queries by attaching a database schema for context. Aqua adds Playwright support for Python and Java, while RubyMine supports Hotwire Stimulus and completion for Kamal configuration files.
Finally, Rider has introduced LLM-powered single-line code completion for various languages and support for .NET 9 Preview SDK and C# 13 features.
JetBrains’ 2024.2 updates are packed with features that make coding less of a hair-pulling experience and more of a smooth, brainy ride. These awesome updates will be available soon, the company says.
I know you know this, but JIC: Prague-based JetBrains makes a lineup of more than 30 intelligent software development tools, including the popular IntelliJ IDEA IDE for Java developers and PyCharm for Python devs. The company is also the creator of Kotlin, a popular cross-platform, statically typed, general-purpose high-level programming language with type inference. The company's tools are used by more than 11.4 million professionals and 88 of the Fortune Global Top 100 companies.
Posted by John K. Waters on August 6, 2024
I read a lot of industry reports based on surveys of one group or another, mostly developers, but it's not often I lay my eyes on one that makes me laugh and shudder at the same time.
The report, "Know the Enemy: What Execs Need to Understand to Secure their Software Supply Chain," was sent to me by the folks at JFrog, best known for Artifactory, a universal DevOps solution for hosting, managing, and distributing binaries and artifacts, but currently billed more expansively as a universal software supply chain platform for DevOps, Security, and MLOps. The report organizes the findings of a global survey of C-level and senior executives, managers, and individual contributors (analysts, specialists, developers, programmers, engineers, etc.) conducted by Atomik Research on behalf of the company.
Here's the funny bit: The research revealed "significant disconnects between senior executives/managers and developers regarding enterprise application security." No! Really? That execs and devs have divergent views on the state of their organizations' security is a hilarious understatement.
Now, the scary part: According to the researchers, malicious actors see the software supply chain (SSC) as the new "soft target," because there are fewer protections in place than in other enterprise systems. They support this conclusion with a troubling statistic: Nearly a quarter of respondents (23%) to a June 2023 survey said their organization experienced some type of SSC breach, an increase of 241% from 2022.
Perhaps even scarier, less than a third of respondents (30%) indicated that a vulnerable software supply chain was a top security gap.
This lack of alignment and communication between decision-makers and the teams implementing security protocols is exacerbated, the survey suggests, by the diversity of programming languages and the integration of AI and machine learning (ML) models into software. More than half of the surveyed organizations use four to nine different programming languages, and a third use more than ten. This variety not only broadens the attack surface but also challenges the ability to maintain consistent security standards.
The report further quantified the disconnect between senior execs and the devs on the ground when it comes to open-source security: While 92% of the responding executives believe their companies have measures in place to detect malicious open-source packages, only 70% of responding developers agree.
The solution might seem obvious: Talk to each other! But the researchers were a bit more specific in their recommendation: Companies should take steps now to adopt a comprehensive, end-to-end application security platform. This platform would unify security practices across the software development lifecycle, integrating automated scanning tools to identify vulnerabilities, unauthorized changes, and compliance issues. And almost as important, it would foster a culture of security awareness and collaboration across all levels of the organization.
The rapid adoption of AI/ML technologies, particularly in regions like the United States, underscores the urgency for robust SSC security frameworks, the report's authors conclude. Executives must recognize that the future of their company hinges, not only on innovation, but also on the resilience and trustworthiness of their software ecosystems.
In the end, securing the software supply chain is not just a technical challenge; it's a strategic imperative. As the lines between development and security blur, the need for cohesive, proactive measures becomes ever more critical. For businesses striving to stay ahead in a competitive market, the message is clear: secure your software supply chain, or risk becoming the next headline in a data breach story.
The folks at JFrog are hosting a webinar focused on the findings of this report on August 20, with Paul Davis, JFrog field CISO, and Aran Azarzar, JFrog's Chief Information Officer. You can register here.
Posted by John K. Waters on July 24, 2024
Software development tools maker JetBrains has announced the availability of a self-hosted version of its Qodana code quality platform. An extension of the cloud version launched last summer, this release is also based on the static code analysis engine of JetBrains' IDEs. The platform supports native integration with both those IDEs and VS Code, allowing developers to build quality gates in any CI environment, which helps to enforce coding standards enterprise-wide.
To state the obvious, code quality platforms are tools designed to evaluate the quality of a developer's code. They provide a general assessment of the effectiveness, reliability, and maintainability of the code, as well as how well it adheres to established coding standards. High-quality code is more readable, comprehensible, and modifiable, which reduces the likelihood of errors and enhances its adaptability to changes.
With Qodana, developers can identify issues as a part of their CI/CD pipelines and resolve them from within their IDEs, ensuring the code aligns with established quality standards. This is a time-saving feature meant to enhance overall code quality and reduce the risk of security failures and production issues while accelerating the delivery of new functionality.
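For readers wondering what such a quality gate looks like in practice, here is a minimal CI sketch. It assumes GitHub Actions and JetBrains' qodana-action; the version tag and the fail-threshold option shown are illustrative assumptions, so check JetBrains' Qodana documentation for current values before use:

```yaml
# Hypothetical workflow -- action version and options are assumptions;
# verify against JetBrains' Qodana docs before adopting.
name: Qodana quality gate
on:
  pull_request:
jobs:
  qodana:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history improves incremental analysis
      - name: Run Qodana scan
        uses: JetBrains/qodana-action@v2024.1
        with:
          # Fail the pipeline if the number of detected problems
          # exceeds the threshold, blocking the merge.
          args: --fail-threshold,10
```

The point of a gate like this is that the same inspections a developer sees in the IDE also run server-side, so a pull request that violates the team's standards fails CI instead of reaching the main branch.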
Since the company launched the cloud version of Qodana last year, JetBrains has been bombarded with requests for a self-hosted version, Valerie Kuzmina, Product Marketing Manager in JetBrains Qodana and IDE Services group, said in a blog post.
"With Qodana, we are on a mission to create an exceptional experience for development teams, making the entire journey – from setup to result analysis and fixes – easier and more enjoyable, increasing the adoption of server-side analysis," Kuzmina wrote.
The Qodana platform was developed to address a number of factors that contribute to the low adoption of static code analysis tools among developers, Kuzmina explained, which poses risks to product quality. Server-side analysis results are either ignored or, at best, grudgingly tolerated, she wrote, because of the number of false positives, conflicts with IDE inspections, misaligned code quality guidelines, convoluted setups, an inability to fix issues quickly, and outdated UIs, all leading to what she called a "suboptimal developer experience."
"Following successful Beta tests with some of our clients," she wrote, "we're now launching the first release of Qodana Self-Hosted, allowing you to manage, maintain, and upgrade Qodana entirely on your end."
Currently, Qodana Self-Hosted supports Amazon Web Services (AWS). Additional hosting options will be added in future versions, the company says. If you're interested, you can request a demo here.
Prague-based JetBrains makes a lineup of more than 30 intelligent software development tools, including the popular IntelliJ IDEA IDE for Java developers and PyCharm for Python devs. The company is also the creator of Kotlin, a popular cross-platform, statically typed, general-purpose high-level programming language with type inference. The company's tools are used by more than 11.4 million professionals and 88 of the Fortune Global Top 100 companies.
Posted by John K. Waters on July 10, 2024
When I hear the word "infotainment," I automatically think of TV shows like "Animal Planet" or "The Daily Show." But it's also a term of art in the auto industry referring to in-car systems that combine entertainment, such as radio and music, with driving information, such as navigation. Modern in-vehicle infotainment systems connect with smart automotive technologies, such as Advanced Driver Assistance Systems (ADAS) and Vehicle-to-Everything (V2X) technology, which use sensors, cameras, and wireless connectivity to allow cars to connect to and communicate with their drivers and surroundings.
The entire automotive industry is developing technologies to enable better connectivity solutions, improve vehicle safety, and enhance the "in-vehicle user-experience," so the announcement that Qt Group and LG Electronics (LG) are collaborating to embed the Qt software framework within LG’s webOS-based in-vehicle entertainment platform, ACP, was not surprising. This partnership aims to equip automotive OEM developers and designers with the tools needed to create cutting-edge, immersive content-streaming services for vehicles.
This new initiative leverages Qt’s existing support for LG’s highly customizable, open-source webOS, which has been a staple in consumer electronics like smart TVs, signage, smart monitors, and home appliances. Historically, LG has utilized the Qt framework to develop user-friendly interfaces and intuitive user experiences. Now, the focus shifts to LG’s ACP, a platform specifically designed for enhancing the in-car content-streaming experience.
The collaboration with Qt is set to play a pivotal role in the continued evolution of this automotive content platform as it is integrated into more brands’ infotainment systems. Qt is a cross-platform application development framework for desktop, embedded, and mobile applications. Its out-of-the-box features accelerate development, offering faster boot times, enhanced performance, and efficient memory usage.
"The development of advanced software is crucial for enhancing in-vehicle experiences, and the partnership between LG and Qt will increase our capabilities in this all-important area of mobility innovation," said Sang-yong Lee, senior VP of R&D at LG, in a statement. "LG will continue to collaborate with innovative partners like Qt to create immersive in-cabin experiences that meet the diverse demands of automakers and their customers."
This announcement coincides with new market research projections, predicting that the global infotainment market will reach $35.4 billion by 2030. More broadly, software-defined vehicles are expected to generate more than $650 billion in value for the auto industry by the same year. To support this growth, Qt has recently expanded access to its design and development tools for automotive brands such as General Motors and Mercedes-Benz. Earlier in 2024, Qt’s human-machine interface development platform was also added to the AWS Marketplace.
"LG has been a trusted Qt partner and leader in infotainment innovation for years, so we’re excited to help them enhance immersive in-car experiences," said Juha Varelius, CEO of Qt Group. "There’s a big ecosystem of developers making web-based applications for cars, but with Qt integrated into LG’s ACP powered by webOS, they can more easily build and run these applications natively within the OS. Most automotive players already have Qt-based assets in their software, and this partnership marks another significant milestone for us in the industry."
Helsinki-based Qt Group’s suite of tools for designing, developing, and ensuring product quality aims to foster closer alignment between developers and designers. These tools were created to streamline workflows, enabling concurrent work within the same framework, and are particularly suited for cross-platform development, especially for low-powered and embedded devices.
This partnership between Qt and LG represents a significant step forward in the infotainment space, promising to deliver more innovative and engaging experiences for drivers and passengers alike. But the real message is that developers will have the tools they need to leverage their skillsets as demand increases for so-called modern in-car experiences.
Posted by John K. Waters on June 26, 2024
The folks at the Eclipse Foundation, in collaboration with the Adoptium Working Group, recently unveiled the latest release of Eclipse Temurin, the working group's OpenJDK distribution. This is the largest release to date, with support for 54 version/platform combinations and five major OpenJDK versions, highlighting a commitment to diverse and comprehensive builds across Linux, Mac, Windows, and various architectures, including x64, ARM, and RISC-V.
"The incredible growth of Eclipse Temurin reflects a strong demand among developers for secure, high-quality, and community-driven open-source Java runtimes," said Thabang Mashologu, vice president of Community and Outreach for the Eclipse Foundation, in a statement. "The Adoptium Working Group’s efforts have been instrumental in delivering enterprise-ready runtime binaries and expanding the potential use cases for open-source Java. Eclipse Temurin is one of the first open-source Java distributions to support RISC-V, introducing new opportunities for Java in Industrial IoT and beyond."
The Eclipse Foundation is one of the world’s largest open-source software foundations. The Adoptium Working Group, which is the successor to AdoptOpenJDK, promotes and supports high-quality, TCK certified runtimes and associated technology for use across the Java ecosystem. Since it was established back in 2021, Adoptium has become the leading provider of high-quality OpenJDK-based binaries.
The list of key updates and developments in this Temurin release includes:
Unprecedented Growth and Adoption: "Growth" was a key word in this announcement. Eclipse Temurin is currently the fastest-growing open-source Java SE runtime, with more than 23 million downloads per month and more than 380 million downloads to date. According to a recent report by New Relic (New Relic, State of the Java Ecosystem, April 2024), Temurin has experienced 50% year-over-year growth, now representing 18% of the Java market as the second most popular JDK vendor.
Security Enhancements: Eclipse Temurin is pioneering software supply chain security practices, with nominated platform builds independently verified and inclusive of a comprehensive software bill of materials. The Foundation published a case study that underscores this commitment.
RISC-V Support: The new release supports RISC-V microprocessors, expanding its applications to embedded technologies, IoT, machine learning, automotive software, and high-performance computing.
The stats cited in that New Relic study are well worth noting:
In 2020, Oracle was the most popular JDK vendor, comprising roughly 75% of the Java market. There was a noticeable movement away from Oracle binaries after the more restrictive licensing of its JDK 11 distribution (before the return to a more open stance with Java 17), and we’ve seen a steady decline year-over-year (YoY) ever since then. While Oracle retained the top spot in 2022 (34%), it slipped to 29% in 2023, and it’s now at 21%—which represents a 28% decrease in one year.
The use of Amazon increased to 31% of the market in 2023 (up from 2.2% in 2020 and 22% in 2022), but has dropped to 18% in 2024, which represents a 43% decrease YoY.
The rising star this year is Eclipse Adoptium, adoption of which rose 50% YoY from 12% to 18%. Because Eclipse Adoptium is community-managed, this JDK tends to be updated more frequently than the Oracle and Amazon JDKs.
Eclipse Temurin is currently available for a wide range of platforms and Java SE versions. Multiple commercial support options are available for Temurin, with enterprise-grade support provided by members of the Adoptium Working Group, including Azul Systems, IBM, Open Elements, and Red Hat.
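For teams migrating between JDK vendors, it can be useful to confirm which distribution an application is actually running on. This is a minimal sketch using only the standard system properties; the exact vendor strings vary by distribution and version, so treat the printed values as informational rather than as a contract:

```java
// Prints which Java runtime the current process is using.
// On an Eclipse Temurin JDK, java.vendor typically reports
// "Eclipse Adoptium", but exact strings differ across
// distributions and releases -- verify for your own JDK.
public class RuntimeInfo {
    public static void main(String[] args) {
        System.out.println("Vendor:  " + System.getProperty("java.vendor"));
        System.out.println("Runtime: " + System.getProperty("java.runtime.name"));
        System.out.println("Version: " + System.getProperty("java.version"));
        System.out.println("Arch:    " + System.getProperty("os.arch"));
    }
}
```

Running this under each candidate JDK in a CI matrix is a cheap way to catch a build image that silently ships a different vendor's binaries than the one you certified against.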
Posted by John K. Waters on June 4, 2024