There's a difference between a bug and a flaw, and an impressive group of software security mavens thinks it's time to pay more attention to the latter. To shift some of the industry's focus away from finding implementation bugs and toward identifying common design flaws -- "the Achilles' heel" of security engineering -- the IEEE Computer Society has formed the Center for Secure Design (CSD).
The CSD grew out of a foundational workshop, held in April, which brought together software security experts from industry, academia and government to talk about the problem of secure software design. Among the 10 workshop participants were representatives from Twitter, Google, RSA, Intel and Harvard University.
Gary McGraw, CTO of Cigital, hosted a soirée at the Cantina art bar in San Francisco to launch the CSD and to generate interest in its mission. McGraw was among the original workshop members. "The price of admission was a bag of flaws -- a real bag of flaws -- from your practice," McGraw told attendees. "We dumped them all on the table and picked the tallest 10 piles."
That mission, by the way, is to "gather software security expertise from industry, academia and government" to provide guidance on "recognizing software system designs that are likely vulnerable to compromise" and "designing and building software systems with strong, identifiable security properties." And those 10 piles led to the publication of an inaugural CSD report, "Avoiding the Top 10 Software Security Design Flaws."
McGraw, who is author of numerous books about building secure software, called finding and fixing design flaws "the hardest problem that nobody has solved."
"Software security has grown into a $7 or $8 billion industry, and it's continuing to grow very fast," he told me. "But the field seems to be myopically focused on bugs and hackers. And yet, from a technical perspective, half of the problem is a design problem. We're hoping to shepherd the field in the right direction."
The CSD is part of a larger IEEE cybersecurity initiative launched this year "with the aim of expanding its ongoing involvement in cybersecurity." Jim DelGrosso, principal consultant at Cigital, will serve as the CSD's executive director. One of the problems the group will address, DelGrosso said, is the relative opaqueness of the work being done on design flaws.
"We've known about these things for a decade or three," he told attendees, "and yet the problems persist. We also know that this work is being done, but much of it is being done internally, so it's not available to the public. One of the goals of the CSD is to change that. We want people to stop making these mistakes."
Google information engineer Christoph Kern shared an example of such internal work from his own company, where he has been developing Web application frameworks that make it hard for developers to introduce cross-site scripting bugs. One team that adopted the frameworks saw a marked reduction in their bug-tracker stats. "There's a real connection between bugs and design-level considerations," he said.
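Kern hasn't published the internals of those frameworks, but the core idea -- a template engine that escapes untrusted values by default, so a developer has to go out of their way to introduce XSS -- can be sketched in a few lines of Python (everything below is a hypothetical illustration, not Google's code):

```python
import html
import string

class SafeTemplate(string.Template):
    """A template whose substitutions are HTML-escaped by default,
    so untrusted values can't inject markup (the root of XSS)."""
    def render(self, **values):
        escaped = {k: html.escape(str(v), quote=True) for k, v in values.items()}
        return self.substitute(escaped)

page = SafeTemplate("<p>Hello, $name!</p>")
print(page.render(name="<script>alert(1)</script>"))
# <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>
```

The design point is that the safe path is the default path: a developer who just writes `$name` gets escaping for free, which is exactly the kind of "hard to introduce the bug" property Kern described.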
Here's the list of initial participants in the Center for Secure Design:
- Iván Arce, Sadosky Foundation
- Neil Daswani, Twitter
- Jim DelGrosso, Cigital
- Danny Dhillon, RSA
- Christoph Kern, Google
- Tadayoshi Kohno, University of Washington
- Carl Landwehr, George Washington University
- Gary McGraw, Cigital
- Brook Schoenfield, Intel/McAfee
- Margo Seltzer, Harvard
- Diomidis Spinellis, Athens University of Economics and Business
- Izar Tarandach, EMC
- Jacob West, HP
Here are those top 10 security design flaws; each one is fleshed out considerably in the CSD report:
- Earn or give, but never assume, trust
- Use an authentication mechanism that cannot be bypassed or tampered with
- Authorize after you authenticate
- Strictly separate data and control instructions, and never process control instructions received from untrusted sources
- Define an approach that ensures all data are explicitly validated
- Use cryptography correctly
- Identify sensitive data and how they should be handled
- Always consider the users
- Understand how integrating external components changes your attack surface
- Be flexible when considering future changes to objects and actors
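The fourth flaw on the list -- mixing data with control instructions -- is the root cause of injection attacks such as SQL injection, and a minimal sketch in Python with the standard-library sqlite3 module shows both sides of it (the table and query here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Flawed: data is spliced into the control channel (the SQL text),
# so the attacker's quote characters become SQL syntax.
unsafe = "SELECT role FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())  # the OR clause matches every row

# Safe: the ? placeholder keeps user_input in the data channel;
# the database never parses it as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no rows match
```

The parameterized form strictly separates the control instruction (the query) from untrusted data (the input), which is precisely what the CSD report prescribes.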
Posted by John K. Waters on 09/02/2014 at 6:43 AM
There's nothing like seeing the final agenda go up on a Web site to drive home the reality that you're chairing your first technology conference.
Fortunately for me, that agenda -- the one for our first ever App Dev Trends conference coming in December in Las Vegas -- is filled with workshops and sessions led by some of my favorite enterprise software experts, industry mavens, market watchers and serious codederos. I might be as nervous as a nerd at the prom about stepping onstage in my chairing duties (man, that simile brought up some bad memories), but I couldn't be more relaxed about our kick-ass presenter lineup.
I'm very excited, for example, to have David Intersimone (better known as "David I.") speaking at the show. Intersimone is vice president of developer relations and chief evangelist for toolmaker Embarcadero Technologies, and he's a programmer's programmer. He worked for more than two decades at Borland, the company that invented the IDE, then at CodeGear, the company that emerged from Borland's decision to shed its tools business. David will be presenting two sessions: "Integrating Devices and Gadgets into Your Enterprise" and "Clouds: The Final Frontier – Integrating BaaS into your Enterprise Apps."
We also have one of my all-time favorite conference keynoters, Miko Matsumura, leading a session. He's now vice president of developer relations at Hazelcast, the open source in-memory data grid company, but I first saw Miko when he served as chief Java evangelist at Sun Microsystems in the late '90s. (Back when he had shoulder-length hair!) He was one of the most visible spokespeople for Java back then, and a member of the team that popularized the Java platform among developers. In his session, "Elastic Application Performance Market View," Miko will examine the dizzying array of options available today for architecting scalability into applications from Day 1.
When Dr. James McCaffrey, a popular veteran of 1105 Media's Visual Studio Live! conferences and a Visual Studio Magazine columnist, responded to my e-mail pestering by saying that he might have a totally new tool to present at our show, and that this tool was designed for developers interested in neural networks, I swallowed my gum! You've probably heard about Microsoft's new cloud-based machine-learning tool, Machine Learning Studio, the beta of which was unveiled in July. His presentation is titled "Understanding Neural Networks Using Python." McCaffrey, who works at Microsoft Research, promises that attendees will come away from his session with an in-depth understanding of neural networks -- and that it will include one of the first public demos of the new Machine Learning Studio.
I am also very excited about Ian Skerrett's session, "Introducing Eclipse IoT: Accelerating IoT Development." Over the past two years, the Eclipse Foundation has been developing a community of open source projects for Internet of Things developers. That community now comprises 15 different projects, and includes implementations of popular IoT standards, such as CoAP, MQTT, and Lightweight M2M. Ian is the man who has been leading the effort to build that community. He will be talking about the project itself and how to use the technologies it encompasses to get started building IoT solutions.
One of my favorite tech industry watchers, Theresa Lanowitz, founder of voke inc., is also presenting at our show. Her official bio says that she's widely recognized as "a strategic thinker and influencer in the application life cycle, virtualization, cloud computing, and convergence markets." I usually hate to use PR-speak, but that line is right on the money. Cool tidbit from that bio: She worked on the original JBuilder IDE. I've interviewed Theresa many times, and I'm looking forward to both of her sessions: "Extreme Automation: Software Quality for the Next-Generation Enterprise," and "Software Quality in the Sound Bite Era."
And our own Agile Architect, Dr. Mark Balbes, will be among the speakers kicking off the show with his session, "The State of Agile." Mark will be talking about the evolution of Agile -- what works, what doesn't and where the Agile movement might be heading in the future. He'll also be there to wrap things up with our closing panel, "Agile Techniques and Best Practices," which will feature Mark, Matt Philip and Jason Tice, the Three Agilistos from our popular summer webcast. I'll be moderating this panel, so it'll be worth attending for my embarrassing gaffes alone.
It's no exaggeration to say that this is just the tip of the iceberg. This is our first-ever ADT-branded event, and we went all out to put together what I believe is a killer agenda with sessions focused on the enterprise developer. App Dev Trends 2014 runs Dec. 8-11 at the Mandalay Bay Resort and Casino in Las Vegas. Hope to see you there!
Posted by John K. Waters on 08/20/2014 at 3:25 PM
Java toolmaker ZeroTurnaround's software release automation tool, LiveRebel, is a little less live than it was a week ago. The company pulled the plug on the three-year-old sibling of its JRebel JVM plug-in (and newly birthed XRebel Java profiler). Company founder and CEO Jevgeni Kabanov delivered the news in a blog post, though he says customers were contacted before he posted.
I caught up with Kabanov via Skype in Estonia, where his company is headquartered, to ask him about it. He said there just wasn't enough of a mid-range release management market to sustain the product.
"LiveRebel was aimed at the mid-market," he said. "That's a few dozen up to a couple hundred servers. But most of our competitors were going after customers with hundreds to thousands of servers. We just felt that there was a significant opportunity cost for going after that market."
Another problem, Kabanov believes, is that there is currently no agreement on exactly what a "release management" product should do -- especially within the context of the rapidly evolving DevOps and continuous delivery movements. But perhaps more important, whatever release management is, it doesn't currently seem to be at the top of ZT's customers' to-do lists. In his blog post, he put it this way: "Release management provides little value if you don't have automated builds, provisioning, and a well-defined release process, and unfortunately most potential customers would have none of those."
"It was a tough decision emotionally, but from a business perspective, it was quite straightforward," he said. "For now, we're continuing to focus on the developer tools market, which is our strength. But we're not closing any doors on what we might do in the future."
LiveRebel 1.0 was released in May 2011 after about three years of development "in the far northern country of Estonia as an attempt to re-invent product updates." The final version, 3.1, was released last month. Kabanov said active customers would get refunds, along with support and help migrating off LiveRebel until August 2015.
The Tartu, Estonia-based company is probably best known for its JRebel plug-in, which integrates with the Java Virtual Machine (JVM) and app servers on the class loader level, and allows developers to make on-the-fly code changes in Java class files. In June, the company released an interactive Java profiler called XRebel. The company also operates a research and content organization, Rebel Labs, which publishes free, vendor-neutral technical resources.
Posted by John K. Waters on 08/13/2014 at 4:34 PM
Enterprise developers struggling with the challenges associated with mobile application development may not be settling on a single, one-size-fits-all tool or platform just yet, but they are approaching those challenges more strategically. That, according to IDC analyst Al Hilwa, writing in a recently published report: "Negotiating the Mobile Disruption: Approaches for Multiplatform Application Development."
Hilwa, who is a research director in IDC's Application Development Software group, told me that he's trying to be the voice of reason in this report, offering "concrete, realistic advice to enterprises."
He first states the obvious, that "the central problem in mobile application development is addressing the variety of platforms and devices that employees can bring into the enterprise in a productive and agile manner…"
What's not so obvious, he reminds us, is that until recently, enterprises have been side-stepping the so-called mobile revolution by "engaging the consumers of their products with B2C apps, often developed by outside agencies or contractors." But the no-end-in-sight proliferation of mobile devices in the hands of employees -- and the promise of a more productive and mobile workforce -- has forced enterprises to address mobile more strategically and in-house.
"Since 2012," Hilwa writes, "enterprises have begun to tackle mobility more strategically by tagging their internal custom application development teams to skill up on mobile application development and take a more systematic approach to developing suites of applications that overhaul how certain internal business divisions, especially mobile sales and field forces, operate."
And there's another problem: The big three platforms -- iOS, Android, and Windows -- have focused on consumers, and haven't paid much attention to enterprise software licensing requirements. "For enterprises, tightly controlled application platforms with vendor-anointed tools and programming languages mean limitations in choice, an inability to leverage existing developer skills and, most importantly, an inability to easily develop code for multiple platforms," Hilwa writes.
In the midst of this "mobile disruption," four general approaches have emerged for enterprise mobile app developers: native, Web, hybrid, and third-party. Developing native apps for a particular platform requires the platform owner-supplied/approved programming languages, runtimes, frameworks, and tool chains. Web apps use the browser as a runtime and app platform; they are typically sandboxed, so they can't access native platform features. Hybrid apps wrap a Web app in a thin native shell that allows them to be distributed via app stores, which means they can access some native platform features. And third-party-supplied runtimes or app platforms, which can come with their own programming languages, frameworks, and developer tool chains, produce full "native" apps that can be delivered through device platform owner-supplied app stores.
In the end, most enterprises will follow more than one mobile app development approach, Hilwa predicts, though he expects most to deliver as hybrids or as third-party cross-platform apps.
Now for some of that concrete advice: Because app dev tools, frameworks, and middleware aimed at enterprises are increasingly integrating HTML5 support, it will be in the best interest of enterprise developers to "embrace the Web ecosystem of skills and set up Web developer teams along with existing Java and Microsoft ecosystem developer teams."
Here's another one: Enterprises moving from a tactical to a strategic approach to mobile app development will want to "embrace an API architecture for back-end systems." In fact, "such enterprises should begin re-architecting their back-end systems and data assets into API services before embarking on extensive mobile application building."
There's a lot more in this report, and it's well worth reading in its entirety. Also, in case you missed it, my colleague David Ramel's most recent Dev Watch column ("Doubts About Cross-Platform Mobile Development") cites a number of studies on or related to this topic.
Posted by John K. Waters on 08/07/2014 at 4:32 PM
How are shifting consumer behaviors, new digital channels, application standards, and open source trends influencing current approaches to customer-facing software development? That's a big, scary question, but the panel of experts assembled to answer it during Actuate's iHub F-Type launch in San Jose recently wasn't intimidated in the least.
In fact, customer strategist Esteban Kolsky, principal and founder of ThinkJar, took issue with the title of the panel -- "Building the Next Big App" -- arguing that the next big app could very well be small.
"Anyone here have the Starbucks app on their phones?" he asked the crowd. (Some hands went up.) "It's more than that I bet! My point is, this is a very small app that does only three things: it finds a new Starbucks, it lets you charge it, and it lets you see what your rewards are. It's not a big app at all, and that's what's very interesting going forward." The next big app, he said, is probably going to be a small, special purpose application.
Stephen O'Grady, principal analyst and co-founder of Redmonk and a truly developer-focused industry watcher, said it was clear to him that, whatever the next big app might be, its development will be driven by data. He pointed to ride-sharing service Uber, which relies on a mobile app that not only connects passengers with drivers of vehicles for hire, but also provides end-user data the company can mine to make better decisions about growing the business.
Mike Milinkovich, executive director of the Eclipse Foundation, who has watched more than a decade of open-source action from the front lines, observed that the small, narrowly focused apps Kolsky saw in our future would depend on the data O'Grady saw as a driver of development.
"I don't think [the next big apps] have to be data driven," he said. "But even the simplest of apps will generate data that can be utilized, monetized, and become the source of new and interesting business models, and even social models. That's where I think things are going."
Which is not to say that the data isn't a driver in this space, Milinkovich added. "It used to be that new product ideas were essentially based on conjecture," he said. "We were guessing that this might be an opportunity. Now there's much more hard data to base that guess on. There's much more data-driven innovation, rather than inspiration-driven innovation."
Loie Maxwell, chief marketing officer at Social Imprints and former vice president of Creative at Starbucks, suggested that the relationship between small apps and big data lies at the heart of the kinds of end-user experiences that will differentiate the next big app.
"That data doesn't just sit somewhere," she said. "Or it shouldn't, because the data can drive continuous improvement, which can lead to better user experiences, better services, and possibly those new business opportunities you were talking about."
Kolsky pointed to two more essential links in this Next Big App chain: connectivity and analytics tools. "The data has existed forever," he said. "But now everything is connected and we have the tools to collect and process it. That's the critical aspect of this environment."
Moderator Allen Bonde, vice president of product marketing and innovation at Actuate, asked the panel what they thought about the notion that, when it comes to data, "fast" is the new "big." Kolsky liked the idea.
"The whole concept of big data stems from the fact that we can collect, process, store, and manipulate data hundreds of times faster than we ever could before," Kolsky said. "When people talk to me about big data, the first thing they ask is how to deal with it in real time."
"The truth is, an awful lot of the data that companies we deal with are looking at is not even remotely big data," O'Grady said. "We're not talking petabytes, but terabytes. In some cases the [smaller] data sets can give you huge insights."
How much is the consumerization of IT influencing the development of the Next Big App, Bonde asked the panel.
"It's about the visualization of data," said Maxwell. "It used to be that you needed to hire an analyst at hundreds of dollars an hour to analyze the data, break it down, and tell you what you needed to be doing. Now we're creating tools that allow individuals to create visualizations of data for their own business needs. There's a democratization there, with apps showing up [in the enterprise] that are much more user friendly in this way. That's certainly picking up on what the consumer population has to have in order to adopt a product. It's changing the designs of enterprise software."
To Bonde's question about the impact of open source on the Next Big App, O'Grady opined that OSS hasn't been much of a direct driver of good design or usability, but that it has driven bottom-up adoption, which has profoundly changed the way technologies are procured in the enterprise -- and almost as a side effect, improved design.
"Ten years ago, I could sell my business applications to basically one person, the CIO, and what the product looked like wasn't all that important," he said. "But today it's a lot different. It's much more like selling iPhones. You don't sell iPhones to an executive who then rolls them out to the company. You sell an iPhone to each individual in the company. So things like design, usability, availability, installation, and ease of use matter."
Milinkovich pointed out that "big data" was one of the first major enterprise software trends to emerge from the open source community, via the Apache Hadoop project, which spawned companies like Cloudera and Hortonworks. "It was bottom-up adoption, as developers realized what they could do with these tools, that made it happen," he said. "And it was definitely driven by open source."
Bonde wrapped up the presentation by asking for some advice from the panelists for developers and designers thinking about the next big app.
Milinkovich pointed to the "huge opportunities" emerging over the next decade from the Internet of Things and a world in which we will be increasingly surrounded by sensors gathering data. Application developers should be prepared to take advantage of the tools and technologies that support the analysis of that data for real-time decision making.
O'Grady advised developers to take advantage of technologies that connect them with users to gain deeper insights into their applications. "You should consider the possibility that in many cases the best outcome of a question [from a user] is the next question," he said. "You're never going to be able to answer a given user's question perfectly. But each question presents you an opportunity to say, hey, that's something I didn't know, and to follow up with the next question and the next question after that. That's how you get the big insight."
"Among the tech startups I've worked with, the ones who invested in the user experience and made it something users could fall in love with are the ones who saw the greatest return the fastest," said Maxwell. "In too many situations I've seen, that's almost an afterthought."
"I wouldn't spend any time thinking or worrying about big data or any of that," Kolsky said. "I'd just say, invest your time developing something people actually need."
Posted by John K. Waters on 07/28/2014 at 4:33 PM
Actuate signed on with the Eclipse Foundation as a Strategic Developer back in 2004, just a few months after the organization was founded. The South San Francisco-based company proposed the industry's first open-source Business Intelligence and Reporting Tools project (BIRT), and a decade later, BIRT is one of the best known open-source initiatives for data-driven development.
Now, the company says it's entering a "new chapter" with the launch of a freemium version of its iHub data visualization platform.
The BIRT iHub platform integrates and manages BIRT Analytics apps and BIRT-based information, converting that information into graphs, charts, tables, diagrams, and more. The new BIRT iHub F-Type is designed to manage and distribute content created with both the open-source BIRT and the company's commercial BIRT Designer Pro IDE. It gives developers free access to the features of the commercial BIRT iHub platform with "metered output capacity." In other words -- or rather, in the words of Actuate CEO Pete Cittadini -- "Actuate is now a subscription business."
"We've seen IT shifting to the so-called subscription economy for several years now," Mike Milinkovich, executive director of the Eclipse Foundation, told ADTmag. "It's becoming an increasingly common way to sell software. But this is still a big move for the company."
In Actuate's version of this model, the volume of daily data output is limited to a level "suitable for many developers' needs" (50MB), but there's no limit on data input. When their daily output exceeds 50MB, developers can buy additional capacity from within iHub F-Type. And the company is allowing devs to exceed that daily limit twice in a month before hitting them with a charge.
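Taken together, the metering rules as reported -- unlimited input, a 50MB daily output cap, and two free over-cap days per month -- work out like this (a hypothetical sketch of the billing logic for illustration only, not Actuate's actual implementation):

```python
DAILY_CAP_MB = 50   # free daily output capacity, per Actuate
FREE_OVERAGES = 2   # over-cap days allowed per month before a charge

def days_to_bill(daily_output_mb):
    """Given one month of daily output totals (MB), return the days
    that would incur a charge under the described metering model."""
    over_cap = [day for day, mb in enumerate(daily_output_mb, 1)
                if mb > DAILY_CAP_MB]
    return over_cap[FREE_OVERAGES:]   # the first two overages are free

month = [10, 60, 45, 75, 80, 20]  # hypothetical daily output in MB
print(days_to_bill(month))         # [5]: days 2 and 4 were the free overages
```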
"We're targeting Eclipse BIRT developers, of course, but non-BIRT Java developers, too," said Nobby Akiha, Actuate's senior vice president of marketing. "And a big focus has been on the user experience. You'll be up and running in 15 minutes."
Actuate's move to a subscription model is probably a smart one for the company, says Redmonk analyst Stephen O'Grady, and good news for developers. "They're giving developers a chance to leverage these capabilities in a way that makes it easy for them to do it," he said. "Availability and ease of access are overlooked surprisingly often by commercial software organizations. You can have the best solution in the world, but if it's hard for me to get, and there's something that's even half as good, frankly, that I can get easily, I'm going to do that. We see this over and over again."
Posted by John K. Waters on 07/24/2014 at 4:33 PM
Typesafe this month marked the five-year anniversary of Akka, its open-source run-time toolkit for concurrency and scalability on the Java Virtual Machine (JVM).
Written in Scala and used to build highly scalable, fault-tolerant applications in both Scala and Java, Akka has gained serious traction since Swedish programmer Jonas Bonér pushed out the first public release (v0.5) on July 12, 2009. Typesafe now counts some big names on its Akka user list, including Amazon, BBC, Cisco, Credit Suisse and eBay.
Bonér, who is Typesafe's CTO and co-founder, had worked for years building compilers, runtimes, and open source frameworks for distributed apps. Somewhere along the way, he says, he became "fed up" with the scale and resilience limitations of CORBA, RPC, XA, EJBs, SOA, and the Web Services standards and abstraction techniques Java developers typically used.
"It started to dawn on me that it wasn't that we were using the wrong tools," Bonér told ADTmag. "It was a fundamentally wrong approach to building software."
A better approach, Bonér concluded after "tinkering" with the Oz and Erlang languages, was the Actor Model, which utilizes objects that encapsulate state and behavior. Each actor also has a mailbox, and communicates exclusively by exchanging messages placed into a recipient's mailbox. This model provides a unified, single abstraction over concurrency and distributed computing. And an Actor's behavior can be redefined at runtime.
"I found the Actor Model to be a really good basis for building this next generation middleware," Bonér said. "And I could see that we needed to bring them over to the JVM."
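The mechanics Bonér describes -- private state and behavior, reachable only through a mailbox, with messages processed one at a time -- can be sketched without any framework (a toy Python illustration of the Actor Model, not Akka's API):

```python
import queue
import threading

class Actor:
    """Toy actor: private state, a mailbox, and a single thread that
    processes messages one at a time, so the state needs no locks."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0                      # state owned by the actor
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def tell(self, message):                # the only way to reach the actor
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()    # one message at a time
            if message == "stop":
                return
            self.count += 1                 # behavior may update state

counter = Actor()
for _ in range(1000):
    counter.tell("inc")
counter.tell("stop")
counter._thread.join()
print(counter.count)  # 1000: every message serialized through the mailbox
```

Because all updates funnel through one mailbox, a thousand concurrent senders can never corrupt the counter -- the same property that makes the model a "unified, single abstraction" over concurrency and distribution.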
Akka is one of the technologies emerging around the concept of reactive applications, described in "The Reactive Manifesto" as apps that better meet the "contemporary challenges of software development," in a world in which applications are deployed to everything from mobile devices to cloud-based clusters running thousands of multicore processors. Bonér wrote the first version of the manifesto, which defines the four "critical traits" of reactive apps: event-driven, scalable, resilient, and responsive. By embracing these traits, the manifesto asserts, developers produce apps that are highly responsive to user experiences, provide a real-time feel, and are backed by a "scalable and resilient application stack" that can be deployed just about anywhere. A number of others contributed to later drafts of that manifesto, including Typesafe's other co-founder, Martin Odersky, who created the Scala language. That list of contributors also includes Erik Meijer, Greg Young, Martin Thompson, Roland Kuhn, James Ward, and Guillaume Bort. Since it was published, hundreds have "signed" the manifesto.
Odersky created Scala, a general purpose, multi-paradigm language that runs on the JVM, to integrate features of object-oriented programming and functional programming. Typesafe is also responsible for the Play Web app framework, a development and runtime environment billed as "a clean alternative to legacy Enterprise Java stacks." Play compiles Java and Scala sources directly and "hot reloads" them into the JVM.
In 2013, Typesafe acquired Spray.io, a suite of lightweight Scala libraries that provide client- and server-side REST/HTTP support on top of Akka. The company hoped to broaden the appeal of the Typesafe Platform to Java developers with "one of the best performing REST/HTTP libraries in the Java ecosystem," Bonér said.
The Akka team, which now comprises six members ("It's a good size," Bonér said), shipped version 2.2 in July 2013. That release included full support for clustering. In October of that year, the team introduced Akka Persistence, which allows stateful actors to recover from JVM crashes "in a way that Actors themselves are persisted in memory," the company explained. In April 2014, Typesafe unveiled early preview releases of two projects designed to improve data streaming on the JVM: Akka Streams and Reactive Streams (ADTmag coverage here).
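The idea behind Akka Persistence -- rebuilding an actor's state after a crash by replaying the events it journaled, rather than restoring a memory image -- can be sketched like this (a schematic Python illustration; the names and API are invented for the example, not Akka's):

```python
class PersistentCounter:
    """Sketch of event sourcing: every state change is first appended
    to a durable journal; state is rebuilt by replaying that journal."""
    def __init__(self, journal):
        self.journal = journal        # durable event log (a plain list here)
        self.count = 0
        for event in journal:         # recovery: replay past events
            self._apply(event)

    def _apply(self, event):
        if event == "incremented":
            self.count += 1

    def increment(self):
        self.journal.append("incremented")   # persist the event first...
        self._apply("incremented")           # ...then update in-memory state

journal = []
c = PersistentCounter(journal)
for _ in range(3):
    c.increment()

recovered = PersistentCounter(journal)  # simulate a restart after a crash
print(recovered.count)  # 3: state rebuilt entirely from the journal
```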
Typesafe has posted an infographic of the history of Akka that shows its evolution, the influence of Scala Actors on its development, the Oz/Erlang connection, the emergence of the Akka Persistence module, and all the creative people involved -- not to mention how it got its name. (Hint: It's not an acronym.)
Posted by John K. Waters on 07/14/2014 at 4:33 PM
The annual Google I/O developer conference, which wrapped up this week in San Francisco, packed its usual punch with a number of announcements and free stuff for attendees.
We saw the first example of Android Wear software for wearable devices, coming initially in LG's G Watch and Samsung's Gear Live, and later in Motorola's Moto 360. We saw Android TV, which Google will make available to new television sets from vendors like Sony and Sharp.
There was no mention of Google+ in the keynote; make of that what you -- and everybody else -- surely will. There wasn't much talk about Google Glass, either. No skydiver. And no Larry Page on the keynote stage.
In his mobile-focused keynote, Sundar Pichai, Google's SVP of Android, Chrome, and Apps, announced Android One, a new initiative that provides low-end hardware and software to emerging mobile markets, such as his home country of India. He also unveiled the new Android for Work initiative, which will partition personal and work apps on Android and Chrome for added security. (Look for Android for Work in the upcoming Android L operating system update.) And he announced the new Android Auto.
Pichai also dropped some mad stats: a billion "30-day active" (currently active) users of the Android platform, who send 20 billion text messages and take 93 million selfies per day. Android tablets now account for 62 percent of the global market, and tablet app installs are up 236 percent. He also mentioned that women make up 20 percent of this year's conference attendees, up from just 8 percent last year. Not sure what that means, either, but that's quite a year-to-year jump.
But the big news for developers at this year's show: 5,000 new APIs that will connect Android devices to a broad set of services on the Net and on other devices. Google also released the Android SDK for devs building apps for wearable computing.
IDC analyst Al Hilwa called the Google announcements "an amazing buffet of capabilities and APIs for developers that truly expands the Google ecosystem."
"The reach of the platform to wear, auto, home and TV, as well as connecting them together, really begins to show the connectedness of Android as the leading mobile platform," Hilwa told me. "What came across [in the keynotes] is the expansiveness of the platform overall. I did get a sense at the keynote that Google is talking to the whole world not just to a premium club. Does this mark a transition of the mobile leadership from Apple to Google? We will find out for sure when Apple makes its late fall announcements."
Google highlighted four new Cloud Platform tools for developers:
- A new version of Cloud Save, a service that enables non-relational, per-user data to be stored and synced in apps with no backend programming required.
- Google Cloud Monitoring, which uses technology from the company's recent Stackdriver acquisition to provide dashboards and alerting capabilities for finding and fixing performance problems.
- Cloud Debugger, which changes the standard debugging model by allowing developers of cloud-based apps to set "watchpoints" on lines of code, which gives them "a snapshot of all the local variables, parameters, instance variables and a full stack trace," Google says.
- Cloud Trace, a performance tool designed to give developers the ability to "visualize and understand the time spent by your application for request processing," and thus pinpoint bottlenecks.
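The "watchpoint" idea behind Cloud Debugger -- capturing a snapshot of program state when a given line runs, without ever pausing execution -- can be illustrated in plain Python. The `capture_snapshots` helper below is hypothetical, built on the standard `sys.settrace` hook; it is a conceptual sketch of snapshot-style debugging, not Google's implementation.

```python
import sys

def capture_snapshots(func, watch_line, *args, **kwargs):
    """Run func, recording a copy of its local variables each time
    the source line numbered watch_line is about to execute.
    Execution is only observed, never paused."""
    snapshots = []

    def tracer(frame, event, arg):
        if event == "call":
            return tracer  # enable line-level tracing for this frame
        if event == "line" and frame.f_lineno == watch_line:
            snapshots.append(dict(frame.f_locals))  # shallow snapshot
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, snapshots

def accumulate(n):
    total = 0
    for i in range(n):
        total += i  # "watchpoint" target line
    return total

# Watch the line 'total += i' (three lines below the def statement).
watch = accumulate.__code__.co_firstlineno + 3
result, snaps = capture_snapshots(accumulate, watch, 3)
# One snapshot is taken per loop iteration; the last one was captured
# just before the final 'total += i' ran, so it shows total == 1, i == 2.
```

A real snapshot debugger does this far more efficiently (and across distributed instances), but the core contract is the same: attach to a line, copy the state, let the program keep running.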
"For developers there's a lot to play with here," Hilwa added. "APIs at every level will enable a new generation of apps connecting smartphones to other objects in life .... It will take some time for developers to absorb all the new APIs, but we will begin to see the impact of these announcements in the coming year. I expect that new SDK's will be added over time to cover other realms of the IoT world."
Posted by John K. Waters on 06/27/2014 at 2:19 PM
The Eclipse Foundation's annual Release Train will be in the spotlight later this week, but first a bit of that metaphorical illumination should fall on a new Foundation project. Announced on Monday, the newly organized Eclipse Science Working Group (SWG) is being described as "a global collaboration for scientific software." It aims to bring together groups from academia, industry and government to create open software that can be used in basic scientific research.
Actually, a better word here is "reused." As the Foundation's executive director Mike Milinkovich explained, the SWG is about freeing scientific researchers from the need to build so much software from scratch.
"There's a lack of reusable software components for basic scientific research," Milinkovich told ADTmag. "And yet more and more science projects are deeply dependent on software. The current model seems to be, they get a grant and then immediately start developing single-purpose software from the ground up. The goal of this new group is to create a set of tools and frameworks -- complete building blocks, really -- to help accelerate scientific research."
The SWG's founding steering committee includes Oak Ridge National Laboratory in Oak Ridge, Tennessee; Diamond Light Source in Oxfordshire, UK; and IBM. The SWG website lists 14 total members as of this writing, including Clemson University, Uppsala Universitet, The Facility for Rare Isotope Beams, Marintek, Lablicate, Kichwa Coders, Tech Advantage, and IFP Energies Nouvelles.
The working group was originally proposed by German programmer Philip Wenig, who works at Lablicate. Wenig, the developer of OpenChrom, an open source software tool for the mass spectrometric analysis of chromatographic data, brought to the Foundation the results of an inventory he had done of Eclipse-based scientific research tools, Milinkovich explained.
"It showed just how many groups were reinventing the wheel," Milinkovich said. "He argued that we could do better if we worked together, and a number of people agreed with him."
The current members of the working group will be collaborating on a range of open source projects focused on basic science software, such as tools for plotting and visualizing 1D, 2D and 3D data, and for managing data for structured and unstructured grids; modeling and simulation software for the physical and social sciences, such as physics, chemistry, biology, sociology, and psychology, among others; standard descriptions and definitions for scientific data; and infrastructure software to support scientific computing (e.g., job launching and monitoring, parallel debugging, and remote project management).
"These are the kind of things that scientists are working with everyday," Milinkovich said. "We think we've assembled a visionary set of organizations here that are focused on the value they can bring to scientific research by collaborating on the basic software building blocks to make this research more productive."
The two initial SWG open source projects are based on code contributions from Oak Ridge National Laboratory and Diamond Light Source. The first is the Eclipse Integrated Computational Environment (ICE), a platform for modeling and simulation projects in science and engineering. The aim of the project is to provide a standard set of tools that allow scientists to set up the input model, launch a simulation job, analyze the results, and manage the input and output data. The code is based on technology created at Oak Ridge National Laboratory to develop a computational environment for modeling and simulation of nuclear reactors.
The second is the Eclipse DawnSci project, which defines Java interfaces for data description, plotting and plot tools, and data slicing and file loading. The project aims to provide interoperability of algorithms between different scientific projects. It's based on a code contribution by Diamond Light Source.
Keep in mind that these aren't Eclipse people, but rather experts who are bringing their special expertise to the platform.
"This is not Eclipse tool vendors looking for a way to approach science," Milinkovich said. "These are visionary scientific organizations realizing that they can use Eclipse to solve a problem."
Posted by John K. Waters on 06/24/2014 at 10:58 AM