"What's in a name?" Shakespeare's Juliet asked. Quite a lot, actually. Take it from me: the other John Waters. Another example: service virtualization. The name is so close to the most well-known and widely implemented type of virtualization -- server virtualization -- that it's gumming up the conversation about using virtualization in the pre-production portion of the software development lifecycle.
Industry analyst Theresa Lanowitz has been doing her part for a while now to clarify terms. It matters, she says, because service virtualization could have as big an impact on application development as server virtualization had on the datacenter.
"When many people hear the word 'virtualization,' the first thing that pops into their heads is serv-er virtualization, and of course, VMware," Lanowitz told me. "Which is understandable. Servervirtualization is incredible technology. It allows enterprises to make better use of their hardware and to decrease their overall energy costs. It allows them to do a lot more with underutilized resources. Serv-ice virtualization is almost the antithesis, in that it allows you to do more with resources that are in high demand."
To be clear, as Lanowitz defines it, service virtualization "allows development and test teams to statefully simulate and model the dependencies of unavailable or limited services and data that cannot be easily virtualized by conventional server or hardware virtualization means." Lanowitz stresses "stateful simulation" in her definition, she said, because some people argue that service virtualization is the same as mocking and stubbing. But service virtualization is an architected solution, while mocking and stubbing are workarounds.
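The distinction Lanowitz draws is easier to see in code. Here's a rough sketch (the class and method names are invented for illustration, not any vendor's API): a stub returns the same canned answer no matter what has happened before, while even a minimal stateful simulation tracks the conversation the way the real dependency would.

```java
import java.util.HashMap;
import java.util.Map;

// A stateless stub: every call returns the same canned answer,
// regardless of what requests came before it.
class StubAccountService {
    int getBalance(String accountId) {
        return 100; // fixed response; history is ignored
    }
}

// A (very) simplified stateful simulation: responses depend on the
// history of requests, as the real downstream service's would.
class SimulatedAccountService {
    private final Map<String, Integer> balances = new HashMap<>();

    int getBalance(String accountId) {
        return balances.getOrDefault(accountId, 100);
    }

    void withdraw(String accountId, int amount) {
        balances.put(accountId, getBalance(accountId) - amount);
    }
}

public class ServiceSimulationDemo {
    public static void main(String[] args) {
        SimulatedAccountService sim = new SimulatedAccountService();
        sim.withdraw("checking", 30);
        // The simulation remembers the withdrawal; the stub cannot.
        System.out.println(sim.getBalance("checking"));             // 70
        System.out.println(new StubAccountService().getBalance("checking")); // 100
    }
}
```

A test that withdraws and then checks the balance passes against the simulation but is meaningless against the stub, which is why a stub can't stand in for a dependency whose behavior depends on prior interactions.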
"Lifecycle Virtualization" is voke's umbrella term for the use of virtualization in pre-production. The full menu of technologies for LV includes service virtualization, virtual and cloud-based labs, and network virtualization solutions. The current list of vendors providing service virtualization solutions includes CA, HP, IBM, Parasoft, and Tricentis.
Lanowitz and her voke, inc., co-founder, Lisa Donzek, take a closer look at recent developments in the service virtualization market in a new report, "Market Snapshot Report: Service Virtualization." The report looks at what 505 enterprise decision makers reported about their experiences with service virtualization between August and October 2014, and the results they got from their efforts. It also does an excellent job of defining terms and identifying the products and vendors involved.
This is voke's second market report on service virtualization. Their first survey was conducted in 2012 and included 198 participants. "In 2012, the market had just been legitimized," Lanowitz said. "We still had a long way to go."
What jumped out at me in this report was the change in the number of dependent elements to which the dev/test teams of the surveyed organizations required access. In 2012, participants reported needing access to 33 elements for development or testing, but had unrestricted access to only 18. In 2014, they reported needing access to 52 elements and had unrestricted access to only 23. Sixty-seven percent of the 2014 participants reported unrestricted access to ten or fewer dependent elements.
"It's clear to us that if you're not using virtualization in your application life cycle, you're going to have some severe problems, whether it's meeting time-to-market demands, quality issues, or cost overruns," Lanowitz said. "Service virtualization helps you remove the constrains and wait times frequently experienced by development and test teams needing to access components, architectures, databases, mainframes, mobile platforms, and so on. Using service virtualization will lead to fewer defects, reduced software cycles, and ultimately, increased customer satisfaction."
There's a lot more in the report. It's a must read.
A rose by any other name might smell as sweet, but it's not going to jack up your productivity.
Posted by John K. Waters on 02/09/2015 at 1:32 PM
I reported last week on Oracle's latest Critical Patch Update, which included 169 new security vulnerability fixes across the company's product lines, including 19 for Java. The folks at Java security provider Waratek pointed out to me that 16 of those Java fixes addressed new sandbox bypass vulnerabilities that affect both legacy and current versions of the platform. That heads-up prompted a conversation with Waratek CTO and founder John Matthew Holt and Waratek's security strategist Jonathan Gohstand about their container-based approach to one of the most persistent data center security vulnerabilities: outdated Java code.
Holt reminded me that the amount of Java legacy code in the enterprise is about to experience a kind of growth spurt, as Oracle stops posting updates of Java SE 7 to its public download sites in April.
"When you walk into virtually any large enterprise and you ask them which version of Java they're running, the answer almost always is, every version but the current one," Holt said. "That situation is not getting better."
Outdated Java code with well-documented security vulnerabilities persists in most data centers, Gohstand said, where it's often a target of attacks. The reasons legacy Java persists, in spite of its security risks (and the widespread knowledge that it's there), are up for debate. But Waratek's unconventional approach to solving that problem (and what Holt calls "the continued and persistent insecurity of Java applications at any level of the Java software stack") is a specialized version of a very hot trend.
Containers are not new, of course, but they're part of a trend that appears to have legs (thanks largely, let's face it, to Docker). Containers are lightweight, in that they carry no operating system; apps within a container start up immediately, almost as fast as apps running on an OS; they are fully isolated; they consume fewer physical resources; and there's little of the performance overhead associated with virtualization -- no "virtualization tax."
Waratek's containerization technology, called Java Virtual Containers, is a lightweight, quarantined environment that runs inside the JVM. It was developed in response to a legacy from the primordial Java environment of the 1990s, Holt said.
"It was a trendy idea at the time to have a security class sitting side-by-side with a malicious class inside the same namespace in the JVM," he said. "Sun engineers believed that the security manager would be able to differentiate between the classes that belonged to malicious code and those that belonged to the security enforcement code. But that led to a very complicated programming model that is maintained by state. And states are difficult to maintain. When we looked at the security models that have succeeded historically, we saw right away that they were based on separation of privileges."
Waratek began as a research project based in Dublin in 2010, an effort to "retrofit this kind of privilege and domain separation" into the JVM, Holt said. That research led to the company's Java virtual container technology. "Suddenly you have parts of the JVM that you know are safe, because they are in a different code space," he said.
Holt pointed out that containerization is a technique, not a technology, and he argued that that is a good thing.
"It means that it doesn't matter what containerization technology you use," he said. "People are starting to wake up to the value of putting applications into containers—which are really locked boxes. But the choice of one container doesn't exclude the use of another. You can nest them together. This is really important, because it means that people can assume that containers are going to be part of their roadmap going forward. Then the conversation turns to what added value can I get for this locked box."
Holt and company went on to build a new type of security capability into their containers, called Runtime Application Self-Protection (RASP), producing in the process a product called Locker. Gartner has defined RASP as "a security technology built in or linked to an application or app runtime environment, and capable of controlling app execution and detecting and preventing real-time attacks." In other words, it's tech that makes it possible for apps to protect themselves.
"We see this as an opportunity to insert security in a place where security is going to be more operationally viable and scalable," Gohstand said.
Gohstand is set to give a presentation today (Wednesday) on this very topic at the AppSec Conference in Santa Monica.
Posted by John K. Waters on 01/28/2015 at 12:12 PM
The open source Docker project experienced "unprecedented growth" last year, its maintainers say, with project contributors quadrupling and pull requests reaching 5,000.
To cope with the surge of this "whirlwind year," Docker, Inc., the chief commercial supporter of the project, has modified its organizational structure, spreading out the responsibilities that had been handled by Docker's founder and CTO, Solomon Hykes, into three new leadership roles.
The new leadership roles include Chief Architect, Chief Maintainer, and Chief Operator. The new operational structure also defines the day-to-day work of individual contributors working in each of these areas. All three positions were filled by new or existing employees of the company.
"This is the natural progression of any successful company or open source project," Docker's new Chief Operator, Steve Francia, told ADTmag. "As your popularity grows, you eventually have to spread the load, and that's what this new structure is doing."
Since the release of Docker 1.0 last June, the project has attracted more than 740 contributors, and fostered more than 20,000 projects and 85,000 "Dockerized" applications.
Hykes will take on the role of Chief Architect, which Francia called "the visionary role." It trims Hykes' responsibilities to overseeing the architecture, operations, and technical maintenance of the project. He will also be responsible for steering the general direction of the project, defining its design principles, and "preserving the integrity of the overall architecture as the platform grows and matures."
The role of Chief Maintainer has been assigned to Michael Crosby, who began working with the project in 2013 as a community member and has been a core project maintainer. He will be responsible for "all aspects of quality for the project, including code reviews, usability, stability, security, performance, and more." "He was appointed to the position because he was already so good at supporting the other maintainers," Francia said. "It's a role that, in some ways, he's already been playing." Crosby is described in the Docker announcement as "one of its most active, impactful contributors."
As Chief Operator, Francia will be responsible for the day-to-day operations of the project, managing and measuring its overall success, and ensuring that it is governed properly and working "in concert" with the Docker Governance Advisory Board (DGAB). For the past three years Francia had served in a similar capacity as chief developer advocate at MongoDB, where he "created the strongest community in the NoSQL database world," the announcement declares.
"When I joined MongoDB, I'd been around long enough to realize that companies that transform the industry come along maybe once in a decade," Francia said, "and I knew how lucky I was to be a part of that. At Docker I get to be part of another transformation, one that is going to change the way development happens, forever. You always hope that lightening will strike twice, but I sure didn't expect it to happen so soon."
Francia introduced himself to the Docker community in a Q&A session today on IRC chat in #docker. He also posted his first blog as Chief Operator.
The Docker reorganization itself went through the same process as a proposed feature, and was documented in a pull request (PR #9137). It was commented on, modified, and merged into the project. The changes are intended to make the project more open, accessible, and scalable, and in an incremental way, without unnecessary refactoring.
Docker and containerization seem to be on everybody's mind these days as microservice architectures gain traction in the enterprise. Over the past few years, Netflix, eBay, and Amazon (among others) have changed their application architectures to microservice architectures. ThoughtWorks gurus Martin Fowler and James Lewis defined the microservice architectural style as "an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms." Containers are emerging as a popular means to this end.
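The Fowler/Lewis definition can be made concrete with a sketch. The following is a deliberately tiny "microservice" built only on the JDK's bundled com.sun.net.httpserver package (the service name and endpoint are invented for the example): one single-purpose process exposing one lightweight HTTP contract, which other processes consume over the network rather than through in-process calls.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal single-purpose service: one process, one endpoint,
// communicating over plain HTTP -- the "lightweight mechanism"
// in the microservice definition.
public class GreetingService {
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/greeting", exchange -> {
            byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080); // other services call GET /greeting over the network
    }
}
```

In a containerized deployment, each such process would ship in its own container image, which is exactly the packaging job Docker takes on.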
"The level of ecosystem support Docker has gained is stunning, and it speaks to the need for this kind of technology in the market and the value it provides," said IDC analyst Al Hilwa said in an earlier interview.
Posted by John K. Waters on 01/28/2015 at 9:16 AM
And finally... Okay, this one isn't so much a set of predictions as observations on some trends enterprise developers should be aware of at the dawn of 2015.
Industry analyst and author Jason Bloomberg is president of Intellyx, an analysis and training firm he founded last June. He's probably best known as a longtime ZapThink analyst (and president, before he went out on his own). He has also written several books; I'm a fan of The Agile Architecture Revolution (John Wiley & Sons, 2013).
I recently caught up with Bloomberg shortly before he headed to Las Vegas for the annual CES gizmogasm. He pointed to two trends that he believes will have a profound effect on enterprise developers this year. First, what he called "the digital transformation."
"Customer preferences and behavior are now driving enterprise technology decisions more than they ever have before," he said. "That includes B-to-B and B-to-C. They're driving this combination of digital touch points and ongoing innovation at the user interface, and the enterprise is upping the ante on performance, resilience, and the user experience. But it all has to connect, end-to-end. All the pieces have to fit together."
DevOps, which connects development and operations, is now being extended to the business, to the customer experience, Bloomberg said. (He called it "DevOps on steroids.") This trend also includes things like continuous integration, continuous delivery, and established Agile methodologies that now have to connect to the customer at increasing levels.
These changes could be especially challenging for enterprise developers, Bloomberg said, because the shift is organizational, which is very different from the technology changes they're used to. If companies get this right, he said, server-side developers and user-facing developers will be working together in a new way, focused on delivering technology value to customers.
"Developers are going to be called upon to expand past their boundaries, in terms of how they can contribute and provide value to the companies they work for," he said. "This shakes some traditional developers to their core, but it's also very exciting to a lot of people, especially the twentysomethings, who are becoming the go-to players for digital technology. This is what they live and breath."
These shifts are already showing up in retail and media (Google, Netflix, Spotify), but Bloomberg expects them to spread quickly to virtually every industry. "I think it's going into overdrive in 2015," he said.
Trend No. 2: the Moore's-Law-like progress of the Internet of Things, which is being driven by exponential improvements in things like batteries, which are shrinking even as they become more powerful, and in memory capacity.
"People tend to think linearly," Bloomberg said. "They expect things to get twice as good every year. But things are going to explode. The question will quickly become, how do we take advantage of so many different improvements in the technology? What can I do with a battery that is a thousandth of the size of current batteries, with processors that are a thousand times more powerful, with terabytes of memory?"
Developers in the trenches who just need to get their jobs done will have a hard time finding solid ground amid all of these changes, he said.
"All of this stuff is changing so fast, it's hard to know what's real and what's hype," Bloomberg said. "You could argue that it's always been this way, but developers are facing a range of changes that are going to be disruptive and quite challenging in the coming year."
Theresa Lanowitz is another industry watcher who went out on her own. The former Gartner analyst founded Voke, Inc. in 2006 to cover "the edge of innovation driven by technology, innovation, disruption, and emerging market trends." The white papers she publishes are not to be missed.
Among other things, Lanowitz has been tracking the enterprise adoption of the practice of applying virtualization to the pre-production portion of the application lifecycle, which she has dubbed Lifecycle Virtualization. A number of technologies support this practice, including most prominently service virtualization (provided by vendors such as CA, Parasoft, and HP), but also virtual and cloud-based labs (Skytap), and network virtualization (HP with its Shunra acquisition).
"We're starting to see more and more organizations saying, okay, we recognize this need for parity among dev, QA, and operations," she said. "We also understand that we need to support our line of business. How do we do that? We move virtualization to the portion of the application lifecycle where it really helps to control the business outcome."
Lanowitz expects this shift to take off in 2015, she said, because the tools are getting much better. "It makes a huge difference," she said.
Service virtualization in particular is gaining traction in the enterprise, Lanowitz said. She defines it as the process of enabling dev and test teams to statefully simulate and model the dependencies of unavailable or limited services and data that cannot be easily virtualized by conventional server or hardware virtualization means. She stresses stateful simulation in her definition, "because many organizations will say service virtualization is the same as mocking and stubbing. Service virtualization is an architected solution; mocking and stubbing are workarounds."
The bottom line: Service virtualization allows for testing much earlier in the application lifecycle, which ultimately makes it possible to deliver better business value and outcomes. It's that value proposition that's going to cause Lifecycle Virtualization to show up on a growing number of developers' radar in the coming year.
"If you really believe in the potential of a collaborative environment that includes dev, QA, and operations, then a solution like service virtualization is a defining technology," she said. "The team that benefits from it most directly is the test team, of course. They can test more frequently, they understand their meantime to defect discovery, they can increase their test execution, and they can increase their test coverage. But it is the development team that has to implement service virtualization to make that happen."
Lanowitz is at work on a new white paper updating her 2012 Lifecycle Virtualization stats. I'll let you know when it's published.
Posted by John K. Waters on 01/16/2015 at 2:51 PM
I've been looking ahead with analysts and industry watchers at what 2015 might have in store for enterprise software developers in general, but I also reached out for some predictions for Java Jocks about the future of their favorite language and platform.
Al Hilwa, program director in IDC's Software Development Research group, sees the continued adoption of Java 8 as a preoccupying enterprise trend in 2015, though "absorbing major new language releases is typically a slow process." He also expects to see growing interest in functional programming as developers begin putting lambdas and the Stream API into "serious applications."
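For readers who haven't tried them yet, here's a minimal example of the kind of code Hilwa means (the method name is mine, invented for the example): a lambda-driven Stream pipeline that replaces an explicit loop and accumulator with a declarative filter-map-reduce.

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    // Functional-style aggregation with Java 8 lambdas and the Stream API:
    // sum the squares of the even numbers in one declarative pipeline.
    static int sumOfEvenSquares(List<Integer> numbers) {
        return numbers.stream()
                .filter(n -> n % 2 == 0) // lambda as a predicate
                .mapToInt(n -> n * n)    // lambda as a transform
                .sum();                  // terminal reduction
    }

    public static void main(String[] args) {
        System.out.println(sumOfEvenSquares(Arrays.asList(1, 2, 3, 4, 5))); // 20
    }
}
```

The same logic in pre-Java 8 style takes a for loop, an if test, and a mutable accumulator, which is why enterprise teams tend to adopt the idiom gradually, one refactored loop at a time.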
2015 could also be the year that the seemingly interminable court case between Oracle and Google (which may get heard by the Supreme Court) gets resolved, he said, which would mean that "programmers can go back to programming with minimal change to their lives."
Another trend: The enormous popularity of Android, which has been a great win for the Java ecosystem, because of the huge number of new developers it has brought to the Java fold, will continue in 2015, he said.
He also admonished the Executive Committee of the Java Community Process to maintain the release momentum the community has achieved since the Oracle takeover of the stewardship of Java, and to fulfill some recent promises.
"What the Java Jedi council has to do now is keep the releases moving on schedule and implement the complex modularity they promised," Hilwa said. "Java is already in a variety of form-factors, but aligning the pieces and the frameworks is going to be crucial if Java is going to have a nice chunk of the IoT pie."
John R. Rymer, Principal Analyst at Forrester Research, who covers, among other things, Java application servers, expects Oracle to put some real effort into making Java "friendlier to the cloud."
"The runtime in particular needs to be brought into the cloud era," he said. "Things need to be way more modular and lightweight than they are today, where that's appropriate, and I think we're going to see Oracle buckling down for the hard work of doing that in 2015."
Modularization will also get a lot of attention in 2015, Rymer said, both from Oracle and the wider Java community. He and his colleagues are keeping an eye on the Java-native module system known as Project Jigsaw, which Oracle's Java Platform Group has promised to include in Java 9.
"As I see it, Oracle has put down a solid foundation that now allows them to pursue this work," he said.
I also caught up with Mike Milinkovich, executive director of the Eclipse Foundation. These days, Eclipse is about much more than Java, of course; it's a multi-language development environment. But that environment is written in Java (mostly), and Milinkovich keeps his eye on that technology and community.
"All the numbers I have seen point to Java 8 being one of the most rapidly adopted new Java releases ever," he told me in an email. "So I view Java 8 as largely last year's news. What I am expecting in 2015 primarily is the work that will be done marching towards Java 9. In particular, I think that there will a lot of effort and discussion around modularity, and how that impacts the Java platform going forward."
A second significant area of growth for Java in 2015, Milinkovich said, will be the Internet of Things. That's not surprising: the Eclipse Foundation unveiled an IoT Stack for Java at JavaOne last year, and Ian Skerrett, the Foundation's VP of marketing, has been leading an Eclipse IoT initiative to build an open source community around IoT.
Milinkovich pointed to several Foundation projects focused on IoT and based on Java, including Kura, a complete and mature device gateway services framework; Smarthome, a residential home automation system that supports the integration of devices from many manufacturers, using many protocols; and Concierge, a lightweight, embeddable OSGi framework implementation. He also pointed out that Oracle is investing heavily in Java ME with an eye toward making it a serious player in the IoT space.
Milinkovich shared Rymer's opinion that Java would continue to move into the cloud "in a very big way" in 2015. "Cloud Foundry is the leading PaaS platform, and it is based on Java," he said. "IBM is using that platform for BlueMix [an implementation of IBM's Open Cloud Architecture], and I think we are going to be hearing a lot about cloud and BlueMix from IBM in 2015. Oracle has now caught the cloud religion, and I will be expecting to hear a steady drumbeat of Java in the cloud from them as well."
Posted by John K. Waters on 01/14/2015 at 10:52 AM
The coming year looks to be a lively one for hard-working enterprise developers, most of whom will find themselves facing new and mutating challenges spawned by rampant mobility, Big (and Fast and Mean) Data, the oozing Internet of Everything and the Cloud. But those who pay attention to a few key trends and heed the advice of some smart industry watchers, will survive and thrive in 2015.
Industry analyst Dana Gardner, for example, expects 2015 to be the year that Platform-as-a-Service (PaaS) goes from tactical to strategic. In other words, the decisions about how to use PaaS in an organization and which version to standardize on will become more than simply a decision about developer tools and productivity. It will involve more strategic levels, he said, and concerns around concepts such as "cloud-first" and "mobile-first," as well as DevOps.
"We've seen cloud-first, mobile-first, and DevOps mentalities, but they've always been separate," he said. "The requirements and decision making around them have been distinct and tactical. I think this is the year that changes. This is the year that all three become part of the same, strategic decision process."
Gardner, who is principal analyst at Interarbor Solutions, said he believes that these once mostly separate spheres cannot remain in their own orbits. "They will have to be considered together," he said. "And this will be very hard to do, because we're talking about a lot of moving parts that impact each other, and a process that crosses organizational boundaries and threatens entrenched cultures."
What this means for enterprise developers, Gardner said, is that they must contribute to the decision making process at the architectural level to ensure that developer requirements don't get short shrift. "They will have to advocate for themselves in a wider environment of decision making," he said, "so that concerns about things like security and deployment flexibility in the hybrid cloud don't obviate the needs and concerns of developers. They need to learn to explain their past decisions and current needs in such a way that they are respected in the larger picture."
Which is not to say that you should go charging into the CIO's office with a list of demands. Gardner says that development organizations would be well advised to anoint advocates to speak for them.
"They need to create a point person to be a liaison with the higher-level decision process," he said. "This should be someone who can speak for them, but at that level -- someone who can provide evidence and metrics, use cases and scenarios, in a way that an architect or a bean counter will understand. In other words, development organizations need to get a little more political and create channels of communication that go up -- and down -- the org chart."
And the process must include security considerations.
"We all saw what happened at Sony Entertainment, and how devastating a cyber attack can be," Gardner said. "Not all enterprises have the wherewithal to implement a high level of security. So now part of your decision making around your most important applications and data has to involve questions like, are we more secure on our own systems and networks, or are we better off partnering with a cloud provider that has security as one of their most important skill sets?"
Application security is a subject near and dear to Gary McGraw's heart. McGraw, who is CTO of Cigital and the author of several now classic books on application security, sees two security trends that should be on every enterprise developer's radar in 2015: the growing importance of application design to app security, and the security challenge posed by the increasing popularity of dynamic languages.
"Even if we got rid of all of the bug problems and all the coding errors, we would still only be solving half the app security problems, which lie in design flaws," he said. "We've learned that we need to emphasize design as much as coding, and we've been getting past the bug parade. I think we'll move even further in that direction in 2015."
Last year, McGraw led a group of security mavens and the IEEE Computer Society to form the Center for Secure Design (CSD), which is seeking to address this "Achilles' heel of security engineering."
That focus on secure design will continue to grow in 2015, he said, and the software security industry must "deal with it head on, without retreating into old, broken ideas that still won't work."
Several of the industry watchers I tapped this year (or pestered, depending on their view of persistent reporters) mentioned micro-service architectures, so I was glad to hear back from Scott Johnston, senior vice president of product at Docker.
"The distributed app or microservice-based architecture does require careful thinking-through of the APIs, or the contracts between the microservices, early in the design process," Johnston said via e-mail. "To really get the full benefit ('this one goes to eleven') of a Dockerized distributed app, enterprise development teams will want to invest, if they haven't already, in end-to-end automation for their dev-build-test pipeline. Such continuous integration and delivery combined with Docker can compress development-to-deploy cycles from months to minutes. GitHub, Atlassian Bamboo, Jenkins, TravisCI, and IBM JazzHub are good examples of tools that can really help."
He also mentioned the so-called developer self-service: "To give enterprise developers less of a reason to fire-up Amazon EC2 instances on their own credit cards -- the 'shadow IT' dreaded by CIOs and CFOs alike -- DevOps and release engineering teams are rolling-out self-service portals that allow developers to provision IT-supported environments on-demand," he said. "This level of automation pulls together a number of technologies, including integration of build pipeline tools like GitHub, Atlassian Bamboo, and Jenkins; VMs like VMware and OpenStack; and configuration management tools like Puppet and Chef."
He also put in a plug for Dockerized apps, which allow DevOps teams to set up "a self-service nirvana for their enterprise development teams." "On the front-end, the developer can self-service provision any environment for any stack in any language, and on the back-end ops can route the deployment of that app to any infrastructure -- VMware machines in the data center, public Amazon EC2 instances, a private OpenStack cloud, you name it. Now layer on top of that automated deployment routing decisions based on real-time spot prices, capacity management, and compliance policies. The result: a complete and awesome transformation of the enterprise app delivery pipeline."
I also heard from the ever-succinct Forrester analyst Mike Gualtieri, who pointed to two trends that "will see a lot more beef behind their buzz" in the coming year: IoT and the Apache Spark engine for large-scale processing.
"The Internet Of Things must become the Internet Of Analytics," Gualtieri said. "The IoT is nothing but a fifty-billion-dazzling-star field with no business value unless firms build the requisite in-motion and at-rest analytical capabilities that are essential to building IoT or IoT-informed applications. Apache Spark makes Hadoop stronger. Hadoop was designed for volume. Apache Spark was designed for speed. If you believe that opposites attract then Hadoop and Spark belong together. They are both cluster computing platforms that share nodes."
(I tried to keep our annual enterprise developer prediction series to two parts this year, but there's too much good stuff -- I'll share more thoughts from industry watchers on the coming year next week in Part 3.)
Posted by John K. Waters on 01/10/2015 at 2:54 PM
The coming year offers both promise and peril for enterprise software developers -- which, of course, is something you can say about every year (every month, for that matter). But I always think it's worth taking a moment during this first week of the New Year to talk with industry watchers about what might lie ahead during this particular orbit around the Sun.
Forrester analyst Jeffrey S. Hammond got out his scarily accurate crystal ball for a quick gaze into 2015. He told me via email that, among other things, he sees the pressure growing on mobile and Internet-of-Things (IoT) developers to "go native and specialize" because of the quickly separating ecosystems of Google, Apple, and to a lesser extent, Microsoft.
"Whether it's smart watches, home automation or vehicle integration," Hammond said, "the challenge for devs will be to plug into mobile devices and the ecosystems of connected products that are designed to work with them without having to write (and maintain) the same functions in multiple code bases."
And then there are the micro-service architectures: "Figuring out how to do things 'the Netflix way' will force developers to come to grips with technologies like Docker, Kubernetes, and the AWS EC2 Container Service, as well as lighter-weight runtimes like Node," he said. He also believes that the move to micro-services will amp up the pressure on Microsoft and Oracle to "slim down" the .NET and Java VMs to decrease their footprint.
I also connected with Jonas Bonér, CTO and co-founder of Typesafe, the company behind Scala, the general purpose, multi-paradigm language that runs on the Java Virtual Machine (JVM), and Akka, the open-source run-time toolkit for concurrency and scalability on the JVM.
Bonér also expects container-based infrastructures, which really took off in 2014, to continue to make life easier for the devs who adopt them. The technologies he sees driving this trend include Docker, Apache Mesos, Google Kubernetes, and CoreOS.
"The floodgates have really opened in terms of moving away from server-based JEE and .NET 'old stack' models to more service simplicity, single-responsibility, composable and isolated approaches for service design," he said. "We see this trend continuing to pick up momentum in 2015 as the industry debates the ideal size and behavior of services in the new world of applications that need to run across multiple cores."
Java 8 was the "game changer" for developers last year, Bonér observed, and he expects that impact to continue as adoption spreads in 2015 -- with at least one interesting side effect. "When the 800-pound gorilla (Oracle / Java) endorsed new abstractions like Lambdas, Streams, and CompletableFuture to allow more functional-style programming, it really set the wheels in motion for new ways of thinking about writing asynchronous and concurrent systems," he said, "and it opened a lot of mainstream Java developers' eyes to a range of other languages and possibilities."
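For readers who haven't tried those abstractions yet, here's a minimal sketch of the asynchronous style Bonér is describing, using Java 8's CompletableFuture with lambdas. The `fetchPrice` method is a made-up stand-in for a slow remote call; the point is how two async results are composed without explicit threads or callbacks.

```java
import java.util.concurrent.CompletableFuture;

public class Java8AsyncSketch {
    // Hypothetical stand-in for a slow remote lookup; here it just
    // derives a number from the item name so the example is runnable.
    static CompletableFuture<Integer> fetchPrice(String item) {
        return CompletableFuture.supplyAsync(() -> item.length() * 10);
    }

    public static void main(String[] args) {
        // Kick off two lookups concurrently, then combine the results
        // when both complete -- no explicit Thread or callback plumbing.
        int total = fetchPrice("book")
                .thenCombine(fetchPrice("pen"), Integer::sum)
                .join(); // block only at the very end, for the demo
        System.out.println(total);
    }
}
```

The same composition style (`thenApply`, `thenCombine`, `thenCompose`) chains arbitrary pipelines of async work, which is the shift in thinking Bonér credits Java 8 with mainstreaming.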
Big data and how developers should be dealing with it dominated many conversations in 2014. Bonér expects that conversation to continue, but with a different emphasis.
"We're seeing the discussion around big data moving away from size and more towards velocity," he said. "Call it Fast Data. Speed is the hardest problem to solve -- getting in-memory cached, real-time processing of data. When analysis needs to be done on the fly, on live data streams, with real-time feedback to systems, there are a host of major challenges. We think Fast Data will be the rallying cry for big data developers in 2015, and there is a lot of symbiosis between new Reactive Programming models and the challenges of achieving Fast Data."
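To make the "analysis on the fly, on live data streams" idea concrete, here's an illustrative sketch (my own, not from any vendor's API) of one of the simplest Fast Data building blocks: a fixed-size sliding window that maintains a running average in memory as each reading arrives, rather than batch-processing stored data after the fact.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A fixed-capacity sliding window over a live stream of readings.
// Each add() updates the running average in O(1) -- in-memory,
// incremental analysis instead of an after-the-fact batch job.
public class SlidingAverage {
    private final int capacity;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0.0;

    public SlidingAverage(int capacity) {
        this.capacity = capacity;
    }

    // Feed one reading; evict the oldest once the window is full,
    // and return the average of the readings currently in view.
    public double add(double reading) {
        window.addLast(reading);
        sum += reading;
        if (window.size() > capacity) {
            sum -= window.removeFirst();
        }
        return sum / window.size();
    }

    public static void main(String[] args) {
        SlidingAverage avg = new SlidingAverage(3);
        for (double r : new double[] {1, 2, 3, 4}) {
            System.out.println(avg.add(r));
        }
    }
}
```

Real Fast Data systems layer partitioning, back-pressure, and fault tolerance on top of this kind of incremental computation, which is where the reactive frameworks Bonér mentions come in.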
The demands of reactive/dataflow-based architectures are also going to push developers out of their comfort zones, Verberg said. "Software projects are increasingly about shipping and transforming large amounts of data," he said. "Even within enterprises where the number of end users tends to be smaller, you have larger, richer data sets to deal with. The old three-tier 'straight to the data store' architectures can no longer support the scalability needs. Reactive and functional thinking is a huge challenge for traditional enterprise developers; it's a whole new (well, actually old, but reinvented) way of thinking and architecting software. I see many developers struggling with the concepts, and a lack of visualization tools around reactive programming and dataflow makes it even trickier."
Verberg also expects devs to feel a new pressure to fully understand DevOps and virtualization, "or be left scratching their heads when their apps fail miserably in their private or public cloud deployments."
I'll have more 2015 prognostication from the savvy and the insightful later this week.
Posted by John K. Waters on 01/07/2015 at 2:56 PM
James Gosling, whom we all know as the Father of Java, and Brazilian Java community leader Bruno F. Souza, whom the community knows as "the Brazilian Javaman," have joined the platform development advisory team of Java/PHP Platform-as-a-Service provider Jelastic, the company announced this week.
Gosling will join as an independent director, and Souza will become an official advisor.
The Palo Alto, Calif.-based Jelastic, which was founded in 2010 by Hivetext, a Zhytomyr, Ukraine-based startup focused on Java application development in the cloud, bills itself as the only cloud company whose underlying platform is Java. CEO Ruslan Synytsky says having such prominent Java figures contributing their expertise will give the company "even more in-depth coverage and analysis of Java features on our always transforming and improving platform."
Gosling and Souza are the most recent additions to the company's growing list of advisors. Jelastic first announced the creation of an advisory group to help with the development of its PaaS product in 2011. That group currently includes Rasmus Lerdorf (creator of the PHP language); Mark Zbikowski (former Microsoft Architect); Serguei Beloussov (Parallels founder); Monty Widenius (founder of MySQL and MariaDB); Igor Sysoev (founder of NGINX); and David Blevins (founder of Apache TomEE, OpenEJB, and Geronimo).
Without a doubt, this is a big "get" for Jelastic. Gosling, a former Fellow at Sun Microsystems, is credited with inventing the Java programming language in 1994 (though Silicon Valley entrepreneur and former Sun product manager Kim Polese gets the credit for naming it). Gosling briefly joined Oracle after the database giant acquired Sun in 2010, left to work at Google for a while, and now serves as chief software architect at Liquid Robotics, a very cool company that makes "autonomous, ocean going platforms," including the Wave Glider, which is used for research.
Souza is a former president of SouJava, a Brazil-based Java User Group (JUG), and was one of the initiators of the Apache Harmony project to create a non-proprietary Java virtual machine. SouJava filled the vacancy left by the Apache Software Foundation (ASF) on the Java Community Process Executive Committee (EC) in 2011. That vacancy, readers will recall, was created when the ASF decided to quit the EC. The non-profit organization behind more than 100 open-source projects had been threatening to leave for some time. When the JCP executive committee voted to approve Java SE 7, which the ASF opposed, the group walked.
The São Paulo-based SouJava was the first JUG to join the JCP, and it claims tens of thousands of members, for whom it hosts activities in several cities around the country. Souza represented the organization on the EC. Souza is also a member of the Open Source Initiative (OSI) and an outspoken proponent of open source.
Souza gushed about Jelastic in a statement: "Throughout my career, I have been promoting freedom and choice for developers," he said. "Jelastic has a unique business model that promotes choice. Jelastic's philosophy changed the way I look at cloud infrastructure. Jelastic's Java-based implementation shows the power of Java technology. Giving developers the freedom to leave gives us the confidence to choose to stay. This is the power of the Java ecosystem. The power of choice. I'm very happy to be more directly involved in the future of Jelastic. This is an amazing opportunity to help bring more freedom and choice for developers worldwide."
Jelastic gushed about its newest advisors, promising to use its souped-up advisory group over the next year "to influence Java development to make it even more dynamic, by eventually implementing the ability to reload all configurations/settings such as Xmx on the fly, without the need to restart an application/JVM, to bring/adapt desktop applications to the cloud."
And Gosling, who, in my experience, is not given to gushing, came pretty close in his statement: "Configuring cloud infrastructures is fun the first time you do it. But it doesn't take too long before it becomes a tedious time sink," said James Gosling. "And, if you have the misfortune of being a software developer that has to fight it out with an IT organization, who usually wants consistency, control and visibility, you find that you're always fighting with them. Jelastic solves all of that. Easy configuration tools for developers, management tools for IT. Peace and productivity. I love it."
Posted by John K. Waters on 11/19/2014 at 11:56 AM
Forrester Research analysts have been talking about "modern applications," a term they more or less coined, for a couple of years now. One of the clearest definitions of a modern app comes from application development and delivery specialist Jeffrey S. Hammond, who listed the qualities of a modern app in a 2013 blog post.
According to Hammond, a modern application is designed to work across a range of devices, from smartphones to desktops (not to mention your car and toaster). They react to multiple modes of input, including voice, touch, and the good old mouse. They're highly elastic and "take advantage of cloud economics." They use open source software. They're API-oriented, built on open web techniques, and use REST, XML, and JSON "to make it easy for all types of devices and clients to easily consume data." They're also responsive, organic, and contextual. (It's well worth reading the whole post.)
Increasingly, the source for this modern species of app is non-traditional developers, he said during a recent panel discussion among in-the-trenches coders.
"Sometimes I feel like I'm living in two completely different markets these days," Hammond said. "There's the market of the traditional IT developer, where we have conversations about whether they're a .NET or Java shop, and whether they're going to release two times this year or three, and how many millions of lines of code they're writing for the middleware they're building on top of these app servers."
Hammond moderated the panel, which was held last month at Telerik's Silicon Valley headquarters in Palo Alto. It featured representatives from Telerik partner organizations who are facing the challenge of bridging the two worlds Hammond described. In keeping with the theme of the event ("Coding Tomorrow's Masterpieces"), Hammond asked the panelists for examples of modern apps they considered to be masterpieces.
Thomas Stein, computer systems manager in the Department of Earth and Planetary Sciences at Washington University in St. Louis, who works in the school's NASA laboratory, pointed to Uber as a modern masterpiece, calling it "an amazing piece of work."
"I've hated the taxi experience my entire life," he said. "Uber puts me in direct contact with the driver, separating out the awkwardness of payment and tipping and all of that, and just really focusing on making me comfortable, giving me what I want, and getting me where I need to be -- with the mobile device as the touchpoint. It's not just the business model; the application is brilliant. I know it's not simple underneath, of course, but it feels simple from the top, and that's essential in a masterpiece."
For Chuck Ganapathi, founder and CEO of Tactile, which makes a mobile CRM app called Tact, it was Dropbox. (It was actually his org's app, but they made him name another.)
"To me, a modern software masterpiece is something the users just fall in love with, because it does something simply and it just works," he said. "Dropbox has that kind of feel. Suddenly, you have this file that you drop onto your computer and it magically appears on your computer at work."
Krupa Rocks, senior manager in the Clinical Data Systems group at St. Jude Medical, Inc. (not the hospital, but the medical device company), cited Google's driverless car, because it exemplifies the coming tight integrations of hardware and software.
"People don't know how to drive," she said. "Computers can do a better job. If Google can really provide a self-driving car, that would definitely be a masterpiece."
Todd Anglin, executive vice president of Telerik's Cross Platform Tools group, pointed out that modern software masterpieces are being created all the time that most people never see. "Consumer apps get all the attention," he said, "but there are masterpieces out there that never make it to the app store. Working with our customers, we get to see the apps that make business go and help people get their jobs done. When I look at those kinds of applications, it's really clear to me that a software masterpiece is something that evolves over time. That's one of the things that makes it modern."
Not surprisingly, Anglin also argued that modern application development is more dependent than ever on the evolving capabilities of modern tools. (His company is all about the dev tools.)
"We assume now a certain starting point," he said, "and tools are what get us there. They give teams the space to really think about how to define an application elegantly, rather than just 'how do I make this thing work?'"
Long Le, principal and App/Dev Architect at real estate services firm CB Richard Ellis (CBRE), agreed with Anglin. "Picking the right tools at every stage of your ALM process is super important to how fast you can get [the software] out there," he said, "especially if you have limited resources."
Ganapathi added that, for modern apps especially, analytics capabilities that help developers truly understand end users have become critical. "Today, it's all about being very iterative in your development and constantly re-tuning that on a day-to-day basis," he said. "You put something out there, and then observe the data to see how people are actually using it, and then you respond to that. And you don't rely on what they're telling you in user interviews, which is so often very different."
He also pointed to the growing importance of designers in modern app development. "As developers, we've always said to designers, we'll develop it, you just make it look pretty," he said. "That's so wrong! Everybody expects phenomenal design today. If you don't have great designers -- especially when you're thinking about modern mobile apps, let alone creating a masterpiece -- you're screwed."
Rocks added that in her organization, automated testing tools have become fundamental to fast solution delivery. "Developers aren't the best testers," she said. "So testing would become a bottleneck for us without those tools." She also agreed that designers have become essential to the process. "Users may not know what they want," she said, "but they know what they don't want."
Hammond noted that the emergence of such new tools as Grunt and the enormously popular Git could be evidence that classic IDEs, such as Visual Studio and Eclipse, aren't as useful for modern application development. He also suggested that the modern application space has birthed "a new humility" among developers.
The panelists also agreed that modern apps are increasingly being built by those non-traditional developers Hammond mentioned, people with a wide range of skills, from software engineers with computer science degrees to "not developers" in the sales department who rely heavily on tools and frameworks.
And they might even come up with a few masterpieces.
Posted by John K. Waters on 11/12/2014 at 11:14 AM