When the CSLA .NET framework made its first appearance in a book written by its creator, Rockford Lhotka, back in 1998, it was little more than a hunk of sample code -- at least that's how he saw it. But readers of that extremely popular book, VB6 Business Objects, saw it as something more.
"That first implementation was not really a framework per se," Lhotka recalls. "But after I published the book, I would get these e-mails from people who would say, 'Hey, I bought your book and I was using your framework and I wish it did this,' or, 'Your framework has a bug.' Initially I would respond that I don't have a framework. Over time I gave in and decided, hey, maybe I do have a framework."
Today CSLA is one of the most widely used open source software development frameworks for .NET. It's designed to help developers build a business logic layer for Windows, Web, service-oriented and workflow applications.
"It helps developers create a set of business objects that contain all of their business rules in a way that allows those object to be reused to create many different kinds of user interfaces or user experiences," Lhotka explains. "And once you've created this business layer using CSLA, you can create a WPF interface, a Silverlight interface, a Web interface, or a service interface on top of it."
"But then it gets even more interesting," he continued, "because those same objects can work on a Windows Phone, an Android device, and the new Windows Runtime (WinRT). Even if you're not building distributed applications (which most developers are these days), the CSLA framework gives an application a lot of structure and organization, which leads to long-term maintainability."
Lhotka (Rocky to his friends), CTO of Magenic, will be holding workshops on "Full Application Lifecycle with TFS and CSLA .NET" at the upcoming Visual Studio Live! New York and Visual Studio Live! Redmond conferences, as well as sessions on other topics. Lhotka is both a Microsoft Regional Director (a designation for technical experts and community leaders who are not Microsoft employees) and a Microsoft Most Valuable Professional (MVP).
Lhotka created the .NET implementation of CSLA in 1999. The framework was originally conceived in 1996 in the world of Microsoft's Component Object Model (COM) and Visual Basic 5, and dubbed "Component Based Scalable Logical Architecture." But when Lhotka re-implemented it for .NET, which is not component based, the name "CSLA" became "just an unpronounceable word," he says.
CSLA .NET is currently in version 4.2, which supports Visual Studio 2010, Microsoft .NET 4.0, Silverlight 4 and Windows Phone 7. Versions 4.2 and higher also support Android, Linux and OS X through the use of Mono, MonoTouch and Mono for Android.
More information about the CSLA framework, including a FAQ page, a download page, documentation, and a blog, can be found on Lhotka's Web site.
Posted by John K. Waters on 05/07/2012 at 10:53 AM
While there's lots of talk (a lot of talk) about big data these days, according to Andrew Brust, Microsoft Regional Director and MVP, there currently is no good, authoritative definition of big data.
"It's still working itself out," Brust says. "Like any product in a good hype cycle, the malleability of the term is being used by people to suit their agendas."
"That's okay," he continues. "There's a definition evolving."
Still, Brust, who will be speaking about big data and Microsoft at the upcoming Visual Studio Live! New York conference, says that a few consistent big data characteristics have emerged. For one, it can't be big data if it isn't...well...big.
"We're talking about at least hundreds of terabytes," Brust explains. "Definitely not gigabytes. If it's not petabytes, we're getting close, and people are talking about exabytes and zettabytes. For now at least, if it's too big for a transactional system, you can legitimately call it big data. But that threshold is going to change as transactional systems evolve."
But big data also has "velocity," meaning that it's coming in an unrelenting stream. And it comes from a wide range of sources, including unstructured, non-relational sources -- click-stream data from Web sites, blogs, tweets, follows, comments and all the assets that come out of social media, for example.
Also, the big data conversation almost always includes Hadoop, Brust says. The Hadoop framework is an open source distributed computing platform designed to allow implementations of MapReduce to run on large clusters of commodity hardware. Google's MapReduce is a programming model for processing and generating large data sets; it supports parallel computations over those data sets on unreliable computer clusters.
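The MapReduce model itself is simple enough to sketch in a few lines of plain Python. The sketch below is a toy, single-machine word count; the function names are illustrative, not Hadoop's actual API. Hadoop's contribution is running the map and reduce phases in parallel across thousands of machines and surviving their failures.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as Hadoop does
    between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big clusters", "big data"]
word_counts = reduce_phase(shuffle(map_phase(docs)))
# word_counts -> {"big": 3, "data": 2, "clusters": 1}
```

In Hadoop the shuffle step is handled entirely by the framework, which is why developers typically write only the map and reduce functions.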
"The truth is, we've always had Big Data, we just haven't kept it," says Brust, who is also the founder and CEO of Blue Badge Insights. "It hasn't been archived and used for analysis later on. But because storage has become so much cheaper, and because of Hadoop, we can now use inexpensive commodity hardware to do distributed processing on that data, and it's now financially feasible to hold the data and analyze it."
"Ultimately the value Microsoft is trying to provide is to connect the open-source Big Data world (Hadoop) with the more enterprise friendly Microsoft BI (business intelligence) world," Brust says.
Posted by John K. Waters on 04/10/2012 at 10:53 AM
It may not happen tomorrow, but sooner or later you're going to find yourself writing multitouch, gesture- and audio-input-based applications, Tim Huckaby declared during his day two keynote at the Las Vegas edition of the Visual Studio Live! 2012 developer conference series.
"I'm old enough that I remember when using a mouse was an unnatural act!" Huckaby told a packed auditorium at the Mirage hotel on Wednesday. "Now it's second nature. I'd argue that some of this voice- and gesture-capable stuff will be just as natural in a few short years."
Huckaby's keynote focused on human interactions with computers in non-traditional "natural-type" ways -- sometimes referred to as the Natural User Interface, or NUI -- and how it will impact the lives of .NET developers. That's something of a specialty of his Carlsbad, Calif.-based company, InterKnowlogy, which has delivered dozens of large WPF, Silverlight, Surface and Windows 7 Touch applications to clients across the country. He also founded a company, Actus, that specializes in interactive kiosk applications.
In a lively keynote during which he interacted with various gesture- and audio-based applications by flailing his arms and shouting commands, Huckaby argued that multitouch is now cheap, consumer-grade technology that everyone already wants.
"It's now cheap to do multitouch," Huckaby said. "And it improves usability, incredibly. You will see every computing device from here on in -- whether it's a smart phone or your desktop -- every one of them will be multitouch enabled."
To illustrate the pace of NUI evolution, Huckaby demonstrated a 3D application built by his company in early 2007 for cardiac surgeons that allows the user to manipulate the heart image via a touch screen. He contrasted that app with a similar one InterKnowlogy developed recently based on Microsoft's Kinect motion sensing input device.
"This was prototyped in a couple of weeks, and it's just .NET," Huckaby said.
He also demonstrated a touch-screen craps table built by his company that interacts with real-world objects. The bets were activated with physical chips laid down on the screen and "dialed" to establish the size of the bet, and the dice were actual transparent cubes that, when tossed, registered on the board.
The keynoter drew good-natured laughter from his audience as he waved his arms and strained a damaged rotator cuff to demo a physical therapy application designed to track a patient's movements through prescribed exercises and display them on a screen in real time. The application provided feedback to help the patient get the movements right. The application was based on Kinect, which Huckaby said is currently the world's fastest-selling consumer electronics device.
The audience was also treated to a video about a neural computer interface, a spider-like contraption worn on the head, which was used to send commands to a wheelchair. Huckaby said the software for the device could be built with .NET right now. He wrapped up his demos with a video of an application that supported physical interactions with virtual objects. He called the C3-based app "a first go at the Holodeck" from Star Trek. He also showed off a game-based app developed for NASA.
"It's time for all of you to start thinking about building applications that use a Natural User Interface," Huckaby told the crowd. "Gesture is coming, fast; multitouch is here. And we might not be thinking commands at computers just yet, but we'll be doing that, too. It's just a matter of time."
Posted by John K. Waters on 03/29/2012 at 10:53 AM
Last month, the CEO network at Technet.org published a study, titled "Where the Jobs Are: The App Economy," that puts the number of jobs generated in the U.S. by the so-called app economy in the last four years somewhere near the half million mark. The organization, which bills itself as a bipartisan political network of senior executives focused on promoting the growth of "technology-led innovation," concluded the following: "The incredibly rapid rise of smartphones, tablets and social media, and the applications -- 'apps' -- that run on them, is perhaps the biggest economic and technological phenomenon today."
That conclusion came as no surprise to Jake Ward, head of communications for the newly formed Application Developers Alliance. His nascent organization is only a few weeks old, but it has been under development for a couple of years.
"That work involved a lot of research -- a lot of focus groups, surveys and conversations with individual developers and the companies that care about them," Ward told me. "One thing that was prevalent in all of their answers, and the overarching theme of every conversation, was: 'Wow, there are a lot of apps out there!'"
The Washington, DC-based Apps Alliance, as its growing membership calls the organization, is a nonprofit support, education, and advocacy group "committed to helping developers test and ship great ideas," the Web site says. Launched earlier this year, the Alliance membership currently comprises both individual developers (about 55 percent) and corporate members (about 45 percent). Developers of every stripe are welcome, Ward says.
"We are as agnostic as we can possibly be," he says. "If you are a developer -- whether you're an independent app builder or an enterprise programmer -- and you see value in the organization and want to participate, we want you," Ward says. "If you're an enterprise software developer by day, you might be a Python coder by night. It only matters to us that that's what you want to do. The next great way to build an app can come from anywhere."
The cornerstone of the org's member benefits is its Alliance Network, which is a social network for members only. It's designed to allow developers to collaborate, to find each other, to have discussions on message boards and to engage with corporate members through dedicated landing pages to which they can subscribe (it looks something like Facebook Fan Pages meets LinkedIn Groups).
The Alliance is the brainchild of attorney Jonathan Potter, who also founded the Digital Media Association, where he served as executive director for about 12 years.
Individuals can join for free by registering on the Alliance Web site. Ward says individual memberships will be free for the foreseeable future. The Alliance will be funded by the annual dues of corporate memberships, but Ward vows that the developers are going to "drive the bus."
There's a lot to like about the Apps Alliance, but let's face it, there are a lot of developer-focused organizations out there. Do we really need another one?
"We believe that the formation of the Apps Alliance is an essential step toward normalizing the apps industry so that its trajectory continues upward, with no slowing, no plateauing, just a continuous driver of innovation and standardization and the rising tide of the ecosystems," Ward says.
"The mission of this organization," he adds, "is to be the connective tissue of the industry."
More information on the Application Developers Alliance is available on the organization's Web site. Stop by and watch the intro video, and then let us know what you think.
Posted by John K. Waters on 03/22/2012 at 10:53 AM
VMware's recent announcement of an integration of its Spring Framework with Apache Hadoop is aimed at making life easier for enterprise Java developers who want to use the popular open-source platform for data-intensive distributed computing. The new Spring Hadoop is a lightweight framework that combines the capabilities of the Spring framework with Hadoop's ability to allow developers to build applications that scale from one server to thousands and deliver high availability through the software, rather than hardware.
By integrating the Hadoop Framework, a Java-based, open-source platform for the distributed processing of large data sets across clusters of computers using a simple programming model, with the Spring Java/J2EE application development framework, VMware has created a project that fits neatly under the Spring Data umbrella. The open-source Spring Data project comprises a group of sub-projects seeking to make it easier to develop apps that use a bunch of new data access technologies, such as non-relational databases, cloud-based data services and MapReduce frameworks like Hadoop.
In addition to Apache Hadoop, the list of Spring Data sub-projects includes, among others, Spring Data JPA, which simplifies the development of Java Persistence API-based data access layers; VMware's GemFire distributed DB management platform; the Redis advanced key-value store; and the MongoDB document-oriented database.
The new framework also supports comprehensive HDFS data access through such Java Virtual Machine (JVM) scripting languages as Groovy, JRuby, Jython and Rhino. HDFS (Hadoop Distributed File System) is designed to scale to petabytes of storage and to run on top of the file systems of the underlying OS.
The list of Spring Hadoop capabilities also includes: declarative configuration support for HBase; dedicated Spring Batch support for developing workflow solutions that incorporate HDFS operations and "all types of Hadoop jobs;" support for the use with Spring Integration "that provides easy access to a wide range of existing systems using an extensible event-driven pipes and filters architecture;" Hadoop configuration options and templating mechanism for client connections to Hadoop; and declarative and programmatic support for Hadoop Tools, including FsShell and DistCp.
Developer Costin Leau announced the integration on the SpringSource Community blog. "…Spring Hadoop stays true to the Spring philosophy, offering a simplified programming model and addressing the 'accidental complexity' caused by the infrastructure," he wrote. "Spring Hadoop provides a powerful tool in the developer arsenal for dealing with big data volumes."
VMware has released Spring Hadoop under the open source Apache 2.0 license. It's available now as a free download.
Posted by John K. Waters on 03/13/2012 at 10:53 AM
The Multipurpose Internet Mail Extensions (MIME) specification that defines the way multimedia objects are labeled, compounded and encoded for transport over the Internet turns 20 this month. Ned Freed and Nathaniel Borenstein were the two primary authors of the spec. Borenstein, who worked at New Jersey-based Bellcore at the time, sent out the first real MIME message on March 11, 1992. That message included an audio clip of the Telephone Chords, an all-Bellcore barbershop quartet featuring John Lamb, David Braun, Michael Littman and Borenstein, singing about MIME to the tune of "Let Me Call You Sweetheart."
"Those of you not running MIME-compliant mail readers won't get a lot out of this," Borenstein wrote in that message.
Are there any non-MIME-compliant mail readers today?
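The layered structure Freed and Borenstein specified -- a multipart container, labeled parts, transfer encoding for binary data -- is now baked into every mainstream mail library. As a small illustration, here is roughly the shape of that 1992 message built with Python's standard email package; the addresses and the audio bytes are placeholders, not the original content.

```python
from email.message import EmailMessage

# A text body plus a labeled, encoded audio attachment: the same
# shape as the first real MIME message sent in March 1992.
msg = EmailMessage()
msg["From"] = "nsb@bellcore.example"
msg["To"] = "mime-fans@internet.example"
msg["Subject"] = "A song about MIME"
msg.set_content("Those of you not running MIME-compliant mail readers "
                "won't get a lot out of this.")
msg.add_attachment(b"placeholder-audio-bytes",
                   maintype="audio", subtype="basic",
                   filename="let-me-call-you-sweetheart.au")

raw = msg.as_string()
```

Adding the attachment silently promotes the message to a multipart/mixed container, and the binary clip is base64-encoded for transport -- exactly the machinery the spec defined twenty years ago.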
Borenstein, who is today Chief Scientist for cloud-based e-mail management company Mimecast, was in Silicon Valley recently to speak at the Cloud Connect conference. I grabbed a few minutes with him when he stopped in at the Computer History Museum in Mountain View. We got the two questions he's asked most often out of the way first.
"Everybody assumes I founded this company, but when I joined, it was six years old," he said. "When I first heard the name, before I ever thought about working there, I thought, they can't do that! But as I learned about the company, I found that I loved what they were doing and I liked the people a lot. And it certainly doesn't hurt them to have the author of MIME working at Mimecast."
Borenstein says most people also want to know if he ever thinks about how much money he would have made if he'd had some sort of financial stake in the now-ubiquitous Internet standard for multimedia data. "They ask me, 'Have you ever thought about what it would be like if you got a penny for every time MIME was used?' The answer is, yeah. It's hard to be precise, but I'd estimate that MIME is used about a trillion times a day. My current income would be roughly the GDP of Germany."
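His joke actually holds up to a back-of-the-envelope check, taking his own trillion-a-day figure at face value (Germany's GDP was on the order of $3.4 trillion at the time):

```python
# A penny per use, at Borenstein's estimate of ~1 trillion MIME uses a day.
uses_per_day = 1_000_000_000_000

per_day_dollars = uses_per_day // 100     # one cent per use
per_year_dollars = per_day_dollars * 365  # $3.65 trillion a year
```

At $3.65 trillion a year, "roughly the GDP of Germany" is, if anything, an understatement.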
Borenstein joined Mimecast in 2010 after spending eight years at IBM as a distinguished engineer. His duties include long-term product planning, external writing and speaking, and patent strategy and submissions.
"When I joined the company, it had never filed a patent, because all the principals believe, as I do, that patents are deeply evil," he said. "Unfortunately, I had to point out that it's also true that deeply evil people can hurt you, and you really have a responsibility to protect yourself. Our patent strategy is primarily defensive."
Mimecast filed for a patent last year to support a cloud-enabled e-mail analytics system it is developing. The company's flagship service provides cloud-based e-mail management for Microsoft Exchange. That service includes e-mail archiving, continuity and security. It unifies "disparate and fragmented e-mail environments into one holistic solution that is always available from the cloud," the company says on its Web site.
"The cloud makes it possible for companies of a size that could never really contemplate it before to make practical and valuable use of big data and business analytics," Borenstein said. "They can take all that data and finally use it for something besides a dead repository."
In fact, Mimecast's new e-mail analytics system, which he called "proactive e-mail," takes on that very problem. If the demo he showed me is any indication, it could go a long way toward solving the so-called organizational memory problem.
"In any large organization, there's always someone who knows what you're trying to find out," he said, "and yet finding that information is almost always harder than rediscovering it. This is where I see the cloud going: supporting value-added apps that dig into those company archives and bring your own information back to you so that you can use it."
Borenstein is an energetic and positive guy, and he seems to like the work he's doing now at Mimecast very much. But he does miss the days when pure research labs like the one that spawned MIME weren't so uncommon.
"Labs like Bellcore, which was an institution of nearly pure research, are rare birds these days," he said. "And we all suffer for that rarity. After all, MIME grew out of a simple mandate to come up with something that would increase bandwidth usage."
"People would ask me," he added, "why are you working so hard on getting pictures into e-mail? And I'd say, someday I'm going to have grandchildren, and I want to be able to get pictures of them by e-mail. And they would laugh, because back in the 1980s that was too far-fetched."
Borenstein showed me the first photo of his twin granddaughters that his daughter sent him by e-mail: an ultrasound image of a cluster of cells.
"The thing I had envisioned all those years ago was supposed to be much cuter," he said.
On March 5, ACS, the corporate successor to Bellcore, celebrated the twentieth anniversary of MIME at its New Jersey headquarters with, among other things, a reunion of the Telephone Chords. Borenstein said he was practicing "so I don't miss the notes this time." I couldn't make it to the event, but the original message featuring the Telephone Chords singing their MIME song in four-part harmony is available on Borenstein's "MIME & Me" Web page.
Posted by John K. Waters on 03/09/2012 at 10:53 AM
The annual Cloud Connect conference got under way this week in Santa Clara, Calif. There's a great speaker lineup and lots of vendor news at this year's event. Among the more noteworthy vendor announcements was Nimbula's beta release of its Director 2.0 product. The company expects a general availability release in March.
Nimbula is a Menlo Park, Calif.-based provider of "cloud operating system technology" that was developed by the company's founders in Cape Town, South Africa. The company describes Nimbula Director as "a new class of cloud infrastructure and services system that uniquely combines the flexibility, scalability and operational efficiencies of the public cloud with the control, security and trust of today's most advanced data centers." Think of it as Amazon EC2-like services behind the firewall.
Nimbula Director is an extensible cloud platform designed to embrace augmentation by third parties. The custom logic of network, data, PaaS and other cloud services can be embedded in the cloud and run and managed as though Nimbula wrote it. Consequently, these services inherit Director's high availability, multi-tenancy, and network security functionality.
The latest release adds some nifty enhancements, including support for VMware's ESXi hypervisor. Version 2.0 will also support VMware's Cloud Foundry PaaS. Director 2.0 is among the first solutions to bring the Cloud OS model of an EC2-style cloud to VMware customers. (Nimbula is now a member of VMware's Technology Alliance Program.)
This version also extends its management "from the control plane up into the end user application space," the company says. In other words, users can now orchestrate the provisioning of applications and let the system monitor and manage them over their lifetimes.
And this release also "rounds out" Director's IaaS networking feature set (which already included DHCP, NAT, firewall and VLAN services) with DNS and VPN services. This gives Director a complete set of the networking services required for running real world apps.
Two other announcements caught my attention:
- Cloud-based hosting service AppFog has added Blitz.io and Iron.io to its recently introduced add-on program. Blitz provides application and Web site developers with "powerful yet simple capabilities including continuous monitoring, performance testing and remediation." Iron.io specializes in cloud queuing systems; IronMQ is its elastic message queue and IronWorker is its task queue.
Portland, Ore.-based AppFog was founded about a year ago by Web developer Lucas Carlson, co-author (with Leonard Richardson) of Ruby Cookbook: Recipes for Object Oriented Scripting. The company initially billed itself as a PHP-based Platform-as-a-Service (PaaS) provider, but quickly began supporting Ruby, Python, Perl, Node.js and Java. The company launched the add-on program in December, and just keeps adding vendors. The list also includes New Relic, MongoLab, MailGun and MongoHQ.
- The OpenStack experts at Mirantis have teamed up with cloud training organizer CloudCamp to deliver a series of training courses in OpenStack technology. The program aims to "expand the pool of skilled engineering talent and expertise that a growing number of enterprises require in order to create, deploy and maintain OpenStack solutions."
OpenStack is an open-source project made up of several interrelated projects focused on delivering various components for a cloud infrastructure solution. As the community Web site describes it, the project "aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich." More than 145 leading companies participate in the OpenStack project, including AMD, Cisco, Citrix, Dell, HP, Intel and Microsoft.
Posted by John K. Waters on 02/14/2012 at 10:53 AM
Back in October at JavaOne, representatives from the Java Community Process (JCP), the group that certifies Java specifications, talked about changes coming to the organization. First on the list was the "low-hanging fruit" of transparency, participation, agility and governance addressed in Java Specification Request (JSR) 348. Since then the community has been, in the words of JCP chair Patrick Curran, "revising the process through the process."
Now the JCP has gotten around to a moderately trickier adjustment: The promised merger of the two JCP Executive Committees: the SE/EE EC and the ME EC.
"JSR 355: JCP Executive Committee Merge" seeks to merge the two ECs and reduce the total number of committee members (there are 32 right now). The JSR includes a provision for maintaining the existing two-to-one ratio of ratified-to-elected seats, and a rule that neither Oracle nor any other member may hold more than one seat on the merged EC.
The ECs are charged with guiding "the evolution of Java," and it's not a small job. The ECs pick the JSRs that will be developed, approve draft specs and final specs, approve Technology Compatibility Kit (TCK) licenses, approve maintenance revisions and occasionally defer features to new JSRs, approve transfer of maintenance duties between members and provide guidance to the Program Management Office (PMO).
Curran is the spec lead on JSR 355, and the initial Expert Group membership consists of all members of the ME and the SE/EE Executive Committees -- including, among others, Mike Milinkovich, executive director of the Eclipse Foundation; Red Hat's Mark Little; Attila Szegedi of Twitter, a brand-new EC member; Intel's Anil Kumar; and Google's Joshua Bloch.
The Expert Group has committed on the JCP Web site "to complete the JSR within about six months, thereby permitting the changes to be initiated during the 2012 elections," but also allows that the changes "may need to be phased in over time."
The JCP also states its case for the consolidation:
Changes in the Java ME market, and the increasing maturity and consolidation of the Java market generally, suggest that some rebalancing between Java ME and the other platforms, together with a modest reduction in the total number of EC members, would be appropriate. Looking forward, the expected convergence between Java ME and Java SE is likely to render the current division into two separate ECs increasingly irrelevant. Since Java is One Platform, it ought to be overseen by a single Executive Committee.
Curran put it more succinctly during his aforementioned JavaOne session: "It seems like the right thing to do," he said, "that we should have a single executive committee which will deal with all of the three platforms -- because it is one platform with three flavors."
Let me know what you think about this merging of ECs in the comments!
Posted by John K. Waters on 02/07/2012 at 10:53 AM
If you were wondering whether Mac developers were also facing the pressure to become polyglot programmers so many industry watchers mentioned in their 2012 predictions (like this one and this one), consider the recent announcement from Zend Technologies. The creator and commercial maintainer of PHP announced the general availability of its Web app server, Zend Server 5.6, featuring new support for Macheads.
According to the company's CEO and co-founder Andi Gutmans, Zend has been seeing increased demand from "the community that develops on the Mac."
"We've seen this demand for Mac support from the Zend PHP developer community, at the annual ZendCon event, on Zend Forums and among user groups worldwide," Gutmans told me in an e-mail. "Now, Mac developers can leverage the agility of PHP for Web app development when they use Zend Server, with the best PHP runtime, powerful monitoring, diagnostics and performance optimization."
Okay, maybe it was his PR team. The point is, Zend does a consistent job of responding to trends surfaced by customers asking for new features. Not bleeding-edge fads, but real, late-in-the-hype-cycle trends. Last August, for example, the Cupertino, Calif.-based company released Zend Server 5.5, which essentially addressed the increasing unpredictability of traffic and workloads caused both by the growing popularity of the cloud and the proliferation of end user devices. Mobile and the cloud are not breaking news. Zend was responding directly to customer demand, Kent Mitchell, Zend's director of product management, told me at the time.
And now Zend is responding to Mac developers who, they say, want to have access to the full suite of Zend Server application development capabilities previously available only to Linux and Windows developers. Actually, that should be enterprise Mac developers. And my lead sentence should probably have read, "If you were wondering whether Mac is moving into the enterprise…"
Gutmans' PR team also sent an interesting list, entitled "Why Mac Matters to the Development World," which included:
- There are now nearly 60 million Mac users worldwide.
- Mac runs neck and neck with Linux, based on Zend's three-year history of Zend Studio IDE downloads.
- Zend has seen steadily growing interest for Zend Server's enterprise-class capabilities on the Mac platform -- from within the Zend PHP developer community, at the annual ZendCon event, on Zend Forums, and among user groups worldwide.
As Mac use grows in the enterprise, the company reasons, so does PHP use.
More info on Zend Server is available on the company Web site. There's also a video tour of the product.
Posted by John K. Waters on 02/01/2012 at 10:53 AM