Category Archives: Rich Internet Applications

Software Architecture Cheatsheet (Part 3/3)

In the previous post, I tried to think of the business constraints that shape a software architect’s choices. In this one, I’ll take a few shots at guessing which technologies are important nowadays to build software solutions under these constraints.

I see… I see…

There are so many technologies out there that I will not risk designing some sort of women’s magazine quiz like “tell me about your application, I’ll tell you what technologies you should use”. That’s a very exciting part of what I see as the job of a software architect: finding the right combination of tools and techniques for a specific business context in order to develop high-quality, high-value and robust software for customers.

That said, there are a few areas that seem very important to explore, or even master, in this world, and more specifically in this new economy we’re facing.

Productive dynamic Java

Java is a very mature and popular technology, so much so that many people have predicted its death time and time again. But in my view, it’s very much alive, especially with recent developments that have made Java development much more productive. Of course, SpringSource-originated frameworks like Spring and its galaxy of projects have been changing the enterprise Java landscape for a long time now.

But even more recently, inspiration has come from the “casual programmer” side, with Ruby on Rails and Python/Django yielding even more interesting developments like Groovy and Grails, which combine the flexibility of a dynamic language with the incredible power and richness of the Java platform.

In my opinion, Groovy/Grails are about to rejuvenate enterprise development in an incredible way.

Modular Java

There was a lot of marketing buzz a few months ago about something called Service-Oriented Architecture. Unfortunately, although it was based on common sense, marketers and tool vendors killed the concept in its infancy. Still, some important concerns emerged from it that remain limitations of the most popular technology platforms. One of them is the importance of modularity: the ability to change one part of a system without touching anything else, whether it is to adapt it or to restart it.

OSGi (Open Service Gateway initiative) is a standard that has made remarkable progress on the server side in the past few months, and with its massive adoption by major vendors, it’s definitely going to be something to watch.
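To make that modularity idea a bit more concrete, here is a minimal sketch of an OSGi bundle activator in Java. The GreetingService interface and its implementation are purely hypothetical, and the manifest headers a real bundle would need are omitted; the point is simply that a service can appear and disappear at runtime without restarting the rest of the system.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Hypothetical service contract, for illustration only.
    interface GreetingService {
        String greet(String name);
    }

    public class GreetingActivator implements BundleActivator {

        private ServiceRegistration registration;

        // Called by the OSGi container when the bundle starts: the service
        // becomes available to other bundles without restarting them.
        public void start(BundleContext context) {
            GreetingService service = new GreetingService() {
                public String greet(String name) {
                    return "Hello, " + name;
                }
            };
            registration = context.registerService(
                    GreetingService.class.getName(), service, null);
        }

        // Called when the bundle stops: the service goes away, and only
        // the bundles that consume it are affected.
        public void stop(BundleContext context) {
            registration.unregister();
        }
    }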

Server-agnostic Rich Internet Applications

RIA-enabling technologies form a very competitive landscape: Adobe Flex, Microsoft Silverlight, Sun JavaFX, and even more niche technologies like OpenLaszlo and Curl. And I’m not even considering all those Javascript frameworks and AJAX-generating techniques, which I personally don’t see as viable alternatives in an enterprise environment.

My technology of choice is definitely Adobe Flex: it’s open (it has even become one of the most impressive examples of Open Source development lately), it’s robust, it’s server-agnostic (it works with Java, .Net, PHP, Python, what have you), it offers desktop integration capabilities, making it possible to cover many of the use cases mentioned above, and it’s very elegant by design. More importantly, it was one of the first RIA technologies out there, which makes it both very mature AND very popular.

Native Mobile Development

Mobile development has always been a hobby of mine. Taking useful applications with you is an old fantasy. But for a long time, the platforms were so poor that it was difficult to turn this hobby into a professional activity. That was until I came in touch with iPhone SDK development, which really blew me away. For the first time, we have some great mobile hardware with unique usability capabilities, and we have the software development platform to use those capabilities like never before. And it’s going to be even better with the release of iPhone OS 3.0.

Of course, it’s about to become a very competitive area too, with the release of Palm WebOS, Google Android and Nokia Qt. But for now, the iPhone SDK is by far the most advanced native mobile development option.

What’s my point?

The purpose of this series is twofold:

1. try to show why software in general, and software architecture in particular, are such exciting fields
2. wake up people who tend to have only one single hammer in their toolbox

Now if, in addition to that, it can spark a debate, then I have a few questions for you guys (and hopefully gals :oP). So, what technologies do you think are important to know in the current and future software world?

Software Architecture Cheatsheet (Part 2/3)

In the previous post in this series, I tried to enumerate the most frequent kinds of applications. The question I’m going to ask myself here is: what are the constraints that come into play when choosing the right paradigm and the corresponding technologies to implement it?

Environment! Environment! Environment!

Before we start answering that question, let’s just be clear about something. We live in a world where there are plenty of free and Open Source libraries, frameworks and tools of all kinds. It doesn’t mean that free is always good, but at least it’s an option, and if a commercial option can add some value somewhere, then go for it, it’ll be worth it. So I won’t consider tooling cost as a parameter here.

Performance (high computational power and low bandwidth)

Whenever you hear your customer say “I need it to handle several million transactions per second” or “I quickly want to make decisions based on thousands and thousands of records”, you know that you will have to think about performance. There can be several kinds of performance: memory consumption, CPU cycles, disk space, network bandwidth, hardware cost, etc. And all those metrics very often play together, which means that any change to one of them has an impact on all of the others. For instance, it’s very common to have to increase memory consumption in order to optimize CPU or disk access (caching).
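As a toy illustration of that memory-for-CPU trade-off, here is a minimal Java sketch of a memoizing cache; the expensiveComputation method is a made-up stand-in for whatever costly operation you are trying to avoid repeating.

    import java.util.HashMap;
    import java.util.Map;

    public class MemoizingCalculator {

        // Trading memory for CPU: results are kept around so that the
        // expensive computation only runs once per distinct input.
        private final Map<Integer, Long> cache = new HashMap<Integer, Long>();

        public long compute(int input) {
            Long cached = cache.get(input);
            if (cached != null) {
                return cached; // cache hit: no CPU spent, but memory is used
            }
            long result = expensiveComputation(input);
            cache.put(input, result); // cache miss: pay once, remember it
            return result;
        }

        // Hypothetical stand-in for any CPU- or disk-intensive operation.
        private long expensiveComputation(int input) {
            long sum = 0;
            for (int i = 0; i < 10000000; i++) {
                sum += (long) i * input;
            }
            return sum;
        }
    }

Note that the cache grows without bound here, which is exactly the kind of secondary effect you have to keep an eye on once you start optimizing.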

Another important characteristic of performance is that optimization requires you to dig deeper into low-level details, because most of the performance is lost when abstracting machine constructs away to be closer to human users. That’s why optimizing performance requires more work than doing things naively, and it’s very important to weigh the benefits of this work against its cost.

Moreover, it’s sometimes tempting to think about performance very early on and to focus on it more than on the business value the application is supposed to create. But experience proves that you can quickly end up with very fast systems that don’t do what they are supposed to do, because the closer you are to the machine, the harder the system is to develop and maintain. That’s why Donald Knuth said:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Distribution (number and spread of end users)

Nowadays, it seems like all applications are meant to be web applications, all the more so with the recent fashion for cloud-based applications that attempt to “webify” traditionally desktop-based apps like word processors, spreadsheets, and so on. And so much effort has been spent on web apps in the past 10 years or so that everyone knows about the technologies to build them. Yet it’s always important to ask yourself a few questions: will the application be accessible to the general public? Will it be extranet or intranet? How many users are likely to access the application at the same time? Are potential end users ALWAYS online? What would be the impact of the browser crashing in the middle of a session?

Sometimes, having to think about data access concurrency, network bandwidth or security is a useless hurdle that you can avoid just by developing a desktop application.

Automation (launch it regularly and in the background)

What if your application doesn’t need a complex user interface but requires just a few parameters to do its job? What if, on the other hand, it needs to be easily automated and integrated into a batch processing system? When you face such a business context, it’s important to consider the option of a CLI app, because it can then be easier to integrate with other system tools through scripting.

Whenever you hear your customer say words like “data analysis”, “system check” or “automatic synchronization”, you’d better think twice about your web app idea.
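To show what I mean, here is a minimal sketch of such a CLI app in Java; SyncTool and its arguments are made up for the example, but the pattern of reading parameters, doing the work and returning a meaningful exit code is what makes it easy to drop into a cron entry or a batch script.

    public class SyncTool {

        // Exit codes let shell scripts and schedulers react to failures.
        private static final int OK = 0;
        private static final int BAD_USAGE = 1;
        private static final int SYNC_FAILED = 2;

        public static void main(String[] args) {
            if (args.length != 2) {
                System.err.println("Usage: java SyncTool <sourceDir> <targetDir>");
                System.exit(BAD_USAGE);
            }
            try {
                synchronize(args[0], args[1]);
                System.exit(OK);
            } catch (Exception e) {
                System.err.println("Synchronization failed: " + e.getMessage());
                System.exit(SYNC_FAILED);
            }
        }

        // Hypothetical placeholder for the real synchronization logic.
        private static void synchronize(String source, String target) {
            System.out.println("Synchronizing " + source + " to " + target + "...");
        }
    }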

Ergonomics (easy and quick data input and visualization)

At the other end of the data analysis pipeline, there is data input. And the more data there is to input, the higher the risk of rejection of the application by end users. And since end users generally wait a long time to get their hands on the application, this rejection traditionally happens very late in the development process. Combine that with the fact that the people who ask for the application are not the ones using it, and with the very special mindset of developers, and you have every chance in the world to miss your target and see the project fail before it reaches the finish line.

Of course, technology is not the primary solution to this problem. The first thing is to consider end users, consult them, talk with them, even if the business owner doesn’t think it’s useful. Then of course, methodology goes a long way in putting the application into end users’ hands as soon and as often as possible. But as soon as you realize the specificity of what users are expecting, you understand that you need a technology that gives you all the freedom to implement very complex use cases, without forgetting about the conventions and paradigms that people are used to.

Integration (with operating system and external systems)

Web apps have another big drawback in addition to their ergonomic limitations: desktop integration. This issue comes from the security model of the web. Because it’s so easy to access a web application, because you don’t have anything to install and because the application is directly connected, it also creates a huge opportunity for malicious use. That is why web apps usually work in what is called a sandbox: network access is limited to the originating domain (unless specified otherwise), no direct access to the file system is allowed, and there is no native API access to things like system tray icons, drag and drop and so on.

And if your application has to import or export very big files, or notify the user on a regular basis, those limitations can be a killer. There are some technologies now that create some sort of bridge between a runtime plugin in your browser and a runtime app on your machine, but the portability of this bridge across systems and across browsers is sometimes limited.

Productivity (getting things done and adapting fast)

How stable are the business rules you’re asking me to implement? How sure are you about what you expect from this application? If your customer answers “not very” to any of these questions, you might think twice about using that low-level, highly-optimized programming language. Because if it takes you weeks to implement any change or new feature, your application might quickly end up very far away from the business value it was supposed to create.

Fortunately, with the maturity of web application development, there have been a lot of very interesting developments in the area of development productivity lately. Development tools like integrated development environments certainly go a long way in making developers more productive, but when this concern is dealt with at the programming language level, it’s even better.

Maintenance cost (number and quality of resources)

Whatever technologies you plan to use, you definitely must consider the constraint of resources. There are so many technologies out there that it’s impossible for everyone to know all of them. Some of those technologies are very mature and popular, which makes it easier to find people to maintain and evolve your application over the long term. But the more mature they are, the less innovative they often are. So finding the ideal compromise between the benefits of innovation and the cost of resources to maintain your application is very important. It might require some insight and technology watch to anticipate which of these innovative technologies will grow fast and be around for a long time.

And if you really need one of these innovative technologies that is not very popular yet, then don’t forget to include training costs in your plan. Last but not least, don’t forget to consider company-wide policies: IT architecture departments can create substantial impediments on your way, which might lead you to factor in the cost of those impediments.

Continuity (robustness and evolvability)

Beyond people able to maintain it, there is another thing that is very important for the longevity of your application: the intrinsic software quality attributes of the technologies that you use. Testability, decoupling, Domain Specific Language support, portability, internationalization support, integration capabilities with other technologies and platforms, extensibility, modularity. All those characteristics can be very important to consider if your application is supposed to stay around for more than 5 years and evolve with the business at hand.

A lot of money is spent and sometimes wasted re-engineering entire applications just to keep up with current technologies or new business constraints, so much so that choosing robust and evolvable technologies can greatly reduce the long-term ownership cost of the application.

In the final issue, I’ll risk making some predictions about the technologies that seem very important in order to implement applications with these kinds of constraints. But before I do that, do you see other business constraints that might be important to consider before choosing the best tools for the job?

Software Architecture Cheatsheet (Part 1/3)

What I really like about being a software artist is the richness of tools and techniques you have at your disposal. And the more tools you have, the harder it is to use the right ones, and the more tempting it is to limit yourself to a few of them. But to me it’s like analog versus digital DJing: given that your ultimate purpose is to create sounds that make people move, why limit yourself to sync-and-scratch when you can have effects, loops, samples and a virtually unlimited library of tracks?

But I’m sort of missing my point here. Let’s get back to software. I’ve recently come to work on a new project that has been in the works for almost 2 years. For 2 years, wanna-be software developers have tried to solve a very difficult problem with the usual tools. It’s like Maslow said:

When you know how to use a hammer, everything starts to look like a nail.

Well, guess what! Everything is NOT a nail. And I’m gonna try to go over the reasons why in this post.

Software is one big family…

…and each member of the family has its own personality.

The most popular right now is certainly the web application. And by web applications, I mean traditional ones: HTML, CSS, throw in a little bit of Javascript, and maybe generate all of that with some server-side scripting like PHP, Python, JSF or whatever. Heavy load on the server, but very lightweight on the client. The interface is somewhat poor because it relies heavily on technologies that were designed for documents gathered in websites, rather than for full-blown applications, with all the interactivity that implies. Yes, some progress has been made in the past few years with all this AJAX stuff, but bear with me, it all seems like tinkering to me.
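As a reminder of what that looks like on the Java side, here is a minimal servlet sketch; the class and page content are invented for the example, but it shows the heavy-server, lightweight-client split: the whole page is assembled on the server, and the browser merely renders the markup it receives.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.Date;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Every request produces a full HTML page generated by server-side code.
    public class StatusPageServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body>");
            out.println("<h1>Server time: " + new Date() + "</h1>");
            out.println("<p>Reload the page to get a fresh one from the server.</p>");
            out.println("</body></html>");
        }
    }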

The mirror opposite of the lightweight client is certainly the fat client, aka the desktop application. Those applications are based on a composition of graphical widgets the user interacts with, throwing events around and interacting with the operating system. Contrary to their web counterparts, they usually require quite a procedure for deployment and maintenance, because they physically run on the user’s machine and only check in with the server when they need to. But damn they’re fast.
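To picture that widgets-and-events model, here is a minimal Swing sketch; the window and button are of course invented for the example, but they show how everything happens locally on the user’s machine, with no server round-trip involved.

    import java.awt.FlowLayout;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    public class FatClientSketch {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    // A composition of graphical widgets...
                    JFrame frame = new JFrame("Fat client");
                    frame.setLayout(new FlowLayout());
                    final JLabel status = new JLabel("Idle");
                    JButton refresh = new JButton("Refresh");

                    // ...wired together by events, all handled on the client.
                    refresh.addActionListener(new ActionListener() {
                        public void actionPerformed(ActionEvent e) {
                            status.setText("Refreshed locally");
                        }
                    });

                    frame.add(refresh);
                    frame.add(status);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.pack();
                    frame.setVisible(true);
                }
            });
        }
    }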

More recently, a new compromise solution has shown up, offering the best of both worlds: the great ergonomics of desktop clients combined with the ease of deployment and maintenance of web clients. That’s what marketing guys have lovingly called Rich Internet Applications. Now behind this lovely RIA thing, there are a few technologies that make it a lot easier to write rich user interfaces that run within the confines of a web browser. But still, those have limitations compared to their pure desktop brethren: poor integration with the operating system, security constraints all over the place, heavy reliance on server-side business code.

Now if Rich Internet Applications are web applications that solve the ergonomics problem, there is of course the other side of the compromise: desktop applications that solve the deployment and maintainability issues. Those are sometimes called smart clients: local database, offline mode, online synchronization, automatic updates, easy one-click installation.

Even though those seem to complete the family picture, there are a few weird cousins out there that are worth knowing. Command-Line Interface (CLI) applications have a poor user interface or none at all. Their main purpose is to be run on the command line by some geeky system administrator somewhere, or to be part of batch scripts running automatically every night. Very useful for maintenance apps, and for all long-running tasks like data analysis or system checks.

And of course there are mobile applications and all kinds of embedded systems. The user interface simply cannot be rich here, because the display is so small, and the computing resources are so limited. Small memory, small keyboard. The iPhone is certainly changing the landscape here, but you still have to manage memory!

Don’t forget extension apps, like SAP modules, CMS plugins and MS Access applications. Those are applications in their own right. Usually highly specialized, but very fast to develop for simple use cases, to get things done quickly.

Finally, even though they’re less and less popular, there are still many mainframe applications out there. Now I won’t go into much detail here because I’ve never set foot on that ground. But it certainly doesn’t hurt to remember that they exist.

Now there certainly are a few other kinds of software applications out there that I didn’t think of, but you get my point. There are a lot of different tools out there, and very different techniques to use those tools in order to create software solutions to very different problems. And what makes those problems so different, you might ask? Well, it’s all about the business context. In the second part of this series, I’ll focus on how the characteristics of a business environment can influence the tools you choose to implement the solution to a problem.

But before we get there, do you see other kinds of applications that I forgot to mention?

Flex Guys Are Going Too Far

There’s been a lot of fuss lately in the Flex community around the way the Flex SDK Open Source efforts are handled by Adobe. And let’s just say things have gotten a little messy around here. It’s funny because I’ve just watched the last episode of Battlestar Galactica, and without giving away any spoilers, let’s just say that people tend to get things wrong when it comes to rebellion.

I mean, sure, everything is not perfect. In particular, Adobe doesn’t seem to be listening to votes on bugs.adobe.com as much as I had hoped. But when I read things like “the war of open sourcing the Flex framework”… I mean, come on, guys!

When Simeon Bateman started to take matters into his own hands and talked about forking the Flex SDK, I already thought he was going too far, because I think Adobe is really trying here, and I respect their effort considering what’s at stake and the size of the boat they’re steering. And what surprised me even more is that Adobe actually responded to the feedback from the community by organizing a big open live discussion with the whole community. And then they went even further by opening up their Open Iteration Kick-Off meeting. Hey, that’s an amazing effort! Can you imagine what it takes for a big “old school” company like Adobe to do that?

But as in every rebellion, it seems that people just can’t be satisfied now. They’re forming committees, gathering grievances; it looks more and more like a French revolution to me. And let’s just say that’s part of the reason I don’t particularly like the country where I was born. But that’s another topic.

The problem with that kind of mutiny is that you always end up with self-proclaimed representatives who are honestly convinced they’re speaking up for the greater good, but who always forget about crowd control, about how people can forget everything about reason and lucidity when you inject the “it’s unfair” feeling into them.

I say thank you, Adobe, for Open Sourcing the Flex SDK. I say, let’s not forget that you were not forced to do so, that you could have kept it free as in “free beer”, as it was with Flex 2. I say, let’s not forget that Open Source licenses still guarantee intellectual property and ownership to the people who actually created all this code for us. And I say yes, they made the decision to move to Open Source, and they should take the good side as well as the responsibility side of it, and really involve the community. But I don’t think that threatening to fork, or getting into a war with them, is going to make things any better… unless your only goal is to cover your butt because your ultimate purpose is actually to fork it.

I think we should all calm down here, take a deep breath, show some acknowledgement of the efforts they’ve made so far, and insist on the fact that the community is here to help and work WITH them, not to undermine their efforts or to “start working on Flex 5 as a community now and let them join us when they are ready”. UserVoice, committees, those can be good ideas, but we have to show some good will and honesty here, because nothing is owed to us and because it’s not in our best interest to get people nervous and vindictive now.

We don’t need an online IDE!

I’m on vacation this week, and it’s great. I usually try to take at least one week off between missions because it’s the best time to do it. Nothing to worry about anymore from the one before, no knowledge of the one ahead; my head is free to dream and wander around, and I have plenty of time to move forward on personal projects.

Now beyond my work on betRway.com, I’ve been thinking a lot today about the idea of an online development environment. On my last project, I’ve seen a lot of the possibilities AND limitations of current RIA technology, and more specifically the most advanced one IMO, i.e. Flex. Combine that with some of the modularity offered by OSGi on the server side, the productivity brought by things like Grails, and more importantly the promise of unification of all of those technologies under SpringSource’s sponsorship… and it’s starting to become more and more interesting.

So I googled “online IDE”, and I found this great article on DZone, which is a few months old but gives a lot of links to some of the services that are already offering both niche and generic online environments. I’ve browsed through all of these and I have a bad feeling of déjà vu. It’s like those mobile web sites that try to reproduce a trimmed-down version of their web counterpart: it doesn’t really work, and worse, it doesn’t even use all the potential of mobile platforms. It’s like we’re trying to copy our desktop IDE onto the web.

But when you really think about it for a moment, the reason why we need version control systems, for example, is because everyone is working on their own, and we need to synchronize at some point. But this constraint almost disappears if we’re all working online. And online development can have impacts of similar magnitude on things like continuous integration, deployment and monitoring. Some of these tasks don’t even make sense anymore. And how about several programmers working on the same artifact at the same time? And think about the computing power that could be available for code generation or compilation…

The way I see it, even the programming languages that we use today are not designed to work like that. Think of Java packages correlated with file system directories. Think of all these unstructured text files that our IDEs have to parse and index to make some sense out of them and allow us to navigate through them: what if text files are replaced by databases at some point?

I’m just thinking out loud here, but I’m wondering whether we shouldn’t forget about everything we know in terms of programming paradigms, and get back to the objective: producing working software collaboratively. What would be the best and simplest way to do that, knowing that we are starting to have all the tools we will need in order to move our workspace into the cloud?

As far as I’m concerned, I already know what I would like in such an environment:

  • More graphical programming, especially for high-level design, because visualizing is still the most efficient way to create and design IMO.
  • No more boilerplate tasks like “check out”, “check in”, “run tests” and things like that. Just “share my code in the public workspace” and “test on save”.
  • The ultimate reproducible build environment: everyone is using the very same frameworks and libraries at any time. No more environment variables or shell idiosyncrasies.
  • Smart client stuff, with offline mode, auto-synchronization and conflict resolution when back online.
  • Methodology directly integrated into the environment, not some small window on the real thing. Issue tracking, user stories, all of that at your fingertips.
  • Direct communication with your remote peers through advanced tools like whiteboards, webcams and code reviews.

In fact, we don’t need an online IDE, not in the sense we think of an IDE today on our desktops. We need more than that. We need a collaborative environment that uses the full potential of the cloud to help us produce better software more quickly.

What do you think? What kind of high-level functionality would you like to see in such an environment? How do you picture it?

2008 flashback

That’s it. Holidays are here, this year is almost over, time slows down for a few days so that we can look back, then enjoy and start again.

This post is probably not of great use to anyone but me, but it’s a blog after all; sometimes I can use it as a personal journal too, like all those teenagers.

This year was actually very rich for me. I got to discover a few very interesting technologies, starting with OSGi and its ability to break up big monolithic enterprise applications. I really dived into Flex and how it can integrate with Java, which culminated in this tutorial, which was even republished on the Adobe Developer Connection. And finally, in my quest to discover better ways to develop applications, I came across JetBrains MPS. When I realized it was still far from ready for prime time, I decided to keep it on my radar but focus on a more realistic alternative in the shorter term: Groovy/Grails. Now I’ve widened my technological scope beyond just JEE and AndroMDA, and I feel quite ready for the challenges ahead.

More importantly, this year will have been a great initiatory journey. Last year at about the same period, I decided I wanted to turn one of my ideas into a company, to create my own startup. The road was bumpy; my colleagues at Axen could feel it, and the friends who joined me on this journey could feel it too. I called a lot of things into question: my job, my ambitions, my strengths and weaknesses. And the project failed… so far! But they say there’s always light at the end of the tunnel.

I had the idea, I had the skills, I had the will, the time and the energy, and yet there was this big barrier to entry, this thing that seems so important to everybody that nations and societies, people and governments are falling apart just because of it: money. I’ve always felt bad about money, like it was the poisonous blood flowing in the veins of an otherwise healthy man, eating him from the inside out, like something that is there to transport what makes us live, but that we’ve come to confuse with life itself. And then it happened: the crisis came, the disease hit us all. We’re still far from the bottom, since we still believe that the best cure for poisonous blood is more blood. But I don’t want to wait. My failed project and the recent course of events have made me realize something: THERE’S GOTTA BE A BETTER WAY!

2009 is full of promises. I have already started to understand many things about myself, about why I failed on this project, about how I live, how I can improve. I moved to a new place too, more comfortable, more like me. And now I have this new project, this game changer, this new way of thinking about how creative people can turn ideas into something real. 2009 is going to be fan… wait for it… tastic!

Adobe MAX Europe – Day 4

Much less to say about today. First because it was a full-day lab session, second because the whole thing was just fantastic. The content, the pedagogy, the examples, the trainer: everything was perfect. It allowed me to discover all the features specific to the AIR runtime and its APIs, and we didn’t spend too much time on Flex details. AIR is definitely an interesting technology. The desktop integration offers all the features that I need most, and I’m seriously thinking about migrating the You And The World administration to the desktop, since it will remove so many barriers that we have right now, like HTML rendering, client-side image cropping and big data synchronization.

Now that the event is over, it’s time for global feedback:

  • The organization was just great: it’s easy to understand why the entrance fee is so high, given the quality of the food, the number of computers for labs, the number of teaching assistants and staff people, the Wiis, etc. One thing though: I really missed drinks in fridges, where you don’t have to wait in line for a glass of water.
  • The venue was just awesome, with enough room for everyone, plenty of power outlets to recharge our small iPhone batteries, Wiis everywhere, couches, and toilets (not just one spot where you have to wait in a long line if you don’t choose your time, like at Metropolis). Two negative aspects though: mere chairs hurt once you’ve gotten used to theatre seats, and Fiera Congressi Milano is a little bit lost in the middle of nowhere: not a lot of restaurants nearby.
  • The wireless network was good enough most of the time, except in between sessions, when I managed to get an IP address but couldn’t get any bandwidth. It was good enough for email checking and a few commits to Subversion though.
  • The content covered was great and gave me a sense of cohesion and innovation on the whole Adobe product line-up. And feedback forms for each session were definitely a good idea.
  • The hotel I stayed in was just awesome. I still don’t understand why it’s called a mini-hotel, as the service was really great and the room was very cool. I missed having more French-speaking channels, but I had some content on my laptop to compensate. And 8€ for 24 hours of Internet connection is not too expensive.
  • I didn’t see much of Milan, but the taxis you can’t hail in the street outside of a taxi stand, the plane that doesn’t pick you up directly at the terminal, and the old-school trams gave me the impression of a not-so-modern city.

Overall, this first Adobe MAX Europe was really a wonderful experience for me, and I really have to thank my manager for sending me there. Now, one day of roadmap design on You And The World, and next week, on to Devoxx. I just love technology!