Category Archives: Geek Culture

Let’s Solve Problems

Let me paraphrase Clay Shirky here. Let’s talk about problem solving. When it comes to big problems, we assume that the ones really worth solving can hardly be solved by one person alone. We need to join the forces, energy, knowledge and skills of many people, and coordinate their efforts towards a collective solution. And there’s nothing wrong with that logic.

Now the question is, how do you coordinate such a collective effort when people live their own separate lives? Well, the traditional answer is to create an institution: a common ground where everyone involved in the problem-solving effort can be at the same place at the same time, so that communication costs are reduced to a minimum. That’s what companies, governments and associations are made for.

The problem is that those institutions tend to create three major side effects. First, since people are used to living their lives in a very individual way, coordination doesn’t come naturally to them. Hence you need to hire more people who don’t participate directly in the problem-solving effort, but merely help the others come to a consistent and viable solution. Second, because an institution is like some sort of engine, you don’t just need fuel to run it and lubricant to compensate for the friction, you also need a structure to hold it, a vehicle to move it forward. And that is administration, taxes, financial stuff. Finally, in addition to those management and administration byproducts, maybe the most important side effect is goal shift: in some sort of fractal way, most institutions eventually shift their goal from whatever it was supposed to be at the beginning to survival. It’s as if the institution became a being of its own, trying to survive at all costs, and making it necessary to group institutions into even bigger institutions.

The consequences of these evolutions are many, but most of them revolve around inefficiency. Since you literally move people from wherever they’re living to the institution, and since it’s impossible to be in several places at the same time (right?), it’s very hard to belong to several institutions. Hence institutions are highly exclusive, which means they withhold their people from other institutions that might need their help, and keep them for themselves even when they don’t really need them. This ends in a massive waste of force, energy, knowledge and skills. Another aspect of the problem is that management, administration and politics (the institutional manifestation of the survival instinct) create such overhead that some choices have to be made about the scope of the problem the institution was created to solve. In other words, because of this overhead, most of the time it is just unrealistic to solve all of the problem, so you end up spending the 20% of the resources that are enough to solve 80% of the problem. And of course the next question is: can a problem be considered solved when it’s only 80% solved? Maybe some problems more than others. Maybe we only think it’s sufficient because we were told it’s enough. What if climate change, financial crises and all those big problems we’re facing today were the byproduct of the 20% we left out of all our solutions?

In his presentation, Clay Shirky rightly observes that it might be time to reevaluate the root of the institutional response to problem solving, namely coordination costs. Now that we have the Internet, VoIP, cellular networks and so on, communication costs have never been so low. In this era, do we still need to be at the same place at the same time in order to join forces? Of course not! Those changes have already influenced our professional world through video conferencing and corporate collaboration systems like email. But what if this is just the beginning of a massive transition, comparable to the one initiated by the printing press, or the one started by the mechanization of agriculture? Well, Clay’s vision (which I totally share) is that the professional world is slowly moving away from institutions and towards more collaboration for problem solving. No need for people to be at the same place at the same time anymore, so no more administration overhead. And since the beast is only virtual, there is less chance of it turning into a monster that wants to survive at the expense of its initial purpose. And since such collaborative groups would no longer be exclusive, maybe we could recover those wasted resources and actually deliver the 20% of the solution we were missing.

Now of course, there’s no miracle, only evolution. And if we have to recruit managers today to help people coordinate their efforts, it’s very likely that we will need to find ways to help people collaborate more efficiently tomorrow. My assumption is that this issue is part of the 20% we left out yesterday, because it was cheaper to teach a few managers how to coordinate efforts for others than to actually bring this knowledge to every single individual. What if even that became possible tomorrow? Instead of creating specialized institutions for teaching some people how to solve problems, others how to coordinate problem solvers, and yet others how to run the problem-solving structure, what if we educated people through the same collaborative groups, whose purpose would be to expand people’s knowledge and skills and teach them everything they need to gradually evolve towards solving concrete problems?

That’s what I thought when I read Bruce Eckel’s article about Edupunk (and other stuff, but this part struck me the most). Now I don’t know how or when, but this could very well become a major breakthrough in my personal and professional life somehow. So if you agree with me, what do you say we join forces and collaborate on that? Let’s solve the problem of expanding our global knowledge and skills in a collaborative way.

About Inertia in Corporate Software

When you’re passionate about software, it can get really frustrating to see how much corporate customers sometimes suffer from software inertia. You talk to them about all those wonderful technologies that you’ve come to master, all those innovative solutions you can offer… and they tell you that they’re looking for AS/400 developers, or people with 10+ years of experience with EJBs. As a consulting company, it can be tempting to give in to that kind of request. As a software consultant, I always feel like it’s my duty to better inform the person I’m talking to, and to push back a little harder.

Why so much inertia?

There are a lot of factors at play in this situation, in proportions that vary with the company and its history. An important one is certainly vendor lock-in. Old-school software vendors like Microsoft, IBM, Oracle, Serena and others have become masters at selling big, expensive licenses for their tools. Their strategy is not based on the quality of what they’re selling, but on things like their long-standing reputation, the belief that when the price is high it cannot be bad, the consistency principle (you never question a decision that required a big sacrifice from you) and so-called integrated offers: operating system + development environment, reporting + collaboration, tools + methodology. And they always add something exclusive to the mix so that switching to another vendor is as hard as it can be.

Very often combined with vendor lock-in is the belief that coherence lies in unity: that if you go all-in on Microsoft or all-in on IBM, you’ll never have to make a big decision again. Whenever there is a new need in your software infrastructure, you’ll just take whatever your “preferred partner” has to offer for it, and you end up choosing tools not because they are the best at what they do, but only because they integrate nicely with what you already have.

Another dangerous quest is the search for the silver-bullet technology, the tool/architecture/methodology that will solve all of your problems today and tomorrow. Once again, big software vendors have traditionally played an important role in sustaining this myth, abusing buzzwords and repackaging old offers in shiny new boxes. And what’s even worse is that very interesting principles have been completely perverted by this quest for the one technology that will solve all problems. Unfortunately, like most myths, this one was born of laziness, and experience shows that it’s very important to characterize each problem in order to always choose the right tool for the job.

Another motivation for sticking with your current infrastructure is to capitalize on your expertise. With time, your people start to master your infrastructure, know its weaknesses, and learn how to work around them. All this experience is definitely precious, and for me it is the first good reason to maintain a certain level of inertia. That, and the fact that all this internal knowledge can save you some training budget, as people can train one another and you can build your own knowledge base.

Last but not least, there is a very important factor that is not always admitted explicitly, but is usually very present in big old companies: fear of chaos. IT in general and software in particular are very dynamic fields, and this trend has been accentuated by the Internet over the past decade. Combine that with the fact that most young IT professionals have a tendency to play with the newest toys on the market, and you quickly notice that things can get a little out of control when people incorporate immature technologies and then have to backtrack once they realize the cost of this immaturity in terms of maintainability, durability, integration, training and so on. This fear is perfectly understandable, but as always, it’s the extreme response to it that can have dangerous effects.

What are the problems?

The first issue that comes to mind is the lack of agility (Titanic effect). The older the tools are, the more likely it is that they were designed at a time when the business conditions didn’t evolve as fast as they do today. Hence it becomes harder and harder to adapt to those changing business conditions, to collaborate with partners, to merge infrastructures, to expand business models and so on.

Behind the lack of agility, there is a more general symptom that your software infrastructure is bloated: people lose an awful lot of time solving very old problems that have long been solved by newer technologies. Productivity is key, both for the motivation of your software team and for your ability to quickly implement new requirements. But mature technologies keep getting older, and as time goes by, it becomes exponentially harder for them to integrate new features, to the point where this evolution is just not possible anymore, forcing you into the Big Switch.

It’s a well-known disease in corporate IT: instead of evolving continuously, inertia often leads companies to evolve in eras, the transition from one era to the next often marked by big rewrites. The cost of these efforts is usually extremely high, not only because of the resources you need to allocate to the rewrite, but also because of the time during which you won’t be able to make your features evolve. Some of this cost might be compensated by the fact that you take the opportunity to improve performance or add new features. But it can also be amplified by bad business decisions, like offshoring the port to the lowest bidder or something like that.

Another important issue is the growing cost of maintaining legacy systems. Even when you realize that rewriting everything at each transition is risky, maintaining legacy software becomes more and more expensive as it gets harder to find people who know your technologies. Combine that with the point about productivity above, and you understand that it takes more time for more expensive people to maintain your system.

Finally, as I said before, people who are passionate about technology tend to like playing with new toys. Even if leaving them completely free to do so can be very dangerous for you as a decision maker, forcing them to use only old technologies that they will probably never need anywhere else can also frustrate them to the point where you will lose their intelligence and insight. That’s when the cost of turnover hits you in the face.

So what can I do?

That’s the question you might ask yourself if you have some decision power in a company with too much software inertia and too many of the issues mentioned above. Here are a few leads that can help you improve the situation. Most of them are based on the generic principle that you have to find the right balance between capitalizing on your existing infrastructure and making it evolve continuously.

The first key for me is technology watch. RSS feeds and conferences are a great way to keep in touch with what’s happening outside of your organization, and even with what’s coming in the next few years. When you’re in the dark and you don’t have a clue about what’s around you, it’s always safer not to move at all. But since you need to move, you must also learn more about your industry and what others are using, even if you don’t adopt all of it.

The next step is to be honest with yourself about the limitations of your current infrastructure. As I said before, a lot of software vendors rely on the fact that you won’t question your own decisions when they were very expensive. So it takes some courage to be lucid about the weaknesses of your current system, and monitoring those weaknesses over time is crucial, either to find local solutions to those limitations or to decide when it’s time to replace a technology that is too limited. This is especially hard to do in bureaucratic environments where decisions tend to go from the top down, and feedback from the bottom is rarely listened to. So my advice: listen more to the people who use the technologies you choose on a day-to-day basis, and you might realize that your situation is not as perfect as you currently think it is (ostrich-style).

When you hear about interesting new stuff, don’t hesitate to try it out, to allocate some resources to develop a proof of concept, or to go through a getting-started tutorial. Very important: do not base your opinion solely on rumors, analyst reports and the opinions of others. Don’t forget that just as the silver bullet is a myth, the technology that is inherently and universally bad does not exist either. So what might be the right tool for someone else’s job might not be right for yours, and vice versa. That’s why forming your own opinion is very important, even if it will never be comprehensive.

Another important thing to understand is that innovation starts in opposition. Opposition against the status quo, against existing technologies, which often leads innovators to restart from scratch in very original ways, making it harder to integrate those innovations with existing technologies. But if those are successful, there is very often a second phase in which those innovations are incorporated into existing technologies, or at least some work is done to ease their integration with mature tools. For almost every new technique, version 1 is a proof-of-concept, version 2 turns into a more complete solution, version 3 gets integrated with existing technologies. So when it gets to that version 3, it’s always interesting to try it out.

Next, let’s talk about Open Source. It’s incredible to see all the limiting beliefs that circulate around Open Source. Beyond the fact that people tend to associate free with low quality (a belief that is of course sustained by big software vendors, even when they benefit from Open Source themselves), a lot of IT decision makers also think that if it’s Open Source, they won’t have any support for it. My answer is that if it really doesn’t have any support, it’s probably not worth it, but make sure there really is no support, because one of the most successful business models around Open Source is corporate support. Hence most interesting Open Source technologies are backed by at least one company offering high-grade professional support. And maybe the most unfair argument is that Open Source is dangerous because it can disappear overnight. When you understand how most Open Source licenses work, you see that they simply forbid project owners from hiding their code, and beyond that, communities are often much more reliable than software vendors, who can go bankrupt in no time. I’m not saying that Open Source software is more reliable and durable than commercial software, but it is certainly not less so. Now beyond this reliability and durability issue, by definition Open Source projects don’t rely on vendor lock-in, but on the quality and usefulness of their technology, which leaves you with more freedom to move from one to another and to combine technologies from different providers. Now I just want to qualify what I just said: Open Source can definitely help you get rid of your inertia more than you think, but it can also create new problems like free lock-in, in which you end up using a tool not because it’s the best for the job, but simply because it’s free. Fortunately for us, not all commercial software vendors are as rotten as the big old ones.

Architecture can also play an important role in gaining more agility with your software infrastructure. Try to avoid big monolithic solutions and favor modular architectures allowing you to change parts of your systems without rewriting everything. Virtualization and componentization are definitely going to help you in that endeavor, as well as getting your software architects more experienced with composing their own toolkits for specific problems, instead of forcing them to always apply the same bloated infrastructure.

Finally, important enemies of agility are over-confidence and self-sufficiency. Of course I’m preaching for my own parish here, but I think that bringing new blood to your teams in the form of external consultants can really help you make your software technologies evolve continuously, especially when you don’t just use them to fill seats but instead empower them to help you get more agile… and avoid the iceberg (shameless plug inside).

How Open Source can be a Game Changer

A few years back, I read a very interesting article (that I can’t find anymore) by John Newton, the co-founder of Alfresco, explaining how Open Source changed their strategy. Of course, one of the main benefits of having an Open Source version of your product is to expand your market by making it more accessible, thus creating opportunities for high-value services such as training, customization and so on. But beyond this shift in “how do we make money?”, he explained that it totally changed the nature of their salesforce. When you sell that kind of enterprise system, the best way to get potential customers to know and see your product is to invest a whole lot of money in marketing and sales. And you end up with a sales cycle of several months between the first prospecting call and the actual sale… when it goes through. No wonder license fees are so expensive for that kind of product, making it even harder to sell, increasing the length of the sales cycle, and so the vicious circle goes.

Having at least an Open Source version of your product completely changes that, because the people who don’t hold the checkbook but will eventually use your product can try it on their own. Maybe they’re working on personal projects, maybe even Open Source projects, and they can install and use your product for free. Then if they like it, they are more likely to go see their boss at work and recommend your product. In other words, you can replace a big chunk of your very expensive salesforce with happy end users and their recommendations, for free. Hence not only does Open Source change how you make money, it also lets you save money where it’s not really necessary and focus a lot more on your core business: building a great product.

But the way I see it, there’s another very important change with Open Source. As I said before, when your business model is based only on license fees, you focus most of your attention on your salesforce, which costs a lot of money, which makes your license fees so high that they very likely represent a big investment for your customers. So your salesmen end up targeting prospects at high levels of management. The problem is that the guys who hold the checkbook also have very different concerns and priorities from the people who will eventually use your product. Let’s say you are building a software project tool suite with things like issue tracking, source control and so on. You try to sell your product to companies that do in-house software development, and the people who will eventually use your software are developers. But you don’t talk to those guys directly because they have almost no decision power, especially when it comes to spending several hundred thousand euros on software tools. So you talk to their Program Manager or VP of Software Engineering or CTO of some sort. And all of a sudden, you realize that those people have concerns like keeping things under control whatever happens, minimizing risk (whatever that means), reporting, making sure that no developer does anything on their own, and so on. So of course, you will implement features to satisfy those needs, because you want the big guys to be happy.

Now let’s go back to the Open Source alternative. If you hope that developers will use your software on their own projects, find it cool, and then go back to their boss and say “we should have that”, you’re not talking to the same people anymore. It really straightens things out, because you’re back to considering end users’ concerns first. Developers want tools that don’t get in their way, that help them save time rather than waste it, that are integrated into their development environment, and so on. My point is that an Open Source model doesn’t only change where you make money, or how you balance your own investments. It can also change your whole product by changing whom you’re talking to.

Now of course, I’m opposing two strategies here: the traditional top-down approach that forces you to focus on upper-management priorities, versus the Open Source bottom-up strategy in which end users are the ones you have to convince. And I know that this kind of binary opposition is likely to create controversy, because project managers will read this article and think “but we need reporting, we need control, we need power!”. Now of course you know that I don’t agree with this obsession for power and control, as it is associated with the software engineering mirage. It’s as if the big guys couldn’t stand the IT guys anymore, with their strange language and culture and so on. So they irrationally tried to isolate us from them with layers and layers of project management and control. But they lost a great deal of intelligence in the process, and they’ve created a whole generation of frustrated developers who just execute what they’ve been asked to, even when decisions are made by people who don’t understand a damn thing about what software is and how powerful it can really be. And of course, big old-school software vendors have created opportunities out of this mess, and we’re all forced to work with their products now. But hopefully Open Source is already changing that: it is part of a movement of IT empowerment, together with Agile methodologies, that will lead to more intelligent uses of technology and fill the gap between business and IT.

Now to conclude, I’m not saying that management is totally useless. YES, we have a strange culture and a weird way of looking at things. So YES, we need people to help us interface with business people and teach us how to do it ourselves. NO, the solution is certainly not to reduce our field of action to the minimum, but YES, we need people who understand both the power that we have AND the big picture of what is necessary, and who are able to make them coincide. And YES, this mission might require additional features in the tools we’re using, for things like reporting, prioritization and so on. But those features should never conflict with our concerns as end users; they should never get in the way of building better software. The good news is that there are very clever ways to do just that: have a look at Mylyn, or FogBugz. And of course, I myself am thinking about bringing my own contribution. So stay tuned…

Now I would love to try an experiment and use comments on this post as an informal survey. So if you are a software developer, what tools are you using at work for things like source control, issue tracking, documentation management, and so on? Do you like those tools and why?

Software Engineering Is NOT Dead!

There have been a few reactions lately to the Tom DeMarco article. To be honest with you, I didn’t know the guy, but apparently he’s quite respected. And the reason I didn’t know him might be related to the fact that he wrote a software-related book entitled “Controlling Software Projects: Management, Measurement and Estimation”. In other words, he’s the one (OK, one of the guys) responsible for all those project managers asking us for sharp estimates, negotiating them down with us, and committing to them without our agreement, leaving us with no choice but to produce crappy software and/or miss the deadlines. To me, Tom DeMarco looks a lot like that Dr. Winston D. Royce dude, who first described the theory behind Waterfall methodologies and forgot to put the following sentence in bold: “the implementation described above is risky and invites failure”. No kidding!

My reaction to what it took DeMarco 40 years to realize is pretty much this: software engineering is not dead, it has never lived! There has been a lot of controversy about that, some people arguing it’s more of an art, others saying it’s a science, yet others that it’s more like craftsmanship. At the end of the day, the mere fact that there is a controversy shows that it doesn’t fit in the hole we’re trying to stick it into. And yes, this blog is called Software Artist, but I have to tell you that it’s more of a reaction to those who are fiercely trying to “industrialize IT processes”. Oh man, that makes me angry! In the end, I have to say that the more I think about it, the more I like the idea of craftsmanship.

Because craftsmanship is all about finding the right balance between the techniques that you’ve learnt and the intuition that you have about what you’re doing. And finding that balance requires a lot of practice. Note that I didn’t say “time”, I said “practice”. I also like it because for any craftsman, using the right tools for the job is fundamental, and once again, note that I said “the right tools”, not “the tools that everyone else is using here” or “the tools we invested a lot of money in and need to recoup”. And finally I like it because when a joiner tells you that he doesn’t know exactly when this wardrobe will be finished, you don’t bother him about it, because the fact that it corresponds exactly to what you need, and the fact that it will last a very long time, largely make up for that little uncertainty. And if he tells you he’s going to need another 4 days, you don’t negotiate, because if you know how to do it faster, why in the world didn’t you do it yourself in the first place? Or why didn’t you go and buy one at IKEA?

Now after re-reading the above paragraphs, I realize that my statements might seem a little harsh and extremist (as usual). But I don’t want to sound negative here. So if you manage software people and you feel disturbed by this prospect of losing control, let me just tell you this. Software can be incredibly powerful; it can really create tremendous value in a very flexible way. And I’m not just talking about saving money. I’m talking about the possibility of developing your business and creating value from scratch. But the more value you expect to create, the more you have to ask yourself whether you absolutely need to control every aspect of the software creation process. Remember, you’ve tried that before, and it most probably didn’t work as expected. Fortunately for you, there’s a different approach. Empower your software team. Don’t lead them; make their ride smoother. Don’t make them work FOR you; work WITH them. Don’t treat them as standard office workers. Instead, hire the best craftsmen you can find and trust them with your project. Make it theirs as well as yours. Give up on this idea of controlling everything and you’ll be surprised at the result.

Software engineering is not dead, because it has never existed. Hence there is no need to mourn it. Let’s just roll up our sleeves because we have a few palaces to build for the years to come.

Why Do I Hate Eclipse?

As a consultant, I’m very often forced to use Eclipse as a development environment, and every time I do, it’s such a pain that I can’t help complaining about how poor this thing is. And every time I do, most of my teammates, who have been brainwashed by the monopolistic propaganda of Eclipse, just keep asking me what’s wrong with it. And sometimes it’s hard to explain, because it’s really a matter of user experience. And every time I find a specific example, I get answers like “yeah, but that’s just one thing”, or “I’ve never had that, you’re just unlucky”, or “this is just because you’re not used to the Eclipse way of doing things”, or even the worst one: “maybe, but it’s free!”. Since when is “free” a feature?

Right now, I’m reading the SpringSource dm Server getting started guide, and I was very surprised to read that the SpringSource guys, who aren’t exactly stupid, and who seem very experienced with Eclipse itself since they have based all their development tools on it (Spring IDE, STS, etc.), talk about what they call the “Eclipse Dance”. I didn’t know the expression, but I’ve definitely danced it more than once: every now and then, Eclipse views get all mixed up, some views indicate errors in a file while other views on the same file say everything is OK. Or you get a message saying that it cannot find a class whose source is right in front of your eyes. Or, like right now, I have two Maven projects at the same level referencing the same parent POM, and one of the projects says it can’t find the parent artifact, whereas the other one finds it without a problem. And when that kind of thing happens, the only thing to do is to try some combination of closing all projects and reopening them, cleaning all projects to force a full rebuild, or even restarting the whole Eclipse workbench. WTF?

How can SpringSource support such a poorly designed environment while admitting such unacceptable bugs? Oh yeah right! It’s free, so everybody uses it. This is really the perfect example of when Open Source can also kill innovation instead of fostering it. It’s free so everybody uses it, including corporate customers, so all tool vendors base their tools on it (Spring IDE, Flex Builder, Weblogic Portal Workshop, etc.), so even more people use it (even if they have better tools in their bag), and we’re screwed.

I would love for framework vendors to focus first on command-line integration with tools like Maven and Ant, and then provide IDE integration for a few popular environments, including Eclipse, NetBeans, and my personal choice, IntelliJ IDEA. This would reinforce competition between IDE vendors instead of killing it, while considerably lowering the barrier to entry to their frameworks. Right now, SpringSource is lucky I really need to understand more about dm Server, because if it had been only for curiosity’s sake, I would have given up already, just because of the tight integration with this crappy Eclipse thing and all the pain I go through to make it work consistently.

So if you’ve already been in that situation, and you start to think there’s gotta be a better way, try out IntelliJ IDEA.

PS: I’m not related to JetBrains in any way. I just happen to be a very happy customer of theirs, happy to pay a few hundred bucks every year to get their latest version, because as a JetBrains guy said last year at Devoxx, “IntelliJ IDEA is the only IDE worth paying for.”

Apple MacBook Touch On Its Way?

Maybe I’m reading too much into things, but I’m currently going through the iPhone documentation about Core Data (which is really great stuff, by the way), and I came across this comment:

The framework is equally useful as the basis of a vector graphics application such as Sketch or a presentation application such as Keynote.

Now I know quite a lot of Mac applications, but I’ve never heard of anything called Sketch! Especially not in Apple’s portfolio, and it would only make sense for them to use their own apps as examples in their documentation.

Combine that with the Apple tablet rumor that’s been going around for a while, and it makes even more sense to see a Sketch app added to iWork. Don’t you see things coming together too?

OK, I just googled a little further and it seems that Sketch has been a demo app for Xcode for quite a while. But still, wouldn’t that be great?

Why The H*** Would I Open Source My Project?

A friend of mine asked me this question today. And even though I’m NOT one of those “free software” zealots who think there is nothing valuable beyond Open Source, I found it interesting to think about a few arguments. But let’s be clear right away:

  • I’m not opposing Open Source and commercial software, so I focused on the reasons to open source a project, not the reasons NOT TO sell it
  • Even though I do believe in them, I won’t mention philosophical reasons, only reasons that could convince my boss (and my boss doesn’t give a damn about “users should be free”)

#1 To build an indirect but wider business model

Most people think of software applications as normal products. Exactly like the RIAA majors would like to reduce music to that status. But common usage has proven them wrong, and for a very simple reason: a traditional product is something that you no longer have once you’ve sold it to somebody. The main part of the unit price for each instance of such a product covers raw materials and production costs. Software is fundamentally different, so the business model has to be adapted. Many big companies have already understood that: instead of selling a few instances to those who can afford them, you give your product to everybody for free and then charge for additional services: training, customization, hosting. Of course, this implies that your product can generate such satellite services, that it can be of some interest to professional users. The general idea is: the wider your user base, the more services you can sell, which can potentially be much more profitable.

#2 More hands and brains

Sometimes, a software application becomes really interesting once it has enough features to be adapted to as many different situations as possible. But on the other hand, it’s very important to focus. So if your application is modular, like a CMS for example, the bigger the developer community, the more feature-rich your application, the wider your user base… and back to #1.

#3 Believe it or not, some of what users give back is more important than money

Feedback, ideas and feature requests are the best way to make your product evolve in the right direction and make sure it is as useful as it can be. But gathering feedback for something you sell is more difficult: “why would I contribute to a product I’ve already paid for, all the more so as I have nothing to gain from contributing?” On the contrary, Open Source plays on what is called “the reciprocity principle”: people who get something for free are more willing to “pay for it” in other ways. The general idea is simple: give it to one user for free, he will suggest one excellent feature that will convince 10 more users to use it, 5 of whom will buy satellite services… and back to #1.

#4 Dual licensing

Open Source doesn’t mean that anyone can do anything with your code. Open Source projects are protected by a license. The most popular Open Source license out there is the GPLv3. It might be controversial, but it offers one major protection: anyone who reuses or integrates your code MUST open source their project under the GPLv3 too. This has a very important corollary: no one is allowed to sell your product instead of you. In practice, this is just a legal protection and some big companies violate it every day. But if it happens to you, you can sue them!

But since you understand that some users might be interested in reusing or embedding your product for commercial purposes, it makes a lot of sense to offer an alternative: a commercial license that people will have to pay for. And all of a sudden, your free users become your best salesmen: they use your product for free at home, they suggest it at work, and their boss buys a commercial license… and satellite services. You win!

#5 Do it first, before someone else does

If you are not convinced by the four reasons above, someone else might be. And even if you don’t release your code, unless you have protected your application with a patent (software patents don’t exist here in Europe since they’re pure nonsense, but that’s another debate :oP), anyone can take your idea, do it better, and open source it, pulling the rug out from under your feet. On the other hand, if you open source your project, people will have more interest in contributing to it than in creating their own version. And if they still do, it means your execution of the idea was really bad anyway, so…

Now what do YOU think? Do you see other reasons why Open Source might be a good option for your next big idea? On the contrary, what reasons do you see NOT to Open Source a project?

iPhone ORM Just Rocks

I have been pretty quiet lately, mostly for 2 reasons:

  1. I’ve been busy with betRway.com
  2. I’ve been playing a lot with the iPhone SDK.

I didn’t know anything about Objective-C, so going back to the C world has been a challenging experience, but I’m starting to find it very exciting. After reading Cocoa Fundamentals and the iPhone Application Programming Guide (very boring stuff, but hardly avoidable), and after going through “Your First iPhone Application“, I found myself pretty frustrated because I lacked too much knowledge to jump into the sample projects.

Fortunately, I stumbled upon this great blog with plenty of very complete and up-to-date tutorials. In particular, there is this TodoList example that taught me how to do most of the things I couldn’t figure out by myself just by reading the code of the Books sample.

But at the end of that tutorial, I realized that a very important part of my code was boilerplate: ugly ANSI C code to set up database stuff, like in the old days of JDBC. Google was my best friend and led me to SQLitePersistentObjects. And boy, it works great! And it makes the code so much simpler. Judge for yourself:

This used to be the application launching code:

- (void)applicationDidFinishLaunching:(UIApplication *)application {

    [self createEditableCopyOfDatabaseIfNeeded];
    [self initializeDatabase];

    // Configure and show the window
    [window addSubview:[navigationController view]];
    [window makeKeyAndVisible];
}

// Creates a writable copy of the bundled default database in the application Documents directory.
- (void)createEditableCopyOfDatabaseIfNeeded {
    // First, test for existence.
    BOOL success;
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSError *error;
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *writableDBPath = [documentsDirectory stringByAppendingPathComponent:@"todo.sqlite"];
    success = [fileManager fileExistsAtPath:writableDBPath];
    if (success) return;
    // The writable database does not exist, so copy the default to the appropriate location.
    NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"todo.sqlite"];
    success = [fileManager copyItemAtPath:defaultDBPath toPath:writableDBPath error:&error];
    if (!success) {
        NSAssert1(0, @"Failed to create writable database file with message '%@'.", [error localizedDescription]);
    }
}

// Open the database connection and retrieve minimal information for all objects.
- (void)initializeDatabase {
    NSMutableArray *todoArray = [[NSMutableArray alloc] init];
    self.todos = todoArray;
    [todoArray release];
    // The writable database lives in the application's Documents directory.
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *path = [documentsDirectory stringByAppendingPathComponent:@"todo.sqlite"];
    // Open the database. The database was prepared outside the application.
    if (sqlite3_open([path UTF8String], &database) == SQLITE_OK) {
        // Get the primary key for all todo items.
        const char *sql = "SELECT pk FROM todo";
        sqlite3_stmt *statement;
        // Preparing a statement compiles the SQL query into a byte-code program in the SQLite library.
        // The third parameter is either the length of the SQL string or -1 to read up to the first null terminator.
        if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK) {
            // We "step" through the results - once for each row.
            while (sqlite3_step(statement) == SQLITE_ROW) {
                // The second parameter indicates the column index into the result set.
                int primaryKey = sqlite3_column_int(statement, 0);
                // We avoid the alloc-init-autorelease pattern here because we are in a tight loop and
                // autorelease is slightly more expensive than release. This design choice has nothing to do with
                // actual memory management - at the end of this block of code, all the todo objects allocated
                // here will be in memory regardless of whether we use autorelease or release, because they are
                // retained by the todos array.
                Todo *td = [[Todo alloc] initWithPrimaryKey:primaryKey database:database];
                [todos addObject:td];
                [td release];
            }
        }
        // "Finalize" the statement - releases the resources associated with the statement.
        sqlite3_finalize(statement);
    } else {
        // Even though the open failed, call close to properly clean up resources.
        sqlite3_close(database);
        NSAssert1(0, @"Failed to open database with message '%s'.", sqlite3_errmsg(database));
        // Additional error handling, as appropriate...
    }
}

Now it’s just that:

- (void)applicationDidFinishLaunching:(UIApplication *)application {
    NSMutableArray *todoArray = [[NSMutableArray alloc] init];
    self.todos = todoArray;
    [todoArray release];
    [self.todos addObjectsFromArray:[Todo allObjects]];

    // Configure and show the window
    [window addSubview:[navigationController view]];
    [window makeKeyAndVisible];
}

And this is just one example. All the CRUD operations are so much simpler. And I didn’t even need to create the SQLite database myself. It’s almost a shame Apple didn’t include such a framework in the SDK. If you’re curious, you can download the project here.
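For the record, here is roughly what the model side looks like once you switch to SQLitePersistentObjects. This is only a minimal sketch based on how I remember the library working (you subclass SQLitePersistentObject, declare plain properties, and call save on your instances); the name and priority fields are made up for the example, they’re not the ones from the TodoList tutorial:

// Todo.h - hypothetical model class; the fields are invented for this example (dealloc omitted for brevity).
#import "SQLitePersistentObject.h"

@interface Todo : SQLitePersistentObject {
    NSString *name;
    int priority;
}
@property (nonatomic, retain) NSString *name;
@property (nonatomic, assign) int priority;
@end

// Todo.m
#import "Todo.h"

@implementation Todo
@synthesize name, priority;
@end

// Creating and persisting an item somewhere in a controller:
Todo *td = [[Todo alloc] init];
td.name = @"Write blog post";
td.priority = 1;
[td save];    // assuming the library's save method; as I recall, the table is created for you behind the scenes
[td release];

// And loading everything back, exactly as in the launch code above:
NSArray *allTodos = [Todo allObjects];

Even if the real API differs in a detail or two, the spirit is the same: no SQL strings, no file copying, no statement juggling.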

I love it! Now I should be able to get my hands dirty with my real project. More on that later ;o)

Flex Guys Are Going Too Far

There’s been a lot of fuss lately in the Flex community about the way the Flex SDK Open Source effort is handled by Adobe. And let’s just say things have gotten a little messy around here. It’s funny, because I’ve just watched the last episode of Battlestar Galactica, and without any spoilers, let’s just say that people tend to get things wrong when it comes to rebellion.

I mean, sure, not everything is perfect. In particular, Adobe doesn’t seem to be listening to votes on bugs.adobe.com as much as I had hoped. But when I read things like “the war of open sourcing the Flex framework”… I mean, come on, guys!

When Simeon Bateman started to take matters into his own hands and talked about forking the Flex SDK, I already thought he was going too far, because I think Adobe is really trying here, and I respect their effort considering what’s at stake and the size of the boat they’re steering. And what surprised me even more is that Adobe actually responded to the community’s feedback by organizing a big open live discussion with everyone. And then they went even further by opening up their Open Iteration Kick-Off meeting. Hey, that’s an amazing effort! Can you imagine what it takes for a big “old school” company like Adobe to do that?

But as in every rebellion, it seems that people just can’t be satisfied now. They’re forming committees, gathering grievances; it looks more and more like the French Revolution to me. And let’s just say that’s part of the reason I don’t particularly like the country where I was born. But that’s another topic.

The problem with that kind of mutiny is that you always end up with self-proclaimed representatives who are honestly convinced they’re speaking up for the greater good, but who always forget about crowd control, about how people can forget everything about reason and lucidity once you inject the “it’s unfair” feeling into them.

I say thank you, Adobe, for open sourcing the Flex SDK. I say, let’s not forget that you were not forced to do so, that you could have kept it free as in “free beer”, as it was with Flex 2. I say, let’s not forget that Open Source licenses still guarantee intellectual property and ownership to the people who actually created all this code for us. And I say yes, they made the decision to move to Open Source, and they should accept the responsibilities that come with the benefits and really involve the community. But I don’t think that threatening to fork, or getting into a war with them, is going to make things any better… unless your only goal is to cover your butt because your ultimate purpose is actually to fork it.

I think we should all calm down here, take a deep breath, acknowledge the efforts they’ve made so far, and insist on the fact that the community is here to help and work WITH them, not to undermine their efforts and “start working on Flex 5 as a community now and let them join us when they are ready”. UserVoice, committees: those can be good ideas, but we have to show some good will and honesty here, because nothing is owed to us and because it’s not in our best interest to get people nervous and vindictive now.

We don’t need an online IDE!

I’m on vacation this week, and it’s great. I usually try to take at least one week off between missions because it’s the best time to do it. Nothing left to worry about from the previous one, no knowledge of the one ahead; my head is free to dream and wander, and I have plenty of time to move forward on personal projects.

Now beyond my work on betRway.com, I’ve been thinking a lot today about the idea of an online development environment. On my last project, I saw a lot of the possibilities AND limitations of current RIA technology, and more specifically the most advanced one IMO, i.e. Flex. Combine that with some of the modularity offered by OSGi on the server side, the productivity brought by things like Grails, and more importantly the promise of unifying all of those technologies under SpringSource’s sponsorship… and it’s starting to become more and more interesting.

So I googled “online IDE”, and I found this great article on DZone, which is a few months old but gives a lot of links to some of the services that are already offering both niche and generic online environments. I’ve browsed through all of these and I have a bad feeling of “déjà vu”. It’s like those mobile web sites that try to reproduce a trimmed down version of their web counterpart: it doesn’t really work, and worse, it doesn’t even use all the potential of mobile platforms. It’s like we’re trying to copy our desktop IDE onto the web.

But when you really think about it for a moment, the reason we need version control systems, for example, is that everyone is working on their own and we need to synchronize at some point. But this constraint almost disappears if we’re all working online. And online development can have impacts of similar magnitude on things like continuous integration, deployment and monitoring. Some of these tasks don’t even make sense anymore. And how about several programmers working on the same artifact at the same time? And think about the computing power that could be available for code generation or compilation…

The way I see it, even the programming languages that we use today are not designed to work like that. Think of Java packages correlated with file system directories. Think of all those unstructured text files that our IDEs have to parse and index to make some sense out of them and let us navigate through them: what if text files were replaced by databases at some point?

I’m just thinking out loud here, but I’m wondering whether we shouldn’t forget about everything we know in terms of programming paradigms, and get back to the objective: producing working software collaboratively. What would be the best and simplest way to do that, knowing that we are starting to have all the tools we will need in order to move our workspace into the cloud?

As far as I’m concerned, I already know what I would like in such an environment:

  • More graphical programming, especially for high-level design, because visualizing is still the most efficient way to create and design IMO.
  • No more boilerplate tasks like “check out”, “check in”, “run tests” and things like that. Just “share my code in the public workspace” and “test on save”.
  • The ultimate reproducible build environment: everyone is using the very same frameworks and libraries at any time. No more environment variables or shell idiosyncrasies.
  • Smart client stuff, with offline mode, auto-synchronization and conflict resolution when back online.
  • Methodology directly integrated into the environment, not some small window on the real thing. Issue tracking, user stories, all of that at your fingertips.
  • Direct communication with your remote peers through advanced tools like whiteboard, webcam, code review.

In fact, we don’t need an online IDE, not in the sense we think of an IDE today on our desktops. We need more than that. We need a collaborative environment that uses the full potential of the cloud to help us produce better software more quickly.

What do you think? What kind of high-level functionality would you like to see in such an environment? How do you picture it?