All posts by Sébastien

New Adventures

It’s been a while since I posted my last article on this blog, but what is really weird is that I have not yet mentioned what has been occupying my days and nights for the past 8 months.

In January this year, the first Startup Weekend was organized at Betagroup Coworking in Brussels. At first I didn’t want to attend, because I thought it was silly to hope you could do anything meaningful in merely 2 days. But Leo insisted, and 2 days before the event, I finally made up my mind. I was working on HuddleKit back then, but I wanted to propose a simple idea for the event, something powerful yet small enough to be quick to implement. I had discovered AirBnB a few weeks before, and the whole collaborative consumption trend with it. Over lunch at Vivansa‘s HQ, Said started thinking out loud about how difficult it was for small companies to adapt their office size as they grow, and how companies could pool their office space. We started dreaming about what a concept like AirBnB’s would mean applied to office space, and how it could enrich business relationships. And then it struck me: 2 years ago, I published a post entitled “Let’s Solve Problems”, which ended like this.

Now I don’t know how or when, but this could very well become a major breakthrough in my personal and professional life somehow.

Well, guess what. The same day, I registered kodesk.com, and four days later, Kodesk won the jury’s grand prize and the leanest startup prize at the first Startup Weekend Brussels. After that, it was pretty obvious to me: I stopped working on HuddleKit and all my other pet projects, I progressively decreased my involvement with Vivansa, and I started working almost full-time on Kodesk. In May this year, Frederic joined me in this adventure and we invested our own money in it. In late May, we released the first version of Kodesk, and let’s just say that beyond visibility and publicity, the first results are not all as satisfactory as we expected. Now I’m certainly not complaining: we know we are on the right track, our vision is crystal clear, we have a lot of support, and it’s very rare for startups to get it right the first time. It’s even rarer for ventures that try to change common beliefs and evolve an entire culture.

But today it becomes obvious that building the right product and finding a profitable business model is going to take some time. And until we do, it’s going to be very hard to raise any external investment. Now we don’t want this financial constraint to remove all the fun from the adventure of creating our dream business from scratch, so today I am making a new move.

I love technology. I’m a geek and I am proud of it. My friends find it very useful, and my passion for technology has allowed me to develop quite a reputation. Now it is time to earn some money with this passion and reputation, and to use that money to fund this groundbreaking business I’m building. So starting today, I am going to provide businesses of all sizes with three main services: technology watch, training and iPhone/iPad development, because those are the three things I love most and am really good at. If you want more information, I created this page to promote my services. And if you know any company or individual who could be interested in my services, please feel free to pass my information along.

Grails, Vaadin and Spring Security Core

I got kind of bored with Flex and all the complexity it introduces by forcing you to switch between ActionScript and whatever you are using for the backend (Groovy in my case). I also got bored with having to regenerate my data service stubs on each server-side change, and having to handle the asynchronous remoting. So I started to have a look at Vaadin.

Vaadin offers the same richness of components as Flex, but I can code my UI with Groovy and it completely removes the need to bother about remoting and all that stuff. It’s really like my old Swing days and I love it.
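To give an idea of what that looks like, here is a minimal, hypothetical Vaadin application written in Groovy against the Vaadin 6 API (the class name and captions are made up for this example, so treat it as a sketch rather than production code):

import com.vaadin.Application
import com.vaadin.ui.Button
import com.vaadin.ui.TextField
import com.vaadin.ui.Window

class HelloVaadinApplication extends Application {
    void init() {
        // One main window per application instance, much like a Swing JFrame
        def main = new Window("Hello Vaadin")
        def name = new TextField("Your name")
        def greet = new Button("Greet")
        // A Groovy closure can be coerced into Vaadin's single-method listener interface
        greet.addListener({ event ->
            main.showNotification("Hello, ${name.value}!")
        } as Button.ClickListener)
        main.addComponent(name)
        main.addComponent(greet)
        setMainWindow(main)
    }
}

Components, containers and listeners, exactly like the Swing programming model, with closures standing in for anonymous inner classes.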

Last weekend, I tried their AddressBook tutorial and adapted it to Grails using the Grails-Vaadin plugin. Then I modified the sample so that it uses GORM to store contacts. And finally I installed the spring-security-core plugin to secure my business services with @Secured annotations. And it worked absolutely great.

I just released a new version of the Grails-Vaadin plugin with Vaadin upgraded to 6.5.1 (the latest version at this point), and I uploaded my version of addressbook to GitHub.

For me, the most interesting part is how I got security to work. All I had to do was install the spring-security-core plugin into Grails and then define a simple SecurityService like the following:

package org.epseelon.addressbook.business

import org.springframework.security.core.context.SecurityContextHolder as SCH
import org.springframework.security.authentication.BadCredentialsException
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken

class SecurityService {

    static transactional = true

    def springSecurityService
    def authenticationManager

    // Authenticate manually and store the result in Spring Security's context,
    // since the Vaadin UI doesn't go through the regular form-login filter chain
    void signIn(String username, String password) {
        try {
            def authentication = new UsernamePasswordAuthenticationToken(username, password)
            SCH.context.authentication = authenticationManager.authenticate(authentication)
        } catch (BadCredentialsException e) {
            throw new SecurityException("Invalid username/password")
        }
    }

    void signOut() {
        SCH.context.authentication = null
    }

    boolean isSignedIn() {
        return springSecurityService.isLoggedIn()
    }
}

Then I injected this SecurityService into my AddressBookApplication and used it:

class AddressBookApplication extends Application {
    private SecurityService security = (SecurityService) getBean(SecurityService)

    [...]

    boolean login(String username, String password) {
        try {
            security.signIn(username, password)
            refreshToolbar()
            return true
        } catch (SecurityException e) {
            getMainWindow().showNotification(e.message, Notification.TYPE_ERROR_MESSAGE)
            return false
        }
    }
}

Then whenever I try to call a @Secured method:

package org.epseelon.addressbook.business

import org.epseelon.addressbook.dto.PersonListItem
import org.epseelon.addressbook.domain.Person
import grails.plugins.springsecurity.Secured

class PersonService {

    static transactional = true

    [...]

    @Secured(["ROLE_USER"])
    PersonListItem updatePerson(PersonListItem item) {
        Person p = Person.get(item.id)
        if (p) {
            p.firstName = item.firstName
            p.lastName = item.lastName
            p.email = item.email
            p.phoneNumber = item.phoneNumber
            p.streetAddress = item.streetAddress
            p.postalCode = item.postalCode
            p.city = item.city
            p.save()

            return new PersonListItem(
                firstName: p.firstName,
                lastName: p.lastName,
                email: p.email,
                phoneNumber: p.phoneNumber,
                streetAddress: p.streetAddress,
                postalCode: p.postalCode,
                city: p.city
            )
        }
        return null
    }
}

If I’m not logged in as a user, I get an “access denied” exception:

package org.epseelon.addressbook.presentation.data

import com.vaadin.data.util.BeanItemContainer
import org.epseelon.addressbook.dto.PersonListItem
import org.epseelon.addressbook.business.PersonService
import com.vaadin.data.util.BeanItem
import com.vaadin.ui.Window.Notification
import org.epseelon.addressbook.presentation.AddressBookApplication

/**
 *
 * @author sarbogast
 * @version 19/02/11, 11:12
 */
class PersonContainer extends BeanItemContainer<PersonListItem> implements Serializable {
    [...]

    boolean updateItem(Object itemId) {
        try {
            personService.updatePerson((PersonListItem) itemId)
            return true
        } catch (Exception e) {
            AddressBookApplication.application.getMainWindow().showNotification(
                    e.message,
                    Notification.TYPE_ERROR_MESSAGE
            )
            return false
        }
    }
}

To see what it looks like, all you have to do is download the code from GitHub and run “grails run-app” at its root.
If you try to create a new contact or edit an existing one and save it without being logged in, you get an “access denied” message. But if you log in as ramon/password, it works.

Note that this project uses Grails 1.3.6, but the plugin supports Grails 1.3.2 and above. As always, your feedback is more than welcome.

My Case for DTO’s

In many of my posts about Grails and Flex integration, I take for granted that I use Data Transfer Objects to transfer data between my Grails backend and my Flex frontend. Put simply, Data Transfer Objects are pure data-carrying classes, distinct from the domain entity classes used to store data in the backend. I take it for granted because I’m deeply convinced that it’s the best way to do things and so far, experience has never proved me wrong. But I often get this question in comments or by mail (this is for you, Martijn): why bother creating an entirely separate class structure and copying data from entities to DTO’s and back, instead of just using entities?

I’ve expressed my arguments a couple of times across various posts, but I thought it would be nice to sum things up here for future reference.
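Throughout the rest of this post, I will use the following hypothetical example, in plain Groovy, to make the distinction concrete (all the class and field names are made up for the occasion):

// Domain entity, mapped to the database by GORM/Hibernate
class Person {
    String firstName
    String lastName
    String countryCode      // stored as an ISO code, e.g. "BE"
    Date birthDate
    String creditCardNumber // sensitive data that should never leave the server by accident
}

// Data Transfer Object: a pure data container, tailored to what one screen needs
class PersonListItem implements Serializable {
    Long id
    String fullName         // derived from firstName + lastName
    String countryName      // resolved from countryCode
    String birthDate        // pre-formatted for display
}

The entity reflects how the data is stored; the DTO reflects how one particular screen wants to see it.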

Where does it come from?

When I first started to work on enterprise applications and ORM-based architectures, it was with a Model-Driven Architecture framework called AndroMDA. AndroMDA was absolutely key in helping me get started with Spring and Hibernate, and I was especially inspired by one paragraph in their “getting started” tutorial, which I quote here:

Data Propagation Between Layers

In addition to the concepts discussed previously, it is important to understand how data propagates between various layers of an application. Follow along the diagram above as we start from the bottom up.

As you know, relational databases store data as records in tables. The data access layer fetches these records from the database and transforms them into objects that represent entities in the business domain. Hence, these objects are called business entities.

Going one level up, the data access layer passes the entities to the business layer where business logic is performed.

The last thing to discuss is the propagation of data between the business layer and the presentation layer, for which there are two schools of thought. Some people recommend that the presentation layer should be given direct access to business entities. Others recommend just the opposite, i.e. business entities should be off limits to the presentation layer and that the business layer should package necessary information into so-called “value objects” and transfer these value objects to the presentation layer. Let’s look at the pros and cons of these two approaches.

The first approach (entities only, no value objects) is simpler to implement. You do not have to create value objects or write any code to transfer information between entities and value objects. In fact, this approach will probably work well for simple, small applications where the presentation layer and the service layer run on the same machine. However, this approach does not scale well for larger and more complex applications. Here’s why:

  • Business logic is no longer contained in the business layer. It is tempting to freely manipulate entities in the presentation layer and thus spread the business logic in multiple places — definitely a maintenance nightmare. In case there are multiple front-ends to a service, business logic must be duplicated in all these front-ends. In addition, there is no protection against the presentation layer corrupting the entities – intentionally or unintentionally!
  • When the presentation layer is running on a different machine (as in the case of a rich client), it is very inefficient to serialize a whole network of entities and send it across the wire. Take the example of showing a list of orders to the user. In this scenario, you really don’t need to transfer the gory details of every order to the client application. All you need is perhaps the order number, order date and total amount for each order. If the user later wishes to see the details of a specific order, you can always serialize that entire order and send it across the wire.
  • Passing real entities to the client may pose a security risk. Do you want the client application to have access to the salary information inside the Employee object or your profit margins inside the Order object?

Value objects provide a solution for all these problems. Yes, they require you to write a little extra code; but in return, you get a bullet-proof business layer that communicates efficiently with the presentation layer. You can think of a value object as a controlled view into one or more entities relevant to your client application. Note that AndroMDA provides some basic support for translation between entities and value objects, as you will see in the tutorial.

Because of this paragraph, I started writing all my business services with only data transfer objects (what they call “value objects”) as input and output. And it worked great. Yes it did require a little bit of coding, especially as I had not discovered Groovy yet, but it was worth the time, for all the following reasons.

The conceptual argument: presentation/storage impedance mismatch

Object-relational mapping is what Joel Spolsky calls a “Leaky Abstraction”. It’s supposed to hide away the fact that your business entities are in fact stored in a relational database, but it forces you to make all sorts of choices because of that very fact. You have to save data in a certain order so as not to break certain integrity constraints, certain patterns are to be avoided for better query performance, and so on and so forth. So whether we like it or not, our domain model is filled with “relational choices”.

Now the way data is presented involves a whole different set of constraints. Data is very often presented in a master/detail format, which means you first display a list of items, with only a few fields for each item, and possibly some of those fields are calculated based on data that is stored in the database. For example, you may store a country code in your database, but you will display the full country name in the list. And then when the user double-clicks an item, he can see all the fields for that item. This pattern is totally different from how you actually store the data.

So even though some of the fields in your DTO’s will be mere copies of their counterparts in the entity, that’s only true for simple String-typed fields. As soon as you start dealing with dates, formatted floats or enum codes, there is some transformation involved, and doing all that transformation on the client side is not always the best option, especially when you have several user interfaces on top of your backend (a Flex app and an iPhone app, for example), in which case you’re better off doing most of these transformations on the server.
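Concretely, with the hypothetical Person and PersonListItem classes sketched above, the server-side transformation could look like this (the country table is hard-coded purely for the sake of the example):

import java.text.SimpleDateFormat

class PersonAssembler {
    // Hypothetical lookup; in a real application this would come from a reference table or Locale data
    static final Map<String, String> COUNTRIES = [BE: 'Belgium', FR: 'France', US: 'United States']

    static PersonListItem toListItem(Person p) {
        new PersonListItem(
            id: p.id,
            fullName: "${p.firstName} ${p.lastName}",
            countryName: COUNTRIES[p.countryCode] ?: p.countryCode,
            // the client receives a display-ready string instead of a raw Date
            birthDate: new SimpleDateFormat('dd/MM/yyyy').format(p.birthDate)
        )
    }
}

Whether the client is a Flex application or an iPhone application, it gets data that is ready to display.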

Either way, the way you store data should not influence too much the way you present that same data, and vice versa. This decoupling is very important for me.

The bandwidth argument: load just the data you need

In the master/detail use case, when you display the list of items, you just need a subset of the fields from your entities, not all of them. And even though you’re using Hibernate on the backend with lazy loading enabled, fields are still initialized and transferred over the wire. So if you use entity classes for data transfer, you will end up transferring a whole bunch of data that may never be used. Now it might not be very important for hundreds of records, but it starts being a problem with thousands of records, especially when there is some parsing involved. The less data you transfer, the better.
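To illustrate, with GORM you can even build the list DTO’s from a projection, so that the unneeded columns never leave the database in the first place (again a hypothetical service, reusing the classes sketched earlier):

class PersonQueryService {
    static transactional = true

    List<PersonListItem> listPeople() {
        // Only the columns the list screen actually needs are selected and transferred
        Person.withCriteria {
            projections {
                property('id')
                property('firstName')
                property('lastName')
            }
        }.collect { row ->
            new PersonListItem(id: row[0], fullName: "${row[1]} ${row[2]}")
        }
    }
}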

The security argument: show only the data you want to show

Let’s say you’re displaying a list of users, and in the database, each user has a credit card number. Now of course when you display a list of users, you might not want everyone to see the list of credit card numbers. You might want to expose this data only in detail view for certain users with certain privileges. DTO’s allow you to tailor your API to expose just the data you need.

The error-prone argument: argh! Yet another LazyInitializationException!

Of course there are associations between your business entities, and by default, those associations are lazy-loaded, which means they are not initialized until you actually access them. So if you just load a bunch of instances from your entity manager and send them over to your client, the client might end up with null collections. Now of course you can always pay attention, or use some tricks to initialize associations up to a certain level before you send your data, but this process is not automatic and it’s very error-prone. As for using things like dpHibernate, I think it just adds too much complexity and uncontrolled server requests.
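For what it’s worth, the way I avoid the problem is to flatten lazy associations into the DTO inside the transactional service, while the Hibernate session is still open. Here is a hypothetical sketch (all the order-related classes are made up for this example):

// Hypothetical GORM entities with a lazy one-to-many association
class OrderLine {
    String productName
    Integer quantity
}

class Order {
    String orderNumber
    static hasMany = [lines: OrderLine]
    static mapping = { table 'orders' }   // 'order' is a reserved word in SQL
}

// Hypothetical DTO's for an order detail screen
class OrderLineItem implements Serializable {
    String product
    Integer quantity
}

class OrderDetail implements Serializable {
    String orderNumber
    List<OrderLineItem> lines
}

class OrderService {
    static transactional = true

    OrderDetail getOrderDetail(Long orderId) {
        def order = Order.get(orderId)
        if (!order) return null
        new OrderDetail(
            orderNumber: order.orderNumber,
            // touching the association here, inside the transaction, initializes it,
            // so the DTO that crosses the wire contains plain, fully loaded data
            lines: order.lines.collect {
                new OrderLineItem(product: it.productName, quantity: it.quantity)
            }
        )
    }
}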

The laziness argument: Come on! It’s not that hard!

I think that most of the time, the real reason why people don’t want to use DTO’s is that they’re lazy. Creating new classes, maintaining code that does “almost” the same as existing code, adding some code to service implementations to copy data back and forth, all of that takes time and effort. But laziness has never been a good reason for ditching a design pattern altogether. Yes, sometimes best practices force us to do more work for the sake of the maintainability and robustness of our code, and for me the solution is certainly not to shortcut the whole practice, but to find the best tools to minimize the added work. With its property support and collection closures, Groovy makes creating, maintaining and feeding DTO’s about as simple and fast as it can be. AndroMDA had converters. There are even DTO-mapping frameworks like Dozer to help you. No excuse for laziness.
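To give an idea of how little extra code it really takes in Groovy, feeding a whole list screen from the hypothetical assembler sketched earlier boils down to a one-liner:

// Groovy's collect closure plus named-argument constructors keep the mapping code tiny
List<PersonListItem> items = Person.list().collect { PersonAssembler.toListItem(it) }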

For me, all the reasons above largely overcome the added work to maintain a parallel DTO structure.

Now of course, this is a very opinionated topic and you will probably have a different view. So all your comments are welcome as long as they remain constructive and well-argued.

Flex on Grails, Take 2: Part 3

At the end of the second article in this series, we ended up with a working application, but it was not really ready for the real world because it had one major flaw: the URL of the AMF endpoint was hardcoded in the client, which made it impossible to change after compilation and very hard to handle several environments (dev, test, prod). The solution to that problem is to integrate dependency injection into the mix.

Now there are a lot of such frameworks for Flex/ActionScript applications, including Parsley, Swiz, Cairngorm, etc. But I’ve never been a big fan of those big MVC frameworks that impose their own interpretation of the MVC pattern and completely limit the initial capabilities of Flex itself. For me, the Flex framework itself is clean enough that you don’t need all that overhead and it’s better to use a non-intrusive framework like Spring ActionScript. So that’s what we are going to do.

Continue reading Flex on Grails, Take 2: Part 3

JVM Web Framework Survey, First Results

Yesterday at Devoxx, Matt Raible gave a very interesting talk comparing JVM web frameworks. On this occasion he had the incredible courage to voice his opinion on each of the best-known frameworks, rating them in a matrix and, the craziest part, showing this matrix to everyone.

Immediately after his talk, Twitter was on fire with advocates of each of those frameworks complaining about how those ratings were unfair and biased. Of course, I can hardly talk about more than 3 or 4 of these frameworks with the same level of confidence, but the guy has enough experience to have played with at least 13 of them, and it’s perfectly normal to expect him to have up-to-date and accurate feedback about all of them.

Anyway, his talk was highly entertaining, but in the end it left me with two thoughts:

  1. His list of 20 criteria is excellent and covers pretty much everything, except maybe for “graphics design integration”, which I think is very important and which some frameworks make much easier than others, like Flex with Flash Catalyst for example. So even if you don’t agree with the ratings, you can still reuse his methodology, build 13 proofs of concept and rate them yourself.
  2. Rather than complaining, let’s do a survey.

So right after the talk, I built a small Google Docs survey, and to date I have received 26 answers, which I think is far fewer than the number of complaints, but it’s already a good start. Here are the first results based on those 26 responses. As you can see, there was a small issue with Google Docs, which didn’t correctly save the results for the last criterion, degree of risk. So the rankings are based on only 19 criteria so far. But if you want to voice your opinion, you can still do so. I will update the results from time to time, and this time the 20th criterion will be taken into account.

Thanks again Matt Raible for this very inspiring talk, and thanks to all the Devoxx team for yet another memorable edition.

By the way, as far as I’m concerned, I’m pretty happy with Grails and Flex at the moment, but after the amazing demos I saw at Devoxx, I will probably have a deeper look at Vaadin very soon. And please, JetBrains, we need more visual designers (Flex? Vaadin?).

Flex on Grails: Take 2, Part 2

This is a follow-up post to Flex on Grails, Take 2.

Task creation

Now that we have a basic working application, let’s improve it by adding some security in there. If your Grails application is still running, you can leave it that way as you won’t need to restart your backend every time you modify it. That’s the whole beauty and productivity of Grails and this plugin.

Continue reading Flex on Grails: Take 2, Part 2

Flex on Grails: Take 2

A little bit of history

When I first discovered Flex, one of my first obsessions was how to make it work with a Java backend. I’m a Java developer at heart, and my Java backend stack of choice back then was Spring/Hibernate-based. That’s why I published a series of full-stack articles that became quite popular. But another obsession of mine has always been productivity, so when I discovered Grails, it became my new preferred environment and I started looking for ways to plug a Flex frontend into a Grails backend. All of this work culminated in the release of my Grails BlazeDS plugin, which worked great but had a few limitations (only Java DTO’s, run-war instead of run-app, etc.). I mean, it worked great… until it didn’t. For some obscure reason, my plugin didn’t work at all with Grails 1.3.x. I fought with this for months, but there were just too many technologies involved (Groovy, Grails, BlazeDS, Spring-Flex, Spring, etc.) and my knowledge of some of those technologies was too shallow to really understand everything happening under the hood. That’s why I called upon SpringSource and/or Adobe to help me, or to provide the community with decent Flex support for Grails. And guess what! They listened. A couple of months ago, I got in touch with Burt Beckwith, from SpringSource, who intended to work on that. So he asked me for feedback, and really that’s all I did: I explained to him some of the issues I had with the plugin, the typical environment that we Flex developers work with, etc. And today… TADAAAA! The new Flex support plugins are here.

Continue reading Flex on Grails: Take 2

Versioning is Just Too Complex

I’m currently trying to unlearn 10 years of CVCS and Subversion certainties to learn DVCS and Git. And although a lot of people see those as a huge leap forward, I can’t help thinking it’s still way too hard. Versioning is such a basic need, it shouldn’t require that much knowledge. Right now, whether it’s Subversion or Git (or Mercurial, or whatever), the problem is always the same for me: it’s just too low-level. Those systems are designed as if I needed to be a mechanic to drive my car. I don’t care about injectors, gearboxes and all that stuff. All I want is to go faster or slower, turn left or right, and that’s about it. Now sure, if I’m a mechanic, I can make my car perform much better, but if I just need to get from point A to point B, do you really think I will learn mechanics? Most people won’t. They will just walk there, or ride a bike. And that’s exactly what’s happening in too many companies: “manual versioning”, version numbers in file names, shared drives, even $harepoints, are just poor solutions to a real problem. We need versioning, but existing solutions have an unacceptable learning curve. They’re not smart enough, and they require us to be smart, even though that’s not our main purpose.

Even as a developer, sure I can understand such concepts, but I don’t want to learn what merge, pull, push, commit, branch or update mean. I want to start new projects, participate in existing projects, fix bugs, implement features, refactor my code. Whatever underlying system my company or team is using, whatever technologies I’m working with, my own workflow should not be influenced so much by the tools I use to keep track of my work and collaborate with my team. It should be transparent.

My point is: Git is a leap forward, but it’s a small one, and we need a giant one. We need a versioning system that implements workflows at a higher level, that abstracts away all those silly commands with similar names but different meanings. We need an abstraction layer that can work on top of existing lower-level tools like Git, Subversion and others, and that can be integrated seamlessly into development environments like IntelliJ IDEA or Eclipse (sigh). Sure, we will probably lose some power in the process, but we can leave it as an option.

In fact, we need to do for versioning what Maven has done for the build lifecycle. Convention over configuration. What are the most common tasks we do on a daily basis?

  • start a new project
  • enter an existing project
  • start fixing a bug
  • start implementing a new feature
  • start refactoring some code
  • switch to another bugfix/feature/refactoring
  • complete a bugfix/feature/refactoring
  • share a bugfix/feature/refactoring with my team or with the world
  • what else?

And then let’s try to map those high-level tasks to lower-level command sequences. And let’s do it in such a way that I can easily change the underlying implementation, or reconfigure certain aspects of it. And why not add the possibility to define your own workflows for things other than traditional development? In fact, to make things clearer, let’s just call git, subversion and other existing tools “versioning tools”, and let’s design a “collaboration framework”. Exactly like Maven is not a “build tool” but a “build lifecycle framework”.
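Just to make this a little less abstract, here is a toy sketch in Groovy of what such a mapping could look like on top of git. Everything in it is made up for illustration: the verbs, the command sequences and the placeholder syntax.

// A purely hypothetical "collaboration framework": high-level workflow verbs
// mapped onto sequences of lower-level versioning-tool commands (git flavour here)
def gitWorkflows = [
    'start-project' : ['git init', 'git add .', 'git commit -m "Initial import"'],
    'join-project'  : ['git clone <url>'],
    'start-bugfix'  : ['git checkout -b bugfix/<issue>'],
    'start-feature' : ['git checkout -b feature/<name>'],
    'switch-work'   : ['git stash', 'git checkout <branch>'],
    'complete-work' : ['git checkout master', 'git merge <branch>', 'git branch -d <branch>'],
    'share-work'    : ['git push origin <branch>']
]

def run = { String task, Map args = [:] ->
    gitWorkflows[task].each { template ->
        def command = template
        args.each { k, v -> command = command.replace("<$k>", v) }
        println "would execute: $command"   // a real tool would execute the command instead of printing it
    }
}

run('start-feature', [name: 'offline-mode'])   // => would execute: git checkout -b feature/offline-mode

Swap that map for a Subversion or Mercurial flavour and the high-level verbs stay exactly the same, which is the whole point: the workflow is stable, the tool becomes an implementation detail.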

I’m just ranting out loud here, but what do you think?

Mac Runtimes, What a Mess!

First of all, let’s make things clear: I’ve been a very satisfied Mac user for the past 4 years or so, but I’m also a Java and a Flex developer, which means I have interests in all three of those technologies. And yes, I’m also a big fan of Steve Jobs, but despite all expectations, I try to stay clear-eyed about him and some of his weirdest choices/decisions/open letters ;o).

The problem I have at the moment is that, in the name of sensationalism, a lot of blogs publish posts with titles like “Macs won’t have Flash anymore” or “Java is dead on the Mac”, as if it were just an evil continuation of the “no Flash on iPhone/iPad” fuss that started at the beginning of this year. Now it’s certainly a great way to draw attention to those sites that live only on advertising, and hence on traffic. But let’s try to re-establish a few realities here.

First off, let’s talk about what everybody has in the back of their head when they think about Apple and runtimes: iOS. Yes, iOS doesn’t support any alternative runtime. In fact, besides Javascript, iOS doesn’t support any virtual machine. Flash and Java run on virtual machines, and they’re not supported on iOS. There are 2 major reasons for that. The first one is performance, because a virtual machine, that is a software execution environment on top of a hardware one, will never be as performant as the native one. Despite all the optimization efforts that Adobe has made with Flash on mobiles, first experiences on Android tend to confirm that there’s still work to be done. Even though they have improved a lot in the past 3 years thanks to the iPhone impulse, mobile devices still run on very limited hardware. And they still haven’t reached the point where they have a lot of free resources to spare, like personal computers have. So the official reason makes sense. But of course the less official reason is also important for Apple: the iPhone’s number one sales argument is apps. When you think about it, it’s almost funny, because when the first iPhone came out without an SDK, everybody complained about it, and then Steve Jobs answered that there was no use for an SDK. And obviously at that time, Apple was already working very hard on the App Store and the iPhone SDK. But when you know you have something huge in the pipeline, something that will make your device even more frightening to the competition, what is the best thing to say to the competition? “Don’t worry, this is just another one of our silly shiny gadgets that will just convince our existing fans”. And then a mere 18 months later, Apple comes out with not only an excellent SDK, but a whole new sales and distribution channel, and a marketing strategy based solely on all the apps you can install. I’m sure there must have been a couple of WTF moments at Nokia, RIM and others. So when your whole marketing strategy relies on your controlled and polished SDK and distribution channel, you have absolutely no interest in letting others in, be it J2ME crap (I’ve done J2ME development too, yuck!) or the more threatening Adobe AIR. So let’s deal with it: no virtual machines on iOS, and whether we like it or not, it makes sense.

So is the recent news just a continuation of that? Is Steve Jobs trying to eliminate all competition on the Mac too? NO! He’s not! It’s a completely different story!

Let’s start with Flash on the MacBook Air. Yes, the new MacBook Air doesn’t have Flash pre-installed. Actually, Safari does not have the Flash plugin preinstalled anymore. But nothing prevents you from installing it yourself, just as nothing prevents you from installing Firefox and its Flash plugin as well. On iOS, it’s not pre-installed, and you can’t install it yourself. On Mac OS X, from now on, it won’t be pre-installed, but you will still be able to install it yourself. Huge difference! The Flash community has complained enough about the outdated version of the pre-installed Flash plugin. Of course Apple will not update their systems every time Adobe fixes a security or performance bug. So the best way to avoid any lingering hole is to allow no hole at all by default. And if you need Flash, you just install the latest version and you’re good to go. That’s the official reason. But as always there is… one more thing! One of the main marketing arguments for Flash is that, unlike any other cross-platform runtime, it’s installed on a crushing majority of machines, somewhere above 95% of them. But that is partly thanks to those integration deals that make Flash ship with every new PC or Mac, independently of the popularity of Flash as a development platform. Apple’s bet is that with the advent of HTML5, users will use the Flash plugin less and less often. But if they pre-install it, this drop in usage won’t show up in Adobe’s marketing numbers. Once again, whether we like it or not, it makes perfect sense for Apple. And it even makes sense to me: even if I’m a big Flash advocate, even if I think the HTML5 fuss is oversold, I think Adobe has been a little too slow to react lately, as if they were resting on their dominance of the cross-platform runtime market. So everything that makes them fight harder to build a better development and runtime environment is good. And I’m sure they will fight. They just need to invest more in it. Mobile Flex development only in early 2012 (and those are the first estimates, the ones that are always wrong) will just be too late for the show. So that’s it: no Flash plugin preinstalled on Macs means no Mac shipping with outdated security holes built in, and no built-in popularity bias either, which is good for competition. But nothing will prevent you from installing Flash yourself.

Let’s talk about Java now. When you read the news, you tend to feel like Apple’s war on competition is nothing personal against Adobe, that it’s targeted at everyone else, that Java will be Steve’s next victim. But that’s just so untrue! First off, contrary to what happens with Flash, Apple never said that they would ship Macs without Java built in. They just said that it would enter a pure maintenance phase and that they would stop supporting it… themselves! But once again, they won’t prevent anyone else from taking over support for Java on the Mac. In fact, that’s probably why they took this decision: there was a time when Apple had their own interests in Java, when there was a Java-Cocoa bridge in the development environment, when Java was even a great way to make the Mac ecosystem richer, because a lot of developers would write their desktop applications in Java to support all platforms with a single code base. But of course, with the deprecation of the Java-Cocoa bridge and the advent of the iPhone and what it means in terms of popularity for the native Objective-C and Cocoa environment, Apple’s stake in Java has decreased dramatically. So much so that today, those who have the most interest in Java on the Mac are… those who support Java developers. And since Steve Jobs and Larry Ellison are known to be big friends, I’m sure Oracle and Apple are perfectly clear about who is going to take over. Maybe the community can help with Soy Latte and OpenJDK, but I can’t believe that Oracle won’t step up themselves, given the overwhelming Mac install base amongst Java devs. And still, whatever the solution, Apple won’t prevent anyone else from supporting Java and offering a Mac installation package for it.

So Flash and Java are not dead on the Mac! At least not based on existing statements and choices from Apple. But we can’t know what Steve has in mind, and I can’t help worrying about the end game of all this. Given the huge success of iOS, which makes perfect sense in the mobile world, I’m really afraid that Steve Jobs won’t know where to stop and will want to reproduce the same model on the desktop. And I certainly don’t want that. I’m not ready for it yet. And I think a lot of people are not ready either, so if Apple moves too fast in this direction, they could lose a lot of customers in the process, especially if Steve Jobs starts this transition and then leaves it for others to deal with. But we’re not there yet. So please, bloggers, keep a cool head and avoid feeding fear, uncertainty and doubt.

How To Introduce Yourself… I Mean Practically

For years, I’ve been using a very simple but very effective technique to introduce myself in job interviews, and I’ve always had excellent feedback about it. I’m not talking about the content here, but the format. It can always be a bit tricky to introduce yourself without diving too much into irrelevant details, losing yourself along the way, or boring the interviewer to death. The technique I use to avoid all that is one I learnt at Axen, but since Axen doesn’t exist anymore per se, I might as well share it with you guys, because it’s always a shame for a good hire to fall through because the candidate wasn’t clear enough during the interview. So here you go…

Continue reading How To Introduce Yourself… I Mean Practically