And you’re wondering why Europe sucks for innovative startups?

I’m experimenting with a new collaborative consumption business idea at the moment and I’m trying to set up its payment processing infrastructure. I contacted Paymill, the European clone of Stripe, but they seem to be stalling on activating my merchant account. So I got in touch with Braintree Payments, since they opened their services to European merchants last year and they are the payment provider of AirBnB, THE pioneer of collaborative consumption. But this morning, I got the following email:

We would love to have you as a business partner, but unfortunately we cannot move forward with your application. Your payment model is something called Third Party Payment Aggregation.  This means you are accepting payment on behalf of someone else, and then passing on payment to them at a later time. TPPA is the highest risk business model there is in payment processing. Unfortunately we do not have a European sponsoring bank willing to underwrite applications for companies with this payment model at the present time.

And they add a link to a post on their blog, explaining what TPPA is and why it is risky.

Now I understand the risk, but it got me thinking. Isn’t it a bank’s job to manage risk and get paid for it? And what does this mean for the future of collaborative consumption in Europe? Given that collaborative consumption is on its way to replacing ad-supported free services as the leading family of business models, and that the go-to revenue stream for collaborative consumption is a commission on transactions, how can we do that without a bank willing to support this model?

So the next step is to see how existing collaborative consumption businesses in Europe are doing. Let’s look first at 9flats, the European clone of AirBnB (they’re based in Germany). Unfortunately, their terms and conditions do indicate that they process the transactions themselves (“9flats undertakes to pay to the host as the purchase price for the receivable the amount of the accommodation price minus a commission (receivable price)”), but they don’t mention which payment processing service they’re using. Now let’s look at BlaBlaCar, one of the leading ride-sharing services in Europe. Here is what they say in article 2.3 of their Terms and Conditions: “it is the Driver’s responsibility to collect payment from the Passenger at the time of the Trip”. Whoops! There seems to be a pattern of avoidance here. I posted a message on OuiShare’s Facebook group and someone pointed me to Leetchi. Their website says they use Payline to process payments, a provider I had never heard of and will investigate (it seems Payline counts the French subsidiary of BlaBlaCar among its customers too). But the smaller the operator, the less likely it is to support modern APIs enabling mobile payments, for example (which both Braintree and Paymill do).

This is going to be harder than I thought. But I guess the point of this post is this: if you are thinking of establishing a business in Europe whose model relies on taking a commission on transactions, be prepared to fight… or flight! The US of A is looking very appealing to me right now…

Top 5 reasons why you should consider Groovy and Grails for your enterprise software architecture right now

I’m so amazed at how few companies are using Groovy and Grails right now, still sticking to plain old Spring and Hibernate, that I thought I would jump in and do my share of the educating. And why not give in to the fashion of top lists while I’m at it? So here goes: if you are an enterprise software architect and you have a lot of Java in your world, you might want to read what follows carefully.


Book publishing is dying? No kidding!

It’s been a while since I posted my last article here. But tonight I just can’t help it. For a few weeks I’ve been reading MongoDB in Action, from Manning, as an eBook on my iPad. And for a few days, since I started diving into its more complex parts, I’ve been struggling with errors all over the place. I’m not just talking about typos here, I’m talking about massive errors that completely change the meaning of code samples and leave you wondering about your own sanity and stupidity. Here are a few examples coming from their Errata page:

Page 81, First code snippet

First off, how the hell am I supposed to match these errata with my eBook? An eBook has no such pages: it depends on what you’re reading it on, the size of the font, etc. Now I know these errata were originally written for the paper edition. But these are technical books we’re talking about. How many technical people are still reading ink on dead trees around here?

Replace

category = db.categories.findOne({'slug': 'outdoors'})
siblings = db.categories.find({'parent_id': category['_id']})
products = db.products.find({'category_id': category['_id']}).
                            skip((page_number - 1) * 12).
                            limit(12).
                            sort({helpful_votes: -1})

with

category = db.categories.findOne({'slug': 'outdoors'})
siblings = db.categories.find({'parent_id': category['parent_id']})
products = db.products.find({'category_id': category['_id']}).
                            skip((page_number - 1) * 12).
                            limit(12).
                            sort({average_review: -1})

In the paragraph before this code sample, you can read that we’re looking for siblings of the current category, and then in the original code sample above, you see the code is looking for children, not siblings. So you start hitting your head against the wall, reading the same sample over and over again, trying to make sense of it, and then you locate the errata and you see THIS!

But we’re not done yet, there’s more, there’s worse!

From the same errata page:

The line reading:

emit(shipping_month, {order_total: this.sub_total, items_total: 0});

should read:

emit(shipping_month, {order_total: this.sub_total, items_total: 0});

You can read it again… and again… no, there’s no difference between the original and the correction. It’s exactly the same fricking line! The worst part is that you can feel there’s something wrong with this line: it does seem odd to seed the two totals with different starting values, but which version is the right one? Not so easy to figure out when you’re writing your first map-reduce. So what, is there an errata page for the errata page somewhere?

But my favorite is definitely the next one:

The lines reading:

var tmpTotal = 0;
var tmpItems = 0;

tmpTotal += doc.order_total;
tmpItems += doc.items_total;

return ( {order_total: tmpTotal, items_total: tmpItems} );

should read:

var tmpTotal = 0;
var tmpItems = 0;

values.forEach(function(doc) {
  tmpTotal += doc.order_total;
  tmpItems += doc.items_total;
});

return ( {order_total: tmpTotal, items_total: tmpItems} );

Ah! So that’s where this “doc” variable comes from! How the hell could two entire lines be removed by mistake? And then I noticed something odd. The last line of what’s supposed to be the original is wrong. What I’m actually reading in my version of the book is:

return ( {total: tmpTotal, items: tmpItems} );

And not:

return ( {order_total: tmpTotal, items_total: tmpItems} );

So same question again… Errata for this errata?

So to sum it up: Manning is a technical publisher; they publish technical books for technical readers. They release draft versions in advance so that the community can review them on the cheap. Then they sell you eBooks at $30 a pop, and the final version is still littered with massive errors. And since they can’t figure out how to patch your book, they write an errata page that is simply unusable, because A- you can’t match page numbers with an eBook and B- the errata page itself is full of errors.

And they wonder why their industry is on the decline? Seriously? I’ll tell you what: next time I’ll save myself a few dozen bucks and find “another source”…

And in the meantime, I just found out about Sigil. So I’ll see if I can patch the book and republish it, just as a provocation.

Why I switched back from Heroku to CloudBees

I used to have several Grails applications deployed on CloudBees. I liked the fact that their stack was Java through and through, and I liked the smooth integration between Jenkins CI and the deployment environment. I really liked the fact that you could hide an application behind a username and password during testing. I just hated their design (seriously guys, hire a good designer) and I was not thrilled by their catalog of third-party services. So when Heroku announced that they supported Java applications, and then Grails applications, it was not long before I migrated all my apps over to their servers.

But more recently, I’ve had issues with a more plugin-rich application. And tonight, after several weeks of fighting, I migrated this app back to CloudBees for one general reason: Heroku was really designed with RoR in mind, and even though they built a new stack for Java apps, the old rules still apply:

  • if an application takes more than 60 seconds to boot, it is considered crashed. There is no way to adjust this timeout, and we all know it is pretty common for a Java app to take a little longer than that.
  • this boot-time intolerance gets worse when combined with idling: an application is automatically put to sleep after several minutes without traffic. Consequence: when someone comes to the site after such a period, say, in the morning, the app has to boot out of sleep mode… and sometimes it crashes again. Terrible for availability. To be fair, there is a way to disable idling: once you scale up to 2 dynos, your app is never put to sleep; but on a single dyno there is apparently no way, even by paying, to keep your app alive all the time.
  • since your app can be sleeping, implementing long-running background processes is very complicated too. You have to use their worker processes, but there is no documentation on how to do that in a Java application, let alone in a Grails app.
  • last but not least, even though I tried to limit my app’s memory footprint, it kept going over 512MB, which is the hard limit. Once again, there is no way to change that. It doesn’t crash my app, but it clutters my logs with plenty of annoying warning messages.

Add to that the fact that they refused to answer my last support request about memory consumption and marked it as solved once it got hard. The lack of an easy continuous integration feature was not a deal breaker either, but it adds to the rest.

Now I don’t know if CloudBees will be better on all these points, but it looks good on paper. Unfortunately, we Java devs don’t have a lot of choice when it comes to cloud deployment. CloudFoundry is way too low-level, AppFog is still in private beta, and Amazon Elastic Beanstalk is awful for deployment (a 40MB upload takes a long time). What other options are out there that I’ve never heard of?

 

New Adventures

It’s been a while since I posted my last article on this blog, but what is really weird is that I have not yet mentioned what has been occupying my days and nights for the past 8 months.

In January this year, the first Startup Weekend was organized at Betagroup Coworking in Brussels. At first I didn’t want to attend, because I thought it was silly to hope you could do anything meaningful in merely 2 days. But Leo insisted, and 2 days before the event, I finally made up my mind. I was working on HuddleKit back then, but I wanted to propose a simple idea for the event, something powerful yet small enough to be quick to implement. I had discovered AirBnB a few weeks before, and the whole collaborative consumption trend with it. Over lunch at Vivansa’s HQ, Said started thinking out loud about how difficult it was for small companies to adapt their office size as they grow, and how companies could pool their office space. We started dreaming about what a concept like AirBnB’s would mean applied to office space, how it could enrich business relationships. And then it struck me: 2 years ago, I published a post entitled “Let’s Solve Problems”, which ended like this:

Now I don’t know how or when, but this could very well become a major breakthrough in my personal and professional life somehow.

Well, guess what. The same day, I registered kodesk.com, and four days later, Kodesk won the jury’s grand prize and the leanest startup prize at the first Startup Weekend Brussels. After that, it was pretty obvious to me: I stopped working on HuddleKit and all my other pet projects, I progressively decreased my involvement with Vivansa and I started working almost full-time on Kodesk. In May this year, Frederic joined me in this adventure and we invested our own money in it. In late May, we released the first version of Kodesk, and let’s just say that beyond visibility and publicity, the first results are not all as satisfactory as we expected. Now I’m certainly not complaining: we know that we are on the right track, our vision is crystal clear, we have a lot of support, and it’s very rare for startups to get it right the first time. It’s even rarer for ventures that try to change common beliefs and shift an entire culture.

But today it has become obvious that building the right product and finding a profitable business model is going to take some time. And until we do, it’s going to be very hard to raise any external investment. Now we don’t want this financial constraint to take all the fun out of the adventure of creating our dream business from scratch, so today I am making a new move.

I love technology. I’m a geek and I am proud of it. My friends find it very useful, and my passion for technology has allowed me to develop quite a reputation. Now it is time to earn some money with this passion and reputation, and to use that money to fund this groundbreaking business I’m building. So starting today, I am going to provide businesses of all sizes with three main services: technology watch, training, and iPhone/iPad development, because those are the three things I love most and am really good at. If you want more information, I created this page to promote my services. And if you know any company or individual who could be interested in my services, please feel free to pass my information along.

Grails, Vaadin and Spring Security Core

I got kind of bored with Flex and all the complexity it introduces by forcing you to switch between ActionScript and whatever you are using for the backend (Groovy in my case). I also got bored with having to regenerate my data service stubs on each server-side change, and having to handle the asynchronous remoting. So I started to have a look at Vaadin.

Vaadin offers the same richness of components as Flex, but I can code my UI with Groovy and it completely removes the need to bother about remoting and all that stuff. It’s really like my old Swing days and I love it.
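Just to give an idea of what that feels like, here is a minimal, self-contained Vaadin 6 application written in Groovy. It is a toy sketch of my own, not taken from the AddressBook sample: the whole UI, including the click handler, runs server-side, with no remoting code to maintain.

[code=groovy]
// Toy example: a complete Vaadin 6 application written in Groovy
import com.vaadin.Application
import com.vaadin.ui.Button
import com.vaadin.ui.Window

class HelloApplication extends Application {
    void init() {
        def mainWindow = new Window("Hello Vaadin")
        def button = new Button("Click me")
        // the closure is coerced into a Button.ClickListener and runs on the server
        button.addListener({ event ->
            mainWindow.showNotification("Handled in server-side Groovy")
        } as Button.ClickListener)
        mainWindow.addComponent(button)
        setMainWindow(mainWindow)
    }
}
[/code]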

Last weekend, I tried their AddressBook tutorial and adapted it to Grails using the Grails-Vaadin plugin. Then I modified the sample so that it uses GORM to store contacts. And finally, I installed the spring-security-core plugin to secure my business services with @Secured annotations. It worked absolutely great.

I just released a new version of the Grails-Vaadin plugin with Vaadin upgraded to 6.5.1 (the latest version as of this writing), and I uploaded my version of addressbook to GitHub.

For me, the most interesting part is how I got security to work. All I had to do was install the spring-security-core plugin into Grails and then define a simple SecurityService like the following:

[code=groovy]
package org.epseelon.addressbook.business

import org.springframework.security.core.context.SecurityContextHolder as SCH
import org.springframework.security.authentication.BadCredentialsException
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken

class SecurityService {

    static transactional = true

    def springSecurityService
    def authenticationManager

    void signIn(String username, String password) {
        try {
            def authentication = new UsernamePasswordAuthenticationToken(username, password)
            SCH.context.authentication = authenticationManager.authenticate(authentication)
        } catch (BadCredentialsException e) {
            throw new SecurityException("Invalid username/password")
        }
    }

    void signOut() {
        SCH.context.authentication = null
    }

    boolean isSignedIn() {
        return springSecurityService.isLoggedIn()
    }
}
[/code]

Then I injected this SecurityService into my AddressBookApplication and used it:

[code=groovy]
class AddressBookApplication extends Application {

    private SecurityService security = (SecurityService) getBean(SecurityService)

    […]

    boolean login(String username, String password) {
        try {
            security.signIn(username, password)
            refreshToolbar()
            return true
        } catch (SecurityException e) {
            // SecurityService.signIn throws a SecurityException on bad credentials
            getMainWindow().showNotification(e.message, Notification.TYPE_ERROR_MESSAGE)
            return false
        }
    }
}
[/code]

Then I secured my business methods with @Secured annotations:

[code=groovy]
package org.epseelon.addressbook.business

import org.epseelon.addressbook.dto.PersonListItem
import org.epseelon.addressbook.domain.Person
import grails.plugins.springsecurity.Secured

class PersonService {

    static transactional = true

    […]

    @Secured(["ROLE_USER"])
    PersonListItem updatePerson(PersonListItem item) {
        Person p = Person.get(item.id)
        if (p) {
            p.firstName = item.firstName
            p.lastName = item.lastName
            p.email = item.email
            p.phoneNumber = item.phoneNumber
            p.streetAddress = item.streetAddress
            p.postalCode = item.postalCode
            p.city = item.city
            p.save()

            return new PersonListItem(
                    firstName: p.firstName,
                    lastName: p.lastName,
                    email: p.email,
                    phoneNumber: p.phoneNumber,
                    streetAddress: p.streetAddress,
                    postalCode: p.postalCode,
                    city: p.city
            )
        }
        return null
    }
}
[/code]

Now whenever I call such a @Secured method without being logged in as a user, I get an “access denied” exception, which I catch in the presentation layer:

[code=groovy]
package org.epseelon.addressbook.presentation.data

import com.vaadin.data.util.BeanItemContainer
import org.epseelon.addressbook.dto.PersonListItem
import org.epseelon.addressbook.business.PersonService
import com.vaadin.data.util.BeanItem
import com.vaadin.ui.Window.Notification
import org.epseelon.addressbook.presentation.AddressBookApplication

/**
 *
 * @author sarbogast
 * @version 19/02/11, 11:12
 */
class PersonContainer extends BeanItemContainer<PersonListItem> implements Serializable {

    […]

    boolean updateItem(Object itemId) {
        try {
            personService.updatePerson((PersonListItem) itemId)
            return true
        } catch (Exception e) {
            AddressBookApplication.application.getMainWindow().showNotification(
                    e.message,
                    Notification.TYPE_ERROR_MESSAGE
            )
            return false
        }
    }
}
[/code]

To see what it looks like, all you have to do is download the code from GitHub and run “grails run-app” at its root.
If you try to create a new contact or edit an existing one and save it without being logged in, you get an “access denied” message. But if you log in as ramon/password, it works.
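In case you wonder where the ramon/password account comes from: with spring-security-core, a test user like that is typically seeded in grails-app/conf/BootStrap.groovy. The following is only a sketch of how that might look with spring-security-core 1.x; the User, Role and UserRole class names are assumptions (they depend on what s2-quickstart generated in the actual project), so check the code on GitHub for the real thing.

[code=groovy]
// Hypothetical sketch of seeding a test account with spring-security-core 1.x;
// User, Role and UserRole are the classes typically generated by s2-quickstart
class BootStrap {

    def springSecurityService

    def init = { servletContext ->
        def userRole = Role.findByAuthority('ROLE_USER') ?:
                new Role(authority: 'ROLE_USER').save(failOnError: true)

        def ramon = User.findByUsername('ramon') ?: new User(
                username: 'ramon',
                password: springSecurityService.encodePassword('password'),
                enabled: true
        ).save(failOnError: true)

        if (!UserRole.findByUserAndRole(ramon, userRole)) {
            UserRole.create(ramon, userRole, true)
        }
    }
}
[/code]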

Note that this project uses Grails 1.3.6, but the plugin supports Grails 1.3.2 and above. As always, your feedback is more than welcome.

My Case for DTO’s

In many of my posts about Grails and Flex integration, I take for granted that I use Data Transfer Objects to transfer data between my Grails backend and my Flex frontend. Put simply, Data Transfer Objects are pure data-carrying classes, distinct from the domain entity classes used to persist data in the backend. I take it for granted because I’m deeply convinced that it’s the best way to do things, and so far, experience has never proved me wrong. But I often get this question in comments or by mail (this is for you, Martijn): why bother creating an entirely separate class structure and copying data from entities to DTOs and back, instead of just using entities?

I’ve expressed my arguments a couple of times across various posts, but I thought it would be nice to sum things up here for future reference.

Where does it come from?

When I first started to work on enterprise applications and ORM-based architectures, it was with a Model-Driven Architecture framework called AndroMDA. AndroMDA was absolutely key in helping me get started with Spring and Hibernate, and I was especially inspired by one paragraph in their “getting started” tutorial, which I quote here:

Data Propagation Between Layers

In addition to the concepts discussed previously, it is important to understand how data propagates between various layers of an application. Follow along the diagram above as we start from the bottom up.

As you know, relational databases store data as records in tables. The data access layer fetches these records from the database and transforms them into objects that represent entities in the business domain. Hence, these objects are called business entities.

Going one level up, the data access layer passes the entities to the business layer where business logic is performed.

The last thing to discuss is the propagation of data between the business layer and the presentation layer, for which there are two schools of thought. Some people recommend that the presentation layer should be given direct access to business entities. Others recommend just the opposite, i.e. business entities should be off limits to the presentation layer and that the business layer should package necessary information into so-called “value objects” and transfer these value objects to the presentation layer. Let’s look at the pros and cons of these two approaches.

The first approach (entities only, no value objects) is simpler to implement. You do not have to create value objects or write any code to transfer information between entities and value objects. In fact, this approach will probably work well for simple, small applications where the presentation layer and the service layer run on the same machine. However, this approach does not scale well for larger and more complex applications. Here’s why:

  • Business logic is no longer contained in the business layer. It is tempting to freely manipulate entities in the presentation layer and thus spread the business logic in multiple places — definitely a maintenance nightmare. In case there are multiple front-ends to a service, business logic must be duplicated in all these front-ends. In addition, there is no protection against the presentation layer corrupting the entities – intentionally or unintentionally!
  • When the presentation layer is running on a different machine (as in the case of a rich client), it is very inefficient to serialize a whole network of entities and send it across the wire. Take the example of showing a list of orders to the user. In this scenario, you really don’t need to transfer the gory details of every order to the client application. All you need is perhaps the order number, order date and total amount for each order. If the user later wishes to see the details of a specific order, you can always serialize that entire order and send it across the wire.
  • Passing real entities to the client may pose a security risk. Do you want the client application to have access to the salary information inside the Employee object or your profit margins inside the Order object?

Value objects provide a solution for all these problems. Yes, they require you to write a little extra code; but in return, you get a bullet-proof business layer that communicates efficiently with the presentation layer. You can think of a value object as a controlled view into one or more entities relevant to your client application. Note that AndroMDA provides some basic support for translation between entities and value objects, as you will see in the tutorial.

Because of this paragraph, I started writing all my business services with only data transfer objects (what they call “value objects”) as input and output. And it worked great. Yes it did require a little bit of coding, especially as I had not discovered Groovy yet, but it was worth the time, for all the following reasons.

The conceptual argument: presentation/storage impedance mismatch

Object-relational mapping is what Joel Spolsky calls a “Leaky Abstraction”. It’s supposed to hide away the fact that your business entities are actually stored in a relational database, but it forces you to make all sorts of choices because of that very fact. You have to save data in a certain order so as not to break integrity constraints, certain patterns are to be avoided for better query performance, and so on and so forth. So whether we like it or not, our domain model is filled with “relational choices”.
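To make this concrete, here is a hypothetical GORM domain class (the names are made up; it is not taken from any of my projects) where purely relational concerns such as table names, id generators and indexes leak straight into the business model:

[code=groovy]
// Hypothetical domain classes: every mapping choice below is driven by the
// relational store, not by the business domain
class Order {
    String orderNumber
    Date orderDate
    BigDecimal totalAmount

    static hasMany = [items: OrderItem]

    static mapping = {
        table 'customer_order'                                         // 'order' is a reserved word in SQL
        id generator: 'sequence', params: [sequence: 'customer_order_seq']
        orderDate index: 'idx_order_date'                              // indexed because the list screen sorts on it
    }
}

class OrderItem {
    String productName
    Integer quantity
    static belongsTo = [order: Order]
}
[/code]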

Now the way data is presented involves a whole different set of constraints. Data is very often presented in a master/detail format, which means you first display a list of items with only a few fields per item, and possibly some of those fields are calculated from data stored in the database. For example, you may store a country code in your database, but you will display the full country name in the list. And then when the user double-clicks an item, they can see all the fields for that item. This pattern is totally different from how you actually store the data.

So even though some of the fields in your DTOs will be mere copies of their counterparts in the entity, that’s only true for simple String-typed fields. As soon as you start dealing with dates, formatted floats or enum codes, there is some transformation involved, and doing all that transformation on the client side is not always the best option, especially when you have several user interfaces on top of your backend (a Flex app and an iPhone app, for example), in which case you’re better off doing most of these transformations on the server.
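As a small illustration (hypothetical classes, made-up names), here is the kind of server-side transformation I have in mind: the entity stores raw codes and dates, while the DTO carries presentation-ready values that any client, Flex or iPhone, can display as-is.

[code=groovy]
// Hypothetical sketch: raw codes in the entity, display-ready values in the DTO
class Customer {                    // GORM entity
    String name
    String countryCode              // e.g. "BE"
    Date registeredOn
}

class CustomerListItem {            // DTO sent to the UI
    String name
    String countryName              // e.g. "Belgium"
    String registeredOn             // already formatted
}

class CustomerService {
    static transactional = true

    CustomerListItem toListItem(Customer c) {
        new CustomerListItem(
                name: c.name,
                countryName: new Locale('', c.countryCode).displayCountry,
                registeredOn: c.registeredOn.format('dd/MM/yyyy')
        )
    }
}
[/code]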

In any case, if you change the way you store data, it should not influence too much the way you present that same data, and vice versa. This decoupling is very important to me.

The bandwidth argument: load just the data you need

In the master/detail use case, when you display the list of items, you just need a subset of the fields from your entities, not all of them. And even though you’re using Hibernate on the backend with lazy loading enabled, simple fields are still initialized and transferred over the wire. So if you use entity classes for data transfer, you will end up transferring a whole bunch of data that may never be used. It might not matter much for hundreds of records, but it starts being a problem with thousands, especially when there is some parsing involved. The less data you transfer, the better.
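Reusing the hypothetical Order entity sketched above, the list screen only ever needs three or four fields, so only those cross the wire, no matter how heavy the entity itself gets:

[code=groovy]
// Hypothetical sketch: only the fields the list screen displays are transferred
class OrderListItem {
    Long id
    String orderNumber
    Date orderDate
    BigDecimal totalAmount
}

class OrderService {
    static transactional = true

    List<OrderListItem> listOrders() {
        Order.list().collect { o ->
            new OrderListItem(id: o.id, orderNumber: o.orderNumber,
                    orderDate: o.orderDate, totalAmount: o.totalAmount)
        }
    }
}
[/code]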

The security argument: show only the data you want to show

Let’s say you’re displaying a list of users, and in the database, each user has a credit card number. When you display that list, you probably don’t want everyone to see the credit card numbers. You might want to expose this data only in the detail view, and only to users with certain privileges. DTOs allow you to tailor your API to expose just the data you want to expose.
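A hypothetical example (made-up classes again): the entity holds the sensitive data, and the DTOs decide what is allowed to leave the server, and for whom.

[code=groovy]
// Hypothetical sketch: the DTO layer is where you decide what gets exposed
class User {                        // GORM entity
    String username
    String email
    String creditCardNumber         // never serialized as-is
}

class UserListItem {                // what the list screen receives
    String username
    String email
    // no credit card field at all
}

class UserDetail {                  // what a privileged detail view receives
    String username
    String email
    String maskedCardNumber         // e.g. "**** **** **** 1234"
}
[/code]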

The error-prone argument: argh! Yet another LazyInitializationException!

Of course there are associations between your business entities, and by default those associations are lazy-loaded, which means they are not initialized until you actually query them. So if you just load a bunch of instances from your entity manager and send them over to your client, the client might end up with null collections. Now of course you can always pay attention, or use some tricks to initialize associations up to a certain level before you send your data, but this process is not automatic and it’s very error-prone. As for using things like dpHibernate, I think it just adds too much complexity and too many uncontrolled server requests.
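Reusing the hypothetical Order and OrderItem classes from above, here is how a detail method would fill the DTO inside the transactional service method, while the Hibernate session is still open, so the lazy items association is resolved on the server and the client never sees a half-initialized object graph:

[code=groovy]
// Same hypothetical OrderService as before, only the detail method shown;
// the lazy 'items' association is resolved here, inside the open session
class OrderDetail {
    String orderNumber
    List<String> itemLabels = []
}

class OrderService {
    static transactional = true

    OrderDetail getOrderDetail(Long orderId) {
        Order order = Order.get(orderId)
        if (!order) return null
        new OrderDetail(
                orderNumber: order.orderNumber,
                itemLabels: order.items.collect { "${it.quantity} x ${it.productName}".toString() }
        )
    }
}
[/code]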

The laziness argument: Come on! It’s not that hard!

I think that most of the time, the real reason why people don’t want to use DTOs is that they’re lazy. Creating new classes, maintaining code that does “almost” the same as existing code, adding some code to service implementations to copy data back and forth: all of that takes time and effort. But laziness has never been a good reason for ditching a design pattern altogether. Yes, sometimes best practices force us to do more work for the sake of the maintainability and robustness of our code, and for me the solution is certainly not to shortcut the whole practice, but to find the best tools to minimize the added work. With its property support and collection closures, Groovy makes creating, maintaining and feeding DTOs as simple and fast as it can be. AndroMDA had converters. There are even DTO-mapping frameworks like Dozer to help you. No excuse for laziness.
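For instance, with a Person entity and a PersonListItem DTO like the ones in my address book example, and assuming the property names match, the whole entity-to-DTO mapping for a list boils down to one closure (a sketch, not the actual plugin code):

[code=groovy]
// A sketch of how little code the mapping takes in Groovy, assuming the DTO's
// property names match the entity's
class PersonListService {
    static transactional = true

    List<PersonListItem> listPeople() {
        Person.list().collect { p ->
            new PersonListItem(p.properties.subMap(
                    ['firstName', 'lastName', 'email', 'phoneNumber',
                     'streetAddress', 'postalCode', 'city']))
        }
    }
}
[/code]

One Groovy line per field list, and the DTO stays a dumb, serializable bag of values.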

For me, all the reasons above far outweigh the added work of maintaining a parallel DTO structure.

Now of course, this is a very opinionated topic and you will probably have a different view. So all your comments are welcome, as long as they remain constructive and well-argued.

Flex on Grails, Take 2: Part 3

At the end of the second article in this series, we ended up with a working application, but it was not really ready for the real world because it had one major flaw: the URL of the AMF endpoint was hardcoded in the client, which made it impossible to change after compilation and very hard to handle several environments (dev, test, prod). The solution to that problem is to integrate dependency injection into the mix.

Now there are a lot of such frameworks for Flex/ActionScript applications, including Parsley, Swiz, Cairngorm, etc. But I’ve never been a big fan of those big MVC frameworks that impose their own interpretation of the MVC pattern and constrain the native capabilities of Flex itself. For me, the Flex framework is clean enough on its own that you don’t need all that overhead, and it’s better to use a non-intrusive framework like Spring ActionScript. So that’s what we are going to do.


JVM Web Framework Survey, First Results

Yesterday at Devoxx, Matt Raible gave a very interesting talk comparing JVM web frameworks. On this occasion, he had the incredible courage to voice his opinion on each of the most well-known frameworks, rate them in a matrix and, the craziest part, show this matrix to everyone.

Immediately after his talk, Twitter was on fire with advocates of each of those frameworks complaining about how those ratings were unfair and biased. I mean of course, I can hardly talk about 3 or 4 of these frameworks with the same level of confidence, but the guy has enough experience to have played with at least 13 of them, and it’s perfectly normal to expect him to have up-to-date and accurate feedback about all of them.

Anyway, his talk was highly entertaining, and in the end it left me with two thoughts:

  1. His list of 20 criteria is excellent and covers pretty much everything, except maybe “graphics design integration”, which I think is very important and which some frameworks make much easier than others, like Flex with Flash Catalyst for example. So even if you don’t agree with the ratings, you can still reuse his methodology, build 13 proofs of concept and rate them yourself.
  2. Rather than complaining, let’s do a survey.

So right after the talk, I built a small Google Docs survey, and to date I have received 26 answers, which I think is far fewer than the number of complaints, but it’s already a good start. Here are the first results based on those 26 responses. As you can see, there was a small issue with Google Docs, which didn’t correctly save the results for the last criterion, degree of risk. So the rankings are based on only 19 criteria so far. But if you still want to voice your opinion, you can. I will update the results from time to time, and this time the 20th criterion will be taken into account.

Thanks again Matt Raible for this very inspiring talk, and thanks to all the Devoxx team for yet another memorable edition.

By the way, as far as I’m concerned, I’m pretty happy with Grails and Flex at the moment, but after the amazing demos I saw at Devoxx, I will probably take a deeper look at Vaadin very soon. And please, JetBrains, we need more visual designers (Flex? Vaadin?).

Flex on Grails: Take 2, Part 2

This is a follow-up post to Flex on Grails, Take 2.

Task creation

Now that we have a basic working application, let’s improve it by adding some security. If your Grails application is still running, you can leave it that way, as you won’t need to restart your backend every time you modify it. That’s the whole beauty and productivity of Grails and this plugin.
