First the hook: the game Yavalath sounds interesting. It's played on a hex map (a hex of hexes); two players take turns placing pieces, one color per player; you win if you get four in a row of your color ... but you LOSE if you place three in a row.
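To make that win/lose rule concrete, here's a minimal sketch of the line check in Python. The board representation (a dict keyed by axial hex coordinates) and all the names are my own assumptions, not anything from Ludi or the official rules:

```python
# The three axes of a hex grid in axial (q, r) coordinates.
HEX_DIRECTIONS = [(1, 0), (0, 1), (1, -1)]

def run_length(board, cell, color, direction):
    """Count consecutive same-color pieces through `cell` along one axis."""
    (q, r), (dq, dr) = cell, direction
    count = 1
    for sign in (1, -1):  # walk out both ways from the new piece
        step = 1
        while board.get((q + sign * step * dq, r + sign * step * dr)) == color:
            count += 1
            step += 1
    return count

def judge_move(board, cell, color):
    """Play `color` at `cell`: four in a row wins, else three in a row loses."""
    board[cell] = color
    longest = max(run_length(board, cell, color, d) for d in HEX_DIRECTIONS)
    if longest >= 4:
        return "win"      # four (or more) takes precedence over three
    if longest == 3:
        return "lose"
    return "continue"
```

The one subtlety is the ordering: a move that completes both a three and a four at once counts as a win, which is why the four-check comes first.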
The Ludi program uses a sort of evolutionary programming process: it scrambles a bunch of game rules to make new rule sets, then simulates playing the resulting games and applies a heuristic to decide which of them would be interesting to human players.
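If I read that description right, the core loop is classic evolutionary search. Here's a hedged Python sketch of that shape; the `crossover`, `mutate`, `playable`, and `interest` callables are stand-ins for whatever operators and heuristic Ludi actually uses, and none of these names come from the project:

```python
import random
from typing import Callable, List, TypeVar

Rules = TypeVar("Rules")

def evolve_games(
    seed: List[Rules],            # at least two starting rule sets
    crossover: Callable[[Rules, Rules], Rules],
    mutate: Callable[[Rules], Rules],
    playable: Callable[[Rules], bool],
    interest: Callable[[Rules], float],
    generations: int = 50,
    pop_size: int = 20,
) -> List[Rules]:
    """Recombine rule sets, drop degenerate games, and keep the candidates
    whose simulated play scores highest on the interest heuristic."""
    population = list(seed)
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            a, b = random.sample(population, 2)  # pick two parent rule sets
            child = mutate(crossover(a, b))      # scramble their rules
            if playable(child):                  # cull broken or trivial games
                offspring.append(child)
        # Survival of the fittest rule sets, parents included.
        population = sorted(population + offspring,
                            key=interest, reverse=True)[:pop_size]
    return population
```

All the interesting work hides in `interest`, which in Ludi's case apparently comes from scoring simulated play-throughs.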
Further, the project's researchers had people play the top-ranked games (Ludi generated 1048 games and flagged 19 as interesting to humans). The people's choice, Yavalath, was actually #4 on Ludi's list.
Other details make it clear that the algorithm for predicting human interest isn't perfect. One of the highly-rated games had rules too complex for people to enjoy. But so what? Think of this thing as a game design tool, a way to test and iterate on rule options. Isn't that the perennial problem of game design -- one can imagine many rule choices, but it's expensive to try them all?
The story I read about the famous German game Settlers of Catan is that its designer, Klaus Teuber, spent months playing the game with his family every night, testing out rule variations. That's fantastic, and it helps explain the game's widespread popularity, but few people can make that sort of testing happen. I know I've played many a game that didn't seem like it had been playtested nearly enough.