Last week saw the kick-off of the first completely unofficial Dutch Wordfeud tournament. I’m competing, and so far, things are going great. I have no illusions about making it to the next round though. Even though I’m winning most of the round one games, my scores are mediocre at best, and the accumulated total scores decide which sixteen players will compete in round two. But there’s a catch…
Although I wouldn’t really mind losing, I’d hate to lose to cheaters. There are plenty of cheat apps out there that can suggest the best possible word to put on the board. Some are so advanced that they interpret screenshots of the game, or even interface directly with the Wordfeud server. To see if there was any way to spot these apps at work, my wife and I decided to play a game of Wordfeud using the best cheat app on each of our phone platforms and see what would happen.
Scrabulizer for iOS
Scrabulizer takes a screenshot of the current game, recognizes all the words currently on the board, and then finds the best move. It uses the same dictionary as Wordfeud does, and offers a list of suggestions sorted by potential score. In our test, we used the top suggestion for each turn. Taking screenshots all the time is a bit of a pain, but aside from that the app is really easy to work with.
Wordfeud Helper plugin beta
Wordfeud Helper uses a different approach. It connects directly to your Wordfeud account, lets you select any of your current games, and then does the same thing Scrabulizer does. It too offers a list of suggestions sorted by score, but there’s a nasty bug that affects Dutch language games.
It appears that the app keeps using the English letter values with the Dutch dictionary, which means the predicted score can be off by as much as 20 points. You could probably work around this by manually calculating the scores for the top ten suggestions, but in our test we simply used the top one. If this bug gets fixed, chances are that “Helper” will perform identically to its iOS competitor.
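To see why the mismatch matters, here’s a minimal sketch of the manual workaround: rescoring a suggested word with the correct locale’s letter values. The point values below are illustrative examples, not the official Wordfeud tables, and board multipliers are ignored.

```python
# Illustrative letter values only -- NOT the official Wordfeud tables.
# The bug: the app scores Dutch words with the English table.
ENGLISH_VALUES = {"a": 1, "e": 1, "i": 1, "j": 10, "k": 5, "n": 1}
DUTCH_VALUES   = {"a": 1, "e": 1, "i": 2, "j": 4,  "k": 3, "n": 1}

def tile_score(word, values):
    """Sum the base letter values for a word (ignoring board multipliers)."""
    return sum(values[letter] for letter in word.lower())

word = "kijken"  # Dutch for "to look"
predicted = tile_score(word, ENGLISH_VALUES)  # what the buggy app reports
actual = tile_score(word, DUTCH_VALUES)       # what Wordfeud actually awards
print(predicted, actual)
```

With high-value Dutch letters like J and K involved, the gap between the two tables adds up quickly, which is why the app’s ranking of suggestions can’t be trusted in Dutch games.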
The movie above shows the game from the iPhone end. The final score was iPhone: 573, Android: 361. However, the Android “player” kept drawing worse letters than the iPhone, and the language bug probably hurt its performance as well. I’ll try to play an English game and update this post, but the main conclusion has to be that playing with these apps is no fun at all, even though it sometimes results in monster scores.
As you can see, these apps tend to cluster words together. Human players tend to look for open spaces, but it makes sense that packing words tightly together, thereby extending or creating “collateral words”, yields higher scores. I’m not sure whether the difference is pronounced enough to spot cheaters, but this pattern may provide an excellent clue.
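The scoring logic behind that pattern can be sketched in a few lines: a move’s total is the main word plus every collateral cross-word it forms, so a cramped placement beats an open one. Letter values here are illustrative, not the official Wordfeud tables.

```python
# Why tight packing pays off: the move total includes every cross-word formed.
# Illustrative letter values only -- not the official Wordfeud tables.
VALUES = {"c": 4, "a": 1, "t": 1, "s": 1, "o": 1}

def word_score(word):
    """Base score of a single word (board multipliers ignored)."""
    return sum(VALUES[ch] for ch in word)

def move_score(main_word, cross_words):
    """Total for a move: the main word plus each collateral word it creates."""
    return word_score(main_word) + sum(word_score(w) for w in cross_words)

# Playing CATS in open space scores only the word itself...
open_play = move_score("cats", [])
# ...but squeezing it against existing tiles might also form, say, AT and SO:
packed_play = move_score("cats", ["at", "so"])
print(open_play, packed_play)
```

A cheat app searching for the maximum total will naturally gravitate toward placements with many cross-words, which is exactly the dense clustering visible in these games.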
Here’s another video of both apps playing against each other. This time the game is English, and I’m happy to report that “Helper” did predict the correct scores. Again, there’s a similar pattern in the way words are connected.
Unfortunately, this particular game unearthed another shortcoming in the Android app: it does not use blank tiles at all. The Android player got one early on in the game, and the app only used the other six letters from then on. This caused the app to lose the game, and by quite a large margin (340 vs. 501).