I’m really sad to see that Google continues its approach of tolerating (and thus, in my view, encouraging) people who build sites that break the Web’s architecture. First we had the hashbang mess (tl;dr version: Ajax/JS-only sites suck; long version here), and now the Google crawler will issue POST requests (no kidding).
Sure, there are worse things happening in the world, but from a REST perspective this is so utterly, totally wrong that it makes me really mad. A GET request is the only thing a crawler should ever issue if it intends to conform to the architecture of the Web, because GET is defined to be safe: it must not have side effects the client can be held accountable for. Issuing POSTs just because so many people don’t understand the distinction between GET and POST (or use crappy web frameworks that don’t respect it) only means that even more people will ignore it. In the end, everyone will have to rely on heuristics to find out what can be called safely and what can’t, effectively trading specified behavior for the kind of crap you usually only get when something evolves without any architectural vision.
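To make the distinction concrete, here is a minimal sketch (using Flask; the routes and handlers are hypothetical, not anything Google actually crawls) of why GET is the only request a well-behaved crawler should issue: the GET handler merely reads state, while the POST handler changes it, so a crawler that “tries out” forms with POST would create data nobody asked for.

```python
# Minimal sketch (hypothetical routes) of the GET/POST distinction.
from flask import Flask, jsonify, request

app = Flask(__name__)
comments = []  # in-memory store, just for illustration


@app.route("/comments", methods=["GET"])
def list_comments():
    # Safe: reading the resource has no side effects, so a crawler
    # (or a cache, or a prefetcher) may call this as often as it likes.
    return jsonify(comments)


@app.route("/comments", methods=["POST"])
def add_comment():
    # Unsafe: this changes server state. A crawler that submits this
    # form with a POST creates junk data the site owner never wanted.
    comments.append(request.form.get("text", ""))
    return jsonify({"count": len(comments)}), 201


if __name__ == "__main__":
    app.run()
```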
Google’s very core business was enabled by the Web’s architecture; now they’re slowly helping to ruin it.
“Do no evil” my ass.
“The Web is more a social creation than a technical one.” (Tim Berners-Lee)
Perhaps this robot POST abuse will force more people to read the specification and make their servers conform to its standard and security requirements. Googlebot now offers a free test not only for GETs, but for POSTs as well.
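If nothing else, it is a reminder that state-changing endpoints need their own protection. Here is a sketch of one common safeguard, a CSRF token check (route and names are hypothetical), which would at least make a stray crawler POST harmless:

```python
# Sketch of a CSRF-token check (hypothetical names): a POST that does not
# carry a token the server itself handed out is refused with a 403,
# so it cannot change any state.
import secrets

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # needed for the session cookie


def issue_csrf_token():
    # Call this when rendering the form and embed the token in it.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token


@app.route("/comments", methods=["POST"])
def add_comment():
    sent = request.form.get("csrf_token", "")
    if not secrets.compare_digest(sent, session.get("csrf_token", "")):
        abort(403)  # no valid token: refuse to change any state
    # ... only now is it safe to apply the state change ...
    return "", 204
```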