Update Aug 1, 2011: With Google's acquisition of Dealmap, one would have to assume this will become a Google API. The deal raises a number of interesting questions about how deals would be served to specific clients. Should be interesting to watch.
Update Jun 11, 2011: One more tweak to the Dealmap python API that I neglected to mention, and which I highly recommend if you intend to use it in a production environment: modify the call to urllib2.urlopen to add the timeout parameter. In fact, in my revision I've modified all of the calls into the API to accept a timeout parameter, which is passed along to the urllib2.urlopen call.
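A minimal sketch of that tweak, assuming a single GET helper; the name dealmap_get and the 10-second default are mine for illustration, not the wrapper's actual names:

```python
try:
    from urllib2 import urlopen          # python 2, which the Dealmap wrapper targets
except ImportError:                      # python 3 fallback, for illustration only
    from urllib.request import urlopen

def dealmap_get(url, timeout=10.0):
    # Pass timeout (in seconds) straight through to urlopen so a slow or
    # unresponsive Dealmap server raises an error instead of blocking forever.
    return urlopen(url, timeout=timeout).read()
```

Threading a timeout keyword through every API entry point the same way means callers can tune it per request.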
I’ve been looking into the DealMap API and found what appears to be a semi/partially/maybe/sorta-official python implementation, though there appears to be a missing module reference called “Util” containing an ordered dictionary and some XML serialization bits. I posted a message to the Google group for the project and even followed up with Dealmap directly, but unfortunately haven’t gotten any response — not exactly a great sign. Seeing as this is a pretty straightforward REST API, though, let’s move on, shall we...
There’s a fairly simple workaround for this unknown (at least to me) module using Beautiful Soup (gotta love that domain name) and a few simple wrapper classes that make it easier to work with the API. Beautiful Soup is defined thusly:
Beautiful Soup is an HTML/XML parser for Python that can turn even invalid markup into a parse tree. It provides simple, idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.
In dealmap.py I removed all of the Util module references and changed the deserialize calls to return an instance of a BeautifulStoneSoup wrapper class called Deals:
class Deals(BeautifulStoneSoup):
    def __init__(self, dealmarkup=""):
        BeautifulStoneSoup.__init__(self, dealmarkup,
                                    convertEntities=BeautifulStoneSoup.XML_ENTITIES)
        self._deals = None

    def getDeal(self, index):
        deals = self.getDeals()
        return deals[index]

    def getDeals(self):
        # Lazily find every <deal> element and wrap each one in a Deal
        # (singular) instance the first time the list is requested.
        if self._deals is None:
            l = []
            for d in self.findAll('deal'):
                l.append(Deal(d))
            self._deals = l
        return self._deals
Here’s a method from the Service class where I replaced the call to deserialize(…) with my new Deals class:
def search_deals(self, activities, capabilities, expirationDate, location,
                 query="*", distance=5.0, startIndex=0, pageSize=20):
    searchDeals = self.__build_get_url(self.__dealmapUrls["search_deals"],
                                       ...)  # remaining query arguments elided
    result = self.__dealmap_get_request(searchDeals)
    obj = Deals(result)
    return obj
In the Deals class above you might have noticed I reference a Deal (singular) class, which is a thin wrapper for accessing the properties of the deal Tag object returned via BeautifulSoup, using a __getattr__ override:
class Deal:
    def __init__(self, dealtag):
        self._dealtag = dealtag

    def __getattr__(self, name):
        # Delegate unknown attribute lookups to the underlying BeautifulSoup
        # Tag via item access; this body is my reconstruction of the override.
        return self._dealtag[name]
Accessing information from a deal now looks like this:
dealmap = Service("dealmap_api_key")
deals = dealmap.search_deals(None, None, None, location="+37.0491490732-122.025146484")
d = deals.getDeal(0)
You can specify any of the XML child property names for a Deal and they’ll be returned (moreinfolink is one such example).
Fair warning: what’s posted here has essentially no error checking.
By the way, having spent some time with the API, I’ve found the performance to be much better if you provide a lat/lng location rather than city/state or zip; the latter seems particularly slow.
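If you’re starting from numeric coordinates, the signed, concatenated string used in the search example above can be built like this; latlng_location is an illustrative helper name, not part of the API wrapper:

```python
def latlng_location(lat, lng):
    # Dealmap's location parameter is a signed lat/lng pair run together,
    # e.g. "+37.0491490732-122.025146484". %+.10f keeps the explicit sign
    # and ten decimal places on each coordinate.
    return "%+.10f%+.10f" % (lat, lng)

print(latlng_location(37.0491490732, -122.025146484))
# → +37.0491490732-122.0251464840
```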
If you’ve worked with this API I’d be interested to get your impressions.