Web scraping remax.com with Python

Submitted by 我们两清 on 2019-12-24 20:44:15

Question:


This is similar to a question I asked here, which was answered perfectly. Now that I have something to work with, instead of manually entering a URL to pull data from, I want to write a function that takes just the address and zipcode and returns the data I want.

The problem is constructing the correct URL. For example:

url = 'https://www.remax.com/realestatehomesforsale/25-montage-way-laguna-beach-ca-92651-gid100012499996.html'

I see that besides the address, state, and zipcode, the URL also contains a number, i.e. gid100012499996, which seems to be unique to each address. So I am not sure how to build the URL the function would need.

Here is my code:

import urllib.request  # 'import urllib' alone does not expose the request submodule
from bs4 import BeautifulSoup
import pandas as pd

def get_data(url):
    hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
            'Accept-Encoding': 'none',
            'Accept-Language': 'en-US,en;q=0.8',
            'Connection': 'keep-alive'}
    request = urllib.request.Request(url, headers=hdr)
    html = urllib.request.urlopen(request).read()

    soup = BeautifulSoup(html,'html.parser')
    foot = soup.find('span', class_="listing-detail-sqft-val")
    print(foot.text.strip())

url = 'https://www.remax.com/realestatehomesforsale/25-montage-way-laguna-beach-ca-92651-gid100012499996.html'
get_data(url)

What I want is something like the above, but with get_data() taking the address, state, and zipcode instead of a URL. My apologies if this is not a suitable question for this site.


Answer 1:


The site has a JSON API that returns the details of all properties within a given rectangle, specified by the latitude and longitude of its NW and SE corners. The following request shows a possible search:

import requests

params = {
    "nwlat" : 41.841966864112,          # Calculate from address
    "nwlong" : -74.08774571289064,      # Calculate from address
    "selat" : 41.64189784194883,        # Calculate from address
    "selong" : -73.61430363525392,      # Calculate from address
    "Count" : 100,
    "pagenumber" : 1,
    "SiteID" : "68000000",
    "pageCount" : "10",
    "tab" : "map",
    "sh" : "true",
    "forcelatlong" : "true",
    "maplistings" : "1",
    "maplistcards" : "0",
    "sv" : "true",
    "sortorder" : "newest",
    "view" : "forsale",
}

req_properties = requests.get("https://www.remax.com/api/listings", params=params)
matching_properties_json = req_properties.json()

for p in matching_properties_json[0]:   # the first element of the response holds the listing records
    print(f"{p['Address']:<40}  {p.get('BedRooms', 0)} beds | {float(p.get('BathRooms', 0))} baths | {p['SqFt']} sqft")

This returns 100 results (a tighter rectangle would obviously reduce that number). For example:

3 Pond Ridge Road                         2 beds | 3.0 baths | 2532 sqft
84 Hudson Avenue                          3 beds | 1.0 baths | 1824 sqft
116 HUDSON POINTE DR                      2 beds | 3.0 baths | 2455 sqft
6 Falcon Drive                            4 beds | 3.0 baths | 1993 sqft
53 MAPLE                                  5 beds | 2.0 baths | 3511 sqft
4 WOODLAND CIR                            3 beds | 2.0 baths | 1859 sqft
...
95 S HAMILTON ST                          3 beds | 1.0 baths | 2576 sqft
40 S Manheim Boulevard                    2 beds | 2.0 baths | 1470 sqft

Given an address, you would first need to geocode it to a latitude and longitude, then create a small rectangle around that point to get the NW and SE corners, and build the request with those numbers. You will then get a list of all properties (hopefully just one) in that area.
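Since the rectangle may still contain neighbouring properties, you could filter the returned listings down to the exact street address. This is a sketch; it relies only on the Address field shown in the JSON loop above, and the sample data is stand-in dictionaries shaped like that output, not a real API response:

```python
def match_address(listings, street):
    """Return the listings whose Address field matches the target street.

    `listings` is the list of property records (the first element of the
    JSON response); the comparison is case-insensitive and ignores
    surrounding whitespace.
    """
    target = street.strip().lower()
    return [p for p in listings if p.get("Address", "").strip().lower() == target]

# Stand-in data shaped like the API output above:
sample = [{"Address": "3 Pond Ridge Road"}, {"Address": "84 Hudson Avenue"}]
print(match_address(sample, "84 hudson avenue"))  # [{'Address': '84 Hudson Avenue'}]
```

If the zipcode is also present in the records, adding it to the comparison would make the match more robust against duplicate street names.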


To make a search square, you could use something like:

lat = 41.841966864112
long = -74.08774571289064
square_size = 0.001

params = {
    "nwlat" : lat + square_size,
    "nwlong" : long - square_size,
    "selat" : lat - square_size,
    "selong" : long + square_size,
    "Count" : 100,
    "pagenumber" : 1,
    "SiteID" : "68000000",
    "pageCount" : "10",
    "tab" : "map",
    "sh" : "true",
    "forcelatlong" : "true",
    "maplistings" : "1",
    "maplistcards" : "0",
    "sv" : "true",
    "sortorder" : "newest",
    "view" : "forsale",
}

square_size would need to be adjusted depending on how accurate your address coordinates are.
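The bounding-box construction above can be factored into a small helper, so that only the geocoding step (not shown here; a service such as Nominatim via the geopy package could supply lat/long) touches the network. The fixed fields simply mirror the search shown earlier:

```python
def bounding_params(lat, long, square_size=0.001):
    """Build the query parameters for the listings API: a small square
    centred on (lat, long), plus the fixed search fields used above."""
    return {
        "nwlat": lat + square_size,
        "nwlong": long - square_size,
        "selat": lat - square_size,
        "selong": long + square_size,
        "Count": 100,
        "pagenumber": 1,
        "SiteID": "68000000",
        "pageCount": "10",
        "tab": "map",
        "sh": "true",
        "forcelatlong": "true",
        "maplistings": "1",
        "maplistcards": "0",
        "sv": "true",
        "sortorder": "newest",
        "view": "forsale",
    }

# Same search as before, expressed through the helper:
params = bounding_params(41.841966864112, -74.08774571289064)
```

You would then pass the result straight to requests.get("https://www.remax.com/api/listings", params=params) as in the first snippet.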



Source: https://stackoverflow.com/questions/54894330/web-scrapping-remax-com-for-python
