
Using the Requests Module in Python


Requests is a Python module that you can use to send all kinds of HTTP requests. It is an easy-to-use library with a lot of features, ranging from passing parameters in URLs to sending custom headers and SSL verification. In this tutorial, you will learn how to use this library to send simple HTTP requests in Python.

You can use Requests with Python versions 2.6, 2.7, and 3.3 to 3.6. Before proceeding further, you should know that Requests is an external module, so you will have to install it before trying out the examples in this tutorial. You can install it by running the following command in the terminal:

pip install requests

Once you have installed the module, you can verify if it has been successfully installed by importing it using this command:

import requests

If the installation has been successful, you won't see any error messages.

Making a GET Request

It is very easy to send an HTTP request using Requests. You begin by importing the module and then make the request. Here is an example:

import requests
req = requests.get('https://tutsplus.com/')

All the information about our request is now stored in a Response object called req. For example, you can get the encoding of the webpage using the req.encoding property. You can also get the status code of the request using the req.status_code property.

req.encoding # returns 'utf-8'
req.status_code # returns 200

You can access the cookies that the server sent back using req.cookies. Similarly, you can get the response headers using req.headers. The req.headers property returns a case-insensitive dictionary of response headers. This means that req.headers['Content-Length'], req.headers['content-length'] and req.headers['CONTENT-LENGTH'] will all return the value of the 'Content-Length' response header.
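
For illustration, here is a minimal sketch of reading cookies and headers from the response above. The server may or may not set any cookies, and the Content-Length header may be absent, which is why .get() is used:

import requests
req = requests.get('https://tutsplus.com/')

# Cookies come back as a RequestsCookieJar; it can be converted to a plain dict
cookies = requests.utils.dict_from_cookiejar(req.cookies)

# The headers dictionary is case insensitive, so these are all equivalent
req.headers.get('Content-Length')
req.headers.get('content-length')
req.headers.get('CONTENT-LENGTH')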

You can check if the response is a well-formed HTTP redirect that could have been processed automatically using the req.is_redirect property. It will return True or False based on the response. You can also get the time elapsed between sending the request and getting back a response using the req.elapsed property.

The URL that you initially passed to the get() function can be different than the final URL of the response for a variety of reasons, including redirects. To see the final response URL, you can use the req.url property.

import requests
req = requests.get('http://www.tutsplus.com/')
req.encoding # returns 'utf-8'
req.status_code # returns 200
req.elapsed # returns datetime.timedelta(0, 1, 666890)
req.url # returns 'https://tutsplus.com/'
req.history
# returns [<Response [301]>, <Response [301]>]
req.headers['Content-Type']
# returns 'text/html; charset=utf-8'

Getting all this information about the webpage you are accessing is nice, but you most probably want to access the actual content. If the content you are accessing is text, you can use the req.text property to access it. The content is then decoded as Unicode. You can set the encoding used to decode the text via the req.encoding property.
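
As a quick sketch (using the same URL as before), you can inspect the guessed encoding, read the decoded text, and override the encoding if the guess is wrong:

import requests
req = requests.get('https://tutsplus.com/')

req.encoding        # encoding guessed by Requests, e.g. 'utf-8'
req.text[:100]      # first 100 characters of the decoded HTML

# Setting req.encoding changes how req.text is decoded from now on
req.encoding = 'ISO-8859-1'
req.text[:100]      # now decoded using ISO-8859-1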

In the case of non-text responses, you can access them in binary form using req.content. The module will automatically decode gzip and deflate transfer-encodings. This can be helpful when you are dealing with media files. Similarly, you can access the JSON-encoded content of the response, if it exists, using req.json().
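
Here is a rough sketch of both cases. The image URL is a placeholder, and httpbin.org/get is used only as an example of an endpoint that returns JSON:

import requests

# Binary content of a media response (gzip/deflate already decoded)
img = requests.get('path/to/forest.jpg')
image_bytes = img.content

# JSON-encoded content parsed into a Python dict
resp = requests.get('https://httpbin.org/get')
data = resp.json()
data['url']         # returns 'https://httpbin.org/get'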

You can also get the raw response from the server using req.raw . Keep in mind that you will have to pass stream=True in the request to get the raw response.
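
A minimal sketch of accessing the raw response, assuming you only need the first few undecoded bytes:

import requests
req = requests.get('https://tutsplus.com/', stream=True)

req.raw             # returns the underlying urllib3 response object
req.raw.read(10)    # first 10 bytes of the raw, undecoded stream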

Some files that you download from the internet using the Requests module may be huge. In such cases, it is not wise to load the whole response or file into memory at once. You can download a file in pieces or chunks using the iter_content(chunk_size=1, decode_unicode=False) method.

This method iterates over the response data in chunk_size number of bytes at once. When stream=True has been set on the request, this method will avoid reading the whole file into memory at once for large responses. The chunk_size parameter can be either an integer or None. When set to an integer value, chunk_size determines the number of bytes that should be read into memory.

When chunk_size is set to None and stream is set to True, the data will be read as it arrives, in whatever size chunks are received. When chunk_size is set to None and stream is set to False, all the data will be returned as a single chunk.

Let's download an image of a forest from Pixabay using the Requests module.



This is the code that you need:

import requests
req = requests.get('path/to/forest.jpg', stream=True)
req.raise_for_status()
with open('Forest.jpg', 'wb') as fd:
    for chunk in req.iter_content(chunk_size=50000):
        print('Received a Chunk')
        fd.write(chunk)

The 'path/to/forest.jpg' is the actual image URL; you can put the URL of any other image here to download something else. The given image file is 185 KB in size, and you have set chunk_size to 50,000 bytes. This means that the "Received a Chunk" message should be printed four times in the terminal. The size of the last chunk will just be 39350 bytes because the part of the file that remains to be received after the first three iterations is 39350 bytes.

Requests also allows you to pass parameters in a URL. This can be helpful when you are searching a webpage for some results like a specific image or tutorial. You can provide these query strings as a dictionary of strings using the params keyword in the GET request. Here is an example:

import requests
query = {'q': 'Forest', 'order': 'popular', 'min_width': '800', 'min_height': '600'}
req = requests.get('https://pixabay.com/en/photos/', params=query)
req.url
# returns 'https://pixabay.com/en/photos/?order=popular&min_height=600&q=Forest&min_width=800'

Making a POST Request

Making a POST request is just as easy as making a GET request. You just use the post() function instead of get(). This can be useful when you are automatically submitting forms. For example, the following code will download the whole Wikipedia page on Nanotechnology and save it on your PC.

import requests
req = requests.post('https://en.wikipedia.org/w/index.php', data={'search': 'Nanotechnology'})
req.raise_for_status()
with open('Nanotechnology.html', 'wb') as fd:
    for chunk in req.iter_content(chunk_size=50000):
        fd.write(chunk)

Sending Cookies and Headers

As previously mentioned, you can access the cookies and headers that the server sends back to you using req.cookies and req.headers. Requests also allows you to send your own custom cookies and headers with a request. This can be helpful when you want to, let's say, set a custom user agent for your request.
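
As an illustration, here is a minimal sketch of sending custom headers and cookies; the user agent string and cookie value are made up, and httpbin.org/headers is used only because it echoes back the headers it receives:

import requests

headers = {'user-agent': 'my-app/1.0'}       # hypothetical custom user agent
cookies = {'session_token': 'abc123'}        # hypothetical cookie value

req = requests.get('https://httpbin.org/headers', headers=headers, cookies=cookies)
req.json()          # the echoed request headers, as a Python dict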
