Python: Support Prepared Requests - Experimental Module
Aims to add support for the requests library's Prepared Requests as part of the Http::Client::Request API. To support the required getAUrlPart method, we track from the send call back to the Request object, where the URL is supplied at argument position 1. This is intended to mirror the existing OutgoingRequestCall getAUrlPart behavior for traditional requests calls.
https://requests.readthedocs.io/en/latest/user/advanced/#prepared-requests
Example 1:
import requests
req = requests.Request('GET', 'https://example.com')
r = req.prepare()
s = requests.Session()
res = s.send(r, verify=True)
Example 2:
from requests import Request, Session
s = Session()
def getprepped():
    req = Request('GET', 'https://example.com')
    prepped = req.prepare()
    # do something with prepped.body
    prepped.body = 'No, I want exactly this as the body.'
    return prepped
prepped = getprepped()
resp = s.send(prepped,
verify=True
)
Example 3:
from requests import Request, Session
from example_2 import getprepped
s = Session()
prepped = getprepped()
resp = s.send(prepped,
verify=True
)
Example 4:
from requests import Request, Session
s = Session()
def getRequest():
    req = Request('GET', 'https://example.com')
    return req

def getprepped():
    req = getRequest()
    prepped = req.prepare()
    # do something with prepped.body
    prepped.body = 'No, I want exactly this as the body.'
    return prepped
prepped = getprepped()
resp = s.send(prepped,
verify=True
)
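As a quick illustration of the tracking described above: the prepared request retains the URL that was passed at argument position 1 of Request, which is what getAUrlPart should recover at the eventual send() call (a minimal sketch, assuming the requests package is installed; preparing a request needs no network access):

```python
import requests

# The URL is supplied at argument position 1 of requests.Request;
# getAUrlPart for the eventual Session.send call should track back to it.
req = requests.Request('GET', 'https://example.com/path')
prepped = req.prepare()

# The prepared request still carries the URL it was constructed with.
print(prepped.url)
```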
The disablesCertificateValidation and getFramework methods are the same as in the existing OutgoingRequestCall.
Thank you for your contribution, this looks good!
Would you mind adding some test cases in python/src/experimental/library-tests?
Awesome, thanks! Sure, I will give it a try. Are there docs on how to write test cases? I tried looking for some but came up short.
You can create a directory with some Python files and some .ql files that query the things you'd like to test, and a .expected file with the same name as the .ql file that will contain the expected output (you can start with it empty). Then you should be able to use codeql test from the CodeQL CLI, or the test runner in the CodeQL VS Code extension, to run the tests, and to accept the output (copying the .actual file generated by the test run to the .expected file).
You can also use .qlref files in place of .ql files to link to an existing query.
If you don't get the results you expected, a .testproj database will be generated that you can open with the VS Code extension and run queries against to debug.
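The setup described above can be sketched as a shell session (the directory and file names here are illustrative, not the repository's actual layout):

```shell
# Create a minimal test directory with a Python file to analyze:
mkdir -p library-tests/frameworks/requests
cat > library-tests/frameworks/requests/PreparedRequests.py <<'EOF'
import requests
req = requests.Request('GET', 'https://example.com')
prepped = req.prepare()
requests.Session().send(prepped, verify=True)
EOF

# A query over the things to test (or a .qlref pointing at an existing
# query), plus an initially empty .expected file with the same base name:
touch library-tests/frameworks/requests/Test.ql
touch library-tests/frameworks/requests/Test.expected

# Then, with the CodeQL CLI (commented out here, since it needs codeql
# on PATH):
#   codeql test run library-tests/frameworks/requests
# and, once happy with the .actual output:
#   codeql test accept library-tests/frameworks/requests
ls library-tests/frameworks/requests
```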
This PR has been inactive for a while, and is being closed. Please reopen the PR or open a new one if you'd like to move this forward.