Now that we use Guzzle as well, and no longer require cURL for
any basic feature, our code should not assume that the backend
is cURL. Such assumptions may be wrong and lead to confusion
and/or bugs.
Bug: T137926
Change-Id: I6ad7f76768348e1eb8c1fb46c8125cce9285dc22
* Uses asynchronous I/O, allowing purges to be done in a highly parallel
* manner.
*
- * Could be replaced by curl_multi_exec() or some such.
+ * @todo Consider using MultiHttpClient.
*/
class SquidPurgeClient {
/** @var string */
}
/**
- * Checks that the given URI is a valid one. Hardcoding the
- * protocols, because we only want protocols that both cURL
- * and php support.
+ * Check that the given URI is a valid one.
*
- * file:// should not be allowed here for security purpose (r67684)
+ * This hardcodes a small set of protocols only, because we want to
+ * deterministically reject protocols not supported by all HTTP-transport
+ * methods.
+ *
+ * "file://" specifically must not be allowed, for security purposes
+ * (see <https://www.mediawiki.org/wiki/Special:Code/MediaWiki/r67684>).
*
* @todo FIXME this is wildly inaccurate and fails to actually check most stuff
*
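The deny-by-default whitelist described in the new docblock can be sketched as follows. This is an illustrative standalone helper, not MediaWiki's actual validation code; the function name and the exact scheme list are assumptions made for the example.

```php
<?php
// Sketch of the idea above: accept only a hardcoded set of URI schemes
// that every HTTP-transport method supports, and reject everything else
// (notably "file://") by default.
// isAllowedProtocol() and the scheme list are hypothetical, for illustration.
function isAllowedProtocol( string $uri ): bool {
	// Deliberately small whitelist; unknown schemes are rejected.
	$allowed = [ 'http', 'https' ];
	$scheme = parse_url( $uri, PHP_URL_SCHEME );
	// parse_url() yields null when no scheme is present and false when the
	// URI is seriously malformed; only a whitelisted string passes.
	return is_string( $scheme ) && in_array( strtolower( $scheme ), $allowed, true );
}
```

Rejecting by default means that adding a new transport backend can only widen the whitelist deliberately, never accept a scheme by accident.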
$scalerThumbUrl = $scalerBaseUrl . '/' . $file->getUrlRel() .
'/' . rawurlencode( $scalerThumbName );
- // make a curl call to the scaler to create a thumbnail
+ // Make an HTTP request based on $wgUploadStashScalerBaseUrl to
+ // lazily create a thumbnail
$httpOptions = [
'method' => 'GET',
'timeout' => 5 // T90599 attempt to time out cleanly