Question
I wanted to know if anyone using puppeteer-cluster could elaborate on how Cluster.launch({settings}) protects against sharing of cookies and web data between pages in different contexts.
Do the browser contexts here actually block cookies, so that user data is not shared or tracked? Browserless's now-infamous page seems to think not, and suggests that .launch({}) should be called in the task, not ahead of the queue.
So my question is: how do we know whether puppeteer-cluster shares cookies/data between queued tasks? And what options does the library offer to lower the chances of being labeled a bot?
Setup: I am using page.authenticate with a proxy service and a random user agent, and I am still occasionally getting blocked (403) by the site I am testing against.
const { Cluster } = require("puppeteer-cluster");

// Simple promise-based sleep helper (was undefined in the original snippet)
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function run() {
    // Create a cluster with 2 workers, one browser per worker
    const cluster = await Cluster.launch({
        concurrency: Cluster.CONCURRENCY_BROWSER, // or Cluster.CONCURRENCY_PAGE
        maxConcurrency: 2, // the number of Chromium instances open at once
        monitor: false,
        sameDomainDelay: 1000, // cluster options, not puppeteerOptions
        retryDelay: 3000,
        workerCreationDelay: 3000,
        puppeteerOptions: {
            executablePath,
            headless: false,
            args: [
                "--proxy-server=pro.proxy.net:2222",
                "--incognito",
                "--disable-gpu",
                "--disable-dev-shm-usage",
                "--disable-setuid-sandbox",
                "--no-first-run",
                "--no-sandbox",
                "--no-zygote"
            ]
        }
    });

    // Task: set headers, authenticate against the proxy, then navigate.
    // No cluster.task() call is needed: the task function is passed per job
    // via cluster.queue() below.
    const extract = async ({ page, data: dataJson }) => {
        await page.setExtraHTTPHeaders(headers);
        await page.authenticate({
            username: proxy_user,
            password: proxy_pass
        });
        // Randomized delay between roughly 2 and 3 seconds
        await delay(2000 + (Math.floor(Math.random() * 998) + 1));
        const response = await page.goto(dataJson.url);
    };

    // Loop over inputs and queue each one into the cluster with the task
    const dataJson = { url: url };
    cluster.queue(dataJson, extract);

    // Shutdown after everything is done
    await cluster.idle();
    await cluster.close();
}
Answer 1:
Direct answer
Author of puppeteer-cluster here. The library does not actively block cookies, but makes use of browser.createIncognitoBrowserContext():
Creates a new incognito browser context. This won't share cookies/cache with other browser contexts.
In addition, the docs state that "Incognito browser contexts don't write any browsing data to disk" (source), so restarting the browser cannot reuse any cookies from disk, as no data was written in the first place.
Regarding the library, this means that when a job is executed, a new incognito context is created, which does not share any data (cookies, etc.) with other contexts. So as long as Chromium properly implements incognito browser contexts, no data is shared between jobs.
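To illustrate the isolation mechanism in plain Puppeteer (a minimal sketch, not the library's actual internals; example.com and the cookie values are placeholders):

const puppeteer = require("puppeteer");

(async () => {
    const browser = await puppeteer.launch();

    // Each incognito context gets its own cookies/cache, isolated from the rest
    const contextA = await browser.createIncognitoBrowserContext();
    const contextB = await browser.createIncognitoBrowserContext();

    const pageA = await contextA.newPage();
    const pageB = await contextB.newPage();

    await pageA.goto("https://example.com");
    await pageA.setCookie({ name: "session", value: "a", url: "https://example.com" });

    // The cookie set in contextA is not visible in contextB
    await pageB.goto("https://example.com");
    console.log(await pageB.cookies()); // does not include the "session" cookie

    await browser.close();
})();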
The page you linked only talks about browser.newPage() (which shares cookies between pages), not about incognito contexts.
Why websites might identify you as a bot
Some websites will still block you, because they use different measures to detect bots. There are headless browser detection tests as well as fingerprinting libraries that may report you as a bot if the user agent does not match the browser fingerprint. You might be interested in this answer of mine, which explains in more detail how these fingerprints work.
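As a concrete example of one such signal (an illustrative sketch, not taken from the linked answer): unpatched headless Chrome exposes navigator.webdriver as true, which detection scripts can simply read:

const puppeteer = require("puppeteer");

(async () => {
    const browser = await puppeteer.launch({ headless: true });
    const page = await browser.newPage();
    // In unpatched headless Chrome this evaluates to true, flagging automation;
    // real fingerprinting libraries combine many such signals, not just this one
    const flagged = await page.evaluate(() => navigator.webdriver);
    console.log("navigator.webdriver:", flagged);
    await browser.close();
})();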
You can try a library like puppeteer-extra, which comes with a stealth plugin, to help you solve the problem. However, this is basically a cat-and-mouse game: the fingerprinting tests might change, or other sites might use a different "detection" mechanism. All in all, there is no way to guarantee that a website will not detect you.
In case you want to use puppeteer-extra, be aware that you can use it in conjunction with puppeteer-cluster (example code).
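A minimal sketch of that combination, assuming the puppeteer-extra and puppeteer-extra-plugin-stealth packages are installed (the queued URL is a placeholder):

const { Cluster } = require("puppeteer-cluster");
const puppeteer = require("puppeteer-extra");
const StealthPlugin = require("puppeteer-extra-plugin-stealth");

puppeteer.use(StealthPlugin());

(async () => {
    const cluster = await Cluster.launch({
        puppeteer, // hand the patched puppeteer-extra instance to the cluster
        concurrency: Cluster.CONCURRENCY_BROWSER,
        maxConcurrency: 2
    });

    await cluster.task(async ({ page, data: url }) => {
        // The stealth plugin patches common fingerprint leaks before navigation
        await page.goto(url);
    });

    cluster.queue("https://example.com");
    await cluster.idle();
    await cluster.close();
})();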
Source: https://stackoverflow.com/questions/59672126/is-puppeteer-cluster-stealthy-enough-to-pass-bot-tests