[Bug - Actions] All scraping engines failed! #884
Comments
Me too. Mac, Python 3.12.4.
Can you please share an example URL where this fails?
I believe this may be fixed by f097cdd, but it is hard to debug without a URL.
Hi there, I cannot recreate the issue anymore. Is it fixed for you as well?
I just confirmed it's fixed on my end too, thanks! Was the repair done on the Firecrawl server side, or was it something local on my end? Knowing that will make it easier for me to locate the problem if it recurs in the future.
This was repaired server-side in commit f097cdd. We weren't accounting for the wait actions in our timeout logic.
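For readers landing here later, a rough sketch of the kind of fix described above (the helper name and types are hypothetical, not the actual Firecrawl server code): the effective scrape deadline is extended by the total duration of the request's wait actions, so long waits no longer eat into the engine's own budget.

```ts
// Hypothetical sketch, not the actual Firecrawl implementation:
// extend the base scrape timeout by the total time consumed by
// "wait" actions so they don't count against the engine's budget.
type Action =
  | { type: "wait"; milliseconds: number }
  | { type: "click"; selector: string };

function effectiveTimeout(baseTimeoutMs: number, actions: Action[] = []): number {
  const waitTotalMs = actions.reduce(
    (sum, a) => (a.type === "wait" ? sum + a.milliseconds : sum),
    0
  );
  return baseTimeoutMs + waitTotalMs;
}

// Example: a 30s base timeout plus one 5s wait action -> 35s budget.
console.log(effectiveTimeout(30_000, [{ type: "wait", milliseconds: 5000 }])); // 35000
```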
I am also facing a similar issue and getting the same error.
I am providing these actions.
I am also using a timeout of more than 10 minutes. Can you please help here @mogery?
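For reference, this is roughly how wait/click actions and a long request timeout can be combined in a single call with the JS SDK used later in this thread. The API key, URL, selector, and timeout value are placeholders, and the `timeout` option name is an assumption that should be checked against the SDK version you are using.

```ts
import FirecrawlApp from "@mendable/firecrawl-js";

// Sketch only: placeholder API key, URL, selector, and timeout value.
const app = new FirecrawlApp({ apiKey: "fc-<your-key>" });

async function scrapeWithActions(): Promise<void> {
  const result = await app.scrapeUrl("https://example.com", {
    formats: ["html"],
    actions: [
      // Let the page settle before interacting with it.
      { type: "wait", milliseconds: 5000 },
      // Then click the element of interest.
      { type: "click", selector: "#some-element" },
    ],
    // Request-level timeout in milliseconds (over 10 minutes here), assumed
    // to be exposed as a `timeout` option; it should comfortably exceed the
    // total duration of the wait actions.
    timeout: 660_000,
  });
  console.log(result.success);
}

scrapeWithActions();
```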
I tested 3 runs with the following code, and "All scraping engines failed!" is still happening for over 50% of scrapes.

Testing code:

```js
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: "fc-<redacted>" });

const main = async () => {
  let allEnginesFailedCounter = 0;
  let successCounter = 0;
  let otherErrorCounter = 0;
  for (let i = 0; i < 100; i++) {
    console.log(`Crawl: ${i + 1}`);
    try {
      const constructedUrl = "https://www.bolagsfakta.se/5566352844-Runlack_Industrilackering_AB";
      const scrapeResponse = await app.scrapeUrl(constructedUrl, {
        formats: ["html"],
        actions: [
          {
            type: "wait",
            milliseconds: 5000,
          },
          {
            type: "click",
            selector: "#report-container > div:nth-child(18) > div > div > div.row > div > div > table:nth-child(4) > tbody > tr:nth-child(6)",
          },
        ],
        onlyMainContent: false,
      });
      if (!scrapeResponse.success) {
        otherErrorCounter++;
      } else {
        successCounter++;
      }
    } catch (error) {
      if (error.message.includes("All scraping engines failed!")) {
        allEnginesFailedCounter++;
      } else {
        otherErrorCounter++;
      }
    }
  }
  console.log({
    allEnginesFailedCounter,
    successCounter,
    otherErrorCounter,
  });
};

main();
```

(Results for run 1, run 2, and run 3 were attached as screenshots.)
Describe the Bug
When using FirecrawlApp.app.scrape_url to scrape a page, the following error is received:
Error: Internal Server Error: Failed to scrape URL. (Internal server error) - All scraping engines failed! - No additional error details provided.
The same code used to work properly.
Screenshots
Environment (please complete the following information):
Logs
Additional Context