How to use crawler.crawl for full-page scrolling? #406
Comments
@helenatthais I have fixed this problem with this PR.
I tried to execute the code from the referenced PR, but the full-page scrolling feature still doesn't work.
@helenatthais Hi, could you please provide more details about your setup and how you're running the code? I've tested it in my local environment and everything seems to work fine. One possible cause of the issue might be that the original crawl4ai package is still installed. Could you check if that's the case?
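(A quick way to rule out a stale install is to check where the interpreter actually imports crawl4ai from and which version is recorded for it. The sketch below uses only standard-library metadata lookup and assumes nothing beyond the package being importable.)

```python
import importlib.metadata

import crawl4ai

# Print the path the interpreter actually imports crawl4ai from and the
# version recorded by pip, to rule out a stale copy shadowing the upgrade.
print("module path:", crawl4ai.__file__)
print("installed version:", importlib.metadata.version("crawl4ai"))
```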
Sure, I installed crawl4ai with pip install crawl4ai and I've recently upgraded it with --upgrade. I'm trying to run a script that scrapes Google Maps reviews.
@helenatthais I understand that. This is because my PR hasn't been merged into the main branch yet. You can either wait for the new version of crawl4ai (which should be available soon), or …
Although the full-page scrolling simulation feature was released in version 0.4.1, I'm struggling to make it work because I'm still not sure where to insert the crawler.crawl call. The docs (https://crawl4ai.com/mkdocs/blog/releases/0.4.1/) cite the following example:
await crawler.crawl(
    url="https://example.com",
    scan_full_page=True,  # Enables scrolling
    scroll_delay=0.2      # Waits 200ms between scrolls (optional)
)
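(For reference, a minimal sketch of how such a call could sit inside a complete script is shown below. The release notes name the method crawler.crawl; the sketch uses the arun entry point, which the package also exposes for running a crawl, and it assumes the installed version accepts scan_full_page and scroll_delay as keyword arguments, as the release-notes example implies. The URL is a placeholder.)

```python
import asyncio

from crawl4ai import AsyncWebCrawler


async def main():
    # The crawler is normally used as an async context manager so the
    # underlying browser is started and shut down cleanly.
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com",  # placeholder; the page you want scrolled through
            scan_full_page=True,        # enables full-page scrolling (per the 0.4.1 release notes)
            scroll_delay=0.2,           # 200 ms pause between scroll steps (optional)
        )
        if result.success:
            print(result.markdown[:500])  # preview the extracted content
        else:
            print("Crawl failed:", result.error_message)


if __name__ == "__main__":
    asyncio.run(main())
```

The point relevant to the question above is that the call is awaited from inside an async function, within the async with AsyncWebCrawler() block, rather than being invoked at module level.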