I was going to suggest example.com:

Code:
curl -s http://www.example.com | wc
      50     120    1270
curl -s http://www.google.com | wc
       5     313   12292

Then I realized the example.com page links to iana.org, so if your "website downloader" follows HTML links, that could get ugly.

One simple solution is to make your own test "website". Put a couple of HTML pages in an Amazon S3 bucket, with as many or as few links in them as you want, then point the downloader at the root URL of your bucket. Make the objects public, i.e. readable with no permissions required (see the Mac App Store for S3 utility apps). You could also enable S3 access logging and reporting, so you can see all the GET requests, or counts of them. Given the low cost of S3, this might cost a few pennies, and I'm not sure they'll even bill you for amounts that low, so it could well come to $0.00.
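As a rough sketch of that setup using the AWS CLI (the bucket name and file names here are hypothetical, and it assumes the CLI is already configured with your credentials; newer buckets may also need Block Public Access turned off before public ACLs work):

Code:
# Two tiny linked test pages (hypothetical names)
echo '<html><body><a href="page2.html">page 2</a></body></html>' > index.html
echo '<html><body>the end</body></html>' > page2.html

# Hypothetical bucket name; assumes AWS CLI credentials are set up
aws s3 mb s3://my-downloader-test-bucket
aws s3 cp index.html s3://my-downloader-test-bucket/ --acl public-read
aws s3 cp page2.html s3://my-downloader-test-bucket/ --acl public-read

# Point the downloader (or curl) at the object URL
curl -s https://my-downloader-test-bucket.s3.amazonaws.com/index.html | wc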
Or make your own site on your local computer, start a server, and point your downloader at it. If you haven't built a Ruby on Rails app yet, you could do it in less time than it took for the above replies to get posted to this thread.
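If Rails is more than you want, a lighter sketch of the same idea is to serve a folder of HTML files with Python's built-in http.server (assuming python3 is installed; the directory name here is hypothetical):

Code:
# Serve the same two test pages created above (or any folder of HTML files)
cd testsite
python3 -m http.server 8000 &

# Point the downloader at the local root
curl -s http://localhost:8000/ | wc
curl -s http://localhost:8000/page2.html | wc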
Hell, if I could do it, anyone can. #GrowthMindset #LearnThruStruggle #BetterEveryDay #ItsOkayToSayIDontKnowAsLongAsTheNextWordIsYET