Hey guys, I found this site: http://www.nytimes.com/pages/todayspaper/index.html where the NYT publishes every article from that day's paper on the same page. With some Automator and possibly AppleScript, one could legally download all of the articles from the site and skip paying the $600 a year. This should be legal because the NYT has "Print This" links on all of its articles.

I'll explain what I've done so far in hopes that someone can use it to finish this project. First I opened Automator and used these actions:

Get Specified URLs (with the URL supplied above) >>> Get Link URLs from Webpages >>> Get Text from Webpages

I got an error that looked like this:

sh: -c: line 1: unexpected EOF while looking for matching `''
sh: -c: line 2: syntax error: unexpected end of file (2)

I asked a friend and he mentioned I would need AppleScript, which I know nothing about. Please, if anyone could help me build this workflow, I would appreciate it very much. My hope is to have the final product universal, so I won't need to tweak it at all each day when I run it, and to have each page be just the text of the article, to save on space and distractions. Then I hope to print them, or read them directly off my new MacBook.
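For anyone who wants to try this without AppleScript: here is a rough Python sketch of the same idea, which could be dropped into a "Run Shell Script" action in Automator or run from Terminal. It is a sketch under assumptions, not a finished tool: the link pattern (article URLs under nytimes.com containing a four-digit year and ending in .html) and the "?pagewanted=print" suffix for the print-friendly version are guesses based on how the "Print This" links appear to work, so check one article on the site to confirm before relying on it.

```python
#!/usr/bin/env python3
# Sketch: fetch the Today's Paper index, collect article links, and save
# the print-friendly version of each article as a plain-text file.
# ASSUMPTIONS (verify against the live site): article URLs match the
# regex below, and "?pagewanted=print" is where "Print This" points.
import re
import urllib.request

INDEX_URL = "http://www.nytimes.com/pages/todayspaper/index.html"

def extract_article_links(html):
    """Return unique article URLs found in the index page HTML, in order."""
    links = re.findall(
        r'href="(http://www\.nytimes\.com/\d{4}/[^"]+\.html)"', html)
    seen, out = set(), []
    for link in links:
        if link not in seen:
            seen.add(link)
            out.append(link)
    return out

def strip_tags(html):
    """Crude tag-stripper so the saved file is just the article text."""
    text = re.sub(r'<(script|style)[^>]*>.*?</\1>', '', html,
                  flags=re.S | re.I)
    text = re.sub(r'<[^>]+>', ' ', text)
    return re.sub(r'\s+', ' ', text).strip()

def main():
    index = urllib.request.urlopen(INDEX_URL).read().decode('utf-8',
                                                            'replace')
    for i, link in enumerate(extract_article_links(index), 1):
        page = urllib.request.urlopen(link + "?pagewanted=print").read()
        with open("article_%03d.txt" % i, "w") as f:
            f.write(strip_tags(page.decode('utf-8', 'replace')))

if __name__ == "__main__":
    main()
```

Running it once a day would drop numbered text files (article_001.txt, article_002.txt, ...) into the current folder, ready to print or read.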