How to Save a Webpage as a Plain Text File on Mac

Sometimes you just want the words from a webpage — no navigation, no sidebars, no ads, no HTML tags. Just the content, saved locally as a .txt file. macOS gives you a few ways to do this, ranging from a simple copy-paste to a Terminal one-liner.

Method 1 — Copy, paste, download (quickest)

This is the most reliable method for most situations. Select the text you want on the webpage (⌘A to select everything on the page), copy it (⌘C), then go to txtnote.online, paste it into the editor, give the file a name, and click Download. Done — clean .txt file in your Downloads folder in about ten seconds.

Method 2 — Safari Reader + Save

Safari has a built-in Reader mode that strips a webpage down to just the article text, removing ads, navigation, and sidebars. Click the Reader button on the left side of the address bar (it looks like lines of text), choose View > Show Reader, or press ⇧⌘R. Note that Reader is only offered on pages Safari recognizes as articles. Once in Reader mode, the page shows just the content.

From Reader mode, select all the text (⌘A), copy it, and paste it wherever you want to save it. Safari has no direct save-as-.txt option, but pasting into txtnote.online and downloading adds only a few seconds.

Method 3 — Terminal with curl

If you want raw HTML stripped to text via Terminal, curl can fetch the page and you can pipe it through a simple text processor:

curl -s "https://example.com" | sed 's/<[^>]*>//g' | sed '/^$/d' > ~/Desktop/page.txt

This fetches the page, strips HTML tags with sed, removes blank lines, and saves the result to your Desktop. The output is rough — tag-stripping leaves navigation text behind, and the contents of any inline script or style blocks survive as plain text — but it's fast and works without installing anything.
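To see what the pipeline actually does, you can run it against a throwaway local file (the path and contents below are made up for illustration; piping a page fetched with curl through the same two sed commands behaves identically):

```shell
# Stand-in for a fetched page: a small local HTML file under /tmp.
printf '<h1>Title</h1>\n<p>Some text.</p>\n\n<p>More text.</p>\n' > /tmp/demo-page.html

# Same pipeline as above: strip tags, then drop blank lines.
sed 's/<[^>]*>//g' /tmp/demo-page.html | sed '/^$/d' > /tmp/demo-page.txt
cat /tmp/demo-page.txt
```

The three content lines come through; the tags and the blank line do not.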

For cleaner extraction, install lynx via Homebrew (brew install lynx) and run:

lynx -dump "https://example.com" > ~/Desktop/page.txt

Lynx formats the page as a terminal browser would display it — much cleaner than raw tag-stripping.

Saving a full webpage including HTML

If you want the actual HTML source saved as a .txt file rather than the rendered text, Terminal makes this straightforward:

curl -s "https://example.com" > ~/Desktop/page.txt

This saves the raw HTML source. Open it in a text editor and you'll see the markup. Useful for archiving, scraping, or inspecting a page's structure.

Automating it for multiple pages

If you need to save text from multiple pages regularly, a shell script can loop through a list of URLs. Create a file called urls.txt with one URL per line, then:

i=0
while read -r url; do
  i=$((i+1))
  curl -s "$url" | lynx -dump -stdin > "$HOME/Desktop/$(date +%s)-$i.txt"
done < urls.txt

This saves each page as a timestamped .txt file on your Desktop. The counter suffix keeps pages fetched within the same second from overwriting each other, and $HOME is used instead of ~ because the tilde is not expanded inside double quotes. It requires lynx to be installed.
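A variation on the loop above (my own suggestion, not from the article): derive each filename from the URL instead of the clock, so re-running the loop overwrites the same files rather than piling up new ones. The URL below is a hypothetical example:

```shell
# Turn a URL into a filesystem-safe name: drop the scheme, then replace
# path separators and query characters with underscores.
url="https://example.com/docs/intro"
name=$(printf '%s' "$url" | sed 's|^[a-z]*://||' | tr '/?&=' '____')
echo "$name.txt"
```

Inside the loop, you would redirect the curl | lynx pipeline to "$HOME/Desktop/$name.txt" instead of the timestamped path.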

Which method to use

For a one-off webpage, copy-paste into txtnote.online is the fastest path. For bulk work or automation, Terminal with curl and lynx gets the job done without a GUI. Reader mode in Safari is a good middle ground when you want clean text from an article quickly.