For years I’ve looked on and off for web archiving software that can capture most sites, including “complex” ones that are heavy on AJAX and require logins, like Reddit. Which ones have worked best for you?
Ideally I want one that can be started programmatically or via the command line, opens a Chromium instance (or any browser), and captures everything shown on the page. I’d also like to be able to open the instance myself to log into sites and install addons like uBlock Origin. (By the way, archiveweb.page must be started manually.)
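One way to get close to that workflow is to drive browsertrix-crawler (the Webrecorder crawler image) from a script. A minimal sketch, assuming Docker is installed and the image is pulled; `--url`, `--generateWACZ`, and `--profile` are real browsertrix-crawler options, but double-check them against your version’s docs, and the helper function here is my own invention:

```python
# Sketch: build a browsertrix-crawler invocation programmatically.
# A logged-in browser profile (for sites like Reddit) can be created
# interactively beforehand and passed in via --profile.
import subprocess

def build_crawl_command(url, output_dir="./crawls", profile=None):
    """Assemble the docker command; does not run anything by itself."""
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{output_dir}:/crawls",
        "webrecorder/browsertrix-crawler", "crawl",
        "--url", url,
        "--generateWACZ",  # bundle the capture as a WACZ archive
    ]
    if profile:
        cmd += ["--profile", profile]  # reuse saved logins/cookies
    return cmd

if __name__ == "__main__":
    print(" ".join(build_crawl_command("https://example.com")))
    # To actually crawl:
    # subprocess.run(build_crawl_command("https://example.com"), check=True)
```

This keeps the capture scriptable end to end, which is exactly what archiveweb.page (manual start only) doesn’t give you.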
As I told you in the other thread, no application exists that can clone files which are private to the web server, which covers most dynamic content.
You can clone public, static files 1:1. You can grab media like images, JavaScript, AJAX responses, even videos. But you cannot grab the inner workings of a website and somehow make them function: PHP scripts or any other server-side code, database connections, etc.
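To illustrate the distinction, here is a stdlib-only sketch (all names hypothetical): a mirroring tool can only ever save the URLs that appear in the HTML the server sends, never the PHP or database layer that produced that HTML.

```python
# What a mirror/archiver actually sees: the rendered page plus the
# public asset URLs inside it. The server-side code never appears here,
# only its output does.
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect fetchable public resources referenced by a page:
    images, scripts, stylesheets."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.assets.append(attrs["href"])

page = "<html><img src='logo.png'><script src='app.js'></script></html>"
collector = AssetCollector()
collector.feed(page)
print(collector.assets)  # prints ['logo.png', 'app.js']
```

Those two files are everything a 1:1 clone can capture from that page; the script that generated the HTML stays on the server.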
I crossposted the post btw.