Run a shallow crawl from a seed URL, logging each page's HTTP status, canonical hints, and robots directives. Built for migration checks, indexability audits, and answering "what does a basic crawler actually hit?"
Same-host URLs only, bounded by a depth limit and a page cap. This is recon, not a full crawler: just enough to spot bad redirects, missing canonicals, and noindex landmines.
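The behavior above can be sketched with the standard library alone: a breadth-first crawl capped by depth and page count, restricted to the seed's host, recording status, `<link rel="canonical">`, and robots signals (meta tag or `X-Robots-Tag` header). This is a minimal illustration, not the tool's actual implementation; the function and class names (`crawl`, `scan_html`, `PageScan`) and the user-agent string are assumptions.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlsplit
from urllib.request import Request, urlopen


class PageScan(HTMLParser):
    """Collects per-page hints: canonical link, meta robots, outgoing hrefs."""

    def __init__(self):
        super().__init__()
        self.canonical = None
        self.robots = None
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and "canonical" in (a.get("rel") or "").lower().split():
            self.canonical = a.get("href")
        elif tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.robots = a.get("content")
        elif tag == "a" and a.get("href"):
            self.links.append(a["href"])


def scan_html(html):
    """Parse one HTML document and return its collected hints."""
    p = PageScan()
    p.feed(html)
    return p


def crawl(seed, max_depth=2, max_pages=50):
    """BFS from seed; same host only, bounded by depth and page count.

    Returns rows of (url, depth, status_or_error, canonical, robots).
    """
    host = urlsplit(seed).netloc
    seen, queue, rows = {seed}, deque([(seed, 0)]), []
    while queue and len(rows) < max_pages:
        url, depth = queue.popleft()
        try:
            req = Request(url, headers={"User-Agent": "recon-crawl/0.1"})  # assumed UA
            with urlopen(req, timeout=10) as resp:
                status = resp.status
                x_robots = resp.headers.get("X-Robots-Tag")
                body = resp.read().decode("utf-8", "replace")
        except Exception as e:
            rows.append((url, depth, repr(e), None, None))
            continue
        page = scan_html(body)
        # Meta robots wins if present; otherwise fall back to the header.
        rows.append((url, depth, status, page.canonical, page.robots or x_robots))
        if depth < max_depth:
            for href in page.links:
                nxt = urljoin(url, href).split("#")[0]
                if urlsplit(nxt).netloc == host and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return rows
```

A real version would also honor `robots.txt` (e.g. via `urllib.robotparser`), follow redirect chains explicitly, and rate-limit requests; those are omitted here to keep the sketch small.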