Some sites don't want random crawlers combing through their pages, so they configure their robots.txt file to deny those bots access to the site structure. Such a rule might look something like this:
User-agent: Screaming Frog SEO Spider
Disallow: /
To try to bypass these rules, set the Robots Settings to Ignore robots.txt. This does not always work, typically because some servers also block unwanted user-agents at the HTTP level, but it is worth trying as a first step, together with switching the user-agent to Googlebot, since the majority of business sites allow Google to crawl their pages for obvious reasons.
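To see why the user-agent switch matters, here is a minimal sketch using Python's standard-library robots.txt parser. The rules and the example.com URL are hypothetical, mirroring the snippet above: the same page is blocked for one user-agent and allowed for another.

from urllib.robotparser import RobotFileParser

# Hypothetical rules: block Screaming Frog, allow everyone else.
rules = """\
User-agent: Screaming Frog SEO Spider
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Access is decided purely by the user-agent string the crawler announces.
print(rp.can_fetch("Screaming Frog SEO Spider", "https://example.com/"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/"))                  # True

Identical request, opposite answers: the only difference is the announced user-agent, which is exactly what changing the default user-agent in the crawler settings exploits.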