XOVI Experts

Best of Google Webmaster Hangouts 2015

Michael Schoettler | December 21, 2015

Google’s Webmaster Hangouts have become the place to be for international SEOs when it comes to keeping up with the latest developments at Google. In the German-speaking world, industry experts and interested onlookers tune in several times a month to hear what Google’s Swiss Webmaster Trends Analyst John Mueller has to say and to ask questions.

Since only 10 people can take part in each Google+ hangout, we at SEO Portal have taken it upon ourselves to transcribe the hangouts throughout the course of the year in order to share them with webmasters who couldn’t take part or simply don’t have time to sit through an hour-long video.

Now, exclusively for XOVI’s Expert Panel, we’ve compiled a “Best Of” Google Webmaster Hangouts for 2015 featuring the best questions and most informative answers of the year. Have a rummage through and enjoy!

Sub-folder or sub-domain? (13.02.15, 00:51)

User question: What does Google say to the claim that sub-folders are better than sub-domains when it comes to SEO? Which method would you recommend for a blog?

Google’s John Mueller: For Google, there’s no difference. Both are valued equally. When it comes down to it, it’s a question of what suits the site’s infrastructure best and what’s easiest for the webmaster.

301 redirects (13.02.15, 02:59)

Q: Why does it take Google so long to recognise and process 301 redirects? Months often go by without anything happening …

A: Google follows 301 redirects as soon as the relevant URL is crawled. If the redirect is site-wide, this affects a lot of pages. Now Google doesn’t crawl all sites regularly, so it can take a few weeks or months before we crawl a URL and identify a redirect. Always make sure you adhere to Google’s best practice!
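A site-wide permanent redirect is usually set up at the server level. A minimal sketch for nginx – the domain names are placeholders:

```nginx
# Redirect every request on the old domain permanently (301) to the new one
server {
    listen 80;
    server_name old-domain.example;
    return 301 https://new-domain.example$request_uri;
}
```

Because `$request_uri` is preserved, each old URL maps one-to-one to its counterpart on the new domain, which is what Google’s redirect best practice expects.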

Server down (25.02.15, 43:51)

Q: When a server is temporarily down, what effect can this have on rankings? My server was out of action for a day and a half and I noticed that my rankings had dropped significantly compared to the previous week.

A: If a server goes down, it is important that it sends a 503 message to Google to signal that it’s only a temporary problem. Google will then try to crawl the website again at a later point before penalising the page in terms of ranking.
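A 503 is just an HTTP status code, ideally combined with a Retry-After header telling crawlers when to come back. As a minimal illustrative sketch (using only Python’s standard library, not a production setup), a maintenance server might answer like this:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.error
import urllib.request

class MaintenanceHandler(BaseHTTPRequestHandler):
    """Answers every GET with 503 plus Retry-After while the site is down."""
    def do_GET(self):
        self.send_response(503)
        self.send_header("Retry-After", "3600")  # ask crawlers to retry in an hour
        self.send_header("Content-Length", "0")
        self.end_headers()
    def log_message(self, *args):
        pass  # keep request logging quiet

server = HTTPServer(("127.0.0.1", 0), MaintenanceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a crawler hitting the site during downtime
status, retry = None, None
try:
    urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/")
except urllib.error.HTTPError as err:
    status, retry = err.code, err.headers["Retry-After"]
server.shutdown()
print(status, retry)  # → 503 3600
```

The key point is the status code: a 503 signals “temporarily unavailable, try again later”, whereas serving a 200 with an error page or a 404 would tell Google the content is gone.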

Testing a website in advance (03.06.2015, 31:50)

Q: I would like to test my website before it gets indexed. I wanted to check it in Google Search Console but it didn’t work because the robots.txt contained “Disallow: / (All)”. Then when I run “Fetch as Google” in the Search Console, it tells me that access is denied. How can I test my website in advance?

A: Remove the block from robots.txt and send an “X-Robots-Tag: noindex” header in the HTTP response instead. Google then crawls the site but doesn’t index it.
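The header John refers to is `X-Robots-Tag`. A minimal Apache sketch, assuming mod_headers is available on the staging server:

```apache
# Send "noindex" in the HTTP header for every response on the test site
<IfModule mod_headers.c>
    Header set X-Robots-Tag "noindex"
</IfModule>
```

Unlike a robots.txt block, this lets Googlebot crawl and render the pages (so Fetch as Google works) while keeping them out of the index.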

Disavow for sub-domains (05.06.2015, 04:30)

Q: Our website has lots of sub-domains. When we upload our file containing the links which we would like to disavow, is it better to create a single file for the entire site or rather one for each sub-domain?

A: Always upload one file per sub-domain. Anything else won’t work.
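For reference, a disavow file is a plain text file with one entry per line; the domains below are purely hypothetical:

```
# Disavow file for blog.example.com only – upload a separate file per sub-domain
domain:spammy-directory.example
https://bad-neighbourhood.example/page-linking-to-us.html
```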

Indexing and blocking (05.06.2015, 35:15)

Q: I have a few sites which I have blocked using robots.txt but they’re still in the index. What can I do to get rid of them?

A: You should unblock them in the robots.txt and set them to noindex; they’ll then be removed from the index at the next crawl. If you need pages removed quickly, you can send us a sitemap file. You can also use the parameter tool, although this only works before a page is indexed.
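The noindex can simply go in the page’s HTML head – note that Google can only see it once the page is no longer blocked in robots.txt:

```html
<meta name="robots" content="noindex">
```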

Local rankings (18.06.2015, 36:54)

Q: What works better for local rankings? Four separate websites, one for each of my company’s branches, or a single website with a page for each? We currently have four domains …

A: As soon as you start having more than a handful of branches with a website for each one, you’re quickly heading towards doorway page territory where, in effect, the websites all contain the same content and differ only in terms of the city they are based in. So be careful!

Having said that, if the four branches are all relatively unique with different clienteles (hotels in different cities, for instance), then separate websites can be a good idea. From Google’s point of view, neither option is particularly preferred. Personally, I would go for different pages on a single website, purely for ease of administration.

Search Console filters (19.06.2015, 49:15)

Q: Will there ever be the possibility of saving filter settings in Google Search Console’s search analysis? At the moment you have to reapply the filter each time, which is a bit frustrating.

A: I just bookmark the pages which have specific filters. The filters are in the URL so they can be easily accessed at any time without the need to reapply the same settings. But Google is working on an API for this.

Link Disavow (30.06.2015, 24:43)

Q: One of our customers has accidentally disavowed all of his backlinks. How can we clean this up in the Search Console? If you disavow too many links by accident, can you avoid a negative effect by quickly uploading another file before the next update? One of your patents suggests that a site has less value when it is re-linked?

A: Just upload a new, empty disavow file – or a new file containing only those links which you actually want to disavow. Each new file replaces the previous one, so if you do this before the next update, you should be fine. However, any URLs which have already been crawled in the meantime will already have been disavowed. But since they’ll be recrawled fairly quickly, there shouldn’t be too much damage.

As for the patent you mentioned, I have no idea. We have lots of patents but that doesn’t necessarily mean that we actually use them all. It’s always interesting to have a look at them but don’t think that we do everything that is mentioned in them.

URL structure in online shops (02.07.2015, 09:46)

Q: Online shops are increasingly connecting their directories with “-” rather than “/”. Which URL structure does Google recommend and why?

For example: 
/children/childrensfashion/childrenstrousers2 or 

A: It makes no difference whatsoever. The one thing to avoid however is switching between the two. Pick one variant and stick to it!

Exact match domains (02.07.2015, 24:05)

Q: Are Exact Match Domain penalties linked to poor quality content? Are they therefore ok when you produce good content?

A: In Google’s eyes this is no problem. Exact Match Domain and good content is fine.

Crawling JavaScript and CSS (30.07.2015, 00:33)

Q: Many webmasters worry that the crawling of JavaScript and CSS can lead to security problems. You have ruled out that these files are indexed, but there is still concern that certain bits end up indexed and that this can lead to gaps in security.

A: When files are incorporated into an HTML site, JavaScript files are not indexed separately and not made visible in web search. I think it’s important to say that a website’s security should not be dependent on these types of measures. Robots.txt does not necessarily make a website more secure. A hacker trying to attack a site is not going to look at a robots.txt and obey its commands. Rather, they use programs which crawl all over sites looking for security lapses – they don’t use web search to find individual JavaScript versions. If you notice that there are gaps in security, for example in JavaScript, simply hiding them doesn’t help. You need to fix those gaps and ensure that the website is always up to date.

Lazy Load (30.07.2015, 23:50)

Q: We have implemented Lazy Load on our homepage and have the problem that the images are no longer indexed by the Googlebot. Can Google suggest a solution?

A: The main problem on the page is that the images only load when you scroll down. The Googlebot doesn’t actively scroll down the page, so it can’t see the images and therefore can’t index them. There’s no immediate solution – you’ll probably have to re-add some of the images directly. You could also try letting the images load in the background – use the Fetch & Render tool to check what Googlebot sees.
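One common workaround at the time was a `<noscript>` fallback, so crawlers that don’t scroll or execute JavaScript still find a plain image tag to index. A sketch with hypothetical file names:

```html
<img class="lazy" src="placeholder.gif" data-src="/images/product-large.jpg" alt="Product photo">
<noscript>
  <img src="/images/product-large.jpg" alt="Product photo">
</noscript>
```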

JavaScript and CSS (31.07.2015, 14:55)

Q: Hi John, I have received a message in the Search Console that Google requires access to JavaScript/CSS. Why does Google need access to these files? What are you looking for?

A: In principle, Google wants to render sites just as a browser does and wants to be certain that it is seeing the same content as the user. This is also linked to the mobile friendly update. If Google can recognise that a website is mobile friendly, it can give the site a mobile friendly label in mobile search. This is why we need access to JavaScript/CSS.
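In practice, the fix is usually to stop blocking those resources in robots.txt. If a broader Disallow rule covers your script and style directories, explicit Allow rules can open them up again – a sketch:

```
# robots.txt – make render-critical assets crawlable
User-agent: Googlebot
Allow: /*.js
Allow: /*.css
```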

Redirects (31.07.2015, 27:54)

Q: Can you please explain the technical difference between a 302 and a 303? When should you use a 303?

A: This is a commonly asked question. Google differentiates between temporary and permanent redirects and tries to work out which type is suitable. With temporary redirects, Google tries to retain the original URL in the index, whereas with permanent redirects, Google tries to index the new destination instead. When Google notices that a temporary redirect is now permanent, it acts accordingly.
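For reference, the redirect codes in question are defined in the HTTP standard, and Python’s standard library can list them:

```python
from http import HTTPStatus

# 301 is the permanent redirect; 302 and 303 are both treated as temporary
for code in (HTTPStatus.MOVED_PERMANENTLY, HTTPStatus.FOUND, HTTPStatus.SEE_OTHER):
    print(code.value, code.phrase)
# → 301 Moved Permanently
# → 302 Found
# → 303 See Other
```

At the HTTP level, the difference is that a 303 (“See Other”) explicitly tells the client to fetch the target with GET, typically after a form submission, while a 302 simply says the resource temporarily lives elsewhere. For Google’s index, both fall into the “temporary” bucket John describes.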

Duplicate content (12.08.2015, 21:49)

Q: Would the following URLs count as duplicate content?:


A: Translations are not duplicate content. As soon as something has been translated, it is then unique. A hreflang can help us display the right page in search results, though.
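A minimal hreflang sketch (URLs hypothetical), placed in the head of each language version so Google can match the variants up:

```html
<link rel="alternate" hreflang="en" href="https://example.com/en/page/">
<link rel="alternate" hreflang="de" href="https://example.com/de/seite/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">
```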

404 (14.08.2015, 03:55)

Q: Why can we still see pages which have been returning 404s for years? Why is Google still crawling them?

A: We don’t crawl them that often any more but sometimes we detect a new link to the page and check it again. Sometimes we revisit sites anyway just to see if they’re back online and accessible and to make sure we don’t miss any new content.

Google Analytics for rankings (14.08.2015, 10:09)

Q: One of the MOZ 2015 SEO Correlations showed a positive correlation between ranking and use of Google Analytics. How can you explain that? Isn’t that a monopoly?

A: We don’t use Google Analytics for crawling, indexing and ranking, so any correlations are purely coincidental. Such programs are often used for large sites which rank well, but they don’t help them to rank. The correlations in studies like this don’t demonstrate any concrete link between two phenomena, and we don’t give anyone a boost just because they use Google Analytics or AdWords. We try to make our search tools as neutral as possible so as not to have any effect – positive or negative – on rankings.

Stolen content (28.08.2015, 02:25)

Q: A website with over 190,000 pages has been copied – without permission. Will the resulting duplicate content have an influence on rankings for the original website?

A: It’s not a problem. Google is pretty good at determining which websites are original in these cases and will always show the original in SERPs.

Fetch as Google (28.08.2015, 23:56)

Q: I have some long posts on my website (over 7,000 words) which are not shown in their entirety in the Fetch as Google preview in the Search Console. It works fine with shorter posts. Is this because the preview is limited for longer posts or is there a bigger problem?

A: Yes, the preview screenshot is limited in terms of space. To make sure that everything has been indexed properly, you can simply run a Google search for a section of text further down the post to see if it is displayed. If it is, then it’s been indexed.

New tab (10.09.2015, 51:12)

Q: Hi John, does it have any negative effects on SEO if internal links are all set to open in a new browser tab? There seem to be differing opinions from a usability point of view.

A: It’s completely up to you. It makes no difference to us.

Duplicate content (24.09.2015, 00:30)

Q: Our directory descriptions are often identical. In Google+, too. Is this a duplicate content problem?

A: No, in this case we would just choose one of the pages. It is technically duplicate content, but you don’t need to avoid it at all costs. It won’t cause your site any problems.

Breadcrumbs (24.09.2015, 15:59)

Q: Hi John, what’s the best way to use breadcrumbs? Up until now it’s always been via data-vocabulary. Now you can also use schema.org, or is it best to stick with data-vocabulary? Thanks!

A: The schema.org markup for breadcrumbs is compatible with Google and should work fine. Both versions work just as well, just make sure you’re consistent.
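A minimal schema.org breadcrumb sketch in JSON-LD form – the names and URLs are hypothetical:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {"@type": "ListItem", "position": 1, "name": "Children",
     "item": "https://example.com/children/"},
    {"@type": "ListItem", "position": 2, "name": "Trousers",
     "item": "https://example.com/children/trousers/"}
  ]
}
</script>
```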

Plurals and synonyms (16.10.2015, 12:17)

Q: We have vastly differing results for “ferry” and “ferries” for our domain. We understand that singular and plural are different, but the difference is huge (page 1 vs page 5). Do you know why this is?

A: We have no semantic model to work out singular and plural; we try to incorporate it through algorithmic learning. So it depends what your users are searching for. Your results probably just represent search behaviour patterns in that language. There’s not much you can do about that.

Google translate penalty (27.10.2015, 03:32)

Q: I’ve just been speaking to a journalist who was asking about Google translate penalties – which sounded like nonsense to me. Have you ever heard of that? Are people perhaps confusing auto-translate with duplicate content?

A: What’s that? I’m pretty sure there’s no such thing as a translate penalty. Then again, if you use translation programs, Google might see that as auto-generated content, which the webspam team will then deal with. But in principle there’s no issue with translated content. Auto-generated content without added value is more likely to be the problem.

Server location (05.11.2015, 08:59)

Q: Our web host is moving its server farm from Munich to France. We have good local rankings in the Munich area. Could the move influence our rankings?

A: No, it shouldn’t be a problem at all. Just make sure no 404 errors crop up during the move.

PageSpeed (11.05.2015, 14:57)

Q: What’s Google’s current benchmark for recommended optimal load times?

A: There’s no exact figure. We try to differentiate between “normal” load time (quite a wide range) and extremely slow load times. We can’t guarantee that a page will rank better just because it loads a few tenths of a second quicker. But obviously users can achieve more on a page which loads quickly and are therefore more likely to recommend it. So there’s an indirect effect on ranking because of positive user experience.

Noindex (06.11.2015, 23:43)

Q: A shop has similar content just with different colours. The URLs are therefore also similar. Should all versions of a product except for one be set to noindex in order to avoid a penalty?

A: That can only lead to a penalty when the content is bad. But in principle such things aren’t a problem in e-commerce. Google might put them all together on SERPs, but there’s no need to use noindex.
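If you do want to consolidate the variants yourself rather than leave it to Google, `rel="canonical"` is the usual tool – a sketch with a hypothetical URL, placed in the head of each colour-variant page to point at the main product page:

```html
<link rel="canonical" href="https://shop.example/trousers/">
```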

Forum (01.12.2015, 04:53)

Q: A friend has a forum and traffic has been decreasing significantly since 2012. I think it’s something to do with the Panda update, but how can webmasters like him be responsible for user-generated content?

A: A webmaster is always responsible for his website. If you have a forum with bad user-generated content, that’s your bad content. You have to deal with it just as others do, even though it’s more difficult on a large forum. Perhaps much of it can be set to noindex?

SSL certificates (03.12.2015, 15:03)

Q: Is there any difference between different SSL certificates when it comes to rankings? Or is it just important to have one?

A: At the moment there is no difference between the different versions. As long as it’s modern and up to date, there’s no problem. The important thing is that the page is accessible via HTTPS.