Google: pages should show what real people want to see
September 15, 2016

Image from jasoneppink, used under an Attribution License
There are two main things you can do to improve your website’s SEO.
Part One: the “marketing” side
One part is all about marketing in the best sense of the word:
- identifying the people you want to reach
- identifying how they think about your products, services, ideas, or interests
Then creating content that:
- meets their interests
- speaks to their needs
- speaks to them in terms they’re familiar with
Yeah, yeah, you’re saying, but what about what Google and other search engines want to see?
Well, that really is what they want to see! Really! That said…
Part Two: the “structural” side
Here’s the other part: what you can do to help search engines understand your pages. These recommendations come straight from Google’s Webmaster Guidelines page:
- Create a useful, information-rich site, and write pages that clearly and accurately describe your content.
- Think about the words users would type to find your pages, and make sure that your site actually includes those words within it.
- Ensure that your `<title>` elements and `alt` attributes are descriptive, specific, and accurate (see the markup sketch after this list).
- Design your site to have a clear conceptual page hierarchy.
- Follow our recommended best practices for images, video, and structured data (see the structured-data sketch after this list).
- When using a content management system (for example, Wix or WordPress), make sure that it creates pages and links that search engines can crawl.
- To help Google fully understand your site’s contents, allow all site assets that would significantly affect page rendering to be crawled: for example, CSS and JavaScript files that affect the understanding of the pages (see the robots.txt sketch after this list). The Google indexing system renders a web page as the user would see it, including images, CSS, and JavaScript files. To see which page assets Googlebot cannot crawl, or to debug directives in your robots.txt file, use the blocked resources report in Search Console and the Fetch as Google and robots.txt Tester tools.
- Allow search bots to crawl your site without session IDs or URL parameters that track their path through the site. These techniques are useful for tracking individual user behavior, but the access pattern of bots is entirely different. Using these techniques may result in incomplete indexing of your site, as bots may not be able to eliminate URLs that look different but actually point to the same page (a canonical link can help here; see the sketch after this list).
- Make your site’s important content visible by default. Google is able to crawl HTML content hidden inside navigational elements such as tabs or expanding sections; however, we consider this content less accessible to users, and believe that you should make your most important information visible in the default page view.
- Make a reasonable effort to ensure that advertisement links on your pages do not affect search engine rankings. For example, use robots.txt or `rel="nofollow"` to prevent advertisement links from being followed by a crawler (see the nofollow sketch after this list).
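To make a few of these guidelines concrete, here are some quick sketches. The site names, paths, and URLs in them are invented for illustration. First, descriptive `<title>` elements and `alt` attributes:

```html
<head>
  <!-- A specific, accurate title beats a generic one like "Home" -->
  <title>Hand-Thrown Stoneware Mugs | Jane's Pottery Studio</title>
</head>
<body>
  <!-- alt text describes the image for search engines (and screen readers) -->
  <img src="blue-glaze-mug.jpg"
       alt="Stoneware mug with a speckled blue glaze and a curved handle">
</body>
```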
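For structured data, one format Google accepts is JSON-LD embedded in the page. A minimal sketch using schema.org’s Product type; the values are placeholders, and a real listing would carry more properties:

```html
<!-- Structured data describing the page's product for search engines -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Stoneware Mug",
  "description": "Hand-thrown stoneware mug with a speckled blue glaze.",
  "image": "https://example.com/images/blue-glaze-mug.jpg"
}
</script>
```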
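On the crawling point, the usual culprit is a robots.txt file that blocks the CSS and JavaScript Google needs to render your pages. A sketch, assuming your assets live under a hypothetical /assets/ directory:

```text
# robots.txt: let crawlers fetch the files needed to render pages
User-agent: *
Allow: /assets/css/
Allow: /assets/js/

# A rule like the following would hide your styling and scripts from
# Googlebot, so it couldn't see the page the way a visitor does:
# Disallow: /assets/
```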
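For the duplicate-URL problem (session IDs and tracking parameters), the guideline’s first advice is simply not to serve those parameters to bots. A widely used complement, not quoted above but well supported by Google, is a canonical link that names the one “real” address of the page. The URL here is hypothetical:

```html
<!-- Served on /mugs?sessionid=abc123 and every other parameterized variant,
     this tells search engines which single URL represents the page -->
<link rel="canonical" href="https://example.com/mugs">
```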
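And for advertisement links, the `rel="nofollow"` attribute tells crawlers not to follow the link or pass ranking credit through it. The advertiser URL is a placeholder:

```html
<!-- A paid link marked so it doesn't affect search engine rankings -->
<a href="https://advertiser.example.com/offer" rel="nofollow">Shop the sale</a>
```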
Here at Real Basics we can’t really help you much with your marketing decisions, but! We can help you with the technical and structural side.