How to create a technical SEO audit GroupM Search EMEA

1 How to create a technical SEO audit GroupM Search EMEA 2011-06-10
Lihem Zeru | Mobile: +33 (0) | 32 rue Guersant, Paris Cedex 17, France

2 Introduction to this document
This document was created to simplify and assist your work when creating a technical audit. Before you continue reading, open these documents (click on the icons to open the files):
1) Excel file: Template - SEO Technical Audit Scorecards Builder
2) Word file: Template - Your Technical Audit (where you will create your tech audit)
3) PowerPoint file: MSF SEO Toolkit – how to access
4) Bookmarks for online tools: Bookmarks SEO Tech Audit

3 Introduction to this document
This "How to" document is divided into 7 subjects:
1. Overall Summary
2. Domain Name
3. Hosting Review
4. Visitor Handling
5. Bot Handling and Sitemaps
6. URLs and Error Handling
7. On Page Elements
Each subject is presented with an image of the associated part of the scorecard ("1 Overall Summary" is an exception). For each step you'll have: "What to do", "Best Practice" and "Example". The image shows in detail each topic that will be presented. Pay attention to the blue boxes! They will let you know what tools to use and give other important tips.

4 To have before you start:
Firefox add-ons you need to have installed:
- Web Developer
- SEO Status PageRank/Alexa Toolbar (shows cache) - US/firefox/addon/seo-status-pagerankalexa-toolb/
- Quick Locale Switcher - switcher/
Tip: To add an Excel file with an icon, click on Insert → Object → Create from File → Browse to your document → tick the box "Show as an icon" → OK

5 Table of Contents
1 OVERALL SUMMARY
2 Domain review
2.1 Domain Name
2.2 Domain Age
2.3 Domain Content History
2.4 Mirrored Domains
2.5 Redirected Domains
3 Server review
3.1 Server Type
3.2 Server Hosting Neighbourhood
3.3 Server Location
3.4 Server Header
4 Visitor handling
4.1 Browser Compatibility
4.2 GeoLocation Handling
4.3 Language Handling
4.4 Graceful Degradation
4.4.1 No Flash Plugin
4.4.2 Disabled JavaScript
4.4.3 Cookies Disabled
5 Bot Handling and Sitemaps
5.1 Googlebot Handling
5.2 Crawlability Directives
5.2.1 Robots.txt Content
5.2.2 Meta Robots Tag
5.3 Sitemaps
5.3.1 XML Sitemaps
5.3.2 HTML Sitemap
5.4 Bot accessibility
5.4.1 Non Crawlable Links
5.4.2 Non Indexable Content
5.5 Google Current Index
5.5.1 Last Time Crawled
5.5.2 Indexed Pages Count
6 URLs and Error Handling
6.1 URL Header Codes
6.1.1 Redirects and Errors
6.2 URL Format
6.2.1 Session ID Parameter Presence
6.2.2 Static or Dynamic
6.2.3 Excessive URL Parameters
6.2.4 Folder Depth
6.3 Canonicalization
6.3.1 Domain Canonicalization
6.3.2 Folder Canonicalization
6.3.3 Duplicate Content
6.4 Error handling (404)
6.4.1 Error Page Presence
6.4.2 Error Page Header Code
6.4.3 Error Page Content and Markup
6.4.4 Error Page Link Check
7 On Page Elements
7.1 Document Declaration
7.2 Title and Meta Tags Presence
7.3 RSS / Atom Presence
7.4 HTML Markup
7.5 Website Assets
7.6 Performance Review
7.7 Risky Techniques

6 1 Overall Summary
To do at the end of this tech audit, since you need a completed scorecard. Tools needed to complete this part: SEO toolkit, Template scorecard builder.

7 1 Overall Summary What to do? Add the information you see below:
Also add the finished scorecard you have worked out in the xls document: Template – SEO Technical Audit Scorecard Builder – sheet "Document Level". This part should be added when you have finished the technical audit. This review card summarizes each section of this document with a grade out of 5. Here is what each grade means: (The breakdown of each line can be found later in the document.)

8 2 Domain review Tools needed:
Online tools - for bookmarked URLs: Click HERE

9 2.1 Domain Name What to do? Best Practice
This is done by a visual check. Go to the client website and ask yourself: Is the domain name relevant to the content and the audience they would like to reach? Where is the content targeted (country/countries) and what language/languages are available? Best Practice: The best practice is when the domain name corresponds with the activity or purpose of the site. This part needs to be included in the tech audit but does not have a big effect in terms of SEO, so don't put much effort into deciding what score to give the domain name. In a tech audit we will almost never give a score lower than 3 for the domain name. What is a domain name? Domain names are hostnames that identify Internet Protocol (IP) resources such as websites. Domain names are formed by the rules and procedures of the Domain Name System (DNS). The top-level domains (TLDs) are the highest level of domain names on the Internet. They form the DNS root zone of the hierarchical Domain Name System. Every domain name ends in a top-level or first-level domain label. Below the top-level domains in the domain name hierarchy are the second-level domain (SLD) names. These are the names directly to the left of .com, .net, and the other top-level domains. As an example, in the domain en.wikipedia.org, wikipedia is the second-level domain.

10 2.1 Domain Name Example Something that can be interesting to consider:
The best practice explained above would not apply to brand domain names like nike.com, shell.com etc. With these domains the score would of course be high, but mainly because of the strength of the brand name. Another example could be a new or fairly unknown brand name in its sector; in this case the domain name would not correspond to best practice and the score would be lower, but in a tech audit we will never give a score lower than 3. It can also be interesting to consider the country code (.co.uk, .pt, .se etc.), depending on where/in what country the client is active. Example of a reason for a lower score: Shell.com was given a score of 3 because a very important part of their business, core business information, was attached to their corporate pages and not their main pages. We suggested that this part be moved to its own domain, with a domain name that matches its purpose perfectly, and therefore gave a lower score to indicate this problem.

11 2.2 Domain Age What to do? Best Practice
The domain age can be found by going to the bookmarked online tool and pasting the home URL there. How old is the domain? Best Practice: an old domain, an expiration date well in the future (if it is not there, it is important to indicate this), and a recent last-update date. The score here should not be lower than 4; this part needs to be included in the tech audit but does not have a big effect in terms of SEO.
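As a rough illustration, the same WHOIS dates can be pulled programmatically. A minimal Python sketch, assuming the third-party python-whois package is installed; the domain is a placeholder:

# pip install python-whois
import whois

record = whois.whois("example.com")        # placeholder domain
print("Created:", record.creation_date)   # older is better
print("Expires:", record.expiration_date) # should be well in the future
print("Updated:", record.updated_date)    # should be recent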

12 2.2 Domain Age Example Created: 1997-08-27 Expires: 2012-08-26
Updated:

13 2.3 Domain Content History
What to do? The Domain Content History shows how the home page looked in the past. This can be found by going to the bookmarked online tool. Different snapshots can then be viewed and pasted below. Search for the first relevant content and the earliest snapshot.

14 2.3 Domain Content History
Best Practice: Best practice here is vague because it can depend on different factors. If the domain was bought, the history coming with that domain is important to take into consideration. The old history can be divided into two parts: domain age and incoming links/backlinks. The domain age cannot be a harmful factor; the older the age, the better. Incoming links however can be damaging if the domain was involved in a non-relevant industry and has "bad" links pointing to the website. Otherwise this part can, for example, be used to clarify to the client that an update might be needed. The score here should not be lower than 4; this part needs to be included in the tech audit but does not have a big effect in terms of SEO. Example: Content was first found on jobsearch.monster.co.uk in Oct. The screenshot below is from 28th Nov 1999.

15 2.4 Mirrored and Redirected Domains
What to do? This is done by manually checking the web pages. Enter the domain in the Majestic SEO neighbourhood checker. A mirror domain means the exact same site is served under a different domain. A redirected domain means the domain showing in the list redirects to the site we are analyzing. If a domain is neither redirected nor a mirror, it will end up in the Server Neighbourhood section below (3.2). Mirror or redirected domains will appear here:

16 2.4 Mirrored and Redirected Domains
Best Practice: A mirrored domain is any additional domain name that a visitor can use to access your website. It is called 'mirrored' because the website and its content are identical even though it is a different domain name. A mirror domain is a server alias on the hosting server that allows you to access the same site content under more than one URL address. If example.com and example.net point to the same site, they are mirrors. If mirroring is found there is a risk of being viewed as duplicate content. You can also lose link juice that should be going to the main site but is ending up on the mirror. For example, you can have both a .com and a .net site that share the same site content. To a search engine the two addresses mean different webpages, so since there is technically more than one URL that leads to the same text, it will tag it as plagiarized content. This will not only result in low rankings, but there is a strong possibility that your site will be banned from the search engines entirely. It is better to use a redirected domain; there are no risks if you want the same content accessible from different domains. This can be positive in terms of making it easier for traffic to find/get to the site: surfers are automatically redirected to your primary domain instead of seeing identical content at the alternative domains. Example - Main website: Mirror:
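If you want to double-check a suspected mirror, comparing a hash of the two homepages is a quick heuristic. A minimal Python sketch using only the standard library; the two domains are placeholders. Identical hashes are a strong mirror signal, while differing hashes (common on dynamic pages) still warrant a manual look:

import hashlib
import urllib.request

def fingerprint(url):
    # Fetch the homepage and hash the raw body.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

main = fingerprint("http://example.com/")       # placeholder main domain
candidate = fingerprint("http://example.net/")  # placeholder suspected mirror
print("Likely mirror" if main == candidate else "Content differs")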

17 3 Server review Tools needed: Online tools – for bookmarked URLs
Click HERE

18 3.1 Server Type What to do? A server is a computer that is connected to the Internet 24/7, stores files, and makes those files available to you (the surfer) when requested. The server host can be found out here. Source:

19 3.1 Server Type Example Answer:
You will simply copy and paste this information: Hosting: Kimberly-Clark Corporation hosts the domain poise.com IP Address: Name Servers: ns1.kcc.com, ns2.kcc.com

20 3.2 Server Hosting Neighborhood
What to do? A reverse IP domain check takes a domain name or IP address pointing to a web server and searches for other sites known to be hosted on that same web server. Enter the domain in the Majestic SEO neighbourhood checker. This section will mention any domain reviewed in 2.4 that is neither redirecting to nor mirroring the analyzed site content. Sites probably in the same datacenter (not reviewed in this audit). Sites on the same server.

21 3.2 Server Hosting Neighborhood
Best Practice: Best practice here is to determine the quality of the server. Preferably the server neighbourhood would include websites from your client's own company, for example. A bad server neighbourhood would be one made up of unrelated sites owned by somebody else.

22 3.3 Server Location What to do? Where is the server located?
The location can be found here:

23 3.3 Server Location Best Practice Example Answer:
For SEO purposes it is useful to locate servers in the country you wish to serve. For example, if you are targeting the UK market, and thus UK search engines, it can be advantageous to host your website in the UK. Hosting in your target country may also increase customer satisfaction, as web pages will load quicker and have faster response times. This can have a substantial influence on search engine results; it will affect the position on the search engine results pages. This, combined with your domain name suffix (.com, .co.uk, etc.), can determine which geographical version of Google your results will best appear in. If you have a generic .com, .org or similar domain that is not country specific like a .co.uk, the server location will be used to determine the location of the website's business. Example Answer: Servers are located in Maynard, Massachusetts, in the United States. The location is outside the country and market you are targeting. (Source: MaxMind)
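The lookup can also be scripted. A minimal Python sketch, assuming the geoip2 package and a locally downloaded MaxMind GeoLite2 database file; the file path and domain are placeholders:

import socket
import geoip2.database  # pip install geoip2

ip = socket.gethostbyname("example.com")               # resolve the host to an IP
reader = geoip2.database.Reader("GeoLite2-City.mmdb")  # assumed local DB path
resp = reader.city(ip)
print(ip, resp.country.iso_code, resp.city.name)       # e.g. US, Maynard
reader.close()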

24 3.4 Server Header What to do?
Check server headers and verify HTTP status codes. There are seven HTTP status codes that we are primarily interested in from an indexing and search engine marketing perspective. It is recommended to verify that the URIs return the proper Status-Code in the server header. The server header can be checked here: or Copy and paste the header returned in this section:

25 3.4 Server Header Best Practice Example
The HTTP protocol is a request/response protocol. A client sends a request to the server in the form of a request method, URI, and protocol version, followed by a MIME-like message containing request modifiers, client information, and possible body content over a connection with a server. The server responds with a status line, including the message's protocol version and a success or error code, followed by a MIME-like message containing server information, entity meta-information, and possible entity-body content. Example:
HTTP/1.1 200 OK
Date: Mon, 07 Mar :30:38 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version:
X-AspNetMvc-Version: 1.0
Cache-Control: private
Expires: Mon, 07 Mar :30:38 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 16015
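The same header dump can be produced with a short script. A minimal Python sketch, assuming the requests package and a placeholder URL; allow_redirects=False keeps a raw 301/302 visible instead of following it:

import requests  # pip install requests

resp = requests.head("http://example.com/", allow_redirects=False, timeout=10)
print(resp.status_code, resp.reason)  # e.g. 200 OK
for name, value in resp.headers.items():
    print(f"{name}: {value}")
# Note: a few servers treat HEAD differently from GET; retry with requests.get if in doubt.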

26 4 Visitor handling Tools needed:
Online tools - for bookmarked URLs Firefox add-ons Click HERE

27 4.1 Browser Compatibility
What to do? This is to check that the web page renders properly in different browsers and different versions of those browsers. This can be checked here: Enter the domain with the default browsers set, click enter and wait for the screenshots to appear. We will only check the homepage – compare to see if the website you see is the same in all the other browsers. Once you have reviewed the screenshots: if everything is OK, just state that all is good (no screenshot to add). If there is a problem, add the screenshot and the name of the browser for reference.

28 4.1 Browser Compatibility
Best Practice: Best practice is when the homepage looks the same in all the browser screenshots you get in the results. Example: Jobsearch.monster.co.uk renders well in all major browsers.

29 4.2 GeoLocation Handling What to do?
GeoLocation handling is how the site responds to the geographic location assigned to a visitor. This can be checked by going to the bookmarked proxy tool: choose a location other than the country/countries the client website targets, open the site and type in the URL of your client's website to get an image of how the site looks from another country.

30 4.2 GeoLocation Handling Best Practice Example
Best practice is when a site that targets different markets and countries redirects a visitor to the version that corresponds to the visitor's country and language. Example: When reaching the site, no redirection is made based on the client IP.

31 4.3 Language Handling What to do?
This is how the web page responds to the language settings of the browser. This can be checked using Firefox: change the language of the browser, then navigate to the home page. Quick Locale Switcher -

32 4.3 Language Handling Best Practice Example
Best practice is when the web page responds to the language settings of the browser. Example: When reaching the site, no redirection is made based on the browser language settings.
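Besides the Firefox add-on, the Accept-Language behaviour can be probed directly. A minimal Python sketch, assuming the requests package and a placeholder URL; a 301/302 with a Location header would indicate language-based redirection:

import requests

url = "http://example.com/"  # placeholder
for lang in ("en-GB", "fr-FR", "sv-SE"):
    resp = requests.get(url, headers={"Accept-Language": lang},
                        allow_redirects=False, timeout=10)
    print(lang, resp.status_code, resp.headers.get("Location", "-"))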

33 4.4 Graceful Degradation 4.4.1 No Flash Plugin What to do?
This is to test how the home page performs without Flash. Adobe Flash (formerly Macromedia Flash) is a multimedia platform used to add animation, video, and interactivity to Web pages. Flash is frequently used for advertisements and games. More recently, it has been positioned as a tool for "Rich Internet Applications" ("RIAs").

34 4.4.1 No Flash Plugin What to do?
There are add-ons like 'Image and Flash Blocker' or 'Flashblock' that can be installed to block Flash on a site, BUT what you will see in that case is the Flash box in grey. What we want to see is how a website looks in a browser that doesn't have Flash installed at all. The proper way to check this is to uninstall the Flash ActiveX control, if you can, and then use Internet Explorer.

35 4.4.1 No Flash Plugin Best Practice Answer Example
Best practice is when pages perform perfectly without Flash. Answer Example: When Flash is disabled or not available, the area to the left below the search bar and the advertisement below 'my saved searches' fail to load; however, these are replaced by static equivalents that can be crawled. This is problem-free from an SEO perspective and job searches can still be completed by users. These areas are highlighted in red.

36 4.4.2 Disabled JavaScript What to do?
This is to test how the page performs without JavaScript. To check this, disable JavaScript in Firefox, navigate around the web site and note any problems.

37 4.4.2 Disabled JavaScript Best Practice Example
Best practice is when you can navigate around the website perfectly with JavaScript disabled. Example: With JavaScript disabled, some elements on the homepage are not displayed. Furthermore, this renders all AJAX content inoperable, which precludes any possibility of completing a job search, but does allow for limited browsing of footer sections such as career advice. Screenshot of homepage with JavaScript disabled:

38 4.4.3 Cookies Disabled What to do?
To check this, disable cookies, then check that the site can be browsed without any problems.

39 4.4.3 Cookies Disabled Best Practice Example
A cookie, also known as a web cookie, browser cookie, or HTTP cookie, is a piece of text stored by a user's web browser. A cookie can be used for authentication, storing site preferences, shopping cart contents, the identifier for a server-based session, or anything else that can be accomplished through storing text data. A cookie consists of one or more name-value pairs containing bits of information, which may be encrypted for information privacy and data security purposes. The cookie is sent as an HTTP header by a web server to a web browser and then sent back unchanged by the browser each time it accesses that server. Cookies, like a cache, can be cleared to restore file storage space. If not manually deleted by the user, cookies usually have an expiration date associated with them (established by the server that set them). Once that date has passed, the cookies stored by the website will automatically be deleted. Best practice is when a website can be browsed with no problems with cookies disabled. Example: The site can be browsed with cookies disabled.

40 5 Bot Handling and Sitemaps Tools needed in this section:
Online tools - for bookmarked URLs Firefox add-ons Click HERE (only for 5.1) SEO toolkit: all queries starting with 5

41 5.1 Googlebot Handling What to do?
Googlebot is the search bot software used by Google (such bots are also called crawlers or spiders), which collects documents from the web to build a searchable index for the Google search engine. To test how the web site handles this, switch your browser's user agent to Googlebot and then refresh the page.

42 5.1 Googlebot Handling Best Practice Example
Best practice is when the crawlers get all the information that you want them to retrieve from the website. Example: To see an example, open the URL and check the cache date by right-clicking on the Firefox add-on SEO Status PageRank/Alexa Toolbar: you won't see the same as before. Answer example: No specific Googlebot handling was found in the sampled crawl.
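A quick scripted cross-check for user-agent specific handling (a cloaking risk) is to fetch the page twice with different User-Agent strings and compare the bodies. A minimal Python sketch, assuming the requests package and a placeholder URL:

import hashlib
import requests

url = "http://example.com/"  # placeholder
agents = {
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "browser": "Mozilla/5.0",
}
for name, ua in agents.items():
    resp = requests.get(url, headers={"User-Agent": ua}, timeout=10)
    digest = hashlib.sha256(resp.content).hexdigest()[:12]
    print(name, len(resp.content), digest)
# Markedly different sizes or hashes suggest bot-specific content is being served.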

43 5.2 Crawlability Directives
5.2.1 Robots.txt Content What to do? Open the SEO Analytics Toolkit and your Site Analysis Report (see also SEO toolkit – how to access.ppt). Then execute the query "5.2.1-Robots.txt Content" and view details. Copy and paste the robots.txt content and comment on what you found there.

44 5.2.1 Robots.txt Content What to do? Best Practice
To take this information into your audit, simply mark the area, press Ctrl+C and paste it into an Excel sheet or directly into your Word doc. Best Practice: A robots.txt file should only be used if the site includes content that the site owner doesn't want search engines to index. If you want the search engines to index everything on the site, there is no need for a robots.txt file (not even an empty one).

45 5.2.1 Robots.txt Content Example
The following content is disallowed in the robots.txt file. These appear to be search query URLs used to list all available positions from a particular organisation. User-agent: * Disallow: /ManageSavedSearch Disallow: */DoNotAddToP4/UserControls/ Disallow: /Search.aspx?co=* Disallow: /Search.aspx?pg=2&co=* Disallow: /Search.aspx?pg=3&co=* Disallow: /Search.aspx?pg=4&co=* Disallow: /Search.aspx?pg=5&co=* Disallow: /Search.aspx?pg=6&co=* Disallow: /search.aspx?pg=12&co=* Disallow: /search.aspx?pg=13&co=* Sitemap:
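To sanity-check which paths a robots.txt actually blocks, Python's standard-library parser can be used. A minimal sketch with a placeholder domain; note that urllib.robotparser follows the original robots.txt rules and does not understand Google's wildcard (*) extensions, so patterns like those above still need a manual read:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://example.com/robots.txt")  # placeholder domain
rp.read()
for path in ("/", "/ManageSavedSearch"):
    url = "http://example.com" + path
    print(path, "allowed" if rp.can_fetch("*", url) else "disallowed")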

46 5.2.2 Meta Robots Tag 5.2.2.1 Meta Robots Tag Presence What to do?
Are they used on the site? Where can we find them? Open a new query and execute "Meta Robots Tag Presence", then export the URLs. To take this information into your audit: mark the area by clicking on the first line, scroll down to the last line and hold Shift while clicking the last line. Now that you have marked everything, press Ctrl+C and paste it into an Excel sheet or directly into your Word doc.

47 5.2.2.1 Meta Robots Tag Presence
Best Practice: The robots meta tag is meant to provide users who cannot upload or control the robots.txt file at their websites with a last chance to keep their content out of search engine indexes. Robots can ignore your <META> tag. It is important to remember that the NOFOLLOW directive only applies to links on the page that carries it. It is likely that a robot might find the same links on another page without a NOFOLLOW (perhaps on some other site), and still arrive at the undesired page.

48 5.2.2.1 Meta Robots Tag Presence
Answer example: The robots meta tag was found on the following 181 pages. These were themselves randomly sampled and the tag is likely to be the same for additional pages. The tag is displayed as follows: <meta name="ROBOTS" content="NOINDEX,FOLLOW" /> These pages are almost exclusively related to the 'South-East-Southern' region and return a 404 header code, suggesting that the code served in the event of a 404 server response is the source of this tag. A full list of these pages can be found in the file below (to create this icon please see the 3rd slide or click HERE). Overall this content represents a small fraction of the overall content on the site; however, if these pages are intended to be indexed by robots and available in search engines, then these tags should be removed immediately.

49 5.2.2.2 NoIndex and Nofollow Tag Directives
What to do? Open a new query and execute "Nofollow", then export the URLs. Open a new query and execute "NoIndex", then export the URLs. Best Practice: If you want to prevent Googlebot from following any links on a page of your site, you can use the nofollow meta tag. Noindex meta tagging allows you to control access to your site on a page-by-page basis. When Google sees a noindex meta tag on a page, Google will completely drop the page from its search results, even if other pages link to it. Other search engines, however, may interpret this directive differently; as a result, a link to the page can still appear in their search results.

50 5.2.2.2 NoIndex and Nofollow Tag Directives
Answer example: The NoIndex META robots tag appears to only be served when a 404 server response is encountered. The META robot NOFOLLOW directive was not encountered in any page of the site.

51 5.2.2.3 NOYDIR and NOODP Tag Directives
What to do? Open a new query and execute "NOODP", then export the URLs. Open a new query and execute "NOYDIR", then export the URLs. Best Practice: If the website is listed in DMOZ (ODP), the search engines may display snippets of text about your site taken from there instead of the description meta tag. To force the search engine to ignore the ODP information, include a robots meta tag like this: <meta name="robots" content="noodp" /> For Yahoo: <meta name="robots" content="noydir" />. Answer example: The NOYDIR and NOODP directory tags are not used on this site. These are recommended from the perspective of tailoring the site snippet included in the search engine results pages.

52 5.2.2.4 Canonical Tag What to do? Best Practice Answer example:
Open a new query and execute "Canonical Tag", then export the URLs. Best Practice: The canonical tag helps webmasters and site owners eliminate self-created duplicate content in the index. The tag tells Yahoo, Live and Google that the page in question should be treated as though it were a copy of the canonical URL, and that all of the link and content metrics the engines apply should flow back to that URL. Answer example: Canonical mark-ups are not used. These should be considered, especially for content that is accessible through URLs with differing capitalisation.
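Canonical tags can also be extracted per page with a small script. A minimal Python sketch using only the standard library, with a placeholder URL:

from html.parser import HTMLParser
import urllib.request

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

with urllib.request.urlopen("http://example.com/", timeout=10) as resp:  # placeholder
    finder = CanonicalFinder()
    finder.feed(resp.read().decode("utf-8", errors="replace"))
print("Canonical URL:", finder.canonical)  # None means no canonical tag was found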

53 5.3 Sitemaps 5.3.1 XML Sitemaps 5.3.1.1 XML Sitemap Presence
What to do? Open a new query and execute "XML Sitemap Presence", then export the URLs. If you can't find it there, try appending "/sitemap.xml" to the root of the domain. Important: this file could be referenced anywhere on the server from the Google Webmaster Tools account. Thus, we will never be able to say "no sitemap.xml is used" – we will rather say "we were not able to find any XML sitemap".

54 5.3.1.1 XML Sitemap Presence Best Practice Answer example:
The XML sitemap protocol is specifically designed for spiders. In its simplest form, an XML sitemap is a 'behind the scenes' XML file that lists URLs for a site along with additional metadata about each URL. It is a map of all links in a site, uploaded to the root directory of a website, invisible to visitors. XML sitemaps are accessible to all major search engines, so they can more intelligently crawl the site through both top and deep level links. Answer example: An XML/ASHX sitemap was found at the following location. This contains a further 15 XML/ASHX sitemaps with content distributed as follows:

55 5.3.1.2 XML Sitemap Fields What to do? Best Practice
If you found the XML sitemap, review the fields used. Is any field missing? Are they all populated correctly? Best Practice: The sitemap must: begin with an opening <urlset> tag and end with a closing </urlset> tag; specify the namespace (protocol standard) within the <urlset> tag; include a <url> entry for each URL, as a parent XML tag; include a <loc> child entry for each <url> parent tag. All other tags are optional; support for these optional tags may vary among search engines. Refer to each search engine's documentation for details. For more best practices see:

56 5.3.1.2 XML Sitemap Fields Answer example:
The following fields are populated in the XML sitemap: Loc, lastmod, changefreq, priority. For every entry these fields are as follows: This gives the impression of a site that is less dynamic in its sitemap than it is likely to be in reality.

57 5.3.1.3 XML Sitemap Link Check What to do?
Open the page details screen if the sitemap was identified by the Microsoft SEO Toolkit, or check the number of links included in it and compare it with the number of HTML pages. The number of links should be the same as the number of HTML pages. Count the number of links and compare it with the number of links in the actual sitemap: 54 links vs 47 links

58 5.3.1.3 XML Sitemap Link Check Best Practice Answer example:
The number of links should be the same as the number of HTML pages. Answer example: There are 54 XML sitemap links found and only 47 links in the HTML sitemap; 7 links are missing from the HTML sitemap.
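The comparison can be automated by parsing the XML sitemap and diffing it against the crawled page list. A minimal Python sketch using the standard library; the sitemap URL is a placeholder and html_links would come from the crawler export:

import urllib.request
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_url):
    # Collect every <loc> entry from the sitemap.
    with urllib.request.urlopen(sitemap_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return {loc.text.strip() for loc in root.iter(NS + "loc")}

xml_links = sitemap_urls("http://example.com/sitemap.xml")  # placeholder
html_links = set()  # fill with the URLs exported from the SEO toolkit crawl
print(len(xml_links), "links in the XML sitemap")
print("In the crawl but missing from the sitemap:", sorted(html_links - xml_links))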

59 5.3.2 HTML Sitemap 5.3.2.1 HTML Sitemap Presence What to do?
Open a new query and execute "HTML Sitemap Presence", then export the URLs. This query looks for a page name that includes "sitemap". If you are not successful with the Microsoft SEO Toolkit query, you can scan the pages with the most links (one of these should be the HTML sitemap page).

60 5.3.2.1 HTML Sitemap Presence Best Practice Answer example:
An HTML sitemap is an actual page on a site containing links to the website's most important pages. Any site with anything approaching complex navigation should consider an HTML sitemap for visitors. Whilst an HTML sitemap can be read and the links followed by spiders (so some passing of link juice is involved that might affect SEO), it also has the advantage of being readable by humans. HTML sitemaps however offer only a general guide to a site, an overview, not necessarily containing all the links available to spiders on a website. This is especially the case for websites that contain dynamically generated pages (shopping carts, etc.). Answer example: No dedicated HTML sitemap was found. This should be considered.

61 5.3.2.2 HTML Sitemap Link check
What to do? If you didn't find a query, check the link status column and find the sitemap (normally one of the pages with the most links) and double-click on it to get more information. Make a screenshot if possible. Also check if the links are OK; if not, list the ones that are not (e.g. Found, Moved Permanently etc.). Open the page details screen and check the Link Status column.

62 5.3.2.2 HTML Sitemap Link check
Best Practice: Best practice is when the links are consistent with the content on the pages. Also check that pages where users could complete an action can conceivably be searched for; a goal is to benefit from a specific SEO-friendly landing page. An XML/ASHX sitemap was found at the following location. This contains a further 15 XML/ASHX sitemaps with content distributed as follows: A summary of the contents of these files is available below:

63 5.3.2.3 HTML Sitemap Markup review
What to do? Check the Content tab in the page details and comment on the HTML you reviewed. Also check the Violations tab; it contains all the errors identified. Tip: click on a violation to get a description of that particular violation.

64 5.3.2.3 HTML Sitemap Markup review
Best Practice: Best practice here would be not to have any violations, and if there are any, verify how serious the violations are in terms of scoring and also explain what is wrong and how it can be corrected. Answer example: The page contains a large (1,389 character) block of embedded Cascading Style Sheet (CSS) code. Search engines will ignore CSS code, but large quantities of CSS code that precede the actual text content of the page will force the text content further down in the HTML. Since search engines may analyze only the first 100 KB of a page, it is possible that the CSS block may prevent search engines from indexing any page content. Best would be to move large blocks of CSS code to externally linked CSS files. Search engines often parse text so that words that appear earlier in a sentence are weighted higher than words that appear near the end of a sentence. Page relevancy is calculated from the use of important keywords that describe the page content. A page about a specific topic should use a keyword related to that topic at the beginning of the <title> tag instead of using a site name or brand name ("bauhaus"), because those do not describe the contents of the page. Place the targeted search term for the page at the front of the <title> tag. Do not use branding terms at the front of the <title> tag, because they are not directly related to the keyword for a given page.

65 5.4 Bot accessibility 5.4.1 Non Crawlable Links
5.4.1.1 No Follow Links What to do? Open a new query and execute "No Follow Links", then view details. Then click on the Links tab:

66 5.4.1.1 No follow Links Best Practice Answer example:
rel="nofollow" is an attribute you can set on an HTML <a> link tag. Those links won't get any credit when Google ranks websites in the search results, thus removing the main incentive behind blog comment spam robots. Source: Answer example: The nofollow directive is not used directly in links contained in the site. We didn't encounter any link of the form <a href="destination" rel="nofollow">anchor text</a>

67 5.4.1.2 JavaScript Links What to do? Best Practice Answer example:
Open a new query and execute "JavaScript Links", then export the URLs. Best Practice: Search engines were programmed to read plain HTML. A JavaScript link is meant to trigger an action on the page. Since the target of a JavaScript link tells search engines that it is going to 'javascript:void(0)' instead of an actual page, it won't be able to pass authority the same way through the link. Example of scoring: score 4 if only a print button is in JavaScript; score 2 if navigational links are made solely with JavaScript. Answer example: JavaScript links are used throughout the site; however, these have only been identified as directly detrimental to SEO efforts in the sitemap 'Browse for a Job'. Aside from this, all other JavaScript links are used in sections where bot accessibility is hindered anyway due to required login details.

68 5.4.1.3 Image Map Links What to do? Best Practice Answer example:
Open a new query and execute "Image Map Links", then export the URLs. Best Practice: It is true that many search engines cannot or will not follow the links inside an image map. One of the main reasons for this is the potential for spam abuse. However, image maps serve a valuable purpose for site visitors: image maps can be the most usable navigational element on a web page, and a single image map can often download much more quickly than a large set of navigation buttons. For examples of scoring, see the example scoring for JavaScript Links. Answer example: 3 HTML pages have image maps.

69 5.4.2 Non Indexable Content 5.4.2.1 Flash What to do? Best Practice
Open a new query and execute "Flash", then export the URLs. Best Practice: Flash is difficult to crawl. Relevant content is often scattered in the code or otherwise spider-inaccessible. Flash concentrates content: content embedded in a Flash shell won't generate the 'link juice' it would as a sub-page. Flash inhibits backlinking, especially to any content other than the main page. Fundamental SEO elements (like HTML tags) are not commonly used in Flash content. Answer example: Flash is used as a video player (88 pages).

70 5.4.2.2 Images What to do? Best Practice Answer example:
Check the site for images that contain a lot of text. This has to be done manually, browsing the site. Best Practice: Best practice would be to have the text as HTML-formatted text rather than embedded in the image itself. Answer example: This text is not HTML formatted; it is embedded in the image itself.

71 5.4.2.3 JavaScript/AJAX What to do? Best Practice Answer example:
This can only be checked visually. Best Practice: As with JavaScript, the search engines can't see any AJAX-injected information on a site. Answer example: We didn't find a good example, but if you have or come across any site that would be a good AJAX example, please let me know.

72 5.4.2.4 Frames What to do? Best Practice Answer example:
Open a new query and execute "Frames" and "IFrames", then export the URLs. We are looking to spot the pages that contain a <frame> or <iframe>. Best Practice: A frame creates a window in a web page to display content through. Since search engines like Google can't see inside the frame, they just think you have the same page duplicated over and over. Even if there is a wealth of great content, it might as well be locked in the basement if it is in a frame. Other bad things: it is difficult to code a site in frames due to browser compatibility, and users trying to bookmark a specific inner page find that the home page loads when they access it. Answer example: Flash content is not heavily used on this site. We found one Flash object on the homepage.

73 5.5 Google Current Index 5.5.1 Last Time Crawled What to do?
How to find out the last time Google crawled the website: go to Google and put the name of the web page in the Google search box, then click on the cached link. (There are many other ways, but we recommend this as the best method.) Best Practice - cache date: Last week – Good. Last two weeks – OK. Last 30 days – Not good. More than a month ago – Bad. Answer example: The homepage was last crawled on 31 Mar :22:50 GMT (checked 31/03/ :45 GMT)

74 5.5.2 Indexed Pages Count What to do? Best Practice Answer example:
To find out how many pages Google has indexed, go to Google and insert in the search box "site:" followed by the domain. Best Practice: Indexed pages are the pages Google's algorithm includes in its evaluation of the overall content and quality of the site. In addition to checking the total number of pages indexed, you will want to find out whether the site's most important landing pages have been indexed. Answer example: When checked, Google held 1,460,000 pages from the jobsearch.monster.co.uk domain in its index.

75 6 URLs and Error Handling
Tools needed in this section: Online tools - for bookmarked URLs Firefox add-ons Click HERE (for and 6.4.2) SEO toolkit: all queries starting with 6 and data found under the Summary (e.g. Violations)

76 6.1 URL Header Codes What to do?
Open a new query and execute "6-1.URL Header Codes", then fill in the table below. True and False indicate external or internal URLs.
               Total  OK  301  302  404
Internal URLs
External URLs

77 6.1 URL Header Codes Best Practice
What does the status code mean? OK means the URL resolved properly. 301 and 302 mean that the URL redirected the user to another URL. 404 means the URL references content not found on the server (redirected to the 404 page).
MSFT SEO TK Header Code | HTTP Code | Explanation
OK | 200 | Success
Found | 302 | Redirect - Page temporarily moved
Moved Permanently | 301 | Redirect - Page moved permanently
Not Found | 404 | Error - Page not found
302 Found is an example of industry practice contradicting the standard: it is used to perform a temporary redirect (the original describing phrase was "Moved Temporarily"), but popular browsers implemented 302 with the functionality of a 303, so 303 and 307 were added to distinguish between the two behaviours. However, the majority of web applications and frameworks still use the 302 status code as if it were the 303. Source:

78 6.1 URL Header Codes Answer example:
Here is a breakdown by header code of the URLs found on the site. (External URL status codes were also analysed.) What does the status code mean? OK means the URL resolved properly; 301 and 302 mean that the URL redirected the user to another URL; 404 means the URL references content not found on the server (redirected to the 404 page).
               Total  OK  301  302  404
Internal URLs    86   55   31    -    -
External URLs    28   23    1    4    -

79 6.1.1 Redirects and Errors What to do? Best Practice
Redirects can be found directly under the "Links" option. No query is needed. Best Practice: Not having any redirects is best! 301 / Moved Permanently is OK, since the link juice is passed to the site we want. 302 / Moved Temporarily is bad, since no link juice gets passed.

80 6.1.1 Redirects and Errors Answer example:
The following 302 redirected URL was found: which redirects to The embedded file contains a list of HTML pages that return 404 status codes (broken links). The following broken hyperlinks were encountered: (if you would like to see any files, please ask me to send you the Monster case example)

81 6.2 URL Format 6.2.1 Session ID Parameters Presence What to do?
In this section look at the list found under Content → Content Types Summary → text/html. Click on text/html, then copy this area and paste it into your Excel sheet.

82 6.2.1 Session ID Parameters Presence
Best Practice: Usually, session IDs are used for tracking a single visitor's navigation path through the site. A session ID is a unique number that a web site's server assigns to a specific user for the duration of that user's visit (session). The session ID can be stored as a cookie, form field, or URL. Some web servers generate session IDs by simply incrementing static numbers. However, most servers use algorithms that involve more complex methods, such as factoring in the date and time of the visit along with other variables defined by the server administrator. Every time an Internet user visits a specific web site, a new session ID is assigned. Closing a browser and then reopening it and visiting the site again generates a new session ID. The same session ID is sometimes maintained as long as the browser is open, even if the user leaves the site in question and returns. In some cases, web servers terminate a session and assign a new session ID after a few minutes of inactivity.

83 6.2.1 Session ID Parameters Presence
Example: Verify that parameters like these do not appear in the URLs: sessionID= phpsessionID= Things to think about with session IDs: a new session ID is attached with each new visit, so each time the search engine comes around it is essentially fed all new URLs. On a ten-page site, the second time the search engines visit they will add the "new" 10 pages to the index, for a total of 20 pages, 30 the third time, etc. Once the search engines start analyzing these pages they find page after page after page of duplication. If site visitors start bookmarking and linking to the site, then every link they add contains their very own session ID. The search engines follow that link to the site and now there are another 10 pages of duplication. If they follow another link to the site, that's 10 more.
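Scanning a crawled URL list for session ID parameters is easy to script. A minimal Python sketch; the parameter names and sample URLs are illustrative:

import re

SESSION_PARAM = re.compile(r"(phpsessid|sessionid|jsessionid|sid)=", re.IGNORECASE)

urls = [
    "http://example.com/page?sessionID=abc123",  # placeholder sample URLs
    "http://example.com/page?colour=blue",
]
for url in urls:
    if SESSION_PARAM.search(url):
        print("Session ID parameter found:", url)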

84 6.2.2 Static or Dynamic What to do?
This is a visual check done by looking at the URLs. You can also browse the URLs in the toolkit:

85 6.2.2 Static or Dynamic Best Practice Answer example:
Many times the URL that is generated for the content in a dynamic site includes variable strings (query parameters). A static URL, on the other hand, is a URL that doesn't have variable strings. Answer example: The site makes extensive use of dynamic URLs. These are used to pass parameters related to job searches such as keywords, location and other information.

86 6.2.3 Excessive URL Parameters
What to do? This can be found under the "Violations" section. No query is needed. Best Practice: Having excessive URL parameters is bad for both search and social media. Search engines have limits on the number of characters crawled for each URL.

87 6.2.3 Excessive URL Parameters
Answer example: The 20 URLs listed in the file below were found to have excessive parameters:

88 6.2.4 Folder Depth
What to do? No query is needed. This can be found under Content Types Summary → text/html → check the URL depth. Folder depth is the number of directories included in the URL.

89 6.2.4 Folder Depth
Best Practice: As a rule of thumb, we will flag it if we find a directory depth above 4. Example URL: The page is on level 4, so the folder depth is 3.
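Folder depth can be computed for a whole URL export with a few lines. A minimal Python sketch using the standard library; the sample URL is a placeholder and the threshold matches the rule of thumb above:

from urllib.parse import urlparse

def folder_depth(url):
    # Count path segments, excluding the document itself.
    segments = [s for s in urlparse(url).path.split("/") if s]
    return max(len(segments) - 1, 0)

url = "http://example.com/a/b/c/page.html"  # placeholder: folder depth 3
depth = folder_depth(url)
print(url, "-> folder depth", depth, "FLAG" if depth > 4 else "OK")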

90 6.3 Canonicalization 6.3.1 Domain Canonicalization What to do?
No query is needed. This is to check how the server responds if we enter the domain with no "www". Enter the domain without the www in a header checker tool.

91 6.3.1 Domain Canonicalization
Best Practice: Canonicalization (www vs. non-www, redirects, duplicate URLs, 302 "hijacking", etc.) is the process of picking the best URL when there are several choices, and it usually refers to home pages. For example, most people would consider these the same URLs: example.com/ and example.com/home.asp. But technically all of these URLs are different: a web server could return completely different content for each of them. When Google "canonicalizes" a URL, it tries to pick the URL that seems like the best representative of that set. If a client uses canonicalization, best practice is that it is done with a 301 permanent redirect. It is also good if a website can be found without www. Example: Search engines identify unique pages by using URLs. When a single page can be accessed by using any one of multiple URLs, a search engine assumes that there are multiple unique pages. Use a single URL to reference a page to prevent dilution of page relevance. You can prevent dilution by following a standard URL format.
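The www/non-www behaviour can be verified in one go. A minimal Python sketch, assuming the requests package and placeholder domains; best practice is that one variant answers 200 and the other answers 301 pointing at it:

import requests

for variant in ("http://example.com/", "http://www.example.com/"):  # placeholders
    resp = requests.get(variant, allow_redirects=False, timeout=10)
    print(variant, resp.status_code, resp.headers.get("Location", "-"))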

92 6.3.2 Domain Canonicalization and Duplicate Content
What to do? Open the violation named "The page contains multiple canonical formats." Double-click on "The page contains multiple canonical formats" and "The page specifies more than one canonical URL", then copy the first part containing URLs to an Excel file and attach it in the tech audit. Best Practice: Search engines identify unique pages by using URLs. When a single page can be accessed by using any one of multiple URLs, a search engine assumes that there are multiple unique pages. Use a single URL to reference a page to prevent dilution of page relevance. You can prevent dilution by following a standard URL format.

93 6.3.2 Domain Canonicalization and Duplicate Content
Example: Google sees the same URL with lowercase and uppercase letters as different (Google sees these as 2 different URLs).

94 6.4 Error handling (404) 6.4.1 Error Page Presence What to do?
A 404 should occur when someone visits a URL that doesn't exist on the site. Do a manual check by entering junk (e.g. ksfsdfgyef) after an existing URL of the site, and answer the question "Is there a specific page for users who land on a non-existing page of the domain?" Best Practice: A well designed custom 404 page that returns a 404 header code, with links that guide you back to the homepage and other important pages.

95 6.4.1 Error Page Presence Example
A well designed custom 404 page exists which returns a 404 header code; this is SEO best practice. However, this is done by serving content based on the 404 server code and not through a redirect to a dedicated custom 404 error page. (Add a screenshot of the 404 page.)
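The junk-URL test can be scripted so the header code is captured at the same time. A minimal Python sketch, assuming the requests package; the domain and junk suffix are placeholders:

import requests

resp = requests.get("http://example.com/ksfsdfgyef", allow_redirects=False, timeout=10)
print("Status:", resp.status_code)      # should be 404, not 200 or a 302 to the homepage
print("Body length:", len(resp.text))   # a tiny body suggests no custom 404 content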

96 6.4.2 Error Page Header Code What to do? Best Practice
Go to the header checker tool and enter the URL of the website with anything appended that renders an error. Check the server header and copy and paste the header code found. Best Practice: To know more about the response header fields, please see:

97 6.4.2 Error Page Header Code Example
HTTP/1.1 404 Not Found
Connection: close
Date: Wed, 31 Mar :40:35 GMT
Server: Microsoft-IIS/6.0
P3P: CP=CAO DSP COR CURa ADMa DEVa TAIi IVAi IVDi CONi HISa TELi OUR DELi SAMi BUS PHY ONL UNI PUR COM NAV INT DEM CNT STA PRE
X-Powered-By: ASP.NET
X-AspNet-Version:
Set-Cookie: scsjsv=0; domain=.monster.co.uk; path=/
Set-Cookie: ssljsv=1; domain=.monster.co.uk; path=/
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: text/html; charset=utf-8

98 6.4.3 Error Page Content and Markup
What to do? This is a description of the content and markup on the 404 page. It is done by visually checking the page. Best Practice: Is the 404 page user-friendly, with information in the body area/main content of the page (not only the menu bar)? Example: The 404 page contains the following links along with a job search header and featured jobs panel:

99 6.4.4 Error Page Link Check What to do? Best Practice Example
Manually check that the links on the error page work correctly. This is done by visually checking the page and testing the links. Best Practice: All links should work and link to helpful pages. Example: All links on the custom 404 page work correctly. However, one link, which we believe is not SEO optimized, takes you back to the error page.

100 7 On Page Elements Tools needed in this section:
SEO toolkit: all queries starting with 7 and data found under the Summary (e.g. Violations)

101 7.1 Document Declaration 7.1.1 Doctype Definition What to do?
Run the query "Doctype" and select only the records where status code = OK. Ensure the doctype is used on every HTML page. Each HTML document must begin with a document type declaration that declares which version of HTML the document adheres to. The doctype declaration does not have any influence on the SEO job; however, if it exists, make sure the tag is added on all pages. Otherwise don't worry about it. Read more: DOCTYPE Tag and SEO Best Practice

102 7.1.1 Doctype Definition Example
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" " is the DOCTYPE. It is not a meta tag, and it is not essential to add it to a page for good search engine placement, but if you want a page to validate in an HTML validator you'll need to add the right one.

103 7.1.2 Content Type Definition
What to do? Run the query "Content Type" and select only the records where status code = OK. Ensure the content type is defined for all URLs that have a status code = OK. Best Practice: Without quality, informative content nobody is going to want to visit or link to your site. There are SEO professionals who are unaware of the different types of content out there – it's not just a text-based world anymore – and of how they can be used together to drive traffic, increase rank and generate profit for a website. Example: Examples of content types include "text/html", "image/png", "image/gif", "video/mpeg", "text/css", and "audio/basic". For the current list of registered MIME types, please consult [MIMETYPES].

104 7.2 Title and Meta Tag Presence
7.2.1 Title Tag Presence What to do? This can be found under "Violations" / "Violations Summary". No query is needed. Best Practice: The title must be unique, descriptive, and accurate. The title must be between 5 and 65 characters long. The title should contain keywords that reflect the page content, and it should be easy to read.

105 7.2.1 Title Tag Presence Answer Example
9,906 pages were identified with titles longer than the recommended 65 characters. These can be found in the embedded file below: (Add the xls as an icon.)
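Individual pages can be spot-checked for title length with a short script. A minimal Python sketch using only the standard library and a placeholder URL; it applies the 5-65 character rule given above:

import re
import urllib.request

with urllib.request.urlopen("http://example.com/", timeout=10) as resp:  # placeholder
    html = resp.read().decode("utf-8", errors="replace")

match = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
title = match.group(1).strip() if match else ""
print(repr(title), "-", len(title), "characters")
if not 5 <= len(title) <= 65:
    print("Violation: title should be between 5 and 65 characters")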

106 7.2.2 Meta Description Presence
What to do? This can be found under "Violations" / "Violations Summary". No query is needed. Best Practice: All pages should contain a meta description. The description meta tag is used to provide a short description of the page content. Search engines use the information from the description meta tag to index the page and to present the summary of the content on the search results page. An inaccurate or irrelevant description can affect the ranking of the page and reduce the number of click-throughs from the search results page to your site. Answer Example: The following URLs do not contain any meta descriptions: (Add the xls as an icon.)

107 7.2.3 Meta Keywords Presence
What to do? This can be found under "Violations" / "Violations Summary". No query is needed. Best Practice: Search engines do not take meta keywords into account; however, your competitors might find it interesting to "steal" your keywords, in which case it is not recommended to have many meta keywords present, or even any at all. Example: The following 186 URLs do not contain any meta keywords content (add the xls as an icon). This is related to the 186 pages containing the 'nofollow' and 'noindex' meta robots instructions covered in the section above, and likely originates from the 404 code served under this URL because the intended content is unavailable.

108 7.2.4 Redundant Meta Tags What to do? Best Practice
Open a new query and execute "Redundant Meta Tags", then export the URLs. Best Practice: All major search engines will ignore meta tags such as rating, distribution, author, designer and publisher. Clients might have their own reasons for including these, but do not expect them to make a difference to the website's rank! Some (such as the 'rating' meta tag) were genuinely proposed as a method for allowing webmasters to set the 'age appropriateness' of web pages. The difficulty is that without the backing of the W3C, it is not standard, and without a set standard we cannot expect search engines to habitually use meta tags like these. There is also an issue of honesty when reporting on oneself.

109 7.2.4 Redundant Meta Tags Example
<meta http-equiv="content-type" content="text/html; charset=iso ">
<meta name="keywords" content="meta tags, rogue meta tags, useless meta tags, dangerous meta tags" />
<meta name="description" content="The Manchester SEO Blog guide to meta tags, rogue meta tags and downright dangerous ones." />
<meta http-equiv="refresh" content="3;URL=
<meta name="robots" content="would you pass the Turing test?" />
<meta name="title" content="Redundant Meta Title" />
<meta name="rating" content="unsuitable for homosapiens" />
<meta name="distribution" content="global" />
<meta name="publisher" content="Rogue Meta Tag Technology Ltd." />
<meta name="author" content="John Doe">
<meta name="designer" content="Jane Doe">
<meta name="copyright" content="Rogue Meta Tag Technology Ltd. All Rights Reserved">
<meta name="abstract" content="A brief overview of some of the more useful, the useless and the downright dangerous meta tags people use on their web pages.">

110 7.3 RSS/Atom Presence What to do?
Open a new query and execute "7-3.RSS - Atom Presence", then export the URLs. Best Practice: A site that has a blog or similar activity with weekly publication should offer RSS feeds. Answer example: An RSS feed is available on the jobsearch.monster.co.uk subdomain.

111 7.4 HTML Markup 7.4.1 Headings Presence
This can be found under "Violations" / "Violations Summary". No query is needed. An <h1> tag is an indicator of what the page content is about. Search engine crawlers use the <h1> tag, which is almost as important as the <title> tag, as a major component for determining page relevancy for given keyword(s). The <h1> tag reinforces the core keyword(s) that are found in the <title> and <meta name="description" /> tags, which improves page relevancy and search engine ranking. An <h1> tag needs to be added to the <body> section of the page. The text in the <h1> tag should include keywords that reflect the page content and must be between 25 and 150 characters long.

112 7.4.1 Headings Presence Answer example:
The following URLs contain multiple H1 tags – most of the site's pages. More comments are provided in section 7.7 Risky Techniques. Having multiple H1 tags is risky because you are "trying too hard" and going outside of the W3C rules, which could get the site penalized by search engines.

113 7.4.2 Inline Code and Embedded Scripts
What to do? This can be found under "Violations" / "Violations Summary". No query is needed. Best Practice: Search engines will ignore script code, but large quantities of script will force the actual text content of the page further down in the HTML. Since search engines may analyze only the first 100 KB of a page, it is possible that the script block may prevent search engines from indexing any page content. Answer example: The following URLs contain large amounts of embedded script: (add xls icon)

114 7.5 Website Assets 7.5.1 Assets Identification What to do?
This information can be found under "Performance" / "By Content Type". No query is needed. Best Practice: See Best Practice under 7.1.2 Content Type Definition.

115 7.5.1 Assets Identification Answer example:
Total assets identified under the sampled crawl are as follows:
Content Type (Normalized) | Count
text/xml | 43
text/html | 86
application/atom+xml | 11
text/plain | 2
image/jpeg | 45
application/x-gzip | 1
application/javascript | 12
image/gif | 33
application/xml |
text/css | 6
image/png | 21
image/x-icon |

116 7.5.2 Images 7.5.2.1 Image Name Review What to do? Best Practice
Do the names of the images include relevant terms related to the image? You can scan the image name list for each format by opening the following queries: GIF image names, JPG image names, PNG image names. Best Practice: One of the tactics often employed in search engine optimization is naming images to include important keywords. Answer example: No images were found under the jobsearch.monster.co.uk subdomain, as images are hosted on the separate media.monster.com domain. This contributes greatly to page load times.

117 7.5.2.2 Image Alt Tag Presence and Optimization
What to do? The result you find is the image tags found on the site; therefore the count can exceed the number of images found on the site. Best Practice: All images should have an ALT tag! The more descriptive, the better, within reason, of course. You don't want to be too generic, but at the same time you don't want your alt tags to contain a Tolstoy novel. For instance, instead of using 'Ford Mustang' as your alt tag, 'This blue 1965 Ford Mustang won best of show' is better.

118 7.5.2.2 Image Alt Tag Presence and Optimization
Answer example: Although the alt attribute is missing from the following 7 IMG tags, which are used extensively throughout the site, this is not considered to be detrimental to SEO. (Image referenced follows tags.) <img src="

119 7.5.2.3 Image Size What to do? Best Practice Answer example:
Visually check the website. Are image sizes well optimized? Best Practice: The purpose of the site should be recognizable within approximately 10 seconds of entering the page. Several reports suggest that Google Image Search prefers images that are on the larger side. While there is no first-hand evidence of this, it is important to remember that SEO isn't truly effective unless users click on your listings. Since it makes sense that someone searching for an image will probably be more inclined to click on a larger image with higher quality than a smaller image with lower quality, using a larger image seems to be a good approach where available and appropriate. Answer example: All images are appropriately sized and do not take up excessive resources when displayed.

120 7.5.2.4 Flash What to do? Best Practice Answer example:
Open a new query and execute "Flash". Review the pages where Flash is found. Best Practice: Google will crawl all the links in your Flash animation, but will they accord them the same value as links in an HTML page? They're not saying. Flash content is fundamentally different from HTML on webpage URLs, and being able to parse links in the Flash code and text snippets does not make Flash search-engine friendly. Answer example: Flash is used on the site in isolated sections and is replaced by plain HTML content when Flash plugins aren't available. This supports SEO efforts.

121 7.5.2.5 PDFs What to do? Best Practice
Open a new query and execute "PDFs". Review and list the pages where PDFs are found. Best Practice: PDF is not bad, but if it is used, best practice is that the same information should also exist in HTML on the website. PDF files are seen in normal search results. They appear to have no "special treatment" and are ranked as any HTML document would be. This leads us to the conclusion that the same ranking factors apply to both PDF files and any other type of indexed web content, and the ranking factors that you take into consideration when you are trying to optimize a page will work for PDFs as well. Link to them with relevant anchors; use bold, italic, H1, H2 and so on.

122 7.5.2.5 PDFs Best Practice Answer example:
Simply treat the PDF as any other URL and not as a "download file". You should also be careful in choosing the name of the file, as it will be as important or even more important than the names of the HTML files in the website's URLs. For example, if this article were a PDF, it could be named How_to_SEO_optimize_PDF_files.pdf. Do not apply any encryption or password protection to your PDFs if you want them to be indexed. Also be sure that text is in fact text in your file and not turned into curves or pictures when you export your PDF file. Make your files small in size: if your PDFs rank well, people have to download/"stream" them, and if your files are too big this will result in a really high bounce rate, as people will not wait a long time for the PDF to load. Answer example: No PDF documents are in use on the jobsearch.monster.co.uk subdomain.

123 7.6 Performance Review What to do? Best Practice Answer example:
Quick-loading web pages are important. Problems can be found under "Performance" / "Slow Pages". No query is needed. Best Practice: Google has included a new signal in its search ranking algorithms: site speed. Site speed reflects how quickly a website responds to web requests. Answer example: One page was found to be excessively slow, taking 9,594 seconds:
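A first-pass response-time measurement can be taken without any special tooling. A minimal Python sketch, assuming the requests package and a placeholder URL; note that elapsed measures time to the response headers, not the full page render:

import requests

resp = requests.get("http://example.com/", timeout=30)  # placeholder
print("Response time:", round(resp.elapsed.total_seconds(), 3), "seconds")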

124 7.7 Risky Techniques What to do? Best Practice Example Answer:
Run queries for content in the form of "content includes X". Hidden content: look for display:none in the files. Too many heading tags: mentioned in the Violations Summary tab. Keyword repeated too many times: found by browsing the site. Best Practice: Clearly, not using any risky techniques is best practice. Example Answer: The only risky technique identified in this audit is the multiple H1 tags used on a significant number of pages. A complete list of these can be found in section 7.4.1. By way of an example, one URL contains the <h1> tag eight times. Coding standards require that the h1 tag is used once only. These implementations can be seen below.

125 Congratulations! You have finished this Technical Audit

