URL Structure

URL structure is a low-weight ranking signal but a high-weight usability signal. Clean, descriptive URLs reinforce topical relevance, improve click-through rate, and make a site significantly easier to maintain. Bad URL structure is one of the most common sources of SEO migration disasters.

What good URL structure looks like

https://seohandbook.co.uk/on-page-seo/title-tags/

The pattern:

  • HTTPS protocol
  • Short, lowercase domain
  • Clear category in the path (/on-page-seo/)
  • Specific page slug at the end (/title-tags/)
  • No file extensions, no query parameters for canonical pages, no session IDs
  • Trailing slash applied consistently across the site

Conventions that consistently perform

Use lowercase. URL paths are case-sensitive by specification, so /Page and /page are distinct URLs; if a server happens to serve both, the result is duplicate content. Lowercase everything, everywhere.

Hyphens, not underscores. Google treats hyphens as word separators and underscores as part of a single token. core-web-vitals is parsed as three words; core_web_vitals is parsed as one.

Keep slugs short. A slug should be the shortest accurate description of the page content. Stop words (“a”, “the”, “and”, “of”) can usually be omitted without loss of meaning. core-web-vitals is better than what-are-the-core-web-vitals-and-why-do-they-matter.
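The three conventions above (lowercase, hyphens, stop-word removal) can be sketched as a small slug generator. This is an illustrative helper, not a library API; the stop-word list is deliberately minimal and would need tuning for a real site.

```python
import re

# Illustrative, non-exhaustive stop-word list; tune per site.
STOP_WORDS = {"a", "an", "the", "and", "or", "of", "in", "to", "for", "is", "are"}

def slugify(title: str) -> str:
    """Turn a page title into a short, lowercase, hyphen-separated slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())  # lowercase, split on anything non-alphanumeric
    kept = [w for w in words if w not in STOP_WORDS]
    return "-".join(kept or words)  # fall back if every word was a stop word

# slugify("A Guide to the Core Web Vitals") -> "guide-core-web-vitals"
```

Note that the regex also splits on underscores, so `core_web_vitals` input still comes out hyphenated.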

Reflect site structure in the path. A URL like /technical-seo/core-web-vitals/ tells the user (and search engines) that the page is part of the technical SEO category. Flat URL structures (/core-web-vitals/) work for small sites but lose informational value as the site grows.

Match URL to the primary keyword, naturally. The slug should contain the target keyword if it can be done without forcing. Don’t keyword-stuff URLs; one natural keyword inclusion is enough.

Be consistent with trailing slashes. Either always use them or never. Astro defaults to trailingSlash: 'ignore' (serving both forms); Next.js defaults to no trailing slash. Pick one, configure the framework accordingly, and add 301 redirects from the alternate form.
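A redirect layer enforcing the chosen policy boils down to a normalisation like the following sketch, where policy is assumed to be the string 'always' or 'never' (hypothetical names, not any framework's API):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_trailing_slash(url: str, policy: str = "always") -> str:
    """Apply a sitewide trailing-slash policy to a URL's path.
    Query string and fragment are preserved; the root path stays '/'."""
    parts = urlsplit(url)
    path = parts.path or "/"
    if policy == "always" and not path.endswith("/"):
        path += "/"
    elif policy == "never" and path != "/" and path.endswith("/"):
        path = path.rstrip("/")
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

# normalize_trailing_slash("https://example.com/blog", "always")
#   -> "https://example.com/blog/"
```

Whichever form the policy rejects should 301 to the normalised form, not serve a duplicate 200.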

What to avoid

  • Dynamic IDs (/article?id=4837): no semantic meaning; weaker for ranking and CTR
  • Date-based URLs for evergreen content: URLs imply staleness; rewrites are awkward
  • File extensions (.html, .php): couples URL to implementation; fragile across migrations
  • Mixed case: server-side duplicate content risk
  • Excessive depth (/category/subcategory/sub-subcategory/page/): diluted authority; user-hostile
  • Stop-word-heavy slugs: longer URLs without added meaning
  • Tracking parameters in canonical URLs: indexing fragmentation
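Several of these anti-patterns are mechanically detectable, which makes them good candidates for a pre-launch check. The following is a rough linter sketch (the thresholds and extension list are assumptions, not standards):

```python
import re
from urllib.parse import urlsplit

def lint_url(url: str, max_depth: int = 3) -> list[str]:
    """Flag avoidable URL patterns; returns human-readable warnings."""
    warnings = []
    parts = urlsplit(url)
    path = parts.path
    if re.search(r"\.(html?|php|aspx?)$", path):
        warnings.append("file extension in path")
    if path != path.lower():
        warnings.append("mixed case")
    if "_" in path:
        warnings.append("underscores instead of hyphens")
    segments = [s for s in path.split("/") if s]
    if len(segments) > max_depth:
        warnings.append(f"depth {len(segments)} exceeds {max_depth}")
    if re.search(r"(?:^|&)(id|p)=\d+", parts.query):
        warnings.append("opaque numeric ID parameter")
    return warnings

# lint_url("https://example.com/Blog/Core_Web_Vitals.html")
#   -> flags extension, mixed case, and underscores
```

Running a check like this over a crawl export catches most of the table above before launch rather than after.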

URL structure and migrations

The single biggest risk in any site rebuild is URL changes that aren’t properly redirected. The basic discipline:

  1. Inventory existing URLs. Crawl the live site (Screaming Frog, Sitebulb) and export the full URL list with response codes.
  2. Map old URLs to new URLs. Every URL that changes needs an explicit 301 redirect to the new equivalent. Bulk pattern-based redirects (regex) are fine for predictable transformations; one-to-one mapping is necessary for irregular changes.
  3. Test the redirect map. Spot-check critical URLs (homepage, top-traffic pages, top-converting pages) and verify each returns a 301 (not 302) to the correct destination.
  4. Monitor post-launch. Watch Search Console for crawl errors and indexing drops in the weeks after launch. Some loss is normal; sustained loss indicates redirect or canonicalisation problems.
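Step 2's mapping can be sanity-checked offline before any redirects go live. A minimal validator, assuming the map is a plain old-to-new dictionary (an illustration of the discipline, not a real tool):

```python
def validate_redirect_map(redirects: dict[str, str], changed_urls: set[str]) -> list[str]:
    """Pre-launch checks on a 301 redirect map.
    Flags chains (a target that is itself redirected), self-loops,
    and changed URLs that have no mapping at all."""
    problems = []
    for old, new in redirects.items():
        if new == old:
            problems.append(f"loop: {old} redirects to itself")
        elif new in redirects:
            problems.append(f"chain: {old} -> {new} -> {redirects[new]}")
    for url in changed_urls:
        if url not in redirects:
            problems.append(f"unmapped: {url} has no redirect")
    return problems
```

Chains are worth flattening (old URL straight to final destination) because each extra hop wastes crawl budget and adds latency; loops and unmapped URLs are launch blockers.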

Migrations done badly cost months of organic traffic. Migrations done well cost a week of recovery and then resume normal trajectory.

URL parameters

Parameters are fine for filtering and sorting interfaces, provided the canonical version of each page is set explicitly. The patterns:

  • Allow parameters when each parameter combination represents a genuinely distinct page (paginated archives, faceted filters with unique inventory).
  • Set canonicals to the parameter-stripped or canonical-parameter version when multiple URL variants serve substantially the same content.
  • Don’t plan around Google Search Console’s URL Parameters tool; Google retired it in 2022. Canonical tags are the reliable mechanism. robots.txt Disallow can reduce crawl waste on parameter URLs, but blocked URLs can still be indexed from inbound links, so it is not a substitute for canonicalisation.
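Computing the parameter-stripped canonical form is straightforward. A sketch that drops common tracking parameters and the fragment while keeping content-changing parameters (the tracking list is illustrative, not complete):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that never change page content (illustrative list).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "gclid", "fbclid"}

def canonical_url(url: str) -> str:
    """Strip tracking parameters and the fragment; keep content parameters."""
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
              if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(params), ""))

# canonical_url("https://example.com/shoes?size=9&utm_source=x#top")
#   -> "https://example.com/shoes?size=9"
```

The output is what belongs in the page's rel="canonical" tag; the size filter survives because it genuinely changes the page, while the tracking noise does not.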

Subdomains vs subdirectories

This is a long-running debate, but the practical answer for nearly all use cases is: use subdirectories. A blog at example.com/blog/ accumulates authority that supports the rest of the site; a blog at blog.example.com is treated as a separate property and accumulates authority independently.

Use subdomains only when there is a genuine technical or organisational reason: a multi-tenant SaaS, a regional site that must be separately hosted, a logically distinct property that doesn’t share user audience.

Frequently asked questions

Does URL length affect rankings? Marginally. Shorter URLs tend to perform slightly better, but the effect is small and almost always confounded by other factors. Optimise for clarity first; length follows.

Should I include the year in URLs? For evergreen content, no. The URL becomes awkward when the content is updated. For news and dated content where the date is part of the topic, including the year can be appropriate.

Are non-English characters in URLs OK? Yes, Google handles them, but they’re harder to share and copy. For international sites, consider whether transliterated ASCII slugs are more practical than native-character slugs in the URL. Hreflang and language targeting work either way.