feat: Add SEO sitemap and robots.txt generation

Implement dynamic sitemap.xml and robots.txt for search engine optimization.

## New Files

### src/routes/sitemap.xml/+server.ts
- Generate XML sitemap with all indexable pages
- Include static pages (home, matches list)
- Dynamically fetch and include the 100 most recent matches (the assumed API response shape is sketched after this list)
- Set appropriate priority and changefreq for each URL type
- Cache response for 1 hour to reduce server load
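
The sitemap endpoint consumes `matchesAPI.getMatches()` from `$lib/api/matches`, which is not part of this commit. A minimal sketch of the response shape the route relies on, with field names and types inferred from how the route code uses them (not the actual `$lib/api/matches` definitions):

```typescript
// Hedged sketch: only the fields the sitemap route depends on, inferred from
// its usage in src/routes/sitemap.xml/+server.ts below.
interface SitemapMatch {
	match_id: string; // used to build /match/{match_id} URLs
	date?: string; // ISO 8601 timestamp; the route falls back to "now" when absent
}

interface MatchesResponse {
	matches: SitemapMatch[];
}
```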

### src/routes/robots.txt/+server.ts
- Allow all crawlers to index the site
- Point crawlers to sitemap.xml location
- Set a crawl-delay of 1 second to be polite to the server
- Cache response for 1 day (the rendered response body is shown below)
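
With `SITE_URL` set to `https://cs2.wtf`, the handler returns this body (taken directly from the template in the diff below):

```text
User-agent: *
Allow: /

# Sitemaps
Sitemap: https://cs2.wtf/sitemap.xml

# Crawl-delay
Crawl-delay: 1
```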

## Implementation Details

**Sitemap Structure:**
- Static pages: priority 0.9-1.0, changefreq daily (home) and hourly (matches listing)
- Match pages: priority 0.7, changefreq weekly
- Fetches up to 100 most recent matches from the API
- Uses the match date as the lastmod timestamp for accurate indexing; an example generated entry is shown below
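
For illustration, a single generated match entry looks like this (the match ID and timestamp are made-up example values; the structure comes from the template in the route code below):

```xml
<url>
  <loc>https://cs2.wtf/match/12345</loc>
  <lastmod>2025-11-10T18:30:00.000Z</lastmod>
  <changefreq>weekly</changefreq>
  <priority>0.7</priority>
</url>
```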

**SEO Benefits:**
- Helps search engines discover all match pages efficiently
- Provides crawlers with update frequency hints
- Improves indexing of dynamic content
- Reduces unnecessary crawling with robots.txt directives

The sitemap stays current by fetching recent matches on each uncached request. The
1-hour cache balances freshness with server load.
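
A minimal sketch for sanity-checking the headers locally (assumes the dev server is running on SvelteKit's default port 5173; adjust the base URL as needed):

```typescript
// Hedged sketch: fetch both endpoints and print status plus Cache-Control.
// Expected values come from the route handlers in this commit:
//   /sitemap.xml -> public, max-age=3600
//   /robots.txt  -> public, max-age=86400
const base = 'http://localhost:5173'; // assumption: local dev server

for (const path of ['/sitemap.xml', '/robots.txt']) {
	const res = await fetch(`${base}${path}`);
	console.log(path, res.status, res.headers.get('cache-control'));
}
```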

Note: Player profile pages are not included in the sitemap because there is no bulk
listing API endpoint yet. Individual player pages will still be indexed via internal
links; a sketch of how they could be added later follows below.
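
If a bulk player listing endpoint is added later, the same pattern would extend naturally. A hedged sketch only: `playersAPI.getPlayers`, its response fields, and the `/player/{steam_id}` URL pattern are all hypothetical, not existing APIs or routes:

```typescript
// Hypothetical: assumes a future playersAPI with a bulk listing method.
// Mirrors the matchUrls construction in the sitemap route.
let playerUrls: { url: string; lastmod: string }[] = [];
try {
	const playersResponse = await playersAPI.getPlayers({ limit: 100 });
	playerUrls = playersResponse.players.map((player) => ({
		url: `/player/${player.steam_id}`, // assumed URL pattern
		lastmod: player.last_seen || new Date().toISOString()
	}));
} catch (error) {
	console.error('Failed to fetch players for sitemap:', error);
}
```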

This completes Phase 2 Feature 3: the site is now properly configured for SEO.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
commit d4d7015df6 (parent b1284bad71)
Date: 2025-11-12 20:10:07 +01:00
2 changed files with 90 additions and 0 deletions

src/routes/robots.txt/+server.ts

@@ -0,0 +1,25 @@
import type { RequestHandler } from './$types';

const SITE_URL = 'https://cs2.wtf'; // Update with actual production URL

/**
 * Generate robots.txt for search engine crawlers
 */
export const GET: RequestHandler = async () => {
	const robots = `User-agent: *
Allow: /

# Sitemaps
Sitemap: ${SITE_URL}/sitemap.xml

# Crawl-delay
Crawl-delay: 1
`;

	return new Response(robots, {
		headers: {
			'Content-Type': 'text/plain',
			'Cache-Control': 'public, max-age=86400' // Cache for 1 day
		}
	});
};

src/routes/sitemap.xml/+server.ts

@@ -0,0 +1,65 @@
import type { RequestHandler } from './$types';
import { matchesAPI } from '$lib/api/matches';

const SITE_URL = 'https://cs2.wtf'; // Update with actual production URL

/**
 * Generate XML sitemap for SEO
 * Includes static pages and dynamic match pages
 */
export const GET: RequestHandler = async () => {
	try {
		// Static pages
		const staticPages = [
			{ url: '', priority: 1.0, changefreq: 'daily' }, // Home
			{ url: '/matches', priority: 0.9, changefreq: 'hourly' } // Matches listing
		];

		// Fetch recent matches for dynamic URLs
		let matchUrls: { url: string; lastmod: string }[] = [];
		try {
			const matchesResponse = await matchesAPI.getMatches({ limit: 100 });
			matchUrls = matchesResponse.matches.map((match) => ({
				url: `/match/${match.match_id}`,
				lastmod: match.date || new Date().toISOString()
			}));
		} catch (error) {
			console.error('Failed to fetch matches for sitemap:', error);
		}

		// Build XML sitemap
		const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${staticPages
	.map(
		(page) => `  <url>
    <loc>${SITE_URL}${page.url}</loc>
    <lastmod>${new Date().toISOString()}</lastmod>
    <changefreq>${page.changefreq}</changefreq>
    <priority>${page.priority}</priority>
  </url>`
	)
	.join('\n')}
${matchUrls
	.map(
		(match) => `  <url>
    <loc>${SITE_URL}${match.url}</loc>
    <lastmod>${match.lastmod}</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.7</priority>
  </url>`
	)
	.join('\n')}
</urlset>`.trim();

		return new Response(sitemap, {
			headers: {
				'Content-Type': 'application/xml',
				'Cache-Control': 'public, max-age=3600' // Cache for 1 hour
			}
		});
	} catch (error) {
		console.error('Error generating sitemap:', error);
		return new Response('Error generating sitemap', { status: 500 });
	}
};