Technology is Life

Praises, rants & worries about technology.


Leveraging ChatGPT for Essential Technical SEO Insights

Navigating the complex world of SEO can be a daunting task, even for the technically inclined among us. But recently, a nifty session with ChatGPT Plus opened up new avenues for me in website analysis, and the journey was nothing short of revelatory.

My understanding was that, since ChatGPT Plus brought back browsing with Bing, I could have it scan a website, analyze it, and give me suggestions. That didn’t work out as intended, but what came out of the conversation was pretty cool!

When I asked it to scan my client’s website, I got back an error saying it couldn’t do it, for reasons I couldn’t quite understand and frankly don’t have the experience to explain in detail… BUT…

The hiccup paved the way for an unexpected solution: ChatGPT rolled up its digital sleeves and dished out Python code (copied and pasted for your pleasure, below) so I could run the analysis myself. It also walked me through running the code from my local Terminal. How convenient! After I got a bunch of results that I didn’t understand… I copied them from my Terminal back into ChatGPT, and it broke them down for me in layman’s terms.

Then I was able to circle back to my friend and client with the technical things on her WordPress website that she could fix, or be mindful of with her web developer, before the website redesign I’ll be proposing to her.

It was pretty amazing, and genuinely easy for me as a technical person, to give basic technical advice on her site without having to actually snoop around the code myself.

Some of the things she was missing, for example, were H1 tags and some metadata, like a site description. How awesome is that?!
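For anyone curious what a check like that boils down to, here is a minimal sketch (the URL is just a placeholder) that looks only for H1 tags and a meta description; the full script ChatGPT gave me follows further down.

import requests
from bs4 import BeautifulSoup

# Placeholder URL; swap in the site you want to spot-check
url = "https://example.com"
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# Look for H1 headings and the meta description tag
h1_tags = [h1.get_text(strip=True) for h1 in soup.find_all('h1')]
meta_description = soup.find('meta', {'name': 'description'})

print("H1 tags found:", h1_tags or "none")
print("Meta description:", meta_description['content'] if meta_description and meta_description.get('content') else "none")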

Of course, let’s not sidestep the elephant in the room: the possibility of AI ‘hallucinations.’ Keeping absolute transparency with the client, I made it clear that if anything flagged had in fact already been handled, the report was simply something to be mindful of for near-future site iterations. Plus, I knew the client wasn’t very tech savvy, so it was helpful for her to learn some basic technical website lingo to ensure she can direct her web developer to cover some basic SEO needs.

It’s a brave new world where technology meets human creativity, and even the most complex tech tasks are becoming accessible to everyone. Isn’t that just awesome?

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Define the URL of the website to scan
url = "url"

# Send a request to the website
response = requests.get(url)

# Check if the request is successful
if response.status_code == 200:
    # Parse the website content
    soup = BeautifulSoup(response.content, 'html.parser')

    # Title and Meta Description
    title = soup.title.text if soup.title else "No Title Found"
    meta_tag = soup.find("meta", {"name": "description"})
    meta_description = meta_tag['content'] if meta_tag else "No Description Found"

    # H1 and H2 Tags
    h1_tags = [h1.text.strip() for h1 in soup.find_all('h1')]
    h2_tags = [h2.text.strip() for h2 in soup.find_all('h2')]

    # Internal and External Links
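    # (Note: links written as full "http(s)://..." URLs are counted as external here,
    #  even if they point to the same domain.)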
    internal_links = set()
    external_links = set()
    for link in soup.find_all('a', href=True):
        href = link['href']
        if href.startswith('/'):
            internal_links.add(urljoin(url, href))
        elif href.startswith('http'):
            external_links.add(href)

    # SEO Analysis: Meta Tags and Images without Alt Tags
    meta_tags = {tag['name']: tag['content'] for tag in soup.find_all('meta', attrs={'name': True})}
    images_with_no_alt = [img['src'] for img in soup.find_all('img') if not img.get('alt')]

    # Page Content Analysis: Word Count
    text = ' '.join(soup.stripped_strings)
    word_count = len(text.split())

    # Performance Analysis: Large Images
    # (Note: standard <img> tags don't carry a 'size' attribute, so this check will
    #  usually find nothing; see the alternative sketch after the script.)
    large_images = [img['src'] for img in soup.find_all('img') if img.get('size') and int(img['size']) > 100000]  # Example size threshold

    # Mobile Responsiveness Check
    viewport = soup.find("meta", {"name": "viewport"})
    mobile_responsive = bool(viewport)

    # Social Media Integration
    social_media = ['facebook', 'twitter', 'linkedin', 'instagram']
    social_links = {site: [] for site in social_media}
    for link in soup.find_all('a', href=True):
        for site in social_media:
            if site in link['href'].lower():
                social_links[site].append(link['href'])

    # Output the extracted information
    website_info = {
        "Title": title,
        "Meta Description": meta_description,
        "H1 Tags": h1_tags,
        "H2 Tags": h2_tags,
        "Internal Links": internal_links,
        "External Links": external_links,
        "Meta Tags": meta_tags,
        "Images without Alt Tags": images_with_no_alt,
        "Word Count": word_count,
        "Large Images": large_images,
        "Mobile Responsive": mobile_responsive,
        "Social Media Links": social_links
    }

    # Print the results
    for key, value in website_info.items():
        print(f"{key}: {value}\n")

else:
    print("Failed to retrieve website information.")
