Google Site: Search Hack

If you’ve been doing SEO for a while, then you’re probably very familiar with the “site:” search on Google.

If you’re not familiar with the command: go to Google (or Bing) and search for “site:domainname.com”, and it will show you the pages from the site that are indexed. It’s a useful command for getting a rough count of how many pages from a site are indexed, and you can combine it with other search operators to further refine your search.
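For instance, here are a few common combinations (the domain and keywords are just placeholders):

```
site:example.com inurl:blog        → indexed pages under /blog URLs
site:example.com intitle:"review"  → indexed pages with "review" in the title
site:example.com filetype:pdf      → indexed PDF files on the site
site:example.com -inurl:www        → indexed pages on non-www subdomains
```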

What many people don’t realize, however, is that the “site:” command also works on other vertical searches, like Image Search, Video Search, and News Search.

For example, searching for “site:domain.com” in Google Images returns all the images from the site that are indexed:

Google Image Search

Video Search reveals all the pages on the site that have videos on them:

Google Video Search

News Search shows all the articles from the site that have been featured in Google News:

Google News Search

As you can see, the “site:” search works in many different contexts, and you can use it not only on your own site, but on your competitors’ sites as well. Some practical uses for this command might be:

  • Searching your own site to make sure content you don’t want indexed isn’t being discovered by Google/Bing
  • Viewing every single image you’ve uploaded to your site, on one page
  • Doing an analysis of the content (images/videos) that competitors are using
  • Content discovery on image sites

Those are just some of the uses for the command I can think of off the top of my head, but there are sure to be more. Hope you found this useful, and leave a comment if you know of any other uses for the “site:” operator.

My First Website

#Throwbackthursday!

Flabio's Place

This is the very first website I created, sometime back in 1998. It was a website about a popular Mac RPG called Realmz, and it was my first foray into HTML and web design. It was originally hosted on Geocities (Area51/Hollow/3695) and moved around over the years until its final resting place on Tripod, where it lives to this day.

Looking back, I wish I knew then what I know now about SEO and online marketing. The opportunities back then were huge, and I could have easily made a lot more money than I did designing websites for friends & family for a couple hundred dollars.

At the same time, I’m glad that I got started learning HTML and web programming as early as I did, which eventually led to learning about SEO and internet marketing and getting me to where I am today.

It’s been a long and crazy adventure, and I look forward to taking advantage of the new opportunities as the web continues to change and evolve.

Problems With Bing Site Verification

So the other day, I was trying to access some data in Bing Webmaster Tools for one of the sites that I manage, and I noticed that all of the sites had become unverified. I tried to re-verify the sites, but it kept timing out with the following message:

“Timed Out
If you were adding a site or sitemap, we timed out trying to talk to your site. Please make sure the web page is up and visible on the web. If not, this may be a momentary issue; try again, or check back later. ”

What was puzzling was that both the sitemap and the verification meta tag were still present on our site, and the site was obviously up and running. I double-checked to see if anything was blocked by robots.txt, but nope, that wasn’t the problem either. I tried adding another site to BWT and was able to do so successfully, so the problem wasn’t on Bing’s end either.

At this point I reached out to Bing to see if they could fix the problem, but received a response in broken English that didn’t really address the problem.

I sent a message to IT, and it turns out that our firewall was the culprit here, blocking Bing from re-verifying our site. We were able to track down the Bing IP by combing through our log files, and once the Bing IP address was added to our whitelist, I was able to re-verify the sites with no problem.

Bing Webmaster Tools’ IP address is 131.253.38.67, if you’re curious, although I’m not sure if they use multiple IPs. So there you have it: if you’re having verification trouble in Bing, check your firewall.
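If you want to check your own logs, the idea is simple: look for requests from that IP and see what status code your server returned (a 403 would point straight at the firewall). Here’s a minimal sketch; the log entries and format below are hypothetical, so adjust the parsing for your own server.

```python
# Sketch: scan web server access log lines for requests from Bing's
# verification IP. A 403 response would suggest the firewall is the culprit.
BING_VERIFY_IP = "131.253.38.67"

def find_bing_requests(log_lines):
    """Return log lines originating from the Bing verification IP."""
    return [line for line in log_lines if line.startswith(BING_VERIFY_IP)]

# Hypothetical sample entries in combined log format:
sample_log = [
    '131.253.38.67 - - [10/Jan/2013:12:00:01] "GET /BingSiteAuth.xml HTTP/1.1" 403 -',
    '66.249.66.1 - - [10/Jan/2013:12:00:05] "GET / HTTP/1.1" 200 5120',
]

for hit in find_bing_requests(sample_log):
    print(hit)
```

In practice you’d read the lines from your actual access log file instead of a hard-coded list.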

New Current/Past Contributor Feature In Google+

I just noticed that there is now a designator for “current contributor” or “past contributor” in the “Contributor To” section of Google+. Presumably this is for people who might have been contributors to a publication or a blog, but have since moved on to another company.

GooglePlus Current Contributor

My first thought was that this is related to Google AuthorRank, but the more I think about it, the less useful it seems. I guess this could be of some use if you used to work at, say, the New York Times, and want the authorship benefit without pretending that you still work there, but that’s the only use I can think of.

I also don’t see how this would work with guest blogging. For example, I’ve contributed posts to SEOMoz before, but I don’t work there and never have. Does that make me a current or past contributor? Seems somewhat confusing.

Anyway, regardless of its usefulness, there it is.

UPDATE: Looks like Mike Arnesen might have been the first one to notice this change, and he’s got a good write up on his blog.

Google Crawl Stats Exporter

I haven’t seen this mentioned anywhere yet, but it looks as though Google is now providing exact numbers in their Crawl Stats report in Google Webmaster Tools:

Google Crawl Stats

I’m not sure how accurate these numbers are, but since they don’t appear to be rounded, I’m going to assume that they’re fairly accurate.

So far Google has not provided export functionality for this data, but given that they eventually added an export option for the indexation data, I’m guessing it’s only a matter of time before you can export crawl stats from GWT.

If you don’t want to wait for Google, however, you’re in luck. Back when Google first added numbers to their Indexation report, I created a little script that would extract the data from the report, and format it nicely for export into Excel. Today I retooled the script so that it works with the new Crawl Report, and you can find it below.

Instructions

To run the script, go into Health > Crawl Stats in GWT and view the source code for the page. Select all the text using command-A, command-C (control if you’re on a PC), paste it into the form below, click the button, and voila, instantly formatted data! Enjoy!


Google Webmaster Tools Crawl Stats Exporter




Why Responsive Design Sucks

Ever since Google officially recommended responsive design for mobile SEO, I’ve heard SEOs parroting Google and claiming that responsive design is the best approach to take for mobile sites. However, I’ve always argued that responsive design is far from ideal since it doesn’t allow you to optimize load time or user experience for mobile users.

Josh Chan just published a great post over at Six Revisions, making many of the same arguments in great detail. To summarize his points:

  • Responsive design leads to poor mobile performance since you have to download the same content as if you were on a desktop (images, CSS files, javascript, etc). Because you’re trying to serve multiple devices with a single design, the file sizes end up being bigger than if you were just targeting a single device.
  • Responsive design is complex and requires a lot of designer and developer resources. It’s a complex design and programming challenge for which there is no clear workflow, compared to standard web design.
  • Responsive design leads to poor user experience in mobile since you’re just changing how you display content, instead of targeting the unique needs of mobile users. Responsive design can hinder creativity and innovation, and prevent designers from creating a great mobile experience.

Josh makes some really good points, and I completely agree as someone who is currently redesigning one of my sites for mobile. There’s no way I could use a few CSS hacks and shrink down the site for mobile browsers.

If you’re running a simple blog, responsive design can be a quick and easy solution, but if you have a more complex site, dynamic serving is the way to go, since it preserves the URL structure while providing you with the flexibility to customize your design.
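Dynamic serving means returning different HTML at the same URL depending on the requesting device’s User-Agent, and signaling that with a Vary: User-Agent response header so caches and crawlers know the response differs by device. A minimal sketch of the idea (the user-agent tokens and template names here are my own illustrative choices, not a real implementation):

```python
# Simplistic user-agent sniffing -- real sites typically use a maintained
# device-detection library rather than a hand-rolled token list.
MOBILE_TOKENS = ("iphone", "android", "mobile")

def pick_template(user_agent):
    """Choose a template based on the requesting device's User-Agent."""
    ua = user_agent.lower()
    if any(token in ua for token in MOBILE_TOKENS):
        return "mobile.html"
    return "desktop.html"

def build_response_headers():
    # Vary: User-Agent tells caches and crawlers that this URL serves
    # different HTML depending on the device requesting it.
    return {"Content-Type": "text/html", "Vary": "User-Agent"}
```

Because the URL never changes, you keep all the link equity on a single set of URLs while still serving a mobile experience designed from the ground up.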

All Links From Squidoo.com Now Nofollow

Big news today for any SEOs using Squidoo for link building purposes, as Bonnie Diczhazy, the Head of Community at Squidoo, announced that all outbound links from Squidoo will be nofollowed. The change is effective immediately, though they say it may take a few days for the change to roll out due to caching issues.

I just checked my profile and all my lenses, and yep, they’re all nofollowed. Just goes to show that you don’t want to invest too heavily in a third-party platform. I was able to find a hack that allowed me to remove the nofollow attribute from my profile page (e-mail me if you want more info), but we’ll see how long that lasts.

Squidoo says in their announcement that they “eventually hope to start giving our best (and most trusted) lensmasters ‘follow’ privileges on a case-by-case basis”. This could actually be a boon for SEOs, as it raises the bar for those who want a backlink from Squidoo and increases the value of links from the site. As long as the process isn’t too onerous (I’m guessing it will be for Giant Squids), it could be worth it to invest the time to get Squidoo’s recommendation.

For now, I see the change as a big reset, since all the sites out there that have invested heavily in Squidoo have now lost all their links. It’s just a matter of waiting to see what the process is like for getting the nofollow removed.

Speaking At SMX Toronto

Takeshi SMX Toronto

I had a great time speaking at SMX Toronto this past week, where I spoke on the “Social Sites You Shouldn’t Overlook” panel about the benefits of using Tumblr for SEO and social media marketing.

Aside from a small glitch with the slides, the talk went very well, and people seemed to gain some value from it, which I’m glad for. It was also my first time in Toronto (or Canada, for that matter), so I had a great time exploring the snowy landscape and checking out some of the salsa clubs in the area in the evenings (thanks to the dancers of United Salseros for showing me a great time!).

Anyway, I am still recovering from jetlag, but I will eventually post some of the tactics and strategies I shared during the talk. For now, here are the slides from my presentation:

Why Getting Banned From Adsense Was The Best Thing That Happened To Our Business

In January of 2010, things were finally starting to look up for our business. After a year of diligently plugging away at our online business, we were able to grow our revenue to the point of earning a couple hundred dollars a day, not a lot by any means, but enough to convince us that our “4-Hour Workweek” dreams were coming true.

At the time, we had one main niche site that we’d started with, along with a couple dozen exact-match domains that we’d built out, complete with crappy articles written by overseas workers. This was all pre-Panda, and rankings were good (we were ranking #1 for “cheap liposuction” at one point). We monetized all these sites through Adsense, and for the first time since we started, money was coming in, and it actually felt like we were running a business.

Then we got banned from Adsense.

Anyone that’s been banned by Adsense can tell you that the experience is terrible. There is no warning, no explanation, and very little recourse. And when you are banned, you are banned for LIFE. To make things worse, they closed not only our business account, but the personal accounts of me and my business partner as well. And we lost the $400+ we had accumulated in our account that month.

We filed multiple reconsideration requests, pleading with Google, but all were denied with automated responses (to this day, I still don’t know why we were banned, although we suspect it’s because one of our sites was hit by a spammer). We even tried setting up a new account, but that was banned within a month, with us again losing all our earnings.

Needless to say, we despaired. All the hard work we’d built up working long days and nights was gone in an instant.

But then a strange thing happened. Within a few months of losing our Adsense accounts, we were able to turn our business around and earn MORE than what we were earning from Adsense. By the end of 2010, we were able to grow the business to making $500 a month, then $1000 a month in 2011, and $3000 a month in 2012.

What changed? First we started by selling all our crappy Adsense sites on Flippa for $1400, which was a nice windfall.

Then, with all the other sites gone, we were able to focus on our main site. Instead of relying on automated ads provided by Google and receiving pennies per click, we reached out to advertisers directly and negotiated direct advertising deals worth hundreds a month. Instead of just advertising for other people, we began signing up for affiliate programs and selling products for big commissions, eventually creating our own product so we could keep 100% of the profit. We focused on growing our SEO traffic and building an e-mail list so we could create a long-term relationship with our visitors.

So in the end, getting banned by Adsense was the best thing that happened to our business.

If you think about it, Adsense is the middleman of online advertising. Sure, it’s easy to set up and start earning, but Google takes a cut of every click you generate. And the people who are advertising through Adsense are earning even MORE money (otherwise they wouldn’t keep advertising).

The way to maximize your online revenue then is to cut out the middleman and work with advertisers directly, or eliminate advertising entirely by figuring out how the advertisers are making money. That’s where you’ll find the highest revenue potential.