Danny Ng – https://dannyism.com – I Write, You Read

Running Supervisor with Laravel Workers on Heroku
https://dannyism.com/running-supervisor-with-laravel-workers-on-heroku/
Mon, 07 Aug 2017 21:08:15 +0000

It’s been almost 4 years since my last post – yikes! I guess it’s time to dust off the cobwebs and share something that took me many hours to figure out: how to run Supervisor with Laravel on Heroku.

Assumptions

Here are some assumptions I’ve made about what you already know before reading this article:

The Problem

I’m running my application on Laravel 5.4 and using Amazon SQS as my queue driver. I run an hourly process which pushes a whole bunch of jobs to the queue.

The problem I was experiencing was that too many jobs were being pushed to the queue, and my single worker dyno was only processing 1 message at a time. By the time the next cycle came around, more jobs were being pushed and my worker dyno was falling further behind trying to catch up.

I was looking for a way to process more jobs in the queue concurrently and it seemed possible after reading up on Laravel’s documentation on Supervisor. Supervisor is a great way to manage your queue workers, specifying the number of instances to run and also restarting your queue worker if it fails.

In my case, I wanted more workers processing my queue so the jobs would finish quicker, but the only way to do so on Heroku was either to scale horizontally or to create more processes in my Procfile (free and hobby dynos have a limit of 1 dyno per process type, so you can’t scale).

This seemed like a waste of resources since my dynos had a limit of 512MB RAM and I was consistently using less than 64MB. I figured that by increasing the number of workers running concurrently, I could utilize more of that RAM before needing to scale out to more dynos.

The Solution

So after hours and days spent Googling for a solution, it was very surprising that there is no documentation out there on how to run Supervisor on Heroku. After lots of tinkering, I think I’ve figured out how to do it.

Heroku Buildpacks

Since I’m running Laravel, my buildpack is heroku/php. However, you will also need the heroku/python buildpack to run Supervisor, since it is written in Python.

To add an additional buildpack using Heroku’s CLI, run:

heroku buildpacks:add --index 1 heroku/python

This will set the heroku/python buildpack in position 1 and the heroku/php buildpack in position 2. The order of this is important. You can also run heroku buildpacks to verify this.

Python Dependencies & Version

Create a requirements.txt file in your app’s root directory and add supervisor. This will install Supervisor when your app is being built after you push a commit to Heroku.
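For clarity, the entire requirements.txt can be a single line; pinning a version (the one that appears in the build log further down) is optional but makes builds reproducible:

```
supervisor==3.3.3
```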

According to Heroku’s documentation, newly created Python applications run version 3.6.2, but Supervisor requires Python 2.4 or later and does not work with Python 3. This means you will need to change your runtime to version 2.7.13.

Create a runtime.txt file in your app’s root directory and add python-2.7.13. When you push to production later, you should see the following during the build to verify Supervisor is being installed with the correct version of Python.

remote: -----> Python app detected
remote: -----> Installing python-2.7.13
remote: -----> Installing pip
remote: -----> Installing requirements with pip
remote:        Collecting supervisor (from -r /tmp/build_a86d7c5833de0290e4d9eae6bb15574d/requirements.txt (line 1))
remote:          Downloading supervisor-3.3.3.tar.gz (418kB)
remote:        Collecting meld3>=0.6.5 (from supervisor->-r /tmp/build_a86d7c5833de0290e4d9eae6bb15574d/requirements.txt (line 1))
remote:          Downloading meld3-1.0.2-py2.py3-none-any.whl
remote:        Installing collected packages: meld3, supervisor
remote:          Running setup.py install for supervisor: started
remote:            Running setup.py install for supervisor: finished with status 'done'
remote:        Successfully installed meld3-1.0.2 supervisor-3.3.3

Supervisor Configuration for Laravel

You can follow the instructions in Laravel’s documentation to create a config file for your worker. This is the configuration for my app specifically:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /app/artisan queue:work --queue=queue_name --tries=3 --memory=64 --sleep=3
autostart=true
autorestart=true
numprocs=8
redirect_stderr=true
stdout_logfile=/app/worker.log

I’m running my queue worker with a limit of 64MB RAM and running 8 instances to maximize my 512MB RAM allocation (64MB x 8 = 512MB).

Make sure your path to artisan is correctly set. Assume I’ve saved this to laravel-worker.conf.

Supervisor Configuration

After adding the supervisor config file, python buildpack, dependencies and runtime version, commit the files and deploy to Heroku. Once deployed, run heroku run bash to gain access to your dyno.

Once inside your dyno’s terminal, run echo_supervisord_conf, which will print out a sample configuration. Copy and paste this into a file and save it locally (let’s assume supervisor.conf). Heroku uses an ephemeral filesystem, which means no files created during runtime survive once the dyno stops or restarts (which is why you need to save the file locally first and commit it later).

In the [include] section, uncomment and add,

[include]
files = laravel-worker.conf ; update appropriately to where your Laravel config file is

This will tell Supervisor to include other configuration files within the configuration. Make sure to assign the path of your config file relative to where the supervisor.conf file is.

In the [supervisorctl] section, the serverurl will be set to something like unix:///tmp/supervisor.sock. You will need to update this as this socket is only accessible to root. Change it to,

[supervisorctl]
serverurl=http://127.0.0.1:9001

Otherwise, you’ll run into this error,

error: <class 'socket.error'>, [Errno 111] Connection refused: file: /app/.heroku/python/lib/python2.7/socket.py line: 575
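Note that for an HTTP serverurl to work, supervisord also has to be listening on that address, so the [inet_http_server] section of the sample config needs to be uncommented as well:

```
[inet_http_server]
port=127.0.0.1:9001
```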

Putting It All Together

Lastly, we need to tell Heroku to run this as a process by updating the Procfile. Add the following line:

supervisor: supervisord -c supervisor.conf -n # update config path relative to Procfile

Make sure to include the -n flag to run supervisord in the foreground, otherwise it will crash (supervisord daemonizes by default, and Heroku expects dyno processes to stay in the foreground). Commit all changes and push to Heroku again.
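For reference, the complete Procfile for a Laravel app might then look like this (the web line is whatever your app already uses; this one assumes Heroku’s standard PHP/Apache setup):

```
web: vendor/bin/heroku-php-apache2 public/
supervisor: supervisord -c supervisor.conf -n
```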

Checking the logs (run heroku logs --tail) you should see Supervisor initialized, your Laravel config file parsed and your workers being spawned.

INFO Included extra file "/app/supervisor/laravel-worker.conf" during parsing
INFO RPC interface 'supervisor' initialized
CRIT Server 'unix_http_server' running without any HTTP authentication checking
INFO supervisord started with pid 17
INFO spawned: 'laravel-worker_00' with pid 20
INFO spawned: 'laravel-worker_01' with pid 21
INFO spawned: 'laravel-worker_02' with pid 22
INFO spawned: 'laravel-worker_03' with pid 23
INFO spawned: 'laravel-worker_04' with pid 24
INFO spawned: 'laravel-worker_05' with pid 25
INFO spawned: 'laravel-worker_06' with pid 26
INFO spawned: 'laravel-worker_07' with pid 27
INFO success: laravel-worker_00 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: laravel-worker_01 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: laravel-worker_02 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: laravel-worker_03 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: laravel-worker_04 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: laravel-worker_05 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: laravel-worker_06 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: laravel-worker_07 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

I can also see my 8 spawned workers processing my SQS queue 8 messages at a time.

Finally, I can also see my dyno’s RAM being utilized more now.

Now I have 8 queue workers working against my queue using a single dyno, rather than having to use 8 separate dynos which would’ve increased my cost significantly.

Known Issues

When running supervisorctl, I’m still running into socket errors as I’m not sure how to bypass the permissions issue on Heroku.

Usually supervisorctl is needed to run reread and update, but since Supervisor starts from scratch every time the dyno boots, it isn’t necessary here: supervisord -c <config file> includes the extra config file during start up.

Amazon now live in Australia
https://dannyism.com/amazon-now-live-in-australia/
Wed, 13 Nov 2013 10:41:30 +0000

[Image: Amazon Australia]

The long-awaited e-commerce giant has finally arrived on the shores of Australia – www.amazon.com.au – just in time for the peak season of online Christmas shopping.

What’s available so far are Kindle hardware devices and e-books. The product selection is very limited at the moment, but I would imagine this will change over time, especially throughout 2014.

The Kindle e-readers (such as the Paperwhite), it seems, are only sold through Amazon’s partnered local retailers such as Dick Smith and Big W. It appears Amazon AU isn’t able to handle shipping physical products yet, as you will still need to shop via the US site, which explains why they’ve launched with a strong focus on digital goods.

With the current launch, local booksellers will face strong competition from a company that found its roots in books.

It’s reported that Amazon will not be geo-blocking Australian consumers from visiting and transacting on the US site, so it’d be interesting to see how the prices of Australian goods stack up against US goods, inclusive of all shipping costs.

IAB Awards 2011: First Rate wins SEO category
https://dannyism.com/iab-awards-2011-first-rate-wins-seo-category/
Wed, 03 Aug 2011 00:19:57 +0000

Just wanted to share with you all that the company that I work for, First Rate, has won the IAB award for search marketing – organic search (SEO). We were competing against Mediacom and Outrider in this category.

You can see the full list of winners at the IAB Australia Awards website.

[Image: IAB Awards 2011 – First Rate]

Special thanks to our team here at First Rate for their hard work and enthusiasm, and thanks also to Focus Property for allowing us to submit our work with them!

[Image: First Rate team]

You can read our press release on our company blog.

4 Important Reasons for SEO 301 Redirection
https://dannyism.com/4-important-reasons-for-seo-301-redirection/
Tue, 14 Dec 2010 09:43:48 +0000

When it comes to making URL structural changes to your website, it is very important to ensure you 301 redirect your old URLs to the new URLs. Common cases of doing this are migrating between pages on a site or migrating between sites.

Doing 301 redirects for migrations has SEO and usability impact, and if not done correctly, may cost your site valuable organic traffic and rankings.

Here are 4 reasons why you must 301 redirect your old URLs:

  1. Search engines (such as Google) have most likely crawled and indexed your site on the SERPs. If a user then queries and finds your site organically, it would be a poor user experience if the link they clicked led to a 404 page.
  2. Search engines may recrawl your site via the old URLs, and if they stumble upon a 404 page, they will most likely drop you out of the SERPs if they can’t see the association to the new URL. This also comes down to user experience, as search engines place high importance on ensuring users find what they’re looking for.
  3. You will lose link juice from external sites, as that trust and authority doesn’t flow from the old URL to the new URL. Loss of link juice means your site will lose authority and trust: 2 important factors in SEO.
  4. If you did not update your internal links to point to the new URLs, you will also have a lot of broken links, which will negatively affect your internal PageRank flow.

You’d be surprised but I have seen websites lose 50%+ organic traffic due to this oversight. Imagine if you ran a multi-million dollar e-commerce site. What are the implications of not doing this?

What is a 301 Redirect?

301 is an HTTP status code returned by a web server when a request is made to a page that is no longer there but has been mapped to another page.

The example below shows how a typical request from your web browser looks and how the web server responds.
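Since the original diagram hasn’t survived, here is a sketch of the exchange with hypothetical URLs: the browser requests the old page, receives a 301 pointing at the new location, and then requests that new URL:

```
GET /old-page HTTP/1.1
Host: www.example.com

HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/new-page

GET /new-page HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
```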

How to implement 301 Redirects?

You should always use server-side redirects where possible instead of client-side redirects (i.e. JavaScript, meta refresh), as this is the strongest directive to search engines.

There are many ways to do this, so here’s a good 301 redirection guide produced by webconfs.com.
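As one common example (assuming an Apache server and hypothetical paths), a 301 can be set up in .htaccess like this:

```
# Redirect a single moved page
Redirect 301 /old-page/ http://www.example.com/new-page/

# Or rewrite a whole old section to a new one
RewriteEngine On
RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]
```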

How to check your 301 Redirects?

Once you’ve implemented your 301 redirects, make sure you do the following:

  1. Use an HTTP header scanner to ensure the server response code you’re getting for your old URLs is 301. I personally use a Firefox plugin called Live HTTP Headers.
  2. Check Google Webmaster Tools for 404 errors.
  3. Run Xenu on your site to find internal broken links.

If you haven’t noticed, I recently updated the structure of my URL permalinks to remove the year and month components in the URL. I will share in another post on how to do exactly what I’ve talked about here for WordPress.

Track SEO Organic Rankings with Google Analytics
https://dannyism.com/track-seo-organic-rankings-with-google-analytics/
Tue, 07 Dec 2010 12:00:21 +0000

In April 2009, Google announced that they were making some changes to how the referral URL would look like on their search engines.

One of the key pieces of information provided here is the listing’s organic ranking (the cd parameter). This can be found in the referral URL (or document.referrer when referring to the DOM).

It’s been more than 1.5 years since the announcement, so I figured the gradual roll-out would be almost complete (I still see instances of the old referral URL being used, though), and I decided to implement a filter for Google Analytics that pulls in the organic ranking data and shows it in the keyword reports.

Before we get into it, there’s something important to know about the cd parameter. Traditionally in SEO, we’ve always known the SERPs to contain 10 organic listings (as shown below).

[Image: Google traditional SERP]

However, since the inception of universal search, Google has continuously added a myriad of listing types (in addition to the traditional ones) to the SERPs such as images, videos, news and places.

As a result, this has changed the way we look at organic rankings, and it affects how Google reports them through the cd parameter. Below is an illustration of how the cd parameter reports the organic rankings on the SERPs.

[Image: Google SERP with universal search]

This means that an organic ranking of > 10 does not necessarily mean the listing is not on the first page, so be aware of this.

Now, onto the Google Analytics filter to implement this.

Implementation of this feature requires 2 advanced filters that need to be in a specific order.

Filter 1: This filter will extract the ranking data from the cd parameter and store it temporarily into custom field 1.

[Image: GA Filter 1]

Filter 2: This filter will then extract the data from custom field 1 and rewrite the campaign’s keyword filter field by appending the organic ranking data to the pre-existing campaign keyword data.

Note: This will overwrite your keyword filter field. If you wish to preserve the original format, I would then suggest you implement these filters in a new profile.

[Image: GA Filter 2]

Filter 1 must be above filter 2 in the filter manager for this to work. Otherwise, custom field 1 will have no data for filter 2, since it is only assigned a value by filter 1.
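Since the screenshots haven’t survived in this archive, here is a rough sketch of the two filters’ key fields, written from memory of the standard setup; treat the exact patterns as assumptions and verify them against your own referral data:

```
Filter 1 (Custom filter > Advanced)
  Field A -> Extract A:   Referral         cd=([0-9]+)
  Output To -> Constructor:  Custom Field 1   $A1

Filter 2 (Custom filter > Advanced)
  Field A -> Extract A:   Custom Field 1   (.*)
  Field B -> Extract B:   Campaign Term    (.*)
  Output To -> Constructor:  Campaign Term    $B1 ($A1)
```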

This is how it looks in your keyword reports. You can see that each organic keyword that drove traffic to your site now has organic ranking data next to it.

[Image: GA keyword report]

It is important to understand that when you see an organic ranking > 10 in your keyword reports, it does not necessarily mean that it is not on the first page. The best way to check this is to do a manual search and see whether your listing is outside of the first page or is on the first page but with many other listing types.

Or you could look into a software solution to automate this.

How is this helpful?

With this data right at your fingertips, you can now:

  • Analyse the keywords that drive conversions to your site and see how well they rank on the SERPs.
  • Analyse over time how different positions on the SERPs affect your traffic/conversions. It is pretty normal for organic rankings to fluctuate slightly. So, assuming you’re normally #X for certain keywords, analyse how a drop or rise in organic rankings affects your traffic/conversions.
  • Analyse your portfolio of keywords over time. How many keywords are on the first page? Your aim is to get the percentage of keywords on the first page as close to 100% as possible. Even better, number 1.

So go ahead and build a business case to focus and invest in those keywords by improving their organic rankings knowing that they bring in traffic and conversions.

Happy analysing :)

Read Google Analytics Cookie Script
https://dannyism.com/read-google-analytics-cookie-script/
Thu, 19 Aug 2010 06:02:56 +0000

I’ve taken some time out to write a script that provides a nice API for accessing Google Analytics cookies. If you’ve seen the Google cookies before, they can look pretty cryptic and require you to memorise the syntax of how the cookies are formed, which you don’t necessarily want to do.

I won’t really go into the intricate details of Google cookies, so this post will assume you know what you’re looking for. I may write a post later explaining in more depth how Google cookies work. In the meantime, you can watch this presentation by Google on cookies (it’s pretty good!) or read the documentation to find out more.

So how is this useful? Well it really depends. You may use it to read GA campaign values and integrate it with your CRM system to track where your leads/sales are coming from or write custom scripts that integrate with GA (i.e. custom variables). It’s really up to you!

Anyway, on to the script.

Below you will find the source code of the script I’ve written. To copy it, simply double click on the source code area (which will highlight the code) and simply do a ctrl-c to copy.

/**
 * @author: Danny Ng (https://www.dannyism.com/2010/08/19/read-google-an…-cookie-script/)
 * @modified: 19/08/10
 * @notes: Free to use and distribute without altering this comment. Would appreciate a link back :)
 */
// Strip leading and trailing white-space
String.prototype.trim = function() {
    return this.replace(/^\s*|\s*$/g, '');
}

// Check if string is empty
String.prototype.empty = function() {
    if (this.length == 0)
        return true;
    else if (this.length > 0)
        return /^\s*$/.test(this);
}

// Breaks cookie into an object of keypair cookie values
function crumbleCookie(c) {
    var cookie_array = document.cookie.split(';');
    var keyvaluepair = {};
    for (var cookie = 0; cookie < cookie_array.length; cookie++) {
        var key = cookie_array[cookie].substring(0, cookie_array[cookie].indexOf('=')).trim();
        var value = cookie_array[cookie].substring(cookie_array[cookie].indexOf('=') + 1, cookie_array[cookie].length).trim();
        keyvaluepair[key] = value;
    }

    if (c)
        return keyvaluepair[c] ? keyvaluepair[c] : null;

    return keyvaluepair;
}

/**
 * For GA cookie explanation, see http://services.google.com/analytics/breeze/en/ga_cookies/index.html.
 *
 * @return -
 *
 * @pre-condition - pageTracker initialised properly
 * @post-condition - provides 'get' methods to access specific values in the Google Analytics cookies
 */
function gaCookies() {
    // Cookie syntax: domain-hash.unique-id.ftime.ltime.stime.session-counter
    var utma = function() {
        var utma_array;

        if (crumbleCookie('__utma'))
            utma_array = crumbleCookie('__utma').split('.');
        else
            return null;

        var domainhash = utma_array[0];
        var uniqueid = utma_array[1];
        var ftime = utma_array[2];
        var ltime = utma_array[3];
        var stime = utma_array[4];
        var sessions = utma_array[5];

        return {
            'cookie': utma_array,
            'domainhash': domainhash,
            'uniqueid': uniqueid,
            'ftime': ftime,
            'ltime': ltime,
            'stime': stime,
            'sessions': sessions
        };
    };

    // Cookie syntax: domain-hash.gif-requests.10.stime
    var utmb = function() {
        var utmb_array;

        if (crumbleCookie('__utmb'))
            utmb_array = crumbleCookie('__utmb').split('.');
        else
            return null;
        var gifrequest = utmb_array[1];

        return {
            'cookie': utmb_array,
            'gifrequest': gifrequest
        };
    };

    // Cookie syntax: domain-hash.value
    var utmv = function() {
        var utmv_array;

        if (crumbleCookie('__utmv'))
            utmv_array = crumbleCookie('__utmv').split('.');
        else
            return null;

        var value = utmv_array[1];

        return {
            'cookie': utmv_array,
            'value': value
        };
    };

    // Cookie syntax: domain-hash.ftime.?.?.utmcsr=X|utmccn=X|utmcmd=X|utmctr=X
    var utmz = function() {
        var utmz_array, source, medium, name, term, content, gclid;

        if (crumbleCookie('__utmz'))
            utmz_array = crumbleCookie('__utmz').split('.');
        else
            return null;

        var utms = utmz_array[4].split('|');
        for (var i = 0; i < utms.length; i++) {
            var key = utms[i].substring(0, utms[i].indexOf('='));
            var val = decodeURIComponent(utms[i].substring(utms[i].indexOf('=') + 1, utms[i].length));
            val = val.replace(/^\(|\)$/g, ''); // strip () brackets
            switch (key) {
                case 'utmcsr':
                    source = val;
                    break;
                case 'utmcmd':
                    medium = val;
                    break;
                case 'utmccn':
                    name = val;
                    break;
                case 'utmctr':
                    term = val;
                    break;
                case 'utmcct':
                    content = val;
                    break;
                case 'utmgclid':
                    gclid = val;
                    break;
            }
        }

        return {
            'cookie': utmz_array,
            'source': source,
            'medium': medium,
            'name': name,
            'term': term,
            'content': content,
            'gclid': gclid
        };
    };

    // Establish public methods

    // utma cookies
    this.getDomainHash = function() {
        return (utma() && utma().domainhash) ? utma().domainhash : null
    };
    this.getUniqueId = function() {
        return (utma() && utma().uniqueid) ? utma().uniqueid : null
    };

    this.getInitialVisitTime = function() {
        return (utma() && utma().ftime) ? utma().ftime : null
    };
    this.getPreviousVisitTime = function() {
        return (utma() && utma().ltime) ? utma().ltime : null
    };
    this.getCurrentVisitTime = function() {
        return (utma() && utma().stime) ? utma().stime : null
    };
    this.getSessionCounter = function() {
        return (utma() && utma().sessions) ? utma().sessions : null
    };

    // utmb cookies
    this.getGifRequests = function() {
        return (utmb() && utmb().gifrequest) ? utmb().gifrequest : null
    };

    // utmv cookies
    this.getUserDefinedValue = function() {
        return (utmv() && utmv().value) ? decodeURIComponent(utmv().value) : null
    };

    // utmz cookies
    this.getCampaignSource = function() {
        return (utmz() && utmz().source) ? utmz().source : null
    };
    this.getCampaignMedium = function() {
        return (utmz() && utmz().medium) ? utmz().medium : null
    };
    this.getCampaignName = function() {
        return (utmz() && utmz().name) ? utmz().name : null
    };
    this.getCampaignTerm = function() {
        return (utmz() && utmz().term) ? utmz().term : null
    };
    this.getCampaignContent = function() {
        return (utmz() && utmz().content) ? utmz().content : null
    };
    this.getGclid = function() {
        return (utmz() && utmz().gclid) ? utmz().gclid : null
    };
}

API reference:

  • getDomainHash() – returns the domain hash that GA uses to uniquely identify each host name.
  • getUniqueId() – returns a unique id set by GA.
  • getInitialVisitTime() – returns the timestamp (seconds since 1 January 1970, i.e. Unix time) of your first visit.
  • getPreviousVisitTime() – returns timestamp of your last visit.
  • getCurrentVisitTime() – returns timestamp of your current session.
  • getSessionCounter() – returns the number of sessions you’ve had on the site.
  • getGifRequests() – returns the number of times a GIF request is sent. This is how GA communicates with the Google servers.
  • getCampaignSource() – returns the campaign source.
  • getCampaignMedium() – returns the campaign medium.
  • getCampaignName() – returns the campaign name.
  • getCampaignTerm() – returns the campaign term (or keyword).
  • getCampaignContent() – returns the campaign content (which can be set by using &utm_content when doing custom tagging).
  • getGclid() – returns the gclid value (if you run Adwords and have auto-tagging enabled).

Usage:

Copy and paste the source code into a JavaScript file and upload it to your server. Import the script into your page with <script type="text/javascript" src="PATH YOU UPLOADED TO"></script>.

After that, all you have to do is create an instance of the gaCookies class to start accessing its public methods.

Note: Make sure you create the instance after pageTracker or _gaq (async script) has been initialised, because you can only access the cookies once they have been set by GA.

<script type="text/javascript">
    var gac = new gaCookies();
    alert(gac.getCampaignSource());
</script>
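If you want to sanity-check the parsing logic outside the browser, the campaign segment of a __utmz value can be decoded in isolation; the cookie value below is a made-up example in the syntax the script expects:

```javascript
// Hypothetical __utmz value: domain-hash.ftime.?.?.campaign-segment
var utmz = '123456789.1282197772.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=danny%20ng';

// The campaign segment is the 5th dot-separated field
var fields = utmz.split('.')[4].split('|');
var campaign = {};
for (var i = 0; i < fields.length; i++) {
    var key = fields[i].substring(0, fields[i].indexOf('='));
    var val = decodeURIComponent(fields[i].substring(fields[i].indexOf('=') + 1));
    campaign[key] = val.replace(/^\(|\)$/g, ''); // strip () brackets, as the script does
}

console.log(campaign.utmcsr); // google
console.log(campaign.utmctr); // danny ng
```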

Known Issues:

Currently one of my helper methods, crumbleCookie(), isn’t able to accommodate cookies with the same name, since it uses an associative array to store the cookie names and values. For example, if there are 2 __utma cookies, you will only get access to the latest one (the later one overrides the former).

This can happen with GA when multiple sets of GA cookies are set to the root domain and a sub-domain, or when you use methods like _setDomainName('none') or _setAllowHash(false) to do things like cross-domain tracking.

Google uses an internal hashing method that produces a unique hash identifying the domain each cookie is set to, which is how they distinguish which is which. Unfortunately I can’t be bothered to try and decipher their cryptic JavaScript file so that I can use their hashing method internally, and for the time being I can’t think of an elegant solution to this problem. One of the major limitations of the JavaScript cookie object is that there is no way to actually read the domain a cookie has been set to – annoying!

Well, that’s all from me. If this has been helpful to you, do let me know!

SEO Implications of using CSS Display None/Image Replacement
https://dannyism.com/seo-implications-of-using-css-display-noneimage-replacement/
Mon, 05 Oct 2009 12:45:24 +0000

The post SEO Implications of using CSS Display None/Image Replacement first appeared on Danny Ng.]]>
It is a known problem that search engine crawlers aren’t able to read images, hence they aren’t able to determine what an image is about. However, this can be overcome by using the alt attribute of the img tag to describe the image so that search engines can read what the image is about.

On the other hand, I do believe that in certain situations it may be better to optimise your on-page content using actual text rather than relying on the image alt attribute. You can do this by using CSS to replace text – whether within anchor tags, header tags or simply text in general – with background images. Obviously you don’t want to overdo this (i.e. apply it to every image on the page) lest you trigger Google’s spam alert, and it’s also very time consuming!

So is using CSS to optimise your on-page content illegal in the eyes of Google? Will you be considered to be obfuscating the search engines for SEO purposes, hence getting yourself banned from the SERPs? The answer depends on how you do it, and the question to ask yourself is: are you trying to be dodgy?

The Way To Get Yourself Banned

In Google Webmasters help under hidden text and links, it is pretty clear what the criteria are for getting yourself banned. Although it’s not in the list, I would avoid using text-indent: -9999px to hide your text and use display: none instead.

I would say the reason for this is that the text-indent property, according to the W3C, is meant for text formatting purposes, not visual formatting. The display property, however, is used for visual formatting, which fits the purpose of using it to ‘replace’ text with images, as that is a visual concern.

Google does not algorithmically ban websites from the SERPs for using CSS to hide things; a ban would come from some sort of manual review. That’s why it’s important to ensure you don’t have comments in the source code that reveal an intention of keyword spamming or of displaying optimised keywords only to the search engines.

The simple rule to follow is this: if you find yourself questioning whether what you’re doing is spammy, then it probably is. Make sure you’re using CSS image replacement with the right intention, which is to provide accessibility to users who have CSS disabled and to ensure that search engines are able to read and recognise the important aspects of your page that add value to the user experience.

Using CSS Image Replacement The Right Way

allianz-css

Let’s take a look at the homepage of Allianz, a major insurance provider, and how they’ve used CSS replacement the correct way.

You can see that the homepage’s navigation menu (highlighted in the red box) consists of image menu items that search engines aren’t able to read. Well, they can read them if the image files are optimised with the alt attribute, but I do believe that optimised anchor text carries greater weighting in SEO than image alt attributes. This use of images is obviously more aesthetically appealing to the user than plain anchor text.

Looking at the non-CSS version when you disable CSS, you can see below (highlighted in the red box) that these navigation menu items are actually anchor text. From a text perspective, this is essentially what search engines see when they crawl the site.

allianz-non-css

You can see how Allianz has used CSS to replace this anchor text with images, providing users a rich experience while also optimising for search engines at the same time. There’s nothing spammy about this, as the anchor text correlates with what the images are about, and there are no discrepancies or confusion about what the navigation menu is trying to convey, with or without CSS.

Now let’s look at an example that currently isn’t using CSS replacement but has the potential to do so. When you look at Fallout’s homepage, you will notice that the navigation menus and links consist of images and no text at all. Even the image alt attributes aren’t optimised, which means search engines aren’t able to tell what the images are about, and the internal links aren’t optimised at all!

Looking more specifically at the header sections of the document (highlighted in the red box), you can see there’s potential here to use CSS replacement to optimise using header tags (i.e. h1, h2, h3). Essentially, what they could have done is create a div container as the parent of each header tag, use CSS to display the container’s background image, and use display: none on the header tag. The end result is that with CSS it still looks the same, but without CSS the header sections are optimised for search engines.
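The technique described above can be sketched in a few lines of markup. The class name and image path here are hypothetical placeholders, not taken from the Fallout site:

```html
<!-- Minimal sketch of CSS image replacement; the class name and image
     path are hypothetical. With CSS enabled, visitors see the
     background image; with CSS disabled (and for crawlers), the h2
     text remains readable. -->
<style>
    .section-header {
        background: url(header-new-releases.png) no-repeat;
        width: 300px;
        height: 50px;
    }
    .section-header h2 {
        display: none; /* hide the text when CSS is on */
    }
</style>

<div class="section-header">
    <h2>New Releases</h2>
</div>
```

With CSS on, the div renders the image; with CSS off, only the h2 text shows, so both users and search engines get a meaningful header.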

fallout3

Looking at the current header sections without CSS, you can see these are still images, unrecognisable by search engines.

fallout-unoptimised

However, when you optimise it using CSS image replacement, you can see that the header sections are in text and optimised. This is then readable by search engines and informs them that these sections are important as they are headers.

So go ahead and don’t be afraid to use CSS image replacement if you’re doing it in a positive way that helps describe to search engines what your page and images are about, and doesn’t seem spammy to users when CSS is disabled. Remember, sites are manually reviewed before being deemed keyword spam, so make sure yours passes a human test beforehand.

The post SEO Implications of using CSS Display None/Image Replacement first appeared on Danny Ng.]]>
https://dannyism.com/seo-implications-of-using-css-display-noneimage-replacement/feed/ 5
Geo Targeting / Localisation on Google Bing Yahoo https://dannyism.com/geo-targeting-localisation-on-google-bing-yahoo/ https://dannyism.com/geo-targeting-localisation-on-google-bing-yahoo/#respond Tue, 29 Sep 2009 14:44:48 +0000 http://www.dannytalk.com/?p=650 When launching a website, it is important to identify where your target audience is located and to ensure that you get found within that region. For example, if you were an e-commerce store in Australia where you only shipped goods locally and not internationally, your target customers would be Australians and hence why it is…

The post Geo Targeting / Localisation on Google Bing Yahoo first appeared on Danny Ng.]]>
When launching a website, it is important to identify where your target audience is located and to ensure that you get found within that region. For example, if you were an e-commerce store in Australia that only shipped goods locally and not internationally, your target customers would be Australians, which is why it is important for you to localise your website for the local search engines.

The reason for this is that whenever a user types Google.com or Yahoo.com into their browser, these search engines will redirect them to the localised search engine based on IP detection. This means that if you’re browsing from Australia and you type in Google.com, you will be redirected to Google.com.au, and for Yahoo.com, to au.Yahoo.com. Once you’re at these localised search engines, you have the option of choosing local search results only.

Google AU

Yahoo AU

Bing AU

This means that if you are found in these localised search results, the organic traffic you will be getting is highly qualified. This is a win-win for both search engines and website owners: search engines are serving more relevant, better quality search results, and website owners are receiving relevant localised traffic. It’s all about relevancy!

Now that you know the importance of localisation (or geo-targeting), here are some common factors that search engines look for, plus factors specific to the major search engines (Google, Bing, Yahoo).

Top Level Domain (TLD)

One of the things search engines look out for when it comes to localisation is the top level domain (TLD).

For example, if your website has a .com.au TLD, search engines will evaluate this and determine that the site may be more relevant to the Australian region. By analysing your site’s TLD, they can work out which region your website is most likely relevant to.

You can find out more in the list of country-code TLDs.

Web Server Location

The physical location of your web server is an important factor in search engine localisation. Each web server has an IP address whose location can be looked up. You can test this yourself by pinging your website to retrieve the IP address from the reply, then using a geo IP tool to look up its location.

There are many reasons why someone may choose to find a web host outside of their local region and one of them is cost.

Take my site, for example. Although I’m located in Australia, I’ve decided to go with US hosting because it is simply cheaper. Looking at Google Analytics, it is interesting to note that the majority of my search traffic comes from the US and Australia, even though I have not localised my site in Google Webmaster Tools.

geo-traffic-distribution

However, keep in mind that if your web server is hosted in the same region as your traffic, your page load time will be much faster (e.g. an Australian visitor on an Australian-hosted site). This will definitely improve your site’s user experience.

Content

Yes, this is true. Content is another factor search engines can use, through contextual analysis, to determine whether your site may be more relevant to a certain region. Search engines then build a geographical profile of your page/site for relevancy purposes. This is probably one of the reasons why I rank well for some keywords/phrases on Google AU and thus receive a lot of Australian traffic: I’ve had blog posts relevant to where I live.

Let’s say if you lived in California and you started writing about how great your local coffee shop is or your favourite restaurant down the road, it would make sense for search engines to determine that your page (or site) is more relevant to people living in California than people living in Sydney!

Sub-Domains and Sub-Directories

A lot of sites nowadays also use sub-domains and/or sub-directories to separate out content geographically, targeting visitors from the relevant region. Examples include Yahoo, which uses sub-domains to localise its search engines (i.e. au.yahoo.com), and Microsoft, which uses sub-directories to localise its content (i.e. microsoft.com/australia).

Search engines are now smart enough to figure out abbreviations such as au, uk, us, and ca to determine the relevancy of these pages to their respective regions. Of course you should always support this by having content relevant to the target region as well.

Take for example au.yahoo.com. You will see that the content on that site has news, articles and headlines relevant to Australia which is different to the yahoo.com site, which has content relevant to US visitors.

yahoo-au-content

Google Webmaster Tools

Google has provided a great tool for webmasters to geo-target sites to a certain region. You can geo-target your main domain, sub-domain and sub-directory by adding them to your dashboard. Once you’ve verified your site entries, you can go to Site configuration, then Settings to set the geographic target.

gwt-geotarget

This is handy only if you have a generic TLD such as .com or .org. If you’ve set your site to target users in the UK, it’s pretty much telling Google to treat your .com or .org site as a .co.uk or .org.uk one. If your site is already using a country-code TLD, then this isn’t necessary, as Google automatically associates your site with the relevant region.

Bear in mind that this feature only affects Google’s SERPs and not other search engines.

Content Language Meta Tag

Although Bing’s webmaster tools currently doesn’t have a geo-targeting feature like Google webmaster tools, a Bing employee has verified that using the content language meta tag does help Bing determine the site’s target region.

For example, if you want your site localised to Bing Australia, put the following meta tag within your head tag.

<meta http-equiv="Content-Language" content="en-au" />

The purpose of this content language meta tag is to specify the language used in the content and to identify the intended audience. Although I couldn’t find out whether this affects Google’s or Yahoo’s algorithms, I can’t see why it wouldn’t help.

Finally, to conclude, Bing has written some best practices to follow when considering localising websites.

Hopefully this post has been helpful for you to think about what needs to be done when it comes to localising your site.

The post Geo Targeting / Localisation on Google Bing Yahoo first appeared on Danny Ng.]]>
https://dannyism.com/geo-targeting-localisation-on-google-bing-yahoo/feed/ 0
Google Organic Rankings Quality Score https://dannyism.com/google-organic-rankings-quality-score/ https://dannyism.com/google-organic-rankings-quality-score/#respond Sat, 29 Aug 2009 11:29:32 +0000 http://www.dannytalk.com/?p=611 Is there such a thing? I’ve always wondered whether Google’s organic search algorithm factors in clickthrough rate (CTR) and SERP ranking (normalised) to provide a quality score for organic listings. This is similar to how Google Adwords’ quality score works. The higher CTR you have and normalising it against the ad position gives you a…

The post Google Organic Rankings Quality Score first appeared on Danny Ng.]]>
Is there such a thing? I’ve always wondered whether Google’s organic search algorithm factors in clickthrough rate (CTR) and SERP ranking (normalised) to provide a quality score for organic listings.

This is similar to how Google AdWords’ quality score works. A higher CTR, normalised against the ad position, gives you a higher quality score, which translates into a lower CPC bid. This is how Google rewards advertisers who focus on quality and relevancy instead of pure bidding.

The understanding is that the better optimised your text ad is (relevancy) for the actual search query and the bid term, the higher CTR you’ll receive which means searchers are taking an action and making a decision (i.e. they’ve found what they’re looking for!).

This in turn rewards Google as well because they’ve provided quality advertisements from advertisers and thus, more people continue using Google and more revenue is made through CPC ads.

So the question is: does this theory of quality score from AdWords apply to organic rankings? If I optimise my site so that it gets an equal or higher CTR than the average for its SERP position, is my site, in Google’s eyes, more relevant to the user? And would rewarding it with a higher ‘quality score’ mean it generally ranks higher for the theme surrounding the search query?

There’s been some research into the distribution of CTR across the 1st page of Google’s SERPs. Below is a graph compiled by SEO Scientist showing research data from Eye Tracking and AOL.

Another study done by Searchlight Digital shows CTR distribution of the first page of Google shows somewhat similar results. I haven’t really gone through the references yet of these two sites so please take it with a grain of salt :)

So for example, it seems like 9th position on Google attracts around 2-3% of clicks. What if my website’s average was like 4-5%? Would that mean my site has high relevancy to what users are looking for and thus, should move up the rankings based on relevancy?

Google’s focus (I think, anyway) has always been to provide the most relevant search results to users, so it would make sense to boost rankings for sites that attract a higher CTR than usual.

So how do you optimise your site’s listing to maximise CTR? I guess there are only two on-page elements you can play around with: the title and the meta description. Everyone in the SEO industry knows that the title element is the most important on-page factor for organic rankings, but did you know that meta descriptions do not affect organic rankings?

However, it doesn’t mean we should neglect meta descriptions as they do affect CTR. I always treat meta descriptions as your sales pitch or your teaser to entice users to click through to your site. There’s a great blog post by Google Webmaster on how to improve your meta descriptions.

By default, Google looks for the meta description to display in the SERPs but if not found, either scans the page for relevant information or retrieves the description from DMOZ (if listed there).

Would be keen to know what your thoughts are on this topic.

The post Google Organic Rankings Quality Score first appeared on Danny Ng.]]>
https://dannyism.com/google-organic-rankings-quality-score/feed/ 0
Google Analytics: Advanced Filters Guide https://dannyism.com/google-analytics-advanced-filters-guide/ https://dannyism.com/google-analytics-advanced-filters-guide/#comments Sat, 11 Jul 2009 13:07:21 +0000 http://www.dannytalk.com/?p=623 Thought I’d share a post on how to use advanced filters in Google Analytics and what are some scenarios where they can come in handy. Advanced filters are very useful for extracting information from available fields (i.e. campaign source, campaign term) using regular expressions and then using the extracted information to manipulate other fields in…

The post Google Analytics: Advanced Filters Guide first appeared on Danny Ng.]]>
Thought I’d share a post on how to use advanced filters in Google Analytics and what are some scenarios where they can come in handy.

Advanced filters are very useful for extracting information from available fields (i.e. campaign source, campaign term) using regular expressions and then using the extracted information to manipulate other fields in Google Analytics so that you can customise how data is recorded in your reports.

advanced-filters

First of all, before you use advanced filters, it is important that you have some kind of basic knowledge on regular expressions. If you don’t, then perhaps it’s a good idea to read up on what those cryptic symbols mean and how they’re very useful in pattern matching.

Once you’ve gotten a handle on some basic regular expressions, go to your filter manager and create a new filter. In filter type, select custom filter and then the advanced radio button. You will see three drop down menus with a text field next to each one of them. In the drop down menus, you will see a list of fields available in Google Analytics available for you to use.

Each advanced filter allows you to extract information from a maximum of 2 fields, field A and field B while allowing you to write information to a maximum of 1 field, the constructor.

Once you’ve selected a field to extract information from, enter the regular expression in the text field beside it. In your regular expressions, parentheses () are used to capture the matching parts of the field value and store those matches in variables. The variables $A and $B refer to the fields, and the numbers refer to the order of the parentheses (i.e. $A3 corresponds to the 3rd set of parentheses in field A’s regular expression). These variables can then be used in the constructor to output the extracted values.

So let’s look at an example. Let’s say I wanted to append the category name based on the URL query string parameter to the page title because my CMS can’t seem to generate the category names in the page title and I can’t get any meaningful insights into which categories are popular from the content reports.

filter-example

In field A, I will choose Request URI and, next to it, my regular expression (\?|&)cat=([^&]*), which extracts the category value from the URL. This is stored in $A2 because the value I want is captured by the 2nd set of parentheses.

In field B, I’ll just select everything because I want to append $A2 to whatever is already there. This is stored as $B1.

Then in the constructor, I’m telling Google Analytics to append the value in $A2 to the value in $B1. So if the category value extracted is ‘Shoes’ and my original page title is ‘Online Shopping’, then the modified page title will be ‘Online Shopping Shoes’. This should be reflected in the content reports.
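The filter above can be simulated in a few lines of JavaScript. Google Analytics applies these filters server-side, so this is only an illustrative sketch of how $A2 and $B1 are extracted and combined, not GA’s actual implementation:

```javascript
// Sketch simulating the advanced filter described above: extract the
// category from the Request URI (field A), match the whole page title
// (field B), and build the constructor output "$B1 $A2".
function applyFilter(requestUri, pageTitle) {
    // Field A: Request URI, pattern (\?|&)cat=([^&]*)
    var fieldA = requestUri.match(/(\?|&)cat=([^&]*)/);
    // Field B: match everything, pattern (.*)
    var fieldB = pageTitle.match(/(.*)/);
    if (!fieldA) return pageTitle; // field A is required; no match, no change
    var a2 = fieldA[2]; // $A2: 2nd set of parentheses in field A
    var b1 = fieldB[1]; // $B1: 1st set of parentheses in field B
    // Constructor: append the category to the page title
    return b1 + " " + a2;
}

// "Online Shopping" with ?cat=Shoes becomes "Online Shopping Shoes"
var newTitle = applyFilter("/products.php?cat=Shoes&page=2", "Online Shopping");
```

Note that the category group is written ([^&]*), with the * inside the parentheses; if the * sits outside, as in ([^&])*, the group captures only the last character of the category.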

I need both the Request URI and the page title fields, so I’ll mark field A and field B as required. Selecting yes for the override output field means that if the output field already has a value, it is overridden with the new one. Finally, since this isn’t case sensitive, I’ll select no.

That’s pretty much it! I’ve written an example to append true searched terms to the keywords report so you can have a look at that example too.

Remember to always test filters in a test profile before applying it to your main profile.

The post Google Analytics: Advanced Filters Guide first appeared on Danny Ng.]]>
https://dannyism.com/google-analytics-advanced-filters-guide/feed/ 2