Load Pages With Ajax (jQuery)

As you might have noticed, this site has been updated quite a lot recently. One of the new things is that page loading is now handled entirely with JavaScript/Ajax (with a fallback for crawlers). In this post I'll explain how I've accomplished this. It might not be the best possible solution, but it works, so... I'm happy :)

File structure

The file structure used for this website is actually quite simple. I have a "blog" folder which contains all the PHP code that makes the blog work (plus a little admin section), and a "content" folder which contains all pages as separate HTML files. Note that these files don't contain any header/script information, as that lives in the index.php file in the main directory (where the initial routing happens). Another important file in this process is, of course, the JavaScript file, which takes care of all the Ajax work.
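To make this a bit more concrete, here's a rough sketch of the layout (the file names are illustrative, not necessarily the ones from my actual site):

index.php              (header, scripts and the initial routing)
js/
    main.js            (the Ajax code from this post)
blog/                  (PHP blog code + little admin section)
content/
    yourhome.html      (plain page fragments, no header/scripts)
    yourcontent.html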

getPage function

First of all, I created a function which does all the page-loading work.

function getPage(page){}

This function takes one parameter, "page", which contains the URL of the page that needs to be loaded. After that, it takes care of all the Ajax magic. But first, we perform a small action to make everything smoother and more beautiful: this line of code animates a scroll to the top of the page, so the user lands at the top of the new page and can start reading from there.

function getPage(page){
    // smooth-scroll back to the top before swapping the content
    $("html, body").animate({ scrollTop: 0 }, "fast");
}

With these rather easy lines in place, we can start creating the Ajax request.

$.ajax({
    type: 'GET',
    url: page,
    success: function(html){
        // replace the container's contents with the freshly loaded page
        $('#content').html(html);
    }
});

As you can see, this doesn't contain any crazy stuff either. It loads the page from the given URL and puts it in the div with the ID "content". And that's about it.
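If you want to be safe, you could also add an error callback so a failed request doesn't silently leave the old content in place. A minimal sketch (the fallback message is just an example):

$.ajax({
    type: 'GET',
    url: page,
    success: function(html){
        $('#content').html(html);
    },
    error: function(){
        // hypothetical fallback: show a simple message in the container
        $('#content').html('<p>Sorry, that page could not be loaded.</p>');
    }
});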

Now, to add some fancy loading animation, I've chosen to use the plugin NProgress. All you need to do to get this plugin working is include its script and stylesheet in your main web page (the index.php), like this (note that this needs to be declared before you start using the plugin):

<link rel="stylesheet" href="/js/nprogress.css">
<script src="/js/nprogress.js"></script>

Now we just have to point out when the loader needs to start, and when it needs to stop. Afterwards, you have yourself a fancy loader.

function getPage(page){
    $("html, body").animate({ scrollTop: 0 }, "fast");
    NProgress.start();
    $.ajax({
        type: 'GET',
        url: page,
        success: function(html){
            $('#content').html(html);
            NProgress.done();
        }
    });
}
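A small caveat: if the request fails, the success callback never runs and the progress bar never stops. A sketch of one way around this, using jQuery's complete callback (which fires on both success and failure):

function getPage(page){
    $("html, body").animate({ scrollTop: 0 }, "fast");
    NProgress.start();
    $.ajax({
        type: 'GET',
        url: page,
        success: function(html){
            $('#content').html(html);
        },
        complete: function(){
            // runs whether the request succeeded or failed,
            // so the progress bar always finishes
            NProgress.done();
        }
    });
}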

To map your pages easily to the right content you can use PathJS. To use this plugin you need to define/map all your pages like this:

Path.map("#!/home(/)").to(function(){
    getPage("yourhome.html");
});

Path.map("#!/content(/)").to(function(){
    getPage("yourcontent.html");
});

Path.root("#!/home");
Path.listen();

As you can see, all we do is map a URL (for example: yourdomain.com/#!/home) to a certain content page with the getPage function we've just created. This fetches the content and puts it in the container. And BOOM, there you have it... your content gets loaded via Ajax.
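PathJS can also catch hashes that don't match any mapped route, via Path.rescue. A small sketch, assuming a (hypothetical) notfound.html fragment in your content folder:

Path.rescue(function(){
    // called when the current hash doesn't match any mapped route
    getPage("notfound.html");
});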

You might be wondering why I'm using '#!' instead of just '#'. Well, this is for Google to know that the page is crawlable: when a crawler sees a '#!' URL, it requests the same page with an _escaped_fragment_ query parameter instead. To handle this, I've created some PHP routing in my index file which serves static pages to Google. First of all, I have some variables at the top of the page:

<?php
// true when a crawler requests the page with ?_escaped_fragment_=...
$crawler = isset($_GET["_escaped_fragment_"]);
// split off the query string, keeping only the path
$parts = explode('?', $_SERVER['REQUEST_URI'], 2);
// base URL of the page
$url = "http://".$_SERVER['HTTP_HOST'].$parts[0];
?>

The first line checks if there is an escaped fragment in the URL (which means a crawler is requesting the page). The second one splits the request URI on the query string, and the third one builds the base URL of the page.

<?php
    //routing (for crawlers)
    if($crawler){
        // _escaped_fragment_ holds the part after '#!', e.g. "/home"
        $urlparts = explode("/", $_GET["_escaped_fragment_"]);
        if($urlparts[1] == "home"){
            readfile($url."yourhome.html");
        }else{
            readfile($url."yourcontent.html");
        }
    }else{
?>

Nothing high-tech here either. This code splits the escaped fragment (the URL the Google crawler generates), checks which page was requested, and returns the right page data with readfile. If it's not a crawler, the regular Ajax page is served instead.

Et voilà, there you have it: an Ajax page that is crawlable.