
List of HTTP status codes – HTTP Error Code

1xx: Information

Message: Description:
100 Continue The server has received the request headers, and the client should proceed to send the request body
101 Switching Protocols The requester has asked the server to switch protocols
103 Checkpoint An unofficial code used in Google's resumable-requests proposal to resume aborted PUT or POST requests (RFC 8297 has since assigned 103 to "Early Hints")

2xx: Successful

Message: Description:
200 OK The request is OK (this is the standard response for successful HTTP requests)
201 Created The request has been fulfilled, and a new resource is created
202 Accepted The request has been accepted for processing, but the processing has not been completed
203 Non-Authoritative Information The request has been successfully processed, but is returning information that may be from another source
204 No Content The request has been successfully processed, but is not returning any content
205 Reset Content The request has been successfully processed, but is not returning any content, and requires that the requester reset the document view
206 Partial Content The server is delivering only part of the resource due to a range header sent by the client

3xx: Redirection

Message: Description:
300 Multiple Choices A link list. The user can select a link and go to that location. Maximum five addresses
301 Moved Permanently The requested page has moved to a new URL
302 Found The requested page has moved temporarily to a new URL
303 See Other The requested page can be found under a different URL
304 Not Modified Indicates the requested page has not been modified since last requested
306 Switch Proxy No longer used
307 Temporary Redirect The requested page has moved temporarily to a new URL
308 Resume Incomplete Used in Google's resumable-requests proposal to resume aborted PUT or POST requests (308 has since been standardised as "Permanent Redirect" in RFC 7538)

4xx: Client Error

Message: Description:
400 Bad Request The request cannot be fulfilled due to bad syntax
401 Unauthorized The request was a legal request, but the server is refusing to respond to it. For use when authentication is possible but has failed or not yet been provided
402 Payment Required Reserved for future use
403 Forbidden The request was a legal request, but the server is refusing to respond to it
404 Not Found The requested page could not be found but may be available again in the future
405 Method Not Allowed A request was made of a page using a request method not supported by that page
406 Not Acceptable The server can only generate a response that is not accepted by the client
407 Proxy Authentication Required The client must first authenticate itself with the proxy
408 Request Timeout The server timed out waiting for the request
409 Conflict The request could not be completed because of a conflict in the request
410 Gone The requested page is no longer available
411 Length Required The “Content-Length” is not defined. The server will not accept the request without it
412 Precondition Failed The precondition given in the request evaluated to false by the server
413 Request Entity Too Large The server will not accept the request, because the request entity is too large
414 Request-URI Too Long The server will not accept the request, because the URL is too long. Occurs when you convert a POST request to a GET request with long query information
415 Unsupported Media Type The server will not accept the request, because the media type is not supported
416 Requested Range Not Satisfiable The client has asked for a portion of the file, but the server cannot supply that portion
417 Expectation Failed The server cannot meet the requirements of the Expect request-header field

5xx: Server Error

Message: Description:
500 Internal Server Error A generic error message, given when no more specific message is suitable
501 Not Implemented The server either does not recognize the request method, or it lacks the ability to fulfill the request
502 Bad Gateway The server was acting as a gateway or proxy and received an invalid response from the upstream server
503 Service Unavailable The server is currently unavailable (overloaded or down)
504 Gateway Timeout The server was acting as a gateway or proxy and did not receive a timely response from the upstream server
505 HTTP Version Not Supported The server does not support the HTTP protocol version used in the request
511 Network Authentication Required The client needs to authenticate to gain network access
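When you handle responses in code, you usually branch on the class of a status code (its first digit) rather than on every individual code. A small JavaScript sketch of that idea; the statusClass helper is my own naming, not a standard API:

```javascript
// Maps an HTTP status code to its class, per the 1xx-5xx ranges above.
function statusClass(code) {
    if (code >= 100 && code < 200) return "informational";
    if (code >= 200 && code < 300) return "success";
    if (code >= 300 && code < 400) return "redirection";
    if (code >= 400 && code < 500) return "client error";
    if (code >= 500 && code < 600) return "server error";
    return "unknown";
}
```

For example, statusClass(404) returns "client error", while statusClass(503) returns "server error".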
 

Posted on May 30, 2018 in HTML5, Programming, Website Administration

 

Validating Form Input in JavaScript

Validating Form Input

When you submit a form to a CGI program that resides on the server, it is usually programmed to do its own check for errors. If it finds any it sends the page back to the reader who then has to re-enter some data, before submitting again. A JavaScript check is useful because it stops the form from being submitted if there is a problem, saving lots of time for your readers.

The CGI script is still more reliable, as it always works regardless of whether JavaScript is enabled on the client-side or not; but having this extra safety barrier is a nice thing to have in place. It makes your page much more user-friendly, and takes out the frustration of having to fill out the same form repeatedly. It’s also very precise, as you can point out the exact field where there’s a problem.

Implementing the Check

We’re going to be checking the form using a function, which will be activated by the form’s submit event — therefore, using the onSubmit handler. Add an attribute like this to the form you wish to validate:

<form action="script.cgi" onSubmit="return checkform()">

Where checkform is the name of the function we’re about to create. If you’ve learnt your functions properly, you should be able to guess that our function will return a Boolean value: either true or false. The submit event’s default action is to send the data, but if the onSubmit handler returns false it will not be submitted; just like how we can stop a link from being followed. Of course, if there are no problems, the function will return true and the data will be submitted. Simple…

It’s impossible for me to give you a definitive validation script, as every form is different, with a different structure and different values to check for. That said, it is possible to give you the basic layout of a script, which you can then customise to the needs of your form.

A general script looks like this:

function checkform()
{
	if (value of first field is or isn't something)
	{
		// something is wrong
		alert('There is a problem with the first field');
		return false;
	}
	else if (value of next field is or isn't something)
	{
		// something else is wrong
		alert('There is a problem with...');
		return false;
	}
	// If the script gets this far through all of your fields
	// without problems, it's ok and you can submit the form

	return true;
}

If your form is quite complex your script will grow proportionally longer too, but the fundamentals will stay the same in every instance — you go through each field with if and else statements, checking the inputted values to make sure they’re not blank. As each field passes the test your script moves down to the next.

If there is a problem with a field, the script will return false at that point and stop working, never reaching the final return true command unless there are no problems at all. You should of course tailor the error messages to point out which field has the problem, and maybe offering solutions to common mistakes.

Accessing Values

Having read the Objects and Properties page, you should now know how to find out the values of form elements through the DOM. We’re going to be using the name notation instead of using numbered indexes to access the elements, so that you are free to move around the fields on your page without having to rewrite parts of your script every time. A sample, and simple, form may look like this:

<form name="feedback" action="script.cgi" method="post" onSubmit="return checkform()">
<input type="text" name="name">
<input type="text" name="email">
<textarea name="comments"></textarea>
</form>

Validating this form would be considerably simpler than one containing radio buttons or select boxes, but any form element can be accessed. Below are the ways to get the value from all types of form elements. In all cases, the form is called feedback and the element is called field.

Text Boxes, <textarea>s and Hidden Fields

These are the easiest elements to access. The code is simply

document.feedback.field.value

You’ll usually be checking if this value is empty, i.e.

if (document.feedback.field.value == '') {
	return false;
}

That’s checking the value’s equality with an empty string (two single quotes with nothing between them). When you are asking a reader for their email address, you can use a simple address validation function to make sure the address has a valid structure.
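As an illustration, one minimal shape such a check might take (the looksLikeEmail name and the pattern are my own; this is a rough structural sketch, not RFC-compliant address validation):

```javascript
// Rough structural check: something@something.something, with no spaces.
// Deliberately loose; a real validator is considerably more involved.
function looksLikeEmail(address) {
    var pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
    return pattern.test(address);
}

// Inside checkform() it would be used like any other test, e.g.:
// if (!looksLikeEmail(document.feedback.email.value)) { return false; }
```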

Select Boxes

Select boxes are a little trickier. Each option in a drop-down box is indexed in the array options[], starting as always with 0. You then get the value of the element at this index. It’s like this:

document.feedback.field.options[document.feedback.field.selectedIndex].value

You can also change the selected index through JavaScript. To set it to the first option, execute this:

document.feedback.field.selectedIndex = 0;
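A common validation pattern, assuming (as is typical, but not universal) that the first option is a placeholder like “Please choose…”, is to treat the form as invalid until the reader has moved off option 0. A small sketch of that test, with the option list passed in so the logic is easy to follow; the chosenOption name is mine:

```javascript
// Returns the chosen value, or null if the reader is still on the
// placeholder first option. The placeholder-first convention is an
// assumption; adjust if your select has no placeholder.
function chosenOption(options, selectedIndex) {
    if (selectedIndex <= 0) {
        return null; // nothing meaningful selected yet
    }
    return options[selectedIndex].value;
}

// In a real form this would be called as:
// chosenOption(document.feedback.field.options, document.feedback.field.selectedIndex)
```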

Check Boxes

Checkboxes behave differently to other elements — their value is always on. Instead, you have to check if their Boolean checked value is true or, in this case, false.

if (!document.feedback.field.checked) {
	// box is not checked
	return false;
}

Naturally, to check a box, do this

document.feedback.field.checked = true;

Radio Buttons

Annoyingly, there is no simple way to check which radio button out of a group is selected: you have to check through each element, linked with Boolean AND operators. Usually you’ll just want to check if none of them have been selected, as in this example:

if (!document.feedback.field[0].checked &&
!document.feedback.field[1].checked &&
!document.feedback.field[2].checked) {
	// no radio button is selected
	return false;
}

You can check a radio button in the same way as a checkbox.
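If the group is large, the chain of && tests can be replaced with a loop. The helper below is a sketch (the anySelected name is mine); it works because a radio group accessed by name behaves like an array of elements, each with a checked property:

```javascript
// Returns true if at least one button in the group is checked.
// "group" is anything array-like whose elements have a .checked
// property, e.g. document.feedback.field for radio buttons sharing a name.
function anySelected(group) {
    for (var i = 0; i < group.length; i++) {
        if (group[i].checked) {
            return true;
        }
    }
    return false;
}

// if (!anySelected(document.feedback.field)) { return false; }
```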

 

 

 

Source: http://www.yourhtmlsource.com/javascript/formvalidation.html

 

 

Posted on May 30, 2018 in Javascript

 

JS Charting

jsFiddle for Drill-down graph
http://jsfiddle.net/gh/get/jquery/1.7.2/highslide-software/highcharts.com/tree/master/samples/highcharts/drilldown/async/

Parameters based Graph generator: https://my.infocaptor.com/free_data_visualization.php
Dashboard Graph: http://bl.ocks.org/NPashaP/96447623ef4d342ee09b

Free js charting component:  http://code.highcharts.com/

asp.net drill-down
http://www.intertech.com/Blog/asp-net-chart-drill-down/
http://www.flex888.com/894/flex-drill-down-charts.html

d3.js. See the gallery:

https://github.com/mbostock/d3/wiki/Gallery

Drill down demos or examples:
• http://mbostock.github.com/d3/talk/20111116/bar-hierarchy.html
• http://mbostock.github.com/d3/talk/20111018/treemap.html
• http://mbostock.github.com/d3/talk/20111018/partition.html
• http://bost.ocks.org/mike/miserables/
• http://www.jasondavies.com/coffee-wheel/
• http://thepowerrank.com/visual/NCAA_Tournament_Predictions
• http://square.github.com/crossfilter/
• http://windhistory.com/map.html#4.00/36.00/-95.00 / http://windhistory.com/station.html?KMKT
• http://trends.truliablog.com/vis/tru247/
• http://trends.truliablog.com/vis/metro-movers/
• http://marcinignac.com/projects/open-budget/viz/index.html
• http://bl.ocks.org/3630001
• http://bl.ocks.org/1346395
• http://bl.ocks.org/1314483
• http://slodge.com/teach/
• http://tympanus.net/Tutorials/MultipleAreaChartsD3/
• http://bl.ocks.org/3287802

 

 

Posted on May 30, 2018 in Charting

 

Sending email in ASP.NET with email validation

//MailMessage tipsMail = new MailMessage();
//tipsMail.To.Add(email);
//tipsMail.From = new MailAddress(System.Configuration.ConfigurationManager.AppSettings["fromAddress"].ToString());
//tipsMail.From = new MailAddress("PMTips@mosaiquegroup.com");
//tipsMail.Subject = "Project Management Tips from Mosaique";
//tipsMail.Body = tipsMessage;
//tipsMail.IsBodyHtml = true;
//tipsMail.To.Add(System.Configuration.ConfigurationManager.AppSettings["adminEmail"].ToString());
//tipsMail.To.Add(System.Configuration.ConfigurationManager.AppSettings["adminEmail2"].ToString());
//tipsMail.To.Add(System.Configuration.ConfigurationManager.AppSettings["adminEmail3"].ToString());
//tipsMail.To.Add("javedarifkhan1@gmail.com");

//SmtpClient smtp2 = new SmtpClient("localhost");
////NetworkCredential credential2 = new NetworkCredential(System.Configuration.ConfigurationManager.AppSettings["smtpUser"], System.Configuration.ConfigurationManager.AppSettings["smtpPass"]);
//NetworkCredential credential2 = new NetworkCredential("smtpUser", "XXXXX");
//smtp2.Credentials = credential2;
//smtp2.EnableSsl = true;
//smtp2.Send(tipsMail);

//MailMessage msg = new MailMessage();
//System.Net.Mail.SmtpClient client = new System.Net.Mail.SmtpClient();
//msg.From = new MailAddress("smtpUser@domain.com");
//msg.To.Add(email);
//msg.IsBodyHtml = true;
//msg.Body = tipsMessage;
//client.Host = "localhost";
//System.Net.NetworkCredential basicauthenticationinfo = new System.Net.NetworkCredential("Username", "password");
////client.Port = int.Parse("587");
//client.EnableSsl = true;
//client.UseDefaultCredentials = false;
//client.Credentials = basicauthenticationinfo;
//client.DeliveryMethod = SmtpDeliveryMethod.Network;
//client.Send(msg);

//MailMessage tipsMail = new MailMessage();
//tipsMail.From = new MailAddress("user@domain.com");
//tipsMail.To.Add(email);
//System.Net.Mail.SmtpClient mail = new System.Net.Mail.SmtpClient();
//tipsMail.Body = tipsMessage;
//tipsMail.To.Add("javedarifkhan1@gmail.com");
//tipsMail.IsBodyHtml = true;

//SmtpClient smtp = new SmtpClient("localhost");
//smtp.Credentials = credential;
//smtp.EnableSsl = true;
//smtp.Send(tipsMail);

//////////// USING GOOGLE SMTP SERVER //////////////////
//// smtp.Host = "smtp.gmail.com"; // smtp.UseDefaultCredentials = true;
//SmtpClient smtp = new SmtpClient();
//smtp.Host = System.Configuration.ConfigurationManager.AppSettings["smtpHost"].ToString();
//smtp.EnableSsl = true;
//NetworkCredential NetworkCred = new NetworkCredential(System.Configuration.ConfigurationManager.AppSettings["smtpGUser"].ToString(), System.Configuration.ConfigurationManager.AppSettings["smtpGPass"].ToString());
//smtp.Credentials = NetworkCred;
//smtp.Port = int.Parse(System.Configuration.ConfigurationManager.AppSettings["smtpGPort"].ToString());
//smtp.Send(tipsMail);

 

Posted on May 30, 2018 in Uncategorized

 

The psychology of colour in marketing and branding

The psychology of colour as it relates to persuasion is one of the most interesting – and most controversial – aspects of marketing.


Yellow is psychologically the happiest colour in the spectrum

Ever wondered what attracts you to an advert/poster? The first thing that will draw your attention will be the colour.

According to PrintUK.com, “colour has an enormous effect on our attitudes and emotions because when our eyes take in colour they communicate with a part of the brain called the hypothalamus, which sends a message to the pituitary gland and sets off an emotion.”

It claimed that colour has a powerful psychological influence on the human brain, mentally, physically, consciously and subconsciously. These responses to colour can be used to the advantage of marketeers to elicit the desired response to their marketing campaigns.

“The effects of colour on our well-being are well documented,” it said. “Red and Green, ‘society and nature’ have been wired so deeply into our subconscious that no two other colours have such opposing meanings. The most obvious example of this is traffic lights – this combination is used worldwide. Sometimes the connection is not so obvious, but red is often used to reject, disagree, remove, close and cancel. On the other hand, green is a positive colour associated with yes, accept, go, add and agree. Words often just clarify the meaning.”


Colours are also considered to have a temperature. Warm colours often consist of pale green through yellows to deep red, and cool colours from dark purple, blues to dark green.

“Understanding how the mind works is an important integral part of marketing,” maintained PrintUK.com. “Consequently, it’s extremely important that you consider the colour palette of your brand before printing your corporate brand material whether that’s internal newsletters or company letterheads.”

Top colour tips

1) Investigate your industry’s colours

When you look at the business cards and websites of different companies you’ll begin to notice that businesses which operate within the same field of industry utilise similar colour schemes. This is no coincidence; business leaders opt for particular colours because they evoke certain feelings in customers.

For instance, blue is the predominant colour used by social networking sites, such as Twitter, Facebook and LinkedIn, due to its subconscious associations with logic, calm and communication. As Karen Haller, a business colour and branding expert stated, “blue relates to the mind, so consumers associate it with logic and communication. It’s also serene, like the ocean, and calming to look at”.

Consequently, before designing your printed material you should investigate the predominant colour schemes associated with your industry and incorporate these tones within your design.

2) Use primary colours for calls to action

A study by Kissmetrics revealed that the highest converting colours for calls to action are bright primary and secondary colours such as red, yellow, orange and green. Due to the fact that these vibrant colours attract attention, it’s useful to incorporate them within your business card design and website calls to action in order to capture the interest of your key consumers – and to encourage them to investigate your brand in greater depth.

3) Be consistent

From your business card printing to your company website, it’s important to promote cohesion and unity with all aspects of your brand’s overall design. For example, when you’re designing your business cards, you should aim to incorporate colour schemes and design traits that currently exist within your company website’s graphic design.

By doing so, you can begin to establish your brand’s reputation and its subconscious colour associations within the minds of your key consumers. Although this may seem like a minor aspect of your direct mail and digital branding strategies, over time it could earn you the loyalty, recommendations and return custom of a broad consumer base.

 

Posted on September 4, 2015 in Artwork Design, Branding

 

An Introduction To Full-Stack JavaScript

Nowadays, with any Web app you build, you have dozens of architectural decisions to make. And you want to make the right ones: You want to use technologies that allow for rapid development, constant iteration, maximal efficiency, speed, robustness and more. You want to be lean and you want to be agile. You want to use technologies that will help you succeed in the short and long term. And those technologies are not always easy to pick out.

In my experience, full-stack JavaScript hits all the marks. You’ve probably seen it around; perhaps you’ve considered its usefulness and even debated it with friends. But have you tried it yourself? In this post, I’ll give you an overview of why full-stack JavaScript might be right for you and how it works its magic.

To give you a quick preview:

(Figure: an overview diagram of the full-stack JavaScript components introduced below.)

I’ll introduce these components piece by piece. But first, a short note on how we got to where we are today.

Why I Use JavaScript

I’ve been a Web developer since 1998. Back then, we used Perl for most of our server-side development; but even since then, we’ve had JavaScript on the client side. Web server technologies have changed immensely since then: We went through wave after wave of languages and technologies, such as PHP, ASP, JSP, .NET, Ruby, Python, just to name a few. Developers began to realize that using two different languages for the client and server environments complicates things.

In the early era of PHP and ASP, when template engines were just an idea, developers embedded application code in their HTML. Seeing embedded scripts like this was not uncommon:

<script>
    <?php
        if ($login == true){
    ?>
    alert("Welcome");
    <?php
        }
    ?>
</script>

Or, even worse:

<script>
    var users_deleted = [];
    <?php
        $arr_ids = array(1,2,3,4);
        foreach($arr_ids as $value){
    ?>
    users_deleted.push("<?php echo $value; ?>");
    <?php
        }
    ?>
</script>

For starters, there were the typical errors and confusing statements between languages, such as for and foreach. Furthermore, writing code like this on the server and on the client to handle the same data structure is uncomfortable even today (unless, of course, you have a development team with engineers dedicated to the front end and engineers for the back end — but even if they can share information, they wouldn’t be able to collaborate on each other’s code):

<?php
    $arr = array("apples", "bananas", "oranges", "strawberries");
    $obj = array();
    $i = 10;
    foreach($arr as $fruit){
        $obj[$fruit] = $i;
        $i += 10;
    }
    echo json_encode($obj);
?>
<script>
    $.ajax({
        url:"/json.php",
        success: function(data){
            var x;
            for(x in data){
                alert("fruit:" + x + " points:" + data[x]);
            }
        }
    });
</script>

The initial attempts to unify under a single language were to create client components on the server and compile them to JavaScript. This didn’t work as expected, and most of those projects failed (for example, ASP MVC replacing ASP.NET Web forms, and GWT arguably being replaced in the near future by Polymer). But the idea was great, in essence: a single language on the client and the server, enabling us to reuse components and resources (and this is the keyword: resources).

The answer was simple: Put JavaScript on the server.

JavaScript was actually born server-side in Netscape Enterprise Server, but the language simply wasn’t ready at the time. After years of trial and error, Node.js finally emerged, which not only put JavaScript on the server, but also promoted the idea of non-blocking programming, bringing it from the world of nginx, thanks to the Node creator’s nginx background, and (wisely) keeping it simple, thanks to JavaScript’s event-loop nature.

(In a sentence, non-blocking programming aims to put time-consuming tasks off to the side, usually by specifying what should be done when these tasks are completed, and allowing the processor to handle other requests in the meantime.)

Node.js changed the way we handle I/O access forever. As Web developers, we were used to the following lines when accessing databases (I/O):

var resultset = db.query("SELECT * FROM 'table'");
drawTable(resultset);

This line essentially blocks your code, because your program stops running until your database driver has a resultset to return. In the meantime, your platform’s infrastructure provides the means for concurrency, usually using threads and forks.

With Node.js and non-blocking programming, we’re given more control over program flow. Now (even if you still have parallel execution hidden by your database (I/O) driver), you can define what the program should do in the meantime and what it will do when you receive the resultset:

db.query("SELECT * FROM 'table'", function(resultset){
   drawTable(resultset);
});
doSomeThingElse();

With this snippet, we’ve defined two program flows: The first handles our actions just after sending the database query, while the second handles our actions just after we receive our resultset using a simple callback. This is an elegant and powerful way to manage concurrency. As they say, “Everything runs in parallel — except your code.” Thus, your code will be easy to write, read, understand and maintain, all without your losing control over program flow.

These ideas weren’t new at the time — so, why did they become so popular with Node.js? Simple: Non-blocking programming can be achieved in several ways. Perhaps the easiest is to use callbacks and an event loop. In most languages, that’s not an easy task: While callbacks are a common feature in some other languages, an event loop is not, and you’ll often find yourself grappling with external libraries (for example, Python with Tornado).

But in JavaScript, callbacks are built into the language, as is the event loop, and almost every programmer who has even dabbled in JavaScript is familiar with them (or at least has used them, even if they don’t quite understand what the event loop is). Suddenly, every startup on Earth could reuse developers (i.e. resources) on both the client and server side, solving the “Python Guru Needed” job posting problem.

So, now we have an incredibly fast platform (thanks to non-blocking programming), with a programming language that’s incredibly easy to use (thanks to JavaScript). But is it enough? Will it last? I’m sure JavaScript will have an important place in the future. Let me tell you why.

Functional Programming

JavaScript was the first programming language to bring the functional paradigm to the masses (of course, Lisp came first, but most programmers have never built a production-ready application using it). Lisp and Self, JavaScript’s main influences, are full of innovative ideas that can free our minds to explore new techniques, patterns and paradigms. And they all carry over to JavaScript. Take a look at monads, Church numerals or even (for a more practical example) Underscore’s collections functions, which can save you lines and lines of code.

Dynamic Objects and Prototypal Inheritance

Object-oriented programming without classes (and without endless hierarchies of classes) allows for fast development — just create objects, add methods and use them. More importantly, it reduces refactoring time during maintenance tasks by enabling the programmer to modify instances of objects, instead of classes. This speed and flexibility pave the way for rapid development.

JavaScript Is the Internet

JavaScript was designed for the Internet. It’s been here since the beginning, and it’s not going away. All attempts to destroy it have failed; recall, for instance, the downfall of Java Applets, VBScript’s replacement by Microsoft’s TypeScript (which compiles to JavaScript), and Flash’s demise at the hands of the mobile market and HTML5. Replacing JavaScript without breaking millions of Web pages is impossible, so our goal going forward should be to improve it. And no one is better suited for the job than Technical Committee 39 of ECMA.

Sure, alternatives to JavaScript are born every day, like CoffeeScript, TypeScript and the millions of languages that compile to JavaScript. These alternatives might be useful for development stages (via source maps), but they will fail to supplant JavaScript in the long run for two reasons: Their communities will never be bigger, and their best features will be adopted by ECMAScript (i.e. JavaScript). JavaScript is not an assembly language: It’s a high-level programming language with source code that you can understand — so, you should understand it.

End-to-End JavaScript: Node.js And MongoDB

We’ve covered the reasons to use JavaScript. Next, we’ll look at JavaScript as a reason to use Node.js and MongoDB.

Node.js

Node.js is a platform for building fast and scalable network applications — that’s pretty much what the Node.js website says. But Node.js is more than that: It’s the hottest JavaScript runtime environment around right now, used by a ton of applications and libraries — even browser libraries are now running on Node.js. More importantly, this fast server-side execution allows developers to focus on more complex problems, such as Natural for natural language processing. Even if you don’t plan to write your main server application with Node.js, you can use tools built on top of Node.js to improve your development process; for example, Bower for front-end package management, Mocha for unit testing, Grunt for automated build tasks and even Brackets for full-text code editing.

So, if you’re going to write JavaScript applications for the server or the client, you should become familiar with Node.js, because you will need it daily. Some interesting alternatives exist, but none have even 10% of Node.js’ community.

MongoDB

MongoDB is a NoSQL document-based database that uses JavaScript as its query language (but is not written in JavaScript), thus completing our end-to-end JavaScript platform. But that’s not even the main reason to choose this database.

MongoDB is schema-less, enabling you to persist objects in a flexible way and, thus, adapt quickly to changes in requirements. Plus, it’s highly scalable and based on map-reduce, making it suitable for big data applications. MongoDB is so flexible that it can be used as a schema-less document database, a relational data store (although it lacks transactions, which can only be emulated) and even as a key-value store for caching responses, like Memcached and Redis.

Server Componentization With Express

Server-side componentization is never easy. But with Express (and Connect) came the idea of “middleware.” In my opinion, middleware is the best way to define components on the server. If you want to compare it to a known pattern, it’s pretty close to pipes and filters.

The basic idea is that your component is part of a pipeline. The pipeline processes a request (i.e. the input) and generates a response (i.e. the output), but your component isn’t responsible for the entire response. Instead, it modifies only what it needs to and then delegates to the next piece in the pipeline. When the last piece of the pipeline finishes processing, the response is sent back to the client.

We refer to these pieces of the pipeline as middleware. Clearly, we can create two kinds of middleware:

  • Intermediates
    An intermediate processes the request and the response but is not fully responsible for the response itself and so delegates to the next middleware.
  • Finals
    A final has full responsibility over the final response. It processes and modifies the request and the response but doesn’t need to delegate to the next middleware. In practice, delegating to the next middleware anyway will allow for architectural flexibility (i.e. for adding more middleware later), even if that middleware doesn’t exist (in which case, the response would go straight to the client).

(Figure: the “user manager” middleware pipeline described below.)

As a concrete example, consider a “user manager” component on the server. In terms of middleware, we’d have both finals and intermediates. For our finals, we’d have such features as creating a user and listing users. But before we can perform those actions, we need our intermediates for authentication (because we don’t want unauthenticated requests coming in and creating users). Once we’ve created these authentication intermediates, we can just plug them in anywhere that we want to turn a previously unauthenticated feature into an authenticated feature.
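The pipeline idea is easy to model in a few lines of plain JavaScript. The sketch below is a toy imitation of the use() and next() pattern from Express/Connect (the names mirror theirs, but this is not the real API, and the user-manager details are invented for illustration):

```javascript
// A toy middleware pipeline in the style of Express/Connect.
function Pipeline() {
    this.stack = [];
}
Pipeline.prototype.use = function (fn) {
    this.stack.push(fn); // register a middleware piece
    return this;
};
Pipeline.prototype.run = function (req, res) {
    var stack = this.stack;
    function next(i) {
        if (i < stack.length) {
            // each piece gets the request, the response and a way to delegate
            stack[i](req, res, function () { next(i + 1); });
        }
    }
    next(0);
    return res;
};

// An intermediate (authentication) followed by a final (listing users):
var app = new Pipeline();
app.use(function (req, res, next) {
    if (!req.user) { res.status = 401; return; } // reject: don't delegate
    next();                                      // accept: pass it along
});
app.use(function (req, res, next) {
    res.status = 200;
    res.body = ["alice", "bob"]; // invented sample data
    next(); // delegating anyway keeps the pipeline extensible
});
```

Running app.run({ user: "alice" }, {}) reaches the final and yields a 200 response, while app.run({}, {}) is stopped by the authentication intermediate with a 401.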

Single-Page Applications

When working with full-stack JavaScript, you’ll often focus on creating single-page applications (SPAs). Most Web developers are tempted more than once to try their hand at SPAs. I’ve built several (mostly proprietary), and I believe that they are simply the future of Web applications. Have you ever compared an SPA to a regular Web app on a mobile connection? The difference in responsiveness is in the order of tens of seconds.

(Note: Others might disagree with me. Twitter, for example, rolled back its SPA approach. Meanwhile, large websites such as Zendesk are moving towards it. I’ve seen enough evidence of the benefits of SPAs to believe in them, but experiences vary.)

If SPAs are so great, why build your product in a legacy form? A common argument I hear is that people are worried about SEO. But if you handle things correctly, this shouldn’t be an issue: You can take different approaches, from using a headless browser (such as PhantomJS) to render the HTML when a Web crawler is detected to performing server-side rendering with the help of existing frameworks.

Client Side MV* With Backbone.js, Marionette And Twitter Bootstrap

Much has been said about MV* frameworks for SPAs. It’s a tough choice, but I’d say that the top three are Backbone.js, Ember and AngularJS.

All three are very well regarded. But which is best for you?

Unfortunately, I must admit that I have limited experience with AngularJS, so I’ll leave it out of the discussion. Now, Ember and Backbone.js represent two different ways of attacking the same problem.

Backbone.js is minimal and offers just enough for you to create a simple SPA. Ember, on the other hand, is a complete and professional framework for creating SPAs. It has more bells and whistles, but also a steeper learning curve. (You can read more about Ember.js here.)

Depending on the size of your application, the decision could be as easy as looking at the “features used” to “features available” ratio, which will give you a big hint.

Styling is a challenge as well, but again, we can count on frameworks to bail us out. For CSS, Twitter Bootstrap is a good choice because it offers a complete set of styles that are both ready to use out of the box and easy to customize.

Bootstrap was created in the LESS language, and it’s open source, so we can modify it if need be. It comes with a ton of UX controls that are well documented. Plus, a customization model enables you to create your own. It is definitely the right tool for the job.

Best Practices: Grunt, Mocha, Chai, RequireJS and CoverJS

Finally, we should define some best practices, as well as mention how to implement and maintain them. Typically, my solution centers on several tools, which themselves are based on Node.js.

Mocha and Chai

These tools enable you to improve your development process by applying test-driven development (TDD) or behavior-driven development (BDD), creating the infrastructure to organize your unit tests and a runner to automatically run them.

Plenty of unit test frameworks exist for JavaScript. Why use Mocha? The short answer is that it’s flexible and complete.

The long answer is that it has two important features (interfaces and reporters) and one significant absence (assertions). Allow me to explain:

  • Interfaces
    Maybe you’re used to TDD concepts of suites and unit tests, or perhaps you prefer BDD ideas of behavior specifications with describe and should. Mocha lets you use both approaches.
  • Reporters
    Running your test will generate reports of the results, and you can format these results using various reporters. For example, if you need to feed a continuous integration server, you’ll find a reporter to do just that.
  • Lack of an assertion library
    Far from being a problem, Mocha was designed to let you use the assertion library of your choice, giving you even more flexibility. You have plenty of options, and this is where Chai comes into play.

Chai is a flexible assertion library that lets you use any of the three major assertion styles:

  • assert
    This is the classic assertion style from old-school TDD. For example:

    assert.equal(variable, "value");
    
  • expect
    This chainable assertion style is most commonly used in BDD. For example:

    expect(variable).to.equal("value");
    
  • should
    This is also used in BDD, but I prefer expect because should often sounds repetitive (i.e. with the behavior specification of “it (should do something…)”). For example:

    variable.should.equal("value");
    

Chai combines perfectly with Mocha. Using just these two libraries, you can write your tests in TDD, BDD or any style imaginable.

Grunt

Grunt enables you to automate build tasks, from simple copying and concatenation of files, to template precompilation, style-language compilation (i.e. Sass and LESS), unit testing (with Mocha), linting and code minification (for example, with UglifyJS or Closure Compiler). You can add your own automated tasks to Grunt or search the registry, where hundreds of plugins are available (once again, using a tool with a great community behind it pays off). Grunt can also monitor your files and trigger actions when any are modified.

RequireJS

RequireJS might sound like just another way to load modules with the AMD API, but I assure you that it is much more than that. With RequireJS, you can define dependencies and hierarchies on your modules and let the RequireJS library load them for you. It also provides an easy way to avoid polluting the global variable space, by defining all of your modules inside functions. This makes the modules reusable, unlike namespaced modules. Think about it: if you define a module like Demoapp.helloWorldModule and you want to port it to Firstapp.helloWorldModule, then you would need to change every reference to the Demoapp namespace in order to make it portable.

RequireJS will also help you embrace the dependency injection pattern. Suppose you have a component that needs an instance of the main application object (a singleton). You shouldn’t use a global variable to store it, and you can’t have an instance as a RequireJS dependency. So, instead, you pass this dependency to your module’s constructor. Let’s see an example.

In main.js:

  define(
      ["App", "module"],
      function(App, Module){
          var app = new App();

          // Inject the app instance through the module's constructor
          // instead of exposing it as a global variable.
          var module = new Module({
              app: app
          });

          return app;
      }
  );

In module.js:

  define([],
      function(){
          var module = function(options){
              // Store the injected app instance received from main.js.
              this.app = options.app;
          };
          module.prototype.useApp = function(){
              this.app.performAction();
          };
          return module;
      }
  );

Note that we cannot define the module with a dependency to main.js without creating a circular reference.

CoverJS

Code coverage is a metric for evaluating your tests. As the name implies, it tells you how much of your code is covered by your current test suite. CoverJS measures your tests’ code coverage by instrumenting statements (instead of lines of code, like JSCoverage) in your code and generating an instrumented version of the code. It can also generate reports to feed your continuous integration server.

Conclusion

Full-stack JavaScript isn’t the answer to every problem. But its community and technology will carry you a long way. With JavaScript, you can create scalable, maintainable applications, unified under a single language. There’s no doubt, it’s a force to be reckoned with.

 

Source:  http://www.smashingmagazine.com/2013/11/21/introduction-to-full-stack-javascript/

 


Posted by on June 24, 2015 in Javascript

 

SEO Tweaks for Big Impact

Search Engine Optimization can be a complicated and time-consuming endeavor. However, there are some small practices that can be implemented that don’t take too much time and that can really help your website gain a competitive edge in its web rankings. This is a list of ten such things compiled for your pleasure.

1. SEO Basics

Implementing the basics of SEO is among the easiest of tasks, and it will yield the greatest results for your website.

The Title Tag is probably the most important part of a website for search engine optimization. It is the first thing the crawler looks at to determine your site’s subject matter. Your title tag should include some keywords, but not so many that your site is flagged for keyword stuffing. The order of words also matters: the closer important keywords are to the beginning of the title tag, the better for your rankings (I recommend a natural-sounding flow to your title text). Although it varies by SERP, Google usually displays the first 65 to 75 characters of your title. This, however, should not deter you from using additional words or characters: characters count for web rankings even if they are not visible on the SERP. If your goal is to optimize for localized search, then it is important to include localized keywords in the title. You may also want to include branding for your overall site somewhere within the title, set off with a separator such as a hyphen. Make sure that all of your title tags are unique to their individual pages.
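As a hypothetical sketch of those points (the business name, location and hyphen separator are invented placeholders, not a recommendation for any real site), a localized, branded title tag might look like:

```html
<title>Handmade Leather Boots in Austin, TX - Example Shoe Co.</title>
```

The important keywords lead, the locality is included, and the brand sits at the end behind a separator.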

The Meta Description is also very important. It is less important for rankings than for improving clickthrough rate, as it is usually the text that the user reads on the search page to decide whether to click. In some cases the search engine will choose to display text from the page rather than the meta description; treat this as an indicator to rewrite your meta description text. Search engines usually display only the first 160 characters of the meta description, so keep your main message within that count; anything beyond it will likely be truncated and shown with an ellipsis. Although keywords in meta descriptions don’t really affect search rankings, the meta description should differ for each page, so that search engines don’t mistake the page for a duplicate.

Heading Tags are a very important factor for SEO. Search engines put an emphasis on the contents of these tags when determining what the site is about. Note that heading tags function hierarchically and should adhere to the correct structure: an H1 tag should always be included on a page, and H2 tags should be used to break your writing down into further subsections. Don’t overuse these subheading tags; use them only where they make sense within your writing structure.
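A sketch of that hierarchy (the heading text is placeholder wording):

```html
<h1>Ten SEO Tweaks for Big Impact</h1>
<h2>SEO Basics</h2>
<h3>The Title Tag</h3>
<h3>The Meta Description</h3>
<h2>Image Optimization</h2>
```

One H1 for the page topic, H2s for the major sections, and H3s only where a section genuinely needs sub-points.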

Using a Static URL Structure for your webpage or a Permalink Format for your blog is good SEO practice (although, contrary to popular belief, Googlebot can crawl dynamic URLs just fine). Matt Cutts, the head of Google’s webspam team and a go-to resource on SEO for Google, has mentioned that the use of keywords in your URL can affect search rankings (albeit only slightly). I recommend that your URL be representative of your page title, perhaps with some unimportant words like “and” or “the” omitted to shorten it, and separated by hyphens (this page: “/ten-seo-tweaks”). Where this practice really shines, though, and where it will have the greatest impact, is on your page’s click-through rate: people dislike long, meaningless URLs and are more likely to click on a shorter one.

Although an XML Sitemap isn’t strictly necessary, it is still a good idea and will improve the crawl rate and indexation of your website. For this reason it becomes more important for large websites, or websites that are updated frequently. The sitemap should be validated and connected to your Google Webmaster Tools account.

2. Image Optimization

Since search engines can’t see or understand what is depicted in a picture, it is necessary that you provide them with details about images. Make sure to include a description of the image in the alt attribute of the img tag. You can also provide context with a descriptive filename. I recommend that you optimize the file size of your image so that it loads at a decent speed, as page speed also helps SEO.
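For illustration (the filename, alt text and dimensions are made-up placeholders), a descriptively named and described image might look like:

```html
<img src="brown-leather-work-boot.jpg"
     alt="Brown leather work boot, side view"
     width="300" height="200" />
```

Both the filename and the alt attribute describe the picture in plain words, which is exactly the context a crawler can index.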

3. Webmaster Tools

Using Google Webmaster Tools is a must for a website. It gives you great insight into, and control over, the indexing of your website. You can check errors, perform geotargeting, remove an indexed page, review inbound links, and use many more functions that are very valuable for SEO. The Google Webmaster Central Blog has a very informative video entitled Using Webmaster Tools like an SEO that I recommend you check out for more information. In addition to Google Webmaster Tools, I recommend you also use Bing Webmaster Tools, which has a great interface designed with SEO in mind.

4. Google+

The advent of Google+ and the +1 button has sent shockwaves through the SEO world. For some time now, social signals have been affecting search ranking, but not in the way that Google+ does. Webpages that have been +1’d by people in your circles will usually appear at the top of the search engine results page, above other organic results, making a Google+ presence essential to your SEO strategy.

5. Social Networks

Sharing on social networks other than Google+ can affect your search ranking a little (not nearly as much as Google+, of course), and social signals will likely become more and more important to the future of search. Even if it doesn’t impact your search rankings, sharing is still recommended because it will only add to traffic and conversions for your website.

6. Google Authorship Markup

You can use rel="author" on your website or blog to display your picture and author information next to a page in the SERP. The picture is linked to your Google+ profile. Authorship can add authority to your name and even improve click-through rate (it helps your page stand out in the SERP). Cyrus Shepard recently wrote about how he was able to further optimize his author picture to increase web traffic.

7. Social Meta Data

Social Meta Data will not have a direct effect on SEO, but it will help with distribution among social networks, which in turn affects SEO; social signals have become increasingly important to search engine rankings. When your link is shared on a social network like Facebook or LinkedIn, social meta data dictates the thumbnail, title, and description that are displayed. If these aren’t set, the defaults often display poorly and fewer people will click on the link or reshare it. This is probably the most difficult to implement of the SEO tweaks mentioned here, and I apologize if it is confusing.

To Be Used With Facebook / Opengraph

First, modify the attributes of your <html> tag to look like <html xmlns:og="http://opengraphprotocol.org/schema/" xmlns:fb="http://ogp.me/ns/fb#">

Then, add the following to the <head> section of your webpage:

<meta property="og:site_name" content="Name of Website or Blog, Not the Page Name" />

<!-- Can be "article" or "website" depending on the type of page. -->
<meta property="og:type" content="article" />

<meta property="og:locale" content="en_US" />
<meta property="og:title" content="Title For Your Webpage (Similar to Title Tag)" />
<meta property="og:description" content="Description Text for Webpage (Similar to Meta Description Tag Contents)" />

<!-- The image should be representative of the page. It will appear as the
     thumbnail when posted to Facebook and some other social networks. -->
<meta property="og:image" content="http://example.com/url_to_representative_image.jpg" />

To Be Used With Twitter Cards

Add the following to the <head> section of your webpage:

<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@twitter_handle">
<meta name="twitter:creator" content="@twitter_handle">

<!-- Should be the canonical URL of the webpage. -->
<meta name="twitter:url" content="http://www.example.com/self_referening_page.html">

<meta name="twitter:title" content="Title For Your Webpage (Similar to Title Tag, Maximum 70 Characters)">
<meta name="twitter:description" content="Description Text for Webpage (Similar to Meta Description Tag Contents, should be less than 200 characters)">

<!-- The image should be representative of the page. It must be at least
     60px by 60px; images greater than 120px by 120px will be resized and
     cropped to a square aspect ratio. -->
<meta name="twitter:image" content="http://example.com/url_to_representative_image.jpg">

Make sure to check your implementation of these tags with Google’s Rich Snippet Testing Tool.

8. Keyword Research

Keyword research can be a long and tedious process to complete in full, but doing just a little can go a long way. I recommend optimizing for two or three keywords; it isn’t necessary to go crazy with them. Whatever you write should sound natural, maintaining an organic flow.

Also keep in mind that you will likely be competing with other high-profile websites for search ranking on certain keywords (this is bad for you). You may want to try to optimize for long-tail keywords, or keywords that are less competitive, at first. There are several free tools at your disposal that can help you with your keyword research:

Note: Many of these tools are meant for PPC campaigns, but can also be used for SEO purposes.

9. Consistent Linking

Linking is a factor that Google and other search engines use to rank your website. I won’t get into link building here, but I would like to stress that the link to your website should be consistent across the internet. You can link to your website in your social networking profiles, email signatures, and beyond. Choose whether you would like to include the “www” or the trailing “/” and be consistent everywhere you put this address. It is also worth mentioning that you should not spam the web with your website’s link.

10. Analytics

Employing an analytics package like the free Google Analytics can be very helpful. You can use it to easily see what is working and what isn’t. Use it to test and improve your SEO practices.

If anyone has any questions, feel free to ask them in the blog comments below. I would be happy to answer.

 

Posted by on December 15, 2014 in Ecommerce, SEO - Search Engine Optimisation

 
 