Insecurity by Over-security

I’ve worked for many clients with many ideas of what makes a secure environment. Some use simple FTP servers, and some don’t trust anyone but their own employees. Others use multiple, nested levels of remote desktop connections, RSA tokens, and specialized software. But is there such a thing as too much security — so much that it actually becomes insecure?

Secure Password Requirements

To make a strong password, experts tell us it must include upper- and lower-case letters, numbers, and special characters. Remembering it will likely be difficult. Entering it will likely result in typos. If this is what you believe, you’re wrong, as illustrated by this popular XKCD comic.

XKCD comic strip

Because many systems require these complex passwords, you need to write them down. If you’re a good developer, you use a secure password database. If not, maybe you just attach sticky notes to a monitor. The problem is that when remembering a password is hard, people often store it insecurely. All that complexity is worthless if someone accidentally sweeps that sticky note under the door and into someone else’s hands.

Many people use online password storage services, like LastPass. These services are convenient, but also problematic. It is increasingly common for online companies to announce that someone hacked their server and leaked passwords.

Avoid this by using long, memorable passwords. It’s much more secure in the long run.
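The XKCD argument boils down to a quick entropy estimate. As a minimal sketch — assuming every character or word is chosen uniformly at random, which real users rarely manage (and that's the comic's whole point) — a four-word passphrase holds roughly as many bits as an eight-character symbol soup:

```javascript
// Rough entropy estimate: bits = picks * log2(pool size).
// Assumes truly random selection -- something humans rarely do,
// which is exactly why "complex" passwords underperform in practice.
function entropyBits(picks, poolSize) {
  return picks * Math.log2(poolSize);
}

// An 8-character password drawn from the 95 printable ASCII characters:
console.log(entropyBits(8, 95).toFixed(1));   // ~52.6 bits

// A 4-word passphrase drawn from a 7776-word (Diceware-style) list:
console.log(entropyBits(4, 7776).toFixed(1)); // ~51.7 bits
```

Comparable entropy, but only one of the two can be remembered without a sticky note.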

Frequent Password Resets

Password complexity is bad enough, but businesses also require passwords to be reset far too often. When systems require frequent password resets, most users create easily guessable variants of a password. For example, MarchPass123$ is followed up next month by AprilPass123$. If someone finds those old passwords, then figuring out the current password requires little effort. Passwords shouldn’t last forever, but requiring users to change them frequently only results in more security holes.
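To see just how little effort is required, here's a hypothetical sketch (the function and the password pattern are my own illustration, not a real cracking tool): given one leaked month-plus-suffix password, every likely current variant can be enumerated in a dozen guesses.

```javascript
const MONTHS = ["January", "February", "March", "April", "May", "June",
                "July", "August", "September", "October", "November", "December"];

// Given one leaked "MonthSomething" password, guess the current password
// by swapping in each month name and keeping the suffix unchanged.
function variantsFromLeak(leaked) {
  const month = MONTHS.find((m) => leaked.startsWith(m));
  if (!month) return [];
  const suffix = leaked.slice(month.length);
  return MONTHS.map((m) => m + suffix);
}

console.log(variantsFromLeak("MarchPass123$").includes("AprilPass123$")); // true
```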

File Transfer Restrictions

Another common security feature I’ve noticed is the disabling of copy/paste or file shares while remotely connected to a machine. Rarely do I work on a project that requires remote access where I don’t need to transfer some files or copy and paste a large chunk of code.

When these features are disabled, users find work-arounds — usually insecure work-arounds. Typing by hand is tedious and usually impractical. More likely, you’ll upload the files to an online service or save them to a USB stick. I’ve even seen cases where companies block access to sites like Google Drive and Dropbox, which forces users to find less reputable online services. As more security is added, systems often become less secure.

Two-factor Authentication

Security experts eventually realized that complicated, oft-updated passwords weren’t working, so their solution was to make things even harder for users. That’s right: in addition to the password nightmare, many systems now require users to enter a string of digits from a key fob or piece of software that changes every minute or so.

How does this lead to insecurity? Well, I’ve actually seen people write their username and password onto the fob itself. In the case of RSA software apps, some users keep a plain-text file with all the information needed to access the system right on the desktop.

The point I want to make is that when you make security harder for the user, the user will make the system less secure. The answer isn’t to add more complexity, it’s to change the security. If the only barrier to entry is a simple fingerprint scan or an easily remembered password, then users have no need to make notes or to circumvent the system.

Keep security practices simple to keep systems secure.

Getting Started with Experis Web Framework 2

Experis Web Framework 2, or EWF 2, is a front-end web development framework that provides organization and structure without adding a lot of file bloat. It makes use of many popular technologies, such as Compass and Sass, RequireJS, Grunt, and more. If you aren’t familiar with these useful tools, don’t worry – setup is simple, and before long, you’ll be wondering how you lived without them.

So, let’s get started. Here are the applications you need:

Node.js

There are a myriad of things you can do with Node.js, but we’re going to use it primarily because it’s a prerequisite for using Grunt. Don’t let that stop you from learning more about it, though – if you’re a JavaScript app developer, Node.js is an essential tool.
Installation is easy. Just click the “Install” button on the home page. You can’t miss it. Run the installer when it finishes downloading, and keep clicking “Next” until you’re done.

Ruby

Ruby is an open source programming language used by Sass/Compass, so we need to make sure it is installed and working. If you’re on a Windows machine, you can download an installer from the Ruby website; OS X and Linux users can find installation instructions there as well.

The installation defaults should be fine, but make sure to select the option to add the Ruby executable to your environment path variable.

Compass

You can’t talk about Compass without understanding Sass first. Sass is basically CSS on steroids. It allows you to make use of variables, nested CSS selectors, and programmatic features that save time and effort. But what’s really amazing is that you don’t need to know anything more than standard CSS to start using it. That’s right – just start writing plain, vanilla CSS, and start gradually mixing in some of the Sass extras as you learn them.

So, what exactly is Compass, then? Well, it’s basically a library of common Sass functions – sort of like how jQuery relates to JavaScript. For example, instead of writing your own Sass linear gradient function, you can rely on the linear-gradient() mixin that Compass provides (a mixin outputs CSS when called). In case you’re wondering, the linear-gradient() mixin allows you to easily specify the conditions of a background gradient. The browser-specific vendor prefixes and syntax differences will be taken care of automatically.
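To get a feel for what the mixin saves you, here’s a toy JavaScript sketch of the kind of expansion Compass performs behind the scenes. It’s an illustration only — the real mixin supports more browsers and handles the older, slightly different prefixed gradient syntaxes for you.

```javascript
// Toy version of what a linear-gradient mixin automates: turn one logical
// declaration into the vendor-prefixed variants. (Real prefixed gradient
// syntaxes differ slightly; Compass takes care of those differences.)
function linearGradient(direction, from, to) {
  const value = `linear-gradient(${direction}, ${from}, ${to})`;
  return ["-webkit-", "-moz-", "-o-", ""]
    .map((prefix) => `background-image: ${prefix}${value};`);
}

linearGradient("to bottom", "#fff", "#000").forEach((line) => console.log(line));
```

One mixin call in your Sass; four (or more) lines of CSS out the other end.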

Compass installation relies on the command line, but it’s only a couple lines of typing. On Windows, press the Windows key on your keyboard and the letter R. Then type “cmd” (without quotes) and press Enter. This will open the command prompt. You can use the Terminal application on OS X and Linux or other similar command line applications.

Next we need to make sure RubyGems, Ruby’s package manager, is up to date. A gem is a packaged Ruby library or application, and Compass is distributed as one, so updating RubyGems first helps ensure we get the latest version of Compass. Type the following and press Enter.

gem update --system

After a short while, the update should complete. To install Compass, type:

gem install compass

Hopefully, there weren’t any issues, and you’re good to go.

That’s it! No more software required. But you should also consider installing LiveReload.

LiveReload

This free browser extension is available for Chrome, Firefox, and Safari, and can be downloaded from the LiveReload website or by searching for it in your browser’s extension store/gallery.

LiveReload makes it possible to write code and have it immediately update in your browser without the need to refresh. This is really amazing when used with Sass, RequireJS, and Grunt.

Downloading EWF 2

The framework files can be downloaded from the project’s GitHub repository. If you aren’t familiar with Git, feel free to click the “Download ZIP” button on the page for a simple ZIP file. The advantage of using Git is that you can easily pull in the latest updates and even contribute to EWF 2 if you so desire.

Either way you go about getting the files is fine. Just remember to unzip the files if you went that route. Now, open the folder in your editor of choice, and let’s take a look at what we have available.

About the File Structure

You’ll notice that there are two top level directories (javascripts and stylesheets) and some other files. Here’s a quick overview of the files in the root directory.

  • .gitattributes and .gitignore: These files are specific to Git and can be ignored or deleted if you aren’t interested in contributing to EWF 2.
  • config.rb: This is a settings file for Compass. It tells Compass where to find directories and how to output the final CSS. Learn more about the available settings in the Compass configuration documentation.
  • Gruntfile.js: This file is used by Grunt to tell it what tasks it needs to run. We’ll talk more about Grunt later.
  • index.htm: EWF 2 includes this starter HTML page. Modify it to meet your needs when you’re ready to start.
  • package.json: This file lists the packages that npm needs to download for Grunt to work correctly.
  • readme.htm: Here we have a quick tutorial as well as a self-serving example of something built with EWF 2. If you open it now, you’ll notice that there isn’t a CSS file for it to use. We need to compile the Sass first.
  • readme.md: This file is a basic readme file in Markdown format. It is used by GitHub for a description. Go ahead and delete it.

If you open the javascripts and stylesheets directories, you’ll see that they have a specific structure. Style sheets are organized into modules (Sass files that are used by other files, but don’t output anything on their own), partials (Sass files that make up the various pieces of your website), and vendor (third-party style sheets).

The javascripts directory has an app subdirectory (for JavaScript files specific to your website only) and a lib subdirectory (for third-party or reusable libraries and plugins).

While this is the recommended directory structure for working with EWF 2, you may change it to your liking, moving or deleting files as you find necessary. If you aren’t familiar with Compass and RequireJS, you should probably keep the files as they are, since the configuration for these technologies is tied to the directory structure.

Compiling CSS and JavaScript

As you should be aware, it is best practice to minify and combine your CSS and JS files. Unfortunately, this process takes time and has to be done every time you make an update. If you’ve worked with Compass before, you’re familiar with the compass watch command, which tells Compass to look for changes to your Sass files while you update them. And if you’re familiar with RequireJS, you may have used the r.js optimizer. There are many other tools – such as image optimizers and sprite generators – that are essential to building great websites, but they add time and complexity.

Fortunately, Grunt makes it possible to run all of these tools at one time, and only when they need to run. So if I make a Sass update, Compass will compile the CSS, and if I update one of my jQuery plugins, the RequireJS optimizer will kick in and generate a minified JS file. Additionally, if I’m using the LiveReload browser plugin, my browser will automatically refresh as I make these changes, showing my updates as I make them.

So how does all this magic happen? It only takes a few simple commands. Open your command prompt, and navigate to your project’s root directory. This is done with a command such as the following.

cd path/to/my-project

Now, run the following command to download all the packages we’ll be using:

npm install

The previous command reads your package.json file and pulls in all the other files that you need for Grunt to work correctly. When it’s done, you need to install the Grunt CLI (Command Line Interface). You can do that with one more command:

npm install -g grunt-cli

That’s all. Now we can use Grunt to automate our tasks for us. Let’s start by compiling the CSS and JS that we need to view the readme.htm file in our browser. Run the following command:

grunt

When finished, you should get a “Done, without errors” message. Now open the readme.htm file in your browser. It should now appear as expected.

Working with EWF 2 in Real Time

We’ve seen how we can use the grunt command to compile our CSS and JS in one step, but it would be even better if Grunt automatically ran when we updated our files. If you look at the bottom of the Gruntfile.js file in the root directory, you’ll see three grunt.registerTask() calls. These are profiles that can be used to run certain tasks. The first profile, “default,” is the one used when we use the grunt command by itself. As you can see, it runs compass:dev and requirejs. Further up in the file, you can see the settings associated with these tasks.

Let’s run the “server” profile, which makes use of the “watch” and “connect” packages. The “watch” package causes Grunt to continue running and watch for file changes. The “connect” package creates a very basic web server for us so that we can take advantage of features that don’t work well when browsing files directly from the file system, such as root-relative paths or Selectivizr, a script that allows older versions of Internet Explorer to use CSS3 selectors. To start this profile, type the following in your command prompt:

grunt server

When it finishes compiling the CSS/JS and starts up the web server, it should say, “Waiting…” In the “connect” settings, we’re using port 9001, but you can change it to whatever you want. Open the index.htm file in your browser by navigating to http://localhost:9001/index.htm. If that doesn’t work, double-check the port number in the “connect” settings of Gruntfile.js.

If you have the LiveReload plugin installed in your browser, go ahead and activate it. If you didn’t install it, you’ll still need to refresh your browser after making updates. Go ahead and make some updates in the index.htm file, Sass files, and JS files. Notice that after saving your changes, Grunt will automatically detect the changes and run the appropriate tasks.

When you’re finished, press Ctrl+C in the command prompt. Then press Y and Enter to confirm that you want to quit.


I hope that this was useful to you and that you’ve found a new, convenient way to work on front-end Web projects. As this was a “getting started” tutorial, I didn’t go in depth with my descriptions of the technologies, but please post in the comments if you have any questions or issues that you ran into.

We only scratched the surface of what you can do with EWF 2, and I plan on creating more tutorials to showcase some of its incredible features.

SEO and the Modern Web

The purpose of a search engine has always been to connect people with the information they seek, and as time goes by, search engines get better at doing it. In the early days, Web crawlers were easily fooled by authors who engaged in keyword stuffing, link farming, and a myriad of other tricks to climb the search result ladder. As time went on, search algorithms evolved, and those old tactics no longer worked. Unfortunately, many people still think that there is a magic lever a Web developer can pull to push their website to the top of the result list. Well, I’m going to tell you that it’s simply not true – or at least it hasn’t been discovered yet. Your search ranking is now more in the hands of your marketing team and content authors than the Web developer. So, if you’re not a great writer, it may be time to learn how to become one – or time to hire someone who is.

Quality Keyword- (and Synonym-) Rich Content

Writing high-quality content means writing for people, not machines. Continuously repeating a word or catering to what you think Google wants to hear instead of to your actual readers is likely to have the opposite effect: a lower ranking or even penalization. The first step to good SEO is to put your readers first.

That doesn’t mean you should avoid using keywords, just that you should use them strategically. Synonyms can be helpful in that respect. It’s also worth noting that you shouldn’t target words that are already over-saturated in the major search engines. For example, I’m not expecting to appear on the first page of results for words and phrases like “SEO” or “search results.” There are too many high-ranking websites already targeting those words. However, I may appear in a search for something like “modern optimization for higher search ranking.” You can check the popularity of a search using Google’s keyword tool. Do some research on the keywords you would like to use to see what the competition/demand ratio is.

Links and Referrals from High Ranking Websites

There’s no question anymore that the author plays the most important role in improving search results. However, it doesn’t matter how good your content is if no one else knows about it. Search engines are basically a form of popularity contest, and if other people aren’t talking about you, then don’t expect Google to trumpet your site either. That’s where marketing comes in. You need to promote your website. The easiest way to do this is to leverage your social media networks, such as Twitter and Facebook. Your public tweets and posts get picked up by search engines and count as a vote for your website. But more importantly, they may be picked up by other website owners who decide to help spread the word by linking to you. Links from websites that are well regarded in their respective fields are essential to being noticed by the major search engines.

Don’t waste time posting links to your website in blog comments or by purchasing a spot in a list of “featured” sites. Google is smart enough to recognize these areas of a Web page, and you aren’t likely to see a good return on the time and/or money spent. Focus on avenues of marketing that will truly draw attention from experts in your field, and the search engines will follow.

Useful Website Optimizations to Improve SEO

Notice that, so far, I haven’t talked about anything that requires the assistance of an SEO “expert” or Web developer – except for having a website that allows you to publish content, of course. As I said earlier, a developer is no longer the leading influence in your search ranking. It’s true that we’ve already covered the two major influences on optimization: good content and lots of inbound links from respected websites. However, that doesn’t mean there aren’t changes you can make to improve your SEO.

Make Sure Your Site Loads Quickly

A website that takes more than a few seconds to load is going to seriously jeopardize your ranking. A user isn’t likely to stick around and wait for your website when it’s just as fast to click the “Back” button and click the next result in the list. Google knows this, so why would it include a slow Web page in the top results? Here are a few ways to improve your page load speed.

Use GZIP and Other Compression Techniques When Serving Files

Make sure your server GZIP compresses your files before sending them. This ensures that they download quickly by temporarily replacing common strings of text in the file and undoing the process on the Web browser’s end. To see if the files are being compressed, use your browser’s developer tools to check the server response, and look for the following: Content-Encoding: gzip

In addition to GZIP, you should strip whitespace characters from your HTML, CSS, and JS files (otherwise known as minification). For large files, this can make a big difference.

Image compression is important as well. If you aren’t sure when to use GIF, PNG, or JPG, then you need to learn. Also, don’t send images larger than they need to be. This can be especially challenging when supporting small mobile devices and high pixel-density displays.

Reduce the Number of HTTP Requests

Browsers usually only download a small number of files concurrently (i.e. at the same time). So if your website has seven CSS files and twelve jQuery plugins, you may be holding up the display of your website. Combine these files into as few as possible, or load your JavaScript asynchronously (lazy loading).

If you have a large number of background images, consider combining them into an image sprite. There are many tutorials that explain how this can be done. If you use Compass, then you can automate the sprite creation process.

Optimize Server-side Code

Slow loading pages aren’t always due to network-related activity and file sizes. If your server-side code is doing a lot of unnecessary work, then it could take a while before the files even get sent off. The test-driven development (TDD) process is a great way to catch code that isn’t up to snuff.

Create Clean, Semantic HTML

If you use appropriate HTML tags to define your content, it will be easier for search engines to categorize your site and find relevant information. New HTML5 elements, such as main, article, and aside, are much more descriptive than generic div tags. Clean, well organized code means that Web spiders won’t have a hard time finding the “meat” of your content, and your file size will probably benefit as well.

Create a Google+ Account and Make Use of Rich Snippets

Rich snippets are a way of categorizing your content and providing short bits of useful info along with your search result description. For example, if your content is a review of a local restaurant, you can use a rich snippet to display a star rating with your search result. A number of content types are currently supported.

While not a standard rich snippet, Google offers support for an author snippet that will cause the content author’s picture and the number of people who have the author in their Google+ circles to appear next to the search result. This draws attention and will increase your click-through rate. It also requires minimal effort to set up.

Canonical URLs and 301 Redirects Ensure the Right URL is Indexed

Most websites use SEO-friendly URLs instead of complex query strings with IDs and other variables. But they often forget that the page will be accessible through both the short, nice URL and the long, ugly one. If a search engine finds both URLs, it looks like you have duplicate content on your site. Most of the time, only one URL will appear in the result listing, but in the worst case, you may suffer a penalty if Google thinks you’re trying to game the system.

Canonical URLs are supported by all the big search players now, and they’re basically just a <link> tag that tells crawlers where to find the primary source of information (i.e. the friendly URL). Be careful though; if you don’t know what you’re doing, you could destroy your search ranking. Imagine a case where your CMS accidentally adds canonical link tags pointing to the wrong URL. You might not notice it without checking the HTML source code, but it would wreak havoc on your page rank. A sitemap.xml file is also important. Google, and probably others, use this file when determining the preferred URL to a page.
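The tag itself is a one-liner. As a sketch (the URL below is a placeholder), a template or CMS might generate it like this:

```javascript
// Build the canonical <link> tag for a page's preferred ("friendly") URL.
// Crawlers that reach the long, query-string version of the page will then
// credit the friendly URL instead of flagging duplicate content.
function canonicalTag(preferredUrl) {
  return `<link rel="canonical" href="${preferredUrl}">`;
}

console.log(canonicalTag("https://www.example.com/articles/seo-and-the-modern-web"));
```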

If canonical URLs aren’t a good fit, be sure to use 301 redirects to keep users and spiders off the wrong URL. Most public websites should have a 301 redirect for the “www” subdomain, either to it or from it – it’s up to you.

While we’re talking about redirects and canonicalization, it’s worth mentioning that a high number of broken links isn’t good for SEO either. Regularly check your pages for links that no longer work.

Shoot for Long Page Visits and Low Bounce Rates

Traffic tools, such as Google Analytics, provide a wealth of information about your site visitors. If the average time spent on your website is only a few seconds and you have a high bounce rate, it’s a sign that people aren’t finding what they want from your site. Take a closer look at your content and see how you can revise it to target the people that would find it useful.

Only One Meta Tag Still Applies to SEO

Okay, that heading may be exaggerating a bit, but only a little. The meta description tag is probably the most important of all the meta tags, and it’s still not as important as the other items I’ve listed. If your SEO expert is telling you that you need to optimize your meta tags, he’s probably behind the times.

However, the meta description tag will sometimes be used in the actual search result summary for your page – but only if Google couldn’t find something better in your content itself. The reason meta tags are largely ignored by search engines is that they were abused so often by developers trying to get a leg up in the results. So now, less attention is paid to content that isn’t actually visible on the page. Still, make sure to include a description – it may give your page one more opportunity to reel in a reader.

In summary, in order to perform well in search listings, you need great content and lots of inbound links from great websites. These are the two most important things to SEO, but don’t forget that there is still room for a Web developer to make optimizations – just don’t expect him or her to bring your site to the top search result in Google with a few code changes.