When I started using Google Chrome more, I realized there was no NoScript plugin available for it (NoScript is different from a pop-up blocker) like there is for Firefox. After some research, I found many users recommending uMatrix, a beefed-up ad blocker forked from HTTP Switchboard. I wasn't too impressed with the alternatives based on the feedback and reviews in the web store, so I tried it out, and my first impression was: "I have no idea what I am doing." Fortunately, I persisted, because it is a lot simpler than it appears. Once you spend some time learning how it works and customizing the settings to your browsing tendencies, it is actually a manageable and powerful alternative to Firefox's NoScript. Below, I'll break down the plugin's core functionality to spare you the painful learning curve of uMatrix.
- "That huge box that comes up – I don't know what any of it is." – Neither did I, entirely. But as I began learning more about web development from launching this blog, I realized it is actually quite simple: the columns are the types of content that make up a webpage (cookies, CSS (cascading style sheets), images, scripts, frames, XHR and other kinds of code), and the rows are the many different websites that content is loaded from. That easily adds up to a grid of 8×8 or bigger, meaning 64 or more individual squares to click. Yeah, it gets busy. You only need to know a few of them, though.
- You can save your exceptions/rules. Don't want cookies to be collected on one site, but they're required on another? No problem: click the uMatrix icon next to the address bar, click the top half of the square where the cookie column and the domain row intersect, then click the lock icon in the top row. Done. Not too bad, huh? The same principle applies to entire domains, listed in each row. If you click the top half of the domain name, the entire row should turn green and thus be 'whitelisted'.
- "So what is the bare minimum I need to know?" By default, uMatrix blocks most things, which will at first reduce the functionality of your browser and even hinder your ability to fully use a website by loading only part of its content. By adding exceptions one at a time, your browser with uMatrix takes the least amount of risk necessary for your specific needs. For instance, I know that on StumbleUpon all the content is contained within frames, so I simply allowed all frame content to load on the domain by clicking the top half of the "frame" square. Now I can use that website safely. Other pages are more convoluted and take a bit of experimenting to get the permissions right.
Additionally, many popular websites like Twitter offload content to other subdomains or similar-sounding domain names; e.g., I have to allow code from "twimg.com" for the "Who to follow" box to load, because that's how they designed their site. You must refresh the webpage after adding new exceptions. Other times it is less clear what needs to be allowed. CAPTCHAs ("Completely Automated Public Turing test to tell Computers and Humans Apart") are almost always embedded and load from some random domain name, and you'll need to find which one it is to get through the verification. The Google-sponsored CAPTCHAs (they are conspicuously designed like advertisements) are the only ones I really have trouble with, because I find myself making an exception, refreshing, discovering more code embedded within that code, allowing it, and refreshing again... It quickly becomes exhausting, and going this far to maintain security and privacy means you really value those things or are very patient. Or both.
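Under the hood, those clicks are saved as short plain-text rules. If I have the format right (source hostname, destination hostname, request type, action), the Twitter and cookie examples above would look something like this; the exact hostnames here are my assumptions, not copied from a real rule list:

```
twitter.com twimg.com * allow
example.com example.com cookie block
```

The first line lets twitter.com pull any content from twimg.com; the second blocks first-party cookies on a site where you don't want them collected.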
Taking it further:
The documentation on GitHub goes in depth, showing how to create your own rulesets. In essence, you go to the extension's options menu in your browser, click edit, then type in "example.com" "type" "allow/block", as shown in the picture. Or just copy-paste the sample rules provided in the link; your call. Adding exceptions graphically, as I explained earlier, is the equivalent of doing this.
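For a concrete sketch, here is what a small hand-written ruleset might look like, assuming the four-field `source destination type allow/block` format described in the documentation (all hostnames below are placeholders):

```
* * cookie block
example.com example.com * allow
example.com cdn.example.net script allow
stumbleupon.com * frame allow
```

Read top to bottom: block cookies everywhere, whitelist everything first-party on example.com, permit one third-party script host, and allow all frames on stumbleupon.com.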
There are "known bad domains" where less-than-reputable denizens of the web reside, or rather, where malware has been packaged into some trap. Thankfully, there are people who document this kind of stuff and maintain lists of these bad domains. Also in the options menu is the hosts file, which pulls in several of these lists and automatically blocks any request sent to those domains (on purpose or by accident).
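These blocklists are generally distributed in the familiar hosts-file format, where each bad domain is mapped to a non-routable address. A minimal illustrative fragment (the domain here is made up):

```
# Requests to this domain are sent nowhere
0.0.0.0 malware-trap.example
```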
When you load a webpage, you automatically submit some identifying information, like your browser type/version and operating system. Some people couldn't care less, but the fact of the matter is that these small pieces of information can be combined with other small pieces to reliably identify who you are. The developer cited this publication. uMatrix's answer is user-agent spoofing: it is off by default, but can be enabled with one click, and it works by substituting shuffled, common browser and operating system information for your real information. Be careful, though: while your privacy and security are certainly important, not all websites are run by bad guys. User information and advertisements are crucial for analyzing traffic and providing the revenue that funds the servers, so consider toggling off your incognito practices when visiting websites you support and trust.
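To see why those small pieces matter, here is a minimal Python sketch (not uMatrix code, just an illustration) of how a tracker could combine a few ordinary request attributes into one stable fingerprint:

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Join individually harmless attributes and hash them into a stable ID."""
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical visitors: identical except for screen resolution.
alice = {"user_agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/49.0",
         "language": "en-US", "screen": "1920x1080", "timezone": "UTC-5"}
bob = dict(alice, screen="1366x768")

# The same attributes always yield the same ID, so you can be tracked
# across visits; changing even one attribute breaks the match.
print(fingerprint(alice) == fingerprint(dict(alice)))  # True
print(fingerprint(alice) == fingerprint(bob))          # False
```

Spoofing a shuffled but common user agent, as uMatrix does, changes those inputs on each visit and so breaks the linkage between them.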