Brave: Because It’s The Best Middle Available

I’m no different than a large portion of web users who are looking to read content and stay safe: I use one or more ad blockers. Is this stealing website content? No. I’d be happy to be served ads if they didn’t have such a bad track record in terms of security and privacy.


Many people are still ignorant of what information they are giving up to online advertisers. A 2013 post by VentureBeat noted:

Advertisers and the tracking companies they employ are able to gather all sorts of information about you, such as the websites you frequent and what kind of products you’re interested in — and even some even scarier stuff like political views, health problems, and personal finances.

Over time the picture you provide to these private companies becomes clearer and clearer. Of course, you may not care if companies know you like chocolate chip cookies, but you may not want them to know more personal things or, worse, be able to extrapolate things about you that you haven’t even knowingly shared … not to mention government use for predictive modeling.


Privacy is important, but this is the bigger problem in my opinion. Malvertising has proven a successful vector for infecting users’ machines with malware. If you are interested in a timeline of large malvertising events, GeoEdge has a nice post. A quick summary of heavy hitters who have inadvertently exposed their readers to threats includes The New York Times, eBay, the LA Times, Spotify, the Huffington Post, MSN, the BBC, AOL and the NFL. Of course, there are many more, but that list should be enough to get anyone’s attention.


So what are valid options to protect personal privacy and security on the web?

Ignore Internet Content

This is the safest option but it’s very unlikely to happen. Everyone loses with this as content providers get nothing from their ads and readers don’t get any news.

Go To “Safe” Sources

Another good option, but about as unlikely as the first. It takes work to find the sites that are not tracking users or injecting third-party advertising. It also assumes safe sources are always safe, but the web is a constantly evolving place and a site may be totally different between two visits.

Run an Ad Blocker

This is the most common solution today. It blocks as many ads and third-party cookies as it can and generally keeps users safe. It’s not a perfect solution, as content providers miss out on ad clicks/impressions, but the reader gets a much safer (and faster) experience.

Some sites actively block ad blockers. When I come across sites that nicely ask me to unblock their ads I head over to Google and find another source for the same story. I don’t think I’m alone in doing that.

Use (something like) Brave That Shares Ad Revenue

This is a newer thing and the actual reason I wanted to write this post. Brave seems to be a good middle ground which attempts to keep users safe while still providing money back to content providers. In some ways Brave is acting like an arbitrator to let everyone get something out of the deal. Users get content, creators get money. Yes, Brave (and users) get a cut too, but that’s not so bad (though, as a user, I’d be fine with not getting a cut at all).

Here is the flow of revenue from Brave:



Unfortunately, the NAA didn’t quite grasp the above idea and has called on Brave to stop. Surprise: at least one of the companies that signed the letter has put users at risk via malvertising on multiple occasions.

Brave has posted a rebuttal in an attempt to help the NAA understand the business model and why it’s not illegal. Hopefully logic will triumph over emotion and posturing.

My Hope

My hope is that users will jump on the idea that Brave provides (whether they use Brave or not) and that the NAA will understand that it is a business model where everyone wins, even their readers.


Adding SSL to MyFitnessPal with HTTPS Everywhere

MyFitnessPal is a simple, social site which helps track food, water and exercise. The web application touts over 1 million foods and, if what you are eating is not listed, you can enter your own nutritional facts. Like many popular social applications MyFitnessPal uses SSL and, like many popular apps, it moves the user AWAY from SSL after logging in. This means everything after login is being sent over the Internet in the clear.


There are a few possible reasons for this. The simplest answer is that they don’t realize that sending information over the Internet without any encryption is a problem. After all, it’s just food data, right? But it’s not. It’s also the authentication token (in this case a cookie) which goes over the wire unencrypted.

They may also turn it off to decrease load. I’ve heard this argument before. It is true that SSL is ‘more expensive’ on the servers than plain HTTP, but in the age of cloud computing, agile development and devops, SSL should be an easy default.

No matter what the actual reason is, please don’t take this as a slight to MyFitnessPal. Many sites have this issue. If they didn’t, tools like HTTPS Everywhere wouldn’t exist to try and protect user data in transit.


OWASP explains what can happen as well as how to verify your safety. As far as I know the best fix is to install a rule in HTTPS Everywhere to handle this site. Unfortunately most non-technical people may not be able to easily import the following, but this is the rule that I came up with after noticing the lack of SSL post-login:

<ruleset name="MyFitnessPal">
  <target host="myfitnesspal.com"/>
  <target host="www.myfitnesspal.com"/>
  <target host="api.myfitnesspal.com"/>
  <securecookie host="^www\.myfitnesspal\.com$" name=".*"/>
  <rule from="^http://myfitnesspal\.com/" to="https://www.myfitnesspal.com/"/>
  <rule from="^http://(www|api)\.myfitnesspal\.com/" to="https://$1.myfitnesspal.com/"/>
</ruleset>

Be aware though that this will NOT protect any data being transferred by the mobile applications. The real fix has to come from MyFitnessPal themselves. It looks like at least a few users have asked for the enhancement.

But Remember

Many sites have this issue. This issue should not stop you from using an application but do make an informed decision as to what data to pass along and what applications to link with. When possible use things such as HTTPS Everywhere. At the very least pay attention to your browser’s URL bar and know when your data is being sent in the clear.

Basic Web Security For The Average User

The Web has been around long enough that Web applications are a part of most everyone’s daily life. Even when a user is on a mobile device they could be interacting with Web services. Sadly there are still many applications and services out there which lack what should be the minimum security. Luckily there are some workarounds users can employ to try and protect themselves when applications have less than stellar security practices.

Secure Sockets Layer (SSL)

What is it?

SSL keeps the Web traffic between you and the originating server encrypted, meaning it’s harder for someone else to see the data while it is in transit. If SSL is not in use the Web traffic is viewable by anyone on the wire.

Can I increase my safety?

The best defense is diligence. Whenever you are entering data, logging in, or doing anything after you have logged in, it’s best to make sure you see ‘https:’ at the start of the URL.

For Firefox and Chrome users the EFF provides an extension called HTTPS Everywhere which attempts to keep your browsing over SSL where possible.

Cookie Flags

What is it?

Cookies are utilized to help identify and store small bits of information on the client side (your browser). Over the years a few flags have been added to cookies that help keep them from getting to the wrong people as easily. These flags are Secure and HttpOnly. Secure tells the browser that it should only pass the cookie to the server if the connection is using SSL. If it isn’t using SSL the cookie does not get sent with the request. HttpOnly keeps the cookie out of the hands of JavaScript, only giving it back to the originating server directly over HTTP/HTTPS requests.
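These flags are just attributes on the Set-Cookie header the server sends. A quick illustration using Python’s standard library (the cookie name and value here are made up for the example):

```python
from http.cookies import SimpleCookie

# Build a cookie the way a server would before emitting a Set-Cookie header.
cookie = SimpleCookie()
cookie["session"] = "abc123"           # example session token
cookie["session"]["secure"] = True     # only send over SSL/TLS connections
cookie["session"]["httponly"] = True   # keep it out of JavaScript's reach

# The value a server would place in its Set-Cookie header:
print(cookie["session"].OutputString())  # session=abc123; HttpOnly; Secure
```

A browser honoring these flags will refuse to attach the cookie to plain-HTTP requests and will hide it from scripts on the page.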

Can I increase my safety?

Right now I’m not aware of any extensions which force cookies to be HttpOnly on the browser side. If you are aware of a good mitigation please comment.

Cross Site Scripting (XSS)

What is it?

OWASP defines XSS like so:

Cross-Site Scripting attacks are a type of injection problem, in which malicious scripts are injected into otherwise benign and trusted web sites. Cross-site scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating or encoding it.
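The “encoding” OWASP mentions is the core server-side fix: untrusted input is escaped before being placed into HTML, so an injected script renders as harmless text instead of executing. A minimal sketch using Python’s standard library (the input string is an illustrative payload):

```python
import html

# Input a malicious user might submit through a comment form (illustrative).
user_input = '<script>alert("xss")</script>'

# Escaping turns the markup characters into HTML entities before output,
# so the browser displays the text rather than running it as a script.
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```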


Can I increase my safety?

By installing NoScript or something similar and utilizing Google Safe Browsing you can lessen the chance of successful XSS attacks, but it is far from a silver bullet. There is work being done to attempt to make XSS attacks much harder but it’s not currently in widespread use.

Cross Site Request Forgery

What is it?

Cross-site request forgeries happen when a page in your browser makes a request to a different site to attempt to do something on behalf of the user. There are many cases where this behavior is used by developers as functionality, but that doesn’t make it any less dangerous.
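On the application side, the usual defense is a per-session token that a third-party page has no way of knowing; the server only honors state-changing requests that include it. A hypothetical sketch (the secret key and session ID are made up for illustration):

```python
import hashlib
import hmac
import secrets

# Application-wide secret key (illustrative; a real app would persist this).
SECRET_KEY = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    """Derive a token tied to the user's session."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, submitted: str) -> bool:
    """Reject any request whose token doesn't match the session's."""
    return hmac.compare_digest(csrf_token(session_id), submitted)

token = csrf_token("session-42")
print(is_valid("session-42", token))     # True
print(is_valid("session-42", "forged"))  # False
```

A forged request from another site fails the check because the attacker can’t compute the token without the server’s secret.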

Can I increase my safety?

Wikipedia has a good page on the subject:

Browser extensions such as RequestPolicy (for Mozilla Firefox) can prevent CSRF by providing a default-deny policy for cross-site requests. However, this can significantly interfere with the normal operation of many websites. The CsFire extension (also for Firefox) can mitigate the impact of CSRF with less impact on normal browsing, by removing authentication information from cross-site requests. The NoScript extension mitigates CSRF threats by distinguishing trusted from untrusted sites, and removing payloads from POST requests sent by untrusted sites to trusted ones.


Password Hashing

What is it?

When an application stores passwords it should do so in a way that it doesn’t actually know the password you’ve entered. It may sound odd, but it’s pretty trivial for application developers to do. Yet sometimes they forget to implement this hashing (or the application is ancient and was never updated to do this).
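Done right, the server keeps only a salted, slow hash and can still check a login attempt against it. A sketch of what that looks like using Python’s standard library PBKDF2 (the iteration count is illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, digest); the plain password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the attempt and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("guess", salt, digest))                         # False
```

If the stored digests leak, an attacker still has to grind through the slow derivation for every guess rather than reading passwords directly.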

Can I increase my safety?

The best defense a user has against disclosure is to use a different password for each site. Understandably this can sound scary: people feel like they have to remember too many passwords already. Thankfully there are applications which can generate, store and remember these passwords and let you remember just one. Here are a few popular choices:

External References

What is it?

It’s very common for Web applications to include JavaScript, images or CSS from other locations. The reasons for this include increasing performance by using the browser’s cache, utilizing a content delivery network, including ads/analytics or, in some cases, laziness. The problem for the user comes when the remote content becomes unsafe or acts in ways the user is not OK with.

It can also cause problems for the application including the content. For more information see my previous post on trusting others to host your content.

Can I increase my safety?

Because, in most cases, the external references are required for the application to work, it is quite hard to fully protect yourself in the browser, but there are still some things that can help.

One more thing…

While not specifically related to the bare minimum of remote web application security, it’s important to keep your browser and extensions/plugins updated with the latest patches and supported versions. This can help protect against sites which are hosting browser-based exploits for one reason or another (of course, don’t purposely go visit such a site!). For plugins, Mozilla provides a great plugin check page which seems to work for Chrome to some degree as well. Go check your versions!

Trusting Others To Host Content In Your Web Apps

One of the things I tend to warn other developers about is the inclusion of third-party content in their applications. No, I’m not talking about pulling in serialized data from trusted sources. I’m talking about simply adding “stuff on the site”. This isn’t anything groundbreaking. In fact, it’s pretty obvious stuff. The news that Google was identifying The Verge as a malware host is a pretty good and well publicized example of what can go wrong. Keep in mind I’m going off public information.

If you are unaware of The Verge here is how they define who they are:

The Verge was founded in 2011 in partnership with Vox Media, and covers the intersection of technology, science, art, and culture. Its mission is to offer in-depth reporting and long-form feature stories, breaking news coverage, product information, and community content in a unified and cohesive manner. The site is powered by Vox Media’s Chorus platform, a modern media stack built for web-native news in the 21st century.



When most general users go to a website they probably don’t realize that they may be making requests out to many locations. In the case of The Verge’s main page, requests go out to the following domains for images and JavaScript.


The more technical the reader is the more likely they are to understand that a visit to a website is probably going to include many requests to many different sites. The above list of domains is pretty normal. Facebook, Twitter and Google for social networking. Ad companies to show ads. Tracking companies for analytics. And the use of a CDN (content delivery network) is a pretty common practice for web applications which encounter high load. Nothing to see here, right?

Maybe a little

By including these domains there is an amount of trust given. If any of those third-party sites encounter a security issue then the site doing the including could be affected. In this case it was sbnation.com’s reputation which was causing an issue. At one point Google’s Safe Browsing system identified malware being hosted on sbnation.com, which means it is going to flag requests to the domain as a possible problem. In fact, here is what Google Safe Browsing had for sbnation.com:

What happened when Google visited this site?
Of the 8071 pages we tested on the site over the past 90 days, 3 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2012-09-17, and the last time suspicious content was found on this site was on 2012-09-16.
Malicious software includes 8 trojan(s), 2 exploit(s). Successful infection resulted in an average of 1 new process(es) on the target machine.

Malicious software is hosted on 1 domain(s), including 63.143.*.0/.

1 domain(s) appear to be functioning as intermediaries for distributing malware to visitors of this site, including u******.net/.

This site was hosted on 26 network(s) including AS29791 (VOXEL), AS36089 (OPENX), AS11855 (INTERNAP).

Has this site acted as an intermediary resulting in further distribution of malware?
Over the past 90 days, sbnation.com appeared to function as an intermediary for the infection of 3 site(s) including ml****.com/, fa******.com/, f*******

(Source – Asterisks added)

Because sbnation.com had malware associated with it recently, The Verge, which was using sbnation as a CDN, had its threat level raised. It’s even pretty obvious via the red warning that Chrome was giving users, telling them that theverge.com contained content from sbnation.com, a site known to distribute malware. If you are interested in what the errors looked like to users, check out the thread on The Verge’s forums.

Was the situation dangerous?

I have a feeling in the above case it probably wasn’t. The CDN is probably used by a lot of different sites and the content in use is likely uploaded by the first party, but, to be honest, I didn’t look into it as it’s an example of what can go wrong and not the meat of the post. But it still isn’t the best position to be in. If users want to be protected then having a system like Safe Browsing warn them that a third-party host in use by the first party has been noted as distributing malware is probably a fair result.

Is including remote content really about trust?

Yeah, it kind of is. Let’s say there is a web application that lets you easily get feedback on your site. All you need to do is drop in a small bit of JavaScript referencing the service and you will be set! You’re all done and can have a drink to pre-celebrate the great feedback you’ll get. But what if the developers introduce a terrible bug in the JavaScript you are including or, worse, something happens to the server that is hosting it? By adding in content from the third party you are trusting that their security level matches or surpasses your own. You are also trusting that any third parties they are using meet or exceed your security level as well. If they do not then your users/visitors and brand (if applicable) could take a hit.

“Security? That’s the OS’s/Network’s Job!”

I spend a good amount of my time doing software development. I’m one of those guys that has a bad habit of starting projects, getting half or three-fourths of the way through and then coming up with another project to do (leaving the original out in the cold). Needless to say I end up playing with a lot of tools and libraries to help with projects, but I’ve started to notice a pattern: the assumption that behind the firewall everyone is friends.

In a more recent project it became apparent that a queuing system of some kind was going to be needed. Instead of running out and picking the most popular flavor of the month I figured the best move would be to give a few different queuing systems a run and see how they worked out. In general I was impressed with their abilities but found the security greatly lacking in a number of them.


Please be aware I’m not trying to discount any of these applications. The two I tried directly I really liked from a development point of view.


One of the earlier ones I checked out was Redis. It was blazing fast but the security model is interesting.

Redis is designed to be accessed by trusted clients inside trusted environments. This means that usually it is not a good idea to expose the Redis instance directly to the internet or, in general, to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket.


To make matters even more interesting it has support for a single password passed plainly over the wire. Granted, it’s possible to use an SSL proxy as the guide points out, but with one user non-repudiation could be a serious problem (especially if logs go back to NAT’d addresses). In effect the security model of Redis seems to require a single-tenant, well logged (at the network, host and app level) and heavily ACL’d environment. With cloud hosting I’m not so sure one could ensure this is the case at all times. Granted, if it’s a single developer running his own infrastructure or a very small company/group/team then it is possible that the model would work well enough. Honestly, I couldn’t get over the fact I’d have to tell friends who wanted to play with the project that they’d have to make a special environment before installing.
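To see why that single password worries me, it helps to look at what a Redis client actually sends. AUTH travels in RESP, Redis’s plain-text protocol, so without an SSL wrapper anyone on the wire sees the secret verbatim. A small sketch encoding the command the way a client would (no server required; the password is obviously made up):

```python
def resp_command(*parts: str) -> bytes:
    """Encode a command as a RESP array, the way Redis clients send it."""
    out = ["*%d\r\n" % len(parts)]
    for part in parts:
        out.append("$%d\r\n%s\r\n" % (len(part), part))
    return "".join(out).encode()

# What crosses the network when a client authenticates:
wire = resp_command("AUTH", "shared-secret")
print(wire)  # b'*2\r\n$4\r\nAUTH\r\n$13\r\nshared-secret\r\n'
```

Anyone sniffing that TCP session gets the one credential that guards the whole instance.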


I didn’t end up trying beanstalk but did notice it had similar pitfalls. As Kurt Seifried points out in his blog:

The major downside to beanstalkd is that it doesn’t provide any encryption of network traffic or authentication capabilities, so any client with access to a running instance of a beanstalkd server will have access to all the queues in that running instance. This can be gotten around by wrapping beanstalkd in SSL (using stunnel or a similar service) to secure communications and limiting access to beanstalkd queues based on either IP address or by requiring SSL client authentication.


So again, if you want to use the service you must either set up extra hoops and/or have an incredibly locked-down infrastructure.


ZeroMQ is really cool. But you end up with a similar problem of network ACLs providing all of your protection unless you write your own authentication and authorization mechanisms.

What security features does ØMQ support?

None at the moment. ØMQ does not deal with security by design but concentrates on getting your bytes over the network as fast as possible. Solutions exist for security at the transport layer which are well understood and have had many man-years of development invested in them, such as IPsec or OpenVPN.


Granted, zmq is a bit lower level and used as a building block instead of a solution, so it is understandable why some things are pushed back upon the developer to implement as needed.

But Who Cares?

It’s more about being aware.

  • Can anyone promise that network ACLs won’t be modified to enable a shiny new application?
  • Can you be sure that the other side of the SSL connection will remain safe and trustworthy?
  • Is any data making its way through which can have an effect on processes inside the firewall guaranteed safe (example)?
  • If the hosts are multi-tenant or in the cloud, are you sure everyone who has access to the VMs or networks is trustworthy?

You and/or the developers of these apps wouldn’t have come up with some kind of security solution if it were OK for any random Joe to play with the service. If someone is able to interact with a service which is “soft on the inside” then that service is likely to be an early target.

Simple Examples

For example, let’s imagine an attacker gains access to the service because he is able to take control of an approved host. If the service on the other side is Redis then the attacker could sit and gain information painlessly before copying work from that point forward. If it is a zmq port then an attacker could attach another process to it and get either a copy of everything (SUB with an empty subscription, "") or a subset of the data (PULL). Beanstalk probably has similar abilities. The security on the other side of the connection, whether inside or outside the firewall, ends up being as important as the security on the inside, as the level of access to the service is more or less the same: all or nothing.

While using an SSL tunnel and only allowing specific hosts may constitute defense in depth on paper, it doesn’t seem to be enough. Maybe I’m too paranoid, but if there were authentication and basic authorization in or around the service, an intruder would need to gain further information or perform more attacks to gain access.