Today’s technological climate demands more web security than ever against cyber attacks. As a member of the MentorMate Quality Assurance team, I perform numerous tests and inquiries to ensure web applications aren’t vulnerable to malicious hacks or data breaches.
Some of those assurance and quality control tests include the following.

Web Crawlers

A web crawler is a bot that scans every square pixel of an app, clicking on every link it encounters. Crawlers can sometimes gain access to unsecured pages if proper precautions aren’t taken. We use web crawlers to make sure every page of your app has the necessary level of security. After all, if a web crawler can sneak its way into a secure page, so can a savvy user.
Sometimes web security problems are caused by the least-expected culprit: Google. As one of the most sophisticated web crawlers out there, Google scans pages for keywords and presents its results to the user. Without proper security, those results can display something you might not want a user to see, such as a list of contacts, credit card numbers, or other personal information.
As a QA, I first and foremost crawl apps manually to see if I can break into any protected information. Then I run a crawler tool to see if it catches something I missed. There are plenty of great free crawlers available, like Apache Nutch and Heritrix, that can easily be found online. Most vulnerabilities these tools identify are relatively easy fixes.
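The heart of any crawler is link extraction: pull every link out of a page, then visit each one and repeat. Here is a minimal sketch of that step using only Python's standard library (a real tool like Nutch or Heritrix adds scheduling, politeness, and scale on top of this):

```python
# Minimal link extraction, the first building block of a crawler.
# A QA script would feed each extracted link back into a fetch queue
# and flag any "protected" page that responds without authentication.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Feeding the extractor a page like `<a href="/admin">Admin</a>` yields `["/admin"]`; a crawl is just this function applied breadth-first until no new links appear.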
Common Weaknesses in Login Pages
A login page is usually the first line of defense against unauthorized access. Login pages are still vulnerable to attacks, though, so we need to ensure they can’t be bypassed or tricked in any way.
The easiest way to get past a login page is through the site’s metadata. Throughout production, developers occasionally add usernames or passwords as comments in the code. This keeps the credentials close at hand, saving some time and effort. However, if they forget to remove that information when the project is complete, it becomes accessible to anyone who views the site’s metadata. As part of our Quality Assurance process, we sift through metadata for sensitive information to secure the login page. Accessing a site’s metadata is very straightforward: simply open your browser’s developer tools, go to the Sources tab, and peruse the files.
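A first pass over that sift can even be automated. Below is a rough sketch that flags HTML and JavaScript-style block comments containing credential-like keywords; the regexes are illustrative assumptions and will miss things a manual review would catch:

```python
# Scan page source for comments that look like they leak credentials.
# The keyword list and comment patterns are illustrative, not exhaustive.
import re

# Match <!-- ... --> HTML comments and /* ... */ block comments.
COMMENT_RE = re.compile(r"<!--(.*?)-->|/\*(.*?)\*/", re.DOTALL)
# Credential-like keyword followed by ':' or '='.
SENSITIVE_RE = re.compile(r"(password|passwd|username|login|secret)\s*[:=]",
                          re.IGNORECASE)

def find_sensitive_comments(source):
    """Return every comment in the source that mentions a credential-like keyword."""
    hits = []
    for match in COMMENT_RE.finditer(source):
        comment = match.group(0)
        if SENSITIVE_RE.search(comment):
            hits.append(comment)
    return hits
```

Running this over `<!-- password: hunter2 -->` flags the comment; a clean `<!-- TODO -->` passes. It is a tripwire, not a substitute for reading the source yourself.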
I’ve personally only stopped a release due to metadata once, but that’s enough for me to search extra carefully for it every time.
Common Weaknesses in Web Application Frameworks
Another common way to break into a login page is by entering the default usernames that some frameworks use. These usernames are supposed to be changed or removed when the project is complete, but sometimes developers forget.
Speaking of frameworks, let’s discuss a known exploit some of them have.
Since it’s such a widely known platform, I’ll use WordPress for my example. Unlike some other platforms, WordPress doesn’t limit the number of login attempts a user can make. This makes brute force attacks, trying different username and password combinations until one works, feasible. These attacks are usually carried out with a script or a program, so making a user wait 10 minutes after every 5 failed attempts could add days to an attack, and an “Are you a human?” check could stop it altogether. Fortunately, these and other simple code-based solutions are quick and easy to add.
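That lockout idea fits in a few lines. Here is a minimal sketch in Python using the 5-attempt / 10-minute thresholds from above, with the clock injected so the logic can be tested; the class and method names are my own, not a WordPress or framework API:

```python
# Sketch of login throttling: lock an account for 10 minutes after
# 5 consecutive failed attempts. Thresholds match the example in the text.
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 10 * 60

class LoginThrottle:
    def __init__(self, clock=time.monotonic):
        self._clock = clock      # injectable for testing
        self._failures = {}      # username -> (failure count, time of last failure)

    def is_locked(self, username):
        count, last = self._failures.get(username, (0, 0.0))
        return count >= MAX_ATTEMPTS and self._clock() - last < LOCKOUT_SECONDS

    def record_failure(self, username):
        count, _ = self._failures.get(username, (0, 0.0))
        self._failures[username] = (count + 1, self._clock())

    def record_success(self, username):
        # A successful login clears the failure history.
        self._failures.pop(username, None)
```

A brute-force script that can normally try thousands of passwords per minute is reduced to 5 guesses per 10 minutes, which is what turns hours into days.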
SQL injection is another method of breaching a web application, often a data-driven one, by adding extra SQL when inputting text. For example, when entering a username, an input like “admin' OR '1'='1” turns the query’s condition into a statement that always evaluates to true.
This is a well-known exploit with detailed tutorials readily available online. That means anyone who wants to can access sensitive information if a web page is poorly secured. In my experience, I’ve seen this work twice. Both times were in different places on the same web application, but each resulted in full open access to the site.
SQL injection, though well known, is easy to prevent with prepared statements (parameterized queries), which most modern frameworks support. If your web app builds SQL queries by concatenating user input, whether in PHP, Java, or anything else, be sure to check for this vulnerability.
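To make the difference concrete, here is a minimal sketch using Python's built-in sqlite3 module; the users table and credentials are made up for the demo:

```python
# Demonstrates why parameterized queries stop the "always true" trick.
# Table and data are invented for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_unsafe(name, password):
    # String concatenation: attacker-controlled input is parsed as SQL.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized: input is bound as data, never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

payload = "' OR '1'='1"
# login_unsafe("admin", payload) returns the admin row despite a wrong
# password, because the WHERE clause becomes ... OR '1'='1'.
# login_safe("admin", payload) correctly returns no rows.
```

The unsafe version turns the condition into `password = '' OR '1'='1'`, which matches every row; the safe version treats the whole payload as a literal (and wrong) password.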
Unsecured HTTP Address
Moving on from frameworks, another method for cyber criminals to gain site access is by manually inputting a specific HTTP path.
This usually happens if access to a web app is hidden but not restricted. For example, let’s say you have admin rights to Google. When you navigate to the site, you see an extra button that says “current users.” It takes you to https://www.google.com/currentusers ( <- not an actual link), where you see a table showing everyone who’s currently using Google at that moment. If access to that table is hidden instead of restricted, someone without admin rights wouldn’t see the button, but they could append /currentusers to the Google web address and still access the table. All this might sound silly, but it happens quite often.
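The fix is to restrict on the server rather than hide in the UI: every request to a sensitive path must have its caller's role checked, regardless of whether a button was shown. A minimal sketch of such a server-side check, with hypothetical paths and roles:

```python
# Restrict, don't hide: the server decides access per request,
# independent of what the UI displays. Paths and roles are made up.
RESTRICTED_PATHS = {
    "/currentusers": {"admin"},   # only admins may view the table
}

def authorize(path, role):
    """Return True if the given role may access the path."""
    allowed_roles = RESTRICTED_PATHS.get(path)
    if allowed_roles is None:
        return True               # not a restricted path
    return role in allowed_roles
```

With this in place, typing the URL by hand gets a non-admin a 403, not the table; hiding the button is then merely cosmetic.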
I’ll give you another example, this time from my personal life. When I was in college, I had to access a test via a web link “https://myUniversity.com/tests/math1?page=1” ( <- not an actual link). Simply by analyzing the web address, I assessed that a.) the university hosts all tests online and b.) this is the first page of the first math test. So what did I do? I decided to try an experiment, of course. I changed the address to “https://myUniversity.com/tests/answerkeys” and found myself in a directory full of every answer key in the school’s system at that time. While they fixed the problem quickly, it was a substantial one to have in the first place.
Always Use TLS 1.2
The last four years have seen a massive uptick in the use of TLS 1.2. Many large companies, like Google and Apple, even enforce its use for communicating with their servers. As QAs, we account for vulnerabilities like POODLE and BEAST by making sure outdated versions of TLS and SSL are disabled.
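That check boils down to a version floor. Here is a sketch of the decision logic with the floor made explicit; the helper is illustrative, and a real audit would use a dedicated scanner such as sslyze or testssl.sh:

```python
# Decide whether a negotiated protocol meets the TLS 1.2 floor.
# The version sets are an assumption for illustration.
ACCEPTABLE = {"TLSv1.2", "TLSv1.3"}
OUTDATED = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.1"}  # targets of POODLE, BEAST, etc.

def protocol_ok(negotiated):
    """True if the negotiated protocol string meets the minimum version."""
    return negotiated in ACCEPTABLE

# On a live connection, the negotiated protocol can be read with the
# standard ssl module, roughly:
#   ctx = ssl.create_default_context()
#   with ctx.wrap_socket(sock, server_hostname=host) as s:
#       print(s.version())   # e.g. "TLSv1.3"
```

The important part is that the server refuses the outdated set entirely; a server that merely prefers TLS 1.2 but still accepts SSLv3 remains downgrade-attackable.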
HTTPS & Web Security Threats
HTTPS (Hypertext Transfer Protocol Secure) appears in the URL when a website is secured by a TLS certificate. The “S” stands for secure, meaning the requests sent to the server — as well as the responses it sends back — are encrypted. Only the server can see what you’re sending, and only your computer can read the server’s responses.
Without proper encryption, anyone who wants to can intercept a request to your server. They can look at it, steal info from it (or inject malicious info into it), and send it on its merry way to the server, with you none the wiser. The same is possible with responses from the server back to you.
A cyber criminal can accomplish all this by using an exploit called KRACK (Key Reinstallation Attacks). Using KRACK, they gain limited access to your Wi-Fi network. They can then strip the HTTPS from your requests using a script, or watch every request you send using a tool like Wireshark. These exploits expose usernames, passwords, and security questions in real time as you send them to the server.
Since the reveal of KRACK last November, many web apps have made HTTPS a requirement. Additionally, almost all big operating system vendors have released patches, and browsers warn you when you’re not using HTTPS. These measures don’t completely eradicate the issue, though. You still need to be aware and take appropriate action if a web browser warns you that you’re not using an HTTPS connection.
KRACK exploits a weakness in the WPA2 protocol, which virtually every Wi-Fi network has used since 2006. It’s actually quite remarkable that the exploit wasn’t discovered until last November, given that it could have been used at any point over the past decade.
As new information and exploits become available to hackers, extra precautions must be taken to ensure a website’s security. For both developers and QAs, it’s critical to stay one step ahead and to share these precautions with each other as we learn them.