Test and QA Summer Social (Cardiff, Wales)


Thanks to everyone who came to our Summer Social last night. It was fun to jabber on about all the exciting new developments happening in test at the moment and to see what cool stuff you’ve all discovered. Thanks again to Yolk Recruitment for hosting the event and providing the beer and pizza!

I’ve tried to capture everything we talked about (a lot!) and provide the links to the tools and training resources that came up. Please let me know if I missed anything!

Test Automation

Selenium:

If you can’t code yet:

  • Selenium IDE: Back from the dead and a good start for beginners.

  • Robot Framework: powerful, but buggy, with little-to-no support.

  • RPAs: Pretty, but limited, with Selenium under the hood.

Throughout the Stack:

  • ToDo SubSecond: A brilliant example of functional testing in milliseconds (by eliminating I/O).

  • ToDoMVC: The same ‘ToDo’ app built in every possible web frontend framework - even pure JavaScript. Perfect for comparing the different frameworks - and fun to automate each one and explore their quirks.

  • Property Cross: And the same idea for cross-platform development (also great for practicing cross-platform test automation).

Cross Platform Frameworks:

  • Appium is probably the best one out there. Exposes a single ‘selenium-like’ API, so you can run a single test against mobile, web and native apps.

Other Automation Frameworks:

Mutation Testing

Test Coverage is dead, long live Mutation Testing!

Visual Testing

  • Applitools is pretty much the market leader with their ‘AI’ product.

  • Jest Snapshots: Not ‘visual’ testing as such, but the snapshots stop developers accidentally changing the DOM and breaking the IDs, text or positioning of elements. Also available in Karma.

Pen Testing

Consumer-Driven Contract Testing

  • Spring Cloud Contract (Docker-based so it works with any language; looks great, though we haven’t tried it yet).

  • PACT contract testing (the original contract test framework).

  • It's even possible to do contract testing with Postman.

  • Dredd - test your API documentation.

  • Judge Dredd - convert your API docs to contract tests (supported by Hargreaves Lansdown Tech - though it seems to be for microservices hosted on Kubernetes only).

Performance Testing tools:

  • K6: performance testing from within your project (scripts are written in JavaScript). Gives performance testing power to the devs and a natural fit for CI.

  • Locust: performance testing in Python code. Suggested as a good alternative if JMeter struggles with your API.

  • JMeter Maven plugin

    • How to personalise Reports with JMeter plugin.

  • A detailed comparison of all the OSS Performance Testing Tools out there. Good to see that our favourite (K6) comes out well!

    NOTE: Don’t be put off by the fact that a lot of the most up-and-coming test tools use JavaScript as their scripting language. We believe that JavaScript will become essential to Technical Testers in the future so we highly recommend learning it. Here’s our mini-tutorial in JavaScript to get you started.

Test Management tools

  • XRay: Great tool if you use Jira, also handles cucumber feature files.

Smart Waits, Selenium Grid and What's Coming Up in Selenium 4.0

Every year, Test Automation Engineers from around the globe research the latest tools and techniques to make their Test Automation Frameworks more stable, faster, and easier to use and maintain. This is vital to ensure continued widespread adoption of their framework within the company. Bloated, out-of-date frameworks soon get left behind.

 

In this article we'll take a look at some of the ways you can update your framework for 2019 and how to be prepared for 2020.

 

Tip #1: Dockerize your Selenium Grid

 

Why?

Selenium Grid is notoriously hard to set up, unstable, and difficult to deploy or version control on a CI pipeline. A much easier, more stable and maintainable approach is to use the pre-built Selenium Docker images.

Note: The one downside of this method is that IE (Internet Explorer) is not supported, as it's not possible to containerize the Windows operating system.

 

Mini-Tutorial

Getting Set Up

To get up and running, first you need to have Docker and Docker Compose installed on your machine. If you're running Windows 10 or a Mac, then they will both be installed through the Docker Desktop.

 

Starting Your Grid

The official Selenium repository on Docker Hub contains pre-built docker images for your Selenium Hub and Firefox and Chrome Nodes.

The easiest way to use these in a local Selenium Grid is to construct a Docker Compose file within the root directory of your project. Name the file docker-compose.yml to keep things simple.

I've included an example below which creates the following Grid:

  • A single Selenium Hub
  • One Chrome node
  • One Firefox node.

 

docker-compose.yml

version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.141.59-neon
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.141.59-neon
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox:3.141.59-neon
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444

 

The Docker Compose file describes the set-up of your Grid. For more information about creating Docker Compose files, please see the official documentation.

 

To start your Grid, simply use any terminal window (a powershell or cmd window in Windows) to run the following command from the root directory of your project:

 

docker-compose up

 

Connecting to the Grid

You can connect to your Selenium Grid in exactly the same way as you normally do, as the Hub is listening on port 4444 of your local machine. Here's an example where we set up our Driver to use our Chrome Node.

 

Driver.java

import java.net.URL;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

protected static RemoteWebDriver browser;

ChromeOptions chromeOptions = new ChromeOptions();
DesiredCapabilities cap = new DesiredCapabilities();
cap.setCapability(ChromeOptions.CAPABILITY, chromeOptions);
cap.setBrowserName("chrome");

// Point the driver at the Hub, which is listening on port 4444 of your local machine
// (new URL(...) throws MalformedURLException, so declare or catch it in your set-up method)
browser = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), cap);

 

You can then use the TestNG library to run your tests on multiple nodes in parallel as usual.

It's worth noting that it is possible to have multiple browsers running on each node. However this is discouraged, and using one browser per node is considered best practice for optimum performance.

 

Additional Tips and Tricks

If you want to see what's happening on the browser so you can debug your tests, then it's worth having a debug version of your docker-compose.yml file that downloads the debug browser nodes. These contain a VNC server so you can watch the browser as the test runs.

It's also possible to run the browsers headlessly for increased speed (the usual way) and Selenium also provides base versions of the images so you can build your own images if you need additional software installed.

To create a stable version of the Grid for your CI pipeline, it's also possible to deploy your Grid onto Kubernetes or Swarm. This ensures that any containers are quickly restored or replaced if they do fail.

 

Tip #2: Smart Waits

Why?

As any Test Automation Engineer knows, Waits are crucial to the stability of your Test Automation Framework. They can also speed up your tests by rendering any sleeps or pauses redundant, and help you overcome slow network and cross-browser issues. Below are some tips to make your Waits even more resilient.

 

Mini-Tutorial #1: Be Specific with your Waits

The ExpectedConditions class has grown over time and now encompasses almost every situation imaginable. While ExpectedConditions.presenceOfElementLocated(locator) is often enough, it's best practice to use the methods within the ExpectedConditions class to cover every user action, by embedding them into your Actions.java class. This will bullet-proof your tests against most cross-browser or slow-website issues.

 

For example if clicking on a link results in a new tab opening, then use ExpectedConditions.numberOfWindowsToBe(2). This will ensure that the tab is there before trying to switch to it.
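
As a rough sketch (the link locator below is hypothetical, following the locators convention used in the other examples in this post), that might look like this:

String originalWindow = driver.getWindowHandle();
driver.findElement(locators.NEW_TAB_LINK).click();

// wait for the second window before trying to switch to it
new WebDriverWait(driver, 30)
    .until(ExpectedConditions.numberOfWindowsToBe(2));

// switch to whichever window handle we haven't seen before
for (String handle : driver.getWindowHandles()) {
    if (!handle.equals(originalWindow)) {
        driver.switchTo().window(handle);
    }
}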

 

You can also use a wait to ensure that you capture all the elements present on the page when using findElements. This can be especially useful if it takes time for a search page to return its results. For example, the line:

 

List<WebElement> results = driver.findElements(locators.RESULTS);

 

This may return an empty List if your search results haven't loaded yet. Instead, it's better to use the numberOfElementsToBeMoreThan expected condition to wait for the number of results to be more than zero. For example:

 

WebElement searchButton = driver.findElement(locators.SEARCH_BUTTON);
searchButton.click(); 

new WebDriverWait(driver, 30)    
    .until(ExpectedConditions
        .numberOfElementsToBeMoreThan(locators.RESULTS, 0)); 

List<WebElement> results = driver.findElements(locators.RESULTS);
results.get(0).click();

 

Now your findElements command will only run after the search results have been returned.

This wait is also useful for finding a single element when you're dealing with a frontend that doesn't play nicely with Selenium (e.g. Angular websites). Creating a method like this will protect your tests, making them much more stable.

 

// timeout in seconds, referenced by the failure message below
protected static final long WAIT = 30;

protected static WebElement waitForElement(By locator){
    try {
        new WebDriverWait(browser, WAIT)
            .until(ExpectedConditions
                .numberOfElementsToBeMoreThan(locator, 0));
    } catch (TimeoutException e){
        e.printStackTrace();
        Assert.fail("Timeout: The element couldn't be found in " + WAIT + " seconds!");
    } catch (Exception e){
        e.printStackTrace();
        Assert.fail("Something went wrong!");
    }
    return browser.findElement(locator);
}

 

It's even possible to wait for elements to no longer be visible. This is especially useful if you're waiting for a pop-up to disappear after you've clicked on the OK or Save button, before proceeding with your test.

 

WebElement okButton = driver.findElement(locators.OK_BUTTON);
okButton.click();

new WebDriverWait(driver, 30)
    .until(
        ExpectedConditions
            .invisibilityOfElementLocated(locators.POPUP_TITLE)
);

 

All the methods described above and more are listed in the official documentation. It's well worth spending ten minutes reading through all the possibilities and improving the stability of your framework.

 

Mini-Tutorial #2: Logical Operators in Waits

A good way to build resilience into your Waits is by using Logical Operators. For example, if you wanted to check that an element has been located AND that it is clickable, you would use the following code (please note that these examples return a boolean value):

 

wait.until(ExpectedConditions.and(               
    ExpectedConditions.presenceOfElementLocated(locator),                    
    ExpectedConditions.elementToBeClickable(locator)
    )
);

 

The OR operator would be appropriate if you weren't sure whether or not the title of the page might change. Then you can include a check of the URL if the first condition fails, to confirm that you're definitely on the right page.

 

wait.until(ExpectedConditions.or(                
    ExpectedConditions.titleIs(expectedTitle),                 
    ExpectedConditions.urlToBe(expectedUrl)
    )
);

 

Or if you wanted to ensure that a checkbox is no longer enabled after an action is performed on the page, then the NOT operator is appropriate.

 

wait.until(ExpectedConditions.not(
    ExpectedConditions.elementToBeClickable(locator)
    )
);

 

Using operators can make your waits more resilient and result in tests that are less brittle.

 

Tip #3: Simulating Network Conditions

Why?

Running your Web App on localhost or on a local network can give a false impression as to its performance when running in the wild. The ability to throttle various upload and download speeds will give you a better representation as to how your application will run over the internet, where timeouts can cause actions to fail.

 

Mini-Tutorial

The following code will open the TopTal home page using different download and upload speeds. First we'll store our speeds in a TestNG data provider using the following code:

 

import org.testng.annotations.DataProvider;

public class ExcelDataProvider {

    @DataProvider(name = "networkConditions")
    public static Object[][] networkConditions() {
        return new Object[][] {
            // Download speed and upload speed in kb/s, latency in ms
            // (matching the order of the test method's parameters).
            { 5000, 5000, 5 },
            { 10000, 7000, 5 },
            { 15000, 9000, 5 },
            { 20000, 10000, 5 },
            // a zero entry means 'don't throttle the network'
            { 0, 0, 0 },
        };
    }
}

 

Note: The upload and download throttling is in kb/s and the latency is in ms.

Then we can use this data to run our test under different network conditions. Within the test, the CommandExecutor will execute the command in the browser's current session. This in turn will activate the necessary settings in Chrome's Developer Tools functionality to simulate our slow network. The code within the if statement can be included in a @BeforeClass method when running a suite of tests.

 

import com.google.common.collect.ImmutableMap;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.openqa.selenium.remote.Command;
import org.openqa.selenium.remote.CommandExecutor;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.remote.Response;
import org.testng.annotations.Test;

public class TestClass {

    // the RemoteWebDriver (e.g. ChromeDriver) created in your test set-up
    private static RemoteWebDriver driver;

    // load our data provider
    @Test(dataProvider = "networkConditions", dataProviderClass = ExcelDataProvider.class)
    public void test(int download, int upload, int latency) throws IOException {

        // only throttle if the network conditions are set
        if (download > 0 && upload > 0) {
            CommandExecutor executor = driver.getCommandExecutor();

            // create a map of the required network conditions
            Map<String, Object> map = new HashMap<>();
            // you can even test 'offline' behaviour
            map.put("offline", false);
            map.put("latency", latency);
            map.put("download_throughput", download);
            map.put("upload_throughput", upload);

            // execute the command against the browser's current session
            Response response = executor.execute(
                new Command(driver.getSessionId(),
                    "setNetworkConditions",
                    ImmutableMap.of("network_conditions", ImmutableMap.copyOf(map))));
        }

        // Open the website
        driver.get("https://www.toptal.com/");

        // You can then check that elements are loaded etc.
        // Don't forget to use waits!
    }
}

 

Bonus Tip: How to Manage your Cookies

Browser cookies can cause different behaviours in your application, depending on whether or not they have been saved from a previous session (e.g. the application might load with a user already logged in). It's good practice to clear out your cookies before each test run to ensure that they don't cause problems.

The code below allows you to delete all your cookies:

 

driver.manage().deleteAllCookies();

 

You can also delete a cookie by name:

 

driver.manage().deleteCookieNamed("CookieName");

 

Or get the contents of a cookie:

 

String myCookie = driver.manage().getCookieNamed("CookieName").getValue();

 

Or get all the cookies:

 

List<Cookie> cookies = driver.manage().getCookies();

 

Test Automation in 2020: Looking to the Future

Selenium 4 will be released over the next few months. It's still under development, but an alpha version has already been released, so it's worth taking a look at what improvements it will offer.

Note: You can keep track of their progress by looking at the roadmap.

 

W3C WebDriver Standardization

Selenium will no longer need to communicate with the browser through the JSON Wire Protocol; instead, automated tests will communicate directly with the browser. This should address the famously flaky nature of Selenium tests, including protecting against browser upgrades. Hopefully test speed will also increase.

 

A Simpler Selenium Grid

The Selenium Grid will be more stable and easier to set up and manage in Selenium 4. Users will no longer need to set up and start hubs and nodes separately, as the grid will act as a combined node and hub. Plus there will be better support for Docker, parallel testing will be included natively, and it will provide a more informative UI. Request tracing with Hooks will also help you to debug your grid.

 

Documentation

The Selenium documentation will be getting a much needed overhaul, having not been updated since the release of Selenium 2.0.

 

Changes to the API

Support for the Opera and PhantomJS browsers will be removed. Headless running can be performed with Chrome or Firefox instead of PhantomJS, and since Opera is built on Chromium, testing against Chromium is seen as sufficient for that browser.

 

WebElement.getSize() and WebElement.getLocation() will be replaced with a single method, WebElement.getRect(). However, as these are often used to create screenshots of a single element, it's worth knowing that Selenium 4 will also include an API command to capture a screenshot of an element.

 

For the WebDriver Window, the getPosition and getSize methods will be replaced by the getRect method, and the setPosition and setSize methods will be replaced by the setRect method. fullscreen and minimize methods will be available, so these actions can be performed within your tests.

 

Other Notable Changes:

  • The Options class for every browser will now extend the Capabilities class.
  • A driver.switchTo().parentFrame() method has been added to make frame navigation easier.
  • New relative locators (sometimes called ‘friendly’ or ‘nice’ locators) will be included that operate at a higher level than the current ones. They will be a subclass of By (see the sketch after this list).
  • There will be an implementation of the DevTools API, allowing users to take advantage of features offered by using the Chrome Debugging Protocol (and equivalents on other browsers). These include:
    • Full page screenshots (including offscreen elements).
    • Streaming logs.
    • Waiting for mutation events on the page.
  • Many deprecated methods and classes will also be deleted.
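
Based on the early alpha releases, the relative locators looked roughly like the Java sketch below; treat the exact names as an assumption, since the API may well change before the final release (the field IDs here are made up for the example).

// In the alphas this lived in org.openqa.selenium.support.locators.RelativeLocator
WebElement passwordField = driver.findElement(By.id("password"));

// find the (hypothetical) input element sitting above the password field
WebElement usernameField = driver.findElement(
    RelativeLocator.withTagName("input").above(passwordField));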

 

Note: You can get an Alpha version of Selenium 4 from the Maven repository. It's highly recommended to try this out against your current framework (ideally on a sandbox branch), so you're ready for the change.

 

If you've found any more useful methods or techniques while performing a Spring Clean on your framework please do share them with other readers of this blog by adding them to the comments section below.

JavaScript, Java, .NET - which language to choose?


When it comes to choosing the right course for you, it can be difficult to decide between the different languages and frameworks out there. In this blog post, we try to sort the apples from the pears, the sheep from the goats, and (hopefully) make things a little clearer. Let’s start with JavaScript.


JavaScript

TLDR; Wonderful innovative tools that draw you in, even if the weirdness of the language may cost you your sanity. Learn this if working on a NodeJS application or if you’re only interested in UI automation and front-end testing.

Web browsers natively use JavaScript to perform dynamic actions on a page. It’s also an exciting and thoroughly modern language. This means that the Test Automation tools available are at the cutting edge when it comes to innovation and power.

Innovation

When it comes to innovation, nothing beats JavaScript. All the most exciting and beautiful tools are coming out of this space and if any tool is going to ultimately usurp Selenium, then this is where it will happen.

The Live Testing feature of Cypress is a revelation, while TestCafe also frees us from the confines of Selenium, giving us some much-needed stability while easily fitting into any wide testing framework. If you’re committed to Selenium, then WebDriverIO makes it simple, giving you a powerful mature platform that integrates with almost everything.

However, the price of innovation can mean you hit issues that haven’t been resolved yet. And these tools are innovating fast, haemorrhaging bugs right, left and centre.

However there are huge and enthusiastic communities supporting these tools, and any issues generally have workarounds or are fixed quickly. It also means any queries you have will be answered fast, a refreshing change from the support communities in other languages.

Ease of Use

All tools have great documentation and strive to make things as easy as possible for the tester.

Most of them excel when they’re reinventing what it means to do Automation Testing - Cypress and WebDriverIO in particular stand out when it comes to this - rather than trying to fit old models into their frameworks.

However, very much like the JavaScript language itself, it’s when you want to perform something bespoke that the learning curve suddenly becomes very steep.

For those just starting out, the tools and the language will feel much, much easier than Java and .NET. However, they can flatter to deceive. To get the most out of these tools and create Test Automation frameworks for complex applications, you will need knowledge of all the wonderful weirdness of JavaScript: async/await, promises, prototypes, duck typing and recursion, which can very quickly blow your mind.

Very much like Alice in Wonderland, some strong drugs may be needed to understand what is going on and to keep you sane.

Practicality

If you intend to purely be focussed on UI automation, or you’re working on NodeJS applications, then these are the tools for you. Plus they really are a joy to use.

However if you’re looking to automate the full stack of testing on a Java or .NET application, then read on to the next sections.

Overall

Innovative new tools such as TestCafe and Cypress make Test Automation fun. They also free us from the instability and limited commands of Selenium.

However, you will come across issues that haven’t been solved, and be prepared for a very steep learning curve later down the line - which will still be enjoyable for those who love to learn bizarre and wonderful things. For those of you who like their programming a little more predictable, take a look at the next sections.

Interested? You can book on our JavaScript courses here:

4 day Manual to Automation Tester course

2 day Crash Course in Test Automation



Java

TLDR; Learn this if you’re working in a Java software house and you want to manage or integrate with the full stack of automation testing. Otherwise, attend the JavaScript course for purely UI automation.

It goes without saying that you would only use a Java framework if you’re working on a Java application. The same is true for Test Automation in .NET (see the next section). There’s a steep learning curve right at the start, as you must get to grips with Object-Oriented Programming in order to use any decent framework. However, once you do, everything is simple from then on. There are none of the constant surprises of JavaScript.

Innovation

Any UI automation in Java will lean heavily on Selenium as this is the market leader in this area. Java is also a relatively stable language and once you understand the core concepts, these can be applied everywhere.

That isn’t to say that innovation isn’t happening in this space (hello Spock), however these tools will only make life easier for you, rather than introducing new concepts.

Ease of Use

As previously mentioned, for a non-developer to learn an object-oriented language is hard. However, once you get your head around the syntax (which is a nightmare) and the ways of working, then things only get easier.

If you’re using the IntelliJ IDE, then this will give you a lot of support and take some of the pain out of coding.

Practicality

If you work for a Java house and are looking to create a full-stack test automation framework for a complex application, then this is likely to be the best option.

Overall

A steep learning curve that it’s well worth overcoming to make you a better automation tester. The test automation models are generally a bit old and rely completely on Selenium, however there are innovations coming through that look to shake up this space and bring Test Automation with Java into the 21st Century.

Interested? You can book on our Java courses here:

4 day Manual to Automation Tester course

2 day Crash Course in Test Automation




.NET (C#)

TLDR; Learn this if you’re working in a .NET software house and you want to embrace the full stack of automation testing. Otherwise, attend the JavaScript course for purely UI automation.

Much like Java, if you work in a .NET software house and are working on a .NET application, then building your UI automation framework in .NET will make it much easier for you to build a robust test automation framework that sits tightly within your solution.

Innovation

The release of .NET Core into the open-source ecosystem has led to a nascent explosion in innovation around this area. Microsoft are also continuously nabbing all the best features of other frameworks and including them in their tools.

In particular, SpecFlow is much more user friendly and feature-rich than the Java version of Cucumber.

Ease of Use

Again, you will need to contend with the steep learning curve of Object-Oriented Programming and the horrible curly-bracket syntax that confuses every new starter. Although, weirdly, C# has started incorporating a lot of the syntax of scripting languages, which can be confusing or exciting, depending on your point of view.

We love C# however, and do think it’s an enjoyable and lively language. Automation frameworks are built in a modular fashion that still remains tightly coupled to your application project thanks to solutions.

However you do need to contend with the bloated instability of Visual Studio, which is by far the worst IDE ever invented. In addition, the documentation that Microsoft provides for any of its products is at best horrible and at worst out-of-date. Visual Studio Code is one of the best IDEs, but has a steep learning curve for non-developers.

The documentation problem is changing though, as demonstrated by Microsoft’s recent release of Azure DevOps, so maybe they’re getting better in this area.

Practicality

The frameworks integrate well with any .NET project and can be as complex or as simple as you make them.

It’s also getting easier to use .NET tools for free as Microsoft embraces the open-source methodology, however, when using them in an enterprise environment, expect to be paying lots of licensing fees.

Overall

Once again there’s a reliance on Selenium for any web application UI testing, however the frameworks are powerful and integrate tightly with any .NET application under development.

A must if you’re working on a .NET project.

Interested? You can book on our .NET courses here:

4 day Manual to Automation Tester course

2 day Crash Course in Test Automation




What is the Page Object Model for Automation?

If an online shop changed the login process, or changed a link on the homepage to a button, your clever human brain could still work out how to log in or click the button.

Your test automation suite is not quite as clever as that. If the link changed to a button, or the wording of the link changed, the test will likely not find the button and fail.

Automation tests can be pretty inflexible and when things change they just can't cope, so we need to keep that in mind when writing our tests. We have to accept that they'll need to be regularly maintained, and design our test suite to be easy to do so.

 

How do we do this?

In manual test cases, you have an ultimate goal, and a set of steps to follow to get there. Each step will have an action to complete, and an expected result for that action. The test will pass or fail depending on the results of all of the steps. In test automation you'll have exactly the same concept, but the actions are called methods. You write and store methods separately from the tests and expected results so you can reuse and maintain them.

In order to do this, we create "Page Objects" to store our methods (actions) on a per page basis, and then our Tests will link to these methods and we can wrap them in assertions to see if they work or not.

 

Page Objects

For each page on your website you'll create a page object class. It will contain any test methods (actions) that you might do on that page such as identify a form field, enter data, click links and look at the page title.

Comparing this back to a manual test, these are the 'action' part of a test step, not the expected results. For these page objects, we don't care about pass or fail; it doesn't matter if the link works or not, as that's taken care of in the test. Methods are actions such as "Click on login link" or "Enter login information and submit". We can even have a method for "check for a message at the top of the page", which isn't a test: we've not said whether there should be a message or not, we're just looking to see if it is there.
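
To make that concrete, here's a minimal sketch of what a page object might look like in Java with Selenium. The class name and locators are hypothetical, purely to illustrate the idea.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class HomePage {

    private final WebDriver driver;

    // hypothetical locators for this page
    private final By loginLink = By.id("login-link");
    private final By bannerMessage = By.cssSelector(".banner-message");

    public HomePage(WebDriver driver) {
        this.driver = driver;
    }

    // action: click on the login link
    public void clickLoginLink() {
        driver.findElement(loginLink).click();
    }

    // action: check for a message at the top of the page (no pass/fail here)
    public boolean isMessageDisplayed() {
        return !driver.findElements(bannerMessage).isEmpty();
    }
}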

 
 

This keeps all of the coded automation actions in one easy to find place and makes them very easily reusable, which improves maintainability, reliability and visibility. 

 

Tests

This is where we actually create and define our tests and expected results. From within a test, we can "call" the methods on our page objects. This makes writing tests very fast, and you can reuse every method/action you have written as often as you like.

A manual test might look like this

Safe-Script.png

 

Then the automated test would look like this

 
Safe-Test.png
 

And when executing the test, it would call out to the Page Objects and execute the methods on those pages.

 

How does this make the Test Automation Suite maintainable?

At the start of this post, we talked about changing a link on the homepage from a link to a button.

In a manual test, we'd have to know where all of the steps are that need to be changed and we'd need to go to every test case, and update every test step that says to click on the link.

In our automation test suite, we know the link is on the home page.  We go to our home page object, find the method that says to click the link, and update it to click a button. Voila. One easy to find action is changed and the entire suite is up to date.

Getting Into Test Automation: What You’ll Need to Learn

It looks like a daunting journey from Manual Testing to Automation Testing. The different languages, tool options, structure and technical jargon you're faced with can make it difficult to know where to start. 

It doesn't have to be hard and scary. The steps to getting there are actually surprisingly simple.

  1. Understand the principles of Object Oriented Languages
  2. Choose a language and tools
  3. Learn how to interact with web pages
  4. Design a test suite structure
  5. Write tests
  6. Run tests and report

Looks a lot less scary as an achievable to-do list. Let’s look at each of these in a little more detail.

 

Understand the principles of Object Oriented Languages

This doesn't mean you have to be able to write an entire application in C# or Java. Most IDEs (integrated development environments) have built in code-completion, which is a bit like the autocomplete you get on phones. With a solid understanding of syntax and some knowledge of where you're going, the code will almost write itself.

 

Choose a Language and Tools

Generally, you will choose the language that your application is being developed in, so your framework can sit alongside application development.

The tools are influenced by the language you use and the project on which you’re working. Commonly, .NET houses have a tendency towards Visual Studio, and many Java places use Eclipse or IntelliJ. Once you know how to use one IDE you’ll easily pick up how to use others, similar to how different word processors have different looks and functionality, but ultimately you can write, edit and save documents.

 

Learn how to interact with web pages

Selenium with WebDriver is a tool that drives a browser. As a very basic model, think of the browser as a car, WebDriver as the person driving, Selenium as a SatNav, and you as the person giving destinations to the SatNav. If you want to go to the homepage, you need to use Selenium to tell WebDriver to go to the homepage in the browser.

In order for WebDriver to select a link, it needs you to tell it where that link is. You'll need to learn how to uniquely identify any element on a web page, with the most common methods being ID, XPath and CSS selectors. When you understand how these methods work, identifying elements is easy.
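
For example, here is the same (hypothetical) login link located three different ways in Java; the attribute values are made up for the illustration.

// by its ID attribute
WebElement loginById = driver.findElement(By.id("login-link"));

// by a CSS selector
WebElement loginByCss = driver.findElement(By.cssSelector("nav a.login"));

// by an XPath expression
WebElement loginByXpath = driver.findElement(By.xpath("//nav//a[text()='Log in']"));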

 

Design a test suite structure

In manual test scripts, you might have "Go to homepage URL:www.example.com" as a step used hundreds of times in all of your test cases. Good practice is to create that action separate to the test cases, and call it into the test cases that require it. That way if it changes you only need to update it in one place and it’s automatically pulled into all tests.

Exactly the same principle applies in Test Automation. The more steps you can pull out of the actual test, the more reliable and maintainable the tests become. In automation, the action is called a method. Methods are written completely separate to the tests, and then when the tests are executed they call the method. A very good model that implements this technique is the Page Object Model, taught on our courses.

 

Write tests

When writing a manual test script, you have a set of steps with an expected outcome for each. These are grouped together to create a single test case that will pass or fail depending on the results of the steps. The same principles apply in Automation tests. You create a test method (test case); in this test method you execute a set of page object methods (actions), and you use assertions to check that all is as it should be (expected results).
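
As a rough illustration (the page object and its methods are hypothetical), a TestNG test built this way might look something like:

@Test
public void userCanLogIn() {
    LoginPage loginPage = new LoginPage(driver);   // page object holding the actions
    loginPage.goTo();                              // action
    loginPage.logInAs("testuser", "s3cret");       // action

    // expected result, checked with an assertion
    Assert.assertTrue(loginPage.isLoggedInAs("testuser"), "The user was not logged in!");
}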

Writing the tests might sound like the hardest bit, but once you have the understanding of how it all works and how to do it, it's actually the easy bit!

 

Run tests and report

Automation tests are usually used in one of two ways:

  • As part of BDD where you link the tests to your requirements.
    The tests are run as part of continuous integration and deployment, and the results report is a BDD level (requirements coverage) report.
     
  • As part of a regression test suite.
    Tests are usually executed as part of continuous integration and deployment. Tests are expected to pass, and reports are simple. Tools/plugins exist to generate project-facing reports if required.

 

In Summary

Many people that want to get into test automation jump straight in to learning how to code. Although understanding how to code is important, it’s far from the only aspect, and as with spoken languages, it’s much easier to learn through use.

More important is learning how to set up your framework, and how the structure and relationships in the Automation Test suite works. Once you understand all of that, coding the tests is easy.

 
Safebear run hands-on, learn-by-doing training courses in Test Automation, from crash courses with Selenium to a complete Manual to Automation Tester course, as well as Performance Testing courses. You can even learn about Blockchain!

Automated Visual Testing: The Missing Part of your CI Pipeline?

 
 

Guest Post by Viv Richards, Senior Test Engineer @Vizolution and organiser of SwanseaCon. Viv will also be talking at our next meetup event.

Selenium and similar tools are the de-facto standard when it comes to GUI testing. However, do they go far enough?

Selenium tests are not infallible, as we found to our cost late one Friday evening when the support team got an urgent call. A few days after a system update, the customer could no longer use the application as the send button was not on the screen. The support person immediately checked the Selenium automation tests for any failures but was confused when they noticed that all the tests had passed… what had happened?

How good are you at spot the difference?

The send button had rendered on the form, so the Selenium tests all passed; however, a CSS change had been made and the send button had become hidden behind one of the input text areas. There was no way that Selenium could pick up this regression bug. Selenium only cared that the button appeared on screen, not where it appeared. Readability, usability and the appearance of a web application - all the things CSS controls - are completely irrelevant when it comes to traditional GUI Test Automation.

Screen Shot 2018-04-22 at 20.41.27.png

 

But what if you could simply assert that your application needs to look like a standard image of the website? Or even that each element of a page looks like a base image to prevent this kind of regression?

A sea of Bloatware

When looking at visual testing with my current employer, we were unable to simply use an existing offering. Many options didn’t work straight out of the box, or contained lots of bloat which would just add extra overhead to the maintainability of the tests. There had been a massive investment in our automation frameworks, and in upskilling developers/testers in C# and Selenium, so looking at new languages or frameworks could mean a steep learning curve. The main issue for us was that we needed it to fit in with our current automation framework, and so we had to create our own visual testing framework.

Reliability Issues

Whilst developing our framework, it quickly became apparent that the rendering from different browsers would often cause tests to fail. For example, if we were to take a base image of our homepage in a Chrome browser and then run our test in Firefox to ensure the homepage hadn’t changed, the test would fail due to the browser adding additional padding to some elements. We needed different sets of base images for different browsers.

Execution Speed

A major plus was the impact on our CI pipeline. When comparing a visual test of a web page to our traditional Selenium tests, where we asserted each element’s text, we noticed the visual tests executed far quicker. The test framework simply navigates in a web driver to the desired page, takes a snapshot which is held in memory, and then compares the image byte by byte to the base image stored locally or wherever you’d prefer.
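
The author's framework is written in C#, but the core idea looks roughly like this in the Java used elsewhere on this blog (the base image path is made up for the example, and you'd need TakesScreenshot/OutputType from Selenium plus java.nio.file and TestNG's Assert):

// grab an in-memory screenshot of the current page
byte[] actual = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);

// load the stored base image (hypothetical path)
byte[] expected = Files.readAllBytes(Paths.get("BaseImages/homepage-chrome.png"));

// a byte-for-byte comparison of the two images
Assert.assertTrue(Arrays.equals(expected, actual), "The page no longer matches the base image!");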

Easy to Maintain: A Dramatic Reduction in Code

Compared to Selenium, the tests were much easier to maintain. Using a visual approach can dramatically reduce the amount of code required for an automated test. As an example, a previous test I’d been asked to write was 500+ lines; written using the visual framework it was just four lines of code: one line to specify the base image, one line to specify the URL to compare, one line to do the compare, and an assert. That’s it!

Screen Shot 2018-04-22 at 20.41.57.png
Fantastic Feedback

One of the fantastic things with the visual testing framework, and specifically the one we designed, is that whenever a test fails, a copy of the original image is created and, where differences are found, a pink box is drawn highlighting any areas of change, making them very easy and quick to identify.

Accuracy: Finding the Right Level of Tolerance.

During development of the testing framework, and whilst running spikes with various other visual testing tools, we found they were all pixel-perfect: the frameworks were able to detect a single-pixel difference. However, after a few months of testing and asking other team members to help out, the tests started to fail. We found that within our visual framework we had to start allowing a certain amount of tolerance during comparison. A common issue we encountered was that the colour would sometimes be slightly off (a slightly different shade) depending on the machine taking the base image, or taking the comparison image. Whilst this wouldn’t be a problem on the machine which would always run the tests, it would cause issues when developers ran the tests locally. So, we had to introduce a tolerance across the 256 different colour intensities.

Screen Shot 2018-04-22 at 20.41.38.png

Whilst this was no longer pixel-perfect, we found it accurate enough to still assert that layouts were as expected, as well as to check that wording was correct and elements were being rendered as expected.

Dynamic Content

One of the big challenges for us has been dynamic content: our web pages often display a logged-on client name or perhaps a news feed. One of the ways we’ve been able to get around the issue is to create a helper which simply blankets over the dynamic elements.

Even with the ability to cover dynamic content, it’s not always desirable to cover all elements of a page. What if you only want to check a small portion of your page, perhaps just a button to ensure it still matches the customer’s set branding?

By using visual testing, you can not only test that a whole page matches what you expect, but also check an individual element. By specifying an element by CSS selector or ID, you can take a base image of that individual element and then run tests to check for changes.
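
One hedged way to do this with plain Selenium (rather than the author's framework) is to crop the element out of a viewport screenshot using its location and size, with javax.imageio.ImageIO and java.awt.image.BufferedImage. The element locator and file path are hypothetical, and this simple version assumes the element is inside the visible viewport with no device-pixel-ratio scaling.

WebElement logo = driver.findElement(By.cssSelector("#logo"));   // hypothetical element

// take a screenshot of the viewport and load it as an image
BufferedImage fullPage = ImageIO.read(
    ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE));

// crop out just the element using its position and size
Point location = logo.getLocation();
Dimension size = logo.getSize();
BufferedImage elementImage = fullPage.getSubimage(
    location.getX(), location.getY(), size.getWidth(), size.getHeight());

// save it as the base image (or compare it to an existing one)
ImageIO.write(elementImage, "png", new File("BaseImages/logo.png"));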

Screen Shot 2018-04-22 at 20.58.58.png
In Summary

Our Automated Visual Testing tool has shown a great deal of promise. It has enough advantages over Selenium and 'spot the difference' manual checks for CSS issues that what started as an interesting experiment has grown into full-blown tool development. It still has its challenges (I didn't even get into any Storage issues), but there are clear ways to overcome these. It is also not a magic bullet, but with time, and getting to know the right tolerances for your application, you will start to wonder how you lived without it.

If you're interested in learning more, pick up a free copy of the framework we are developing over on GitHub and let us know what you think.

https://github.com/vivrichards600/AutomatedVisualTesting

We'd love to hear your thoughts!


Viv Richards is a test engineer at Vizolution, a blogger and a community bumble bee. He is a CodeClub volunteer, organises South Wales' largest agile and developer conference (SwanseaCon) and is co-organising DDD Wales.


Artificial Intelligence in 5 Minutes

 
 

I've been given the challenge of writing about Artificial Intelligence in five minutes. So I designed a robot to do it for me. Here’s what it came out with:

 

What is Artificial Intelligence?

Computers are excellent at things that us humans have only discovered relatively recently in our history, such as numbers, calculation and Netflix. However, we have traditionally always been better at areas that have taken millions of years of evolution, such as identifying images (two pictures of the same plant will look completely different to a computer if taken from a slightly different angle), understanding natural language, fine motor control and just learning and adapting to changes in the world around us.

 

However, computer science is getting better and better at mimicking the way the mind works and suddenly the gap between the ability of computers and the abilities of humans is starting to narrow.

 

This means that the two, previously disparate, disciplines of Neuroscience and Computing are drawing ever closer together as we develop ever more advanced artificial intelligence.

 

What are the types of Artificial Intelligence?

The two main types of AI that are mentioned most often are Machine Learning and Deep Learning.

 

Machine Learning algorithms parse data, learn from that data, and then apply what they’ve learned to make informed decisions. However, they need to be shown a lot of ‘right’ answers. For example, for traditional Machine Learning, if you wanted it to recognise a cat, you would need to ‘tell’ it to look for something with whiskers, a nose, pointy ears, a tail etc., while Deep Learning would work all this out for itself.

 

Deep Learning is a subset of Machine Learning that mimics the working of the human brain by creating an artificial neural network. With Deep Learning, you only feed in the data and then tell the algorithm which outcomes are desirable and which are undesirable. The neural network itself will work out what data is important and how to consider that data to get a desired result.

 

The key point about Deep Learning is that it is a ‘black box’, i.e. we will never know how it really identifies the ‘right’ answer. Deep Learning is more accurate than any other type of Artificial Intelligence, however when the ‘computer says no’ you never know why.

 

Why is everyone talking about Artificial Intelligence now?

The cost of computing power has plummeted while the speed and power of computers has shot up, making it possible for anyone to start coding and training their own AI bot. The tools have evolved to make it easier for anyone with a knowledge of coding to start in this space (traditionally the preserve of Data Scientists doing PhDs at Universities).

 

Also, we have huge amounts of data available with which we can train our AI bots.

 

What industries will be affected by Artificial Intelligence?

Every industry that involves humans will be affected. Workers in warehouses are being replaced by AI, AI can edit photos and create its own films, sell to customers, translate text or dynamically improve computer code. AI will increase productivity, some say at the expense of jobs.

 

What’s the future of AI?

Currently AI can learn to do a very specific thing very well. This is enough to be incredibly useful and will impact our lives more and more over the coming years, eventually becoming a tool as ubiquitous as the internet.

 

As AI becomes more advanced, there’s also the possibility that it might make the jump to higher forms of intelligence that will bring it closer to humans. However this is, maybe fortunately, a long way off.

Embrace your inner Polyglot: Learning Languages as a Tester

 
 

Testing has experienced a paradigm shift in recent years. Automation has enabled methodologies such as Behaviour Driven Development (BDD), Test Driven Development (TDD) and Continuous Integration (CI) to flourish, dramatically reducing the number of missed deadlines, failed requirements and buggy products.

However, while Testers are core to the successful adoption of these technologies, they have struggled to adapt. Their involvement can be limited due to a simple lack of programming knowledge. Even our own Full-Stack Test Architect course requires that attendees have a basic knowledge of at least one Object-Oriented Programming language in order to join.

However, getting up to speed isn't as hard as you may think.

New online courses have made this simple. For those of you who like to gobble down their learning in one go (like watching a whole box set over one weekend), there's the excellent and free LearnXinYminutes website. Here are the links for Java, CSharp and JavaScript:

https://learnxinyminutes.com/docs/java/
https://learnxinyminutes.com/docs/csharp/
https://learnxinyminutes.com/docs/javascript/

They make for addictive reading. You can download the entire code (use the 'Get the Code' link) and open it in any editor you like (we recommend IntelliJ for Java, Visual Studio for CSharp and VS Code for JavaScript) to run the code as you learn.

If this seems intimidating and you're into more bite-sized learning, we recommend the SoloLearn tutorials. Here are the links to the language courses:

https://www.sololearn.com/Course/Java/
https://www.sololearn.com/Course/CSharp/
https://www.sololearn.com/Course/JavaScript/

Their courses are also free, and you can download the mini-lessons as apps on your phone, so you can learn to code while waiting for a bus or for the kettle to boil. Perfect for busy testers getting the coffee round in.

Finally, if you want to actually build something as you learn, we recommend Codecademy. Here's the language courses:

https://www.codecademy.com/learn/learn-java
https://www.codecademy.com/learn/introduction-to-javascript

Please note that there are parts of this course that you will have to upgrade to a Pro (paid) membership to access. There's also no CSharp course.

Ultimately, until you start using the language in a real-world setting that relates to your job, learning a language will remain challenging. However this is where Safebear can help. Completing one of these basic programming courses is enough to provide you with the skills needed to join the Full-Stack Technical Architect foundation course and bring together the language and practical experience you need to start using your skills on a real project.

So go on, take the plunge and learn a language. You'll be amazed at the opportunities that will open up for you.

Please note that Safebear has no affiliations with any of these third party training courses and is not responsible for any of the content.

Performance Testing - Which Tool to Choose?

 
 


When it comes to Performance Testing, there are a huge number of tools out there. 

We've had feedback from our partner agencies that JMeter is the skill most in demand in the marketplace, which is why we run the course. However, that's not necessarily a reason for you to adopt it. Below we take a look at some of the alternatives.

JMeter is open source, but there are paid options like HPE LoadRunner that can be better depending on how you'll be using it. If you're testing a web (browser-based) application, then JMeter will be fine - it handles RESTful interfaces brilliantly. HPE LoadRunner supports many more rare protocols and also integrates with other HP Enterprise test tools. HPE LoadRunner also comes with support, which can be handy if your application is using difficult network protocols to communicate. Support is costly though and has mixed feedback.

When doing a comparison, you also need to consider that simulating a huge number of users needs a lot of computer power. If you use a tool that is not well known, you'll have to build up your own server stack to support the tool. If you're using JMeter, there's a lot of cloud-based infrastructure you can upload your scripts to and they provide the firepower, e.g. BlazeMeter or Flood.

This means that you only pay for the power when you need it. Brilliant.

Running 'Performance as a Service' allows you to run your thousands of users in the cloud and then turn them off at the end of the test. That's the beauty of cloud power. BlazeMeter also offer great JMeter support and much better reporting functionality than the JMeter tool offers on its own.

The cost of using these cloud-based systems (or supporting the server needs in-house) needs to be factored into any tool comparison. Most of these cloud-based performance infrastructures seem to accept JMeter scripts, so it would give you a broad choice of tools out there. I'm sure paid options like HP LoadRunner will have their own dedicated cloud-based options also.

In terms of other open-source options, Gatling is very well thought of in the industry and seems to be on a par with JMeter. There's a good comparison between the two on OctoPerf's blog that is well worth a read.

Good luck and happy testing!

SafeBear x

Test Automation: Java or Python?

 
 

It's one of the most common questions we get:

Which course is best for me: our Test Automation course in Python or our Test Automation course in Java? And why do we run the course in two different languages?

The first and most important point to make is: it does not matter which language you automate in.

The application you want to test can be written in Java, Python, CSharp, Assembly, Scala or any other computing language you can think of. An automation framework written in Python will automate your manual tests and an automation framework written in Java will also automate your manual tests just as well. When it comes to automating the GUI, the underlying framework makes no difference.

Which course to choose depends on your situation. Of course, if you wish to become an automation expert, you probably want to take both, as the frameworks are very different. Each fits different circumstances, as you'll see below, and sometimes it simply comes down to personal preference.

I've tried to break down the pros and cons of each in the hope that this makes your decision easier. If it's made it more complicated, I'm sorry.

Python is Perfect.

PROs:

Automate your world not just your tests. It's hard to make the move to becoming a technical tester without knowing Python. If there's one language you need to know to automate EVERYTHING, then this is the one to choose. You can automate spinning up environments, you can use it to scan ports and perform security tests, for CI it's invaluable. Python will become a friend you revisit again and again.

Short and simple. It's laughably easy to use and read compared to Java's complex syntax. In addition, it's generally accepted that for every ten lines of code in Java you only need one line in Python.

Everyone's already done everything for you. So many other people use Python for the exact reason you do, so you can generally assume that someone's already written the code you need and you can simply import it.

Learning and support. There's a huge amount of learning and support materials out there on the Web. People generally agree that the support manuals are easier to understand than for any other language.

CONs:

IDE pain. Python is designed to be simple, versatile and scripted from the interpreter, so it doesn't play as well with IDEs as Java does. It's so simple and versatile that it's impossible for an IDE to understand what you're doing when you start creating objects and passing them around between methods. This can be very frustrating if you do want to use an IDE to create your framework.

Office support. Sometimes it's better to have local support. If no-one else in your office has Python knowledge, there's no-one to bounce questions off when you get stuck.

Java is Just Right. 

PROs:

IDE heaven. Java is a joy to code in IDEs such as IntelliJ. The IDE does most of the work for you, even taking most of the pain of the complicated syntax. The code completion features mean you can get a huge amount of work done while it feels like you've only typed a couple of characters.

PageFactory. Page Factory in Java simplifies your Selenium automation code and allows you to write easily understandable tests.
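
As a quick sketch of what that looks like (the element locators here are hypothetical), a PageFactory-style page object lets you declare elements with @FindBy annotations and have Selenium wire them up for you:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class LoginPage {

    @FindBy(id = "username")
    private WebElement usernameField;

    @FindBy(id = "password")
    private WebElement passwordField;

    @FindBy(css = "button[type='submit']")
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        // initialises the @FindBy fields against the live driver
        PageFactory.initElements(driver, this);
    }

    public void logInAs(String username, String password) {
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);
        loginButton.click();
    }
}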

In-house support. Most testers work with Java developers. If you ever get stuck, there's someone a couple of desks away who can pop over and give you hand. This helps hugely with the learning curve and also gives you the benefit of their experience and knowledge. Before you know it, you'll be a pro.

CONs:

Gibberish. It's not at all easy to read Java compared to the plain English of Python. There's also a steep learning curve and the documentation isn't always that useful. There's a lot of online support though (see Stack Overflow).

Null Pointer pain. When Java throws an error and stack trace at you it's not always the easiest to understand or the most useful. IntelliJ helps where it can, but confusing error messages can make it frustrating to debug. 

Limited to Test Automation. You'll never use Java in any other areas of your testing. Not in CI, Performance, Security, Availability or anywhere else. You're stuck automating the GUI of your application and that's it.

I hope this has helped! If it's only made you more confused, please don't hesitate to get in touch with one of the education team at hello@safebear.co.uk or by calling us on +44 (0) 2921 28 0321. We're always happy to have a chat.

Team SafeBear.

Why Testers Should Code

 
 

Although there's been a gravitational shift in the industry towards testers who can code, there are still some companies that struggle to understand the need to skill up this particular area of their workforce. Their logic does have some merit.

Testers are meant to act as USERS. If they code they'll stop thinking like users. Even worse, they'll start reporting bugs that are actually due to their buggy automation, environments or shell scripts, wasting development's time.

Unfortunately, a far bigger waste of time is manually testing an entire regression suite. Or IT Operations repeatedly resetting a complex 'as live' environment so it can be used for another cycle of testing. And what's even worse for the quality of the product is not running the regression suite at all because of a lack of time or manual testers.

Manual testing will always catch bugs that automated tests miss. 'As live' environments will always catch environmental issues before the software goes live. However, no software development lifecycle can absorb the kind of hit to timescales it takes to manually run a full regression suite on each release candidate, or run a production-like environment for Agile testing.

That's before we even get to how long Performance, Security or Availability (ITSCM) testing can take. All these should be fitted slickly into your Continuous Integration SDLC.

Testers need to understand how to spin up virtual environments on a Linux box to test a new feature or retest a bug. They need to be able to automate their regression packs. They need to be able to integrate non-functional testing into the CI pipeline.
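As an illustration of that first point, here's a minimal sketch of spinning up a throwaway environment for a test run with Python and Docker. It assumes Docker is installed on the box, and the image name, container name and port mapping are placeholders rather than recommendations:

# Throwaway test environment sketch - image, name and ports are placeholders
import subprocess
import time

IMAGE = "nginx:alpine"
NAME = "throwaway-test-env"

subprocess.run(
    ["docker", "run", "-d", "--rm", "--name", NAME, "-p", "8080:80", IMAGE],
    check=True,
)
try:
    time.sleep(2)  # crude wait - in real life, poll a health endpoint instead
    # ... point your regression pack at http://localhost:8080 here ...
    print("Environment is up")
finally:
    subprocess.run(["docker", "stop", NAME], check=True)

The same approach scales up to docker-compose files or cloud APIs, which is exactly the kind of glue work that keeps environment resets out of the critical path.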

Some element of manual testing will always be needed to ensure that user experience is as smooth as possible, but if you want to deliver a quality product AND meet customer deadlines, then skill up your testers. Now!

Security Jargon #2: Script Kiddies

When most people run a virus scan on their computer, they have no idea exactly what it's looking for, how it identifies a virus, or what it does to 'clean' the machine of the effects of the malicious software. They don't need to. Some very clever security experts have created a program that does all of this for them.

The same is true when someone wants to hack into your computer. There is an abundance of software, written by experienced hackers and security experts, that will identify vulnerabilities in your network and offer up 'payloads' such as ransomware, trojans, webshells and back doors that turn your network into a hacker's playground.

Hackers who use these tools blindly, with no idea of the technology behind them, are called 'Script Kiddies'. They only know what the tools tell them and can easily be stopped by following standard security practices such as patching regularly, good security hygiene and scanning your own systems for vulnerabilities (although many companies are lax at even doing this).

They are nowhere near as skilled as genuine hackers or cyber criminals, who can quickly identify vulnerabilities in your system that tools or scans won't pick up. On the downside, however, they are far more numerous and the most likely to be probing your defences.

So, for those readers playing hacker top trumps, scores below:

Skill: 2/10

Numbers: 10/10

Threat: 6/10

Maliciousness: 4/10

Script kiddies can do a lot of naive or unintentional damage, even managing to cause major data breaches, so do the security basics and keep them out!

What Brexit means for Cyber Security in 10 seconds

BOOM. 

That ringing in your ears is the result of an explosion in world politics that will last for years to come.

So what will be the fallout for cybersecurity? Our IT elves here at SafeBear have found mostly negative reactions (summarised below) - although there is some, likely futile, hope that companies will no longer need to meet the costly requirements of EU security regulations (coughGDPRcough). If you find any impact that we've missed (positive or negative), please comment and we'll include it below!

Start the clock...

Security Research

Outlook: Uncertain

Over a third of research funding for UK universities comes from the EU. The UK government will need to replace this in order for the UK to compete in the tech industry.

Jobs

Outlook: Uncertain

Some think that it will lead to a talent drain away from the UK as companies cut costs, preferring to take the risk of a cyber attack; however, this is only likely if the UK enters another recession as a result.

Some also say it is likely that cyber security firms (which have received heavy investment from the UK government in recent years) will relocate to the continent, where there will be a wider talent pool of cyber security professionals.

Intelligence and Homeland Security

Outlook: Poor

Sharing of intelligence is one of the cornerstones of the EU, and it helps in the online battle against cyber criminals, who could well take advantage of the confusion to attack UK companies, now seen as 'easy marks'.

Security Regulations

Outlook: No change

Everyone agrees that UK companies will still need to meet GDPR, and some say they may even have to jump through extra hoops to do so, although others are more optimistic that nothing will change.

A risk-based approach to cybercrime for SMEs

It's very hard to stop a determined burglar getting into your house.

If they were to watch your comings and goings every day, work out what alarm system you had set, maybe even find a way to cut the electrics to your house to stop it going off, then pick the lock of your secure front door (or jimmy open a window), drug your guard dog or bribe your neighbour for the spare key... eventually they'd find a way in.

This is similar to most systems that reside on the internet. A skilled, determined and patient hacker will eventually find their way into your system. However, just as double-locking the front door and setting the burglar alarm before we go on holiday deters most criminals enough to send them after easier prey, doing the basic security practices is usually enough.

You certainly wouldn't go on holiday leaving your windows wide open. However, that seems to be what most small businesses do.

The threats to small businesses from cyber crime are ever present, and the ability of a business to protect itself and minimise risk is critical to its overall success. The good news is that a few basic steps will drastically reduce the risk to your business from the key threats. These steps are simple:

  • Installing anti-virus/malware software on all machines
  • Keeping software up-to-date
  • Using network and host based firewalls
  • Practising good security habits, such as recognising and deleting malicious emails
  • Ensuring good password practices are in use

In addition, if you are storing your customers' data, you must:

  • Identify the costs of cleaning up after a breach or malware incident
  • Identify the costs of fines due to the loss of personal data or failing to meet other compliance requirements

Ultimately though, the more you invest in security measures, the safer your customer data and your IP will be from theft or ransomware. So, the question is, should your business be looking to further reduce risk, or are the basic measures enough? Before you can make this decision there are a few factors to consider. 

  • If you were a victim of cyber crime, and all your IT systems became unavailable, or all your data was compromised/lost, what would the financial costs be?
  • If you were a victim of cyber crime, and all your IT systems were unavailable, or all your data was compromised/lost, what would the potential damage to the reputation of your business be?

In reality, both of these factors come back to the financial cost, as a loss of reputation is likely to lead to a loss of customers, as TalkTalk found out after their very public breach.

So, when considering IT security measures, the cost of these measures should be evaluated against the cost of a breach and how likely a breach is to occur. Once these factors are known, the business value of security spending can be established, ensuring the security measures put in place match the risk profile of the business. 
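One rough way to put numbers on that trade-off is a simple expected-loss calculation: estimate what a breach would cost, how likely it is in a given year, and how much of that risk the extra measures would remove, then compare the result with what those measures cost. The sketch below just shows the arithmetic; every figure in it is invented for illustration:

# Back-of-the-envelope risk calculation - all figures are invented examples
breach_cost = 300_000       # estimated total cost of a breach (clean-up, fines, lost customers)
annual_likelihood = 0.10    # estimated chance of a breach in any given year
control_cost = 15_000       # annual cost of the extra security measures
risk_reduction = 0.60       # fraction of the risk those measures are expected to remove

expected_annual_loss = breach_cost * annual_likelihood
loss_avoided = expected_annual_loss * risk_reduction

print(f"Expected annual loss without controls: £{expected_annual_loss:,.0f}")
print(f"Expected loss avoided by the controls: £{loss_avoided:,.0f}")
print(f"Controls pay for themselves: {loss_avoided > control_cost}")

The estimates going into a calculation like this are always rough, but it forces the conversation away from fear and towards business value.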

The first step in assessing and minimising the risks to your business is to ensure you have the right skills available to do this effectively. SafeBear can offer courses to help you cost-effectively develop these skills within your business, or can provide skilled security experts to guide you through the process.

Don't leave your windows open to cyber criminals. Ensure your employees are trained to do the basics right.

Security Speak #1: Google Dorking

A hacker's favourite exploit. Why risk detection by scanning a company's internet footprint when Google's done the hard work for you?

Google dorks are search queries that pick up sensitive information unwittingly exposed to the unsuspecting world by careless employees.

Discovered by Johnny Long back in 2002, Google Dorking is still very much alive. The power and flexibility of the Google search engine make it difficult for companies to ensure that they're not exposing any sensitive data, and sometimes finding it is worryingly easy. A simple search query such as:

intext:"ssn" filetype:xls

is often all it takes to find vast quantities of social security numbers stored in publicly accessible files. Similarly, queries such as:

intitle:"index of" password

have been known to uncover user password lists.

Google dorks are collected on websites such as Google Dorking and the Exploit Database, as well as by enthusiastic amateurs. The NSA even have their own list of favourite searches.

For more information about Google Dorks - Google it!

Weekly Roundup - Top Breaches

Twitter

The week started off with the news that 32 million Twitter passwords were circulating among the Russian hacker community online, causing Twitter to react by forcing users to reset their passwords.

However, the validity of the leak was subsequently disputed by Twitter, who are known for their strong password security practices. Even so, SafeBear was on the receiving end of a forced password reset, so if you haven't signed up to two-factor authentication on the site, do so now.

Malware thriving in Company Shared Folders

Cloud security company Netskope have found that corporate shared folders are becoming increasingly full of malware, a problem that only gets worse when uncontrolled and infected mobile devices are using shares on the network.

Word documents containing malicious macros were also found. Always open Word documents sent to you by email in a viewer such as Word Reader, and if you must open them in Word, never enable macros.

University of Calgary Pays £10,000 to Recover Data

The University of Calgary paid £10,000 to unlock data held hostage by ransomware. However, they have not yet received any assurance that paying the ransom will lead to the data being recovered.

In an article published Tuesday by The Globe and Mail, University Vice President Linda Dalgetty said once the network was infected, the university couldn't risk losing critical data.

“We are a research institution," she was quoted as saying. "We are conducting world class research daily and we don’t know what we don’t know in terms of who’s been impacted and the last thing we want to do is lose someone’s life’s work."

Ransomware is becoming an epidemic - to stay safe, always remember to back up regularly, and keep your backups off your network when not in use.