Session Schedule

Here are the content details for this event's available sessions.
All times are London Local Time (LLT). Times, presenters, and sessions are subject to change.


  • 9:15 AM
    Pushing us Towards the End (-to-End)
    Room 1
    IT constantly increases its ability to rapidly implement new features and changes to support continuously changing business needs. This is pushed further by composing applications from available and new services (including web services and services that encapsulate legacy systems) and by the increased flexibility enabled by SOA, cloud, and sourcing. QA and Test are pushed towards the End: all pieces have to connect and work together to provide the required services to the business. In this presentation Ruud shares his experiences in delivering the end-to-end test service you need to provide: integrated, production-like, automated, continuously running, and covering the full application landscape. Find answers to the questions: Who is responsible? Who provides the funding? Who develops, executes, and maintains the automated test scripts? Who provides the test environment and data? Who is responsible for resolving issues and bugs? Join Ruud to learn how to avoid being pushed over the edge and stand up to the challenge of end-to-end testing.
  • 10:00 AM
    The Crucible of Testing in Creating Software
    Room 1
    Creating software combines Ideation (thinking about what we want to create), Creation (designing and writing the software), and Distribution (getting software to the users), followed by Use. Refinement, Corrections, and Enhancements drive updates and new versions, often motivated by Enticements (motivating factors such as income, revenue, and fame). Successful software can be created without much Testing, but seldom is. However, much of Testing today is perceived as low-value work done by a dedicated trade of 'Workers', with Testing effectively outsourced by the Creators. When Testing is performed well, the results can significantly improve the chances of success while simultaneously reducing the time, cost, and effort required of the people involved.
  • 10:45 AM
    An Effective Agile Testing Framework
    Room 1
    In the early agile days, the only testing activities in an agile project were automated unit testing and acceptance testing. That was before testers joined agile teams. In more complex and more critical agile projects, being effective and efficient in testing is not that easy. In this session Anko Tijman will discuss the elements of agile testing that matter when you are testing in an agile context: acceptance TDD, unit and integration testing, non-functional testing, exploratory testing, and continuous acceptance. With these elements, you will create a solid testing framework in which errors are prevented, communication is encouraged, and acceptance is integrated seamlessly. With this process framework, your agile testing process will be effective, efficient, and state of the art.
  • 11:30 AM
    Hang on, DevOps, what about DevTest?
    Room 1
    The topic of 'DevOps', integrating Development and Operations in efficient and automated ways, is getting a lot of attention these days. But doesn't that assume we have already solved 'DevTest', the collaborative and effective integration of Development and Testing?

    In this session, Scott Rich, the IBM Technical Lead for Rational Collaborative Lifecycle Management, will assert that Development and Testing are still islands in most shops, and that some trends in tooling are actually making things worse rather than better.

    Can Collaborative Lifecycle Management tooling and OSLC integration standards save the day?
  • 12:15 PM
    Visit the Expo
    Room 1

  • 1:00 PM
    Usability Testing on a Tight Budget
    Room 1
    Not everybody is building the next eBay or the newest operating system for the iPad, applications with millions of end users for which usability is a top priority. In most cases, we work on (internal) applications for a limited number of users. Yet you still want to know whether the application is user-friendly enough, if only to ensure that the transition to the new application goes smoothly for these end users. From this point of view, Erik will run through a few simple steps, easy to apply in different environments, to get a first impression of the level of usability of your application.

    More concretely, it comes down to starting with a prototype before building your application; this will already give you the first impressions. Then you organise several interactive demos at well-selected moments before moving on to the actual usability tests. These usability tests involve setting benchmarks with the help of your end users. Determining the minimum and maximum number of clicks needed to complete assignments within the application can be such a benchmark. The same can be done with, for example, the elapsed time needed for those same assignments. During the execution of these tests, the results are measured against predetermined acceptance criteria. This measurement gives you a clear view of the usability of your application.

    During this presentation, Erik shows that usability testing does not always have to come with large budgets and long timelines. Based on his experience, Erik will give you, the listener, many examples and useful tips for applying this approach in your own environment.
  • 1:45 PM
    When Testing Becomes the Risk
    Room 1
    Normally we test to prevent risks, to keep our name out of the headlines, and to make sure bad things do not happen because of our software. In some cases, however, the testing activities themselves cause damage when they go wrong. One of the most horrifying tests ever to go wrong was the stress test performed at Chernobyl in the mid-eighties, which led to the nuclear disaster we all know too well. Problems like this happen much more often on a smaller scale.

    In this talk Bart Knaack will present 10 (other) real-life cases of testing gone wrong and show how we can prevent this from happening. His talk will be a feast of recognition for some and an eye-opener for others, but a joy for everyone!
  • 2:30 PM
    Mind, Map, and Strategy - Using a Mindmap to Develop and Communicate Your Test Strategy
    Room 1
    A test strategy is the set of big-picture ideas embodying the overarching direction or design of a test effort: the significant values that will inspire, influence, and ultimately drive your testing, and the overall decisions you have made about the ways and means of delivering on those values.

    Rather than the weighty templates standard in many organisations, a lightweight medium like a mindmap is a far superior tool for developing a test strategy and communicating its essentials to your stakeholders. This session will walk through an example to illustrate how.
  • 3:15 PM
    Experience Driven Test Automation
    Room 1
    Oh no! Is this yet another approach to test automation? Actually, no, it isn't. This is about what other people's experience with test automation can teach us: how it can help us capitalise on good ideas and avoid potentially useless ones.

    A new book by Dorothy Graham and Mark Fewster, "Experiences of Test Automation", due to be published this autumn, describes 29 case histories of test automation across a rich variety of application domains, environments, and organisations. The book includes success stories, failure stories, and a few so-far-so-good stories.

    While every story is different, there are some common elements running through these case studies. In this presentation Mark highlights a few of the common themes, covering a range of technical issues.