Technical Testing ⚙️ / Test Automation
🏛️Spor 2 | 🎤Track 2
Kl. 10.30 | In English
Raw Numbers Mean Nothing: Making Performance Insights Count with ML and LLMs By Ernest Marzá Climent from Quibim
Abstract:
Performance testing plays a critical role in identifying bottlenecks, scaling issues, and stability risks—but its impact is often lost in translation. Reports filled with raw metrics and dense dashboards tend to go unnoticed or misunderstood, especially by non-technical stakeholders.
This session introduces a reporting pipeline designed to bridge that gap. It starts with load testing using K6 on a containerized backend, collecting both performance and system metrics. These are then cleaned using Isolation Forest for anomaly detection, and further analyzed with XGBoost to simulate how the system would behave under projected future loads.
Where this approach truly stands out is in its final step: using Large Language Models (LLMs) to translate technical findings into plain language, tailored to the needs of decision-makers. The outcome is a clear, actionable report—automatically generated in PDF format—that helps teams not only identify issues, but also communicate them with confidence and clarity.
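A sketch of the translation step might look like the following. The metric names and the `build_report_prompt` helper are illustrative assumptions, and the actual chat-completion call is deliberately left out, since any LLM client could fill that role.

```python
# Build a plain-language reporting prompt from aggregated test metrics.
# The resulting string would be sent to whichever LLM the pipeline uses.

def build_report_prompt(metrics: dict, audience: str = "non-technical stakeholders") -> str:
    lines = [f"- {name}: {value}" for name, value in metrics.items()]
    return (
        f"You are writing a performance test summary for {audience}.\n"
        "Explain what these results mean for the product, avoid jargon,\n"
        "and end with one concrete recommendation.\n"
        "Metrics:\n" + "\n".join(lines)
    )

# Hypothetical aggregated results from a load test run:
metrics = {
    "p95 latency (ms)": 275,
    "error rate (%)": 0.4,
    "projected p95 latency at 2x load (ms)": 610,
}
prompt = build_report_prompt(metrics)
print(prompt)
```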
Rather than overwhelming clients or product leads with raw numbers, this method delivers narrative insights that foster better collaboration, faster decision-making, and real business impact.
If you’ve ever felt that your performance tests deserved more attention or failed to spark action, this talk will show you how to turn data into influence.
Learning Objectives:
• Learn how to transform raw performance data into clean, structured, and meaningful insights
• Understand how ML techniques like Isolation Forest and XGBoost enhance performance analysis
• Discover how LLMs can generate clear summaries that make test results accessible to all stakeholders
• Explore a practical approach for automating end-to-end reporting, from load testing to final PDF delivery
• Gain techniques to improve communication between QA, development, and business teams
Kl. 11.30 | In English
Automated REST API vulnerability detection with WuppieFuzz By Thomas Rooijakkers from TNO
Abstract:
Today’s world depends on many digital services and the communication between them. To facilitate this communication between applications, standardised and well-specified application programming interfaces (APIs) are often used. In particular, the use of well-defined representational state transfer (REST) architectural constraints for APIs is popular. As an entry point to many applications, these APIs provide an interesting attack surface for malicious actors. Furthermore, since APIs often control access to business logic, a security lapse can have high-impact undesirable consequences. Thorough testing of these APIs is therefore essential to ensure business continuity. Manual testing cannot keep up, so automated solutions are needed. In this talk, we introduce and demonstrate WuppieFuzz, an open-source, automated testing tool that makes use of fuzzing techniques and code coverage measurements to find bugs, errors and/or vulnerabilities in REST APIs.
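To make the fuzzing idea concrete, here is a toy generation-side sketch in Python: producing boundary and random values for a single REST parameter described by a simplified OpenAPI-style spec. This is not how WuppieFuzz is implemented; real fuzzers also mutate inputs based on coverage feedback from the running application.

```python
import random
import string

# Classic integer boundary values that often expose off-by-one and
# overflow handling bugs in API parameter validation.
BOUNDARY_INTS = [0, -1, 1, 2**31 - 1, -2**31]

def fuzz_values(spec: dict, n: int = 10, seed: int = 0) -> list:
    """Generate fuzz inputs for one parameter from a simplified spec."""
    rng = random.Random(seed)
    kind = spec.get("type")
    cases = []
    if kind == "integer":
        cases.extend(BOUNDARY_INTS)  # deterministic boundary cases first
        cases.extend(rng.randint(-10**6, 10**6) for _ in range(n))
    elif kind == "string":
        cases.extend(["", "'; DROP TABLE users;--", "A" * 10_000])
        cases.extend(
            "".join(rng.choices(string.printable, k=rng.randint(1, 40)))
            for _ in range(n)
        )
    return cases

# Hypothetical query parameter from an OpenAPI description:
cases = fuzz_values({"type": "integer", "name": "page"})
```

Each generated value would then be sent to the API under test, with crashes, 5xx responses, and specification violations logged as findings.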
Learning Objectives:
Basics of fuzzing (fuzz testing)
Challenges in fuzzing REST APIs
Demonstration of fuzzing REST APIs
Kl. 13.15 | In Danish
Are Agile Values in Conflict with the Use of Generative Artificial Intelligence (GenAI)? By Jan Piil from EPOS Group A/S
Abstract:
Are Agile values and Generative AI fundamentally at odds?
o This session challenges the assumption that Agile values contradict the use of Generative AI (GenAI). Instead, we will explore how GenAI can enhance Agile ways of working, accelerating software testing and quality assurance while staying true to Agile principles.
GenAI can support Agile values in several key ways:
• Individuals and interactions over processes and tools:
o GenAI fosters collaboration by providing intelligent assistants, automated meeting summaries, and enhanced communication tools.
• Working software over comprehensive documentation:
o Automated code generation, AI-driven testing, and bug detection enable teams to focus more on delivering functional software.
• Customer collaboration over contract negotiation:
o AI-powered analysis of user feedback and behavioral data helps teams better understand customer needs, strengthening collaboration.
• Responding to change over following a plan:
o GenAI can anticipate shifts in requirements, suggest adaptations, and support continuous learning, making Agile teams more responsive and flexible.
While this all sounds promising, how do we effectively integrate GenAI into Agile teams without compromising software testing and quality assurance?
Is GenAI the key to success? No – YOU are!
o The human mind remains essential for critical thinking, problem-solving, and guiding AI toward meaningful outcomes.
o People—not AI—are the key assets in achieving quality assurance and successful software delivery.
o Testers, developers, and QA professionals must shape AI’s role by leveraging their expertise, creativity, and unique insights.
The goal of this session is to inspire software testers, QA professionals, and Agile practitioners at all levels to think beyond predefined steps and use their expertise to deliver outstanding quality assurance. By embracing AI as an enabler—not a replacement—we can truly “light the path to next-gen testing.”
This talk will also include personal insights from my software testing journey, covering key professional milestones such as:
o Transitioning from waterfall to Agile testing during my time at Nokia.
o Leading and scaling international test teams across different cultures.
o Driving Agile adoption in a large-scale software test organization.
By sharing lessons learned and real-world experiences, this session will encourage attendees to reflect on how they can harness both Agile values and GenAI to enhance their own professional growth and testing strategies.
Learning Objectives:
Attendees will:
1. Gain insights into how GenAI can complement Agile values in software testing.
2. Be inspired to embrace a mindset of curiosity, adaptability, and continuous learning.
3. Recognize the importance of human expertise in guiding AI tools for quality assurance.
Kl. 14.15 | In Danish
Why Quality Assurance Will Be at the Core of the AI Revolution By Asger Steen Pedersen
Abstract:
Artificial General Intelligence (AGI) — AI systems capable of performing tasks typically reserved for humans — is predicted to arrive within this decade. The technology will soon disrupt most industries, and as AI becomes increasingly sophisticated, responsibilities that once belonged exclusively to human employees will shift towards AI solutions. However, despite the clear trajectory of how AI will affect our society, no one seems to know how best to implement this extremely powerful technology.
This leaves us in a dilemma. We know that AI is coming and that we need to implement it, but we don’t know how to do it. I believe that the best way to implement AI is by putting quality assurance (QA) at the very center.
In the future, much of the work done by humans will focus on checking and validating AI-generated outputs. Developers will largely be reviewing and refining code produced by AI. Managers will evaluate strategies and business proposals crafted by AI tools. Just as humans currently perform quality checks on AI-generated texts, similar tasks across numerous fields will become commonplace, positioning quality assurance as a universal skillset.
Historically, the role of testers and QA professionals did not exist, and once it emerged it was often overlooked or deemed secondary. Over time, however, the complexity of software and digital systems has made quality assurance an integral, non-negotiable part of developing reliable, effective IT solutions. Today, testers hold critical roles in software development teams, actively shaping and refining digital products and ensuring user satisfaction, safety, and performance. And now that AI is arriving, they are uniquely positioned to implement this powerful technology in a safe yet effective manner.
In this session, I’ll discuss why QA will be central to the AI revolution and how people working in the field of QA can proactively prepare for and lead this change.
Learning Objectives:
1. Understand why Quality Assurance will become a core component of AI implementation strategies.
2. Recognise the necessity to structure the approach to AI around robust QA practices.
3. Identify the skills and knowledge required by QA professionals to effectively support AI systems.
4. Learn how the QA community can proactively advocate for and promote the integration of QA within AI initiatives, and be encouraged to participate in this movement.