About SBFT

Optimization techniques can be applied to many aspects of the software development process; the research area that studies such applications is known as Search-Based Software Engineering (SBSE). Previous editions of this workshop focused on applying SBSE to testing tasks, the so-called Search-Based Software Testing (SBST). Ongoing research on SBST and on fuzz testing proposes techniques that address similar testing problems with similar goals, which led to the decision to rename the workshop to Search-Based and Fuzz Testing (SBFT). SBFT strategies have been applied to a wide variety of testing goals, including achieving high coverage, finding faults and vulnerabilities, and checking various state-based and non-functional properties (e.g., scalability, acceptance).

The central objective of this workshop is to bring together researchers and industrial practitioners from SBST, fuzzing, and the wider software engineering community to share experience and provide directions for future research on the automation of software testing. A second objective is to encourage the use of search and fuzzing techniques to combine testing with other software engineering areas. SBFT is a two-day workshop that comprises a research track, keynotes, and popular testing tool competitions. Additionally, the workshop brings together experts for a panel discussion. All of these activities will contribute to breaking new ground in SBFT research.

Attending SBFT

SBFT 2025 is co-located with ICSE 2025.
To attend SBFT, you have to register for our workshop using the official ICSE registration link.
Once you have registered for SBFT, the ICSE team will send you an e-mail with an invitation to attend the conference.

Like last year, we will also live-stream SBFT 2025 via Twitch. If you are not registered, feel free to join our stream and ask questions in the chat.

Call for Papers

Researchers and practitioners are invited to submit:

  • Full papers (maximum of 8 pages, including references): Original research in SBFT, whether empirical or theoretical, or practical experience using SBFT techniques. Starting with this edition, we also welcome research on challenges, innovations, and best practices in software quality assurance education with/for SBFT.
  • Short papers (maximum of 4 pages, including references): Work that describes novel techniques, ideas, or positions that are not yet fully developed, or that discusses the importance of a recently published SBFT result by another author in setting a direction for the SBFT community and/or the potential applicability (or not) of that result in an industrial context.
  • Demonstration papers (maximum of 4 pages, including references): Work that describes novel aspects of early prototypes or mature tools and communicates the tools' envisioned users, the SBFT challenges addressed, and the results achieved. Authors of regular research papers and tool competition participants are encouraged to submit an accompanying demonstration paper, ensuring that tool details are not discussed in the original paper.
  • Position papers (maximum of 2 pages, including references): Work that analyzes SBFT trends and raises important issues. Position papers are intended to seed discussion and propose new ideas; thus, they will be reviewed with respect to relevance and their potential to spark discussion.
  • Tool competition entries: We invite researchers, students, and tool developers to design innovative new approaches to software test generation. SBFT 2025 will host four testing tool competitions: unit testing of Java programs, CPS testing of self-driving cars, CPS testing of unmanned aerial vehicles, and fuzz testing.

In all cases, papers should address a problem in the software testing/verification/validation domain or combine elements of those domains with other concerns in the software engineering lifecycle. Examples of problems in the software testing/verification/validation domain include (but are not limited to) generating test data, fuzzing, prioritizing test cases, constructing test oracles, minimizing test suites, verifying software models, testing service-oriented architectures, constructing test suites for interaction testing, SBFT for AI applications, machine learning techniques for SBFT, and validating real-time properties.

The solution should apply any kind of fuzzing or a metaheuristic search strategy such as (but not limited to) random search, local search (e.g. hill climbing, simulated annealing, and tabu search), evolutionary algorithms (e.g. genetic algorithms, evolution strategies, and genetic programming), ant colony optimization, particle swarm optimization, and multi-objective optimization.
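As a purely illustrative example of the kind of technique in scope (not part of the submission requirements), the sketch below shows a minimal hill-climbing search that generates an integer test input covering one branch of a hypothetical function under test; the fitness function, names, and parameters are invented for illustration.

    import random

    # Hypothetical fitness: distance of input x from taking the target branch
    # (x == target). A value of zero means the branch is covered.
    def branch_distance(x: int, target: int = 42) -> int:
        return abs(x - target)

    # Simple hill climbing over integer test inputs: start at a random point and
    # repeatedly move to the better neighbour until no improvement is possible.
    def hill_climb(max_steps: int = 1000, lo: int = -100, hi: int = 100) -> int:
        current = random.randint(lo, hi)
        for _ in range(max_steps):
            best = min((current - 1, current + 1), key=branch_distance)
            if branch_distance(best) >= branch_distance(current):
                break  # local optimum reached
            current = best
        return current

    if __name__ == "__main__":
        test_input = hill_climb()
        print(f"generated test input: {test_input}, fitness: {branch_distance(test_input)}")

Competitive approaches typically replace the simple neighbourhood and fitness above with program-specific instrumentation, evolutionary operators, or fuzzing mutators.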

Special Issue

This topic is currently under discussion. More information coming soon!

Photo by Timothy Lethbridge, licensed under CC BY-NC 4.0.

Submission site

HotCRP

Important Dates

Adhering to ICSE’25 workshop dates (AOE):

  • Paper Submission: November 18 2024 (extended from November 11 2024)
  • Paper Notification to Authors: December 8 2024
  • Tool Competition Submission: December 6 2024 (check with the competition organizers for possible deadline extensions)
  • Tool Competition Notification to Authors: January 17 2025
  • Tool Competition Report: January 24 2025
  • Camera Ready Due (both Papers and Tool Competitions): February 5 2025
  • Author's Registration Due: TBA

Submission Guidelines

All submissions must conform to the ICSE’25 formatting and submission instructions. All submissions must be anonymized, in PDF format, and submitted electronically through HotCRP.