Selectical is designed to seamlessly fit into your existing literature review process, making the transition to AI-assisted reviews both intuitive and effective.
We understand that each researcher has a unique workflow, and Selectical is built to adapt. Whether you want a fully AI-supported review or prefer to use it as a second reviewer, Selectical integrates effortlessly. It supports three main modes of use:
Combine a manual review with an AI-supported one
Enhance your manual review by incorporating AI support. Selectical can serve as an additional reviewer, providing a safety net to ensure that no relevant articles are overlooked.
Audit your manual selection with Selectical's AI
Use Selectical to check what you might have missed in your manual screening. The AI re-examines your selections and identifies any potentially relevant articles that may have been excluded.
Replace full manual review with Selectical
Replace your full manual review with Selectical’s AI-powered screening. Let the application handle the bulk of the workload while you focus on critical evaluations.
The AI learns from the human researcher which titles/abstracts are relevant and which are not. After some initial input from the researcher, the AI presents the titles/abstracts with the highest likelihood of relevance. The researcher labels each as ‘relevant’ or ‘not relevant’, and the AI learns from those decisions.
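The loop above can be sketched in a few lines. This is an illustrative toy, not Selectical's actual model: the word-overlap scorer and the function names are assumptions made purely to show the label-then-rerank cycle.

```python
# Minimal active-learning sketch: rank unlabeled abstracts by a crude
# relevance score learned from the researcher's labels so far.
# (Illustrative only; Selectical's real model is not public.)
from collections import Counter

def score(abstract, relevant_words, irrelevant_words):
    """Net count of words that appeared in 'relevant' vs 'not relevant' examples."""
    words = abstract.lower().split()
    return (sum(w in relevant_words for w in words)
            - sum(w in irrelevant_words for w in words))

def next_to_screen(unlabeled, labeled):
    """Order unlabeled abstracts by estimated relevance, highest first."""
    relevant_words, irrelevant_words = Counter(), Counter()
    for abstract, label in labeled:
        bag = relevant_words if label == "relevant" else irrelevant_words
        bag.update(abstract.lower().split())
    return sorted(unlabeled,
                  key=lambda a: score(a, relevant_words, irrelevant_words),
                  reverse=True)
```

After each new label, the queue is re-ranked, so the most promising abstracts always surface first.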
As you work, Selectical tells you when you can stop screening titles/abstracts.
How do you know you won't miss any relevant articles? Selectical uses an innovative rule-based strategy to estimate the uncertainty around the remaining titles/abstracts. Once no measurable uncertainty remains, the titles/abstracts the reviewer has not seen can be excluded.
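To make the idea concrete, here is one simple stopping heuristic of the kind described above. Selectical's actual rules are not public; the run length, score cutoff, and function below are hypothetical stand-ins.

```python
# Toy stopping heuristic (assumed, not Selectical's real rule set):
# stop once a long run of consecutive screened records was irrelevant
# AND the model's top relevance score for the remaining pool is low.
def should_stop(recent_labels, remaining_scores,
                run_length=50, score_cutoff=0.05):
    """recent_labels: booleans (True = relevant), oldest first.
    remaining_scores: model relevance scores for unscreened records."""
    tail = recent_labels[-run_length:]
    no_recent_hits = len(tail) == run_length and not any(tail)
    pool_cold = not remaining_scores or max(remaining_scores) < score_cutoff
    return no_recent_hits and pool_cold
```

The point of combining both conditions is that a cold streak alone is not enough: the model must also assign negligible relevance to everything still unseen.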
People tend to want to control when to stop screening. Most other selection tools leave that decision to the researcher, based on some indicator. But how do you know you have picked the right cut-off? Researcher uncertainty often leads to screening longer than necessary.
We have extensively tested Selectical’s stopping rules: over 99% of the articles relevant for data extraction were found.
Selectical provides auditable results.
With Selectical you get more granular model decisions: large language models are trained at the level of individual eligibility criteria.
When screening is done, Selectical assigns a reason for exclusion to the titles/abstracts the reviewer did not see.
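The two points above combine naturally: if each eligibility criterion is judged separately, the first failed criterion can be recorded as the reason for exclusion. The sketch below assumes this design; the `judge` function stands in for an LLM call and is replaced here by a trivial keyword check for illustration.

```python
# Criterion-level screening sketch (assumed design, not Selectical's code):
# judge each eligibility criterion separately; the first failure becomes
# the auditable reason for exclusion.
def screen(abstract, criteria, judge):
    for criterion in criteria:
        if not judge(abstract, criterion):
            return ("excluded", criterion)   # reason for exclusion
    return ("included", None)

# Toy stand-in for an LLM judgment: does the abstract mention the criterion?
def keyword_judge(abstract, criterion):
    return criterion in abstract.lower()
```

Because every decision is tied to a named criterion, the output is auditable record by record.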
We tested the Selectical application on 36 literature review projects, each simulated 25 times. Together the projects contained 80,000 abstracts, of which 2,000 were selected as 'relevant' by researchers.
We tested on a variety of data:
The greatest time reduction was found in studies with more than 4,000 titles/abstracts.
Pilot participants reported a great user experience.