ScreenLab is closing on 31 December 2025. Please download any heatmaps or scan results you wish to keep before this date. All data will be permanently deleted after shutdown.
ScreenLab employs cutting-edge computer vision algorithms and advanced mathematical models to simulate human-computer interaction. In a few seconds our technology can give you insights into your UI/UX that would take weeks and cost thousands to obtain by traditional methods.
Understanding how people view websites is not just about detecting where their eyes are looking
Anyone who works with eye-tracking data knows that the results you get are tightly linked to the task being performed. That's why, when we assembled the ScreenLab team, we brought together not just extensive imaging-science know-how but also top-flight experience in UI and UX design.
Physiology
Eyes are not just passive receptors; they group and process the signals they receive. Our simulation engine takes this into account when modelling how users respond.
Psychology
There are dozens of psychological cues in what the eye sees. Colour response, pattern recognition and the task being performed all change what a viewer observes.
Design
Web developers apply design principles because they understand this changes what users perceive. We've built our web design expertise into ScreenLab so that it does too.
The ScreenLab Model
Our model was designed from the ground up to take account of the physiology and psychology of vision, as well as the application of design principles. Drawing on extensive published data along with in-house research, we refined attention theory for web design and identified the key features and visual stimuli.
Our computer vision package, developed using the powerful OpenCV library, was then constructed to extract and grade these features.
Colour
Red grabs attention, but the differences between colours are just as important as the colours themselves. ScreenLab accounts for local and global contrast as well as colour tone and brightness.
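The distinction between local and global contrast can be sketched in a few lines. This is an illustrative toy, not ScreenLab's actual model: it grades each pixel of a tiny grayscale grid by how far it deviates from the image-wide mean (global) and from its 3x3 neighbourhood mean (local).

```python
# Toy sketch (not ScreenLab's code): local vs. global luminance
# contrast on a small grayscale "image" (2-D list of 0-255 values).

def global_contrast(img):
    """Deviation of each pixel from the image-wide mean."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [[abs(p - mean) for p in row] for row in img]

def local_contrast(img):
    """Deviation of each pixel from its 3x3 neighbourhood mean."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = abs(img[y][x] - sum(nbrs) / len(nbrs))
    return out

# One bright pixel on a dark background stands out on both measures.
img = [
    [10, 10, 10, 10],
    [10, 200, 10, 10],
    [10, 10, 10, 10],
]
local = local_contrast(img)
```

A real engine would of course work per colour channel and at multiple scales; the point is only that "how different is this from its surroundings" is computed at more than one scope.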
Motion
Our eyes and brains have evolved to snap attention to motion. ScreenLab includes evaluation of video and dynamic content so you get a true picture of what your users see.
Faces
From just a few weeks old our brains are wired to pick out faces. Even as adults we are still instinctively drawn to them. Our engine knows this too.
Patterns & Text
Our brains seek out patterns, even when there are none. We are also trained to recognise patterns such as text. Put these together, and patterns become powerful attractors.
Contrast
Our eyes are hard-wired to respond to contrast and edges. ScreenLab models this, as well as the brain's own processing and enhancement.
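The low-level edge response described above is, at its simplest, a gradient magnitude. The following is a minimal sketch under that assumption (a real pipeline would use proper convolution kernels such as Sobel filters): it computes a finite-difference gradient at each pixel of a tiny grayscale grid.

```python
# Toy sketch (not ScreenLab's model): gradient-magnitude edge map
# over a small grayscale grid.

def edge_strength(img):
    """Finite-difference gradient magnitude at each pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the response peaks along the boundary columns
# and is zero in the flat regions either side.
step = [[0, 0, 255, 255] for _ in range(3)]
edges = edge_strength(step)
```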
Position
Every designer knows position matters. Users' experiences dictate where they look for features and how they expect different types of content to relate.
ScreenLab thinks like your customers, allowing you to refine your designs in a fraction of the time.
Our model considers the same things your users' brains do: colour analysis, dynamic content and motion, spatial frequency, contrast and pattern recognition. We combine this data to give you hot zones, and analyse them to provide quantitative image metrics.
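One common way to combine per-feature data into hot zones is a weighted blend of normalised feature maps. The sketch below assumes that approach for illustration; the maps and weights are made up and are not ScreenLab's.

```python
# Illustrative sketch only: blend per-feature saliency maps into a
# single "hot zone" heatmap. Maps and weights here are invented.

def normalise(m):
    """Scale a map so its peak is 1.0 (all-zero maps pass through)."""
    peak = max(max(row) for row in m)
    if peak == 0:
        return [row[:] for row in m]
    return [[v / peak for v in row] for row in m]

def combine(feature_maps, weights):
    """Weighted sum of normalised feature maps -> heatmap."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, weights):
        norm = normalise(fmap)
        for y in range(h):
            for x in range(w):
                heat[y][x] += wgt * norm[y][x]
    return heat

# Two tiny 2x2 feature maps; the top-right cell scores on both.
contrast_map = [[0, 8], [0, 0]]
colour_map = [[0, 2], [4, 0]]
heat = combine([contrast_map, colour_map], weights=[0.6, 0.4])
```

Normalising each map first stops a feature with large raw values from drowning out the others; the weights then express how strongly each cue drives attention.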
Try ScreenLab today for free. No credit card required!
ScreenLab (web app and API) will be shutting down at the end of this year. After 31 December 2025, the service will no longer be available and all user data will be permanently deleted.
If you have any heatmaps, scan results, or other data stored in your account, please download it before the shutdown date.
Active subscriptions will be cancelled automatically, and no further charges will be made. If you have questions or need assistance with your data, you can contact us at hello@screenlab.io. Thank you for using ScreenLab.
Copyright 2014-2023 ScreenLab Ltd. All Rights Reserved.