Hybrid machine learning and quantum-informed modelling boosts solubility prediction for drug compounds


Jan 12, 2026

A new Open Access paper in Digital Discovery presents a scalable hybrid approach that combines physics-informed solubility estimates with interpretable machine learning to improve prediction of drug solubility in organic solvents.

Digital Discovery Journal

Our partners have published a new paper in the Royal Society of Chemistry journal Digital Discovery exploring how hybrid machine learning (ML) and quantum-informed modelling can improve solubility prediction for drug compounds in organic solvents. Solubility is a critical property for pharmaceutical formulation and processing, but physics-based approaches such as COSMO-RS can be computationally expensive at scale.

Abstract

Solubility is a physicochemical property that plays a critical role in pharmaceutical formulation and processing. While COSMO-RS offers physics-based solubility estimates, its computational cost limits large-scale application. Building on earlier attempts to incorporate COSMO-RS-derived solubilities into Machine Learning (ML) models, we present a substantially expanded and systematic hybrid QSAR framework that advances the field in several novel ways. A direct comparison between COSMOtherm and openCOSMO revealed consistent hybrid augmentation across COSMO engines and enhanced reproducibility. Three widely used ML algorithms (eXtreme Gradient Boosting, Random Forest, and Support Vector Machine) were benchmarked under both 10-fold and leave-one-solute-out cross-validation. Four major descriptor sets (MOE, Mordred, RDKit descriptors, and Morgan Fingerprints) were compared, offering the first descriptor-level assessment of how COSMO-RS-calculated solubility augmentation interacts with diverse chemical feature spaces. Statistical Y-scrambling was conducted to confirm that the hybrid improvements are genuine and not artefacts of dimensionality. SHAP-based feature analysis further revealed substructural patterns linked to solubility, providing interpretability and mechanistic insight. This study demonstrates that combining physics-informed features with robust, interpretable ML algorithms enables scalable and generalisable solubility prediction, supporting data-driven pharmaceutical design.
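
To make the hybrid setup concrete, here is a minimal sketch (not code from the paper) of how RDKit descriptors, Morgan fingerprints and a COSMO-RS solubility estimate might be stacked into a single feature vector and benchmarked with the three algorithms under 10-fold cross-validation. The file name and column names (data.csv, smiles, cosmo_logS, logS_exp) are placeholders, and solvent features are omitted for brevity.

```python
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR
from xgboost import XGBRegressor


def featurise(smiles, cosmo_logS):
    """A few RDKit descriptors + 1024-bit Morgan fingerprint + COSMO-RS estimate."""
    mol = Chem.MolFromSmiles(smiles)
    desc = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(desc + list(fp) + [cosmo_logS])


# Placeholder dataset: "smiles", "cosmo_logS" (physics-based estimate) and
# "logS_exp" (measured log-solubility) are assumed column names.
data = pd.read_csv("data.csv")
X = np.vstack([featurise(s, c)
               for s, c in zip(data["smiles"], data["cosmo_logS"])])
y = data["logS_exp"].to_numpy()

# Benchmark the three algorithm families under 10-fold cross-validation.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in [("Random Forest", RandomForestRegressor(random_state=0)),
                    ("XGBoost", XGBRegressor(random_state=0)),
                    ("SVM", SVR(kernel="rbf"))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean 10-fold R^2 = {scores.mean():.3f}")
```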

Fig. 1 Workflow for solubility modelling and interpretation. (1) A dataset of 714 binary solute–solvent systems is encoded using SMILES. (2) These SMILES serve as inputs for: openCOSMO solubility prediction utilising (a) surface charge distributions obtained from BP86/def2TZVPD and COSMO calculations (b) in cases where the solute is a solid, solute enthalpies of fusion and melting points, and (c) a representative COSMO surface charge density visualisation shown for illustration as part of the COSMO solubility output; and for the generation of MOE, RDKit, and Mordred descriptors as well as Morgan Fingerprints. (3) The resulting COSMO-RS solubility estimates and preprocessed descriptor sets are combined as input under a hybrid mode. Machine learning models (RF, XGBoost, SVM) are trained to predict solubility. (4) SHAP-based heatmaps then decompose model outputs into descriptor and fingerprint contributions, translating predictions into QSAR insights.
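
Steps (3) and (4) can be illustrated by continuing the sketch above (again, not the authors' code): leave-one-solute-out validation is expressed with scikit-learn's LeaveOneGroupOut, using an assumed solute_id column as the grouping key, and SHAP values stand in for the heatmap-style attribution described in the caption.

```python
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# X, y and data are the arrays built in the previous sketch; "solute_id"
# is an assumed column identifying the solute in each solute-solvent pair.
groups = data["solute_id"].to_numpy()

# Leave-one-solute-out: every fold holds out all systems of one solute.
model = RandomForestRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=LeaveOneGroupOut(),
                         groups=groups, scoring="r2")
print(f"leave-one-solute-out mean R^2 = {scores.mean():.3f}")

# SHAP decomposes each prediction into per-feature contributions,
# analogous to the heatmaps described in step (4).
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```

Grouping by solute means a model is always evaluated on solutes it has never seen during training, a stricter test of generalisation than random 10-fold splits.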

Read the Paper

A case study on hybrid machine learning and quantum-informed modelling for solubility prediction of drug compounds in organic solvents (Wang et al., Digital Discovery, 2026). DOI: 10.1039/D5DD00456J

Link: https://pubs.rsc.org/en/Content/ArticleLanding/2026/DD/D5DD00456J

Modern scientific research workflows rely on a wide range of software tools and file formats. Unfortunately, the formats one tool can export are often incompatible with those another requires for import. Existing options for converting data between formats are often slow, unclear and error-prone, not least because formats vary in their structure and in the amount of information they can represent, so conversion between specific formats can be complex and may result in information loss. PSDI’s Data Conversion Service (DCS) was created to address this challenge, offering researchers a single, trusted place to convert data formats while helping them understand the likely quality and limitations of different conversions.

Where the idea came from

The need for a Data Conversion Service was first identified during research carried out for the PSDI pilot phase at the University of Southampton, which was published in Digital Discovery. This research identified a recurring issue across the physical sciences: researchers were working with data in many different formats, making collaboration and reuse difficult due to a lack of interoperability. It highlighted a clear need for “data format conversion between different data types in order to facilitate data exchange between different services, and to allow users to collaborate using common formats.”

A key conclusion of this work was that this issue, alongside many other interoperability challenges, could best be addressed by identifying existing software that already offers relevant functionality and creating the infrastructure needed to allow these tools to work together.

The scientific community had already created converters, such as Open Babel, to address some of these issues, but they were fragmented and offered little insight into conversion quality or potential information loss. Rather than creating another converter, PSDI’s focus therefore shifted towards making better use of these existing software tools by bringing them together and exposing their capabilities more transparently.

As Dr. Samantha Pearman-Kanza, who was closely involved in shaping the early direction of the service, explains:

Rather than simply creating another conversion tool, the focus was on making the best use of existing software and elevating their offerings. The aim was to help researchers understand what conversions were possible across different scientific data formats, which existing tools could be used, and where the use of these tools for certain conversions might involve compromises in data quality.

From concept to working service

Early ideas explored a search interface that identified possible conversions and directed users to existing conversion software. This quickly evolved into a more researcher-friendly approach: integrating established converters directly into a single service and exposing their options in a consistent way.

Development was carried out by Research Software Engineers Dr. Ray Whorley, Dr. Bryan Gillis and Dr. Don Cruickshank, who initially prototyped the service as a small Python application before expanding it into a fully-fledged web service and suite of downloadable tools.

Reflecting on this evolution, Dr. Whorley says:

The service now incorporates widely used converters such as Open Babel, Atomsk and c2x. Users can upload files, choose input and output formats, apply available conversion options, and download both the converted file and a detailed log. Accessibility has been built in throughout, with users able to customise fonts, sizes and colour schemes.

The Data Conversion Service interface showing format selection, available converters and indicative conversion quality.
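
By way of illustration only (this is not an excerpt from the service itself), the snippet below shows the kind of single-file conversion an underlying engine such as Open Babel performs, here via its Python bindings; the roughly equivalent command-line call is shown as a comment.

```python
from openbabel import pybel  # Open Babel's Python bindings

# Read an ethanol molecule from a SMILES string, generate rough 3D
# coordinates, and write it out in SDF format.
mol = pybel.readstring("smi", "CCO")
mol.make3D()
mol.write("sdf", "ethanol.sdf", overwrite=True)

# Roughly equivalent command-line call:
#   obabel -:"CCO" -O ethanol.sdf --gen3d
```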

Supporting real research workflows

Alongside the web application, the team developed three downloadable tools: a local browser-based version, a command-line tool and a Python library. These are proving particularly valuable for researchers working with sensitive data or automated workflows.

As Dr. Whorley explains:

“The downloadable tools give researchers confidence that their data remains local, and they can be dropped straight into automated workflows.”

This flexibility allows the Data Conversion Service to support everything from quick, one-off conversions to large-scale, repeatable processing pipelines.
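
As a rough sketch of such a repeatable pipeline, the example below batch-converts a folder of XYZ files to SDF by calling the Open Babel command-line tool. Open Babel stands in here because the exact interface of the PSDI command-line tool is not described in this article, and the folder names are placeholders.

```python
import subprocess
from pathlib import Path

# Placeholder folder names; each .xyz file is converted to .sdf by calling
# the Open Babel command-line tool.
src_dir, out_dir = Path("structures_xyz"), Path("structures_sdf")
out_dir.mkdir(exist_ok=True)

for xyz in sorted(src_dir.glob("*.xyz")):
    target = out_dir / xyz.with_suffix(".sdf").name
    subprocess.run(["obabel", str(xyz), "-O", str(target)], check=True)
    print(f"converted {xyz.name} -> {target.name}")
```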

Supporting FAIR data and PSDI’s wider ecosystem

Interoperability is a core part of FAIR data practice, and the Data Conversion Service plays a key role in enabling it. Researchers often need to convert the output of one tool into a format that can be used by the next, or to revive legacy data stored in outdated formats. Our service helps reduce the technical barriers to doing both.

Looking ahead

Now that the Data Conversion Service is established, its future direction will be strongly shaped by user feedback. Researchers can report missing formats and conversions directly through the service, and suggestions are already influencing planned enhancements.

Alongside this, there is clear scope for closer integration between the Data Conversion Service and other PSDI tools and services. For example, data transformed through the Data Revival Service (which converts scanned handwritten paper lab notebooks into machine-readable data) could be converted into a wider range of usable formats, and chemical identifiers such as InChI or SMILES could be generated from a broader set of input formats for use in discovery services like Cross Data Search.
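
For a sense of what identifier generation involves, here is a hedged example that derives a canonical SMILES, an InChI and an InChIKey from a structure file using RDKit, one of several toolkits capable of this; the file name is a placeholder and this is not the service's own implementation.

```python
from rdkit import Chem

# "aspirin.mol" is a placeholder structure file.
mol = Chem.MolFromMolFile("aspirin.mol")
print("SMILES:  ", Chem.MolToSmiles(mol))    # canonical SMILES
print("InChI:   ", Chem.MolToInchi(mol))     # IUPAC InChI string
print("InChIKey:", Chem.MolToInchiKey(mol))  # hashed key for lookups
```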

As Dr. Pearman-Kanza notes:

“The capacity to convert data between different formats is what really unlocks reuse across tools, across projects and across disciplines.”

Potential future developments also include support for conversions that require more than one input file, additional conversion tools, chained conversions where no direct route exists, data visualisation, and an API to enable integration with other platforms and services.

A service built with researchers in mind

For the team, seeing the Data Conversion Service grow from an identified need into a live, widely usable tool has been deeply rewarding. The aim is to make data conversion clearer, more transparent and more inclusive, so researchers can spend less time wrestling with formats and software, and more time doing research.

As Dr. Pearman-Kanza puts it:

“If researchers can trust the conversion process and understand its limitations, they are better placed to make informed decisions about how their data can be used. This includes understanding when conversion is appropriate, what can be gained, and what might be lost, which is an important step towards better research practice overall.”


Try the Data Conversion Service

The Data Conversion Service is freely available to use and designed to fit a wide range of research needs, from quick, one-off conversions to integration within automated workflows. Researchers can explore the web-based service, download local tools, and provide feedback directly to help shape future development.

To get started, visit the live service, watch the short introduction video, explore the documentation, or download the tools to use locally within your own workflows.

Explore the Data Conversion Service and start converting your data with confidence.

 
