ELRIG 2025: How Novo Nordisk is showing a label-free route to functional phenotyping in brightfield high-content imaging

Laboratory events news



At ELRIG 2025, Dr Barak Gilboa of Novo Nordisk unveiled a suite of AI-enabled approaches that turn brightfield microscopy from a low-contrast focusing aid into a powerful modality for high-content profiling, virtual staining and prediction of functional readouts at genome-wide scale.


Dr Barak Gilboa used his presentation at the ELRIG 2025 meeting in Stevenage to contend that brightfield microscopy can serve as a serious high-content modality. As Principal Scientist for Data Products and Virtual Cell Platforms at Novo Nordisk in Oxford, he explained how deep learning artificial intelligence (AI) models and optical adjustments have enabled brightfield imaging to generate cell painting-like profiles and functional predictions at scale.

Gilboa began with an outline of the Oxford-based group’s remit within Novo Nordisk’s research organisation. The team supports genetic screens and target identification through several domains, including a functional genomics stream for perturbation screens; a genomics group focused on target maturation and precision medicine; and a smaller computational biology and AI unit that integrates multimodal data from human and preclinical studies. High-content imaging connects these activities by providing phenotypic context for genetic or pharmacological perturbations.

Contrasting fluorescence imaging with brightfield microscopy, Gilboa outlined the traditional strengths of each technique. Fluorescent assays – particularly cell painting – provide high contrast and strong signal-to-noise ratios, with stains that target specific cellular compartments such as nuclei and mitochondria. This specificity, together with clean optics, makes fluorescence the standard for phenotypic screening.

Brightfield, by contrast, produces low-contrast images of live cells, with few visible features alongside debris, bubbles or other plate artefacts. Out-of-focus light and reflections from plastic surfaces can dominate, leaving images that appear uninformative at first glance. On paper, brightfield seems ill-suited to high-content analysis.

“Brightfield has sat in the shadow of fluorescence for decades,” said Dr Gilboa.

“We wanted to test whether modern deep learning, combined with simple optical changes, could extract more information than can be perceived by the human eye,” he added.

Gilboa then presented three case studies:

- Label-free ‘paintless’ cell painting

- Virtual staining for lipid droplet analysis during adipogenesis

- Prediction of insulin-stimulated GLUT4 translocation from brightfield-derived features

Gilboa detailed how traditional cell painting has become the preferred method for phenotypic screening without a predefined readout, because multiplexed fluorescent stains capture a wide range of cellular states. He noted, however, that the method has limitations. It requires fixation, so it cannot track live-cell dynamics. It occupies all fluorescence channels, leaving no capacity for pathway-specific labels, and – importantly – it destroys cells during imaging, preventing downstream analyses such as quantitative polymerase chain reaction, sequencing or metabolomics.

To overcome these restrictions, the Novo Nordisk team developed what it calls ‘paintless cell painting’ or ‘paintless profiling’. The technique improves brightfield contrast by defocusing slightly from the nominal focal plane. Using a 20× objective, the microscope acquires several z-planes, each a few micrometres apart. Out-of-focus planes lose resolution but gain contrast for whole-cell outlines and structures.

For deep learning, this trade-off is acceptable, because convolutional neural networks can interpret coarse morphological information even when fine detail is blurred. ‘Convolutional’ here carries its mathematical sense: a convolution is an operation that combines two functions to produce a third, expressing how the shape of one is modified by the other. In discrete form, it is simply a weighted sum of neighbouring values.
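To make the idea concrete, a one-dimensional discrete convolution – the operation such networks apply across an image, one dimension at a time – can be written as a weighted sum of neighbouring values. This toy sketch is illustrative only and is not drawn from the talk:

```python
def convolve1d(signal, kernel):
    """Discrete 1-D convolution (valid mode): each output value is a
    weighted sum of neighbouring input values, with the kernel reversed
    as the mathematical definition requires."""
    k = list(reversed(kernel))
    n_out = len(signal) - len(kernel) + 1
    return [
        sum(signal[i + j] * k[j] for j in range(len(k)))
        for i in range(n_out)
    ]

# A simple gradient kernel highlights intensity changes, much as an
# early network layer responds to edges such as cell outlines.
grad = convolve1d([1, 2, 3, 4, 5], [1, 0, -1])  # → [2, 2, 2]
```

A 2-D convolution applies the same weighted-sum idea over small image patches, which is why coarse, blurred morphology is still usable input.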

To adapt the data for pretrained networks, the team maps multiple z-planes into three channels to create a pseudo-colour image suitable for standard image-recognition architectures such as Google’s Xception – a convolutional neural network architecture. By removing the final classification layer, the network serves as a feature extractor, producing a high-dimensional embedding vector for each image tile. Rather than downsampling, they process the image in tiles and summarise patch-level embeddings with statistics such as the median or standard deviation. These embeddings become numerical fingerprints of cell morphology and state.
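The tiling-and-pooling step described above can be sketched in miniature. The function names and the toy `embed` stand-in below are illustrative assumptions – in the actual pipeline the embedding comes from Xception with its classification head removed – but the structure (tile, embed, summarise with median and standard deviation) follows the account in the talk:

```python
import statistics

def tile(image, tile_size):
    """Split a 2-D image (a list of rows) into non-overlapping square tiles."""
    h, w = len(image), len(image[0])
    return [
        [row[x:x + tile_size] for row in image[y:y + tile_size]]
        for y in range(0, h - tile_size + 1, tile_size)
        for x in range(0, w - tile_size + 1, tile_size)
    ]

def embed(tile_):
    """Stand-in for a pretrained network with its classification head
    removed; a real pipeline would return a high-dimensional vector."""
    pixels = [p for row in tile_ for p in row]
    return [sum(pixels), max(pixels)]

def summarise(embeddings):
    """Pool per-tile embeddings into one image-level profile by keeping
    the median and standard deviation of each embedding dimension."""
    profile = []
    for values in zip(*embeddings):
        profile.append(statistics.median(values))
        profile.append(statistics.stdev(values))
    return profile

image = [[float((x + y) % 7) for x in range(8)] for y in range(8)]
profile = summarise([embed(t) for t in tile(image, 4)])
```

Pooling with robust statistics rather than downsampling means no tile's fine-grained information is discarded before the summary is taken.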

The first application involved an inflammatory signalling assay. Human endothelial-like cells were transfected (the process of deliberately introducing foreign nucleic acid) with small interfering RNA (siRNA), then stimulated with tumour necrosis factor alpha and stained for the surface adhesion molecules VCAM1 and ICAM1. Non-targeting siRNA controls showed strong staining after cytokine treatment, while knockdown of the tumour necrosis factor receptor removed the signal almost completely. Specific knockdown of VCAM1 or ICAM1 eliminated only the relevant stain. The plates also included a lethal control targeting PLK1 and other siRNA donors to generate a range of morphological effects.

When the team profiled these wells with paintless brightfield cell painting, the embeddings captured meaningful structure. In a low-dimensional map, non-targeting controls separated from wells with genetic perturbations. PLK1 knockdown clustered distinctly, reflecting its strong morphological effect. VCAM1 and ICAM1 knockdowns appeared in separate regions, and the tumour necrosis factor receptor knockdown occupied its own region, consistent with biological differences. A classifier trained on the embeddings distinguished most perturbations with high accuracy, confirming that label-free brightfield images can encode the same biological distinctions that fluorescence imaging reveals.

The approach also uncovered subtler artefacts. Large-scale siRNA screens can show off-target effects from particular donors. In this case, the main structure of the data reflected the gene targeted, but some donors produced distinct signatures. Those were considered undesirable, because they indicated donor-specific artefacts. Donors whose features overlapped with the general population were preferred for subsequent large screens.

Using the same brightfield features, the team could also infer treatment status. A classifier predicted whether a field came from a tumour necrosis factor alpha-treated or untreated well with accuracy of about 80 per cent. For hit selection, the team converted classification probabilities into a continuous ‘inflammation score’ suitable for statistical testing. Combining this brightfield-derived metric with conventional fluorescence-based measurements revealed hits that either modality alone would have missed, producing a richer picture of the cellular response.
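The article does not specify how probabilities were mapped onto the continuous score, so the transform below is an assumption; the log-odds (logit) is a common choice because it turns a bounded probability into an unbounded quantity that behaves well in statistical tests:

```python
import math

def inflammation_score(p_treated, eps=1e-6):
    """Map a classifier's probability that a well is cytokine-treated
    onto a continuous, unbounded score (the log-odds). Note: the logit
    transform is an illustrative assumption, not the published method."""
    p = min(max(p_treated, eps), 1 - eps)  # clamp to avoid log(0)
    return math.log(p / (1 - p))

# Wells the classifier is confident about get large positive or
# negative scores; ambiguous wells sit near zero.
scores = [inflammation_score(p) for p in (0.05, 0.5, 0.95)]
```

Any monotonic transform would preserve the ranking of wells; the advantage of a score over a raw probability is simply that standard effect-size and significance machinery applies directly.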

The second case study followed adipogenesis over time using pooled clustered regularly interspaced short palindromic repeats (CRISPR) knockouts in pre-adipocyte-like cells. After antibiotic selection, the cells were induced to differentiate into adipocytes over two weeks and imaged at days 4, 6, 8, 11 and 14. Fluorescence stains marked nuclei, mitochondria and lipid droplets. Rather than treat brightfield as a supplementary channel, the researchers used it to train a virtual staining model that predicted lipid droplet fluorescence directly from brightfield.

The model – based on a U-Net convolutional neural network – took three brightfield planes as input and produced a single synthetic fluorescent image of lipid droplets. Training used day-14 data, when morphology was stable. On held-out day-14 images, the model reproduced droplet shape and intensity convincingly. When applied to earlier stages, however, it generated false positives because it had not seen early-stage cells during training. Gilboa’s team corrected this by combining classical segmentation and elastic-net modelling to define where droplets could exist, producing more realistic ‘virtual droplets’. From these, they extracted features such as droplet size and texture.

An unexpected benefit of this pipeline appeared when the team compared replicates. The original fluorescence data showed a batch effect between two supposedly identical experiments, whereas the virtual brightfield readout did not. The variation therefore arose from staining, not biology. When CRISPR perturbations were plotted in feature space, genes known to reduce adipogenesis formed a continuum of less mature states, while essential genes lay elsewhere. The standard positive control occupied a distant region, suggesting an atypical mechanism rather than a simple enhancement of lipid accumulation.

Examining lipid droplet trajectories clarified this behaviour. In non-targeting controls, droplet numbers increased until day 11 before hitting a plateau. In positive controls, droplet numbers levelled out earlier, but droplet area continued to grow until day 14. This implied an upper limit on droplet numbers, after which further lipid accumulation resulted mainly from fusion and enlargement.

By fitting slopes to mean droplet area across timepoints, the team generated an ‘adipogenesis slope’ that responded to both loss- and gain-of-function perturbations. Reduced slopes indicated impaired adipogenesis, higher slopes indicated enhancement, and some essential genes showed distinct trajectories. Longitudinal analysis thus provided biological resolution that static endpoints could not match.
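A slope of this kind is just an ordinary least-squares fit of mean droplet area against imaging day. The sketch below uses the days reported in the talk but entirely hypothetical area values, to show how a control and an impaired knockout would separate:

```python
def least_squares_slope(days, values):
    """Ordinary least-squares slope of values against imaging day -
    one such slope per perturbation gives the 'adipogenesis slope'."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, values))
    den = sum((x - mean_x) ** 2 for x in days)
    return num / den

days = [4, 6, 8, 11, 14]  # imaging days from the study design
# Hypothetical mean droplet areas: a control whose droplets keep
# growing versus a knockout with impaired adipogenesis.
control_slope = least_squares_slope(days, [10, 16, 22, 31, 40])
knockout_slope = least_squares_slope(days, [10, 11, 12, 13, 14])
```

Because the fit uses all five timepoints, it is less sensitive to noise at any single day than an endpoint comparison would be.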

The final case study involved a functional assay of insulin-stimulated glucose transporter type 4 (GLUT4) translocation. In adipocytes and muscle cells, GLUT4 resides in intracellular granules in the basal state. Insulin stimulation moves it to the plasma membrane, increasing glucose uptake. The team used a dual-reporter construct with green fluorescent protein on the cytosolic side of GLUT4 and an HA epitope – a short peptide tag derived from influenza virus haemagglutinin – in an extracellular loop. Binding of an anti-HA antibody to surface-exposed transporters gave a quantitative fluorescent readout of translocation. Generic stains for nuclei, mitochondria and cytoskeleton provided morphological context.

The team screened about 700 genes using small interfering RNA and CRISPR perturbations, acquiring multiparametric images. They first tested whether morphology alone could predict the HA-based functional readout. A regression model trained on CellProfiler-derived features reproduced both the dynamic range and the rank order of positive and negative controls. Analysis of feature importance showed that cell morphology contributed most, followed by nuclear shape and mitochondrial texture, consistent with insulin’s known cellular effects.

Encouraged by this result, the researchers repeated the analysis with brightfield as the only imaging modality.

To the naked eye, the brightfield images showed no obvious difference between insulin-treated and untreated wells. However, a segmentation model (trained by one of Novo Nordisk’s summer interns) identified nuclei and cell boundaries, from which the team derived morphological descriptors. When they retrained the predictive model using only brightfield-derived features, it again reproduced the functional response pattern. Though visually featureless, label-free brightfield images contained sufficient information about cell organisation to predict a complex membrane translocation event.

“Brightfield will not replace fluorescence everywhere,” said Dr Gilboa.

“But we have shown that it can support phenotypic profiling, virtual staining and functional prediction at scales where labelled assays are either too costly or too destructive,” he added.

Gilboa concluded that brightfield, despite its low contrast and lack of inherent specificity, can act as a viable, non-perturbative alternative to cell painting, especially when defocused multi-plane images are analysed with pretrained deep networks. Virtual staining can convert brightfield into interpretable pseudo-fluorescent channels that reveal dynamic processes such as lipid droplet fusion during adipogenesis. Feature extraction from brightfield can also underpin predictive models of complex cellular responses, such as insulin-stimulated GLUT4 translocation.

For Novo Nordisk, the goal now is to deploy these brightfield-based techniques at much larger scales, potentially imaging genome-wide perturbation libraries entirely in brightfield and inferring multiple functional phenotypes computationally.


