Friday, October 31, 2025

Paper-Based Test Scans for Multiple Biomarkers in Human Serum

Researchers led by UCLA professor Aydogan Ozcan developed a deep learning-enabled biosensor for multiplexed, point-of-care (POC) testing of disease biomarkers. POC biosensors provide remote and resource-limited communities with an economical, practical alternative to centralized laboratory testing.
The UCLA-developed POC sensor includes a paper-based fluorescence vertical flow assay to simultaneously detect three biomarkers of acute coronary syndrome from human serum samples. The vertical flow assay is processed by a low-cost mobile reader, which quantifies the target biomarkers through trained neural networks.

According to the researchers, the competitive performance of the multiplexed computational fluorescence vertical flow assay, along with its inexpensive, paper-based design and handheld footprint, gives the POC sensor promise as a platform to expand access to diagnostics in resource-limited settings.

“Compared to a commonly used linear calibration method, our deep learning-based analysis benefits from the function approximation power of neural networks to learn nontrivial relationships between the multiplexed fluorescence signals from the paper-based sensor and the underlying analyte concentrations in serum,” researcher Artem Goncharov said. “As a result, we have accurate quantitative measurements for all three biomarkers of interest, despite the background noise present in clinical serum samples.”
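
The idea of replacing a linear calibration curve with a learned signal-to-concentration mapping can be sketched as follows. This is an illustrative stand-in only: the data are synthetic, the nonlinear relationship is invented, and scikit-learn's MLPRegressor substitutes for the authors' actual network.

```python
# Sketch of neural-network calibration for a multiplexed assay.
# All data and the signal->concentration mapping below are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training set: 200 "cartridges", each giving mean fluorescence
# from several sensing spots (here 9 signals -> 3 biomarker concentrations).
signals = rng.uniform(0.0, 1.0, size=(200, 9))
# A made-up nonlinear ground truth standing in for the assay chemistry.
conc = np.stack([
    5.0 * signals[:, 0] ** 2 + 0.5 * signals[:, 3],
    3.0 * np.sqrt(signals[:, 1]) + 0.2 * signals[:, 4],
    4.0 * signals[:, 2] * signals[:, 5],
], axis=1)

scaler = StandardScaler().fit(signals)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(scaler.transform(signals), conc)

# Inference on a new cartridge's fluorescence signals.
new_signals = rng.uniform(0.0, 1.0, size=(1, 9))
predicted = model.predict(scaler.transform(new_signals))
print(predicted.shape)  # one concentration estimate per biomarker
```

Unlike a per-biomarker linear fit, a single multi-output regressor of this kind can, in principle, learn cross-channel corrections for background and cross-reactivity.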

Unlike lateral flow assays, which are the most common type of POC test, assays using the vertical flow of samples through stacked paper layers enable the arrangement of sensing regions in a 2D or 3D array and can achieve multiplexing with tens or even hundreds of independent testing channels represented by different affinity capture molecules. The vertical flow design of the POC sensor from the UCLA researchers has room for multiple test regions, with up to 100 individual test spots within a single disposable cartridge.

“This design essentially allows us to integrate tens of different POC sensors into a single cassette and perform multiplexed diagnostics tests in parallel with the same low-cost paper-based sensor,” Ozcan said.

The researchers used conjugated polymer nanoparticles (CPNs) — fluorescent labels with tunable emission and excitation properties and with minimally overlapping excitation and emission peaks — to design the fluorescence vertical flow assay. The CPNs have 480-nm excitation and 610-nm emission peaks, which helped the team reduce the strong autofluorescence background from the paper substrate.

The excitation energy transfer in CPNs takes place across the whole polymer backbone, producing an amplified emission brighter than that of quantum dots (QDs). CPNs are also more stable on porous paper layers, exhibiting less photobleaching, and their larger size compared to QDs leads to improved luminescence.

Using human serum samples to quantify three cardiac biomarkers — myoglobin, creatine kinase-MB, and heart-type fatty acid binding protein — the researchers validated the fluorescence vertical flow assay platform. The assay achieved a limit of detection below 0.52 ng/mL for all three biomarkers, with minimal cross-reactivity.

Biomarker concentration quantification, using the assay coupled to neural network-based inference, was blindly tested using 46 individually activated cartridges and human serum samples. The results showed a high correlation between the fluorescence vertical flow assay and the ground truth concentrations obtained through standard laboratory benchtop testing, with linearity greater than 0.9 and a coefficient of variation below 15% for all three biomarkers.
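
The two validation metrics quoted above can be computed as follows; the numbers are invented placeholders, not the study's data.

```python
# Illustration of the two validation metrics: linearity between assay
# readout and reference concentrations, and the coefficient of variation
# across replicate measurements. All values below are invented.
import numpy as np

reference = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # lab ground truth (ng/mL)
assay = np.array([1.1, 1.9, 4.3, 7.6, 16.5])      # vertical flow assay readout

linearity = np.corrcoef(reference, assay)[0, 1]   # Pearson correlation

replicates = np.array([4.3, 4.0, 4.6, 4.2])       # repeated reads of one sample
cv_percent = 100.0 * replicates.std(ddof=1) / replicates.mean()

print(round(linearity, 3), round(cv_percent, 1))
```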

The simple-to-operate POC sensor, the researchers said, involves only three injection steps performed through a single loading inlet. The steps can be executed by a minimally trained technician using a custom operation kit. The assay uses 50 µL of serum sample per patient and takes under 15 minutes to complete, which is on the same scale as, for example, COVID-19 rapid antigen tests that take between 15 and 30 minutes.

Bio Photonics Research Award

Visit: biophotonicsresearch.com
Nominate Now: https://biophotonicsresearch.com/award-nomination/?ecategory=Awards&rcategory=Awardee

#MeatAnalysis #FluorescenceTech #FoodQuality #FoodSafety #SpectroscopyInFood #MeatAuthentication #RapidDetection #FoodScience #MeatFreshness #MolecularDetection #FoodIndustryInnovation #NonDestructiveTesting #FoodMonitoring #SpectroscopyApplications #QualityControl #AdvancedSpectroscopy #MeatSpoilageDetection #FoodIntegrity #SmartFoodTesting #RealTimeAnalysis #FoodAuthenticity #FoodSafetyInnovation #SpectroscopyResearch #NextGenFoodSafety #InnovativeFoodScience,

Tuesday, October 28, 2025

Pensievision's 3D Imaging Tech Shines at Luminate Finals Competition

Pensievision, a creator of 3D imaging technology for industrial applications and medical devices, received the Company of the Year Award at the Luminate NY Finals 2025, held this week in Rochester. Along with the title, the company received a $1 million investment from New York State through the Finger Lakes Forward Upstate Revitalization Initiative.

Pensievision's solution delivers 3D imaging for demanding environments, from medical diagnostics and factory floors to orbital missions. Its technology combines a miniaturized single-lens setup, artificial intelligence, and astronomy-inspired optics to enable high-precision insights in tight or complex environments where bulky, multi-lens or laser-based systems fail.

“It’s a compact, affordable camera that does very high-accuracy 3D mapping of surfaces, and it’s compact enough that it can fit on anything from a robotic arm to an endoscope that goes inside the body,” said Pensievision CTO Joseph Carson. The technology has been demonstrated in both applications and will soon be demonstrated on an upcoming mission to the International Space Station, Carson said.

The company's platform utilizes a Corning Varioptic liquid lens to rapidly capture images at varying focus settings, producing a 3D image through focus mapping. AI-driven software then refines the raw, noisy 3D mapping into a high-resolution quantitative map. The output map is precise and accurate enough to be used in mission-critical applications.
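
As a rough illustration of the focus-mapping principle (not Pensievision's proprietary pipeline), a coarse depth index can be recovered per pixel by sweeping focus and scoring local sharpness:

```python
# Toy depth-from-focus: sweep focus (as a liquid lens would), score
# per-pixel sharpness in each frame, and take the index of the sharpest
# frame as a coarse depth value. Synthetic data; illustrative only.
import numpy as np

def depth_from_focus(stack: np.ndarray) -> np.ndarray:
    """stack: (n_focus, H, W) frames captured at successive focus settings.
    Returns an (H, W) map of the focus index with maximum local sharpness."""
    f = stack.astype(float)
    # Discrete Laplacian per frame as a sharpness measure (pure NumPy).
    lap = (np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1)
           + np.roll(f, 1, axis=2) + np.roll(f, -1, axis=2) - 4 * f)
    return np.argmax(lap ** 2, axis=0)

# Synthetic stack: frame k is sharp (textured) only where depth index = k.
rng = np.random.default_rng(1)
stack = np.full((3, 60, 60), 0.5)
for k in range(3):
    stack[k, 20 * k:20 * (k + 1), :] = rng.random((20, 60))

depth = depth_from_focus(stack)
print(depth[10, 30], depth[30, 30], depth[50, 30])  # 0 1 2
```

A production system would add smoothing of the sharpness scores and sub-index interpolation; the AI post-processing described above presumably plays that refining role.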

Pensievision plans to use the follow-on funding to anchor its growth in the Rochester region by engaging with local supply chains, hiring engineering talent from universities in the area, and partnering with Rochester-based design and manufacturing firms. According to Carson, there are at least three firms in the newly wrapped cohort with which he envisions Pensievision fostering relationships.

“There’s a lot of collaborative efforts between the companies in this cohort because they’re working in similar fields with different technologies,” said Luminate’s managing director, Sujatha Ramanujan. “It’s been a nice outcome.”

amPICQ, originally from Hyderabad, India, and now located in Rochester, was awarded the Outstanding Graduate Award and $500,000 in follow-on funding. Its team is designing and developing photonic integrated circuits (PICs) to make quantum-safe security both practical and accessible across the quantum communications, datacom, and telecom industries.

For the first time, three companies tied for the Distinguished Graduate Award, each earning $200,000. Oblate Optics, from San Diego, produces ultra-thin lenses that keep laser beams in perfect focus — even on curved or uneven surfaces — without the need to move or refocus the optics. Münster, Germany-based Pixel Photonics uses its waveguide-integrated design for superconducting nanowire single-photon detectors to support OEM integration across quantum communications, microscopy, medical diagnostics, and advanced sensing. SNOChip, from Princeton, N.J., is a developer of on-chip optical components, such as microlens arrays, computer-generated holograms, and metasurfaces, designed for seamless integration with semiconductor lasers and sensor chips.

Event attendees voted LirOptic as the Audience Choice, and the company earned $10,000 in follow-on funding.

The investments were presented after a panel of judges from the optics and photonics industry and venture capital community scored the participating companies based on their business pitches and due diligence completed during the seven-month accelerator program. The finals event marks the completion of the eighth year of the cohort-based program, which now includes more than 80 portfolio companies, carrying an estimated combined market value of $700 million. As required by the award, all winners of the competition will commit to establishing operations in the region for at least the next 18 months.

Since its inception, Luminate NY has invested $21 million in 85 startups. Collectively, they have created more than 210 jobs in New York State and spent $21.6 million on more than 140 projects with regional design, manufacturing and supply chain companies. Twenty-two international companies have relocated to New York, and 41 portfolio companies have women in the C-suite.

Applications are now being accepted for round nine, through Jan. 12, 2026. Teams will receive $100,000 in funding upon program start, with the expectation that $50,000 will be used to engage resources in the Finger Lakes region.



Thursday, October 23, 2025

Laser-Induced Protein Detection Speeds Diagnosis of Disease

Researchers at Osaka Metropolitan University have developed an optical alternative to immunoassays and other methods used for protein analysis. The alternative method provides rapid, highly sensitive detection of proteins through laser irradiation.

According to the researchers, the light-induced acceleration-based technique could improve detection limits and quantitative measurement, using small amounts of biological sample and a simple process, to aid in the ultra-early diagnosis of cancer, dementia, and infectious diseases.

Conventional techniques for protein detection, such as enzyme-linked immunosorbent assay (ELISA), require several hours and involve multiple steps, in addition to being less sensitive than the recently developed light-induced method.

In experiments, the researchers demonstrated their approach using only three minutes of laser irradiation, achieving specific, ultrafast detection with a sensitivity more than 100× that of conventional protein detection methods. Further, the researchers showed that the technique could enable diagnoses with only a small amount of body fluid — such as a single drop of blood.

To develop an optical method that controls the antigen-antibody reaction and detects trace amounts of proteins, the researchers conducted basic research on the synergistic effects of optical pressure and fluid pressure, and on how to circumvent the effect of heat. They introduced probe particles modified with antibodies that selectively bound the target proteins, confined the proteins and probe particles in a microchannel, and irradiated the channel.

The probe particles were 2-μm-diameter polymer beads chosen for their strong light scattering and minimal heat generation under infrared laser irradiation.

The researchers then used light-induced acceleration to trap antigen-antibody reactions of trace amounts of proteins at the interface between solid and liquid (i.e., the bottom of the channel, which contained liquid samples).

After tuning the laser irradiation area to be comparable to the confinement geometry, the researchers irradiated a few hundred milliwatts of laser light, defocused to a spot size of approximately 70 μm in the microchannel, which had a width of approximately 100 μm.

The laser-assisted optical pressure on the proteins and probe particles increased the probability of interaction and the acceleration of antigen-antibody reactions. The collisional probability of the target molecules and probe particles was enhanced through optical force and fluidic pressure.

The “scattering force,” a component of the optical force, was enhanced to ensure accumulating force without any thermal damage to the antibody-modified probe particles and target proteins.

After testing various conditions, the researchers found that the antigen-antibody reaction was efficiently accelerated by adjusting the flow rate to 100 to 200 μm/s.

A black region formed in a portion of the assembled structure obtained by laser irradiation, because the optical transmission was blocked by the multilayered structure formation. The researchers found that the area of this region was positively correlated with the protein concentration. A model calculation, in which binding by an antigen-antibody reaction was expressed using cohesive energy, confirmed theoretically that the formation of the multilayered structure was caused by optical force and pressure-driven flow.

When the researchers irradiated the microchannel with IR laser light for three minutes, they were able to detect trace amounts of proteins at a sensitivity level approximately 100× higher than that of conventional protein testing. The researchers achieved rapid measurement of trace amounts on the order of tens of attograms (1 ag = 10⁻¹⁸ g, or one quintillionth of a gram). They measured target protein amounts as small as one twenty-quadrillionth of a gram after only three minutes of irradiation.
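
As a quick unit check on the quantities above (illustrative arithmetic only), "one twenty-quadrillionth of a gram" does fall in the tens-of-attograms regime:

```python
# Convert "one twenty-quadrillionth of a gram" to attograms.
AG_PER_GRAM = 1e18  # 1 g = 10^18 ag

mass_g = 1 / 2e16           # one twenty-quadrillionth of a gram
mass_ag = mass_g * AG_PER_GRAM
print(round(mass_ag, 6))  # 50.0
```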

The researchers applied the principle of light-induced acceleration to several different types of membrane proteins. In experiments, the optical technique demonstrated ultrafast, specific detection of target proteins with a smaller sample volume and higher sensitivity than conventional techniques. For example, in one type of membrane protein, the researchers detected 47 to 750 ag of target proteins, without any pretreatment, from a 300-nL sample after just three minutes of laser irradiation.

A progressive collaborative study on cancer marker measurement using patient-derived samples is underway as part of the Future Society Creation Project of the Japan Science and Technology Agency. An initial validation of the light-induced acceleration technique in clinical practice is planned, with the aim of developing a basic system within a few years, the researchers said.

Since antigen-antibody reaction is a common biochemical reaction, the technology has the potential to be used not only in the medical field but also in various industrial fields, such as testing for allergens in food and drink, detecting biological substances in the environment, and testing for intermediate products in the pharmaceutical process.


Wednesday, October 22, 2025

Team Applies Synthetic Wavelength Imaging to Skin Cancer Diagnoses, Treatment

Researchers at the University of Arizona will pursue the development of optical imaging technologies capable of deeper, clearer views into biological tissues, such as skin or soft tissue linings within the body. Led by Florian Willomitzer and Clara Curiel-Lewandrowski, the team is one of just four groups nationwide to receive funding through the Advancing Non-Invasive Optical Imaging Approaches for Biological Systems initiative.

The group will receive nearly $2.7 million from the National Institutes of Health (NIH) Common Fund Venture Program. The final award amount is pending successful completion of milestones and availability of funds.

The team's noninvasive approach is based on synthetic wavelength imaging (SWI), which uses two separate illumination wavelengths to computationally generate one virtual, “synthetic” imaging wavelength. Due to the longer, synthetic wavelength, the signal is more resistant to light scattering inside tissue. At the same time, researchers can take advantage of the higher contrast information provided by the original illumination wavelengths.
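
The synthetic wavelength generated from two illumination wavelengths follows the standard two-wavelength interferometry relation Λ = λ₁λ₂ / |λ₁ − λ₂|. The snippet below simply evaluates it; the example wavelengths are chosen for illustration and are not taken from the project.

```python
# Synthetic wavelength from two illumination wavelengths (example values).
def synthetic_wavelength(lam1_nm: float, lam2_nm: float) -> float:
    """Lambda = lam1 * lam2 / |lam1 - lam2|, all in nanometers."""
    return lam1_nm * lam2_nm / abs(lam1_nm - lam2_nm)

# Two near-infrared wavelengths 1 nm apart yield a ~0.73 mm synthetic
# wavelength, far less sensitive to tissue scattering than either carrier.
lam_nm = synthetic_wavelength(854.0, 855.0)
print(round(lam_nm / 1e6, 3))  # 0.73 (mm)
```

The closer the two carriers are in wavelength, the longer (and more scattering-resistant) the synthetic wavelength becomes, which is the tunability the approach exploits.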

“This project specifically focuses on nonmelanoma skin cancers, such as basal cell carcinoma or squamous cell carcinoma,” said principal investigator and project lead Willomitzer, an associate professor of optical sciences. “Those skin cancers can display significantly different imaging contrast properties than melanoma, which poses a unique challenge to the development of new 'deep' imaging technologies.”

Current skin cancer imaging methods, such as confocal microscopy or optical coherence tomography, use optical light with wavelengths in the visible to near-infrared spectrum. They offer superior contrast and resolution at shallow tissue depths, but their relatively short imaging wavelengths make them susceptible to light scattering deep inside biological tissue. Longer wavelength methods, like ultrasound or hybrid approaches, can image deeper layers, but they often lack resolution or sufficient contrast needed for certain cancer types.

“From a translational standpoint, this limitation is particularly important,” said Curiel-Lewandrowski, the other principal investigator, chair of the Department of Dermatology at the College of Medicine – Tucson and co-director of the Skin Cancer Institute at the University of Arizona Cancer Center. “Patients with nonmelanoma skin cancers often present with lesions that vary widely in size, depth and pattern of invasion.”

According to Curiel-Lewandrowski, imaging tools must be versatile enough to accurately assess tumor margins at the time of diagnosis, while also being robust and reliable enough to monitor how lesions respond over the course of treatment.

“To achieve this, we need tunable imaging capabilities that balance depth penetration with resolution and imaging contrast — something that current technologies cannot reliably provide,” she said.

The NIH's Common Fund Advancing Non-Invasive Optical Imaging Approaches for Biological Systems Venture Initiative seeks to overcome these and other limitations through technology development that will allow light to deeply image through tissue non-invasively at high resolution. Enhanced imaging techniques can make possible earlier detection of health conditions, more precise evaluation of cellular and tissue health, and advancements in non-invasive procedures to replace surgery. The NIH initiative seeks to produce highly detailed images that can reveal structures ranging from individual cells to larger features of living tissues. It also aims to record rapid biological processes, such as muscle contractions and pulse, with enough speed to capture them in real time.

“Synthetic wavelength imaging's resilience to scattering in deep tissue while preserving high tissue contrast at the optical carrier wavelengths is a rare combination,” Willomitzer said. “By pairing this property with advanced computational evaluation algorithms, our approach aims to break free from the conventional resolution-depth-contrast tradeoff.”

The team aims to bridge a critical gap in skin cancer care by advancing this new technology, Curiel-Lewandrowski said.

“Our goal is to translate these imaging advances into clinical practice,” she said. “If we can detect invasive lesions earlier, define tumor margins more precisely and monitor response to non-invasive treatments in real time, we can maximize the effectiveness of emerging therapeutic approaches. This will also allow us to tailor intervention length and dosing individually to each patient.”


Tuesday, October 21, 2025

Computational Method Streamlines Spectral Imaging, Cuts Costs

The versatility and precision of hyperspectral imaging make it an indispensable tool in numerous scientific and industrial applications, from medical imaging to environmental monitoring to quality control. But traditional hyperspectral imaging systems can be costly, cumbersome, and challenging to scale.

A computational spectral imaging system from the University of Utah provides a fast, inexpensive, efficient alternative to capturing high-quality spectral data. The system, which the team tested across biomedical, food-quality, and astronomical use cases, could establish a new framework for high-speed, high-fidelity spectral imaging with broad translational potential.

The system uses a diffractive filter array to project spectral information into the spatial domain, enabling the capture of a single-channel, 2D image that contains both spatial and spectral data. This 2D image, called a diffractogram, is computationally decoded to reconstruct a spectral image cube with 25 spectral bands in the 440-800 nm range. Each of the 25 separate images that comprise the cube represents a distinct slice of the visible spectrum.

The team modeled and designed the diffractive filter array and the algorithm to reconstruct the hyperspectral images from the raw data captured by the sensor.

By encoding a scene into a single, compact 2D image rather than a massive 3D data cube, the camera makes hyperspectral imaging faster and more efficient. The fast encoding enables the system, which is small enough to fit into a cellphone, to take high-speed, high-definition video.

“One of the primary advantages of our camera is its ability to capture the spatial-spectral information in a highly compressed, two-dimensional image instead of a three-dimensional data cube, and use sophisticated computer algorithms to extract the full data cube at a later point,” professor Apratim Majumder said. “This allows for fast, highly compressed data capture.”

The current prototype camera can take images at just over one megapixel in size (1304 × 744 pixels) and break them down into 25 separate wavelengths across the spectrum. The diffractive element, placed directly over the camera’s sensor, encodes spatial and spectral information for each pixel on the sensor.

“We introduce a compact camera that captures both color and fine spectral details in a single snapshot, producing a ‘spectral fingerprint’ for every pixel,” professor Rajesh Menon said.

To demonstrate the camera’s capabilities, the researchers applied standard inferencing techniques to reconstructed spectral images across various sectors. The system demonstrated a spectral reproduction error of less than 15% across the 440-800 nm band.

The researchers used the camera to classify lung and trachea tissues in ex vivo chicken lung images, predict the freshness of strawberries, and mimic the spectral filters that are used in stellar imaging. These experiments highlighted the system’s potential in medical diagnostics, food-quality assessment, and astronomical observations.

The diffractive, computational spectral imaging system offers several advantages. It provides snapshot capability, eliminating issues with scan-and-stitch methods. The diffractogram serves as a form of optical compression, efficiently storing spatio-spectral content in a compact, information-rich 2D array. This is particularly beneficial for applications with limited storage or transmission bandwidth, such as airborne or satellite imaging.

“Satellites would have trouble beaming down full image cubes, but since we extract the cubes in post-processing, the original files are much smaller,” Majumder said.

The system also provides the flexibility to perform reconstructions offline and on-demand, after data capture, for scenarios with limited on-board computational resources. Also, since the diffractogram encodes spectral information continuously, it allows for information to be selected on an application-specific spectral basis, yielding smaller image cubes and faster, more stable reconstructions.

Compared to traditional hyperspectral imaging systems, the computational spectral imager’s streamlined approach reduces costs significantly.

“Our camera costs many times less, is very compact and captures data much faster than most available commercial hyperspectral cameras,” Majumder said. “We have also shown the ability to post-process the data as per the need of the application and implement different classifiers suited to different fields such as agriculture, astronomy, and bioimaging.”

Hyperspectral cameras have long been used in agriculture, astronomy, and medicine, where subtle differences in color can make a big difference. But these cameras have historically been bulky, expensive, and limited to producing still images.

“When we started out on this research, our intention was to demonstrate a compact, fast, megapixel-resolution hyperspectral camera, able to record highly compressed spatial-spectral information from scenes at video-rates, which did not exist,” Majumder said.

“This work demonstrates a first snapshot, megapixel, hyperspectral camera,” he said. “Next, we are developing an improved version of the camera that will allow us to capture images at a larger image size and an increased number of wavelength channels, while also making the nanostructured diffractive element much simpler in design.”

By making hyperspectral imaging cheaper, faster, and more compact, the computational camera advances spectral imaging technology and potentially opens the way for technologies that could change the way the world and its contents are seen.


Friday, October 17, 2025

Handheld Sensor Detects Markers for Early-Stage Alzheimer’s

A newly developed, handheld optical sensor could make Alzheimer’s disease easier to detect in its early stages, when treatments for the disease are most effective. The photonic resonant sensor is the result of a collaboration among researchers at the University of York, the University of Strathclyde, and the University of São Paulo.

The team developed a sensor that can simultaneously detect two of the amyloid peptides that are indicators for Alzheimer’s, at the levels clinically required for diagnosis. The capability to simultaneously detect beta amyloid 40 and beta amyloid 42 in the blood opens a route to quantifying and analyzing their ratio, enabling the progression of the disease to be tracked. Single biomarker detection is insufficient for clinical diagnosis.
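
Once both peptides are quantified, the ratio readout described here is simple arithmetic. The concentrations below are illustrative only and carry no clinical meaning; a lower beta amyloid 42/40 ratio is generally associated with amyloid pathology, but thresholds are assay-specific.

```python
# Illustrative Abeta42/Abeta40 ratio computation. Values are invented
# placeholders, not clinical data or guidance.
def abeta_ratio(ab42_pg_ml: float, ab40_pg_ml: float) -> float:
    """Ratio of beta amyloid 42 to beta amyloid 40 concentrations."""
    return ab42_pg_ml / ab40_pg_ml

print(round(abeta_ratio(35.0, 400.0), 4))  # 0.0875
```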

Photonic resonant sensors allow for the label-free detection of specific molecules, in addition to surface imaging and the multiplexing of different biomarkers. They are compatible with low-cost fabrication processes and can be implemented with minimal optoelectronic elements for the signal readout.

Detecting peptides, however, remains a challenge for this class of sensors, mainly due to the low molecular weight of the peptides. Amyloid peptides are small and occur at low concentrations.

To ensure a high-performing sensor that could detect peptides in the blood, the researchers integrated gold nanoparticles, in a dimer configuration, with a dielectric nanopillar photonic crystal structure. Compared with the team’s previous sensor design, which used parallel grooves, the gold nanoparticles dramatically amplified the optical signal used to detect Alzheimer’s disease biomarkers.

“This new design has allowed us to detect the amyloid biomarkers at the ultralow, clinically relevant concentrations we need, which our previous sensor couldn’t quite reach,” researcher Steven Quinn said. “The added bonus is that the technology remains scalable, mass-producible, and we aim for it to be as simple to use as a Covid test.”

The sensor design combines high resonance Q-factor, amplitude, and sensitivity, leading to a high figure of merit. “When you compare different technologies in photonics, you use a ‘figure of merit,’ which is like a scorecard that takes into account key parameters like sensitivity and signal-to-noise ratio,” Quinn said. “Our new sensor’s scorecard outperforms competing technologies.”
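
A common definition of the figure of merit for resonant photonic sensors is sensitivity divided by resonance linewidth (FOM = S/FWHM). Whether the team uses this exact definition is not stated in the text, and the numbers below are placeholders, not the sensor's specifications.

```python
# Common resonant-sensor figure of merit: sensitivity / linewidth.
# S in nm per refractive index unit (nm/RIU), FWHM in nm; FOM in RIU^-1.
def figure_of_merit(sensitivity_nm_per_riu: float, fwhm_nm: float) -> float:
    return sensitivity_nm_per_riu / fwhm_nm

# A narrower (higher-Q) resonance improves the FOM at equal sensitivity:
print(figure_of_merit(200.0, 2.0))   # 100.0
print(figure_of_merit(200.0, 0.5))   # 400.0
```

This is why the combination of high Q-factor, amplitude, and sensitivity that the text describes translates directly into a high scorecard value.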

The sensor can detect beta amyloid 40 and beta amyloid 42 peptides in the same channel, which is relevant for assessing disease progress, and opens a route toward multiplexing. To achieve high selectivity and specificity in the sensor, the researchers used an immunoassay design approach.

The researchers are integrating the sensor technology into a handheld device. Potentially, this device could provide an indication of disease within seconds from a simple finger-prick of blood, at a projected cost of less than £100 per test. Low-cost, point-of-care testing could broaden accessibility to early Alzheimer’s testing and diagnosis, giving more patients access to treatments that are most effective in the initial stages of the disease.

“New Alzheimer’s treatments work by specifically targeting the sticky amyloid proteins that build up in the brain,” Quinn said. “For these drugs to be effective, doctors first need to confirm that a patient has this protein build-up — a condition known as amyloid positivity. A simple, scalable blood test could be the way to facilitate widespread access to these emerging treatments.”

Current methods for diagnosing Alzheimer’s disease, such as brain scans (PET/MRI) or invasive lumbar punctures, are costly, time-consuming, and are not readily accessible. Highly accurate, lab-based blood tests are now available, but they rely on large, expensive machinery, with a single test potentially costing thousands of pounds.

The next milestone for the team will be to validate the photonic sensor using blood samples from patients with Alzheimer’s and a healthy control group. This crucial phase will determine how effectively the sensor can differentiate between the two groups.

The new sensor technology could be used to detect other biomarkers and markers for other diseases. “The same principles and protocols can be used to detect a protein called phosphorylated tau, another key Alzheimer’s biomarker, as well as alpha-synuclein in Parkinson’s disease,” Quinn said. “We believe this could become a platform technology to help differentiate between various forms of dementia, which is a major challenge for clinicians.”

Although the team still needs to demonstrate the effectiveness of the sensor in patient samples, it believes that the photonic biosensor holds significant promise as a cost-effective tool to open the door to widely available testing for Alzheimer’s and other neurodegenerative diseases.

“Our vision is a device that is user-friendly for clinicians and can be deployed in healthcare settings around the world,” Quinn said.

Bio Photonics Research Award


Visit: biophotonicsresearch.com
Nominate Now: https://biophotonicsresearch.com/award-nomination/?ecategory=Awards&rcategory=Awardee

#MeatAnalysis #FluorescenceTech #FoodQuality #FoodSafety #SpectroscopyInFood #MeatAuthentication #RapidDetection #FoodScience #MeatFreshness #MolecularDetection #FoodIndustryInnovation #NonDestructiveTesting #FoodMonitoring #SpectroscopyApplications #QualityControl #AdvancedSpectroscopy #MeatSpoilageDetection #FoodIntegrity #SmartFoodTesting #RealTimeAnalysis #FoodAuthenticity #FoodSafetyInnovation #SpectroscopyResearch #NextGenFoodSafety #InnovativeFoodScience,

Thursday, October 16, 2025

Fiber Photometry Hastens Development of Alzheimer’s Disease Therapies






The use of mouse models to test new interventions is a cornerstone of Alzheimer’s disease therapeutic development.

Current preclinical evaluation of Alzheimer’s disease pathology relies mostly on post-mortem analyses of animal models, which limits researchers’ ability to follow the progression of the disease or the efficacy of treatments over time.

In search of a method to observe the development of the disease and its response to therapies in real time, researchers at the University of Strathclyde and the Italian Institute of Technology (IIT) investigated fiber photometry, an optical approach to monitoring neural activity in live animals. The researchers expanded the capabilities of in vivo fiber photometry, using it to examine the pathological features of an Alzheimer's disease mouse model in freely behaving conditions. This approach could help researchers uncover information about how Alzheimer's disease develops and enable more flexible testing of potential therapies.

Using a conventional, flat fiber-based photometry approach, the team confirmed in its initial experiments that amyloid plaque signals could be monitored across multiple depths in Alzheimer's disease mouse models in vivo, under anesthesia.

Instead of relying on genetically encoded sensors, the researchers implemented a non-genetic strategy, and injected the mice with a blood-brain-barrier-permeable fluorescent tracer, Methoxy-X04. The hydrophobic structure of this compound allows it to enter the brain, where it specifically binds to beta-sheets found within amyloid fibrils, allowing visualization of amyloid plaques in Alzheimer's disease models.

The team found that the depth profiles of the in vivo fluorescent signals correlated with the plaque density measured afterward in brain slices. A machine learning model could distinguish between the in vivo fluorescent signals of mice with and mice without amyloid plaques based on the depth profiles of their signals.
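The kind of depth-profile classification the team describes can be sketched with synthetic data. The profiles below are invented stand-ins (a mid-depth fluorescence bump playing the role of a plaque signal), and a simple nearest-centroid rule replaces whatever model the researchers actually trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: fluorescence-vs-depth profiles (10 depths each).
# "Plaque" mice get an added bump at mid depths; controls are flat noise.
depths = np.linspace(0, 1, 10)
bump = np.exp(-((depths - 0.5) ** 2) / 0.02)
plaque = rng.normal(0, 0.1, (20, 10)) + bump    # AD-model profiles
control = rng.normal(0, 0.1, (20, 10))          # healthy-control profiles

# Nearest-centroid classifier: label a new profile by the closer class mean.
centroids = np.stack([control.mean(0), plaque.mean(0)])  # 0 = control, 1 = plaque

def classify(profile):
    return int(np.argmin(np.linalg.norm(centroids - profile, axis=1)))
```

With these synthetic groups, a bump-shaped profile lands in the plaque class and a flat profile in the control class; the study's actual classifier and features are not specified here.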

The researchers then assessed whether tapered optical fibers would allow depth-resolved photometry for plaque signals in ex vivo tissue. Upon examination of brain tissue slices, they found that the tapered fibers reliably tracked plaque distribution.

After validating the tapered fiber-based photometry approach, the researchers chronically implanted the fibers in freely behaving mice. The implants revealed depth-specific increases in fluorescence after Methoxy-X04 injection in Alzheimer's disease model mice, but not in healthy controls. The technique also showed age-dependent signal increases consistent with disease progression.

By exploiting the photonic properties of tapered fibers, the researchers established depth-resolved photometry of amyloid plaque signals in vivo and ex vivo.

In contrast to existing methods, such as optoacoustic tomography, the optical fiber-based approach allows long-term monitoring of amyloid pathology across multiple deep brain regions in freely behaving animals. While the photometry technique cannot resolve individual plaques, it can provide a minimally invasive way to track pathological changes across time and across brain regions.

Amyloid plaques have long been recognized as a hallmark of Alzheimer's disease. Recent therapeutics targeting amyloid-β protofibrils or deposited amyloid plaques have proven effective in patients, and researchers have successfully translated early preclinical mouse data to the clinic. As such, evaluating new interventions in preclinical mouse models should continue to play an important role in accelerating future treatments for Alzheimer's disease.

The results of this research demonstrate the potential of using fiber optic photometry, which has been widely used in the neuroscience community, to monitor plaque signals in order to optimize therapeutic approaches and develop intervention strategies for Alzheimer's in a preclinical setting.


Tuesday, October 14, 2025

Hydrogel Improvements Boost Utility of Expansion Microscopy






Collaborators from Carnegie Mellon University, the University of Pittsburgh, and Brown University have described a microscopy technique and set of protocols that overcome a bottleneck in the expansion microscopy method. The collaborators developed “Magnify” as a variant of expansion microscopy that uses a hydrogel that retains a spectrum of biomolecules, broadens application to a variety of tissues, and expands samples up to 11× linearly, or approximately 1300-fold by volume.
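The two expansion figures quoted above are consistent with each other: a linear expansion factor cubes into a volumetric one. A one-line check:

```python
# ~11x linear expansion implies 11**3 = 1331x volumetric expansion,
# matching the "approximately 1300-fold" volume figure.
linear_expansion = 11
volumetric_expansion = linear_expansion ** 3  # 1331
```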

Through the expansion microscopy process, samples are embedded in a swellable hydrogel that homogenously expands to increase the distance between molecules, which allows them to be observed in greater resolution. This allows nanoscale biological structures that previously could be viewed only via expensive high-resolution imaging techniques to be seen with standard microscopy tools.

In addition, the researchers said in their paper, “Current expansion microscopy protocols require prior treatment with reactive anchoring chemicals to link specific labels and biomolecule classes to the gel.”

In developing the Magnify method, the team created a gel that is mechanically sturdy enough to retain nucleic acids, proteins, and lipids without the need for a separate anchoring step.

Yongxin (Leon) Zhao, the Eberly Family Career Development Associate Professor of Biological Sciences at Carnegie Mellon, said, “We overcame some of the long-standing challenges of expansion microscopy. One of the main selling points for Magnify is the universal strategy to keep the tissue’s biomolecules, including proteins, nucleic acids, and carbohydrates, within the expanded sample.”

Keeping different biological components intact matters, Zhao said, since previous protocols required eliminating many of the biomolecules that held tissues together. However, these molecules could contain valuable information for researchers.

“In the past, to make cells really expandable, you need to use enzymes to digest proteins, so in the end, you had an empty gel with labels that indicate the location of the protein of interest,” he said.

Using the Magnify method, the molecules are kept intact, and multiple types of biomolecules can be labeled in a single sample.

“Before it was like having single-choice questions. If you want to label proteins, that would be the version one protocol. If you want to label nuclei, then that would be a different version,” Zhao said. “If you wanted to do simultaneous imaging, it was difficult. Now with Magnify, you can pick multiple items to label, such as proteins, lipids, and carbohydrates, and image them together.”

Co-first author and postdoctoral researcher Aleksandra Klimas said that in addition to its high-resolution imaging properties, the newly described approach is advantageous because of its broad applicability. “Traditionally, you need expensive equipment and specific reagents and training. However, this method is broadly applicable to many types of sample preparations and can be viewed with standard microscopes that you would have in a biology laboratory,” she said.

Doctoral student Brendan Gallagher, an additional co-first author of the work, said that the team tried to make the protocols involved in its method as compatible as possible for researchers who could benefit from adopting Magnify. As a result, Gallagher said, Magnify works with different tissue types, fixation methods, and tissue that has been preserved and stored.

The developed protocols aim to provide a framework for those in neuroscience, pathology, and other biological and medical fields. According to the researchers, the small sizes of monomers, as well as the fast rate of diffusion, mean that Magnify may have applicability to thick tissues and whole organisms. “Magnify would be readily adaptable to generating nanoscale whole-organ data sets, which currently rely on lower-resolution tissue-clearing methods,” they said in their paper.

In addition, they said, because Magnify is a chemical strategy that does not rely on complex optics, its framework can be adapted to a range of imaging modalities and gel chemistries, as well as with other expansion microscopy strategies — which have previously demonstrated compatibility with existing superresolution techniques.



Monday, October 13, 2025

Speckle-correlation Technique Recovers Images of Obscured Objects in Real Time





Imaging through a light-scattering medium, such as clouds in the sky or tissues in the body, poses special challenges. The scattered light must be reconstructed, typically by using complex optical elements in an environment that is vulnerable to motion and mechanical instability. Computational algorithms are then able to post-process the detected light to generate an image.

A new approach to imaging reconstruction, developed by researchers at King Abdullah University of Science and Technology (KAUST) and the Xiong’an Institute of Innovation, uses speckles to enable clear images of obscured objects, whether static or moving, to be produced in real time.

Previous strategies for reconstructing scattered light have required some knowledge of the object and the ability to control the wavefront of light illuminating it. These strategies have not used directly obtained random speckle patterns for imaging, due to degradation and scattering. Rather, speckles have been seen as noise or chaotic patterns.

Speckle-correlation imaging, which extracts information about the source from fluctuations in the intensity (i.e., speckles) in the transmitted light, could offer an efficient approach to reconstruction. However, many of the technologies based on speckle-correlation require time-consuming computational reconstruction. Also, some information, such as the image orientation and location, is missed.
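The correlation step at the heart of conventional speckle-correlation imaging can be sketched in a few lines of numpy. The random array below is only a stand-in for a real camera frame, and this shows the autocorrelation computation alone, not the full phase-retrieval reconstruction those techniques require:

```python
import numpy as np

rng = np.random.default_rng(1)

def autocorrelate(img):
    """Autocorrelation of an intensity image via the Wiener-Khinchin theorem."""
    img = img - img.mean()                     # remove the DC background
    spectrum = np.abs(np.fft.fft2(img)) ** 2   # power spectrum
    return np.fft.fftshift(np.fft.ifft2(spectrum).real)

speckle = rng.random((64, 64))                 # stand-in for a captured speckle frame
ac = autocorrelate(speckle)
# The autocorrelation always peaks at zero shift, i.e., at the center
# of the array after fftshift.
```

In speckle-correlation imaging, the optical memory effect makes the speckle's autocorrelation approximate the hidden object's autocorrelation, which is why this quantity is the usual starting point for reconstruction.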

The KAUST team investigated a way to directly observe self-imaging units of small objects by viewing speckles through diffusers. By carefully examining a speckle, the researchers found that they could obtain clear self-imaging phenomena from a single shot of the speckle image.

By examining the inherent nature of speckles, the researchers were able to develop a self-imaging speckle model and validate it in experiments. This facilitated a visual understanding of speckles and their properties.

The researchers obtained an image directly from a single shot of the speckle image by passing light from a small, standardized test object through a thin diffusing material. By moving the camera farther from the diffuser, the researchers were able to build a 3D image by taking slices through the speckles. When the researchers viewed enlarged sections of these images, they could see reproductions of the test object.

This approach allowed the researchers to see directly through the random diffuser with the naked eye and use real-time video imaging. The orientation of the object could be directly seen and traced in real time.

The new method requires no complex, expensive equipment for the active control of light, and no prior knowledge of the source or diffusion medium. There is no need for iterations or parameter adjustments.

The researchers said that the visibility of directly observed imaging patterns using the new method is equivalent to those processed with direct speckle autocorrelation imaging (SAI). Furthermore, using a simply modified SAI method with efficient joint-filtering, the researchers achieved an imaging quality and resolution comparable to the best results processed by previously reported computational reconstruction methods, according to the team.

“We have developed a strategy of calibration-free, reconstruction-free, real-time imaging of static and moving objects with their actual orientation information,” researcher Wenhong Yang said. “This novel technique only requires simple or low-cost devices, without the post-computational reconstruction.”

The results provide a fresh perspective on diffuser-imaging systems and could inspire new rapid, high-quality imaging applications through scattering diffusers. Optical imaging through scattering media plays a crucial role in many fields, including biomedicine and astronomy.

“Our work presents a significant step in the field of scattering imaging and will shed light on new avenues for imaging through diffusive media,” Yang said.



Saturday, October 11, 2025

Spatial Light Modulation Gauges How Lenses Slow Progress of Myopia





Myopia, or nearsightedness, is one of the most common ocular disorders worldwide and a leading cause of visual impairment in children. Although specialized eyeglass lenses have been clinically tested to treat myopia progression, an in-depth optical characterization of the lenses has not yet been performed.

Researchers from the ZEISS Vision Science Lab at the University of Tübingen and the University of Murcia undertook a comprehensive characterization to investigate the properties of spectacle lenses designed to slow the progress of myopia. The results of their study could help increase the efficacy of future lens designs.

Myopia is typically caused when a person’s eyes become elongated, which affects how the eyes focus on faraway objects. The condition can progress in children and teens as their bodies grow.

To reproduce pupil shape and myopic ocular aberrations, the researchers developed an instrument, based on spatial light modulation (SLM) technology, that reproduces the aberrations of myopic eyes and enables physical simulation of the pupil.

“After exploring the state of the art, we didn’t find a method that could be used to characterize the optical properties of these eyeglass lenses under real viewing conditions,” said researcher Augusto Arias-Gallego. As a result, Arias-Gallego said, the researchers endeavored to build an instrument that can measure the lens’ optical response to different angles of illumination, while also reproducing the myopic eye’s pupil and refractive errors.

The team’s instrument uses an illumination source mounted on an arm that rotates around the lens. After the light passes through the lens, it is guided to an SLM by a rotating mirror. The SLM is composed of tiny liquid crystal cells that modify the propagating light, boosting its spatial resolution.

The SLM reproduces the refractive errors and pupil shape of myopic eyes, allowing the researchers to re-create myopic aberrations and to produce different aberrations depending on the angle of illumination. Using the SLM, the researchers programmed the aberrations as phase maps and induced programmed amounts of defocus to perform through-focus testing.
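A defocus phase map of the kind an SLM can display might be sketched as follows. The grid size and defocus amplitude are arbitrary illustration values, and the standard Zernike defocus profile is used here rather than the study's actual aberration maps:

```python
import numpy as np

# Minimal sketch (assumed parameters): a programmable defocus phase map
# of the kind used for through-focus testing on an SLM.
N = 256
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]      # normalized pupil coordinates
rho2 = x**2 + y**2
pupil = rho2 <= 1.0                            # circular pupil aperture
defocus_waves = 0.5                            # programmed defocus amplitude (waves)
# Zernike defocus term Z4 ~ (2*rho^2 - 1), scaled to radians of phase.
phase = 2 * np.pi * defocus_waves * (2 * rho2 - 1) * pupil
phase_wrapped = np.mod(phase, 2 * np.pi)       # SLMs display phase modulo 2*pi
```

Changing `defocus_waves` steps the simulated eye through focus, which is the basic operation behind the through-focus measurements described above.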

Tests helped the researchers determine the image quality within the proximity of a simulated retinal position, shedding light on how the special lens interacts with eye elongation signaled at the retina. “By combining the through-focus results with light-scattering measurements, we were able to accurately characterize several types of eyeglass lenses,” Arias-Gallego said. The researchers then compared measurements for each lens with their reported clinical efficacy for slowing myopia progression, he said.

The researchers quantified and compared the focusing and scattering properties of a single vision lens with two types of spectacle lenses for myopia progression management: defocus incorporated multiple segments (DIMS) and diffusion-optical technology (DOT). They calculated four optical metrics potentially related to myopia progress and quantified the scattered light from the peripheral lens zones. Scattering was quantified by implementing the optical integration method.

The characterization showed an increased contrast and sharpness of images through the DIMS lens at the peripheral retina when inducing myopic defocus, with respect to the single vision and DOT lenses. It further showed that contrast reduction by the DOT lens was dependent on the luminance at the pupil.

According to Arias-Gallego, the results both raised new questions and pointed to potential strategies that could increase the efficacy of future designs.

“Insights into the link between the optical properties of myopia progression management lenses and effectiveness in real-world scenarios will pave the way to more effective treatments,” Arias-Gallego said. “This could help millions of children and is fundamental in understanding the mechanisms by which these lenses work.”

The researchers are working to adapt the SLM instrument to include sources with varying wavelengths.


Friday, October 10, 2025

TiHive Raises $9.3M to Advance Terahertz-AI Vision Technology









TiHive, a company focused on terahertz-AI vision systems, has raised €8 million ($9.3 million) to accelerate growth and expand internationally. The company’s technology combines industrial-grade, silicon-based terahertz imaging devices and AI to enable real-time, non-destructive, see-through quality and process control on production lines.

The company said the funding will support the commercialization of its industrial vision solutions, reinforce international deployment — particularly in hygiene, textiles, recycling, agriculture, and space industries — and accelerate R&D. The company aims to develop a new generation of terahertz chips with extended frequencies and advanced AI features.

TiHive’s systems are integrated directly on production lines and connected to the machines and to the cloud, measuring the quality and the process stability of thousands of products every minute. The technology platform uses CMOS technology, paired with advanced THz optics and an AI-powered software platform. By integrating terahertz technology on CMOS chips, TiHive’s approach enables miniaturization, scalable mass production, low energy consumption, and high-speed performance.

Founded in 2017 and currently employing 14 people, TiHive is backed by support from the EIC Accelerator and Bpifrance. Karista, a deep-tech hardware specialist, and Wind, a deep-tech venture capital fund, participated in the funding.


Thursday, October 9, 2025

Cell Manipulation Technique Enters into Commercial Market





In cell biology and medical imaging, the targeted manipulation of cells under controlled conditions is a major challenge in understanding processes and causal relationships. Researchers are dependent on tools that enable them to manipulate individual components of a cell in order to explore their effects on intracellular mechanisms and interactions. However, a common problem with conventional methods of cell manipulation is that the sample is disturbed by the manipulation and the results are therefore compromised.

A laser technology developed by researchers at the Max Planck Institute of Molecular Cell Biology and Genetics makes it possible to influence and specifically control movements within living cells and embryos. The technology, called Focused Light-Induced Cytoplasmic Streaming (FLUCS), can be used to help better understand embryonic developmental disorders.

Further, the FLUCS method allows noninvasive manipulation of cells — for example, in developmental biology. As an additional module for high-resolution microscopes, FLUCS will improve cell biological and medical research, as well as open possibilities in microfluidics.

The technology has been licensed by Rapp OptoElectronic, a photomanipulation and illumination systems developer.

FLUCS is a method of photomanipulation that makes it possible to specifically influence and control movements within cells and embryos with the help of laser beams. The beam selectively induces a thermal field in the cytoplasm, which locally changes the density and viscosity of the liquid medium and causes a flow due to the rapidly moving laser point. In contrast to conventional methods such as optical tweezers, the biomolecules floating in the cytoplasm are set in motion directly without the need for modification of the sample. They can still interact freely with their environment.

Using the method, researchers from the Max Planck Institute generated controlled currents in living worm embryos and transported biomolecules to different parts of the growing embryo. Through targeted redistribution, they reported successful examination of the importance of cytoplasmic movement for the polarization of oocytes, and thus of which molecule has to go where during development.

Based on successful joint development as well as the license agreement of the FLUCS technology from the Max Planck Institute to Rapp OptoElectronic, Rapp now offers FLUCS as a market-ready product to researchers and industrial users worldwide. A pilot system is located in the Light Microscope Facility of the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden. Here, FLUCS is available to interested scientists from inside and outside the Max Planck Society for their research. The device is integrated as an add-on module to high-resolution microscopes via standard interfaces.

“FLUCS fills a gap in the previously available micro-manipulation techniques to study the causes and consequences of intracellular movement,” said Sven Warnck, managing director of Rapp OptoElectronic. “Directed liquid flows are induced by moderately warming up the sample with a laser spot. Their path can be easily specified individually using the user-friendly software, for example as a line, circle, or free form. In this way, cell components such as organelles, PAR proteins, and even chromatin can be moved freely in the cell nucleus without having to hold or fix them.”

The technology has a broad range of potential applications. In cell biology, artificially generated cytoplasmic currents can be used, for example, to invert PAR proteins and thus influence embryonic development. In medical research, molecular mechanisms and signaling pathways in cells can be better researched and the development of drugs can be supported. In microfluidics, the behavior of liquid quantities in the micro- or picoliter range can be examined in more detail with the help of FLUCS, thus supporting new methods of laboratory measurement technology, quality control, or food safety.


Tuesday, October 7, 2025

Scalable 3D Micro-Printed Sensors Promise Optofluidic Disease Detection





Early-stage disease diagnosis relies on the highly sensitive detection of biomarkers. Optical whispering-gallery-mode (WGM) microcavity sensors provide precise, label-free biosensing, but scaling up and integrating large arrays of WGM microcavity sensors remains challenging because of bottlenecks in sensor design.

In response, researchers at Hong Kong Polytechnic University developed a 3D micro-printed WGM micro-laser sensor for sensitive on-chip biosensing. The limacon-shaped sensor was created using flexible micro-printing technology while retaining the optical advantages of WGM micro-lasers.

In the device, optical WGM micro-laser sensors circulate light resonantly within tiny microcavities. When target molecules bind to the cavity’s surface, they induce slight changes in the laser’s wavelength, enabling highly sensitive detection of biological substances. Experimental results highlighted the potential of the device for ultralow-limit detection of biomarkers in early disease diagnosis.
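The wavelength-shift mechanism can be illustrated with the textbook WGM resonance condition, in which an integer number of wavelengths fits the cavity circumference. The cavity radius, mode order, and refractive indices below are hypothetical values chosen for illustration, not parameters from the study:

```python
import numpy as np

def wgm_resonance(radius_um, n_eff, mode_m):
    # Whispering-gallery resonance condition: m wavelengths fit the
    # circumference, i.e. m * lambda = 2*pi*R*n_eff.
    return 2 * np.pi * radius_um * n_eff / mode_m   # wavelength in micrometers

R_um, m = 30.0, 180                 # hypothetical cavity radius and mode order
lam0 = wgm_resonance(R_um, 1.50, m)       # bare resonance, ~1.57 um here
# Binding of target molecules slightly raises the effective index,
# shifting the resonance to a longer wavelength.
lam1 = wgm_resonance(R_um, 1.50005, m)
shift_pm = (lam1 - lam0) * 1e6            # resonance shift in picometers
```

Even this tiny assumed index change (5 × 10⁻⁵) produces a tens-of-picometers shift, which is why narrow-linewidth lasing peaks make such small binding events detectable.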

“In the future, these WGM micro-laser sensors could be integrated into a microfluidic chip to enable a new generation of lab-on-a-chip devices for ultrasensitive, quantitative detection of multiple biomarkers," said research lead A. Ping Zhang. "This technology could be used for the early diagnosis of diseases such as cancers and Alzheimer's disease, or for fighting major health crises such as the COVID-19 pandemic.”

The challenge in scaling these sensors is the need to couple light entering and leaving them, which typically requires a tapered optical fiber with a diameter of less than 2 μm, a size small enough to make alignment difficult. Using the light emitted directly from the micro-laser sensor offers a promising alternative to tapered optical fibers for light coupling. However, the circular microcavities of conventional WGM micro-lasers make efficient far-field light collection difficult, limiting the readability of the sensor’s weak signal.

Exploiting its resonance and the narrow linewidth of its lasing peaks, the sensor can detect immunoglobulin G, an extremely small but common antibody found in blood and other body fluids. Experimental results showed a detection limit of approximately 70 ag/mL for this antibody.

Integrating the micro-laser sensors into a microfluidic chip could eventually lead to the development of optofluidic biochips for rapid, quantitative, and simultaneous detection of multiple disease biomarkers.


Monday, October 6, 2025

Multi-Camera Microscope Produces Sharp Images of Large, Curved Samples




 


Microscopy samples are seldom completely flat across a centimeter-scale field of view. Mechanical scanning can keep all the parts of a large sample in focus, but scanning reduces throughput, slowing the imaging process.

To help large-area microscopy systems resolve trade-offs between field of view, resolution, and imaging speed, a team at Duke University developed a single-shot, re-imaging microscope that achieves seamless gigapixel imaging over a 16.3 × 18.8-mm field of view, at 0.84-µm half-pitch resolution, without mechanical scanning.

The microscope, which the researchers call PANORAMA, could enhance imaging applications for biological research and medical diagnostics, as well as for industrial inspection and quality control.

“This tool can be used wherever large-area, detailed imaging is needed,” researcher Haitao Chen said. “For instance, in medical pathology, it could scan entire tissue slides, such as those from a biopsy, at cellular resolution almost instantly. In materials science or industrial inspection, it could quickly inspect large surfaces, such as a chip wafer, at high detail.”

PANORAMA uses a telecentric photolithography lens, a large-aperture tube lens, and a flat micro-camera array with adaptive, per-camera focus control to provide sub-micrometer focus across flat, curved, and uneven samples that span centimeters.

The telecentric lens, originally developed for chip-making, is combined with a large tube lens that projects an image of the sample onto a flat array of 48 small cameras. Each camera images a portion of the scene or sample. The multi-camera configuration works like a single microscope, capturing high-resolution, gigapixel images of large and non-flat objects in a single snapshot. Each camera can be independently focused to match the sample surface, ensuring that the entire field of view stays sharp even if the sample is curved.

The multi-camera, curvature-adaptive design also eliminates the need for mechanical scanning, which can take up to an hour. In a process that takes about 5-10 min, PANORAMA uses software to automatically stitch the images from each camera into one continuous picture.
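The stitching step can be sketched in a few lines for the idealized case of a regular camera grid with a known pixel overlap between neighboring tiles. This is a simplified illustration; a production pipeline such as PANORAMA's would also register the tiles sub-pixel-accurately and blend the seams.

```python
import numpy as np

def stitch_grid(tiles, rows, cols, overlap):
    """Stitch a rows x cols list of equally sized, row-major tiles that
    overlap their neighbors by `overlap` pixels into one mosaic.
    Later tiles simply overwrite the shared overlap region."""
    th, tw = tiles[0].shape
    sh, sw = th - overlap, tw - overlap  # stride between tile origins
    mosaic = np.zeros((sh * rows + overlap, sw * cols + overlap),
                      dtype=tiles[0].dtype)
    for r in range(rows):
        for c in range(cols):
            mosaic[r * sh : r * sh + th, c * sw : c * sw + tw] = tiles[r * cols + c]
    return mosaic
```

With well-calibrated optics the overlap is fixed by the hardware geometry, which is part of why the full gigapixel mosaic can be assembled in minutes rather than the hour a scanning system needs.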

“The telecentric lens makes it possible to image a very wide field without distortion, while the multi-camera approach overcomes the usual size-and-resolution limit of a single sensor,” Chen said. “This combination lets us acquire a seamless, gigapixel image in a single snapshot, flattening out any curvature adaptively.” The detailed, gigapixel-scale images have 10-50x more pixels than the average smartphone camera image.

The researchers demonstrated the instrument by imaging a prepared slide of rat brain tissue under brightfield illumination, which uses white light to reveal tissue structure. The 48-camera array captured the entire 630-megapixel (MP) image in one snapshot, with no scanning required. The resulting image resolved cellular structures as small as 0.84 µm, as well as neurons and dendrites across the sample.

The researchers also used PANORAMA to simultaneously acquire brightfield and fluorescence images of onion skin placed over a curved surface. By focusing each camera on the local curvature, they were able to obtain sharp images of the entire onion skin over the curved surface. The brightfield images revealed crisp cell walls, while the fluorescence images clearly showed stained nuclei.

“In practical terms, we saw a huge jump in throughput and flexibility — no more moving parts, no tedious focus-stacking, and no blind spots between cameras,” professor Roarke Horstmeyer, who led the research, said. “Compared to older multi-camera microscopes that needed scanning to fill gaps and maintain focus, our approach gives continuous full coverage at sub-micron resolution.”

The researchers are investigating how to improve the microscope by adding more cameras or larger sensors to capture an even bigger field, such as an entire petri dish, in a single shot. They are also developing an automated focus system, which will eliminate the need to adjust each camera manually for every sample. Future computational advances could make it possible for PANORAMA to perform 3D image reconstruction, provide depth maps in real time, and provide live videos of microscopic processes.

“Although traditional microscopes assume the sample is perfectly flat, real-life samples such as tissue sections, plant samples, or flexible materials may be curved, tilted, or uneven,” Horstmeyer said. “With our approach, it’s possible to adjust the focus across the sample, so that everything remains in focus even if the sample surface isn’t flat, while avoiding slow scanning or expensive special lenses.”

Bio Photonics Research Award


Visit: biophotonicsresearch.com
Nominate Now: https://biophotonicsresearch.com/award-nomination/?ecategory=Awards&rcategory=Awardee


Friday, October 3, 2025

LED Hyperspectral Imaging Device Promises Faster Gastrointestinal Cancer Diagnoses





Gastrointestinal (GI) cancer screening by endoscopy has improved diagnosis rates and the prognosis of localized cancers. Still, conventional GI endoscopy misses about 8-11% of tumors due to limited visibility during upper GI endoscopic exams.

One possible way to increase the sensitivity of endoscopic examinations is to use hyperspectral imaging. Hyperspectral imaging captures images across discrete, narrowband wavelength channels, including wavelengths beyond the visible. By analyzing how cells reflect and absorb light across the electromagnetic spectrum, the technique enables users to acquire a unique spectral fingerprint of each cell in a tissue sample.
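A common way to turn such per-pixel spectra into a tissue map is to compare each pixel's spectrum against reference fingerprints, for example with the classic spectral-angle measure. This is an illustrative technique, not necessarily the analysis the UTD team uses:

```python
import numpy as np

def spectral_angle(s, ref):
    """Angle between a pixel spectrum and a reference spectrum (radians).
    Smaller angle = more similar; the measure ignores overall intensity."""
    s, ref = np.asarray(s, float), np.asarray(ref, float)
    cos = np.dot(s, ref) / (np.linalg.norm(s) * np.linalg.norm(ref))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(cube, refs):
    """Label every pixel of an (H, W, bands) hypercube with the index of
    the most similar reference spectrum (e.g. normal vs. cancerous)."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    angles = np.array([[spectral_angle(px, r) for r in refs] for px in flat])
    return angles.argmin(axis=1).reshape(H, W)
```

Because the angle is insensitive to overall brightness, it tolerates illumination variations, which matters when light levels change with the endoscope's working distance.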

To improve endoscopic imaging and detect cancers at an earlier stage, a team led by professor Baowei Fei at the University of Texas at Dallas (UTD) developed an LED-based, real-time hyperspectral imaging device for endoscopes. The researchers designed a prototype based on a monochrome, micro-digital camera and a multiwavelength LED array comprising 18 LEDs at 18 different wavelengths ranging from 405-910 nm. The team aimed to achieve an image capturing rate of over 10 hypercubes per second (hps) without compromising spatial resolution.
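The acquisition scheme this implies is straightforward: pulse each LED in turn, grab one monochrome frame per wavelength, and stack the 18 frames into a hypercube. The sketch below uses hypothetical hardware callbacks (`set_led`, `grab_frame`) and assumes evenly spaced wavelengths; the article gives only the count and the range.

```python
import numpy as np

# 18 channels spanning 405-910 nm; even spacing is an assumption.
WAVELENGTHS_NM = np.linspace(405, 910, 18)

def acquire_hypercube(set_led, grab_frame, wavelengths=WAVELENGTHS_NM):
    """Cycle the LED array through its wavelengths, grabbing one monochrome
    frame per channel; the stacked frames form one hypercube.
    At the targeted 10 hypercubes/s, each of the 18 channels has roughly
    1 / (10 * 18) ~= 5.6 ms for LED switching plus exposure."""
    frames = []
    for wl in wavelengths:
        set_led(wl)               # light only the LED at this wavelength
        frames.append(grab_frame())
    return np.stack(frames, axis=-1)  # shape (H, W, n_channels)
```

The ~5.6 ms per-channel budget is one reason fast-switching micro-LEDs are a good fit for this design compared with mechanical filter wheels.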

The researchers used micro-LEDs with footprints smaller than 400 μm × 400 μm to miniaturize the device. This enabled the team to build an imaging device that could accommodate tens of LEDs at the tip of a clinical endoscopic catheter, and create a hyperspectral imaging system with an in situ light source.

By using an in situ hyperspectral light source, the researchers avoided the need for fiber optics, increasing the mobility of the endoscope catheter and lessening the complexity of the mechanical design.

The LED-based approach to wavelength scanning makes the device low-power, and allows illumination intensities to be adjusted dynamically based on the distance between the device and the target.
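Distance-adaptive illumination of this kind usually follows the inverse-square law: doubling the working distance requires roughly four times the drive current to keep irradiance at the target constant. A minimal sketch, with all numeric values illustrative rather than taken from the paper:

```python
def drive_current(distance_mm, ref_distance_mm=10.0, ref_current_ma=20.0,
                  max_current_ma=100.0):
    """Scale LED drive current with the square of the working distance so
    irradiance at the target stays roughly constant (inverse-square law),
    clamped to the LED's maximum safe current. Parameter values are
    illustrative defaults, not specifications from the device."""
    current = ref_current_ma * (distance_mm / ref_distance_mm) ** 2
    return min(current, max_current_ma)
```

In practice the distance term would come from the camera itself (for example, from image brightness feedback), closing the loop between illumination and exposure.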

To evaluate the feasibility of using an LED-based illumination source for endoscopic imaging, the researchers studied their system’s performance on normal and cancerous ex vivo tissues. The hyperspectral signatures of the different imaging targets acquired with the prototype were comparable to the data obtained with the reference system.

The use of LEDs for hyperspectral imaging could enable numerous applications in endoscopic, laparoscopic, and handheld HSI devices for detecting disease, according to the researchers.

Ultimately, Fei aims to develop hyperspectral technology that can be used to track many different types of cancers, and that is small enough to be placed in handheld, affordable personal devices, such as a smartphone or pen that could be used to scan the skin or mouth, for example.

“Basically, you could complete the scan, and the information would be wirelessly transferred to the cloud,” Fei said. “Then, AI may determine the lesion is suspicious and refer the person to a medical center for follow-up.

“Our goal is to produce imaging systems that are really affordable as well as cost-effective, meaning they could find cancers at earlier stages and reduce the need for unnecessary tissue removal and testing.”


When AI meets physics: Unlocking complex protein structures to accelerate biomedical breakthroughs

Artificial intelligence (AI) is transforming how scientists understand proteins, the working molecules that drive nearly every process ...