Deep Learning Classifies Non-Contact Droplets Using Only Visual Images

Non-contact dispensing of droplets, or "jetting," is the precise deposition of tiny liquid droplets, ranging from picoliters to microliters, without the dispensing nozzle physically touching the surface, which protects fragile substrates from damage. Non-contact dispensing is widely used in fields such as microfluidics, bioprinting, drug discovery, microarray fabrication, and electronics manufacturing, where depositing exact volumes of liquid is crucial.

TestRig setup showing the Mikrotron MotionBLITZ EoSens mini2 camera. Image courtesy of University of Freiburg

Maintaining a consistent dispensed drop volume is challenging: it depends on factors such as ambient conditions, liquid viscosity, how the droplet deforms and flows, the geometry of the dispenser, and actuation dynamics.

To uncover how to improve volume reliability, scientists at the University of Freiburg (Freiburg, Germany) compared seven neural network architectures across different sampling techniques, data cleaning conditions, and hundreds of acquisition batches. Because neural networks are universal approximators, they can classify the liquid being dispensed and allow dispenser parameters to be adjusted on the fly as its properties change.

TestRig Acquisition Setup

To meet the need for a large amount of visual data for deep learning, an automated acquisition setup, named TestRig, was developed. Built on an optical breadboard, TestRig features a nanoliter non-contact droplet dispenser mounted on a three-axis precision stage. The dispenser nozzle was positioned between a light-emitting diode (LED) backlight and the camera, so that the falling droplets were imaged as shadows.

Video Capture System

For each droplet dispensed, a sequence of 250 frames was recorded at 6806 frames per second by a Mikrotron MotionBLITZ EoSens mini2 camera. Internal image memory allows this 3-megapixel CMOS GigE camera to be operated without a connection to a host PC. MotionBLITZ® Director2 software configured and operated the camera, while in-house software was deployed to trigger the camera, configure the actuation parameters, and log the mass, ambient temperature, relative humidity, and pressure readings.
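The article does not describe the in-house acquisition software itself; the following is only a minimal sketch of what logging one row per dispensed droplet might look like. The trigger and sensor functions are hypothetical placeholders; only the logged quantities (mass, temperature, humidity, pressure) come from the article.

```python
# Sketch of a per-droplet acquisition log (assumed structure, not the authors' code).
import csv
import time

def trigger_camera_and_dispense(actuation_params):
    """Hypothetical stand-in for the in-house trigger routine."""
    time.sleep(0.05)  # roughly covers the ~37 ms needed for 250 frames at 6806 fps

def read_sensors():
    """Hypothetical stand-in returning (mass_mg, temp_C, rel_humidity_pct, pressure_hPa)."""
    return 0.0, 21.5, 45.0, 1013.0

actuation = {"stroke_um": 100, "downstroke_us": 50}  # example parameters only

with open("acquisition_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["droplet_id", "mass_mg", "temp_C", "rh_pct", "pressure_hPa"])
    for droplet_id in range(10):  # one small acquisition batch (size arbitrary)
        trigger_camera_and_dispense(actuation)
        mass, temp, rh, pressure = read_sensors()
        writer.writerow([droplet_id, mass, temp, rh, pressure])
```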

The camera frame size was set to 750 pixels in height and 144 pixels in width to accommodate the longest droplet tail, so that each frame contained the entire droplet from head to tail. Frames were then fed into the seven 2D and 3D neural network architectures for training and testing. Training on multiple acquisition batches helped the networks avoid the "shortcut learning" that can arise from training on only a single batch.
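To make the distinction between the 2D and 3D architectures concrete, the sketch below shows how a recorded sequence of 250 grayscale 750 x 144 frames could be shaped into network inputs: individual frames for 2D convolutions and a full frame stack for 3D convolutions. The shapes follow the article; the tensor layout and use of PyTorch are assumptions.

```python
# Illustrative input shaping for 2D vs. 3D convolutional networks (assumed, not the study's code).
import torch

frames = torch.rand(250, 1, 750, 144)  # one droplet: 250 grayscale shadow frames

# 2D CNN input: a batch of individual frames, shape (N, C, H, W)
batch_2d = frames                                      # (250, 1, 750, 144)

# 3D CNN input: the whole sequence as one sample, shape (N, C, D, H, W)
batch_3d = frames.permute(1, 0, 2, 3).unsqueeze(0)     # (1, 1, 250, 750, 144)

print(batch_2d.shape, batch_3d.shape)
```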

After training on the different acquisition batches, the scientists found that deeper neural architectures were less accurate, while shallower architectures using both 2D and 3D convolutions performed better, particularly the ResNet-18 convolutional neural network. In fact, ResNet-18 could make an inference from as little as a single image. They also determined that data cleaning decreased classification accuracy because it reduced the amount of training data available to the network.
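As a rough illustration of the single-image result, the sketch below classifies one droplet shadow frame with a ResNet-18. The grayscale input adaptation and the number of liquid classes are assumptions, not the study's exact configuration.

```python
# Hedged sketch: single-frame liquid classification with ResNet-18 (assumed setup).
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_LIQUIDS = 5  # assumed number of liquid classes

model = resnet18(weights=None)
# Adapt the first convolution to single-channel (grayscale) shadow images.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Replace the classifier head with one output per liquid class.
model.fc = nn.Linear(model.fc.in_features, NUM_LIQUIDS)
model.eval()

single_frame = torch.rand(1, 1, 750, 144)  # one shadow image of a falling droplet
with torch.no_grad():
    logits = model(single_frame)
predicted_class = logits.argmax(dim=1)
print(predicted_class)
```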

The University of Freiburg study served as a proof of concept that neural networks can predict the ratio of a fluid's viscous forces to its inertial and surface-tension forces, known as the Ohnesorge number. TestRig and the deep learning architectures successfully analyzed the visual patterns of non-contact-dispensed droplets, essentially identifying a fluid's viscosity and surface tension characteristics solely from how it behaves when dispensed without touching a surface.
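For reference, the Ohnesorge number is defined as Oh = mu / sqrt(rho * sigma * L), where mu is the dynamic viscosity, rho the density, sigma the surface tension, and L a characteristic length such as the nozzle diameter. The short sketch below simply evaluates this definition; the example fluid and nozzle values are illustrative and not taken from the study.

```python
# Ohnesorge number: ratio of viscous to inertial and surface-tension forces.
from math import sqrt

def ohnesorge(mu, rho, sigma, length):
    """mu: dynamic viscosity [Pa*s], rho: density [kg/m^3],
    sigma: surface tension [N/m], length: characteristic length, e.g. nozzle diameter [m]."""
    return mu / sqrt(rho * sigma * length)

# Example: water-like fluid from a 100-micrometer nozzle (assumed values) -> Oh ~ 0.012
print(ohnesorge(mu=1.0e-3, rho=998.0, sigma=0.072, length=100e-6))
```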
