16:00
Poster session II
SPARSITY CONSTRAINED REGULARIZATION FOR BREAST ULTRASOUND IMAGING
Ana Ramirez, Sergio Abreo Carrillo, Koen van Dongen
Abstract: Ultrasound is a frequently used imaging modality for detecting and characterizing tumours and other lesions in breasts [1]. Most imaging modalities display the echogenicity of the different tissues and use this image to differentiate one lesion from another. However, with advanced imaging methods such as Contrast Source Inversion (CSI), it is feasible to reconstruct speed-of-sound profiles of the tissue, leading to improved specificity [2].
Unfortunately, reconstructing speed-of-sound profiles from the measured wave field is an ill-posed inverse problem. Especially if the measurements are contaminated with noise, full-wave non-linear inversion methods such as CSI may diverge from the true solution. In the past, total variation (TV) has been introduced as an additional constraint to stabilize the inversion process. However, for very noisy data, TV as a regularisation tool is not sufficient.
To improve the convergence for noisy data, we introduced a regularization method based on ideas from the area of compressive sensing [3]. In particular, we regularize the inversion process by restricting the solution of the CSI method to be sparse in a transformation domain. Consequently, the new method estimates the contrast sources and contrast function by minimizing the mean squared error between the measured and modelled data, while the sparsity constraint is included via an additional penalty term in the error functional.
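As a rough illustration of such a functional (a sketch in common CSI notation, not necessarily the exact expression of [3]; here f_j are the measured data, w_j the contrast sources, χ the contrast function, u_j^inc the incident fields, G_S and G_D the Green's operators, Ψ a sparsifying transform and λ a regularization weight, all of which are assumptions for this sketch):

    F(\chi, w_j) =
      \frac{\sum_j \left\| f_j - G_S\, w_j \right\|^2}{\sum_j \left\| f_j \right\|^2}
    + \frac{\sum_j \left\| \chi\, u_j^{\mathrm{inc}} + \chi\, G_D\, w_j - w_j \right\|^2}{\sum_j \left\| \chi\, u_j^{\mathrm{inc}} \right\|^2}
    + \lambda \left\| \Psi \chi \right\|_1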
The proposed method is tested on noise-free and noisy synthetic data representing a circular scan of a cancerous breast. Numerical experiments show that, for measurements contaminated with 5% noise, the sparsity-constrained CSI reduces the error of the reconstructed speed-of-sound profiles by up to 70% in comparison with traditional CSI. Moreover, the results show that the method remains convergent for an increasing number of iterations.
REFERENCES
[1] P. B. Gordon and S. L. Goldenberg, “Malignant breast masses detected only by ultrasound: A retrospective review,” Cancer 76(4), 626–630 (1995).
[2] N. Ozmen, R. Dapp, M. Zapf, H. Gemmeke, N. Ruiter, and K. W. A. van Dongen, “Comparing different ultrasound imaging methods for breast cancer detection,” IEEE Trans. Ultrason., Ferroelectr. Freq. Control 62(4), 637–646 (2015).
[3] A. B. Ramirez and K. W. A. van Dongen, “Sparsity constrained contrast source inversion,” Journal of the Acoustical Society of America 140(3), 1749–1757 (2016).
|
AUTOMATED MEASUREMENT OF FETAL HEAD CIRCUMFERENCE IN ULTRASOUND IMAGES
Thomas van den Heuvel, Chris de Korte, Bram van Ginneken
Abstract: Ultrasound imaging is widely used for screening and monitoring pregnant women. In ultrasound screening, biometric measurements like the crown-rump length, Head Circumference (HC), abdominal circumference and femur length are often computed to determine the Gestational Age (GA) of a fetus and to monitor fetal growth. These measurements are usually obtained manually, which leads to inter- and intra-reader variability. An automated system could potentially reduce measuring time and variability, and assist less-experienced human observers. In this work we focus on automated measurement of the HC.
The automated system consists of three steps. First, a set of Haar-like features [2] was used to train a Random Forest classifier [3] to locate the skull. Second, the fetal head was extracted using the Hough transform [4] and dynamic programming. Finally, an ellipse was fitted through the dynamic programming result to compute the HC. The system was evaluated on 335 ultrasound images, in which an experienced sonographer manually annotated the HC. A reference GA was obtained in the first trimester of the pregnancy using the crown-rump length.
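The final step can be illustrated with a short sketch (assuming scikit-image's EllipseModel and hypothetical function names and pixel spacing; this is not the authors' implementation): fit an ellipse to candidate skull-edge points and compute the HC from the fitted semi-axes with Ramanujan's perimeter approximation.

    # Illustrative sketch only: ellipse fit and head-circumference computation.
    import numpy as np
    from skimage.measure import EllipseModel

    def head_circumference_mm(edge_points_px, pixel_spacing_mm):
        """edge_points_px: (N, 2) array of (x, y) skull-edge coordinates in pixels."""
        model = EllipseModel()
        if not model.estimate(np.asarray(edge_points_px, dtype=float)):
            raise ValueError("ellipse fit failed")
        _, _, a, b, _ = model.params            # centre (xc, yc), semi-axes a, b, orientation
        a_mm, b_mm = a * pixel_spacing_mm, b * pixel_spacing_mm
        h = ((a_mm - b_mm) / (a_mm + b_mm)) ** 2
        # Ramanujan's approximation of the ellipse perimeter
        return np.pi * (a_mm + b_mm) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))

    # Example with synthetic points on an ellipse (semi-axes 60 and 45 px, 0.2 mm/px):
    t = np.linspace(0, 2 * np.pi, 200)
    pts = np.c_[60 * np.cos(t) + 100, 45 * np.sin(t) + 100]
    print(head_circumference_mm(pts, pixel_spacing_mm=0.2))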
The difference between the HC measured by the automated system and the manually annotated HC was 0.6 ± 4.3 mm. The difference between the reference GA and the GA obtained from the manually annotated HC was 0.5 ± 6.1 days. The difference between the reference GA and the GA obtained from the HC measured by the automated system was 1.3 ± 7.5 days.
With the use of our system it is possible to automatically measure the HC and estimate the GA of a fetus. This system could potentially aid less-experienced human observers in developing countries, where there is a severe shortage of well-trained sonographers.
REFERENCES
[1] “Maternal mortality,” [Online] http://www.who.int/mediacentre/factsheets/fs348/en/, 2015, World Health Organization, fact sheet No. 348.
[2] R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection”, IEEE International Conference on Image Processing, Vol. 1, 2002.
[3] L. Breiman, “Random forests”, Machine learning, Vol. 45, No. 1, pp. 5-32, 2001.
[4] P.V.C. Hough, “Method and means for recognizing complex patterns”, 1962, U.S. Patent 3069654.
|
WIRELESS POWER TRANSFER AND OPTOGENETIC STIMULATION OF FREELY MOVING RODENTS
Farnaz Nassirinia, Freek Hoebeek, Wouter Serdijn
Abstract: Animal studies are commonly used to test the feasibility and effectiveness of promising novel neuroscience research ideas. One such new technique is optogenetic stimulation, a state-of-the-art brain stimulation technique. In optogenetics, genetic techniques are used to create light-sensitive proteins within the neuron membrane, thus allowing the affected region to become sensitive to light stimulation, for example through an inserted LED.
Current optogenetic stimulation methods use tethered setups and, typically, the animal-under-study is put into a fixed position. This introduces stress, which, besides an obvious reduction in animal welfare, may also influence the experimental results. Hence, an untethered setup is highly desirable. Therefore, in this study, we propose a wireless optogenetic stimulation setup, which allows for full freedom of movement of multiple rodents-under-study in a 40x40x20 cm environment.
We investigate a variety of wireless power transfer methods, which results in the choice of inductive coupling, as it allows for efficient power transfer over a short range and has the fewest side-effects, making it the most suitable approach for this particular environment. The efficiency of inductive coupling is highly susceptible to vertical, lateral and angular misalignment of the coils. The wireless link is therefore designed to maximize the link efficiency and to minimize the impact of misalignment between the coils. To maximize the efficiency of the inductive power transfer link, we examine all the aspects that influence the link efficiency, including coil shape and coil material. The goal is to obtain an inductive link that provides sufficient link efficiency throughout the entire 40x40x20 cm region of interest to power the optogenetic stimulation receiver.
The entire wireless receiver module resides on the animal and, as such, is severely restricted in both size and weight. The complete module with receiver coil, rectifying and regulating electronics, micro-controller and stimulation optrodes can be at most 1x1x1 cm. A significant additional contribution is the creation of a novel micro-LED mounting technique, which allows a micro-LED array with multiple LEDs to be inserted directly into the brain. The use of a micro-LED array greatly improves the power efficiency, as the traditional LED-to-optical-fiber coupling is accompanied by large losses in light intensity. Moreover, a single micro-LED array is able to replace a number of optical fibers, resulting in a less invasive procedure with more stimulation sites.
|
SEMI-AUTOMATIC HIPPOCAMPUS DELINEATION USING BI-LAPLACIAN INTERPOLATION
Fabian Bartel, Joost Hulshof, Hugo Vrenken, Michiel de Ruiter, Jose Belderbos, Marcel van Herk, Jan de Munck
Abstract: Background: Precise and reproducible hippocampus outlining is important to quantify hippocampal atrophy caused by neurodegenerative diseases and to spare the hippocampus in whole brain radiation therapy when performing prophylactic cranial irradiation or treating brain metastases. Hippocampus segmentation can take up to 2h for expert observers. The aim of this study is to reduce the amount of labor without compromising the accuracy.
Methods: In our approach an expert only needs to delineate the hippocampus on a few slices; an interpolating 3D surface is then reconstructed by minimizing its curvature, under the constraint that it passes through the delineated points. Linearization and discretization of this mathematical problem reduce it to the solution of a large sparse system of equations representing the bi-Laplacian of the unknown part of the surface.
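As a rough, two-dimensional illustration of this type of minimum-curvature interpolation (a sketch only; the grid size, constraint placement and variable names are assumptions, and the actual method operates on a 3D hippocampus surface), the discrete bi-Laplacian can be assembled with sparse matrices and solved for the free nodes:

    # Conceptual 2-D sketch: solve the bi-Laplacian system at free grid nodes,
    # with the surface height fixed at "delineated" nodes.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    n = 40                                    # grid is n x n
    I = sp.identity(n, format="csr")
    D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
    L = sp.kron(I, D) + sp.kron(D, I)         # 2-D Laplacian (5-point stencil)
    B = (L @ L).tocsr()                       # discrete bi-Laplacian

    z = np.zeros(n * n)
    known = np.zeros(n * n, dtype=bool)
    for j, val in ((5, 1.0), (30, 2.0)):      # fix two "delineated" rows of the grid
        known[j * n:(j + 1) * n] = True
        z[j * n:(j + 1) * n] = val

    free, fixed = np.flatnonzero(~known), np.flatnonzero(known)
    rhs = -B[free][:, fixed] @ z[fixed]       # move the constrained part to the right-hand side
    z[free] = spsolve(B[free][:, free], rhs)  # minimum-curvature interpolation of the free nodes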
For 10 subjects back-to-back (BTB) T1-weighted 3D MPRAGE images were acquired at time-point baseline (BL-A and BL-B) and 12 months later (M12-A and M12-B). Hippocampi were manually segmented and converted into triangulated meshes. We extracted 3 to 12 evenly distributed contours to serve as simulated sparse delineations. We compared reconstructed hippocampi with the original hippocampi by computing means and standard deviations of percentage volume differences (VD) and Jaccard overlap indices (J) using the following comparisons: (1) reconstructed A/B - original A/B hippocampi, (2) reconstructed A - original B hippocampi and vice versa to avoid a bias from original contours, and (3) BL - M12 hippocampi for atrophy measurement.
Results: For the original BTB segmented hippocampi we obtain a mean J of 80(±3)% and a mean VD of -0.5(±3.0)%. The atrophy measurement for original segmented hippocampi was -4.2(±7.2)%.
For the reconstructed hippocampi using 3 contours we obtained for comparisons 1-3 the following mean J and VD: (1) J: 54(±6)% and VD: -36.7(±4.7)%, (2) J: 51(±6)% and VD: -37.2(±4.7)%, and (3) VD: -4.2(±7.2)%. Increasing the number of contours to 7, we obtained these results for the same comparisons: (1) J: 83(±2)% and VD: -1.0(±2.4)%, (2) J: 77(±3)% and VD: -1.5(±3.6)%, and (3) VD: -4.4(±4.9)%. For 12 contours the comparisons revealed: (1) J: 83(±2)% and VD: -1.3(±0.8)%, (2) J: 77(±3)% and VD: -1.8(±2.8)%, and (3) VD: -3.8(±4.5)%. We obtained similar results for both BTB scans and both atrophy measurements.
Conclusions: With our novel method we were able to reconstruct hippocampi from a sparse delineation. Using 3 contours, the results for the reconstructed hippocampi were not sufficient. With 12 contours we obtained results very similar to a full hippocampus segmentation, reducing the manual labor of outlining the hippocampus by half. Reconstructed hippocampi using 7 contours also showed good results, motivating a larger validation study. These results were reproduced by performing an analogous analysis on the BTB scans.
|
STRUCTURED ELECTRONIC DESIGN OF HIGH-PASS Σ∆ CONVERTERS AND THEIR APPLICATION TO CARDIAC SIGNAL ACQUISITION
Samprajani Rout, Wouter Serdijn, Reza Lotfi
Abstract: Motivation: With the bandwidth of the ECG signal extending from sub-Hz to 200 Hz, a major challenge for an ECG readout system lies in implementing the high-pass (HP) cut-off frequency, as this translates into the realization of large time constants on-chip [1]. Although techniques exist to obtain very large time constants, such as those based on pseudo-resistors [2], they are heavily limited in both linearity and accuracy, which clearly dictates the need for alternative structures.
Proposed methodology: A structured electronic design approach based on state-space forms is proposed to develop HP Σ∆ converters targeting high accuracy of the HP cut-off frequency. Based on transfer function calculations, various specific HP Σ∆ topologies, namely biquad, observable and controllable canonical, and orthonormal HP Σ∆, can be made to satisfy the desired HP signal transfer with 2nd order noise-shaping. In order to establish the noise contributions of the integrators, the intermediate transfer functions, viz. from the system input to the integrator outputs and from the integrator inputs to the system output, are mathematically derived and evaluated.
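In generic state-space notation (a sketch only; continuous-time notation is used here for brevity and the symbols are assumptions, not the paper's), the topologies above are different realizations (A, B, C, D) of the same signal transfer function, while the intermediate transfer functions to the integrator states differ per realization and determine each integrator's noise contribution:

    \dot{x} = A\,x + B\,u, \qquad y = C\,x + D\,u
    H(s) = C\,(sI - A)^{-1} B + D \quad \text{(signal transfer, identical for all realizations)}
    H_{x_i}(s) = e_i^{T} (sI - A)^{-1} B \quad \text{(system input to the } i\text{-th integrator state)}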
Results: The evaluation of the intermediate transfer functions shows that the orthonormal topology is better than the observable canonical HP Σ∆ topology in terms of noise. Simulations conducted in MATLAB confirm the noise behaviour of the integrators and show that, apart from the first integrator, the HP integrator contributes significantly to the total noise. Secondly, the noise and the harmonics at higher frequencies from the HP integrator are low-pass filtered. A 2nd order orthonormal HP Σ∆ modulator with a sampling frequency of 128 kHz for a bandwidth of 1-200 Hz, to be implemented in 0.18 μm technology, achieves a resolution of 12 bits at the HP cut-off frequency of 1 Hz, which is a major improvement over pseudo-resistors at the cost of higher area and power consumption. A robust, area-efficient and parasitic-insensitive large-time-constant switched-capacitor Nagaraj integrator leads to a HP cut-off frequency realization determined solely by the ratio of capacitors, with an accuracy of up to 1%. In conclusion, we investigated HP Σ∆ topologies that can be used to realize very large time constants with high linearity and accuracy, which is a major improvement over the conventionally used topologies that employ pseudo-resistors.
Keywords: ECG, State-space forms, High-Pass Σ∆ converters, Orthonormal topology
REFERENCES
[1] R. Mohan, S. Hiseni and W. A. Serdijn, “A highly linear, sigma-delta based, sub-Hz high-pass filtered ExG readout system”, Proc. IEEE International Symposium on Circuits and Systems, Beijing, China, May 19-23, 2013.
[2] R. Harrison and C. Charles, “A low-power low-noise CMOS amplifier for neural recording applications”, IEEE Journal of Solid-State Circuits, vol. 38, no. 6, pp. 958–965, June 2003.
|
HIGH-RESOLUTION NEURAL READOUT FOR FUTURE CLOSED-LOOP COCHLEAR IMPLANTS
Ali Kaichouhi, Wouter Serdijn, Cees Bes
Abstract: Many people worldwide suffer from hearing loss. Quite a few of them can be helped with a simple hearing aid that amplifies the external sound, but many others need a more complex device that is able to substitute for the work of the damaged parts of the inner ear (cochlea). Such a device is called a cochlear implant.
A complete cochlear implant has a stimulator module as well as a readout module. A stimulator module has been designed in [1], while the main focus of this work is the implementation of the readout module. 16 readout channels are needed in parallel to record the neural response on multiple electrodes. Since the channels are equivalent, the idea is to design a readout system that allows real-time sensing of single-channel neural signals and then connect 16 of these in parallel.
While the cochlea is stimulated by the stimulator, its neural response is read by the readout module, which is a challenging task. In fact, the neural response is in the range of 10 µV, whereas the stimulus itself and the corresponding artefact can range up to 15 V, leading to a required dynamic range of 123 dB. Moreover, due to the rapid transients of the composite signal, the bandwidth extends to 300 kHz. In addition, the noise in the band of interest of the whole readout system should be lower than 1 µV. Due to its application in a cochlear implant, the readout system should consume as little power as possible.
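As a quick check of the quoted dynamic-range figure (a back-of-the-envelope calculation, not part of the original abstract), the dynamic range follows from the ratio of the largest and smallest signals to be resolved:

    \mathrm{DR} = 20 \log_{10}\!\left(\frac{15\ \mathrm{V}}{10\ \mu\mathrm{V}}\right) \approx 123.5\ \mathrm{dB}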
In this work a discrete circuit realization on a Printed Circuit Board (PCB) that meets all the mentioned specifications is designed. The first block of the readout module is an instrumentation amplifier with a very low input-referred noise (< 1 µV) and a variable gain from 0.1 to 1000. By doing so, the aforementioned dynamic range can be handled. To prevent damage to the tissue and electrodes in the cochlear implant, the input bias current is not allowed to exceed 20 pA. The output of the instrumentation amplifier needs to be processed by a microcontroller; therefore, an Analog-to-Digital Converter (ADC) is needed in between.
The designed PCB realization, together with the stimulator implemented in [1], forms a high-resolution neural stimulation and readout system, which will be used, initially, for animal experiments and paves the way to fully-integrated closed-loop cochlear implants.
|
USING ADVANCED PHOTOPLETHYSMOGRAPHY ANALYSES TO STUDY SLEEP ARCHITECTURE IN INSOMNIA
Marina-Marinela Nano, Rik Vullings, Pedro Fonseca, Sebastiaan Overeem, Ronald Aarts
Abstract: Difficulties initiating or maintaining sleep are very prevalent sleep complaints in the general population. If sleeplessness is severe, chronic and leads to daytime consequences, the term insomnia disorder is used. Multinational studies that used the Diagnostic and Statistical Manual of Mental Disorders IV (DSM IV) criteria reported prevalence rates of insomnia disorder that range from 3.9% to 22.1%, with an average of approximately 10% [1]. Currently, diagnosis is mostly based on subjective symptoms, sleep diaries, and questionnaires. Objective measures of relevant physiology, obtained from e.g. photoplethysmography (PPG) and electrocardiography (ECG), may provide new insights into the mechanisms underlying the heterogeneity in the insomnia population, by enabling long-term home based assessment of sleep structure, including arousals.
While polysomnography is the gold standard for sleep staging, measures based on recording respiratory rates (RR) and/or heart rate variability (HRV) have also been used for sleep architecture characterization. These measures can be used to make a broad distinction of human sleep into Rapid Eye Movement (REM) and non-REM sleep. In healthy subjects, cardiorespiratory recordings may even enable sleep staging with relatively high performance [2]. However, it is unknown whether this holds true for insomniac subjects, who deal with difficulties in sleep maintenance and/or falling asleep.
In this study, we will use wrist-worn PPG measurements in a longitudinal follow-up of a large cohort of clinically well-characterized insomnia patients. This technology enables long-term, non-invasive and inexpensive assessment of cardiovascular autonomic control in a home environment. We will use different analysis methods, such as linear and non-linear analysis of HRV during different sleep stages, to provide information on the autonomic changes that characterize the wake-to-sleep transition, sleep onset, and different sleep stages in different insomnia sub-types. Additionally, alterations in the HRV characteristics might reflect arousals. Besides HRV, the morphology of the PPG will be studied to identify possible cardiovascular features informative of sleep architecture in insomnia disorders (e.g. the amplitude of the PPG wave as a surrogate for stroke volume, or the vertical position of the dicrotic notch as an indicator of vasomotor tone).
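For illustration only (a sketch with hypothetical input data and names, not the study's actual analysis pipeline), two standard time-domain HRV measures can be computed from the inter-beat intervals derived from the PPG pulse peaks:

    # Illustrative sketch: basic time-domain HRV measures from inter-beat intervals.
    import numpy as np

    def hrv_time_domain(ibi_ms):
        """ibi_ms: 1-D array of inter-beat intervals in milliseconds."""
        ibi = np.asarray(ibi_ms, dtype=float)
        sdnn = np.std(ibi, ddof=1)                      # overall variability
        rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))     # short-term (vagally mediated) variability
        return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd}

    # Example with a synthetic series of intervals around 850 ms
    rng = np.random.default_rng(0)
    print(hrv_time_domain(850 + 30 * rng.standard_normal(300)))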
|
INFRARED FOR ESTIMATION OF RELATIVE FEET DISTANCE
Mohamed Irfan Mohamed Refai, Bert-Jan van Beijnum, Peter Veltink
Abstract: Instrumented Force Shoes™ (Xsens) were used in the INTERACTION project to monitor gait and balance measures in stroke subjects [1]. The drift in foot position estimation by Inertial Measurement Units (IMUs) is corrected using Ultrasound (US), which offers relative feet distance estimation [2]. However, the US system suffers from limitations such as the required synchronization between the transmitter and receiver modules placed on either foot, and sensitivity to ambient temperature.
Other sensor systems, including stereo-photogrammetry, LIDAR, and magnets, have been studied for relative feet distance estimation [3], [4]. However, they suffer from limitations such as limited portability and the need for reference systems.
Reflective methods using Infrared (IR) systems for distance sensing do not suffer from the above limitations and are also portable. However, studies using them either incorporate heavy systems or show large mean errors [5]. Therefore, a better distance estimation model for IR systems is required.
In this study, an IR system (ZX Distance Sensor) is used. The sensor provides the location of the reflecting surface in 2-D. The sensor is placed on one foot and a reflective tape is used on the other. Simultaneously, distance is estimated using a US system. These distances and the outputs of the IR system are used to obtain a model that relates the two. The study is performed for stationary and walking cases, and the accuracy of the resulting models will be evaluated.
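A minimal sketch of such a calibration step (the data values, polynomial order and variable names below are illustrative assumptions, not results from this study) could fit a low-order polynomial mapping the raw IR output to the US reference distance:

    # Illustrative sketch: least-squares calibration of IR readings against US reference distances.
    import numpy as np

    ir_raw = np.array([ 40.,  80., 120., 160., 200., 240.])   # IR sensor readings (a.u.)
    us_ref = np.array([0.10, 0.18, 0.27, 0.37, 0.48, 0.60])   # US reference distance (m)

    coeffs = np.polyfit(ir_raw, us_ref, deg=2)   # low-order polynomial model
    model = np.poly1d(coeffs)

    print(model(150.0))                          # estimated distance for a new IR reading
    residuals = us_ref - model(ir_raw)
    print(np.sqrt(np.mean(residuals ** 2)))      # RMS calibration error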
The study is part of project 7 of NeuroCIMT, funded by the Dutch Technology Foundation STW.
REFERENCES
[1] B. Klaassen, B.-J. F. van Beijnum, M. Weusthof, D. Hof, F. B. van Meulen, Ed Droog, H. Luinge, L. Slot, A. Tognetti, F. Lorussi, R. Paradiso, J. Held, A. Luft, J. Reenalda, C. Nikamp, J. H. Buurke, H. J. Hermens, and P. H. Veltink, “A Full Body Sensing System for Monitoring Stroke Patients in a Home Environment,” Commun. Comput. Inf. Sci., vol. 511, pp. 378–393, 2016.
[2] D. Weenk, D. Roetenberg, B.-J. F. van Beijnum, H. Hermens, and P. H. Veltink, “Ambulatory Estimation of Relative Foot Positions by Fusing Ultrasound and Inertial Sensor Data.,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 4320, no. c, pp. 1–10, 2014.
[3] D. Roetenberg, P. Slycke, A. Ventevogel, and P. H. Veltink, “A portable magnetic position and orientation tracker,” vol. 135, no. 2, pp. 426–432, 2007.
[4] M. Teixidó, T. Pallejà, M. Tresanchez, M. Nogués, and J. Palacín, “Measuring oscillating walking paths with a LIDAR,” Sensors, vol. 11, no. 5, pp. 5071–5086, 2011.
[5] T. N. Hung and Y. S. Suh, “Inertial sensor-based two feet motion tracking for gait analysis.,” Sensors (Basel)., vol. 13, no. 5, pp. 5614–5629, 2013.
|
THE DESIGN OF SUPERELASTIX – A UNIFYING FRAMEWORK FOR A WIDE RANGE OF IMAGE REGISTRATION METHODOLOGIES
Floris Berendsen, Kasper Marstal, Stefan Klein, Marius Staring
Abstract: Image registration is a fundamental task in medical image processing and analysis. The objective of image registration is to find the spatial relationship between two or more images. Typically, intensity-based registration methods are formulated as an optimization problem: a transform describes the geometric mapping from one image to the other, a metric determines the dissimilarity between the images, and an optimizer searches for the transform giving the highest similarity between the images. In the last decades, numerous image registration methods and tools have emerged from the research community, diverging over various mathematical paradigms such as parametric versus diffeomorphic registration and continuous versus discrete optimization. Furthermore, the implementations of these methods are scattered over a plethora of toolboxes, each with their own interface, limitations and modus operandi. Given an application, it is therefore difficult to rigorously compare different registration paradigms as well as different implementations of the same paradigm.
To enable researchers and developers to select the appropriate method for their application, we propose a unifying registration toolbox with a single high-level user interface. To avoid the large effort of refactoring existing code bases into a single software class hierarchy, we adopt a role-based software design in which components of various C++ source code bases can co-exist. These components are typically metrics, transforms, optimizers, etc., which may or may not be compatible across paradigms depending on their mathematical and/or software definitions.
To manage all possible types of component collaboration, all roles are maintained in an extensible collection of interface definitions. Any component in our toolbox must be defined in terms of one or more of these interfaces. From a user-defined network of user-selected components, a large diversity of registration methods can be constructed. A generic handshake mechanism checks the compatibility of the component interfaces and provides the user with feedback, as sketched below.
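The handshake idea can be pictured as follows (a conceptual illustration in Python, whereas the toolbox itself is written in C++; all class and interface names are hypothetical and do not correspond to the SuperElastix API):

    # Conceptual sketch of role-based components and an interface handshake.
    class MetricValueInterface:            # a "role" a component can provide or require
        def get_value(self): ...

    class TransformInterface:
        def transform_point(self, p): ...

    class Component:
        provides = ()                      # interfaces this component implements
        requires = ()                      # interfaces it expects from its neighbours

    class MeanSquaresMetric(Component):
        provides = (MetricValueInterface,)
        requires = (TransformInterface,)

    class BSplineTransform(Component):
        provides = (TransformInterface,)

    def handshake(consumer, provider):
        """Check whether `provider` offers every interface `consumer` requires."""
        missing = [i for i in consumer.requires if i not in provider.provides]
        if missing:
            raise TypeError(f"incompatible components: missing {missing}")
        return True

    handshake(MeanSquaresMetric(), BSplineTransform())   # passes: the required role is provided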
This design allows the embedding of other code bases at various levels of granularity, i.e. ranging from generic components with a single task to full registration methods as monolithic components.
We demonstrate the viability of our design by incorporating two paradigms from different code bases, that is, the parametric B-spline registration of elastix and the diffeomorphic exponential velocity field registration of the ITKv4 code base. The implementation is done in C++ and is available as open source. The progress of embedding more paradigms can be followed via https://github.com/SuperElastix/SuperElastix
|
NON-ASSEMBLY 3D PRINTED MECHANISMS: PLANS
Juan Cuellar, Paul Breedveld, Amir Zadpoor, Dick Plettenburg
Abstract: Fabrication of complex and multi-articulated mechanisms is often seen as a time-consuming and demanding process. Conventional manufacturing techniques are constrained to producing simple mechanisms, thus requiring complex assembly procedures to construct multi-articulated mechanisms. For that reason, the development of functional multi-articulated mechanisms that can be fabricated in a single step, without the need for post-assembly, is very attractive.
Many research groups have approached this problem by adopting additive manufacturing (AM) techniques as the most feasible solution. This manufacturing method creates 3D constructs by sequentially adding material layer by layer. The advantages of this method are numerous, but most importantly, it enables the fabrication of structures with complex geometries without requiring specialized manufacturing skills or labour-demanding procedures. The versatility of AM techniques is the core motivation for a thorough change in the current way of designing and constructing complex mechanisms.
In fact, several groups have achieved successful fabrication of non-assembly mechanisms with different AM techniques. Overall, the mechanisms were successfully created with good kinematic characteristics and satisfactory performance. Nevertheless, the mechanisms produced were created for very specific tasks and occasionally required extra procedures during their conception. Some of these procedures can be equally or even more complicated than traditional manufacturing procedures. Additionally, the operational principles of each AM technique bring different drawbacks during the fabrication of these non-assembly mechanisms, consequently narrowing the spectrum of achievable mechanism complexity. It is therefore important to understand each AM technique, to exploit its potential and address its shortcomings, in order to recognize to what extent mechanism complexity is feasible for single-step fabrication.
We envision creating fully working medical devices in situ in a single fabrication step. Therefore, we plan to review the applications of AM techniques in the construction of non-assembly mechanisms and to discuss the challenges involved, thus providing perspective on the advantages and limitations of current AM techniques in the production of complex mechanisms.
|
PARAMETRIC ANALYSIS OF MICROMACHINING PIEZOELECTRIC CERAMIC USING A PULSED Yb:KGW LASER
Verya Daeichin, E. Noothout, R. Vos, Koen van Dongen, Martin Verweij, Nicolaas de Jong
Abstract: In the field of medical ultrasound, transducer active materials (e.g. piezoelectrics) are typically cut using a diamond saw. However, with the increasing complexity of the geometry of matrix transducers and the application of very thin piezoelectric ceramics (< 100 µm) for high-frequency ultrasound (> 15 MHz), ultraviolet (UV) pulsed laser micromachining has been proposed [1, 2]. In this study, the effects of laser parameters such as laser fluence, laser pulse density, direction of the cut and sample velocity on the ablation rate and surface morphology are systematically investigated and will be discussed. Control over the ablation rate and the shape of the cut (kerf) is of great importance in the fabrication of medical ultrasound matrix transducers, where the piezoelectric layer is mounted directly on top of a substrate accommodating the electronics.
A Yb:KGW laser (fundamental wavelength λ = 1030 nm) was operated at its third harmonic (343 nm, UV) with a pulse duration of less than 300 femtoseconds for ablation of piezoelectric lead zirconate titanate (PZT) ceramics (PXE5 with a thickness of 600 µm) in air. The pulse density was varied from 1,000 to 10,000 pulses per millimetre (p/mm) in steps of 1,000 pulses. The laser fluence was varied by increasing the laser input power from 1 W to 8.5 W, which resulted in a maximum output power of 1 W for the third harmonic at 10 W input power. It was observed that the ablation rate reached a saturation level at 7,000 p/mm. Keeping the pulse density at its maximum (10,000 p/mm), the ablation rate increased linearly with increasing laser fluence.
REFERENCES
[1] R. Farlow, W. Galbraith, M. Knowles, G. Hayward, “Micromachining of a piezocomposite transducer using a copper vapor laser”, IEEE Trans. Ultrasonics, Ferroelectrics, and Frequency Control 48, 639 (2001)
[2] D.W. Zeng, K. Li, K.C. Yung, H.L.W. Chan, C.L. Choy, C.S. Xie, “UV laser micromachining of piezoelectric ceramic using a pulsed Nd:YAG laser”, Applied Physics A February 2004, Volume 78, Issue 3, pp 415–421.
Figure 1. Parametric analysis of laser parameters (pulse density, direction of the cut and sample velocity) on depth of cut in PZT material (PXE5, 600 µm thick). a) Effect of pulse density in horizontal cuts with maximum input power of 8.5 W and sample velocity of 1 mm/s; b) effect of pulse density in vertical cuts with maximum input power of 8.5 W and sample velocity of 1 mm/s; c) effect of pulse density at a sample velocity of 0.05 mm/s in vertical cuts and maximum output power of 8.5 W. The inset images are microscopic pictures in each series for the maximum pulse density of 10,000 pulses per millimetre. On each boxplot, the central mark is the median and the edges of the box are the 25th and 75th percentiles.
|
ADJUSTABLE DEVICES FOR THE PREVENTION OF DIABETIC FOOT ULCERS
Roy Reints, Juha Markus Hijmans, Klaas Postema, Bart Verkerke
Abstract: Up to 25% of all patients with Diabetes Mellitus (DM) will develop foot ulcers, which can eventually lead to amputation of the affected foot [1]. Diabetic foot ulcers mainly develop at the forefoot and first toe due to changes in foot structures that lead to elevated pressures at these sites [2]. Patients with DM who have developed peripheral neuropathy are at especially high risk, as protective sensation is reduced [2].
To prevent ulcer development, peak plantar pressures (PP) acting on the foot should not exceed 200 kPa [3]. Pressure maps, measured in a laboratory setting, are commonly used to prescribe pressure-redistributing footwear like rocker profile shoes or customized insoles. However, in daily life the foot changes rapidly, resulting in inadequate offloading by the prescribed footwear. Therefore, our project aims to develop a plantar pressure monitoring system for patients with DM that provides feedback on how to prevent high plantar pressures, and to create an in- and outsole that adapt to high plantar pressures instantaneously.
One of our concepts is a rocker profile shoe that can be adjusted to the measured plantar pressure changes within seconds. Rocker profiles are outsole modifications that change the roll-over point, or apex, of a shoe. The effects of several design variables of a rocker profile, such as apex position and apex angle, on plantar pressure have been described before [4]. However, the difference between rigid and flexible rocker profiles in terms of plantar pressure is still unknown. Knowledge of this difference is essential for the design of an adjustable rocker profile shoe.
Therefore, a study was started to gain insight into the differences between rigid and flexible rocker profiles. The results show that, compared to flexible rockers, rigid rockers give a larger reduction of PP for the central and lateral forefoot, while higher PP were found for the first toe. In future studies we will determine the effect of flexible rockers with different apex angles, as the apex angle in this study was kept the same across all conditions.
|
TOWARDS A FAMILY OF CUSTOMISABLE FLEXIBLE NEURAL IMPLANTS
Vasiliki Giagka, Wouter Serdijn
Abstract: Since the appearance of the first active implantable device, a cardiac pacemaker, in 1958, technology improvements have paved the way for the development of several diagnostic or therapeutic devices that target a large variety of applications (e.g. hearing loss, bladder control, chronic pain). More recently, it has been proposed that precise electrical impulses targeting individual nerve fibres or specific brain regions could be used in a fashion analogous to pharmaceuticals, repairing lost function and restoring health [1]. These bioelectronic medicines, or electroceuticals, would have to be administered through miniature devices close to their targeted nerves.
Today, most implantable devices are still inherently hybrid systems, comprising a variety of components (i.e., electrodes, integrated circuits for electrical stimulation and recording of signals, decoupling capacitors, micro-controllers, batteries and a number of discrete components) that are typically individually fabricated and assembled together. These hybrid systems tend to be rather large for neural applications. Furthermore, these systems are normally custom designed for the targeted application and their structure and functionality are tailored to the specific requirements. This approach involves a massive design effort early on in the development process and implies many iterations and long prototyping times, usually even before the first proof-of-concept phase.
Neuromodulation systems share many common characteristics which can be grouped into families or “libraries” of components. We suggest that a co-ordinated effort for the development of such “libraries” would enable the desired miniaturisation and integration that is currently lacking. Towards such a technology library we propose the development of a platform technology on flexible and biocompatible materials. This platform will be used for the design and fabrication of unit devices, modules, with a minimum channel count for electrical and optogenetic stimulation, recording of neural signals, and the possibility of scaling the number of channels to fulfil different requirements. Custom designed integrated circuits will be developed and assembled on flexible substrates to meet the aforementioned goals and create a family of “universal implantable devices”. The characteristics of prototype implants will be customisable by selecting from a pre-defined set. Such devices could be used during the proof-of-concept phase, saving design time before the specifications are finalised for the final implant. The ambitious goal is to concentrate and combine all efforts to develop the technology suitable to fabricate a set of “universal implantable devices”.
|
CEREBELLUM-BASED CONTROL ARCHITECTURES FOR BIOLOGICALLY INSPIRED ROBOTS
Shravan Tata Ramalingasetty, Çağatay Soyer, Pieter Jonker
Abstract: Robotics has evolved since its inception and has found major applications in industry. With the recent demand for medical and social robots, robots are now also required to be flexible, compliant and adaptable. Meeting these requirements is a challenging problem for the robotics community. Primates, on the other hand, seem to be excellent at such tasks and possess the required attributes to perform in complex dynamic environments. The fine motor skills observed in vertebrates require the involvement of several brain regions working together to achieve the desired motion. While brain studies have helped to identify these regions and their general role, it has been significantly more difficult to understand the mechanisms and the overall system architecture that give rise to the coherent and coordinated movements that humans and other vertebrates can perform with ease.
Among the several brain centers involved in motion control, the cerebellum is understood to contribute to fine motor skills, adaptation, learning new skills, and maintaining body posture and balance [1]. However, there is not yet consensus among the vast number of existing theories on the computations responsible for the cerebellum's functionality. This work presents a discussion of existing theories and computational models of the cerebellum, and experiments on cerebellum-based control architectures for biologically inspired robots [2].
The cerebellum itself is modelled using a structurally similar firing-rate-based neural network model [3]. This cerebellum model is accompanied by a classical proportional-derivative (PD) controller representing the primary motor control circuits in the brain. The work then focuses on formulating five different architectures combining the cerebellum and the primary controller in order to generate human-like motor control behavior. The controllers are tested in a simple DC motor position control task and a more complex Vestibulo-Ocular Reflex (VOR) task.
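One way to picture such a combination (a minimal sketch only; this is not one of the five architectures studied here, and all plant parameters, gains and the learning rule are assumptions) is a PD controller in parallel with an adaptive feedforward module trained on the tracking error:

    # Illustrative sketch: PD control of a crude DC-motor model plus an LMS-adapted feedforward term.
    import numpy as np

    J, b, dt = 0.01, 0.1, 0.001                  # inertia, damping, time step
    Kp, Kd, eta = 5.0, 0.2, 0.02                 # PD gains, adaptation rate
    w = np.zeros(2)                              # adaptive ("cerebellum-like") feedforward weights

    theta, omega = 0.0, 0.0
    for k in range(20000):
        t = k * dt
        ref, ref_dot = np.sin(t), np.cos(t)      # desired position and velocity
        e, e_dot = ref - theta, ref_dot - omega

        u_fb = Kp * e + Kd * e_dot               # primary (PD) controller
        phi = np.array([ref, ref_dot])           # features driving the adaptive module
        u_ff = w @ phi                           # learned feedforward command
        u = u_fb + u_ff

        w += eta * e * phi * dt                  # LMS-like weight update from the tracking error

        omega += dt * (u - b * omega) / J        # motor dynamics (Euler integration)
        theta += dt * omega

    print(abs(np.sin(20000 * dt) - theta))       # final tracking error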
Initial results show that certain architectures can serve as acceptable models for cerebellum-based control, demonstrating the known contributions of the cerebellum, such as learning and adaptability. The results also give rise to critical questions on the role of the cerebellum, shortcomings of the existing learning rules and more.
|
IMMEDIATE DESIGN TO CARE
Linda Wauben, Ronald van Gils, Onno Helder
Abstract: Most research to improve the safety of patients, as well as the safety of healthcare providers, is performed as scientific research. Besides scientific research there is also a need for more applied research. By combining applied research with user-centred design, we can study everyday challenges, e.g. by studying workarounds. In close collaboration with the users (patients or healthcare providers) we can design solutions that immediately solve the problem. The aim of the Bachelor course ‘Healthcare Technology’ was to study the challenges patients and healthcare providers have to deal with and to design solutions that fit their needs.
The course ‘Healthcare Technology’ brought together students from the school of Healthcare (Nursing, Occupational Therapy, Physiotherapy), the school of Engineering and Applied Science (Industrial Product Design, Healthcare Technology) and the school of Communication, Media and Information Technology (Media Technology). Each student group (consisting of 2 or 3 students) worked closely together with patients or healthcare providers. In some cases a technology provider (a business partner) was also involved.
The students worked according to the principles of ‘scrum sprints’. This meant that they studied the challenges in the care domain, designed a solution together with the users and generated a new prototype every two weeks. The results were presented bi-weekly to the other students and lecturers, and the prototypes were presented to and tested with the users. Based on the test results, the next iteration started. In total four ‘sprints’ were performed.
In total, 19 different challenges were studied and practical solutions were designed. Some examples include: Nudging in the Supermarket - a Healthy Route for Diabetes Patients; Virtual Reality as a Training Tool for Acute Care; Who’s Visiting - a Self-management Tool for Elderly Living at Home; a Redesign for Holding Feeding Tubes beside the Incubator; a Cheap Laryngoscope for Low-Income Countries; and Breaking Glass Ampoules Safely.
The projects will be finished in November. However, some of the solutions have already been implemented in the hospital. During the conference we will present the results of the projects.
Acknowledgement: We thank all the students and lecturers/researchers involved in the various research and design projects: Jan Arndts, Bart de Jong, Vera Maan, Raoul Nielen, Maike van Offeren, Lia Sterkenburg, Emile Verhage, Joachim van der Weegen, Anne van Woezik and Rob Zoeteweij.
|
TOWARDS PERSONALIZED MODELLING FOR CARDIAC MECHANICS
Luca Barbarotta, Frans van de Vosse, Peter Bovendeerd
Abstract: Clinical examinations for heart-related pathologies often provide only indirect information to clinicians. The decision-making process is based upon evidence-based, qualitative interpretation of those examinations.
Traditional and new imaging methodologies have increased the amount of information that can be retrieved for a patient. However, this information is not yet included in clinical practice because its quantification and interpretation are not straightforward.
The aim of this project is to design a tool which extracts information, such as the stiffness and contractility of the myocardium, translates it into a patient-specific mathematical model and provides a model-based interpretation of that information in order to assist clinicians during diagnosis and therapy selection.
For this purpose, personalization algorithms are applied to generic finite element models to translate complex three-dimensional results into meaningful quantities. However, a preliminary sensitivity analysis is needed to understand which model components must be treated generically and which must be personalized.
The use of personalized geometries is quite established, but the level of detail needed to actually improve the computation must still be studied [1]. Furthermore, it is well known that cardiac mechanics is strongly affected by the fibre configuration within the myocardium [2], which is very difficult to measure in patients and subject to large measurement errors; consequently, a generic model-based fibre configuration is chosen.
Therefore, the first stage of our research will involve a sensitivity study concerning the use of personalized geometries and the quantification of the uncertainty related to fibre measurements in patients. The first analysis will show us which approach to consider for the geometry, while the latter will tell us how to deal with fibre-related uncertainty and might help the development of remodelling models [3] as a possible alternative to difficult and costly procedures such as DT-MRI to prescribe the fibre configuration. Then, we will apply different approaches found in the literature to build the personalization tool. We will exploit [4] to identify model parameters with a strong effect on the output and then apply inverse analysis to extract information from images by means of the reduced-order unscented Kalman filter [5].
REFERENCES
[1] Geerts, L., et al., International Workshop on Functional Imaging and Modeling of the Heart, Springer Berlin Heidelberg, (2003).
[2] Bovendeerd, P.H.M., et al, J. Biomech., Vol 25.10, pp. 1129-1140, (1992).
[3] Kroon, W., et al., Med. Image Anal., Vol 13. pp. 346-353 (2009).
[4] Donders, W., et al., Int. J. Numer. Method. Biomed. Eng., Vol 31.10, (2015).
[5] Marchesseau, S., et al., Biomech. Model. Mechanobiol., Vol 12.4, pp. 815-831, (2013).
|
A FIRST STEP IN SIMULATING A MECHANICALLY SUPPORTED HEART: MODEL IMPLEMENTATION AND VERIFICATION
Eric Chen, Frans van de Vosse, Peter Bovendeerd
Abstract: Numerical simulations, while not a complete substitute for physical (in-vivo, ex-vivo, and in-vitro) experiments, can provide insight into local mechanical properties of the heart muscle during the cardiac cycle. One study performed by Bovendeerd [1] shows the importance of the transmural distribution of myocardial fibers for determining ventricular shear strain and, by extension, stress. Another study [2] uses stress as a driving factor to model the phenomena of cardiac growth and remodeling. A commonality between these two studies is that stress is considered an important and/or driving factor of the underlying mechanics and physiology of the heart.
In this ongoing project, we seek to examine the relationship between wall stress in hearts mechanically supported by left ventricular assist devices (LVADs) and LVAD parameters (e.g., pump volume). This topic is of particular interest due to studies [3] showing that a small, but non-zero, fraction of patients implanted with LVADs showed a recovery in cardiac output significant enough that LVAD explantation was recommended and performed. To better understand the mechanical loading conditions that promote cardiac recovery, a new finite element method based simulation program is being created in-house.
As a first step, we present the implementation and verification steps taken to ensure that our program produces results that are reasonably error-free in terms of numerical accuracy. Two constitutive models, one transversely isotropic and one orthotropic, representing the structure of myocardium were implemented and benchmarked. Benchmark tests include: uniaxial compression and extension, (simultaneous) biaxial compression and extension, simple shear in one plane, (simultaneous) biaxial shear in two planes, a pre-load and after-load experiment, inflation of a thick-walled sphere, and finally, the passive filling of an isolated, idealized left ventricle. Additionally, convergence tests with different mesh densities were performed to determine the minimum mesh quality required for sufficiently accurate results. Finally, we concluded this study by comparing the use of hexahedral and tetrahedral meshes for thin-walled regions (e.g., the apex of the left ventricle).
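As an example of the class of material laws involved (shown purely for illustration; the abstract does not state which specific constitutive formulations were implemented), a commonly used transversely isotropic description of passive myocardium is the Guccione-type exponential strain energy:

    W = \frac{C}{2}\left(e^{Q} - 1\right), \qquad
    Q = b_f E_{ff}^2 + b_t\left(E_{ss}^2 + E_{nn}^2 + E_{sn}^2 + E_{ns}^2\right) + b_{fs}\left(E_{fs}^2 + E_{sf}^2 + E_{fn}^2 + E_{nf}^2\right)

Here the E_{ij} are Green-Lagrange strain components in fibre (f), sheet (s) and sheet-normal (n) coordinates, and C, b_f, b_t and b_{fs} are material parameters.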
|
LÖWNER BASED RESIDUAL WATER SUPPRESSION IN MAGNETIC RESONANCE SPECTROSCOPIC IMAGING
Halandur Nagaraja Bharath, Sabine Van Huffel, Diana Sima
Abstract: Introduction: Magnetic resonance spectroscopic imaging (MRSI) signals are often corrupted by residual water and artefacts. Residual water suppression plays an important role in accurate and efficient quantification of metabolites from MRSI. In general, residual water is suppressed in a pre-processing step using a filter-based method [1] or a subspace-based method [2]. Tensorizing the matrix and using suitable tensor decompositions provides certain advantages [3]. In this work, a tensor-based algorithm was developed to suppress the residual water simultaneously in all voxels of the MRSI signal.
Methods: The spectrum in each voxel is modelled as a sum of single-pole rational functions. For each voxel in the MRSI grid, a Löwner matrix is constructed from its spectrum. A 3-D tensor is constructed by stacking the Löwner matrices from all MRSI voxels along the third mode. Canonical polyadic decomposition (CPD) is then applied to the tensor to extract the individual rational functions. The parameters of the rational functions, containing the resonance frequencies and damping factors, are estimated from the mode-1 and mode-2 factor matrices obtained from the CPD using least squares. The water component is constructed using only those sources whose resonance frequency lies outside the region of interest (0.25 - 4.2 ppm). Finally, the water is suppressed by subtracting the water component from the measured MRSI spectra. The method is further improved by adding polynomial sources to model the water component along with the rational functions.
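For reference (standard notation, not copied from this abstract), a Löwner matrix built from a spectrum f(·) sampled at two disjoint point sets {s_i} and {t_j} has entries

    L_{ij} = \frac{f(s_i) - f(t_j)}{s_i - t_j}, \qquad i = 1,\dots,I, \quad j = 1,\dots,J

A spectrum that is (approximately) a sum of K single-pole rational functions then yields a Löwner matrix of (approximately) rank K, which is the low-rank structure exploited by the CPD of the stacked tensor.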
Results and Conclusion: The performance of the proposed method is tested using both simulated and in-vivo 1H MRSI signals. In the first simulation a simple water model is used; here the Löwner method performs better than the widely used subspace-based HLSVD method, which works on a Hankel matrix from one spectrum at a time. In the second simulation the water signal is distorted using a decaying exponential; here the basic Löwner method introduces some baseline. The Löwner method with polynomial sources can handle this problem and performs better than HLSVD. On the in-vivo 1H MRSI data the basic Löwner method and HLSVD had similar performance; however, the Löwner method with polynomial sources performed significantly better in suppressing the residual water signal.
|
SENSORY SUBSTITUTION TO ENHANCE BALANCING OF SPINAL CORD INJURY SUBJECTS USING AN EXOSKELETON
Heidi Muijzer-Witteveen, Nevio Tagliamonte, Marcella Masciullo, Edwin van Asseldonk, Herman van der Kooij
Abstract: The Symbitron project aims at the development of an innovative exoskeleton for people with spinal cord injury (SCI). The main features of this exoskeleton will be that it is tailor-made for each SCI test pilot, provides human-like behaviour, enables bidirectional man-machine interactions and can be used without crutches. For bidirectional interactions, the sensory pathways should also be taken into account. With artificial feedback about the lost sensations, the exoskeleton pilot will be able to better control the exoskeleton and will also feel more in control, which will likely enhance the embodiment and acceptance of the exoskeleton.
One of the reasons why people with SCI have to use crutches to keep balance with an exoskeleton might be the lack of sensory input from below the level of the lesion [1]. In this pilot study we have evaluated the effect of added sensory feedback (vibrotactile stimulation) about the Center of Mass (CoM) on the performance of SCI test pilots in a balance task.
Two SCI test pilots (lesion levels L3 and C7, ASIA score D) were perturbed, with two different amplitudes, at the ankle and pelvis level while wearing an ankle exoskeleton. The CoM movement was fed back to the subject via two C2 tactors (front and back) or via 8 coin motors (four on the front and four on the back) placed at the shoulder region of the subject. The CoM movement was divided into four discrete levels (for each direction), represented by four different stimulation amplitudes of the C2 tactor or by activation of one of the four coin motors. The subjects were asked to return to their neutral upright position as quickly as possible after each perturbation.
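A minimal sketch of this encoding (the thresholds, amplitudes and names below are illustrative assumptions, not the values used in the study) could look as follows:

    # Illustrative sketch: quantise the CoM excursion into four levels per direction,
    # driving either the C2 tactor amplitude or one of four coin motors.
    import numpy as np

    LEVEL_EDGES_CM = [1.0, 2.0, 3.0, 4.0]     # hypothetical CoM excursion thresholds

    def com_to_level(com_cm):
        """Return (direction, level 1-4) for an anterior(+)/posterior(-) CoM shift."""
        direction = "front" if com_cm >= 0 else "back"
        level = int(np.searchsorted(LEVEL_EDGES_CM, abs(com_cm))) + 1
        return direction, min(level, 4)

    def c2_amplitude(level, amplitudes=(0.25, 0.5, 0.75, 1.0)):
        return amplitudes[level - 1]          # one vibration amplitude per level

    def coin_motor_index(level):
        return level - 1                      # one of four motors per side

    print(com_to_level(2.4), c2_amplitude(2))  # e.g. ('front', 3) and 0.5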
A trial of 20 perturbations was performed with both feedback methods (C2 and coin motors) and with the eyes closed (EC) and eyes open (EO). The order of trials was randomized and before and after the feedback trials two trials without feedback (EO & EC) were performed.
The time needed by the subjects to reach the neutral position after a perturbation and the number of oscillations around that position were calculated for each subject for all conditions. It was shown that without vibrotactile feedback one of the subjects had difficulty returning to his neutral position and sometimes did not return to this position at all after a perturbation. The other subject showed a clear decrease in response time with both feedback methods.
These first pilot tests have shown that it is possible to provide sensory information back to the user of an exoskeleton, who can interpret it and use it to balance better with the exoskeleton. Based on the results of this study, we will implement the feedback methods in the next versions of the exoskeleton, which also include knee and hip modules. Furthermore, sensory feedback will be provided during walking with the exoskeleton.
|
PERIOPERATIVE VOLUME STATUS MODELLING TO PREVENT HYPO- AND HYPERVOLEMIA IN MAJOR ABDOMINAL SURGERY
Tilaï Rosalina, Wouter Peeters, Arthur Bouwman, Erik Korsten, Rick Bezemer, Marc van Sambeek, Frans van de Vosse, Peter Bovendeerd
Abstract: Major (abdominal) surgery is often associated with severe blood loss, which is compensated by intravenous fluid administration during and after the surgery. Fluid administration is necessary to prevent hypovolemia, which carries the risk of hypoperfusion and multi-organ failure [1]. However, there is much debate about the optimal type, amount and manner of fluid administration. Currently, most research concerns the amount of fluid administered, since it has become clear that hypervolemia (fluid overloading) can cause complications such as edema formation and kidney failure [2]. This problem is especially relevant for the elderly population, who have a compromised regulatory system. During and after major surgery it is hard to quantify the fluid status, due to fluid shifts from the intravascular to the extravascular space. Moreover, there are no non-invasive techniques available to measure volume status; hence, we are dependent on indirect measurements.
The aim of this project is to better estimate the fluid status of the patient by interpreting the indirect clinical data using a mathematical model of the underlying physiology. The model must contain systems representing the fluid distribution, cardiovascular function, the pulmonary system, the renal system, regulation (baro-/chemoreflexes and humoral control) and the effect of anaesthesia.
The first stage of the project was to evaluate candidate models from the literature. No complete model combining a fluid distribution model with models of the cardiovascular and respiratory systems was found. However, the work of Gyenge et al. [3] was found to be a suitable starting point to model the fluid transport between plasma, interstitium, cellular compartments and renal excretion. The model will be extended with a cardiovascular and regulation model, as was done by e.g. Jongen et al. [4], and combined with cardiopulmonary interactions such as described by Albanese et al. [5].
Finally, to use this model as a clinical decision-support tool, a balance between model complexity and available clinical input must be found. To achieve this, a model sensitivity analysis will be performed and a continuous collaboration with clinical experts is ensured.
|
DESIGN AND EVALUATION OF A ROBOTIC NEEDLE STEERING MANIPULATOR FOR IMAGE-GUIDED BREAST BIOPSY
Ruud Spoor, Momen Abayazid, Vincent Groenhuis, Françoise Siepel, Stefano Stramigioli
Abstract: Breast cancer is one of the leading causes of cancer death in women. First signs of breast cancer are often found during the screening phase. In this phase, the breast is examined using screening techniques such as manual palpation, mammography, ultrasound and Magnetic Resonance Imaging (MRI). Suspected regions are further investigated by inserting a biopsy needle, where imaging modalities such as MRI and ultrasound are used for guidance to target the lesion and acquire tissue samples for pathology assessment. As needle biopsy under MRI guidance is difficult due to the limited amount of space inside the MRI scanner and the sensitivity to ferro-magnetic materials, ultrasound-guided biopsy is often the method of choice, due to its real-time nature, availability and lower cost. However, ultrasound detectability of tumors is still lower than that of MRI.
This study is part of a project aiming to combine both the MRI and ultrasound imaging worlds for precise robotic-assisted needle biopsy. An off-the-shelf robotic arm from KUKA (Munich, Germany) is used to steer an ultrasound probe [1][2] and needle-guide end-effector. After localization of the targeted lesion with the ultrasound transducer and merged MRI data, the needle guide is placed into the target position with the use of mechatronics. For safety reasons, a biopsy needle is manually inserted through the guide into the breast by a radiologist [3].
The aim of this research is to design and evaluate an end-effector (Figure 1) to be mounted on a robotic arm. This end-effector is composed of an ultrasound probe holder and the needle guide. The base of the end-effector interconnects the motors, probe holder and robot interface. Six set screws are used to clamp and align an ultrasound probe with the needle guide so that the controlled planar movement of the needle is visible in the ultrasound image plane. The needle guide is mechatronically manipulated through a parallel mechanism that is laser-cut out of Delrin parts. With the help of magnetic joints, the biopsy needle is released from its holder for safety when applied forces are higher than 1.5 N; this was validated with spring-scale measurements. Workspace analysis showed that the needle and guiding mechanism can handle breast sizes with diameters up to 200 mm, measured from the base of the breast. A preliminary hazard analysis was performed, and the design evaluation on a breast phantom showed that the end-effector is expected to provide safe insertion of the biopsy needle. The robotic arm provides more precise positioning of the needle on the breast surface. This helps in reducing the needle insertion path length in breast tissue and consequently minimizes the patient’s trauma. Based on the obtained results, further development of the current end-effector (prototype) is expected to result in a functional product. Validation of the end-effector developed in the current study showed promising results for biopsy procedures.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 688188.
|
THE EFFECT OF BDNF VAL66MET ON MOTOR SKILL LEARNING: A TMS STUDY
Christopher Hauwert, Zeb Jonker, Rick van der Vliet, Jos van der Geest
Abstract: The aim of our study is to investigate the effect of the BDNF (brain-derived neurotrophic factor) val66met polymorphism on motor skill learning and the underlying neurophysiological mechanisms. BDNF is a neurotrophin that is released throughout the brain in an activity-dependent manner and is thought to be involved in the strengthening of excitatory synapses and, in this way, in the excitation-inhibition balance of local networks1. Approximately 30% of the Western population has a val66met polymorphism in BDNF's prodomain, which reduces the amount of BDNF that is released from activated cells. BDNF val66met has been associated with reduced motor skill learning2, but also with improved response inhibition3. As motor skill learning is likely to depend on excitation, whereas response inhibition is likely to depend on inhibition, these opposing effects may be due to enhanced inhibitory features of the neuronal network.
The effects of the polymorphism on a behavioral level will be assessed using a motor skill learning task and a response inhibition task. The underlying neurophysiological mechanisms of this effect will be investigated using transcranial magnetic stimulation (TMS) measures.
Two TMS measurements will be used: 1. Motor map plasticity. A motor map is the cortical area from which motor evoked potentials of the target muscle can be elicited with TMS; changes in this map after training are a measure of excitatory capacity. 2. The threshold for TMS short-interval cortical inhibition (SICI) and its decrease just before movement (SICI modulation). SICI is a paired-pulse protocol that reflects the strength of GABAergic inhibition; an increase in SICI modulation is interpreted as disinhibition and is correlated with motor skill learning.
Our hypothesis is that, in line with earlier findings, the BDNF val66met polymorphism will correlate with less SICI modulation, less increase in motor map plasticity, a lower motor skill outcome, and improved response inhibition, due to an excitation-inhibition balance shifted towards inhibition.
|
PARADOXICAL EFFECT OF BDNF ON CEREBELLAR LEARNING
Zeb Jonker, Rick van der Vliet, Maarten Frens, Gerard Ribbers, Ruud Selles, Jos van der Geest
Abstract: Genetic factors are key in the emergence of personalized medicine. At present, genetic tests are used to determine the risk of disease, but in the future they could also be used to select the best treatment for individual patients.
In the field of stroke rehabilitation there is great interest in a polymorphism in Brain Derived Neurotrophic Factor (BDNF val66met). BDNF is released from activated brain cells and mediates the strengthening of excitatory synapses1. On a cellular level, the polymorphism reduces the amount of BDNF that is released from activated cells. On a behavioral level, the polymorphism is associated with impaired motor skill learning in healthy subjects2 and reduced physical recovery in hemorrhagic stroke patients3. Therefore, BDNF val66met is thought to have mainly unfavorable effects.
We performed a study in over 200 healthy subjects and found the paradoxical result that BDNF val66met actually enhances performance in eyeblink conditioning and visuomotor adaptation. Interestingly, both these tasks are mainly mediated by inhibitory connections in the cerebellum, whereas motor skill learning is mediated by excitatory connections in the motor cortex. Therefore, our future research is directed at the mechanism by which the polymorphism influences different types of learning.
Does BDNF val66met solely promote cerebellar learning, or does it cause a shift in the excitation-inhibition balance of the entire brain? When the mechanism is clear, treatments (medical or training) can be designed to nullify the unfavorable effect of the polymorphism in stroke patients. Regardless of the underlying mechanism, the finding that BDNF val66met influences different types of learning in different ways has direct implications for researchers. When working with small groups it might be sensible to control for the prevalence of this common (30% of the population) polymorphism.
|
THE INFLUENCE OF A NON-NEWTONIAN BLOOD VISCOSITY MODEL ON THE COMPUTED FRACTIONAL FLOW RESERVE
Kujtim Gashi, Rachel Motte, Tommy Hetterscheid, Marielle Bosboom, Frans van de Vosse
Abstract: Kujtim Gashi*, Rachel Motte†, Tommy Hetterscheid*, Marielle Bosboom*, Frans N. van de Vosse*
*Eindhoven University of Technology, Department of Biomedical Engineering,
Den Dolech 2, 5612 AZ, Eindhoven, the Netherlands
†Ecole des Mines Saint Etienne, Center for Biomedical and Healthcare Engineering
e-mail: k.gashi@tue.nl
Coronary heart disease (CHD) is a major cause of death in Europe, with over 1.8 million deaths per year [1]. CHD is characterised by stenoses in the coronary arteries, which can eventually reduce the myocardial perfusion [2]. The fractional flow reserve (FFR) is a measure of the functional severity of a stenosis. It is defined as the ratio between the distal and the proximal pressure and is measured invasively using a pressure wire. For approximately 30% of the patients the invasive FFR measurement results in a non-invasive treatment [3]. Hence, computed FFR is considered as a non-invasive alternative. To this end, computational fluid dynamics (CFD) models are built based on imaging data [4,5,6]. These models assume Newtonian behaviour of the blood, although blood viscosity is known to show shear thinning. In this study we investigate the effect of non-Newtonian blood viscosity behaviour on the computed FFR (FFRcom).
In total, 51 coronary vessels from patients of the Catharina Hospital were included. These coronaries, imaged using bi-plane angiography, were segmented in order to create 2D axisymmetric meshes. At the inlet a transient aortic pressure was prescribed, whereas at the outlet a three-element windkessel model was used to represent the myocardium. In order to obtain a patient-specific flow, the myocardial resistance was fitted based on the measured FFR. To study the influence of the non-Newtonian model, simulations were performed with a Carreau-Yasuda model for the blood. Two sets of parameters were used for the Carreau-Yasuda model, as shown in Table 1 [7,8]. Simulation results were compared to results from simulations with a constant viscosity of 4.4E-03 Pa s.
Table 1: Two sets of parameters used for the Carreau-Yasuda model. Set 1 corresponds to the data from Leuprecht et al.[7] whereas set 2 is obtained from Gijsen et al. [8].
          eta_0 [Pa s]   eta_inf [Pa s]   a [-]   n [-]   lambda [s]
set 1     1.6E-01        3.5E-03          0.64    0.213   8.2
set 2     2.2E-02        2.2E-03          0.64    0.392   0.11
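For reference, the Carreau-Yasuda model has the standard form eta(gamma_dot) = eta_inf + (eta_0 - eta_inf) * [1 + (lambda*gamma_dot)^a]^((n-1)/a). The short sketch below simply evaluates this expression for the two parameter sets of Table 1 to illustrate the shear-thinning behaviour; it is independent of the CFD simulations reported here.

```python
# Carreau-Yasuda viscosity as a function of shear rate, evaluated for the two
# parameter sets of Table 1 (illustration of the model only).
def carreau_yasuda(shear_rate, eta_0, eta_inf, a, n, lam):
    """eta = eta_inf + (eta_0 - eta_inf) * [1 + (lam * gamma_dot)^a]^((n - 1) / a)"""
    return eta_inf + (eta_0 - eta_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

set_1 = dict(eta_0=1.6e-1, eta_inf=3.5e-3, a=0.64, n=0.213, lam=8.2)    # Leuprecht et al. [7]
set_2 = dict(eta_0=2.2e-2, eta_inf=2.2e-3, a=0.64, n=0.392, lam=0.11)   # Gijsen et al. [8]

for gamma_dot in (1.0, 10.0, 100.0, 1000.0):  # shear rate [1/s]
    print(f"{gamma_dot:7.1f} 1/s:  set 1 = {carreau_yasuda(gamma_dot, **set_1):.4e} Pa s,  "
          f"set 2 = {carreau_yasuda(gamma_dot, **set_2):.4e} Pa s")
```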
The results showed that the non-Newtonian model predicts a higher FFRcom for both parameter sets. The FFRcom with set 1 was on average 0.016 higher than the measured FFR, with a 95% confidence interval (CI) between 0.006 and 0.026. Set 2 showed a higher FFRcom of 0.031, with a larger 95% CI between 0.012 and 0.053. For both non-Newtonian models the difference between predicted and measured FFR increased with decreasing FFR. In conclusion, the effect of non-Newtonian behaviour on the FFRcom seems to be of minor influence. A future study including uncertainty in boundary conditions and geometry, next to uncertainty in viscosity, should give more insight into the relative contribution of all parameters that could influence FFRcom.
REFERENCES
[1] http://www.escardio.org/static_file/Escardio/Press-media/press-releases/2013/EU-cardiovascular-disease-statistics-2012.pdf
[2] Stary, Herbert C., et al., Circ. 92(5), 1355-1374 (1995).
[3] Patel, et al., N Engl J Med 362, 886-895 (2010).
[4] Tu, Shengxian, et al., JACC: Cardiovascular Interventions 7(7), 768-777 (2014).
[5] Taylor, Charles A., et al., JACC 61(22), 2233-2241 (2013).
[6] Morris, Paul D., et al., JACC: Cardiovascular Interventions 6(2), 149-157 (2013).
[7] A. Leuprecht and K. Perktold, CMBBE, Vol. 4, pp. 149-163.
[8] F.J.H. Gijsen et al., Journal of Biomechanics 32, 601-608.
|
RELATIONSHIP BETWEEN THE ICTAL EEG WAVEFORMS AND BRAIN CONNECTIVITY NETWORKS BASED ON PHASE SYNCHRONIZATION
Lei Wang, Johan Arends, Johannes van Dijk
Abstract: Background: Typical recurrent waveforms often exist in seizure EEG signals. Such morphological EEG patterns are related to different clinical seizure types [1] and may also correspond to different network connectivity of the brain system [2].
Data & Methods: Multi-channel scalp EEG signals were collected from 29 epilepsy patients with an intellectual disability. The major seizure patterns with typical EEG waveforms were annotated by experts: fast spike, spike-wave, wave (or rhythmic activities [1]), and seizure-related EMG artifacts. In addition, each EEG electrode was considered a node in a graph, and edges between pairs of nodes were weighted by their phase synchronization within a frequency band. Such graphs were used to represent transient brain connectivity networks. We then studied whether significant differences exist in the brain networks with respect to the different seizure patterns.
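The abstract does not specify which phase-synchronization measure was used; as a minimal sketch, the adjacency matrix of such a graph can be built from the phase-locking value (PLV) of band-pass-filtered, Hilbert-transformed signals. The frequency band, filter order and channel count below are purely illustrative.

```python
# Sketch: phase-synchronization graph from multi-channel EEG using the
# phase-locking value (PLV) as edge weight (band and layout are illustrative).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv_adjacency(eeg, fs, band=(8.0, 13.0)):
    """eeg: array (n_channels, n_samples); returns symmetric (n_ch, n_ch) PLV matrix."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    n_ch = eeg.shape[0]
    adj = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            # PLV = |time average of exp(i * phase difference)|, between 0 and 1
            adj[i, j] = adj[j, i] = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
    return adj

# Example on surrogate data: 19 channels, 10 s at 250 Hz.
rng = np.random.default_rng(0)
adjacency = plv_adjacency(rng.standard_normal((19, 2500)), fs=250.0)
print(adjacency.shape)  # (19, 19)
```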
Results: Brain connectivity networks during different seizure patterns show statistically significant differences.
Significance: The recognition of typical EEG waveforms can be used as an alternative method for evaluating the brain connectivity networks.
|
IMPROVED FAST IMAGE RECONSTRUCTION FOR ULTRAFAST PLANE-WAVE ULTRASOUND ACQUISITION USING A DECONVOLUTION-BASED FREQUENCY DOMAIN APPROACH
Chuan Chen, Gijs Hendriks, Hendrik Hansen, Chris de Korte
Abstract: For ultrafast ultrasound, image reconstruction is typically performed in the time domain, which is computationally costly and therefore undesirable given the high frame rates. Recently, a more computationally efficient image reconstruction method in the frequency domain was proposed, Stolt's f-k method. This method fits the plane-wave transmit-receive process into the so-called exploding reflector model. However, this fit is imprecise, which degrades image quality and results in artifacts with a characteristic shape (point spread pattern). To mitigate the decline in image quality, we propose to design a template that can be applied directly in the frequency domain to deconvolve the point spread pattern.
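The authors' template design is not detailed here; as a generic sketch only, a known point-spread pattern can be removed by Wiener-type deconvolution in the 2-D frequency domain, with the template and regularization constant below being placeholders.

```python
# Generic Wiener-type deconvolution in the 2-D frequency domain, as a sketch
# of removing a point-spread "template" from a reconstructed image.
import numpy as np

def wiener_deconvolve(image, template, noise_reg=1e-2):
    """image, template: 2-D arrays of equal shape, template centred at index [0, 0]."""
    img_f = np.fft.fft2(image)
    tpl_f = np.fft.fft2(template)
    filt = np.conj(tpl_f) / (np.abs(tpl_f) ** 2 + noise_reg)  # regularized inverse filter
    return np.real(np.fft.ifft2(img_f * filt))

# Toy example: blur a point scatterer with a Gaussian "point spread pattern",
# then recover its location by deconvolution.
n = 128
y, x = np.mgrid[:n, :n]
psf = np.exp(-(((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 3.0 ** 2)))
psf = np.fft.ifftshift(psf / psf.sum())          # move the template centre to [0, 0]
scene = np.zeros((n, n)); scene[40, 70] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
print(np.unravel_index(np.argmax(restored), restored.shape))  # approximately (40, 70)
```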
|
CLINICAL FEASIBILITY OF A NEW DEVICE FOR PERSONALIZED ARTICULATING JOINT DISTRACTION IN TREATMENT OF TIBIOFEMORAL OSTEOARTHRITIS
Thijmen Struik, Roel Custers, Nick Besselink, Joris Jaspers, Anne Karien Marijnissen, Simon Mastbergen, Floris Lafeber
Abstract: Knee Joint Distraction (KJD) is a joint preserving surgical procedure that can postpone knee arthroplasty in case of knee osteoarthritis. Distraction is applied with an external fixator for 6-8 weeks.1,2 To reduce the burden on patients during treatment originating from a restriction in joint flexion, we evaluated an articulating frame. A personalized articulating KJD-device was developed, biomechanically tested, and technical feasibility was evaluated in cadaveric legs. Reproduction of joint specific motion was demonstrated and articulating KJD was concluded to be technically feasible.3,4 In this study, clinical feasibility was tested in 3 patients.
Patients received rigid knee joint distraction treatment in general practice. After 2-4 weeks, the frame was removed during a one day hospital visit and the joint was flexed in a continuous passive motion (CPM) device until 30° flexion was reached, or motion became painful. Subsequently, the articulating frame was assembled and personalized with custom hinge parts based on a non-invasive and computerized measurement of the joint motion. Weight-bearing and non-weight-bearing radiographs were taken at 0, 15, and 30° flexion for joint space width measurements. Finally, the articulating device was replaced by the rigid frame and treatment was continued according to clinical practice.
The articulating distractor could not be personalized for any of the three patients. In the first patient, 15° flexion was achieved on the CPM device, but the pin positions did not allow for positioning of the frame. In the other two patients, 8° and 15° of flexion were measured, which was too little motion for the custom software to generate personalized hinge parts. Pain at the pin sites during motion was reported by all patients.
Despite confirmation of joint-specific articulating distraction on cadaveric legs, clinical feasibility could not be demonstrated, mainly due to painful motion of soft tissues along the bone pins.
|
DEVELOPMENT OF A USER-FRIENDLY KNEE JOINT DISTRACTION DEVICE
Thijmen Struik, Simon Mastbergen, Joris Jaspers, Roel Custers, Peter van Roermund, Karianne Lindenhovius, Vincent Cloostermans, Floris Lafeber
Abstract: Knee osteoarthritis is a degenerative joint disease, progressive over time and characterized by pain and disability. In the absence of disease-modifying options, treatment is primarily symptomatic, with total knee arthroplasty considered the gold standard for end-stage disease. A joint replacement, however, has a less favourable clinical outcome for younger and more active patients, resulting in an increased risk of revision surgery later in life.1
Knee Joint Distraction (KJD) has been developed as a joint-preserving treatment that can postpone a primary prosthesis for a clinically relevant period, such that the chance of prosthesis revision is reduced. In KJD, the bony ends of the joint are set at a small distance (5 mm) for a period of 6-8 weeks by means of an externally fixated distraction frame, connected to the bone with transcutaneous bone pins.2,3,4 As a consequence, joint motion and mobility are compromised.
Improvement of the treatment was initially sought by allowing joint motion during distraction. However, clinical evaluation of a personalized articulating knee distraction device demonstrated that motion during distraction significantly increased the pain patients experienced at the pin tracts, so no improvement of the therapy itself was obtained (clinical outcome remaining unknown).5
Consequently, the options for further evolvement were explored in a multi-disciplinary team of engineers and end-users, including orthopaedic surgeons, nurses, and medical device reprocessing technicians. In addition, questionnaires on user experience were collected from patients that received knee distraction.
A set of requirements for a knee distraction device was defined and used in developing therapy improving equipment. New requirements, compared to the conventionally applied distraction device, covered aspects on the ease of use for the clinician and the comfort for the patient. The three main requirements comprised a reduction of the total number of actions during surgery, a reduction in device weight, and a reduction in protruding parts when assembled.
The newly developed knee distraction device will be subjected to a clinical evaluation, after biomechanical equivalence to the clinically applied device is verified.
|
ANALYSIS OF TRACHEOSTOMA MORPHOLOGY
Maartje Leemans, Maarten van Alphen, Michiel van den Brekel, Edsko Hekman
Abstract: Introduction: One third of the patients with larynx cancer have to undergo a total laryngectomy (TL) procedure, where the larynx is removed and the trachea is connected to the base of the neck, which forms the tracheostoma. To help patients regain their lost functions, multiple rehabilitation devices have been developed, such as the common heat and moisture exchange (HME) filter. This filter is placed in the tracheostoma opening and can either be fixated around (‘peristomal’) or inside (‘intraluminal’) the tracheostoma. To regain the ability of speech, the HME filter can be manually occluded. An automatic speaking valve (ASV) can be placed on top of the HME filter, to avoid manual occlusion and to remove the emphasis on the patient’s disability. An ASV is considered to be the optimal end result for rehabilitation after TL.
Problem definition: The circular intraluminal fixation devices (the so-called buttons) do not give an optimal or airtight fixation of the HME filter, can traumatize the tracheal mucosa, may be ejected during speech or coughing, and are only suitable for patients with a prominent peristomal ridge. These disadvantages lead to low use of ASVs. Many studies emphasize that adjustment of the button to the patient's tracheostoma morphology is the path towards successful airtight fixation. However, we have not found any reports in the literature of measurements of the internal tracheostoma and trachea morphology of TL patients.
Methodology: The internal morphology of the tracheostoma and trachea (diameters, distances and angles) was measured on CT scans of 20 TL patients, in both the sagittal and the transversal plane.
Results: The tracheostoma depth, the diameter of the tracheostoma opening and the prominence of the peristomal ridge differ significantly between patients. The tracheostoma opening has an elliptical shape with the longest axis oriented vertically, whereas the internal morphology has an elliptical shape with the longest axis oriented horizontally. Also, the course from tracheostoma to trachea differs significantly between patients; for some patients, the entry course of the tracheostoma lumen is not perpendicular to the skin of the neck.
Conclusion: Commercially available buttons have a cylindrical shape, where the end surfaces are perpendicular to the longitudinal axis. In contrast, the overall morphology of the tracheostoma has an elliptical shape, and the tracheostoma opening and course of the tracheostoma lumen are not perpendicular to each other. Based on the measured morphology, the angle between the HME fixation and the intraluminal part of the button as well as the overall button shape should be adjusted. However, due to the large variation in tracheostoma morphology between patients, there is no ‘average tracheostoma morphology’ to which the button can be adjusted. Therefore, providing an airtight fixation for each patient would either require a large range of different buttons or a patient specific customization of the button.
|
FERRULE-TOP DYNAMIC INDENTATION OF BRAIN SLICES: A NEW INSTRUMENT FOR VISCOELASTICITY MAPPING OF BRAIN TISSUE AT THE MICRON SCALE
Nelda Antonovaite
Abstract: The mechanical properties of brain tissue are known to play an important role in brain development, normal brain functioning and neurological disorders. The common methods used to measure these properties typically address the macroscopic scale, yet it is at the micron scale that most relevant physiological phenomena take place. Previous Scanning Force Microscopy (SFM) studies have confirmed that brain tissue has highly heterogeneous mechanical properties [1]. To further investigate this heterogeneity at the micron scale, we have developed a new indentation technique. Our cantilever-based force transducer with interferometric readout allows one to push a bead against the sample via a load- or indentation-controlled stroke (similar to SFM), while also probing the dynamic mechanical properties in a physiologically relevant frequency range of 0.01 – 10 Hz. Our initial results on mouse brain slices perfused with artificial cerebrospinal fluid demonstrate that the tissue has a strong viscoelastic response. Further indentation tests show that our approach can reproducibly map regional differences in brain viscoelasticity, even in the presence of very large (more than one order of magnitude) mechanical heterogeneities.
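As an illustration of how a viscoelastic response can be extracted from such oscillatory indentation data, the sketch below uses synthetic force and indentation signals at a single drive frequency; the instrument's actual processing and the contact-mechanics conversion from stiffness to modulus are not reproduced here.

```python
# Sketch: dynamic (visco)elastic response at one drive frequency from
# oscillatory indentation data. Signals are synthetic; storage and loss
# stiffness are reported, not moduli (that conversion is geometry dependent).
import numpy as np

def dynamic_stiffness(force, indentation, fs, f_drive):
    """Complex stiffness at f_drive from the Fourier components of the two signals."""
    t = np.arange(force.size) / fs
    ref = np.exp(-2j * np.pi * f_drive * t)
    f_c = 2.0 * np.mean(force * ref)        # complex amplitude of force
    d_c = 2.0 * np.mean(indentation * ref)  # complex amplitude of indentation
    k_star = f_c / d_c                      # complex dynamic stiffness [N/m]
    return k_star.real, k_star.imag         # storage and loss stiffness

# Synthetic 1 Hz oscillation: 2 um indentation amplitude, 10 degree phase lag.
fs, f0 = 1000.0, 1.0
t = np.arange(0, 10.0, 1 / fs)
indentation = 2e-6 * np.sin(2 * np.pi * f0 * t)
force = 1e-6 * np.sin(2 * np.pi * f0 * t + np.deg2rad(10.0))
k_storage, k_loss = dynamic_stiffness(force, indentation, fs, f0)
print(f"k' = {k_storage:.2f} N/m, k'' = {k_loss:.2f} N/m, "
      f"phase = {np.degrees(np.arctan2(k_loss, k_storage)):.1f} deg")
```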
REFERENCES
[1] A.F. Christ, K. Franze, H. Gautier, P. Moshayedi, J. Fawcett, R.J.M. Franklin, R.T. Karadottir, J. Guck, “Mechanical difference between white and gray matter in the rat cerebellum measured by scanning force microscopy”, Journal of Biomechanics, Vol. 43, Issue 15, pp. 2986-2992, (2010).
|