
Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control

Published: 2022.04.13 15:08


The use of prosthetics is becoming more widespread in society. Even relatively simple limb replacements are associated with an increase in quality of life, and one can imagine how much further that quality of life could improve with precise, intelligent grasp and gesture prediction. That is the main aim of a study recently published on arXiv.org.

What is this research about?

The team behind this work presented a paper devising a multimodal data-fusion method to better predict the intent of a user who wants to control a robotic hand prosthesis. The work covers both the collection of a dataset and the development of a novel method combining two different grasp-detection modalities. The dataset consists of first-person video imagery, gaze data, and dynamic EMG data. These data are classified and segmented by each modality independently (dynamic EMG and visual grasp detection), and the results are then compared against a multimodal fusion of the two modes, with the aim of achieving better robustness and accuracy.
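For concreteness, one synchronized record from such a dataset might be organized as sketched below; the field names and shapes are illustrative assumptions on our part, not the authors' actual schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalSample:
    """One time-aligned record from a hypothetical EMG + vision grasp dataset."""
    video_frame: np.ndarray        # first-person RGB frame, e.g. shape (H, W, 3)
    gaze_xy: tuple[float, float]   # gaze point in image coordinates
    emg_window: np.ndarray         # multichannel surface-EMG window, shape (channels, samples)
    grasp_label: str               # ground-truth grasp type for this segment
```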

Why was this research conducted?

The aim of this research is to help amputees, especially people with lower-arm amputations. According to the statistics cited, approximately 1.6 million people in the United States were living with limb loss as of 2005, and most of them preferred a prosthetic limb as a replacement. Undoubtedly, bionic prosthetics hold vast potential to improve their users' quality of life.

What are the limitations of existing bionic models?

Robotic prosthetics, or bionic arms, are usually fitted to patients with the promise of performing object manipulation in day-to-day activities, but current methods have limitations. Bionic prosthetics are typically driven by physiological signals such as EEG (electroencephalography) and EMG (electromyography). Because these signals are physiological, they are subject to many confounds: muscle fatigue, electromagnetic interference, unexpected electrode shifts, motion artifacts, and variation of the skin-electrode impedance over time. Visual evidence is likewise affected by factors such as lighting, occlusion, and changes in an object's apparent shape depending on the angle from which it is viewed. In short, current models carry a margin of error introduced by a number of intrinsic and extrinsic factors.

How were the experiments conducted?

Experimental data were collected, with full consent, from five healthy subjects: four males and one female. All subjects were right-handed, and only the dominant hand was studied in this experiment.

A maximum voluntary contraction (MVC) test was conducted on all of the involved muscles at the beginning of the session. The subjects were then put through a series of pre-designed motions while data were collected with EMG electrodes and eye-tracking equipment.
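An MVC test is commonly used to normalize EMG amplitudes so that muscle activations can be compared across subjects and sessions. A minimal sketch of that normalization step follows, assuming rectified per-channel signals; the function name and array shapes are ours, not the paper's.

```python
import numpy as np

def normalize_emg(rectified_emg: np.ndarray, mvc_amplitude: np.ndarray) -> np.ndarray:
    """Express each channel's activity as a fraction of that subject's
    maximum-voluntary-contraction amplitude, making EMG levels comparable
    across subjects and recording sessions."""
    # rectified_emg: (channels, samples); mvc_amplitude: (channels,)
    return rectified_emg / mvc_amplitude[:, None]
```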

How is the multimodal fusion method an upgrade over existing modes?

This novel method aims to unify the strengths of EMG and visual evidence while reducing their individual sources of error. The researchers present a 'Bayesian evidence fusion' framework built on neural network models, and they analyze its performance as a function of the time the user's hand takes to approach and grasp the object in front of it.
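One standard way to fuse evidence from two classifiers, assuming the modalities are conditionally independent given the grasp class, is to multiply their posteriors and divide by the class prior. The sketch below shows this generic recipe, not the paper's exact formulation.

```python
import numpy as np

def fuse_posteriors(p_emg: np.ndarray, p_vision: np.ndarray,
                    prior: np.ndarray) -> np.ndarray:
    """Bayesian fusion of two classifiers' posteriors over grasp classes,
    assuming conditional independence of the modalities given the class:
    p(c | emg, vision) is proportional to p(c | emg) p(c | vision) / p(c)."""
    fused = p_emg * p_vision / prior
    return fused / fused.sum()  # renormalize to a probability distribution

# Example: three grasp classes with a uniform prior
p_emg = np.array([0.6, 0.3, 0.1])
p_vision = np.array([0.5, 0.2, 0.3])
prior = np.full(3, 1 / 3)
print(fuse_posteriors(p_emg, p_vision, prior))  # fused class probabilities
```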

The data collected from this experiment, together with the data-processing model built on it, demonstrate that the multimodal fusion system classifies grasps more accurately than either of the individual grasp-classification modes on its own.

What are the limitations of this method?

As mentioned above, this method fuses the most reliable parts of the physiological and visual modalities and depends on their complementarity for optimum performance.

While the prosthetic arm is at rest, the object of interest is clearly visible to the camera, so the visual evidence is the more accurate of the two. Conversely, once the arm is active and the subject reaches out toward an object, the EMG signal becomes more informative than the visual classifier. The fusion-based method outperformed the individual classifiers in all scenarios, achieving a total grasp-classification accuracy of 95.3%.
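This complementary behavior suggests weighting the two classifiers according to the phase of the reach. The sketch below uses a simple linear schedule as an illustrative assumption; the paper's actual Bayesian fusion does not rely on such a hand-tuned rule.

```python
import numpy as np

def phase_weighted_fusion(p_emg: np.ndarray, p_vision: np.ndarray,
                          reach_progress: float) -> np.ndarray:
    """Blend the two classifiers' class probabilities with a weight that
    shifts from vision (arm at rest, object clearly in view) toward EMG
    as the reach unfolds."""
    w = float(np.clip(reach_progress, 0.0, 1.0))  # 0 = at rest, 1 = at grasp
    return w * p_emg + (1.0 - w) * p_vision       # convex mix stays normalized
```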

There appear to be few drawbacks to this technique compared with existing methods, except that more computational power is needed to process all of the data. From a practical perspective, however, this is not a limitation that the current state of computing cannot resolve.

What is the future scope of this research?

This research could spark the next generation of bio-prosthetics. With a comprehensive multidisciplinary integration of neural networks, programming, and biomechanics, it could be key to helping amputees across the world. A new generation of smart prostheses could genuinely feel like part of the amputee's body while providing seamless movement and useful real-life functionality.

Source: Mehrshad Zandigohar, Mo Han, Mohammadreza Sharif, Sezen Yagmur Gunay, Mariusz P. Furmanek, Mathew Yarossi, Paolo Bonato, Cagdas Onal, Taskin Padir, Deniz Erdogmus, Gunar Schirner


