"IEEE Spectrum announced a" scoreboard ", showing a few subclasses in the medical field, AI and human doctors who have more advantages. Use IEEE SPECTRUM to say," AI is challenged by doctors in the medical field, we have been In score.
The scoreboard reads as follows:
AI holds a clear advantage in heart disease, stroke, and autism;
AI holds a modest advantage in Alzheimer's disease and surgery;
AI and human doctors are roughly even in brain tumors, ophthalmic disease, and skin cancer;
Human doctors hold a clear advantage in general diagnosis.
Heart disease and stroke: the AI system correctly predicted 355 more patients than the standard prediction method
Researchers at the University of Nottingham built a system that scans patients' routine medical data to predict which of them will suffer a heart attack or stroke within 10 years. Compared with the standard prediction method, the AI system correctly predicted 355 more patients.
Researcher Stephen Weng and his colleagues tested several different machine-learning tools on the records of 378,256 patients in the UK. The records track patients' health from 2005 to 2015 and include demographic information, medical conditions, prescription drugs, hospital visits, laboratory results, and more.
The researchers fed 75% of the medical records into their machine-learning models, which were designed to find the characteristics of patients who went on to have a heart attack or stroke within 10 years. The team then tested the models on the remaining 25% of the records to see how accurately they predicted heart attacks and strokes, and ran the standard prediction method on the same subset for comparison.
On a scale where 1.0 represents 100% accuracy, the standard method scored 0.728. The machine-learning models scored between 0.745 and 0.764, with the neural-network model achieving the highest score.
In concrete terms, the neural-network model correctly predicted 4,998 of the patients who actually went on to have a heart attack or stroke, 355 more than the standard method. With such predictions, doctors can take preventive measures, such as prescribing cholesterol-lowering drugs.
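To make the evaluation protocol concrete, here is a minimal sketch of the same idea: split the records 75/25, train a few models, and compare their scores (1.0 means perfect) against a baseline. The data, features, and model choices below are illustrative assumptions, not the study's actual records or code.

```python
# Hedged sketch of a 75/25 train/test comparison between several models.
# All data here is synthetic; the real study used ~378,000 UK patient records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for routine medical records (age, cholesterol, etc.)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    score = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: score = {score:.3f}")  # compare against the baseline (0.728 in the study)
```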
Autism: using only three variables, the algorithm detected roughly 8 in 10 children who would develop autism
A research team at the University of North Carolina scanned the brains of 6-month-old infants to measure autism-related brain development. A deep-learning algorithm then used these data to predict whether high-risk children would be diagnosed with autism at 24 months of age.
The algorithm predicted the high-risk children's eventual diagnoses with 81% accuracy and 88% sensitivity. That is a clear improvement over behavioral questionnaires, which attempt to diagnose autism early (at around 12 months of age) with only about 50% accuracy.
Heather Hazlett, a UNC psychologist and brain-development researcher and the study's senior author, said the approach outperforms earlier methods and lets children be diagnosed at a younger age.
The algorithm works with only three variables: brain surface area, brain volume, and sex (boys are more likely than girls to develop autism). With these alone, it detected roughly 8 in 10 of the children who would go on to develop autism.
According to team member Martin Styner, director of UNC's neuro image analysis and research laboratory, the team initially trained the algorithm on half of the data and tested it on the other half. At the reviewers' request, they then ran a more standard 10-fold analysis, in which the data are divided into 10 equal parts. The machine-learning process is run for 10 rounds, each time training on 9 parts and testing on the remaining one, and the 10 rounds of held-out test results are pooled to form the final predictions.
Fortunately, Styner said, the two analyses (the initial 50/50 split and the later 10-fold procedure) gave nearly identical results, and the team is satisfied with the prediction accuracy.
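As a rough illustration of the 10-fold procedure described above (not the team's code), the sketch below splits a toy data set into 10 parts, trains on 9 and tests on the held-out part each round, and pools the test predictions. The three stand-in features loosely mirror the surface-area, volume, and sex variables mentioned earlier; everything else is an assumption.

```python
# Hedged sketch of pooled 10-fold cross-validation on synthetic data.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-in features (e.g. surface area, volume, sex)
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # stand-in labels

kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pooled_pred = np.empty_like(y)
for train_idx, test_idx in kfold.split(X, y):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    pooled_pred[test_idx] = model.predict(X[test_idx])  # collect this round's test predictions

print("pooled 10-fold accuracy:", accuracy_score(y, pooled_pred))
```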
Hazlett also noted that advancing and popularizing the approach will take time: such an expensive diagnostic test is not something every family can afford.
Alzheimer's disease: the new method may not be much better than the old ones; it may simply be using better data
Researchers at Harvard University, Massachusetts General Hospital, and Huazhong University of Science and Technology have designed a system that combines fMRI brain scans with clinical data.
Quanzheng Li, a senior researcher at the Massachusetts General Hospital Clinical Data Science Center, said: "We are trying to find Alzheimer's at an early stage. Many people have tried to do this with traditional machine-learning methods, but the results have not been very good, because it is a very difficult problem."
In preliminary tests, the researchers reported, their deep-learning program was more accurate when fed the specially processed fMRI data set than other classification methods using a more basic data set. However, when those conventional classifiers were given the specially processed data set, they showed similar gains in accuracy.
Javier Escudero, a researcher at the University of Edinburgh, said the new method may therefore not be much better than the old ones; it may simply be benefiting from better data.
Even if that is the case, the finding deserves attention from other experts trying to diagnose Alzheimer's with deep learning: according to this latest study, fMRI scans that capture the relationships between brain regions offer a finer-grained view than measurements of individual regions alone.
The research team wanted to see whether changes in functional connectivity could be used to predict Alzheimer's disease. They started with 93 patients with mild cognitive impairment (MCI) and 101 normal controls from the Alzheimer's Disease Neuroimaging Initiative. From time series of 130 fMRI measurements acquired across 90 regions of each participant's brain, the researchers could see where signals fluctuated over time.
Next, in a key step, the researchers processed the data set to compute pairwise measurements across the brain regions. In other words, they constructed a functional connectivity map showing which regions' signals rise and fall together.
Finally, the team built a deep-learning program that interprets these patterns, combines them with clinical data such as age, sex, and genetic risk factors, and predicts whether a person will develop Alzheimer's disease.
Using the specially processed functional-connectivity data set, the team said, the program predicts whether a patient will develop Alzheimer's disease with an accuracy approaching 90%.
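A minimal sketch of the functional-connectivity step described above: take a time series of fMRI measurements for each brain region and compute the pairwise correlation between regions, which yields the kind of connectivity map the classifier is fed. The region count and series length follow the numbers in the text; the data below is random noise rather than real scans, and the feature-vector step is an assumption about how such a map might be passed to a model.

```python
# Hedged sketch: build a functional-connectivity matrix from per-region time series.
import numpy as np

n_regions, n_timepoints = 90, 130
signals = np.random.default_rng(0).normal(size=(n_regions, n_timepoints))  # fake fMRI signals

# Pairwise Pearson correlation between region time series: a 90 x 90 matrix whose
# (i, j) entry measures how closely regions i and j fluctuate together.
connectivity = np.corrcoef(signals)

# Flatten the upper triangle into a feature vector for a downstream classifier,
# which in the study was combined with age, sex, and genetic risk factors.
upper = np.triu_indices(n_regions, k=1)
features = connectivity[upper]
print(features.shape)  # (4005,) pairwise connections
```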
Surgery: a robot used its own vision, tools, and intelligence to suture a pig's small intestine, and the Smart Tissue Autonomous Robot (STAR) did it better than human surgeons
STAR's inventors do not claim that the robot will soon replace humans in the operating room. Instead, they promote the idea of supervised autonomy.
One of the researchers, pediatric surgeon Peter Kim, said doctors' jobs are not under threat. "If there is a machine that works with us to improve surgical outcomes and safety," he said, "it would be a good thing."
The researchers programmed their robot to perform a procedure called intestinal anastomosis: suturing a length of intestine that had been cut through. The team's senior engineer, Ryan Decker, said the stitches must be tight and evenly spaced to prevent leaks. Experienced human surgeons performed the same task. When the resulting sutures were compared, STAR's stitches were more consistent and more resistant to leaks.
In roughly 40% of the trials, the researchers intervened to offer some kind of guidance. In the other 60%, STAR completed the job entirely on its own.
Human surgeons could then hand the more routine or tedious parts of an operation over to the robot.
STAR handles the challenge of operating on soft tissue by integrating several technologies. Its vision system relies on near-infrared fluorescent (NIRF) markers placed in the intestinal tissue; a specialized NIRF camera tracks those markers while a 3D camera records images of the entire surgical field. Combining all of these data lets STAR stay focused on its target. The robot makes its own plan for the task and adjusts that plan as the tissue moves during the operation.
Brain tumors: IBM Watson took 10 minutes to analyze a patient's genome and propose a treatment plan; human experts took 160 hours
Time is critical when treating brain tumors. In a new study, IBM Watson took just 10 minutes to analyze a brain-tumor patient's genome and propose a treatment plan, while human experts took 160 hours to formulate theirs. Even so, the results do not show that the machine beat the humans.
The patient was a 76-year-old man who went to his doctor complaining of difficulty walking. A brain scan revealed a tumor, which surgeons treated promptly. The man received three weeks of radiation therapy and started long-term chemotherapy. Despite the best available care, he died within a year. Both Watson and the doctors analyzed his genome and proposed treatment plans, but by the time his tissue sample had been sequenced, the patient's condition had already deteriorated.
Laxmi Parida, who leads the Watson genomics team, explained that most cancer patients do not have their whole genome (about 3 billion units of DNA) sequenced. Instead, they usually get a "panel" test that checks only a subset of genes known to play a role in cancer.
The researchers wanted to know whether scanning a patient's entire genome, although more expensive and time-consuming than running a panel, would yield genuinely useful information for doctors designing a treatment plan.
The answer was yes. Both the New York Genome Center (NYGC) clinicians and Watson identified mutations that the panel test had not covered, and those findings pointed to a potential drug and clinical trials.
Second, the researchers wanted to compare the genome analyses performed by IBM Watson and by the NYGC.
Both Watson and the human experts received the patient's genome data, identified the genes that carried meaningful mutations, looked up reports of those mutations in other cancer cases, and checked which clinical trials the patient could feasibly join. The humans spent about 160 hours producing their recommendations; Watson completed the whole process in 10 minutes.
However, although Watson's answer was the fastest, it may not have been the best. The NYGC clinicians identified mutations in two genes that, considered together, led them to recommend enrolling the patient in a clinical trial of a combination drug therapy. Had his health still permitted it, he would have joined that trial, which was his most promising option. Watson did not synthesize the information in this way and therefore did not recommend that clinical trial.
Ophthalmic disease: Sun Yat-sen University and Xidian University jointly developed CC-Cruiser, which currently performs on par with doctors
A research team in China has shown that, given high-quality data, artificial intelligence can help with the medical diagnosis of eye disease. Their AI was trained on only 410 images of congenital cataract (a rare disease that leads to irreversible blindness) plus 476 images of healthy eyes, yet it can judge the severity of a cataract and offer treatment advice.
Inspired by DeepMind's 2015 research paper, which described how a machine-learning algorithm learned to beat professional players at a series of arcade games, Sun Yat-sen University ophthalmologist Haotian Lin and his colleagues created an AI agent to mine their childhood cataract clinical database.
Working with Xiyang Liu's team at Xidian University, they created CC-Cruiser, an AI program that diagnoses congenital cataracts, predicts the severity of the disease, and recommends treatment decisions. The program is built on a deep-learning algorithm trained with the images described above.
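One way to picture such a program is as an image model with several output heads, one for each of the tasks just described. The sketch below is purely illustrative: a tiny convolutional backbone with separate heads for identification, the three severity indicators, and the treatment recommendation. The architecture, input size, and label encodings are assumptions, not the actual CC-Cruiser design or code.

```python
# Hedged sketch of a multi-output image classifier (diagnosis / severity / treatment).
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(224, 224, 3))            # ocular image (size assumed)
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

diagnosis = layers.Dense(1, activation="sigmoid", name="diagnosis")(x)   # cataract vs. healthy
severity = layers.Dense(3, activation="sigmoid", name="severity")(x)     # opacity area, density, location
treatment = layers.Dense(2, activation="softmax", name="treatment")(x)   # e.g. surgery vs. follow-up

model = Model(inputs, [diagnosis, severity, treatment])
model.compile(optimizer="adam",
              loss={"diagnosis": "binary_crossentropy",
                    "severity": "binary_crossentropy",
                    "treatment": "sparse_categorical_crossentropy"})
model.summary()
```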
The researchers then put CC-Cruiser through five tests. First, in computer simulations, the AI program distinguished patients from healthy individuals with 98.87% accuracy. It estimated each of the three indicators of disease severity (the opaque area, its density, and its location) with about 93% accuracy, and it gave treatment recommendations with 97.56% accuracy.
Next, the team ran a clinical trial using eye images from 57 children at three collaborating hospitals in China. The chosen hospitals do not specialize in diagnosing or treating this condition, because the research team hopes the platform will eventually help hospitals that lack specialists. In this test, CC-Cruiser achieved 98.25% accuracy in identifying the disease, better than 92% accuracy on all three severity indicators, and 92.86% accuracy in its treatment recommendations.
To simulate real-world use, they also pitted the program against ophthalmologists. Three ophthalmologists (an expert, a mid-career physician, and a junior one) went head to head with CC-Cruiser on 50 clinical cases. The computer and the doctors performed comparably.
In the trial, the AI mislabeled a few cases, and Lin hopes larger data sets will improve its performance. The team plans to build a collaborative cloud platform, but Lin stresses that the technology is not yet capable of determining the best course of treatment with 100% accuracy. Doctors should therefore use the machine's recommendations critically, watching for the kinds of errors it might make and supplementing them with their own judgment.
Skin cancer: the largest data set yet used for automated skin-cancer classification
Stanford University researchers have developed an algorithm that identifies skin cancer in photographs. It is not the first automated system for identifying skin lesions, but it may be the most powerful.
The research team built a deep-learning algorithm on the GoogLeNet Inception V3 architecture, a convolutional neural network. The Stanford researchers fine-tuned it on more than 130,000 images of skin lesions covering more than 2,000 diseases, which may be the largest data set ever used for automated skin-cancer classification.
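As a rough sketch of the transfer-learning idea described above, the code below starts from an ImageNet-pretrained Inception V3, replaces the classification head, and prepares the network for fine-tuning on labelled lesion images. The class count, input pipeline, and training settings are assumptions for illustration, not the Stanford team's code.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained Inception V3 for lesion classification.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                          input_shape=(299, 299, 3), pooling="avg")
base.trainable = True  # allow the pretrained layers to be fine-tuned

num_classes = 2000  # roughly the number of disease classes mentioned above (assumed)
model = models.Sequential([
    base,
    layers.Dense(num_classes, activation="softmax"),  # new classification head
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# `train_ds` would be a tf.data.Dataset of (image, label) pairs built from the
# lesion images; it is only a placeholder name here.
# model.fit(train_ds, epochs=10)
```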
In the study, the algorithm's results were compared with the diagnoses of 21 dermatologists. The doctors examined hundreds of skin lesions and decided, for each one, whether it warranted further testing or the patient could be reassured that it was benign. The algorithm examined the same images and gave its own diagnoses. Neither the doctors nor the algorithm had seen the images beforehand.
In the end, the computer's performance was on par with the experts'. For example, the program could distinguish keratinocyte carcinoma, the most common human skin cancer, from a benign skin growth called seborrheic keratosis.
Before it can be used in practice, Stanford's system will need more rigorous testing. The researchers did not ask the algorithm to distinguish seborrheic keratoses from melanomas, which can be a difficult call.
General diagnosis: doctors gave the correct diagnosis about 72% of the time; the AI apps, 34%
In the contest between AI and doctors, there is still one area where the doctors win. A report published in JAMA Internal Medicine found that a set of automated diagnosis apps performs far worse than physicians.
In an earlier study published in The BMJ (formerly the British Medical Journal), Ateev Mehrotra and his team fed 45 patient cases, including patients eventually diagnosed with conditions ranging from asthma to malaria, into 23 symptom-checker systems. The group found that the checkers gave the correct diagnosis only about a third of the time.
In the new experiment, the researchers compared the accuracy of the symptom checkers with that of 234 physicians. For each case, at least 20 doctors gave their top three diagnoses.
The doctors gave the correct diagnosis about 72% of the time; the apps managed 34%.
"Doctors are not perfect," Mehrotra said. "They still misdiagnose in 10% to 15% of cases. But it will take time before self-diagnosis apps can surpass doctors."
Original link: https://www.eeboard.com/news/ai-98/