3 Key Benefits Of Deploying Edge AI At The Point Of Care
Geoffrey Sheir, an edge inference expert at PA Consulting, and Dan Talmage, a medical device development expert at PA Consulting, wrote in the latest issue of meddeviceonline.com: “The point of care is shifting, with monitoring, diagnosis, and treatment taking place increasingly outside the conventional hospital setting and moving into patients’ homes. This is an exciting shift in the paradigm of healthcare.
The arrival of cloud computing has allowed industries to accelerate data processing by leveraging its immense scalability. Medtech companies have deployed artificial intelligence and machine learning algorithms in the cloud, where there is virtually unlimited processing power. This has improved patient outcomes, for example, through improved diagnostics and digital biomarkers. However, cloud computing’s reliance on the transfer of data to and from remote servers has some limitations, preventing many applications from shifting to the home setting.
The remoteness of the servers causes the first issue: latency. Real-time processing and turnaround time are often critical in rapid decision-making. Unpredictable load on shared cloud resources makes the turnaround time for processed data equally unpredictable, which in time-critical scenarios can change the clinical outcome.
The second issue is that cloud computing often requires a continuous connection to remote servers to exchange information. This reliance on an uninterrupted data pipe means any healthcare solution depending on cloud compute for AI algorithms will only work effectively in well-connected locations.
Last is data security; a patient’s medical information, according to laws such as HIPAA and GDPR, must adhere to strict handling and privacy requirements. This raises the concern of how to safely train and use algorithms running on third-party hardware and securely send data over networks. Such concerns have slowed the adoption of cloud-based AI in the medical field.
With a growing demand for AI in medtech applications, deployment of technology that can mitigate these risks must be considered for future applications. Edge inference is the practice of deploying machine learning (ML) models directly onto devices, allowing the data to be captured and processed at the point of care, rather than in the cloud. Processing data at the edge enables real-time processing, reduced reliance on network quality, and increased data security; meanwhile, the availability of specialist hardware means that edge inference is viable now. By identifying situations where these benefits can be obtained, medtech companies can use edge inference to shift the point of care.
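To make the concept concrete, here is a minimal, hypothetical sketch of the edge inference pattern: a reading is captured and scored entirely on the device, with no network round trip. The threshold "model" below is a stand-in for a real on-device ML model; all names and values are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of edge inference: data is captured and scored on the
# device itself, so nothing is sent to a remote server. The "model" here
# is a hypothetical stand-in (a simple threshold) for a quantized ML
# model compiled for the device's accelerator.

def run_edge_inference(sensor_reading: float, threshold: float = 0.8) -> str:
    """Score a reading locally and return an immediate result."""
    return "alert" if sensor_reading >= threshold else "normal"

# All processing happens at the point of care; no data leaves the device.
result = run_edge_inference(0.93)
```

In a production device, `run_edge_inference` would wrap a model runtime rather than a threshold, but the structure is the same: capture, score, and act locally.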
Bring Care To The Front Line
Devices running edge inference computing enable real-time diagnostics to be performed outside specialist centers, reducing the time to treatment. The biggest opportunities exist for conditions where diagnoses are relatively accurate, but where patient outcomes are poor because diagnoses are not being done in a timely manner.
For example, a handheld retinal camera developed in Taiwan uses edge inference to diagnose diabetic eye disease, allowing primary caregivers to perform diagnoses that typically would be done by an ophthalmologist. The diagnostic accuracy is comparable to that of a specialist, yet results are delivered 10 times quicker than with a competing cloud-based offering and without transport to a specialist ophthalmologist. Bringing these capabilities to the point of care not only reduces the risk and time taken for transport but also makes specialist care more accessible in primary care facilities without resident specialists.
Edge inference can also act as a real-time decision support tool for medical procedures. Virgo Surgical Video Solutions used edge inference in an endoscopy demo, detecting pre-cancerous growths with a latency of 17 ms, which is unachievable when sending the data to and from the cloud. GE HealthCare’s X-ray machines equipped with the Critical Care Suite ML algorithms automatically measure endotracheal tube positioning within seconds, allowing physicians to correct positioning errors in real time. This could feasibly extend to other procedures involving the insertion of medical devices into the body, such as stents, as well as more advanced procedures such as surgery.
Edge inference will ultimately enable closed-loop systems that perform monitoring, diagnostics, and treatment all in real time, reducing the need for manual intervention. Closed-loop intervention systems already exist for diabetes, like Medtronic’s MiniMed 780G insulin pump, where predictive algorithms monitor a patient’s glucose levels and perform interventions accordingly. We can extend this to a huge range of therapeutic areas and conditions that might require real-time interventions based on continuous monitoring, such as sepsis, cardiovascular events, and seizures, where standard of care can be elevated across the board through the deployment of edge inference.
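The closed-loop "monitor, decide, act" cycle can be sketched in a few lines. The example below is loosely inspired by automated insulin delivery, but the function name, target, and gain are hypothetical illustrations, not the actual MiniMed 780G algorithm.

```python
# Hedged sketch of one step of a closed-loop control cycle:
# read a sensor value, compare it to a target, and compute an
# intervention proportional to the deviation. All constants are
# illustrative, not clinical values.

def control_step(glucose_mg_dl: float, target: float = 120.0,
                 gain: float = 0.01) -> float:
    """Return a micro-dose (units) proportional to how far the reading
    is above target; deliver nothing at or below target."""
    return max(0.0, gain * (glucose_mg_dl - target))

readings = [110.0, 145.0, 180.0, 125.0]   # simulated CGM samples
doses = [control_step(g) for g in readings]
```

Because each step runs on the device, the loop keeps working at full speed even when connectivity drops, which is exactly the property a safety-critical closed loop needs.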
Reduce Reliance On High-Quality Networks
While solutions that process all the data in the cloud exist, maintaining data quality from the edge requires sufficient network bandwidth and reliability. This is by no means a guarantee, as the last mile of connectivity is governed by local internet service providers (ISPs). By processing data locally, edge inference reduces reliance on bandwidth, allowing care to be delivered in primary care facilities and in the home (e.g., on ambulatory monitoring systems) where network quality is not guaranteed.
Applications that capture data from body-worn devices are a great example. Transmission of all data for cloud processing can take time, leading to a delayed diagnostic, reduced battery life, and increased data costs. Engineers should consider whether the early stages of the AI pipeline could run locally on a device, as this will significantly mitigate these challenges.
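Splitting the pipeline can be as simple as the sketch below: the wearable runs the early stages (filtering and feature extraction) locally and transmits only a few summary numbers instead of the raw stream. The function and feature names are illustrative assumptions, not a specific vendor's API.

```python
# Sketch of an on-device pipeline stage: reduce a raw sensor window to
# a handful of summary features before anything is transmitted, cutting
# bandwidth, latency, and battery cost.

from statistics import mean, pstdev

def extract_features(window: list) -> dict:
    """Summarize one raw sensor window into a compact payload."""
    return {"mean": mean(window), "std": pstdev(window), "peak": max(window)}

raw_window = [0.1, 0.4, 0.35, 0.9, 0.2]   # e.g., raw accelerometer samples
payload = extract_features(raw_window)     # 3 numbers replace the raw stream
```

Only `payload` would ever cross the network; the raw samples stay on the device, so a weak or intermittent connection no longer gates the quality of care.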
Additionally, opportunities to leverage edge inference can be found where care capabilities are not reaching as many people as they could be due to a lack of strong network infrastructure. For the handheld retinal camera, the fact that processing is done on-device allows diagnoses to be made in facilities that lack a reliably high-bandwidth internet connection. The reduced reliance on network connectivity improves the scalability of solutions and allows smaller-scale primary care centers, or even clinicians visiting the home, to deploy the same diagnostic capabilities as hospitals with stronger network infrastructure.
Keep Patient Data Secure And Compliant
By processing data on the device and reducing the amount of data sent over the network, edge inference can help maintain data security for medical devices, so that compliance with data security regulations, such as HIPAA in the U.S. and GDPR in the EU, can be managed more easily compared to cloud processing. This can be leveraged particularly where there is a large stream of continuous data that needs to be kept secure, such as in monitoring devices like wearables.
Video data can contain personally identifiable information and is a data type that is particularly important to safeguard. Care.ai created a HIPAA-compliant room-based patient monitoring device, which monitors patient behavior and alerts clinicians when an adverse event is detected, for example, if the patient falls or is at risk of developing pressure ulcers by sleeping in one position for too long. No video data is transmitted offboard for processing, preserving the patient’s privacy.
Other types of data, such as physiological data, may not be personally identifiable right now, but there is a similar need to keep this data secure to protect against compromise by future techniques. This is particularly pertinent to the raft of wearables performing continuous monitoring where a huge range of physiological data may be collected alongside personally identifiable data (e.g., GPS data). By reducing the amount of data these devices need to send over the network, edge inference helps keep this secure, too.
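The privacy pattern described above can be sketched as follows: the raw frame is processed and discarded on the device, and only a small, non-identifying event record is ever transmitted. The detection heuristic and threshold below are placeholders, not a real computer-vision model or any vendor's algorithm.

```python
# Sketch of privacy-preserving edge processing: the raw frame never
# leaves the device; only a minimal event record does. The "detection"
# is a placeholder heuristic standing in for an on-device vision model.

from typing import Optional

def process_frame_on_device(frame: list) -> Optional[dict]:
    """Run a (placeholder) event check locally; the frame itself is
    discarded after processing, so no pixels are ever transmitted."""
    motion_energy = sum(abs(p) for row in frame for p in row)
    if motion_energy > 1000:               # hypothetical event threshold
        return {"event": "possible_fall"}  # no pixels, no identity
    return None
```

The transmitted record carries the clinically useful signal (an event occurred) while the identifiable data (the video itself) never leaves the room.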
Edge Inference Is Viable Now
Edge inference is viable now, enabled by hardware and software advancements that make development and production easy. Chip vendors, such as NVIDIA and Coral, are providing a wealth of choices for hardware platforms on which to build edge inference solutions, such as dev kits to get development started quickly, as well as System on Modules with the versatility to build custom production systems. These platforms are also able to run on low power budgets, packing machine learning processing into small-footprint, low-power devices.”
Please find the complete paper at meddeviceonline.com.
Topics: #healthcare #lifeSciences #medicaldevices #medtech #medicaltechnology #MedSysCon #FDA #AI
For further information please get in touch with us: