How AI is changing healthcare

AI in Healthcare

AI is the next big wave, one that will change the world as we know it for generations to come. AI has attracted over $17 billion in investments since 2009 and, by some estimates, will add over $15 trillion to the world economy by 2030.

The term AI was coined in 1956, and the idea goes back even to ancient philosophers, yet some of the early applied work in this space was done at Stanford University on treating blood infections. Until about the early 2000s, most AI work was confined to universities such as MIT, Stanford and Rutgers.

 

One of the domains that stands to benefit the most from AI is healthcare. The healthcare industry advances with every major step in technology, and in recent years Artificial Intelligence has become its main point of interest. AI is being harnessed to increase both the longevity and the health of the human race.

As an example, a well-known problem in hospitals is wait times, and doctors need to make every second count. With the help of AI, hospitals can assign beds to patients faster and more effectively. While this may seem like a minor task, automating it frees employees from the job and, little by little, saves a lot of time. At Johns Hopkins Hospital, such a system has been able to predict future requests for beds and even plan for upcoming unavailability. According to a recent article in HBR, it decreased wait times and allowed the hospital to accept over 50% more patient transfers from other hospitals. AI can also handle the paperwork that consumes a significant amount of doctors' time, giving them more time to engage with their patients. Every second that AI saves is another second for doctors to save a life.

Beyond logistics, AI is also being applied directly through brain-computer interfaces, which can be used to decode neural activity. Potentially this could help the many people living with ALS and stroke, as well as the roughly half a million people who suffer spinal cord injuries each year. Neurological problems have been extremely difficult, if not impossible, to solve, and AI is helping in ways unimaginable ten years ago. When AI is allowed to look at all the data from patients, it can notice and analyze patterns in ways that would be humanly unachievable, making sense of the data to predict what will happen to specific patients with impressive accuracy. AI can also take unstructured data and classify it, which is especially useful given that, according to IBM, medical data is projected to double every 73 days from 2020 onward.

Even selfies can be used to detect disease. An algorithm can locate the subject's facial features and flag abnormalities in them; from just a few pictures, the AI can pick up signs that would otherwise require expensive equipment and preparation to find. Combined with imaging tools such as X-rays and MRI scans, AI can surface problems almost instantaneously. AI is also highly useful at recognizing patterns, which can be used to predict complications as well as patient recovery times. With the right data sets, AI should be able to foresee events such as seizures and sepsis.
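To make the pattern-prediction idea concrete, here is a minimal sketch of the kind of risk model such systems are built on. It trains a classifier on synthetic vital-sign data with scikit-learn; the feature names, numbers and labels are invented purely for illustration and have no clinical meaning.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic vital-sign features: heart rate, temperature, respiratory rate, lactate.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(80, 15, n),     # heart rate (bpm)
    rng.normal(37.0, 0.8, n),  # temperature (deg C)
    rng.normal(16, 4, n),      # respiratory rate (breaths/min)
    rng.normal(1.5, 0.9, n),   # lactate (mmol/L)
])

# Synthetic label loosely tied to elevated vitals, for demonstration only.
risk = (0.03 * (X[:, 0] - 80) + 1.2 * (X[:, 1] - 37.0)
        + 0.1 * (X[:, 2] - 16) + 0.8 * (X[:, 3] - 1.5))
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

# Train a gradient-boosted classifier and report discrimination on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

A real early-warning model would of course be trained and validated on curated clinical records, but the workflow, labelled historical data in, calibrated risk scores out, is the same.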

At SequoiaAT we have started taking small steps towards AI in medicine by collaborating with companies in life-sciences and medical domains. We have been working with them on solutions which further this goal.

AI will do what humans can do in a fraction of the time, helping and curing more people in the process. It will save unbelievable amounts of money, and even more time, making every second count.

Visualization frameworks for Bio-Informatics

By Anu P

With the advent of fast genome sequencing techniques, biological datasets worldwide have exploded to tremendous sizes. For instance, a single patient's sample, after sequencing and several stages of data processing and analysis, can run to over a terabyte. The raw data that comes out of the sequencing machine is only potentially useful information at an abstract level; it requires significant processing to be converted into a meaningful form that can drive genomics research.

Because some of the data conversion steps are highly computation intensive and/or require specialized bioinformatics algorithms, a large portion of the bio-informatics data processing pipeline is implemented in the cloud today. However, once the data resident in the "genomics cloud" reaches the hands of the researcher, it is only as good for research as the analytics and visualization capabilities built around it.

Visualization is a graphical representation of data intended to provide the user a qualitative understanding of information. Data visualization techniques greatly enhance the user’s understanding and interpretation of these massive data sets. A visualization-integrated bio-informatics pipeline provides researchers with the ability to explore genomics data and enables them to progressively iterate, backtrack or zero-in on their analysis steps, thereby enabling them to infer high-impact conclusions with an improved degree of confidence within a reasonable time.

The two essential attributes of a successful data visualization framework are:

1)   High interactivity

2)   Performance at the speed of analysis

Interactivity implies the ability to manipulate graphical entities to derive intuitive representations of the data. Interactive graphics involves detecting, measuring and comparing the points, lines, shapes and images on display, and is judged on the effectiveness of user interpretation, the accuracy of quantitative evaluation, aesthetics and adaptability. Varying the views, labelling points to retrieve the original data, zooming in for clarity, exploring neighboring points and user-adjustable mappings all combine to create a good data exploration experience for the user.

Consequently, as the user continuously manipulates the data (applies filters, adjusts thresholds, tunes parameters such as scale and dynamic range of values) to make "research sense" out of it, the visualization framework should permit:

1) Discrete or continuously variable settings with user-friendly controls like text boxes, selection drop-downs, sliders, knobs etc. and

2) Quick redrawing of the updated graphical representation after every change in user settings (a minimal sketch of both follows).
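As one concrete illustration of these two requirements, the sketch below wires a slider to a plot that is redrawn on every change. It assumes Plotly Dash purely for illustration; any comparable browser-based framework would serve the same purpose.

import numpy as np
import plotly.graph_objects as go
from dash import Dash, dcc, html, Input, Output

# Synthetic stand-in for a processed genomics result set.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = rng.normal(size=5000)

app = Dash(__name__)
app.layout = html.Div([
    # User-friendly control: a threshold slider.
    dcc.Slider(id="threshold", min=0.0, max=3.0, step=0.1, value=1.0),
    dcc.Graph(id="scatter"),
])

@app.callback(Output("scatter", "figure"), Input("threshold", "value"))
def redraw(threshold):
    # Quick redraw: keep only points whose |y| exceeds the chosen threshold.
    mask = np.abs(y) > threshold
    return go.Figure(go.Scattergl(x=x[mask], y=y[mask], mode="markers"))

if __name__ == "__main__":
    app.run(debug=True)  # app.run_server(debug=True) on older Dash versions

Running the script starts a small local web server; the user points a browser at the printed URL, and every movement of the slider triggers the callback and redraws the plot.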

General-purpose and traditional analytics software packages adopted in bio-informatics often come with add-on packages that provide a basic level of interactive visualization for research. With an easy, non-programmer model that appeals very much to researchers, these packages provide interactive graphs and plots. Having a built-in web server eliminates the need to install any client application; all the user needs is a browser and a URL to point it to.

However, when it comes to enormous datasets that run into millions of data points, these built-in/add-on visualization frameworks prove incapable of giving the user acceptable (sub one-second?) performance each time a setting is changed, so guaranteeing an analysis continuum to the user remains challenging. Besides, the rendering stability of these packages is often problematic when large data sets, with statistical methods applied to them, are thrown at them. Rendering inaccuracies, including gross misrepresentations of the data, are frequently encountered and expose the limits of their scalability.

This is where the need arises to evaluate, pilot and implement visualization frameworks based on customized graphical libraries that leverage fast rendering techniques in a browser environment. As our experiments with multiple fast-visualization techniques showed, a customized visualization framework for bio-informatics is the only way to match the user's speed of analysis and deliver an improved time-to-insights experience.
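The article does not name the specific libraries used in those experiments, so as one hedged example of a fast-rendering technique, the sketch below aggregates millions of raw points into a fixed-size grid on the server, leaving the browser to render only a small heatmap instead of every point. The sizes and data are illustrative assumptions.

import numpy as np

def aggregate_points(x, y, width=800, height=600):
    """Bin (x, y) points into a width x height count grid for heatmap rendering."""
    counts, x_edges, y_edges = np.histogram2d(x, y, bins=[width, height])
    return counts, x_edges, y_edges

# Five million synthetic points reduced to an 800x600 grid in a single pass.
rng = np.random.default_rng(2)
x = rng.normal(size=5_000_000)
y = rng.normal(size=5_000_000)
grid, _, _ = aggregate_points(x, y)
print(grid.shape, int(grid.sum()))  # (800, 600), with all 5,000,000 points accounted for

Because the payload sent to the browser no longer grows with the number of raw points, redraw times stay roughly constant as the dataset scales, which is the property the "speed of analysis" requirement demands.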

In conclusion, a bio-informatics visualization framework needs to be highly interactive and lightning fast to handle data sets running into the millions. Further, from the bioinformatics pipeline provider's perspective, scalability to a large number of concurrent users and security of data are the other key attributes the visualization framework must satisfy, just as they apply to the other modules in the pipeline, such as data transformation and analytics.

Critical Success Factors for Medical Device Product Development

According to published market reports, the medical device market is expected to grow to a staggering $340+ billion by 2021, with the greatest opportunities in general medical devices, cardiovascular devices, and the surgical & infection control segments. With such tremendous opportunities in the global market, it is imperative that medical device product developers be aware of the stringent demands of design and development, which emphasize safety and compliance with established regulations and standards. Over years of working with major medical product companies like Johnson & Johnson, Boston Scientific, Medtronic and Baxter, we have seen and experienced a variety of development approaches, challenges and the stringent standards compliance demanded by both client audit teams and independent audit teams. Some of the products developed included a disposable colonoscope, automated sterilizers, blood glucose meters and a drug-dispensing implantable device. This is an attempt to share our experience with the essential elements of product design; a similar post on process elements will follow soon.

Medical devices can be broadly classified into three market segments – Diagnostic, Therapeutic and Implantable. Based on safety and risk assessment, devices are classified as Class I, Class II or Class III. Product designers and manufacturers must demonstrate adequate controls and "compliance" to avoid being found guilty of deficiencies. It is important to understand that in this domain "intentions do not count, but action alone".

Product Development

Product development rigor depends on the product's safety classification, its history and whether it is a "first of its kind" or a "me too" product. For a first-of-its-kind product, the focus should be on the characteristics of the materials used and on effective documentation right from the proof-of-concept phase. The manufacturing process is important (especially material consistency, sterilization and hygiene). Software development needs to demonstrate complete verification and validation throughout the development life cycle. The severity of device failure decides the development rigor (Level of Concern Analysis, LOCA), and proof of positive compliance needs to be recorded and submitted.

The product life cycle phases are Concept → Design → Implement → Manufacture → Disposal. This life cycle looks very much like a standard one, but what differentiates it is the focus you need to bring to each of these phases from the product, process and compliance perspectives. In the concept phase, inputs are to be considered from the market, existing products and product-category-specific standards. In the design phase, DFX aspects should be planned and incorporated, and design rigor is brought in through processes like DFMEA (Design Failure Mode and Effects Analysis), reliability prediction, PFMEA (Process Failure Mode and Effects Analysis), system hazard analysis, software hazard analysis, a requirement trace matrix, COTS (Commercial Off-The-Shelf) product validation, and test plans covering verification and validation with both positive and negative compliance.

User Interface design is another important aspect that needs to be practiced. It contributes to improving the safety of medical devices and equipment by reducing the likelihood of user error. This is accomplished by the systematic and careful design of the user interface, i.e., the hardware and software features that define the interaction between users and equipment.

Focusing on six early engagement areas will contribute significantly to developing a safe and reliable product: PCB layout and fabrication, PCB assembly, component engineering, test engineering, system engineering and packaging, and product support.

Conclusion

Fundamental to designing and developing a medical product that is safe and effective is integrating safety into product development. The objective should be to remove or lower risk at the design phase, then protect against risks that cannot be removed at design, and, failing that, inform the user about the residual risks through appropriate methods. The goal is to cover the entire foreseeable lifetime of the apparatus – transportation, installation, usage, shutdown and disposal.
