SequoiaAT in 10 Best IoT Solution Providers of 2019

SequoiaAT is pleased to announce that it has been named in CIO Bulletin's list of the 10 Best IoT Solution Providers for 2019. Speaking on the occasion, KR Gopinath, COO of the company, said, "I am glad that they recognized what we do here at Sequoia. Our team's outside-the-box thinking and persistence in making our customers' products better are why we were recognized by the CIO Bulletin."

SequoiaAT currently has two development centers, in Santa Clara (USA) and Trivandrum (India). The company plans to expand its Santa Clara office and set up a new office in Kochi (India).

Working with passion is the internal theme at Sequoia, and this recognition is proof of what every Sequoian believes in. Ram Mohan (Director) says, "At SequoiaAT, quality starts with ensuring that we hire for our culture. We hire only individuals who are extremely passionate about their work. This enables us to go beyond our customers' expectations."

SequoiaAT was previously named among the Top 100 Tech Companies Founded by Indians by Silicon India Magazine.

The complete article in CIO Bulletin can be found at this link.

How AI is changing healthcare

AI in Healthcare

AI is the next big wave, one that will change the world as we know it for generations to come. AI has attracted over $17 billion in investments since 2009 and, per some estimates, will add over $15 trillion to the world economy by 2030.

The term AI was coined in 1956, though the idea was contemplated even by ancient philosophers. Some of the early applied work in this space was done at Stanford University on treating blood infections. Until about the early 2000s, most of the work in AI was limited to universities such as MIT, Stanford and Rutgers.

One of the domains that stands to benefit the most from AI is healthcare. The healthcare industry advances with new discoveries daily as technology moves forward in major ways. Amazing things have been done in the last few years, and Artificial Intelligence is currently the main point of interest. AI is being harnessed to increase the longevity and health of the human race.

As an example, we all know one problem with hospitals is wait times, and doctors need to make every second count. With the help of AI, hospitals can assign beds to patients faster and more effectively. While this may seem like a trivial task, it frees employees from doing the job and, little by little, saves a lot of time. At The Johns Hopkins Hospital, AI has been used to predict future requests for beds and even plan for future unavailability. According to a recent article in HBR, it decreased wait times and allowed the hospital to accept over 50% more new patients from other hospitals. AI can also handle the paperwork that takes doctors a significant amount of time, giving them more time to engage with their patients. Every second that AI saves is another second for doctors to save a life.

Beyond administrative work, AI is also being applied directly through Brain-Computer Interfaces, which can be used to decode neural activity. Potentially, this could help the many people with ALS or strokes, as well as the half a million people who suffer spinal cord injuries every year. Neurological problems have been extremely difficult, if not impossible, to solve, and AI is helping in ways unimaginable 10 years ago. When AI is allowed to look at all the data from patients, it can notice and analyze patterns in ways that would be humanly unachievable. AI makes sense of the data, allowing clinicians to predict what will happen to specific patients with incredible accuracy. AI can also take unstructured data and classify it, which is especially useful since medical data is expected to double every 73 days by 2020, according to IBM.

Even selfies can be used to detect diseases. An algorithm can locate a subject's facial features and flag abnormalities in them. From just a few pictures, AI can analyze things that would otherwise require expensive equipment and preparation to find out. Combined with tools such as X-rays and MRI scans, AI can identify problems almost instantaneously. AI is highly useful in recognizing patterns, which can be used to predict complications as well as patient recovery times. With the right data sets, AI will be able to foresee conditions such as seizures and sepsis.
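
To make the pattern-recognition idea concrete, here is a minimal sketch (in Python, using scikit-learn) of the kind of model that could flag sepsis risk from vital signs. The features, thresholds and data below are synthetic, illustrative assumptions, not a clinical model.

    # A minimal sketch, not a clinical model: flagging sepsis risk from vitals.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 1000
    # Synthetic vitals: heart rate, temperature, respiratory rate, WBC count.
    X = np.column_stack([
        rng.normal(85, 15, n),     # heart rate (bpm)
        rng.normal(37.2, 0.8, n),  # temperature (deg C)
        rng.normal(18, 4, n),      # respiratory rate (breaths/min)
        rng.normal(9, 3, n),       # white blood cell count (10^9/L)
    ])
    # Toy label: "at risk" when several vitals are jointly elevated.
    y = ((X[:, 0] > 100) & (X[:, 1] > 38.0) | (X[:, 2] > 24)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")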

At SequoiaAT, we have started taking small steps towards AI in medicine by collaborating with companies in the life-sciences and medical domains. We have been working with them on solutions that further this goal.

In many of these tasks, AI can do what humans do in a fraction of the time, helping and curing more people along the way. AI will save unbelievable amounts of money, and even more time, making every second count.

Visualization frameworks for Bio-Informatics

By Anu P

With the advent of fast genome-sequencing techniques, biological datasets worldwide have exploded to tremendous sizes. For instance, a single patient's sample, after sequencing and several stages of data processing and analysis, could run to over a terabyte! The raw data that comes out of the sequencing machine is only potentially useful information, requiring significant processing to be converted into a meaningful form that can drive genomics research.

Because some of the data-conversion steps are highly computation-intensive and/or require specialized bioinformatics algorithms, a large portion of the bioinformatics data-processing pipeline is implemented in the cloud today. However, once the data resident in the "genomics cloud" reaches the hands of the researcher, it is only as good for research as the analytics and visualization capabilities built around it.

Visualization is the graphical representation of data, intended to give the user a qualitative understanding of the information. Data visualization techniques greatly enhance the user's understanding and interpretation of these massive data sets. A visualization-integrated bioinformatics pipeline gives researchers the ability to explore genomics data and enables them to progressively iterate, backtrack or zero in on their analysis steps, letting them draw high-impact conclusions with an improved degree of confidence within a reasonable time.

The two essential attributes of a successful data visualization framework are:

1)   High interactivity

2)   Performance at the speed of analysis

Interactivity implies the ability to manipulate graphical entities to derive intuitive representations of the data. Interactive graphics involves detecting, measuring and comparing the points, lines, shapes and images being represented, judged on effectiveness of user interpretation, accuracy of quantitative evaluation, aesthetics and adaptability. Varying the views, labelling points to retrieve the original data, zooming in for clarity, exploring neighboring points and user-adjustable mappings all contribute to a good data-exploration experience for the user.

Consequently, as the user continuously manipulates the data (applies filters, adjusts thresholds, tunes parameters like scale and dynamic range) to make "research sense" out of it, the visualization framework should permit:

1) Discrete or continuously variable settings with user-friendly controls like text boxes, selection drop-downs, sliders, knobs, etc., and

2) Quick redrawing of the updated graphical representation after every change made in the user settings, as in the sketch below.
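
As one illustration of what "quick redrawing" demands, this sketch (in Python, assuming a NumPy-based back end) bounds the cost of every redraw by decimating the data to roughly two points per pixel before rendering; the min/max-per-bucket scheme shown is one common choice, not the only one.

    # A minimal sketch: bound redraw cost by decimating to ~2 points per pixel.
    # Keeping each bucket's min and max preserves spikes that plain
    # subsampling would lose.
    import numpy as np

    def decimate_minmax(y: np.ndarray, n_buckets: int) -> np.ndarray:
        """Reduce y to 2*n_buckets points: each bucket's min and max."""
        usable = (len(y) // n_buckets) * n_buckets
        buckets = y[:usable].reshape(n_buckets, -1)
        out = np.empty(2 * n_buckets)
        out[0::2] = buckets.min(axis=1)
        out[1::2] = buckets.max(axis=1)
        return out

    # Ten million samples decimated for an ~800-pixel-wide plot area.
    signal = np.cumsum(np.random.randn(10_000_000))
    plot_ready = decimate_minmax(signal, n_buckets=800)
    print(len(signal), "->", len(plot_ready), "points per redraw")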

General-purpose and traditional analytics software packages adopted in bioinformatics often come with add-on packages that provide interactive visualization up to a basic level of utility for research. With an easy, non-programmer model that appeals very much to researchers, these packages provide interactive graphs and plots. Having an in-built web server eliminates the need to install any client applications; all the user needs is a browser and a URL to point it to.

However, when it comes to enormous datasets that run into millions of data points, these in-built/add-on visualization frameworks prove incapable of giving the user acceptable (sub-1-second?) performance each time a user setting is changed. Guaranteeing an analysis continuum to the users therefore remains challenging. Besides, the rendering stability of these in-built/add-on packages is often problematic when large data sets, with statistical methods applied to them, are thrown at them. Rendering inaccuracies, including gross misrepresentations of the data, are frequently encountered and expose the limits of their scalability.

Hence the need to evaluate, pilot and implement visualization frameworks based on customized graphical libraries that leverage fast rendering techniques in a browser environment. As our experiments with multiple fast-visualization techniques proved, a customized visualization framework for bioinformatics is the only way to match the user's speed of analysis and provide an enhanced time-to-insights experience.
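
One family of fast rendering techniques of the kind referred to above is server-side rasterization: aggregate the millions of points into a fixed-size image and ship only the pixels to the browser. The sketch below (Python) uses the open-source datashader library as one possible implementation; the column names and sizes are illustrative assumptions.

    # A minimal sketch: rasterize 10 million points server-side so the
    # browser receives a fixed-size image instead of the raw data.
    import numpy as np
    import pandas as pd
    import datashader as ds
    import datashader.transfer_functions as tf

    n = 10_000_000
    df = pd.DataFrame({
        "x": np.random.normal(size=n),  # e.g. log fold-change
        "y": np.random.normal(size=n),  # e.g. -log10 p-value
    })

    canvas = ds.Canvas(plot_width=800, plot_height=600)
    agg = canvas.points(df, "x", "y")   # bin the points into an 800x600 grid
    img = tf.shade(agg, how="log")      # map bin counts to pixel intensities
    img.to_pil().save("scatter.png")    # a constant-size payload to serve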

In conclusion, a bioinformatics visualization framework needs to be highly interactive and lightning fast to handle data sets running into the millions of points. Further, from the bioinformatics pipeline provider's perspective, scalability to a large number of concurrent users and security of data are the other key attributes the visualization framework must satisfy, as they are for the other modules in the pipeline, such as data transformation and analytics.

Cross-platform vs. Native Mobile App Development: Which One to Choose

 
Aruna R S

Today, 99.6% of all smartphones run on either iOS or Android. Mobile apps have increasingly gained significance, not only as a way to conduct business but also as a means of raising brand awareness, and hundreds of new applications are launched daily. In the last few years, the concept of cross-platform mobile app development has taken off in a big way: it allows the developer to write the code once and deploy it across all platforms, whether Android, iOS or Windows. Below we compare the two approaches and the advantages of each.

Cross-platform vs Native apps:

Native apps

Native apps are written in languages that the platform accepts natively. For example, Swift or Objective-C is used to write native iOS apps, Java is used to write native Android apps, and C#, for the most part, for Windows Phone apps.

Apple and Google offer app developers their own development tools, interface elements and standardized SDKs: Xcode for iOS and Android Studio for Android. This allows any professional developer to develop a native app relatively easily.

Advantages

  • Since native apps work with the device’s built-in features, they are easier to work with and also perform faster on the device.
  • Native apps get full support from the concerned app stores and marketplaces. Users can easily find and download apps of their choice from these stores.
  • Because these apps have to get the approval of the app store they are intended for, the user can be assured of complete safety and security of the app.
  • Native apps work out better for developers, who are provided the SDK and all other tools to create the app with much more ease.

Cross-platform apps

While cross-platform development is somewhat of an umbrella term for any mobile app project that targets multiple platforms, hybrid is a subtype that implies the use of a specific development model. Typical representatives of hybrid development tools are Cordova and PhoneGap. Both allow developers to create apps that are web/native 'hybrids', with the code written in HTML, CSS or JavaScript and then wrapped in an invisible native WebView browser.

Cross-platform development tools that do not use a WebView and communicate with the platform directly aren't united under any subgroup. Existing under the general term of cross-platform development, they are sometimes called native development tools, which just makes it all even more confusing. For the sake of convenience, we'll refer to these tools as 'near-native' here and will explain why they deserve such praise.

In the ideal scenario, cross-platform apps work on multiple operating systems from a single code base. There are two types of cross-platform apps:

  1. Native Cross-Platform Apps
  2. Hybrid 'HTML5' Cross-Platform Apps

Native Cross-platform Apps

Native cross-platform apps are created when you use the APIs provided by the Apple or Android SDK but implement them in a programming language that isn't supported by the operating system vendor. Generally, a third-party vendor provides an integrated development environment that handles the creation of the native application bundles for iOS and Android from a single cross-platform codebase. The final product is an app that still uses the native APIs, and cross-platform native apps can achieve almost-native performance without any lag visible to the user. NativeScript, Xamarin and React Native are the most common examples of native cross-platform frameworks.

Hybrid HTML5 cross-platform apps

Although mobile applications are designed for smartphones and tablets, it is back-end servers (either on-premises or cloud-based) that handle the application logic. Since both the iOS and Android SDKs feature advanced web components, skilled software engineers often utilize the WebView to create parts of an application's GUI (Graphical User Interface) with HTML5, CSS and JavaScript. The most popular hybrid app development framework is Apache Cordova (formerly known as PhoneGap).

Mobile app development tools

Xamarin:

Xamarin apps are built with standard, native user-interface controls. Built with C# and .NET, Xamarin allows developers to re-use code and simplifies the process of creating dynamic layouts in iOS. Apps not only look the way the end user expects, they behave that way too. Xamarin apps have access to the full spectrum of functionality exposed by the underlying platform and device, including platform-specific capabilities like iBeacons and Android Fragments. Xamarin apps leverage platform-specific hardware acceleration and are compiled for native performance, which can't be achieved with solutions that interpret code at runtime.

Apache Cordova

Apache Cordova is an open-source mobile development framework. It allows you to use standard web technologies – HTML5, CSS3 and JavaScript – for cross-platform development. Applications execute within wrappers targeted to each platform and rely on standards-compliant API bindings to access each device's capabilities, such as sensors, data, network status, etc. Cordova has no limitations in relation to natively developed applications: what you get with Cordova is simply a JavaScript API that serves as a wrapper for native code and is consistent across devices. You can consider Cordova to be an application container with a web view that covers the entire screen of the device. The web view used by Cordova is the same web view used by the native operating system: on iOS, this is the Objective-C UIWebView class; on Android, this is android.webkit.WebView.

Apache Cordova comes with a set of pre-developed plugins that provide access to the device's camera, GPS, file system, etc. As mobile devices evolve, adding support for additional hardware is simply a matter of developing new plugins.

React Native

The React Native framework was created by Facebook, and its development started as the result of a hackathon back in 2013. React Native is an example of a technology that the developer community created for itself, at a time when developers were looking for a tool that would combine the good parts of mobile development with the power and agility of the native React environment. React Native's genesis resulted in a huge, enthusiastic community investing in the framework's development, and there are catalogs of freely available components that go with it.

React Native uses various UI blocks to compose rich mobile apps for both iOS and Android from a common JavaScript codebase. It also allows developers to see their code and its implementation on real mobile screens side by side in real time.

React Native provides development tools for debugging and application packaging, which saves time.

Which One to Choose

So, if you want to impress users with a lightning-fast interface, rich functionality and overall performance, native apps are what you need. In addition, you get better security and stability. The price for this is that you'll most likely need to hire a dedicated team for each platform, and a small business may not be able to afford to develop an application for both.

Cross-platform apps, on the other hand, can be developed for both iOS and Android at once. Plus, cross-platform apps are much easier to maintain and deploy, so you can spend more time and money on marketing and attracting new customers. Their biggest disadvantage, however, is lower performance, which may be especially crucial if you're developing an application whose features require deep hardware integration.

Integrated Big Data-as-a-Service (BDaaS): A new opportunity for B2B

Manoj K Nair

Big data as a technology has passed through various stages of evolution over the last few years, which still keeps it high on the list of tech buzzwords! Starting with handling the 3 V's of data – Volume (of data to be handled), Velocity (of data generated) and Variety (of data generated) – it has spread its wings to more V's – Veracity (to ensure data integrity and reliability), Vulnerability (to address privacy and confidentiality concerns) and Value (of information)!

As Google showed the way, collecting and collating huge volumes of data and applying the right analytics to gain valuable insights into the business and its optimization possibilities is the key to extracting the full potential of the data-driven industry. Today, Chief Data Officers are building strategies to organize their data and derive business intelligence from it, driving radical transformation of businesses in sectors such as industrial, retail, logistics and healthcare.

BDaaS (Big Data-as-a-Service) is gaining momentum, enabling external experts to take the company's customer data to the cloud and provide analytical insights for decision making. Offered as a managed service, it frees the customer from substantial initial investment and helps offer RoI-driven spending. This article focuses on BDaaS, describing the potential it offers our customers to conceptualize and launch new business models.

Large corporations with structured and centralized ERP systems wouldn't benefit as much from BDaaS as unorganized sectors comprising diverse players, each with its own fragmented IT infrastructure. For instance, unorganized retail is a heterogeneous sector with a geographically distributed supply chain spanning medium and small players with considerable differences in their levels of process maturity. Stand-alone islands of software applications are frequently encountered, as are ad-hoc (or legacy) structures of data storage and archival. B2B companies providing services to geographically spread-out customers in many traditional supply chains, such as chemicals/reagents for laboratory use, petrochemical (non-fuel) derivatives and medical drugs, could benefit from the transformational potential of BDaaS.

Suppose you are a B2B player in one of these or similar sectors; let us take a closer look at your business and customer data! Could your expertise in the industry be leveraged to identify a new data-driven model by "integrating your customer data" and offering new intelligence gleaned from it? This integration gives you data at the sector level rather than at the individual-customer level. You will be able to identify sector-level intelligence and provide it to all your customers, which is mutually beneficial for everyone.

To accomplish this outcome, you will most often need external expertise in big data, working collaboratively with you (or your domain consultants) to build a BDaaS platform to offer your customers. The business intelligence the platform brings helps them win in their businesses, and their patronage in turn helps your business model succeed.

Such has been our experience working with a world leader in the pharmacy supply chain across North America. Besides supplying medicines and medical equipment to their customers, they also provide them with inventory and patient-management software. The software installed in each of the numerous hospitals gathers transactional data over time. We worked closely with the customer's consultants on the feasibility of data integration and created a centralized control center using big data technologies such as Spark and Kafka. Hosted in the cloud, the platform captures streaming data from the different hospitals and pushes it to the centralized system, which offers a metered BDaaS service to end-customers, with the analytics insights helping them optimize their businesses.
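
As a rough sketch of what such a pipeline can look like (in Python; the broker address, topic name and schema below are illustrative assumptions, not the actual system), a Spark Structured Streaming job can subscribe to a Kafka topic fed by the hospital-side collectors and keep centrally aggregated totals:

    # A minimal sketch of a Kafka -> Spark Structured Streaming pipeline.
    # Broker address, topic name and schema are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import StructType, StringType, IntegerType, TimestampType

    spark = SparkSession.builder.appName("bdaas-ingest").getOrCreate()

    schema = (StructType()
              .add("hospital_id", StringType())
              .add("drug_code", StringType())
              .add("quantity", IntegerType())
              .add("ts", TimestampType()))

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "hospital-transactions")
           .load())

    events = (raw
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Running stock movement per hospital and drug, updated as events arrive.
    totals = events.groupBy("hospital_id", "drug_code").sum("quantity")

    (totals.writeStream
           .outputMode("complete")
           .format("console")
           .start()
           .awaitTermination())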

The path to big data implementation, however, was filled with several challenges, a few of which are:

Data security

With regulatory requirements concerning medical information, such as the HIPAA standards, compliance is mandatory. Only non-sensitive data at a lower level of granularity is collected, respecting the individual hospitals' concerns about exposing their patients' sensitive information. This was the key factor in the success of the project, both from the customer buy-in and the regulatory-compliance points of view. The collected data is pushed to the cloud securely with transport-layer security.
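
As a hedged illustration of that last point, a hospital-side producer can be configured to speak TLS to the cloud brokers. The sketch below uses the open-source kafka-python client; the broker address, certificate paths and topic are assumptions.

    # A minimal sketch: pushing collected records to the cloud over TLS.
    # Uses the open-source kafka-python client; paths/topic are assumptions.
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="broker.example.com:9093",
        security_protocol="SSL",             # TLS-encrypted connection
        ssl_cafile="/etc/pki/ca.pem",        # CA that signed the broker cert
        ssl_certfile="/etc/pki/client.pem",  # client certificate (mutual TLS)
        ssl_keyfile="/etc/pki/client.key",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Only non-sensitive, low-granularity fields leave the hospital.
    record = {"hospital_id": "H042", "drug_code": "AMOX500", "quantity": -3}
    producer.send("hospital-transactions", record)
    producer.flush()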

Variety of data

Heterogeneous and scattered data is the foremost challenge when implementing big data solutions. Even though most hospitals use our customer's software, a few use their own legacy software. Data can be isolated even across departments in the same organization! We built data-collector modules that can be easily customized to collect data from various sources and push it to the cloud. Rationalizing the relevant data fields from these diverse sources and integrating them opens up a lot of possibilities for analytics.
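
A sketch of what "easily customized" collector modules can look like: a common interface normalizes records from each source, and a small per-source adapter handles the local schema. The class and field names here are illustrative assumptions.

    # A minimal sketch of pluggable data collectors; names are illustrative.
    import csv
    from abc import ABC, abstractmethod
    from typing import Callable, Iterator

    class Collector(ABC):
        """Common interface: every source yields normalized record dicts."""

        @abstractmethod
        def records(self) -> Iterator[dict]:
            ...

    class CsvLegacyCollector(Collector):
        """Adapter for a hospital running legacy software that exports CSV."""

        def __init__(self, path: str):
            self.path = path

        def records(self) -> Iterator[dict]:
            with open(self.path, newline="") as f:
                for row in csv.DictReader(f):
                    # Map the legacy column names onto the common schema.
                    yield {"hospital_id": row["site"],
                           "drug_code": row["item"],
                           "quantity": int(row["qty"])}

    def push_all(collector: Collector, send: Callable[[dict], None]) -> int:
        """Drain a collector into any sender, e.g. the TLS producer above."""
        count = 0
        for rec in collector.records():
            send(rec)
            count += 1
        return count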

Time to market and initial investment

Being a metered service, we had to make sure the customer's cost stays linear with usage. The Databricks big data platform, together with a reliable open-source Kafka data injector, gives us a balanced and scalable framework to meet this objective.

After data from all sources was made centrally available for analysis, it was discovered that information on the availability of particular medicines in each hospital, combined with demand predictability, has the potential to reduce the associated transportation costs by around 20%. A data-driven drill-down revealed, for instance, that for an area with a prevalence of influenza but a shortage of the corresponding medicine, the system can identify the best possible area (the nearest one with enough stock but no current demand) from which the medicine can be arranged. Mitigating supply-chain demand by coordinating drug supply between customers can significantly reduce inventory and transportation costs. More importantly, it saves precious reaction time for their end-users, which would not have been possible without the magic of BDaaS.
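
A hedged sketch of the matching logic described above: given each hospital's stock and forecast demand, pick the nearest hospital with a surplus to cover a shortage. The coordinates, stock figures and plain straight-line distance are all illustrative assumptions.

    # A minimal sketch of nearest-surplus matching; all data is illustrative.
    import math

    hospitals = {
        # name: (latitude, longitude, stock, forecast_demand)
        "H_A": (40.71, -74.00, 2, 40),    # shortage here
        "H_B": (40.73, -73.99, 120, 10),  # nearby surplus
        "H_C": (34.05, -118.24, 200, 5),  # far-away surplus
    }

    def distance(a, b):
        """Rough planar distance; a real system would use road networks."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def best_source(target):
        """Nearest hospital whose stock exceeds its own forecast demand."""
        t = hospitals[target]
        donors = [(distance(t, h), name)
                  for name, h in hospitals.items()
                  if name != target and h[2] > h[3]]
        return min(donors)[1] if donors else None

    print(best_source("H_A"))  # -> H_B, the nearest hospital with spare stock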

In your own strategy to connect your fragmented customer data centrally and provide mutually beneficial information, the role of an experienced big data partner is indeed crucial. Combine the power of your domain expertise with big data specialists to create new data-driven business models which, besides increasing your revenues, could make you the hub for all your customers, strengthening the bond with existing ones and attracting new ones.

Maintenance Management & IIOT

Aju Kuriakose

IoT is changing the world around us, and this change is affecting every walk of life, including the maintenance industry. Maintenance management used to depend on the troubleshooting skills of maintenance managers and was hardly data-driven, since they had very limited data to fall back on when it came to machine health. That is rapidly changing: maintenance is becoming heavily data-driven rather than skill-driven. Advances in wireless communications and data processing enable maintenance managers to gauge the health of the factory in an instant.

We can tell that it is no longer hype but reality, and the proof is in the fact that a leading standards organization, the OPC Foundation, is spending time developing the Unified Architecture (UA) specification for IIoT in the manufacturing environment. The standard is being developed to let IIoT devices easily pass information between sensors, machines, monitoring devices and the cloud in a secure and open way. OPC, AMT and OMAC are also working together to combine OPC UA with existing industry standards such as PackML (Packaging Machine Language) and MTConnect to lower the cost of predictive maintenance.

The low cost of IIoT sensors is making predicting failure, or measuring the remaining useful life (RUL) of a tool, a no-brainer, enabling maximum uptime at optimum cost. As an example, a drill will start to suffer wear over the course of its use. With continued regular use, at some point it becomes unusable, either because the precision of the job falls below the required parameter or because the drill bit breaks off. By combining Industrial IoT sensors with AI techniques, today we can readily predict the remaining useful life of the tool.
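
As a hedged sketch of that idea (in Python; not a production model), the snippet below fits a simple regression from two wear-sensitive sensor features to remaining useful life on synthetic run-to-failure data. The features, wear model and units are illustrative assumptions.

    # A minimal sketch: estimating remaining useful life (RUL) from sensor
    # features. The synthetic wear model and features are illustrative only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(7)
    n = 2000
    wear = rng.uniform(0, 1, n)                    # 0 = new, 1 = worn out
    # Vibration and temperature drift upward as the tool wears.
    X = np.column_stack([
        0.5 + 2.0 * wear + rng.normal(0, 0.1, n),  # vibration RMS (mm/s)
        30 + 15 * wear + rng.normal(0, 1.0, n),    # spindle temperature (C)
    ])
    y = 200 * (1 - wear)                           # remaining life (hours)

    model = GradientBoostingRegressor().fit(X, y)

    # Estimate RUL for a tool showing moderate vibration and heat.
    print(f"estimated RUL: {model.predict([[1.6, 38.0]])[0]:.0f} hours")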

Any maintenance professional will agree that predictive maintenance is a journey they have to take, and IIoT makes the journey easier. Retrofitting an existing machine with a sensor to measure machine health becomes very easy. One of the companies we work with to enable this transition is OPA By Design. Their smart device can be tagged onto any existing machine at very minimal cost to measure eight different parameters and report them to maintenance supervisors via a mobile app and the cloud. Since the machine is constantly monitored, any sign of degradation in its health triggers an instant alert.

IIoT is also driving down inventory holding costs: maintenance supervisors now have better predictability of machine failure and therefore need to stock fewer spares. It also results in fewer emergency inventory orders and less downtime due to out-of-stock inventory.

IIoT is not changing anything fundamental for the maintenance professional, except that he can now listen to his assets and make informed decisions based on actual data about their health. IIoT is not going to fix the problem for him; he will still have to depend on his best technician to fix it reliably.

Link to article on LinkedIn

Reinventing manufacturing tests for automotive electronics

Ram Mohan Ramakrishnan

Automotive electronics has been making steady gains as a percentage of total vehicle cost worldwide. Consequently, it now faces some of the same challenges that were faced earlier (and largely solved by automated tests) in other areas of automobile mass-manufacturing: fabrication, mechanical assembly, electrical components and hydraulic systems.

A typical example is the Electronic Control Unit (ECU), which has become the heart (or brain!) of the modern automobile. An ECU receives inputs from various sensors and sends outputs to multiple actuators, in addition to communicating with the ECUs of related subsystems in the vehicle. Some ECUs implement performance-critical functions such as fuel injection and ignition timing, whereas others control safety-critical systems such as the Anti-lock Braking System (ABS) and Electronic Stability Control (ESC). An automated manufacturing test station for an ECU is therefore significantly complex in design, involving several pieces of instrumentation, simulation of sensors and multiple automotive communication protocols.

Let's see if some real-world figures can lend a quantitative perspective to this mass-manufacturing challenge. Take the case of a mid-size automotive OEM that sells over 100,000 vehicles annually, with production in 2 plants of identical capacity. That means at least an equal number of ECUs (taking Engine Control alone) supplied annually by their Tier-1 ECU manufacturer, who needs to produce around 8 ECUs an hour in each plant, assuming full 3-shift operations. With 4 parallel assembly lines, that gives less than 30 minutes to manufacture an ECU! The time practically available for testing ECUs at the End-of-Line (EoL) is even shorter. Assuming 2 parallel test stations, the operator typically has less than a minute to test an ECU: load it on the test station, execute the automated tests, learn whether it passed or failed, print a bar code and affix it to the passed piece (or dump the failed piece into the reject bin), unload the ECU, and be ready to load the next one! Added to this is the complexity of different versions of the same ECU being in production simultaneously. Since batches with different versions of the ECU come to the same test station, the operator needs to reconfigure the station for a different set of tests each time, and the reconfiguration must typically be completed within 4 to 5 minutes before loading the next ECU type.
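
As a quick sanity check on those figures (assuming roughly 250 working days a year, which is our assumption rather than a number stated above), the arithmetic works out as follows:

    # A back-of-the-envelope check of the figures above. The working-day
    # count is an assumption; the article states only annual volumes.
    vehicles_per_year = 100_000
    plants = 2
    working_days = 250        # assumed
    hours_per_day = 24        # full 3-shift operation

    ecus_per_plant_per_hour = vehicles_per_year / plants / (working_days * hours_per_day)
    print(f"{ecus_per_plant_per_hour:.1f} ECUs per hour per plant")    # ~8.3

    assembly_lines = 4
    minutes_per_ecu = 60 / (ecus_per_plant_per_hour / assembly_lines)
    print(f"{minutes_per_ecu:.0f} minutes to build one ECU per line")  # ~29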

Now let's review how this challenge applies (or doesn't apply!) to different segments of the automotive industry. It's a no-brainer that any Tier-1 manufacturer (or OEM) in the business would already have all of this covered on their factory floors; if not, they would hardly be selling! However, it is no longer the steady state in the case of a newly introduced ECU design, be it part of a new brand of vehicle the OEM plans to introduce to the market, or related to an additional feature, like adaptive cruise control, being introduced for a new model variant. Does the Tier-1 manufacturer have the engineering bandwidth to design the test station themselves? In the case of a technology transfer of the ECU design from a global principal, does the Tier-1 manufacturer have the in-house expertise in the early stages to develop a test station on time, before pilot production starts? In the case of in-house development of the ECU, does the Tier-1 manufacturer really have the resources, bandwidth and simply the time to get the test station ready before the ECU design passes all type tests and hits production?

Alternatively, do existing test-station vendors for other components, like starter motors, tiltable mirror assemblies or instrument clusters, have the necessary expertise to design such a complex test station? What about ECUs for electric vehicles (and hybrids), which are predicted to transform the entire motoring landscape forever? And let's not forget the two-wheeler (and three-wheeler) segments, which, under the rapidly closing time window of emission-control regulations (Bharat Stage VI in India, although a few years behind Euro VI, currently has a 2020 deadline!), will be forced to switch to ECU-based fuel injection within a few years in order to sell legally in the market.

Here's where a little foresight in accelerating the design of manufacturing test solutions could benefit the relevant stakeholders. At Deep Thought Systems, we have designed and developed a reliable, modular and generic platform called TestMate for building manufacturing test stations specifically for ECUs. We have successfully customized TestMate to supply EoL test stations for ECUs to Indian Tier-1 manufacturers and OEMs with a very short turnaround.

The Human Machine Interface (HMI) of TestMate, the main part that the operator sees and operates on a continuous basis, addresses a very generic requirement: a rugged enclosure, controls and indications built for long years of reliable performance on an assembly floor. They say, and we've witnessed it ourselves, that routine use of test stations by factory operators constitutes a really harsh environment! The mounting, orientation, peripherals for viewing and printing, display properties and so on are all ergonomically designed for continuous usage by an operator over an 8-hour shift (or longer!). We have successfully installed test stations on factory floors where they have been in continuous use for years, with zero support calls.

We work with the customer on the ECU connector type to design a custom cable harness and a test fixture that includes the mating connector, with a locking arrangement. The fixture design ensures proper contact between the pins of the ECU connector and the mating connector over months of continuous loading and unloading. We equip the customer with a spare cable harness to handle the unlikely event of damage due to exceedingly rough or careless usage by operators; it can easily be replaced onsite without having to depend on a service engineer.

Built on the same principles as our other automotive offerings for vehicle diagnostics, testing and simulation, TestMate is capable of communicating with various ECU designs over multiple automotive communication protocols like CAN, K-Line and LIN, and messaging standards like J1979, J1939, UDS, KWP2000, etc. We work with the customer to customize it for the ECU's communication specification. Apart from testing continuous engine parameters, the Diagnostic Trouble Codes defined for the ECU can also be tested. Containing many building blocks of an actual ECU, for many communication tests the test station appears to the ECU as a peer ECU (sometimes several) of the related sub-system(s)!
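
As a hedged illustration of protocol-level testing of this kind (this is not TestMate's actual implementation), the sketch below uses the open-source python-can library to send a UDS ReadDataByIdentifier request and check for a positive response. The CAN IDs (0x7E0 request, 0x7E8 response) and the data identifier are typical values assumed for the example.

    # A minimal sketch of a UDS request over CAN using open-source python-can;
    # not TestMate's implementation. IDs and the DID are assumed typical values.
    import can

    bus = can.interface.Bus(channel="can0", interface="socketcan")

    # UDS ReadDataByIdentifier (service 0x22), DID 0xF186 (active diagnostic
    # session), as a single ISO-TP frame: [length, SID, DID hi, DID lo, pad...]
    request = can.Message(
        arbitration_id=0x7E0,
        data=[0x03, 0x22, 0xF1, 0x86, 0x00, 0x00, 0x00, 0x00],
        is_extended_id=False,
    )
    bus.send(request)

    # Pass/fail check: a positive response echoes the SID + 0x40 (= 0x62).
    reply = bus.recv(timeout=1.0)
    if reply is not None and reply.arbitration_id == 0x7E8 and reply.data[1] == 0x62:
        print("PASS: ECU answered ReadDataByIdentifier")
    else:
        print("FAIL: no response or a negative response:", reply)
    bus.shutdown()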

TestMate can reliably simulate inputs to the ECU, ranging from the simplest ignition key switch to the complex crankshaft-position waveform that is a critical input for many engine-control functions. It also measures the ECU's outputs, from discrete voltages and timed pulses to PWM waveforms driving actuators, and evaluates them against defined limits for a pass or fail. In addition to the functional tests, power-supply and other electrical (negative) tests can be performed to check how well the ECU hardware responds to abnormal conditions, like reversed power-supply polarity, under-voltage, etc. The I/O instrumentation is completely custom-designed as per the interface specification of the ECU.
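
To make the crankshaft example concrete, here is a sketch of synthesizing the tooth-wheel waveform such a simulation has to generate. The common 60-2 pattern (58 teeth plus a two-tooth gap) and the sample rate are assumptions; actual patterns are ECU-specific.

    # A minimal sketch: one revolution of a 60-2 crankshaft tooth-wheel
    # waveform. The 60-2 pattern and sample rate are assumptions.
    import numpy as np

    def crank_waveform(rpm: float, sample_rate: int = 1_000_000) -> np.ndarray:
        """Samples for one revolution: 58 teeth, then a 2-tooth gap."""
        rev_time = 60.0 / rpm                  # seconds per revolution
        n = int(rev_time * sample_rate)        # samples per revolution
        t = np.arange(n) / n                   # fraction of revolution, 0..1
        tooth = np.floor(t * 60).astype(int)   # which of the 60 slots
        high = ((t * 60) % 1.0) < 0.5          # high for half of each slot
        high &= tooth < 58                     # slots 58, 59 form the gap
        return high.astype(float)

    wave = crank_waveform(rpm=3000)
    print(len(wave), "samples per revolution at 3000 rpm")  # 20000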

The HMI software supports multiple levels of users, with different permissions defined for each login level, such as running tests, modifying test-parameter limits, changing the sequence of tests, editing error-message text, test calibration and troubleshooting. All tests are logged for later review by supervisors or managers. For failed tests, clear troubleshooting assistance is displayed and logged, showing which specific test failed and how exactly, so that the defective unit can be repaired. An ECU may come in for tests twice: once after bare assembly without the enclosure, and again after the enclosure is fitted.

Finally, it all comes together in the hands of the operator, who, after loading an ECU, has less than a minute to run the automated tests and learn whether it is a pass or a fail. A pass is always good news: the ECU gets a bar-coded label stuck on it and moves forward to the next stage. A fail, however, is hardly the end of the road, because to keep rejection costs low failed units need to be repaired, with the test station providing precise troubleshooting information to get them repaired quickly. In this context, a few pertinent questions for the relevant Tier-1 manufacturers and OEMs are:

1) How much of ECU test station design could be generic, versus how much of it should essentially remain ECU design specific?

2) Is it justified, in terms of engineering effort, cost or timelines, for their business to completely reinvent a unique solution to this challenge, when large parts of it are common across ECUs, and a generic test platform such as TestMate has not only abstracted that commonality but also been customized for specific ECUs and proven on the factory floor?

At Deep Thought Systems, we clearly understand the generic and reusable parts of the TestMate platform, which helps accelerate the design of EoL test stations. A high-performance hardware platform, powered by a real-time operating system and sound embedded-firmware design practices, ensures fast test execution and that all the timing considerations in vehicle communication protocols are taken care of. Thanks to our expertise in digital and mixed-signal hardware design, we are able to quickly customize the other parts of the test station, like the I/O interfaces, ECU fixture and HMI software, as per the customer's specification and needs, with total assurance of the customer's Intellectual Property.

Another closely related production area where we can work with customers to provide a quick solution is the design and supply of ECU flashing units, which operators use to flash the firmware into ECUs after assembly. The design of the ECU flashing unit is greatly accelerated by our generic ECU-flashing framework, where the only input required from the customer is the seed-generation algorithm for unlocking the ECU; this can be imported into our firmware as a library (in binary form) to protect the customer's (or principal's) confidentiality. In conclusion, our expertise and track record of supplying and installing EoL test stations on factory floors, and of supporting production personnel in the usage and fine-tuning of these systems, will ensure efficient and trouble-free operation for the customer over the entire production lifecycle.

Link to LinkedIn article