Reinventing manufacturing tests for automotive electronics

Ram Mohan Ramakrishnan

Automotive electronics has been steadily increasing its share of total vehicle cost worldwide. Consequently, it faces some of the same challenges that were faced earlier (and largely solved by automated testing) in other areas of automobile mass manufacturing – fabrication, mechanical assembly, electrical components and hydraulic systems.

A typical example is the Electronic Control Unit (ECU) that has become the heart (or brain!) of the modern automobile. An ECU receives inputs from various sensors and sends outputs to multiple actuators, in addition to communicating with other ECUs of related subsystems in the vehicle. Some ECUs implement performance-critical functions such as fuel injection and ignition timing, whereas others control safety-critical systems such as the Anti-lock Braking System (ABS) and Electronic Stability Control (ESC). Therefore, an automated manufacturing test station for the ECU is significantly complex in design, involving several pieces of instrumentation, simulation of sensors and multiple automotive communication protocols.

Let’s see if some real-world figures can lend a quantitative perspective to this mass-manufacturing challenge. Take the case of a mid-size automotive OEM that sells over 100,000 vehicles annually, with production in two plants of identical capacity. Taking engine control alone, that means at least an equal number of ECUs supplied annually by its Tier-1 ECU manufacturer, who needs to turn out around 8 ECUs an hour, assuming full three-shift operations. Assuming 4 parallel assembly lines, that leaves less than 30 minutes to manufacture an ECU! The time practically available for testing ECUs at the End-of-Line (EoL) is even shorter. Assuming 2 parallel test stations, the operator typically has less than a minute to test an ECU: to load it on the test station, execute the automated tests, learn whether it passed or failed, print a bar code and affix it to a passed piece (or drop a failed piece into the reject bin), unload the ECU and be ready to load the next one! Added to this is the complexity of different versions of the same ECU being in production simultaneously. Since batches with different ECU versions come to the same test station, the operator needs to reconfigure the station for a different set of tests each time, and the reconfiguration must typically be completed within 4 to 5 minutes before the next ECU type is loaded.

Now let’s review how this challenge applies (or doesn’t!) to different segments of the automotive industry. It’s a no-brainer that any Tier-1 Manufacturer (or OEM) in the business would have all of this covered on their factory floors already; if not, they would hardly be selling! However, this steady state no longer holds for a newly introduced ECU design, be it part of a new brand of vehicle the OEM plans to introduce to the market, or an additional feature, like adaptive cruise control, being introduced for a new model variant. Does the Tier-1 Manufacturer have the required engineering bandwidth to design the test station themselves? In the case of technology transfer of an ECU design from a global principal, does the Tier-1 Manufacturer have the in-house expertise in the early stages to develop a test station on time, before pilot production starts? And in the case of in-house development of the ECU, does the Tier-1 Manufacturer really have the resources, bandwidth and simply the time to get the test station ready before the ECU design passes all type tests and hits production?

Alternatively, do existing test station vendors for other components, like starter motors, tiltable mirror assemblies or instrument clusters, have the necessary expertise to design such a complex test station? What about ECUs for Electric Vehicles (and hybrids), which are predicted to transform the entire motoring landscape forever? Not to forget the two-wheeler (and three-wheeler) segments which, under the rapidly closing window of emission-control regulations (Bharat Stage VI in India, a few years behind Euro VI, currently has a 2020 deadline!), will be forced to switch to ECU-based fuel injection within a few years in order to sell legally in the market.

Here’s where a little foresight into accelerating the design of manufacturing test solutions could benefit the relevant stakeholders. At Deep Thought Systems, we have designed and developed a reliable, modular and generic platform called TestMate for building manufacturing test stations specifically for ECUs. We have successfully customized TestMate to supply EoL test stations for ECUs to Indian Tier-1 Manufacturers and OEMs with very short turnaround.

The Human Machine Interface (HMI) of TestMate, the main part that the operator sees and operates continuously, is a fairly generic requirement: a rugged enclosure, controls and indications built for long years of reliable performance on an assembly floor. They say, and we have witnessed it ourselves, that routine use of test stations by factory operators constitutes a really harsh environment! The mounting, orientation, peripherals for viewing and printing, display properties etc. are all ergonomically designed for continuous usage by an operator over an 8-hour shift (or longer!). We have successfully installed these test stations on factory floors where they have been in continuous use for years, with zero support calls.

We work with the customer on the ECU connector type to design a custom cable harness and test fixture that includes the mating connector, with a locking arrangement. The fixture design ensures proper contact between the pins of the ECU connector and the mating connector over months of continuous loading and unloading. We equip the customer with a spare cable harness to handle the unlikely event of damage due to exceedingly rough or careless usage by operators; it can be easily replaced onsite without having to depend on a service engineer.

Built on the same principles as our other automotive offerings for vehicle diagnostics, testing and simulation, TestMate is capable of communicating with various ECU designs over multiple automotive communication protocols like CAN, K-Line and LIN, and messaging standards like J1979, J1939, UDS, KWP2000 etc. We work with the customer to customize it for the ECU's communication specification. Apart from testing continuous engine parameters, the Diagnostic Trouble Codes defined for the ECU can also be tested. Because it contains many building blocks of an actual ECU, in many communication tests the test station appears to the unit under test as a peer ECU (sometimes several) of the related sub-system(s)!
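
As a flavour of what such message-level interaction looks like, here is a minimal sketch of requesting stored Diagnostic Trouble Codes from an ECU over CAN using UDS service 0x19 (ReadDTCInformation). It uses the python-can library; the SocketCAN channel and the request/response CAN IDs are placeholders, and a real test station would add ISO-TP segmentation, timing checks and full response validation on top of this.

import can

# Hypothetical CAN IDs for the unit under test; real values come from
# the customer's communication specification.
REQUEST_ID = 0x7E0   # tester -> ECU
RESPONSE_ID = 0x7E8  # ECU -> tester

bus = can.interface.Bus(channel="can0", bustype="socketcan", bitrate=500000)

# Single-frame UDS request: ReadDTCInformation (0x19),
# sub-function reportDTCByStatusMask (0x02), status mask 0xFF.
# The first byte (0x03) is the ISO-TP single-frame length.
request = can.Message(
    arbitration_id=REQUEST_ID,
    data=[0x03, 0x19, 0x02, 0xFF, 0x00, 0x00, 0x00, 0x00],
    is_extended_id=False,
)
bus.send(request)

# Wait briefly for the ECU's reply and check for a positive response (0x59).
response = bus.recv(timeout=1.0)
if response is not None and response.arbitration_id == RESPONSE_ID:
    positive = len(response.data) > 1 and response.data[1] == 0x59
    print("DTC response:", response.data.hex(), "PASS" if positive else "FAIL")
else:
    print("No response from ECU within timeout")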

TestMate can reliably simulate inputs to the ECU, ranging from the simplest ignition key switch to the complex crankshaft position waveform that is a critical input for many engine control functions. It also measures the ECU's outputs, ranging from discrete voltages and timed pulses to PWM waveforms driving actuators, and evaluates them against defined limits for pass or fail. In addition to functional tests, power supply and other electrical (negative) tests can be performed to check how well the ECU hardware responds to abnormal conditions like reversed power-supply polarity, under-voltage etc. The I/O instrumentation is completely custom-designed as per the interface specification of the ECU.
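
The pass/fail decision for such measurements usually boils down to comparing each measured value against per-parameter limits taken from the ECU test specification. The sketch below illustrates the idea; the parameter names and limit values are purely illustrative and not from any particular ECU.

# Illustrative limit table; real limits come from the ECU test specification.
LIMITS = {
    "idle_actuator_pwm_duty_pct": (22.0, 28.0),
    "main_relay_output_voltage_v": (11.5, 14.5),
    "injector_pulse_width_ms": (1.8, 2.6),
}

def evaluate(measurements: dict) -> list:
    """Compare each measured value against its (low, high) limit pair."""
    results = []
    for name, value in measurements.items():
        low, high = LIMITS[name]
        verdict = "PASS" if low <= value <= high else "FAIL"
        results.append((name, value, low, high, verdict))
    return results

for name, value, low, high, verdict in evaluate({
    "idle_actuator_pwm_duty_pct": 25.3,
    "main_relay_output_voltage_v": 13.8,
    "injector_pulse_width_ms": 3.1,   # out of range -> FAIL
}):
    print(f"{name}: {value} (limits {low}-{high}) -> {verdict}")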

The HMI software supports multiple levels of users, with distinct permissions defined for each login level, such as running tests, modifying test parameter limits, changing the sequence of tests, editing error message text, test calibration and troubleshooting. All tests are logged for later review by supervisors or managers. For failed tests, clear troubleshooting assistance is displayed and logged, indicating which specific test failed and how exactly, so that the defective unit can be repaired. An ECU may come in twice for tests: once after bare assembly without the enclosure, and once again after the enclosure is fitted.

Finally it all comes together in the hands of the operator, who after loading an ECU has less than a minute to run the automated tests and know whether it is a pass or a fail. A pass is always good news: the ECU gets a bar-coded label stuck on it and moves forward to the next stage. A fail, however, is hardly the end of the road, because to keep rejection costs low failed units need to be repaired, with the test station providing precise troubleshooting information to get them repaired quickly. In this context, a few pertinent questions for relevant Tier-1 Manufacturers and OEMs are:

1) How much of ECU test station design could be generic, versus how much of it should essentially remain ECU design specific?

2) Can their business justify completely reinventing a unique solution, in terms of engineering effort, cost and timelines, when large parts of the challenge are common across ECUs – a commonality that a generic test platform such as TestMate has not only abstracted, but also customized for specific ECUs and proven on the factory floor?

At Deep Thought Systems, we clearly understand the generic and reusable parts of the TestMate platform, which help accelerate the design of EoL test stations. A high-performance hardware platform, powered by a real-time operating system and sound embedded firmware design practices, ensures fast test execution and that all timing considerations in vehicle communication protocols are taken care of. Thanks to our expertise in digital and mixed-signal hardware design, we are able to quickly customize the other parts of the test station, like the I/O interfaces, ECU fixture and HMI software, as per the customer's specification and needs, with full protection of the customer's Intellectual Property.

Another closely related area for production where we could work with customers to provide a quick solution is the design and supply of ECU Flashing units. Operators use the flashing units to flash the firmware into ECUs after assembly. The design of the ECU flashing unit is greatly accelerated by our generic ECU flashing framework, where the only input required from the customer is the seed generation algorithm for unlocking the ECU, which could be imported into our firmware as a library (in binary form) to protect the customer’s (or principal’s) confidentiality. In conclusion, our expertise and track record of supplying and installing EoL test stations on factory floors and supporting production personnel in the usage and fine-tuning of these systems will ensure an efficient and trouble-free operation for the customer for the entire production lifecycle.

Link to Linkedin article

Crowdfunding - a boon or a burden to Tech Startups

These days we see quite a few technology companies going the crowdfunding route (Indiegogo/Kickstarter) to get to market sooner, rather than waiting to raise money the traditional way to build the product. It appears a beautiful idea if co-founders do not want to give out equity but still want to raise money to get to market. But I personally feel that this is a double-edged sword, and entrepreneurs have to be very careful with their choices, as it may end up hurting more than helping in the long run.

What I have noticed is that companies look to crowdfunding mostly for one of the following reasons:

  1. Raise money to help them accelerate the engineering cycle and reach the market faster with confirmed orders – The challenge here is that if you are not far enough into your engineering/product cycle, with most problems solved, the money raised through these campaigns is in most cases not enough to get to production and delivery.
  2. Create a sales and marketing buzz, which later helps them get leverage with retailers and opens up many channels – This is a fantastic model, because getting into some of the traditional channels to sell a product is not easy. These days Best Buy, Amazon etc. have a separate focus on successful products from these campaigns, so this will enable startups to get onto shelves faster if they are successful. It also gives a better chance of getting picked up by distributors.
  3. Show the demand the product has in the market to convince traditional VCs to put money into the company – This is a good idea only if you are convinced that your product is going to be a runaway success; otherwise the chances are that it will do more harm. Anything less than a runaway success is going to raise more questions and challenges than help when startups try to raise money.

I have read a few statistics, and based on our experience, most projects do not make it out on time. This ends up damaging credibility with the very customer base that supported the product. And if the product then turns out to be below par after a long wait, there is a very unhappy customer to deal with as well.

What I noticed is that many of these companies fail to deliver on time because:

  1. These companies are often trying to solve really challenging engineering problems which need far more money than they can get from a Kickstarter/Indiegogo campaign, so they start falling behind on development goals and delivery deadlines.
  2. They have the right idea and concept but limited experience in delivering products end to end; when they start dealing with it, they realize the unknowns far outnumber the knowns, and they start slipping.
  3. These companies are fighting battles on many fronts, and crowdfunding is just one of the avenues. They tend to get carried away in their engineering cycle when they see greener pastures, which ends up adding delays.

My personal view has always been that crowdfunding is a good platform if you are done with 80% of your engineering. As I mentioned earlier, this is because the money you raise from pre-selling the product is usually only enough to pay for your production needs. However, if you are planning to do your core engineering and deliver the product on this money, then the likelihood of failure and delays is very high. The only exception I can think of is if the company has a reliable partner or team in a country like India or China where the bulk of the engineering is being done; then this money does tend to help even if they are behind in their product lifecycle.

I think backers need to check with the company, before putting money in, how much of the engineering is already done, and ask to see working prototypes, actual industrial design mockups, software demos etc. It may also help to ask how the money collected is going to be spent, because it gives you an idea of the readiness of the product you are backing.

Link to article on Linkedin

Reality check of PaaS Solutions for distributed systems in IOT and Big Data applications

Manoj K Nair

‘Platform as a Service’ (PaaS) in the distributed systems arena is gaining wide adoption nowadays as the cloud gains more customer confidence. The latest IDC forecast states, “By 2020, IDC forecasts that public cloud spending will reach $203.4 billion worldwide”. They also predict fast growth in the PaaS segment: over the next five years, a Compound Annual Growth Rate (CAGR) of 32.2% is predicted, which is very promising. PaaS solutions for distributed systems have captured the serious attention of big players like Amazon (AWS EMR), Google (Google Cloud Platform), Microsoft (HDInsight) and Databricks (Unified Analytics Platform), and the count is growing by the day. The same is the case for IoT, with platforms from Amazon (AWS IoT), IBM (Bluemix) and Cisco (Cloud Connect) being the major ones in the growing list.

The explosive growth of PaaS Solutions is boosted by the complexity of DevOps and administration nightmares encountered in distributed systems; we still remember the Apache Hadoop version upgrades that always led to sleepless nights!

PaaS solutions absorb a lot of the complexity of distributed systems, which allows us to:

1.     Evaluate platforms straight away. You no longer need to wait for cost approvals or deployment completion, as in the case of on-premises or IaaS deployments.

2.     Make IoT enablement as fast as plugging an agent into your device.

3.     Turn version upgrades of open-source distributed platforms like Apache Spark, Apache Hadoop, Apache Kaa etc. into mere configuration changes.

4.     Enjoy the additional features, like notebook integration, REST API service support etc., provided by the vendors.

All fine! But are there any hidden factors in PaaS Solutions that need to be considered? From my experience of the past few years, it is a big YES! Especially for IoT and Big Data applications.

A ready-made dress may still need alterations!!!

PaaS solutions allow us to remain focused on the application use case by simplifying the spinning up of a platform to a few clicks. Moving to another platform configuration is as easy as changing a few parameters and restarting. Major configurations and optimizations inside the platform are completely transparent to the user, which is an advantage most of the time.

However, complete transparency of the system is not always helpful. You may need to play around with platform configurations to tune your application on top of it – scenarios like trying a few customized or new plugins on the platform that can give extra muscle to your application. As open-source incubations grow rapidly and a lot of new, innovative tools for distributed systems are released every month, you need the flexibility to use them on the platform. Debugging or performance benchmarking of an application running over a totally transparent underlying platform is not good news for system designers. So when the platform is said to be transparent, we should also check the level of control we have over it.

For instance, while working with a major US healthcare player to collect their large data streams for predictive and descriptive analytics, we were using Kafka for data ingestion and an Apache Spark Streaming PaaS for data landing and processing. The initial evaluation and selection of the platform went well against standard architectural considerations, and we were happy with the platform choice. Once development of the application's functionality was over and alpha tests were completed, we started looking to make a few optimizations and tuning passes as part of refactoring, for which access to the platform's cluster nodes became essential. We requested access to the cluster nodes from the platform vendor, but their reply was disappointing. Their customer support said, “It's completely transparent to user and we do not recommend any access or modification of the platform configurations”. We were stuck!!!
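
For context, the data path itself is straightforward to express; a minimal sketch of reading a Kafka topic with Spark Structured Streaming (PySpark) might look like the following, with broker addresses, topic and output paths as placeholders. The pain point was not this code but the inability to reach the cluster nodes underneath it.

from pyspark.sql import SparkSession

# Placeholder broker and topic names; real values depend on the deployment.
# Note: the Kafka source also requires the spark-sql-kafka connector package.
KAFKA_BROKERS = "broker1:9092,broker2:9092"
TOPIC = "patient-telemetry"

spark = SparkSession.builder.appName("landing-stream").getOrCreate()

# Read the raw Kafka stream; key/value arrive as binary columns.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", KAFKA_BROKERS)
    .option("subscribe", TOPIC)
    .load()
)

# Land the message payloads for downstream processing.
query = (
    raw.selectExpr("CAST(value AS STRING) AS payload")
    .writeStream
    .format("parquet")
    .option("path", "/data/landing/telemetry")
    .option("checkpointLocation", "/data/checkpoints/telemetry")
    .start()
)
query.awaitTermination()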

In another case, a Smart Battery IoT project, we were pushing status info from the smart device to an IoT PaaS platform for self-tuning. The data was being stored internally in the PaaS system. Things were working great, and we were able to view the data using their custom tools and a limited REST-API-based query interface. However, our strategy for the project was to build a raw data lake in AWS S3 for future analytics. To our surprise, we found that there was no option for data export! Since this is a very basic yet important feature, we contacted the IoT platform's technical support. Their response was, “Yeah, it is a simple feature, but it is not in our ‘Business priority list’ of features. So, it may take us some more time to do it”. How much more time was unknown! We were stuck again, and had to review the raw data lake strategy.
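
Had a data-export feature existed, or even a reasonably complete query API, the workaround would have been mundane: page through the platform's REST interface and land the raw records in S3. A sketch of that pattern is below; the endpoint, auth header and paging parameters are entirely hypothetical, since every IoT platform exposes these differently.

import json
import requests
import boto3

# Hypothetical platform endpoint and credentials; the real API differs per vendor.
EXPORT_URL = "https://iot-platform.example.com/api/v1/devices/battery-42/records"
API_TOKEN = "..."          # deliberately left as a placeholder
BUCKET = "raw-data-lake"

s3 = boto3.client("s3")
page = 0
while True:
    resp = requests.get(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"page": page, "page_size": 1000},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json().get("records", [])
    if not records:
        break
    # One object per page keeps the lake append-only and easy to reprocess.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"smart-battery/raw/page-{page:06d}.json",
        Body=json.dumps(records).encode("utf-8"),
    )
    page += 1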

In both cases our development plans were seriously impacted, and we were forced to skip or postpone major use cases, or start looking for an alternative platform to migrate to, even that late into the project. Let's look closely at the responses from technical support in these two cases for a few interesting facts.

Case 1: “It’s completely transparent to user and we do not recommend any access or modification of the platform configurations”

Transparency of platform complexities is definitely an important motivation to opt for PaaS, as it gives a quick, efficient and cost-effective way of building a distributed system. But it is important to have insight into the platform internals and, in a few cases, some control as well. As system designers, we don't like to swallow things as they are!!! After all, “platform limitations” is definitely not the story we want to tell our customers! In this specific case, we were looking to try out external monitoring tools that needed a few agents to be installed on the cluster nodes. Eventually, supporting a third-party BI tool took us roughly two months of coordination between the technical teams at the PaaS vendor and the BI vendor. This is simply not acceptable to the customer in terms of time or budget.

Case 2: “Yeah, it is a simple feature, but it is not in our ‘Business priority list’ of features. So, it may take us some more time to do it”

This is not just disappointing, it is alarming!!! Technical interoperability for the customer's data should not be restricted for the sake of business priority. Unfortunately, the so-called “business priority” often loses sight of retaining customers, reminding one of a “my way or the highway” strategy! No customer wants their data stuck in a specific platform. We need the flexibility to move it across multiple platforms, as business data has latent insights that could be extracted through different systems today or in the future.

To sum up, apart from the traditional architectural considerations in choosing between on-premises, IaaS, PaaS or SaaS, we should be vigilant about these hidden factors during the selection of distributed platforms, especially for IoT and Big Data applications where large amounts of data are generated. The hidden factors are tricky in the sense that they may not be visible at first look.

Some of the architectural considerations that help mitigate these hidden factors are given below.

1.     Create a proper migration plan – This may not be a short-term goal, but it becomes very important because as the data grows you may end up in a world of restrictions.

2.     Make sure you have enough control over the platform internals – Although you want to avoid administration overheads as much as possible, you still need good control of the platform for development, refactoring and analysis. Distributed system usage without platform control is painful in the long run. Telnet or SSH access to the cluster nodes, the privilege to install custom tools and configuration-level flexibility are a few items to verify in general.

3.     Third-party integration flexibility – Most of the time, the system we develop will be part of a pipeline and may need integration with customized systems like monitoring tools, custom logging methods etc., which makes the integration hooks critical.

4.     Platform vendor’s willingness to provide functionality on demand – Platform vendors should be able to handle custom functionality requests on demand; we cannot wait indefinitely for the platform to support them in due course. Make sure that their quick and efficient response is covered in your Service Level Agreement (SLA).

Distributed Platform as a Service is definitely growing rapidly, and customers will continue to invest heavily for the combined advantages of reduced capex, reduced time to market and reduced maintenance/administrative complexity. But I hope the quality and competitiveness of PaaS solutions also matures fast for the benefit of investing customers, like our IoT and Big Data customers at Sequoia AT. Let's hope a day will come soon when platform vendors start advertising their respective platforms by throwing down an open challenge: “Hey, try out our PaaS solution and if you don't like it, migrate to any other PaaS solution in 24 hours or 1 week!!!”

Link to Article on Linkedin

Industrial IOT (IIOT) – Hype or Reality

We have been hearing a lot about IIoT being the real revolution, Industry 4.0, the next industrial revolution. However, it is not a eureka moment, and it is not something set in the future; it has been an evolution over the past many years. Factories were already going digital with the industrial internet; with Industrial IoT the pace has just picked up. The tools, technology and ease of data access have accelerated this adoption.

Large industrial houses like GE, Siemens or ABB were always IIoT companies, although they were not known for it. They had the ability to monitor and manage the health of expensive machinery, since it was important for them to prevent downtime for customers. It also enabled them to learn in real time how a machine was being used, so they could improve their engineering. What has changed is that this capability can now be offered to, and implemented by, any industrial plant of any size or revenue.

IIoT is a vast area which includes everything from sensors to big data and AI. Just as ERP changed factories once, IIoT will change them further by picking up on problems sooner, thereby saving time and money. Imagine a small shop manufacturing pumps. It can now be connected in real time to its sales offices, so it knows which pumps are selling each day and can adjust production to what is needed most, pulling in inventory only when needed based on this data. Predictive maintenance systems let it know if there are any flaws in the manufacturing process, and once the pumps are installed at customer premises it can collect live data and alert the customer to any problem it foresees.

There are many companies operating in this space, trying to address different parts of the puzzle. At SequoiaAT we have been fortunate to work with two companies in this space: @opabydesign works on building condition monitoring and predictive maintenance, and @Deepthoughts works on energy monitoring solutions for factories to ensure that machines run efficiently and at optimum energy consumption.

We may not see or experience much change in daily life unless working in a factory is our daily job. The reliance on a supervisor's expertise to identify and fix a problem is going away, as decisions will be made on actual data rather than on the experience of a floor supervisor. Today, if you are a small shop and a machine starts making noise, your workers most likely depend on more experienced people to troubleshoot and say what the problem could be; with these cheap but effective smart devices, the data itself can point to the problem.

Link to Article on Linkedin

Bringing home SAE J1939 Heavy-Duty Protocol Simulation

The J1939 standard for heavy-duty vehicles drafted by the SAE (Society of Automotive Engineers) in the mid-90s was driven originally by the “ECU trend” with the main objective of controlling exhaust gas emissions under increasingly tightening US and European regulations. Having gained wide acceptance ever since among diesel engine manufacturers, the SAE J1939 heavy-duty protocol has presently reached the stature of the de-facto standard for Truck and Bus manufacturers worldwide, for communication between various vehicle components and for diagnostics.

J1939 is a set of standards that includes a higher-layer messaging protocol working over the CAN (Controller Area Network) protocol at the lower layers. The communication model supports both peer-to-peer and broadcast communication. The J1939 message format uses the Parameter Group Number (PGN) to label a group of related parameters, each of which is identified by a Suspect Parameter Number (SPN). Continuously varying vehicle parameters (like Engine RPM) are defined along with their valid range, offset, scaling etc. Discrete (ON/OFF) parameters (like Brake Switch ON) are defined separately, and commands to enable or disable specific vehicle functions (like Engine Fuel Actuator Control) are also defined.
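
To make the PGN/SPN idea concrete, here is a small sketch (using the python-can library; the interface name and source address are placeholders) that packs an Engine Speed value into the EEC1 parameter group (PGN 61444), where engine speed is commonly carried in bytes 4-5 at a resolution of 0.125 rpm per bit. Refer to the J1939-71 definitions for the authoritative scaling and byte positions.

import can

def eec1_message(engine_rpm: float, source_address: int = 0x00) -> can.Message:
    """Pack Engine Speed (SPN 190) into the EEC1 parameter group (PGN 61444)."""
    pgn = 0xF004                      # EEC1
    priority = 3
    can_id = (priority << 26) | (pgn << 8) | source_address

    data = bytearray([0xFF] * 8)      # 0xFF = "not available" for unused fields
    raw = int(engine_rpm / 0.125)     # SPN 190: 0.125 rpm per bit
    data[3] = raw & 0xFF              # byte 4 (low byte)
    data[4] = (raw >> 8) & 0xFF       # byte 5 (high byte)

    return can.Message(arbitration_id=can_id, data=bytes(data), is_extended_id=True)

bus = can.interface.Bus(channel="can0", bustype="socketcan", bitrate=250000)
bus.send(eec1_message(1500.0))        # broadcast 1500 rpm once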

Time-based updates are broadcast at repetition intervals as short as 20 milliseconds (or less), and for some messages the repetition rate increases further at higher engine RPMs. Some periodic messages contain information that is of particular interest only when a specific state change occurs, and there is a defined range of repetition rates for these messages. Diagnostic messages (DMs) from various sub-systems (like emission control) are defined as per the Diagnostics Application Layer of the J1939 standard, which includes services like periodic broadcast of active DTCs (Diagnostic Trouble Codes), reading and clearing DTCs etc. Manufacturer-specific parameter groups are supported, allowing OEMs to define proprietary messages in addition to the standard ones.

ECU design engineers of vehicle sub-systems at automotive OEMs, Tier-1 suppliers and R&D service companies routinely use J1939 simulators for their product development, test and validation activities. In the early stages of development, a simulator comes in handy for providing signals from other vehicle components exactly the way they would appear in the real vehicle environment, without the need for an actual vehicle in the lab. For instance, a design engineer working on an ECU development program for transmission control would need signals from the engine control system, braking system etc. in order to validate the design's functionality and performance. The ECU gets all these signals from the simulator exactly as it would receive them in a vehicle, with the physical connection provided by two CAN wires (CAN-HI and CAN-LO) and Ground (GND), taken out from the simulator's 16-pin OBD (or 9-pin D-Sub) connector through a custom wire harness to the mating connector of the ECU.

The J1939 simulator gives the design engineer the ability to generate and vary individual parameters in order to check the response of the system under design or test. The required variations can be controlled manually using (rotary knob) potentiometers for continuously varying parameters. Some simulators automate the variation according to a pre-defined curve; a linear ramp that sweeps the full range (0-100%) of the given parameter in increasing steps of 1% is typical. Advanced simulators based on engine modelling data provide the ability to vary multiple parameters simultaneously, in a specific relationship to each other, for better real-world simulation. A cost-effective alternative is to record multiple parameters of interest from the actual vehicle under standard test or driving conditions for the required duration (also known as the drive signature) and play back the captured signature in the lab on the same time base, although with somewhat lower timing accuracy. Add the simulation of actual vehicle hardware, like sensors and actuators, to create a full Hardware-in-the-Loop (HIL) setup, and the simulation picture becomes complete.
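
A linear ramp sweep of the kind described is easy to sketch in code. The loop below steps engine speed from 0 to 100% of its representable range in 1% increments while broadcasting the EEC1 parameter group periodically; the channel, broadcast period, hold time and fixed source address are illustrative assumptions, not taken from any particular simulator.

import time
import can

def eec1_message(engine_rpm: float) -> can.Message:
    # Same packing as the EEC1 sketch above: SPN 190, 0.125 rpm/bit in bytes 4-5.
    raw = int(engine_rpm / 0.125)
    data = bytearray([0xFF] * 8)
    data[3], data[4] = raw & 0xFF, (raw >> 8) & 0xFF
    return can.Message(arbitration_id=0x0CF00400, data=bytes(data), is_extended_id=True)

bus = can.interface.Bus(channel="can0", bustype="socketcan", bitrate=250000)

MAX_RPM = 8031.875      # upper end of SPN 190's representable range
PERIOD_S = 0.020        # nominal 20 ms broadcast period

# Sweep 0..100% of the engine-speed range in 1% steps, holding each step for 0.5 s.
for pct in range(0, 101):
    rpm = MAX_RPM * pct / 100.0
    deadline = time.monotonic() + 0.5
    while time.monotonic() < deadline:
        bus.send(eec1_message(rpm))
        time.sleep(PERIOD_S)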

Indian automotive R&D groups have traditionally banked on imported tools for J1939 simulation. Originating from the USA, Canada, Germany etc., many of them come with pricey licenses while offering just an elementary 5-signal manual simulation. A few sophisticated ones with automatic ramp sweeps etc. are so pricey that even the Indian R&D subsidiaries of multi-national OEMs have to contend with time-sharing the same simulator across multiple engineers and teams. It is in this context that a strong need is being felt for a high-quality, cost-effective J1939 simulator that is indigenously designed and manufactured, one that could provide many Indian customers the much-needed scalability for their R&D activities and reduce their dependence on imports.

Awareness of the availability of an indigenous product is the starting point; however, strict adherence to the standard, including its very strict timing considerations, is a hard requirement in order to create a positive inclination among automotive customers, who select and use only “proven technology”. Benchmarking data against competing products could help customers get quantitative insights, and pilot trials could help them familiarize themselves with the indigenous product and evaluate it against their experience with imported tools.

We at Deep Thought Systems design, manufacture and supply J1939 simulators to Indian automotive customers, in addition to other offerings for CAN/J1939 logging, test/diagnostics, J1939-based displays and ECU manufacturing test automation. In our endeavours to bring the above-mentioned advantages to the Indian automotive R&D sector, we have found that a customer's need is often a highly customized simulator for their specific application. Thanks to our expertise in automotive protocols like CAN, OBD-II and J1939, and being fully in control of the hardware design, component sourcing and manufacturing as well as the embedded firmware and application development, we find ourselves well placed to deliver on these custom needs.

Post Scriptum:

A later industry development has been that all the major European heavy-duty OEMs came together in 2000 to co-develop the Fleet Management Standard (FMS), which is based on J1939 and incidentally opened up possibilities for manufacturer-agnostic telematics applications. The J1939 simulator, combined with suitable GPS simulation of the required level of performance, offers telematics product designers a proven means to quickly test and validate their designs well before going for in-vehicle tests.

 Link to article on Linkedin

Alexa- What excites me to explore this latest from Jeff Bezos’s research hub

Anu Pauly

Nowadays, voice has become an easier way to interact than other mediums of communication. Since 1994, when Jeff Bezos founded Amazon, they have kept inventing, from STEM to Prime to Web Services to Kindle, and now the latest additions of Echo, Echo Dot and Echo Show. The Echo series connects to the voice-controlled intelligent personal assistant service Alexa, one of the best to date. Alexa is named after the ancient library of Alexandria. Using Alexa you can call out your wishes and see them fulfilled, at least the simple ones: for example, checking the weather of any place, playing music, doing a search and so on.

Alexa-enabled devices available in the market are the Amazon Echo, Echo Dot, Echo Show and a newly announced addition, the Echo Look. You can also try Alexa in the browser at https://echosim.io by logging in with your Amazon account.

The Alexa Voice Service is currently only available for US, German and UK customers with an Amazon account.

The architecture of Alexa works like this: when the user asks something like “Alexa, tell me the weather in San Francisco”, the audio request goes to the Alexa Voice Service (AVS). It converts the speech to text, picks out the keywords (“weather” and “San Francisco”), processes the request and returns the answer as voice to the user. Alexa skills have two parts: the configuration (the data entered in the Developer Portal) and the hosted service that responds to user requests. The hosted service can be an AWS Lambda function or an internet-accessible HTTPS endpoint with a trusted certificate. You can build skills using the Alexa Skills Kit (ASK). The skill types supported are Custom Skills, the Flash Briefing Skill and the Smart Home Skill.

As for the flow through the Alexa Skills Kit (ASK): when the user speaks a phrase beginning with “Alexa” and the Echo hears it, the audio is sent to AVS for processing. A skill request is then sent to your server (for example a Lambda function) for business-logic processing. The server responds with a JSON payload that includes the text to speak, and finally AVS sends that text back to the device as voice output.
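
To get a feel for what that JSON exchange looks like, here is a minimal, hand-rolled sketch of a Lambda handler for a custom skill in Python. The intent name is a placeholder, and in practice you would normally use the ASK SDK rather than building the response dictionary by hand.

def lambda_handler(event, context):
    """Minimal Alexa custom-skill handler: answer one hypothetical intent."""
    request = event.get("request", {})

    if request.get("type") == "LaunchRequest":
        speech = "Welcome to the quiz. Ask me a question."
    elif request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        if intent == "QuizQuestionIntent":        # placeholder intent name
            speech = "Here is your question: which library is Alexa named after?"
        else:
            speech = "Sorry, I did not understand that."
    else:
        speech = "Goodbye."

    # The JSON payload AVS expects back: the text to speak and a session flag.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }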

The specialties of these devices are the far-field microphones and the fact that no manual activation is needed: simply say a wake word like “Alexa” (the default), “Echo” or “Computer”, and it can respond to voice commands from almost anywhere within earshot. Microsoft's Cortana, Google Assistant and Apple's Siri provide similar services. However, once you get used to Alexa, it feels much more natural and responsive than speaking to a phone-based voice assistant. Voice control frees you from being constantly tethered to your smartphone.

Manufacturers of automobiles, kitchen appliances, door locks, sprinklers, garage-door openers and many other newly connected products are working to bring Alexa or a similar voice-driven service to their devices.

Alexa is particularly useful for the smart home because it allows you to control your connected devices without having to take out your phone and launch an app.

Despite the success and growing interest in Alexa products and services, Amazon still faces scrutiny over the potential privacy implications of having an always-on, always-listening device in peoples’ homes, cars and other personal spaces.

I was excited to learn about Echo, so I tried my hand at adding a custom skill to Alexa. I built a sample quiz where Alexa acts as a quiz master. It was fun, but more importantly, I am now looking at how effectively this can benefit connected homes.

Ultimately, Alexa uses a natural language processing system (voice) to interact, so there is no need for the user to change their accent. Be you and enjoy Alexa!

 Link to Article on Linkedin

IoT in Construction Industry

Aju Kuriakose (LinkedIn profile)

I absolutely love my job and what we do at SequoiaAT. I am fortunate to learn about many new industries and technologies as we work on helping startups and Fortune 500 companies with new product ideas. As part of engaging with a recent customer, I got to learn a lot about the home construction business in the USA.

I was surprised to see how one of the oldest industries known to mankind, home building, has been slow in adopting the latest technology trends. Although the commercial construction side has been quick to adopt new technologies to provide energy-efficient buildings, the home construction side has been way behind.

I think the reason for the slow adoption is that homes are a basic human necessity and probably the biggest investment an average person makes in their life. So, everything being equal, people in general will spend their money on buying a bigger house for the same dollar value rather than a high-tech house. Another reason could be that the need always exists, so there is no urgency to reinvent or adopt technology at a fast pace.

As smart buildings become popular and the norm, the home construction industry will have to change and adapt itself to the new norms. Sooner or later there will be a disruptive force in the industry. The scope for bringing in IoT-enabled technologies is unlimited in every part of the supply chain, right from selling to occupancy. As an example, an IoT beacon could beam out information about the house or lot to potential buyers passing by.

Another example is incorporating smart sensors for occupancy, temperature etc. during construction itself rather than retrofitting them; it will save homeowners hundreds and thousands of dollars in the long run for a fraction of the additional cost at setup time. Devices like Powerwalls from Tesla or leak detection systems from Flowlabs will get embedded into the home construction process. Smart asset tracking and people tracking will ensure that the right tools are present at the right place at the right time.

The backend construction cycle itself will get faster with the use of technology, as the tools and equipment being used can be tracked and maintained better, leading to less downtime and fewer lost working hours. IoT-enabled devices will also ensure better safety at the worksite and can lead to better coordination.

Even from a consumer point of view, warranties can be tracked better, as every single component used in the house being built can be tracked and managed against its warranty. Just as commercial building owners advertise how energy efficient their buildings are, I think home builders need to start paying more attention to the HERS (Home Energy Rating System) index.

As of now, I think many of these technology implementations are at a hobbyist level and need mass adoption. I think buyers need to start pushing on the HERS index rating to put pressure on homebuilders to use better technology to produce more energy efficient homes.

Home construction companies which adopt and embrace the new technologies will have better and faster turnaround times and will edge out the competition. Plumbers, electricians, painters etc. will have to learn new skills to incorporate technology tools into their work to stay competitive.

Link to Article on Linkedin

Critical Success Factors for Medical Device Product Development

According to published market reports, the medical device market is expected to grow to a staggering $340+ billion by 2021. The opportunities are expected to be greatest in the general medical devices, cardiovascular, and surgical & infection control segments. With such tremendous opportunities in the global market, it is imperative that medical device product developers be aware of the stringent demands of design and development, which emphasize safety and compliance with established regulations and standards. Over the years of working with major medical product companies like Johnson & Johnson, Boston Scientific, Medtronic, Baxter, etc., we have seen and experienced various development approaches, challenges and the stringent standards compliance demanded by both client audit teams and independent audit teams. Some of the products developed included a disposable colonoscope, automated sterilizers, blood glucose meters and an implantable drug-dispensing device. This is an attempt to share our experience with the essential elements of product design; a similar note on process elements will be posted soon.

Medical devices can be broadly classified into three market segments – diagnostic, therapeutic and implantable. Based on safety and risk assessment, devices are classified as Class I, Class II or Class III. Product designers and manufacturers must demonstrate adequate controls and “compliance” to avoid being found guilty of deficiencies. It is important to understand that in this domain, “intentions do not count, but action alone”.

Product Development

Product development rigor depends on the product's safety classification, its history, and whether it is a “first of its kind” product or a “me too” product. For a first-of-its-kind product, the focus should be on the characteristics of the materials used and on effective documentation from the proof-of-concept phase onwards. The manufacturing process is important (especially material consistency and sterilization & hygiene). Software development needs to demonstrate complete verification and validation throughout the development life cycle. The severity of device failure decides the development rigor (Level of Concern Analysis, LOCA). Proof of positive compliance needs to be recorded and submitted.

The product life cycle phases are Concept → Design → Implement → Manufacture → Disposal. This life cycle looks like a standard one, but what differentiates it is the focus you need to bring to each phase from the product, process and compliance perspectives. In the concept phase, inputs are to be considered from the market, existing products and product-category-specific standards. In the design phase, DFX aspects should be planned and incorporated. Design rigor is brought in through processes like DFMEA (Design Failure Mode and Effects Analysis), reliability prediction, PFMEA (Process Failure Mode and Effects Analysis), system hazard analysis, software hazard analysis, a requirements trace matrix, COTS (Commercial Off-The-Shelf) product validation, and test plans covering verification and validation with both positive and negative compliance.
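
As one concrete example of the design rigor mentioned above, DFMEA typically ranks each potential failure mode by a Risk Priority Number (RPN = severity × occurrence × detection) and flags high-risk items for mitigation. The sketch below illustrates the bookkeeping only; the failure modes, ratings and threshold are purely illustrative and not taken from any real device.

# Illustrative DFMEA risk-priority calculation (RPN = severity x occurrence x detection).
# Ratings are on a 1-10 scale; the entries and threshold below are made up for illustration.
failure_modes = [
    # (failure mode,                  severity, occurrence, detection)
    ("dose pump delivers over-dose",         9,          2,         3),
    ("battery depletes prematurely",         6,          5,         4),
    ("enclosure seal leaks",                 7,          3,         2),
]

RPN_THRESHOLD = 100   # illustrative action threshold

for mode, sev, occ, det in failure_modes:
    rpn = sev * occ * det
    action = "mitigate in design" if rpn >= RPN_THRESHOLD else "acceptable with monitoring"
    print(f"{mode}: RPN={rpn} -> {action}")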

User interface design is another important aspect that needs attention. It contributes to improving the safety of medical devices and equipment by reducing the likelihood of user error. This can be accomplished through the systematic and careful design of the user interface, i.e., the hardware and software features that define the interaction between users and equipment.

Focus on six early engagement areas will significantly contribute to developing a safe and reliable product: PCB layout and fabrication, PCB assembly, component engineering, test engineering, system engineering and packaging, and product support.

Conclusion

Fundamental to designing and developing a medical product that is safe and effective is integrating safety into product development. The objective should be to remove or lower the risk at the design phase, then to protect against risks that cannot be removed at the design phase, and failing that, to inform the user about the residual risks through appropriate methods. The goal is to cover all foreseeable risks over the entire lifetime of the apparatus: transportation, installation, usage, shutdown and disposal.

Link to Article on Linkedin 

Reading JUnit Source Code

@xiaojun-zhang

Preparation and JUnit Basic Usage

The first step is to download the JUnit source code, create a new project in Eclipse, and then import the downloaded JUnit code.

The second step is to define the code to be tested. Below is the code; it is very simple, just calculating the sum of two integers.

public class SimpleClass {
    public static int add(int a, int b) {
        return a + b;
    }
}

The third step is to write the test case. JUnit test cases can be implemented by inheriting from the junit.framework.TestCase class. Methods whose names begin with 'test', defined in subclasses of TestCase, are treated as test cases. Multiple test cases can form a TestSuite. JUnit 4 can also define test cases with the @Test annotation.

public class TestSimpleClass extends TestCase {
    public void testAdd() {
        int result = SimpleClass.add(1, 1);
        Assert.assertEquals(2, result);
    }
}

In the above code, the testAdd() method is picked up as a test case. You can also assemble test cases by defining the suite() method, which isn't very different.

Now we are ready to run and test the code.

JUnit Source Code Structure

Below is JUnit’s package structure:

Package Structure

The core code is in the junit.framework package. Package junit.runner is for running the test cases; BaseTestRunner there is an abstract class. Packages junit.awtui, junit.swingui and junit.textui provide three different user interfaces (AWT, Swing and text-based) for running test cases; each package has a TestRunner that inherits from junit.runner.BaseTestRunner.

The code I read is mainly in the junit.framework package, the junit.textui package, and the abstract class BaseTestRunner.

Below is the class diagram:

Class Diagram

Running Process of JUnit

JUnit's running process can be divided into three steps: preparing test cases, running test cases and collecting test results. In the following code, getTest(testCase) prepares the test cases and doRun(suite, wait) runs them:

try {
    if (!method.equals(""))
        return runSingleMethod(testCase, method, wait);
        
    Test suite = getTest(testCase);
    return doRun(suite, wait);
} catch (Exception e) {
    throw new Exception("Could not create and run test suite: " + e);
}

Below is the sequence diagram:

Sequence Diagram

Design Pattern in JUnit

Composite

JUnit uses the Composite pattern when creating test cases. junit.framework.Test is the interface that defines a test, and it includes both the run() and countTestCases() methods. TestCase and TestSuite are the two classes that implement the interface: TestCase represents a single test case, and TestSuite represents a collection of test cases.

Observer

The Observer pattern is used when collecting test results. junit.framework.TestResult manages a collection of TestListeners; these TestListeners are notified when the execution of a test case fails.

Managing the collection of TestListeners:

/**
* Registers a TestListener
*/
public synchronized void addListener(TestListener listener) {
    fListeners.addElement(listener);
}

/**
*  unregisters a TestListener
*/
public synchronized void removeListener(TestListener listener) {
    fListeners.removeElement(listener);
}

When a test case fails to execute, the TestListeners are notified to handle the errors:

public synchronized void addError(Test test, Throwable t) {
    fErrors.addElement(new TestFailure(test, t));
    for (Enumeration e = cloneListeners().elements(); e.hasMoreElements(); ) {
        ((TestListener)e.nextElement()).addError(test, t);
    }
}

Template Method

The Template Method pattern here is relatively simple; it is used in BaseTestRunner.getTest() and TestCase.runBare().

Link to article on GitHub

Implementing FAQ Bots using Microsoft Bot Framework

 Subitha Sudhakaran

When we build complex apps, customers or users will have several queries regarding how the app works. Normally we provide help documents, an FAQ or a forum to submit queries. But the trend now is to move to conversational computing, and bots are the emerging mode for it. We can use bots to simplify customer interactions over services like Skype, mail, Facebook Messenger etc. A bot can be considered an artificial user; even so, it should be smart enough to replace human activities. Bots can be smart, semi-smart or dumb based on how we implement them, and artificial intelligence is the key to smart bots.

I have been exploring and working with the Microsoft Bot Framework at SequoiaAT; with it, we can build and deploy chat bots across different services. It is intended to be cross-platform and cloud-based. Each bot application is available to users through a set of channels such as Skype, mail, text etc. The Bot Connector is the central component that connects our bot with these channels, and you can configure in which channels your bot should be available. The architecture of the Bot Connector is REST-based, i.e. there is no constraint on the framework or the language; the only constraints are given by the endpoint names and exposed behaviour. The Bot Builder SDK is an open-source SDK hosted on GitHub. There are two SDKs, one for use with .NET and one for Node.js.

FAQ BOTS – Microsoft keeps updating its services, adding new features and functions. They now include support for Azure Bot Service, a cloud-based development platform, and Microsoft has implemented different templates to make it easier to use. One of the useful templates is “Question and Answer”. This is an ideal template for anyone building a customer service bot because it can work with your FAQs. In many cases, the questions and answers already exist in content like FAQ documents. The QnA Maker tool allows us to consume the existing FAQ content and expose it as an HTTP endpoint. QnA Maker is a free web-based service for responding to users' queries in a conversational way, so no coding is required to develop, train and manage the bot service. As an end user, you are only required to provide the predefined questions and answers, and QnA Maker takes care of the rest. The result is an endpoint that accepts a question and returns a JSON response containing the matching answer.

The Bot Framework can also be wired up with LUIS (Language Understanding Intelligent Service). With QnA Maker, we don't need to ask the exact question stored in the knowledge base to get the relevant answer: it looks for keywords and, based on them, provides an appropriate response. Sometimes it may match more than one possible response; in that case, it returns the answer with the highest score. QnA Maker helps us take our existing knowledge base and turn it into a conversational service.

The prerequisite for creating an FAQ bot using QnA Maker is a Microsoft ID to create the QnA bot service. We can log in to https://qnamaker.ai/ to create the QnA service; creating it is pretty straightforward with a minimum of steps. One advantage I see is that it is easy to create an FAQ bot for existing applications if we already have FAQ documents in some format, or deployed on a page. Supported FAQ document formats are .tsv, .pdf, .doc and .docx, so basically we can provide such a document or a page URL as input while creating the QnA service. We are permitted to provide multiple input files, and it adds the questions and answers as key-value pairs in its knowledge base. For new applications, if we don't have any FAQ documents yet, we create the question-answer pairs while creating the service itself. It supports testing the service in a conversational way before publishing it. Once we have published the service, we can make simple HTTP calls to the QnA Maker service from our bot application: when a user asks a question, the bot application internally passes the user's query to the QnA service, which checks the knowledge base for an approximately matching record.
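
For a sense of what that HTTP call looks like, here is a sketch in Python. The hostname, knowledge-base ID and endpoint key are placeholders of the kind shown on a published QnA Maker knowledge base, and the exact URL shape, headers and score scale may differ between QnA Maker versions.

import requests

# Placeholders: copy these from the "Publish" page of your QnA Maker knowledge base.
QNA_HOST = "https://my-qna-service.azurewebsites.net"
KB_ID = "00000000-0000-0000-0000-000000000000"
ENDPOINT_KEY = "..."

def ask_faq(question: str) -> str:
    """Send a user question to the QnA Maker endpoint and return the top answer."""
    resp = requests.post(
        f"{QNA_HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
        headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
        json={"question": question, "top": 1},
        timeout=10,
    )
    resp.raise_for_status()
    answers = resp.json().get("answers", [])
    # Each answer carries a confidence score; fall back when nothing matches well.
    if answers and answers[0].get("score", 0) > 50:
        return answers[0]["answer"]
    return "Sorry, I could not find an answer to that in the FAQ."

print(ask_faq("How do I reset my password?"))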

Once you are done creating the bot application and integrating it with the QnA service, you need to publish it on Microsoft Azure, for which you need an Azure subscription (a free trial is available for a limited period). It is possible to publish the bot application directly from Visual Studio using the Publish option; you only need to follow the Publish wizard steps to complete the process, and finally you can see the published application on Azure. After publishing, you need to register the bot with the Microsoft Bot Framework.

To register your bot, sign in to https://dev.botframework.com/ using your Microsoft ID, click on the register bot option and fill in the required details. There you get a Bot ID and an App Secret, which are used to authenticate your bot application with the Microsoft Bot Framework. The bot then interacts with users naturally from your website or from other channels; currently supported channels include Skype, Slack, Facebook Messenger, Office 365 mail, Teams etc. You can decide which channels to add your bot application to while registering the bot, selecting from the list of supported channels. Once you have added your bot to any of these services, it is always online, and users can send messages anytime and get an instant response.