Cross-platform vs. Native Mobile App Development: Which one to choose

 
Aruna R S

Today, 99.6% of all smartphones run on either iOS or Android. Mobile apps have become increasingly significant, not only as a way to conduct business but also to raise brand awareness, and hundreds of new applications are launched every day. In the last few years, the concept of cross-platform mobile app development has taken off in a big way: it allows the developer to write the code once and deploy it across platforms – Android, iOS or Windows. Let's compare the two approaches and the advantages of each.

Cross-platform vs Native apps:

Native apps

Native apps are written in languages that the platform supports natively. For example, Swift or Objective-C is used to write native iOS apps, Java is used to write native Android apps, and C# is used, for the most part, for Windows Phone apps.

Apple and Google offer app developers their own development tools, interface elements and standardized SDKs: Xcode for iOS and Android Studio for Android. This allows any professional developer to develop a native app relatively easily.

Advantages

  • Since native apps work with the device’s built-in features, they are easier to work with and also perform faster on the device.
  • Native apps get full support from the concerned app stores and marketplaces. Users can easily find and download apps of their choice from these stores.
  • Because these apps have to be approved by the app store they are intended for, the user can have greater assurance of the app’s safety and security.
  • Native apps also work out better for developers, who are provided with the SDK and all the other tools to create the app with much more ease.

Cross-platform apps

While cross-platform development is somewhat of an umbrella term for any mobile app project that targets multiple platforms, hybrid is a subtype that implies the use of a specific development model. Typical representatives of hybrid development tools are Cordova and PhoneGap. Both allow developers to create apps that are web/native ‘hybrids’, with the code written in HTML, CSS and JavaScript and then wrapped in an invisible native WebView browser.

Cross-platform development tools that do not use WebView and communicate with the platform directly aren’t united in any subgroup. Existing under the general term of cross-platform development, they are sometimes called native development tools, which just makes it all even more confusing. For the sake of convenience, we’ll refer to these tools as ‘near-native’ here and explain why they deserve such praise.

In an ideal scenario, cross-platform apps work on multiple operating systems with a single code base. There are two types of cross-platform apps:

  1. Native Cross-Platform Apps
  2. Hybrid ‘HTML5’ Cross-Platform Apps

Native Cross-platform Apps

Native cross-platform apps are created when you use the APIs provided by the Apple or Android SDK but implement them in a programming language that is not supported by the operating system vendor. Generally, a third-party vendor provides an integrated development environment that handles the process of creating the native application bundles for iOS and Android from a single cross-platform codebase. The final product is an app that still uses native APIs, and cross-platform native apps can achieve almost native performance without any lag visible to the user. NativeScript, Xamarin and React Native are the most common examples of native cross-platform frameworks.

Hybrid HTML 5 cross-platform apps

Although mobile applications are designed for smartphones and tablets, it is back-end servers (either on-premises or cloud-based) that handle much of the application logic. Since both the iOS and Android SDKs feature advanced web components, skilled software engineers often use a WebView to create parts of an application’s GUI (Graphical User Interface) with HTML5, CSS and JavaScript. The most popular hybrid app development framework is Apache Cordova (formerly known as PhoneGap).

Mobile app development tools

Xamarin

Xamarin apps are built with standard, native user interface controls. Built with C# and .NET, Xamarin allows developers to re-use code and simplifies the process of creating dynamic layouts in iOS. Apps not only look the way the end user expects, they behave that way too. Xamarin apps have access to the full spectrum of functionality exposed by the underlying platform and device, including platform-specific capabilities like iBeacons and Android Fragments. Xamarin apps leverage platform-specific hardware acceleration and are compiled for native performance, which can’t be achieved with solutions that interpret code at runtime.

Apache Cordova

Apache Cordova is an open-source mobile development framework. It allows you to use standard web technologies – HTML5, CSS3 and JavaScript – for cross-platform development. Applications execute within wrappers targeted to each platform and rely on standards-compliant API bindings to access each device’s capabilities such as sensors, data, network status, etc. What you get with Cordova is essentially a JavaScript API that serves as a wrapper for native code and is consistent across devices, so apps retain access to the capabilities available to natively developed applications. You can think of Cordova as an application container with a web view that covers the entire screen of the device. The web view used by Cordova is the same web view used by the native operating system: on iOS this is the Objective-C UIWebView class, and on Android it is android.webkit.WebView.

Apache Cordova comes with a set of pre-developed plugins which provide access to the device’s camera, GPS, file system etc. As mobile devices evolve, adding support for additional hardware is simply a matter of developing new plugins.
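To make the plugin model concrete, here is a minimal, hypothetical sketch of how JavaScript running inside Cordova’s WebView could call the camera through the cordova-plugin-camera plugin; the getPicture call and option names follow that plugin’s documented API, while the surrounding UI code is purely illustrative.

```typescript
// Sketch: calling a native capability (the camera) from a Cordova hybrid app.
// Assumes the plugin has been added: `cordova plugin add cordova-plugin-camera`.
declare const Camera: any; // global object injected by cordova-plugin-camera

// Cordova fires `deviceready` once the native bridge and its plugins are loaded.
document.addEventListener("deviceready", () => {
  (navigator as any).camera.getPicture(
    (imageUri: string) => {
      // Success: show the captured photo in the HTML-based UI inside the WebView.
      const img = document.createElement("img");
      img.src = imageUri;
      document.body.appendChild(img);
    },
    (message: string) => console.error("Camera failed: " + message),
    { quality: 50, destinationType: Camera.DestinationType.FILE_URI }
  );
});
```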

React Native

The React Native framework was created by Facebook, and its development started as the result of a hackathon back in 2013. React Native is an example of a technology that the developer community created for itself, at a time when developers were looking for a tool that would combine the advantages of native mobile development with the power and agility of the React web environment. React Native’s genesis resulted in a huge, enthusiastic community investing in the framework’s development, and there are catalogs of freely available components that go with it.

React Native uses native UI building blocks to compose rich mobile apps for both iOS and Android from a common JavaScript codebase. React Native also allows developers to see their code and its implementation on real mobile screens side by side in real time.

React Native provides development tools for debugging and application packaging, which saves time.
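As a flavour of what a single JavaScript/TypeScript codebase for both platforms looks like, here is a minimal illustrative React Native component (the component and its state are our own example, not from any particular app); the View, Text and Button primitives map to native widgets on iOS and Android.

```tsx
// Minimal React Native component: one codebase, native UI blocks on both platforms.
import React, { useState } from "react";
import { Button, Text, View } from "react-native";

export default function Counter() {
  const [count, setCount] = useState(0); // local component state

  return (
    <View style={{ padding: 24 }}>
      {/* These components render as native widgets on iOS and Android. */}
      <Text>You tapped {count} times</Text>
      <Button title="Tap me" onPress={() => setCount(count + 1)} />
    </View>
  );
}
```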

Which One to Choose

So, if you want to impress users with a lightning-fast interface, rich functionality and overall performance, native apps are what you need. In addition, you get better security and stability. The price for this is that you’ll most likely need to hire a dedicated team for each platform, and a small business may not be able to afford to develop an application for both platforms.

Cross-platform apps, on the other hand, can be developed for both iOS and Android from one codebase. They are also much easier to maintain and deploy, so you can spend more time and money on marketing and attracting new customers. However, their biggest disadvantage is lower performance, which may be especially crucial if you’re developing an application with features that require deep hardware integration.

Maintenance Management & IIoT

Aju Kuriakose

IoT is changing the world around us, and this change affects every walk of life, including the maintenance industry. Maintenance management used to depend on the troubleshooting skills of maintenance managers and was hardly data-driven, as they had very limited data to fall back on when it came to machine health. That is rapidly changing: maintenance is becoming heavily data-driven rather than skill-driven. Advances in wireless communications and data processing enable maintenance managers to gauge the health of the factory in an instant.

We can tell that it’s no longer hype but reality; the proof is that a leading organization, the OPC Foundation, is spending time developing the Unified Architecture (UA) specification for IIoT in the manufacturing environment. The standard is being developed to enable IIoT devices to easily pass information between sensors, machines, monitoring devices and the cloud in a secure and open way. OPC, AMT and OMAC have also jointly worked on Packaging Machine Language (PackML) and MTConnect, which combine OPC UA with existing industry standards to lower the cost of predictive maintenance.

The low cost of IIoT sensors is making it a no-brainer to predict failure or measure the remaining useful life (RUL) of a tool, enabling maximum uptime at optimum cost. For example, a drill will start to suffer wear over the course of its use; with continued regular use it eventually becomes unusable, either because the precision of the job falls below the required parameter or because the drill bit breaks off. With the combination of industrial IoT sensors and AI techniques, today we can easily predict the remaining useful life of the tool.

Any maintenance professional will agree with me that predictive maintenance is a journey they have to take, but IIoT makes the journey easier. Retrofitting an existing machine with a sensor to measure machine health becomes very easy. One of the companies we work with to enable this transition is OPA By Design. Its smart device can be attached to any existing machine at very minimal cost to measure eight different parameters and report them to maintenance supervisors via a mobile app and the cloud. Since the machine is constantly monitored, any sign of degradation in its health triggers an instant alert.

IIoT also helps drive down inventory holding costs: maintenance supervisors now have better predictability of machine failure and hence have to stock fewer spares. It also results in fewer emergency inventory orders and less downtime due to out-of-stock inventory.

IIoT is not changing anything for the maintenance professional, except that he can now listen to his assets and make informed decisions based on actual data about their health. IIoT is not going to fix the problem for him; he will still have to depend on his best technician to fix it reliably.

 Link to article on Linkedin


Reinventing manufacturing tests for automotive electronics

Ram Mohan Ramakrishnan

Automotive electronics has been making steady gains as a percentage of total vehicle cost worldwide. Consequently, it has been facing some of the same challenges that were faced earlier (and mostly solved by automated testing) in other areas of automobile mass manufacturing – fabrication, mechanical assembly, electrical components and hydraulic systems.

A typical example is the Electronic Control Unit (ECU) that has become the heart (or brain!) of the modern automobile. An ECU receives inputs from various sensors and sends outputs to multiple actuators, in addition to communicating with other ECUs of related subsystems in the vehicle. Some ECUs implement performance critical functions such as fuel injection, ignition timing etc., whereas others control safety critical systems such as Anti-skid Braking (ABS), Electronic Stability Control (ESC) etc. Therefore an automated manufacturing test station for the ECU is significantly complex in design, involving several pieces of instrumentation, simulation of sensors and multiple automotive communication protocols.

Let’s see if some real-world figures can lend a quantitative perspective to this mass-manufacturing challenge. For instance, take the case of a mid-size automotive OEM that sells over 100,000 vehicles annually, with production in two plants of identical capacity. Taking engine control alone, that means at least an equal number of ECUs supplied annually by their Tier-1 ECU manufacturer, who needs to manufacture around 8 ECUs an hour, assuming full three-shift operations. Assuming 4 parallel assembly lines, that gives less than 30 minutes to manufacture an ECU! The time available for testing ECUs at the End-of-Line (EoL) is even shorter. Assuming 2 parallel test stations, the operator typically has less than a minute per ECU – to load it on the test station, execute the automated tests, learn whether it passed or failed, print a bar code and affix it to the passed piece (or drop the failed piece into the reject bin), unload the ECU, and be ready to load the next one! Add to this the complexity of different versions of the same ECU being in production simultaneously. Since batches with different versions of the ECU come to the same test station, the operator needs to reconfigure the station for a different set of tests each time, and the reconfiguration must typically be completed within 4 to 5 minutes before loading the next ECU type.
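As a rough back-of-the-envelope reading of those figures (our assumptions, not stated above: each plant handles half the volume, about 50,000 ECUs a year, over roughly 250 working days of round-the-clock, three-shift operation):

```latex
% Approximate per-plant throughput under the assumptions above:
\frac{50\,000~\text{ECUs}}{250~\text{days}\times 24~\text{h/day}} \approx 8.3~\text{ECUs per hour}
% Spread across 4 parallel assembly lines, each line turns out roughly
% 8.3 / 4 \approx 2 ECUs per hour, i.e. just under 30 minutes per ECU.
```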

Now let’s review how this challenge applies (or doesn’t apply!) to different segments of the automotive industry. It’s a no-brainer that any Tier-1 manufacturer (or OEM) in the business would already have all of this covered on their factory floors; if not, they would hardly be selling! However, it is no longer the steady state in the case of a newly introduced ECU design, be it part of a new brand of vehicle the OEM plans to introduce to the market, or an additional feature, like adaptive cruise control, being introduced for a new model variant. Does the Tier-1 manufacturer have the required engineering bandwidth to design the test station themselves? In the case of technology transfer of an ECU design from a global principal, does the Tier-1 manufacturer have the in-house expertise in the early stages to develop a test station on time, before pilot production starts? In the case of in-house development of the ECU, does the Tier-1 manufacturer really have the resources, bandwidth and simply the time to get the test station ready before the ECU design passes all type tests and hits production?

Alternatively, do existing test-station vendors for other components, like starter motors, tiltable mirror assemblies or instrument clusters, have the necessary expertise to design such a complex test station? What about ECUs for electric vehicles (and hybrids), which are predicted to transform the entire motoring landscape forever? Not to forget the two-wheeler (and three-wheeler) segments, which, under the rapidly closing time window of emission control regulations (Bharat Stage-VI in India, although behind Euro-VI by a few years, currently has a 2020 deadline!), will be forced to switch to ECU-based fuel injection in a few years’ time in order to sell legally in the market.

Here’s where a little foresight into accelerating the design of manufacturing test solutions could benefit the relevant stakeholders. At Deep Thought Systems, we have designed and developed a reliable, modular and generic platform called TestMate for building manufacturing test stations specifically for ECUs. We have successfully customized TestMate to supply EoL test stations for ECUs to Indian Tier-1 manufacturers and OEMs with a very short turnaround.

The Human Machine Interface (HMI) of TestMate, the main part that the operator sees and operates on a continuous basis, is a fairly generic requirement: a rugged enclosure, controls and indicators built for long years of reliable performance on an assembly floor. They say, and we’ve witnessed it ourselves, that routine use of test stations by factory operators constitutes a really harsh environment! The mounting, orientation, peripherals for viewing and printing, display properties and so on are all ergonomically designed for continuous usage by an operator over an 8-hour shift (or longer!). We have successfully installed the test station on factory floors where it has been used continuously for years, with zero support calls.

We work with the customer on the ECU connector type to design a custom cable harness and test fixture that includes the mating connector, with a locking arrangement. The fixture design ensures proper contact between the pins of the ECU connector and the mating connector over months of continuous loading and unloading. We equip the customer with a spare cable harness to handle the unlikely event of damage due to exceedingly rough or careless usage by operators; it can easily be replaced on site without having to depend on a service engineer.

Built on the same principles as our other automotive offerings for vehicle diagnostics, testing and simulation, TestMate is capable of communicating with various ECU designs over multiple automotive communication protocols like CAN, K-Line and LIN, and messaging standards like J1979, J1939, UDS, KWP2000, etc. We work with the customer to customize it for the ECU’s communication specification. Apart from testing continuous engine parameters, the Diagnostic Trouble Codes defined for the ECU can also be tested. Containing many building blocks of an actual ECU, for many communication tests the test station appears to the ECU as a peer ECU (sometimes multiple peers) of the related sub-system(s)!

TestMate can reliably simulate inputs to the ECU, ranging from the simplest ignition key switch to the complex crankshaft position waveform that is a critical input for many engine control functions. It also measures the ECU’s outputs – from discrete voltages and timed pulses to PWM waveforms driving actuators – and evaluates them against defined limits for pass or fail. In addition to functional tests, power supply and other electrical (negative) tests can be performed to check how well the ECU hardware responds to abnormal conditions, like reversed power-supply polarity, under-voltage, etc. The I/O instrumentation is completely custom-designed to the interface specification of the ECU.

The HMI software supports multiple levels of users, with different permissions defined for each login level – running tests, modifying test parameter limits, changing the sequence of tests, editing error message text, test calibration and troubleshooting. All tests are logged for later review by supervisors or managers. For failed tests, clear troubleshooting assistance is displayed and logged, showing which specific test failed and how exactly, so that the defective unit can be repaired. An ECU may come in for tests twice: once after bare assembly without the enclosure, and once again after the enclosure is fitted.

Finally, it all comes together in the hands of the operator, who, after loading an ECU, has less than a minute to run the automated tests and learn whether it is a pass or a fail. A pass is always good news: the ECU gets a bar-coded label stuck on it and moves forward to the next stage. A fail, however, is hardly the end of the road, because to keep rejection costs low, failed units need to be repaired, with the test station providing precise troubleshooting information to get them repaired quickly. In this context, a few pertinent questions for relevant Tier-1 manufacturers and OEMs are:

1) How much of ECU test station design could be generic, versus how much of it should essentially remain ECU design specific?

2) Does it make business sense to completely reinvent a unique solution to this challenge in terms of engineering effort, cost or timelines, when large parts of the challenge are common – parts that a generic test platform such as TestMate has not only abstracted, but has also customized for specific ECUs and proven on the factory floor?

At Deep Thought Systems, we clearly understand the generic and reusable parts of the TestMate platform, which help accelerate the design of EoL test stations. A high-performance hardware platform, powered by a real-time operating system and sound embedded firmware design practices, ensures fast test execution and that all timing considerations in vehicle communication protocols are taken care of. Thanks to our expertise in digital and mixed-signal hardware design, we are able to quickly customize other parts of the test station – like the I/O interfaces, ECU fixture and HMI software – to the customer’s specification and needs, with total assurance of the customer’s intellectual property.

Another closely related area for production where we could work with customers to provide a quick solution is the design and supply of ECU Flashing units. Operators use the flashing units to flash the firmware into ECUs after assembly. The design of the ECU flashing unit is greatly accelerated by our generic ECU flashing framework, where the only input required from the customer is the seed generation algorithm for unlocking the ECU, which could be imported into our firmware as a library (in binary form) to protect the customer’s (or principal’s) confidentiality. In conclusion, our expertise and track record of supplying and installing EoL test stations on factory floors and supporting production personnel in the usage and fine-tuning of these systems will ensure an efficient and trouble-free operation for the customer for the entire production lifecycle.

Link to Linkedin article

Industrial IoT (IIoT) – Hype or Reality?

We have been hearing a lot about IIoT being the real revolution, Industry 4.0, or the next industrial revolution. However, it is not a eureka moment, and it is not something set in the future; it has been an evolution over many years. Factories were already going digital with the industrial internet; with Industrial IoT, the pace has simply picked up. The tools, the technology and the ease of data access have accelerated this adoption.

Large industrial houses like GE, Siemens and ABB were always IIoT companies, although they were not known for it. They had the ability to monitor and manage the health of expensive machinery, since it was important for them to prevent downtime for customers. It also enabled them to learn in real time how a machine was being used, and to improve their engineering accordingly. What has changed is that this capability can now be offered to, and implemented by, an industrial plant of any size or revenue.

IIoT is a vast area which includes everything from sensors to big data and AI. After ERP, IIoT will change the factory further, picking up on problems sooner and thereby saving time and money. Imagine a small shop manufacturing pumps: it can now be connected in real time to its sales offices, so it knows which pumps are selling each day and can adjust production to what is needed most; it can bring in inventory only when needed, based on this data; predictive maintenance systems can tell it whether there are flaws in the manufacturing process; and once the pumps are installed at customer premises, it can collect live data and alert the customer to any problem it foresees.

There are many companies operating in this space, each trying to address different parts of the puzzle. At SequoiaAT we have been fortunate to work with two companies in this space: @opabydesign, which works on condition monitoring and predictive maintenance, and @Deepthoughts, which builds energy monitoring solutions for factories to ensure that machines run efficiently and at optimum energy consumption.

We may not see or experience much of this change in daily life unless working in industry or a factory is our daily job. The reliance on a supervisor’s expertise to identify and fix a problem is going away, as decisions will be made on actual data rather than on the experience of a floor supervisor. Today, if you are a small shop and a machine starts making noise, your workers most likely depend on more experienced people to troubleshoot and say what the problem could be; with these cheap but effective smart devices, that is changing.

Link to Article on Linkedin

Alexa – What excites me to explore this latest offering from Jeff Bezos’s research hub

Anu Pauly

Nowadays, voice has become the easiest way to interact, compared with other mediums of communication. Since 1994, when Jeff Bezos founded Amazon, the company has kept inventing – from STEM initiatives to Prime to Web Services to Kindle, and the latest additions of Echo, Echo Dot and Echo Show. The Echo series connects to Alexa, the voice-controlled intelligent personal assistant service, one of the best to date. Alexa is named after the ancient library of Alexandria. Using Alexa, you can call out your wishes and see them fulfilled – at least simple ones: for example, checking the weather of any place, playing music, running a web search and so on.

The Alexa-enabled devices available in the market are the Amazon Echo, Echo Dot and Echo Show, plus the newly announced Echo Look. You can also explore Alexa in the browser at https://echosim.io by logging in with your Amazon account.

The Alexa Voice Service is currently only available for US, German and UK customers with an Amazon account.

The architecture of Alexa works like this: when the user asks something like “Alexa, tell me the weather in San Francisco”, the audio request goes to the Alexa Voice Service (AVS), which converts the speech to text. The keywords – “weather” and “San Francisco” – are extracted and processed, and the result is returned to the user as voice. An Alexa skill has two parts: the configuration (the data in the Developer Portal) and a hosted service that responds to user requests. The hosted service can be an AWS Lambda function or an internet-accessible HTTPS endpoint with a trusted certificate. You can build skills using the Alexa Skills Kit (ASK). The skill types supported are Custom Skills, the Flash Briefing Skill and the Smart Home Skill.

As for the architecture of the Alexa Skills Kit (ASK): when the user speaks a phrase beginning with “Alexa” and the Echo hears it, the audio is sent to AVS for processing. An Alexa skill request is then sent to your server (for example, a Lambda function) for business-logic processing. The server responds with a JSON payload that includes the text to speak. Finally, AVS converts that text to speech and sends it back to the device for voice output.
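To make that request/response cycle concrete, here is a minimal, hypothetical sketch of a custom-skill handler running on AWS Lambda; the intent name, slot name and reply text are our own examples (a real skill defines its intents in the Developer Portal configuration), but the JSON response shape with outputSpeech is what AVS expects back.

```typescript
// Minimal sketch of an Alexa custom-skill handler on AWS Lambda (Node.js runtime).
// "GetWeatherIntent" and the "City" slot are hypothetical names used for illustration.

interface AlexaRequest {
  request: {
    type: string;                                  // "LaunchRequest", "IntentRequest", ...
    intent?: { name: string; slots?: Record<string, { value?: string }> };
  };
}

interface AlexaResponse {
  version: string;
  response: {
    outputSpeech: { type: "PlainText"; text: string };
    shouldEndSession: boolean;
  };
}

// Lambda entry point: receives the skill request JSON and returns the JSON payload
// containing the text that AVS will convert back to speech on the Echo.
export const handler = async (event: AlexaRequest): Promise<AlexaResponse> => {
  let text = "Welcome to the demo skill.";

  if (event.request.type === "IntentRequest" && event.request.intent?.name === "GetWeatherIntent") {
    const city = event.request.intent.slots?.City?.value ?? "your city";
    // A real skill would call a weather API here; this demo just echoes the slot value.
    text = `Sorry, I don't have live weather data for ${city} in this demo.`;
  }

  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text },
      shouldEndSession: true,
    },
  };
};
```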

The specialty of these devices is their far-field microphones: no separate activation is needed; simply say a trigger word like “Alexa” (the default), “Echo” or “Computer”, and the device responds to voice commands from almost anywhere within earshot. Microsoft’s Cortana, Google Assistant and Apple’s Siri provide similar services. However, once you get used to Alexa, it feels much more natural and responsive than speaking to a phone-based voice assistant. Voice control frees you from being constantly tethered to your smartphone.

Manufacturers of automobiles, kitchen appliances, door locks, sprinklers, garage-door openers and many other recently connected products are working to bring Alexa or a similar voice-driven service to their devices.

Alexa is particularly useful for the smart home because it allows you to control your connected devices without having to take out your phone and launch an app.

Despite the success and growing interest in Alexa products and services, Amazon still faces scrutiny over the potential privacy implications of having an always-on, always-listening device in people’s homes, cars and other personal spaces.

I was excited to learn about Echo, so I tried my hand at adding a custom skill to Alexa. I was able to build a sample quiz where Alexa acts as the quiz master. It was fun, but more importantly, I am keen to see how effectively this can benefit connected homes.

Ultimately, Alexa uses a natural language processing system to interact by voice, so there is no need for the user to change their accent. Be you and enjoy Alexa!

 Link to Article on Linkedin

IoT in Construction Industry

Aju Kuriakose (Linkedin Profile)

I absolutely love my job and what we do at SequoiaAT. I am fortunate to learn about many new industries and technologies as we help startups and Fortune 500 companies with new product ideas. While engaging with a recent customer, I got to learn a lot about the home construction business in the USA.

I was surprised to see how slow one of the oldest industries known to mankind – home building – has been in adopting the latest technology trends. Although the commercial construction side has been quick to adopt new technologies to deliver energy-efficient buildings, the home construction side has been way behind.

I think the reason for the slow adoption is that homes are a basic human necessity and probably the biggest investment an average person makes in his or her life. So, all else being equal, people in general will spend their money on buying a bigger house for the same dollar value rather than a high-tech house. Another reason could be that the necessity always exists, so there is no urgency to reinvent or adopt technology at a fast pace.

As smart buildings become popular and the norm, the home construction industry will have to change and adapt to the new norms; sooner or later there will be a disruptive force in the industry. The scope for bringing in IoT-enabled technologies is unlimited in every part of the supply chain, right from selling to occupancy – for example, having an IoT beacon beam information about the house or lot out to a potential buyer passing by.

Incorporating smart sensors for occupancy, temperature and so on during construction itself, rather than retrofitting them later, will save homeowners hundreds and thousands of dollars in the long run for a fraction of the additional cost of setting it up. Devices like Powerwalls from Tesla or leak detection systems from Flowlabs will get embedded into the home construction process. Smart asset tracking and people tracking will ensure that the right tools are present at the right place at the right time.

The back-end construction cycle itself will get faster with the use of technology, as the tools and equipment being used can be tracked and maintained better, leading to less downtime and fewer lost working hours. IoT-enabled devices will also ensure better safety at the worksite and can lead to better coordination.

Even from the consumer’s point of view, warranties can be tracked better, as every single component used in the house being built can be tracked and managed against its warranty. Just as commercial building owners advertise how energy efficient their buildings are, I think home builders need to start paying more attention to the HERS (Home Energy Rating System) index.

As of now, I think many of these technology implementations are at a hobbyist level and need mass adoption. Buyers need to start pushing on the HERS rating to put pressure on homebuilders to use better technology and produce more energy-efficient homes.

Home construction companies which adopt and embrace the new technologies will have better and faster turnaround times and will edge out the competition. Plumbers, electricians, painters and other trades will have to learn new skills to incorporate technology tools into their work to stay competitive.

Link to Article on Linkedin

Critical Success Factors for Medical Device Product Development

According to published market reports, the medical device market is expected to grow to a staggering $340+ billion by 2021. The opportunities are expected to be greatest in the general medical devices, cardiovascular, and surgical and infection control segments. With such tremendous opportunities in the global market, it is imperative that medical device product developers are aware of the stringent demands of design and development, which emphasize safety and compliance with established regulations and standards. Over years of experience with major medical product companies like Johnson & Johnson, Boston Scientific, Medtronic, Baxter, etc., we have seen various development approaches, challenges and the stringent standards compliance required by both client audit teams and independent audit teams. Some of the products developed include a disposable colonoscope, automated sterilizers, blood glucose meters and a drug-dispensing implantable device. This post is an attempt to share our experience with the essential elements of product design; a similar post on process elements will follow soon.

Medical devices can be broadly classified into three market segments – diagnostic, therapeutic and implantable. Based on safety and risk assessment, devices are classified as Class I, Class II or Class III. Product designers and manufacturers must demonstrate adequate controls and “compliance” to avoid being found guilty of deficiencies. It is important to understand that in this domain “intentions do not count but action alone”.

Product Development

Product development rigor depends on the product’s safety classification, its history and whether it is a “first of its kind” or a “me too” product. For a first-of-its-kind product, the focus should be on the characteristics of the materials used and on effective documentation from the proof-of-concept phase onward. The manufacturing process is important (especially material consistency, sterilization and hygiene). Software development needs to demonstrate complete verification and validation throughout the development life cycle. The severity of device failure decides the development rigor (Level of Concern Analysis, LOCA). Proof of positive compliance needs to be recorded and submitted.

The product life cycle phases are Concept → Design → Implement → Manufacture → Disposal. This life cycle looks very much like a standard one, but what differentiates it is the focus you need to bring to each of these phases from a product, process and compliance perspective. In the concept phase, inputs are to be considered from the market, existing products and product-category-specific standards. In the design phase, DFX aspects should be planned and incorporated. Design rigor is brought in through processes like DFMEA (Design Failure Mode and Effects Analysis), reliability prediction, PFMEA (Process Failure Mode and Effects Analysis), system hazard analysis, software hazard analysis, a requirements trace matrix, COTS (Commercial Off-the-Shelf) product validation, and test plans covering verification and validation with both positive and negative compliance.

User interface design is another important aspect that needs attention. It contributes to improving the safety of medical devices and equipment by reducing the likelihood of user error. This can be accomplished through the systematic and careful design of the user interface, i.e., the hardware and software features that define the interaction between users and equipment.

Focus on six early engagement areas will significantly contribute to developing a safe and reliable product: PCB layout and fabrication, PCB assembly, component engineering, test engineering, system engineering and packaging, and product support.

Conclusion

Fundamental to designing and developing a medical product which is safe and effective is integrating safety into product development. The objective should be to remove or lower risk at the design phase, then protect against risks which cannot be removed at the design phase, and, failing that, inform the user about the residual risks through appropriate methods. The goal is to cover all foreseeable risks across the lifetime of the apparatus – transportation, installation, usage, shutdown and disposal.

Link to Article on Linkedin 

Get ready to talk with Smart Devices

Aju Kuriakose

Controlling devices at home and in the workplace with voice was something we saw only in sci-fi movies a few years ago. However, with advances in AI, NLP and related technologies, this has become a reality. The number of devices we can interact with via voice at home and in the office has grown over the past 3-4 years.

Amazon has undisputed leadership here, and its ecosystem is way ahead of its competitors in market penetration. There are many more companies in the fray, including Google with “OK Google” and Microsoft with Cortana. There are also many smaller companies like Cyberon and Conexant, all working towards enabling voice-controlled devices.

As per Strategy Analytics, voice could capture up to 12% of industrial IoT applications by 2022, and in the consumer segment, voice has the potential to capture up to 18% of applications in the 2020 to 2022 timeframe.

Using voice to control devices will dominate and become the preferred way to interact with IoT devices in the coming years, because voice is the most natural way to communicate. Amazon and Google have made it very easy to voice-enable smart devices. It is also very affordable today, as devices don’t need much extra processing power: they leverage cloud infrastructure to do the heavy lifting. Many companies are going to leverage this model, but there are companies like Sensory that are trying to push voice enablement to the edge. Its TrulyNatural is an embedded large-vocabulary continuous speech recognition system for devices that may not be cloud-connected, like a toaster, a space heater or a coffee maker.

Device manufacturers will have to place bets on which technologies to integrate into their products for voice control, as the landscape is still evolving. Most likely, companies will end up integrating two or three leading voice enablement APIs into their product lines.

The key factors to be considered while making the technology choice are: (a) signal-to-noise ratio – these smart devices will be used in a variety of environments, and the ability to capture voice while reducing background sounds is very important; (b) identification and isolation – the system should be able to separate the command from other ambient sounds; and (c) capture – often the source may be moving or at a distance from the mic (1 meter to 10 meters), depending on the environment in which the device is installed, and the device should be able to accommodate this.

However, there are privacy and security issues which will have to be addressed as the popularity grows because voice-enabled devices are always listening for the wake-up keywords. So if someone hacks into the system, they will be able to listen to your private conversations. Also, ethical usage of the information obtained becomes important because companies trying to build on their NLP and AI algorithms may decide to listen to all our conversations to strengthen their capabilities.

Interacting with devices via voice removes multiple steps from our daily lives. (For example, to control a thermostat at home we used to have to get up, walk to the thermostat and press buttons multiple times to set the desired temperature; today we can do it with a simple statement, “Set temperature to 72 degrees.”) So, irrespective of the challenges it may bring, we will continue to expand the boundaries of voice-based control of devices. Just as speech changed humans forever, enabling voice commands to communicate with everyday devices will change the world forever.

Link to article on Linkedin

Configuring headless systems

Ram Mohan Ramakrishnan

Headless systems abound in the embedded world – in consumer electronics, industrial automation, communications, automobiles and more. In these embedded devices, a Human Machine Interface (HMI) is conspicuous by its absence: the device lacks a user display like a monitor or LCD panel, and it has no input device like a keyboard, mouse or remote control.

Yet many of these systems need manual configuration by the user. The authorized user should be able to change certain operating parameters of the device within a predefined set of values. For instance, the Right/Left speaker settings in a home audio system, or safety thresholds in an industrial chemical reaction process, or DNS Server setting on a network router, or an engine speed/RPM limit setting on a telematics device etc.

User configuration circumvents the need to change the embedded software code (also known as the firmware) to address each and every individual user’s preference/scenarios. This would otherwise imply access to the source code (which in most cases is not open to users), making the required code modification, followed by compilation and re-flashing the updated firmware into the device. Configurability, the attribute that allows users to change key system parameters (that are internally used for computation by the firmware code) to predefined values at run-time, has traditionally been a critical design consideration for embedded system designers.

The user usually performs configuration through a software application that runs on a desktop or mobile device and presents a configuration interface. The first generation of configuration applications (configuration files being their predecessors) ran on the desktop and used the serial/RS232 port, omnipresent at the time, to connect to the embedded device. These apps were later either ported to USB, which replaced RS232 on most modern computers, or ran as-is by leveraging the virtual COM port (a USB driver abstraction) that made the USB port appear to the app as a normal serial port. Examples of such applications can be found with printers, CNC machines, process control equipment, etc.

After the World Wide Web established itself as a platform-independent paradigm for server access, the HTTP protocol came to be adopted across the spectrum of embedded designers for the purpose of device configuration. The inventors of the “www” may never have dreamed that the advantage they created – not having to install a client application (and ending up managing multiple versions of client apps for various OSes and versions) – would one day result in tiny embedded web servers running inside embedded devices. These are low-footprint HTTP servers (like HTTPD) which make it possible for the embedded device to be configured using just a browser running on any platform. The IP connectivity is typically over either a wired network like Ethernet, or wireless, that is Wi-Fi or cellular (GPRS/3G), implemented as lightweight TCP/IP protocol stacks (a la lwIP) to run on resource-constrained systems.

For instance, most telecom equipment, network switches, routers and gateways come with configuration pages that can be accessed using a browser. The first step in configuration is typically login, to authenticate the user’s credentials and confirm the level of authority to perform configuration tasks. Subsequently, a set of configuration pages is served from a central menu, where the user can set or change specific parameter values for a group of related IP networking parameters grouped together on that page. For enterprise routers, the configuration includes simple settings, like DHCP/static IP, default gateway, subnet mask, DNS settings, etc., but grows pretty complex with firewall/DMZ settings, access control, VPN/tunneling, port forwarding and more. Thus it requires a qualified and experienced networking engineer to configure it correctly and keep the network equipment up and running.

Narrowing down focus to the world of consumer electronics, a few major industry trends seem to have influenced the way users configure headless systems in the home entertainment space today.

1) The ongoing explosion of mobile Apps, on Android, iPhone, Windows Phone, etc. – to such an extent that “there is an App for anything and everything”! This seems to have turned the tables on the earlier “browser” trend. Who cares for the browser now; there’s an App that does it anyway! And everyone knows it’s a breeze to just download and install a (whatever) App from the (whichever) Store.

2) The proliferation of home routers – with exploding popularity and plummeting costs, the term “IP address” is today considered part of the tech ABC of modern life. (In the 70’s, “IP address” was high-tech parlance, but today it is part of the general vocabulary of a high-school kid!) Since IP networking involves a highly standardized set of settings, like DHCP/static IP, DNS settings, etc., a home user can usually manage to set it up without much trouble – or can at least leave the defaults alone, knowing they are good enough to work correctly!

3) Wi-Fi gaining the status of de facto standard in consumer electronics – thanks mainly to its cable-free, high-bandwidth and moderately long-range qualities, all ideal for the home segment. Given the massive penetration of Wi-Fi in homes today, networking for most consumer-electronics manufacturers defaults to Wi-Fi, although occasional segments of wired Ethernet could still be present in the home.

Streaming media protocols like Digital Living Network Alliance (DLNA) that work over self-discovery based networking protocols like Universal Plug-n-Play (UPnP) therefore work seamlessly over Wi-Fi, making it a natural choice for most device manufacturers, ranging from the traditional players like Sony, LG, Samsung etc. to recent innovators in this space like Google, Microsoft, Amazon etc. (*) These devices are far from standardized in their function, ranging from simple audio streaming systems, or gaming consoles to full-fledged home theatre systems. Some of these products come with Wi-Fi remotes and configuration (mobile) Apps. The important commonality is that these devices are all accessible on the Wi-Fi network, all the time.

So why does a consumer electronics device need configuration? Imagine a user picks the product up from a store and brings it home: how does the device learn the user’s home network details so that it can connect? How could the user tell the consumer electronics device, “this is my network SSID, and this is the passkey”? This is a mandatory and generic configuration, valid for any make or model, for these are the vital pieces of information without which the device cannot join the home network successfully.

Further configuration needs will depend on the nature and functionality of the device – for instance, for a DLNA-compliant device, what the role of that device is, whether it is a controller or a renderer, and so on. These being non-standard, product-specific configuration settings, a browser-based configuration process using menu-driven web pages may not be intuitive for most home users. Hence custom apps on popular platforms like Android, iOS, etc. (available on the respective app stores for users to download and install) have come into vogue.

The user here being a layman (unlike in the case of a network switch, where the user is an experienced network administrator), the Config. App needs to provide a zero-learning, intuitive, wizard-driven experience. The Config. App needs to anticipate the user’s concerns and learning curve and proactively push help, like context-sensitive FAQs or “how to” videos, rather than let issues happen and then provide troubleshooting tips. As someone rightly said, “think not how you as an engineer would use it, but how your mom and dad would, and get it to work correctly too”!

The period when the user is configuring the device is called configuration (or config) mode, whereas most of the time, while in normal operation serving its home entertainment function, the device is said to be in normal mode. To support the web-server design pattern for configuration purposes, the device can be switched to become a Wi-Fi access point while in config mode. This means that the Config. App (usually a mobile app) can now connect to the device, and the user can set the configuration. To perform its normal home entertainment function, the device is then switched back to act as a Wi-Fi station, which connects to the home router/Wi-Fi access point as usual.

Modern Wi-Fi chipsets (a la Broadcom) come with multiple mode operation – Wi-Fi Station (STN) and Wi-Fi Access-Point (AP) modes. Alternatively, a Soft-AP implementation on a single mode chipset would also serve the same purpose, as long as it can be made to switch between modes. The actual switching to config mode could be triggered on a special combo keypress (often with predefined timing) on the device itself. For instance, the Config. App of a consumer electronics device could display the following instruction to the user, “To enter Config mode, press and hold the FFWD and REV buttons simultaneously for more than 10 seconds. The Yellow LED will start blinking to indicate it has entered Config mode “.

From a connectivity view-point it is interesting to note that while in Config mode, the device loses its connection with the home network, since it is now acting as an Access-Point. With the mode switch it has started its own network for configuration Apps to connect, during which its normal functionality like rendering music is not available. It is sometimes even confusing to users, especially if internet access is involved, like Internet Radio etc.

Switching back to normal mode typically happens at reboot. Alternatively it could be implemented as the response to a reboot command while in Config. Mode. Following this mode switch the device regains its ability to play music etc. Another interesting behavior is that when this mode switch/reboot happens, the Smartphone loses its Access Point effectively so it automatically rejoins the home network to which it was connected earlier, a subtle yet standard behavior across Smartphones.

Since the embedded device and its Config. App have now become two peer Wi-Fi stations on the home network, it necessitates a mechanism for the App to know if (and when) the device rejoins the network after reboot. This could be implemented in the device using a broadcast mechanism of datagram messages, which the App could be programmed to detect with a suitable timeout.
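As a sketch of how the App side of that broadcast-detection mechanism could look (illustrated here with Node’s dgram module for brevity; a real mobile Config. App would use the platform’s UDP socket API, and the port number and message prefix below are hypothetical):

```typescript
// Sketch: wait for the device to announce itself after it rejoins the home network.
import * as dgram from "dgram";

const DISCOVERY_PORT = 9999;   // hypothetical port the device broadcasts on
const TIMEOUT_MS = 60_000;     // give up if the device has not reappeared in a minute

function waitForDevice(): Promise<string> {
  return new Promise((resolve, reject) => {
    const socket = dgram.createSocket("udp4");

    const timer = setTimeout(() => {
      socket.close();
      reject(new Error("Device did not rejoin the network in time"));
    }, TIMEOUT_MS);

    socket.on("message", (msg, rinfo) => {
      // The device broadcasts a short datagram once it has rejoined the home network.
      if (msg.toString().startsWith("HELLO:")) {
        clearTimeout(timer);
        socket.close();
        resolve(rinfo.address); // the device's new IP address on the home network
      }
    });

    socket.bind(DISCOVERY_PORT); // listen for broadcasts on the home network
  });
}

waitForDevice()
  .then((ip) => console.log(`Device is back online at ${ip}`))
  .catch((err) => console.error(err.message));
```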

Given the need for the Config. App to be available to users on multiple smartphone platforms like Android, iOS, etc., embedded system designers have adopted HTTP-based RESTful APIs as the programming interface to the embedded device. The embedded web server is in fact a REST server exporting REST APIs, implemented either as GET or as POST requests. The mobile apps call these REST APIs, with the data exchanged as the JSON payload of these requests. The embedded firmware that includes the REST server is usually validated early in the development cycle using generic REST API clients (a la the Chrome Advanced REST API Client), after which the Android and iOS apps can be developed and hosted in the respective stores.
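As an illustration of such a REST call from the Config. App’s side, here is a minimal sketch; the device address (192.168.4.1 is a common soft-AP default, but by no means guaranteed), the /api/wifi endpoint and the JSON field names are all hypothetical, since each device defines its own REST API.

```typescript
// Sketch: posting Wi-Fi credentials to the device's embedded REST server
// while the phone is joined to the device's config-mode access point.

interface WifiConfig {
  ssid: string;     // the home network's SSID
  passkey: string;  // the home network's passphrase
}

async function sendWifiConfig(deviceIp: string, config: WifiConfig): Promise<void> {
  // POST the configuration as a JSON payload to the embedded REST server.
  const response = await fetch(`http://${deviceIp}/api/wifi`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(config),
  });

  if (!response.ok) {
    throw new Error(`Device rejected configuration: HTTP ${response.status}`);
  }
}

// Usage example with hypothetical values:
sendWifiConfig("192.168.4.1", { ssid: "MyHomeNetwork", passkey: "correct-horse-battery" })
  .then(() => console.log("Wi-Fi credentials sent; device will reboot into normal mode"))
  .catch((err) => console.error(err));
```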

Although not related to configuration, another generic scenario encountered with consumer electronics devices is firmware upgrade, or OTA (over-the-air) upgrade as it has come to be known. The app typically alerts the user to the availability of the latest firmware, based on which the user decides to initiate the upgrade. The firmware image is downloaded from a designated server of the manufacturer, typically an FTP server; a file integrity check is performed after download, the firmware image is written into flash, the boot loader parameters are modified and the device reboots (often more than once). All of this happens sequentially, silently under the hood, with the app finally informing the user of the status of the upgrade.

The OTA feature also helps the manufacturer ship the hardware to hit the stores early, while allowing them to conveniently roll out updated features to users. So the next time we buy a device, like a fitness band, and install its App and it triggers off an upgrade (the first-time upgrade often being unsolicited!), we know what’s been happening! The App therefore forms a critical link in the whole upgrade process since the user needs to be kept in the loop always.

Configuration of headless systems has come a long way, the connectivity and Apps migrating across multiple technologies and platforms. In consumer electronics an increasing list of non-Configuration features are getting included in the Apps, such as firmware upgrades, library settings, playlists, favorites etc., making today’s consumer electronics products ever more user friendly and competitive.

(*) – While products from most home audio leaders like Bose, Denon, Yamaha etc. do support the DLNA standard, some of them are known to be moving towards proprietary APIs for media streaming and control. However they will most probably continue to leverage the underlying UPnP layer for device discovery, description and connection services.