Why Verification of SoCs is Critical in High Integrity Applications

By Enrique Martinez-Asensio, Functional Safety Manager in Silicon Characterization at EnSilica

High integrity/reliability (hi-rel) electronic systems are those used in applications where failure is simply not an option. Industries in this category include automotive, aerospace, medical, and manufacturing, where reliability and safety functions are not just critical but mandated through various regulatory standards.

Many of these applications are driven by a dedicated system-on-chip (SoC) incorporating processing units, memory, analog, RF, and more in devices containing millions of transistors and thousands of lines of embedded code. Such high numbers are understandably daunting; how can we be sure that nothing will go wrong?

Some of the most infamous accidents in the aviation and automotive fields have been attributed to bugs in vehicle control hardware or software. The Boeing 737 MAX incidents saw pilots lose control of the aircraft due to a faulty sensor reading, causing two fatal crashes and grounding the 737 MAX fleet for more than a year while investigations took place. The Toyota “unintended acceleration” problem in 2009 led to numerous deaths and the emergency recall of around 10 million vehicles. Both incidents led to lengthy and expensive lawsuits against the companies involved.

Design vulnerabilities have since come to light, which point to insufficient verification of the electronic systems involved. This sends a very clear message: we still have a lot of learning to do when it comes to the design and deployment of hi-rel electronic systems.

Some changes are required in the development process

Having a robust development process is a must when dealing with hi-rel systems. From top-level requirement specifications to detailed implementation, having a clean documentation management system with full traceability will ensure that any changes along the design cycle are properly monitored, analyzed, and approved. The so-called V-Model, which splits the development process into design, implementation, and integration/testing, is commonly used to guide this process. But more can be done.

A deep analysis of how things can fail, of the consequences, and of the remedies is absolutely necessary. This can be achieved using standard techniques such as FMEA (Failure Mode and Effects Analysis) and/or FTA (Fault Tree Analysis). The relevant standards require developers to provide objective evidence of the achieved level of safety through specific metrics, such as unsafe FIT rates, the SPFM (Single-Point Fault Metric), or the SFF (Safe Failure Fraction).
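To make these metrics concrete, here is a minimal sketch of how they are computed. The formulas follow the usual ISO 26262 and IEC 61508 definitions, but every FIT figure below is hypothetical, chosen only for illustration.

```python
# Illustrative sketch (hypothetical FIT figures): how the SPFM and SFF
# safety metrics are derived from failure-rate budgets.

def spfm(total_fit, single_point_fit, residual_fit):
    """ISO 26262 single-point fault metric:
    SPFM = 1 - (lambda_SPF + lambda_RF) / lambda_total."""
    return 1.0 - (single_point_fit + residual_fit) / total_fit

def sff(safe_fit, dangerous_detected_fit, dangerous_undetected_fit):
    """IEC 61508 safe failure fraction:
    SFF = (lambda_safe + lambda_DD) / lambda_total."""
    total = safe_fit + dangerous_detected_fit + dangerous_undetected_fit
    return (safe_fit + dangerous_detected_fit) / total

# Hypothetical numbers for a block with 1000 FIT of safety-related faults
print(f"SPFM = {spfm(1000.0, 5.0, 15.0):.1%}")   # 98.0%
print(f"SFF  = {sff(600.0, 350.0, 50.0):.1%}")   # 95.0%
```

ASIL D, for instance, requires an SPFM of at least 99%, so a budget like the one above would need stronger safety mechanisms.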

Existing industry standards

Several industry standards have already been published around the concepts of reliability and functional safety with the purpose of ensuring that compliant products will be safe. The entire product life-cycle is covered in such documents: product definition, project management, design, implementation, integration, verification, validation, production and even service and decommissioning.

For instance, the automotive standards ISO 26262 and ISO/PAS 21448 (SOTIF) apply to most of the non-entertainment electronics present in a car: engine control, braking (ABS), airbags, radar/lidar anti-collision systems, and especially the newest generation of ADAS systems. Industrial control systems must follow the IEC 61508 standard when safety is critical, and robotic systems are subject to an adapted standard, ISO 13849.

What do these standards have in common? The need for a tight control of all the design and verification processes, the analysis of how things can fail, and the adoption of new approaches to the hardware and software development methodologies. With this in mind, the verification phase becomes a crucial milestone in achieving both reliability and functional safety.

All the standards mentioned – and more – have sections dedicated to product verification. In the context of semiconductors, verifying a complex SoC containing millions of gates is not an easy task, but it becomes even harder if the silicon serves a high integrity system: specific scenarios where faults are present must be taken into account to make sure that the system will react properly.

The critical role of verification

Verifying the correct behaviour of a SoC against the specified safety or reliability requirements is probably the most critical step in the chip product life-cycle, and the previously mentioned standards dedicate entire sections to this topic. At either the hardware, software or hardware-software integration levels, different methods are recommended to guarantee that the product won’t cause issues when doing its job. As an example, the ISO 26262 recommends the following hardware verification methods when a product must handle ASIL D safety requirements:

  • Requirements compliance, especially safety ones
  • Internal and external interfaces
  • Boundary values
  • Knowledge or experience based error guessing (lessons learned)
  • Functional dependencies
  • Common limit conditions, sequences and sources of dependent failures
  • Environmental conditions and operational use cases
  • Process worst cases and significant variants
  • Fault injection simulation


Verifying compliance with the specified standards is especially important where safety requirements are concerned. Using safety analysis techniques (FMEA, FTA, etc.), safety engineers determine which mechanisms are necessary to tackle safety issues, and it then becomes the verification engineer’s task to prove their effectiveness.

Safety standards do not concern themselves with the technical details of implementation; they simply require that specific test cases be created for each of the points above. It is up to the verification engineer to determine the best technical approach, using the standard verification techniques of silicon design: RTL simulation, STA, Monte Carlo analysis, etc. Needless to say, all steps must come with the right documentation and traceability through a verification plan, verification specs and, finally, a verification report.

Fault injection

Knowing the response of a SoC under faulty conditions is of paramount importance in the verification of high integrity systems, and the technique called “fault injection” provides the solution.

A common mistake of newcomers to hi-rel engineering is to confuse the terms “fault simulation” and “fault injection”. “Fault simulation” refers to the standard technique of measuring how good a test pattern is in terms of observability: that is, whether a fault occurring at an internal node (stuck-at-1, stuck-at-0) will be detected at the I/O pins. This technique helps to build effective test patterns for device screening at the industrial production stage, and it is normally available in simulation EDA tools. The figure of merit when using this methodology is the percentage of faults covered by a given test pattern.
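As a small illustration of that figure of merit, the sketch below computes the coverage of a hypothetical pattern; the fault names and detection results are invented for the example.

```python
# Minimal sketch of the fault-simulation figure of merit: the fraction
# of modelled stuck-at faults that a test pattern detects at the pins.
# Fault names and detection outcomes below are hypothetical.

def fault_coverage(results):
    """results maps fault name -> True if detected at the I/O pins."""
    detected = sum(results.values())
    return detected / len(results)

campaign = {
    "u_alu/n42 stuck-at-0": True,
    "u_alu/n42 stuck-at-1": True,
    "u_ctrl/n7 stuck-at-0": False,   # undetected: pattern needs work
    "u_ctrl/n7 stuck-at-1": True,
}
print(f"coverage = {fault_coverage(campaign):.0%}")  # 75%
```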

“Fault injection” is something different. It is a technique used to verify whether the internal chip mechanisms designed to mitigate failures react properly. For example, in a digital chip containing memories equipped with error detection and correction (EDC), a soft error toggling a memory cell must have no consequences if the EDC works properly.
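To illustrate the behaviour that fault injection must confirm, the sketch below uses a textbook Hamming(7,4) code as a stand-in for a real EDC block: a single injected bit flip is corrected and the stored word survives intact. This is an illustrative model, not any specific memory controller.

```python
# Hedged sketch of the EDC idea: a Hamming(7,4) code corrects any
# single bit flip, so a soft error toggling one memory cell has no
# visible consequence after decode. Parity bits sit at the classic
# codeword positions 1, 2 and 4.

def hamming74_encode(d):                 # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]              # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]              # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]              # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def hamming74_decode(c):                 # c: 7-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of the error
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1             # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]      # recover the data bits

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[4] ^= 1                           # inject a soft error (bit flip)
assert hamming74_decode(stored) == word  # the EDC masks the fault
```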

The user interface of fault injection tools is much more complex than that of fault simulation, since the pass/fail criteria are not so obvious: in some cases the system requirements stipulate that, in the event of a failure, the SoC must jump to a safe state (e.g. ISO 26262), so a simple comparison with a known-good reference pattern is not sufficient and a more elaborate comparison is needed. In silicon chips, fault injection must be performed at the pre-tape-out stage or at device validation/qualification time.

Different approaches are available for this task at the SoC RTL or gate level, including some dedicated EDA tools; abundant literature is available on this topic. A possible solution is to use the Verilog Programming Language Interface (PLI) to create dedicated test cases containing PLI-coded fault injectors, as depicted in the following figure.


Verilog RTL test case with fault injection

Fault injection instances can be placed in the Verilog code once the test campaign has been defined, targeting the specific nodes where faults should be injected.
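One way to organise such a campaign is to generate the injector parameters programmatically before writing the test cases. The sketch below builds a simple cross-product campaign; the node names, fault models and injection times are hypothetical.

```python
# Sketch of a fault-injection campaign definition: a cross-product of
# target nodes, fault models and injection times, from which per-test
# injector parameters can be generated. All names and values are
# hypothetical, for illustration only.

from itertools import product

NODES = ["dut.cpu.alu.result[3]", "dut.mem.ecc.syndrome[0]"]
FAULTS = ["stuck_at_0", "stuck_at_1", "bit_flip"]
INJECT_TIMES_NS = [100, 500]

def build_campaign():
    return [
        {"node": n, "fault": f, "t_ns": t}
        for n, f, t in product(NODES, FAULTS, INJECT_TIMES_NS)
    ]

campaign = build_campaign()
print(len(campaign))   # 2 nodes x 3 faults x 2 times = 12 test cases
```

Each entry can then be rendered into a Verilog test case that forces the named node to the faulty value at the chosen time.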

Fault injection at the silicon validation or qualification stages can be done in different ways, depending on the product’s complexity and category. Aerospace chips normally require a radiation qualification campaign to verify their immunity to soft errors and other radiation effects. Even automotive standards such as AEC-Q100 recommend a soft-error qualification for chips containing more than 1 Mbit of RAM.

Testability

In the design of high integrity SoCs there is a pitfall: what is good for reliability is not good for testability. Indeed, the use of redundancy, such as standard triple modular redundancy (TMR) flip-flops with three-way voting, may leave faults unscreened during chip production, since the TMR will “correct” them. Therefore, special measures must be taken during the design and verification phases to disable such redundancy when the chip is not in mission mode. For example, a chip using scan-path as its DFT strategy must treat every TMR flip-flop as three separate ones.
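A tiny sketch makes the pitfall visible: the majority voter of a TMR flip-flop masks a single stuck-at fault, which is good in mission mode but hides that same fault from production screening unless the redundancy can be disabled.

```python
# Sketch of TMR fault masking: the majority voter hides a single
# replica error, so production test must observe the three replicas
# individually rather than the voted output.

def tmr_vote(a, b, c):
    """Majority of three replicated flip-flop outputs."""
    return (a & b) | (a & c) | (b & c)

good = 1
faulty_replica = 0                # one replica stuck-at-0
assert tmr_vote(good, good, faulty_replica) == good   # fault masked
# If the tester only sees the voted output, this stuck-at fault
# escapes screening; in test mode each replica must be scanned out.
```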

It goes without saying that the test-mode entry mechanism must be designed so that it is virtually impossible to trigger accidentally in mission mode. This is a test case that must be part of every fault injection simulation campaign.

Embedded Software

Complex SoCs normally contain embedded processors that manage the overall data processing flow. Internal ROM, OTP or flash memories store the firmware image which, depending on the application, is loaded at production time or at system startup via a communications interface (bootloader). Moreover, systems based on flash memories offer the possibility of upgrading the image once the application is in the field.

The software development process is subject to exhaustive verification steps, and in the case of ISO 26262 ASIL D, the standard proposes different methods, which often need the use of auxiliary tools.

The software architecture definition and the subsequent code writing stage must guarantee clean and readable code. A classic open-source code analysis tool called “lint” gave rise to the verb “linting” for this kind of verification. Coding rule sets, such as the well-known MISRA C guidelines, must be respected to avoid the obfuscation that languages like C or C++ can introduce.

Prior to integrating the different software units, an individual verification must be performed through the so-called “unit testing”, and some recommended methods by ISO 26262 at this step are:

  • Analysis of requirements, especially safety ones
  • Analysis of boundary values; bugs normally hide in the corners.
  • Error guessing, using the lessons learned process


Testing the individual software units that will be integrated into the SoC flash, or other non-volatile memory, may be challenging, since the companion hardware is normally not yet available at verification time. Techniques allowing hardware-software co-verification must be used, such as the so-called “hardware-in-the-loop” (HIL) approach, through the use of emulation FPGAs or other EDA tools dedicated to this purpose.

Such tools monitor the code’s behaviour, also providing reports on branch and condition coverage, as required by the relevant standards.

At the final hardware-software verification phase, once silicon samples are available, some verification methods are recommended by ISO 26262:

  • Interface test
  • Fault injection test
  • Resource usage test
  • Back-to-back comparison test between model and code


Again, the fault injection test shows up. In such an environment, with the real hardware available, another methodology is necessary. Different approaches have been proposed, for example using the JTAG debug interface with a script-based fault injection campaign.

Fault injection at the prototype verification stage

Conclusions

Different standards tailored to different industrial contexts have been published to guide hi-rel product development, but all of them have something in common: high integrity systems require a careful verification plan, able to reproduce critical situations that could occur in the field, to ensure that the implemented safety measures do their job properly. Even though effective verification methods have been proposed, the high complexity and time-consuming nature of these tasks show that there is still plenty of room to make the process more reliable and efficient.

At EnSilica, we have a robust development process, as well as the experience and tools necessary to produce the most demanding hi-rel SoCs serving applications in the automotive, aerospace, medical, and industrial fields.

 

The Future of Healthcare: AI, Wearable Technology, and the Role of ASICs

 

While healthcare may have lagged behind sectors like fintech or education, its digital transformation in recent years has been nothing short of seismic. New technologies as well as social and environmental challenges have propelled the industry forward, from the emergence of telehealth and virtual appointments, to machine learning (ML) powered diagnostics and remote patient monitoring. One major technological development that is particularly suited to healthcare is wearable technology. The market for wearable devices has boomed in recent years, with some analysts predicting it will grow by 97% to reach $161 billion by 2033. Much of that growth can be attributed to applications in healthcare, whether it’s individuals looking to take control of their own health, or healthcare providers looking to improve their diagnostic and monitoring capabilities.

 

Wearable devices are nothing short of game changing. Equipped with advanced sensors, they can continuously monitor vital signs such as heart rate, blood pressure, glucose levels, and more, providing invaluable real-time data to patients and healthcare providers without the need for cumbersome home kits or frequent hospital visits. When combined with AI and ML, wearable devices have the potential not only to improve patient outcomes, but drive the industry forward in terms of clinical research and diagnostics. From cochlear implants and hearing aids, to remote fertility monitoring and mobile cardiac telemetry, the possibilities are seemingly endless.

 

However, there is a catch. Wearable devices are small and inconspicuous by design, and are expected to function continuously for long periods of time, and that creates engineering problems. Battery power and heat dissipation are two areas that must be carefully considered, and with an increasing number of devices expected to perform at the edge, local processing capabilities are also a factor.

 

Enter ASICs, or Application Specific Integrated Circuits. These specialized chips are designed to perform dedicated functions with higher efficiency and lower power consumption than general-purpose processors. In the context of healthcare, ASICs are crucial to ensuring that wearables can operate for extended periods on minimal battery power, making them reliable for continuous monitoring. ASICs can also facilitate edge computing, where data processing occurs directly on the device, preserving privacy and ensuring functionality even in areas with poor connectivity. Before we explore the technology in more detail, let’s first explore the reasons behind the boom in wearable technology and how it’s creating unprecedented opportunities for early detection, continuous monitoring, and personalized treatment plans.

 

The burden of non-communicable diseases (NCDs)

Non-communicable diseases (NCDs), including cardiovascular diseases, cancer, respiratory diseases, and diabetes, are the leading cause of death globally, accounting for three out of four deaths worldwide, according to the World Health Organization (WHO). These chronic conditions place a tremendous burden on healthcare systems and economies, particularly in low- and middle-income countries, where healthcare resources are often limited and overstretched. Early detection and continuous monitoring are essential strategies in combating NCDs, as timely intervention can prevent severe complications, remove some of the burden placed on healthcare delivery, and ultimately reduce mortality rates.

 

Traditional methods of diagnosing and monitoring NCDs have typically relied on sporadic testing of key indicators such as blood pressure, glucose, and cholesterol levels. However, this approach can actually hinder early detection and timely intervention, as critical changes in a patient’s condition may go unnoticed between tests. The advent of wearable technology addresses this gap by providing continuous, real-time monitoring of vital signs. These devices, ranging from smartwatches to specialized medical monitors, collect data seamlessly and frequently, offering a comprehensive view of a patient’s health status. By enabling ongoing assessment and immediate alerts to potential health issues, wearables empower both patients and healthcare providers to take proactive measures in managing NCDs effectively.

 

The synergy of wearables, AI, and edge computing

Remote patient monitoring isn’t new. What is new, however, is the rapid design and manufacture of new devices that can leverage AI and ML to maximize its potential. AI excels in processing and analyzing vast amounts of data, identifying patterns and anomalies that might be missed by human observation. In healthcare, AI-driven analysis can enhance the accuracy of diagnostics and prognostics, offering personalized insights based on an individual’s unique health data, and predicting health issues before they become critical.

One essential piece to this puzzle is the concept of edge computing. Traditionally, data from wearables would be transmitted to cloud servers for processing, requiring significant bandwidth and posing potential privacy risks. Edge computing avoids these issues by processing data locally on the device itself. This approach not only reduces the amount of data that needs to be transferred, but also ensures that sensitive medical information remains secure. What’s more, edge computing enables devices to function effectively even in areas with poor internet connectivity, a crucial advantage in low- and middle-income regions where NCDs are most prevalent. By embedding AI capabilities directly into wearables through advanced chips like ASICs, healthcare technology can provide faster, more reliable, and more secure solutions, revolutionizing the management and treatment of chronic diseases.


Power efficiency and the role of ASICs

The design and manufacture of wearable tech is not without its challenges. Wearables are often required to operate continuously for extended periods, sometimes 24/7, to provide real-time health monitoring. Ensuring that these devices consume minimal power while maintaining high functionality is crucial not only for user convenience but also for the feasibility of continuous health monitoring. ASICs are custom-designed chips tailored to perform specific tasks with greater efficiency than general-purpose processors. Unlike traditional processors, which may carry unnecessary functionalities that drain battery life, ASICs include only the circuits required for the specific application, thereby reducing energy consumption. This optimization allows wearable devices to function longer on a single battery charge – critical for uninterrupted monitoring.

 

ASICs can also facilitate the integration of multiple functions onto a single chip, including analogue front-ends (AFEs), data converters, voltage references, and oscillators. This integration not only reduces the physical size of the device but also minimizes overall power consumption by eliminating the need for multiple discrete components. Local data processing is also possible, reducing the need to transfer large amounts of data to external servers. This local processing capability allows for real-time analysis and decision-making, essential in healthcare settings where timely responses can be life-saving. Although developing an ASIC involves higher upfront costs compared to using commercial off-the-shelf (COTS) components, the long-term benefits often outweigh these initial investments. For instance, integrating multiple functionalities into a single chip can significantly reduce the bill of materials (BoM) and streamline the supply chain, leading to cost savings over time. Put simply, ASICs will continue to play a crucial role in enhancing the power efficiency and functionality of healthcare wearable devices, paving the way for more effective and accessible health monitoring solutions.

 

As healthcare technology continues to evolve, the fusion of AI, wearable devices, and ASICs heralds a new era of proactive and personalized medicine. By overcoming the challenges of power efficiency and data privacy, these innovations promise not only to enhance patient care but also to democratize access to advanced medical diagnostics and monitoring, particularly in underserved regions.

By David Rivas-Marchena

EnSilica: Leading the Way in FinFET Technology

In the dynamic landscape of wireless communication, semiconductor innovations play a pivotal role. Among these, FinFET technology stands out as a game-changer. Let’s explore how FinFETs are shaping the future of 5G and satellite communication.

MIMO Systems: A Brief Overview

Deployment of 5G terrestrial and satellite communications relies heavily on the integration of Multiple-Input, Multiple-Output (MIMO) systems to meet the link budget requirements and mitigate interference.

Fin field-effect transistors (FinFETs) represent a departure from traditional planar transistors. Their three-dimensional fin-like structure allows better control over current flow, reducing leakage and improving power efficiency. This technology has been a key enabler for the design of highly integrated digital radio ASICs, based on RF ADCs and DACs, supporting multiple standards.

Key Changes with FinFETs

  1. Improved RF and Analog performance:
  • FinFETs exhibit superior transconductance (Gm) and output resistance (Rout) characteristics, which are paramount for implementing the highly linear analog signal processing required in MIMO systems.
  • Excellent RF performance, with fT values of 600 GHz.
  • MIMO signals remain pristine even in challenging environments.
  2. Power Efficiency Boost:
  • The subthreshold slope of FinFET devices is practically ideal. This drastically reduces leakage current—the silent power drain—that has plagued semiconductor devices for years.
  • For 5G devices and satellite communication terminals, where power constraints are stringent, FinFET-based designs shine.

 

Design Challenges

Despite the improvements in performance, FinFET technology introduces design complexities that impact the work of engineers and their approach to RF ASICs.

Now more than ever, analog designers delving into the intricacies of FinFETs need accurate modelling to ensure that their designs align with theoretical predictions. Rigorous simulations validate performance metrics, allowing for confident deployment.

At the same time, designers face a delicate balancing act: performance, power consumption, and chip area intertwine, and effective trade-offs lead to optimized designs that meet real-world requirements.

Impact on Satellite Communication

  • Custom ASICs empower cost-effective, low-power satellite broadband user terminals.
  • These terminals, based on MIMO communication systems, enable seamless connectivity, even in remote areas. Collaborations with space agencies underscore the commitment to advancing satellite communication technology.
  • Hybrid satellite/5G networks are on the horizon, promising ubiquitous connectivity.

The journey through the FinFET landscape is one of innovation, precision, and impact. As we embrace the future of wireless communication, semiconductor advancements remain a beacon—a testament to human ingenuity.

 

For more in-depth insights, please refer to the full article in the Microwave Journal by EnSilica’s FinFET expert Gabriele Devita.

Designing RF ASICs for Space: Understanding the Environmental Challenges of Satellite-Based Electronics

In the first installment of this two-part blog series on space-based ASICs, we delve into the intricate challenges of designing ASICs for the demanding environment of space. The stakes are high; launching and maintaining satellite equipment in orbit is a costly endeavor, so ensuring the effectiveness and reliability of integral components is paramount.

 

In this article, we’re focusing on the unique environmental challenges that ASICs used on satellites encounter beyond the Earth’s atmosphere. Unlike their terrestrial counterparts, these specialized circuits must withstand extreme conditions that go far beyond the usual demands of electronic components. From the intense radiation to the unforgiving vacuum of space, every aspect of their design demands meticulous attention to detail and precision. This is where the resilience and ingenuity of satellite ASICs truly shine, paving the way for space-based innovation and observation.

 


Satellite Orbiting Earth. 3D Scene. Elements of this image furnished by NASA.

 

The Unique Environment of Space for Satellite ASICs

The environment of space presents a range of challenges that are vastly different from those on Earth, posing unique hurdles for the design and functionality of RF ASICs on satellites. One of the most significant factors is the extreme temperature variations that space-bound equipment must endure. Unlike the more controlled terrestrial environments, satellite-mounted ASICs can be exposed to temperatures ranging from the intense cold of deep space to the searing heat when exposed to direct sunlight. This extreme range, which can fluctuate by as much as 150°C, demands robust design considerations to ensure the operational integrity and longevity of the ASICs in such fluctuating conditions.

 

Another critical aspect unique to space is the vacuum environment. This absence of atmosphere affects not only the thermal management of satellite ASICs but also impacts material selection and structural design. In space, the lack of air means that traditional cooling methods through air convection are ineffective, necessitating reliance on thermal radiation for heat dissipation. This shift requires a different approach to thermal management, with a focus on radiation and insulation techniques. Additionally, the vacuum of space can lead to outgassing from materials, potentially causing delamination or other forms of degradation. Ensuring that ASICs are designed with these factors in mind is crucial for their successful operation in the harsh and unforgiving environment of space.

 

Understanding Radiation and Its Impact on Satellite ASICs

A critical challenge in the design of an RF ASIC destined for space is the management and mitigation of radiation, a pervasive and potentially destructive force in the extraterrestrial environment. Space radiation primarily comprises high-energy particles, including protons and electrons from solar winds and cosmic rays from distant galaxies, which can have severe effects on electronic components. These particles, when interacting with the delicate structures of ASICs, can lead to various forms of damage, such as the buildup of charged particles in the gate oxides of transistors, altering their operational characteristics. In the realm of satellite ASICs, this can manifest as changes in threshold voltages in transistors, potentially leading to malfunction or failure of the circuit. The impact is more pronounced in smaller gate sizes, common in modern ASIC designs, where the probability of radiation-induced damage is significantly higher. Understanding these radiation effects is not only crucial for the initial design but also for the ongoing reliability and functionality of satellite ASICs operating in such a high-radiation environment. This understanding forms the basis for developing effective mitigation strategies to protect these sophisticated components from the harsh realities of space radiation.


Mitigation Techniques Part 1: Design and Material Considerations

When it comes to satellite ASICs, effective mitigation against the harsh radiation of space is achieved through a blend of innovative design and strategic material selection. For RF ASICs, which are integral in communication and signal processing in satellites, the choice of gate material and structure is pivotal. Materials that offer higher resistance to radiation help in reducing the vulnerability of these ASICs to radiation-induced damages, such as threshold voltage shifts, which are critical in maintaining signal integrity and performance. The physical design of the ASICs, including the layout and sizing of the gates, is also tailored with radiation resilience in mind. This is particularly important for RF ASICs, where precision and reliability in signal processing are paramount.

 

What’s more, the overall packaging of these ASICs plays a vital role in radiation protection. Utilizing radiation-hardened packaging materials and specialized insulation techniques provides an additional defense layer, crucial for shielding the sensitive electronic components from direct radiation exposure. This approach is especially relevant for RF ASICs, as it ensures the integrity and efficiency of communication systems, which are often the lifeline of satellite operations. These design and material considerations form the cornerstone of the development of robust satellite ASICs, ensuring their operational effectiveness and longevity in the challenging environment of space.

 

Stay tuned for our next blog, where we will delve deeper into advanced mitigation techniques and explore the crucial role of software strategies in safeguarding ASICs against the unpredictable nature of space.

 

*The opening image of the James Webb Space Telescope is credited to NASA/Desiree Stover.

 

Medical ASICs: Balancing Power and Performance in Wearable Technologies

When it comes to medical devices, navigating the intricate world of ASICs is something of a tightrope walk. Each step of the design phase involves striking a delicate balance between power, efficiency, and performance, where every gain comes at a cost. Medical ASICs are the beating hearts of potentially life-saving devices that must remain operational around the clock, yet are constrained by the finite capacity of their batteries. In this first part of our two-part blog series, we explore the nuanced tradeoffs that define ASIC design in medical applications, focusing on the harmonization of “always-on” functionality with the critical limitations of power resources, and how these factors interplay to shape battery size and device performance.

Balancing “always on” demands with battery capabilities

The need for “always on” functionality is now commonplace, but nowhere is it more crucial than in the development of wearable medical devices. This presents a formidable challenge when it comes to designing medical ASICs, where high performance meets the stark reality of limited battery capabilities. That’s why custom ASICs, the linchpins in these medical devices, are increasingly engineered with a keen focus on striking this critical balance. But it’s not easy. The battery’s capacity, a finite resource, automatically becomes the defining factor for the device’s energy management and overall functionality. This limitation is even more acute in devices that eschew traditional batteries for innovative energy-harvesting solutions.


However, there are ways of managing this trade-off, particularly when it comes to the strategic decision-making around the device’s duty cycle. Despite the “always on” label, an optimally designed medical device actually spends much of its time in a low-power state, conserving energy by remaining dormant until needed. This is where the ingenuity of custom ASICs shines, employing power gating as an essential tool to control leakage current in transistors – a persistent issue in medical ASICs.


The solution often lies in creating multiple clock and power domains within the ASIC, allowing power to be judiciously supplied to specific subsystems only when necessary. Typically, the most consistently active elements are a low-power timer and a memory buffer, which periodically activate the front-end circuitry for brief data conversion and storage tasks. This selective engagement, a hallmark of sophisticated ASIC design, is pivotal in marrying the necessity of “always-on” functionality with the stringent power limitations that are a hallmark of modern medical devices.
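As a rough illustration of why this duty cycling matters, the back-of-envelope arithmetic below estimates average current and battery life from an assumed duty cycle. Every figure is hypothetical and chosen only for illustration, not taken from any particular device.

```python
# Illustrative duty-cycle energy budget; every figure here is an assumption.
I_ACTIVE_MA = 2.0    # current while front-end + DSP are awake (mA), assumed
I_SLEEP_UA = 1.5     # timer + retention memory sleep current (uA), assumed
T_ACTIVE_MS = 5.0    # length of one capture burst (ms), assumed
PERIOD_S = 1.0       # one capture burst per second, assumed
BATTERY_MAH = 50.0   # small coin-cell capacity (mAh), assumed

duty = (T_ACTIVE_MS / 1000.0) / PERIOD_S          # fraction of time awake
i_avg_ma = duty * I_ACTIVE_MA + (1.0 - duty) * (I_SLEEP_UA / 1000.0)
life_days = BATTERY_MAH / i_avg_ma / 24.0

print(f"duty cycle: {duty:.1%}, avg current: {i_avg_ma * 1000:.1f} uA, "
      f"life: {life_days:.0f} days")
```

With these assumed numbers the device is awake only 0.5% of the time, the average draw falls to roughly 11 µA, and a 50 mAh cell lasts around six months; keeping the same front-end awake continuously would drain it in about a day.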


Optimizing performance with limited battery size

The pursuit of high performance in medical ASICs often leads designers into something of a struggle with battery size. High-resolution Analog-to-Digital Converters (ADCs), a staple in many medical devices for their accuracy and dynamic range, typically employ a sigma-delta architecture. This architecture, while cost-effective and precise, can be a significant drain on power resources. The challenge here is to achieve the necessary performance without imposing undue demands on the battery.


In sigma-delta ADCs, a digital filter section effectively trades sample rate for resolution, derived from a relatively simple analog input stage. This setup is ideal for managing interference in noisy environments, common in medical applications. However, the downside is the substantial energy required not just for the oversampling and filtering by the Digital Signal Processor (DSP), but also for the extensive digital post-processing on the host microcontroller. Each capture cycle, therefore, becomes a power-intensive operation, exacerbated by the high latency characteristic of sigma-delta converters, especially when high resolution is desired.


A more energy-efficient approach involves tackling interference closer to its source, using mixed-signal circuitry within the ASIC to address common noise sources. This strategy allows for a cleaner, lower-rate signal to be sent to the microcontroller, reducing overall circuit activity and, consequently, power consumption. Custom DSPs integrated into the ASIC can perform digital filtering of the oversampled signal, achieving two critical goals: reducing the dynamic range requirement for the ADC and enabling transmission of the filtered signal at a lower sample rate. This not only conserves power but also allows for the buffering of output samples within the ASIC, reducing the frequency at which the microcontroller needs to wake up for data processing. In some designs, only specific signal features or events, such as abnormal heart-rate readings, are transmitted, further minimizing power usage and maximizing the efficiency of medical ASICs.
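The benefit of filtering and decimating the oversampled signal can be sketched numerically. The snippet below is a toy model (plain block averaging rather than a real sigma-delta decimation filter, with made-up signal and noise levels), but it shows the principle: averaging 64 noisy samples per output point cuts wideband noise by roughly sqrt(64) = 8, so a much cleaner, lower-rate stream can be handed to the host microcontroller.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the sketch is reproducible
OSR = 64                         # oversampling ratio, assumed
N_OUT = 200                      # output samples after decimation

t = np.arange(N_OUT * OSR) / (N_OUT * OSR)
clean = np.sin(2 * np.pi * 3 * t)                  # slow "physiological" signal
noisy = clean + rng.normal(0.0, 0.1, clean.size)   # wideband noise, sigma = 0.1

# Decimate by averaging each block of OSR samples (a crude low-pass filter)
decimated = noisy.reshape(N_OUT, OSR).mean(axis=1)
clean_dec = clean.reshape(N_OUT, OSR).mean(axis=1)

noise_in = float(np.std(noisy - clean))
noise_out = float(np.std(decimated - clean_dec))
print(f"noise before decimation: {noise_in:.4f}, after: {noise_out:.4f}")
```

A real design would use a proper decimation filter (e.g. CIC or FIR stages) rather than block averaging, but the power argument is the same: the microcontroller only ever sees the low-rate output.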


Final thoughts

When it comes to designing medical ASICs, the tradeoffs between “always-on” functionality, performance, and battery size are not just challenges; they’re also opportunities for innovation and optimization. As we have explored, the key lies in intelligent design choices that balance these competing demands, ensuring that medical devices deliver on their promise of reliability and efficiency. Custom ASICs, with their ability to finely tune power states and manage energy consumption smartly, are key to mastering this balancing act.


However, the exploration of power and performance tradeoffs is just the beginning. In the next article in this two-part series, we’ll delve into the equally critical aspects of functionality versus form factor and the balancing act between reducing the Bill of Materials (BoM) and managing costs. These considerations are pivotal in shaping the final design and effectiveness of medical ASICs, further demonstrating the nuanced complexity of ASIC design in the medical field. Remember, in this high-stakes arena, understanding and managing these tradeoffs is not just about technical proficiency; it’s about crafting solutions that ultimately enhance patient care and safety.


For more information, contact EnSilica’s expert team

EnSilica to supply beamforming SatCom ICs to Germany’s VITES to enable power- and cost-efficient flat panel terminals for NGSO constellations

Focus on connectivity systems for moving vehicles. Initial applications include commercial and government land-based vehicles


EnSilica (AIM: ENSI), a leading turnkey supplier of mixed signal ASICs and SoCs, has announced it is to supply VITES with a new beamformer chip for satellite user terminals. VITES will use the chip at the heart of its new ViSAT-Ka-band terminal.

EnSilica’s beamforming chip is optimised to enable VITES to create power- and cost-efficient ground-based flat panel user terminals for satellite communication systems that can be used across a range of fixed and SatCom-on-the-move (SOTM) applications.

VITES specialises in broadband wireless systems for professional applications and has been delivering flat panel terminals based on its own phased array technology since 2019.

Its new ViSAT-Ka-band terminal is intended to be integrated into vehicles for Communications-on-the-Move applications. As such, it is able to track the movement of low earth orbit (LEO) and other non-geosynchronous orbit (NGSO) satellites, allowing users to access high-speed connectivity anywhere on the planet while the vehicle is moving.

ViSAT-Ka uses an innovative scalable architecture that enables extremely power-efficient TDD and FDD terminals at affordable prices. On this advanced technical base, VITES is developing terminals for both land-based and maritime vehicles of any kind, including cost- and power-sensitive automotive applications.

This means that vehicles can access “always-on” broadband connections outside of 4G/5G network coverage areas, with high throughput, and in an ultra-compact form factor that allows for both retrofit and line-fit installation.

Paul Morris, VP of the RF and communications business unit at EnSilica commented:

“EnSilica has been investing in this area for several years, working closely with the UK Space Agency, European Space Agency and VITES GmbH. VITES has provided clear terminal requirements, so we have been able to optimise our solution for the market. This beamformer chip can be paired with a range of Ka- and Ku-band RFICs, including our own EN92030, allowing it to cover the various LEO or GEO constellations.”

Martin Gassner, CEO of VITES said:

“For the upcoming NGSO-constellations, quantum leap solutions are required in order to drive performance up, while driving cost and power consumption down. Our new ViSAT-Ka-Band terminals are the solution to these challenges. We’re delighted to be partnering with EnSilica and working closely with them on leading SOTM solutions that are delivering broadband connectivity even when there is no access to terrestrial networks.”


Fig 1) The ViSAT-Ka-band terminal is intended to be integrated into vehicles for Communications-on-the-Move.


Fig 2) Map showing 4G and 5G coverage in the US and Germany, with significant areas not served by any operator. Copyright nPerf

For further information please contact:


EnSilica plc

Ian Lankshear, Chief Executive Officer

www.ensilica.com

Via Vigo Consulting

+44 (0)20 7390 0233


VITES GmbH

Martin Gassner, Chief Executive Officer

www.vites.de


+49 (0)89 6088 4600

info@VITES.DE

Sonus PR (Technology Public Relations)

Rob Ashwell

+44 (0)7800 515 001 ensilica@sonuspr.com


About EnSilica

EnSilica is a leading fabless design house focused on custom ASIC design and supply for OEMs and system houses, as well as IC design services for companies with their own design teams. The company has world-class expertise in supplying custom RF, mmWave, mixed signal and digital ICs to its international customers in the automotive, industrial, healthcare and communications markets. The company also offers a broad portfolio of core IP covering cryptography, radar, and communications systems. EnSilica has a track record of delivering high quality solutions to demanding industry standards. The company is headquartered near Oxford, UK and has design centres across the UK, in Bangalore, India, and in Porto Alegre, Brazil.


About VITES

VITES GmbH (“VITES”) is an innovative German supplier of high-performance broadband wireless systems and customized solutions for professional applications. Its main focus is the development of flat panel array (FPA) terminals for SATCOM-on-the-Move applications. These terminals are based on a power-efficient active phased array technology suitable for any kind of vehicle, for both governmental and commercial use.

In the areas of public safety, security and disaster management, VITES offers nomadic LTE/5G networks (ViCell) as well as broadband IP mesh networks (ViMesh). The vikomobil 2.0 brand represents complete customer-specific mobile communication nodes (“cell on wheels”) that are energy-autonomous and integrate SATCOM.



How to write an ASIC design specification well – Part 3

This is a three-part blog. If you missed them, click here to read part 1 and part 2.

The 7 Most Common Mistakes and How To Avoid Them

The first step in developing a new ASIC for any function – be it a wireless chip for a wearable healthcare sensor device, an autonomous vehicle ASIC, or a communications system for a satellite – is the specification.

To ensure the product you need is produced on time and on budget, and that errors aren’t introduced that could require a recall, the specification has to be written clearly and concisely.

In this, the final part, we look at the most common mistakes and how to avoid them.

1) Loose language

The language used in the document matters: it can add clarity, or it can add ambiguity.
For example, there is a big difference between ‘shall’ (expresses a requirement), ‘will’ (makes a statement of fact), and ‘should’ (expresses an aspiration).
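As a hypothetical illustration, loose wording can even be caught mechanically. The sketch below flags requirement text containing non-binding words; the word list and requirement IDs are invented for this example, not drawn from any real checklist.

```python
import re

# Words that usually signal an aspiration rather than a binding requirement.
# The list and the requirement IDs below are invented for illustration.
WEAK = re.compile(r"\b(should|could|might|as appropriate|if possible)\b", re.I)

def flag_weak_language(requirements):
    """Return IDs of requirements that contain non-binding wording."""
    return [rid for rid, text in requirements.items() if WEAK.search(text)]

reqs = {
    "R.1": "The device shall sample the sensor at 100 Hz.",
    "R.2": "The device should conserve power if possible.",
}
print(flag_weak_language(reqs))  # flags R.2 for review
```

A check like this is no substitute for careful review, but it cheaply surfaces sentences where the author may have meant ‘shall’ and written ‘should’.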

2) Not being concise

Requirements should be as concise as possible and broken down into individual items, to give full clarity when reporting compliance during development.
For example, it is better to say, “The product must do X” and “The product should do Y” versus “The product must do X and should do Y”.

3) Missing information

Any aspect of the product’s functionality that is missing from the requirements will not make it into the specification document, and will therefore be missing from the product.

4) Lacking rationales

Rationales should be added for each requirement where appropriate.
But note that these should not be extensions of the requirement, and compliance will be determined against the requirement, not the rationale.

5) How vs what

The requirements should describe what the product must do, not how it must provide that functionality.

6) Including non-essential functionality

If something is not essential for the intended product functionality, then it should not be a requirement.

7) Lacking measurability

Requirements should be clear and unambiguous, and terminology or expressions that are not verifiable should be avoided at all costs.

This means subjective statements like “The product must be low power” should be restated as “The product must consume less than X”.

And for reporting compliance, there must be a way to uniquely identify each requirement. This enables compliance reports to make clear statements such as “Specification S.17 addresses requirement R.49”.
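Unique IDs also make this kind of compliance check trivially scriptable. The sketch below uses the S.x/R.x naming pattern from the statement above; the mapping data itself is invented for illustration.

```python
# Hypothetical traceability data using the S.x / R.x naming pattern.
coverage = {
    "S.17": ["R.49"],          # one spec item addressing one requirement...
    "S.18": ["R.50", "R.51"],  # ...or several
}
requirements = {"R.49", "R.50", "R.51", "R.52"}

addressed = {r for reqs in coverage.values() for r in reqs}
uncovered = sorted(requirements - addressed)

for spec, reqs in sorted(coverage.items()):
    for r in reqs:
        print(f"Specification {spec} addresses requirement {r}")
print("Not yet addressed:", uncovered)
```

In practice this data would live in a requirements-management tool rather than a script, but the principle is the same: with unique IDs, gaps in coverage can be listed automatically before sign-off.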

Getting a specification wrong can be catastrophic. By following these seven tips you’ll be well-placed to get it right first time.

If you want to find out more about how EnSilica can help, click here.

How to write an ASIC design specification well – Part 2

This is a three-part blog. If you missed it, click here to read part 1.

Who Is The Specification For

The first step in developing a new ASIC for any function – be it a wireless chip for a wearable healthcare sensor device, an autonomous vehicle ASIC, or a communications system for a satellite – is the specification.

To ensure the product you need is produced on time and on budget, and that errors aren’t introduced that could require a recall, the specification has to be written clearly and concisely.
In this, the second of three blogs on the topic, we look at the audiences for the specification and the information they need.

“The specification can (and does) act as the communication channel, helping information to flow.”

Who reads it?

There are two audiences.

1) Your internal system development team. If they lack understanding of, or agreement on, the product’s definition, there’s little chance that the development team can successfully deliver what is required.

2) Our development team. It’s natural and common for individual members of a development team to read the sections of the specification focused on their own tasks… and only those sections. A well-written specification therefore allows a development team to be structured correctly, with the right skill sets brought together to get it right first time.

Potential pitfalls

However, the specification document does not act as a panacea that magically resolves poor communication between customer and supplier. Nor will it fix potential communication problems between people within a development team. It can, however, act as the communication channel, helping information to flow.

If you want to find out more about how EnSilica can help, click here.

Click here to read part 1 and part 3.

How to write an ASIC design specification well – Part 1

The Purpose Of The Specification

The first step in developing a new ASIC for any function – be it a wireless chip for a wearable healthcare sensor device, an autonomous vehicle ASIC, or a communications system for a satellite – is the specification.

To ensure the product you need is produced on time and on budget, and that errors aren’t introduced that could require a recall, the specification has to be written clearly and concisely.
In this, the first of three blogs on the topic, we look at the purpose of the specification to help you avoid some of the most common mistakes.

“One common misconception is that the specification can serve as a datasheet or user manual. It cannot and does not.”

The specification’s purpose

To get a design right first time, it is crucial to clearly define the required functionality and behaviours that the product must possess; the specification is there to define these, and also to state any functionality that is explicitly not required.

Who writes it

Typically, the engineering team generates the specification in collaboration with the end customer. It will be based on a requirements document, which will be provided by the customer.

What the document enables

The primary role of a specification is to unambiguously set out the customer’s requirements, with each customer requirement directly or indirectly addressed by the functionality defined in the specification.

There should be a clear mapping (one-to-one or one-to-many) between each customer requirement and the functions defined in the specification.

What it should not be used for

One common misconception is that the specification can serve as a datasheet or user manual. It cannot and does not.

What happens once it’s written

Once written, it is essential for the customer and development team to thoroughly review and approve the specification and ensure it adequately meets each requirement.

Additionally, the compliance report prepared by the development team should be reviewed and approved by the customer to make sure everything has been interpreted correctly.

If you want to find out more about how EnSilica can help, click here.

Click here to read part 2 and part 3.